From buildbot at python.org Sat Mar 1 01:27:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 00:27:20 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080301002720.C1A081E4009@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/675 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Sat Mar 1 02:33:40 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 01:33:40 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080301013340.D2D531E4009@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2626 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From python-checkins at python.org Sat Mar 1 03:23:38 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 03:23:38 +0100 (CET) Subject: [Python-checkins] r61143 - in python/trunk: Include/patchlevel.h Misc/NEWS Message-ID: <20080301022338.A77371E401A@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 03:23:38 2008 New Revision: 61143 Modified: python/trunk/Include/patchlevel.h python/trunk/Misc/NEWS Log: Bump to version 2.6a1 Modified: python/trunk/Include/patchlevel.h ============================================================================== --- python/trunk/Include/patchlevel.h (original) +++ python/trunk/Include/patchlevel.h Sat Mar 1 03:23:38 2008 @@ -23,10 +23,10 @@ #define PY_MINOR_VERSION 6 #define PY_MICRO_VERSION 0 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_ALPHA -#define PY_RELEASE_SERIAL 0 +#define PY_RELEASE_SERIAL 1 /* Version as a string */ -#define PY_VERSION "2.6a0" +#define PY_VERSION "2.6a1" /* Subversion Revision number of this file (not of the repository) */ #define PY_PATCHLEVEL_REVISION "$Revision$" Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 1 03:23:38 2008 @@ -12,8 +12,8 @@ Core and builtins ----------------- -- Issue #2051: pyc and pyo files are not longer created with permission 644. The - mode is now inherited from the py file. +- Issue #2051: pyc and pyo files are not longer created with permission + 644. The mode is now inherited from the py file. - Issue #2067: file.__exit__() now calls subclasses' close() method. 
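Aside: a minimal sketch of the Issue #2051 behaviour mentioned in the NEWS diff above, observed from the outside (this is not CPython's import machinery; the module name is invented for the demo). On a 2.6 build containing the fix, the two printed modes should match, while earlier interpreters wrote the .pyc as 644 regardless of the source; on Python 3 the compiled file lives in __pycache__, so the final check is simply skipped.

    import os
    import sys

    src = "example2051.py"                # invented module name for the demo
    with open(src, "w") as f:
        f.write("x = 42\n")
    os.chmod(src, 0o600)                  # give the source a non-default mode

    sys.path.insert(0, os.getcwd())
    __import__("example2051")             # importing triggers byte-compilation

    pyc = src + "c"                       # 2.x layout: .pyc sits next to the source
    if os.path.exists(pyc):
        print("source mode: %o, compiled mode: %o" %
              (os.stat(src).st_mode & 0o777, os.stat(pyc).st_mode & 0o777))
        # With Issue #2051 fixed, the compiled file's permission bits are
        # expected to mirror the source file's bits.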
From python-checkins at python.org Sat Mar 1 03:26:43 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 03:26:43 +0100 (CET) Subject: [Python-checkins] r61144 - python/trunk/Lib/idlelib/idlever.py Message-ID: <20080301022643.354B71E401A@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 03:26:42 2008 New Revision: 61144 Modified: python/trunk/Lib/idlelib/idlever.py Log: bump idle version number Modified: python/trunk/Lib/idlelib/idlever.py ============================================================================== --- python/trunk/Lib/idlelib/idlever.py (original) +++ python/trunk/Lib/idlelib/idlever.py Sat Mar 1 03:26:42 2008 @@ -1 +1 @@ -IDLE_VERSION = "2.6a0" +IDLE_VERSION = "2.6a1" From python-checkins at python.org Sat Mar 1 03:45:07 2008 From: python-checkins at python.org (fred.drake) Date: Sat, 1 Mar 2008 03:45:07 +0100 (CET) Subject: [Python-checkins] r61146 - python/trunk/Misc/NEWS Message-ID: <20080301024507.807EA1E401A@bag.python.org> Author: fred.drake Date: Sat Mar 1 03:45:07 2008 New Revision: 61146 Modified: python/trunk/Misc/NEWS Log: fix typo Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 1 03:45:07 2008 @@ -12,7 +12,7 @@ Core and builtins ----------------- -- Issue #2051: pyc and pyo files are not longer created with permission +- Issue #2051: pyc and pyo files are no longer created with permission 644. The mode is now inherited from the py file. - Issue #2067: file.__exit__() now calls subclasses' close() method. From python-checkins at python.org Sat Mar 1 03:53:36 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 03:53:36 +0100 (CET) Subject: [Python-checkins] r61147 - python/trunk/Misc/NEWS Message-ID: <20080301025336.E3D4B1E401A@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 03:53:36 2008 New Revision: 61147 Modified: python/trunk/Misc/NEWS Log: Add date to NEWS Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 1 03:53:36 2008 @@ -7,7 +7,7 @@ What's New in Python 2.6 alpha 1? ================================= -*Release date: XX-XXX-2008* +*Release date: 29-Feb-2008* Core and builtins ----------------- From python-checkins at python.org Sat Mar 1 03:57:23 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 03:57:23 +0100 (CET) Subject: [Python-checkins] r61148 - python/tags/r26a1 Message-ID: <20080301025723.4DF0B1E401B@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 03:57:23 2008 New Revision: 61148 Added: python/tags/r26a1/ - copied from r61147, python/trunk/ Log: Tagging 2.6a1 From python-checkins at python.org Sat Mar 1 04:00:01 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 04:00:01 +0100 (CET) Subject: [Python-checkins] r61149 - python/tags/r26a1 Message-ID: <20080301030001.D7F601E401A@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 04:00:01 2008 New Revision: 61149 Removed: python/tags/r26a1/ Log: Untagging. 
From python-checkins at python.org Sat Mar 1 04:00:53 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 04:00:53 +0100 (CET) Subject: [Python-checkins] r61150 - python/trunk/Lib/idlelib/NEWS.txt Message-ID: <20080301030053.1ABCC1E401A@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 04:00:52 2008 New Revision: 61150 Modified: python/trunk/Lib/idlelib/NEWS.txt Log: Give IDLE a release date Modified: python/trunk/Lib/idlelib/NEWS.txt ============================================================================== --- python/trunk/Lib/idlelib/NEWS.txt (original) +++ python/trunk/Lib/idlelib/NEWS.txt Sat Mar 1 04:00:52 2008 @@ -1,7 +1,7 @@ What's New in IDLE 2.6a1? ========================= -*Release date: XX-XXX-2008* +*Release date: 29-Feb-2008* - Configured selection highlighting colors were ignored; updating highlighting in the config dialog would cause non-Python files to be colored as if they From python-checkins at python.org Sat Mar 1 04:15:20 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 04:15:20 +0100 (CET) Subject: [Python-checkins] r61151 - in python/trunk: Doc/README.txt LICENSE PC/python_nt.rc Python/getcopyright.c README Message-ID: <20080301031520.E251B1E401D@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 04:15:20 2008 New Revision: 61151 Modified: python/trunk/Doc/README.txt python/trunk/LICENSE python/trunk/PC/python_nt.rc python/trunk/Python/getcopyright.c python/trunk/README Log: More copyright year and version number bumps Modified: python/trunk/Doc/README.txt ============================================================================== --- python/trunk/Doc/README.txt (original) +++ python/trunk/Doc/README.txt Sat Mar 1 04:15:20 2008 @@ -118,7 +118,7 @@ as long as you don't change or remove the copyright notice: ---------------------------------------------------------------------- -Copyright (c) 2000-2007 Python Software Foundation. +Copyright (c) 2000-2008 Python Software Foundation. All rights reserved. Copyright (c) 2000 BeOpen.com. Modified: python/trunk/LICENSE ============================================================================== --- python/trunk/LICENSE (original) +++ python/trunk/LICENSE Sat Mar 1 04:15:20 2008 @@ -55,6 +55,7 @@ 2.4.4 2.4.3 2006 PSF yes 2.5 2.4 2006 PSF yes 2.5.1 2.5 2007 PSF yes + 2.6 2.5 2008 PSF yes Footnotes: @@ -90,7 +91,7 @@ prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) -2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software Foundation; +2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. Modified: python/trunk/PC/python_nt.rc ============================================================================== --- python/trunk/PC/python_nt.rc (original) +++ python/trunk/PC/python_nt.rc Sat Mar 1 04:15:20 2008 @@ -61,7 +61,7 @@ VALUE "FileDescription", "Python Core\0" VALUE "FileVersion", PYTHON_VERSION VALUE "InternalName", "Python DLL\0" - VALUE "LegalCopyright", "Copyright © 2001-2007 Python Software Foundation. Copyright © 2000 BeOpen.com. Copyright © 1995-2001 CNRI. Copyright © 1991-1995 SMC.\0" + VALUE "LegalCopyright", "Copyright © 2001-2008 Python Software Foundation. Copyright © 2000 BeOpen.com. Copyright © 1995-2001 CNRI. Copyright ©
1991-1995 SMC.\0" VALUE "OriginalFilename", PYTHON_DLL_NAME "\0" VALUE "ProductName", "Python\0" VALUE "ProductVersion", PYTHON_VERSION Modified: python/trunk/Python/getcopyright.c ============================================================================== --- python/trunk/Python/getcopyright.c (original) +++ python/trunk/Python/getcopyright.c Sat Mar 1 04:15:20 2008 @@ -4,7 +4,7 @@ static char cprt[] = "\ -Copyright (c) 2001-2007 Python Software Foundation.\n\ +Copyright (c) 2001-2008 Python Software Foundation.\n\ All Rights Reserved.\n\ \n\ Copyright (c) 2000 BeOpen.com.\n\ Modified: python/trunk/README ============================================================================== --- python/trunk/README (original) +++ python/trunk/README Sat Mar 1 04:15:20 2008 @@ -1,7 +1,7 @@ -This is Python version 2.6 alpha 0 +This is Python version 2.6 alpha 1 ================================== -Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007 +Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation. All rights reserved. From python-checkins at python.org Sat Mar 1 04:16:16 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 04:16:16 +0100 (CET) Subject: [Python-checkins] r61152 - python/tags/r26a1 Message-ID: <20080301031616.333301E4021@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 04:16:15 2008 New Revision: 61152 Added: python/tags/r26a1/ - copied from r61151, python/trunk/ Log: Tagging 2.6a1... again! From buildbot at python.org Sat Mar 1 04:27:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 03:27:25 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080301032725.6E9291E401A@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/672 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_smtplib sincerely, -The Buildbot From buildbot at python.org Sat Mar 1 04:47:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 03:47:12 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080301034712.896CC1E401A@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2951 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: barry.warsaw,fred.drake BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Sat Mar 1 05:59:30 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 04:59:30 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080301045930.9D0711E401A@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/576 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Sat Mar 1 06:59:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 05:59:33 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080301055935.012EB1E401A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2628 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: barry.warsaw,fred.drake BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From buildbot at python.org Sat Mar 1 07:03:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 06:03:13 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080301060313.BA9151E401A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/677 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: brett.cannon,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From nnorwitz at gmail.com Sat Mar 1 12:33:13 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Mar 2008 06:33:13 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (1) Message-ID: <20080301113313.GA25006@python.psfb.org> 313 tests OK. 
1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module 
named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [570330 refs] From nnorwitz at gmail.com Sat Mar 1 15:03:30 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Mar 2008 09:03:30 -0500 Subject: [Python-checkins] Python Regression Test Failures opt (1) Message-ID: <20080301140330.GA27541@python.psfb.org> 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding 
test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils [10077 refs] test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error 
test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 
1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [569932 refs] From python-checkins at python.org Sat Mar 1 18:11:42 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 18:11:42 +0100 (CET) Subject: [Python-checkins] r61157 - in python/trunk: Include/patchlevel.h Misc/NEWS Message-ID: <20080301171142.25DAC1E4005@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 18:11:41 2008 New Revision: 61157 Modified: python/trunk/Include/patchlevel.h python/trunk/Misc/NEWS Log: Set things up for 2.6a2. Modified: python/trunk/Include/patchlevel.h ============================================================================== --- python/trunk/Include/patchlevel.h (original) +++ python/trunk/Include/patchlevel.h Sat Mar 1 18:11:41 2008 @@ -26,7 +26,7 @@ #define PY_RELEASE_SERIAL 1 /* Version as a string */ -#define PY_VERSION "2.6a1" +#define PY_VERSION "2.6a1+" /* Subversion Revision number of this file (not of the repository) */ #define PY_PATCHLEVEL_REVISION "$Revision$" Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 1 18:11:41 2008 @@ -4,6 +4,12 @@ (editors: check NEWS.help for information about editing NEWS using ReST.) +What's New in Python 2.6 alpha 2? +================================= + +*Release date: XX-XXX-2008* + + What's New in Python 2.6 alpha 1? ================================= From nnorwitz at gmail.com Sat Mar 1 18:14:23 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Mar 2008 09:14:23 -0800 Subject: [Python-checkins] Python Regression Test Failures basics (1) In-Reply-To: <20080301113313.GA25006@python.psfb.org> References: <20080301113313.GA25006@python.psfb.org> Message-ID: Gerhard, I'm guessing this failure is due to your recent change. The exception is: File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error The sqlite version is: sqlite-3.2.1-r3 on an old gentoo x86 box. Thanks, n On Sat, Mar 1, 2008 at 3:33 AM, Neal Norwitz wrote: > 313 tests OK. 
> 1 test failed: > test_sqlite > 28 tests skipped: > test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 > test_cd test_cl test_curses test_gl test_imageop test_imgfile > test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev > test_pep277 test_scriptpackages test_socketserver test_startfile > test_sunaudiodev test_tcl test_timeout test_unicode_file > test_urllib2net test_urllibnet test_winreg test_winsound > test_zipfile64 > 1 skip unexpected on linux2: > test_ioctl > > test_grammar > test_opcodes > test_dict > test_builtin > test_exceptions > test_types > test_unittest > test_doctest > test_doctest2 > test_MimeWriter > test_SimpleHTTPServer > test_StringIO > test___all__ > test___future__ > test__locale > test_abc > test_abstract_numbers > test_aepack > test_aepack skipped -- No module named aepack > test_al > test_al skipped -- No module named al > test_anydbm > test_applesingle > test_applesingle skipped -- No module named macostools > test_array > test_ast > test_asynchat > test_asyncore > test_atexit > test_audioop > test_augassign > test_base64 > test_bastion > test_bigaddrspace > test_bigmem > test_binascii > test_binhex > test_binop > test_bisect > test_bool > test_bsddb > test_bsddb185 > test_bsddb185 skipped -- No module named bsddb185 > test_bsddb3 > test_bsddb3 skipped -- Use of the `bsddb' resource not enabled > test_buffer > test_bufio > test_bz2 > test_calendar > test_call > test_capi > test_cd > test_cd skipped -- No module named cd > test_cfgparser > test_cgi > test_charmapcodec > test_cl > test_cl skipped -- No module named cl > test_class > test_cmath > test_cmd > test_cmd_line > test_cmd_line_script > test_code > test_codeccallbacks > test_codecencodings_cn > test_codecencodings_hk > test_codecencodings_jp > test_codecencodings_kr > test_codecencodings_tw > test_codecmaps_cn > test_codecmaps_hk > test_codecmaps_jp > test_codecmaps_kr > test_codecmaps_tw > test_codecs > test_codeop > test_coding > test_coercion > test_collections > test_colorsys > test_commands > test_compare > test_compile > test_compiler > test_complex > test_complex_args > test_contains > test_contextlib > test_cookie > test_cookielib > test_copy > test_copy_reg > test_cpickle > test_cprofile > test_crypt > test_csv > test_ctypes > test_curses > test_curses skipped -- Use of the `curses' resource not enabled > test_datetime > test_dbm > test_decimal > test_decorators > test_defaultdict > test_deque > test_descr > test_descrtut > test_difflib > test_dircache > test_dis > test_distutils > test_dl > test_docxmlrpc > test_dumbdbm > test_dummy_thread > test_dummy_threading > test_email > test_email_codecs > test_email_renamed > test_enumerate > test_eof > test_errno > test_exception_variations > test_extcall > test_fcntl > test_file > test_filecmp > test_fileinput > test_float > test_fnmatch > test_fork1 > test_format > test_fpformat > test_fractions > test_frozen > test_ftplib > test_funcattrs > test_functools > test_future > test_future_builtins > test_gc > test_gdbm > test_generators > test_genericpath > test_genexps > test_getargs > test_getargs2 > test_getopt > test_gettext > test_gl > test_gl skipped -- No module named gl > test_glob > test_global > test_grp > test_gzip > test_hash > test_hashlib > test_heapq > test_hexoct > test_hmac > test_hotshot > test_htmllib > test_htmlparser > test_httplib > test_imageop > test_imageop skipped -- No module named imgfile > test_imaplib > test_imgfile > test_imgfile skipped -- No module named imgfile > test_imp > test_import > 
test_importhooks > test_index > test_inspect > test_ioctl > test_ioctl skipped -- Unable to open /dev/tty > test_isinstance > test_iter > test_iterlen > test_itertools > test_largefile > test_linuxaudiodev > test_linuxaudiodev skipped -- Use of the `audio' resource not enabled > test_list > test_locale > test_logging > test_long > test_long_future > test_longexp > test_macostools > test_macostools skipped -- No module named macostools > test_macpath > test_mailbox > test_marshal > test_math > test_md5 > test_mhlib > test_mimetools > test_mimetypes > test_minidom > test_mmap > test_module > test_modulefinder > test_multibytecodec > test_multibytecodec_support > test_multifile > test_mutants > test_mutex > test_netrc > test_new > test_nis > test_normalization > test_ntpath > test_old_mailbox > test_openpty > test_operator > test_optparse > test_os > test_ossaudiodev > test_ossaudiodev skipped -- Use of the `audio' resource not enabled > test_parser > s_push: parser stack overflow > test_peepholer > test_pep247 > test_pep263 > test_pep277 > test_pep277 skipped -- test works only on NT+ > test_pep292 > test_pep352 > test_pickle > test_pickletools > test_pipes > test_pkg > test_pkgimport > test_platform > test_plistlib > test_poll > test_popen > [8018 refs] > [8018 refs] > [8018 refs] > test_popen2 > test_poplib > test_posix > test_posixpath > test_pow > test_pprint > test_profile > test_profilehooks > test_property > test_pstats > test_pty > test_pwd > test_pyclbr > test_pyexpat > test_queue > test_quopri > [8395 refs] > [8395 refs] > test_random > test_re > test_repr > test_resource > test_rfc822 > test_richcmp > test_robotparser > test_runpy > test_sax > test_scope > test_scriptpackages > test_scriptpackages skipped -- No module named aetools > test_select > test_set > test_sets > test_sgmllib > test_sha > test_shelve > test_shlex > test_shutil > test_signal > test_site > test_slice > test_smtplib > test_socket > test_socket_ssl > test_socketserver > test_socketserver skipped -- Use of the `network' resource not enabled > test_softspace > test_sort > test_sqlite > test test_sqlite failed -- Traceback (most recent call last): > File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings > self.con.execute("create table if not exists foo(bar)") > OperationalError: near "not": syntax error > > test_ssl > test_startfile > test_startfile skipped -- cannot import name startfile > test_str > test_strftime > test_string > test_stringprep > test_strop > test_strptime > test_struct > test_structmembers > test_structseq > test_subprocess > [8013 refs] > [8015 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8015 refs] > [9938 refs] > [8231 refs] > [8015 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > [8013 refs] > . > [8013 refs] > [8013 refs] > this bit of output is from a test of stdout in a different process ... 
> [8013 refs] > [8013 refs] > [8231 refs] > test_sunaudiodev > test_sunaudiodev skipped -- No module named sunaudiodev > test_sundry > test_symtable > test_syntax > test_sys > [8013 refs] > [8013 refs] > test_tarfile > test_tcl > test_tcl skipped -- No module named _tkinter > test_telnetlib > test_tempfile > [8018 refs] > test_textwrap > test_thread > test_threaded_import > test_threadedtempfile > test_threading > [11149 refs] > test_threading_local > test_threadsignals > test_time > test_timeout > test_timeout skipped -- Use of the `network' resource not enabled > test_tokenize > test_trace > test_traceback > test_transformer > test_tuple > test_typechecks > test_ucn > test_unary > test_unicode > test_unicode_file > test_unicode_file skipped -- No Unicode filesystem semantics on this platform. > test_unicodedata > test_univnewlines > test_unpack > test_urllib > test_urllib2 > test_urllib2_localnet > test_urllib2net > test_urllib2net skipped -- Use of the `network' resource not enabled > test_urllibnet > test_urllibnet skipped -- Use of the `network' resource not enabled > test_urlparse > test_userdict > test_userlist > test_userstring > test_uu > test_uuid > WARNING: uuid.getnode is unreliable on many platforms. > It is disabled until the code and/or test can be fixed properly. > WARNING: uuid._ifconfig_getnode is unreliable on many platforms. > It is disabled until the code and/or test can be fixed properly. > WARNING: uuid._unixdll_getnode is unreliable on many platforms. > It is disabled until the code and/or test can be fixed properly. > test_wait3 > test_wait4 > test_warnings > test_wave > test_weakref > test_whichdb > test_winreg > test_winreg skipped -- No module named _winreg > test_winsound > test_winsound skipped -- No module named winsound > test_with > test_wsgiref > test_xdrlib > test_xml_etree > test_xml_etree_c > test_xmllib > test_xmlrpc > test_xpickle > test_xrange > test_zipfile > /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated > zipfp.close() > /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated > zipfp.close() > test_zipfile64 > test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run > test_zipimport > test_zlib > 313 tests OK. 
> 1 test failed: > test_sqlite > 28 tests skipped: > test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 > test_cd test_cl test_curses test_gl test_imageop test_imgfile > test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev > test_pep277 test_scriptpackages test_socketserver test_startfile > test_sunaudiodev test_tcl test_timeout test_unicode_file > test_urllib2net test_urllibnet test_winreg test_winsound > test_zipfile64 > 1 skip unexpected on linux2: > test_ioctl > [570330 refs] > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From python-checkins at python.org Sat Mar 1 18:47:46 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 18:47:46 +0100 (CET) Subject: [Python-checkins] r61159 - python/tags/r30a3 Message-ID: <20080301174746.B9C891E4005@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 18:47:46 2008 New Revision: 61159 Added: python/tags/r30a3/ - copied from r61158, python/branches/py3k/ Log: Tagging for 3.0a3 From buildbot at python.org Sat Mar 1 19:24:54 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 18:24:54 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080301182454.A93C41E402D@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/579 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: barry.warsaw BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Sat Mar 1 19:36:30 2008 From: python-checkins at python.org (barry.warsaw) Date: Sat, 1 Mar 2008 19:36:30 +0100 (CET) Subject: [Python-checkins] r61160 - peps/trunk/pep-0101.txt peps/trunk/pep-0361.txt Message-ID: <20080301183630.E88C11E4005@bag.python.org> Author: barry.warsaw Date: Sat Mar 1 19:36:30 2008 New Revision: 61160 Modified: peps/trunk/pep-0101.txt peps/trunk/pep-0361.txt Log: Updates to PEP 101 (which still needs a lot of work), and PEP 361 to describe the joint 2.6/3.0 release schedule. Modified: peps/trunk/pep-0101.txt ============================================================================== --- peps/trunk/pep-0101.txt (original) +++ peps/trunk/pep-0101.txt Sat Mar 1 19:36:30 2008 @@ -20,9 +20,6 @@ recipe and you can actually print this out and check items off as you complete them. - XXX: This version is a partial update by Neal Norwitz. There are - undoubtedly still many places where reality differs! - How to Make A Release @@ -88,10 +85,10 @@ immediately after making the branch, or even before you've made the branch. - Add high level items new to this release. E.g. if we're - releasing 2.6a3, there must be a section at the top of the file - explaining "What's new in Python 2.6a3". It will be followed by - a section entitled "What's new in Python 2.6a2". + Add high level items new to this release. E.g. if we're releasing + 2.6a3, there must be a section at the top of the file explaining + "What's new in Python 2.6 alpha 3". It will be followed by a + section entitled "What's new in Python 2.6 alpha 2". Note that you /hope/ that as developers add new features to the trunk, they've updated the NEWS file accordingly. You can't be @@ -113,7 +110,7 @@ Lib/idlelib/NEWS.txt has been similarly updated. 
___ Make sure the release date is fully spelled out in - Doc/commontex/boilerplate.tex (welease). + Doc/commontex/boilerplate.tex (welease). BROKEN ___ Tag and/or branch the tree for release X.YaZ (welease does tagging) @@ -140,6 +137,8 @@ When making a minor release (e.g., for 2.6a1 or 2.6.1), you should tag. To create a _tag_ (e.g., r26a1), do the following: + DO NOT TAG UNTIL YOU"VE MADE THE NECESSARY EDITS BELOW + ___ svn copy \ svn+ssh://pythondev at svn.python.org/python/branches/release26-maint \ svn+ssh://pythondev at svn.python.org/python/tags/r26a1 @@ -151,7 +150,7 @@ % svn co \ svn+ssh://pythondev at svn.python.org/python/branches/release26-maint - ___ cd relesae26-maint # cd into the branch directory. + ___ cd release26-maint # cd into the branch directory. ___ Change Include/patchlevel.h in two places, to reflect the new version number you've just created. You'll want @@ -168,16 +167,16 @@ ___ distutils also maintains its own versioning file (Lib/distutils/__init__.py). Update this file with the Python version. - ___ Change the "%define version" line of Misc/RPM/python-2.5.spec to + ___ Change the "%define version" line of Misc/RPM/python-X.Y.spec to the same string as PY_VERSION was changed to above. E.g. - %define version 2.5.1 + %define version 2.6a1 The following line, "%define libvers", should reflect the major/minor number as one would usually see in the "/usr/lib/python" directory name. E.g. - %define libvers 2.5 + %define libvers 2.6 You also probably want to reset the %define release line to '1pydotorg' if it's not already that. @@ -200,11 +199,11 @@ number changes, also update the LICENSE file. ___ There's a copy of the license in - Doc/commontex/license.tex; the DE usually takes care of that. + Doc/commontex/license.tex; the DE usually takes care of that. BROKEN ___ If the minor (middle) digit of the version number changes, update: - ___ Doc/tut/tut.tex (4 references to [Pp]ython26) + ___ Doc/tut/tut.tex (4 references to [Pp]ython26) BROKEN ___ Check the years on the copyright notice. If the last release was some time last year, add the current year to the copyright @@ -218,9 +217,9 @@ ___ Doc/README (at the end) - ___ Doc/commontex/copyright.tex + ___ Doc/commontex/copyright.tex BROKEN - ___ Doc/commontex/license.tex + ___ Doc/commontex/license.tex BROKEN ___ PC/python_nt.rc sets up the DLL version resource for Windows (displayed when you right-click on the DLL and select @@ -228,7 +227,7 @@ ___ The license.ht file for the distribution on the website contains what purports to be an HTML-ized copy of the LICENSE - file from the distribution. + file from the distribution. BROKEN ___ For a final release, edit the first paragraph of Doc/whatsnew/whatsnewXX.tex to include the actual release date; @@ -407,7 +406,7 @@ the tagged branch. % cd ~ - % svn export -rr26c2 -d Python-2.6c2 python + % svn export svn+ssh://pythondev at svn.python.org/python/tags/r26a1 Python-2.6c2 (supported by welease) ___ Generate the tarballs. Note that we're not using the `z' option @@ -452,7 +451,7 @@ freshly unpacked directory looks weird, you better stop now and figure out what the problem is. - ___ Upload the tgz file to dinsdale.python.org using scp. + ___ Upload the tar files to dinsdale.python.org using scp. # XXX(nnorwitz): this entire section dealing with the website is outdated. # The website uses SVN and the build process has changed. 
Modified: peps/trunk/pep-0361.txt ============================================================================== --- peps/trunk/pep-0361.txt (original) +++ peps/trunk/pep-0361.txt Sat Mar 1 19:36:30 2008 @@ -1,5 +1,5 @@ PEP: 361 -Title: Python 2.6 Release Schedule +Title: Python 2.6 and 3.0 Release Schedule Version: $Revision$ Last-Modified: $Date$ Author: Neal Norwitz @@ -7,17 +7,31 @@ Type: Informational Created: 29-June-2006 Python-Version: 2.6 +Python-Version: 3.0 Post-History: Abstract This document describes the development and release schedule for - Python 2.6. The schedule primarily concerns itself with PEP-sized - items. Small features may be added up to and including the first - beta release. Bugs may be fixed until the final release. + Python 2.6 and 3.0. The schedule primarily concerns itself with + PEP-sized items. Small features may be added up to and including + the first beta release. Bugs may be fixed until the final + release. There will be at least two alpha releases, two beta releases, and - one release candidate. The release date is planned to be in XXX 2008. + one release candidate. The release date is planned to be in 2008. + + Python 2.6 is not only the next advancement in the Python 2 + series, it is also a transitionary release, helping developers + begin to prepare their code for Python 3.0. As such, many + features are being backported from Python 3.0 to 2.6. Thus, it + makes sense to release both versions in at the same time. The + precedence for this was set with the Python 1.6 and 2.0 release. + + During the alpha test cycle we will be releasing both versions in + lockstep, on a monthly release cycle. The releases will happen on + the last Friday of every month. If this schedule works well, we + will continue releasing in lockstep during the beta program. Release Manager and Crew @@ -36,20 +50,19 @@ betas and release candidates will be determined as the release process unfolds. The minimal schedule is: - Feb 2008: (re)confirm the crew and start deciding on schedule. - The 2.6 target is for the second half of 2008. - - alpha 1: T - 16 weeks [planned] - alpha 2: T - 13 weeks [planned] - beta 1: T - 9 weeks [planned] - beta 2: T - 5 weeks [planned] - rc 1: T - 1 week [planned] - final: T [planned] + Feb 29 2008: Python 2.6a1 and 3.0a3 are released. + Mar 25 2008: next alpha releases planned Monthly releases for alphas are planned starting at the end of Feb 2008: http://mail.python.org/pipermail/python-dev/2008-February/077125.html +Completed features for 3.0 + + See PEP 3000 [#pep3000] and PEP 3100 [#pep3100] for details on the + Python 3.0 project. + + Completed features for 2.6 PEPs: @@ -224,6 +237,12 @@ .. [#pep367] PEP 367 (New Super) http://www.python.org/dev/peps/pep-0367 +.. [#pep3000] PEP 3000 (Python 3000) + http://www.python.org/dev/peps/pep-3000 + +.. [#pep3100] PEP 3100 (Miscellaneous Python 3.0 Plans) + http://www.python.org/dev/peps/pep-3100 + .. [#pep3112] PEP 3112 (Bytes literals in Python 3000) http://www.python.org/dev/peps/pep-03112 From nnorwitz at gmail.com Sat Mar 1 20:16:37 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Mar 2008 14:16:37 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080301191637.GA3560@python.psfb.org> 318 tests OK. 
1 test failed: test_sqlite 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization 
test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 1 test failed: test_sqlite 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [582098 refs] From buildbot at python.org Sat Mar 1 19:55:40 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 18:55:40 +0000 Subject: [Python-checkins] buildbot failure in x86 gentoo 3.0 Message-ID: <20080301185540.EB1191E4005@bag.python.org> The Buildbot has detected a new failure of x86 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20gentoo%203.0/builds/611 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-x86 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: barry.warsaw BUILD FAILED: failed test Excerpt from the test logfile: make: *** [buildbottest] Unknown signal 32 sincerely, -The Buildbot From gh at ghaering.de Sat Mar 1 19:50:38 2008 From: gh at ghaering.de (Gerhard Häring) Date: Sat, 01 Mar 2008 19:50:38 +0100 Subject: [Python-checkins] Python Regression Test Failures basics (1) In-Reply-To: References: <20080301113313.GA25006@python.psfb.org> Message-ID: <47C9A57E.2000200@ghaering.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Neal Norwitz wrote: > Gerhard, > > I'm guessing this failure is due to your recent change. The exception is: > > File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", > line 118, in CheckWorkaroundForBuggySqliteTransferBindings > self.con.execute("create table if not exists foo(bar)") > OperationalError: near "not": syntax error > > The sqlite version is: sqlite-3.2.1-r3 on an old gentoo x86 box. > [...] Yes, this test will only work with SQLite versions that understand the "if not exists" clause. We can either 1) disable the test for older SQLite versions or 2) find a different test that works with all SQLite versions. If nobody else jumps in, I'll tackle this later this evening or tomorrow.
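For illustration only, a minimal sketch of option 1), assuming the guard can key off sqlite3.sqlite_version_info and that the "if not exists" clause first appeared in SQLite 3.3.0; the helper name below is made up and this is not the actual fix to Lib/sqlite3/test/regression.py:

import sqlite3

def check_create_table_if_not_exists():
    # Hypothetical guard: skip on SQLite libraries (such as the 3.2.1
    # on the failing buildbot) that are assumed to predate the
    # "CREATE TABLE IF NOT EXISTS" syntax (SQLite 3.3.0 and later).
    if sqlite3.sqlite_version_info < (3, 3, 0):
        return
    con = sqlite3.connect(":memory:")
    try:
        # Executing the statement twice exercises the "if not exists" path.
        con.execute("create table if not exists foo(bar)")
        con.execute("create table if not exists foo(bar)")
    finally:
        con.close()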
- -- Gerhard -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFHyaV9dIO4ozGCH14RAl6vAJ9jzO01TpyWLg4p4p9A//U94+wxnACeNgiZ ZXqt+ZZU2CtqKudhp2s7qB8= =b0b+ -----END PGP SIGNATURE----- From lists at cheimes.de Sat Mar 1 21:01:27 2008 From: lists at cheimes.de (Christian Heimes) Date: Sat, 01 Mar 2008 21:01:27 +0100 Subject: [Python-checkins] r61141 - in python/trunk: Lib/sqlite3/test/dbapi.py Lib/sqlite3/test/hooks.py Lib/sqlite3/test/py25tests.py Lib/sqlite3/test/regression.py Lib/sqlite3/test/transactions.py Lib/sqlite3/test/types.py Lib/test/test_sqlite.py Modules/_sqlite/connection.c Modules/_sqlite/connection.h Modules/_sqlite/cursor.c Modules/_sqlite/cursor.h Modules/_sqlite/microprotocols.h Modules/_sqlite/module.c Modules/_sqlite/module.h Modules/_sqlite/statement.c Modules/_sqlite/util.c Modules/_sqlite/util.h In-Reply-To: <20080229220841.D028F1E4009@bag.python.org> References: <20080229220841.D028F1E4009@bag.python.org> Message-ID: <47C9B617.5060008@cheimes.de> gerhard.haering wrote: > Author: gerhard.haering > Date: Fri Feb 29 23:08:41 2008 > New Revision: 61141 > > Added: > python/trunk/Lib/sqlite3/test/py25tests.py > Modified: > python/trunk/Lib/sqlite3/test/dbapi.py > python/trunk/Lib/sqlite3/test/hooks.py > python/trunk/Lib/sqlite3/test/regression.py > python/trunk/Lib/sqlite3/test/transactions.py > python/trunk/Lib/sqlite3/test/types.py > python/trunk/Lib/test/test_sqlite.py > python/trunk/Modules/_sqlite/connection.c > python/trunk/Modules/_sqlite/connection.h > python/trunk/Modules/_sqlite/cursor.c > python/trunk/Modules/_sqlite/cursor.h > python/trunk/Modules/_sqlite/microprotocols.h > python/trunk/Modules/_sqlite/module.c > python/trunk/Modules/_sqlite/module.h > python/trunk/Modules/_sqlite/statement.c > python/trunk/Modules/_sqlite/util.c > python/trunk/Modules/_sqlite/util.h > Log: > Updated to pysqlite 2.4.1. Documentation additions will come later. Hey Gerhard! You've committed a large change to the code base in the middle of an alpha release. No harm was done but next time please watch the mailing list. You also forgot to update Misc/NEWS. Please add an entry as soon as possible. Can you do me a favor? Can you port the patch to 3.0 for me? I'm getting lots of conflicts with svnmerge.py. You know the code base for the sqlite module much better than me. Christian From buildbot at python.org Sat Mar 1 21:45:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 01 Mar 2008 20:45:25 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu trunk Message-ID: <20080301204525.90E5C1E4005@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%20trunk/builds/1541 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: barry.warsaw BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout ====================================================================== FAIL: testConnectTimeout (test.test_timeout.TimeoutTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_timeout.py", line 122, in testConnectTimeout self.addr_remote) AssertionError: error not raised make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 1 23:41:13 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Sat, 1 Mar 2008 23:41:13 +0100 (CET) Subject: [Python-checkins] r61162 - sandbox/trunk/rational Message-ID: <20080301224113.249DE1E401F@bag.python.org> Author: jeffrey.yasskin Date: Sat Mar 1 23:41:12 2008 New Revision: 61162 Removed: sandbox/trunk/rational/ Log: Remove rational/Rational.py from the sandbox now that Lib/fractions.py is in the standard library. Most of the new module came from Demo/classes/Rat.py, but the motivation for limit_denominator came from the sandbox version. From nnorwitz at gmail.com Sun Mar 2 00:33:05 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Mar 2008 18:33:05 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (1) Message-ID: <20080301233305.GA19224@python.psfb.org> 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands 
test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile 
test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 
1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [570353 refs] From python-checkins at python.org Sun Mar 2 00:19:21 2008 From: python-checkins at python.org (mark.dickinson) Date: Sun, 2 Mar 2008 00:19:21 +0100 (CET) Subject: [Python-checkins] r61163 - python/branches/trunk-math/Lib/test/test_cmath.py Message-ID: <20080301231921.1482E1E4005@bag.python.org> Author: mark.dickinson Date: Sun Mar 2 00:19:20 2008 New Revision: 61163 Modified: python/branches/trunk-math/Lib/test/test_cmath.py Log: Better error reporting for test_cmath: if one of the 1700+ testcases in cmath_testcases.txt fails, we now get to find out which one. Modified: python/branches/trunk-math/Lib/test/test_cmath.py ============================================================================== --- python/branches/trunk-math/Lib/test/test_cmath.py (original) +++ python/branches/trunk-math/Lib/test/test_cmath.py Sun Mar 2 00:19:20 2008 @@ -46,6 +46,37 @@ (INF, NAN) ]] +def almostEqualF(a, b, rel_err=2e-15, abs_err = 5e-323): + """Determine whether floating-point values a and b are equal to within + a (small) rounding error. The default values for rel_err and + abs_err are chosen to be suitable for platforms where a float is + represented by an IEEE 754 double. They allow an error of between + 9 and 19 ulps.""" + + # special values testing + if math.isnan(a): + return math.isnan(b) + if math.isinf(a): + return a == b + + # if both a and b are zero, check whether they have the same sign + # (in theory there are examples where it would be legitimate for a + # and b to have opposite signs; in practice these hardly ever + # occur). + if not a and not b: + return math.copysign(1., a) == math.copysign(1., b) + + # if a-b overflows, or b is infinite, return False. Again, in + # theory there are examples where a is within a few ulps of the + # max representable float, and then b could legitimately be + # infinite. In practice these examples are rare. 
+ try: + absolute_error = abs(b-a) + except OverflowError: + return False + else: + return absolute_error <= max(abs_err, rel_err * abs(a)) + class CMathTests(unittest.TestCase): # list of all functions in cmath test_functions = [getattr(cmath, fname) for fname in [ @@ -62,7 +93,7 @@ def tearDown(self): self.test_values.close() - def rAssertAlmostEqual(self, a, b, rel_eps = 2e-15, abs_eps = 5e-323): + def rAssertAlmostEqual(self, a, b, rel_err = 2e-15, abs_err = 5e-323): """Check that two floating-point numbers are almost equal.""" # special values testing @@ -92,7 +123,7 @@ except OverflowError: pass else: - if absolute_error <= max(abs_eps, rel_eps * abs(a)): + if absolute_error <= max(abs_err, rel_err * abs(a)): return self.fail("%s and %s are not sufficiently close" % (repr(a), repr(b))) @@ -289,12 +320,22 @@ else: function = getattr(cmath, fn) if 'divide-by-zero' in flags or 'invalid' in flags: - self.assertRaises(ValueError, function, arg) - continue + try: + actual = function(arg) + except ValueError: + continue + else: + test_str = "%s: %s(complex(%r, %r))" % (id, fn, ar, ai) + self.fail('ValueError not raised in test %s' % test_str) if 'overflow' in flags: - self.assertRaises(OverflowError, function, arg) - continue + try: + actual = function(arg) + except OverflowError: + continue + else: + test_str = "%s: %s(complex(%r, %r))" % (id, fn, ar, ai) + self.fail('OverflowError not raised in test %s' % test_str) actual = function(arg) @@ -305,14 +346,24 @@ actual = complex(actual.real, abs(actual.imag)) expected = complex(expected.real, abs(expected.imag)) + # for the real part of the log function, we allow an + # absolute error of up to 2e-15. if fn in ('log', 'log10'): - # for the real part of the log function, we allow an - # absolute error of up to 2e-15. 
- self.rAssertAlmostEqual(expected.real, actual.real, - abs_eps = 2e-15) + real_abs_err = 2e-15 else: - self.rAssertAlmostEqual(expected.real, actual.real) - self.rAssertAlmostEqual(expected.imag, actual.imag) + real_abs_err = 5e-323 + + if not (almostEqualF(expected.real, actual.real, + abs_err = real_abs_err) and + almostEqualF(expected.imag, actual.imag)): + error_message = ( + "%s: %s(complex(%r, %r))\n" % (id, fn, ar, ai) + + "Expected: complex(%r, %r)\n" % + (expected.real, expected.imag) + + "Received: complex(%r, %r)\n" % + (actual.real, actual.imag) + + "Received value insufficiently close to expected value.") + self.fail(error_message) def assertCISEqual(self, a, b): eps = 1E-7 From python-checkins at python.org Sun Mar 2 00:50:47 2008 From: python-checkins at python.org (mark.dickinson) Date: Sun, 2 Mar 2008 00:50:47 +0100 (CET) Subject: [Python-checkins] r61164 - python/branches/trunk-math/Lib/test/cmath_testcases.txt Message-ID: <20080301235047.BF6BB1E4005@bag.python.org> Author: mark.dickinson Date: Sun Mar 2 00:50:47 2008 New Revision: 61164 Modified: python/branches/trunk-math/Lib/test/cmath_testcases.txt Log: Add special value testcases for sin, cos, tan, asin, atan Modified: python/branches/trunk-math/Lib/test/cmath_testcases.txt ============================================================================== --- python/branches/trunk-math/Lib/test/cmath_testcases.txt (original) +++ python/branches/trunk-math/Lib/test/cmath_testcases.txt Sun Mar 2 00:50:47 2008 @@ -506,6 +506,45 @@ asin0218 asin -5.2499000118824295 4.6655578977512214e+307 -> -1.1252459249113292e-307 709.1269781491103 asin0219 asin -5.9904782760833433 -4.7315689314781163e+307 -> -1.2660659419394637e-307 -709.14102757522312 +-- special values +asin1000 asin -0.0 0.0 -> -0.0 0.0 +asin1001 asin 0.0 0.0 -> 0.0 0.0 +asin1002 asin -0.0 -0.0 -> -0.0 -0.0 +asin1003 asin 0.0 -0.0 -> 0.0 -0.0 +asin1004 asin -inf 0.0 -> -1.5707963267948966 inf +asin1005 asin -inf 2.2999999999999998 -> -1.5707963267948966 inf +asin1006 asin nan 0.0 -> nan nan +asin1007 asin nan 2.2999999999999998 -> nan nan +asin1008 asin -0.0 inf -> -0.0 inf +asin1009 asin -2.2999999999999998 inf -> -0.0 inf +asin1010 asin -inf inf -> -0.78539816339744828 inf +asin1011 asin nan inf -> nan inf +asin1012 asin -0.0 nan -> -0.0 nan +asin1013 asin -2.2999999999999998 nan -> nan nan +asin1014 asin -inf nan -> nan inf ignore-imag-sign +asin1015 asin nan nan -> nan nan +asin1016 asin inf 0.0 -> 1.5707963267948966 inf +asin1017 asin inf 2.2999999999999998 -> 1.5707963267948966 inf +asin1018 asin 0.0 inf -> 0.0 inf +asin1019 asin 2.2999999999999998 inf -> 0.0 inf +asin1020 asin inf inf -> 0.78539816339744828 inf +asin1021 asin 0.0 nan -> 0.0 nan +asin1022 asin 2.2999999999999998 nan -> nan nan +asin1023 asin inf nan -> nan inf ignore-imag-sign +asin1024 asin inf -0.0 -> 1.5707963267948966 -inf +asin1025 asin inf -2.2999999999999998 -> 1.5707963267948966 -inf +asin1026 asin nan -0.0 -> nan nan +asin1027 asin nan -2.2999999999999998 -> nan nan +asin1028 asin 0.0 -inf -> 0.0 -inf +asin1029 asin 2.2999999999999998 -inf -> 0.0 -inf +asin1030 asin inf -inf -> 0.78539816339744828 -inf +asin1031 asin nan -inf -> nan -inf +asin1032 asin -inf -0.0 -> -1.5707963267948966 -inf +asin1033 asin -inf -2.2999999999999998 -> -1.5707963267948966 -inf +asin1034 asin -0.0 -inf -> -0.0 -inf +asin1035 asin -2.2999999999999998 -inf -> -0.0 -inf +asin1036 asin -inf -inf -> -0.78539816339744828 -inf + ------------------------------------ -- asinh: Inverse hyperbolic sine -- @@ -798,6 
+837,49 @@ atan0303 atan -1e-165 1.0 -> -0.78539816339744828 190.30984376228875 atan0304 atan -9.9998886718268301e-321 -1.0 -> -0.78539816339744828 -368.76019403576692 +-- special values +atan1000 atan -0.0 0.0 -> -0.0 0.0 +atan1001 atan nan 0.0 -> nan 0.0 +atan1002 atan -0.0 1.0 -> -0.0 inf divide-by-zero +atan1003 atan -inf 0.0 -> -1.5707963267948966 0.0 +atan1004 atan -inf 2.2999999999999998 -> -1.5707963267948966 0.0 +atan1005 atan nan 2.2999999999999998 -> nan nan +atan1006 atan -0.0 inf -> -1.5707963267948966 0.0 +atan1007 atan -2.2999999999999998 inf -> -1.5707963267948966 0.0 +atan1008 atan -inf inf -> -1.5707963267948966 0.0 +atan1009 atan nan inf -> nan 0.0 +atan1010 atan -0.0 nan -> nan nan +atan1011 atan -2.2999999999999998 nan -> nan nan +atan1012 atan -inf nan -> -1.5707963267948966 0.0 ignore-imag-sign +atan1013 atan nan nan -> nan nan +atan1014 atan 0.0 0.0 -> 0.0 0.0 +atan1015 atan 0.0 1.0 -> 0.0 inf divide-by-zero +atan1016 atan inf 0.0 -> 1.5707963267948966 0.0 +atan1017 atan inf 2.2999999999999998 -> 1.5707963267948966 0.0 +atan1018 atan 0.0 inf -> 1.5707963267948966 0.0 +atan1019 atan 2.2999999999999998 inf -> 1.5707963267948966 0.0 +atan1020 atan inf inf -> 1.5707963267948966 0.0 +atan1021 atan 0.0 nan -> nan nan +atan1022 atan 2.2999999999999998 nan -> nan nan +atan1023 atan inf nan -> 1.5707963267948966 0.0 ignore-imag-sign +atan1024 atan 0.0 -0.0 -> 0.0 -0.0 +atan1025 atan nan -0.0 -> nan -0.0 +atan1026 atan 0.0 -1.0 -> 0.0 -inf divide-by-zero +atan1027 atan inf -0.0 -> 1.5707963267948966 -0.0 +atan1028 atan inf -2.2999999999999998 -> 1.5707963267948966 -0.0 +atan1029 atan nan -2.2999999999999998 -> nan nan +atan1030 atan 0.0 -inf -> 1.5707963267948966 -0.0 +atan1031 atan 2.2999999999999998 -inf -> 1.5707963267948966 -0.0 +atan1032 atan inf -inf -> 1.5707963267948966 -0.0 +atan1033 atan nan -inf -> nan -0.0 +atan1034 atan -0.0 -0.0 -> -0.0 -0.0 +atan1035 atan -0.0 -1.0 -> -0.0 -inf divide-by-zero +atan1036 atan -inf -0.0 -> -1.5707963267948966 -0.0 +atan1037 atan -inf -2.2999999999999998 -> -1.5707963267948966 -0.0 +atan1038 atan -0.0 -inf -> -1.5707963267948966 -0.0 +atan1039 atan -2.2999999999999998 -inf -> -1.5707963267948966 -0.0 +atan1040 atan -inf -inf -> -1.5707963267948966 -0.0 + --------------------------------------- -- atanh: Inverse hyperbolic tangent -- @@ -1888,6 +1970,61 @@ cos0022 cos 7.9914515433858515 0.71659966615501436 -> -0.17375439906936566 -0.77217043527294582 cos0023 cos 0.45124351152540226 1.6992693993812158 -> 2.543477948972237 -1.1528193694875477 +-- special values +cos1000 cos -0.0 0.0 -> 1.0 0.0 +cos1001 cos -inf 0.0 -> nan 0.0 invalid ignore-imag-sign +cos1002 cos nan 0.0 -> nan 0.0 ignore-imag-sign +cos1003 cos -inf 2.2999999999999998 -> nan nan invalid +cos1004 cos nan 2.2999999999999998 -> nan nan +cos1005 cos -0.0 inf -> inf 0.0 +cos1006 cos -1.3999999999999999 inf -> inf inf +cos1007 cos -2.7999999999999998 inf -> -inf inf +cos1008 cos -4.2000000000000002 inf -> -inf -inf +cos1009 cos -5.5999999999999996 inf -> inf -inf +cos1010 cos -7.0 inf -> inf inf +cos1011 cos -inf inf -> inf nan invalid ignore-real-sign +cos1012 cos nan inf -> inf nan +cos1013 cos -0.0 nan -> nan 0.0 ignore-imag-sign +cos1014 cos -2.2999999999999998 nan -> nan nan +cos1015 cos -inf nan -> nan nan +cos1016 cos nan nan -> nan nan +cos1017 cos 0.0 0.0 -> 1.0 -0.0 +cos1018 cos inf 0.0 -> nan 0.0 invalid ignore-imag-sign +cos1019 cos inf 2.2999999999999998 -> nan nan invalid +cos1020 cos 0.0 inf -> inf -0.0 +cos1021 cos 1.3999999999999999 inf -> inf -inf +cos1022 
cos 2.7999999999999998 inf -> -inf -inf +cos1023 cos 4.2000000000000002 inf -> -inf inf +cos1024 cos 5.5999999999999996 inf -> inf inf +cos1025 cos 7.0 inf -> inf -inf +cos1026 cos inf inf -> inf nan invalid ignore-real-sign +cos1027 cos 0.0 nan -> nan 0.0 ignore-imag-sign +cos1028 cos 2.2999999999999998 nan -> nan nan +cos1029 cos inf nan -> nan nan +cos1030 cos 0.0 -0.0 -> 1.0 0.0 +cos1031 cos inf -0.0 -> nan 0.0 invalid ignore-imag-sign +cos1032 cos nan -0.0 -> nan 0.0 ignore-imag-sign +cos1033 cos inf -2.2999999999999998 -> nan nan invalid +cos1034 cos nan -2.2999999999999998 -> nan nan +cos1035 cos 0.0 -inf -> inf 0.0 +cos1036 cos 1.3999999999999999 -inf -> inf inf +cos1037 cos 2.7999999999999998 -inf -> -inf inf +cos1038 cos 4.2000000000000002 -inf -> -inf -inf +cos1039 cos 5.5999999999999996 -inf -> inf -inf +cos1040 cos 7.0 -inf -> inf inf +cos1041 cos inf -inf -> inf nan invalid ignore-real-sign +cos1042 cos nan -inf -> inf nan +cos1043 cos -0.0 -0.0 -> 1.0 -0.0 +cos1044 cos -inf -0.0 -> nan 0.0 invalid ignore-imag-sign +cos1045 cos -inf -2.2999999999999998 -> nan nan invalid +cos1046 cos -0.0 -inf -> inf -0.0 +cos1047 cos -1.3999999999999999 -inf -> inf -inf +cos1048 cos -2.7999999999999998 -inf -> -inf -inf +cos1049 cos -4.2000000000000002 -inf -> -inf inf +cos1050 cos -5.5999999999999996 -inf -> inf inf +cos1051 cos -7.0 -inf -> inf -inf +cos1052 cos -inf -inf -> inf nan invalid ignore-real-sign + --------------- -- sin: Sine -- @@ -1921,6 +2058,61 @@ sin0022 sin 1.1518087354403725 4.8597235966150558 -> 58.919141989603041 26.237003403758852 sin0023 sin 0.00087773078406649192 34.792379211312095 -> 565548145569.38245 644329685822700.62 +-- special values +sin1000 sin -0.0 0.0 -> -0.0 0.0 +sin1001 sin -inf 0.0 -> nan 0.0 invalid ignore-imag-sign +sin1002 sin nan 0.0 -> nan 0.0 ignore-imag-sign +sin1003 sin -inf 2.2999999999999998 -> nan nan invalid +sin1004 sin nan 2.2999999999999998 -> nan nan +sin1005 sin -0.0 inf -> -0.0 inf +sin1006 sin -1.3999999999999999 inf -> -inf inf +sin1007 sin -2.7999999999999998 inf -> -inf -inf +sin1008 sin -4.2000000000000002 inf -> inf -inf +sin1009 sin -5.5999999999999996 inf -> inf inf +sin1010 sin -7.0 inf -> -inf inf +sin1011 sin -inf inf -> nan inf invalid ignore-imag-sign +sin1012 sin nan inf -> nan inf ignore-imag-sign +sin1013 sin -0.0 nan -> -0.0 nan +sin1014 sin -2.2999999999999998 nan -> nan nan +sin1015 sin -inf nan -> nan nan +sin1016 sin nan nan -> nan nan +sin1017 sin 0.0 0.0 -> 0.0 0.0 +sin1018 sin inf 0.0 -> nan 0.0 invalid ignore-imag-sign +sin1019 sin inf 2.2999999999999998 -> nan nan invalid +sin1020 sin 0.0 inf -> 0.0 inf +sin1021 sin 1.3999999999999999 inf -> inf inf +sin1022 sin 2.7999999999999998 inf -> inf -inf +sin1023 sin 4.2000000000000002 inf -> -inf -inf +sin1024 sin 5.5999999999999996 inf -> -inf inf +sin1025 sin 7.0 inf -> inf inf +sin1026 sin inf inf -> nan inf invalid ignore-imag-sign +sin1027 sin 0.0 nan -> 0.0 nan +sin1028 sin 2.2999999999999998 nan -> nan nan +sin1029 sin inf nan -> nan nan +sin1030 sin 0.0 -0.0 -> 0.0 -0.0 +sin1031 sin inf -0.0 -> nan 0.0 invalid ignore-imag-sign +sin1032 sin nan -0.0 -> nan 0.0 ignore-imag-sign +sin1033 sin inf -2.2999999999999998 -> nan nan invalid +sin1034 sin nan -2.2999999999999998 -> nan nan +sin1035 sin 0.0 -inf -> 0.0 -inf +sin1036 sin 1.3999999999999999 -inf -> inf -inf +sin1037 sin 2.7999999999999998 -inf -> inf inf +sin1038 sin 4.2000000000000002 -inf -> -inf inf +sin1039 sin 5.5999999999999996 -inf -> -inf -inf +sin1040 sin 7.0 -inf -> inf -inf +sin1041 sin inf -inf 
-> nan inf invalid ignore-imag-sign +sin1042 sin nan -inf -> nan inf ignore-imag-sign +sin1043 sin -0.0 -0.0 -> -0.0 -0.0 +sin1044 sin -inf -0.0 -> nan 0.0 invalid ignore-imag-sign +sin1045 sin -inf -2.2999999999999998 -> nan nan invalid +sin1046 sin -0.0 -inf -> -0.0 -inf +sin1047 sin -1.3999999999999999 -inf -> -inf -inf +sin1048 sin -2.7999999999999998 -inf -> -inf inf +sin1049 sin -4.2000000000000002 -inf -> inf inf +sin1050 sin -5.5999999999999996 -inf -> inf -inf +sin1051 sin -7.0 -inf -> -inf -inf +sin1052 sin -inf -inf -> nan inf invalid ignore-imag-sign + ------------------ -- tan: Tangent -- @@ -1954,6 +2146,62 @@ tan0022 tan 1.1615313900880577 1.7956298728647107 -> 0.041793186826390362 1.0375339546034792 tan0023 tan 0.067014779477908945 5.8517361577457097 -> 2.2088639754800034e-06 0.9999836182420061 +-- special values +tan1000 tan -0.0 0.0 -> -0.0 0.0 +tan1001 tan -inf 0.0 -> nan nan invalid +tan1002 tan -inf 2.2999999999999998 -> nan nan invalid +tan1003 tan nan 0.0 -> nan nan +tan1004 tan nan 2.2999999999999998 -> nan nan +tan1005 tan -0.0 inf -> -0.0 1.0 +tan1006 tan -0.69999999999999996 inf -> -0.0 1.0 +tan1007 tan -1.3999999999999999 inf -> -0.0 1.0 +tan1008 tan -2.1000000000000001 inf -> 0.0 1.0 +tan1009 tan -2.7999999999999998 inf -> 0.0 1.0 +tan1010 tan -3.5 inf -> -0.0 1.0 +tan1011 tan -inf inf -> -0.0 1.0 ignore-real-sign +tan1012 tan nan inf -> -0.0 1.0 ignore-real-sign +tan1013 tan -0.0 nan -> -0.0 nan +tan1014 tan -2.2999999999999998 nan -> nan nan +tan1015 tan -inf nan -> nan nan +tan1016 tan nan nan -> nan nan +tan1017 tan 0.0 0.0 -> 0.0 0.0 +tan1018 tan inf 0.0 -> nan nan invalid +tan1019 tan inf 2.2999999999999998 -> nan nan invalid +tan1020 tan 0.0 inf -> 0.0 1.0 +tan1021 tan 0.69999999999999996 inf -> 0.0 1.0 +tan1022 tan 1.3999999999999999 inf -> 0.0 1.0 +tan1023 tan 2.1000000000000001 inf -> -0.0 1.0 +tan1024 tan 2.7999999999999998 inf -> -0.0 1.0 +tan1025 tan 3.5 inf -> 0.0 1.0 +tan1026 tan inf inf -> -0.0 1.0 ignore-real-sign +tan1027 tan 0.0 nan -> 0.0 nan +tan1028 tan 2.2999999999999998 nan -> nan nan +tan1029 tan inf nan -> nan nan +tan1030 tan 0.0 -0.0 -> 0.0 -0.0 +tan1031 tan inf -0.0 -> nan nan invalid +tan1032 tan inf -2.2999999999999998 -> nan nan invalid +tan1033 tan nan -0.0 -> nan nan +tan1034 tan nan -2.2999999999999998 -> nan nan +tan1035 tan 0.0 -inf -> 0.0 -1.0 +tan1036 tan 0.69999999999999996 -inf -> 0.0 -1.0 +tan1037 tan 1.3999999999999999 -inf -> 0.0 -1.0 +tan1038 tan 2.1000000000000001 -inf -> -0.0 -1.0 +tan1039 tan 2.7999999999999998 -inf -> -0.0 -1.0 +tan1040 tan 3.5 -inf -> 0.0 -1.0 +tan1041 tan inf -inf -> -0.0 -1.0 ignore-real-sign +tan1042 tan nan -inf -> -0.0 -1.0 ignore-real-sign +tan1043 tan -0.0 -0.0 -> -0.0 -0.0 +tan1044 tan -inf -0.0 -> nan nan invalid +tan1045 tan -inf -2.2999999999999998 -> nan nan invalid +tan1046 tan -0.0 -inf -> -0.0 -1.0 +tan1047 tan -0.69999999999999996 -inf -> -0.0 -1.0 +tan1048 tan -1.3999999999999999 -inf -> -0.0 -1.0 +tan1049 tan -2.1000000000000001 -inf -> 0.0 -1.0 +tan1050 tan -2.7999999999999998 -inf -> 0.0 -1.0 +tan1051 tan -3.5 -inf -> -0.0 -1.0 +tan1052 tan -inf -inf -> -0.0 -1.0 ignore-real-sign + + ------------------------------------------------------------------------ -- rect: Conversion from polar coordinates to rectangular coordinates -- ------------------------------------------------------------------------ From nnorwitz at gmail.com Sun Mar 2 03:03:23 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Mar 2008 21:03:23 -0500 Subject: [Python-checkins] Python Regression Test 
Failures opt (1) Message-ID: <20080302020323.GA21876@python.psfb.org> 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils [10077 refs] test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging 
test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. 
It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [569970 refs] From guido at python.org Sun Mar 2 06:56:50 2008 From: guido at python.org (Guido van Rossum) Date: Sat, 1 Mar 2008 21:56:50 -0800 Subject: [Python-checkins] [Python-3000] RELEASED Python 2.6a1 and 3.0a3 In-Reply-To: <6E72CEB8-D3BF-4440-A1EA-1A3D545CC8DB@python.org> References: <6E72CEB8-D3BF-4440-A1EA-1A3D545CC8DB@python.org> Message-ID: Thanks so much for getting the releases out!! This is a huge step forward. I think the release process went really well, all considered. (If you want my 2 cents, I vote for a command-line version of welease too.) Just one nit: There's no mention of the releases on the python.org front page. I think this is a matter of updating data/newsindex.yml in the website's svn. On Sat, Mar 1, 2008 at 10:51 AM, Barry Warsaw wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On behalf of the Python development team and the Python community, I'm > happy to announce the first alpha release of Python 2.6, and the third > alpha release of Python 3.0. > > Python 2.6 is not only the next advancement in the Python 2 series, it > is also a transitionary release, helping developers begin to prepare > their code for Python 3.0. As such, many features are being > backported from Python 3.0 to 2.6. It makes sense to release both > versions at the same time, the precedent for this having been set > with the Python 1.6 and 2.0 releases. > > During the alpha testing cycle we will be releasing both versions in > lockstep, on a monthly release cycle. The releases will happen on the > last Friday of every month. If this schedule works well, we will > continue releasing in lockstep during the beta program. See PEP 361 > for schedule details: > > http://www.python.org/dev/peps/pep-0361/ > > Please note that these are alpha releases, and as such are not > suitable for production environments. We continue to strive for a > high degree of quality, but there are still some known problems and > the feature sets have not been finalized.
These alphas are being > released to solicit feedback and hopefully discover bugs, as well as > allowing you to determine how changes in 2.6 and 3.0 might impact > you. If you find things broken or incorrect, please submit a bug > report at > > http://bugs.python.org > > For more information and downloadable distributions, see the Python > 2.6 web > site: > > http://www.python.org/download/releases/2.6/ > > and the Python 3.0 web site: > > http://www.python.org/download/releases/3.0/ > > We are planning a number of additional alpha releases, with the final > release schedule still to be determined. > > Enjoy, > - -Barry > > Barry Warsaw > barry at python.org > Python 2.6/3.0 Release Manager > (on behalf of the entire python-dev team) > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.8 (Darwin) > > iQCVAwUBR8mlu3EjvBPtnXfVAQKePAQAgx6w9wztfJaSWkbKrbwur2U6t6o5aIY5 > pyMa00CZWY06p8099BztcSjgp5rKrd6/9V7cJ0NP7NLZ+tz20uRfyI8uqoIYBIWC > ibJay6SSnzgOQM3PRIJV/K/m0dVPPPVD1LDnoEvuu+cKUpV434yHdgWkMPswsxUd > fLydrXABlOM= > =l6aj > -----END PGP SIGNATURE----- > _______________________________________________ > Python-3000 mailing list > Python-3000 at python.org > http://mail.python.org/mailman/listinfo/python-3000 > Unsubscribe: http://mail.python.org/mailman/options/python-3000/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python-checkins at python.org Sun Mar 2 07:28:16 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 07:28:16 +0100 (CET) Subject: [Python-checkins] r61165 - python/trunk/Doc/tutorial/interpreter.rst Message-ID: <20080302062816.CB5E81E402A@bag.python.org> Author: georg.brandl Date: Sun Mar 2 07:28:16 2008 New Revision: 61165 Modified: python/trunk/Doc/tutorial/interpreter.rst Log: It's 2.6 now. Modified: python/trunk/Doc/tutorial/interpreter.rst ============================================================================== --- python/trunk/Doc/tutorial/interpreter.rst (original) +++ python/trunk/Doc/tutorial/interpreter.rst Sun Mar 2 07:28:16 2008 @@ -102,7 +102,7 @@ before printing the first prompt:: python - Python 2.5 (#1, Feb 28 2007, 00:02:06) + Python 2.6 (#1, Feb 28 2007, 00:02:06) Type "help", "copyright", "credits" or "license" for more information. >>> From python-checkins at python.org Sun Mar 2 07:32:33 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 07:32:33 +0100 (CET) Subject: [Python-checkins] r61166 - python/trunk/Doc/copyright.rst python/trunk/Doc/license.rst Message-ID: <20080302063233.0B9CE1E4005@bag.python.org> Author: georg.brandl Date: Sun Mar 2 07:32:32 2008 New Revision: 61166 Modified: python/trunk/Doc/copyright.rst python/trunk/Doc/license.rst Log: Update year. Modified: python/trunk/Doc/copyright.rst ============================================================================== --- python/trunk/Doc/copyright.rst (original) +++ python/trunk/Doc/copyright.rst Sun Mar 2 07:32:32 2008 @@ -4,7 +4,7 @@ Python and this documentation is: -Copyright © 2001-2007 Python Software Foundation. All rights reserved. +Copyright © 2001-2008 Python Software Foundation. All rights reserved. Copyright © 2000 BeOpen.com. All rights reserved.
Modified: python/trunk/Doc/license.rst ============================================================================== --- python/trunk/Doc/license.rst (original) +++ python/trunk/Doc/license.rst Sun Mar 2 07:32:32 2008 @@ -116,7 +116,7 @@ analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python |release| alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of - copyright, i.e., "Copyright © 2001-2007 Python Software Foundation; All Rights + copyright, i.e., "Copyright © 2001-2008 Python Software Foundation; All Rights Reserved" are retained in Python |release| alone or in any derivative version prepared by Licensee. From python-checkins at python.org Sun Mar 2 07:44:08 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 07:44:08 +0100 (CET) Subject: [Python-checkins] r61167 - python/trunk/Doc/tools/sphinxext/patchlevel.py Message-ID: <20080302064408.EC6351E4005@bag.python.org> Author: georg.brandl Date: Sun Mar 2 07:44:08 2008 New Revision: 61167 Modified: python/trunk/Doc/tools/sphinxext/patchlevel.py Log: Make patchlevel print out the release if called as a script. Modified: python/trunk/Doc/tools/sphinxext/patchlevel.py ============================================================================== --- python/trunk/Doc/tools/sphinxext/patchlevel.py (original) +++ python/trunk/Doc/tools/sphinxext/patchlevel.py Sun Mar 2 07:44:08 2008 @@ -66,3 +66,6 @@ print >>sys.stderr, 'Can\'t get version info from Include/patchlevel.h, ' \ 'using version of this interpreter (%s).' % release return version, release + +if __name__ == '__main__': + print get_header_version_info('.')[1] From python-checkins at python.org Sun Mar 2 07:45:40 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 07:45:40 +0100 (CET) Subject: [Python-checkins] r61168 - python/trunk/Doc/conf.py Message-ID: <20080302064540.B72E91E4005@bag.python.org> Author: georg.brandl Date: Sun Mar 2 07:45:40 2008 New Revision: 61168 Modified: python/trunk/Doc/conf.py Log: New default basename for HTML help files. Modified: python/trunk/Doc/conf.py ============================================================================== --- python/trunk/Doc/conf.py (original) +++ python/trunk/Doc/conf.py Sun Mar 2 07:45:40 2008 @@ -86,7 +86,7 @@ } # Output file base name for HTML help builder. -htmlhelp_basename = 'pydoc' +htmlhelp_basename = 'python' + release.replace('.', '') # Options for LaTeX output From nnorwitz at gmail.com Sun Mar 2 08:14:50 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 2 Mar 2008 02:14:50 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080302071450.GA28178@python.psfb.org> 318 tests OK.
1 test failed: test_sqlite 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization 
test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 1 test failed: test_sqlite 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [582070 refs] From python-checkins at python.org Sun Mar 2 07:51:20 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 07:51:20 +0100 (CET) Subject: [Python-checkins] r61169 - peps/trunk/pep-0101.txt Message-ID: <20080302065120.901651E4007@bag.python.org> Author: georg.brandl Date: Sun Mar 2 07:51:20 2008 New Revision: 61169 Modified: peps/trunk/pep-0101.txt Log: Update PEP 101 for Doc changes, as far as I know things (left the website stuff untouched). Note that "make distribution" doesn't work yet; will write that before the next alpha. Modified: peps/trunk/pep-0101.txt ============================================================================== --- peps/trunk/pep-0101.txt (original) +++ peps/trunk/pep-0101.txt Sun Mar 2 07:51:20 2008 @@ -110,7 +110,7 @@ Lib/idlelib/NEWS.txt has been similarly updated. ___ Make sure the release date is fully spelled out in - Doc/commontex/boilerplate.tex (welease). BROKEN + Doc/conf.py (setting 'today') (XXX update welease). ___ Tag and/or branch the tree for release X.YaZ (welease does tagging) @@ -195,15 +195,19 @@ ___ Update the README file, which has a big banner at the top proclaiming its identity. - ___ If the major (first) or minor (middle) digit of the version - number changes, also update the LICENSE file. + ___ Also update the LICENSE file, adding the pending version to the + list of releases. ___ There's a copy of the license in - Doc/commontex/license.tex; the DE usually takes care of that. BROKEN + Doc/license.rst; the DE usually takes care of that. ___ If the minor (middle) digit of the version number changes, update: - ___ Doc/tut/tut.tex (4 references to [Pp]ython26) BROKEN + ___ Doc/tutorial/interpreter.rst (3 references to '[Pp]ython26', one + to 'Python 2.6'). + + ___ Doc/tutorial/stdlib.rst and Doc/tutorial/stdlib2.rst, which have + each one reference to '[Pp]ython26'. ___ Check the years on the copyright notice. If the last release was some time last year, add the current year to the copyright @@ -215,11 +219,11 @@ ___ Python/getcopyright.c - ___ Doc/README (at the end) + ___ Doc/README.txt (at the end) - ___ Doc/commontex/copyright.tex BROKEN + ___ Doc/copyright.rst - ___ Doc/commontex/license.tex BROKEN + ___ Doc/license.rst ___ PC/python_nt.rc sets up the DLL version resource for Windows (displayed when you right-click on the DLL and select @@ -230,7 +234,7 @@ file from the distribution. 
BROKEN ___ For a final release, edit the first paragraph of - Doc/whatsnew/whatsnewXX.tex to include the actual release date; + Doc/whatsnew/X.Y.rst to include the actual release date; e.g. "Python 2.5 was released on August 1, 2003." There's no need to edit this for alpha or beta releases. Note that Andrew Kuchling often takes care of this. @@ -247,26 +251,19 @@ branch in the Doc/ directory -- not even by the RM. Building the documentation is done using the Makefile in the - Doc/ directory. Once all the external tools are installed (see - the "Documenting Python" manual for information on the required - tools), use these commands to build the formatted documentation - packages:: - - $ make clobber - ... - $ make PAPER=a4 paperdist - ... - $ make distfiles - ... + Doc/ directory. Use these commands to build the formatted + documentation packages: + + $ make clean + $ make distribution - The packages can be installed on the FTP server using commands - like these: + The packages in build/distribution can be installed on the + FTP server using commands like these: - $ VERSION=`tools/getversioninfo` + $ VERSION=`python tools/sphinxext/patchlevel.py` $ TARGET=/data/python-releases/doc/$VERSION - $ rm *-$VERSION.tar $ ssh dinsdale.python.org mkdir $TARGET - $ scp *-$VERSION.* dinsdale.python.org:$TARGET + $ scp build/distribution/* dinsdale.python.org:$TARGET ___ For final releases, publish the documentation on python.org. This must be done by someone with write access to the pydotorg @@ -333,17 +330,19 @@ which runs the automated builds) to fix conflicts that arise in the checked out working areas. - ___ The WE grabs the HTML to build the Windows helpfile. - The HTML files are unpacked into a new src/html directory, and - runs this command to create the project files for MS HTML - Workshop: - - % python ..\Doc\tools\prechm.py -v 2.6 python26 - - HTML Workshop is then fired up on the created python25.hhp file, - finally resulting in an python26.chm file. He then copies the - file into the Doc directories of the build trees (once for - each target architecture). + ___ The WE builds the Windows helpfile, using (in Doc/) either + + $ make htmlhelp (on Unix) + + or + + > make.bat htmlhelp (on Windows) + + to create suitable input for HTML Help Workshop in + build/htmlhelp. HTML Help Workshop is then fired up on the + created python26.hhp file, finally resulting in an + python26.chm file. He then copies the file into the Doc + directories of the build trees (once for each target architecture). ___ The WE then generates Windows installer files for each Windows target architecture (for Python 2.6, this means x86 From python-checkins at python.org Sun Mar 2 11:59:31 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 2 Mar 2008 11:59:31 +0100 (CET) Subject: [Python-checkins] r61170 - python/trunk/Doc/library/itertools.rst Message-ID: <20080302105931.929891E4005@bag.python.org> Author: raymond.hettinger Date: Sun Mar 2 11:59:31 2008 New Revision: 61170 Modified: python/trunk/Doc/library/itertools.rst Log: Finish-up docs for combinations() and permutations() in itertools. Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Sun Mar 2 11:59:31 2008 @@ -104,26 +104,24 @@ Each result tuple is ordered to match the input order. So, every combination is a subsequence of the input *iterable*. 
- Example: ``combinations(range(4), 3) --> (0,1,2), (0,1,3), (0,2,3), (1,2,3)`` - Equivalent to:: def combinations(iterable, r): + 'combinations(range(4), 3) --> (0,1,2) (0,1,3) (0,2,3) (1,2,3)' pool = tuple(iterable) n = len(pool) - assert 0 <= r <= n - vec = range(r) - yield tuple(pool[i] for i in vec) + indices = range(r) + yield tuple(pool[i] for i in indices) while 1: for i in reversed(range(r)): - if vec[i] != i + n - r: + if indices[i] != i + n - r: break else: return - vec[i] += 1 + indices[i] += 1 for j in range(i+1, r): - vec[j] = vec[j-1] + 1 - yield tuple(pool[i] for i in vec) + indices[j] = indices[j-1] + 1 + yield tuple(pool[i] for i in indices) .. versionadded:: 2.6 @@ -369,7 +367,29 @@ value. So if the input elements are unique, there will be no repeat values in each permutation. - Example: ``permutations(range(3),2) --> (1,2) (1,3) (2,1) (2,3) (3,1) (3,2)`` + Equivalent to:: + + def permutations(iterable, r=None): + 'permutations(range(3), 2) --> (0,1) (0,2) (1,0) (1,2) (2,0) (2,1)' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + indices = range(n) + cycles = range(n-r+1, n+1)[::-1] + yield tuple(pool[i] for i in indices[:r]) + while n: + for i in reversed(range(r)): + cycles[i] -= 1 + if cycles[i] == 0: + indices[:] = indices[:i] + indices[i+1:] + indices[i:i+1] + cycles[i] = n - i + else: + j = cycles[i] + indices[i], indices[-j] = indices[-j], indices[i] + yield tuple(pool[i] for i in indices[:r]) + break + else: + return .. versionadded:: 2.6 From nnorwitz at gmail.com Sun Mar 2 12:31:54 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 2 Mar 2008 06:31:54 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (1) Message-ID: <20080302113154.GA8203@python.psfb.org> 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop 
test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl 
test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 
1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [570359 refs] From python-checkins at python.org Sun Mar 2 12:17:51 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 2 Mar 2008 12:17:51 +0100 (CET) Subject: [Python-checkins] r61171 - python/trunk/Doc/library/itertools.rst Message-ID: <20080302111751.C4BE91E4005@bag.python.org> Author: raymond.hettinger Date: Sun Mar 2 12:17:51 2008 New Revision: 61171 Modified: python/trunk/Doc/library/itertools.rst Log: Tighten example code. Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Sun Mar 2 12:17:51 2008 @@ -381,7 +381,7 @@ for i in reversed(range(r)): cycles[i] -= 1 if cycles[i] == 0: - indices[:] = indices[:i] + indices[i+1:] + indices[i:i+1] + indices[i:] = indices[i+1:] + indices[i:i+1] cycles[i] = n - i else: j = cycles[i] From python-checkins at python.org Sun Mar 2 12:57:16 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 2 Mar 2008 12:57:16 +0100 (CET) Subject: [Python-checkins] r61172 - python/trunk/Modules/itertoolsmodule.c Message-ID: <20080302115716.D6F6A1E4005@bag.python.org> Author: raymond.hettinger Date: Sun Mar 2 12:57:16 2008 New Revision: 61172 Modified: python/trunk/Modules/itertoolsmodule.c Log: Simplify code for itertools.product(). Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Sun Mar 2 12:57:16 2008 @@ -1770,7 +1770,6 @@ typedef struct { PyObject_HEAD PyObject *pools; /* tuple of pool tuples */ - Py_ssize_t *maxvec; /* size of each pool */ Py_ssize_t *indices; /* one index per pool */ PyObject *result; /* most recently returned result tuple */ int stopped; /* set to 1 when the product iterator is exhausted */ @@ -1784,7 +1783,6 @@ productobject *lz; Py_ssize_t nargs, npools, repeat=1; PyObject *pools = NULL; - Py_ssize_t *maxvec = NULL; Py_ssize_t *indices = NULL; Py_ssize_t i; @@ -1809,9 +1807,8 @@ nargs = (repeat == 0) ? 
0 : PyTuple_GET_SIZE(args); npools = nargs * repeat; - maxvec = PyMem_Malloc(npools * sizeof(Py_ssize_t)); indices = PyMem_Malloc(npools * sizeof(Py_ssize_t)); - if (maxvec == NULL || indices == NULL) { + if (indices == NULL) { PyErr_NoMemory(); goto error; } @@ -1825,16 +1822,13 @@ PyObject *pool = PySequence_Tuple(item); if (pool == NULL) goto error; - PyTuple_SET_ITEM(pools, i, pool); - maxvec[i] = PyTuple_GET_SIZE(pool); indices[i] = 0; } for ( ; i < npools; ++i) { PyObject *pool = PyTuple_GET_ITEM(pools, i - nargs); Py_INCREF(pool); PyTuple_SET_ITEM(pools, i, pool); - maxvec[i] = maxvec[i - nargs]; indices[i] = 0; } @@ -1844,7 +1838,6 @@ goto error; lz->pools = pools; - lz->maxvec = maxvec; lz->indices = indices; lz->result = NULL; lz->stopped = 0; @@ -1852,8 +1845,6 @@ return (PyObject *)lz; error: - if (maxvec != NULL) - PyMem_Free(maxvec); if (indices != NULL) PyMem_Free(indices); Py_XDECREF(pools); @@ -1866,7 +1857,6 @@ PyObject_GC_UnTrack(lz); Py_XDECREF(lz->pools); Py_XDECREF(lz->result); - PyMem_Free(lz->maxvec); PyMem_Free(lz->indices); Py_TYPE(lz)->tp_free(lz); } @@ -1913,7 +1903,6 @@ } } else { Py_ssize_t *indices = lz->indices; - Py_ssize_t *maxvec = lz->maxvec; /* Copy the previous result tuple or re-use it if available */ if (Py_REFCNT(result) > 1) { @@ -1937,7 +1926,7 @@ for (i=npools-1 ; i >= 0 ; i--) { pool = PyTuple_GET_ITEM(pools, i); indices[i]++; - if (indices[i] == maxvec[i]) { + if (indices[i] == PyTuple_GET_SIZE(pool)) { /* Roll-over and advance to next pool */ indices[i] = 0; elem = PyTuple_GET_ITEM(pool, 0); From python-checkins at python.org Sun Mar 2 13:02:19 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 2 Mar 2008 13:02:19 +0100 (CET) Subject: [Python-checkins] r61173 - python/trunk/Modules/itertoolsmodule.c Message-ID: <20080302120219.B76171E4007@bag.python.org> Author: raymond.hettinger Date: Sun Mar 2 13:02:19 2008 New Revision: 61173 Modified: python/trunk/Modules/itertoolsmodule.c Log: Handle 0-tuples which can be singletons. Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Sun Mar 2 13:02:19 2008 @@ -1919,7 +1919,7 @@ Py_DECREF(old_result); } /* Now, we've got the only copy so we can update it in-place */ - assert (Py_REFCNT(result) == 1); + assert (npools==0 || Py_REFCNT(result) == 1); /* Update the pool indices right-to-left. Only advance to the next pool when the previous one rolls-over */ From buildbot at python.org Sun Mar 2 13:02:29 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 02 Mar 2008 12:02:29 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080302120229.C44B51E4005@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/984 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed svn sincerely, -The Buildbot From buildbot at python.org Sun Mar 2 13:39:00 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 02 Mar 2008 12:39:00 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080302123900.483D41E4005@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2954 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From gh at ghaering.de Sun Mar 2 13:41:42 2008 From: gh at ghaering.de (=?UTF-8?B?R2VyaGFyZCBIw6RyaW5n?=) Date: Sun, 02 Mar 2008 13:41:42 +0100 Subject: [Python-checkins] r61141 - in python/trunk: Lib/sqlite3/test/dbapi.py Lib/sqlite3/test/hooks.py Lib/sqlite3/test/py25tests.py Lib/sqlite3/test/regression.py Lib/sqlite3/test/transactions.py Lib/sqlite3/test/types.py Lib/test/test_sqlite.py Modules/_sqlite/connection.c Modules/_sqlite/connection.h Modules/_sqlite/cursor.c Modules/_sqlite/cursor.h Modules/_sqlite/microprotocols.h Modules/_sqlite/module.c Modules/_sqlite/module.h Modules/_sqlite/statement.c Modules/_sqlite/util.c Modules/_sqlite/util.h In-Reply-To: <47C9B617.5060008@cheimes.de> References: <20080229220841.D028F1E4009@bag.python.org> <47C9B617.5060008@cheimes.de> Message-ID: <47CAA086.8020907@ghaering.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Christian Heimes wrote: > gerhard.haering wrote: >> Author: gerhard.haering >> Date: Fri Feb 29 23:08:41 2008 >> New Revision: 61141 >> [...] >> Added > > Hey Gerhard! > > You've committed a large change to the code base in the middle of an > alpha release. No harm was done but next time please watch the mailing > list. You also forgot to update Misc/NEWS. Please add an entry as soon > as possible. [...] Sorry if I created disturbances here. I actually did read the mailing list, that's why I tried to get the changes in *before* the alpha release. I believe is read the time wrong and thought that was an hour before the freeze for the alpha. I'll add the entry to Misc/NEWS in a minute. I'll know better in the future. > Can you do me a favor? Can you port the patch to 3.0 for me? I'm getting > lots of conflicts with svnmerge.py. You know the code base for the > sqlite module much better than me. I will of course do that, but it will take some time. Adapting to the C API changes in 3.0 is not so trivial for me. 
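
For what it's worth, the test_sqlite failure in the regression logs above is CheckWorkaroundForBuggySqliteTransferBindings tripping over "CREATE TABLE IF NOT EXISTS", which older SQLite libraries (anything before 3.3.0, if I remember correctly) do not parse. Below is a rough sketch of guarding on the runtime library version instead -- purely illustrative, not the change that was actually committed, and assuming nothing beyond the stdlib sqlite3 module:

    import sqlite3

    def supports_if_not_exists():
        # sqlite3.sqlite_version_info is the version of the SQLite library
        # the module was built against, as a tuple like (3, 4, 2).
        return sqlite3.sqlite_version_info >= (3, 3, 0)

    if supports_if_not_exists():
        con = sqlite3.connect(":memory:")
        # Issuing the statement twice is the point of the test: without
        # IF NOT EXISTS support the second call would raise OperationalError.
        con.execute("create table if not exists foo(bar)")
        con.execute("create table if not exists foo(bar)")
        con.close()
    else:
        print "skipped: SQLite %s predates IF NOT EXISTS" % sqlite3.sqlite_version
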
- -- Gerhard -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFHyqCFdIO4ozGCH14RAiKaAJ9vuJpHEqC+WYFnZBv7xR0/PbYz3ACeOuBk EcmzQ+ziR2QGRx7pl5hLwgE= =mjn0 -----END PGP SIGNATURE----- From python-checkins at python.org Sun Mar 2 14:08:03 2008 From: python-checkins at python.org (gerhard.haering) Date: Sun, 2 Mar 2008 14:08:03 +0100 (CET) Subject: [Python-checkins] r61174 - python/trunk/Lib/sqlite3/test/regression.py Message-ID: <20080302130803.49EA91E4005@bag.python.org> Author: gerhard.haering Date: Sun Mar 2 14:08:03 2008 New Revision: 61174 Modified: python/trunk/Lib/sqlite3/test/regression.py Log: Made sqlite3 module's regression tests work with SQLite versions that don't support "create table if not exists", yet. Modified: python/trunk/Lib/sqlite3/test/regression.py ============================================================================== --- python/trunk/Lib/sqlite3/test/regression.py (original) +++ python/trunk/Lib/sqlite3/test/regression.py Sun Mar 2 14:08:03 2008 @@ -115,8 +115,9 @@ pysqlite would crash with older SQLite versions unless a workaround is implemented. """ - self.con.execute("create table if not exists foo(bar)") - self.con.execute("create table if not exists foo(bar)") + self.con.execute("create table foo(bar)") + self.con.execute("drop table foo") + self.con.execute("create table foo(bar)") def CheckEmptyStatement(self): """ From python-checkins at python.org Sun Mar 2 14:12:27 2008 From: python-checkins at python.org (gerhard.haering) Date: Sun, 2 Mar 2008 14:12:27 +0100 (CET) Subject: [Python-checkins] r61175 - python/trunk/Misc/NEWS Message-ID: <20080302131227.F15691E4005@bag.python.org> Author: gerhard.haering Date: Sun Mar 2 14:12:27 2008 New Revision: 61175 Modified: python/trunk/Misc/NEWS Log: Added note about update of sqlite3 module. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sun Mar 2 14:12:27 2008 @@ -1414,6 +1414,8 @@ - bsddb module: Fix memory leak when using database cursors on databases without a DBEnv. +- The sqlite3 module was updated to pysqlite 2.4.1. + Tests ----- From nnorwitz at gmail.com Sun Mar 2 15:02:16 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 2 Mar 2008 09:02:16 -0500 Subject: [Python-checkins] Python Regression Test Failures opt (1) Message-ID: <20080302140216.GA10738@python.psfb.org> 313 tests OK. 
1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils [10077 refs] test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- 
No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [569925 refs] From python-checkins at python.org Sun Mar 2 14:41:39 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 14:41:39 +0100 (CET) Subject: [Python-checkins] r61176 - python/trunk/Doc/library/logging.rst Message-ID: <20080302134139.8F50A1E4005@bag.python.org> Author: georg.brandl Date: Sun Mar 2 14:41:39 2008 New Revision: 61176 Modified: python/trunk/Doc/library/logging.rst Log: Make clear that the constants are strings. Modified: python/trunk/Doc/library/logging.rst ============================================================================== --- python/trunk/Doc/library/logging.rst (original) +++ python/trunk/Doc/library/logging.rst Sun Mar 2 14:41:39 2008 @@ -1651,21 +1651,21 @@ You can use the *when* to specify the type of *interval*. The list of possible values is, note that they are not case sensitive: - +----------+-----------------------+ - | Value | Type of interval | - +==========+=======================+ - | S | Seconds | - +----------+-----------------------+ - | M | Minutes | - +----------+-----------------------+ - | H | Hours | - +----------+-----------------------+ - | D | Days | - +----------+-----------------------+ - | W | Week day (0=Monday) | - +----------+-----------------------+ - | midnight | Roll over at midnight | - +----------+-----------------------+ + +----------------+-----------------------+ + | Value | Type of interval | + +================+=======================+ + | ``'S'`` | Seconds | + +----------------+-----------------------+ + | ``'M'`` | Minutes | + +----------------+-----------------------+ + | ``'H'`` | Hours | + +----------------+-----------------------+ + | ``'D'`` | Days | + +----------------+-----------------------+ + | ``'W'`` | Week day (0=Monday) | + +----------------+-----------------------+ + | ``'midnight'`` | Roll over at midnight | + +----------------+-----------------------+ If *backupCount* is non-zero, the system will save old log files by appending extensions to the filename. 
The extensions are date-and-time based, using the From ncoghlan at gmail.com Sun Mar 2 14:53:32 2008 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 02 Mar 2008 23:53:32 +1000 Subject: [Python-checkins] r61141 - in python/trunk: Lib/sqlite3/test/dbapi.py Lib/sqlite3/test/hooks.py Lib/sqlite3/test/py25tests.py Lib/sqlite3/test/regression.py Lib/sqlite3/test/transactions.py Lib/sqlite3/test/types.py Lib/test/test_sqlite.py Modules/_sqlite/connection.c Modules/_sqlite/connection.h Modules/_sqlite/cursor.c Modules/_sqlite/cursor.h Modules/_sqlite/microprotocols.h Modules/_sqlite/module.c Modules/_sqlite/module.h Modules/_sqlite/statement.c Modules/_sqlite/util.c Modules/_sqlite/util.h In-Reply-To: <47CAA086.8020907@ghaering.de> References: <20080229220841.D028F1E4009@bag.python.org> <47C9B617.5060008@cheimes.de> <47CAA086.8020907@ghaering.de> Message-ID: <47CAB15C.2080502@gmail.com> Gerhard H?ring wrote: > Christian Heimes wrote: >> Can you do me a favor? Can you port the patch to 3.0 for me? I'm getting >> lots of conflicts with svnmerge.py. You know the code base for the >> sqlite module much better than me. > > I will of course do that, but it will take some time. Adapting to the > C API changes in 3.0 is not so trivial for me. Is this something that would be better left until after the Grand Renaming, Python 3 Edition? (I know it isn't even close to the scope of the original grand renaming that added the P[yY] prefixes to everything, but renaming PyUnicode/String/Bytes is still going to touch an awful lot of code). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From gh at ghaering.de Sun Mar 2 15:08:30 2008 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Sun, 02 Mar 2008 15:08:30 +0100 Subject: [Python-checkins] r61141 - in python/trunk: Lib/sqlite3/test/dbapi.py Lib/sqlite3/test/hooks.py Lib/sqlite3/test/py25tests.py Lib/sqlite3/test/regression.py Lib/sqlite3/test/transactions.py Lib/sqlite3/test/types.py Lib/test/test_sqlite.py Modules/_sqlite/connection.c Modules/_sqlite/connection.h Modules/_sqlite/cursor.c Modules/_sqlite/cursor.h Modules/_sqlite/microprotocols.h Modules/_sqlite/module.c Modules/_sqlite/module.h Modules/_sqlite/statement.c Modules/_sqlite/util.c Modules/_sqlite/util.h In-Reply-To: <47CAB15C.2080502@gmail.com> References: <20080229220841.D028F1E4009@bag.python.org> <47C9B617.5060008@cheimes.de> <47CAA086.8020907@ghaering.de> <47CAB15C.2080502@gmail.com> Message-ID: <47CAB4DE.3050600@ghaering.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Nick Coghlan wrote: > Gerhard H?ring wrote: >> Christian Heimes wrote: >>> Can you do me a favor? Can you port the patch to 3.0 for me? I'm getting >>> lots of conflicts with svnmerge.py. You know the code base for the >>> sqlite module much better than me. >> >> I will of course do that, but it will take some time. Adapting to the >> C API changes in 3.0 is not so trivial for me. > > Is this something that would be better left until after the Grand > Renaming, Python 3 Edition? (I know it isn't even close to the scope of > the original grand renaming that added the P[yY] prefixes to everything, I know nothing about that, yet. Is there a PEP I can read or something? > but renaming PyUnicode/String/Bytes is still going to touch an awful lot > of code). ... but if it's going to involve this, I'd like to defer adapting the sqlite3 module, because this is exactly the "problem area" here. 
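
To make the string-type problem concrete, here is a small sketch of the behaviour the C code currently provides on 2.x and has to re-create on 3.0 -- it relies only on the stdlib sqlite3 module and its default type mapping, if I remember the defaults right:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table people(name TEXT, photo BLOB)")
    con.execute("insert into people values (?, ?)",
                (u"caf\xe9", buffer("\x00\x01\x02")))
    name, photo = con.execute("select name, photo from people").fetchone()
    # On 2.x, TEXT columns come back as unicode and BLOB columns as buffer;
    # the C module juggles PyString/PyUnicode objects to make that happen.
    # On 3.0 the same columns have to come back as str and bytes, so those
    # call sites all need the PyUnicode_*/PyBytes_* equivalents instead.
    print type(name), type(photo)   # -> <type 'unicode'> <type 'buffer'>
    con.close()
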
- -- Gerhard -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFHyrTedIO4ozGCH14RAi3tAKCKHAfP999MgRwlV4CJj46hCxmVLQCfZtsJ zgxMCPbmrFi2USGUDw9A768= =zdqt -----END PGP SIGNATURE----- From python-checkins at python.org Sun Mar 2 15:15:04 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 2 Mar 2008 15:15:04 +0100 (CET) Subject: [Python-checkins] r61177 - python/trunk/Doc/library/logging.rst Message-ID: <20080302141504.E86731E4005@bag.python.org> Author: georg.brandl Date: Sun Mar 2 15:15:04 2008 New Revision: 61177 Modified: python/trunk/Doc/library/logging.rst Log: Fix factual error. Modified: python/trunk/Doc/library/logging.rst ============================================================================== --- python/trunk/Doc/library/logging.rst (original) +++ python/trunk/Doc/library/logging.rst Sun Mar 2 15:15:04 2008 @@ -1667,11 +1667,12 @@ | ``'midnight'`` | Roll over at midnight | +----------------+-----------------------+ - If *backupCount* is non-zero, the system will save old log files by appending - extensions to the filename. The extensions are date-and-time based, using the - strftime format ``%Y-%m-%d_%H-%M-%S`` or a leading portion thereof, depending on - the rollover interval. At most *backupCount* files will be kept, and if more - would be created when rollover occurs, the oldest one is deleted. + The system will save old log files by appending extensions to the filename. + The extensions are date-and-time based, using the strftime format + ``%Y-%m-%d_%H-%M-%S`` or a leading portion thereof, depending on the rollover + interval. If *backupCount* is nonzero, at most *backupCount* files will be + kept, and if more would be created when rollover occurs, the oldest one is + deleted. .. method:: TimedRotatingFileHandler.doRollover() From ncoghlan at gmail.com Sun Mar 2 15:51:55 2008 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 03 Mar 2008 00:51:55 +1000 Subject: [Python-checkins] r61141 - in python/trunk: Lib/sqlite3/test/dbapi.py Lib/sqlite3/test/hooks.py Lib/sqlite3/test/py25tests.py Lib/sqlite3/test/regression.py Lib/sqlite3/test/transactions.py Lib/sqlite3/test/types.py Lib/test/test_sqlite.py Modules/_sqlite/connection.c Modules/_sqlite/connection.h Modules/_sqlite/cursor.c Modules/_sqlite/cursor.h Modules/_sqlite/microprotocols.h Modules/_sqlite/module.c Modules/_sqlite/module.h Modules/_sqlite/statement.c Modules/_sqlite/util.c Modules/_sqlite/util.h In-Reply-To: <47CAB4DE.3050600@ghaering.de> References: <20080229220841.D028F1E4009@bag.python.org> <47C9B617.5060008@cheimes.de> <47CAA086.8020907@ghaering.de> <47CAB15C.2080502@gmail.com> <47CAB4DE.3050600@ghaering.de> Message-ID: <47CABF0B.2020001@gmail.com> Gerhard H?ring wrote: > Nick Coghlan wrote: >> Gerhard H?ring wrote: >>> Christian Heimes wrote: >>>> Can you do me a favor? Can you port the patch to 3.0 for me? I'm getting >>>> lots of conflicts with svnmerge.py. You know the code base for the >>>> sqlite module much better than me. >>> I will of course do that, but it will take some time. Adapting to the >>> C API changes in 3.0 is not so trivial for me. >> Is this something that would be better left until after the Grand >> Renaming, Python 3 Edition? (I know it isn't even close to the scope of >> the original grand renaming that added the P[yY] prefixes to everything, > > I know nothing about that, yet. Is there a PEP I can read or something? 
The main PEP involved is the one for the bytes types, but I don't remember how much detail it does into regarding the C API. (There's actually a bit of a discussion relating to this on python-dev at the moment). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From python-checkins at python.org Sun Mar 2 18:15:58 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 18:15:58 +0100 (CET) Subject: [Python-checkins] r61178 - in python/branches/release23-maint/Lib: email/Charset.py email/__init__.py fileinput.py test/test_sha.py uu.py Message-ID: <20080302171558.EA5601E4007@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 18:15:58 2008 New Revision: 61178 Modified: python/branches/release23-maint/Lib/email/Charset.py python/branches/release23-maint/Lib/email/__init__.py python/branches/release23-maint/Lib/fileinput.py python/branches/release23-maint/Lib/test/test_sha.py python/branches/release23-maint/Lib/uu.py Log: Revert r52798, r52803, r52824, r54342, as they don't fix security issues. Modified: python/branches/release23-maint/Lib/email/Charset.py ============================================================================== --- python/branches/release23-maint/Lib/email/Charset.py (original) +++ python/branches/release23-maint/Lib/email/Charset.py Sun Mar 2 18:15:58 2008 @@ -1,5 +1,5 @@ -# Copyright (C) 2001-2007 Python Software Foundation -# Author: email-sig at python.org +# Copyright (C) 2001-2006 Python Software Foundation +# Author: che at debian.org (Ben Gertzfield), barry at python.org (Barry Warsaw) from types import UnicodeType from email.Encoders import encode_7or8bit @@ -99,7 +99,7 @@ # of stability and useability. CODEC_MAP = { - 'gb2312': 'eucgb2312_cn', + 'gb2132': 'eucgb2312_cn', 'big5': 'big5_tw', 'utf-8': 'utf-8', # Hack: We don't want *any* conversion for stuff marked us-ascii, as all Modified: python/branches/release23-maint/Lib/email/__init__.py ============================================================================== --- python/branches/release23-maint/Lib/email/__init__.py (original) +++ python/branches/release23-maint/Lib/email/__init__.py Sun Mar 2 18:15:58 2008 @@ -1,9 +1,9 @@ -# Copyright (C) 2001-2007 Python Software Foundation -# Author: email-sig at python.org +# Copyright (C) 2001-2006 Python Software Foundation +# Author: barry at python.org (Barry Warsaw) """A package for parsing, handling, and generating email messages.""" -__version__ = '2.5.9' +__version__ = '2.5.8' __all__ = [ 'base64MIME', Modified: python/branches/release23-maint/Lib/fileinput.py ============================================================================== --- python/branches/release23-maint/Lib/fileinput.py (original) +++ python/branches/release23-maint/Lib/fileinput.py Sun Mar 2 18:15:58 2008 @@ -301,9 +301,7 @@ self._file = open(self._backupfilename, "r") try: perm = os.fstat(self._file.fileno()).st_mode - except (AttributeError, OSError): - # AttributeError occurs in Jython, where there's no - # os.fstat. 
+ except OSError: self._output = open(self._filename, "w") else: fd = os.open(self._filename, Modified: python/branches/release23-maint/Lib/test/test_sha.py ============================================================================== --- python/branches/release23-maint/Lib/test/test_sha.py (original) +++ python/branches/release23-maint/Lib/test/test_sha.py Sun Mar 2 18:15:58 2008 @@ -11,23 +11,9 @@ class SHATestCase(unittest.TestCase): def check(self, data, digest): - # Check digest matches the expected value - obj = sha.new(data) - computed = obj.hexdigest() + computed = sha.new(data).hexdigest() self.assert_(computed == digest) - # Verify that the value doesn't change between two consecutive - # digest operations. - computed_again = obj.hexdigest() - self.assert_(computed == computed_again) - - # Check hexdigest() output matches digest()'s output - digest = obj.digest() - hexd = "" - for c in digest: - hexd += '%02x' % ord(c) - self.assert_(computed == hexd) - def test_case_1(self): self.check("abc", "a9993e364706816aba3e25717850c26c9cd0d89d") @@ -40,9 +26,6 @@ self.check("a" * 1000000, "34aa973cd4c4daa4f61eeb2bdbad27316534016f") - def test_case_4(self): - self.check(chr(0xAA) * 80, - '4ca0ef38f1794b28a8f8ee110ee79d48ce13be25') def test_main(): test_support.run_unittest(SHATestCase) Modified: python/branches/release23-maint/Lib/uu.py ============================================================================== --- python/branches/release23-maint/Lib/uu.py (original) +++ python/branches/release23-maint/Lib/uu.py Sun Mar 2 18:15:58 2008 @@ -115,7 +115,6 @@ # # Open the output file # - opened = False if out_file == '-': out_file = sys.stdout elif isinstance(out_file, StringType): @@ -125,7 +124,6 @@ except AttributeError: pass out_file = fp - opened = True # # Main decoding loop # @@ -143,8 +141,6 @@ s = in_file.readline() if not s: raise Error, 'Truncated input file' - if opened: - out_file.close() def test(): """uuencode/uudecode main program""" From python-checkins at python.org Sun Mar 2 18:47:52 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 18:47:52 +0100 (CET) Subject: [Python-checkins] r61179 - in python/branches/release24-maint: Doc/api/concrete.tex Doc/lib/libasyncore.tex Doc/lib/libcommands.tex Doc/lib/liblogging.tex Doc/lib/libos.tex Doc/lib/libpopen2.tex Doc/lib/libsubprocess.tex Lib/idlelib/EditorWindow.py Lib/logging/__init__.py Lib/logging/config.py Lib/logging/handlers.py Lib/random.py Lib/subprocess.py Lib/test/test_dbm.py Lib/test/test_gdbm.py Lib/test/test_multibytecodec.py Lib/test/test_sha.py Lib/trace.py Lib/uu.py Mac/BuildScript/build-installer.py Misc/NEWS Modules/cjkcodecs/multibytecodec.c Python/ceval.c Message-ID: <20080302174752.5FA011E4007@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 18:47:51 2008 New Revision: 61179 Modified: python/branches/release24-maint/Doc/api/concrete.tex python/branches/release24-maint/Doc/lib/libasyncore.tex python/branches/release24-maint/Doc/lib/libcommands.tex python/branches/release24-maint/Doc/lib/liblogging.tex python/branches/release24-maint/Doc/lib/libos.tex python/branches/release24-maint/Doc/lib/libpopen2.tex python/branches/release24-maint/Doc/lib/libsubprocess.tex python/branches/release24-maint/Lib/idlelib/EditorWindow.py python/branches/release24-maint/Lib/logging/__init__.py python/branches/release24-maint/Lib/logging/config.py python/branches/release24-maint/Lib/logging/handlers.py python/branches/release24-maint/Lib/random.py 
python/branches/release24-maint/Lib/subprocess.py python/branches/release24-maint/Lib/test/test_dbm.py python/branches/release24-maint/Lib/test/test_gdbm.py python/branches/release24-maint/Lib/test/test_multibytecodec.py python/branches/release24-maint/Lib/test/test_sha.py python/branches/release24-maint/Lib/trace.py python/branches/release24-maint/Lib/uu.py python/branches/release24-maint/Mac/BuildScript/build-installer.py python/branches/release24-maint/Misc/NEWS python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c python/branches/release24-maint/Python/ceval.c Log: Revert the following revisions, as they don't fix security problems: 52448, 52468, 52472, 52475, 52646, 52797, 52802, 52863, 52999, 53001, 53101, 53371, 53373, 53383, 53384, 53736, 53812, 53921, 55578, 55580, 55581, 55772, 55775, 56557, 57093, 57094, 58630, 60114 Modified: python/branches/release24-maint/Doc/api/concrete.tex ============================================================================== --- python/branches/release24-maint/Doc/api/concrete.tex (original) +++ python/branches/release24-maint/Doc/api/concrete.tex Sun Mar 2 18:47:51 2008 @@ -2730,10 +2730,10 @@ Various date and time objects are supplied by the \module{datetime} module. Before using any of these functions, the header file \file{datetime.h} must be included in your source (note that this is -not included by \file{Python.h}), and the macro -\cfunction{PyDateTime_IMPORT} must be invoked. The macro puts a -pointer to a C structure into a static variable, -\code{PyDateTimeAPI}, that is used by the following macros. +not include by \file{Python.h}), and macro \cfunction{PyDateTime_IMPORT()} +must be invoked. The macro arranges to put a pointer to a C structure +in a static variable \code{PyDateTimeAPI}, which is used by the following +macros. Type-check macros: Modified: python/branches/release24-maint/Doc/lib/libasyncore.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/libasyncore.tex (original) +++ python/branches/release24-maint/Doc/lib/libasyncore.tex Sun Mar 2 18:47:51 2008 @@ -199,11 +199,9 @@ \end{methoddesc} \begin{methoddesc}{bind}{address} - Bind the socket to \var{address}. The socket must not already be - bound. (The format of \var{address} depends on the address family - --- see above.) To mark the socket as re-usable (setting the - \constant{SO_REUSEADDR} option), call the \class{dispatcher} - object's \method{set_reuse_addr()} method. + Bind the socket to \var{address}. The socket must not already + be bound. (The format of \var{address} depends on the address + family --- see above.) \end{methoddesc} \begin{methoddesc}{accept}{} Modified: python/branches/release24-maint/Doc/lib/libcommands.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/libcommands.tex (original) +++ python/branches/release24-maint/Doc/lib/libcommands.tex Sun Mar 2 18:47:51 2008 @@ -12,11 +12,6 @@ return any output generated by the command and, optionally, the exit status. -The \module{subprocess} module provides more powerful facilities for -spawning new processes and retrieving their results. Using the -\module{subprocess} module is preferable to using the \module{commands} -module. 
- The \module{commands} module defines the following functions: @@ -56,7 +51,3 @@ >>> commands.getstatus('/bin/ls') '-rwxr-xr-x 1 root 13352 Oct 14 1994 /bin/ls' \end{verbatim} - -\begin{seealso} - \seemodule{subprocess}{Module for spawning and managing subprocesses.} -\end{seealso} Modified: python/branches/release24-maint/Doc/lib/liblogging.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/liblogging.tex (original) +++ python/branches/release24-maint/Doc/lib/liblogging.tex Sun Mar 2 18:47:51 2008 @@ -429,10 +429,8 @@ \end{methoddesc} \begin{methoddesc}{findCaller}{} -Finds the caller's source filename and line number. Returns the filename, -line number and function name as a 3-element tuple. -\versionchanged[The function name was added. In earlier versions, the -filename and line number were returned as a 2-element tuple.]{2.4} +Finds the caller's source filename and line number. Returns the filename +and line number as a 2-element tuple. \end{methoddesc} \begin{methoddesc}{handle}{record} @@ -1082,11 +1080,8 @@ communicate with a remote \UNIX{} machine whose address is given by \var{address} in the form of a \code{(\var{host}, \var{port})} tuple. If \var{address} is not specified, \code{('localhost', 514)} is -used. The address is used to open a UDP socket. An alternative to providing -a \code{(\var{host}, \var{port})} tuple is providing an address as a string, -for example "/dev/log". In this case, a Unix domain socket is used to send -the message to the syslog. If \var{facility} is not specified, -\constant{LOG_USER} is used. +used. The address is used to open a UDP socket. If \var{facility} is +not specified, \constant{LOG_USER} is used. \end{classdesc} \begin{methoddesc}{close}{} Modified: python/branches/release24-maint/Doc/lib/libos.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/libos.tex (original) +++ python/branches/release24-maint/Doc/lib/libos.tex Sun Mar 2 18:47:51 2008 @@ -357,10 +357,6 @@ errors), \code{None} is returned. Availability: Macintosh, \UNIX, Windows. -The \module{subprocess} module provides more powerful facilities for -spawning new processes and retrieving their results; using that module -is preferable to using this function. - \versionchanged[This function worked unreliably under Windows in earlier versions of Python. This was due to the use of the \cfunction{_popen()} function from the libraries provided with @@ -375,13 +371,8 @@ Availability: Macintosh, \UNIX, Windows. \end{funcdesc} -There are a number of different \function{popen*()} functions that -provide slightly different ways to create subprocesses. Note that the -\module{subprocess} module is easier to use and more powerful; -consider using that module before writing code using the -lower-level \function{popen*()} functions. -For each of the \function{popen*()} variants, if \var{bufsize} is +For each of the following \function{popen()} variants, if \var{bufsize} is specified, it specifies the buffer size for the I/O pipes. \var{mode}, if provided, should be the string \code{'b'} or \code{'t'}; on Windows this is needed to determine whether the file @@ -1513,13 +1504,7 @@ \funcline{spawnve}{mode, path, args, env} \funcline{spawnvp}{mode, file, args} \funcline{spawnvpe}{mode, file, args, env} -Execute the program \var{path} in a new process. 
- -(Note that the \module{subprocess} module provides more powerful -facilities for spawning new processes and retrieving their results; -using that module is preferable to using these functions.) - -If \var{mode} is +Execute the program \var{path} in a new process. If \var{mode} is \constant{P_NOWAIT}, this function returns the process ID of the new process; if \var{mode} is \constant{P_WAIT}, returns the process's exit code if it exits normally, or \code{-\var{signal}}, where @@ -1647,10 +1632,6 @@ a non-native shell, consult your shell documentation. Availability: Macintosh, \UNIX, Windows. - -The \module{subprocess} module provides more powerful facilities for -spawning new processes and retrieving their results; using that module -is preferable to using this function. \end{funcdesc} \begin{funcdesc}{times}{} Modified: python/branches/release24-maint/Doc/lib/libpopen2.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/libpopen2.tex (original) +++ python/branches/release24-maint/Doc/lib/libpopen2.tex Sun Mar 2 18:47:51 2008 @@ -11,10 +11,10 @@ input/output/error pipes and obtain their return codes under \UNIX{} and Windows. -The \module{subprocess} module provides more powerful facilities for -spawning new processes and retrieving their results. Using the -\module{subprocess} module is preferable to using the \module{popen2} -module. +Note that starting with Python 2.0, this functionality is available +using functions from the \refmodule{os} module which have the same +names as the factory functions here, but the order of the return +values is more intuitive in the \refmodule{os} module variants. The primary interface offered by this module is a trio of factory functions. For each of these, if \var{bufsize} is specified, @@ -184,7 +184,3 @@ separate threads to read each of the individual files provided by whichever \function{popen*()} function or \class{Popen*} class was used. - -\begin{seealso} - \seemodule{subprocess}{Module for spawning and managing subprocesses.} -\end{seealso} Modified: python/branches/release24-maint/Doc/lib/libsubprocess.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/libsubprocess.tex (original) +++ python/branches/release24-maint/Doc/lib/libsubprocess.tex Sun Mar 2 18:47:51 2008 @@ -12,6 +12,9 @@ codes. This module intends to replace several other, older modules and functions, such as: +% XXX Should add pointers to this module to at least the popen2 +% and commands sections. + \begin{verbatim} os.system os.spawn* Modified: python/branches/release24-maint/Lib/idlelib/EditorWindow.py ============================================================================== --- python/branches/release24-maint/Lib/idlelib/EditorWindow.py (original) +++ python/branches/release24-maint/Lib/idlelib/EditorWindow.py Sun Mar 2 18:47:51 2008 @@ -703,7 +703,7 @@ def close(self): reply = self.maybesave() - if str(reply) != "cancel": + if reply != "cancel": self._close() return reply Modified: python/branches/release24-maint/Lib/logging/__init__.py ============================================================================== --- python/branches/release24-maint/Lib/logging/__init__.py (original) +++ python/branches/release24-maint/Lib/logging/__init__.py Sun Mar 2 18:47:51 2008 @@ -1,4 +1,4 @@ -# Copyright 2001-2007 by Vinay Sajip. All Rights Reserved. +# Copyright 2001-2005 by Vinay Sajip. All Rights Reserved. 
# # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose and without fee is hereby granted, @@ -21,7 +21,7 @@ Should work under Python versions >= 1.5.2, except that source line information is not available unless 'sys._getframe()' is. -Copyright (C) 2001-2007 Vinay Sajip. All Rights Reserved. +Copyright (C) 2001-2004 Vinay Sajip. All Rights Reserved. To use, simply 'import logging' and log away! """ @@ -68,7 +68,7 @@ except: return sys.exc_traceback.tb_frame.f_back -if hasattr(sys, '_getframe'): currentframe = lambda: sys._getframe(3) +if hasattr(sys, '_getframe'): currentframe = sys._getframe # done filching # _srcfile is only used in conjunction with sys._getframe(). @@ -1318,14 +1318,14 @@ """ root.manager.disable = level -def shutdown(handlerList=_handlerList): +def shutdown(): """ Perform any cleanup actions in the logging system (e.g. flushing buffers). Should be called at application exit. """ - for h in handlerList[:]: + for h in _handlerList[:]: # was _handlers.keys(): #errors might occur, for example, if files are locked #we just ignore them if raiseExceptions is not set try: Modified: python/branches/release24-maint/Lib/logging/config.py ============================================================================== --- python/branches/release24-maint/Lib/logging/config.py (original) +++ python/branches/release24-maint/Lib/logging/config.py Sun Mar 2 18:47:51 2008 @@ -78,7 +78,7 @@ flist = string.split(flist, ",") formatters = {} for form in flist: - sectname = "formatter_%s" % string.strip(form) + sectname = "formatter_%s" % form opts = cp.options(sectname) if "format" in opts: fs = cp.get(sectname, "format", 1) @@ -97,7 +97,6 @@ try: #first, lose the existing handlers... logging._handlers.clear() - del logging._handlerList[:] #now set up the new ones... hlist = cp.get("handlers", "keys") if len(hlist): @@ -106,7 +105,7 @@ fixups = [] #for inter-handler references for hand in hlist: try: - sectname = "handler_%s" % string.strip(hand) + sectname = "handler_%s" % hand klass = cp.get(sectname, "class") opts = cp.options(sectname) if "formatter" in opts: @@ -141,7 +140,6 @@ #at last, the loggers...first the root... llist = cp.get("loggers", "keys") llist = string.split(llist, ",") - llist = map(lambda x: string.strip(x), llist) llist.remove("root") sectname = "logger_root" root = logging.root @@ -156,7 +154,7 @@ if len(hlist): hlist = string.split(hlist, ",") for hand in hlist: - log.addHandler(handlers[string.strip(hand)]) + log.addHandler(handlers[hand]) #and now the others... #we don't want to lose the existing loggers, #since other threads may have pointers to them. @@ -190,7 +188,7 @@ if len(hlist): hlist = string.split(hlist, ",") for hand in hlist: - logger.addHandler(handlers[string.strip(hand)]) + logger.addHandler(handlers[hand]) #Disable any old loggers. There's no point deleting #them as other threads may continue to hold references #and by disabling them, you stop them doing any logging. Modified: python/branches/release24-maint/Lib/logging/handlers.py ============================================================================== --- python/branches/release24-maint/Lib/logging/handlers.py (original) +++ python/branches/release24-maint/Lib/logging/handlers.py Sun Mar 2 18:47:51 2008 @@ -1,4 +1,4 @@ -# Copyright 2001-2007 by Vinay Sajip. All Rights Reserved. +# Copyright 2001-2005 by Vinay Sajip. All Rights Reserved. 
# # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose and without fee is hereby granted, @@ -22,7 +22,7 @@ Should work under Python versions >= 1.5.2, except that source line information is not available unless 'sys._getframe()' is. -Copyright (C) 2001-2007 Vinay Sajip. All Rights Reserved. +Copyright (C) 2001-2004 Vinay Sajip. All Rights Reserved. To use, simply 'import logging' and log away! """ @@ -231,11 +231,11 @@ # of days in the next week until the rollover day (3). if when.startswith('W'): day = t[6] # 0 is Monday - if day != self.dayOfWeek: - if day < self.dayOfWeek: - daysToWait = self.dayOfWeek - day - 1 - else: - daysToWait = 6 - day + self.dayOfWeek + if day > self.dayOfWeek: + daysToWait = (day - self.dayOfWeek) - 1 + self.rolloverAt = self.rolloverAt + (daysToWait * (60 * 60 * 24)) + if day < self.dayOfWeek: + daysToWait = (6 - self.dayOfWeek) + day self.rolloverAt = self.rolloverAt + (daysToWait * (60 * 60 * 24)) #print "Will rollover at %d, %d seconds from now" % (self.rolloverAt, self.rolloverAt - currentTime) @@ -566,8 +566,7 @@ """ Initialize a handler. - If address is specified as a string, a UNIX socket is used. To log to a - local syslogd, "SysLogHandler(address="/dev/log")" can be used. + If address is specified as a string, UNIX socket is used. If facility is not specified, LOG_USER is used. """ logging.Handler.__init__(self) @@ -575,11 +574,11 @@ self.address = address self.facility = facility if type(address) == types.StringType: - self.unixsocket = 1 self._connect_unixsocket(address) + self.unixsocket = 1 else: - self.unixsocket = 0 self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + self.unixsocket = 0 self.formatter = None Modified: python/branches/release24-maint/Lib/random.py ============================================================================== --- python/branches/release24-maint/Lib/random.py (original) +++ python/branches/release24-maint/Lib/random.py Sun Mar 2 18:47:51 2008 @@ -205,7 +205,7 @@ raise ValueError, "empty range for randrange()" if n >= maxwidth: - return istart + istep*self._randbelow(n) + return istart + self._randbelow(n) return istart + istep*int(self.random() * n) def randint(self, a, b): Modified: python/branches/release24-maint/Lib/subprocess.py ============================================================================== --- python/branches/release24-maint/Lib/subprocess.py (original) +++ python/branches/release24-maint/Lib/subprocess.py Sun Mar 2 18:47:51 2008 @@ -346,7 +346,6 @@ import os import types import traceback -import gc if mswindows: import threading @@ -900,16 +899,7 @@ errpipe_read, errpipe_write = os.pipe() self._set_cloexec_flag(errpipe_write) - gc_was_enabled = gc.isenabled() - # Disable gc to avoid bug where gc -> file_dealloc -> - # write to stderr -> hang. 
http://bugs.python.org/issue1336 - gc.disable() - try: - self.pid = os.fork() - except: - if gc_was_enabled: - gc.enable() - raise + self.pid = os.fork() if self.pid == 0: # Child try: @@ -968,8 +958,6 @@ os._exit(255) # Parent - if gc_was_enabled: - gc.enable() os.close(errpipe_write) if p2cread and p2cwrite: os.close(p2cread) Modified: python/branches/release24-maint/Lib/test/test_dbm.py ============================================================================== --- python/branches/release24-maint/Lib/test/test_dbm.py (original) +++ python/branches/release24-maint/Lib/test/test_dbm.py Sun Mar 2 18:47:51 2008 @@ -6,11 +6,11 @@ import random import dbm from dbm import error -from test.test_support import verbose, verify, TestSkipped, TESTFN +from test.test_support import verbose, verify, TestSkipped # make filename unique to allow multiple concurrent tests # and to minimize the likelihood of a problem from an old file -filename = TESTFN +filename = '/tmp/delete_me_' + str(random.random())[-6:] def cleanup(): for suffix in ['', '.pag', '.dir', '.db']: Modified: python/branches/release24-maint/Lib/test/test_gdbm.py ============================================================================== --- python/branches/release24-maint/Lib/test/test_gdbm.py (original) +++ python/branches/release24-maint/Lib/test/test_gdbm.py Sun Mar 2 18:47:51 2008 @@ -5,9 +5,9 @@ import gdbm from gdbm import error -from test.test_support import verbose, verify, TestFailed, TESTFN +from test.test_support import verbose, verify, TestFailed -filename = TESTFN +filename= '/tmp/delete_me' g = gdbm.open(filename, 'c') verify(g.keys() == []) Modified: python/branches/release24-maint/Lib/test/test_multibytecodec.py ============================================================================== --- python/branches/release24-maint/Lib/test/test_multibytecodec.py (original) +++ python/branches/release24-maint/Lib/test/test_multibytecodec.py Sun Mar 2 18:47:51 2008 @@ -7,19 +7,7 @@ from test import test_support from test import test_multibytecodec_support -from test.test_support import TESTFN -import unittest, StringIO, codecs, sys, os - -class Test_StreamReader(unittest.TestCase): - def test_bug1728403(self): - try: - open(TESTFN, 'w').write('\xa1') - f = codecs.open(TESTFN, encoding='cp949') - self.assertRaises(UnicodeDecodeError, f.read, 2) - finally: - try: f.close() - except: pass - os.unlink(TESTFN) +import unittest, StringIO, codecs, sys class Test_StreamWriter(unittest.TestCase): if len(u'\U00012345') == 2: # UCS2 @@ -111,7 +99,6 @@ def test_main(): suite = unittest.TestSuite() - suite.addTest(unittest.makeSuite(Test_StreamReader)) suite.addTest(unittest.makeSuite(Test_StreamWriter)) suite.addTest(unittest.makeSuite(Test_ISO2022)) test_support.run_suite(suite) Modified: python/branches/release24-maint/Lib/test/test_sha.py ============================================================================== --- python/branches/release24-maint/Lib/test/test_sha.py (original) +++ python/branches/release24-maint/Lib/test/test_sha.py Sun Mar 2 18:47:51 2008 @@ -11,23 +11,9 @@ class SHATestCase(unittest.TestCase): def check(self, data, digest): - # Check digest matches the expected value - obj = sha.new(data) - computed = obj.hexdigest() + computed = sha.new(data).hexdigest() self.assert_(computed == digest) - # Verify that the value doesn't change between two consecutive - # digest operations. 
- computed_again = obj.hexdigest() - self.assert_(computed == computed_again) - - # Check hexdigest() output matches digest()'s output - digest = obj.digest() - hexd = "" - for c in digest: - hexd += '%02x' % ord(c) - self.assert_(computed == hexd) - def test_case_1(self): self.check("abc", "a9993e364706816aba3e25717850c26c9cd0d89d") @@ -40,9 +26,6 @@ self.check("a" * 1000000, "34aa973cd4c4daa4f61eeb2bdbad27316534016f") - def test_case_4(self): - self.check(chr(0xAA) * 80, - '4ca0ef38f1794b28a8f8ee110ee79d48ce13be25') def test_main(): test_support.run_unittest(SHATestCase) Modified: python/branches/release24-maint/Lib/trace.py ============================================================================== --- python/branches/release24-maint/Lib/trace.py (original) +++ python/branches/release24-maint/Lib/trace.py Sun Mar 2 18:47:51 2008 @@ -583,7 +583,7 @@ """ if why == 'call': code = frame.f_code - filename = frame.f_globals.get('__file__', None) + filename = code.co_filename if filename: # XXX modname() doesn't work right for packages, so # the ignore support won't work right for packages Modified: python/branches/release24-maint/Lib/uu.py ============================================================================== --- python/branches/release24-maint/Lib/uu.py (original) +++ python/branches/release24-maint/Lib/uu.py Sun Mar 2 18:47:51 2008 @@ -115,7 +115,6 @@ # # Open the output file # - opened = False if out_file == '-': out_file = sys.stdout elif isinstance(out_file, StringType): @@ -125,7 +124,6 @@ except AttributeError: pass out_file = fp - opened = True # # Main decoding loop # @@ -143,8 +141,6 @@ s = in_file.readline() if not s: raise Error, 'Truncated input file' - if opened: - out_file.close() def test(): """uuencode/uudecode main program""" Modified: python/branches/release24-maint/Mac/BuildScript/build-installer.py ============================================================================== --- python/branches/release24-maint/Mac/BuildScript/build-installer.py (original) +++ python/branches/release24-maint/Mac/BuildScript/build-installer.py Sun Mar 2 18:47:51 2008 @@ -10,7 +10,7 @@ Usage: see USAGE variable in the script. """ import platform, os, sys, getopt, textwrap, shutil, urllib2, stat, time, pwd -import grp, md5 +import grp INCLUDE_TIMESTAMP=1 VERBOSE=1 @@ -33,7 +33,7 @@ def shellQuote(value): """ - Return the string value in a form that can safely be inserted into + Return the string value in a form that can savely be inserted into a shell command. """ return "'%s'"%(value.replace("'", "'\"'\"'")) @@ -56,13 +56,13 @@ raise RuntimeError, "Cannot find full version??" -# The directory we'll use to create the build (will be erased and recreated) +# The directory we'll use to create the build, will be erased and recreated WORKDIR="/tmp/_py24" -# The directory we'll use to store third-party sources. Set this to something +# The directory we'll use to store third-party sources, set this to something # else if you don't want to re-fetch required libraries every time. DEPSRC=os.path.join(WORKDIR, 'third-party') -DEPSRC=os.path.expanduser('/tmp/other-sources') +DEPSRC=os.path.expanduser('~/Universal/other-sources') # Location of the preferred SDK SDKPATH="/Developer/SDKs/MacOSX10.4u.sdk" @@ -94,9 +94,8 @@ # batteries included python. 
LIBRARY_RECIPES=[ dict( - name="Bzip2 1.0.4", - url="http://www.bzip.org/1.0.4/bzip2-1.0.4.tar.gz", - checksum="fc310b254f6ba5fbb5da018f04533688", + name="Bzip2 1.0.3", + url="http://www.bzip.org/1.0.3/bzip2-1.0.3.tar.gz", configure=None, install='make install PREFIX=%s/usr/local/ CFLAGS="-arch %s -isysroot %s"'%( shellQuote(os.path.join(WORKDIR, 'libraries')), @@ -107,7 +106,6 @@ dict( name="ZLib 1.2.3", url="http://www.gzip.org/zlib/zlib-1.2.3.tar.gz", - checksum="debc62758716a169df9f62e6ab2bc634", configure=None, install='make install prefix=%s/usr/local/ CFLAGS="-arch %s -isysroot %s"'%( shellQuote(os.path.join(WORKDIR, 'libraries')), @@ -119,7 +117,6 @@ # Note that GNU readline is GPL'd software name="GNU Readline 5.1.4", url="http://ftp.gnu.org/pub/gnu/readline/readline-5.1.tar.gz" , - checksum="7ee5a692db88b30ca48927a13fd60e46", patchlevel='0', patches=[ # The readline maintainers don't do actual micro releases, but @@ -134,7 +131,6 @@ dict( name="NCurses 5.5", url="http://ftp.gnu.org/pub/gnu/ncurses/ncurses-5.5.tar.gz", - checksum='e73c1ac10b4bfc46db43b2ddfd6244ef', configure_pre=[ "--without-cxx", "--without-ada", @@ -163,7 +159,6 @@ dict( name="Sleepycat DB 4.4", url="http://downloads.sleepycat.com/db-4.4.20.tar.gz", - checksum='d84dff288a19186b136b0daf7067ade3', #name="Sleepycat DB 4.3.29", #url="http://downloads.sleepycat.com/db-4.3.29.tar.gz", buildDir="build_unix", @@ -193,7 +188,7 @@ long_name="GUI Applications", source="/Applications/MacPython %(VER)s", readme="""\ - This package installs IDLE (an interactive Python IDE), + This package installs IDLE (an interactive Python IDLE), Python Launcher and Build Applet (create application bundles from python scripts). @@ -249,7 +244,8 @@ readme="""\ This package updates the system python installation on Mac OS X 10.3 to ensure that you can build new python extensions - using that copy of python after installing this version. + using that copy of python after installing this version of + python. """, postflight="../OSX/fixapplepython23.py", topdir="/Library/Frameworks/Python.framework", @@ -313,19 +309,6 @@ fatal("Please install the latest version of Xcode and the %s SDK"%( os.path.basename(SDKPATH[:-4]))) - if os.path.exists('/sw'): - fatal("Detected Fink, please remove before building Python") - - if os.path.exists('/opt/local'): - fatal("Detected MacPorts, please remove before building Python") - - if not os.path.exists('/Library/Frameworks/Tcl.framework') or \ - not os.path.exists('/Library/Frameworks/Tk.framework'): - - fatal("Please install a Universal Tcl/Tk framework in /Library from\n\thttp://tcltkaqua.sourceforge.net/") - - - def parseOptions(args = None): @@ -462,17 +445,6 @@ except: pass -def verifyChecksum(path, checksum): - summer = md5.md5() - fp = open(path, 'rb') - block = fp.read(10240) - while block: - summer.update(block) - block = fp.read(10240) - - return summer.hexdigest() == checksum - - def buildRecipe(recipe, basedir, archList): """ Build software using a recipe. 
This function does the @@ -494,16 +466,13 @@ os.mkdir(DEPSRC) - if os.path.exists(sourceArchive) and verifyChecksum(sourceArchive, recipe['checksum']): + if os.path.exists(sourceArchive): print "Using local copy of %s"%(name,) else: print "Downloading %s"%(name,) downloadURL(url, sourceArchive) print "Archive for %s stored as %s"%(name, sourceArchive) - if not verifyChecksum(sourceArchive, recipe['checksum']): - fatal("Download for %s failed: bad checksum"%(url,)) - print "Extracting archive for %s"%(name,) buildDir=os.path.join(WORKDIR, '_bld') @@ -655,15 +624,15 @@ print "Running make" runCommand("make") - print "Running make frameworkinstall" + print "Runing make frameworkinstall" runCommand("make frameworkinstall DESTDIR=%s"%( shellQuote(rootDir))) - print "Running make frameworkinstallextras" + print "Runing make frameworkinstallextras" runCommand("make frameworkinstallextras DESTDIR=%s"%( shellQuote(rootDir))) - print "Copying required shared libraries" + print "Copy required shared libraries" if os.path.exists(os.path.join(WORKDIR, 'libraries', 'Library')): runCommand("mv %s/* %s"%( shellQuote(os.path.join( @@ -751,8 +720,8 @@ def packageFromRecipe(targetDir, recipe): curdir = os.getcwd() try: - # The major version (such as 2.5) is included in the package name - # because having two version of python installed at the same time is + # The major version (such as 2.5) is included in the pacakge name + # because haveing two version of python installed at the same time is # common. pkgname = '%s-%s'%(recipe['name'], getVersion()) srcdir = recipe.get('source') @@ -926,7 +895,7 @@ def buildDMG(): """ - Create DMG containing the rootDir. + Create DMG containing the rootDir """ outdir = os.path.join(WORKDIR, 'diskimage') if os.path.exists(outdir): @@ -940,7 +909,7 @@ os.mkdir(outdir) time.sleep(1) - runCommand("hdiutil create -volname 'Universal MacPython %s' -srcfolder %s %s"%( + runCommand("hdiutil create -volname 'Univeral MacPython %s' -srcfolder %s %s"%( getFullVersion(), shellQuote(os.path.join(WORKDIR, 'installer')), shellQuote(imagepath))) Modified: python/branches/release24-maint/Misc/NEWS ============================================================================== --- python/branches/release24-maint/Misc/NEWS (original) +++ python/branches/release24-maint/Misc/NEWS Sun Mar 2 18:47:51 2008 @@ -15,31 +15,16 @@ - patch #1630975: Fix crash when replacing sys.stdout in sitecustomize.py -- Bug #1590891: random.randrange don't return correct value for big number - -- Bug #1542016: make sys.callstats() match its docstring and return an - 11-tuple (only relevant when Python is compiled with -DCALL_PROFILE). - Extension Modules ----------------- Library ------- -- Issue #1336: fix a race condition in subprocess.Popen if the garbage - collector kicked in at the wrong time that would cause the process - to hang when the child wrote to stderr. - -- Bug #1728403: Fix a bug that CJKCodecs StreamReader hangs when it - reads a file that ends with incomplete sequence and sizehint argument - for .read() is specified. - - HTML-escape the plain traceback in cgitb's HTML output, to prevent the traceback inadvertently or maliciously closing the comment and injecting HTML into the error page. -- idle: Honor the "Cancel" action in the save dialog (Debian bug #299092). - Tests ----- @@ -204,9 +189,6 @@ Library ------- -- Patch 1571379: Make trace's --ignore-dir facility work in the face of - relative directory names. 
- - Bug #1545341: The 'classifier' keyword argument to the Distutils setup() function now accepts tuples as well as lists. Modified: python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c ============================================================================== --- python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c (original) +++ python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c Sun Mar 2 18:47:51 2008 @@ -705,8 +705,6 @@ cres = NULL; for (;;) { - int endoffile; - if (sizehint < 0) cres = PyObject_CallMethod(self->stream, (char *)method, NULL); @@ -723,8 +721,6 @@ goto errorexit; } - endoffile = (PyString_GET_SIZE(cres) == 0); - if (self->pendingsize > 0) { PyObject *ctr; char *ctrdata; @@ -776,7 +772,7 @@ goto errorexit; } - if (endoffile || sizehint < 0) { /* end of file */ + if (rsize == 0 || sizehint < 0) { /* end of file */ if (buf.inbuf < buf.inbuf_end && multibytecodec_decerror(self->codec, &self->state, &buf, self->errors, MBERR_TOOFEW)) Modified: python/branches/release24-maint/Python/ceval.c ============================================================================== --- python/branches/release24-maint/Python/ceval.c (original) +++ python/branches/release24-maint/Python/ceval.c Sun Mar 2 18:47:51 2008 @@ -179,10 +179,10 @@ PyObject * PyEval_GetCallStats(PyObject *self) { - return Py_BuildValue("iiiiiiiiiii", + return Py_BuildValue("iiiiiiiiii", pcall[0], pcall[1], pcall[2], pcall[3], pcall[4], pcall[5], pcall[6], pcall[7], - pcall[8], pcall[9], pcall[10]); + pcall[8], pcall[9]); } #else #define PCALL(O) @@ -4073,10 +4073,8 @@ value = PyObject_GetAttr(v, name); if (value == NULL) err = -1; - else if (PyDict_CheckExact(locals)) - err = PyDict_SetItem(locals, name, value); else - err = PyObject_SetItem(locals, name, value); + err = PyDict_SetItem(locals, name, value); Py_DECREF(name); Py_XDECREF(value); if (err != 0) From nnorwitz at gmail.com Sun Mar 2 20:13:58 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 2 Mar 2008 14:13:58 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080302191358.GA17039@python.psfb.org> 318 tests OK. 
1 test failed: test_sqlite 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization 
test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings self.con.execute("create table if not exists foo(bar)") OperationalError: near "not": syntax error test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 1 test failed: test_sqlite 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [582082 refs] From python-checkins at python.org Sun Mar 2 20:20:33 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 20:20:33 +0100 (CET) Subject: [Python-checkins] r61180 - in python/branches/release24-maint: Include/pymem.h Include/pyport.h Misc/NEWS Modules/_csv.c Modules/arraymodule.c Modules/audioop.c Modules/binascii.c Modules/cPickle.c Modules/cStringIO.c Modules/cjkcodecs/multibytecodec.c Modules/datetimemodule.c Modules/rgbimgmodule.c Modules/stropmodule.c Objects/bufferobject.c Objects/listobject.c Parser/node.c Python/bltinmodule.c Message-ID: <20080302192033.EC63C1E402A@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 20:20:32 2008 New Revision: 61180 Modified: python/branches/release24-maint/Include/pymem.h python/branches/release24-maint/Include/pyport.h python/branches/release24-maint/Misc/NEWS python/branches/release24-maint/Modules/_csv.c python/branches/release24-maint/Modules/arraymodule.c python/branches/release24-maint/Modules/audioop.c python/branches/release24-maint/Modules/binascii.c python/branches/release24-maint/Modules/cPickle.c python/branches/release24-maint/Modules/cStringIO.c python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c python/branches/release24-maint/Modules/datetimemodule.c python/branches/release24-maint/Modules/rgbimgmodule.c python/branches/release24-maint/Modules/stropmodule.c python/branches/release24-maint/Objects/bufferobject.c python/branches/release24-maint/Objects/listobject.c python/branches/release24-maint/Parser/node.c python/branches/release24-maint/Python/bltinmodule.c Log: Backport of r60793: Added checks for integer overflows, contributed by Google. Some are only available if asserts are left in the code, in cases where they can't be triggered from Python code. 
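The pattern repeated throughout the hunks below is a single check: before multiplying an element count into a byte count for an allocation, verify that the product cannot exceed the largest representable size (PY_SIZE_MAX or INT_MAX in the patch), and fail with PyErr_NoMemory() rather than letting the arithmetic wrap. A minimal standalone sketch of that idea in plain C, using malloc() and SIZE_MAX instead of the CPython allocators and PY_SIZE_MAX (the function name safe_alloc_items is illustrative only, not part of the patch):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate room for `count` items of `item_size` bytes each,
     * refusing the request if count * item_size would overflow size_t. */
    static void *safe_alloc_items(size_t count, size_t item_size)
    {
        if (item_size != 0 && count > SIZE_MAX / item_size) {
            /* The multiplication would wrap, so report failure instead
             * of handing back a too-small buffer. */
            return NULL;
        }
        return malloc(count * item_size);
    }

    int main(void)
    {
        void *p = safe_alloc_items((SIZE_MAX / 2) + 1, 4);   /* would wrap */
        printf("huge request:  %s\n", p ? "allocated" : "rejected");
        free(p);

        p = safe_alloc_items(1000, 4);                       /* fits easily */
        printf("small request: %s\n", p ? "allocated" : "rejected");
        free(p);
        return 0;
    }

The division form of the test is what the patch uses because checking the product after the fact is too late: once the multiplication has wrapped, the result already looks like a plausible small size.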
Modified: python/branches/release24-maint/Include/pymem.h ============================================================================== --- python/branches/release24-maint/Include/pymem.h (original) +++ python/branches/release24-maint/Include/pymem.h Sun Mar 2 20:20:32 2008 @@ -86,14 +86,18 @@ */ #define PyMem_New(type, n) \ - ( (type *) PyMem_Malloc((n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (type *) PyMem_Malloc((n) * sizeof(type)) ) ) #define PyMem_NEW(type, n) \ - ( (type *) PyMem_MALLOC((n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (type *) PyMem_MALLOC((n) * sizeof(type)) ) ) #define PyMem_Resize(p, type, n) \ - ( (p) = (type *) PyMem_Realloc((p), (n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (p) = (type *) PyMem_Realloc((p), (n) * sizeof(type)) ) ) #define PyMem_RESIZE(p, type, n) \ - ( (p) = (type *) PyMem_REALLOC((p), (n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (p) = (type *) PyMem_REALLOC((p), (n) * sizeof(type)) ) ) /* In order to avoid breaking old code mixing PyObject_{New, NEW} with PyMem_{Del, DEL} and PyMem_{Free, FREE}, the PyMem "release memory" Modified: python/branches/release24-maint/Include/pyport.h ============================================================================== --- python/branches/release24-maint/Include/pyport.h (original) +++ python/branches/release24-maint/Include/pyport.h Sun Mar 2 20:20:32 2008 @@ -616,6 +616,17 @@ #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." #endif +/* Largest possible value of size_t. + SIZE_MAX is part of C99, so it might be defined on some + platforms. If it is not defined, (size_t)-1 is a portable + definition for C89, due to the way signed->unsigned + conversion is defined. */ +#ifdef SIZE_MAX +#define PY_SIZE_MAX SIZE_MAX +#else +#define PY_SIZE_MAX ((size_t)-1) +#endif + #ifdef __cplusplus } #endif Modified: python/branches/release24-maint/Misc/NEWS ============================================================================== --- python/branches/release24-maint/Misc/NEWS (original) +++ python/branches/release24-maint/Misc/NEWS Sun Mar 2 20:20:32 2008 @@ -13,6 +13,10 @@ Core and builtins ----------------- +- Added checks for integer overflows, contributed by Google. Some are + only available if asserts are left in the code, in cases where they + can't be triggered from Python code. 
+ - patch #1630975: Fix crash when replacing sys.stdout in sitecustomize.py Extension Modules Modified: python/branches/release24-maint/Modules/_csv.c ============================================================================== --- python/branches/release24-maint/Modules/_csv.c (original) +++ python/branches/release24-maint/Modules/_csv.c Sun Mar 2 20:20:32 2008 @@ -470,6 +470,10 @@ self->field = PyMem_Malloc(self->field_size); } else { + if (self->field_size > INT_MAX / 2) { + PyErr_NoMemory(); + return 0; + } self->field_size *= 2; self->field = PyMem_Realloc(self->field, self->field_size); } @@ -1003,6 +1007,12 @@ static int join_check_rec_size(WriterObj *self, int rec_len) { + + if (rec_len < 0 || rec_len > INT_MAX - MEM_INCR) { + PyErr_NoMemory(); + return 0; + } + if (rec_len > self->rec_size) { if (self->rec_size == 0) { self->rec_size = (rec_len / MEM_INCR + 1) * MEM_INCR; Modified: python/branches/release24-maint/Modules/arraymodule.c ============================================================================== --- python/branches/release24-maint/Modules/arraymodule.c (original) +++ python/branches/release24-maint/Modules/arraymodule.c Sun Mar 2 20:20:32 2008 @@ -651,6 +651,9 @@ PyErr_BadArgument(); return NULL; } + if (a->ob_size > INT_MAX - b->ob_size) { + return PyErr_NoMemory(); + } size = a->ob_size + b->ob_size; np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); if (np == NULL) { @@ -673,6 +676,9 @@ int nbytes; if (n < 0) n = 0; + if ((a->ob_size != 0) && (n > INT_MAX / a->ob_size)) { + return PyErr_NoMemory(); + } size = a->ob_size * n; np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); if (np == NULL) @@ -817,6 +823,11 @@ "can only extend with array of same kind"); return -1; } + if ((self->ob_size > INT_MAX - b->ob_size) || + ((self->ob_size + b->ob_size) > INT_MAX / self->ob_descr->itemsize)) { + PyErr_NoMemory(); + return -1; + } size = self->ob_size + b->ob_size; PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); if (self->ob_item == NULL) { @@ -858,6 +869,10 @@ if (n < 0) n = 0; items = self->ob_item; + if ((self->ob_descr->itemsize != 0) && + (self->ob_size > INT_MAX / self->ob_descr->itemsize)) { + return PyErr_NoMemory(); + } size = self->ob_size * self->ob_descr->itemsize; if (n == 0) { PyMem_FREE(items); @@ -866,6 +881,9 @@ self->allocated = 0; } else { + if (size > INT_MAX / n) { + return PyErr_NoMemory(); + } PyMem_Resize(items, char, n * size); if (items == NULL) return PyErr_NoMemory(); @@ -1278,6 +1296,9 @@ if ((*self->ob_descr->setitem)(self, self->ob_size - n + i, v) != 0) { self->ob_size -= n; + if (itemsize && (self->ob_size > INT_MAX / itemsize)) { + return PyErr_NoMemory(); + } PyMem_RESIZE(item, char, self->ob_size * itemsize); self->ob_item = item; @@ -1337,6 +1358,10 @@ n = n / itemsize; if (n > 0) { char *item = self->ob_item; + if ((n > INT_MAX - self->ob_size) || + ((self->ob_size + n) > INT_MAX / itemsize)) { + return PyErr_NoMemory(); + } PyMem_RESIZE(item, char, (self->ob_size + n) * itemsize); if (item == NULL) { PyErr_NoMemory(); @@ -1362,8 +1387,12 @@ static PyObject * array_tostring(arrayobject *self, PyObject *unused) { - return PyString_FromStringAndSize(self->ob_item, + if (self->ob_size <= INT_MAX / self->ob_descr->itemsize) { + return PyString_FromStringAndSize(self->ob_item, self->ob_size * self->ob_descr->itemsize); + } else { + return PyErr_NoMemory(); + } } PyDoc_STRVAR(tostring_doc, @@ -1391,6 +1420,9 @@ } if (n > 0) { Py_UNICODE *item = (Py_UNICODE *) self->ob_item; + if 
(self->ob_size > INT_MAX - n) { + return PyErr_NoMemory(); + } PyMem_RESIZE(item, Py_UNICODE, self->ob_size + n); if (item == NULL) { PyErr_NoMemory(); Modified: python/branches/release24-maint/Modules/audioop.c ============================================================================== --- python/branches/release24-maint/Modules/audioop.c (original) +++ python/branches/release24-maint/Modules/audioop.c Sun Mar 2 20:20:32 2008 @@ -674,7 +674,7 @@ audioop_tostereo(PyObject *self, PyObject *args) { signed char *cp, *ncp; - int len, size, val1, val2, val = 0; + int len, new_len, size, val1, val2, val = 0; double fac1, fac2, fval, maxval; PyObject *rv; int i; @@ -690,7 +690,14 @@ return 0; } - rv = PyString_FromStringAndSize(NULL, len*2); + new_len = len*2; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + + rv = PyString_FromStringAndSize(NULL, new_len); if ( rv == 0 ) return 0; ncp = (signed char *)PyString_AsString(rv); @@ -853,7 +860,7 @@ { signed char *cp; unsigned char *ncp; - int len, size, size2, val = 0; + int len, new_len, size, size2, val = 0; PyObject *rv; int i, j; @@ -867,7 +874,13 @@ return 0; } - rv = PyString_FromStringAndSize(NULL, (len/size)*size2); + new_len = (len/size)*size2; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + rv = PyString_FromStringAndSize(NULL, new_len); if ( rv == 0 ) return 0; ncp = (unsigned char *)PyString_AsString(rv); @@ -903,6 +916,7 @@ int chan, d, *prev_i, *cur_i, cur_o; PyObject *state, *samps, *str, *rv = NULL; int bytes_per_frame; + size_t alloc_size; weightA = 1; weightB = 0; @@ -944,8 +958,14 @@ inrate /= d; outrate /= d; - prev_i = (int *) malloc(nchannels * sizeof(int)); - cur_i = (int *) malloc(nchannels * sizeof(int)); + alloc_size = sizeof(int) * (unsigned)nchannels; + if (alloc_size < nchannels) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + prev_i = (int *) malloc(alloc_size); + cur_i = (int *) malloc(alloc_size); if (prev_i == NULL || cur_i == NULL) { (void) PyErr_NoMemory(); goto exit; @@ -1116,7 +1136,7 @@ unsigned char *cp; unsigned char cval; signed char *ncp; - int len, size, val; + int len, new_len, size, val; PyObject *rv; int i; @@ -1129,12 +1149,18 @@ return 0; } - rv = PyString_FromStringAndSize(NULL, len*size); + new_len = len*size; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + rv = PyString_FromStringAndSize(NULL, new_len); if ( rv == 0 ) return 0; ncp = (signed char *)PyString_AsString(rv); - for ( i=0; i < len*size; i += size ) { + for ( i=0; i < new_len; i += size ) { cval = *cp++; val = st_ulaw_to_linear(cval); @@ -1259,7 +1285,7 @@ { signed char *cp; signed char *ncp; - int len, size, valpred, step, delta, index, sign, vpdiff; + int len, new_len, size, valpred, step, delta, index, sign, vpdiff; PyObject *rv, *str, *state; int i, inputbuffer = 0, bufferstep; @@ -1281,7 +1307,13 @@ } else if ( !PyArg_Parse(state, "(ii)", &valpred, &index) ) return 0; - str = PyString_FromStringAndSize(NULL, len*size*2); + new_len = len*size*2; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + str = PyString_FromStringAndSize(NULL, new_len); if ( str == 0 ) return 0; ncp = (signed char *)PyString_AsString(str); @@ -1289,7 +1321,7 @@ step = stepsizeTable[index]; bufferstep = 0; - for ( i=0; i < len*size*2; i += 
size ) { + for ( i=0; i < new_len; i += size ) { /* Step 1 - get the delta value and compute next index */ if ( bufferstep ) { delta = inputbuffer & 0xf; Modified: python/branches/release24-maint/Modules/binascii.c ============================================================================== --- python/branches/release24-maint/Modules/binascii.c (original) +++ python/branches/release24-maint/Modules/binascii.c Sun Mar 2 20:20:32 2008 @@ -194,6 +194,8 @@ if ( !PyArg_ParseTuple(args, "t#:a2b_uu", &ascii_data, &ascii_len) ) return NULL; + assert(ascii_len >= 0); + /* First byte: binary data length (in bytes) */ bin_len = (*ascii_data++ - ' ') & 077; ascii_len--; @@ -347,6 +349,11 @@ if ( !PyArg_ParseTuple(args, "t#:a2b_base64", &ascii_data, &ascii_len) ) return NULL; + assert(ascii_len >= 0); + + if (ascii_len > INT_MAX - 3) + return PyErr_NoMemory(); + bin_len = ((ascii_len+3)/4)*3; /* Upper bound, corrected later */ /* Allocate the buffer */ @@ -436,6 +443,9 @@ if ( !PyArg_ParseTuple(args, "s#:b2a_base64", &bin_data, &bin_len) ) return NULL; + + assert(bin_len >= 0); + if ( bin_len > BASE64_MAXBIN ) { PyErr_SetString(Error, "Too much data for base64 line"); return NULL; @@ -491,6 +501,11 @@ if ( !PyArg_ParseTuple(args, "t#:a2b_hqx", &ascii_data, &len) ) return NULL; + assert(len >= 0); + + if (len > INT_MAX - 2) + return PyErr_NoMemory(); + /* Allocate a string that is too big (fixed later) Add two to the initial length to prevent interning which would preclude subsequent resizing. */ @@ -554,6 +569,11 @@ if ( !PyArg_ParseTuple(args, "s#:rlecode_hqx", &in_data, &len) ) return NULL; + assert(len >= 0); + + if (len > INT_MAX / 2 - 2) + return PyErr_NoMemory(); + /* Worst case: output is twice as big as input (fixed later) */ if ( (rv=PyString_FromStringAndSize(NULL, len*2+2)) == NULL ) return NULL; @@ -603,6 +623,11 @@ if ( !PyArg_ParseTuple(args, "s#:b2a_hqx", &bin_data, &len) ) return NULL; + assert(len >= 0); + + if (len > INT_MAX / 2 - 2) + return PyErr_NoMemory(); + /* Allocate a buffer that is at least large enough */ if ( (rv=PyString_FromStringAndSize(NULL, len*2+2)) == NULL ) return NULL; @@ -641,9 +666,13 @@ if ( !PyArg_ParseTuple(args, "s#:rledecode_hqx", &in_data, &in_len) ) return NULL; + assert(in_len >= 0); + /* Empty string is a special case */ if ( in_len == 0 ) return Py_BuildValue("s", ""); + else if (in_len > INT_MAX / 2) + return PyErr_NoMemory(); /* Allocate a buffer of reasonable size. 
Resized when needed */ out_len = in_len*2; @@ -669,6 +698,7 @@ #define OUTBYTE(b) \ do { \ if ( --out_len_left < 0 ) { \ + if ( out_len > INT_MAX / 2) return PyErr_NoMemory(); \ _PyString_Resize(&rv, 2*out_len); \ if ( rv == NULL ) return NULL; \ out_data = (unsigned char *)PyString_AsString(rv) \ @@ -737,7 +767,7 @@ if ( !PyArg_ParseTuple(args, "s#i:crc_hqx", &bin_data, &len, &crc) ) return NULL; - while(len--) { + while(len-- > 0) { crc=((crc<<8)&0xff00)^crctab_hqx[((crc>>8)&0xff)^*bin_data++]; } @@ -881,7 +911,7 @@ /* only want the trailing 32 bits */ crc &= 0xFFFFFFFFUL; #endif - while (len--) + while (len-- > 0) crc = crc_32_tab[(crc ^ *bin_data++) & 0xffUL] ^ (crc >> 8); /* Note: (crc >> 8) MUST zero fill on left */ @@ -911,6 +941,10 @@ if (!PyArg_ParseTuple(args, "s#:b2a_hex", &argbuf, &arglen)) return NULL; + assert(arglen >= 0); + if (arglen > INT_MAX / 2) + return PyErr_NoMemory(); + retval = PyString_FromStringAndSize(NULL, arglen*2); if (!retval) return NULL; @@ -968,6 +1002,8 @@ if (!PyArg_ParseTuple(args, "s#:a2b_hex", &argbuf, &arglen)) return NULL; + assert(arglen >= 0); + /* XXX What should we do about strings with an odd length? Should * we add an implicit leading zero, or a trailing zero? For now, * raise an exception. Modified: python/branches/release24-maint/Modules/cPickle.c ============================================================================== --- python/branches/release24-maint/Modules/cPickle.c (original) +++ python/branches/release24-maint/Modules/cPickle.c Sun Mar 2 20:20:32 2008 @@ -3419,6 +3419,14 @@ if (self->read_func(self, &s, 4) < 0) return -1; l = calc_binint(s, 4); + if (l < 0) { + /* Corrupt or hostile pickle -- we never write one like + * this. + */ + PyErr_SetString(UnpicklingError, + "BINSTRING pickle has negative byte count"); + return -1; + } if (self->read_func(self, &s, l) < 0) return -1; @@ -3486,6 +3494,14 @@ if (self->read_func(self, &s, 4) < 0) return -1; l = calc_binint(s, 4); + if (l < 0) { + /* Corrupt or hostile pickle -- we never write one like + * this. 
+ */ + PyErr_SetString(UnpicklingError, + "BINUNICODE pickle has negative byte count"); + return -1; + } if (self->read_func(self, &s, l) < 0) return -1; Modified: python/branches/release24-maint/Modules/cStringIO.c ============================================================================== --- python/branches/release24-maint/Modules/cStringIO.c (original) +++ python/branches/release24-maint/Modules/cStringIO.c Sun Mar 2 20:20:32 2008 @@ -121,6 +121,7 @@ static PyObject * IO_cgetval(PyObject *self) { UNLESS (IO__opencheck(IOOOBJECT(self))) return NULL; + assert(IOOOBJECT(self)->pos >= 0); return PyString_FromStringAndSize(((IOobject*)self)->buf, ((IOobject*)self)->pos); } @@ -139,6 +140,7 @@ } else s=self->string_size; + assert(self->pos >= 0); return PyString_FromStringAndSize(self->buf, s); } @@ -158,6 +160,8 @@ int l; UNLESS (IO__opencheck(IOOOBJECT(self))) return -1; + assert(IOOOBJECT(self)->pos >= 0); + assert(IOOOBJECT(self)->string_size >= 0); l = ((IOobject*)self)->string_size - ((IOobject*)self)->pos; if (n < 0 || n > l) { n = l; @@ -197,6 +201,11 @@ *output=((IOobject*)self)->buf + ((IOobject*)self)->pos; l = n - ((IOobject*)self)->buf - ((IOobject*)self)->pos; + + assert(IOOOBJECT(self)->pos <= INT_MAX - l); + assert(IOOOBJECT(self)->pos >= 0); + assert(IOOOBJECT(self)->string_size >= 0); + ((IOobject*)self)->pos += l; return l; } @@ -215,6 +224,7 @@ n -= m; self->pos -= m; } + assert(IOOOBJECT(self)->pos >= 0); return PyString_FromStringAndSize(output, n); } @@ -277,6 +287,7 @@ UNLESS (IO__opencheck(self)) return NULL; + assert(self->pos >= 0); return PyInt_FromLong(self->pos); } Modified: python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c ============================================================================== --- python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c (original) +++ python/branches/release24-maint/Modules/cjkcodecs/multibytecodec.c Sun Mar 2 20:20:32 2008 @@ -100,12 +100,16 @@ static int expand_encodebuffer(MultibyteEncodeBuffer *buf, int esize) { - int orgpos, orgsize; + int orgpos, orgsize, incsize; orgpos = (int)((char*)buf->outbuf - PyString_AS_STRING(buf->outobj)); orgsize = PyString_GET_SIZE(buf->outobj); - if (_PyString_Resize(&buf->outobj, orgsize + ( - esize < (orgsize >> 1) ? (orgsize >> 1) | 1 : esize)) == -1) + incsize = (esize < (orgsize >> 1) ? 
(orgsize >> 1) | 1 : esize); + + if (orgsize > INT_MAX - incsize) + return -1; + + if (_PyString_Resize(&buf->outobj, orgsize + incsize) == -1) return -1; buf->outbuf = (unsigned char *)PyString_AS_STRING(buf->outobj) +orgpos; @@ -416,6 +420,12 @@ buf.excobj = NULL; buf.inbuf = buf.inbuf_top = *data; buf.inbuf_end = buf.inbuf_top + datalen; + + if (datalen > (INT_MAX - 16) / 2) { + PyErr_NoMemory(); + goto errorexit; + } + buf.outobj = PyString_FromStringAndSize(NULL, datalen * 2 + 16); if (buf.outobj == NULL) goto errorexit; @@ -725,6 +735,10 @@ PyObject *ctr; char *ctrdata; + if (PyString_GET_SIZE(cres) > INT_MAX - self->pendingsize) { + PyErr_NoMemory(); + goto errorexit; + } rsize = PyString_GET_SIZE(cres) + self->pendingsize; ctr = PyString_FromStringAndSize(NULL, rsize); if (ctr == NULL) Modified: python/branches/release24-maint/Modules/datetimemodule.c ============================================================================== --- python/branches/release24-maint/Modules/datetimemodule.c (original) +++ python/branches/release24-maint/Modules/datetimemodule.c Sun Mar 2 20:20:32 2008 @@ -1111,6 +1111,8 @@ char sign; int none; + assert(buflen >= 1); + offset = call_utcoffset(tzinfo, tzinfoarg, &none); if (offset == -1 && PyErr_Occurred()) return -1; @@ -1188,6 +1190,11 @@ * a new format. Since computing the replacements for those codes * is expensive, don't unless they're actually used. */ + if (PyString_Size(format) > INT_MAX - 1) { + PyErr_NoMemory(); + goto Done; + } + totalnew = PyString_Size(format) + 1; /* realistic if no %z/%Z */ newfmt = PyString_FromStringAndSize(NULL, totalnew); if (newfmt == NULL) goto Done; Modified: python/branches/release24-maint/Modules/rgbimgmodule.c ============================================================================== --- python/branches/release24-maint/Modules/rgbimgmodule.c (original) +++ python/branches/release24-maint/Modules/rgbimgmodule.c Sun Mar 2 20:20:32 2008 @@ -269,7 +269,7 @@ Py_Int32 *starttab = NULL, *lengthtab = NULL; FILE *inf = NULL; IMAGE image; - int y, z, tablen; + int y, z, tablen, new_size; int xsize, ysize, zsize; int bpp, rle, cur, badorder; int rlebuflen; @@ -301,9 +301,15 @@ zsize = image.zsize; if (rle) { tablen = ysize * zsize * sizeof(Py_Int32); + rlebuflen = (int) (1.05 * xsize +10); + if ((tablen / sizeof(Py_Int32)) != (ysize * zsize) || + rlebuflen < 0) { + PyErr_NoMemory(); + goto finally; + } + starttab = (Py_Int32 *)malloc(tablen); lengthtab = (Py_Int32 *)malloc(tablen); - rlebuflen = (int) (1.05 * xsize +10); rledat = (unsigned char *)malloc(rlebuflen); if (!starttab || !lengthtab || !rledat) { PyErr_NoMemory(); @@ -331,8 +337,14 @@ fseek(inf, 512 + 2 * tablen, SEEK_SET); cur = 512 + 2 * tablen; + new_size = xsize * ysize + TAGLEN; + if (new_size < 0 || (new_size * sizeof(Py_Int32)) < 0) { + PyErr_NoMemory(); + goto finally; + } + rv = PyString_FromStringAndSize((char *)NULL, - (xsize * ysize + TAGLEN) * sizeof(Py_Int32)); + new_size * sizeof(Py_Int32)); if (rv == NULL) goto finally; @@ -400,8 +412,14 @@ copybw((Py_Int32 *) base, xsize * ysize); } else { + new_size = xsize * ysize + TAGLEN; + if (new_size < 0 || (new_size * sizeof(Py_Int32)) < 0) { + PyErr_NoMemory(); + goto finally; + } + rv = PyString_FromStringAndSize((char *) 0, - (xsize*ysize+TAGLEN)*sizeof(Py_Int32)); + new_size*sizeof(Py_Int32)); if (rv == NULL) goto finally; @@ -590,10 +608,16 @@ return NULL; } tablen = ysize * zsize * sizeof(Py_Int32); + rlebuflen = (int) (1.05 * xsize + 10); + + if ((tablen / sizeof(Py_Int32)) != (ysize * zsize) 
|| + rlebuflen < 0 || (xsize * sizeof(Py_Int32)) < 0) { + PyErr_NoMemory(); + goto finally; + } starttab = (Py_Int32 *)malloc(tablen); lengthtab = (Py_Int32 *)malloc(tablen); - rlebuflen = (int) (1.05 * xsize + 10); rlebuf = (unsigned char *)malloc(rlebuflen); lumbuf = (unsigned char *)malloc(xsize * sizeof(Py_Int32)); if (!starttab || !lengthtab || !rlebuf || !lumbuf) { Modified: python/branches/release24-maint/Modules/stropmodule.c ============================================================================== --- python/branches/release24-maint/Modules/stropmodule.c (original) +++ python/branches/release24-maint/Modules/stropmodule.c Sun Mar 2 20:20:32 2008 @@ -576,7 +576,7 @@ char* e; char* p; char* q; - int i, j; + int i, j, old_j; PyObject* out; char* string; int stringlen; @@ -593,12 +593,18 @@ } /* First pass: determine size of output string */ - i = j = 0; /* j: current column; i: total of previous lines */ + i = j = old_j = 0; /* j: current column; i: total of previous lines */ e = string + stringlen; for (p = string; p < e; p++) { - if (*p == '\t') + if (*p == '\t') { j += tabsize - (j%tabsize); - else { + if (old_j > j) { + PyErr_SetString(PyExc_OverflowError, + "new string is too long"); + return NULL; + } + old_j = j; + } else { j++; if (*p == '\n') { i += j; @@ -607,6 +613,11 @@ } } + if ((i + j) < 0) { + PyErr_SetString(PyExc_OverflowError, "new string is too long"); + return NULL; + } + /* Second pass: create output string and fill it */ out = PyString_FromStringAndSize(NULL, i+j); if (out == NULL) Modified: python/branches/release24-maint/Objects/bufferobject.c ============================================================================== --- python/branches/release24-maint/Objects/bufferobject.c (original) +++ python/branches/release24-maint/Objects/bufferobject.c Sun Mar 2 20:20:32 2008 @@ -167,6 +167,10 @@ "size must be zero or positive"); return NULL; } + if (sizeof(*b) > INT_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } /* Inline PyObject_New */ o = PyObject_MALLOC(sizeof(*b) + size); if ( o == NULL ) @@ -355,6 +359,8 @@ if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) return NULL; + assert(count <= PY_SIZE_MAX - size); + ob = PyString_FromStringAndSize(NULL, size + count); p = PyString_AS_STRING(ob); memcpy(p, ptr1, size); Modified: python/branches/release24-maint/Objects/listobject.c ============================================================================== --- python/branches/release24-maint/Objects/listobject.c (original) +++ python/branches/release24-maint/Objects/listobject.c Sun Mar 2 20:20:32 2008 @@ -45,7 +45,16 @@ * system realloc(). * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... */ - new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6) + newsize; + new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6); + + /* check for integer overflow */ + if (new_allocated > PY_SIZE_MAX - newsize) { + PyErr_NoMemory(); + return -1; + } else { + new_allocated += newsize; + } + if (newsize == 0) new_allocated = 0; items = self->ob_item; @@ -92,8 +101,9 @@ return NULL; } nbytes = size * sizeof(PyObject *); - /* Check for overflow */ - if (nbytes / sizeof(PyObject *) != (size_t)size) + /* Check for overflow without an actual overflow, + * which can cause compiler to optimise out */ + if (size > PY_SIZE_MAX / sizeof(PyObject *)) return PyErr_NoMemory(); if (num_free_lists) { num_free_lists--; @@ -1372,6 +1382,10 @@ * we don't care what's in the block. 
*/ merge_freemem(ms); + if (need > INT_MAX / sizeof(PyObject*)) { + PyErr_NoMemory(); + return -1; + } ms->a = (PyObject **)PyMem_Malloc(need * sizeof(PyObject*)); if (ms->a) { ms->alloced = need; @@ -2550,6 +2564,8 @@ step = -step; } + assert(slicelength <= PY_SIZE_MAX / sizeof(PyObject*)); + garbage = (PyObject**) PyMem_MALLOC(slicelength*sizeof(PyObject*)); if (!garbage) { Modified: python/branches/release24-maint/Parser/node.c ============================================================================== --- python/branches/release24-maint/Parser/node.c (original) +++ python/branches/release24-maint/Parser/node.c Sun Mar 2 20:20:32 2008 @@ -91,6 +91,9 @@ if (current_capacity < 0 || required_capacity < 0) return E_OVERFLOW; if (current_capacity < required_capacity) { + if (required_capacity > PY_SIZE_MAX / sizeof(node)) { + return E_NOMEM; + } n = n1->n_child; n = (node *) PyObject_REALLOC(n, required_capacity * sizeof(node)); Modified: python/branches/release24-maint/Python/bltinmodule.c ============================================================================== --- python/branches/release24-maint/Python/bltinmodule.c (original) +++ python/branches/release24-maint/Python/bltinmodule.c Sun Mar 2 20:20:32 2008 @@ -2376,11 +2376,43 @@ PyString_AS_STRING(item)[0]; } else { /* do we need more space? */ - int need = j + reslen + len-i-1; + int need = j; + + /* calculate space requirements while checking for overflow */ + if (need > INT_MAX - reslen) { + Py_DECREF(item); + goto Fail_1; + } + + need += reslen; + + if (need > INT_MAX - len) { + Py_DECREF(item); + goto Fail_1; + } + + need += len; + + if (need <= i) { + Py_DECREF(item); + goto Fail_1; + } + + need = need - i - 1; + + assert(need >= 0); + assert(outlen >= 0); + if (need > outlen) { /* overallocate, to avoid reallocations */ - if (need<2*outlen) + if (outlen > INT_MAX / 2) { + Py_DECREF(item); + return NULL; + } + + if (need<2*outlen) { need = 2*outlen; + } if (_PyString_Resize(&result, need)) { Py_DECREF(item); return NULL; @@ -2472,11 +2504,31 @@ else { /* do we need more space? 
*/ int need = j + reslen + len - i - 1; + + /* check that didnt overflow */ + if ((j > INT_MAX - reslen) || + ((j + reslen) > INT_MAX - len) || + ((j + reslen + len) < i) || + ((j + reslen + len - i) <= 0)) { + Py_DECREF(item); + return NULL; + } + + assert(need >= 0); + assert(outlen >= 0); + if (need > outlen) { /* overallocate, to avoid reallocations */ - if (need < 2 * outlen) - need = 2 * outlen; + if (need < 2 * outlen) { + if (outlen > INT_MAX / 2) { + Py_DECREF(item); + return NULL; + } else { + need = 2 * outlen; + } + } + if (PyUnicode_Resize( &result, need) < 0) { Py_DECREF(item); From python-checkins at python.org Sun Mar 2 20:33:41 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 20:33:41 +0100 (CET) Subject: [Python-checkins] r61181 - in python/branches/release24-maint: Include/patchlevel.h LICENSE Lib/idlelib/NEWS.txt Lib/idlelib/idlever.py Misc/NEWS Misc/RPM/python-2.4.spec Python/getcopyright.c README Message-ID: <20080302193341.876351E4012@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 20:33:40 2008 New Revision: 61181 Modified: python/branches/release24-maint/Include/patchlevel.h python/branches/release24-maint/LICENSE python/branches/release24-maint/Lib/idlelib/NEWS.txt python/branches/release24-maint/Lib/idlelib/idlever.py python/branches/release24-maint/Misc/NEWS python/branches/release24-maint/Misc/RPM/python-2.4.spec python/branches/release24-maint/Python/getcopyright.c python/branches/release24-maint/README Log: Prepare for 2.4.5c1 Modified: python/branches/release24-maint/Include/patchlevel.h ============================================================================== --- python/branches/release24-maint/Include/patchlevel.h (original) +++ python/branches/release24-maint/Include/patchlevel.h Sun Mar 2 20:33:40 2008 @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 4 -#define PY_MICRO_VERSION 4 -#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL -#define PY_RELEASE_SERIAL 0 +#define PY_MICRO_VERSION 5 +#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_GAMMA +#define PY_RELEASE_SERIAL 1 /* Version as a string */ -#define PY_VERSION "2.4.4" +#define PY_VERSION "2.4.5c1" /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. Use this for numeric comparisons, e.g. #if PY_VERSION_HEX >= ... */ Modified: python/branches/release24-maint/LICENSE ============================================================================== --- python/branches/release24-maint/LICENSE (original) +++ python/branches/release24-maint/LICENSE Sun Mar 2 20:33:40 2008 @@ -53,6 +53,7 @@ 2.4.2 2.4.1 2005 PSF yes 2.4.3 2.4.2 2006 PSF yes 2.4.4 2.4.3 2006 PSF yes + 2.4.5 2.4.4 2008 PSF yes 2.5 2.4 2006 PSF yes Footnotes: @@ -86,12 +87,12 @@ 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, -prepare derivative works, distribute, and otherwise use Python -alone or in any derivative version, provided, however, that PSF's -License Agreement and PSF's notice of copyright, i.e., "Copyright (c) -2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation; All Rights -Reserved" are retained in Python alone or in any derivative version -prepared by Licensee. 
+prepare derivative works, distribute, and otherwise use Python alone +or in any derivative version, provided, however, that PSF's License +Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, +2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation; +All Rights Reserved" are retained in Python alone or in any derivative +version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make Modified: python/branches/release24-maint/Lib/idlelib/NEWS.txt ============================================================================== --- python/branches/release24-maint/Lib/idlelib/NEWS.txt (original) +++ python/branches/release24-maint/Lib/idlelib/NEWS.txt Sun Mar 2 20:33:40 2008 @@ -1,3 +1,8 @@ +What's New in IDLE 1.1.5c1? +========================= + +*Release date: 02-Mar-2006* + What's New in IDLE 1.1.4? ========================= Modified: python/branches/release24-maint/Lib/idlelib/idlever.py ============================================================================== --- python/branches/release24-maint/Lib/idlelib/idlever.py (original) +++ python/branches/release24-maint/Lib/idlelib/idlever.py Sun Mar 2 20:33:40 2008 @@ -1 +1 @@ -IDLE_VERSION = "1.1.4" +IDLE_VERSION = "1.1.5c1" Modified: python/branches/release24-maint/Misc/NEWS ============================================================================== --- python/branches/release24-maint/Misc/NEWS (original) +++ python/branches/release24-maint/Misc/NEWS Sun Mar 2 20:33:40 2008 @@ -7,7 +7,7 @@ What's New in Python 2.4.5c1? ============================= -*Release date: XX-XXX-XXXX* +*Release date: 20-Mar-2008* Core and builtins Modified: python/branches/release24-maint/Misc/RPM/python-2.4.spec ============================================================================== --- python/branches/release24-maint/Misc/RPM/python-2.4.spec (original) +++ python/branches/release24-maint/Misc/RPM/python-2.4.spec Sun Mar 2 20:33:40 2008 @@ -33,7 +33,7 @@ ################################# %define name python -%define version 2.4.4 +%define version 2.4.5 %define libvers 2.4 %define release 1pydotorg %define __prefix /usr Modified: python/branches/release24-maint/Python/getcopyright.c ============================================================================== --- python/branches/release24-maint/Python/getcopyright.c (original) +++ python/branches/release24-maint/Python/getcopyright.c Sun Mar 2 20:33:40 2008 @@ -4,7 +4,7 @@ static char cprt[] = "\ -Copyright (c) 2001-2006 Python Software Foundation.\n\ +Copyright (c) 2001-2008 Python Software Foundation.\n\ All Rights Reserved.\n\ \n\ Copyright (c) 2000 BeOpen.com.\n\ Modified: python/branches/release24-maint/README ============================================================================== --- python/branches/release24-maint/README (original) +++ python/branches/release24-maint/README Sun Mar 2 20:33:40 2008 @@ -1,7 +1,7 @@ -This is Python version 2.4.4 -============================ +This is Python version 2.4.5c1 +============================== -Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation. +Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation. All rights reserved. Copyright (c) 2000 BeOpen.com. 
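The patchlevel.h hunk in the commit above bumps the version string to "2.4.5c1"; the companion PY_VERSION_HEX value stays usable for numeric comparisons because the header packs major, minor, micro, release level and serial into one 4-byte number. A minimal sketch of that packing, expressed in Python (illustrative only, not part of the commit; the running interpreter exposes the same number as sys.hexversion):

    # Release-level nibbles as defined in patchlevel.h:
    # alpha = 0xA, beta = 0xB, candidate ("gamma") = 0xC, final = 0xF.
    RELEASE_LEVELS = {'alpha': 0xA, 'beta': 0xB, 'candidate': 0xC, 'final': 0xF}

    def version_hex(major, minor, micro, level, serial):
        """Pack version fields the way PY_VERSION_HEX does."""
        return ((major << 24) | (minor << 16) | (micro << 8) |
                (RELEASE_LEVELS[level] << 4) | serial)

    assert version_hex(1, 5, 2, 'beta', 2) == 0x010502B2       # the header's own example
    assert version_hex(2, 4, 5, 'candidate', 1) == 0x020405C1  # this release, 2.4.5c1
    # At run time: if sys.hexversion >= 0x020405C1: ...

Packing the fields this way keeps "#if PY_VERSION_HEX >= ..." (or the sys.hexversion test above) a single integer comparison.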
From python-checkins at python.org Sun Mar 2 20:34:25 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 20:34:25 +0100 (CET) Subject: [Python-checkins] r61182 - python/tags/r245c1 Message-ID: <20080302193425.A04EB1E4012@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 20:34:25 2008 New Revision: 61182 Added: python/tags/r245c1/ - copied from r61181, python/branches/release24-maint/ Log: Tagging for release of Python 2.4.5c1 From python-checkins at python.org Sun Mar 2 21:00:53 2008 From: python-checkins at python.org (gregory.p.smith) Date: Sun, 2 Mar 2008 21:00:53 +0100 (CET) Subject: [Python-checkins] r61183 - python/trunk/Lib/bsddb/test/test_associate.py python/trunk/Lib/bsddb/test/test_basics.py python/trunk/Lib/bsddb/test/test_compare.py python/trunk/Lib/bsddb/test/test_cursor_pget_bug.py python/trunk/Lib/bsddb/test/test_dbobj.py python/trunk/Lib/bsddb/test/test_dbshelve.py python/trunk/Lib/bsddb/test/test_dbtables.py python/trunk/Lib/bsddb/test/test_env_close.py python/trunk/Lib/bsddb/test/test_join.py python/trunk/Lib/bsddb/test/test_lock.py python/trunk/Lib/bsddb/test/test_misc.py python/trunk/Lib/bsddb/test/test_pickle.py python/trunk/Lib/bsddb/test/test_recno.py python/trunk/Lib/bsddb/test/test_sequence.py python/trunk/Lib/bsddb/test/test_thread.py Message-ID: <20080302200053.C652F1E4026@bag.python.org> Author: gregory.p.smith Date: Sun Mar 2 21:00:53 2008 New Revision: 61183 Modified: python/trunk/Lib/bsddb/test/test_associate.py python/trunk/Lib/bsddb/test/test_basics.py python/trunk/Lib/bsddb/test/test_compare.py python/trunk/Lib/bsddb/test/test_cursor_pget_bug.py python/trunk/Lib/bsddb/test/test_dbobj.py python/trunk/Lib/bsddb/test/test_dbshelve.py python/trunk/Lib/bsddb/test/test_dbtables.py python/trunk/Lib/bsddb/test/test_env_close.py python/trunk/Lib/bsddb/test/test_join.py python/trunk/Lib/bsddb/test/test_lock.py python/trunk/Lib/bsddb/test/test_misc.py python/trunk/Lib/bsddb/test/test_pickle.py python/trunk/Lib/bsddb/test/test_recno.py python/trunk/Lib/bsddb/test/test_sequence.py python/trunk/Lib/bsddb/test/test_thread.py Log: Modify import of test_support so that the code can also be used with a stand alone distribution of bsddb that includes its own small copy of test_support for the needed functionality on older pythons. 
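The change applied to each test module below boils down to the same few lines; a condensed sketch of the pattern (module names are the ones used in the hunks that follow):

    # Prefer the copy of test_support bundled with the standalone bsddb3
    # distribution; fall back to the stdlib one when running inside CPython.
    try:
        from bsddb3 import test_support
    except ImportError:
        from test import test_support

    # ...later, e.g. in tearDown():
    #     test_support.rmtree(self.homeDir)

Doing the import once at module level also lets the repeated local "from test import test_support" lines inside the individual tearDown() methods go away, as the hunks show.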
Modified: python/trunk/Lib/bsddb/test/test_associate.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_associate.py (original) +++ python/trunk/Lib/bsddb/test/test_associate.py Sun Mar 2 21:00:53 2008 @@ -23,6 +23,11 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -106,7 +111,6 @@ def tearDown(self): self.env.close() self.env = None - from test import test_support test_support.rmtree(self.homeDir) def test00_associateDBError(self): Modified: python/trunk/Lib/bsddb/test/test_basics.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_basics.py (original) +++ python/trunk/Lib/bsddb/test/test_basics.py Sun Mar 2 21:00:53 2008 @@ -8,7 +8,6 @@ import string import tempfile from pprint import pprint -from test import test_support import unittest import time @@ -19,6 +18,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + from test_all import verbose DASH = '-' Modified: python/trunk/Lib/bsddb/test/test_compare.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_compare.py (original) +++ python/trunk/Lib/bsddb/test/test_compare.py Sun Mar 2 21:00:53 2008 @@ -15,6 +15,11 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + lexical_cmp = cmp def lowercase_cmp(left, right): @@ -70,7 +75,6 @@ if self.env is not None: self.env.close () self.env = None - from test import test_support test_support.rmtree(self.homeDir) def addDataToDB (self, data): Modified: python/trunk/Lib/bsddb/test/test_cursor_pget_bug.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_cursor_pget_bug.py (original) +++ python/trunk/Lib/bsddb/test/test_cursor_pget_bug.py Sun Mar 2 21:00:53 2008 @@ -9,6 +9,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -42,7 +47,6 @@ del self.secondary_db del self.primary_db del self.env - from test import test_support test_support.rmtree(self.homeDir) def test_pget(self): Modified: python/trunk/Lib/bsddb/test/test_dbobj.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_dbobj.py (original) +++ python/trunk/Lib/bsddb/test/test_dbobj.py Sun Mar 2 21:00:53 2008 @@ -10,6 +10,11 @@ # For Python 2.3 from bsddb import db, dbobj +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -29,7 +34,6 @@ del self.db if hasattr(self, 'env'): del self.env - from test import test_support test_support.rmtree(self.homeDir) def test01_both(self): Modified: python/trunk/Lib/bsddb/test/test_dbshelve.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_dbshelve.py (original) +++ python/trunk/Lib/bsddb/test/test_dbshelve.py Sun Mar 2 21:00:53 2008 @@ -14,6 +14,11 @@ # For Python 2.3 from bsddb import db, 
dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + from test_all import verbose @@ -262,7 +267,6 @@ def tearDown(self): - from test import test_support test_support.rmtree(self.homeDir) self.do_close() Modified: python/trunk/Lib/bsddb/test/test_dbtables.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_dbtables.py (original) +++ python/trunk/Lib/bsddb/test/test_dbtables.py Sun Mar 2 21:00:53 2008 @@ -39,6 +39,10 @@ # For Python 2.3 from bsddb import db, dbtables +try: + from bsddb3 import test_support +except ImportError: + from test import test_support #---------------------------------------------------------------------- @@ -57,7 +61,6 @@ def tearDown(self): self.tdb.close() - from test import test_support test_support.rmtree(self.testHomeDir) def test01(self): Modified: python/trunk/Lib/bsddb/test/test_env_close.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_env_close.py (original) +++ python/trunk/Lib/bsddb/test/test_env_close.py Sun Mar 2 21:00:53 2008 @@ -13,6 +13,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + from test_all import verbose # We're going to get warnings in this module about trying to close the db when @@ -39,7 +44,6 @@ tempfile.tempdir = None def tearDown(self): - from test import test_support test_support.rmtree(self.homeDir) def test01_close_dbenv_before_db(self): Modified: python/trunk/Lib/bsddb/test/test_join.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_join.py (original) +++ python/trunk/Lib/bsddb/test/test_join.py Sun Mar 2 21:00:53 2008 @@ -20,6 +20,10 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support #---------------------------------------------------------------------- @@ -56,7 +60,6 @@ def tearDown(self): self.env.close() - from test import test_support test_support.rmtree(self.homeDir) def test01_join(self): Modified: python/trunk/Lib/bsddb/test/test_lock.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_lock.py (original) +++ python/trunk/Lib/bsddb/test/test_lock.py Sun Mar 2 21:00:53 2008 @@ -22,6 +22,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -36,7 +41,6 @@ def tearDown(self): self.env.close() - from test import test_support test_support.rmtree(self.homeDir) Modified: python/trunk/Lib/bsddb/test/test_misc.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_misc.py (original) +++ python/trunk/Lib/bsddb/test/test_misc.py Sun Mar 2 21:00:53 2008 @@ -12,6 +12,11 @@ # For Python 2.3 from bsddb import db, dbshelve, hashopen +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- class MiscTestCase(unittest.TestCase): @@ -25,7 +30,6 @@ pass def tearDown(self): - from test import test_support test_support.unlink(self.filename) test_support.rmtree(self.homeDir) Modified: 
python/trunk/Lib/bsddb/test/test_pickle.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_pickle.py (original) +++ python/trunk/Lib/bsddb/test/test_pickle.py Sun Mar 2 21:00:53 2008 @@ -15,6 +15,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -34,7 +39,6 @@ del self.db if hasattr(self, 'env'): del self.env - from test import test_support test_support.rmtree(self.homeDir) def _base_test_pickle_DBError(self, pickle): Modified: python/trunk/Lib/bsddb/test/test_recno.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_recno.py (original) +++ python/trunk/Lib/bsddb/test/test_recno.py Sun Mar 2 21:00:53 2008 @@ -16,6 +16,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' @@ -27,7 +32,6 @@ self.homeDir = None def tearDown(self): - from test import test_support test_support.unlink(self.filename) if self.homeDir: test_support.rmtree(self.homeDir) Modified: python/trunk/Lib/bsddb/test/test_sequence.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_sequence.py (original) +++ python/trunk/Lib/bsddb/test/test_sequence.py Sun Mar 2 21:00:53 2008 @@ -8,6 +8,11 @@ except ImportError: from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + class DBSequenceTest(unittest.TestCase): def setUp(self): @@ -37,7 +42,6 @@ self.dbenv.close() del self.dbenv - from test import test_support test_support.rmtree(self.homeDir) def test_get(self): Modified: python/trunk/Lib/bsddb/test/test_thread.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_thread.py (original) +++ python/trunk/Lib/bsddb/test/test_thread.py Sun Mar 2 21:00:53 2008 @@ -38,6 +38,11 @@ # For Python 2.3 from bsddb import db, dbutils +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -68,7 +73,6 @@ self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE) def tearDown(self): - from test import test_support test_support.rmtree(self.homeDir) self.d.close() self.env.close() From python-checkins at python.org Sun Mar 2 21:32:58 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 21:32:58 +0100 (CET) Subject: [Python-checkins] r61184 - in python/branches/release23-maint: Include/pymem.h Include/pyport.h Misc/NEWS Modules/_csv.c Modules/arraymodule.c Modules/audioop.c Modules/binascii.c Modules/cPickle.c Modules/cStringIO.c Modules/datetimemodule.c Modules/rgbimgmodule.c Modules/stropmodule.c Objects/bufferobject.c Objects/listobject.c Parser/node.c Python/bltinmodule.c Message-ID: <20080302203258.84D941E4012@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 21:32:57 2008 New Revision: 61184 Modified: python/branches/release23-maint/Include/pymem.h python/branches/release23-maint/Include/pyport.h python/branches/release23-maint/Misc/NEWS python/branches/release23-maint/Modules/_csv.c python/branches/release23-maint/Modules/arraymodule.c 
python/branches/release23-maint/Modules/audioop.c python/branches/release23-maint/Modules/binascii.c python/branches/release23-maint/Modules/cPickle.c python/branches/release23-maint/Modules/cStringIO.c python/branches/release23-maint/Modules/datetimemodule.c python/branches/release23-maint/Modules/rgbimgmodule.c python/branches/release23-maint/Modules/stropmodule.c python/branches/release23-maint/Objects/bufferobject.c python/branches/release23-maint/Objects/listobject.c python/branches/release23-maint/Parser/node.c python/branches/release23-maint/Python/bltinmodule.c Log: Backport of r61180: Added checks for integer overflows, contributed by Google. Some are only available if asserts are left in the code, in cases where they can't be triggered from Python code. Modified: python/branches/release23-maint/Include/pymem.h ============================================================================== --- python/branches/release23-maint/Include/pymem.h (original) +++ python/branches/release23-maint/Include/pymem.h Sun Mar 2 21:32:57 2008 @@ -86,14 +86,18 @@ */ #define PyMem_New(type, n) \ - ( (type *) PyMem_Malloc((n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (type *) PyMem_Malloc((n) * sizeof(type)) ) ) #define PyMem_NEW(type, n) \ - ( (type *) PyMem_MALLOC((n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (type *) PyMem_MALLOC((n) * sizeof(type)) ) ) #define PyMem_Resize(p, type, n) \ - ( (p) = (type *) PyMem_Realloc((p), (n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (p) = (type *) PyMem_Realloc((p), (n) * sizeof(type)) ) ) #define PyMem_RESIZE(p, type, n) \ - ( (p) = (type *) PyMem_REALLOC((p), (n) * sizeof(type)) ) + ( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \ + ( (p) = (type *) PyMem_REALLOC((p), (n) * sizeof(type)) ) ) /* In order to avoid breaking old code mixing PyObject_{New, NEW} with PyMem_{Del, DEL} and PyMem_{Free, FREE}, the PyMem "release memory" Modified: python/branches/release23-maint/Include/pyport.h ============================================================================== --- python/branches/release23-maint/Include/pyport.h (original) +++ python/branches/release23-maint/Include/pyport.h Sun Mar 2 21:32:57 2008 @@ -554,6 +554,17 @@ #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." #endif +/* Largest possible value of size_t. + SIZE_MAX is part of C99, so it might be defined on some + platforms. If it is not defined, (size_t)-1 is a portable + definition for C89, due to the way signed->unsigned + conversion is defined. */ +#ifdef SIZE_MAX +#define PY_SIZE_MAX SIZE_MAX +#else +#define PY_SIZE_MAX ((size_t)-1) +#endif + #ifdef __cplusplus } #endif Modified: python/branches/release23-maint/Misc/NEWS ============================================================================== --- python/branches/release23-maint/Misc/NEWS (original) +++ python/branches/release23-maint/Misc/NEWS Sun Mar 2 21:32:57 2008 @@ -4,6 +4,19 @@ (editors: check NEWS.help for information about editing NEWS using ReST.) +What's New in Python 2.3.7c1? +=========================== + +*Release date: 02-Mar-2008* + +Core and builtins +----------------- + +- Added checks for integer overflows, contributed by Google. Some are + only available if asserts are left in the code, in cases where they + can't be triggered from Python code. + + What's New in Python 2.3.6? 
=========================== Modified: python/branches/release23-maint/Modules/_csv.c ============================================================================== --- python/branches/release23-maint/Modules/_csv.c (original) +++ python/branches/release23-maint/Modules/_csv.c Sun Mar 2 21:32:57 2008 @@ -470,6 +470,10 @@ self->field = PyMem_Malloc(self->field_size); } else { + if (self->field_size > INT_MAX / 2) { + PyErr_NoMemory(); + return 0; + } self->field_size *= 2; self->field = PyMem_Realloc(self->field, self->field_size); } @@ -1003,6 +1007,12 @@ static int join_check_rec_size(WriterObj *self, int rec_len) { + + if (rec_len < 0 || rec_len > INT_MAX - MEM_INCR) { + PyErr_NoMemory(); + return 0; + } + if (rec_len > self->rec_size) { if (self->rec_size == 0) { self->rec_size = (rec_len / MEM_INCR + 1) * MEM_INCR; Modified: python/branches/release23-maint/Modules/arraymodule.c ============================================================================== --- python/branches/release23-maint/Modules/arraymodule.c (original) +++ python/branches/release23-maint/Modules/arraymodule.c Sun Mar 2 21:32:57 2008 @@ -632,6 +632,9 @@ PyErr_BadArgument(); return NULL; } + if (a->ob_size > INT_MAX - b->ob_size) { + return PyErr_NoMemory(); + } size = a->ob_size + b->ob_size; np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); if (np == NULL) { @@ -654,6 +657,9 @@ int nbytes; if (n < 0) n = 0; + if ((a->ob_size != 0) && (n > INT_MAX / a->ob_size)) { + return PyErr_NoMemory(); + } size = a->ob_size * n; np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); if (np == NULL) @@ -775,6 +781,11 @@ "can only extend with array of same kind"); return -1; } + if ((self->ob_size > INT_MAX - b->ob_size) || + ((self->ob_size + b->ob_size) > INT_MAX / self->ob_descr->itemsize)) { + PyErr_NoMemory(); + return -1; + } size = self->ob_size + b->ob_size; PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); if (self->ob_item == NULL) { @@ -809,6 +820,10 @@ if (n < 0) n = 0; items = self->ob_item; + if ((self->ob_descr->itemsize != 0) && + (self->ob_size > INT_MAX / self->ob_descr->itemsize)) { + return PyErr_NoMemory(); + } size = self->ob_size * self->ob_descr->itemsize; if (n == 0) { PyMem_FREE(items); @@ -816,6 +831,9 @@ self->ob_size = 0; } else { + if (size > INT_MAX / n) { + return PyErr_NoMemory(); + } PyMem_Resize(items, char, n * size); if (items == NULL) return PyErr_NoMemory(); @@ -1224,6 +1242,9 @@ if ((*self->ob_descr->setitem)(self, self->ob_size - n + i, v) != 0) { self->ob_size -= n; + if (itemsize && (self->ob_size > INT_MAX / itemsize)) { + return PyErr_NoMemory(); + } PyMem_RESIZE(item, char, self->ob_size * itemsize); self->ob_item = item; @@ -1282,6 +1303,10 @@ n = n / itemsize; if (n > 0) { char *item = self->ob_item; + if ((n > INT_MAX - self->ob_size) || + ((self->ob_size + n) > INT_MAX / itemsize)) { + return PyErr_NoMemory(); + } PyMem_RESIZE(item, char, (self->ob_size + n) * itemsize); if (item == NULL) { PyErr_NoMemory(); @@ -1306,8 +1331,12 @@ static PyObject * array_tostring(arrayobject *self, PyObject *unused) { - return PyString_FromStringAndSize(self->ob_item, + if (self->ob_size <= INT_MAX / self->ob_descr->itemsize) { + return PyString_FromStringAndSize(self->ob_item, self->ob_size * self->ob_descr->itemsize); + } else { + return PyErr_NoMemory(); + } } PyDoc_STRVAR(tostring_doc, @@ -1335,6 +1364,9 @@ } if (n > 0) { Py_UNICODE *item = (Py_UNICODE *) self->ob_item; + if (self->ob_size > INT_MAX - n) { + return PyErr_NoMemory(); + } 
PyMem_RESIZE(item, Py_UNICODE, self->ob_size + n); if (item == NULL) { PyErr_NoMemory(); Modified: python/branches/release23-maint/Modules/audioop.c ============================================================================== --- python/branches/release23-maint/Modules/audioop.c (original) +++ python/branches/release23-maint/Modules/audioop.c Sun Mar 2 21:32:57 2008 @@ -674,7 +674,7 @@ audioop_tostereo(PyObject *self, PyObject *args) { signed char *cp, *ncp; - int len, size, val1, val2, val = 0; + int len, new_len, size, val1, val2, val = 0; double fac1, fac2, fval, maxval; PyObject *rv; int i; @@ -690,7 +690,14 @@ return 0; } - rv = PyString_FromStringAndSize(NULL, len*2); + new_len = len*2; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + + rv = PyString_FromStringAndSize(NULL, new_len); if ( rv == 0 ) return 0; ncp = (signed char *)PyString_AsString(rv); @@ -853,7 +860,7 @@ { signed char *cp; unsigned char *ncp; - int len, size, size2, val = 0; + int len, new_len, size, size2, val = 0; PyObject *rv; int i, j; @@ -867,7 +874,13 @@ return 0; } - rv = PyString_FromStringAndSize(NULL, (len/size)*size2); + new_len = (len/size)*size2; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + rv = PyString_FromStringAndSize(NULL, new_len); if ( rv == 0 ) return 0; ncp = (unsigned char *)PyString_AsString(rv); @@ -903,6 +916,7 @@ int chan, d, *prev_i, *cur_i, cur_o; PyObject *state, *samps, *str, *rv = NULL; int bytes_per_frame; + size_t alloc_size; weightA = 1; weightB = 0; @@ -944,8 +958,14 @@ inrate /= d; outrate /= d; - prev_i = (int *) malloc(nchannels * sizeof(int)); - cur_i = (int *) malloc(nchannels * sizeof(int)); + alloc_size = sizeof(int) * (unsigned)nchannels; + if (alloc_size < nchannels) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + prev_i = (int *) malloc(alloc_size); + cur_i = (int *) malloc(alloc_size); if (prev_i == NULL || cur_i == NULL) { (void) PyErr_NoMemory(); goto exit; @@ -1114,7 +1134,7 @@ unsigned char *cp; unsigned char cval; signed char *ncp; - int len, size, val; + int len, new_len, size, val; PyObject *rv; int i; @@ -1127,12 +1147,18 @@ return 0; } - rv = PyString_FromStringAndSize(NULL, len*size); + new_len = len*size; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + rv = PyString_FromStringAndSize(NULL, new_len); if ( rv == 0 ) return 0; ncp = (signed char *)PyString_AsString(rv); - for ( i=0; i < len*size; i += size ) { + for ( i=0; i < new_len; i += size ) { cval = *cp++; val = st_ulaw_to_linear(cval); @@ -1257,7 +1283,7 @@ { signed char *cp; signed char *ncp; - int len, size, valpred, step, delta, index, sign, vpdiff; + int len, new_len, size, valpred, step, delta, index, sign, vpdiff; PyObject *rv, *str, *state; int i, inputbuffer = 0, bufferstep; @@ -1279,7 +1305,13 @@ } else if ( !PyArg_Parse(state, "(ii)", &valpred, &index) ) return 0; - str = PyString_FromStringAndSize(NULL, len*size*2); + new_len = len*size*2; + if (new_len < 0) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + return 0; + } + str = PyString_FromStringAndSize(NULL, new_len); if ( str == 0 ) return 0; ncp = (signed char *)PyString_AsString(str); @@ -1287,7 +1319,7 @@ step = stepsizeTable[index]; bufferstep = 0; - for ( i=0; i < len*size*2; i += size ) { + for ( i=0; i < new_len; i += size ) { /* Step 1 - 
get the delta value and compute next index */ if ( bufferstep ) { delta = inputbuffer & 0xf; Modified: python/branches/release23-maint/Modules/binascii.c ============================================================================== --- python/branches/release23-maint/Modules/binascii.c (original) +++ python/branches/release23-maint/Modules/binascii.c Sun Mar 2 21:32:57 2008 @@ -194,6 +194,8 @@ if ( !PyArg_ParseTuple(args, "t#:a2b_uu", &ascii_data, &ascii_len) ) return NULL; + assert(ascii_len >= 0); + /* First byte: binary data length (in bytes) */ bin_len = (*ascii_data++ - ' ') & 077; ascii_len--; @@ -346,6 +348,11 @@ if ( !PyArg_ParseTuple(args, "t#:a2b_base64", &ascii_data, &ascii_len) ) return NULL; + assert(ascii_len >= 0); + + if (ascii_len > INT_MAX - 3) + return PyErr_NoMemory(); + bin_len = ((ascii_len+3)/4)*3; /* Upper bound, corrected later */ /* Allocate the buffer */ @@ -435,6 +442,9 @@ if ( !PyArg_ParseTuple(args, "s#:b2a_base64", &bin_data, &bin_len) ) return NULL; + + assert(bin_len >= 0); + if ( bin_len > BASE64_MAXBIN ) { PyErr_SetString(Error, "Too much data for base64 line"); return NULL; @@ -490,6 +500,11 @@ if ( !PyArg_ParseTuple(args, "t#:a2b_hqx", &ascii_data, &len) ) return NULL; + assert(len >= 0); + + if (len > INT_MAX - 2) + return PyErr_NoMemory(); + /* Allocate a string that is too big (fixed later) */ if ( (rv=PyString_FromStringAndSize(NULL, len)) == NULL ) return NULL; @@ -551,6 +566,11 @@ if ( !PyArg_ParseTuple(args, "s#:rlecode_hqx", &in_data, &len) ) return NULL; + assert(len >= 0); + + if (len > INT_MAX / 2 - 2) + return PyErr_NoMemory(); + /* Worst case: output is twice as big as input (fixed later) */ if ( (rv=PyString_FromStringAndSize(NULL, len*2)) == NULL ) return NULL; @@ -600,6 +620,11 @@ if ( !PyArg_ParseTuple(args, "s#:b2a_hqx", &bin_data, &len) ) return NULL; + assert(len >= 0); + + if (len > INT_MAX / 2 - 2) + return PyErr_NoMemory(); + /* Allocate a buffer that is at least large enough */ if ( (rv=PyString_FromStringAndSize(NULL, len*2)) == NULL ) return NULL; @@ -638,9 +663,13 @@ if ( !PyArg_ParseTuple(args, "s#:rledecode_hqx", &in_data, &in_len) ) return NULL; + assert(in_len >= 0); + /* Empty string is a special case */ if ( in_len == 0 ) return Py_BuildValue("s", ""); + else if (in_len > INT_MAX / 2) + return PyErr_NoMemory(); /* Allocate a buffer of reasonable size. Resized when needed */ out_len = in_len*2; @@ -666,6 +695,7 @@ #define OUTBYTE(b) \ do { \ if ( --out_len_left < 0 ) { \ + if ( out_len > INT_MAX / 2) return PyErr_NoMemory(); \ _PyString_Resize(&rv, 2*out_len); \ if ( rv == NULL ) return NULL; \ out_data = (unsigned char *)PyString_AsString(rv) \ @@ -734,7 +764,7 @@ if ( !PyArg_ParseTuple(args, "s#i:crc_hqx", &bin_data, &len, &crc) ) return NULL; - while(len--) { + while(len-- > 0) { crc=((crc<<8)&0xff00)^crctab_hqx[((crc>>8)&0xff)^*bin_data++]; } @@ -878,7 +908,7 @@ /* only want the trailing 32 bits */ crc &= 0xFFFFFFFFUL; #endif - while (len--) + while (len-- > 0) crc = crc_32_tab[(crc ^ *bin_data++) & 0xffUL] ^ (crc >> 8); /* Note: (crc >> 8) MUST zero fill on left */ @@ -908,6 +938,10 @@ if (!PyArg_ParseTuple(args, "t#:b2a_hex", &argbuf, &arglen)) return NULL; + assert(arglen >= 0); + if (arglen > INT_MAX / 2) + return PyErr_NoMemory(); + retval = PyString_FromStringAndSize(NULL, arglen*2); if (!retval) return NULL; @@ -965,6 +999,8 @@ if (!PyArg_ParseTuple(args, "s#:a2b_hex", &argbuf, &arglen)) return NULL; + assert(arglen >= 0); + /* XXX What should we do about strings with an odd length? 
Should * we add an implicit leading zero, or a trailing zero? For now, * raise an exception. Modified: python/branches/release23-maint/Modules/cPickle.c ============================================================================== --- python/branches/release23-maint/Modules/cPickle.c (original) +++ python/branches/release23-maint/Modules/cPickle.c Sun Mar 2 21:32:57 2008 @@ -3409,6 +3409,14 @@ if (self->read_func(self, &s, 4) < 0) return -1; l = calc_binint(s, 4); + if (l < 0) { + /* Corrupt or hostile pickle -- we never write one like + * this. + */ + PyErr_SetString(UnpicklingError, + "BINSTRING pickle has negative byte count"); + return -1; + } if (self->read_func(self, &s, l) < 0) return -1; @@ -3476,6 +3484,14 @@ if (self->read_func(self, &s, 4) < 0) return -1; l = calc_binint(s, 4); + if (l < 0) { + /* Corrupt or hostile pickle -- we never write one like + * this. + */ + PyErr_SetString(UnpicklingError, + "BINUNICODE pickle has negative byte count"); + return -1; + } if (self->read_func(self, &s, l) < 0) return -1; Modified: python/branches/release23-maint/Modules/cStringIO.c ============================================================================== --- python/branches/release23-maint/Modules/cStringIO.c (original) +++ python/branches/release23-maint/Modules/cStringIO.c Sun Mar 2 21:32:57 2008 @@ -121,6 +121,7 @@ static PyObject * IO_cgetval(PyObject *self) { UNLESS (IO__opencheck(IOOOBJECT(self))) return NULL; + assert(IOOOBJECT(self)->pos >= 0); return PyString_FromStringAndSize(((IOobject*)self)->buf, ((IOobject*)self)->pos); } @@ -139,6 +140,7 @@ } else s=self->string_size; + assert(self->pos >= 0); return PyString_FromStringAndSize(self->buf, s); } @@ -158,6 +160,8 @@ int l; UNLESS (IO__opencheck(IOOOBJECT(self))) return -1; + assert(IOOOBJECT(self)->pos >= 0); + assert(IOOOBJECT(self)->string_size >= 0); l = ((IOobject*)self)->string_size - ((IOobject*)self)->pos; if (n < 0 || n > l) { n = l; @@ -197,6 +201,11 @@ *output=((IOobject*)self)->buf + ((IOobject*)self)->pos; l = n - ((IOobject*)self)->buf - ((IOobject*)self)->pos; + + assert(IOOOBJECT(self)->pos <= INT_MAX - l); + assert(IOOOBJECT(self)->pos >= 0); + assert(IOOOBJECT(self)->string_size >= 0); + ((IOobject*)self)->pos += l; return l; } @@ -215,6 +224,7 @@ n -= m; self->pos -= m; } + assert(IOOOBJECT(self)->pos >= 0); return PyString_FromStringAndSize(output, n); } @@ -274,6 +284,7 @@ UNLESS (IO__opencheck(self)) return NULL; + assert(self->pos >= 0); return PyInt_FromLong(self->pos); } Modified: python/branches/release23-maint/Modules/datetimemodule.c ============================================================================== --- python/branches/release23-maint/Modules/datetimemodule.c (original) +++ python/branches/release23-maint/Modules/datetimemodule.c Sun Mar 2 21:32:57 2008 @@ -1098,6 +1098,8 @@ char sign; int none; + assert(buflen >= 1); + offset = call_utcoffset(tzinfo, tzinfoarg, &none); if (offset == -1 && PyErr_Occurred()) return -1; @@ -1175,6 +1177,11 @@ * a new format. Since computing the replacements for those codes * is expensive, don't unless they're actually used. 
*/ + if (PyString_Size(format) > INT_MAX - 1) { + PyErr_NoMemory(); + goto Done; + } + totalnew = PyString_Size(format) + 1; /* realistic if no %z/%Z */ newfmt = PyString_FromStringAndSize(NULL, totalnew); if (newfmt == NULL) goto Done; Modified: python/branches/release23-maint/Modules/rgbimgmodule.c ============================================================================== --- python/branches/release23-maint/Modules/rgbimgmodule.c (original) +++ python/branches/release23-maint/Modules/rgbimgmodule.c Sun Mar 2 21:32:57 2008 @@ -269,7 +269,7 @@ Py_Int32 *starttab = NULL, *lengthtab = NULL; FILE *inf = NULL; IMAGE image; - int y, z, tablen; + int y, z, tablen, new_size; int xsize, ysize, zsize; int bpp, rle, cur, badorder; int rlebuflen; @@ -301,9 +301,15 @@ zsize = image.zsize; if (rle) { tablen = ysize * zsize * sizeof(Py_Int32); + rlebuflen = (int) (1.05 * xsize +10); + if ((tablen / sizeof(Py_Int32)) != (ysize * zsize) || + rlebuflen < 0) { + PyErr_NoMemory(); + goto finally; + } + starttab = (Py_Int32 *)malloc(tablen); lengthtab = (Py_Int32 *)malloc(tablen); - rlebuflen = (int) (1.05 * xsize +10); rledat = (unsigned char *)malloc(rlebuflen); if (!starttab || !lengthtab || !rledat) { PyErr_NoMemory(); @@ -331,8 +337,14 @@ fseek(inf, 512 + 2 * tablen, SEEK_SET); cur = 512 + 2 * tablen; + new_size = xsize * ysize + TAGLEN; + if (new_size < 0 || (new_size * sizeof(Py_Int32)) < 0) { + PyErr_NoMemory(); + goto finally; + } + rv = PyString_FromStringAndSize((char *)NULL, - (xsize * ysize + TAGLEN) * sizeof(Py_Int32)); + new_size * sizeof(Py_Int32)); if (rv == NULL) goto finally; @@ -400,8 +412,14 @@ copybw((Py_Int32 *) base, xsize * ysize); } else { + new_size = xsize * ysize + TAGLEN; + if (new_size < 0 || (new_size * sizeof(Py_Int32)) < 0) { + PyErr_NoMemory(); + goto finally; + } + rv = PyString_FromStringAndSize((char *) 0, - (xsize*ysize+TAGLEN)*sizeof(Py_Int32)); + new_size*sizeof(Py_Int32)); if (rv == NULL) goto finally; @@ -581,10 +599,16 @@ return NULL; } tablen = ysize * zsize * sizeof(Py_Int32); + rlebuflen = (int) (1.05 * xsize + 10); + + if ((tablen / sizeof(Py_Int32)) != (ysize * zsize) || + rlebuflen < 0 || (xsize * sizeof(Py_Int32)) < 0) { + PyErr_NoMemory(); + goto finally; + } starttab = (Py_Int32 *)malloc(tablen); lengthtab = (Py_Int32 *)malloc(tablen); - rlebuflen = (int) (1.05 * xsize + 10); rlebuf = (unsigned char *)malloc(rlebuflen); lumbuf = (unsigned char *)malloc(xsize * sizeof(Py_Int32)); if (!starttab || !lengthtab || !rlebuf || !lumbuf) { Modified: python/branches/release23-maint/Modules/stropmodule.c ============================================================================== --- python/branches/release23-maint/Modules/stropmodule.c (original) +++ python/branches/release23-maint/Modules/stropmodule.c Sun Mar 2 21:32:57 2008 @@ -576,7 +576,7 @@ char* e; char* p; char* q; - int i, j; + int i, j, old_j; PyObject* out; char* string; int stringlen; @@ -593,12 +593,18 @@ } /* First pass: determine size of output string */ - i = j = 0; /* j: current column; i: total of previous lines */ + i = j = old_j = 0; /* j: current column; i: total of previous lines */ e = string + stringlen; for (p = string; p < e; p++) { - if (*p == '\t') + if (*p == '\t') { j += tabsize - (j%tabsize); - else { + if (old_j > j) { + PyErr_SetString(PyExc_OverflowError, + "new string is too long"); + return NULL; + } + old_j = j; + } else { j++; if (*p == '\n') { i += j; @@ -607,6 +613,11 @@ } } + if ((i + j) < 0) { + PyErr_SetString(PyExc_OverflowError, "new string is too long"); + return 
NULL; + } + /* Second pass: create output string and fill it */ out = PyString_FromStringAndSize(NULL, i+j); if (out == NULL) Modified: python/branches/release23-maint/Objects/bufferobject.c ============================================================================== --- python/branches/release23-maint/Objects/bufferobject.c (original) +++ python/branches/release23-maint/Objects/bufferobject.c Sun Mar 2 21:32:57 2008 @@ -138,6 +138,10 @@ "size must be zero or positive"); return NULL; } + if (sizeof(*b) > INT_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } /* Inline PyObject_New */ o = PyObject_MALLOC(sizeof(*b) + size); if ( o == NULL ) @@ -296,6 +300,8 @@ if ( (count = (*pb->bf_getreadbuffer)(other, 0, &p2)) < 0 ) return NULL; + assert(count <= PY_SIZE_MAX - self->b_size); + ob = PyString_FromStringAndSize(NULL, self->b_size + count); p1 = PyString_AS_STRING(ob); memcpy(p1, self->b_ptr, self->b_size); Modified: python/branches/release23-maint/Objects/listobject.c ============================================================================== --- python/branches/release23-maint/Objects/listobject.c (original) +++ python/branches/release23-maint/Objects/listobject.c Sun Mar 2 21:32:57 2008 @@ -62,8 +62,9 @@ return NULL; } nbytes = size * sizeof(PyObject *); - /* Check for overflow */ - if (nbytes / sizeof(PyObject *) != (size_t)size) { + /* Check for overflow without an actual overflow, + * which can cause compiler to optimise out */ + if (size > PY_SIZE_MAX / sizeof(PyObject *)) { return PyErr_NoMemory(); } op = PyObject_GC_New(PyListObject, &PyList_Type); @@ -1235,6 +1236,10 @@ * we don't care what's in the block. */ merge_freemem(ms); + if (need > INT_MAX / sizeof(PyObject*)) { + PyErr_NoMemory(); + return -1; + } ms->a = (PyObject **)PyMem_Malloc(need * sizeof(PyObject*)); if (ms->a) { ms->alloced = need; @@ -2312,6 +2317,8 @@ return 0; } + assert(slicelength <= PY_SIZE_MAX / sizeof(PyObject*)); + garbage = (PyObject**) PyMem_MALLOC(slicelength*sizeof(PyObject*)); Modified: python/branches/release23-maint/Parser/node.c ============================================================================== --- python/branches/release23-maint/Parser/node.c (original) +++ python/branches/release23-maint/Parser/node.c Sun Mar 2 21:32:57 2008 @@ -91,6 +91,9 @@ if (current_capacity < 0 || required_capacity < 0) return E_OVERFLOW; if (current_capacity < required_capacity) { + if (required_capacity > PY_SIZE_MAX / sizeof(node)) { + return E_NOMEM; + } n = n1->n_child; n = (node *) PyObject_REALLOC(n, required_capacity * sizeof(node)); Modified: python/branches/release23-maint/Python/bltinmodule.c ============================================================================== --- python/branches/release23-maint/Python/bltinmodule.c (original) +++ python/branches/release23-maint/Python/bltinmodule.c Sun Mar 2 21:32:57 2008 @@ -2278,11 +2278,43 @@ PyString_AS_STRING(item)[0]; } else { /* do we need more space? 
*/ - int need = j + reslen + len-i-1; + int need = j; + + /* calculate space requirements while checking for overflow */ + if (need > INT_MAX - reslen) { + Py_DECREF(item); + goto Fail_1; + } + + need += reslen; + + if (need > INT_MAX - len) { + Py_DECREF(item); + goto Fail_1; + } + + need += len; + + if (need <= i) { + Py_DECREF(item); + goto Fail_1; + } + + need = need - i - 1; + + assert(need >= 0); + assert(outlen >= 0); + if (need > outlen) { /* overallocate, to avoid reallocations */ - if (need<2*outlen) + if (outlen > INT_MAX / 2) { + Py_DECREF(item); + return NULL; + } + + if (need<2*outlen) { need = 2*outlen; + } if (_PyString_Resize(&result, need)) { Py_DECREF(item); return NULL; @@ -2373,10 +2405,30 @@ } else { /* do we need more space? */ int need = j + reslen + len-i-1; + + /* check that didnt overflow */ + if ((j > INT_MAX - reslen) || + ((j + reslen) > INT_MAX - len) || + ((j + reslen + len) < i) || + ((j + reslen + len - i) <= 0)) { + Py_DECREF(item); + return NULL; + } + + assert(need >= 0); + assert(outlen >= 0); + if (need > outlen) { /* overallocate, to avoid reallocations */ - if (need<2*outlen) - need = 2*outlen; + if (need < 2 * outlen) { + if (outlen > INT_MAX / 2) { + Py_DECREF(item); + return NULL; + } else { + need = 2 * outlen; + } + } + if (PyUnicode_Resize(&result, need)) { Py_DECREF(item); goto Fail_1; From python-checkins at python.org Sun Mar 2 21:39:32 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 21:39:32 +0100 (CET) Subject: [Python-checkins] r61185 - in python/branches/release23-maint: Include/patchlevel.h LICENSE Misc/RPM/python-2.3.spec Python/getcopyright.c README Message-ID: <20080302203932.85F0A1E4012@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 21:39:32 2008 New Revision: 61185 Modified: python/branches/release23-maint/Include/patchlevel.h python/branches/release23-maint/LICENSE python/branches/release23-maint/Misc/RPM/python-2.3.spec python/branches/release23-maint/Python/getcopyright.c python/branches/release23-maint/README Log: Prepare for 2.3.7c1. Modified: python/branches/release23-maint/Include/patchlevel.h ============================================================================== --- python/branches/release23-maint/Include/patchlevel.h (original) +++ python/branches/release23-maint/Include/patchlevel.h Sun Mar 2 21:39:32 2008 @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 3 -#define PY_MICRO_VERSION 6 -#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL -#define PY_RELEASE_SERIAL 0 +#define PY_MICRO_VERSION 7 +#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_GAMMA +#define PY_RELEASE_SERIAL 1 /* Version as a string */ -#define PY_VERSION "2.3.6" +#define PY_VERSION "2.3.7c1" /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. Use this for numeric comparisons, e.g. #if PY_VERSION_HEX >= ... 
*/ Modified: python/branches/release23-maint/LICENSE ============================================================================== --- python/branches/release23-maint/LICENSE (original) +++ python/branches/release23-maint/LICENSE Sun Mar 2 21:39:32 2008 @@ -49,6 +49,7 @@ 2.3.4 2.3.3 2004 PSF yes 2.3.5 2.3.4 2004-2005 PSF yes 2.3.6 2.3.5 2006 PSF yes + 2.3.7 2.3.6 2008 PSF yes Footnotes: @@ -84,9 +85,9 @@ prepare derivative works, distribute, and otherwise use Python 2.3 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) -2001, 2002, 2003, 2004 Python Software Foundation; All Rights Reserved" are -retained in Python 2.3 alone or in any derivative version prepared by -Licensee. +2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software +Foundation; All Rights Reserved" are retained in Python 2.3 alone or +in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python 2.3 or any part thereof, and wants to make Modified: python/branches/release23-maint/Misc/RPM/python-2.3.spec ============================================================================== --- python/branches/release23-maint/Misc/RPM/python-2.3.spec (original) +++ python/branches/release23-maint/Misc/RPM/python-2.3.spec Sun Mar 2 21:39:32 2008 @@ -33,7 +33,7 @@ ################################# %define name python -%define version 2.3.6 +%define version 2.3.7 %define libvers 2.3 %define release 1pydotorg %define __prefix /usr Modified: python/branches/release23-maint/Python/getcopyright.c ============================================================================== --- python/branches/release23-maint/Python/getcopyright.c (original) +++ python/branches/release23-maint/Python/getcopyright.c Sun Mar 2 21:39:32 2008 @@ -4,7 +4,7 @@ static char cprt[] = "\ -Copyright (c) 2001-2005 Python Software Foundation.\n\ +Copyright (c) 2001-2008 Python Software Foundation.\n\ All Rights Reserved.\n\ \n\ Copyright (c) 2000 BeOpen.com.\n\ Modified: python/branches/release23-maint/README ============================================================================== --- python/branches/release23-maint/README (original) +++ python/branches/release23-maint/README Sun Mar 2 21:39:32 2008 @@ -1,7 +1,7 @@ -This is Python version 2.3.6 -============================ +This is Python version 2.3.7c1 +============================== -Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation. +Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation. All rights reserved. Copyright (c) 2000 BeOpen.com. From python-checkins at python.org Sun Mar 2 21:40:14 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 2 Mar 2008 21:40:14 +0100 (CET) Subject: [Python-checkins] r61186 - python/tags/r237c1 Message-ID: <20080302204014.504D31E4021@bag.python.org> Author: martin.v.loewis Date: Sun Mar 2 21:40:14 2008 New Revision: 61186 Added: python/tags/r237c1/ - copied from r61185, python/branches/release23-maint/ Log: Tag 2.3.7c1. From buildbot at python.org Sun Mar 2 22:12:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 02 Mar 2008 21:12:19 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080302211219.4D3E01E402A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2633 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From buildbot at python.org Sun Mar 2 22:15:10 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 02 Mar 2008 21:15:10 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080302211510.D03101E4012@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/677 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gregory.p.smith BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Sun Mar 2 23:33:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 02 Mar 2008 22:33:14 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080302223314.E03981E4016@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/612 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 01:38:59 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 01:38:59 +0100 (CET) Subject: [Python-checkins] r61189 - in python/trunk: Lib/test/test_logging.py Misc/ACKS Misc/NEWS Message-ID: <20080303003859.5959A1E4006@bag.python.org> Author: brett.cannon Date: Mon Mar 3 01:38:58 2008 New Revision: 61189 Modified: python/trunk/Lib/test/test_logging.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS Log: Refactor test_logging to use unittest. This should finally solve the flakiness issues. Thanks to Antoine Pitrou for the patch. Modified: python/trunk/Lib/test/test_logging.py ============================================================================== --- python/trunk/Lib/test/test_logging.py (original) +++ python/trunk/Lib/test/test_logging.py Mon Mar 3 01:38:58 2008 @@ -1,2016 +1,702 @@ #!/usr/bin/env python -""" -Test 0 -====== - -Some preliminaries: ->>> import sys ->>> import logging ->>> def nextmessage(): -... global msgcount -... rv = "Message %d" % msgcount -... msgcount = msgcount + 1 -... return rv - -Set a few variables, then go through the logger autoconfig and set the default threshold. ->>> msgcount = 0 ->>> FINISH_UP = "Finish up, it's closing time. Messages should bear numbers 0 through 24." ->>> logging.basicConfig(stream=sys.stdout) ->>> rootLogger = logging.getLogger("") ->>> rootLogger.setLevel(logging.DEBUG) - -Now, create a bunch of loggers, and set their thresholds. 
->>> ERR = logging.getLogger("ERR0") ->>> ERR.setLevel(logging.ERROR) ->>> INF = logging.getLogger("INFO0") ->>> INF.setLevel(logging.INFO) ->>> INF_ERR = logging.getLogger("INFO0.ERR") ->>> INF_ERR.setLevel(logging.ERROR) ->>> DEB = logging.getLogger("DEB0") ->>> DEB.setLevel(logging.DEBUG) ->>> INF_UNDEF = logging.getLogger("INFO0.UNDEF") ->>> INF_ERR_UNDEF = logging.getLogger("INFO0.ERR.UNDEF") ->>> UNDEF = logging.getLogger("UNDEF0") ->>> GRANDCHILD = logging.getLogger("INFO0.BADPARENT.UNDEF") ->>> CHILD = logging.getLogger("INFO0.BADPARENT") - - -And finally, run all the tests. - ->>> ERR.log(logging.FATAL, nextmessage()) -CRITICAL:ERR0:Message 0 - ->>> ERR.error(nextmessage()) -ERROR:ERR0:Message 1 - ->>> INF.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0:Message 2 - ->>> INF.error(nextmessage()) -ERROR:INFO0:Message 3 - ->>> INF.warn(nextmessage()) -WARNING:INFO0:Message 4 - ->>> INF.info(nextmessage()) -INFO:INFO0:Message 5 - ->>> INF_UNDEF.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.UNDEF:Message 6 - ->>> INF_UNDEF.error(nextmessage()) -ERROR:INFO0.UNDEF:Message 7 - ->>> INF_UNDEF.warn (nextmessage()) -WARNING:INFO0.UNDEF:Message 8 - ->>> INF_UNDEF.info (nextmessage()) -INFO:INFO0.UNDEF:Message 9 - ->>> INF_ERR.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.ERR:Message 10 - ->>> INF_ERR.error(nextmessage()) -ERROR:INFO0.ERR:Message 11 - ->>> INF_ERR_UNDEF.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.ERR.UNDEF:Message 12 - ->>> INF_ERR_UNDEF.error(nextmessage()) -ERROR:INFO0.ERR.UNDEF:Message 13 - ->>> DEB.log(logging.FATAL, nextmessage()) -CRITICAL:DEB0:Message 14 - ->>> DEB.error(nextmessage()) -ERROR:DEB0:Message 15 - ->>> DEB.warn (nextmessage()) -WARNING:DEB0:Message 16 - ->>> DEB.info (nextmessage()) -INFO:DEB0:Message 17 - ->>> DEB.debug(nextmessage()) -DEBUG:DEB0:Message 18 - ->>> UNDEF.log(logging.FATAL, nextmessage()) -CRITICAL:UNDEF0:Message 19 - ->>> UNDEF.error(nextmessage()) -ERROR:UNDEF0:Message 20 - ->>> UNDEF.warn (nextmessage()) -WARNING:UNDEF0:Message 21 - ->>> UNDEF.info (nextmessage()) -INFO:UNDEF0:Message 22 - ->>> GRANDCHILD.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.BADPARENT.UNDEF:Message 23 - ->>> CHILD.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.BADPARENT:Message 24 - -These should not log: - ->>> ERR.warn(nextmessage()) - ->>> ERR.info(nextmessage()) - ->>> ERR.debug(nextmessage()) - ->>> INF.debug(nextmessage()) - ->>> INF_UNDEF.debug(nextmessage()) - ->>> INF_ERR.warn(nextmessage()) - ->>> INF_ERR.info(nextmessage()) - ->>> INF_ERR.debug(nextmessage()) - ->>> INF_ERR_UNDEF.warn(nextmessage()) - ->>> INF_ERR_UNDEF.info(nextmessage()) - ->>> INF_ERR_UNDEF.debug(nextmessage()) - ->>> INF.info(FINISH_UP) -INFO:INFO0:Finish up, it's closing time. Messages should bear numbers 0 through 24. - -Test 1 -====== - ->>> import sys, logging ->>> logging.basicConfig(stream=sys.stdout) - -First, we define our levels. There can be as many as you want - the only -limitations are that they should be integers, the lowest should be > 0 and -larger values mean less information being logged. If you need specific -level values which do not fit into these limitations, you can use a -mapping dictionary to convert between your application levels and the -logging system. - ->>> SILENT = 10 ->>> TACITURN = 9 ->>> TERSE = 8 ->>> EFFUSIVE = 7 ->>> SOCIABLE = 6 ->>> VERBOSE = 5 ->>> TALKATIVE = 4 ->>> GARRULOUS = 3 ->>> CHATTERBOX = 2 ->>> BORING = 1 ->>> LEVEL_RANGE = range(BORING, SILENT + 1) - - -Next, we define names for our levels. 
You don't need to do this - in which - case the system will use "Level n" to denote the text for the level. -' - - ->>> my_logging_levels = { -... SILENT : 'SILENT', -... TACITURN : 'TACITURN', -... TERSE : 'TERSE', -... EFFUSIVE : 'EFFUSIVE', -... SOCIABLE : 'SOCIABLE', -... VERBOSE : 'VERBOSE', -... TALKATIVE : 'TALKATIVE', -... GARRULOUS : 'GARRULOUS', -... CHATTERBOX : 'CHATTERBOX', -... BORING : 'BORING', -... } - - -Now, to demonstrate filtering: suppose for some perverse reason we only -want to print out all except GARRULOUS messages. We create a filter for -this purpose... - ->>> class SpecificLevelFilter(logging.Filter): -... def __init__(self, lvl): -... self.level = lvl -... -... def filter(self, record): -... return self.level != record.levelno - ->>> class GarrulousFilter(SpecificLevelFilter): -... def __init__(self): -... SpecificLevelFilter.__init__(self, GARRULOUS) - - -Now, demonstrate filtering at the logger. This time, use a filter -which excludes SOCIABLE and TACITURN messages. Note that GARRULOUS events -are still excluded. - - ->>> class VerySpecificFilter(logging.Filter): -... def filter(self, record): -... return record.levelno not in [SOCIABLE, TACITURN] - ->>> SHOULD1 = "This should only be seen at the '%s' logging level (or lower)" - -Configure the logger, and tell the logging system to associate names with our levels. ->>> logging.basicConfig(stream=sys.stdout) ->>> rootLogger = logging.getLogger("") ->>> rootLogger.setLevel(logging.DEBUG) ->>> for lvl in my_logging_levels.keys(): -... logging.addLevelName(lvl, my_logging_levels[lvl]) ->>> log = logging.getLogger("") ->>> hdlr = log.handlers[0] ->>> from test_logging import message - -Set the logging level to each different value and call the utility -function to log events. In the output, you should see that each time -round the loop, the number of logging events which are actually output -decreases. 
- ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) -BORING:root:This should only be seen at the '1' logging level (or lower) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) -GARRULOUS:root:This should only be seen at the '3' logging level (or lower) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) -GARRULOUS:root:This should only be seen at the '3' logging level (or lower) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or 
lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) -GARRULOUS:root:This should only be seen at the '3' logging level (or lower) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' 
logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or 
lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> hdlr.setLevel(SOCIABLE) - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' 
logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> 
log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only 
be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> 
log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> - ->>> hdlr.setLevel(0) - ->>> garr = GarrulousFilter() - ->>> hdlr.addFilter(garr) - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) -BORING:root:This should only be seen at the '1' logging level (or lower) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be 
seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only 
be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level 
(or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> 
log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> spec = VerySpecificFilter() - ->>> log.addFilter(spec) - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) -BORING:root:This should only be seen at the '1' logging level (or lower) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at 
the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) +# +# Copyright 2001-2004 by Vinay Sajip. All Rights Reserved. 
+# +# Permission to use, copy, modify, and distribute this software and its +# documentation for any purpose and without fee is hereby granted, +# provided that the above copyright notice appear in all copies and that +# both that copyright notice and this permission notice appear in +# supporting documentation, and that the name of Vinay Sajip +# not be used in advertising or publicity pertaining to distribution +# of the software without specific, written prior permission. +# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING +# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR +# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER +# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT +# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) +"""Test harness for the logging module. Run all tests. ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen 
at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, 
"This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.removeFilter(spec) - ->>> hdlr.removeFilter(garr) - ->>> logging.addLevelName(logging.DEBUG, "DEBUG") - - -Test 2 -====== -Test memory handlers. These are basically buffers for log messages: they take so many messages, and then print them all. - ->>> import logging.handlers - ->>> sys.stderr = sys.stdout ->>> logger = logging.getLogger("") ->>> sh = logger.handlers[0] ->>> sh.close() ->>> logger.removeHandler(sh) ->>> mh = logging.handlers.MemoryHandler(10,logging.WARNING, sh) ->>> logger.setLevel(logging.DEBUG) ->>> logger.addHandler(mh) - ->>> logger.debug("Debug message") - --- logging at INFO, nothing should be seen yet -- - ->>> logger.info("Info message") - --- logging at WARNING, 3 messages should be seen -- - ->>> logger.warn("Warn message") -DEBUG:root:Debug message -INFO:root:Info message -WARNING:root:Warn message - ->>> logger.info("Info index = 0") - ->>> logger.info("Info index = 1") - ->>> logger.info("Info index = 2") - ->>> logger.info("Info index = 3") - ->>> logger.info("Info index = 4") - ->>> logger.info("Info index = 5") - ->>> logger.info("Info index = 6") - ->>> logger.info("Info index = 7") - ->>> logger.info("Info index = 8") - ->>> logger.info("Info index = 9") -INFO:root:Info index = 0 -INFO:root:Info index = 1 -INFO:root:Info index = 2 -INFO:root:Info index = 3 -INFO:root:Info index = 4 -INFO:root:Info index = 5 -INFO:root:Info index = 6 -INFO:root:Info index = 7 -INFO:root:Info index = 8 -INFO:root:Info index = 9 - ->>> logger.info("Info index = 10") - ->>> logger.info("Info index = 11") - ->>> logger.info("Info index = 12") - ->>> logger.info("Info index = 13") - ->>> logger.info("Info index = 14") - ->>> logger.info("Info index = 15") - ->>> logger.info("Info index = 16") - ->>> logger.info("Info index = 17") - ->>> logger.info("Info index = 18") - ->>> logger.info("Info index = 19") -INFO:root:Info index = 10 -INFO:root:Info index = 11 -INFO:root:Info index = 12 -INFO:root:Info index = 13 -INFO:root:Info index = 14 -INFO:root:Info index = 15 -INFO:root:Info index = 16 -INFO:root:Info index = 17 -INFO:root:Info index = 18 -INFO:root:Info index = 19 - ->>> logger.info("Info index = 20") - ->>> logger.info("Info index = 21") - ->>> logger.info("Info index = 22") - ->>> logger.info("Info index = 23") - ->>> logger.info("Info index = 24") - ->>> logger.info("Info index = 25") - ->>> logger.info("Info index = 26") - ->>> logger.info("Info index = 27") - ->>> logger.info("Info index = 28") - ->>> logger.info("Info index = 29") -INFO:root:Info index = 20 -INFO:root:Info index = 21 -INFO:root:Info index = 22 -INFO:root:Info index = 23 -INFO:root:Info index = 24 -INFO:root:Info index = 25 -INFO:root:Info index = 26 -INFO:root:Info index = 27 -INFO:root:Info index = 28 -INFO:root:Info index = 29 - ->>> logger.info("Info index = 30") - ->>> logger.info("Info index = 31") - ->>> logger.info("Info index = 32") - ->>> logger.info("Info index = 33") - ->>> logger.info("Info index = 34") - ->>> logger.info("Info index = 35") - ->>> 
logger.info("Info index = 36") - ->>> logger.info("Info index = 37") - ->>> logger.info("Info index = 38") - ->>> logger.info("Info index = 39") -INFO:root:Info index = 30 -INFO:root:Info index = 31 -INFO:root:Info index = 32 -INFO:root:Info index = 33 -INFO:root:Info index = 34 -INFO:root:Info index = 35 -INFO:root:Info index = 36 -INFO:root:Info index = 37 -INFO:root:Info index = 38 -INFO:root:Info index = 39 - ->>> logger.info("Info index = 40") - ->>> logger.info("Info index = 41") - ->>> logger.info("Info index = 42") - ->>> logger.info("Info index = 43") - ->>> logger.info("Info index = 44") - ->>> logger.info("Info index = 45") - ->>> logger.info("Info index = 46") - ->>> logger.info("Info index = 47") - ->>> logger.info("Info index = 48") - ->>> logger.info("Info index = 49") -INFO:root:Info index = 40 -INFO:root:Info index = 41 -INFO:root:Info index = 42 -INFO:root:Info index = 43 -INFO:root:Info index = 44 -INFO:root:Info index = 45 -INFO:root:Info index = 46 -INFO:root:Info index = 47 -INFO:root:Info index = 48 -INFO:root:Info index = 49 - ->>> logger.info("Info index = 50") - ->>> logger.info("Info index = 51") - ->>> logger.info("Info index = 52") - ->>> logger.info("Info index = 53") - ->>> logger.info("Info index = 54") - ->>> logger.info("Info index = 55") - ->>> logger.info("Info index = 56") - ->>> logger.info("Info index = 57") - ->>> logger.info("Info index = 58") - ->>> logger.info("Info index = 59") -INFO:root:Info index = 50 -INFO:root:Info index = 51 -INFO:root:Info index = 52 -INFO:root:Info index = 53 -INFO:root:Info index = 54 -INFO:root:Info index = 55 -INFO:root:Info index = 56 -INFO:root:Info index = 57 -INFO:root:Info index = 58 -INFO:root:Info index = 59 - ->>> logger.info("Info index = 60") - ->>> logger.info("Info index = 61") - ->>> logger.info("Info index = 62") - ->>> logger.info("Info index = 63") - ->>> logger.info("Info index = 64") - ->>> logger.info("Info index = 65") - ->>> logger.info("Info index = 66") - ->>> logger.info("Info index = 67") - ->>> logger.info("Info index = 68") - ->>> logger.info("Info index = 69") -INFO:root:Info index = 60 -INFO:root:Info index = 61 -INFO:root:Info index = 62 -INFO:root:Info index = 63 -INFO:root:Info index = 64 -INFO:root:Info index = 65 -INFO:root:Info index = 66 -INFO:root:Info index = 67 -INFO:root:Info index = 68 -INFO:root:Info index = 69 - ->>> logger.info("Info index = 70") - ->>> logger.info("Info index = 71") - ->>> logger.info("Info index = 72") - ->>> logger.info("Info index = 73") - ->>> logger.info("Info index = 74") - ->>> logger.info("Info index = 75") - ->>> logger.info("Info index = 76") - ->>> logger.info("Info index = 77") - ->>> logger.info("Info index = 78") - ->>> logger.info("Info index = 79") -INFO:root:Info index = 70 -INFO:root:Info index = 71 -INFO:root:Info index = 72 -INFO:root:Info index = 73 -INFO:root:Info index = 74 -INFO:root:Info index = 75 -INFO:root:Info index = 76 -INFO:root:Info index = 77 -INFO:root:Info index = 78 -INFO:root:Info index = 79 - ->>> logger.info("Info index = 80") - ->>> logger.info("Info index = 81") - ->>> logger.info("Info index = 82") - ->>> logger.info("Info index = 83") - ->>> logger.info("Info index = 84") - ->>> logger.info("Info index = 85") - ->>> logger.info("Info index = 86") - ->>> logger.info("Info index = 87") - ->>> logger.info("Info index = 88") - ->>> logger.info("Info index = 89") -INFO:root:Info index = 80 -INFO:root:Info index = 81 -INFO:root:Info index = 82 -INFO:root:Info index = 83 -INFO:root:Info index = 84 -INFO:root:Info index = 
85 -INFO:root:Info index = 86 -INFO:root:Info index = 87 -INFO:root:Info index = 88 -INFO:root:Info index = 89 - ->>> logger.info("Info index = 90") - ->>> logger.info("Info index = 91") - ->>> logger.info("Info index = 92") - ->>> logger.info("Info index = 93") - ->>> logger.info("Info index = 94") - ->>> logger.info("Info index = 95") - ->>> logger.info("Info index = 96") - ->>> logger.info("Info index = 97") - ->>> logger.info("Info index = 98") - ->>> logger.info("Info index = 99") -INFO:root:Info index = 90 -INFO:root:Info index = 91 -INFO:root:Info index = 92 -INFO:root:Info index = 93 -INFO:root:Info index = 94 -INFO:root:Info index = 95 -INFO:root:Info index = 96 -INFO:root:Info index = 97 -INFO:root:Info index = 98 -INFO:root:Info index = 99 - ->>> logger.info("Info index = 100") - ->>> logger.info("Info index = 101") - ->>> mh.close() -INFO:root:Info index = 100 -INFO:root:Info index = 101 - ->>> logger.removeHandler(mh) ->>> logger.addHandler(sh) - - - -Test 3 -====== - ->>> import sys, logging ->>> sys.stderr = sys ->>> logging.basicConfig() ->>> FILTER = "a.b" ->>> root = logging.getLogger() ->>> root.setLevel(logging.DEBUG) ->>> hand = root.handlers[0] - ->>> logging.getLogger("a").info("Info 1") -INFO:a:Info 1 - ->>> logging.getLogger("a.b").info("Info 2") -INFO:a.b:Info 2 - ->>> logging.getLogger("a.c").info("Info 3") -INFO:a.c:Info 3 - ->>> logging.getLogger("a.b.c").info("Info 4") -INFO:a.b.c:Info 4 - ->>> logging.getLogger("a.b.c.d").info("Info 5") -INFO:a.b.c.d:Info 5 - ->>> logging.getLogger("a.bb.c").info("Info 6") -INFO:a.bb.c:Info 6 - ->>> logging.getLogger("b").info("Info 7") -INFO:b:Info 7 - ->>> logging.getLogger("b.a").info("Info 8") -INFO:b.a:Info 8 - ->>> logging.getLogger("c.a.b").info("Info 9") -INFO:c.a.b:Info 9 - ->>> logging.getLogger("a.bb").info("Info 10") -INFO:a.bb:Info 10 - -Filtered with 'a.b'... - ->>> filt = logging.Filter(FILTER) - ->>> hand.addFilter(filt) - ->>> logging.getLogger("a").info("Info 1") - ->>> logging.getLogger("a.b").info("Info 2") -INFO:a.b:Info 2 - ->>> logging.getLogger("a.c").info("Info 3") - ->>> logging.getLogger("a.b.c").info("Info 4") -INFO:a.b.c:Info 4 +Copyright (C) 2001-2002 Vinay Sajip. All Rights Reserved. 
+""" ->>> logging.getLogger("a.b.c.d").info("Info 5") -INFO:a.b.c.d:Info 5 +import logging +import logging.handlers +import logging.config + +import copy +import cPickle +import cStringIO +import gc +import os +import re +import select +import socket +from SocketServer import ThreadingTCPServer, StreamRequestHandler +import string +import struct +import sys +import tempfile +from test.test_support import captured_stdout, run_with_locale, run_unittest +import textwrap +import threading +import time +import types +import unittest +import weakref + + +class BaseTest(unittest.TestCase): + + """Base class for logging tests.""" + + log_format = "%(name)s -> %(levelname)s: %(message)s" + expected_log_pat = r"^([\w.]+) -> ([\w]+): ([\d]+)$" + message_num = 0 + + def setUp(self): + """Setup the default logging stream to an internal StringIO instance, + so that we can examine log output as we want.""" + logger_dict = logging.getLogger().manager.loggerDict + logging._acquireLock() + try: + self.saved_handlers = logging._handlers.copy() + self.saved_handler_list = logging._handlerList[:] + self.saved_loggers = logger_dict.copy() + self.saved_level_names = logging._levelNames.copy() + finally: + logging._releaseLock() ->>> logging.getLogger("a.bb.c").info("Info 6") + self.root_logger = logging.getLogger("") + self.original_logging_level = self.root_logger.getEffectiveLevel() ->>> logging.getLogger("b").info("Info 7") + self.stream = cStringIO.StringIO() + self.root_logger.setLevel(logging.DEBUG) + self.root_hdlr = logging.StreamHandler(self.stream) + self.root_formatter = logging.Formatter(self.log_format) + self.root_hdlr.setFormatter(self.root_formatter) + self.root_logger.addHandler(self.root_hdlr) + + def tearDown(self): + """Remove our logging stream, and restore the original logging + level.""" + self.stream.close() + self.root_logger.removeHandler(self.root_hdlr) + self.root_logger.setLevel(self.original_logging_level) + logging._acquireLock() + try: + logging._levelNames.clear() + logging._levelNames.update(self.saved_level_names) + logging._handlers.clear() + logging._handlers.update(self.saved_handlers) + logging._handlerList[:] = self.saved_handler_list + loggerDict = logging.getLogger().manager.loggerDict + loggerDict.clear() + loggerDict.update(self.saved_loggers) + finally: + logging._releaseLock() ->>> logging.getLogger("b.a").info("Info 8") + def assert_log_lines(self, expected_values, stream=None): + """Match the collected log lines against the regular expression + self.expected_log_pat, and compare the extracted group values to + the expected_values list of tuples.""" + stream = stream or self.stream + pat = re.compile(self.expected_log_pat) + try: + stream.reset() + actual_lines = stream.readlines() + except AttributeError: + # StringIO.StringIO lacks a reset() method. 
+ actual_lines = stream.getvalue().splitlines() + self.assertEquals(len(actual_lines), len(expected_values)) + for actual, expected in zip(actual_lines, expected_values): + match = pat.search(actual) + if not match: + self.fail("Log line does not match expected pattern:\n" + + actual) + self.assertEquals(tuple(match.groups()), expected) + s = stream.read() + if s: + self.fail("Remaining output at end of log stream:\n" + s) + + def next_message(self): + """Generate a message consisting solely of an auto-incrementing + integer.""" + self.message_num += 1 + return "%d" % self.message_num + + +class BuiltinLevelsTest(BaseTest): + """Test builtin levels and their inheritance.""" + + def test_flat(self): + #Logging levels in a flat logger namespace. + m = self.next_message + + ERR = logging.getLogger("ERR") + ERR.setLevel(logging.ERROR) + INF = logging.getLogger("INF") + INF.setLevel(logging.INFO) + DEB = logging.getLogger("DEB") + DEB.setLevel(logging.DEBUG) + + # These should log. + ERR.log(logging.CRITICAL, m()) + ERR.error(m()) + + INF.log(logging.CRITICAL, m()) + INF.error(m()) + INF.warn(m()) + INF.info(m()) + + DEB.log(logging.CRITICAL, m()) + DEB.error(m()) + DEB.warn (m()) + DEB.info (m()) + DEB.debug(m()) + + # These should not log. + ERR.warn(m()) + ERR.info(m()) + ERR.debug(m()) + + INF.debug(m()) + + self.assert_log_lines([ + ('ERR', 'CRITICAL', '1'), + ('ERR', 'ERROR', '2'), + ('INF', 'CRITICAL', '3'), + ('INF', 'ERROR', '4'), + ('INF', 'WARNING', '5'), + ('INF', 'INFO', '6'), + ('DEB', 'CRITICAL', '7'), + ('DEB', 'ERROR', '8'), + ('DEB', 'WARNING', '9'), + ('DEB', 'INFO', '10'), + ('DEB', 'DEBUG', '11'), + ]) + + def test_nested_explicit(self): + # Logging levels in a nested namespace, all explicitly set. + m = self.next_message + + INF = logging.getLogger("INF") + INF.setLevel(logging.INFO) + INF_ERR = logging.getLogger("INF.ERR") + INF_ERR.setLevel(logging.ERROR) + + # These should log. + INF_ERR.log(logging.CRITICAL, m()) + INF_ERR.error(m()) + + # These should not log. + INF_ERR.warn(m()) + INF_ERR.info(m()) + INF_ERR.debug(m()) + + self.assert_log_lines([ + ('INF.ERR', 'CRITICAL', '1'), + ('INF.ERR', 'ERROR', '2'), + ]) + + def test_nested_inherited(self): + #Logging levels in a nested namespace, inherited from parent loggers. + m = self.next_message + + INF = logging.getLogger("INF") + INF.setLevel(logging.INFO) + INF_ERR = logging.getLogger("INF.ERR") + INF_ERR.setLevel(logging.ERROR) + INF_UNDEF = logging.getLogger("INF.UNDEF") + INF_ERR_UNDEF = logging.getLogger("INF.ERR.UNDEF") + UNDEF = logging.getLogger("UNDEF") + + # These should log. + INF_UNDEF.log(logging.CRITICAL, m()) + INF_UNDEF.error(m()) + INF_UNDEF.warn(m()) + INF_UNDEF.info(m()) + INF_ERR_UNDEF.log(logging.CRITICAL, m()) + INF_ERR_UNDEF.error(m()) + + # These should not log. + INF_UNDEF.debug(m()) + INF_ERR_UNDEF.warn(m()) + INF_ERR_UNDEF.info(m()) + INF_ERR_UNDEF.debug(m()) + + self.assert_log_lines([ + ('INF.UNDEF', 'CRITICAL', '1'), + ('INF.UNDEF', 'ERROR', '2'), + ('INF.UNDEF', 'WARNING', '3'), + ('INF.UNDEF', 'INFO', '4'), + ('INF.ERR.UNDEF', 'CRITICAL', '5'), + ('INF.ERR.UNDEF', 'ERROR', '6'), + ]) + + def test_nested_with_virtual_parent(self): + # Logging levels when some parent does not exist yet. + m = self.next_message + + INF = logging.getLogger("INF") + GRANDCHILD = logging.getLogger("INF.BADPARENT.UNDEF") + CHILD = logging.getLogger("INF.BADPARENT") + INF.setLevel(logging.INFO) + + # These should log. 
+ GRANDCHILD.log(logging.FATAL, m()) + GRANDCHILD.info(m()) + CHILD.log(logging.FATAL, m()) + CHILD.info(m()) + + # These should not log. + GRANDCHILD.debug(m()) + CHILD.debug(m()) + + self.assert_log_lines([ + ('INF.BADPARENT.UNDEF', 'CRITICAL', '1'), + ('INF.BADPARENT.UNDEF', 'INFO', '2'), + ('INF.BADPARENT', 'CRITICAL', '3'), + ('INF.BADPARENT', 'INFO', '4'), + ]) + + +class BasicFilterTest(BaseTest): + + """Test the bundled Filter class.""" + + def test_filter(self): + # Only messages satisfying the specified criteria pass through the + # filter. + filter_ = logging.Filter("spam.eggs") + handler = self.root_logger.handlers[0] + try: + handler.addFilter(filter_) + spam = logging.getLogger("spam") + spam_eggs = logging.getLogger("spam.eggs") + spam_eggs_fish = logging.getLogger("spam.eggs.fish") + spam_bakedbeans = logging.getLogger("spam.bakedbeans") + + spam.info(self.next_message()) + spam_eggs.info(self.next_message()) # Good. + spam_eggs_fish.info(self.next_message()) # Good. + spam_bakedbeans.info(self.next_message()) + + self.assert_log_lines([ + ('spam.eggs', 'INFO', '2'), + ('spam.eggs.fish', 'INFO', '3'), + ]) + finally: + handler.removeFilter(filter_) ->>> logging.getLogger("c.a.b").info("Info 9") ->>> logging.getLogger("a.bb").info("Info 10") +# +# First, we define our levels. There can be as many as you want - the only +# limitations are that they should be integers, the lowest should be > 0 and +# larger values mean less information being logged. If you need specific +# level values which do not fit into these limitations, you can use a +# mapping dictionary to convert between your application levels and the +# logging system. +# +SILENT = 120 +TACITURN = 119 +TERSE = 118 +EFFUSIVE = 117 +SOCIABLE = 116 +VERBOSE = 115 +TALKATIVE = 114 +GARRULOUS = 113 +CHATTERBOX = 112 +BORING = 111 + +LEVEL_RANGE = range(BORING, SILENT + 1) + +# +# Next, we define names for our levels. You don't need to do this - in which +# case the system will use "Level n" to denote the text for the level. +# +my_logging_levels = { + SILENT : 'Silent', + TACITURN : 'Taciturn', + TERSE : 'Terse', + EFFUSIVE : 'Effusive', + SOCIABLE : 'Sociable', + VERBOSE : 'Verbose', + TALKATIVE : 'Talkative', + GARRULOUS : 'Garrulous', + CHATTERBOX : 'Chatterbox', + BORING : 'Boring', +} + +class GarrulousFilter(logging.Filter): + + """A filter which blocks garrulous messages.""" + + def filter(self, record): + return record.levelno != GARRULOUS + +class VerySpecificFilter(logging.Filter): + + """A filter which blocks sociable and taciturn messages.""" + + def filter(self, record): + return record.levelno not in [SOCIABLE, TACITURN] + + +class CustomLevelsAndFiltersTest(BaseTest): + + """Test various filtering possibilities with custom logging levels.""" + + # Skip the logger name group. + expected_log_pat = r"^[\w.]+ -> ([\w]+): ([\d]+)$" + + def setUp(self): + BaseTest.setUp(self) + for k, v in my_logging_levels.items(): + logging.addLevelName(k, v) + + def log_at_all_levels(self, logger): + for lvl in LEVEL_RANGE: + logger.log(lvl, self.next_message()) + + def test_logger_filter(self): + # Filter at logger level. + self.root_logger.setLevel(VERBOSE) + # Levels >= 'Verbose' are good. + self.log_at_all_levels(self.root_logger) + self.assert_log_lines([ + ('Verbose', '5'), + ('Sociable', '6'), + ('Effusive', '7'), + ('Terse', '8'), + ('Taciturn', '9'), + ('Silent', '10'), + ]) + + def test_handler_filter(self): + # Filter at handler level. 
+ self.root_logger.handlers[0].setLevel(SOCIABLE) + try: + # Levels >= 'Sociable' are good. + self.log_at_all_levels(self.root_logger) + self.assert_log_lines([ + ('Sociable', '6'), + ('Effusive', '7'), + ('Terse', '8'), + ('Taciturn', '9'), + ('Silent', '10'), + ]) + finally: + self.root_logger.handlers[0].setLevel(logging.NOTSET) ->>> hand.removeFilter(filt) + def test_specific_filters(self): + # Set a specific filter object on the handler, and then add another + # filter object on the logger itself. + handler = self.root_logger.handlers[0] + specific_filter = None + garr = GarrulousFilter() + handler.addFilter(garr) + try: + self.log_at_all_levels(self.root_logger) + first_lines = [ + # Notice how 'Garrulous' is missing + ('Boring', '1'), + ('Chatterbox', '2'), + ('Talkative', '4'), + ('Verbose', '5'), + ('Sociable', '6'), + ('Effusive', '7'), + ('Terse', '8'), + ('Taciturn', '9'), + ('Silent', '10'), + ] + self.assert_log_lines(first_lines) + + specific_filter = VerySpecificFilter() + self.root_logger.addFilter(specific_filter) + self.log_at_all_levels(self.root_logger) + self.assert_log_lines(first_lines + [ + # Not only 'Garrulous' is still missing, but also 'Sociable' + # and 'Taciturn' + ('Boring', '11'), + ('Chatterbox', '12'), + ('Talkative', '14'), + ('Verbose', '15'), + ('Effusive', '17'), + ('Terse', '18'), + ('Silent', '20'), + ]) + finally: + if specific_filter: + self.root_logger.removeFilter(specific_filter) + handler.removeFilter(garr) + + +class MemoryHandlerTest(BaseTest): + + """Tests for the MemoryHandler.""" + + # Do not bother with a logger name group. + expected_log_pat = r"^[\w.]+ -> ([\w]+): ([\d]+)$" + + def setUp(self): + BaseTest.setUp(self) + self.mem_hdlr = logging.handlers.MemoryHandler(10, logging.WARNING, + self.root_hdlr) + self.mem_logger = logging.getLogger('mem') + self.mem_logger.propagate = 0 + self.mem_logger.addHandler(self.mem_hdlr) + + def tearDown(self): + self.mem_hdlr.close() + + def test_flush(self): + # The memory handler flushes to its target handler based on specific + # criteria (message count and message level). + self.mem_logger.debug(self.next_message()) + self.assert_log_lines([]) + self.mem_logger.info(self.next_message()) + self.assert_log_lines([]) + # This will flush because the level is >= logging.WARNING + self.mem_logger.warn(self.next_message()) + lines = [ + ('DEBUG', '1'), + ('INFO', '2'), + ('WARNING', '3'), + ] + self.assert_log_lines(lines) + for n in (4, 14): + for i in range(9): + self.mem_logger.debug(self.next_message()) + self.assert_log_lines(lines) + # This will flush because it's the 10th message since the last + # flush. + self.mem_logger.debug(self.next_message()) + lines = lines + [('DEBUG', str(i)) for i in range(n, n + 10)] + self.assert_log_lines(lines) + self.mem_logger.debug(self.next_message()) + self.assert_log_lines(lines) -Test 4 -====== ->>> import sys, logging, logging.handlers, string ->>> import tempfile, logging.config, os, test.test_support ->>> sys.stderr = sys.stdout ->>> from test_logging import config0, config1 +class ExceptionFormatter(logging.Formatter): + """A special exception formatter.""" + def formatException(self, ei): + return "Got a [%s]" % ei[0].__name__ -config2 has a subtle configuration error that should be reported ->>> config2 = string.replace(config1, "sys.stdout", "sys.stbout") -config3 has a less subtle configuration error ->>> config3 = string.replace(config1, "formatter=form1", "formatter=misspelled_name") +class ConfigFileTest(BaseTest): ->>> def test4(conf): -... 
loggerDict = logging.getLogger().manager.loggerDict -... logging._acquireLock() -... try: -... saved_handlers = logging._handlers.copy() -... saved_handler_list = logging._handlerList[:] -... saved_loggers = loggerDict.copy() -... finally: -... logging._releaseLock() -... try: -... fn = test.test_support.TESTFN -... f = open(fn, "w") -... f.write(conf) -... f.close() -... try: -... logging.config.fileConfig(fn) -... #call again to make sure cleanup is correct -... logging.config.fileConfig(fn) -... except: -... t = sys.exc_info()[0] -... message(str(t)) -... else: -... message('ok.') -... os.remove(fn) -... finally: -... logging._acquireLock() -... try: -... logging._handlers.clear() -... logging._handlers.update(saved_handlers) -... logging._handlerList[:] = saved_handler_list -... loggerDict = logging.getLogger().manager.loggerDict -... loggerDict.clear() -... loggerDict.update(saved_loggers) -... finally: -... logging._releaseLock() + """Reading logging config from a .ini-style config file.""" ->>> test4(config0) -ok. + expected_log_pat = r"^([\w]+) \+\+ ([\w]+)$" ->>> test4(config1) -ok. + # config0 is a standard configuration. + config0 = """ + [loggers] + keys=root ->>> test4(config2) - + [handlers] + keys=hand1 ->>> test4(config3) - + [formatters] + keys=form1 ->>> import test_logging ->>> test_logging.test5() -ERROR:root:just testing -... Don't panic! + [logger_root] + level=WARNING + handlers=hand1 + [handler_hand1] + class=StreamHandler + level=NOTSET + formatter=form1 + args=(sys.stdout,) -Test Main -========= ->>> import select ->>> import os, sys, string, struct, types, cPickle, cStringIO ->>> import socket, tempfile, threading, time ->>> import logging, logging.handlers, logging.config ->>> import test_logging + [formatter_form1] + format=%(levelname)s ++ %(message)s + datefmt= + """ ->>> test_logging.test_main_inner() -ERR -> CRITICAL: Message 0 (via logrecv.tcp.ERR) -ERR -> ERROR: Message 1 (via logrecv.tcp.ERR) -INF -> CRITICAL: Message 2 (via logrecv.tcp.INF) -INF -> ERROR: Message 3 (via logrecv.tcp.INF) -INF -> WARNING: Message 4 (via logrecv.tcp.INF) -INF -> INFO: Message 5 (via logrecv.tcp.INF) -INF.UNDEF -> CRITICAL: Message 6 (via logrecv.tcp.INF.UNDEF) -INF.UNDEF -> ERROR: Message 7 (via logrecv.tcp.INF.UNDEF) -INF.UNDEF -> WARNING: Message 8 (via logrecv.tcp.INF.UNDEF) -INF.UNDEF -> INFO: Message 9 (via logrecv.tcp.INF.UNDEF) -INF.ERR -> CRITICAL: Message 10 (via logrecv.tcp.INF.ERR) -INF.ERR -> ERROR: Message 11 (via logrecv.tcp.INF.ERR) -INF.ERR.UNDEF -> CRITICAL: Message 12 (via logrecv.tcp.INF.ERR.UNDEF) -INF.ERR.UNDEF -> ERROR: Message 13 (via logrecv.tcp.INF.ERR.UNDEF) -DEB -> CRITICAL: Message 14 (via logrecv.tcp.DEB) -DEB -> ERROR: Message 15 (via logrecv.tcp.DEB) -DEB -> WARNING: Message 16 (via logrecv.tcp.DEB) -DEB -> INFO: Message 17 (via logrecv.tcp.DEB) -DEB -> DEBUG: Message 18 (via logrecv.tcp.DEB) -UNDEF -> CRITICAL: Message 19 (via logrecv.tcp.UNDEF) -UNDEF -> ERROR: Message 20 (via logrecv.tcp.UNDEF) -UNDEF -> WARNING: Message 21 (via logrecv.tcp.UNDEF) -UNDEF -> INFO: Message 22 (via logrecv.tcp.UNDEF) -INF.BADPARENT.UNDEF -> CRITICAL: Message 23 (via logrecv.tcp.INF.BADPARENT.UNDEF) -INF.BADPARENT -> CRITICAL: Message 24 (via logrecv.tcp.INF.BADPARENT) -INF -> INFO: Finish up, it's closing time. Messages should bear numbers 0 through 24. 
(via logrecv.tcp.INF) - -""" -import select -import os, sys, string, struct, cPickle, cStringIO -import socket, threading -import logging, logging.handlers, logging.config, test.test_support + # config1 adds a little to the standard configuration. + config1 = """ + [loggers] + keys=root,parser + + [handlers] + keys=hand1 + + [formatters] + keys=form1 + + [logger_root] + level=WARNING + handlers= + + [logger_parser] + level=DEBUG + handlers=hand1 + propagate=1 + qualname=compiler.parser + + [handler_hand1] + class=StreamHandler + level=NOTSET + formatter=form1 + args=(sys.stdout,) + + [formatter_form1] + format=%(levelname)s ++ %(message)s + datefmt= + """ + # config2 has a subtle configuration error that should be reported + config2 = config1.replace("sys.stdout", "sys.stbout") -BANNER = "-- %-10s %-6s ---------------------------------------------------\n" + # config3 has a less subtle configuration error + config3 = config1.replace("formatter=form1", "formatter=misspelled_name") -FINISH_UP = "Finish up, it's closing time. Messages should bear numbers 0 through 24." -#---------------------------------------------------------------------------- -# Test 0 -#---------------------------------------------------------------------------- + # config4 specifies a custom formatter class to be loaded + config4 = """ + [loggers] + keys=root + + [handlers] + keys=hand1 + + [formatters] + keys=form1 + + [logger_root] + level=NOTSET + handlers=hand1 + + [handler_hand1] + class=StreamHandler + level=NOTSET + formatter=form1 + args=(sys.stdout,) + + [formatter_form1] + class=""" + __name__ + """.ExceptionFormatter + format=%(levelname)s:%(name)s:%(message)s + datefmt= + """ -msgcount = 0 + def apply_config(self, conf): + try: + fn = tempfile.mktemp(".ini") + f = open(fn, "w") + f.write(textwrap.dedent(conf)) + f.close() + logging.config.fileConfig(fn) + finally: + os.remove(fn) -def nextmessage(): - global msgcount - rv = "Message %d" % msgcount - msgcount = msgcount + 1 - return rv + def test_config0_ok(self): + # A simple config file which overrides the default settings. + with captured_stdout() as output: + self.apply_config(self.config0) + logger = logging.getLogger() + # Won't output anything + logger.info(self.next_message()) + # Outputs a message + logger.error(self.next_message()) + self.assert_log_lines([ + ('ERROR', '2'), + ], stream=output) + # Original logger output is empty. + self.assert_log_lines([]) + + def test_config1_ok(self): + # A config file defining a sub-parser as well. + with captured_stdout() as output: + self.apply_config(self.config1) + logger = logging.getLogger("compiler.parser") + # Both will output a message + logger.info(self.next_message()) + logger.error(self.next_message()) + self.assert_log_lines([ + ('INFO', '1'), + ('ERROR', '2'), + ], stream=output) + # Original logger output is empty. + self.assert_log_lines([]) + + def test_config2_failure(self): + # A simple config file which overrides the default settings. + self.assertRaises(StandardError, self.apply_config, self.config2) + + def test_config3_failure(self): + # A simple config file which overrides the default settings. + self.assertRaises(StandardError, self.apply_config, self.config3) + + def test_config4_ok(self): + # A config file specifying a custom formatter class. 
+ with captured_stdout() as output: + self.apply_config(self.config4) + logger = logging.getLogger() + try: + raise RuntimeError() + except RuntimeError: + logging.exception("just testing") + sys.stdout.seek(0) + self.assertEquals(output.getvalue(), + "ERROR:root:just testing\nGot a [RuntimeError]\n") + # Original logger output is empty + self.assert_log_lines([]) -#---------------------------------------------------------------------------- -# Log receiver -#---------------------------------------------------------------------------- -TIMEOUT = 10 +class LogRecordStreamHandler(StreamRequestHandler): -from SocketServer import ThreadingTCPServer, StreamRequestHandler + """Handler for a streaming logging request. It saves the log message in the + TCP server's 'log_output' attribute.""" -class LogRecordStreamHandler(StreamRequestHandler): - """ - Handler for a streaming logging request. It basically logs the record - using whatever logging policy is configured locally. - """ + TCP_LOG_END = "!!!END!!!" def handle(self): - """ - Handle multiple requests - each expected to be a 4-byte length, + """Handle multiple requests - each expected to be of 4-byte length, followed by the LogRecord in pickle format. Logs the record - according to whatever policy is configured locally. - """ - while 1: - try: - chunk = self.connection.recv(4) - if len(chunk) < 4: - break - slen = struct.unpack(">L", chunk)[0] - chunk = self.connection.recv(slen) - while len(chunk) < slen: - chunk = chunk + self.connection.recv(slen - len(chunk)) - obj = self.unPickle(chunk) - record = logging.makeLogRecord(obj) - self.handleLogRecord(record) - except: - raise + according to whatever policy is configured locally.""" + while True: + chunk = self.connection.recv(4) + if len(chunk) < 4: + break + slen = struct.unpack(">L", chunk)[0] + chunk = self.connection.recv(slen) + while len(chunk) < slen: + chunk = chunk + self.connection.recv(slen - len(chunk)) + obj = self.unpickle(chunk) + record = logging.makeLogRecord(obj) + self.handle_log_record(record) - def unPickle(self, data): + def unpickle(self, data): return cPickle.loads(data) - def handleLogRecord(self, record): - logname = "logrecv.tcp." + record.name - #If the end-of-messages sentinel is seen, tell the server to terminate - if record.msg == FINISH_UP: + def handle_log_record(self, record): + # If the end-of-messages sentinel is seen, tell the server to + # terminate. + if self.TCP_LOG_END in record.msg: self.server.abort = 1 - record.msg = record.msg + " (via " + logname + ")" - logger = logging.getLogger("logrecv") - logger.handle(record) - -# The server sets socketDataProcessed when it's done. -socketDataProcessed = threading.Event() -#---------------------------------------------------------------------------- -# Test 5 -#---------------------------------------------------------------------------- - -test5_config = """ -[loggers] -keys=root - -[handlers] -keys=hand1 - -[formatters] -keys=form1 - -[logger_root] -level=NOTSET -handlers=hand1 - -[handler_hand1] -class=StreamHandler -level=NOTSET -formatter=form1 -args=(sys.stdout,) - -[formatter_form1] -class=test.test_logging.FriendlyFormatter -format=%(levelname)s:%(name)s:%(message)s -datefmt= -""" - -class FriendlyFormatter (logging.Formatter): - def formatException(self, ei): - return "%s... Don't panic!" 
% str(ei[0]) - - -def test5(): - loggerDict = logging.getLogger().manager.loggerDict - logging._acquireLock() - try: - saved_handlers = logging._handlers.copy() - saved_handler_list = logging._handlerList[:] - saved_loggers = loggerDict.copy() - finally: - logging._releaseLock() - try: - fn = test.test_support.TESTFN - f = open(fn, "w") - f.write(test5_config) - f.close() - logging.config.fileConfig(fn) - try: - raise KeyError - except KeyError: - logging.exception("just testing") - os.remove(fn) - hdlr = logging.getLogger().handlers[0] - logging.getLogger().handlers.remove(hdlr) - finally: - logging._acquireLock() - try: - logging._handlers.clear() - logging._handlers.update(saved_handlers) - logging._handlerList[:] = saved_handler_list - loggerDict = logging.getLogger().manager.loggerDict - loggerDict.clear() - loggerDict.update(saved_loggers) - finally: - logging._releaseLock() + return + self.server.log_output += record.msg + "\n" class LogRecordSocketReceiver(ThreadingTCPServer): - """ - A simple-minded TCP socket-based logging receiver suitable for test - purposes. - """ + + """A simple-minded TCP socket-based logging receiver suitable for test + purposes.""" allow_reuse_address = 1 + log_output = "" def __init__(self, host='localhost', port=logging.handlers.DEFAULT_TCP_LOGGING_PORT, handler=LogRecordStreamHandler): ThreadingTCPServer.__init__(self, (host, port), handler) self.abort = False - self.timeout = 1 + self.timeout = 0.1 + self.finished = threading.Event() def serve_until_stopped(self): while not self.abort: @@ -2018,217 +704,119 @@ self.timeout) if rd: self.handle_request() - socketDataProcessed.set() + # Notify the main thread that we're about to exit + self.finished.set() # close the listen socket self.server_close() - def process_request(self, request, client_address): - t = threading.Thread(target = self.finish_request, - args = (request, client_address)) - t.start() - -def runTCP(tcpserver): - tcpserver.serve_until_stopped() - -def banner(nm, typ): - sep = BANNER % (nm, typ) - sys.stdout.write(sep) - sys.stdout.flush() - -def test0(): - ERR = logging.getLogger("ERR") - ERR.setLevel(logging.ERROR) - INF = logging.getLogger("INF") - INF.setLevel(logging.INFO) - INF_ERR = logging.getLogger("INF.ERR") - INF_ERR.setLevel(logging.ERROR) - DEB = logging.getLogger("DEB") - DEB.setLevel(logging.DEBUG) - - INF_UNDEF = logging.getLogger("INF.UNDEF") - INF_ERR_UNDEF = logging.getLogger("INF.ERR.UNDEF") - UNDEF = logging.getLogger("UNDEF") - - GRANDCHILD = logging.getLogger("INF.BADPARENT.UNDEF") - CHILD = logging.getLogger("INF.BADPARENT") - - #These should log - ERR.log(logging.FATAL, nextmessage()) - ERR.error(nextmessage()) - - INF.log(logging.FATAL, nextmessage()) - INF.error(nextmessage()) - INF.warn(nextmessage()) - INF.info(nextmessage()) - - INF_UNDEF.log(logging.FATAL, nextmessage()) - INF_UNDEF.error(nextmessage()) - INF_UNDEF.warn (nextmessage()) - INF_UNDEF.info (nextmessage()) - - INF_ERR.log(logging.FATAL, nextmessage()) - INF_ERR.error(nextmessage()) - - INF_ERR_UNDEF.log(logging.FATAL, nextmessage()) - INF_ERR_UNDEF.error(nextmessage()) - - DEB.log(logging.FATAL, nextmessage()) - DEB.error(nextmessage()) - DEB.warn (nextmessage()) - DEB.info (nextmessage()) - DEB.debug(nextmessage()) - - UNDEF.log(logging.FATAL, nextmessage()) - UNDEF.error(nextmessage()) - UNDEF.warn (nextmessage()) - UNDEF.info (nextmessage()) - - GRANDCHILD.log(logging.FATAL, nextmessage()) - CHILD.log(logging.FATAL, nextmessage()) - - #These should not log - ERR.warn(nextmessage()) - 
ERR.info(nextmessage()) - ERR.debug(nextmessage()) - - INF.debug(nextmessage()) - INF_UNDEF.debug(nextmessage()) - - INF_ERR.warn(nextmessage()) - INF_ERR.info(nextmessage()) - INF_ERR.debug(nextmessage()) - INF_ERR_UNDEF.warn(nextmessage()) - INF_ERR_UNDEF.info(nextmessage()) - INF_ERR_UNDEF.debug(nextmessage()) - - INF.info(FINISH_UP) - -def test_main_inner(): - rootLogger = logging.getLogger("") - rootLogger.setLevel(logging.DEBUG) - - tcpserver = LogRecordSocketReceiver(port=0) - port = tcpserver.socket.getsockname()[1] - - # Set up a handler such that all events are sent via a socket to the log - # receiver (logrecv). - # The handler will only be added to the rootLogger for some of the tests - shdlr = logging.handlers.SocketHandler('localhost', port) - rootLogger.addHandler(shdlr) - - # Configure the logger for logrecv so events do not propagate beyond it. - # The sockLogger output is buffered in memory until the end of the test, - # and printed at the end. - sockOut = cStringIO.StringIO() - sockLogger = logging.getLogger("logrecv") - sockLogger.setLevel(logging.DEBUG) - sockhdlr = logging.StreamHandler(sockOut) - sockhdlr.setFormatter(logging.Formatter( - "%(name)s -> %(levelname)s: %(message)s")) - sockLogger.addHandler(sockhdlr) - sockLogger.propagate = 0 - - #Set up servers - threads = [] - #sys.stdout.write("About to start TCP server...\n") - threads.append(threading.Thread(target=runTCP, args=(tcpserver,))) - - for thread in threads: - thread.start() - try: - test0() - - # XXX(nnorwitz): Try to fix timing related test failures. - # This sleep gives us some extra time to read messages. - # The test generally only fails on Solaris without this sleep. - #time.sleep(2.0) - shdlr.close() - rootLogger.removeHandler(shdlr) - - finally: - #wait for TCP receiver to terminate - socketDataProcessed.wait() - # ensure the server dies - tcpserver.abort = True - for thread in threads: - thread.join(2.0) - print(sockOut.getvalue()) - sockOut.close() - sockLogger.removeHandler(sockhdlr) - sockhdlr.close() - sys.stdout.flush() - -# config0 is a standard configuration. -config0 = """ -[loggers] -keys=root - -[handlers] -keys=hand1 - -[formatters] -keys=form1 - -[logger_root] -level=NOTSET -handlers=hand1 - -[handler_hand1] -class=StreamHandler -level=NOTSET -formatter=form1 -args=(sys.stdout,) - -[formatter_form1] -format=%(levelname)s:%(name)s:%(message)s -datefmt= -""" -# config1 adds a little to the standard configuration. 
-config1 = """ -[loggers] -keys=root,parser - -[handlers] -keys=hand1 - -[formatters] -keys=form1 - -[logger_root] -level=NOTSET -handlers=hand1 - -[logger_parser] -level=DEBUG -handlers=hand1 -propagate=1 -qualname=compiler.parser - -[handler_hand1] -class=StreamHandler -level=NOTSET -formatter=form1 -args=(sys.stdout,) - -[formatter_form1] -format=%(levelname)s:%(name)s:%(message)s -datefmt= -""" +class SocketHandlerTest(BaseTest): -def message(s): - sys.stdout.write("%s\n" % s) + """Test for SocketHandler objects.""" -# config2 has a subtle configuration error that should be reported -config2 = string.replace(config1, "sys.stdout", "sys.stbout") + def setUp(self): + """Set up a TCP server to receive log messages, and a SocketHandler + pointing to that server's address and port.""" + BaseTest.setUp(self) + self.tcpserver = LogRecordSocketReceiver(port=0) + self.port = self.tcpserver.socket.getsockname()[1] + self.threads = [ + threading.Thread(target=self.tcpserver.serve_until_stopped)] + for thread in self.threads: + thread.start() + + self.sock_hdlr = logging.handlers.SocketHandler('localhost', self.port) + self.sock_hdlr.setFormatter(self.root_formatter) + self.root_logger.removeHandler(self.root_logger.handlers[0]) + self.root_logger.addHandler(self.sock_hdlr) -# config3 has a less subtle configuration error -config3 = string.replace( - config1, "formatter=form1", "formatter=misspelled_name") + def tearDown(self): + """Shutdown the TCP server.""" + try: + self.tcpserver.abort = True + del self.tcpserver + self.root_logger.removeHandler(self.sock_hdlr) + self.sock_hdlr.close() + for thread in self.threads: + thread.join(2.0) + finally: + BaseTest.tearDown(self) + def get_output(self): + """Get the log output as received by the TCP server.""" + # Signal the TCP receiver and wait for it to terminate. + self.root_logger.critical(LogRecordStreamHandler.TCP_LOG_END) + self.tcpserver.finished.wait(2.0) + return self.tcpserver.log_output + + def test_output(self): + # The log message sent to the SocketHandler is properly received. + logger = logging.getLogger("tcp") + logger.error("spam") + logger.debug("eggs") + self.assertEquals(self.get_output(), "spam\neggs\n") + + +class MemoryTest(BaseTest): + + """Test memory persistence of logger objects.""" + + def setUp(self): + """Create a dict to remember potentially destroyed objects.""" + BaseTest.setUp(self) + self._survivors = {} + + def _watch_for_survival(self, *args): + """Watch the given objects for survival, by creating weakrefs to + them.""" + for obj in args: + key = id(obj), repr(obj) + self._survivors[key] = weakref.ref(obj) + + def _assert_survival(self): + """Assert that all objects watched for survival have survived.""" + # Trigger cycle breaking. + gc.collect() + dead = [] + for (id_, repr_), ref in self._survivors.items(): + if ref() is None: + dead.append(repr_) + if dead: + self.fail("%d objects should have survived " + "but have been destroyed: %s" % (len(dead), ", ".join(dead))) + + def test_persistent_loggers(self): + # Logger objects are persistent and retain their configuration, even + # if visible references are destroyed. + self.root_logger.setLevel(logging.INFO) + foo = logging.getLogger("foo") + self._watch_for_survival(foo) + foo.setLevel(logging.DEBUG) + self.root_logger.debug(self.next_message()) + foo.debug(self.next_message()) + self.assert_log_lines([ + ('foo', 'DEBUG', '2'), + ]) + del foo + # foo has survived. + self._assert_survival() + # foo has retained its settings. 
+ bar = logging.getLogger("foo") + bar.debug(self.next_message()) + self.assert_log_lines([ + ('foo', 'DEBUG', '2'), + ('foo', 'DEBUG', '3'), + ]) + + +# Set the locale to the platform-dependent default. I have no idea +# why the test does this, but in any case we save the current locale +# first and restore it at the end. + at run_with_locale('LC_ALL', '') def test_main(): - from test import test_support, test_logging - test_support.run_doctest(test_logging) + run_unittest(BuiltinLevelsTest, BasicFilterTest, + CustomLevelsAndFiltersTest, MemoryHandlerTest, + ConfigFileTest, SocketHandlerTest, MemoryTest) -if __name__=="__main__": +if __name__ == "__main__": test_main() Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Mon Mar 3 01:38:58 2008 @@ -524,6 +524,7 @@ Fran?ois Pinard Zach Pincus Michael Piotrowski +Antoine Pitrou Michael Pomraning Iustin Pop John Popplewell Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Mon Mar 3 01:38:58 2008 @@ -1419,7 +1419,7 @@ Tests ----- -- Refactor test_logging to use doctest. +- Refactor test_logging to use unittest. - Refactor test_profile and test_cprofile to use the same code to profile. From python-checkins at python.org Mon Mar 3 02:27:03 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Mon, 3 Mar 2008 02:27:03 +0100 (CET) Subject: [Python-checkins] r61190 - python/trunk/Python/ceval.c Message-ID: <20080303012703.9F1A31E4006@bag.python.org> Author: jeffrey.yasskin Date: Mon Mar 3 02:27:03 2008 New Revision: 61190 Modified: python/trunk/Python/ceval.c Log: compile.c always emits END_FINALLY after WITH_CLEANUP, so predict that in ceval.c. This is worth about a .03-.04us speedup on a simple with block. Modified: python/trunk/Python/ceval.c ============================================================================== --- python/trunk/Python/ceval.c (original) +++ python/trunk/Python/ceval.c Mon Mar 3 02:27:03 2008 @@ -1694,6 +1694,7 @@ } continue; + PREDICTED(END_FINALLY); case END_FINALLY: v = POP(); if (PyInt_Check(v)) { @@ -2302,6 +2303,7 @@ x = POP(); Py_DECREF(x); } + PREDICT(END_FINALLY); break; } From buildbot at python.org Mon Mar 3 03:06:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 02:06:06 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080303020606.4F3B41E4016@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/932 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tarfile make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 03:11:44 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 02:11:44 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080303021144.960D81E4016@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
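A brief aside on r61190 above, which teaches ceval.c's opcode predictor that WITH_CLEANUP is followed by END_FINALLY: that pairing can be observed from pure Python by disassembling a with block. This is only an illustrative sketch; opcode names and offsets vary across interpreter versions, and the function below is not part of any checkin.

    import dis

    def f():
        # "somefile" is a made-up name; f is never called, since dis
        # only inspects the compiled bytecode.
        with open("somefile") as fp:
            fp.read()

    # On a 2.6-era interpreter the tail of the disassembly shows
    # WITH_CLEANUP immediately followed by END_FINALLY, the pair that
    # r61190 adds a prediction for.
    dis.dis(f)
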
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/440 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tarfile make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 03:38:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 02:38:28 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080303023828.E3CCB1E4016@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2635 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 03:41:41 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 03:41:41 +0100 (CET) Subject: [Python-checkins] r61192 - python/trunk/Lib/test/test_largefile.py Message-ID: <20080303024141.471BE1E4016@bag.python.org> Author: brett.cannon Date: Mon Mar 3 03:41:40 2008 New Revision: 61192 Modified: python/trunk/Lib/test/test_largefile.py Log: Move test_largefile over to using 'with' statements for open files. Also rename the driver function to test_main() instead of main_test(). Modified: python/trunk/Lib/test/test_largefile.py ============================================================================== --- python/trunk/Lib/test/test_largefile.py (original) +++ python/trunk/Lib/test/test_largefile.py Mon Mar 3 03:41:40 2008 @@ -24,13 +24,13 @@ class TestCase(unittest.TestCase): """Test that each file function works as expected for a large (i.e. > 2GB, do we have to check > 4GB) files. + """ def test_seek(self): if verbose: print 'create large file via seek (may be sparse file) ...' 
- f = open(TESTFN, 'wb') - try: + with open(TESTFN, 'wb') as f: f.write('z') f.seek(0) f.seek(size) @@ -39,8 +39,6 @@ if verbose: print 'check file size with os.fstat' self.assertEqual(os.fstat(f.fileno())[stat.ST_SIZE], size+1) - finally: - f.close() def test_osstat(self): if verbose: @@ -50,8 +48,7 @@ def test_seek_read(self): if verbose: print 'play around with seek() and read() with the built largefile' - f = open(TESTFN, 'rb') - try: + with open(TESTFN, 'rb') as f: self.assertEqual(f.tell(), 0) self.assertEqual(f.read(1), 'z') self.assertEqual(f.tell(), 1) @@ -80,14 +77,11 @@ f.seek(-size-1, 1) self.assertEqual(f.read(1), 'z') self.assertEqual(f.tell(), 1) - finally: - f.close() def test_lseek(self): if verbose: print 'play around with os.lseek() with the built largefile' - f = open(TESTFN, 'rb') - try: + with open(TESTFN, 'rb') as f: self.assertEqual(os.lseek(f.fileno(), 0, 0), 0) self.assertEqual(os.lseek(f.fileno(), 42, 0), 42) self.assertEqual(os.lseek(f.fileno(), 42, 1), 84) @@ -98,18 +92,15 @@ self.assertEqual(os.lseek(f.fileno(), size, 0), size) # the 'a' that was written at the end of file above self.assertEqual(f.read(1), 'a') - finally: - f.close() def test_truncate(self): if verbose: print 'try truncate' - f = open(TESTFN, 'r+b') - # this is already decided before start running the test suite - # but we do it anyway for extra protection - if not hasattr(f, 'truncate'): - raise TestSkipped, "open().truncate() not available on this system" - try: + with open(TESTFN, 'r+b') as f: + # this is already decided before start running the test suite + # but we do it anyway for extra protection + if not hasattr(f, 'truncate'): + raise TestSkipped, "open().truncate() not available on this system" f.seek(0, 2) # else we've lost track of the true size self.assertEqual(f.tell(), size+1) @@ -135,11 +126,9 @@ f.truncate(1) self.assertEqual(f.tell(), 0) # else pointer moved self.assertEqual(len(f.read()), 1) # else wasn't truncated - finally: - f.close() -def main_test(): +def test_main(): # On Windows and Mac OSX this test comsumes large resources; It # takes a long time to build the >2GB file and takes >2GB of disk # space therefore the resource must be enabled to run this test. @@ -170,14 +159,15 @@ suite.addTest(TestCase('test_osstat')) suite.addTest(TestCase('test_seek_read')) suite.addTest(TestCase('test_lseek')) - f = open(TESTFN, 'w') - if hasattr(f, 'truncate'): - suite.addTest(TestCase('test_truncate')) - f.close() - unlink(TESTFN) - run_unittest(suite) + with open(TESTFN, 'w') as f: + if hasattr(f, 'truncate'): + suite.addTest(TestCase('test_truncate')) unlink(TESTFN) + try: + run_unittest(suite) + finally: + unlink(TESTFN) if __name__ == '__main__': - main_test() + test_main() From python-checkins at python.org Mon Mar 3 04:24:48 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 04:24:48 +0100 (CET) Subject: [Python-checkins] r61194 - python/trunk/Lib/test/test_largefile.py Message-ID: <20080303032448.892301E4016@bag.python.org> Author: brett.cannon Date: Mon Mar 3 04:24:48 2008 New Revision: 61194 Modified: python/trunk/Lib/test/test_largefile.py Log: Add a note in the main test class' docstring that the order of execution of the tests is important. 
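An aside on r61192, a little further up: the conversion it performs is mechanical in that every try/finally whose only job is to close the file becomes a with block. A hedged before/after sketch (the path below is illustrative and not taken from test_largefile):

    import os

    path = "example.dat"  # illustrative scratch file, not TESTFN

    # Before r61192: explicit try/finally guarantees the file is closed.
    f = open(path, "wb")
    try:
        f.write("z")
    finally:
        f.close()

    # After r61192: the with statement closes the file even if the body
    # raises, which is exactly what the test now relies on.
    with open(path, "wb") as f:
        f.write("z")

    os.remove(path)
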
Modified: python/trunk/Lib/test/test_largefile.py ============================================================================== --- python/trunk/Lib/test/test_largefile.py (original) +++ python/trunk/Lib/test/test_largefile.py Mon Mar 3 04:24:48 2008 @@ -25,6 +25,10 @@ """Test that each file function works as expected for a large (i.e. > 2GB, do we have to check > 4GB) files. + NOTE: the order of execution of the test methods is important! test_seek + must run first to create the test file. File cleanup must also be handled + outside the test instances because of this. + """ def test_seek(self): From python-checkins at python.org Mon Mar 3 04:26:43 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 04:26:43 +0100 (CET) Subject: [Python-checkins] r61195 - python/trunk/Lib/test/test_pep247.py Message-ID: <20080303032643.7E1121E4016@bag.python.org> Author: brett.cannon Date: Mon Mar 3 04:26:43 2008 New Revision: 61195 Modified: python/trunk/Lib/test/test_pep247.py Log: Add a note in the main test class' docstring that the order of execution of the tests is important. Modified: python/trunk/Lib/test/test_pep247.py ============================================================================== --- python/trunk/Lib/test/test_pep247.py (original) +++ python/trunk/Lib/test/test_pep247.py Mon Mar 3 04:26:43 2008 @@ -10,6 +10,8 @@ DeprecationWarning) import md5, sha, hmac +from test.test_support import verbose + def check_hash_module(module, key=None): assert hasattr(module, 'digest_size'), "Must have digest_size" @@ -47,10 +49,15 @@ hd2 += "%02x" % ord(byte) assert hd2 == hexdigest, "hexdigest doesn't appear correct" - print 'Module', module.__name__, 'seems to comply with PEP 247' + if verbose: + print 'Module', module.__name__, 'seems to comply with PEP 247' -if __name__ == '__main__': +def test_main(): check_hash_module(md5) check_hash_module(sha) check_hash_module(hmac, key='abc') + + +if __name__ == '__main__': + test_main() From python-checkins at python.org Mon Mar 3 04:37:55 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 04:37:55 +0100 (CET) Subject: [Python-checkins] r61197 - peps/trunk/pep-3108.txt Message-ID: <20080303033755.7E7F41E4016@bag.python.org> Author: brett.cannon Date: Mon Mar 3 04:37:55 2008 New Revision: 61197 Modified: peps/trunk/pep-3108.txt Log: Remove test.testall from Py3K. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Mon Mar 3 04:37:55 2008 @@ -271,6 +271,10 @@ + Written before Pure Atria was bought by Rational which was then bought by IBM (in other words, very old). + +* test.testall [done] + + + From the days before regrtest. Obsolete From buildbot at python.org Mon Mar 3 04:53:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 03:53:06 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080303035306.76B851E4016@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. 
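On r61195 above: check_hash_module() is driving the PEP 247 hashing interface. A minimal sketch of that interface from the caller's side, using the (already deprecated) md5 module that the test itself imports; the input strings are illustrative only:

    import md5  # deprecated in 2.6, but this is what test_pep247 exercises

    h = md5.new()                # PEP 247: module-level new([string])
    assert h.digest_size == 16   # md5 digests are 16 bytes
    h.update("spam")             # update() feeds data incrementally
    clone = h.copy()             # copy() duplicates the current state
    clone.update("eggs")
    assert len(h.digest()) == h.digest_size
    assert h.hexdigest() != clone.hexdigest()
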
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/585 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: brett.cannon BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 05:19:30 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 05:19:30 +0100 (CET) Subject: [Python-checkins] r61198 - python/trunk/Lib/test/test_al.py python/trunk/Lib/test/test_audioop.py python/trunk/Lib/test/test_cd.py python/trunk/Lib/test/test_cl.py python/trunk/Lib/test/test_dbm.py python/trunk/Lib/test/test_gl.py python/trunk/Lib/test/test_imageop.py python/trunk/Lib/test/test_imgfile.py python/trunk/Lib/test/test_sunaudiodev.py Message-ID: <20080303041930.1488F1E4016@bag.python.org> Author: brett.cannon Date: Mon Mar 3 05:19:29 2008 New Revision: 61198 Modified: python/trunk/Lib/test/test_al.py python/trunk/Lib/test/test_audioop.py python/trunk/Lib/test/test_cd.py python/trunk/Lib/test/test_cl.py python/trunk/Lib/test/test_dbm.py python/trunk/Lib/test/test_gl.py python/trunk/Lib/test/test_imageop.py python/trunk/Lib/test/test_imgfile.py python/trunk/Lib/test/test_sunaudiodev.py Log: Add test_main() functions to various tests where it was simple to do. Done so that regrtest can execute the test_main() directly instead of relying on import side-effects. Modified: python/trunk/Lib/test/test_al.py ============================================================================== --- python/trunk/Lib/test/test_al.py (original) +++ python/trunk/Lib/test/test_al.py Mon Mar 3 05:19:29 2008 @@ -11,7 +11,7 @@ # This is a very unobtrusive test for the existence of the al module and all its # attributes. More comprehensive examples can be found in Demo/al -def main(): +def test_main(): # touch all the attributes of al without doing anything if verbose: print 'Touching al module attributes...' @@ -20,4 +20,6 @@ print 'touching: ', attr getattr(al, attr) -main() + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_audioop.py ============================================================================== --- python/trunk/Lib/test/test_audioop.py (original) +++ python/trunk/Lib/test/test_audioop.py Mon Mar 3 05:19:29 2008 @@ -269,7 +269,7 @@ if not rv: print 'Test FAILED for audioop.'+name+'()' -def testall(): +def test_main(): data = [gendata1(), gendata2(), gendata4()] names = dir(audioop) # We know there is a routine 'add' @@ -279,4 +279,8 @@ routines.append(n) for n in routines: testone(n, data) -testall() + + + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_cd.py ============================================================================== --- python/trunk/Lib/test/test_cd.py (original) +++ python/trunk/Lib/test/test_cd.py Mon Mar 3 05:19:29 2008 @@ -14,7 +14,7 @@ # attributes. More comprehensive examples can be found in Demo/cd and # require that you have a CD and a CD ROM drive -def main(): +def test_main(): # touch all the attributes of cd without doing anything if verbose: print 'Touching cd module attributes...' 
@@ -23,4 +23,7 @@ print 'touching: ', attr getattr(cd, attr) -main() + + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_cl.py ============================================================================== --- python/trunk/Lib/test/test_cl.py (original) +++ python/trunk/Lib/test/test_cl.py Mon Mar 3 05:19:29 2008 @@ -66,7 +66,7 @@ # This is a very inobtrusive test for the existence of the cl # module and all its attributes. -def main(): +def test_main(): # touch all the attributes of al without doing anything if verbose: print 'Touching cl module attributes...' @@ -75,4 +75,7 @@ print 'touching: ', attr getattr(cl, attr) -main() + + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_dbm.py ============================================================================== --- python/trunk/Lib/test/test_dbm.py (original) +++ python/trunk/Lib/test/test_dbm.py Mon Mar 3 05:19:29 2008 @@ -43,12 +43,18 @@ d = dbm.open(filename, 'n') d.close() -cleanup() -try: - test_keys() - test_modes() -except: +def test_main(): cleanup() - raise + try: + test_keys() + test_modes() + except: + cleanup() + raise -cleanup() + cleanup() + + + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_gl.py ============================================================================== --- python/trunk/Lib/test/test_gl.py (original) +++ python/trunk/Lib/test/test_gl.py Mon Mar 3 05:19:29 2008 @@ -81,7 +81,7 @@ 'xfpt4s', 'xfpti', 'xfpts', 'zbuffer', 'zclear', 'zdraw', 'zfunction', 'zsource', 'zwritemask'] -def main(): +def test_main(): # insure that we at least have an X display before continuing. import os try: @@ -147,4 +147,6 @@ print 'winclose' gl.winclose(w) -main() + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_imageop.py ============================================================================== --- python/trunk/Lib/test/test_imageop.py (original) +++ python/trunk/Lib/test/test_imageop.py Mon Mar 3 05:19:29 2008 @@ -11,7 +11,7 @@ import warnings -def main(): +def test_main(): # Create binary test files uu.decode(get_qualified_path('testrgb'+os.extsep+'uue'), 'test'+os.extsep+'rgb') @@ -145,4 +145,5 @@ return fullname return name -main() +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_imgfile.py ============================================================================== --- python/trunk/Lib/test/test_imgfile.py (original) +++ python/trunk/Lib/test/test_imgfile.py Mon Mar 3 05:19:29 2008 @@ -9,20 +9,6 @@ import imgfile, uu -def main(): - - uu.decode(findfile('testrgb.uue'), 'test.rgb') - uu.decode(findfile('greyrgb.uue'), 'greytest.rgb') - - # Test a 3 byte color image - testimage('test.rgb') - - # Test a 1 byte greyscale image - testimage('greytest.rgb') - - unlink('test.rgb') - unlink('greytest.rgb') - def testimage(name): """Run through the imgfile's battery of possible methods on the image passed in name. 
@@ -113,4 +99,20 @@ os.unlink(outputfile) -main() + +def test_main(): + + uu.decode(findfile('testrgb.uue'), 'test.rgb') + uu.decode(findfile('greyrgb.uue'), 'greytest.rgb') + + # Test a 3 byte color image + testimage('test.rgb') + + # Test a 1 byte greyscale image + testimage('greytest.rgb') + + unlink('test.rgb') + unlink('greytest.rgb') + +if __name__ == '__main__': + test_main() Modified: python/trunk/Lib/test/test_sunaudiodev.py ============================================================================== --- python/trunk/Lib/test/test_sunaudiodev.py (original) +++ python/trunk/Lib/test/test_sunaudiodev.py Mon Mar 3 05:19:29 2008 @@ -22,7 +22,11 @@ a.write(data) a.close() -def test(): + +def test_main(): play_sound_file(findfile('audiotest.au')) -test() + + +if __name__ == '__main__': + test_main() From python-checkins at python.org Mon Mar 3 05:37:45 2008 From: python-checkins at python.org (neal.norwitz) Date: Mon, 3 Mar 2008 05:37:45 +0100 (CET) Subject: [Python-checkins] r61199 - python/trunk/Modules/_sqlite/connection.c Message-ID: <20080303043745.67C4A1E4016@bag.python.org> Author: neal.norwitz Date: Mon Mar 3 05:37:45 2008 New Revision: 61199 Modified: python/trunk/Modules/_sqlite/connection.c Log: Only DECREF if ret != NULL Modified: python/trunk/Modules/_sqlite/connection.c ============================================================================== --- python/trunk/Modules/_sqlite/connection.c (original) +++ python/trunk/Modules/_sqlite/connection.c Mon Mar 3 05:37:45 2008 @@ -849,9 +849,9 @@ rc = 1; } else { rc = (int)PyObject_IsTrue(ret); + Py_DECREF(ret); } - Py_DECREF(ret); PyGILState_Release(gilstate); return rc; } From buildbot at python.org Mon Mar 3 06:10:45 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 05:10:45 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080303051046.122761E4016@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/684 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: alexandre.vassalotti,brett.cannon BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 06:42:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 05:42:14 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080303054214.D28B31E4016@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/73 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: alexandre.vassalotti,brett.cannon BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_urllibnet make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 06:49:07 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 05:49:07 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080303054907.2C8BC1E4016@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/444 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 06:52:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 05:52:55 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080303055255.C2BD11E4016@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2963 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 07:38:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 06:38:03 +0000 Subject: [Python-checkins] buildbot failure in alpha Debian 3.0 Message-ID: <20080303063803.66D4D1E4016@bag.python.org> The Buildbot has detected a new failure of alpha Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Debian%203.0/builds/51 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-alpha Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: brett.cannon,christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 08:19:52 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 3 Mar 2008 08:19:52 +0100 (CET) Subject: [Python-checkins] r61200 - tracker/roundup-src/roundup/cgi/client.py Message-ID: <20080303071952.B8E191E401D@bag.python.org> Author: martin.v.loewis Date: Mon Mar 3 08:19:52 2008 New Revision: 61200 Modified: tracker/roundup-src/roundup/cgi/client.py Log: Protect all session cleaning against concurrent access. Modified: tracker/roundup-src/roundup/cgi/client.py ============================================================================== --- tracker/roundup-src/roundup/cgi/client.py (original) +++ tracker/roundup-src/roundup/cgi/client.py Mon Mar 3 08:19:52 2008 @@ -359,9 +359,9 @@ except ImportError: ProgrammingError = None - sessions.clean(now) - self.db.getOTKManager().clean(now) try: + sessions.clean(now) + self.db.getOTKManager().clean(now) sessions.set('last_clean', last_use=time.time()) except ProgrammingError, err: response = str(err).split('\n')[0] From buildbot at python.org Mon Mar 3 08:54:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 07:54:20 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080303075420.DD6DB1E4008@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/681 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: brett.cannon,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_smtplib sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 08:56:31 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 3 Mar 2008 08:56:31 +0100 (CET) Subject: [Python-checkins] r61201 - tracker/roundup-src/roundup/cgi/templating.py Message-ID: <20080303075631.0E4511E4008@bag.python.org> Author: martin.v.loewis Date: Mon Mar 3 08:56:30 2008 New Revision: 61201 Modified: tracker/roundup-src/roundup/cgi/templating.py Log: Strip values of a multilink parameter after splitting at commas. Fixes #148. Modified: tracker/roundup-src/roundup/cgi/templating.py ============================================================================== --- tracker/roundup-src/roundup/cgi/templating.py (original) +++ tracker/roundup-src/roundup/cgi/templating.py Mon Mar 3 08:56:30 2008 @@ -2085,7 +2085,7 @@ value = value.value.strip() if not value: return [] - return value.split(',') + return [v.strip() for v in value.split(',')] class HTMLRequest(HTMLInputMixin): '''The *request*, holding the CGI form and environment. From python-checkins at python.org Mon Mar 3 13:37:20 2008 From: python-checkins at python.org (marc-andre.lemburg) Date: Mon, 3 Mar 2008 13:37:20 +0100 (CET) Subject: [Python-checkins] r61202 - peps/trunk/pep-0249.txt Message-ID: <20080303123720.740DF1E4026@bag.python.org> Author: marc-andre.lemburg Date: Mon Mar 3 13:37:19 2008 New Revision: 61202 Modified: peps/trunk/pep-0249.txt Log: Added optional two-phase commit (TPC) API extension as proposed by James Henstridge and discussed on the DB-SIG. Lots of small text edits (XXX->*, prepend all methods with a dot, more indents, etc.). Updated note on the Python datetime module objects. 
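Before the full pep-0249.txt diff below, a minimal usage sketch of the two-phase commit (TPC) extension being added, written against the API as specified in the diff. The module name somedb is hypothetical, and the DSN, credentials and transaction-ID values are illustrative only (the connect() call reuses the example from the PEP's own footnote), not part of the specification:

    import somedb   # hypothetical DB API 2.0 module with TPC support

    conn = somedb.connect(dsn='myhost:MYDB', user='guido', password='234$')

    # Build a transaction ID from a format ID, a global transaction ID and
    # a branch qualifier, as described in the TPC section of the spec.
    xid = conn.xid(42, 'global-txn-1', 'branch-1')

    conn.tpc_begin(xid)
    cur = conn.cursor()
    cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")

    # Phase one: prepare.  After this, no further statements may be executed
    # until tpc_commit() or tpc_rollback() ends the transaction.
    conn.tpc_prepare()

    # Phase two: commit.  A transaction manager would normally issue this only
    # after every participating resource has prepared successfully.
    conn.tpc_commit()

    # During recovery, pending transactions can be listed and finished:
    for pending in conn.tpc_recover():
        conn.tpc_rollback(pending)
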
Modified: peps/trunk/pep-0249.txt ============================================================================== --- peps/trunk/pep-0249.txt (original) +++ peps/trunk/pep-0249.txt Mon Mar 3 13:37:19 2008 @@ -27,11 +27,12 @@ * Implementation Hints for Module Authors * Optional DB API Extensions * Optional Error Handling Extensions + * Optional Two-Phase Commit Extensions * Frequently Asked Questions * Major Changes from Version 1.0 to Version 2.0 * Open Issues * Footnotes - * Acknowledgements + * Acknowledgments Comments and questions about this specification may be directed to the SIG for Database Interfacing with Python @@ -242,14 +243,14 @@ Cursor Objects - These objects represent a database cursor, which is used to - manage the context of a fetch operation. Cursors created from - the same connection are not isolated, i.e., any changes - done to the database by a cursor are immediately visible by the - other cursors. Cursors created from different connections can - or can not be isolated, depending on how the transaction support - is implemented (see also the connection's rollback() and commit() - methods.) + These objects represent a database cursor, which is used to manage + the context of a fetch operation. Cursors created from the same + connection are not isolated, i.e., any changes done to the + database by a cursor are immediately visible by the other + cursors. Cursors created from different connections can or can not + be isolated, depending on how the transaction support is + implemented (see also the connection's .rollback() and .commit() + methods). Cursor Objects should respond to the following methods and attributes: @@ -257,16 +258,26 @@ .description This read-only attribute is a sequence of 7-item - sequences. Each of these sequences contains information - describing one result column: (name, type_code, - display_size, internal_size, precision, scale, - null_ok). The first two items (name and type_code) are - mandatory, the other five are optional and must be set to - None if meaningfull values are not provided. + sequences. + + Each of these sequences contains information describing + one result column: + + (name, + type_code, + display_size, + internal_size, + precision, + scale, + null_ok) + + The first two items (name and type_code) are mandatory, + the other five are optional and are set to None if no + meaningful values can be provided. This attribute will be None for operations that do not return rows or if the cursor has not had an - operation invoked via the executeXXX() method yet. + operation invoked via the .execute*() method yet. The type_code can be interpreted by comparing it to the Type Objects specified in the section below. @@ -274,13 +285,13 @@ .rowcount This read-only attribute specifies the number of rows that - the last executeXXX() produced (for DQL statements like + the last .execute*() produced (for DQL statements like 'select') or affected (for DML statements like 'update' or 'insert'). - The attribute is -1 in case no executeXXX() has been + The attribute is -1 in case no .execute*() has been performed on the cursor or the rowcount of the last - operation is not determinable by the interface. [7] + operation is cannot be determined by the interface. [7] Note: Future versions of the DB API specification could redefine the latter case to have the object return None @@ -300,7 +311,7 @@ The procedure may also provide a result set as output. This must then be made available through the - standard fetchXXX() methods. + standard .fetch*() methods. 
.close() @@ -324,7 +335,7 @@ but different parameters are bound to it (many times). For maximum efficiency when reusing an operation, it is - best to use the setinputsizes() method to specify the + best to use the .setinputsizes() method to specify the parameter types and sizes ahead of time. It is legal for a parameter to not match the predefined information; the implementation should compensate, possibly with a loss of @@ -332,7 +343,7 @@ The parameters may also be specified as list of tuples to e.g. insert multiple rows in a single operation, but this - kind of usage is deprecated: executemany() should be used + kind of usage is deprecated: .executemany() should be used instead. Return values are not defined. @@ -344,7 +355,7 @@ found in the sequence seq_of_parameters. Modules are free to implement this method using multiple - calls to the execute() method or by using array operations + calls to the .execute() method or by using array operations to have the database process the sequence as a whole in one call. @@ -354,7 +365,7 @@ an exception when it detects that a result set has been created by an invocation of the operation. - The same comments as for execute() also apply accordingly + The same comments as for .execute() also apply accordingly to this method. Return values are not defined. @@ -366,10 +377,10 @@ available. [6] An Error (or subclass) exception is raised if the previous - call to executeXXX() did not produce any result set or no + call to .execute*() did not produce any result set or no call was issued yet. - fetchmany([size=cursor.arraysize]) + .fetchmany([size=cursor.arraysize]) Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty @@ -384,14 +395,14 @@ returned. An Error (or subclass) exception is raised if the previous - call to executeXXX() did not produce any result set or no + call to .execute*() did not produce any result set or no call was issued yet. Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the - same value from one fetchmany() call to the next. + same value from one .fetchmany() call to the next. .fetchall() @@ -401,7 +412,7 @@ performance of this operation. An Error (or subclass) exception is raised if the previous - call to executeXXX() did not produce any result set or no + call to .execute*() did not produce any result set or no call was issued yet. .nextset() @@ -419,23 +430,23 @@ result set. An Error (or subclass) exception is raised if the previous - call to executeXXX() did not produce any result set or no + call to .execute*() did not produce any result set or no call was issued yet. .arraysize This read/write attribute specifies the number of rows to - fetch at a time with fetchmany(). It defaults to 1 meaning - to fetch a single row at a time. + fetch at a time with .fetchmany(). It defaults to 1 + meaning to fetch a single row at a time. Implementations must observe this value with respect to - the fetchmany() method, but are free to interact with the + the .fetchmany() method, but are free to interact with the database a single row at a time. It may also be used in - the implementation of executemany(). + the implementation of .executemany(). 
.setinputsizes(sizes) - This can be used before a call to executeXXX() to + This can be used before a call to .execute*() to predefine memory areas for the operation's parameters. sizes is specified as a sequence -- one item for each @@ -446,7 +457,7 @@ area will be reserved for that column (this is useful to avoid predefined areas for large inputs). - This method would be used before the executeXXX() method + This method would be used before the .execute*() method is invoked. Implementations are free to have this method do nothing @@ -460,7 +471,7 @@ will set the default size for all large columns in the cursor. - This method would be used before the executeXXX() method + This method would be used before the .execute*() method is invoked. Implementations are free to have this method do nothing @@ -475,7 +486,7 @@ database in a particular string format. Similar problems exist for "Row ID" columns or large binary items (e.g. blobs or RAW columns). This presents problems for Python since the parameters - to the executeXXX() method are untyped. When the database module + to the .execute*() method are untyped. When the database module sees a Python string object, it doesn't know if it should be bound as a simple CHAR column, as a raw BINARY item, or as a DATE. @@ -567,23 +578,11 @@ Implementation Hints for Module Authors - * The preferred object types for the date/time objects are those - defined in the mxDateTime package. It provides all necessary - constructors and methods both at Python and C level. - - * The preferred object type for Binary objects are the - buffer types available in standard Python starting with - version 1.5.2. Please see the Python documentation for - details. For information about the C interface have a - look at Include/bufferobject.h and - Objects/bufferobject.c in the Python source - distribution. - - * Starting with Python 2.3, module authors can also use the object - types defined in the standard datetime module for date/time - processing. However, it should be noted that this does not - expose a C API like mxDateTime does which means that integration - with C based database modules is more difficult. + * Date/time objects can be implemented as Python datetime module + objects (available since Python 2.3, with a C API since 2.4) or + using the mxDateTime package (available for all Python versions + since 1.5.2). They both provide all necessary constructors and + methods at Python and C level. * Here is a sample implementation of the Unix ticks based constructors for date/time delegating work to the generic @@ -592,13 +591,21 @@ import time def DateFromTicks(ticks): - return apply(Date,time.localtime(ticks)[:3]) + return Date(*time.localtime(ticks)[:3]) def TimeFromTicks(ticks): - return apply(Time,time.localtime(ticks)[3:6]) + return Time(*time.localtime(ticks)[3:6]) def TimestampFromTicks(ticks): - return apply(Timestamp,time.localtime(ticks)[:6]) + return Timestamp(*time.localtime(ticks)[:6]) + + * The preferred object type for Binary objects are the + buffer types available in standard Python starting with + version 1.5.2. Please see the Python documentation for + details. For information about the C interface have a + look at Include/bufferobject.h and + Objects/bufferobject.c in the Python source + distribution. 
* This Python class allows implementing the above type objects even though the description type code field yields @@ -675,17 +682,18 @@ It has been proposed to make usage of these extensions optionally visible to the programmer by issuing Python warnings through the Python warning framework. To make this feature useful, the warning - messages must be standardized in order to be able to mask them. These - standard messages are referred to below as "Warning Message". + messages must be standardized in order to be able to mask + them. These standard messages are referred to below as "Warning + Message". Cursor Attribute .rownumber This read-only attribute should provide the current 0-based - index of the cursor in the result set or None if the index cannot - be determined. + index of the cursor in the result set or None if the index + cannot be determined. - The index can be seen as index of the cursor in a sequence (the - result set). The next fetch operation will fetch the row + The index can be seen as index of the cursor in a sequence + (the result set). The next fetch operation will fetch the row indexed by .rownumber in that sequence. Warning Message: "DB-API extension cursor.rownumber used" @@ -740,7 +748,7 @@ this cursor. The list is cleared by all standard cursor methods calls (prior - to executing the call) except for the .fetchXXX() calls + to executing the call) except for the .fetch*() calls automatically to avoid excessive memory usage and can also be cleared by executing "del cursor.messages[:]". @@ -779,7 +787,8 @@ Cursor Method .__iter__() - Return self to make cursors compatible to the iteration protocol. + Return self to make cursors compatible to the iteration + protocol [8]. Warning Message: "DB-API extension cursor.__iter__() used" @@ -812,31 +821,151 @@ Cursor/Connection Attribute .errorhandler - Read/write attribute which references an error handler to call - in case an error condition is met. + Read/write attribute which references an error handler to call + in case an error condition is met. - The handler must be a Python callable taking the following - arguments: errorhandler(connection, cursor, errorclass, - errorvalue) where connection is a reference to the connection - on which the cursor operates, cursor a reference to the cursor - (or None in case the error does not apply to a cursor), - errorclass is an error class which to instantiate using - errorvalue as construction argument. - - The standard error handler should add the error information to - the appropriate .messages attribute (connection.messages or - cursor.messages) and raise the exception defined by the given - errorclass and errorvalue parameters. + The handler must be a Python callable taking the following + arguments: - If no errorhandler is set (the attribute is None), the standard - error handling scheme as outlined above, should be applied. + errorhandler(connection, cursor, errorclass, errorvalue) - Warning Message: "DB-API extension .errorhandler used" + where connection is a reference to the connection on which the + cursor operates, cursor a reference to the cursor (or None in + case the error does not apply to a cursor), errorclass is an + error class which to instantiate using errorvalue as + construction argument. + + The standard error handler should add the error information to + the appropriate .messages attribute (connection.messages or + cursor.messages) and raise the exception defined by the given + errorclass and errorvalue parameters. 
+ + If no errorhandler is set (the attribute is None), the + standard error handling scheme as outlined above, should be + applied. + + Warning Message: "DB-API extension .errorhandler used" Cursors should inherit the .errorhandler setting from their connection objects at cursor creation time. +Optional Two-Phase Commit Extensions + + Many databases have support for two-phase commit (TPC) which + allows managing transactions across multiple database connections + and other resources. + + If a database backend provides support for two-phase commit and + the database module author wishes to expose this support, the + following API should be implemented. NotSupportedError should be + raised, if the database backend support for two-phase commit + can only be checked at run-time. + + TPC Transaction IDs + + As many databases follow the XA specification, transaction IDs + are formed from three components: + + * a format ID + * a global transaction ID + * a branch qualifier + + For a particular global transaction, the first two components + should be the same for all resources. Each resource in the + global transaction should be assigned a different branch + qualifier. + + The various components must satisfy the following criteria: + + * format ID: a non-negative 32-bit integer. + + * global transaction ID and branch qualifier: byte strings no + longer than 64 characters. + + Transaction IDs are created with the .xid() connection method: + + .xid(format_id, global_transaction_id, branch_qualifier) + + Returns a transaction ID object suitable for passing to the + .tpc_*() methods of this connection. + + If the database connection does not support TPC, a + NotSupportedError is raised. + + The type of the object returned by .xid() is not defined, but + it must provide sequence behaviour, allowing access to the + three components. A conforming database module could choose + to represent transaction IDs with tuples rather than a custom + object. + + TPC Connection Methods + + .tpc_begin(xid) + + Begins a TPC transaction with the given transaction ID xid. + + This method should be called outside of a transaction + (i.e. nothing may have executed since the last .commit() or + .rollback()). + + Furthermore, it is an error to call .commit() or .rollback() + within the TPC transaction. A ProgrammingError is raised, if + the application calls .commit() or .rollback() during an + active TPC transaction. + + If the database connection does not support TPC, a + NotSupportedError is raised. + + .tpc_prepare() + + Performs the first phase of a transaction started with + .tpc_begin(). A ProgrammingError should be raised if this + method outside of a TPC transaction. + + After calling .tpc_prepare(), no statements can be executed + until tpc_commit() or tpc_rollback() have been called. + + .tpc_commit([xid]) + + When called with no arguments, .tpc_commit() commits a TPC + transaction previously prepared with .tpc_prepare(). + + If .tpc_commit() is called prior to .tpc_prepare(), a single + phase commit is performed. A transaction manager may choose + to do this if only a single resource is participating in the + global transaction. + + When called with a transaction ID xid, the database commits + the given transaction. If an invalid transaction ID is + provided, a ProgrammingError will be raised. This form should + be called outside of a transaction, and is intended for use in + recovery. + + On return, the TPC transaction is ended. 
+ + .tpc_rollback([xid]) + + When called with no arguments, .tpc_rollback() rolls back a + TPC transaction. It may be called before or after + .tpc_prepare(). + + When called with a transaction ID xid, it rolls back the given + transaction. If an invalid transaction ID is provided, a + ProgrammingError is raised. This form should be called + outside of a transaction, and is intended for use in recovery. + + On return, the TPC transaction is ended. + + .tpc_recover() + + Returns a list of pending transaction IDs suitable for use + with .tpc_commit(xid) or .tpc_rollback(xid). + + If the database does not support transaction recovery, it may + return an empty list or raise NotSupportedError. + + Frequently Asked Questions The database SIG often sees reoccurring questions about the DB API @@ -846,7 +975,7 @@ Question: How can I construct a dictionary out of the tuples returned by - .fetchxxx(): + .fetch*(): Answer: @@ -856,7 +985,7 @@ as basis for the keys in the row dictionary. Note that the reason for not extending the DB API specification - to also support dictionary return values for the .fetchxxx() + to also support dictionary return values for the .fetch*() methods is that this approach has several drawbacks: * Some databases don't support case-sensitive column names or @@ -891,8 +1020,8 @@ found in modern SQL databases. * New constants (apilevel, threadlevel, paramstyle) and - methods (executemany, nextset) were added to provide better - database bindings. + methods (.executemany(), .nextset()) were added to provide + better database bindings. * The semantics of .callproc() needed to call stored procedures are now clearly defined. @@ -927,8 +1056,8 @@ * Define a useful return value for .nextset() for the case where a new result set is available. - * Create a fixed point numeric type for use as loss-less - monetary and decimal interchange format. + * Integrate the decimal module Decimal object for use as + loss-less monetary and decimal interchange format. Footnotes @@ -937,15 +1066,15 @@ implemented as keyword parameters for more intuitive use and follow this order of parameters: - dsn Data source name as string - user User name as string (optional) - password Password as string (optional) - host Hostname (optional) - database Database name (optional) + dsn Data source name as string + user User name as string (optional) + password Password as string (optional) + host Hostname (optional) + database Database name (optional) E.g. a connect could look like this: - connect(dsn='myhost:MYDB',user='guido',password='234$') + connect(dsn='myhost:MYDB',user='guido',password='234$') [2] Module implementors should prefer 'numeric', 'named' or 'pyformat' over the other formats because these offer more @@ -970,7 +1099,7 @@ [4] a database interface may choose to support named cursors by allowing a string argument to the method. This feature is not part of the specification, since it complicates - semantics of the .fetchXXX() methods. + semantics of the .fetch*() methods. [5] The module will use the __getitem__ method of the parameters object to map either positions (integers) or names (strings) @@ -992,7 +1121,11 @@ [7] The rowcount attribute may be coded in a way that updates its value dynamically. This can be useful for databases that return usable rowcount values only after the first call to - a .fetchXXX() method. + a .fetch*() method. + + [8] Implementation Note: Python C extensions will have to + implement the tp_iter slot on the cursor object instead of the + .__iter__() method. 
Acknowledgements @@ -1000,6 +1133,9 @@ Database API Specification 2.0 from the original HTML format into the PEP format. + Many thanks to James Henstridge for leading the discussion which + led to the standardization of the two-phase commit API extensions. + Copyright This document has been placed in the Public Domain. From python-checkins at python.org Mon Mar 3 13:40:18 2008 From: python-checkins at python.org (christian.heimes) Date: Mon, 3 Mar 2008 13:40:18 +0100 (CET) Subject: [Python-checkins] r61203 - python/trunk Message-ID: <20080303124018.27FC41E4008@bag.python.org> Author: christian.heimes Date: Mon Mar 3 13:40:17 2008 New Revision: 61203 Modified: python/trunk/ (props changed) Log: Initialized merge tracking via "svnmerge" with revisions "1-60195" from svn+ssh://pythondev at svn.python.org/python/branches/trunk-math From buildbot at python.org Mon Mar 3 14:04:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 13:04:28 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080303130428.7AA511E4008@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/769 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 19:28:05 2008 From: python-checkins at python.org (christian.heimes) Date: Mon, 3 Mar 2008 19:28:05 +0100 (CET) Subject: [Python-checkins] r61204 - in python/trunk: Doc/library/inspect.rst Lib/inspect.py Lib/test/regrtest.py Lib/test/test_abc.py Misc/NEWS Message-ID: <20080303182805.3F95B1E400A@bag.python.org> Author: christian.heimes Date: Mon Mar 3 19:28:04 2008 New Revision: 61204 Modified: python/trunk/Doc/library/inspect.rst python/trunk/Lib/inspect.py python/trunk/Lib/test/regrtest.py python/trunk/Lib/test/test_abc.py python/trunk/Misc/NEWS Log: Since abc._Abstract was replaces by a new type flags the regression test suite fails. I've added a new function inspect.isabstract(). Is the mmethod fine or should I check if object is a instance of type or subclass of object, too? Modified: python/trunk/Doc/library/inspect.rst ============================================================================== --- python/trunk/Doc/library/inspect.rst (original) +++ python/trunk/Doc/library/inspect.rst Mon Mar 3 19:28:04 2008 @@ -307,6 +307,12 @@ Return true if the object is a user-defined or built-in function or method. +.. function:: isabstract(object) + + Return true if the object is an abstract base class. + + .. versionadded:: 2.6 + .. function:: ismethoddescriptor(object) Modified: python/trunk/Lib/inspect.py ============================================================================== --- python/trunk/Lib/inspect.py (original) +++ python/trunk/Lib/inspect.py Mon Mar 3 19:28:04 2008 @@ -38,11 +38,15 @@ import imp import tokenize import linecache +from abc import ABCMeta from operator import attrgetter from collections import namedtuple from compiler.consts import (CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_GENERATOR) +# See Include/object.h +TPFLAGS_IS_ABSTRACT = 1 << 20 + # ----------------------------------------------------------- type-checking def ismodule(object): """Return true if the object is a module. 
@@ -241,6 +245,10 @@ """Return true if the object is a generator object.""" return isinstance(object, types.GeneratorType) +def isabstract(object): + """Return true if the object is an abstract base class (ABC).""" + return object.__flags__ & TPFLAGS_IS_ABSTRACT + def getmembers(object, predicate=None): """Return all members of an object as (name, value) pairs sorted by name. Optionally, only return members that satisfy a given predicate.""" Modified: python/trunk/Lib/test/regrtest.py ============================================================================== --- python/trunk/Lib/test/regrtest.py (original) +++ python/trunk/Lib/test/regrtest.py Mon Mar 3 19:28:04 2008 @@ -129,6 +129,7 @@ import re import cStringIO import traceback +from inspect import isabstract # I see no other way to suppress these warnings; # putting them in test_grammar.py has no effect: @@ -649,7 +650,6 @@ def dash_R(the_module, test, indirect_test, huntrleaks): # This code is hackish and inelegant, but it seems to do the job. import copy_reg, _abcoll - from abc import _Abstract if not hasattr(sys, 'gettotalrefcount'): raise Exception("Tracking reference leaks requires a debug build " @@ -661,7 +661,7 @@ pic = sys.path_importer_cache.copy() abcs = {} for abc in [getattr(_abcoll, a) for a in _abcoll.__all__]: - if not issubclass(abc, _Abstract): + if not isabstract(abc): continue for obj in abc.__subclasses__() + [abc]: abcs[obj] = obj._abc_registry.copy() @@ -699,7 +699,6 @@ import _strptime, linecache, dircache import urlparse, urllib, urllib2, mimetypes, doctest import struct, filecmp, _abcoll - from abc import _Abstract from distutils.dir_util import _path_created # Restore some original values. @@ -714,7 +713,7 @@ # Clear ABC registries, restoring previously saved ABC registries. for abc in [getattr(_abcoll, a) for a in _abcoll.__all__]: - if not issubclass(abc, _Abstract): + if not isabstract(abc): continue for obj in abc.__subclasses__() + [abc]: obj._abc_registry = abcs.get(obj, {}).copy() Modified: python/trunk/Lib/test/test_abc.py ============================================================================== --- python/trunk/Lib/test/test_abc.py (original) +++ python/trunk/Lib/test/test_abc.py Mon Mar 3 19:28:04 2008 @@ -7,6 +7,7 @@ from test import test_support import abc +from inspect import isabstract class TestABC(unittest.TestCase): @@ -43,19 +44,23 @@ def bar(self): pass # concrete self.assertEqual(C.__abstractmethods__, set(["foo"])) self.assertRaises(TypeError, C) # because foo is abstract + self.assert_(isabstract(C)) class D(C): def bar(self): pass # concrete override of concrete self.assertEqual(D.__abstractmethods__, set(["foo"])) self.assertRaises(TypeError, D) # because foo is still abstract + self.assert_(isabstract(D)) class E(D): def foo(self): pass self.assertEqual(E.__abstractmethods__, set()) E() # now foo is concrete, too + self.failIf(isabstract(E)) class F(E): @abstractthing def bar(self): pass # abstract override of concrete self.assertEqual(F.__abstractmethods__, set(["bar"])) self.assertRaises(TypeError, F) # because bar is abstract now + self.assert_(isabstract(F)) def test_subclass_oldstyle_class(self): class A: Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Mon Mar 3 19:28:04 2008 @@ -447,6 +447,8 @@ Library ------- +- Add inspect.isabstract(object) to fix bug #2223 + - Add a __format__ method to Decimal, to support PEP 3101. 
- Add a timing parameter when using trace.Trace to print out timestamps. From buildbot at python.org Mon Mar 3 19:56:30 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 18:56:30 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080303185630.44C721E400A@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/308 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_inspect.py", line 196, in test_getmodule 
self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:07:45 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:07:45 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080303190746.1F5781E400A@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/938 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_inspect.py", line 190, in test_getcomments 
self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:13:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:13:22 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080303191322.517261E400A@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/446 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/inspect.py", line 405, in getfile 
raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:17:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:17:28 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080303191728.34F541E400A@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/2907 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions 
sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:27:58 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:27:58 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu trunk Message-ID: <20080303192758.C86171E400A@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%20trunk/builds/1552 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != 
====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:37:21 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:37:21 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080303193721.D6A5A1E400C@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/134 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' 
====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:46:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:46:19 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080303194619.445291E401D@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/137 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_inspect test_timeout ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_inspect.py", line 68, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 20:52:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 19:52:47 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080303195247.411411E400A@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/617 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_inspect.py", line 68, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 21:03:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 20:03:01 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20080303200301.BB5721E400C@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/643 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_inspect.py", line 68, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 21:15:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 20:15:03 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu 3.0 Message-ID: <20080303201503.A35581E400A@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%203.0/builds/588 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ia64/build/Lib/test/test_inspect.py", line 68, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 21:27:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 20:27:59 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080303202800.089C91E4014@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/303 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments 
(test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 59, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 21:30:30 2008 From: python-checkins at python.org (christian.heimes) Date: Mon, 3 Mar 2008 21:30:30 +0100 (CET) Subject: [Python-checkins] r61207 - python/trunk/Lib/test/test_inspect.py Message-ID: <20080303203030.0BAB11E400A@bag.python.org> Author: christian.heimes Date: Mon Mar 3 21:30:29 2008 New Revision: 61207 Modified: python/trunk/Lib/test/test_inspect.py Log: 15 -> 16 Modified: python/trunk/Lib/test/test_inspect.py ============================================================================== --- python/trunk/Lib/test/test_inspect.py (original) +++ python/trunk/Lib/test/test_inspect.py Mon Mar 3 21:30:29 2008 @@ -50,11 +50,11 @@ yield i class TestPredicates(IsTestBase): - def test_fifteen(self): + def test_sixteen(self): count = len(filter(lambda x:x.startswith('is'), dir(inspect))) # This test is here for remember you to update Doc/library/inspect.rst # which claims there are 15 such functions - expected = 15 + expected = 16 err_msg = "There are %d (not %d) is* functions" % (count, expected) self.assertEqual(count, expected, err_msg) From buildbot at python.org Mon Mar 3 21:31:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 20:31:12 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080303203113.763E61E400A@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/150 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== ERROR: test_getfile (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_inspect.py", line 215, in test_getfile self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== ERROR: test_getsource (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_inspect.py", line 208, in test_getsource self.assertSourceEqual(mod.StupidGit, 21, 46) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_inspect.py", line 152, in assertSourceEqual self.assertEqual(inspect.getsource(obj), File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/inspect.py", line 684, in getsource lines, lnum = getsourcelines(object) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/inspect.py", line 673, in getsourcelines lines, lnum = findsource(object) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/inspect.py", line 516, in findsource file = getsourcefile(object) or getfile(object) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/inspect.py", line 438, in getsourcefile filename = getfile(object) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/inspect.py", line 405, in getfile raise TypeError('arg is a built-in class') TypeError: arg is a built-in class ====================================================================== FAIL: test_getcomments (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_inspect.py", line 190, in test_getcomments self.assertEqual(inspect.getcomments(mod.StupidGit), '# line 20\n') AssertionError: None != '# line 20\n' ====================================================================== FAIL: test_getmodule (test.test_inspect.TestRetrievingSourceCode) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_inspect.py", line 196, in test_getmodule self.assertEqual(inspect.getmodule(mod.StupidGit), mod) AssertionError: None != ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_inspect.py", line 59, in test_fifteen 
self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 21:37:55 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 3 Mar 2008 21:37:55 +0100 (CET) Subject: [Python-checkins] r61209 - python/trunk/Doc/library/inspect.rst Message-ID: <20080303203755.898061E400A@bag.python.org> Author: georg.brandl Date: Mon Mar 3 21:37:55 2008 New Revision: 61209 Modified: python/trunk/Doc/library/inspect.rst Log: There are now sixteen isfoo functions. Modified: python/trunk/Doc/library/inspect.rst ============================================================================== --- python/trunk/Doc/library/inspect.rst (original) +++ python/trunk/Doc/library/inspect.rst Mon Mar 3 21:37:55 2008 @@ -28,7 +28,7 @@ ----------------- The :func:`getmembers` function retrieves the members of an object such as a -class or module. The fifteen functions whose names begin with "is" are mainly +class or module. The sixteen functions whose names begin with "is" are mainly provided as convenient choices for the second argument to :func:`getmembers`. They also help you determine when you can expect to find the following special attributes: From barry at python.org Mon Mar 3 21:38:12 2008 From: barry at python.org (Barry Warsaw) Date: Mon, 3 Mar 2008 15:38:12 -0500 Subject: [Python-checkins] [Python-3000] RELEASED Python 2.6a1 and 3.0a3 In-Reply-To: References: <6E72CEB8-D3BF-4440-A1EA-1A3D545CC8DB@python.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Mar 2, 2008, at 12:56 AM, Guido van Rossum wrote: > Thanks so much for getting the releases out!! This is a huge step > forward. I think the release process went really well, all considered. > (If you want my 2 cents, I vote for a command-line version of welease > too.) > > Just one nit: There's no mention of the releases on the python.org > front page. I think this is a matter of updating data/newsindex.yml in > the website's svn. Someone fixed that... thanks! I need to update the website twiddling sections of PEP 101. 
- -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.8 (Darwin) iQCVAwUBR8xhtXEjvBPtnXfVAQJYqgP/bcuMb0ARb+J43/rSCB5jr0bxvSD7HWBP YcuA72VmEcLkd7POQycKGOmSSQMkE0eq8jtwkEqxrEp1piWVB6qboHi1FeTqOWHz BFwAXbrAmrxC/b4mIbk4TjAHhe666Kyu/uHDvG8mfTdrECyPSgM0lyMnSd5UHJMa sRhY4Pb4avQ= =QwPB -----END PGP SIGNATURE----- From g.brandl at gmx.net Mon Mar 3 21:40:54 2008 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 03 Mar 2008 21:40:54 +0100 Subject: [Python-checkins] r61207 - python/trunk/Lib/test/test_inspect.py In-Reply-To: <20080303203030.0BAB11E400A@bag.python.org> References: <20080303203030.0BAB11E400A@bag.python.org> Message-ID: christian.heimes schrieb: > Author: christian.heimes > Date: Mon Mar 3 21:30:29 2008 > New Revision: 61207 > > Modified: > python/trunk/Lib/test/test_inspect.py > Log: > 15 -> 16 > > Modified: python/trunk/Lib/test/test_inspect.py > ============================================================================== > --- python/trunk/Lib/test/test_inspect.py (original) > +++ python/trunk/Lib/test/test_inspect.py Mon Mar 3 21:30:29 2008 > @@ -50,11 +50,11 @@ > yield i > > class TestPredicates(IsTestBase): > - def test_fifteen(self): > + def test_sixteen(self): > count = len(filter(lambda x:x.startswith('is'), dir(inspect))) > # This test is here for remember you to update Doc/library/inspect.rst > # which claims there are 15 such functions > - expected = 15 > + expected = 16 > err_msg = "There are %d (not %d) is* functions" % (count, expected) > self.assertEqual(count, expected, err_msg) You somehow missed the point of this test :) Georg From python-checkins at python.org Mon Mar 3 21:39:00 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 3 Mar 2008 21:39:00 +0100 (CET) Subject: [Python-checkins] r61210 - python/trunk/Lib/test/test_inspect.py Message-ID: <20080303203900.EA18E1E400C@bag.python.org> Author: georg.brandl Date: Mon Mar 3 21:39:00 2008 New Revision: 61210 Modified: python/trunk/Lib/test/test_inspect.py Log: 15 -> 16, the 2nd Modified: python/trunk/Lib/test/test_inspect.py ============================================================================== --- python/trunk/Lib/test/test_inspect.py (original) +++ python/trunk/Lib/test/test_inspect.py Mon Mar 3 21:39:00 2008 @@ -53,7 +53,7 @@ def test_sixteen(self): count = len(filter(lambda x:x.startswith('is'), dir(inspect))) # This test is here for remember you to update Doc/library/inspect.rst - # which claims there are 15 such functions + # which claims there are 16 such functions expected = 16 err_msg = "There are %d (not %d) is* functions" % (count, expected) self.assertEqual(count, expected, err_msg) From python-checkins at python.org Mon Mar 3 22:22:48 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 3 Mar 2008 22:22:48 +0100 (CET) Subject: [Python-checkins] r61211 - python/trunk/Lib/_abcoll.py Message-ID: <20080303212248.B6F2E1E402B@bag.python.org> Author: georg.brandl Date: Mon Mar 3 22:22:47 2008 New Revision: 61211 Modified: python/trunk/Lib/_abcoll.py Log: Actually import itertools. 
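The missing import matters because Set.__or__ (see the diffs that follow) builds its result by chaining both operands together and handing them to the _from_iterable() hook. A rough sketch of that code path, using a made-up ListSet class against Python 2.6's collections ABCs:

    from collections import Set   # 2.6 re-exports the ABCs from _abcoll

    class ListSet(Set):
        """Made-up minimal Set implementation, for illustration only."""
        def __init__(self, iterable=()):
            self.elements = []
            for value in iterable:
                if value not in self.elements:
                    self.elements.append(value)
        def __contains__(self, value):
            return value in self.elements
        def __iter__(self):
            return iter(self.elements)
        def __len__(self):
            return len(self.elements)
        @classmethod
        def _from_iterable(cls, iterable):
            # Hook that Set.__or__ and friends use to build the result set.
            return cls(iterable)

    print list(ListSet('abc') | ListSet('bcd'))   # exercises Set.__or__
    # -> ['a', 'b', 'c', 'd']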
Modified: python/trunk/Lib/_abcoll.py ============================================================================== --- python/trunk/Lib/_abcoll.py (original) +++ python/trunk/Lib/_abcoll.py Mon Mar 3 22:22:47 2008 @@ -9,6 +9,7 @@ """ from abc import ABCMeta, abstractmethod +import itertools __all__ = ["Hashable", "Iterable", "Iterator", "Sized", "Container", "Callable", From python-checkins at python.org Mon Mar 3 22:31:50 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 3 Mar 2008 22:31:50 +0100 (CET) Subject: [Python-checkins] r61212 - python/trunk/Doc/reference/expressions.rst Message-ID: <20080303213150.C14EF1E4007@bag.python.org> Author: georg.brandl Date: Mon Mar 3 22:31:50 2008 New Revision: 61212 Modified: python/trunk/Doc/reference/expressions.rst Log: Expand a bit on genexp scopes. Modified: python/trunk/Doc/reference/expressions.rst ============================================================================== --- python/trunk/Doc/reference/expressions.rst (original) +++ python/trunk/Doc/reference/expressions.rst Mon Mar 3 22:31:50 2008 @@ -230,14 +230,15 @@ evaluating the expression to yield a value that is reached the innermost block for each iteration. -Variables used in the generator expression are evaluated lazily when the -:meth:`next` method is called for generator object (in the same fashion as -normal generators). However, the leftmost :keyword:`for` clause is immediately -evaluated so that error produced by it can be seen before any other possible -error in the code that handles the generator expression. Subsequent -:keyword:`for` clauses cannot be evaluated immediately since they may depend on -the previous :keyword:`for` loop. For example: ``(x*y for x in range(10) for y -in bar(x))``. +Variables used in the generator expression are evaluated lazily in a separate +scope when the :meth:`next` method is called for the generator object (in the +same fashion as for normal generators). However, the :keyword:`in` expression +of the leftmost :keyword:`for` clause is immediately evaluated in the current +scope so that an error produced by it can be seen before any other possible +error in the code that handles the generator expression. Subsequent +:keyword:`for` and :keyword:`if` clauses cannot be evaluated immediately since +they may depend on the previous :keyword:`for` loop. For example: +``(x*y for x in range(10) for y in bar(x))``. The parentheses can be omitted on calls with only one argument. See section :ref:`calls` for the detail. From buildbot at python.org Mon Mar 3 22:40:38 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 21:40:38 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080303214038.B20381E4007@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
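The reference-manual wording adjusted in r61212 above is easiest to see in code: only the leftmost "in" expression of a generator expression is evaluated immediately; the remaining clauses run lazily, in the generator's own scope, once the generator is consumed. A small sketch, with bar() defined here only so the point of evaluation is visible:

    def bar(x):
        # Stand-in for the bar() of the documentation example; it fails
        # loudly so the moment of evaluation is obvious.
        raise ValueError('evaluated lazily, during iteration')

    gen = (x * y for x in range(10) for y in bar(x))   # no error yet
    try:
        gen.next()                    # the ValueError only appears here
    except ValueError, exc:
        print 'lazy clause:', exc

    try:
        gen2 = (x for x in bar(0))    # leftmost "in" expression: evaluated
    except ValueError, exc:           # right away, before any iteration
        print 'eager clause:', exc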
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2640 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 22:46:38 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 21:46:38 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080303214638.53C411E400C@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/310 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') 1 test failed: test_socket_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 23:04:55 2008 From: python-checkins at python.org (raymond.hettinger) Date: Mon, 3 Mar 2008 23:04:55 +0100 (CET) Subject: [Python-checkins] r61213 - python/trunk/Lib/_abcoll.py Message-ID: <20080303220455.6686F1E400D@bag.python.org> Author: raymond.hettinger Date: Mon Mar 3 23:04:55 2008 New Revision: 61213 Modified: python/trunk/Lib/_abcoll.py Log: Remove dependency on itertools -- a simple genexp suffices. 
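The log message is accurate: a nested generator expression that walks each input sequence in turn yields exactly what itertools.chain() yields, so the dependency can go. A quick check of that equivalence, using nothing beyond the standard library:

    import itertools

    a, b = [1, 2, 3], [4, 5]
    chained = list(itertools.chain(a, b))
    # the replacement spelled out in the patch below
    via_genexp = list(e for s in (a, b) for e in s)
    assert chained == via_genexp == [1, 2, 3, 4, 5]
    print chained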
Modified: python/trunk/Lib/_abcoll.py ============================================================================== --- python/trunk/Lib/_abcoll.py (original) +++ python/trunk/Lib/_abcoll.py Mon Mar 3 23:04:55 2008 @@ -9,7 +9,6 @@ """ from abc import ABCMeta, abstractmethod -import itertools __all__ = ["Hashable", "Iterable", "Iterator", "Sized", "Container", "Callable", @@ -189,7 +188,8 @@ def __or__(self, other): if not isinstance(other, Iterable): return NotImplemented - return self._from_iterable(itertools.chain(self, other)) + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) def __sub__(self, other): if not isinstance(other, Set): From buildbot at python.org Mon Mar 3 23:12:18 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 22:12:18 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080303221218.7BECE1E4007@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/76 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_inspect ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/3.0.klose-debian-sparc/build/Lib/test/test_inspect.py", line 68, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 3 23:17:48 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 22:17:48 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080303221748.DF8981E4007@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/771 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 3 23:19:58 2008 From: python-checkins at python.org (raymond.hettinger) Date: Mon, 3 Mar 2008 23:19:58 +0100 (CET) Subject: [Python-checkins] r61214 - python/trunk/Lib/_abcoll.py Message-ID: <20080303221958.894F81E4007@bag.python.org> Author: raymond.hettinger Date: Mon Mar 3 23:19:58 2008 New Revision: 61214 Modified: python/trunk/Lib/_abcoll.py Log: Issue 2226: Callable checked for the wrong abstract method. 
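The practical effect of the fix below: with __call__ (rather than __contains__) as the abstract method, a Callable subclass cannot be instantiated until it actually implements __call__. A small sketch against Python 2.6's collections ABCs, with made-up class names:

    from collections import Callable

    class Greeter(Callable):
        def __call__(self, name):
            return 'hello, %s' % name

    print Greeter()('world')         # -> hello, world

    class Broken(Callable):          # provides no __call__
        pass

    try:
        Broken()
    except TypeError, exc:           # abstract method left unimplemented
        print exc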
Modified: python/trunk/Lib/_abcoll.py ============================================================================== --- python/trunk/Lib/_abcoll.py (original) +++ python/trunk/Lib/_abcoll.py Mon Mar 3 23:19:58 2008 @@ -107,7 +107,7 @@ __metaclass__ = ABCMeta @abstractmethod - def __contains__(self, x): + def __call__(self, *args, **kwds): return False @classmethod From buildbot at python.org Mon Mar 3 23:24:42 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 22:24:42 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080303222442.BB11B1E400C@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/146 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_inspect test_normalization ====================================================================== FAIL: test_fifteen (test.test_inspect.TestPredicates) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_inspect.py", line 68, in test_fifteen self.assertEqual(count, expected, err_msg) AssertionError: There are 16 (not 15) is* functions make: *** [buildbottest] Error 1 sincerely, -The Buildbot From guido at python.org Mon Mar 3 23:42:27 2008 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Mar 2008 14:42:27 -0800 Subject: [Python-checkins] r61207 - python/trunk/Lib/test/test_inspect.py In-Reply-To: References: <20080303203030.0BAB11E400A@bag.python.org> Message-ID: On Mon, Mar 3, 2008 at 12:40 PM, Georg Brandl wrote: > christian.heimes schrieb: > > > Author: christian.heimes > > Date: Mon Mar 3 21:30:29 2008 > > New Revision: 61207 > > > > Modified: > > python/trunk/Lib/test/test_inspect.py > > Log: > > 15 -> 16 > > > > Modified: python/trunk/Lib/test/test_inspect.py > > ============================================================================== > > --- python/trunk/Lib/test/test_inspect.py (original) > > +++ python/trunk/Lib/test/test_inspect.py Mon Mar 3 21:30:29 2008 > > @@ -50,11 +50,11 @@ > > yield i > > > > class TestPredicates(IsTestBase): > > - def test_fifteen(self): > > + def test_sixteen(self): > > count = len(filter(lambda x:x.startswith('is'), dir(inspect))) > > # This test is here for remember you to update Doc/library/inspect.rst > > # which claims there are 15 such functions > > - expected = 15 > > + expected = 16 > > err_msg = "There are %d (not %d) is* functions" % (count, expected) > > self.assertEqual(count, expected, err_msg) > > You somehow missed the point of this test :) Maybe the docs shouldn't be so specific? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python-checkins at python.org Mon Mar 3 23:44:30 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 3 Mar 2008 23:44:30 +0100 (CET) Subject: [Python-checkins] r61215 - peps/trunk/pep-3108.txt Message-ID: <20080303224430.668D11E400D@bag.python.org> Author: brett.cannon Date: Mon Mar 3 23:44:30 2008 New Revision: 61215 Modified: peps/trunk/pep-3108.txt Log: Add a note that modules to be removed might stick around without a documented, public interface so that dependent modules can continue to work. 
Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Mon Mar 3 23:44:30 2008 @@ -74,6 +74,10 @@ and then put up on PyPI. In such cases the code is not expected to be maintained beyond the discretion of any core developer. +If a module that is removed is used by a module that is not staying, +then the module to remove will have its documentation removed and be +renamed to signify it has no public API. + .. _PyPI: http://pypi.python.org/ From buildbot at python.org Mon Mar 3 23:58:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 03 Mar 2008 22:58:46 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080303225846.7FC151E4007@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2968 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Tue Mar 4 00:07:33 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 4 Mar 2008 00:07:33 +0100 (CET) Subject: [Python-checkins] r61216 - tracker/instances/python-dev/html/issue.item.html Message-ID: <20080303230733.449401E400C@bag.python.org> Author: martin.v.loewis Date: Tue Mar 4 00:07:33 2008 New Revision: 61216 Modified: tracker/instances/python-dev/html/issue.item.html Log: Change Note -> Comment. Fixes #194. Modified: tracker/instances/python-dev/html/issue.item.html ============================================================================== --- tracker/instances/python-dev/html/issue.item.html (original) +++ tracker/instances/python-dev/html/issue.item.html Tue Mar 4 00:07:33 2008 @@ -156,7 +156,7 @@ - Change Note: + Comment: From python-checkins at python.org Tue Mar 4 01:40:33 2008 From: python-checkins at python.org (andrew.kuchling) Date: Tue, 4 Mar 2008 01:40:33 +0100 (CET) Subject: [Python-checkins] r61217 - python/trunk/Misc/NEWS Message-ID: <20080304004033.20E651E4007@bag.python.org> Author: andrew.kuchling Date: Tue Mar 4 01:40:32 2008 New Revision: 61217 Modified: python/trunk/Misc/NEWS Log: Typo fix Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 4 01:40:32 2008 @@ -1257,7 +1257,7 @@ - itertools.starmap() now accepts any iterable input. Previously, it required the function inputs to be tuples. -- itertools.chain() now has an alterate constructor, chain.from_iterable(). +- itertools.chain() now has an alternate constructor, chain.from_iterable(). - Issue #1646: Make socket support TIPC. The socket module now has support for TIPC under Linux, see http://tipc.sf.net/ for more information. From buildbot at python.org Tue Mar 4 02:02:48 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 01:02:48 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080304010248.F39551E4007@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2642 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Tue Mar 4 02:30:10 2008 From: python-checkins at python.org (andrew.kuchling) Date: Tue, 4 Mar 2008 02:30:10 +0100 (CET) Subject: [Python-checkins] r61218 - python/trunk/Doc/library/signal.rst Message-ID: <20080304013010.D901B1E4007@bag.python.org> Author: andrew.kuchling Date: Tue Mar 4 02:30:10 2008 New Revision: 61218 Modified: python/trunk/Doc/library/signal.rst Log: Grammar fix; markup fix Modified: python/trunk/Doc/library/signal.rst ============================================================================== --- python/trunk/Doc/library/signal.rst (original) +++ python/trunk/Doc/library/signal.rst Tue Mar 4 02:30:10 2008 @@ -128,12 +128,12 @@ .. function:: siginterrupt(signalnum, flag) Change system call restart behaviour: if *flag* is :const:`False`, system calls - will be restarted when interrupted by signal *signalnum*, else system calls will + will be restarted when interrupted by signal *signalnum*, otherwise system calls will be interrupted. Returns nothing. Availability: Unix, Mac (see the man page :manpage:`siginterrupt(3)` for further information). Note that installing a signal handler with :func:`signal` will reset the restart - behaviour to interruptible by implicitly calling siginterrupt with a true *flag* + behaviour to interruptible by implicitly calling :cfunc:`siginterrupt` with a true *flag* value for the given signal. .. versionadded:: 2.6 From buildbot at python.org Tue Mar 4 02:35:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 01:35:03 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080304013503.F21FA1E4007@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/685 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_smtplib sincerely, -The Buildbot From python-checkins at python.org Tue Mar 4 02:47:38 2008 From: python-checkins at python.org (andrew.kuchling) Date: Tue, 4 Mar 2008 02:47:38 +0100 (CET) Subject: [Python-checkins] r61219 - python/trunk/Doc/library/itertools.rst Message-ID: <20080304014738.C14A81E4007@bag.python.org> Author: andrew.kuchling Date: Tue Mar 4 02:47:38 2008 New Revision: 61219 Modified: python/trunk/Doc/library/itertools.rst Log: Fix sentence fragment Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Tue Mar 4 02:47:38 2008 @@ -410,8 +410,8 @@ repetitions with the optional *repeat* keyword argument. For example, ``product(A, repeat=4)`` means the same as ``product(A, A, A, A)``. 
- Equivalent to the following except that the actual implementation does not - build-up intermediate results in memory:: + This function is equivalent to the following code, except that the + actual implementation does not build up intermediate results in memory:: def product(*args, **kwds): pools = map(tuple, args) * kwds.get('repeat', 1) From python-checkins at python.org Tue Mar 4 02:48:26 2008 From: python-checkins at python.org (andrew.kuchling) Date: Tue, 4 Mar 2008 02:48:26 +0100 (CET) Subject: [Python-checkins] r61220 - python/trunk/Misc/NEWS Message-ID: <20080304014826.765A91E4007@bag.python.org> Author: andrew.kuchling Date: Tue Mar 4 02:48:26 2008 New Revision: 61220 Modified: python/trunk/Misc/NEWS Log: Typo fix Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 4 02:48:26 2008 @@ -59,7 +59,7 @@ - Fixed repr() and str() of complex numbers with infinity or nan as real or imaginary part. -- Clear all free list during a gc.collect() of the highest generation in order +- Clear all free lists during a gc.collect() of the highest generation in order to allow pymalloc to free more arenas. Python may give back memory to the OS earlier. From python-checkins at python.org Tue Mar 4 02:49:37 2008 From: python-checkins at python.org (andrew.kuchling) Date: Tue, 4 Mar 2008 02:49:37 +0100 (CET) Subject: [Python-checkins] r61221 - python/trunk/Doc/library/inspect.rst Message-ID: <20080304014937.B1B0C1E4007@bag.python.org> Author: andrew.kuchling Date: Tue Mar 4 02:49:37 2008 New Revision: 61221 Modified: python/trunk/Doc/library/inspect.rst Log: Add versionadded tags Modified: python/trunk/Doc/library/inspect.rst ============================================================================== --- python/trunk/Doc/library/inspect.rst (original) +++ python/trunk/Doc/library/inspect.rst Tue Mar 4 02:49:37 2008 @@ -279,10 +279,14 @@ Return true if the object is a Python generator function. + .. versionadded:: 2.6 + .. function:: isgenerator(object) Return true if the object is a generator. + .. versionadded:: 2.6 + .. function:: istraceback(object) Return true if the object is a traceback. From python-checkins at python.org Tue Mar 4 02:50:33 2008 From: python-checkins at python.org (andrew.kuchling) Date: Tue, 4 Mar 2008 02:50:33 +0100 (CET) Subject: [Python-checkins] r61222 - python/trunk/Doc/whatsnew/2.6.rst Message-ID: <20080304015033.1E4E71E4007@bag.python.org> Author: andrew.kuchling Date: Tue Mar 4 02:50:32 2008 New Revision: 61222 Modified: python/trunk/Doc/whatsnew/2.6.rst Log: Thesis night results: add various items Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Tue Mar 4 02:50:32 2008 @@ -450,6 +450,15 @@ .. ====================================================================== +.. _pep-3101: + +PEP 3101: Advanced String Formatting +===================================================== + +XXX write this + +.. ====================================================================== + .. _pep-3110: PEP 3110: Exception-Handling Changes @@ -544,6 +553,32 @@ .. ====================================================================== +.. 
_pep-3127: + +PEP 3127: Integer Literal Support and Syntax +===================================================== + +XXX write this + +Python 3.0 changes the syntax for octal integer literals, and +adds supports for binary integers: 0o instad of 0, +and 0b for binary. Python 2.6 doesn't support this, but a bin() +builtin was added, and + + +New bin() built-in returns the binary form of a number. + +.. ====================================================================== + +.. _pep-3129: + +PEP 3129: Class Decorators +===================================================== + +XXX write this. + +.. ====================================================================== + .. _pep-3141: PEP 3141: A Type Hierarchy for Numbers @@ -579,7 +614,9 @@ :class:`Rational` numbers derive from :class:`Real`, have :attr:`numerator` and :attr:`denominator` properties, and can be converted to floats. Python 2.6 adds a simple rational-number class, -:class:`Fraction`, in the :mod:`fractions` module. +:class:`Fraction`, in the :mod:`fractions` module. (It's called +:class:`Fraction` instead of :class:`Rational` to avoid +a name clash with :class:`numbers.Rational`.) :class:`Integral` numbers derive from :class:`Rational`, and can be shifted left and right with ``<<`` and ``>>``, @@ -587,9 +624,9 @@ and can be used as array indexes and slice boundaries. In Python 3.0, the PEP slightly redefines the existing built-ins -:func:`math.floor`, :func:`math.ceil`, :func:`round`, and adds a new -one, :func:`trunc`, that's been backported to Python 2.6. -:func:`trunc` rounds toward zero, returning the closest +:func:`round`, :func:`math.floor`, :func:`math.ceil`, and adds a new +one, :func:`math.trunc`, that's been backported to Python 2.6. +:func:`math.trunc` rounds toward zero, returning the closest :class:`Integral` that's between the function's argument and zero. .. seealso:: @@ -603,7 +640,7 @@ To fill out the hierarchy of numeric types, a rational-number class has been added as the :mod:`fractions` module. Rational numbers are -represented as a fraction; rational numbers can exactly represent +represented as a fraction, and can exactly represent numbers such as two-thirds that floating-point numbers can only approximate. @@ -692,7 +729,7 @@ A numerical nicety: when creating a complex number from two floats on systems that support signed zeros (-0 and +0), the - :func:`complex()` constructor will now preserve the sign + :func:`complex` constructor will now preserve the sign of the zero. .. Patch 1507 @@ -789,6 +826,15 @@ built-in types. This speeds up checking if an object is a subclass of one of these types. (Contributed by Neal Norwitz.) +* Unicode strings now uses faster code for detecting + whitespace and line breaks; this speeds up the :meth:`split` method + by about 25% and :meth:`splitlines` by 35%. + (Contributed by Antoine Pitrou.) + +* To reduce memory usage, the garbage collector will now clear internal + free lists when garbage-collecting the highest generation of objects. + This may return memory to the OS sooner. + The net result of the 2.6 optimizations is that Python 2.6 runs the pystone benchmark around XX% faster than Python 2.5. @@ -956,15 +1002,69 @@ can also be accessed as attributes. (Contributed by Raymond Hettinger.) -* A new function in the :mod:`itertools` module: ``izip_longest(iter1, iter2, - ...[, fillvalue])`` makes tuples from each of the elements; if some of the - iterables are shorter than others, the missing values are set to *fillvalue*. 
- For example:: + Some new functions in the module include + :func:`isgenerator`, :func:`isgeneratorfunction`, + and :func:`isabstract`. + +* The :mod:`itertools` module gained several new functions. + + ``izip_longest(iter1, iter2, ...[, fillvalue])`` makes tuples from + each of the elements; if some of the iterables are shorter than + others, the missing values are set to *fillvalue*. For example:: itertools.izip_longest([1,2,3], [1,2,3,4,5]) -> [(1, 1), (2, 2), (3, 3), (None, 4), (None, 5)] - (Contributed by Raymond Hettinger.) + ``product(iter1, iter2, ..., [repeat=N])`` returns the Cartesian product + of the supplied iterables, a set of tuples containing + every possible combination of the elements returned from each iterable. :: + + itertools.product([1,2,3], [4,5,6]) -> + [(1, 4), (1, 5), (1, 6), + (2, 4), (2, 5), (2, 6), + (3, 4), (3, 5), (3, 6)] + + The optional *repeat* keyword argument is used for taking the + product of an iterable or a set of iterables with themselves, + repeated *N* times. With a single iterable argument, *N*-tuples + are returned:: + + itertools.product([1,2], repeat=3)) -> + [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), + (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)] + + With two iterables, *2N*-tuples are returned. :: + + itertools(product([1,2], [3,4], repeat=2) -> + [(1, 3, 1, 3), (1, 3, 1, 4), (1, 3, 2, 3), (1, 3, 2, 4), + (1, 4, 1, 3), (1, 4, 1, 4), (1, 4, 2, 3), (1, 4, 2, 4), + (2, 3, 1, 3), (2, 3, 1, 4), (2, 3, 2, 3), (2, 3, 2, 4), + (2, 4, 1, 3), (2, 4, 1, 4), (2, 4, 2, 3), (2, 4, 2, 4)] + + ``combinations(iter, r)`` returns combinations of length *r* from + the elements of *iterable*. :: + + itertools.combinations('123', 2) -> + [('1', '2'), ('1', '3'), ('2', '3')] + + itertools.combinations('123', 3) -> + [('1', '2', '3')] + + itertools.combinations('1234', 3) -> + [('1', '2', '3'), ('1', '2', '4'), ('1', '3', '4'), + ('2', '3', '4')] + + ``itertools.chain(*iterables)` is an existing function in + :mod:`itertools` that gained a new constructor. + ``itertools.chain.from_iterable(iterable)`` takes a single + iterable that should return other iterables. :func:`chain` will + then return all the elements of the first iterable, then + all the elements of the second, and so on. :: + + chain.from_iterable([[1,2,3], [4,5,6]]) -> + [1, 2, 3, 4, 5, 6] + + (All contributed by Raymond Hettinger.) * The :mod:`macfs` module has been removed. This in turn required the :func:`macostools.touched` function to be removed because it depended on the @@ -975,7 +1075,7 @@ * :class:`mmap` objects now have a :meth:`rfind` method that finds a substring, beginning at the end of the string and searching backwards. The :meth:`find` method - also gained a *end* parameter containing the index at which to stop + also gained an *end* parameter containing the index at which to stop the forward search. (Contributed by John Lenton.) @@ -984,6 +1084,29 @@ triggers a warning message when Python is running in 3.0-warning mode. +* The :mod:`operator` module gained a + :func:`methodcaller` function that takes a name and an optional + set of arguments, returning a callable that will call + the named function on any arguments passed to it. For example:: + + >>> # Equivalent to lambda s: s.replace('old', 'new') + >>> replacer = operator.methodcaller('replace', 'old', 'new') + >>> replacer('old wine in old bottles') + 'new wine in new bottles' + + (Contributed by Gregory Petrosyan.) 
+ + The :func:`attrgetter` function now accepts dotted names and performs + the corresponding attribute lookups:: + + >>> inst_name = operator.attrgetter('__class__.__name__') + >>> inst_name('') + 'str' + >>> inst_name(help) + '_Helper' + + (Contributed by Scott Dial, after a suggestion by Barry Warsaw.) + * New functions in the :mod:`os` module include ``fchmod(fd, mode)``, ``fchown(fd, uid, gid)``, and ``lchmod(path, mode)``, on operating systems that support these @@ -1036,6 +1159,11 @@ .. Patch #1393667 +* The :mod:`pickletools` module now has an :func:`optimize` function + that takes a string containing a pickle and removes some unused + opcodes, returning a shorter pickle that contains the same data structure. + (Contributed by Raymond Hettinger.) + * New functions in the :mod:`posix` module: :func:`chflags` and :func:`lchflags` are wrappers for the corresponding system calls (where they're available). Constants for the flag values are defined in the :mod:`stat` module; some @@ -1099,6 +1227,10 @@ .. % Patch 1583 + The :func:`siginterrupt` function is now available from Python code, + and allows changing whether signals can interrupt system calls or not. + (Contributed by Ralf Schmitt.) + * The :mod:`smtplib` module now supports SMTP over SSL thanks to the addition of the :class:`SMTP_SSL` class. This class supports an interface identical to the existing :class:`SMTP` class. Both @@ -1201,6 +1333,18 @@ .. Patch #1537850 + A new class, :class:`SpooledTemporaryFile`, behaves like + a temporary file but stores its data in memory until a maximum size is + exceeded. On reaching that limit, the contents will be written to + an on-disk temporary file. (Contributed by Dustin J. Mitchell.) + + The :class:`NamedTemporaryFile` and :class:`SpooledTemporaryFile` classes + both work as context managers, so you can write + ``with tempfile.NamedTemporaryFile() as tmp: ...``. + (Contributed by Alexander Belopolsky.) + + .. Issue #2021 + * The :mod:`test.test_support` module now contains a :func:`EnvironmentVarGuard` context manager that supports temporarily changing environment variables and @@ -1415,6 +1559,12 @@ .. Patch 1530959 +* Several basic data types, such as integers and strings, maintain + internal free lists of objects that can be re-used. The data + structures for these free lists now follow a naming convention: the + variable is always named ``free_list``, the counter is always named + ``numfree``, and a macro :cmacro:`Py_MAXFREELIST` is + always defined. .. ====================================================================== From skip at pobox.com Tue Mar 4 04:48:11 2008 From: skip at pobox.com (skip at pobox.com) Date: Mon, 3 Mar 2008 21:48:11 -0600 Subject: [Python-checkins] r61215 - peps/trunk/pep-3108.txt In-Reply-To: <20080303224430.668D11E400D@bag.python.org> References: <20080303224430.668D11E400D@bag.python.org> Message-ID: <18380.50812.451.831435@montanaro-dyndns-org.local> > If a module that is removed is used by a module that is not staying, > then the module to remove will have its documentation removed and be > renamed to signify it has no public API. That should be "... by a module that is not being removed ..." or "... by a module that is staying ...", correct? 
Skip From brett at python.org Tue Mar 4 05:03:29 2008 From: brett at python.org (Brett Cannon) Date: Mon, 3 Mar 2008 20:03:29 -0800 Subject: [Python-checkins] r61215 - peps/trunk/pep-3108.txt In-Reply-To: <18380.50812.451.831435@montanaro-dyndns-org.local> References: <20080303224430.668D11E400D@bag.python.org> <18380.50812.451.831435@montanaro-dyndns-org.local> Message-ID: On Mon, Mar 3, 2008 at 7:48 PM, wrote: > > > If a module that is removed is used by a module that is not staying, > > then the module to remove will have its documentation removed and be > > renamed to signify it has no public API. > > That should be "... by a module that is not being removed ..." or "... by a > module that is staying ...", correct? Yep. -Brett From python-checkins at python.org Tue Mar 4 05:03:59 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 4 Mar 2008 05:03:59 +0100 (CET) Subject: [Python-checkins] r61223 - peps/trunk/pep-3108.txt Message-ID: <20080304040359.9196C1E4007@bag.python.org> Author: brett.cannon Date: Tue Mar 4 05:03:59 2008 New Revision: 61223 Modified: peps/trunk/pep-3108.txt Log: Fix a typo. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Tue Mar 4 05:03:59 2008 @@ -74,7 +74,7 @@ and then put up on PyPI. In such cases the code is not expected to be maintained beyond the discretion of any core developer. -If a module that is removed is used by a module that is not staying, +If a module that is removed is used by a module that is staying, then the module to remove will have its documentation removed and be renamed to signify it has no public API. From buildbot at python.org Tue Mar 4 05:05:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 04:05:13 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 2.5 Message-ID: <20080304040513.5734E1E4016@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: The web-page 'force build' button was pressed by 'Trent': Build Source Stamp: [branch release25-maint] HEAD Blamelist: BUILD FAILED: failed svn sincerely, -The Buildbot From python-checkins at python.org Tue Mar 4 05:17:09 2008 From: python-checkins at python.org (raymond.hettinger) Date: Tue, 4 Mar 2008 05:17:09 +0100 (CET) Subject: [Python-checkins] r61224 - in python/trunk: Doc/library/itertools.rst Lib/test/test_itertools.py Modules/itertoolsmodule.c Message-ID: <20080304041709.1F8511E401D@bag.python.org> Author: raymond.hettinger Date: Tue Mar 4 05:17:08 2008 New Revision: 61224 Modified: python/trunk/Doc/library/itertools.rst python/trunk/Lib/test/test_itertools.py python/trunk/Modules/itertoolsmodule.c Log: Beef-up docs and tests for itertools. Fix-up end-case for product(). Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Tue Mar 4 05:17:08 2008 @@ -89,6 +89,7 @@ .. versionadded:: 2.6 + .. function:: combinations(iterable, r) Return successive *r* length combinations of elements in the *iterable*. 
@@ -123,6 +124,17 @@ indices[j] = indices[j-1] + 1 yield tuple(pool[i] for i in indices) + The code for :func:`combinations` can be also expressed as a subsequence + of :func:`permutations` after filtering entries where the elements are not + in sorted order (according to their position in the input pool):: + + def combinations(iterable, r): + pool = tuple(iterable) + n = len(pool) + for indices in permutations(range(n), r): + if sorted(indices) == list(indices): + yield tuple(pool[i] for i in indices) + .. versionadded:: 2.6 .. function:: count([n]) @@ -391,6 +403,18 @@ else: return + The code for :func:`permutations` can be also expressed as a subsequence of + :func:`product`, filtered to exclude entries with repeated elements (those + from the same position in the input pool):: + + def permutations(iterable, r=None): + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + for indices in product(range(n), repeat=r): + if len(set(indices)) == r: + yield tuple(pool[i] for i in indices) + .. versionadded:: 2.6 .. function:: product(*iterables[, repeat]) @@ -401,9 +425,9 @@ ``product(A, B)`` returns the same as ``((x,y) for x in A for y in B)``. The leftmost iterators are in the outermost for-loop, so the output tuples - cycle in a manner similar to an odometer (with the rightmost element - changing on every iteration). This results in a lexicographic ordering - so that if the inputs iterables are sorted, the product tuples are emitted + cycle like an odometer (with the rightmost element changing on every + iteration). This results in a lexicographic ordering so that if the + inputs iterables are sorted, the product tuples are emitted in sorted order. To compute the product of an iterable with itself, specify the number of @@ -415,12 +439,11 @@ def product(*args, **kwds): pools = map(tuple, args) * kwds.get('repeat', 1) - if pools: - result = [[]] - for pool in pools: - result = [x+[y] for x in result for y in pool] - for prod in result: - yield tuple(prod) + result = [[]] + for pool in pools: + result = [x+[y] for x in result for y in pool] + for prod in result: + yield tuple(prod) .. 
versionadded:: 2.6 Modified: python/trunk/Lib/test/test_itertools.py ============================================================================== --- python/trunk/Lib/test/test_itertools.py (original) +++ python/trunk/Lib/test/test_itertools.py Tue Mar 4 05:17:08 2008 @@ -40,9 +40,21 @@ 'Convenience function for partially consuming a long of infinite iterable' return list(islice(seq, n)) +def prod(iterable): + return reduce(operator.mul, iterable, 1) + def fact(n): 'Factorial' - return reduce(operator.mul, range(1, n+1), 1) + return prod(range(1, n+1)) + +def permutations(iterable, r=None): + # XXX use this until real permutations code is added + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + for indices in product(range(n), repeat=r): + if len(set(indices)) == r: + yield tuple(pool[i] for i in indices) class TestBasicOps(unittest.TestCase): def test_chain(self): @@ -62,11 +74,38 @@ def test_combinations(self): self.assertRaises(TypeError, combinations, 'abc') # missing r argument self.assertRaises(TypeError, combinations, 'abc', 2, 1) # too many arguments + self.assertRaises(TypeError, combinations, None) # pool is not iterable self.assertRaises(ValueError, combinations, 'abc', -2) # r is negative self.assertRaises(ValueError, combinations, 'abc', 32) # r is too big self.assertEqual(list(combinations(range(4), 3)), [(0,1,2), (0,1,3), (0,2,3), (1,2,3)]) - for n in range(8): + + def combinations1(iterable, r): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + indices = range(r) + yield tuple(pool[i] for i in indices) + while 1: + for i in reversed(range(r)): + if indices[i] != i + n - r: + break + else: + return + indices[i] += 1 + for j in range(i+1, r): + indices[j] = indices[j-1] + 1 + yield tuple(pool[i] for i in indices) + + def combinations2(iterable, r): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + for indices in permutations(range(n), r): + if sorted(indices) == list(indices): + yield tuple(pool[i] for i in indices) + + for n in range(7): values = [5*x-12 for x in range(n)] for r in range(n+1): result = list(combinations(values, r)) @@ -78,6 +117,73 @@ self.assertEqual(len(set(c)), r) # no duplicate elements self.assertEqual(list(c), sorted(c)) # keep original ordering self.assert_(all(e in values for e in c)) # elements taken from input iterable + self.assertEqual(result, list(combinations1(values, r))) # matches first pure python version + self.assertEqual(result, list(combinations2(values, r))) # matches first pure python version + + # Test implementation detail: tuple re-use + self.assertEqual(len(set(map(id, combinations('abcde', 3)))), 1) + self.assertNotEqual(len(set(map(id, list(combinations('abcde', 3))))), 1) + + def test_permutations(self): + self.assertRaises(TypeError, permutations) # too few arguments + self.assertRaises(TypeError, permutations, 'abc', 2, 1) # too many arguments +## self.assertRaises(TypeError, permutations, None) # pool is not iterable +## self.assertRaises(ValueError, permutations, 'abc', -2) # r is negative +## self.assertRaises(ValueError, permutations, 'abc', 32) # r is too big + self.assertEqual(list(permutations(range(3), 2)), + [(0,1), (0,2), (1,0), (1,2), (2,0), (2,1)]) + + def permutations1(iterable, r=None): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + indices = range(n) + cycles = range(n-r+1, n+1)[::-1] + yield tuple(pool[i] for i in indices[:r]) + while n: + for i in 
reversed(range(r)): + cycles[i] -= 1 + if cycles[i] == 0: + indices[i:] = indices[i+1:] + indices[i:i+1] + cycles[i] = n - i + else: + j = cycles[i] + indices[i], indices[-j] = indices[-j], indices[i] + yield tuple(pool[i] for i in indices[:r]) + break + else: + return + + def permutations2(iterable, r=None): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + for indices in product(range(n), repeat=r): + if len(set(indices)) == r: + yield tuple(pool[i] for i in indices) + + for n in range(7): + values = [5*x-12 for x in range(n)] + for r in range(n+1): + result = list(permutations(values, r)) + self.assertEqual(len(result), fact(n) / fact(n-r)) # right number of perms + self.assertEqual(len(result), len(set(result))) # no repeats + self.assertEqual(result, sorted(result)) # lexicographic order + for p in result: + self.assertEqual(len(p), r) # r-length permutations + self.assertEqual(len(set(p)), r) # no duplicate elements + self.assert_(all(e in values for e in p)) # elements taken from input iterable + self.assertEqual(result, list(permutations1(values, r))) # matches first pure python version + self.assertEqual(result, list(permutations2(values, r))) # matches first pure python version + if r == n: + self.assertEqual(result, list(permutations(values, None))) # test r as None + self.assertEqual(result, list(permutations(values))) # test default r + + # Test implementation detail: tuple re-use +## self.assertEqual(len(set(map(id, permutations('abcde', 3)))), 1) + self.assertNotEqual(len(set(map(id, list(permutations('abcde', 3))))), 1) def test_count(self): self.assertEqual(zip('abc',count()), [('a', 0), ('b', 1), ('c', 2)]) @@ -288,7 +394,7 @@ def test_product(self): for args, result in [ - ([], []), # zero iterables ??? is this correct + ([], [()]), # zero iterables (['ab'], [('a',), ('b',)]), # one iterable ([range(2), range(3)], [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)]), # two iterables ([range(0), range(2), range(3)], []), # first iterable with zero length @@ -305,10 +411,10 @@ set('abcdefg'), range(11), tuple(range(13))] for i in range(100): args = [random.choice(argtypes) for j in range(random.randrange(5))] - n = reduce(operator.mul, map(len, args), 1) if args else 0 - self.assertEqual(len(list(product(*args))), n) + expected_len = prod(map(len, args)) + self.assertEqual(len(list(product(*args))), expected_len) args = map(iter, args) - self.assertEqual(len(list(product(*args))), n) + self.assertEqual(len(list(product(*args))), expected_len) # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, product('abc', 'def')))), 1) Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Tue Mar 4 05:17:08 2008 @@ -1885,10 +1885,7 @@ if (result == NULL) { /* On the first pass, return an initial tuple filled with the - first element from each pool. If any pool is empty, then - whole product is empty and we're already done */ - if (npools == 0) - goto empty; + first element from each pool. 
*/ result = PyTuple_New(npools); if (result == NULL) goto empty; From buildbot at python.org Tue Mar 4 05:26:57 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 04:26:57 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080304042657.D92DF1E4007@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/35 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,raymond.hettinger BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Tue Mar 4 08:25:54 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 4 Mar 2008 08:25:54 +0100 (CET) Subject: [Python-checkins] r61225 - python/trunk/Doc/whatsnew/2.6.rst Message-ID: <20080304072554.EAFCB1E4007@bag.python.org> Author: georg.brandl Date: Tue Mar 4 08:25:54 2008 New Revision: 61225 Modified: python/trunk/Doc/whatsnew/2.6.rst Log: Fix some patch attributions. Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Tue Mar 4 08:25:54 2008 @@ -1094,7 +1094,7 @@ >>> replacer('old wine in old bottles') 'new wine in new bottles' - (Contributed by Gregory Petrosyan.) + (Contributed by Georg Brandl, after a suggestion by Gregory Petrosyan.) The :func:`attrgetter` function now accepts dotted names and performs the corresponding attribute lookups:: @@ -1105,7 +1105,7 @@ >>> inst_name(help) '_Helper' - (Contributed by Scott Dial, after a suggestion by Barry Warsaw.) + (Contributed by Georg Brandl, after a suggestion by Barry Warsaw.) * New functions in the :mod:`os` module include ``fchmod(fd, mode)``, ``fchown(fd, uid, gid)``, @@ -1380,6 +1380,8 @@ whitespace. >>> + (Contributed by Dwayne Bailey.) + .. Patch #1581073 * The :mod:`timeit` module now accepts callables as well as strings From python-checkins at python.org Tue Mar 4 08:33:30 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 4 Mar 2008 08:33:30 +0100 (CET) Subject: [Python-checkins] r61226 - python/trunk/Doc/c-api/arg.rst Message-ID: <20080304073330.72D151E4007@bag.python.org> Author: georg.brandl Date: Tue Mar 4 08:33:30 2008 New Revision: 61226 Modified: python/trunk/Doc/c-api/arg.rst Log: #2230: document that PyArg_* leaves addresses alone on error. Modified: python/trunk/Doc/c-api/arg.rst ============================================================================== --- python/trunk/Doc/c-api/arg.rst (original) +++ python/trunk/Doc/c-api/arg.rst Tue Mar 4 08:33:30 2008 @@ -208,7 +208,7 @@ :ctype:`void\*` argument that was passed to the :cfunc:`PyArg_Parse\*` function. The returned *status* should be ``1`` for a successful conversion and ``0`` if the conversion has failed. When the conversion fails, the *converter* function - should raise an exception. + should raise an exception and leave the content of *address* unmodified. ``S`` (string) [PyStringObject \*] Like ``O`` but requires that the Python object is a string object. Raises @@ -287,9 +287,13 @@ units above, where these parameters are used as input values; they should match what is specified for the corresponding format unit in that case. 
-For the conversion to succeed, the *arg* object must match the format and the -format must be exhausted. On success, the :cfunc:`PyArg_Parse\*` functions -return true, otherwise they return false and raise an appropriate exception. +For the conversion to succeed, the *arg* object must match the format +and the format must be exhausted. On success, the +:cfunc:`PyArg_Parse\*` functions return true, otherwise they return +false and raise an appropriate exception. When the +:cfunc:`PyArg_Parse\*` functions fail due to conversion failure in one +of the format units, the variables at the addresses corresponding to that +and the following format units are left untouched. .. cfunction:: int PyArg_ParseTuple(PyObject *args, const char *format, ...) From ncoghlan at gmail.com Tue Mar 4 13:10:36 2008 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 04 Mar 2008 22:10:36 +1000 Subject: [Python-checkins] r61204 - in python/trunk: Doc/library/inspect.rst Lib/inspect.py Lib/test/regrtest.py Lib/test/test_abc.py Misc/NEWS In-Reply-To: <20080303182805.3F95B1E400A@bag.python.org> References: <20080303182805.3F95B1E400A@bag.python.org> Message-ID: <47CD3C3C.8090802@gmail.com> christian.heimes wrote: > +def isabstract(object): > + """Return true if the object is an abstract base class (ABC).""" > + return object.__flags__ & TPFLAGS_IS_ABSTRACT > + A try/except catching attribute error may be an idea here :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From python-checkins at python.org Tue Mar 4 14:25:43 2008 From: python-checkins at python.org (thomas.heller) Date: Tue, 4 Mar 2008 14:25:43 +0100 (CET) Subject: [Python-checkins] r61230 - in python/branches/libffi3-branch: Modules/_ctypes/libffi_osx Modules/_ctypes/libffi_osx/LICENSE Modules/_ctypes/libffi_osx/README Modules/_ctypes/libffi_osx/README.pyobjc Modules/_ctypes/libffi_osx/ffi.c Modules/_ctypes/libffi_osx/include Modules/_ctypes/libffi_osx/include/ffi Modules/_ctypes/libffi_osx/include/ffi.h Modules/_ctypes/libffi_osx/include/ffi_common.h Modules/_ctypes/libffi_osx/include/fficonfig.h Modules/_ctypes/libffi_osx/include/ffitarget.h Modules/_ctypes/libffi_osx/include/ppc-ffitarget.h Modules/_ctypes/libffi_osx/include/x86-ffitarget.h Modules/_ctypes/libffi_osx/powerpc Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.S Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.h Modules/_ctypes/libffi_osx/powerpc/ppc-darwin_closure.S Modules/_ctypes/libffi_osx/powerpc/ppc-ffi_darwin.c Modules/_ctypes/libffi_osx/powerpc/ppc64-darwin_closure.S Modules/_ctypes/libffi_osx/types.c Modules/_ctypes/libffi_osx/x86 Modules/_ctypes/libffi_osx/x86/darwin64.S Modules/_ctypes/libffi_osx/x86/x86-darwin.S Modules/_ctypes/libffi_osx/x86/x86-ffi64.c Modules/_ctypes/libffi_osx/x86/x86-ffi_darwin.c setup.py Message-ID: <20080304132543.55B541E4007@bag.python.org> Author: thomas.heller Date: Tue Mar 4 14:25:41 2008 New Revision: 61230 Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/LICENSE python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/README python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/README.pyobjc python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/ffi.c python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi/ 
python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi_common.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/fficonfig.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffitarget.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ppc-ffitarget.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/x86-ffitarget.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.S python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.h python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin_closure.S python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-ffi_darwin.c python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc64-darwin_closure.S python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/types.c python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/darwin64.S python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-darwin.S python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-ffi64.c python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-ffi_darwin.c Modified: python/branches/libffi3-branch/setup.py Log: I gave up on using libffi3 files on os x. Instead, static configuration with files from pyobjc is used. Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/LICENSE ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/LICENSE Tue Mar 4 14:25:41 2008 @@ -0,0 +1,20 @@ +libffi - Copyright (c) 1996-2003 Red Hat, Inc. + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +``Software''), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS +OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR +OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +OTHER DEALINGS IN THE SOFTWARE. Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/README ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/README Tue Mar 4 14:25:41 2008 @@ -0,0 +1,500 @@ +This directory contains the libffi package, which is not part of GCC but +shipped with GCC as convenience. + +Status +====== + +libffi-2.00 has not been released yet! This is a development snapshot! + +libffi-1.20 was released on October 5, 1998. Check the libffi web +page for updates: . + + +What is libffi? 
+=============== + +Compilers for high level languages generate code that follow certain +conventions. These conventions are necessary, in part, for separate +compilation to work. One such convention is the "calling +convention". The "calling convention" is essentially a set of +assumptions made by the compiler about where function arguments will +be found on entry to a function. A "calling convention" also specifies +where the return value for a function is found. + +Some programs may not know at the time of compilation what arguments +are to be passed to a function. For instance, an interpreter may be +told at run-time about the number and types of arguments used to call +a given function. Libffi can be used in such programs to provide a +bridge from the interpreter program to compiled code. + +The libffi library provides a portable, high level programming +interface to various calling conventions. This allows a programmer to +call any function specified by a call interface description at run +time. + +Ffi stands for Foreign Function Interface. A foreign function +interface is the popular name for the interface that allows code +written in one language to call code written in another language. The +libffi library really only provides the lowest, machine dependent +layer of a fully featured foreign function interface. A layer must +exist above libffi that handles type conversions for values passed +between the two languages. + + +Supported Platforms and Prerequisites +===================================== + +Libffi has been ported to: + + SunOS 4.1.3 & Solaris 2.x (SPARC-V8, SPARC-V9) + + Irix 5.3 & 6.2 (System V/o32 & n32) + + Intel x86 - Linux (System V ABI) + + Alpha - Linux and OSF/1 + + m68k - Linux (System V ABI) + + PowerPC - Linux (System V ABI, Darwin, AIX) + + ARM - Linux (System V ABI) + +Libffi has been tested with the egcs 1.0.2 gcc compiler. Chances are +that other versions will work. Libffi has also been built and tested +with the SGI compiler tools. + +On PowerPC, the tests failed (see the note below). + +You must use GNU make to build libffi. SGI's make will not work. +Sun's probably won't either. + +If you port libffi to another platform, please let me know! I assume +that some will be easy (x86 NetBSD), and others will be more difficult +(HP). + + +Installing libffi +================= + +[Note: before actually performing any of these installation steps, + you may wish to read the "Platform Specific Notes" below.] + +First you must configure the distribution for your particular +system. Go to the directory you wish to build libffi in and run the +"configure" program found in the root directory of the libffi source +distribution. + +You may want to tell configure where to install the libffi library and +header files. To do that, use the --prefix configure switch. Libffi +will install under /usr/local by default. + +If you want to enable extra run-time debugging checks use the the +--enable-debug configure switch. This is useful when your program dies +mysteriously while using libffi. + +Another useful configure switch is --enable-purify-safety. Using this +will add some extra code which will suppress certain warnings when you +are using Purify with libffi. Only use this switch when using +Purify, as it will slow down the library. + +Configure has many other options. Use "configure --help" to see them all. + +Once configure has finished, type "make". Note that you must be using +GNU make. SGI's make will not work. Sun's probably won't either. 
+You can ftp GNU make from prep.ai.mit.edu:/pub/gnu. + +To ensure that libffi is working as advertised, type "make test". + +To install the library and header files, type "make install". + + +Using libffi +============ + + The Basics + ---------- + +Libffi assumes that you have a pointer to the function you wish to +call and that you know the number and types of arguments to pass it, +as well as the return type of the function. + +The first thing you must do is create an ffi_cif object that matches +the signature of the function you wish to call. The cif in ffi_cif +stands for Call InterFace. To prepare a call interface object, use the +following function: + +ffi_status ffi_prep_cif(ffi_cif *cif, ffi_abi abi, + unsigned int nargs, + ffi_type *rtype, ffi_type **atypes); + + CIF is a pointer to the call interface object you wish + to initialize. + + ABI is an enum that specifies the calling convention + to use for the call. FFI_DEFAULT_ABI defaults + to the system's native calling convention. Other + ABI's may be used with care. They are system + specific. + + NARGS is the number of arguments this function accepts. + libffi does not yet support vararg functions. + + RTYPE is a pointer to an ffi_type structure that represents + the return type of the function. Ffi_type objects + describe the types of values. libffi provides + ffi_type objects for many of the native C types: + signed int, unsigned int, signed char, unsigned char, + etc. There is also a pointer ffi_type object and + a void ffi_type. Use &ffi_type_void for functions that + don't return values. + + ATYPES is a vector of ffi_type pointers. ARGS must be NARGS long. + If NARGS is 0, this is ignored. + + +ffi_prep_cif will return a status code that you are responsible +for checking. It will be one of the following: + + FFI_OK - All is good. + + FFI_BAD_TYPEDEF - One of the ffi_type objects that ffi_prep_cif + came across is bad. + + +Before making the call, the VALUES vector should be initialized +with pointers to the appropriate argument values. + +To call the the function using the initialized ffi_cif, use the +ffi_call function: + +void ffi_call(ffi_cif *cif, void *fn, void *rvalue, void **avalues); + + CIF is a pointer to the ffi_cif initialized specifically + for this function. + + FN is a pointer to the function you want to call. + + RVALUE is a pointer to a chunk of memory that is to hold the + result of the function call. Currently, it must be + at least one word in size (except for the n32 version + under Irix 6.x, which must be a pointer to an 8 byte + aligned value (a long long). It must also be at least + word aligned (depending on the return type, and the + system's alignment requirements). If RTYPE is + &ffi_type_void, this is ignored. If RVALUE is NULL, + the return value is discarded. + + AVALUES is a vector of void* that point to the memory locations + holding the argument values for a call. + If NARGS is 0, this is ignored. + + +If you are expecting a return value from FN it will have been stored +at RVALUE. + + + + An Example + ---------- + +Here is a trivial example that calls puts() a few times. 
+ + #include + #include + + int main() + { + ffi_cif cif; + ffi_type *args[1]; + void *values[1]; + char *s; + int rc; + + /* Initialize the argument info vectors */ + args[0] = &ffi_type_uint; + values[0] = &s; + + /* Initialize the cif */ + if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, + &ffi_type_uint, args) == FFI_OK) + { + s = "Hello World!"; + ffi_call(&cif, puts, &rc, values); + /* rc now holds the result of the call to puts */ + + /* values holds a pointer to the function's arg, so to + call puts() again all we need to do is change the + value of s */ + s = "This is cool!"; + ffi_call(&cif, puts, &rc, values); + } + + return 0; + } + + + + Aggregate Types + --------------- + +Although libffi has no special support for unions or bit-fields, it is +perfectly happy passing structures back and forth. You must first +describe the structure to libffi by creating a new ffi_type object +for it. Here is the definition of ffi_type: + + typedef struct _ffi_type + { + unsigned size; + short alignment; + short type; + struct _ffi_type **elements; + } ffi_type; + +All structures must have type set to FFI_TYPE_STRUCT. You may set +size and alignment to 0. These will be calculated and reset to the +appropriate values by ffi_prep_cif(). + +elements is a NULL terminated array of pointers to ffi_type objects +that describe the type of the structure elements. These may, in turn, +be structure elements. + +The following example initializes a ffi_type object representing the +tm struct from Linux's time.h: + + struct tm { + int tm_sec; + int tm_min; + int tm_hour; + int tm_mday; + int tm_mon; + int tm_year; + int tm_wday; + int tm_yday; + int tm_isdst; + /* Those are for future use. */ + long int __tm_gmtoff__; + __const char *__tm_zone__; + }; + + { + ffi_type tm_type; + ffi_type *tm_type_elements[12]; + int i; + + tm_type.size = tm_type.alignment = 0; + tm_type.elements = &tm_type_elements; + + for (i = 0; i < 9; i++) + tm_type_elements[i] = &ffi_type_sint; + + tm_type_elements[9] = &ffi_type_slong; + tm_type_elements[10] = &ffi_type_pointer; + tm_type_elements[11] = NULL; + + /* tm_type can now be used to represent tm argument types and + return types for ffi_prep_cif() */ + } + + + +Platform Specific Notes +======================= + + Intel x86 + --------- + +There are no known problems with the x86 port. + + Sun SPARC - SunOS 4.1.3 & Solaris 2.x + ------------------------------------- + +You must use GNU Make to build libffi on Sun platforms. + + MIPS - Irix 5.3 & 6.x + --------------------- + +Irix 6.2 and better supports three different calling conventions: o32, +n32 and n64. Currently, libffi only supports both o32 and n32 under +Irix 6.x, but only o32 under Irix 5.3. Libffi will automatically be +configured for whichever calling convention it was built for. + +By default, the configure script will try to build libffi with the GNU +development tools. To build libffi with the SGI development tools, set +the environment variable CC to either "cc -32" or "cc -n32" before +running configure under Irix 6.x (depending on whether you want an o32 +or n32 library), or just "cc" for Irix 5.3. + +With the n32 calling convention, when returning structures smaller +than 16 bytes, be sure to provide an RVALUE that is 8 byte aligned. +Here's one way of forcing this: + + double struct_storage[2]; + my_small_struct *s = (my_small_struct *) struct_storage; + /* Use s for RVALUE */ + +If you don't do this you are liable to get spurious bus errors. + +"long long" values are not supported yet. 
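As a complement to the puts() example shown earlier in this README excerpt, here is a second minimal sketch (not part of the README itself) that calls pow() from the C library, showing a call with floating-point arguments and a non-integer return value. Error handling is reduced to checking ffi_prep_cif(); the FFI_FN macro comes from the bundled ffi.h.

    #include <ffi.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        ffi_cif cif;
        ffi_type *args[2] = { &ffi_type_double, &ffi_type_double };
        double x = 2.0, y = 10.0, result = 0.0;
        void *values[2] = { &x, &y };

        /* Describe a call taking two doubles and returning a double. */
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2,
                         &ffi_type_double, args) == FFI_OK)
        {
            ffi_call(&cif, FFI_FN(pow), &result, values);
            printf("pow(%g, %g) = %g\n", x, y, result);   /* prints 1024 */
        }
        return 0;
    }

On most systems this builds with something like "cc demo.c -lffi -lm", although the exact flags depend on how and where libffi was installed.
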
+ +You must use GNU Make to build libffi on SGI platforms. + + ARM - System V ABI + ------------------ + +The ARM port was performed on a NetWinder running ARM Linux ELF +(2.0.31) and gcc 2.8.1. + + + + PowerPC System V ABI + -------------------- + +There are two `System V ABI's which libffi implements for PowerPC. +They differ only in how small structures are returned from functions. + +In the FFI_SYSV version, structures that are 8 bytes or smaller are +returned in registers. This is what GCC does when it is configured +for solaris, and is what the System V ABI I have (dated September +1995) says. + +In the FFI_GCC_SYSV version, all structures are returned the same way: +by passing a pointer as the first argument to the function. This is +what GCC does when it is configured for linux or a generic sysv +target. + +EGCS 1.0.1 (and probably other versions of EGCS/GCC) also has a +inconsistency with the SysV ABI: When a procedure is called with many +floating-point arguments, some of them get put on the stack. They are +all supposed to be stored in double-precision format, even if they are +only single-precision, but EGCS stores single-precision arguments as +single-precision anyway. This causes one test to fail (the `many +arguments' test). + + +What's With The Crazy Comments? +=============================== + +You might notice a number of cryptic comments in the code, delimited +by /*@ and @*/. These are annotations read by the program LCLint, a +tool for statically checking C programs. You can read all about it at +. + + +History +======= + +1.20 Oct-5-98 + Raffaele Sena produces ARM port. + +1.19 Oct-5-98 + Fixed x86 long double and long long return support. + m68k bug fixes from Andreas Schwab. + Patch for DU assembler compatibility for the Alpha from Richard + Henderson. + +1.18 Apr-17-98 + Bug fixes and MIPS configuration changes. + +1.17 Feb-24-98 + Bug fixes and m68k port from Andreas Schwab. PowerPC port from + Geoffrey Keating. Various bug x86, Sparc and MIPS bug fixes. + +1.16 Feb-11-98 + Richard Henderson produces Alpha port. + +1.15 Dec-4-97 + Fixed an n32 ABI bug. New libtool, auto* support. + +1.14 May-13-97 + libtool is now used to generate shared and static libraries. + Fixed a minor portability problem reported by Russ McManus + . + +1.13 Dec-2-96 + Added --enable-purify-safety to keep Purify from complaining + about certain low level code. + Sparc fix for calling functions with < 6 args. + Linux x86 a.out fix. + +1.12 Nov-22-96 + Added missing ffi_type_void, needed for supporting void return + types. Fixed test case for non MIPS machines. Cygnus Support + is now Cygnus Solutions. + +1.11 Oct-30-96 + Added notes about GNU make. + +1.10 Oct-29-96 + Added configuration fix for non GNU compilers. + +1.09 Oct-29-96 + Added --enable-debug configure switch. Clean-ups based on LCLint + feedback. ffi_mips.h is always installed. Many configuration + fixes. Fixed ffitest.c for sparc builds. + +1.08 Oct-15-96 + Fixed n32 problem. Many clean-ups. + +1.07 Oct-14-96 + Gordon Irlam rewrites v8.S again. Bug fixes. + +1.06 Oct-14-96 + Gordon Irlam improved the sparc port. + +1.05 Oct-14-96 + Interface changes based on feedback. + +1.04 Oct-11-96 + Sparc port complete (modulo struct passing bug). + +1.03 Oct-10-96 + Passing struct args, and returning struct values works for + all architectures/calling conventions. Expanded tests. + +1.02 Oct-9-96 + Added SGI n32 support. Fixed bugs in both o32 and Linux support. + Added "make test". 
+ +1.01 Oct-8-96 + Fixed float passing bug in mips version. Restructured some + of the code. Builds cleanly with SGI tools. + +1.00 Oct-7-96 + First release. No public announcement. + + +Authors & Credits +================= + +libffi was written by Anthony Green . + +Portions of libffi were derived from Gianni Mariani's free gencall +library for Silicon Graphics machines. + +The closure mechanism was designed and implemented by Kresten Krab +Thorup. + +The Sparc port was derived from code contributed by the fine folks at +Visible Decisions Inc . Further enhancements were +made by Gordon Irlam at Cygnus Solutions . + +The Alpha port was written by Richard Henderson at Cygnus Solutions. + +Andreas Schwab ported libffi to m68k Linux and provided a number of +bug fixes. + +Geoffrey Keating ported libffi to the PowerPC. + +Raffaele Sena ported libffi to the ARM. + +Jesper Skov and Andrew Haley both did more than their fair share of +stepping through the code and tracking down bugs. + +Thanks also to Tom Tromey for bug fixes and configuration help. + +Thanks to Jim Blandy, who provided some useful feedback on the libffi +interface. + +If you have a problem, or have found a bug, please send a note to +green at cygnus.com. Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/README.pyobjc ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/README.pyobjc Tue Mar 4 14:25:41 2008 @@ -0,0 +1,5 @@ +This directory contains a slightly modified version of libffi, extracted from +the GCC source-tree. + +The only modifications are those that are necessary to compile libffi using +the Apple provided compiler and outside of the GCC source tree. Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/ffi.c ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/ffi.c Tue Mar 4 14:25:41 2008 @@ -0,0 +1,226 @@ +/* ----------------------------------------------------------------------- + prep_cif.c - Copyright (c) 1996, 1998 Red Hat, Inc. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#include +#include + +#include +#include + +/* Round up to FFI_SIZEOF_ARG. */ +#define STACK_ARG_SIZE(x) ALIGN(x, FFI_SIZEOF_ARG) + +/* Perform machine independent initialization of aggregate type + specifications. 
*/ + +static ffi_status +initialize_aggregate( +/*@out@*/ ffi_type* arg) +{ +/*@-usedef@*/ + + if (arg == NULL || arg->elements == NULL || + arg->size != 0 || arg->alignment != 0) + return FFI_BAD_TYPEDEF; + + ffi_type** ptr = &(arg->elements[0]); + + while ((*ptr) != NULL) + { + if (((*ptr)->size == 0) && (initialize_aggregate(*ptr) != FFI_OK)) + return FFI_BAD_TYPEDEF; + + /* Perform a sanity check on the argument type */ + FFI_ASSERT_VALID_TYPE(*ptr); + +#ifdef POWERPC_DARWIN + int curalign = (*ptr)->alignment; + + if (ptr != &(arg->elements[0])) + { + if (curalign > 4 && curalign != 16) + curalign = 4; + } + + arg->size = ALIGN(arg->size, curalign); + arg->size += (*ptr)->size; + arg->alignment = (arg->alignment > curalign) ? + arg->alignment : curalign; +#else + arg->size = ALIGN(arg->size, (*ptr)->alignment); + arg->size += (*ptr)->size; + arg->alignment = (arg->alignment > (*ptr)->alignment) ? + arg->alignment : (*ptr)->alignment; +#endif + + ptr++; + } + + /* Structure size includes tail padding. This is important for + structures that fit in one register on ABIs like the PowerPC64 + Linux ABI that right justify small structs in a register. + It's also needed for nested structure layout, for example + struct A { long a; char b; }; struct B { struct A x; char y; }; + should find y at an offset of 2*sizeof(long) and result in a + total size of 3*sizeof(long). */ + arg->size = ALIGN(arg->size, arg->alignment); + + if (arg->size == 0) + return FFI_BAD_TYPEDEF; + + return FFI_OK; + +/*@=usedef@*/ +} + +#ifndef __CRIS__ +/* The CRIS ABI specifies structure elements to have byte + alignment only, so it completely overrides this functions, + which assumes "natural" alignment and padding. */ + +/* Perform machine independent ffi_cif preparation, then call + machine dependent routine. */ + +#if defined(X86_DARWIN) + +static inline bool +struct_on_stack( + int size) +{ + if (size > 8) + return true; + + /* This is not what the ABI says, but is what is really implemented */ + switch (size) + { + case 1: + case 2: + case 4: + case 8: + return false; + + default: + return true; + } +} + +#endif // defined(X86_DARWIN) + +// Arguments' ffi_type->alignment must be nonzero. +ffi_status +ffi_prep_cif( +/*@out@*/ /*@partial@*/ ffi_cif* cif, + ffi_abi abi, + unsigned int nargs, +/*@dependent@*/ /*@out@*/ /*@partial@*/ ffi_type* rtype, +/*@dependent@*/ ffi_type** atypes) +{ + if (cif == NULL) + return FFI_BAD_TYPEDEF; + + if (abi <= FFI_FIRST_ABI || abi > FFI_DEFAULT_ABI) + return FFI_BAD_ABI; + + unsigned int bytes = 0; + unsigned int i; + ffi_type** ptr; + + cif->abi = abi; + cif->arg_types = atypes; + cif->nargs = nargs; + cif->rtype = rtype; + cif->flags = 0; + + /* Initialize the return type if necessary */ + /*@-usedef@*/ + if ((cif->rtype->size == 0) && (initialize_aggregate(cif->rtype) != FFI_OK)) + return FFI_BAD_TYPEDEF; + /*@=usedef@*/ + + /* Perform a sanity check on the return type */ + FFI_ASSERT_VALID_TYPE(cif->rtype); + + /* x86-64 and s390 stack space allocation is handled in prep_machdep. 
*/ +#if !defined M68K && !defined __x86_64__ && !defined S390 && !defined PA + /* Make space for the return structure pointer */ + if (cif->rtype->type == FFI_TYPE_STRUCT +#ifdef SPARC + && (cif->abi != FFI_V9 || cif->rtype->size > 32) +#endif +#ifdef X86_DARWIN + && (struct_on_stack(cif->rtype->size)) +#endif + ) + bytes = STACK_ARG_SIZE(sizeof(void*)); +#endif + + for (ptr = cif->arg_types, i = cif->nargs; i > 0; i--, ptr++) + { + /* Initialize any uninitialized aggregate type definitions */ + if (((*ptr)->size == 0) && (initialize_aggregate((*ptr)) != FFI_OK)) + return FFI_BAD_TYPEDEF; + + if ((*ptr)->alignment == 0) + return FFI_BAD_TYPEDEF; + + /* Perform a sanity check on the argument type, do this + check after the initialization. */ + FFI_ASSERT_VALID_TYPE(*ptr); + +#if defined(X86_DARWIN) + { + int align = (*ptr)->alignment; + + if (align > 4) + align = 4; + + if ((align - 1) & bytes) + bytes = ALIGN(bytes, align); + + bytes += STACK_ARG_SIZE((*ptr)->size); + } +#elif !defined __x86_64__ && !defined S390 && !defined PA +#ifdef SPARC + if (((*ptr)->type == FFI_TYPE_STRUCT + && ((*ptr)->size > 16 || cif->abi != FFI_V9)) + || ((*ptr)->type == FFI_TYPE_LONGDOUBLE + && cif->abi != FFI_V9)) + bytes += sizeof(void*); + else +#endif + { + /* Add any padding if necessary */ + if (((*ptr)->alignment - 1) & bytes) + bytes = ALIGN(bytes, (*ptr)->alignment); + + bytes += STACK_ARG_SIZE((*ptr)->size); + } +#endif + } + + cif->bytes = bytes; + + /* Perform machine dependent cif processing */ + return ffi_prep_cif_machdep(cif); +} +#endif /* not __CRIS__ */ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,352 @@ +/* -----------------------------------------------------------------*-C-*- + libffi PyOBJC - Copyright (c) 1996-2003 Red Hat, Inc. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + + ----------------------------------------------------------------------- */ + +/* ------------------------------------------------------------------- + The basic API is described in the README file. + + The raw API is designed to bypass some of the argument packing + and unpacking on architectures for which it can be avoided. 
+ + The closure API allows interpreted functions to be packaged up + inside a C function pointer, so that they can be called as C functions, + with no understanding on the client side that they are interpreted. + It can also be used in other cases in which it is necessary to package + up a user specified parameter and a function pointer as a single + function pointer. + + The closure API must be implemented in order to get its functionality, + e.g. for use by gij. Routines are provided to emulate the raw API + if the underlying platform doesn't allow faster implementation. + + More details on the raw and closure API can be found in: + + http://gcc.gnu.org/ml/java/1999-q3/msg00138.html + + and + + http://gcc.gnu.org/ml/java/1999-q3/msg00174.html + -------------------------------------------------------------------- */ + +#ifndef LIBFFI_H +#define LIBFFI_H + +#ifdef __cplusplus +extern "C" { +#endif + +/* Specify which architecture libffi is configured for. */ +#ifdef MACOSX +# if defined(__i386__) || defined(__x86_64__) +# define X86_DARWIN +# elif defined(__ppc__) || defined(__ppc64__) +# define POWERPC_DARWIN +# else +# error "Unsupported MacOS X CPU type" +# endif +#else +#error "Unsupported OS type" +#endif + +/* ---- System configuration information --------------------------------- */ + +#include "ffitarget.h" +#include "fficonfig.h" + +#ifndef LIBFFI_ASM + +#include +#include + +/* LONG_LONG_MAX is not always defined (not if STRICT_ANSI, for example). + But we can find it either under the correct ANSI name, or under GNU + C's internal name. */ +#ifdef LONG_LONG_MAX +# define FFI_LONG_LONG_MAX LONG_LONG_MAX +#else +# ifdef LLONG_MAX +# define FFI_LONG_LONG_MAX LLONG_MAX +# else +# ifdef __GNUC__ +# define FFI_LONG_LONG_MAX __LONG_LONG_MAX__ +# endif +# endif +#endif + +#if SCHAR_MAX == 127 +# define ffi_type_uchar ffi_type_uint8 +# define ffi_type_schar ffi_type_sint8 +#else +#error "char size not supported" +#endif + +#if SHRT_MAX == 32767 +# define ffi_type_ushort ffi_type_uint16 +# define ffi_type_sshort ffi_type_sint16 +#elif SHRT_MAX == 2147483647 +# define ffi_type_ushort ffi_type_uint32 +# define ffi_type_sshort ffi_type_sint32 +#else +#error "short size not supported" +#endif + +#if INT_MAX == 32767 +# define ffi_type_uint ffi_type_uint16 +# define ffi_type_sint ffi_type_sint16 +#elif INT_MAX == 2147483647 +# define ffi_type_uint ffi_type_uint32 +# define ffi_type_sint ffi_type_sint32 +#elif INT_MAX == 9223372036854775807 +# define ffi_type_uint ffi_type_uint64 +# define ffi_type_sint ffi_type_sint64 +#else +#error "int size not supported" +#endif + +#define ffi_type_ulong ffi_type_uint64 +#define ffi_type_slong ffi_type_sint64 + +#if LONG_MAX == 2147483647 +# if FFI_LONG_LONG_MAX != 9223372036854775807 +# error "no 64-bit data type supported" +# endif +#elif LONG_MAX != 9223372036854775807 +#error "long size not supported" +#endif + +/* The closure code assumes that this works on pointers, i.e. a size_t + can hold a pointer. 
*/ + +typedef struct _ffi_type { + size_t size; + unsigned short alignment; + unsigned short type; +/*@null@*/ struct _ffi_type** elements; +} ffi_type; + +/* These are defined in types.c */ +extern ffi_type ffi_type_void; +extern ffi_type ffi_type_uint8; +extern ffi_type ffi_type_sint8; +extern ffi_type ffi_type_uint16; +extern ffi_type ffi_type_sint16; +extern ffi_type ffi_type_uint32; +extern ffi_type ffi_type_sint32; +extern ffi_type ffi_type_uint64; +extern ffi_type ffi_type_sint64; +extern ffi_type ffi_type_float; +extern ffi_type ffi_type_double; +extern ffi_type ffi_type_longdouble; +extern ffi_type ffi_type_pointer; + +typedef enum ffi_status { + FFI_OK = 0, + FFI_BAD_TYPEDEF, + FFI_BAD_ABI +} ffi_status; + +typedef unsigned FFI_TYPE; + +typedef struct ffi_cif { + ffi_abi abi; + unsigned nargs; +/*@dependent@*/ ffi_type** arg_types; +/*@dependent@*/ ffi_type* rtype; + unsigned bytes; + unsigned flags; +#ifdef FFI_EXTRA_CIF_FIELDS + FFI_EXTRA_CIF_FIELDS; +#endif +} ffi_cif; + +/* ---- Definitions for the raw API -------------------------------------- */ + +#ifndef FFI_SIZEOF_ARG +# if LONG_MAX == 2147483647 +# define FFI_SIZEOF_ARG 4 +# elif LONG_MAX == 9223372036854775807 +# define FFI_SIZEOF_ARG 8 +# endif +#endif + +typedef union { + ffi_sarg sint; + ffi_arg uint; + float flt; + char data[FFI_SIZEOF_ARG]; + void* ptr; +} ffi_raw; + +void +ffi_raw_call( +/*@dependent@*/ ffi_cif* cif, + void (*fn)(void), +/*@out@*/ void* rvalue, +/*@dependent@*/ ffi_raw* avalue); + +void +ffi_ptrarray_to_raw( + ffi_cif* cif, + void** args, + ffi_raw* raw); + +void +ffi_raw_to_ptrarray( + ffi_cif* cif, + ffi_raw* raw, + void** args); + +size_t +ffi_raw_size( + ffi_cif* cif); + +/* This is analogous to the raw API, except it uses Java parameter + packing, even on 64-bit machines. I.e. on 64-bit machines + longs and doubles are followed by an empty 64-bit word. */ +void +ffi_java_raw_call( +/*@dependent@*/ ffi_cif* cif, + void (*fn)(void), +/*@out@*/ void* rvalue, +/*@dependent@*/ ffi_raw* avalue); + +void +ffi_java_ptrarray_to_raw( + ffi_cif* cif, + void** args, + ffi_raw* raw); + +void +ffi_java_raw_to_ptrarray( + ffi_cif* cif, + ffi_raw* raw, + void** args); + +size_t +ffi_java_raw_size( + ffi_cif* cif); + +/* ---- Definitions for closures ----------------------------------------- */ + +#if FFI_CLOSURES + +typedef struct ffi_closure { + char tramp[FFI_TRAMPOLINE_SIZE]; + ffi_cif* cif; + void (*fun)(ffi_cif*,void*,void**,void*); + void* user_data; +} ffi_closure; + +ffi_status +ffi_prep_closure( + ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void* user_data); + +typedef struct ffi_raw_closure { + char tramp[FFI_TRAMPOLINE_SIZE]; + ffi_cif* cif; + +#if !FFI_NATIVE_RAW_API + /* if this is enabled, then a raw closure has the same layout + as a regular closure. We use this to install an intermediate + handler to do the transaltion, void** -> ffi_raw*. 
*/ + void (*translate_args)(ffi_cif*,void*,void**,void*); + void* this_closure; +#endif + + void (*fun)(ffi_cif*,void*,ffi_raw*,void*); + void* user_data; +} ffi_raw_closure; + +ffi_status +ffi_prep_raw_closure( + ffi_raw_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,ffi_raw*,void*), + void* user_data); + +ffi_status +ffi_prep_java_raw_closure( + ffi_raw_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,ffi_raw*,void*), + void* user_data); + +#endif // FFI_CLOSURES + +/* ---- Public interface definition -------------------------------------- */ + +ffi_status +ffi_prep_cif( +/*@out@*/ /*@partial@*/ ffi_cif* cif, + ffi_abi abi, + unsigned int nargs, +/*@dependent@*/ /*@out@*/ /*@partial@*/ ffi_type* rtype, +/*@dependent@*/ ffi_type** atypes); + +void +ffi_call( +/*@dependent@*/ ffi_cif* cif, + void (*fn)(void), +/*@out@*/ void* rvalue, +/*@dependent@*/ void** avalue); + +/* Useful for eliminating compiler warnings */ +#define FFI_FN(f) ((void (*)(void))f) + +#endif // #ifndef LIBFFI_ASM +/* ---- Definitions shared with assembly code ---------------------------- */ + +/* If these change, update src/mips/ffitarget.h. */ +#define FFI_TYPE_VOID 0 +#define FFI_TYPE_INT 1 +#define FFI_TYPE_FLOAT 2 +#define FFI_TYPE_DOUBLE 3 + +#ifdef HAVE_LONG_DOUBLE +# define FFI_TYPE_LONGDOUBLE 4 +#else +# define FFI_TYPE_LONGDOUBLE FFI_TYPE_DOUBLE +#endif + +#define FFI_TYPE_UINT8 5 +#define FFI_TYPE_SINT8 6 +#define FFI_TYPE_UINT16 7 +#define FFI_TYPE_SINT16 8 +#define FFI_TYPE_UINT32 9 +#define FFI_TYPE_SINT32 10 +#define FFI_TYPE_UINT64 11 +#define FFI_TYPE_SINT64 12 +#define FFI_TYPE_STRUCT 13 +#define FFI_TYPE_POINTER 14 + +/* This should always refer to the last type code (for sanity checks) */ +#define FFI_TYPE_LAST FFI_TYPE_POINTER + +#ifdef __cplusplus +} +#endif + +#endif // #ifndef LIBFFI_H \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi_common.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffi_common.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,102 @@ +/* ----------------------------------------------------------------------- + ffi_common.h - Copyright (c) 1996 Red Hat, Inc. + + Common internal definitions and macros. Only necessary for building + libffi. + ----------------------------------------------------------------------- */ + +#ifndef FFI_COMMON_H +#define FFI_COMMON_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "fficonfig.h" + +/* Do not move this. Some versions of AIX are very picky about where + this is positioned. */ +#ifdef __GNUC__ +# define alloca __builtin_alloca +#else +# if HAVE_ALLOCA_H +# include +# else +# ifdef _AIX +# pragma alloca +# else +# ifndef alloca /* predefined by HP cc +Olibcalls */ +char* alloca(); +# endif +# endif +# endif +#endif + +/* Check for the existence of memcpy. */ +#if STDC_HEADERS +# include +#else +# ifndef HAVE_MEMCPY +# define memcpy(d, s, n) bcopy((s), (d), (n)) +# endif +#endif + +/*#if defined(FFI_DEBUG) +#include +#endif*/ + +#ifdef FFI_DEBUG +#include + +/*@exits@*/ void +ffi_assert( +/*@temp@*/ char* expr, +/*@temp@*/ char* file, + int line); +void +ffi_stop_here(void); +void +ffi_type_test( +/*@temp@*/ /*@out@*/ ffi_type* a, +/*@temp@*/ char* file, + int line); + +# define FFI_ASSERT(x) ((x) ? (void)0 : ffi_assert(#x, __FILE__,__LINE__)) +# define FFI_ASSERT_AT(x, f, l) ((x) ? 
0 : ffi_assert(#x, (f), (l))) +# define FFI_ASSERT_VALID_TYPE(x) ffi_type_test(x, __FILE__, __LINE__) +#else +# define FFI_ASSERT(x) +# define FFI_ASSERT_AT(x, f, l) +# define FFI_ASSERT_VALID_TYPE(x) +#endif // #ifdef FFI_DEBUG + +#define ALIGN(v, a) (((size_t)(v) + (a) - 1) & ~((a) - 1)) + +/* Perform machine dependent cif processing */ +ffi_status +ffi_prep_cif_machdep( + ffi_cif* cif); + +/* Extended cif, used in callback from assembly routine */ +typedef struct extended_cif { +/*@dependent@*/ ffi_cif* cif; +/*@dependent@*/ void* rvalue; +/*@dependent@*/ void** avalue; +} extended_cif; + +/* Terse sized type definitions. */ +typedef unsigned int UINT8 __attribute__((__mode__(__QI__))); +typedef signed int SINT8 __attribute__((__mode__(__QI__))); +typedef unsigned int UINT16 __attribute__((__mode__(__HI__))); +typedef signed int SINT16 __attribute__((__mode__(__HI__))); +typedef unsigned int UINT32 __attribute__((__mode__(__SI__))); +typedef signed int SINT32 __attribute__((__mode__(__SI__))); +typedef unsigned int UINT64 __attribute__((__mode__(__DI__))); +typedef signed int SINT64 __attribute__((__mode__(__DI__))); +typedef float FLOAT32; + +#ifdef __cplusplus +} +#endif + +#endif // #ifndef FFI_COMMON_H \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/fficonfig.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/fficonfig.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,150 @@ +/* Manually created fficonfig.h for Darwin on PowerPC or Intel + + This file is manually generated to do away with the need for autoconf and + therefore make it easier to cross-compile and build fat binaries. + + NOTE: This file was added by PyObjC. +*/ + +#ifndef MACOSX +#error "This file is only supported on Mac OS X" +#endif + +#if defined(__i386__) +# define BYTEORDER 1234 +# undef HOST_WORDS_BIG_ENDIAN +# undef WORDS_BIGENDIAN +# define SIZEOF_DOUBLE 8 +# define HAVE_LONG_DOUBLE 1 +# define SIZEOF_LONG_DOUBLE 16 + +#elif defined(__x86_64__) +# define BYTEORDER 1234 +# undef HOST_WORDS_BIG_ENDIAN +# undef WORDS_BIGENDIAN +# define SIZEOF_DOUBLE 8 +# define HAVE_LONG_DOUBLE 1 +# define SIZEOF_LONG_DOUBLE 16 + +#elif defined(__ppc__) +# define BYTEORDER 4321 +# define HOST_WORDS_BIG_ENDIAN 1 +# define WORDS_BIGENDIAN 1 +# define SIZEOF_DOUBLE 8 +# if __GNUC__ >= 4 +# define HAVE_LONG_DOUBLE 1 +# define SIZEOF_LONG_DOUBLE 16 +# else +# undef HAVE_LONG_DOUBLE +# define SIZEOF_LONG_DOUBLE 8 +# endif + +#elif defined(__ppc64__) +# define BYTEORDER 4321 +# define HOST_WORDS_BIG_ENDIAN 1 +# define WORDS_BIGENDIAN 1 +# define SIZEOF_DOUBLE 8 +# define HAVE_LONG_DOUBLE 1 +# define SIZEOF_LONG_DOUBLE 16 + +#else +#error "Unknown CPU type" +#endif + +/* Define to one of `_getb67', `GETB67', `getb67' for Cray-2 and Cray-YMP + systems. This function is required for `alloca.c' support on those systems. */ +#undef CRAY_STACKSEG_END + +/* Define to 1 if using `alloca.c'. */ +/* #undef C_ALLOCA */ + +/* Define to the flags needed for the .section .eh_frame directive. */ +#define EH_FRAME_FLAGS "aw" + +/* Define this if you want extra debugging. */ +/* #undef FFI_DEBUG */ + +/* Define this is you do not want support for the raw API. */ +#define FFI_NO_RAW_API 1 + +/* Define this if you do not want support for aggregate types. */ +/* #undef FFI_NO_STRUCTS */ + +/* Define to 1 if you have `alloca', as a function or macro. 
*/ +#define HAVE_ALLOCA 1 + +/* Define to 1 if you have and it should be used (not on Ultrix). */ +#define HAVE_ALLOCA_H 1 + +/* Define if your assembler supports .register. */ +/* #undef HAVE_AS_REGISTER_PSEUDO_OP */ + +/* Define if your assembler and linker support unaligned PC relative relocs. */ +/* #undef HAVE_AS_SPARC_UA_PCREL */ + +/* Define to 1 if you have the `memcpy' function. */ +#define HAVE_MEMCPY 1 + +/* Define if mmap with MAP_ANON(YMOUS) works. */ +#define HAVE_MMAP_ANON 1 + +/* Define if mmap of /dev/zero works. */ +/* #undef HAVE_MMAP_DEV_ZERO */ + +/* Define if read-only mmap of a plain file works. */ +#define HAVE_MMAP_FILE 1 + +/* Define if .eh_frame sections should be read-only. */ +/* #undef HAVE_RO_EH_FRAME */ + +/* Define to 1 if your C compiler doesn't accept -c and -o together. */ +/* #undef NO_MINUS_C_MINUS_O */ + +/* Name of package */ +#define PACKAGE "libffi" + +/* Define to the address where bug reports for this package should be sent. */ +#define PACKAGE_BUGREPORT "http://gcc.gnu.org/bugs.html" + +/* Define to the full name of this package. */ +#define PACKAGE_NAME "libffi" + +/* Define to the full name and version of this package. */ +#define PACKAGE_STRING "libffi 2.1" + +/* Define to the one symbol short name of this package. */ +#define PACKAGE_TARNAME "libffi" + +/* Define to the version of this package. */ +#define PACKAGE_VERSION "2.1" + +/* If using the C implementation of alloca, define if you know the + direction of stack growth for your system; otherwise it will be + automatically deduced at run-time. + STACK_DIRECTION > 0 => grows toward higher addresses + STACK_DIRECTION < 0 => grows toward lower addresses + STACK_DIRECTION = 0 => direction of growth unknown */ +/* #undef STACK_DIRECTION */ + +/* Define to 1 if you have the ANSI C header files. */ +#define STDC_HEADERS 1 + +/* Define this if you are using Purify and want to suppress spurious messages. */ +/* #undef USING_PURIFY */ + +/* Version number of package */ +#define VERSION "2.1-pyobjc" + +#ifdef HAVE_HIDDEN_VISIBILITY_ATTRIBUTE +# ifdef LIBFFI_ASM +# define FFI_HIDDEN(name) .hidden name +# else +# define FFI_HIDDEN __attribute__((visibility ("hidden"))) +# endif +#else +# ifdef LIBFFI_ASM +# define FFI_HIDDEN(name) +# else +# define FFI_HIDDEN +# endif +#endif \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffitarget.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ffitarget.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,13 @@ +/* Dispatch to the right ffitarget file. This file is PyObjC specific; in a + normal build, the build environment copies the file to the right location or + sets up the right include flags. We want to do neither because that would + make building fat binaries harder. 
+*/ + +#if defined(__i386__) || defined(__x86_64__) +#include "x86-ffitarget.h" +#elif defined(__ppc__) || defined(__ppc64__) +#include "ppc-ffitarget.h" +#else +#error "Unsupported CPU type" +#endif \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ppc-ffitarget.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/ppc-ffitarget.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,104 @@ +/* -----------------------------------------------------------------*-C-*- + ppc-ffitarget.h - Copyright (c) 1996-2003 Red Hat, Inc. + Target configuration macros for PowerPC. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +/* ---- System specific configurations ----------------------------------- */ + +#if (defined(POWERPC) && defined(__powerpc64__)) || \ + (defined(POWERPC_DARWIN) && defined(__ppc64__)) +#define POWERPC64 +#endif + +#ifndef LIBFFI_ASM + +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + +#ifdef POWERPC + FFI_SYSV, + FFI_GCC_SYSV, + FFI_LINUX64, +# ifdef POWERPC64 + FFI_DEFAULT_ABI = FFI_LINUX64, +# else + FFI_DEFAULT_ABI = FFI_GCC_SYSV, +# endif +#endif + +#ifdef POWERPC_AIX + FFI_AIX, + FFI_DARWIN, + FFI_DEFAULT_ABI = FFI_AIX, +#endif + +#ifdef POWERPC_DARWIN + FFI_AIX, + FFI_DARWIN, + FFI_DEFAULT_ABI = FFI_DARWIN, +#endif + +#ifdef POWERPC_FREEBSD + FFI_SYSV, + FFI_GCC_SYSV, + FFI_LINUX64, + FFI_DEFAULT_ABI = FFI_SYSV, +#endif + + FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 +} ffi_abi; + +#endif // #ifndef LIBFFI_ASM + +/* ---- Definitions for closures ----------------------------------------- */ + +#define FFI_CLOSURES 1 +#define FFI_NATIVE_RAW_API 0 + +/* Needed for FFI_SYSV small structure returns. 
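The ffi_abi enumeration above ends with FFI_LAST_ABI defined as FFI_DEFAULT_ABI + 1, so each platform's preprocessor block decides which member the default aliases. A reduced sketch of that pattern, with illustrative sk_* names standing in for the real enumerators:

    #include <stdio.h>

    typedef enum sk_abi {
        SK_FIRST_ABI = 0,
        SK_AIX,                        /* 1 */
        SK_DARWIN,                     /* 2 */
        SK_DEFAULT_ABI = SK_DARWIN,    /* a Darwin build aliases its own ABI */
        SK_LAST_ABI = SK_DEFAULT_ABI + 1
    } sk_abi;

    int main(void)
    {
        printf("default=%d last=%d\n", (int)SK_DEFAULT_ABI, (int)SK_LAST_ABI);
        return 0;   /* prints: default=2 last=3 */
    }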
*/ +#define FFI_SYSV_TYPE_SMALL_STRUCT (FFI_TYPE_LAST) + +#if defined(POWERPC64) /*|| defined(POWERPC_AIX)*/ +# define FFI_TRAMPOLINE_SIZE 48 +#elif defined(POWERPC_AIX) +# define FFI_TRAMPOLINE_SIZE 24 +#else +# define FFI_TRAMPOLINE_SIZE 40 +#endif + +#ifndef LIBFFI_ASM +# if defined(POWERPC_DARWIN) || defined(POWERPC_AIX) +typedef struct ffi_aix_trampoline_struct { + void* code_pointer; /* Pointer to ffi_closure_ASM */ + void* toc; /* TOC */ + void* static_chain; /* Pointer to closure */ +} ffi_aix_trampoline_struct; +# endif +#endif // #ifndef LIBFFI_ASM + +#endif // #ifndef LIBFFI_TARGET_H \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/x86-ffitarget.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/include/x86-ffitarget.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,88 @@ +/* -----------------------------------------------------------------*-C-*- + x86-ffitarget.h - Copyright (c) 1996-2003 Red Hat, Inc. + Target configuration macros for x86 and x86-64. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. 
+ + ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +/* ---- System specific configurations ----------------------------------- */ + +#if defined(X86_64) && defined(__i386__) +# undef X86_64 +# define X86 +#endif + +#if defined(__x86_64__) +# ifndef X86_64 +# define X86_64 +# endif +#endif + +/* ---- Generic type definitions ----------------------------------------- */ + +#ifndef LIBFFI_ASM + +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + + /* ---- Intel x86 Win32 ---------- */ +#ifdef X86_WIN32 + FFI_SYSV, + FFI_STDCALL, + /* TODO: Add fastcall support for the sake of completeness */ + FFI_DEFAULT_ABI = FFI_SYSV, +#endif + + /* ---- Intel x86 and AMD x86-64 - */ +#if !defined(X86_WIN32) && (defined(__i386__) || defined(__x86_64__)) + FFI_SYSV, + FFI_UNIX64, /* Unix variants all use the same ABI for x86-64 */ +# ifdef __i386__ + FFI_DEFAULT_ABI = FFI_SYSV, +# else + FFI_DEFAULT_ABI = FFI_UNIX64, +# endif +#endif + + FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 +} ffi_abi; + +#endif // #ifndef LIBFFI_ASM + +/* ---- Definitions for closures ----------------------------------------- */ + +#define FFI_CLOSURES 1 + +#if defined(X86_64) || (defined(__x86_64__) && defined(X86_DARWIN)) +# define FFI_TRAMPOLINE_SIZE 24 +# define FFI_NATIVE_RAW_API 0 +#else +# define FFI_TRAMPOLINE_SIZE 10 +# define FFI_NATIVE_RAW_API 1 /* x86 has native raw api support */ +#endif + +#endif // #ifndef LIBFFI_TARGET_H \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.S ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.S Tue Mar 4 14:25:41 2008 @@ -0,0 +1,369 @@ +#if defined(__ppc__) || defined(__ppc64__) + +/* ----------------------------------------------------------------------- + darwin.S - Copyright (c) 2000 John Hornkvist + Copyright (c) 2004 Free Software Foundation, Inc. + + PowerPC Assembly glue. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#define LIBFFI_ASM + +#include +#include +#include +#include + +.text + .align 2 +.globl _ffi_prep_args + +.text + .align 2 +.globl _ffi_call_DARWIN + +.text + .align 2 +_ffi_call_DARWIN: +LFB0: + mr r12,r8 /* We only need r12 until the call, + so it doesn't have to be saved. 
*/ + +LFB1: + /* Save the old stack pointer as AP. */ + mr r8,r1 + +LCFI0: +#if defined(__ppc64__) + /* Allocate the stack space we need. + r4 (size of input data) + 48 bytes (linkage area) + 40 bytes (saved registers) + 8 bytes (extra FPR) + r4 + 96 bytes total + */ + + addi r4,r4,-96 // Add our overhead. + li r0,-32 // Align to 32 bytes. + and r4,r4,r0 +#endif + stgux r1,r1,r4 // Grow the stack. + mflr r9 + + /* Save registers we use. */ +#if defined(__ppc64__) + std r27,-40(r8) +#endif + stg r28,MODE_CHOICE(-16,-32)(r8) + stg r29,MODE_CHOICE(-12,-24)(r8) + stg r30,MODE_CHOICE(-8,-16)(r8) + stg r31,MODE_CHOICE(-4,-8)(r8) + stg r9,SF_RETURN(r8) /* return address */ +#if !defined(POWERPC_DARWIN) /* TOC unused in OS X */ + stg r2,MODE_CHOICE(20,40)(r1) +#endif + +LCFI1: +#if defined(__ppc64__) + mr r27,r3 // our extended_cif +#endif + /* Save arguments over call. */ + mr r31,r5 /* flags, */ + mr r30,r6 /* rvalue, */ + mr r29,r7 /* function address, */ + mr r28,r8 /* our AP. */ + +LCFI2: + /* Call ffi_prep_args. */ + mr r4,r1 + li r9,0 + mtctr r12 /* r12 holds address of _ffi_prep_args. */ + bctrl +#if !defined(POWERPC_DARWIN) /* TOC unused in OS X */ + lg r2,MODE_CHOICE(20,40)(r1) +#endif + + /* Now do the call. + Set up cr1 with bits 4-7 of the flags. */ + mtcrf 0x40,r31 + + /* Load all those argument registers. + We have set up a nice stack frame, just load it into registers. */ + lg r3,SF_ARG1(r1) + lg r4,SF_ARG2(r1) + lg r5,SF_ARG3(r1) + lg r6,SF_ARG4(r1) + nop + lg r7,SF_ARG5(r1) + lg r8,SF_ARG6(r1) + lg r9,SF_ARG7(r1) + lg r10,SF_ARG8(r1) + + /* Load all the FP registers. */ + bf 6,L2 /* No floats to load. */ +#if defined(__ppc64__) + lfd f1,MODE_CHOICE(-16,-40)-(14*8)(r28) + lfd f2,MODE_CHOICE(-16,-40)-(13*8)(r28) + lfd f3,MODE_CHOICE(-16,-40)-(12*8)(r28) + lfd f4,MODE_CHOICE(-16,-40)-(11*8)(r28) + nop + lfd f5,MODE_CHOICE(-16,-40)-(10*8)(r28) + lfd f6,MODE_CHOICE(-16,-40)-(9*8)(r28) + lfd f7,MODE_CHOICE(-16,-40)-(8*8)(r28) + lfd f8,MODE_CHOICE(-16,-40)-(7*8)(r28) + nop + lfd f9,MODE_CHOICE(-16,-40)-(6*8)(r28) + lfd f10,MODE_CHOICE(-16,-40)-(5*8)(r28) + lfd f11,MODE_CHOICE(-16,-40)-(4*8)(r28) + lfd f12,MODE_CHOICE(-16,-40)-(3*8)(r28) + nop + lfd f13,MODE_CHOICE(-16,-40)-(2*8)(r28) + lfd f14,MODE_CHOICE(-16,-40)-(1*8)(r28) +#elif defined(__ppc__) + lfd f1,MODE_CHOICE(-16,-40)-(13*8)(r28) + lfd f2,MODE_CHOICE(-16,-40)-(12*8)(r28) + lfd f3,MODE_CHOICE(-16,-40)-(11*8)(r28) + lfd f4,MODE_CHOICE(-16,-40)-(10*8)(r28) + nop + lfd f5,MODE_CHOICE(-16,-40)-(9*8)(r28) + lfd f6,MODE_CHOICE(-16,-40)-(8*8)(r28) + lfd f7,MODE_CHOICE(-16,-40)-(7*8)(r28) + lfd f8,MODE_CHOICE(-16,-40)-(6*8)(r28) + nop + lfd f9,MODE_CHOICE(-16,-40)-(5*8)(r28) + lfd f10,MODE_CHOICE(-16,-40)-(4*8)(r28) + lfd f11,MODE_CHOICE(-16,-40)-(3*8)(r28) + lfd f12,MODE_CHOICE(-16,-40)-(2*8)(r28) + nop + lfd f13,MODE_CHOICE(-16,-40)-(1*8)(r28) +#else +#error undefined architecture +#endif + +L2: + mr r12,r29 // Put the target address in r12 as specified. + mtctr r12 // Get the address to call into CTR. + nop + nop + bctrl // Make the call. + + // Deal with the return value. +#if defined(__ppc64__) + mtcrf 0x3,r31 // flags in cr6 and cr7 + bt 27,L(st_return_value) +#elif defined(__ppc__) + mtcrf 0x1,r31 // flags in cr7 +#else +#error undefined architecture +#endif + + bt 30,L(done_return_value) + bt 29,L(fp_return_value) + stg r3,0(r30) +#if defined(__ppc__) + bf 28,L(done_return_value) // Store the second long if necessary. + stg r4,4(r30) +#endif + // Fall through + +L(done_return_value): + lg r1,0(r1) // Restore stack pointer. 
+ // Restore the registers we used. + lg r9,SF_RETURN(r1) // return address + lg r31,MODE_CHOICE(-4,-8)(r1) + mtlr r9 + lg r30,MODE_CHOICE(-8,-16)(r1) + lg r29,MODE_CHOICE(-12,-24)(r1) + lg r28,MODE_CHOICE(-16,-32)(r1) +#if defined(__ppc64__) + ld r27,-40(r1) +#endif + blr + +#if defined(__ppc64__) +L(st_return_value): + // Grow the stack enough to fit the registers. Leave room for 8 args + // to trample the 1st 8 slots in param area. + stgu r1,-SF_ROUND(280)(r1) // 64 + 104 + 48 + 64 + + // Store GPRs + std r3,SF_ARG9(r1) + std r4,SF_ARG10(r1) + std r5,SF_ARG11(r1) + std r6,SF_ARG12(r1) + nop + std r7,SF_ARG13(r1) + std r8,SF_ARG14(r1) + std r9,SF_ARG15(r1) + std r10,SF_ARG16(r1) + + // Store FPRs + nop + bf 26,L(call_struct_to_ram_form) + stfd f1,SF_ARG17(r1) + stfd f2,SF_ARG18(r1) + stfd f3,SF_ARG19(r1) + stfd f4,SF_ARG20(r1) + nop + stfd f5,SF_ARG21(r1) + stfd f6,SF_ARG22(r1) + stfd f7,SF_ARG23(r1) + stfd f8,SF_ARG24(r1) + nop + stfd f9,SF_ARG25(r1) + stfd f10,SF_ARG26(r1) + stfd f11,SF_ARG27(r1) + stfd f12,SF_ARG28(r1) + nop + stfd f13,SF_ARG29(r1) + +L(call_struct_to_ram_form): + ld r3,0(r27) // extended_cif->cif* + ld r3,16(r3) // ffi_cif->rtype* + addi r4,r1,SF_ARG9 // stored GPRs + addi r6,r1,SF_ARG17 // stored FPRs + li r5,0 // GPR size ptr (NULL) + li r7,0 // FPR size ptr (NULL) + li r8,0 // FPR count ptr (NULL) + li r10,0 // struct offset (NULL) + mr r9,r30 // return area + bl Lffi64_struct_to_ram_form$stub + lg r1,0(r1) // Restore stack pointer. + b L(done_return_value) +#endif + +L(fp_return_value): + /* Do we have long double to store? */ + bf 31,L(fd_return_value) + stfd f1,0(r30) + stfd f2,8(r30) + b L(done_return_value) + +L(fd_return_value): + /* Do we have double to store? */ + bf 28,L(float_return_value) + stfd f1,0(r30) + b L(done_return_value) + +L(float_return_value): + /* We only have a float to store. */ + stfs f1,0(r30) + b L(done_return_value) + +LFE1: +/* END(_ffi_call_DARWIN) */ + +/* Provide a null definition of _ffi_call_AIX. */ +.text + .align 2 +.globl _ffi_call_AIX +.text + .align 2 +_ffi_call_AIX: + blr +/* END(_ffi_call_AIX) */ + +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms +EH_frame1: + .set L$set$0,LECIE1-LSCIE1 + .long L$set$0 ; Length of Common Information Entry +LSCIE1: + .long 0x0 ; CIE Identifier Tag + .byte 0x1 ; CIE Version + .ascii "zR\0" ; CIE Augmentation + .byte 0x1 ; uleb128 0x1; CIE Code Alignment Factor + .byte 0x7c ; sleb128 -4; CIE Data Alignment Factor + .byte 0x41 ; CIE RA Column + .byte 0x1 ; uleb128 0x1; Augmentation size + .byte 0x90 ; FDE Encoding (indirect pcrel) + .byte 0xc ; DW_CFA_def_cfa + .byte 0x1 ; uleb128 0x1 + .byte 0x0 ; uleb128 0x0 + .align LOG2_GPR_BYTES +LECIE1: +.globl _ffi_call_DARWIN.eh +_ffi_call_DARWIN.eh: +LSFDE1: + .set L$set$1,LEFDE1-LASFDE1 + .long L$set$1 ; FDE Length + +LASFDE1: + .long LASFDE1-EH_frame1 ; FDE CIE offset + .g_long LLFB0$non_lazy_ptr-. 
; FDE initial location + .set L$set$3,LFE1-LFB0 + .g_long L$set$3 ; FDE address range + .byte 0x0 ; uleb128 0x0; Augmentation size + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$4,LCFI0-LFB1 + .long L$set$4 + .byte 0xd ; DW_CFA_def_cfa_register + .byte 0x08 ; uleb128 0x08 + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$5,LCFI1-LCFI0 + .long L$set$5 + .byte 0x11 ; DW_CFA_offset_extended_sf + .byte 0x41 ; uleb128 0x41 + .byte 0x7e ; sleb128 -2 + .byte 0x9f ; DW_CFA_offset, column 0x1f + .byte 0x1 ; uleb128 0x1 + .byte 0x9e ; DW_CFA_offset, column 0x1e + .byte 0x2 ; uleb128 0x2 + .byte 0x9d ; DW_CFA_offset, column 0x1d + .byte 0x3 ; uleb128 0x3 + .byte 0x9c ; DW_CFA_offset, column 0x1c + .byte 0x4 ; uleb128 0x4 + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$6,LCFI2-LCFI1 + .long L$set$6 + .byte 0xd ; DW_CFA_def_cfa_register + .byte 0x1c ; uleb128 0x1c + .align LOG2_GPR_BYTES +LEFDE1: +.data + .align LOG2_GPR_BYTES +LLFB0$non_lazy_ptr: + .g_long LFB0 + +#if defined(__ppc64__) +.section __TEXT,__picsymbolstub1,symbol_stubs,pure_instructions,32 + .align LOG2_GPR_BYTES + +Lffi64_struct_to_ram_form$stub: + .indirect_symbol _ffi64_struct_to_ram_form + mflr r0 + bcl 20,31,LO$ffi64_struct_to_ram_form + +LO$ffi64_struct_to_ram_form: + mflr r11 + addis r11,r11,ha16(L_ffi64_struct_to_ram_form$lazy_ptr - LO$ffi64_struct_to_ram_form) + mtlr r0 + lgu r12,lo16(L_ffi64_struct_to_ram_form$lazy_ptr - LO$ffi64_struct_to_ram_form)(r11) + mtctr r12 + bctr + +.lazy_symbol_pointer +L_ffi64_struct_to_ram_form$lazy_ptr: + .indirect_symbol _ffi64_struct_to_ram_form + .g_long dyld_stub_binding_helper + +#endif // __ppc64__ +#endif // __ppc__ || __ppc64__ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.h ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin.h Tue Mar 4 14:25:41 2008 @@ -0,0 +1,106 @@ +/* ----------------------------------------------------------------------- + ppc-darwin.h - Copyright (c) 2002, 2003, 2004, Free Software Foundation, + Inc. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ + + +#define L(x) x + +#define SF_ARG9 MODE_CHOICE(56,112) +#define SF_ARG10 MODE_CHOICE(60,120) +#define SF_ARG11 MODE_CHOICE(64,128) +#define SF_ARG12 MODE_CHOICE(68,136) +#define SF_ARG13 MODE_CHOICE(72,144) +#define SF_ARG14 MODE_CHOICE(76,152) +#define SF_ARG15 MODE_CHOICE(80,160) +#define SF_ARG16 MODE_CHOICE(84,168) +#define SF_ARG17 MODE_CHOICE(88,176) +#define SF_ARG18 MODE_CHOICE(92,184) +#define SF_ARG19 MODE_CHOICE(96,192) +#define SF_ARG20 MODE_CHOICE(100,200) +#define SF_ARG21 MODE_CHOICE(104,208) +#define SF_ARG22 MODE_CHOICE(108,216) +#define SF_ARG23 MODE_CHOICE(112,224) +#define SF_ARG24 MODE_CHOICE(116,232) +#define SF_ARG25 MODE_CHOICE(120,240) +#define SF_ARG26 MODE_CHOICE(124,248) +#define SF_ARG27 MODE_CHOICE(128,256) +#define SF_ARG28 MODE_CHOICE(132,264) +#define SF_ARG29 MODE_CHOICE(136,272) + +#define ASM_NEEDS_REGISTERS 4 +#define NUM_GPR_ARG_REGISTERS 8 +#define NUM_FPR_ARG_REGISTERS 13 + +#define FFI_TYPE_1_BYTE(x) ((x) == FFI_TYPE_UINT8 || (x) == FFI_TYPE_SINT8) +#define FFI_TYPE_2_BYTE(x) ((x) == FFI_TYPE_UINT16 || (x) == FFI_TYPE_SINT16) +#define FFI_TYPE_4_BYTE(x) \ + ((x) == FFI_TYPE_UINT32 || (x) == FFI_TYPE_SINT32 ||\ + (x) == FFI_TYPE_INT || (x) == FFI_TYPE_FLOAT) + + +#if !defined(LIBFFI_ASM) + +enum { + FLAG_RETURNS_NOTHING = 1 << (31 - 30), // cr7 + FLAG_RETURNS_FP = 1 << (31 - 29), + FLAG_RETURNS_64BITS = 1 << (31 - 28), + FLAG_RETURNS_128BITS = 1 << (31 - 31), + + FLAG_RETURNS_STRUCT = 1 << (31 - 27), // cr6 + FLAG_STRUCT_CONTAINS_FP = 1 << (31 - 26), + + FLAG_ARG_NEEDS_COPY = 1 << (31 - 7), + FLAG_FP_ARGUMENTS = 1 << (31 - 6), // cr1.eq; specified by ABI + FLAG_4_GPR_ARGUMENTS = 1 << (31 - 5), + FLAG_RETVAL_REFERENCE = 1 << (31 - 4) +}; + + +void ffi_prep_args(extended_cif* inEcif, unsigned *const stack); + +typedef union +{ + float f; + double d; +} ffi_dblfl; + +int ffi_closure_helper_DARWIN( ffi_closure* closure, + void* rvalue, unsigned long* pgr, + ffi_dblfl* pfr); + + +#if defined(__ppc64__) +void ffi64_struct_to_ram_form(const ffi_type*, const char*, unsigned int*, + const char*, unsigned int*, unsigned int*, char*, unsigned int*); +void ffi64_struct_to_reg_form(const ffi_type*, const char*, unsigned int*, + unsigned int*, char*, unsigned int*, char*, unsigned int*); +bool ffi64_stret_needs_ptr(const ffi_type* inType, + unsigned short*, unsigned short*); +bool ffi64_struct_contains_fp(const ffi_type* inType); +unsigned int ffi64_data_size(const ffi_type* inType); + + + +#endif + +#endif // !defined(LIBFFI_ASM) Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin_closure.S ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-darwin_closure.S Tue Mar 4 14:25:41 2008 @@ -0,0 +1,325 @@ +#if defined(__ppc__) + +/* ----------------------------------------------------------------------- + darwin_closure.S - Copyright (c) 2002, 2003, 2004, Free Software Foundation, + Inc. based on ppc_closure.S + + PowerPC Assembly glue. 
+ + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#define LIBFFI_ASM + +#include +#include // for FFI_TRAMPOLINE_SIZE +#include +#include + + .file "ppc-darwin_closure.S" +.text + .align LOG2_GPR_BYTES + .globl _ffi_closure_ASM + +.text + .align LOG2_GPR_BYTES + +_ffi_closure_ASM: +LFB1: + mflr r0 /* extract return address */ + stg r0,MODE_CHOICE(8,16)(r1) /* save return address */ + +LCFI0: + /* 24/48 bytes (Linkage Area) + 32/64 bytes (outgoing parameter area, always reserved) + 104 bytes (13*8 from FPR) + 16/32 bytes (result) + 176/232 total bytes */ + + /* skip over caller save area and keep stack aligned to 16/32. */ + stgu r1,-SF_ROUND(MODE_CHOICE(176,248))(r1) + +LCFI1: + /* We want to build up an area for the parameters passed + in registers. (both floating point and integer) */ + + /* 176/256 bytes (callee stack frame aligned to 16/32) + 24/48 bytes (caller linkage area) + 200/304 (start of caller parameter area aligned to 4/8) + */ + + /* Save GPRs 3 - 10 (aligned to 4/8) + in the parents outgoing area. */ + stg r3,MODE_CHOICE(200,304)(r1) + stg r4,MODE_CHOICE(204,312)(r1) + stg r5,MODE_CHOICE(208,320)(r1) + stg r6,MODE_CHOICE(212,328)(r1) + stg r7,MODE_CHOICE(216,336)(r1) + stg r8,MODE_CHOICE(220,344)(r1) + stg r9,MODE_CHOICE(224,352)(r1) + stg r10,MODE_CHOICE(228,360)(r1) + + /* Save FPRs 1 - 13. (aligned to 8) */ + stfd f1,MODE_CHOICE(56,112)(r1) + stfd f2,MODE_CHOICE(64,120)(r1) + stfd f3,MODE_CHOICE(72,128)(r1) + stfd f4,MODE_CHOICE(80,136)(r1) + stfd f5,MODE_CHOICE(88,144)(r1) + stfd f6,MODE_CHOICE(96,152)(r1) + stfd f7,MODE_CHOICE(104,160)(r1) + stfd f8,MODE_CHOICE(112,168)(r1) + stfd f9,MODE_CHOICE(120,176)(r1) + stfd f10,MODE_CHOICE(128,184)(r1) + stfd f11,MODE_CHOICE(136,192)(r1) + stfd f12,MODE_CHOICE(144,200)(r1) + stfd f13,MODE_CHOICE(152,208)(r1) + + /* Set up registers for the routine that actually does the work. + Get the context pointer from the trampoline. */ + mr r3,r11 + + /* Load the pointer to the result storage. */ + /* current stack frame size - ((4/8 * 4) + saved registers) */ + addi r4,r1,MODE_CHOICE(160,216) + + /* Load the pointer to the saved gpr registers. */ + addi r5,r1,MODE_CHOICE(200,304) + + /* Load the pointer to the saved fpr registers. */ + addi r6,r1,MODE_CHOICE(56,112) + + /* Make the call. */ + bl Lffi_closure_helper_DARWIN$stub + + /* Now r3 contains the return type + so use it to look up in a table + so we know how to deal with each type. 
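After the helper returns, the closure stub uses the return type as an index into a table of fixed-size code fragments: each Lret_typeN handler below is padded to exactly 16 bytes (four instructions), so the dispatch that follows ("slwi r3,r3,4" then an add) is simply base plus type times 16. A small arithmetic sketch of that lookup, with a made-up base address:

    #include <stdio.h>

    enum { SK_ENTRY_BYTES = 16 };   /* each Lret_typeN fragment is 16 bytes */

    static unsigned long sk_handler(unsigned long table_base, unsigned type)
    {
        return table_base + (unsigned long)type * SK_ENTRY_BYTES;
    }

    int main(void)
    {
        unsigned long base = 0x1000;   /* hypothetical address of Lret_type0 */
        printf("type 3 (double) -> 0x%lx\n", sk_handler(base, 3));    /* 0x1030 */
        printf("type 12 (sint64) -> 0x%lx\n", sk_handler(base, 12));  /* 0x10c0 */
        return 0;
    }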
*/ + + /* Look the proper starting point in table + by using return type as offset. */ + addi r5,r1,MODE_CHOICE(160,216) // Get pointer to results area. + bl Lget_ret_type0_addr // Get pointer to Lret_type0 into LR. + mflr r4 // Move to r4. + slwi r3,r3,4 // Now multiply return type by 16. + add r3,r3,r4 // Add contents of table to table address. + mtctr r3 + bctr + +LFE1: +/* Each of the ret_typeX code fragments has to be exactly 16 bytes long + (4 instructions). For cache effectiveness we align to a 16 byte boundary + first. */ + .align 4 + nop + nop + nop + +Lget_ret_type0_addr: + blrl + +/* case FFI_TYPE_VOID */ +Lret_type0: + b Lfinish + nop + nop + nop + +/* case FFI_TYPE_INT */ +Lret_type1: + lwz r3,MODE_CHOICE(0,4)(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_FLOAT */ +Lret_type2: + lfs f1,0(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_DOUBLE */ +Lret_type3: + lfd f1,0(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_LONGDOUBLE */ +Lret_type4: + lfd f1,0(r5) + lfd f2,8(r5) + b Lfinish + nop + +/* case FFI_TYPE_UINT8 */ +Lret_type5: + lbz r3,MODE_CHOICE(3,7)(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_SINT8 */ +Lret_type6: + lbz r3,MODE_CHOICE(3,7)(r5) + extsb r3,r3 + b Lfinish + nop + +/* case FFI_TYPE_UINT16 */ +Lret_type7: + lhz r3,MODE_CHOICE(2,6)(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_SINT16 */ +Lret_type8: + lha r3,MODE_CHOICE(2,6)(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_UINT32 */ +Lret_type9: // same as Lret_type1 + lwz r3,MODE_CHOICE(0,4)(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_SINT32 */ +Lret_type10: // same as Lret_type1 + lwz r3,MODE_CHOICE(0,4)(r5) + b Lfinish + nop + nop + +/* case FFI_TYPE_UINT64 */ +Lret_type11: + lwz r3,0(r5) + lwz r4,4(r5) + b Lfinish + nop + +/* case FFI_TYPE_SINT64 */ +Lret_type12: // same as Lret_type11 + lwz r3,0(r5) + lwz r4,4(r5) + b Lfinish + nop + +/* case FFI_TYPE_STRUCT */ +Lret_type13: + b MODE_CHOICE(Lfinish,Lret_struct) + nop + nop + nop + +/* End 16-byte aligned cases */ +/* case FFI_TYPE_POINTER */ +// This case assumes that FFI_TYPE_POINTER == FFI_TYPE_LAST. If more types +// are added in future, the following code will need to be updated and +// padded to 16 bytes. +Lret_type14: + lg r3,0(r5) + +/* case done */ +Lfinish: + addi r1,r1,SF_ROUND(MODE_CHOICE(176,248)) // Restore stack pointer. + lg r0,MODE_CHOICE(8,16)(r1) /* Get return address. */ + mtlr r0 /* Reset link register. */ + blr + +/* END(ffi_closure_ASM) */ + +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support +EH_frame1: + .set L$set$0,LECIE1-LSCIE1 + .long L$set$0 ; Length of Common Information Entry +LSCIE1: + .long 0x0 ; CIE Identifier Tag + .byte 0x1 ; CIE Version + .ascii "zR\0" ; CIE Augmentation + .byte 0x1 ; uleb128 0x1; CIE Code Alignment Factor + .byte 0x7c ; sleb128 -4; CIE Data Alignment Factor + .byte 0x41 ; CIE RA Column + .byte 0x1 ; uleb128 0x1; Augmentation size + .byte 0x90 ; FDE Encoding (indirect pcrel) + .byte 0xc ; DW_CFA_def_cfa + .byte 0x1 ; uleb128 0x1 + .byte 0x0 ; uleb128 0x0 + .align LOG2_GPR_BYTES +LECIE1: +.globl _ffi_closure_ASM.eh +_ffi_closure_ASM.eh: +LSFDE1: + .set L$set$1,LEFDE1-LASFDE1 + .long L$set$1 ; FDE Length + +LASFDE1: + .long LASFDE1-EH_frame1 ; FDE CIE offset + .g_long LLFB1$non_lazy_ptr-. 
; FDE initial location + .set L$set$3,LFE1-LFB1 + .g_long L$set$3 ; FDE address range + .byte 0x0 ; uleb128 0x0; Augmentation size + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$3,LCFI1-LCFI0 + .long L$set$3 + .byte 0xe ; DW_CFA_def_cfa_offset + .byte 176,1 ; uleb128 176 + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$4,LCFI0-LFB1 + .long L$set$4 + .byte 0x11 ; DW_CFA_offset_extended_sf + .byte 0x41 ; uleb128 0x41 + .byte 0x7e ; sleb128 -2 + .align LOG2_GPR_BYTES + +LEFDE1: +.data + .align LOG2_GPR_BYTES +LDFCM0: +.section __TEXT,__picsymbolstub1,symbol_stubs,pure_instructions,32 + .align LOG2_GPR_BYTES + +Lffi_closure_helper_DARWIN$stub: + .indirect_symbol _ffi_closure_helper_DARWIN + mflr r0 + bcl 20,31,LO$ffi_closure_helper_DARWIN + +LO$ffi_closure_helper_DARWIN: + mflr r11 + addis r11,r11,ha16(L_ffi_closure_helper_DARWIN$lazy_ptr - LO$ffi_closure_helper_DARWIN) + mtlr r0 + lgu r12,lo16(L_ffi_closure_helper_DARWIN$lazy_ptr - LO$ffi_closure_helper_DARWIN)(r11) + mtctr r12 + bctr + +.lazy_symbol_pointer +L_ffi_closure_helper_DARWIN$lazy_ptr: + .indirect_symbol _ffi_closure_helper_DARWIN + .g_long dyld_stub_binding_helper + +.data + .align LOG2_GPR_BYTES +LLFB1$non_lazy_ptr: + .g_long LFB1 + +#endif // __ppc__ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-ffi_darwin.c ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc-ffi_darwin.c Tue Mar 4 14:25:41 2008 @@ -0,0 +1,1775 @@ +#if defined(__ppc__) || defined(__ppc64__) + +/* ----------------------------------------------------------------------- + ffi.c - Copyright (c) 1998 Geoffrey Keating + + PowerPC Foreign Function Interface + + Darwin ABI support (c) 2001 John Hornkvist + AIX ABI support (c) 2002 Free Software Foundation, Inc. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#include "ffi.h" +#include "ffi_common.h" + +#include +#include +#include +#include "ppc-darwin.h" +#include + +#if 0 +#if defined(POWERPC_DARWIN) +#include // for sys_icache_invalidate() +#endif + +#else + +/* Explicit prototype instead of including a header to allow compilation + * on Tiger systems. + */ + +#pragma weak sys_icache_invalidate +extern void sys_icache_invalidate(void *start, size_t len); + +#endif + +extern void ffi_closure_ASM(void); + +// The layout of a function descriptor. A C function pointer really +// points to one of these. 
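The comment above describes AIX-style function descriptors, which the aix_fd structure just below models: on that ABI a C function pointer addresses a small record holding the real code address and the TOC value. A minimal sketch, with a descriptor built by hand since Darwin itself has none; the sk_* names are illustrative:

    #include <stdio.h>

    typedef struct sk_fd {        /* same shape as aix_fd below */
        void* code_pointer;
        void* toc;
    } sk_fd;

    static void sk_target(void) { }

    int main(void)
    {
        /* Built by hand purely for illustration; on AIX the descriptor
           already exists and ffi_prep_closure simply casts and reads it. */
        sk_fd fd = { (void*)&sk_target, NULL };
        printf("code=%p toc=%p\n", fd.code_pointer, fd.toc);
        return 0;
    }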
+typedef struct aix_fd_struct { + void* code_pointer; + void* toc; +} aix_fd; + +/* ffi_prep_args is called by the assembly routine once stack space + has been allocated for the function's arguments. + + The stack layout we want looks like this: + + | Return address from ffi_call_DARWIN | higher addresses + |--------------------------------------------| + | Previous backchain pointer 4/8 | stack pointer here + |--------------------------------------------|-\ <<< on entry to + | Saved r28-r31 (4/8)*4 | | ffi_call_DARWIN + |--------------------------------------------| | + | Parameters (at least 8*(4/8)=32/64) | | (176) +112 - +288 + |--------------------------------------------| | + | Space for GPR2 4/8 | | + |--------------------------------------------| | stack | + | Reserved (4/8)*2 | | grows | + |--------------------------------------------| | down V + | Space for callee's LR 4/8 | | + |--------------------------------------------| | lower addresses + | Saved CR 4/8 | | + |--------------------------------------------| | stack pointer here + | Current backchain pointer 4/8 | | during + |--------------------------------------------|-/ <<< ffi_call_DARWIN + + Note: ppc64 CR is saved in the low word of a long on the stack. +*/ + +/*@-exportheader@*/ +void +ffi_prep_args( + extended_cif* inEcif, + unsigned *const stack) +/*@=exportheader@*/ +{ + /* Copy the ecif to a local var so we can trample the arg. + BC note: test this with GP later for possible problems... */ + volatile extended_cif* ecif = inEcif; + + const unsigned bytes = ecif->cif->bytes; + const unsigned flags = ecif->cif->flags; + + /* Cast the stack arg from int* to long*. sizeof(long) == 4 in 32-bit mode + and 8 in 64-bit mode. */ + unsigned long *const longStack = (unsigned long *const)stack; + + /* 'stacktop' points at the previous backchain pointer. */ +#if defined(__ppc64__) + // In ppc-darwin.s, an extra 96 bytes is reserved for the linkage area, + // saved registers, and an extra FPR. + unsigned long *const stacktop = + (unsigned long *)(unsigned long)((char*)longStack + bytes + 96); +#elif defined(__ppc__) + unsigned long *const stacktop = longStack + (bytes / sizeof(long)); +#else +#error undefined architecture +#endif + + /* 'fpr_base' points at the space for fpr1, and grows upwards as + we use FPR registers. */ + double* fpr_base = (double*)(stacktop - ASM_NEEDS_REGISTERS) - + NUM_FPR_ARG_REGISTERS; + +#if defined(__ppc64__) + // 64-bit saves an extra register, and uses an extra FPR. Knock fpr_base + // down a couple pegs. + fpr_base -= 2; +#endif + + unsigned int fparg_count = 0; + + /* 'next_arg' grows up as we put parameters in it. */ + unsigned long* next_arg = longStack + 6; /* 6 reserved positions. */ + + int i; + double double_tmp; + void** p_argv = ecif->avalue; + unsigned long gprvalue; + ffi_type** ptr = ecif->cif->arg_types; + + /* Check that everything starts aligned properly. */ + FFI_ASSERT(stack == SF_ROUND(stack)); + FFI_ASSERT(stacktop == SF_ROUND(stacktop)); + FFI_ASSERT(bytes == SF_ROUND(bytes)); + + /* Deal with return values that are actually pass-by-reference. + Rule: + Return values are referenced by r3, so r4 is the first parameter. */ + + if (flags & FLAG_RETVAL_REFERENCE) + *next_arg++ = (unsigned long)(char*)ecif->rvalue; + + /* Now for the arguments. 
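The rule quoted above ("return values are referenced by r3, so r4 is the first parameter") is the usual hidden-pointer lowering for large struct returns: the caller supplies storage and its address travels ahead of the real arguments. A C-level sketch of what the lowered call looks like; the sk_* names are illustrative:

    #include <stdio.h>
    #include <string.h>

    typedef struct sk_big { char bytes[32]; } sk_big;   /* hypothetical type */

    /* The call as lowered: the return slot's address rides in front of the
       real arguments (r3), so the first real argument lands in r4. */
    static void sk_callee(sk_big* rvalue, int first_real_arg)
    {
        memset(rvalue->bytes, 0, sizeof rvalue->bytes);
        rvalue->bytes[0] = (char)first_real_arg;
    }

    int main(void)
    {
        sk_big out;
        sk_callee(&out, 'A');
        printf("%c\n", out.bytes[0]);   /* prints A */
        return 0;
    }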
*/ + for (i = ecif->cif->nargs; i > 0; i--, ptr++, p_argv++) + { + switch ((*ptr)->type) + { + /* If a floating-point parameter appears before all of the general- + purpose registers are filled, the corresponding GPRs that match + the size of the floating-point parameter are shadowed for the + benefit of vararg and pre-ANSI functions. */ + case FFI_TYPE_FLOAT: + double_tmp = *(float*)*p_argv; + + if (fparg_count < NUM_FPR_ARG_REGISTERS) + *fpr_base++ = double_tmp; + + *(double*)next_arg = double_tmp; + + next_arg++; + fparg_count++; + FFI_ASSERT(flags & FLAG_FP_ARGUMENTS); + + break; + + case FFI_TYPE_DOUBLE: + double_tmp = *(double*)*p_argv; + + if (fparg_count < NUM_FPR_ARG_REGISTERS) + *fpr_base++ = double_tmp; + + *(double*)next_arg = double_tmp; + + next_arg += MODE_CHOICE(2,1); + fparg_count++; + FFI_ASSERT(flags & FLAG_FP_ARGUMENTS); + + break; + +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: +#if defined(__ppc64__) + if (fparg_count < NUM_FPR_ARG_REGISTERS) + *(long double*)fpr_base = *(long double*)*p_argv; +#elif defined(__ppc__) + if (fparg_count < NUM_FPR_ARG_REGISTERS - 1) + *(long double*)fpr_base = *(long double*)*p_argv; + else if (fparg_count == NUM_FPR_ARG_REGISTERS - 1) + *(double*)fpr_base = *(double*)*p_argv; +#else +#error undefined architecture +#endif + + *(long double*)next_arg = *(long double*)*p_argv; + fparg_count += 2; + fpr_base += 2; + next_arg += MODE_CHOICE(4,2); + FFI_ASSERT(flags & FLAG_FP_ARGUMENTS); + + break; +#endif // FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: +#if defined(__ppc64__) + gprvalue = *(long long*)*p_argv; + goto putgpr; +#elif defined(__ppc__) + *(long long*)next_arg = *(long long*)*p_argv; + next_arg += 2; + break; +#else +#error undefined architecture +#endif + + case FFI_TYPE_POINTER: + gprvalue = *(unsigned long*)*p_argv; + goto putgpr; + + case FFI_TYPE_UINT8: + gprvalue = *(unsigned char*)*p_argv; + goto putgpr; + + case FFI_TYPE_SINT8: + gprvalue = *(signed char*)*p_argv; + goto putgpr; + + case FFI_TYPE_UINT16: + gprvalue = *(unsigned short*)*p_argv; + goto putgpr; + + case FFI_TYPE_SINT16: + gprvalue = *(signed short*)*p_argv; + goto putgpr; + + case FFI_TYPE_STRUCT: + { +#if defined(__ppc64__) + unsigned int gprSize = 0; + unsigned int fprSize = 0; + + ffi64_struct_to_reg_form(*ptr, (char*)*p_argv, NULL, &fparg_count, + (char*)next_arg, &gprSize, (char*)fpr_base, &fprSize); + next_arg += gprSize / sizeof(long); + fpr_base += fprSize / sizeof(double); + +#elif defined(__ppc__) + char* dest_cpy = (char*)next_arg; + + /* Structures that match the basic modes (QI 1 byte, HI 2 bytes, + SI 4 bytes) are aligned as if they were those modes. + Structures with 3 byte in size are padded upwards. */ + unsigned size_al = (*ptr)->size; + + /* If the first member of the struct is a double, then align + the struct to double-word. */ + if ((*ptr)->elements[0]->type == FFI_TYPE_DOUBLE) + size_al = ALIGN((*ptr)->size, 8); + + if (ecif->cif->abi == FFI_DARWIN) + { + if (size_al < 3) + dest_cpy += 4 - size_al; + } + + memcpy((char*)dest_cpy, (char*)*p_argv, size_al); + next_arg += (size_al + 3) / 4; +#else +#error undefined architecture +#endif + break; + } + + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + gprvalue = *(unsigned*)*p_argv; + +putgpr: + *next_arg++ = gprvalue; + break; + + default: + break; + } + } + + /* Check that we didn't overrun the stack... 
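The floating-point cases in the switch above both fill the next FPR slot and write the same bits into the GPR/stack image, so variadic or unprototyped callees that fetch the value from the integer side still see it. A reduced sketch of that shadowing, with two plain buffers standing in for the register images:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        double        fpr_image[13];   /* stands in for f1..f13 */
        unsigned char gpr_image[64];   /* stands in for the r3..r10 / stack image */
        double        arg = 2.5;

        fpr_image[0] = arg;                     /* the copy a prototyped callee uses */
        memcpy(gpr_image, &arg, sizeof arg);    /* the shadow copy for varargs */

        double shadow;
        memcpy(&shadow, gpr_image, sizeof shadow);
        printf("fpr=%g shadow=%g\n", fpr_image[0], shadow);
        return 0;
    }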
*/ + //FFI_ASSERT(gpr_base <= stacktop - ASM_NEEDS_REGISTERS); + //FFI_ASSERT((unsigned *)fpr_base + // <= stacktop - ASM_NEEDS_REGISTERS - NUM_GPR_ARG_REGISTERS); + //FFI_ASSERT(flags & FLAG_4_GPR_ARGUMENTS || intarg_count <= 4); +} + +#if defined(__ppc64__) + +bool +ffi64_struct_contains_fp( + const ffi_type* inType) +{ + bool containsFP = false; + unsigned int i; + + for (i = 0; inType->elements[i] != NULL && !containsFP; i++) + { + if (inType->elements[i]->type == FFI_TYPE_FLOAT || + inType->elements[i]->type == FFI_TYPE_DOUBLE || + inType->elements[i]->type == FFI_TYPE_LONGDOUBLE) + containsFP = true; + else if (inType->elements[i]->type == FFI_TYPE_STRUCT) + containsFP = ffi64_struct_contains_fp(inType->elements[i]); + } + + return containsFP; +} + +#endif // defined(__ppc64__) + +/* Perform machine dependent cif processing. */ +ffi_status +ffi_prep_cif_machdep( + ffi_cif* cif) +{ + /* All this is for the DARWIN ABI. */ + int i; + ffi_type** ptr; + int intarg_count = 0; + int fparg_count = 0; + unsigned int flags = 0; + unsigned int size_al = 0; + + /* All the machine-independent calculation of cif->bytes will be wrong. + Redo the calculation for DARWIN. */ + + /* Space for the frame pointer, callee's LR, CR, etc, and for + the asm's temp regs. */ + unsigned int bytes = (6 + ASM_NEEDS_REGISTERS) * sizeof(long); + + /* Return value handling. The rules are as follows: + - 32-bit (or less) integer values are returned in gpr3; + - Structures of size <= 4 bytes also returned in gpr3; + - 64-bit integer values and structures between 5 and 8 bytes are + returned in gpr3 and gpr4; + - Single/double FP values are returned in fpr1; + - Long double FP (if not equivalent to double) values are returned in + fpr1 and fpr2; + - Larger structures values are allocated space and a pointer is passed + as the first argument. */ + switch (cif->rtype->type) + { +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + flags |= FLAG_RETURNS_128BITS; + flags |= FLAG_RETURNS_FP; + break; +#endif // FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + + case FFI_TYPE_DOUBLE: + flags |= FLAG_RETURNS_64BITS; + /* Fall through. */ + case FFI_TYPE_FLOAT: + flags |= FLAG_RETURNS_FP; + break; + +#if defined(__ppc64__) + case FFI_TYPE_POINTER: +#endif + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + flags |= FLAG_RETURNS_64BITS; + break; + + case FFI_TYPE_STRUCT: + { +#if defined(__ppc64__) + + if (ffi64_stret_needs_ptr(cif->rtype, NULL, NULL)) + { + flags |= FLAG_RETVAL_REFERENCE; + flags |= FLAG_RETURNS_NOTHING; + intarg_count++; + } + else + { + flags |= FLAG_RETURNS_STRUCT; + + if (ffi64_struct_contains_fp(cif->rtype)) + flags |= FLAG_STRUCT_CONTAINS_FP; + } + +#elif defined(__ppc__) + + flags |= FLAG_RETVAL_REFERENCE; + flags |= FLAG_RETURNS_NOTHING; + intarg_count++; + +#else +#error undefined architecture +#endif + break; + } + + case FFI_TYPE_VOID: + flags |= FLAG_RETURNS_NOTHING; + break; + + default: + /* Returns 32-bit integer, or similar. Nothing to do here. */ + break; + } + + /* The first NUM_GPR_ARG_REGISTERS words of integer arguments, and the + first NUM_FPR_ARG_REGISTERS fp arguments, go in registers; the rest + goes on the stack. Structures are passed as a pointer to a copy of + the structure. Stuff on the stack needs to keep proper alignment. */ + for (ptr = cif->arg_types, i = cif->nargs; i > 0; i--, ptr++) + { + switch ((*ptr)->type) + { + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + fparg_count++; + /* If this FP arg is going on the stack, it must be + 8-byte-aligned. 
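The return-value rules listed at the top of ffi_prep_cif_machdep can be read as a small classifier from result kind to return location. The sketch below encodes only those listed rules; as the surrounding code shows, the ppc32 path actually forces every struct return through the hidden pointer and ppc64 adds cases of its own, and the names here are illustrative rather than libffi API:

    #include <stdio.h>

    typedef enum { SK_GPR3, SK_GPR3_GPR4, SK_FPR1, SK_FPR1_FPR2,
                   SK_HIDDEN_POINTER } sk_ret_where;

    static sk_ret_where sk_classify(int is_fp, int is_long_double,
                                    unsigned size, int is_struct)
    {
        if (is_fp)
            return is_long_double ? SK_FPR1_FPR2 : SK_FPR1;
        if (is_struct && size > 8)
            return SK_HIDDEN_POINTER;   /* caller passes storage via r3 */
        if (size > 4)
            return SK_GPR3_GPR4;        /* 64-bit ints, 5..8-byte structs */
        return SK_GPR3;
    }

    int main(void)
    {
        printf("int        -> %d\n", sk_classify(0, 0, 4, 0));    /* SK_GPR3 */
        printf("long long  -> %d\n", sk_classify(0, 0, 8, 0));    /* SK_GPR3_GPR4 */
        printf("double     -> %d\n", sk_classify(1, 0, 8, 0));    /* SK_FPR1 */
        printf("16B struct -> %d\n", sk_classify(0, 0, 16, 1));   /* SK_HIDDEN_POINTER */
        return 0;
    }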
*/ + if (fparg_count > NUM_FPR_ARG_REGISTERS + && intarg_count % 2 != 0) + intarg_count++; + break; + +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + fparg_count += 2; + /* If this FP arg is going on the stack, it must be + 8-byte-aligned. */ + + if ( +#if defined(__ppc64__) + fparg_count > NUM_FPR_ARG_REGISTERS + 1 +#elif defined(__ppc__) + fparg_count > NUM_FPR_ARG_REGISTERS +#else +#error undefined architecture +#endif + && intarg_count % 2 != 0) + intarg_count++; + + intarg_count += 2; + break; +#endif // FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + /* 'long long' arguments are passed as two words, but + either both words must fit in registers or both go + on the stack. If they go on the stack, they must + be 8-byte-aligned. */ + if (intarg_count == NUM_GPR_ARG_REGISTERS - 1 + || (intarg_count >= NUM_GPR_ARG_REGISTERS + && intarg_count % 2 != 0)) + intarg_count++; + + intarg_count += MODE_CHOICE(2,1); + + break; + + case FFI_TYPE_STRUCT: + size_al = (*ptr)->size; + /* If the first member of the struct is a double, then align + the struct to double-word. */ + if ((*ptr)->elements[0]->type == FFI_TYPE_DOUBLE) + size_al = ALIGN((*ptr)->size, 8); + +#if defined(__ppc64__) + // Look for FP struct members. + unsigned int j; + + for (j = 0; (*ptr)->elements[j] != NULL; j++) + { + if ((*ptr)->elements[j]->type == FFI_TYPE_FLOAT || + (*ptr)->elements[j]->type == FFI_TYPE_DOUBLE) + { + fparg_count++; + + if (fparg_count > NUM_FPR_ARG_REGISTERS) + intarg_count++; + } + else if ((*ptr)->elements[j]->type == FFI_TYPE_LONGDOUBLE) + { + fparg_count += 2; + + if (fparg_count > NUM_FPR_ARG_REGISTERS + 1) + intarg_count += 2; + } + else + intarg_count++; + } +#elif defined(__ppc__) + intarg_count += (size_al + 3) / 4; +#else +#error undefined architecture +#endif + + break; + + default: + /* Everything else is passed as a 4/8-byte word in a GPR, either + the object itself or a pointer to it. */ + intarg_count++; + break; + } + } + + /* Space for the FPR registers, if needed. */ + if (fparg_count != 0) + { + flags |= FLAG_FP_ARGUMENTS; +#if defined(__ppc64__) + bytes += (NUM_FPR_ARG_REGISTERS + 1) * sizeof(double); +#elif defined(__ppc__) + bytes += NUM_FPR_ARG_REGISTERS * sizeof(double); +#else +#error undefined architecture +#endif + } + + /* Stack space. */ +#if defined(__ppc64__) + if ((intarg_count + fparg_count) > NUM_GPR_ARG_REGISTERS) + bytes += (intarg_count + fparg_count) * sizeof(long); +#elif defined(__ppc__) + if ((intarg_count + 2 * fparg_count) > NUM_GPR_ARG_REGISTERS) + bytes += (intarg_count + 2 * fparg_count) * sizeof(long); +#else +#error undefined architecture +#endif + else + bytes += NUM_GPR_ARG_REGISTERS * sizeof(long); + + /* The stack space allocated needs to be a multiple of 16/32 bytes. 
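The final rounding step keeps the frame size at the 16-byte (ppc32) or 32-byte (ppc64) multiple the comment above asks for, using the usual add-then-mask idiom. A short worked example, assuming power-of-two alignments:

    #include <stdio.h>

    static unsigned sk_round_up(unsigned bytes, unsigned align)
    {
        return (bytes + align - 1) & ~(align - 1);
    }

    int main(void)
    {
        printf("ppc32: 180 -> %u\n", sk_round_up(180, 16));   /* 192 */
        printf("ppc64: 280 -> %u\n", sk_round_up(280, 32));   /* 288 */
        return 0;
    }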
*/ + bytes = SF_ROUND(bytes); + + cif->flags = flags; + cif->bytes = bytes; + + return FFI_OK; +} + +/*@-declundef@*/ +/*@-exportheader@*/ +extern void +ffi_call_AIX( +/*@out@*/ extended_cif*, + unsigned, + unsigned, +/*@out@*/ unsigned*, + void (*fn)(void), + void (*fn2)(extended_cif*, unsigned *const)); + +extern void +ffi_call_DARWIN( +/*@out@*/ extended_cif*, + unsigned long, + unsigned, +/*@out@*/ unsigned*, + void (*fn)(void), + void (*fn2)(extended_cif*, unsigned *const)); +/*@=declundef@*/ +/*@=exportheader@*/ + +void +ffi_call( +/*@dependent@*/ ffi_cif* cif, + void (*fn)(void), +/*@out@*/ void* rvalue, +/*@dependent@*/ void** avalue) +{ + extended_cif ecif; + + ecif.cif = cif; + ecif.avalue = avalue; + + /* If the return value is a struct and we don't have a return + value address then we need to make one. */ + if ((rvalue == NULL) && + (cif->rtype->type == FFI_TYPE_STRUCT)) + { + /*@-sysunrecog@*/ + ecif.rvalue = alloca(cif->rtype->size); + /*@=sysunrecog@*/ + } + else + ecif.rvalue = rvalue; + + switch (cif->abi) + { + case FFI_AIX: + /*@-usedef@*/ + ffi_call_AIX(&ecif, -cif->bytes, + cif->flags, ecif.rvalue, fn, ffi_prep_args); + /*@=usedef@*/ + break; + + case FFI_DARWIN: + /*@-usedef@*/ + ffi_call_DARWIN(&ecif, -(long)cif->bytes, + cif->flags, ecif.rvalue, fn, ffi_prep_args); + /*@=usedef@*/ + break; + + default: + FFI_ASSERT(0); + break; + } +} + +/* here I'd like to add the stack frame layout we use in darwin_closure.S + and aix_clsoure.S + + SP previous -> +---------------------------------------+ <--- child frame + | back chain to caller 4 | + +---------------------------------------+ 4 + | saved CR 4 | + +---------------------------------------+ 8 + | saved LR 4 | + +---------------------------------------+ 12 + | reserved for compilers 4 | + +---------------------------------------+ 16 + | reserved for binders 4 | + +---------------------------------------+ 20 + | saved TOC pointer 4 | + +---------------------------------------+ 24 + | always reserved 8*4=32 (previous GPRs)| + | according to the linkage convention | + | from AIX | + +---------------------------------------+ 56 + | our FPR area 13*8=104 | + | f1 | + | . | + | f13 | + +---------------------------------------+ 160 + | result area 8 | + +---------------------------------------+ 168 + | alignement to the next multiple of 16 | +SP current --> +---------------------------------------+ 176 <- parent frame + | back chain to caller 4 | + +---------------------------------------+ 180 + | saved CR 4 | + +---------------------------------------+ 184 + | saved LR 4 | + +---------------------------------------+ 188 + | reserved for compilers 4 | + +---------------------------------------+ 192 + | reserved for binders 4 | + +---------------------------------------+ 196 + | saved TOC pointer 4 | + +---------------------------------------+ 200 + | always reserved 8*4=32 we store our | + | GPRs here | + | r3 | + | . | + | r10 | + +---------------------------------------+ 232 + | overflow part | + +---------------------------------------+ xxx + | ???? 
| + +---------------------------------------+ xxx +*/ + +#if !defined(POWERPC_DARWIN) + +#define MIN_LINE_SIZE 32 + +static void +flush_icache( + char* addr) +{ +#ifndef _AIX + __asm__ volatile ( + "dcbf 0,%0\n" + "sync\n" + "icbi 0,%0\n" + "sync\n" + "isync" + : : "r" (addr) : "memory"); +#endif +} + +static void +flush_range( + char* addr, + int size) +{ + int i; + + for (i = 0; i < size; i += MIN_LINE_SIZE) + flush_icache(addr + i); + + flush_icache(addr + size - 1); +} + +#endif // !defined(POWERPC_DARWIN) + +ffi_status +ffi_prep_closure( + ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void* user_data) +{ + switch (cif->abi) + { + case FFI_DARWIN: + { + FFI_ASSERT (cif->abi == FFI_DARWIN); + + unsigned int* tramp = (unsigned int*)&closure->tramp[0]; + +#if defined(__ppc64__) + tramp[0] = 0x7c0802a6; // mflr r0 + tramp[1] = 0x429f0005; // bcl 20,31,+0x8 + tramp[2] = 0x7d6802a6; // mflr r11 + tramp[3] = 0x7c0803a6; // mtlr r0 + tramp[4] = 0xe98b0018; // ld r12,24(r11) + tramp[5] = 0x7d8903a6; // mtctr r12 + tramp[6] = 0xe96b0020; // ld r11,32(r11) + tramp[7] = 0x4e800420; // bctr + *(unsigned long*)&tramp[8] = (unsigned long)ffi_closure_ASM; + *(unsigned long*)&tramp[10] = (unsigned long)closure; +#elif defined(__ppc__) + tramp[0] = 0x7c0802a6; // mflr r0 + tramp[1] = 0x429f0005; // bcl 20,31,+0x8 + tramp[2] = 0x7d6802a6; // mflr r11 + tramp[3] = 0x7c0803a6; // mtlr r0 + tramp[4] = 0x818b0018; // lwz r12,24(r11) + tramp[5] = 0x7d8903a6; // mtctr r12 + tramp[6] = 0x816b001c; // lwz r11,28(r11) + tramp[7] = 0x4e800420; // bctr + tramp[8] = (unsigned long)ffi_closure_ASM; + tramp[9] = (unsigned long)closure; +#else +#error undefined architecture +#endif + + closure->cif = cif; + closure->fun = fun; + closure->user_data = user_data; + + // Flush the icache. Only necessary on Darwin. +#if defined(POWERPC_DARWIN) + if (sys_icache_invalidate) { + sys_icache_invalidate(closure->tramp, FFI_TRAMPOLINE_SIZE); + } +#else + flush_range(closure->tramp, FFI_TRAMPOLINE_SIZE); +#endif + + break; + } + + case FFI_AIX: + { + FFI_ASSERT (cif->abi == FFI_AIX); + + ffi_aix_trampoline_struct* tramp_aix = + (ffi_aix_trampoline_struct*)(closure->tramp); + aix_fd* fd = (aix_fd*)(void*)ffi_closure_ASM; + + tramp_aix->code_pointer = fd->code_pointer; + tramp_aix->toc = fd->toc; + tramp_aix->static_chain = closure; + closure->cif = cif; + closure->fun = fun; + closure->user_data = user_data; + break; + } + + default: + return FFI_BAD_ABI; + } + + return FFI_OK; +} + +#if defined(__ppc__) + typedef double ldbits[2]; + + typedef union + { + ldbits lb; + long double ld; + } ldu; +#endif + +/* The trampoline invokes ffi_closure_ASM, and on entry, r11 holds the + address of the closure. After storing the registers that could possibly + contain parameters to be passed into the stack frame and setting up space + for a return value, ffi_closure_ASM invokes the following helper function + to do most of the work. */ +int +ffi_closure_helper_DARWIN( + ffi_closure* closure, + void* rvalue, + unsigned long* pgr, + ffi_dblfl* pfr) +{ + /* rvalue is the pointer to space for return value in closure assembly + pgr is the pointer to where r3-r10 are stored in ffi_closure_ASM + pfr is the pointer to where f1-f13 are stored in ffi_closure_ASM. */ + +#if defined(__ppc__) + ldu temp_ld; +#endif + + double temp; + unsigned int i; + unsigned int nf = 0; /* number of FPRs already used. */ + unsigned int ng = 0; /* number of GPRs already used. 
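The ppc32 trampoline assembled in ffi_prep_closure above keeps its two data words right after the eight instructions: the bcl leaves the address of the third word in LR, so after "mflr r11" those data words sit 24 and 28 bytes away, matching the displacements in "lwz r12,24(r11)" and "lwz r11,28(r11)". A small offset check:

    #include <stdio.h>

    int main(void)
    {
        unsigned int   tramp[10];                        /* 8 code words + 2 data words */
        unsigned char* r11 = (unsigned char*)&tramp[2];  /* what mflr r11 picks up */

        printf("tramp[8] (ffi_closure_ASM) is %ld bytes from r11\n",
               (long)((unsigned char*)&tramp[8] - r11));   /* 24 */
        printf("tramp[9] (closure pointer)  is %ld bytes from r11\n",
               (long)((unsigned char*)&tramp[9] - r11));   /* 28 */
        return 0;
    }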
*/ + ffi_cif* cif = closure->cif; + unsigned int avn = cif->nargs; + void** avalue = alloca(cif->nargs * sizeof(void*)); + ffi_type** arg_types = cif->arg_types; + + /* Copy the caller's structure return value address so that the closure + returns the data directly to the caller. */ +#if defined(__ppc64__) + if (cif->rtype->type == FFI_TYPE_STRUCT && + ffi64_stret_needs_ptr(cif->rtype, NULL, NULL)) +#elif defined(__ppc__) + if (cif->rtype->type == FFI_TYPE_STRUCT) +#else +#error undefined architecture +#endif + { + rvalue = (void*)*pgr; + pgr++; + ng++; + } + + /* Grab the addresses of the arguments from the stack frame. */ + for (i = 0; i < avn; i++) + { + switch (arg_types[i]->type) + { + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT8: + avalue[i] = (char*)pgr + MODE_CHOICE(3,7); + ng++; + pgr++; + break; + + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT16: + avalue[i] = (char*)pgr + MODE_CHOICE(2,6); + ng++; + pgr++; + break; + +#if defined(__ppc__) + case FFI_TYPE_POINTER: +#endif + case FFI_TYPE_SINT32: + case FFI_TYPE_UINT32: + avalue[i] = (char*)pgr + MODE_CHOICE(0,4); + ng++; + pgr++; + + break; + + case FFI_TYPE_STRUCT: + if (cif->abi == FFI_DARWIN) + { +#if defined(__ppc64__) + unsigned int gprSize = 0; + unsigned int fprSize = 0; + unsigned int savedFPRSize = fprSize; + + avalue[i] = alloca(arg_types[i]->size); + ffi64_struct_to_ram_form(arg_types[i], (const char*)pgr, + &gprSize, (const char*)pfr, &fprSize, &nf, avalue[i], NULL); + + ng += gprSize / sizeof(long); + pgr += gprSize / sizeof(long); + pfr += (fprSize - savedFPRSize) / sizeof(double); + +#elif defined(__ppc__) + /* Structures that match the basic modes (QI 1 byte, HI 2 bytes, + SI 4 bytes) are aligned as if they were those modes. */ + unsigned int size_al = size_al = arg_types[i]->size; + + /* If the first member of the struct is a double, then align + the struct to double-word. */ + if (arg_types[i]->elements[0]->type == FFI_TYPE_DOUBLE) + size_al = ALIGN(arg_types[i]->size, 8); + + if (size_al < 3) + avalue[i] = (char*)pgr + MODE_CHOICE(4,8) - size_al; + else + avalue[i] = (char*)pgr; + + ng += (size_al + 3) / sizeof(long); + pgr += (size_al + 3) / sizeof(long); +#else +#error undefined architecture +#endif + } + + break; + +#if defined(__ppc64__) + case FFI_TYPE_POINTER: +#endif + case FFI_TYPE_SINT64: + case FFI_TYPE_UINT64: + /* Long long ints are passed in 1 or 2 GPRs. */ + avalue[i] = pgr; + ng += MODE_CHOICE(2,1); + pgr += MODE_CHOICE(2,1); + + break; + + case FFI_TYPE_FLOAT: + /* A float value consumes a GPR. + There are 13 64-bit floating point registers. */ + if (nf < NUM_FPR_ARG_REGISTERS) + { + temp = pfr->d; + pfr->f = (float)temp; + avalue[i] = pfr; + pfr++; + } + else + avalue[i] = pgr; + + nf++; + ng++; + pgr++; + break; + + case FFI_TYPE_DOUBLE: + /* A double value consumes one or two GPRs. + There are 13 64bit floating point registers. */ + if (nf < NUM_FPR_ARG_REGISTERS) + { + avalue[i] = pfr; + pfr++; + } + else + avalue[i] = pgr; + + nf++; + ng += MODE_CHOICE(2,1); + pgr += MODE_CHOICE(2,1); + + break; + +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + + case FFI_TYPE_LONGDOUBLE: +#if defined(__ppc64__) + if (nf < NUM_FPR_ARG_REGISTERS) + { + avalue[i] = pfr; + pfr += 2; + } +#elif defined(__ppc__) + /* A long double value consumes 2/4 GPRs and 2 FPRs. + There are 13 64bit floating point registers. 
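The MODE_CHOICE(3,7) and MODE_CHOICE(2,6) offsets above come from PowerPC being big-endian: an argument promoted into a full register keeps its low-order bytes at the end of the register image, 3 bytes in for a 4-byte GPR and 7 bytes in for an 8-byte one. A short illustration for the 64-bit case:

    #include <stdio.h>

    int main(void)
    {
        unsigned char gpr[8] = { 0 };   /* one 64-bit GPR as stored on the stack */
        unsigned long value  = 42;      /* an 8-bit argument after promotion */

        /* Lay the register image out in big-endian byte order by hand. */
        for (int i = 7; i >= 0; i--) {
            gpr[i] = (unsigned char)(value & 0xFF);
            value >>= 8;
        }

        printf("low byte sits at offset 7: %u\n", gpr[7]);   /* prints 42 */
        return 0;
    }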
*/ + if (nf < NUM_FPR_ARG_REGISTERS - 1) + { + avalue[i] = pfr; + pfr += 2; + } + /* Here we have the situation where one part of the long double + is stored in fpr13 and the other part is already on the stack. + We use a union to pass the long double to avalue[i]. */ + else if (nf == NUM_FPR_ARG_REGISTERS - 1) + { + memcpy (&temp_ld.lb[0], pfr, sizeof(ldbits)); + memcpy (&temp_ld.lb[1], pgr + 2, sizeof(ldbits)); + avalue[i] = &temp_ld.ld; + } +#else +#error undefined architecture +#endif + else + avalue[i] = pgr; + + nf += 2; + ng += MODE_CHOICE(4,2); + pgr += MODE_CHOICE(4,2); + + break; + +#endif /* FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE */ + + default: + FFI_ASSERT(0); + break; + } + } + + (closure->fun)(cif, rvalue, avalue, closure->user_data); + + /* Tell ffi_closure_ASM to perform return type promotions. */ + return cif->rtype->type; +} + +#if defined(__ppc64__) + +/* ffi64_struct_to_ram_form + + Rebuild a struct's natural layout from buffers of concatenated registers. + Return the number of registers used. + inGPRs[0-7] == r3, inFPRs[0-7] == f1 ... +*/ +void +ffi64_struct_to_ram_form( + const ffi_type* inType, + const char* inGPRs, + unsigned int* ioGPRMarker, + const char* inFPRs, + unsigned int* ioFPRMarker, + unsigned int* ioFPRsUsed, + char* outStruct, // caller-allocated + unsigned int* ioStructMarker) +{ + unsigned int srcGMarker = 0; + unsigned int srcFMarker = 0; + unsigned int savedFMarker = 0; + unsigned int fprsUsed = 0; + unsigned int savedFPRsUsed = 0; + unsigned int destMarker = 0; + + static unsigned int recurseCount = 0; + + if (ioGPRMarker) + srcGMarker = *ioGPRMarker; + + if (ioFPRMarker) + { + srcFMarker = *ioFPRMarker; + savedFMarker = srcFMarker; + } + + if (ioFPRsUsed) + { + fprsUsed = *ioFPRsUsed; + savedFPRsUsed = fprsUsed; + } + + if (ioStructMarker) + destMarker = *ioStructMarker; + + size_t i; + + switch (inType->size) + { + case 1: case 2: case 4: + srcGMarker += 8 - inType->size; + break; + + default: + break; + } + + for (i = 0; inType->elements[i] != NULL; i++) + { + switch (inType->elements[i]->type) + { + case FFI_TYPE_FLOAT: + srcFMarker = ALIGN(srcFMarker, 4); + srcGMarker = ALIGN(srcGMarker, 4); + destMarker = ALIGN(destMarker, 4); + + if (fprsUsed < NUM_FPR_ARG_REGISTERS) + { + *(float*)&outStruct[destMarker] = + (float)*(double*)&inFPRs[srcFMarker]; + srcFMarker += 8; + fprsUsed++; + } + else + *(float*)&outStruct[destMarker] = + (float)*(double*)&inGPRs[srcGMarker]; + + srcGMarker += 4; + destMarker += 4; + + // Skip to next GPR if next element won't fit and we're + // not already at a register boundary. 
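The marker bookkeeping above leans on the ALIGN() macro defined in ffi_common.h earlier in this commit, which rounds a byte offset up to the next multiple of a power-of-two alignment. A few worked values:

    #include <stdio.h>
    #include <stddef.h>

    #define ALIGN(v, a) (((size_t)(v) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
        printf("ALIGN(5, 4)   = %zu\n", (size_t)ALIGN(5, 4));     /* 8  */
        printf("ALIGN(8, 8)   = %zu\n", (size_t)ALIGN(8, 8));     /* 8  */
        printf("ALIGN(13, 8)  = %zu\n", (size_t)ALIGN(13, 8));    /* 16 */
        printf("ALIGN(17, 16) = %zu\n", (size_t)ALIGN(17, 16));   /* 32 */
        return 0;
    }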
+ if (inType->elements[i + 1] != NULL && (destMarker % 8)) + { + if (!FFI_TYPE_1_BYTE(inType->elements[i + 1]->type) && + (!FFI_TYPE_2_BYTE(inType->elements[i + 1]->type) || + (ALIGN(srcGMarker, 8) - srcGMarker) < 2) && + (!FFI_TYPE_4_BYTE(inType->elements[i + 1]->type) || + (ALIGN(srcGMarker, 8) - srcGMarker) < 4)) + srcGMarker = ALIGN(srcGMarker, 8); + } + + break; + + case FFI_TYPE_DOUBLE: + srcFMarker = ALIGN(srcFMarker, 8); + destMarker = ALIGN(destMarker, 8); + + if (fprsUsed < NUM_FPR_ARG_REGISTERS) + { + *(double*)&outStruct[destMarker] = + *(double*)&inFPRs[srcFMarker]; + srcFMarker += 8; + fprsUsed++; + } + else + *(double*)&outStruct[destMarker] = + *(double*)&inGPRs[srcGMarker]; + + destMarker += 8; + + // Skip next GPR + srcGMarker += 8; + srcGMarker = ALIGN(srcGMarker, 8); + + break; + + case FFI_TYPE_LONGDOUBLE: + destMarker = ALIGN(destMarker, 16); + + if (fprsUsed < NUM_FPR_ARG_REGISTERS) + { + srcFMarker = ALIGN(srcFMarker, 8); + srcGMarker = ALIGN(srcGMarker, 8); + *(long double*)&outStruct[destMarker] = + *(long double*)&inFPRs[srcFMarker]; + srcFMarker += 16; + fprsUsed += 2; + } + else + { + srcFMarker = ALIGN(srcFMarker, 16); + srcGMarker = ALIGN(srcGMarker, 16); + *(long double*)&outStruct[destMarker] = + *(long double*)&inGPRs[srcGMarker]; + } + + destMarker += 16; + + // Skip next 2 GPRs + srcGMarker += 16; + srcGMarker = ALIGN(srcGMarker, 8); + + break; + + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + { + if (inType->alignment == 1) // chars only + { + if (inType->size == 1) + outStruct[destMarker++] = inGPRs[srcGMarker++]; + else if (inType->size == 2) + { + outStruct[destMarker++] = inGPRs[srcGMarker++]; + outStruct[destMarker++] = inGPRs[srcGMarker++]; + i++; + } + else + { + memcpy(&outStruct[destMarker], + &inGPRs[srcGMarker], inType->size); + srcGMarker += inType->size; + destMarker += inType->size; + i += inType->size - 1; + } + } + else // chars and other stuff + { + outStruct[destMarker++] = inGPRs[srcGMarker++]; + + // Skip to next GPR if next element won't fit and we're + // not already at a register boundary. 
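                // Illustrative example (hypothetical struct): for
                // struct { char c; double d; }, c is copied as a single byte
                // and d needs a fresh register, so the test below advances
                // srcGMarker from 1 to the struct's 8-byte alignment before
                // the FFI_TYPE_DOUBLE case runs.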
+ if (inType->elements[i + 1] != NULL && (srcGMarker % 8)) + { + if (!FFI_TYPE_1_BYTE(inType->elements[i + 1]->type) && + (!FFI_TYPE_2_BYTE(inType->elements[i + 1]->type) || + (ALIGN(srcGMarker, 8) - srcGMarker) < 2) && + (!FFI_TYPE_4_BYTE(inType->elements[i + 1]->type) || + (ALIGN(srcGMarker, 8) - srcGMarker) < 4)) + srcGMarker = ALIGN(srcGMarker, inType->alignment); // was 8 + } + } + + break; + } + + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + srcGMarker = ALIGN(srcGMarker, 2); + destMarker = ALIGN(destMarker, 2); + + *(short*)&outStruct[destMarker] = + *(short*)&inGPRs[srcGMarker]; + srcGMarker += 2; + destMarker += 2; + + break; + + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + srcGMarker = ALIGN(srcGMarker, 4); + destMarker = ALIGN(destMarker, 4); + + *(int*)&outStruct[destMarker] = + *(int*)&inGPRs[srcGMarker]; + srcGMarker += 4; + destMarker += 4; + + break; + + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + srcGMarker = ALIGN(srcGMarker, 8); + destMarker = ALIGN(destMarker, 8); + + *(long long*)&outStruct[destMarker] = + *(long long*)&inGPRs[srcGMarker]; + srcGMarker += 8; + destMarker += 8; + + break; + + case FFI_TYPE_STRUCT: + recurseCount++; + ffi64_struct_to_ram_form(inType->elements[i], inGPRs, + &srcGMarker, inFPRs, &srcFMarker, &fprsUsed, + outStruct, &destMarker); + recurseCount--; + break; + + default: + FFI_ASSERT(0); // unknown element type + break; + } + } + + srcGMarker = ALIGN(srcGMarker, inType->alignment); + + // Take care of the special case for 16-byte structs, but not for + // nested structs. + if (recurseCount == 0 && srcGMarker == 16) + { + *(long double*)&outStruct[0] = *(long double*)&inGPRs[0]; + srcFMarker = savedFMarker; + fprsUsed = savedFPRsUsed; + } + + if (ioGPRMarker) + *ioGPRMarker = ALIGN(srcGMarker, 8); + + if (ioFPRMarker) + *ioFPRMarker = srcFMarker; + + if (ioFPRsUsed) + *ioFPRsUsed = fprsUsed; + + if (ioStructMarker) + *ioStructMarker = ALIGN(destMarker, 8); +} + +/* ffi64_struct_to_reg_form + + Copy a struct's elements into buffers that can be sliced into registers. + Return the sizes of the output buffers in bytes. Pass NULL buffer pointers + to calculate size only. + outGPRs[0-7] == r3, outFPRs[0-7] == f1 ... +*/ +void +ffi64_struct_to_reg_form( + const ffi_type* inType, + const char* inStruct, + unsigned int* ioStructMarker, + unsigned int* ioFPRsUsed, + char* outGPRs, // caller-allocated + unsigned int* ioGPRSize, + char* outFPRs, // caller-allocated + unsigned int* ioFPRSize) +{ + size_t i; + unsigned int srcMarker = 0; + unsigned int destGMarker = 0; + unsigned int destFMarker = 0; + unsigned int savedFMarker = 0; + unsigned int fprsUsed = 0; + unsigned int savedFPRsUsed = 0; + + static unsigned int recurseCount = 0; + + if (ioStructMarker) + srcMarker = *ioStructMarker; + + if (ioFPRsUsed) + { + fprsUsed = *ioFPRsUsed; + savedFPRsUsed = fprsUsed; + } + + if (ioGPRSize) + destGMarker = *ioGPRSize; + + if (ioFPRSize) + { + destFMarker = *ioFPRSize; + savedFMarker = destFMarker; + } + + switch (inType->size) + { + case 1: case 2: case 4: + destGMarker += 8 - inType->size; + break; + + default: + break; + } + + for (i = 0; inType->elements[i] != NULL; i++) + { + switch (inType->elements[i]->type) + { + // Shadow floating-point types in GPRs for vararg and pre-ANSI + // functions. 
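        // Illustration: when an FPR is still free, each float/double below
        // is written twice -- once into the FPR image and once into the GPR
        // image -- so a callee that fetches the argument from the
        // general-register save area or the stack still finds the value.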
+ case FFI_TYPE_FLOAT: + // Nudge markers to next 4/8-byte boundary + srcMarker = ALIGN(srcMarker, 4); + destGMarker = ALIGN(destGMarker, 4); + destFMarker = ALIGN(destFMarker, 8); + + if (fprsUsed < NUM_FPR_ARG_REGISTERS) + { + if (outFPRs != NULL && inStruct != NULL) + *(double*)&outFPRs[destFMarker] = + (double)*(float*)&inStruct[srcMarker]; + + destFMarker += 8; + fprsUsed++; + } + + if (outGPRs != NULL && inStruct != NULL) + *(double*)&outGPRs[destGMarker] = + (double)*(float*)&inStruct[srcMarker]; + + srcMarker += 4; + destGMarker += 4; + + // Skip to next GPR if next element won't fit and we're + // not already at a register boundary. + if (inType->elements[i + 1] != NULL && (srcMarker % 8)) + { + if (!FFI_TYPE_1_BYTE(inType->elements[i + 1]->type) && + (!FFI_TYPE_2_BYTE(inType->elements[i + 1]->type) || + (ALIGN(destGMarker, 8) - destGMarker) < 2) && + (!FFI_TYPE_4_BYTE(inType->elements[i + 1]->type) || + (ALIGN(destGMarker, 8) - destGMarker) < 4)) + destGMarker = ALIGN(destGMarker, 8); + } + + break; + + case FFI_TYPE_DOUBLE: + srcMarker = ALIGN(srcMarker, 8); + destFMarker = ALIGN(destFMarker, 8); + + if (fprsUsed < NUM_FPR_ARG_REGISTERS) + { + if (outFPRs != NULL && inStruct != NULL) + *(double*)&outFPRs[destFMarker] = + *(double*)&inStruct[srcMarker]; + + destFMarker += 8; + fprsUsed++; + } + + if (outGPRs != NULL && inStruct != NULL) + *(double*)&outGPRs[destGMarker] = + *(double*)&inStruct[srcMarker]; + + srcMarker += 8; + + // Skip next GPR + destGMarker += 8; + destGMarker = ALIGN(destGMarker, 8); + + break; + + case FFI_TYPE_LONGDOUBLE: + srcMarker = ALIGN(srcMarker, 16); + + if (fprsUsed < NUM_FPR_ARG_REGISTERS) + { + destFMarker = ALIGN(destFMarker, 8); + destGMarker = ALIGN(destGMarker, 8); + + if (outFPRs != NULL && inStruct != NULL) + *(long double*)&outFPRs[destFMarker] = + *(long double*)&inStruct[srcMarker]; + + if (outGPRs != NULL && inStruct != NULL) + *(long double*)&outGPRs[destGMarker] = + *(long double*)&inStruct[srcMarker]; + + destFMarker += 16; + fprsUsed += 2; + } + else + { + destGMarker = ALIGN(destGMarker, 16); + + if (outGPRs != NULL && inStruct != NULL) + *(long double*)&outGPRs[destGMarker] = + *(long double*)&inStruct[srcMarker]; + } + + srcMarker += 16; + destGMarker += 16; // Skip next 2 GPRs + destGMarker = ALIGN(destGMarker, 8); // was 16 + + break; + + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + if (inType->alignment == 1) // bytes only + { + if (inType->size == 1) + { + if (outGPRs != NULL && inStruct != NULL) + outGPRs[destGMarker] = inStruct[srcMarker]; + + srcMarker++; + destGMarker++; + } + else if (inType->size == 2) + { + if (outGPRs != NULL && inStruct != NULL) + { + outGPRs[destGMarker] = inStruct[srcMarker]; + outGPRs[destGMarker + 1] = inStruct[srcMarker + 1]; + } + + srcMarker += 2; + destGMarker += 2; + + i++; + } + else + { + if (outGPRs != NULL && inStruct != NULL) + { + // Avoid memcpy for small chunks. + if (inType->size <= sizeof(long)) + *(long*)&outGPRs[destGMarker] = + *(long*)&inStruct[srcMarker]; + else + memcpy(&outGPRs[destGMarker], + &inStruct[srcMarker], inType->size); + } + + srcMarker += inType->size; + destGMarker += inType->size; + i += inType->size - 1; + } + } + else // bytes and other stuff + { + if (outGPRs != NULL && inStruct != NULL) + outGPRs[destGMarker] = inStruct[srcMarker]; + + srcMarker++; + destGMarker++; + + // Skip to next GPR if next element won't fit and we're + // not already at a register boundary. 
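            // Worked example (hypothetical struct): for
            // struct { char c; int n; }, seven bytes of the current 8-byte
            // slot remain after c, so n still fits; the test below leaves
            // destGMarker untouched and the FFI_TYPE_INT case merely rounds
            // it up to a 4-byte boundary.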
+ if (inType->elements[i + 1] != NULL && (destGMarker % 8)) + { + if (!FFI_TYPE_1_BYTE(inType->elements[i + 1]->type) && + (!FFI_TYPE_2_BYTE(inType->elements[i + 1]->type) || + (ALIGN(destGMarker, 8) - destGMarker) < 2) && + (!FFI_TYPE_4_BYTE(inType->elements[i + 1]->type) || + (ALIGN(destGMarker, 8) - destGMarker) < 4)) + destGMarker = ALIGN(destGMarker, inType->alignment); // was 8 + } + } + + break; + + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + srcMarker = ALIGN(srcMarker, 2); + destGMarker = ALIGN(destGMarker, 2); + + if (outGPRs != NULL && inStruct != NULL) + *(short*)&outGPRs[destGMarker] = + *(short*)&inStruct[srcMarker]; + + srcMarker += 2; + destGMarker += 2; + + if (inType->elements[i + 1] == NULL) + destGMarker = ALIGN(destGMarker, inType->alignment); + + break; + + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + srcMarker = ALIGN(srcMarker, 4); + destGMarker = ALIGN(destGMarker, 4); + + if (outGPRs != NULL && inStruct != NULL) + *(int*)&outGPRs[destGMarker] = + *(int*)&inStruct[srcMarker]; + + srcMarker += 4; + destGMarker += 4; + + break; + + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + srcMarker = ALIGN(srcMarker, 8); + destGMarker = ALIGN(destGMarker, 8); + + if (outGPRs != NULL && inStruct != NULL) + *(long long*)&outGPRs[destGMarker] = + *(long long*)&inStruct[srcMarker]; + + srcMarker += 8; + destGMarker += 8; + + if (inType->elements[i + 1] == NULL) + destGMarker = ALIGN(destGMarker, inType->alignment); + + break; + + case FFI_TYPE_STRUCT: + recurseCount++; + ffi64_struct_to_reg_form(inType->elements[i], + inStruct, &srcMarker, &fprsUsed, outGPRs, + &destGMarker, outFPRs, &destFMarker); + recurseCount--; + break; + + default: + FFI_ASSERT(0); + break; + } + } + + destGMarker = ALIGN(destGMarker, inType->alignment); + + // Take care of the special case for 16-byte structs, but not for + // nested structs. + if (recurseCount == 0 && destGMarker == 16) + { + if (outGPRs != NULL && inStruct != NULL) + *(long double*)&outGPRs[0] = *(long double*)&inStruct[0]; + + destFMarker = savedFMarker; + fprsUsed = savedFPRsUsed; + } + + if (ioStructMarker) + *ioStructMarker = ALIGN(srcMarker, 8); + + if (ioFPRsUsed) + *ioFPRsUsed = fprsUsed; + + if (ioGPRSize) + *ioGPRSize = ALIGN(destGMarker, 8); + + if (ioFPRSize) + *ioFPRSize = ALIGN(destFMarker, 8); +} + +/* ffi64_stret_needs_ptr + + Determine whether a returned struct needs a pointer in r3 or can fit + in registers. +*/ + +bool +ffi64_stret_needs_ptr( + const ffi_type* inType, + unsigned short* ioGPRCount, + unsigned short* ioFPRCount) +{ + // Obvious case first- struct is larger than combined FPR size. + if (inType->size > 14 * 8) + return true; + + // Now the struct can physically fit in registers, determine if it + // also fits logically. 
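    /* Minimal usage sketch (illustration only; `rtype' and the local names
       here are assumptions, not taken from this file):

           unsigned short gprs = 0, fprs = 0;
           if (ffi64_stret_needs_ptr(rtype, &gprs, &fprs))
               ;   // caller must pass a hidden return pointer in r3
           else
               ;   // struct is returned directly in r3-r10 / f1-f14

       The closure helper above passes NULL for both counters; they exist so
       the FFI_TYPE_STRUCT case below can accumulate register usage across
       nested struct fields. */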
+ bool needsPtr = false; + unsigned short gprsUsed = 0; + unsigned short fprsUsed = 0; + size_t i; + + if (ioGPRCount) + gprsUsed = *ioGPRCount; + + if (ioFPRCount) + fprsUsed = *ioFPRCount; + + for (i = 0; inType->elements[i] != NULL && !needsPtr; i++) + { + switch (inType->elements[i]->type) + { + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + gprsUsed++; + fprsUsed++; + + if (fprsUsed > 13) + needsPtr = true; + + break; + + case FFI_TYPE_LONGDOUBLE: + gprsUsed += 2; + fprsUsed += 2; + + if (fprsUsed > 14) + needsPtr = true; + + break; + + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + { + gprsUsed++; + + if (gprsUsed > 8) + { + needsPtr = true; + break; + } + + if (inType->elements[i + 1] == NULL) // last byte in the struct + break; + + // Count possible contiguous bytes ahead, up to 8. + unsigned short j; + + for (j = 1; j < 8; j++) + { + if (inType->elements[i + j] == NULL || + !FFI_TYPE_1_BYTE(inType->elements[i + j]->type)) + break; + } + + i += j - 1; // allow for i++ before the test condition + + break; + } + + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + gprsUsed++; + + if (gprsUsed > 8) + needsPtr = true; + + break; + + case FFI_TYPE_STRUCT: + needsPtr = ffi64_stret_needs_ptr( + inType->elements[i], &gprsUsed, &fprsUsed); + + break; + + default: + FFI_ASSERT(0); + break; + } + } + + if (ioGPRCount) + *ioGPRCount = gprsUsed; + + if (ioFPRCount) + *ioFPRCount = fprsUsed; + + return needsPtr; +} + +/* ffi64_data_size + + Calculate the size in bytes of an ffi type. +*/ + +unsigned int +ffi64_data_size( + const ffi_type* inType) +{ + unsigned int size = 0; + + switch (inType->type) + { + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + size = 1; + break; + + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + size = 2; + break; + + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_FLOAT: + size = 4; + break; + + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + case FFI_TYPE_DOUBLE: + size = 8; + break; + + case FFI_TYPE_LONGDOUBLE: + size = 16; + break; + + case FFI_TYPE_STRUCT: + ffi64_struct_to_reg_form( + inType, NULL, NULL, NULL, NULL, &size, NULL, NULL); + break; + + case FFI_TYPE_VOID: + break; + + default: + FFI_ASSERT(0); + break; + } + + return size; +} + +#endif /* defined(__ppc64__) */ +#endif /* __ppc__ || __ppc64__ */ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc64-darwin_closure.S ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/powerpc/ppc64-darwin_closure.S Tue Mar 4 14:25:41 2008 @@ -0,0 +1,423 @@ +#if defined(__ppc64__) + +/* ----------------------------------------------------------------------- + darwin_closure.S - Copyright (c) 2002, 2003, 2004, Free Software Foundation, + Inc. based on ppc_closure.S + + PowerPC Assembly glue. 
+ + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#define LIBFFI_ASM + +#include +#include // for FFI_TRAMPOLINE_SIZE +#include +#include + + .file "ppc64-darwin_closure.S" +.text + .align LOG2_GPR_BYTES + .globl _ffi_closure_ASM + +.text + .align LOG2_GPR_BYTES + +_ffi_closure_ASM: +LFB1: + mflr r0 + stg r0,SF_RETURN(r1) // save return address + + // Save GPRs 3 - 10 (aligned to 8) in the parents outgoing area. + stg r3,SF_ARG1(r1) + stg r4,SF_ARG2(r1) + stg r5,SF_ARG3(r1) + stg r6,SF_ARG4(r1) + stg r7,SF_ARG5(r1) + stg r8,SF_ARG6(r1) + stg r9,SF_ARG7(r1) + stg r10,SF_ARG8(r1) + +LCFI0: +/* 48 bytes (Linkage Area) + 64 bytes (outgoing parameter area, always reserved) + 112 bytes (14*8 for incoming FPR) + ? bytes (result) + 112 bytes (14*8 for outgoing FPR) + 16 bytes (2 saved registers) + 352 + ? total bytes +*/ + + std r31,-8(r1) // Save registers we use. + std r30,-16(r1) + mr r30,r1 // Save the old SP. + mr r31,r11 // Save the ffi_closure around ffi64_data_size. + + // Calculate the space we need. + stdu r1,-SF_MINSIZE(r1) + ld r3,FFI_TRAMPOLINE_SIZE(r31) // ffi_closure->cif* + ld r3,16(r3) // ffi_cif->rtype* + bl Lffi64_data_size$stub + ld r1,0(r1) + + addi r3,r3,352 // Add our overhead. + neg r3,r3 + li r0,-32 // Align to 32 bytes. + and r3,r3,r0 + stdux r1,r1,r3 // Grow the stack. + + mr r11,r31 // Copy the ffi_closure back. + +LCFI1: + // We want to build up an area for the parameters passed + // in registers. (both floating point and integer) + +/* 320 bytes (callee stack frame aligned to 32) + 48 bytes (caller linkage area) + 368 (start of caller parameter area aligned to 8) +*/ + + // Save FPRs 1 - 14. (aligned to 8) + stfd f1,112(r1) + stfd f2,120(r1) + stfd f3,128(r1) + stfd f4,136(r1) + stfd f5,144(r1) + stfd f6,152(r1) + stfd f7,160(r1) + stfd f8,168(r1) + stfd f9,176(r1) + stfd f10,184(r1) + stfd f11,192(r1) + stfd f12,200(r1) + stfd f13,208(r1) + stfd f14,216(r1) + + // Set up registers for the routine that actually does the work. + mr r3,r11 // context pointer from the trampoline + addi r4,r1,224 // result storage + addi r5,r30,SF_ARG1 // saved GPRs + addi r6,r1,112 // saved FPRs + bl Lffi_closure_helper_DARWIN$stub + + // Look the proper starting point in table + // by using return type as an offset. + addi r5,r1,224 // Get pointer to results area. + bl Lget_ret_type0_addr // Get pointer to Lret_type0 into LR. + mflr r4 // Move to r4. + slwi r3,r3,4 // Now multiply return type by 16. 
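	// Rough C equivalent of the dispatch being built here (illustration
	// only, using the GCC computed-goto extension):
	//     goto *((char *)&&Lret_type0 + (return_type_code << 4));
	// This works because every Lret_typeN fragment below is padded to
	// exactly 16 bytes.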
+ add r3,r3,r4 // Add contents of table to table address. + mtctr r3 + bctr + +LFE1: + // Each of the ret_typeX code fragments has to be exactly 16 bytes long + // (4 instructions). For cache effectiveness we align to a 16 byte + // boundary first. + .align 4 + nop + nop + nop + +Lget_ret_type0_addr: + blrl + +// case FFI_TYPE_VOID +Lret_type0: + b Lfinish + nop + nop + nop + +// case FFI_TYPE_INT +Lret_type1: + lwz r3,4(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_FLOAT +Lret_type2: + lfs f1,0(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_DOUBLE +Lret_type3: + lfd f1,0(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_LONGDOUBLE +Lret_type4: + lfd f1,0(r5) + lfd f2,8(r5) + b Lfinish + nop + +// case FFI_TYPE_UINT8 +Lret_type5: + lbz r3,7(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_SINT8 +Lret_type6: + lbz r3,7(r5) + extsb r3,r3 + b Lfinish + nop + +// case FFI_TYPE_UINT16 +Lret_type7: + lhz r3,6(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_SINT16 +Lret_type8: + lha r3,6(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_UINT32 +Lret_type9: // same as Lret_type1 + lwz r3,4(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_SINT32 +Lret_type10: // same as Lret_type1 + lwz r3,4(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_UINT64 +Lret_type11: + ld r3,0(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_SINT64 +Lret_type12: // same as Lret_type11 + ld r3,0(r5) + b Lfinish + nop + nop + +// case FFI_TYPE_STRUCT +Lret_type13: + b Lret_struct + nop + nop + nop + +// ** End 16-byte aligned cases ** +// case FFI_TYPE_POINTER +// This case assumes that FFI_TYPE_POINTER == FFI_TYPE_LAST. If more types +// are added in future, the following code will need to be updated and +// padded to 16 bytes. +Lret_type14: + lg r3,0(r5) + b Lfinish + +// copy struct into registers +Lret_struct: + ld r31,FFI_TRAMPOLINE_SIZE(r31) // ffi_closure->cif* + ld r3,16(r31) // ffi_cif->rtype* + ld r31,24(r31) // ffi_cif->flags + mr r4,r5 // copy struct* to 2nd arg + addi r7,r1,SF_ARG9 // GPR return area + addi r9,r30,-16-(14*8) // FPR return area + li r5,0 // struct offset ptr (NULL) + li r6,0 // FPR used count ptr (NULL) + li r8,0 // GPR return area size ptr (NULL) + li r10,0 // FPR return area size ptr (NULL) + bl Lffi64_struct_to_reg_form$stub + + // Load GPRs + ld r3,SF_ARG9(r1) + ld r4,SF_ARG10(r1) + ld r5,SF_ARG11(r1) + ld r6,SF_ARG12(r1) + nop + ld r7,SF_ARG13(r1) + ld r8,SF_ARG14(r1) + ld r9,SF_ARG15(r1) + ld r10,SF_ARG16(r1) + nop + + // Load FPRs + mtcrf 0x2,r31 + bf 26,Lfinish + lfd f1,-16-(14*8)(r30) + lfd f2,-16-(13*8)(r30) + lfd f3,-16-(12*8)(r30) + lfd f4,-16-(11*8)(r30) + nop + lfd f5,-16-(10*8)(r30) + lfd f6,-16-(9*8)(r30) + lfd f7,-16-(8*8)(r30) + lfd f8,-16-(7*8)(r30) + nop + lfd f9,-16-(6*8)(r30) + lfd f10,-16-(5*8)(r30) + lfd f11,-16-(4*8)(r30) + lfd f12,-16-(3*8)(r30) + nop + lfd f13,-16-(2*8)(r30) + lfd f14,-16-(1*8)(r30) + // Fall through + +// case done +Lfinish: + lg r1,0(r1) // Restore stack pointer. + ld r31,-8(r1) // Restore registers we used. + ld r30,-16(r1) + lg r0,SF_RETURN(r1) // Get return address. + mtlr r0 // Reset link register. 
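	// By now r3/f1 (or r3-r10 and f1-f14 for a struct returned in
	// registers) hold the promoted return value loaded by the Lret_type*
	// fragment above; the blr below returns it to the closure's caller.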
+ blr + +// END(ffi_closure_ASM) + +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support +EH_frame1: + .set L$set$0,LECIE1-LSCIE1 + .long L$set$0 ; Length of Common Information Entry +LSCIE1: + .long 0x0 ; CIE Identifier Tag + .byte 0x1 ; CIE Version + .ascii "zR\0" ; CIE Augmentation + .byte 0x1 ; uleb128 0x1; CIE Code Alignment Factor + .byte 0x7c ; sleb128 -4; CIE Data Alignment Factor + .byte 0x41 ; CIE RA Column + .byte 0x1 ; uleb128 0x1; Augmentation size + .byte 0x90 ; FDE Encoding (indirect pcrel) + .byte 0xc ; DW_CFA_def_cfa + .byte 0x1 ; uleb128 0x1 + .byte 0x0 ; uleb128 0x0 + .align LOG2_GPR_BYTES +LECIE1: +.globl _ffi_closure_ASM.eh +_ffi_closure_ASM.eh: +LSFDE1: + .set L$set$1,LEFDE1-LASFDE1 + .long L$set$1 ; FDE Length + +LASFDE1: + .long LASFDE1-EH_frame1 ; FDE CIE offset + .g_long LLFB1$non_lazy_ptr-. ; FDE initial location + .set L$set$3,LFE1-LFB1 + .g_long L$set$3 ; FDE address range + .byte 0x0 ; uleb128 0x0; Augmentation size + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$3,LCFI1-LCFI0 + .long L$set$3 + .byte 0xe ; DW_CFA_def_cfa_offset + .byte 176,1 ; uleb128 176 + .byte 0x4 ; DW_CFA_advance_loc4 + .set L$set$4,LCFI0-LFB1 + .long L$set$4 + .byte 0x11 ; DW_CFA_offset_extended_sf + .byte 0x41 ; uleb128 0x41 + .byte 0x7e ; sleb128 -2 + .align LOG2_GPR_BYTES + +LEFDE1: +.data + .align LOG2_GPR_BYTES +LDFCM0: +.section __TEXT,__picsymbolstub1,symbol_stubs,pure_instructions,32 + .align LOG2_GPR_BYTES + +Lffi_closure_helper_DARWIN$stub: + .indirect_symbol _ffi_closure_helper_DARWIN + mflr r0 + bcl 20,31,LO$ffi_closure_helper_DARWIN + +LO$ffi_closure_helper_DARWIN: + mflr r11 + addis r11,r11,ha16(L_ffi_closure_helper_DARWIN$lazy_ptr - LO$ffi_closure_helper_DARWIN) + mtlr r0 + lgu r12,lo16(L_ffi_closure_helper_DARWIN$lazy_ptr - LO$ffi_closure_helper_DARWIN)(r11) + mtctr r12 + bctr + +.lazy_symbol_pointer +L_ffi_closure_helper_DARWIN$lazy_ptr: + .indirect_symbol _ffi_closure_helper_DARWIN + .g_long dyld_stub_binding_helper + +.data + .align LOG2_GPR_BYTES +LLFB1$non_lazy_ptr: + .g_long LFB1 + +.section __TEXT,__picsymbolstub1,symbol_stubs,pure_instructions,32 + .align LOG2_GPR_BYTES + +Lffi64_struct_to_reg_form$stub: + .indirect_symbol _ffi64_struct_to_reg_form + mflr r0 + bcl 20,31,LO$ffi64_struct_to_reg_form + +LO$ffi64_struct_to_reg_form: + mflr r11 + addis r11,r11,ha16(L_ffi64_struct_to_reg_form$lazy_ptr - LO$ffi64_struct_to_reg_form) + mtlr r0 + lgu r12,lo16(L_ffi64_struct_to_reg_form$lazy_ptr - LO$ffi64_struct_to_reg_form)(r11) + mtctr r12 + bctr + +.section __TEXT,__picsymbolstub1,symbol_stubs,pure_instructions,32 + .align LOG2_GPR_BYTES + +Lffi64_data_size$stub: + .indirect_symbol _ffi64_data_size + mflr r0 + bcl 20,31,LO$ffi64_data_size + +LO$ffi64_data_size: + mflr r11 + addis r11,r11,ha16(L_ffi64_data_size$lazy_ptr - LO$ffi64_data_size) + mtlr r0 + lgu r12,lo16(L_ffi64_data_size$lazy_ptr - LO$ffi64_data_size)(r11) + mtctr r12 + bctr + +.lazy_symbol_pointer +L_ffi64_struct_to_reg_form$lazy_ptr: + .indirect_symbol _ffi64_struct_to_reg_form + .g_long dyld_stub_binding_helper + +L_ffi64_data_size$lazy_ptr: + .indirect_symbol _ffi64_data_size + .g_long dyld_stub_binding_helper + +#endif // __ppc64__ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/types.c ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/types.c Tue Mar 4 14:25:41 2008 @@ -0,0 +1,115 @@ +/* ----------------------------------------------------------------------- + 
types.c - Copyright (c) 1996, 1998 Red Hat, Inc. + + Predefined ffi_types needed by libffi. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#include +#include + +/* Type definitions */ +#define FFI_INTEGRAL_TYPEDEF(n, s, a, t) \ + ffi_type ffi_type_##n = { s, a, t, NULL } +#define FFI_AGGREGATE_TYPEDEF(n, e) \ + ffi_type ffi_type_##n = { 0, 0, FFI_TYPE_STRUCT, e } + +FFI_INTEGRAL_TYPEDEF(uint8, 1, 1, FFI_TYPE_UINT8); +FFI_INTEGRAL_TYPEDEF(sint8, 1, 1, FFI_TYPE_SINT8); +FFI_INTEGRAL_TYPEDEF(uint16, 2, 2, FFI_TYPE_UINT16); +FFI_INTEGRAL_TYPEDEF(sint16, 2, 2, FFI_TYPE_SINT16); +FFI_INTEGRAL_TYPEDEF(uint32, 4, 4, FFI_TYPE_UINT32); +FFI_INTEGRAL_TYPEDEF(sint32, 4, 4, FFI_TYPE_SINT32); +FFI_INTEGRAL_TYPEDEF(float, 4, 4, FFI_TYPE_FLOAT); + +/* Size and alignment are fake here. They must not be 0. 
*/ +FFI_INTEGRAL_TYPEDEF(void, 1, 1, FFI_TYPE_VOID); + +#if defined ALPHA || defined SPARC64 || defined X86_64 || \ + defined S390X || defined IA64 || defined POWERPC64 +FFI_INTEGRAL_TYPEDEF(pointer, 8, 8, FFI_TYPE_POINTER); +#else +FFI_INTEGRAL_TYPEDEF(pointer, 4, 4, FFI_TYPE_POINTER); +#endif + +#if defined X86 || defined ARM || defined M68K || defined(X86_DARWIN) + +# ifdef X86_64 + FFI_INTEGRAL_TYPEDEF(uint64, 8, 8, FFI_TYPE_UINT64); + FFI_INTEGRAL_TYPEDEF(sint64, 8, 8, FFI_TYPE_SINT64); +# else + FFI_INTEGRAL_TYPEDEF(uint64, 8, 4, FFI_TYPE_UINT64); + FFI_INTEGRAL_TYPEDEF(sint64, 8, 4, FFI_TYPE_SINT64); +# endif + +#elif defined(POWERPC_DARWIN) +FFI_INTEGRAL_TYPEDEF(uint64, 8, 8, FFI_TYPE_UINT64); +FFI_INTEGRAL_TYPEDEF(sint64, 8, 8, FFI_TYPE_SINT64); +#elif defined SH +FFI_INTEGRAL_TYPEDEF(uint64, 8, 4, FFI_TYPE_UINT64); +FFI_INTEGRAL_TYPEDEF(sint64, 8, 4, FFI_TYPE_SINT64); +#else +FFI_INTEGRAL_TYPEDEF(uint64, 8, 8, FFI_TYPE_UINT64); +FFI_INTEGRAL_TYPEDEF(sint64, 8, 8, FFI_TYPE_SINT64); +#endif + +#if defined X86 || defined X86_WIN32 || defined M68K || defined(X86_DARWIN) + +# if defined X86_WIN32 || defined X86_64 + FFI_INTEGRAL_TYPEDEF(double, 8, 8, FFI_TYPE_DOUBLE); +# else + FFI_INTEGRAL_TYPEDEF(double, 8, 4, FFI_TYPE_DOUBLE); +# endif + +# ifdef X86_DARWIN + FFI_INTEGRAL_TYPEDEF(longdouble, 16, 16, FFI_TYPE_LONGDOUBLE); +# else + FFI_INTEGRAL_TYPEDEF(longdouble, 12, 4, FFI_TYPE_LONGDOUBLE); +# endif + +#elif defined ARM || defined SH || defined POWERPC_AIX +FFI_INTEGRAL_TYPEDEF(double, 8, 4, FFI_TYPE_DOUBLE); +FFI_INTEGRAL_TYPEDEF(longdouble, 8, 4, FFI_TYPE_LONGDOUBLE); +#elif defined POWERPC_DARWIN +FFI_INTEGRAL_TYPEDEF(double, 8, 8, FFI_TYPE_DOUBLE); + +# if __GNUC__ >= 4 + FFI_INTEGRAL_TYPEDEF(longdouble, 16, 16, FFI_TYPE_LONGDOUBLE); +# else + FFI_INTEGRAL_TYPEDEF(longdouble, 8, 8, FFI_TYPE_LONGDOUBLE); +# endif + +#elif defined SPARC +FFI_INTEGRAL_TYPEDEF(double, 8, 8, FFI_TYPE_DOUBLE); + +# ifdef SPARC64 + FFI_INTEGRAL_TYPEDEF(longdouble, 16, 16, FFI_TYPE_LONGDOUBLE); +# else + FFI_INTEGRAL_TYPEDEF(longdouble, 16, 8, FFI_TYPE_LONGDOUBLE); +# endif + +#elif defined X86_64 || defined POWERPC64 +FFI_INTEGRAL_TYPEDEF(double, 8, 8, FFI_TYPE_DOUBLE); +FFI_INTEGRAL_TYPEDEF(longdouble, 16, 16, FFI_TYPE_LONGDOUBLE); +#else +FFI_INTEGRAL_TYPEDEF(double, 8, 8, FFI_TYPE_DOUBLE); +FFI_INTEGRAL_TYPEDEF(longdouble, 8, 8, FFI_TYPE_LONGDOUBLE); +#endif \ No newline at end of file Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/darwin64.S ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/darwin64.S Tue Mar 4 14:25:41 2008 @@ -0,0 +1,415 @@ +/* ----------------------------------------------------------------------- + darwin64.S - Copyright (c) 2006 Free Software Foundation, Inc. + derived from unix64.S + + x86-64 Foreign Function Interface for Darwin. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. 
+ + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#ifdef __x86_64__ +#define LIBFFI_ASM +#include +#include + + .file "darwin64.S" +.text + +/* ffi_call_unix64 (void *args, unsigned long bytes, unsigned flags, + void *raddr, void (*fnaddr)()); + + Bit o trickiness here -- ARGS+BYTES is the base of the stack frame + for this function. This has been allocated by ffi_call. We also + deallocate some of the stack that has been alloca'd. */ + + .align 3 + .globl _ffi_call_unix64 + +_ffi_call_unix64: +LUW0: + movq (%rsp), %r10 /* Load return address. */ + leaq (%rdi, %rsi), %rax /* Find local stack base. */ + movq %rdx, (%rax) /* Save flags. */ + movq %rcx, 8(%rax) /* Save raddr. */ + movq %rbp, 16(%rax) /* Save old frame pointer. */ + movq %r10, 24(%rax) /* Relocate return address. */ + movq %rax, %rbp /* Finalize local stack frame. */ +LUW1: + movq %rdi, %r10 /* Save a copy of the register area. */ + movq %r8, %r11 /* Save a copy of the target fn. */ + movl %r9d, %eax /* Set number of SSE registers. */ + + /* Load up all argument registers. */ + movq (%r10), %rdi + movq 8(%r10), %rsi + movq 16(%r10), %rdx + movq 24(%r10), %rcx + movq 32(%r10), %r8 + movq 40(%r10), %r9 + testl %eax, %eax + jnz Lload_sse +Lret_from_load_sse: + + /* Deallocate the reg arg area. */ + leaq 176(%r10), %rsp + + /* Call the user function. */ + call *%r11 + + /* Deallocate stack arg area; local stack frame in redzone. */ + leaq 24(%rbp), %rsp + + movq 0(%rbp), %rcx /* Reload flags. */ + movq 8(%rbp), %rdi /* Reload raddr. */ + movq 16(%rbp), %rbp /* Reload old frame pointer. */ +LUW2: + + /* The first byte of the flags contains the FFI_TYPE. 
*/ + movzbl %cl, %r10d + leaq Lstore_table(%rip), %r11 + movslq (%r11, %r10, 4), %r10 + addq %r11, %r10 + jmp *%r10 + +Lstore_table: + .long Lst_void-Lstore_table /* FFI_TYPE_VOID */ + .long Lst_sint32-Lstore_table /* FFI_TYPE_INT */ + .long Lst_float-Lstore_table /* FFI_TYPE_FLOAT */ + .long Lst_double-Lstore_table /* FFI_TYPE_DOUBLE */ + .long Lst_ldouble-Lstore_table /* FFI_TYPE_LONGDOUBLE */ + .long Lst_uint8-Lstore_table /* FFI_TYPE_UINT8 */ + .long Lst_sint8-Lstore_table /* FFI_TYPE_SINT8 */ + .long Lst_uint16-Lstore_table /* FFI_TYPE_UINT16 */ + .long Lst_sint16-Lstore_table /* FFI_TYPE_SINT16 */ + .long Lst_uint32-Lstore_table /* FFI_TYPE_UINT32 */ + .long Lst_sint32-Lstore_table /* FFI_TYPE_SINT32 */ + .long Lst_int64-Lstore_table /* FFI_TYPE_UINT64 */ + .long Lst_int64-Lstore_table /* FFI_TYPE_SINT64 */ + .long Lst_struct-Lstore_table /* FFI_TYPE_STRUCT */ + .long Lst_int64-Lstore_table /* FFI_TYPE_POINTER */ + + .text + .align 3 +Lst_void: + ret + .align 3 +Lst_uint8: + movzbq %al, %rax + movq %rax, (%rdi) + ret + .align 3 +Lst_sint8: + movsbq %al, %rax + movq %rax, (%rdi) + ret + .align 3 +Lst_uint16: + movzwq %ax, %rax + movq %rax, (%rdi) + .align 3 +Lst_sint16: + movswq %ax, %rax + movq %rax, (%rdi) + ret + .align 3 +Lst_uint32: + movl %eax, %eax + movq %rax, (%rdi) + .align 3 +Lst_sint32: + cltq + movq %rax, (%rdi) + ret + .align 3 +Lst_int64: + movq %rax, (%rdi) + ret + .align 3 +Lst_float: + movss %xmm0, (%rdi) + ret + .align 3 +Lst_double: + movsd %xmm0, (%rdi) + ret +Lst_ldouble: + fstpt (%rdi) + ret + .align 3 +Lst_struct: + leaq -20(%rsp), %rsi /* Scratch area in redzone. */ + + /* We have to locate the values now, and since we don't want to + write too much data into the user's return value, we spill the + value to a 16 byte scratch area first. Bits 8, 9, and 10 + control where the values are located. Only one of the three + bits will be set; see ffi_prep_cif_machdep for the pattern. */ + movd %xmm0, %r10 + movd %xmm1, %r11 + testl $0x100, %ecx + cmovnz %rax, %rdx + cmovnz %r10, %rax + testl $0x200, %ecx + cmovnz %r10, %rdx + testl $0x400, %ecx + cmovnz %r10, %rax + cmovnz %r11, %rdx + movq %rax, (%rsi) + movq %rdx, 8(%rsi) + + /* Bits 12-31 contain the true size of the structure. Copy from + the scratch area to the true destination. */ + shrl $12, %ecx + rep movsb + ret + + /* Many times we can avoid loading any SSE registers at all. + It's not worth an indirect jump to load the exact set of + SSE registers needed; zero or all is a good compromise. */ + .align 3 +LUW3: +Lload_sse: + movdqa 48(%r10), %xmm0 + movdqa 64(%r10), %xmm1 + movdqa 80(%r10), %xmm2 + movdqa 96(%r10), %xmm3 + movdqa 112(%r10), %xmm4 + movdqa 128(%r10), %xmm5 + movdqa 144(%r10), %xmm6 + movdqa 160(%r10), %xmm7 + jmp Lret_from_load_sse + +LUW4: + .align 3 + .globl _ffi_closure_unix64 + +_ffi_closure_unix64: +LUW5: + /* The carry flag is set by the trampoline iff SSE registers + are used. Don't clobber it before the branch instruction. */ + leaq -200(%rsp), %rsp +LUW6: + movq %rdi, (%rsp) + movq %rsi, 8(%rsp) + movq %rdx, 16(%rsp) + movq %rcx, 24(%rsp) + movq %r8, 32(%rsp) + movq %r9, 40(%rsp) + jc Lsave_sse +Lret_from_save_sse: + + movq %r10, %rdi + leaq 176(%rsp), %rsi + movq %rsp, %rdx + leaq 208(%rsp), %rcx + call _ffi_closure_unix64_inner + + /* Deallocate stack frame early; return value is now in redzone. */ + addq $200, %rsp +LUW7: + + /* The first byte of the return value contains the FFI_TYPE. 
*/ + movzbl %al, %r10d + leaq Lload_table(%rip), %r11 + movslq (%r11, %r10, 4), %r10 + addq %r11, %r10 + jmp *%r10 + +Lload_table: + .long Lld_void-Lload_table /* FFI_TYPE_VOID */ + .long Lld_int32-Lload_table /* FFI_TYPE_INT */ + .long Lld_float-Lload_table /* FFI_TYPE_FLOAT */ + .long Lld_double-Lload_table /* FFI_TYPE_DOUBLE */ + .long Lld_ldouble-Lload_table /* FFI_TYPE_LONGDOUBLE */ + .long Lld_int8-Lload_table /* FFI_TYPE_UINT8 */ + .long Lld_int8-Lload_table /* FFI_TYPE_SINT8 */ + .long Lld_int16-Lload_table /* FFI_TYPE_UINT16 */ + .long Lld_int16-Lload_table /* FFI_TYPE_SINT16 */ + .long Lld_int32-Lload_table /* FFI_TYPE_UINT32 */ + .long Lld_int32-Lload_table /* FFI_TYPE_SINT32 */ + .long Lld_int64-Lload_table /* FFI_TYPE_UINT64 */ + .long Lld_int64-Lload_table /* FFI_TYPE_SINT64 */ + .long Lld_struct-Lload_table /* FFI_TYPE_STRUCT */ + .long Lld_int64-Lload_table /* FFI_TYPE_POINTER */ + + .text + .align 3 +Lld_void: + ret + .align 3 +Lld_int8: + movzbl -24(%rsp), %eax + ret + .align 3 +Lld_int16: + movzwl -24(%rsp), %eax + ret + .align 3 +Lld_int32: + movl -24(%rsp), %eax + ret + .align 3 +Lld_int64: + movq -24(%rsp), %rax + ret + .align 3 +Lld_float: + movss -24(%rsp), %xmm0 + ret + .align 3 +Lld_double: + movsd -24(%rsp), %xmm0 + ret + .align 3 +Lld_ldouble: + fldt -24(%rsp) + ret + .align 3 +Lld_struct: + /* There are four possibilities here, %rax/%rdx, %xmm0/%rax, + %rax/%xmm0, %xmm0/%xmm1. We collapse two by always loading + both rdx and xmm1 with the second word. For the remaining, + bit 8 set means xmm0 gets the second word, and bit 9 means + that rax gets the second word. */ + movq -24(%rsp), %rcx + movq -16(%rsp), %rdx + movq -16(%rsp), %xmm1 + testl $0x100, %eax + cmovnz %rdx, %rcx + movd %rcx, %xmm0 + testl $0x200, %eax + movq -24(%rsp), %rax + cmovnz %rdx, %rax + ret + + /* See the comment above Lload_sse; the same logic applies here. */ + .align 3 +LUW8: +Lsave_sse: + movdqa %xmm0, 48(%rsp) + movdqa %xmm1, 64(%rsp) + movdqa %xmm2, 80(%rsp) + movdqa %xmm3, 96(%rsp) + movdqa %xmm4, 112(%rsp) + movdqa %xmm5, 128(%rsp) + movdqa %xmm6, 144(%rsp) + movdqa %xmm7, 160(%rsp) + jmp Lret_from_save_sse + +LUW9: +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support +EH_frame1: + .set L$set$0,LECIE1-LSCIE1 /* CIE Length */ + .long L$set$0 +LSCIE1: + .long 0x0 /* CIE Identifier Tag */ + .byte 0x1 /* CIE Version */ + .ascii "zR\0" /* CIE Augmentation */ + .byte 0x1 /* uleb128 0x1; CIE Code Alignment Factor */ + .byte 0x78 /* sleb128 -8; CIE Data Alignment Factor */ + .byte 0x10 /* CIE RA Column */ + .byte 0x1 /* uleb128 0x1; Augmentation size */ + .byte 0x10 /* FDE Encoding (pcrel sdata4) */ + .byte 0xc /* DW_CFA_def_cfa, %rsp offset 8 */ + .byte 0x7 /* uleb128 0x7 */ + .byte 0x8 /* uleb128 0x8 */ + .byte 0x90 /* DW_CFA_offset, column 0x10 */ + .byte 0x1 + .align 3 +LECIE1: + .globl _ffi_call_unix64.eh +_ffi_call_unix64.eh: +LSFDE1: + .set L$set$1,LEFDE1-LASFDE1 /* FDE Length */ + .long L$set$1 +LASFDE1: + .long LASFDE1-EH_frame1 /* FDE CIE offset */ + .quad LUW0-. /* FDE initial location */ + .set L$set$2,LUW4-LUW0 /* FDE address range */ + .quad L$set$2 + .byte 0x0 /* Augmentation size */ + .byte 0x4 /* DW_CFA_advance_loc4 */ + .set L$set$3,LUW1-LUW0 + .long L$set$3 + + /* New stack frame based off rbp. This is a itty bit of unwind + trickery in that the CFA *has* changed. There is no easy way + to describe it correctly on entry to the function. Fortunately, + it doesn't matter too much since at all points we can correctly + unwind back to ffi_call. 
Note that the location to which we + moved the return address is (the new) CFA-8, so from the + perspective of the unwind info, it hasn't moved. */ + .byte 0xc /* DW_CFA_def_cfa, %rbp offset 32 */ + .byte 0x6 + .byte 0x20 + .byte 0x80+6 /* DW_CFA_offset, %rbp offset 2*-8 */ + .byte 0x2 + .byte 0xa /* DW_CFA_remember_state */ + + .byte 0x4 /* DW_CFA_advance_loc4 */ + .set L$set$4,LUW2-LUW1 + .long L$set$4 + .byte 0xc /* DW_CFA_def_cfa, %rsp offset 8 */ + .byte 0x7 + .byte 0x8 + .byte 0xc0+6 /* DW_CFA_restore, %rbp */ + + .byte 0x4 /* DW_CFA_advance_loc4 */ + .set L$set$5,LUW3-LUW2 + .long L$set$5 + .byte 0xb /* DW_CFA_restore_state */ + + .align 3 +LEFDE1: + .globl _ffi_closure_unix64.eh +_ffi_closure_unix64.eh: +LSFDE3: + .set L$set$6,LEFDE3-LASFDE3 /* FDE Length */ + .long L$set$6 +LASFDE3: + .long LASFDE3-EH_frame1 /* FDE CIE offset */ + .quad LUW5-. /* FDE initial location */ + .set L$set$7,LUW9-LUW5 /* FDE address range */ + .quad L$set$7 + .byte 0x0 /* Augmentation size */ + + .byte 0x4 /* DW_CFA_advance_loc4 */ + .set L$set$8,LUW6-LUW5 + .long L$set$8 + .byte 0xe /* DW_CFA_def_cfa_offset */ + .byte 208,1 /* uleb128 208 */ + .byte 0xa /* DW_CFA_remember_state */ + + .byte 0x4 /* DW_CFA_advance_loc4 */ + .set L$set$9,LUW7-LUW6 + .long L$set$9 + .byte 0xe /* DW_CFA_def_cfa_offset */ + .byte 0x8 + + .byte 0x4 /* DW_CFA_advance_loc4 */ + .set L$set$10,LUW8-LUW7 + .long L$set$10 + .byte 0xb /* DW_CFA_restore_state */ + + .align 3 +LEFDE3: + .subsections_via_symbols + +#endif /* __x86_64__ */ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-darwin.S ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-darwin.S Tue Mar 4 14:25:41 2008 @@ -0,0 +1,243 @@ +#ifdef __i386__ +/* ----------------------------------------------------------------------- + darwin.S - Copyright (c) 1996, 1998, 2001, 2002, 2003 Red Hat, Inc. + + X86 Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +/* + * This file is based on sysv.S and then hacked up by Ronald who hasn't done + * assembly programming in 8 years. 
+ */ + +#ifndef __x86_64__ + +#define LIBFFI_ASM +#include +#include + +#ifdef PyObjC_STRICT_DEBUGGING + /* XXX: Debugging of stack alignment, to be removed */ +#define ASSERT_STACK_ALIGNED movdqa -16(%esp), %xmm0 +#else +#define ASSERT_STACK_ALIGNED +#endif + +.text + +.globl _ffi_prep_args + +.align 4 +.globl _ffi_call_SYSV + +_ffi_call_SYSV: +.LFB1: + pushl %ebp +.LCFI0: + movl %esp,%ebp + subl $8,%esp + ASSERT_STACK_ALIGNED +.LCFI1: + /* Make room for all of the new args. */ + movl 16(%ebp),%ecx + subl %ecx,%esp + + ASSERT_STACK_ALIGNED + + movl %esp,%eax + + /* Place all of the ffi_prep_args in position */ + subl $8,%esp + pushl 12(%ebp) + pushl %eax + call *8(%ebp) + + ASSERT_STACK_ALIGNED + + /* Return stack to previous state and call the function */ + addl $16,%esp + + ASSERT_STACK_ALIGNED + + call *28(%ebp) + + /* XXX: return returns return with 'ret $4', that upsets the stack! */ + movl 16(%ebp),%ecx + addl %ecx,%esp + + + /* Load %ecx with the return type code */ + movl 20(%ebp),%ecx + + + /* If the return value pointer is NULL, assume no return value. */ + cmpl $0,24(%ebp) + jne retint + + /* Even if there is no space for the return value, we are + obliged to handle floating-point values. */ + cmpl $FFI_TYPE_FLOAT,%ecx + jne noretval + fstp %st(0) + + jmp epilogue + +retint: + cmpl $FFI_TYPE_INT,%ecx + jne retfloat + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + movl %eax,0(%ecx) + jmp epilogue + +retfloat: + cmpl $FFI_TYPE_FLOAT,%ecx + jne retdouble + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + fstps (%ecx) + jmp epilogue + +retdouble: + cmpl $FFI_TYPE_DOUBLE,%ecx + jne retlongdouble + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + fstpl (%ecx) + jmp epilogue + +retlongdouble: + cmpl $FFI_TYPE_LONGDOUBLE,%ecx + jne retint64 + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + fstpt (%ecx) + jmp epilogue + +retint64: + cmpl $FFI_TYPE_SINT64,%ecx + jne retstruct1b + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + movl %eax,0(%ecx) + movl %edx,4(%ecx) + jmp epilogue + +retstruct1b: + cmpl $FFI_TYPE_SINT8,%ecx + jne retstruct2b + movl 24(%ebp),%ecx + movb %al,0(%ecx) + jmp epilogue + +retstruct2b: + cmpl $FFI_TYPE_SINT16,%ecx + jne retstruct + movl 24(%ebp),%ecx + movw %ax,0(%ecx) + jmp epilogue + +retstruct: + cmpl $FFI_TYPE_STRUCT,%ecx + jne noretval + /* Nothing to do! 
*/ + + subl $4,%esp + + ASSERT_STACK_ALIGNED + + addl $8,%esp + movl %ebp, %esp + popl %ebp + ret + +noretval: +epilogue: + ASSERT_STACK_ALIGNED + addl $8, %esp + + + movl %ebp,%esp + popl %ebp + ret +.LFE1: +.ffi_call_SYSV_end: +#if 0 + .size ffi_call_SYSV,.ffi_call_SYSV_end-ffi_call_SYSV +#endif + +#if 0 + .section .eh_frame,EH_FRAME_FLAGS, at progbits +.Lframe1: + .long .LECIE1-.LSCIE1 /* Length of Common Information Entry */ +.LSCIE1: + .long 0x0 /* CIE Identifier Tag */ + .byte 0x1 /* CIE Version */ +#ifdef __PIC__ + .ascii "zR\0" /* CIE Augmentation */ +#else + .ascii "\0" /* CIE Augmentation */ +#endif + .byte 0x1 /* .uleb128 0x1; CIE Code Alignment Factor */ + .byte 0x7c /* .sleb128 -4; CIE Data Alignment Factor */ + .byte 0x8 /* CIE RA Column */ +#ifdef __PIC__ + .byte 0x1 /* .uleb128 0x1; Augmentation size */ + .byte 0x1b /* FDE Encoding (pcrel sdata4) */ +#endif + .byte 0xc /* DW_CFA_def_cfa */ + .byte 0x4 /* .uleb128 0x4 */ + .byte 0x4 /* .uleb128 0x4 */ + .byte 0x88 /* DW_CFA_offset, column 0x8 */ + .byte 0x1 /* .uleb128 0x1 */ + .align 4 +.LECIE1: +.LSFDE1: + .long .LEFDE1-.LASFDE1 /* FDE Length */ +.LASFDE1: + .long .LASFDE1-.Lframe1 /* FDE CIE offset */ +#ifdef __PIC__ + .long .LFB1-. /* FDE initial location */ +#else + .long .LFB1 /* FDE initial location */ +#endif + .long .LFE1-.LFB1 /* FDE address range */ +#ifdef __PIC__ + .byte 0x0 /* .uleb128 0x0; Augmentation size */ +#endif + .byte 0x4 /* DW_CFA_advance_loc4 */ + .long .LCFI0-.LFB1 + .byte 0xe /* DW_CFA_def_cfa_offset */ + .byte 0x8 /* .uleb128 0x8 */ + .byte 0x85 /* DW_CFA_offset, column 0x5 */ + .byte 0x2 /* .uleb128 0x2 */ + .byte 0x4 /* DW_CFA_advance_loc4 */ + .long .LCFI1-.LCFI0 + .byte 0xd /* DW_CFA_def_cfa_register */ + .byte 0x5 /* .uleb128 0x5 */ + .align 4 +.LEFDE1: +#endif + +#endif /* ifndef __x86_64__ */ + +#endif /* defined __i386__ */ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-ffi64.c ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-ffi64.c Tue Mar 4 14:25:41 2008 @@ -0,0 +1,624 @@ +#ifdef __x86_64__ + +/* ----------------------------------------------------------------------- + x86-ffi64.c - Copyright (c) 2002 Bo Thorsen + + x86-64 Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ + +#include +#include + +#include +#include + +#define MAX_GPR_REGS 6 +#define MAX_SSE_REGS 8 + +typedef struct RegisterArgs { + /* Registers for argument passing. */ + UINT64 gpr[MAX_GPR_REGS]; + __int128_t sse[MAX_SSE_REGS]; +} RegisterArgs; + +extern void +ffi_call_unix64( + void* args, + unsigned long bytes, + unsigned flags, + void* raddr, + void (*fnaddr)(), + unsigned ssecount); + +/* All reference to register classes here is identical to the code in + gcc/config/i386/i386.c. Do *not* change one without the other. */ + +/* Register class used for passing given 64bit part of the argument. + These represent classes as documented by the PS ABI, with the exception + of SSESF, SSEDF classes, that are basically SSE class, just gcc will + use SF or DFmode move instead of DImode to avoid reformating penalties. + + Similary we play games with INTEGERSI_CLASS to use cheaper SImode moves + whenever possible (upper half does contain padding). */ +enum x86_64_reg_class +{ + X86_64_NO_CLASS, + X86_64_INTEGER_CLASS, + X86_64_INTEGERSI_CLASS, + X86_64_SSE_CLASS, + X86_64_SSESF_CLASS, + X86_64_SSEDF_CLASS, + X86_64_SSEUP_CLASS, + X86_64_X87_CLASS, + X86_64_X87UP_CLASS, + X86_64_COMPLEX_X87_CLASS, + X86_64_MEMORY_CLASS +}; + +#define MAX_CLASSES 4 +#define SSE_CLASS_P(X) ((X) >= X86_64_SSE_CLASS && X <= X86_64_SSEUP_CLASS) + +/* x86-64 register passing implementation. See x86-64 ABI for details. Goal + of this code is to classify each 8bytes of incoming argument by the register + class and assign registers accordingly. */ + +/* Return the union class of CLASS1 and CLASS2. + See the x86-64 PS ABI for details. */ +static enum x86_64_reg_class +merge_classes( + enum x86_64_reg_class class1, + enum x86_64_reg_class class2) +{ + /* Rule #1: If both classes are equal, this is the resulting class. */ + if (class1 == class2) + return class1; + + /* Rule #2: If one of the classes is NO_CLASS, the resulting class is + the other class. */ + if (class1 == X86_64_NO_CLASS) + return class2; + + if (class2 == X86_64_NO_CLASS) + return class1; + + /* Rule #3: If one of the classes is MEMORY, the result is MEMORY. */ + if (class1 == X86_64_MEMORY_CLASS || class2 == X86_64_MEMORY_CLASS) + return X86_64_MEMORY_CLASS; + + /* Rule #4: If one of the classes is INTEGER, the result is INTEGER. */ + if ((class1 == X86_64_INTEGERSI_CLASS && class2 == X86_64_SSESF_CLASS) + || (class2 == X86_64_INTEGERSI_CLASS && class1 == X86_64_SSESF_CLASS)) + return X86_64_INTEGERSI_CLASS; + + if (class1 == X86_64_INTEGER_CLASS || class1 == X86_64_INTEGERSI_CLASS + || class2 == X86_64_INTEGER_CLASS || class2 == X86_64_INTEGERSI_CLASS) + return X86_64_INTEGER_CLASS; + + /* Rule #5: If one of the classes is X87, X87UP, or COMPLEX_X87 class, + MEMORY is used. */ + if (class1 == X86_64_X87_CLASS + || class1 == X86_64_X87UP_CLASS + || class1 == X86_64_COMPLEX_X87_CLASS + || class2 == X86_64_X87_CLASS + || class2 == X86_64_X87UP_CLASS + || class2 == X86_64_COMPLEX_X87_CLASS) + return X86_64_MEMORY_CLASS; + + /* Rule #6: Otherwise class SSE is used. */ + return X86_64_SSE_CLASS; +} + +/* Classify the argument of type TYPE and mode MODE. + CLASSES will be filled by the register class used to pass each word + of the operand. The number of words is returned. In case the parameter + should be passed in memory, 0 is returned. As a special case for zero + sized containers, classes[0] will be NO_CLASS and 1 is returned. + + See the x86-64 PS ABI for details. 
*/ + +static int +classify_argument( + ffi_type* type, + enum x86_64_reg_class classes[], + size_t byte_offset) +{ + switch (type->type) + { + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + case FFI_TYPE_POINTER: + if (byte_offset + type->size <= 4) + classes[0] = X86_64_INTEGERSI_CLASS; + else + classes[0] = X86_64_INTEGER_CLASS; + + return 1; + + case FFI_TYPE_FLOAT: + if (byte_offset == 0) + classes[0] = X86_64_SSESF_CLASS; + else + classes[0] = X86_64_SSE_CLASS; + + return 1; + + case FFI_TYPE_DOUBLE: + classes[0] = X86_64_SSEDF_CLASS; + return 1; + + case FFI_TYPE_LONGDOUBLE: + classes[0] = X86_64_X87_CLASS; + classes[1] = X86_64_X87UP_CLASS; + return 2; + + case FFI_TYPE_STRUCT: + { + ffi_type** ptr; + int i; + enum x86_64_reg_class subclasses[MAX_CLASSES]; + const int UNITS_PER_WORD = 8; + int words = + (type->size + UNITS_PER_WORD - 1) / UNITS_PER_WORD; + + /* If the struct is larger than 16 bytes, pass it on the stack. */ + if (type->size > 16) + return 0; + + for (i = 0; i < words; i++) + classes[i] = X86_64_NO_CLASS; + + /* Merge the fields of structure. */ + for (ptr = type->elements; *ptr != NULL; ptr++) + { + byte_offset = ALIGN(byte_offset, (*ptr)->alignment); + + int num = classify_argument(*ptr, subclasses, byte_offset % 8); + + if (num == 0) + return 0; + + int pos = byte_offset / 8; + + for (i = 0; i < num; i++) + { + classes[i + pos] = + merge_classes(subclasses[i], classes[i + pos]); + } + + byte_offset += (*ptr)->size; + } + + /* Final merger cleanup. */ + for (i = 0; i < words; i++) + { + /* If one class is MEMORY, everything should be passed in + memory. */ + if (classes[i] == X86_64_MEMORY_CLASS) + return 0; + + /* The X86_64_SSEUP_CLASS should be always preceded by + X86_64_SSE_CLASS. */ + if (classes[i] == X86_64_SSEUP_CLASS + && (i == 0 || classes[i - 1] != X86_64_SSE_CLASS)) + classes[i] = X86_64_SSE_CLASS; + + /* X86_64_X87UP_CLASS should be preceded by X86_64_X87_CLASS. */ + if (classes[i] == X86_64_X87UP_CLASS + && (i == 0 || classes[i - 1] != X86_64_X87_CLASS)) + classes[i] = X86_64_SSE_CLASS; + } + + return words; + } + + default: + FFI_ASSERT(0); + } + + return 0; /* Never reached. */ +} + +/* Examine the argument and return set number of register required in each + class. Return zero if parameter should be passed in memory, otherwise + the number of registers. */ +static int +examine_argument( + ffi_type* type, + enum x86_64_reg_class classes[MAX_CLASSES], + _Bool in_return, + int* pngpr, + int* pnsse) +{ + int n = classify_argument(type, classes, 0); + int ngpr = 0; + int nsse = 0; + int i; + + if (n == 0) + return 0; + + for (i = 0; i < n; ++i) + { + switch (classes[i]) + { + case X86_64_INTEGER_CLASS: + case X86_64_INTEGERSI_CLASS: + ngpr++; + break; + + case X86_64_SSE_CLASS: + case X86_64_SSESF_CLASS: + case X86_64_SSEDF_CLASS: + nsse++; + break; + + case X86_64_NO_CLASS: + case X86_64_SSEUP_CLASS: + break; + + case X86_64_X87_CLASS: + case X86_64_X87UP_CLASS: + case X86_64_COMPLEX_X87_CLASS: + return in_return != 0; + + default: + abort(); + } + } + + *pngpr = ngpr; + *pnsse = nsse; + + return n; +} + +/* Perform machine dependent cif processing. 
*/ +ffi_status +ffi_prep_cif_machdep( + ffi_cif* cif) +{ + int gprcount = 0; + int ssecount = 0; + int flags = cif->rtype->type; + int i, avn, n, ngpr, nsse; + enum x86_64_reg_class classes[MAX_CLASSES]; + size_t bytes; + + if (flags != FFI_TYPE_VOID) + { + n = examine_argument (cif->rtype, classes, 1, &ngpr, &nsse); + + if (n == 0) + { + /* The return value is passed in memory. A pointer to that + memory is the first argument. Allocate a register for it. */ + gprcount++; + + /* We don't have to do anything in asm for the return. */ + flags = FFI_TYPE_VOID; + } + else if (flags == FFI_TYPE_STRUCT) + { + /* Mark which registers the result appears in. */ + _Bool sse0 = SSE_CLASS_P(classes[0]); + _Bool sse1 = n == 2 && SSE_CLASS_P(classes[1]); + + if (sse0 && !sse1) + flags |= 1 << 8; + else if (!sse0 && sse1) + flags |= 1 << 9; + else if (sse0 && sse1) + flags |= 1 << 10; + + /* Mark the true size of the structure. */ + flags |= cif->rtype->size << 12; + } + } + + /* Go over all arguments and determine the way they should be passed. + If it's in a register and there is space for it, let that be so. If + not, add it's size to the stack byte count. */ + for (bytes = 0, i = 0, avn = cif->nargs; i < avn; i++) + { + if (examine_argument(cif->arg_types[i], classes, 0, &ngpr, &nsse) == 0 + || gprcount + ngpr > MAX_GPR_REGS + || ssecount + nsse > MAX_SSE_REGS) + { + long align = cif->arg_types[i]->alignment; + + if (align < 8) + align = 8; + + bytes = ALIGN(bytes, align); + bytes += cif->arg_types[i]->size; + } + else + { + gprcount += ngpr; + ssecount += nsse; + } + } + + if (ssecount) + flags |= 1 << 11; + + cif->flags = flags; + cif->bytes = bytes; + + return FFI_OK; +} + +void +ffi_call( + ffi_cif* cif, + void (*fn)(), + void* rvalue, + void** avalue) +{ + enum x86_64_reg_class classes[MAX_CLASSES]; + char* stack; + char* argp; + ffi_type** arg_types; + int gprcount, ssecount, ngpr, nsse, i, avn; + _Bool ret_in_memory; + RegisterArgs* reg_args; + + /* Can't call 32-bit mode from 64-bit mode. */ + FFI_ASSERT(cif->abi == FFI_UNIX64); + + /* If the return value is a struct and we don't have a return value + address then we need to make one. Note the setting of flags to + VOID above in ffi_prep_cif_machdep. */ + ret_in_memory = (cif->rtype->type == FFI_TYPE_STRUCT + && (cif->flags & 0xff) == FFI_TYPE_VOID); + + if (rvalue == NULL && ret_in_memory) + rvalue = alloca (cif->rtype->size); + + /* Allocate the space for the arguments, plus 4 words of temp space. */ + stack = alloca(sizeof(RegisterArgs) + cif->bytes + 4 * 8); + reg_args = (RegisterArgs*)stack; + argp = stack + sizeof(RegisterArgs); + + gprcount = ssecount = 0; + + /* If the return value is passed in memory, add the pointer as the + first integer argument. */ + if (ret_in_memory) + reg_args->gpr[gprcount++] = (long) rvalue; + + avn = cif->nargs; + arg_types = cif->arg_types; + + for (i = 0; i < avn; ++i) + { + size_t size = arg_types[i]->size; + int n; + + n = examine_argument (arg_types[i], classes, 0, &ngpr, &nsse); + + if (n == 0 + || gprcount + ngpr > MAX_GPR_REGS + || ssecount + nsse > MAX_SSE_REGS) + { + long align = arg_types[i]->alignment; + + /* Stack arguments are *always* at least 8 byte aligned. */ + if (align < 8) + align = 8; + + /* Pass this argument in memory. */ + argp = (void *) ALIGN (argp, align); + memcpy (argp, avalue[i], size); + argp += size; + } + else + { /* The argument is passed entirely in registers. 
*/ + char *a = (char *) avalue[i]; + int j; + + for (j = 0; j < n; j++, a += 8, size -= 8) + { + switch (classes[j]) + { + case X86_64_INTEGER_CLASS: + case X86_64_INTEGERSI_CLASS: + reg_args->gpr[gprcount] = 0; + memcpy (®_args->gpr[gprcount], a, size < 8 ? size : 8); + gprcount++; + break; + + case X86_64_SSE_CLASS: + case X86_64_SSEDF_CLASS: + reg_args->sse[ssecount++] = *(UINT64 *) a; + break; + + case X86_64_SSESF_CLASS: + reg_args->sse[ssecount++] = *(UINT32 *) a; + break; + + default: + abort(); + } + } + } + } + + ffi_call_unix64 (stack, cif->bytes + sizeof(RegisterArgs), + cif->flags, rvalue, fn, ssecount); +} + +extern void ffi_closure_unix64(void); + +ffi_status +ffi_prep_closure( + ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void* user_data) +{ + if (cif->abi != FFI_UNIX64) + return FFI_BAD_ABI; + + volatile unsigned short* tramp = + (volatile unsigned short*)&closure->tramp[0]; + + tramp[0] = 0xbb49; /* mov , %r11 */ + *(void* volatile*)&tramp[1] = ffi_closure_unix64; + tramp[5] = 0xba49; /* mov , %r10 */ + *(void* volatile*)&tramp[6] = closure; + + /* Set the carry bit if the function uses any sse registers. + This is clc or stc, together with the first byte of the jmp. */ + tramp[10] = cif->flags & (1 << 11) ? 0x49f9 : 0x49f8; + tramp[11] = 0xe3ff; /* jmp *%r11 */ + + closure->cif = cif; + closure->fun = fun; + closure->user_data = user_data; + + return FFI_OK; +} + +int +ffi_closure_unix64_inner( + ffi_closure* closure, + void* rvalue, + RegisterArgs* reg_args, + char* argp) +{ + ffi_cif* cif = closure->cif; + void** avalue = alloca(cif->nargs * sizeof(void *)); + ffi_type** arg_types; + long i, avn; + int gprcount = 0; + int ssecount = 0; + int ngpr, nsse; + int ret; + + ret = cif->rtype->type; + + if (ret != FFI_TYPE_VOID) + { + enum x86_64_reg_class classes[MAX_CLASSES]; + int n = examine_argument (cif->rtype, classes, 1, &ngpr, &nsse); + + if (n == 0) + { + /* The return value goes in memory. Arrange for the closure + return value to go directly back to the original caller. */ + rvalue = (void *) reg_args->gpr[gprcount++]; + + /* We don't have to do anything in asm for the return. */ + ret = FFI_TYPE_VOID; + } + else if (ret == FFI_TYPE_STRUCT && n == 2) + { + /* Mark which register the second word of the structure goes in. */ + _Bool sse0 = SSE_CLASS_P (classes[0]); + _Bool sse1 = SSE_CLASS_P (classes[1]); + + if (!sse0 && sse1) + ret |= 1 << 8; + else if (sse0 && !sse1) + ret |= 1 << 9; + } + } + + avn = cif->nargs; + arg_types = cif->arg_types; + + for (i = 0; i < avn; ++i) + { + enum x86_64_reg_class classes[MAX_CLASSES]; + int n; + + n = examine_argument (arg_types[i], classes, 0, &ngpr, &nsse); + + if (n == 0 + || gprcount + ngpr > MAX_GPR_REGS + || ssecount + nsse > MAX_SSE_REGS) + { + long align = arg_types[i]->alignment; + + /* Stack arguments are *always* at least 8 byte aligned. */ + if (align < 8) + align = 8; + + /* Pass this argument in memory. */ + argp = (void *) ALIGN (argp, align); + avalue[i] = argp; + argp += arg_types[i]->size; + } + +#if !defined(X86_DARWIN) + /* If the argument is in a single register, or two consecutive + registers, then we can use that address directly. */ + else if (n == 1 || (n == 2 && + SSE_CLASS_P (classes[0]) == SSE_CLASS_P (classes[1]))) + { + // The argument is in a single register. 
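/* When the one or two eightbytes of the argument were all assigned to the
   same register file, the register save area already holds the value
   contiguously, so the closure can point avalue[i] straight into
   reg_args->sse or reg_args->gpr; the mixed SSE/GPR case handled in the
   final else branch below has to reassemble the value into a 16-byte
   temporary first. */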
+ if (SSE_CLASS_P (classes[0])) + { + avalue[i] = ®_args->sse[ssecount]; + ssecount += n; + } + else + { + avalue[i] = ®_args->gpr[gprcount]; + gprcount += n; + } + } +#endif + + /* Otherwise, allocate space to make them consecutive. */ + else + { + char *a = alloca (16); + int j; + + avalue[i] = a; + + for (j = 0; j < n; j++, a += 8) + { + if (SSE_CLASS_P (classes[j])) + memcpy (a, ®_args->sse[ssecount++], 8); + else + memcpy (a, ®_args->gpr[gprcount++], 8); + } + } + } + + /* Invoke the closure. */ + closure->fun (cif, rvalue, avalue, closure->user_data); + + /* Tell assembly how to perform return type promotions. */ + return ret; +} + +#endif /* __x86_64__ */ Added: python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-ffi_darwin.c ============================================================================== --- (empty file) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/x86/x86-ffi_darwin.c Tue Mar 4 14:25:41 2008 @@ -0,0 +1,569 @@ +#ifdef __i386__ +/* ----------------------------------------------------------------------- + ffi.c - Copyright (c) 1996, 1998, 1999, 2001 Red Hat, Inc. + Copyright (c) 2002 Ranjit Mathew + Copyright (c) 2002 Bo Thorsen + Copyright (c) 2002 Roger Sayle + + x86 Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ + +//#ifndef __x86_64__ + +#include +#include + +#include + +//void ffi_prep_args(char *stack, extended_cif *ecif); + +static inline int +retval_on_stack( + ffi_type* tp) +{ + if (tp->type == FFI_TYPE_STRUCT) + { +// int size = tp->size; + + if (tp->size > 8) + return 1; + + switch (tp->size) + { + case 1: case 2: case 4: case 8: + return 0; + default: + return 1; + } + } + + return 0; +} + +/* ffi_prep_args is called by the assembly routine once stack space + has been allocated for the function's arguments */ +/*@-exportheader@*/ +extern void ffi_prep_args(char*, extended_cif*); +void +ffi_prep_args( + char* stack, + extended_cif* ecif) +/*@=exportheader@*/ +{ + register unsigned int i; + register void** p_argv = ecif->avalue; + register char* argp = stack; + register ffi_type** p_arg; + + if (retval_on_stack(ecif->cif->rtype)) + { + *(void**)argp = ecif->rvalue; + argp += 4; + } + + p_arg = ecif->cif->arg_types; + + for (i = ecif->cif->nargs; i > 0; i--, p_arg++, p_argv++) + { + size_t z = (*p_arg)->size; + + /* Align if necessary */ + if ((sizeof(int) - 1) & (unsigned)argp) + argp = (char*)ALIGN(argp, sizeof(int)); + + if (z < sizeof(int)) + { + z = sizeof(int); + + switch ((*p_arg)->type) + { + case FFI_TYPE_SINT8: + *(signed int*)argp = (signed int)*(SINT8*)(*p_argv); + break; + + case FFI_TYPE_UINT8: + *(unsigned int*)argp = (unsigned int)*(UINT8*)(*p_argv); + break; + + case FFI_TYPE_SINT16: + *(signed int*)argp = (signed int)*(SINT16*)(*p_argv); + break; + + case FFI_TYPE_UINT16: + *(unsigned int*)argp = (unsigned int)*(UINT16*)(*p_argv); + break; + + case FFI_TYPE_SINT32: + *(signed int*)argp = (signed int)*(SINT32*)(*p_argv); + break; + + case FFI_TYPE_UINT32: + *(unsigned int*)argp = (unsigned int)*(UINT32*)(*p_argv); + break; + + case FFI_TYPE_STRUCT: + *(unsigned int*)argp = (unsigned int)*(UINT32*)(*p_argv); + break; + + default: + FFI_ASSERT(0); + break; + } + } + else + memcpy(argp, *p_argv, z); + + argp += z; + } +} + +/* Perform machine dependent cif processing */ +ffi_status +ffi_prep_cif_machdep( + ffi_cif* cif) +{ + /* Set the return type flag */ + switch (cif->rtype->type) + { +#if !defined(X86_WIN32) && !defined(X86_DARWIN) + case FFI_TYPE_STRUCT: +#endif + case FFI_TYPE_VOID: + case FFI_TYPE_SINT64: + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + case FFI_TYPE_LONGDOUBLE: + cif->flags = (unsigned)cif->rtype->type; + break; + + case FFI_TYPE_UINT64: + cif->flags = FFI_TYPE_SINT64; + break; + +#if defined(X86_WIN32) || defined(X86_DARWIN) + case FFI_TYPE_STRUCT: + switch (cif->rtype->size) + { + case 1: + cif->flags = FFI_TYPE_SINT8; + break; + + case 2: + cif->flags = FFI_TYPE_SINT16; + break; + + case 4: + cif->flags = FFI_TYPE_INT; + break; + + case 8: + cif->flags = FFI_TYPE_SINT64; + break; + + default: + cif->flags = FFI_TYPE_STRUCT; + break; + } + + break; +#endif + + default: + cif->flags = FFI_TYPE_INT; + break; + } + + /* Darwin: The stack needs to be aligned to a multiple of 16 bytes */ + cif->bytes = (cif->bytes + 15) & ~0xF; + + return FFI_OK; +} + +/*@-declundef@*/ +/*@-exportheader@*/ +extern void +ffi_call_SYSV( + void (*)(char *, extended_cif *), +/*@out@*/ extended_cif* , + unsigned , + unsigned , +/*@out@*/ unsigned* , + void (*fn)(void)); +/*@=declundef@*/ +/*@=exportheader@*/ + +#ifdef X86_WIN32 +/*@-declundef@*/ +/*@-exportheader@*/ +extern void +ffi_call_STDCALL( + void (char *, extended_cif *), +/*@out@*/ extended_cif* , + unsigned , + unsigned , +/*@out@*/ unsigned* 
, + void (*fn)(void)); +/*@=declundef@*/ +/*@=exportheader@*/ +#endif /* X86_WIN32 */ + +void +ffi_call( +/*@dependent@*/ ffi_cif* cif, + void (*fn)(void), +/*@out@*/ void* rvalue, +/*@dependent@*/ void** avalue) +{ + extended_cif ecif; + + ecif.cif = cif; + ecif.avalue = avalue; + + /* If the return value is a struct and we don't have a return + value address then we need to make one. */ + + if ((rvalue == NULL) && retval_on_stack(cif->rtype)) + { + /*@-sysunrecog@*/ + ecif.rvalue = alloca(cif->rtype->size); + /*@=sysunrecog@*/ + } + else + ecif.rvalue = rvalue; + + switch (cif->abi) + { + case FFI_SYSV: + /*@-usedef@*/ + /* To avoid changing the assembly code make sure the size of the argument + block is a multiple of 16. Then add 8 to compensate for local variables + in ffi_call_SYSV. */ + ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, + cif->flags, ecif.rvalue, fn); + /*@=usedef@*/ + break; + +#ifdef X86_WIN32 + case FFI_STDCALL: + /*@-usedef@*/ + ffi_call_STDCALL(ffi_prep_args, &ecif, cif->bytes, + cif->flags, ecif.rvalue, fn); + /*@=usedef@*/ + break; +#endif /* X86_WIN32 */ + + default: + FFI_ASSERT(0); + break; + } +} + +/** private members **/ + +static void +ffi_closure_SYSV( + ffi_closure* closure) __attribute__((regparm(1))); + +#if !FFI_NO_RAW_API +static void +ffi_closure_raw_SYSV( + ffi_raw_closure* closure) __attribute__((regparm(1))); +#endif + +/*@-exportheader@*/ +static inline +void +ffi_prep_incoming_args_SYSV( + char* stack, + void** rvalue, + void** avalue, + ffi_cif* cif) +/*@=exportheader@*/ +{ + register unsigned int i; + register void** p_argv = avalue; + register char* argp = stack; + register ffi_type** p_arg; + + if (retval_on_stack(cif->rtype)) + { + *rvalue = *(void**)argp; + argp += 4; + } + + for (i = cif->nargs, p_arg = cif->arg_types; i > 0; i--, p_arg++, p_argv++) + { +// size_t z; + + /* Align if necessary */ + if ((sizeof(int) - 1) & (unsigned)argp) + argp = (char*)ALIGN(argp, sizeof(int)); + +// z = (*p_arg)->size; + + /* because we're little endian, this is what it turns into. */ + *p_argv = (void*)argp; + + argp += (*p_arg)->size; + } +} + +/* This function is jumped to by the trampoline */ +__attribute__((regparm(1))) +static void +ffi_closure_SYSV( + ffi_closure* closure) +{ + long double res; + ffi_cif* cif = closure->cif; + void** arg_area = (void**)alloca(cif->nargs * sizeof(void*)); + void* resp = (void*)&res; + void* args = __builtin_dwarf_cfa(); + + /* This call will initialize ARG_AREA, such that each + element in that array points to the corresponding + value on the stack; and if the function returns + a structure, it will reset RESP to point to the + structure return address. 
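The closure path is easiest to follow from the user's side. The sketch below (not part of the patch) turns a C handler into a callable int (*)(int, int) pointer. It uses the ffi_closure_alloc/ffi_prep_closure_loc entry points that libffi 3.0 provides for allocating executable trampoline memory, rather than the older ffi_prep_closure shown in this file, so treat it as an illustration of the mechanism rather than of this exact code path.

    #include <stdio.h>
    #include <ffi.h>

    /* The handler the trampoline eventually reaches: it receives the
       harvested argument values and writes the result through "ret". */
    static void
    add_handler(ffi_cif *cif, void *ret, void **args, void *user_data)
    {
        int offset = *(int *) user_data;
        *(ffi_arg *) ret = *(int *) args[0] + *(int *) args[1] + offset;
    }

    int main(void)
    {
        ffi_cif cif;
        ffi_type *arg_types[2] = { &ffi_type_sint32, &ffi_type_sint32 };
        void *code = NULL;
        ffi_closure *closure = ffi_closure_alloc(sizeof(ffi_closure), &code);
        int offset = 100;
        int (*fn)(int, int);

        if (closure == NULL)
            return 1;
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2,
                         &ffi_type_sint32, arg_types) != FFI_OK)
            return 1;
        if (ffi_prep_closure_loc(closure, &cif, add_handler,
                                 &offset, code) != FFI_OK)
            return 1;

        /* "code" now points at a freshly built trampoline. */
        fn = (int (*)(int, int)) code;
        printf("%d\n", fn(1, 2));   /* prints 103 */

        ffi_closure_free(closure);
        return 0;
    }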
*/ + ffi_prep_incoming_args_SYSV(args, (void**)&resp, arg_area, cif); + + (closure->fun)(cif, resp, arg_area, closure->user_data); + + /* now, do a generic return based on the value of rtype */ + if (cif->flags == FFI_TYPE_INT) + asm("movl (%0),%%eax" + : : "r" (resp) : "eax"); + else if (cif->flags == FFI_TYPE_FLOAT) + asm("flds (%0)" + : : "r" (resp) : "st"); + else if (cif->flags == FFI_TYPE_DOUBLE) + asm("fldl (%0)" + : : "r" (resp) : "st", "st(1)"); + else if (cif->flags == FFI_TYPE_LONGDOUBLE) + asm("fldt (%0)" + : : "r" (resp) : "st", "st(1)"); + else if (cif->flags == FFI_TYPE_SINT64) + asm("movl 0(%0),%%eax;" + "movl 4(%0),%%edx" + : : "r" (resp) + : "eax", "edx"); + +#if defined(X86_WIN32) || defined(X86_DARWIN) + else if (cif->flags == FFI_TYPE_SINT8) /* 1-byte struct */ + asm("movsbl (%0),%%eax" + : : "r" (resp) : "eax"); + else if (cif->flags == FFI_TYPE_SINT16) /* 2-bytes struct */ + asm("movswl (%0),%%eax" + : : "r" (resp) : "eax"); +#endif + + else if (cif->flags == FFI_TYPE_STRUCT) + asm("lea -8(%ebp),%esp;" + "pop %esi;" + "pop %edi;" + "pop %ebp;" + "ret $4"); +} + + +/* How to make a trampoline. Derived from gcc/config/i386/i386.c. */ +#define FFI_INIT_TRAMPOLINE(TRAMP, FUN, CTX) \ + ({ \ + unsigned char* __tramp = (unsigned char*)(TRAMP); \ + unsigned int __fun = (unsigned int)(FUN); \ + unsigned int __ctx = (unsigned int)(CTX); \ + unsigned int __dis = __fun - ((unsigned int)__tramp + FFI_TRAMPOLINE_SIZE); \ + *(unsigned char*)&__tramp[0] = 0xb8; \ + *(unsigned int*)&__tramp[1] = __ctx; /* movl __ctx, %eax */ \ + *(unsigned char*)&__tramp[5] = 0xe9; \ + *(unsigned int*)&__tramp[6] = __dis; /* jmp __fun */ \ + }) + +/* the cif must already be prep'ed */ +ffi_status +ffi_prep_closure( + ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void* user_data) +{ +// FFI_ASSERT(cif->abi == FFI_SYSV); + if (cif->abi != FFI_SYSV) + return FFI_BAD_ABI; + + FFI_INIT_TRAMPOLINE(closure->tramp, &ffi_closure_SYSV, (void*)closure); + + closure->cif = cif; + closure->user_data = user_data; + closure->fun = fun; + + return FFI_OK; +} + +/* ------- Native raw API support -------------------------------- */ + +#if !FFI_NO_RAW_API + +__attribute__((regparm(1))) +static void +ffi_closure_raw_SYSV( + ffi_raw_closure* closure) +{ + long double res; + ffi_raw* raw_args = (ffi_raw*)__builtin_dwarf_cfa(); + ffi_cif* cif = closure->cif; + unsigned short rtype = cif->flags; + void* resp = (void*)&res; + + (closure->fun)(cif, resp, raw_args, closure->user_data); + + /* now, do a generic return based on the value of rtype */ + if (rtype == FFI_TYPE_INT) + asm("movl (%0),%%eax" + : : "r" (resp) : "eax"); + else if (rtype == FFI_TYPE_FLOAT) + asm("flds (%0)" + : : "r" (resp) : "st"); + else if (rtype == FFI_TYPE_DOUBLE) + asm("fldl (%0)" + : : "r" (resp) : "st", "st(1)"); + else if (rtype == FFI_TYPE_LONGDOUBLE) + asm("fldt (%0)" + : : "r" (resp) : "st", "st(1)"); + else if (rtype == FFI_TYPE_SINT64) + asm("movl 0(%0),%%eax;" + "movl 4(%0),%%edx" + : : "r" (resp) : "eax", "edx"); +} + +ffi_status +ffi_prep_raw_closure( + ffi_raw_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,ffi_raw*,void*), + void* user_data) +{ +// FFI_ASSERT (cif->abi == FFI_SYSV); + if (cif->abi != FFI_SYSV) + return FFI_BAD_ABI; + + int i; + +/* We currently don't support certain kinds of arguments for raw + closures. This should be implemented by a separate assembly language + routine, since it would require argument processing, something we + don't do now for performance. 
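FFI_INIT_TRAMPOLINE above emits ten bytes: 0xb8 plus a 32-bit immediate (movl $closure, %eax) followed by 0xe9 plus a 32-bit displacement (jmp to ffi_closure_SYSV, with the displacement taken relative to the first byte after the trampoline, which is what the __dis computation expresses). The standalone sketch below (not part of the patch, 32-bit assumptions throughout) writes the same encoding into an ordinary buffer purely so the byte layout can be inspected; actually executing such a trampoline would additionally require writable-and-executable memory.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char tramp[10];
        unsigned int ctx = 0x11223344;   /* stand-in for the closure address */
        unsigned int fun = 0x55667788;   /* stand-in for &ffi_closure_SYSV   */
        /* jmp rel32 is relative to the first byte after the trampoline. */
        unsigned int dis = fun - ((unsigned int)(size_t) tramp + 10);
        int i;

        tramp[0] = 0xb8;                 /* movl $imm32, %eax */
        memcpy(&tramp[1], &ctx, 4);
        tramp[5] = 0xe9;                 /* jmp rel32         */
        memcpy(&tramp[6], &dis, 4);

        for (i = 0; i < 10; i++)
            printf("%02x ", tramp[i]);
        printf("\n");
        return 0;
    }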
*/ + for (i = cif->nargs - 1; i >= 0; i--) + { + FFI_ASSERT(cif->arg_types[i]->type != FFI_TYPE_STRUCT); + FFI_ASSERT(cif->arg_types[i]->type != FFI_TYPE_LONGDOUBLE); + } + + FFI_INIT_TRAMPOLINE(closure->tramp, &ffi_closure_raw_SYSV, (void*)closure); + + closure->cif = cif; + closure->user_data = user_data; + closure->fun = fun; + + return FFI_OK; +} + +static void +ffi_prep_args_raw( + char* stack, + extended_cif* ecif) +{ + memcpy(stack, ecif->avalue, ecif->cif->bytes); +} + +/* We borrow this routine from libffi (it must be changed, though, to + actually call the function passed in the first argument. as of + libffi-1.20, this is not the case.) */ +//extern void +//ffi_call_SYSV( +// void (*)(char *, extended_cif *), +///*@out@*/ extended_cif* , +// unsigned , +// unsigned , +//*@out@*/ unsigned* , +// void (*fn)()); + +#ifdef X86_WIN32 +extern void +ffi_call_STDCALL( + void (*)(char *, extended_cif *), +/*@out@*/ extended_cif* , + unsigned , + unsigned , +/*@out@*/ unsigned* , + void (*fn)()); +#endif // X86_WIN32 + +void +ffi_raw_call( +/*@dependent@*/ ffi_cif* cif, + void (*fn)(), +/*@out@*/ void* rvalue, +/*@dependent@*/ ffi_raw* fake_avalue) +{ + extended_cif ecif; + void **avalue = (void **)fake_avalue; + + ecif.cif = cif; + ecif.avalue = avalue; + + /* If the return value is a struct and we don't have a return + value address then we need to make one */ + if ((rvalue == NULL) && retval_on_stack(cif->rtype)) + { + /*@-sysunrecog@*/ + ecif.rvalue = alloca(cif->rtype->size); + /*@=sysunrecog@*/ + } + else + ecif.rvalue = rvalue; + + switch (cif->abi) + { + case FFI_SYSV: + /*@-usedef@*/ + ffi_call_SYSV(ffi_prep_args_raw, &ecif, cif->bytes, + cif->flags, ecif.rvalue, fn); + /*@=usedef@*/ + break; +#ifdef X86_WIN32 + case FFI_STDCALL: + /*@-usedef@*/ + ffi_call_STDCALL(ffi_prep_args_raw, &ecif, cif->bytes, + cif->flags, ecif.rvalue, fn); + /*@=usedef@*/ + break; +#endif /* X86_WIN32 */ + default: + FFI_ASSERT(0); + break; + } +} + +#endif // !FFI_NO_RAW_API +//#endif // !__x86_64__ +#endif // __i386__ Modified: python/branches/libffi3-branch/setup.py ============================================================================== --- python/branches/libffi3-branch/setup.py (original) +++ python/branches/libffi3-branch/setup.py Tue Mar 4 14:25:41 2008 @@ -1452,8 +1452,37 @@ # *** Uncomment these for TOGL extension only: # -lGL -lGLU -lXext -lXmu \ + def configure_ctypes_darwin(self, ext): + # Darwin (OS X) uses preconfigured files, in + # the Modules/_ctypes/libffi_osx directory. + (srcdir,) = sysconfig.get_config_vars('srcdir') + ffi_srcdir = os.path.abspath(os.path.join(srcdir, 'Modules', + '_ctypes', 'libffi_osx')) + sources = [os.path.join(ffi_srcdir, p) + for p in ['ffi.c', + 'x86/x86-darwin.S', + 'x86/x86-ffi_darwin.c', + 'x86/x86-ffi64.c', + 'powerpc/ppc-darwin.S', + 'powerpc/ppc-darwin_closure.S', + 'powerpc/ppc-ffi_darwin.c', + 'powerpc/ppc64-darwin_closure.S', + ]] + + # Add .S (preprocessed assembly) to C compiler source extensions. 
+ self.compiler.src_extensions.append('.S') + + include_dirs = [os.path.join(ffi_srcdir, 'include'), + os.path.join(ffi_srcdir, 'powerpc')] + ext.include_dirs.extend(include_dirs) + ext.sources.extend(sources) + return True + def configure_ctypes(self, ext): if not self.use_system_libffi: + if sys.platform == 'darwin': + return self.configure_ctypes_darwin(ext) + (srcdir,) = sysconfig.get_config_vars('srcdir') ffi_builddir = os.path.join(self.build_temp, 'libffi') ffi_srcdir = os.path.abspath(os.path.join(srcdir, 'Modules', @@ -1512,6 +1541,7 @@ if sys.platform == 'darwin': sources.append('_ctypes/darwin/dlfcn_simple.c') + extra_compile_args.append('-DMACOSX') include_dirs.append('_ctypes/darwin') # XXX Is this still needed? ## extra_link_args.extend(['-read_only_relocs', 'warning']) @@ -1541,6 +1571,11 @@ if not '--with-system-ffi' in sysconfig.get_config_var("CONFIG_ARGS"): return + if sys.platform == 'darwin': + # OS X 10.5 comes with libffi.dylib; the include files are + # in /usr/include/ffi + inc_dirs.append('/usr/include/ffi') + ffi_inc = find_file('ffi.h', [], inc_dirs) if ffi_inc is not None: ffi_h = ffi_inc[0] + '/ffi.h' From python-checkins at python.org Tue Mar 4 15:26:32 2008 From: python-checkins at python.org (thomas.heller) Date: Tue, 4 Mar 2008 15:26:32 +0100 (CET) Subject: [Python-checkins] r61231 - in python/branches/libffi3-branch/Modules/_ctypes/libffi: README configure configure.ac fficonfig.h.in fficonfig.py.in src/cris/ffi.c src/m68k/ffi.c src/powerpc/darwin.S src/powerpc/darwin_closure.S src/powerpc/ffi_darwin.c src/x86/darwin.S src/x86/ffi_darwin.c Message-ID: <20080304142632.429861E4007@bag.python.org> Author: thomas.heller Date: Tue Mar 4 15:26:31 2008 New Revision: 61231 Removed: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/ffi_darwin.c Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/README python/branches/libffi3-branch/Modules/_ctypes/libffi/configure python/branches/libffi3-branch/Modules/_ctypes/libffi/configure.ac python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.h.in python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.py.in python/branches/libffi3-branch/Modules/_ctypes/libffi/src/cris/ffi.c python/branches/libffi3-branch/Modules/_ctypes/libffi/src/m68k/ffi.c python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin.S python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/darwin.S Log: Apart from some small changes to configure.ac, these files are in sync now with libffi3.0.4. Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/README ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/README (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/README Tue Mar 4 15:26:31 2008 @@ -1,7 +1,7 @@ Status ====== -libffi-3.0.2 was released on February 21, 2008. Check the libffi web +libffi-3.0.4 was released on February 24, 2008. Check the libffi web page for updates: . @@ -48,13 +48,17 @@ mips o32 linux (little endian) powerpc darwin powerpc64 linux - sparc solaris (SPARC V9 ABI) + sparc solaris + sparc64 solaris x86 cygwin x86 darwin x86 freebsd x86 linux + x86 openbsd + x86-64 darwin x86-64 linux x86-64 OS X + x86-64 freebsd Please send additional platform test results to libffi-discuss at sourceware.org. 
@@ -89,6 +93,7 @@ GNU make. You can ftp GNU make from prep.ai.mit.edu:/pub/gnu. To ensure that libffi is working as advertised, type "make check". +This will require that you have DejaGNU installed. To install the library and header files, type "make install". @@ -153,6 +158,14 @@ History ======= +3.0.4 Feb-24-08 + Fix x86 OpenBSD configury. + +3.0.3 Feb-22-08 + Enable x86 OpenBSD thanks to Thomas Heller, and + x86-64 FreeBSD thanks to Bj?rn K?nig and Andreas Tobler. + Clean up test instruction in README. + 3.0.2 Feb-21-08 Improved x86 FreeBSD support. Thanks to Bj?rn K?nig. Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/configure ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/configure (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/configure Tue Mar 4 15:26:31 2008 @@ -937,7 +937,6 @@ HAVE_LONG_DOUBLE TARGET TARGETDIR -MKTARGET toolexecdir toolexeclibdir LIBOBJS @@ -4691,7 +4690,7 @@ ;; *-*-irix6*) # Find out which ABI we are using. - echo '#line 4694 "configure"' > conftest.$ac_ext + echo '#line 4693 "configure"' > conftest.$ac_ext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 (eval $ac_compile) 2>&5 ac_status=$? @@ -7434,11 +7433,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:7437: $lt_compile\"" >&5) + (eval echo "\"\$as_me:7436: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 - echo "$as_me:7441: \$? = $ac_status" >&5 + echo "$as_me:7440: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. @@ -7724,11 +7723,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:7727: $lt_compile\"" >&5) + (eval echo "\"\$as_me:7726: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 - echo "$as_me:7731: \$? = $ac_status" >&5 + echo "$as_me:7730: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. @@ -7828,11 +7827,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:7831: $lt_compile\"" >&5) + (eval echo "\"\$as_me:7830: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 - echo "$as_me:7835: \$? = $ac_status" >&5 + echo "$as_me:7834: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized @@ -10179,7 +10178,7 @@ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext < conftest.$ac_ext <&5) + (eval echo "\"\$as_me:12701: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 - echo "$as_me:12706: \$? = $ac_status" >&5 + echo "$as_me:12705: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. 
@@ -12803,11 +12802,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:12806: $lt_compile\"" >&5) + (eval echo "\"\$as_me:12805: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 - echo "$as_me:12810: \$? = $ac_status" >&5 + echo "$as_me:12809: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized @@ -14367,11 +14366,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:14370: $lt_compile\"" >&5) + (eval echo "\"\$as_me:14369: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 - echo "$as_me:14374: \$? = $ac_status" >&5 + echo "$as_me:14373: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. @@ -14471,11 +14470,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:14474: $lt_compile\"" >&5) + (eval echo "\"\$as_me:14473: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 - echo "$as_me:14478: \$? = $ac_status" >&5 + echo "$as_me:14477: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized @@ -16660,11 +16659,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:16663: $lt_compile\"" >&5) + (eval echo "\"\$as_me:16662: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 - echo "$as_me:16667: \$? = $ac_status" >&5 + echo "$as_me:16666: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. @@ -16950,11 +16949,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:16953: $lt_compile\"" >&5) + (eval echo "\"\$as_me:16952: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 - echo "$as_me:16957: \$? = $ac_status" >&5 + echo "$as_me:16956: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. @@ -17054,11 +17053,11 @@ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` - (eval echo "\"\$as_me:17057: $lt_compile\"" >&5) + (eval echo "\"\$as_me:17056: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 - echo "$as_me:17061: \$? = $ac_status" >&5 + echo "$as_me:17060: \$? 
= $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized @@ -20482,8 +20481,6 @@ { (exit 1); exit 1; }; } fi -MKTARGET=$TARGET - if test x$TARGET = xMIPS; then MIPS_TRUE= MIPS_FALSE='#' @@ -22369,8 +22366,6 @@ esac - - { echo "$as_me:$LINENO: checking assembler .cfi pseudo-op support" >&5 echo $ECHO_N "checking assembler .cfi pseudo-op support... $ECHO_C" >&6; } if test "${libffi_cv_as_cfi_pseudo_op+set}" = set; then @@ -22639,7 +22634,6 @@ - # Check whether --enable-debug was given. if test "${enable_debug+set}" = set; then enableval=$enable_debug; if test "$enable_debug" = "yes"; then @@ -22715,12 +22709,6 @@ ac_config_commands="$ac_config_commands src" -TARGETINCDIR=$TARGETDIR -case $host in -*-*-darwin*) - TARGETINCDIR="darwin" - ;; -esac ac_config_links="$ac_config_links include/ffitarget.h:src/$TARGETDIR/ffitarget.h" @@ -23792,14 +23780,13 @@ HAVE_LONG_DOUBLE!$HAVE_LONG_DOUBLE$ac_delim TARGET!$TARGET$ac_delim TARGETDIR!$TARGETDIR$ac_delim -MKTARGET!$MKTARGET$ac_delim toolexecdir!$toolexecdir$ac_delim toolexeclibdir!$toolexeclibdir$ac_delim LIBOBJS!$LIBOBJS$ac_delim LTLIBOBJS!$LTLIBOBJS$ac_delim _ACEOF - if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 77; then + if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 76; then break elif $ac_last_try; then { { echo "$as_me:$LINENO: error: could not make $CONFIG_STATUS" >&5 Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/configure.ac ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/configure.ac (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/configure.ac Tue Mar 4 15:26:31 2008 @@ -156,12 +156,6 @@ AC_MSG_ERROR(["libffi has not been ported to $host."]) fi -dnl libffi changes TARGET for MIPS to define a such macro in the header -dnl while MIPS_IRIX or MIPS_LINUX is separatedly used to decide which -dnl files will be compiled. So, we need to keep the original decision -dnl of TARGET to use in fficonfig.py.in. -MKTARGET=$TARGET - AM_CONDITIONAL(MIPS, test x$TARGET = xMIPS) AM_CONDITIONAL(SPARC, test x$TARGET = xSPARC) AM_CONDITIONAL(X86, test x$TARGET = xX86) @@ -207,23 +201,6 @@ AC_SUBST(HAVE_LONG_DOUBLE) AC_C_BIGENDIAN -AH_VERBATIM([WORDS_BIGENDIAN], -[ -/* Define to 1 if your processor stores words with the most significant byte - first (like Motorola and SPARC, unlike Intel and VAX). - - The block below does compile-time checking for endianness on platforms - that use GCC and therefore allows compiling fat binaries on OSX by using - '-arch ppc -arch i386' as the compile flags. The phrasing was choosen - such that the configure-result is used on systems that don't use GCC. 
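The block being removed here encodes a small but useful trick: instead of trusting only the configure-time endianness probe, it lets the compiler's own __BIG_ENDIAN__ / __LITTLE_ENDIAN__ predefines decide, which is what allows a single '-arch ppc -arch i386' fat build on OS X to come out right for each architecture. A standalone version of the same check, with a demo-prefixed macro so it does not collide with the real fficonfig.h, might look like this:

    #include <stdio.h>

    #ifdef __BIG_ENDIAN__
    #  define DEMO_WORDS_BIGENDIAN 1
    #else
    #  ifndef __LITTLE_ENDIAN__
    /* Neither predefine is available (non-GCC compiler): fall back to the
       configure-time answer; this sketch simply leaves it undefined. */
    #  endif
    #endif

    int main(void)
    {
    #ifdef DEMO_WORDS_BIGENDIAN
        puts("compiled for a big-endian target");
    #else
        puts("compiled for a little-endian (or undetermined) target");
    #endif
        return 0;
    }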
-*/ -#ifdef __BIG_ENDIAN__ -#define WORDS_BIGENDIAN 1 -#else -#ifndef __LITTLE_ENDIAN__ -#undef WORDS_BIGENDIAN -#endif -#endif]) AC_CACHE_CHECK([assembler .cfi pseudo-op support], libffi_cv_as_cfi_pseudo_op, [ @@ -326,7 +303,6 @@ AC_SUBST(TARGET) AC_SUBST(TARGETDIR) -AC_SUBST(MKTARGET) AC_SUBST(SHELL) @@ -382,12 +358,6 @@ test -d src/$TARGETDIR || mkdir src/$TARGETDIR ], [TARGETDIR="$TARGETDIR"]) -TARGETINCDIR=$TARGETDIR -case $host in -*-*-darwin*) - TARGETINCDIR="darwin" - ;; -esac AC_CONFIG_LINKS(include/ffitarget.h:src/$TARGETDIR/ffitarget.h) AC_CONFIG_FILES(include/ffi.h) Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.h.in ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.h.in (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.h.in Tue Mar 4 15:26:31 2008 @@ -140,20 +140,8 @@ #undef VERSION /* Define to 1 if your processor stores words with the most significant byte - first (like Motorola and SPARC, unlike Intel and VAX). - - The block below does compile-time checking for endianness on platforms - that use GCC and therefore allows compiling fat binaries on OSX by using - '-arch ppc -arch i386' as the compile flags. The phrasing was choosen - such that the configure-result is used on systems that don't use GCC. -*/ -#ifdef __BIG_ENDIAN__ -#define WORDS_BIGENDIAN 1 -#else -#ifndef __LITTLE_ENDIAN__ + first (like Motorola and SPARC, unlike Intel and VAX). */ #undef WORDS_BIGENDIAN -#endif -#endif #ifdef HAVE_HIDDEN_VISIBILITY_ATTRIBUTE Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.py.in ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.py.in (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/fficonfig.py.in Tue Mar 4 15:26:31 2008 @@ -6,7 +6,6 @@ 'MIPS_IRIX': ['src/mips/ffi.c', 'src/mips/o32.S', 'src/mips/n32.S'], 'MIPS_LINUX': ['src/mips/ffi.c', 'src/mips/o32.S'], 'X86': ['src/x86/ffi.c', 'src/x86/sysv.S'], - 'X86_DARWIN': ['src/x86/ffi_darwin.c', 'src/x86/darwin.S'], 'X86_FREEBSD': ['src/x86/ffi.c', 'src/x86/sysv.S'], 'X86_WIN32': ['src/x86/ffi.c', 'src/x86/win32.S'], 'SPARC': ['src/sparc/ffi.c', 'src/sparc/v8.S', 'src/sparc/v9.S'], @@ -15,8 +14,7 @@ 'M32R': ['src/m32r/sysv.S', 'src/m32r/ffi.c'], 'M68K': ['src/m68k/ffi.c', 'src/m68k/sysv.S'], 'POWERPC': ['src/powerpc/ffi.c', 'src/powerpc/sysv.S', 'src/powerpc/ppc_closure.S', 'src/powerpc/linux64.S', 'src/powerpc/linux64_closure.S'], - 'POWERPC_AIX': ['src/powerpc/ffi_darwin.c', 'src/powerpc/aix.S', 'src/powerpc/aix_closure.S'], - 'POWERPC_DARWIN': ['src/powerpc/ffi_darwin.c', 'src/powerpc/darwin.S', 'src/powerpc/darwin_closure.S'], + 'POWERPC_AIX': ['src/powerpc/ffi.c', 'src/powerpc/aix.S', 'src/powerpc/aix_closure.S'], 'POWERPC_FREEBSD': ['src/powerpc/ffi.c', 'src/powerpc/sysv.S', 'src/powerpc/ppc_closure.S'], 'ARM': ['src/arm/sysv.S', 'src/arm/ffi.c'], 'LIBFFI_CRIS': ['src/cris/sysv.S', 'src/cris/ffi.c'], @@ -28,19 +26,8 @@ 'PA': ['src/pa/linux.S', 'src/pa/ffi.c'], } -# Build all darwin related files on all supported darwin architectures, this -# makes it easier to build universal binaries. 
-if 1: - all_darwin = ('X86_DARWIN', 'POWERPC_DARWIN') - all_darwin_files = [] - for pn in all_darwin: - all_darwin_files.extend(ffi_platforms[pn]) - for pn in all_darwin: - ffi_platforms[pn] = all_darwin_files - del all_darwin, all_darwin_files, pn - ffi_srcdir = '@srcdir@' -ffi_sources += ffi_platforms['@MKTARGET@'] +ffi_sources += ffi_platforms['@TARGET@'] ffi_sources = [os.path.join('@srcdir@', f) for f in ffi_sources] ffi_cflags = '@CFLAGS@' Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/cris/ffi.c ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/src/cris/ffi.c (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/src/cris/ffi.c Tue Mar 4 15:26:31 2008 @@ -236,11 +236,11 @@ extern void ffi_call_SYSV (int (*)(char *, extended_cif *), extended_cif *, - unsigned, unsigned, unsigned *, void (*fn)(void)) + unsigned, unsigned, unsigned *, void (*fn) ()) __attribute__ ((__visibility__ ("hidden"))); void -ffi_call (ffi_cif * cif, void (*fn)(void), void *rvalue, void **avalue) +ffi_call (ffi_cif * cif, void (*fn) (), void *rvalue, void **avalue) { extended_cif ecif; Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/m68k/ffi.c ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/src/m68k/ffi.c (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/src/m68k/ffi.c Tue Mar 4 15:26:31 2008 @@ -14,7 +14,7 @@ void ffi_call_SYSV (extended_cif *, unsigned, unsigned, - void *, void (*fn)(void)); + void *, void (*fn) ()); void *ffi_prep_args (void *stack, extended_cif *ecif); void ffi_closure_SYSV (ffi_closure *); void ffi_closure_struct_SYSV (ffi_closure *); @@ -166,7 +166,7 @@ } void -ffi_call (ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) +ffi_call (ffi_cif *cif, void (*fn) (), void *rvalue, void **avalue) { extended_cif ecif; Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin.S ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin.S (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin.S Tue Mar 4 15:26:31 2008 @@ -1,4 +1,3 @@ -#ifdef __ppc__ /* ----------------------------------------------------------------------- darwin.S - Copyright (c) 2000 John Hornkvist Copyright (c) 2004 Free Software Foundation, Inc. @@ -244,4 +243,3 @@ .align LOG2_GPR_BYTES LLFB0$non_lazy_ptr: .g_long LFB0 -#endif Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S Tue Mar 4 15:26:31 2008 @@ -1,4 +1,3 @@ -#ifdef __ppc__ /* ----------------------------------------------------------------------- darwin_closure.S - Copyright (c) 2002, 2003, 2004, Free Software Foundation, Inc. 
based on ppc_closure.S @@ -247,7 +246,7 @@ /* END(ffi_closure_ASM) */ .data -.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support EH_frame1: .set L$set$0,LECIE1-LSCIE1 .long L$set$0 ; Length of Common Information Entry @@ -316,4 +315,3 @@ .align LOG2_GPR_BYTES LLFB1$non_lazy_ptr: .g_long LFB1 -#endif Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c Tue Mar 4 15:26:31 2008 @@ -1,4 +1,3 @@ -#if !(defined(__APPLE__) && !defined(__ppc__)) /* ----------------------------------------------------------------------- ffi_darwin.c @@ -799,4 +798,3 @@ /* Tell ffi_closure_ASM to perform return type promotions. */ return cif->rtype->type; } -#endif Modified: python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/darwin.S ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/darwin.S (original) +++ python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/darwin.S Tue Mar 4 15:26:31 2008 @@ -1,4 +1,3 @@ -#ifdef __i386__ /* ----------------------------------------------------------------------- darwin.S - Copyright (c) 1996, 1998, 2001, 2002, 2003, 2005 Red Hat, Inc. Copyright (C) 2008 Free Software Foundation, Inc. @@ -442,5 +441,3 @@ #endif #endif /* ifndef __x86_64__ */ - -#endif /* defined __i386__ */ Deleted: /python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/ffi_darwin.c ============================================================================== --- /python/branches/libffi3-branch/Modules/_ctypes/libffi/src/x86/ffi_darwin.c Tue Mar 4 15:26:31 2008 +++ (empty file) @@ -1,596 +0,0 @@ -# ifdef __i386__ -/* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1996, 1998, 1999, 2001 Red Hat, Inc. - Copyright (c) 2002 Ranjit Mathew - Copyright (c) 2002 Bo Thorsen - Copyright (c) 2002 Roger Sayle - - x86 Foreign Function Interface - - Permission is hereby granted, free of charge, to any person obtaining - a copy of this software and associated documentation files (the - ``Software''), to deal in the Software without restriction, including - without limitation the rights to use, copy, modify, merge, publish, - distribute, sublicense, and/or sell copies of the Software, and to - permit persons to whom the Software is furnished to do so, subject to - the following conditions: - - The above copyright notice and this permission notice shall be included - in all copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
- ----------------------------------------------------------------------- */ - -#ifndef __x86_64__ - -#include -#include - -#include - -/* ffi_prep_args is called by the assembly routine once stack space - has been allocated for the function's arguments */ - -/*@-exportheader@*/ -void ffi_prep_args(char *stack, extended_cif *ecif); - -static inline int retval_on_stack(ffi_type* tp) -{ - if (tp->type == FFI_TYPE_STRUCT) { - int sz = tp->size; - if (sz > 8) { - return 1; - } - switch (sz) { - case 1: case 2: case 4: case 8: return 0; - default: return 1; - } - } - return 0; -} - - -void ffi_prep_args(char *stack, extended_cif *ecif) -/*@=exportheader@*/ -{ - register unsigned int i; - register void **p_argv; - register char *argp; - register ffi_type **p_arg; - - argp = stack; - - if (retval_on_stack(ecif->cif->rtype)) { - *(void **) argp = ecif->rvalue; - argp += 4; - } - - - p_argv = ecif->avalue; - - for (i = ecif->cif->nargs, p_arg = ecif->cif->arg_types; - i != 0; - i--, p_arg++) - { - size_t z; - - /* Align if necessary */ - if ((sizeof(int) - 1) & (unsigned) argp) - argp = (char *) ALIGN(argp, sizeof(int)); - - z = (*p_arg)->size; - if (z < sizeof(int)) - { - z = sizeof(int); - switch ((*p_arg)->type) - { - case FFI_TYPE_SINT8: - *(signed int *) argp = (signed int)*(SINT8 *)(* p_argv); - break; - - case FFI_TYPE_UINT8: - *(unsigned int *) argp = (unsigned int)*(UINT8 *)(* p_argv); - break; - - case FFI_TYPE_SINT16: - *(signed int *) argp = (signed int)*(SINT16 *)(* p_argv); - break; - - case FFI_TYPE_UINT16: - *(unsigned int *) argp = (unsigned int)*(UINT16 *)(* p_argv); - break; - - case FFI_TYPE_SINT32: - *(signed int *) argp = (signed int)*(SINT32 *)(* p_argv); - break; - - case FFI_TYPE_UINT32: - *(unsigned int *) argp = (unsigned int)*(UINT32 *)(* p_argv); - break; - - case FFI_TYPE_STRUCT: - *(unsigned int *) argp = (unsigned int)*(UINT32 *)(* p_argv); - break; - - default: - FFI_ASSERT(0); - } - } - else - { - memcpy(argp, *p_argv, z); - } - p_argv++; - argp += z; - } - - return; -} - -/* Perform machine dependent cif processing */ -ffi_status ffi_prep_cif_machdep(ffi_cif *cif) -{ - /* Set the return type flag */ - switch (cif->rtype->type) - { - case FFI_TYPE_VOID: -#if !defined(X86_WIN32) && !defined(X86_DARWIN) - case FFI_TYPE_STRUCT: -#endif - case FFI_TYPE_SINT64: - case FFI_TYPE_FLOAT: - case FFI_TYPE_DOUBLE: -#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE - case FFI_TYPE_LONGDOUBLE: -#endif - cif->flags = (unsigned) cif->rtype->type; - break; - - case FFI_TYPE_UINT64: - cif->flags = FFI_TYPE_SINT64; - break; - -#if defined(X86_WIN32) || defined(X86_DARWIN) - - case FFI_TYPE_STRUCT: - if (cif->rtype->size == 1) - { - cif->flags = FFI_TYPE_SINT8; /* same as char size */ - } - else if (cif->rtype->size == 2) - { - cif->flags = FFI_TYPE_SINT16; /* same as short size */ - } - else if (cif->rtype->size == 4) - { - cif->flags = FFI_TYPE_INT; /* same as int type */ - } - else if (cif->rtype->size == 8) - { - cif->flags = FFI_TYPE_SINT64; /* same as int64 type */ - } - else - { - cif->flags = FFI_TYPE_STRUCT; - } - break; -#endif - - default: - cif->flags = FFI_TYPE_INT; - break; - } - - /* Darwin: The stack needs to be aligned to a multiple of 16 bytes */ -#if 1 - cif->bytes = (cif->bytes + 15) & ~0xF; -#endif - - - return FFI_OK; -} - -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ - -#ifdef 
X86_WIN32 -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_STDCALL(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ -#endif /* X86_WIN32 */ - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) -{ - extended_cif ecif; - - ecif.cif = cif; - ecif.avalue = avalue; - - /* If the return value is a struct and we don't have a return */ - /* value address then we need to make one */ - - if ((rvalue == NULL) && retval_on_stack(cif->rtype)) - { - /*@-sysunrecog@*/ - ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ - } - else - ecif.rvalue = rvalue; - - switch (cif->abi) - { - case FFI_SYSV: - /*@-usedef@*/ - /* To avoid changing the assembly code make sure the size of the argument - * block is a multiple of 16. Then add 8 to compensate for local variables - * in ffi_call_SYSV. - */ - ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#ifdef X86_WIN32 - case FFI_STDCALL: - /*@-usedef@*/ - ffi_call_STDCALL(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#endif /* X86_WIN32 */ - default: - FFI_ASSERT(0); - break; - } -} - - -/** private members **/ - -static void ffi_closure_SYSV (ffi_closure *) - __attribute__ ((regparm(1))); -#if !FFI_NO_RAW_API -static void ffi_closure_raw_SYSV (ffi_raw_closure *) - __attribute__ ((regparm(1))); -#endif - -/*@-exportheader@*/ -static inline void -ffi_prep_incoming_args_SYSV(char *stack, void **rvalue, - void **avalue, ffi_cif *cif) -/*@=exportheader@*/ -{ - register unsigned int i; - register void **p_argv; - register char *argp; - register ffi_type **p_arg; - - argp = stack; - - if (retval_on_stack(cif->rtype)) { - *rvalue = *(void **) argp; - argp += 4; - } - - p_argv = avalue; - - for (i = cif->nargs, p_arg = cif->arg_types; (i != 0); i--, p_arg++) - { - size_t z; - - /* Align if necessary */ - if ((sizeof(int) - 1) & (unsigned) argp) { - argp = (char *) ALIGN(argp, sizeof(int)); - } - - z = (*p_arg)->size; - - /* because we're little endian, this is what it turns into. */ - - *p_argv = (void*) argp; - - p_argv++; - argp += z; - } - - return; -} - -/* This function is jumped to by the trampoline */ - -static void -ffi_closure_SYSV (closure) - ffi_closure *closure; -{ - // this is our return value storage - long double res; - - // our various things... - ffi_cif *cif; - void **arg_area; - void *resp = (void*)&res; - void *args = __builtin_dwarf_cfa (); - - - cif = closure->cif; - arg_area = (void**) alloca (cif->nargs * sizeof (void*)); - - /* this call will initialize ARG_AREA, such that each - * element in that array points to the corresponding - * value on the stack; and if the function returns - * a structure, it will re-set RESP to point to the - * structure return address. 
*/ - - ffi_prep_incoming_args_SYSV(args, (void**)&resp, arg_area, cif); - - (closure->fun) (cif, resp, arg_area, closure->user_data); - - /* now, do a generic return based on the value of rtype */ - if (cif->flags == FFI_TYPE_INT) - { - asm ("movl (%0),%%eax" : : "r" (resp) : "eax"); - } - else if (cif->flags == FFI_TYPE_FLOAT) - { - asm ("flds (%0)" : : "r" (resp) : "st" ); - } - else if (cif->flags == FFI_TYPE_DOUBLE) - { - asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (cif->flags == FFI_TYPE_LONGDOUBLE) - { - asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (cif->flags == FFI_TYPE_SINT64) - { - asm ("movl 0(%0),%%eax;" - "movl 4(%0),%%edx" - : : "r"(resp) - : "eax", "edx"); - } -#if defined(X86_WIN32) || defined(X86_DARWIN) - else if (cif->flags == FFI_TYPE_SINT8) /* 1-byte struct */ - { - asm ("movsbl (%0),%%eax" : : "r" (resp) : "eax"); - } - else if (cif->flags == FFI_TYPE_SINT16) /* 2-bytes struct */ - { - asm ("movswl (%0),%%eax" : : "r" (resp) : "eax"); - } -#endif - - else if (cif->flags == FFI_TYPE_STRUCT) - { - asm ("lea -8(%ebp),%esp;" - "pop %esi;" - "pop %edi;" - "pop %ebp;" - "ret $4"); - } -} - - -/* How to make a trampoline. Derived from gcc/config/i386/i386.c. */ - -#define FFI_INIT_TRAMPOLINE(TRAMP,FUN,CTX) \ -({ unsigned char *__tramp = (unsigned char*)(TRAMP); \ - unsigned int __fun = (unsigned int)(FUN); \ - unsigned int __ctx = (unsigned int)(CTX); \ - unsigned int __dis = __fun - ((unsigned int) __tramp + FFI_TRAMPOLINE_SIZE); \ - *(unsigned char*) &__tramp[0] = 0xb8; \ - *(unsigned int*) &__tramp[1] = __ctx; /* movl __ctx, %eax */ \ - *(unsigned char *) &__tramp[5] = 0xe9; \ - *(unsigned int*) &__tramp[6] = __dis; /* jmp __fun */ \ - }) - - -/* the cif must already be prep'ed */ - -ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,void**,void*), - void *user_data) -{ - FFI_ASSERT (cif->abi == FFI_SYSV); - - FFI_INIT_TRAMPOLINE (&closure->tramp[0], \ - &ffi_closure_SYSV, \ - (void*)closure); - - closure->cif = cif; - closure->user_data = user_data; - closure->fun = fun; - - return FFI_OK; -} - -/* ------- Native raw API support -------------------------------- */ - -#if !FFI_NO_RAW_API - -static void -ffi_closure_raw_SYSV (closure) - ffi_raw_closure *closure; -{ - // this is our return value storage - long double res; - - // our various things... - ffi_raw *raw_args; - ffi_cif *cif; - unsigned short rtype; - void *resp = (void*)&res; - - /* get the cif */ - cif = closure->cif; - - /* the SYSV/X86 abi matches the RAW API exactly, well.. 
almost */ - raw_args = (ffi_raw*) __builtin_dwarf_cfa (); - - (closure->fun) (cif, resp, raw_args, closure->user_data); - - rtype = cif->flags; - - /* now, do a generic return based on the value of rtype */ - if (rtype == FFI_TYPE_INT) - { - asm ("movl (%0),%%eax" : : "r" (resp) : "eax"); - } - else if (rtype == FFI_TYPE_FLOAT) - { - asm ("flds (%0)" : : "r" (resp) : "st" ); - } - else if (rtype == FFI_TYPE_DOUBLE) - { - asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (rtype == FFI_TYPE_LONGDOUBLE) - { - asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (rtype == FFI_TYPE_SINT64) - { - asm ("movl 0(%0),%%eax; movl 4(%0),%%edx" - : : "r"(resp) - : "eax", "edx"); - } -} - - - - -ffi_status -ffi_prep_raw_closure (ffi_raw_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,ffi_raw*,void*), - void *user_data) -{ - int i; - - FFI_ASSERT (cif->abi == FFI_SYSV); - - // we currently don't support certain kinds of arguments for raw - // closures. This should be implemented by a separate assembly language - // routine, since it would require argument processing, something we - // don't do now for performance. - - for (i = cif->nargs-1; i >= 0; i--) - { - FFI_ASSERT (cif->arg_types[i]->type != FFI_TYPE_STRUCT); - FFI_ASSERT (cif->arg_types[i]->type != FFI_TYPE_LONGDOUBLE); - } - - - FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_raw_SYSV, - (void*)closure); - - closure->cif = cif; - closure->user_data = user_data; - closure->fun = fun; - - return FFI_OK; -} - -static void -ffi_prep_args_raw(char *stack, extended_cif *ecif) -{ - memcpy (stack, ecif->avalue, ecif->cif->bytes); -} - -/* we borrow this routine from libffi (it must be changed, though, to - * actually call the function passed in the first argument. as of - * libffi-1.20, this is not the case.) 
- */ - -extern void -ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); - -#ifdef X86_WIN32 -extern void -ffi_call_STDCALL(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); -#endif /* X86_WIN32 */ - -void -ffi_raw_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(), - /*@out@*/ void *rvalue, - /*@dependent@*/ ffi_raw *fake_avalue) -{ - extended_cif ecif; - void **avalue = (void **)fake_avalue; - - ecif.cif = cif; - ecif.avalue = avalue; - - /* If the return value is a struct and we don't have a return */ - /* value address then we need to make one */ - - if ((rvalue == NULL) && retval_on_stack(cif->rtype)) - { - /*@-sysunrecog@*/ - ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ - } - else - ecif.rvalue = rvalue; - - - switch (cif->abi) - { - case FFI_SYSV: - /*@-usedef@*/ - ffi_call_SYSV(ffi_prep_args_raw, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#ifdef X86_WIN32 - case FFI_STDCALL: - /*@-usedef@*/ - ffi_call_STDCALL(ffi_prep_args_raw, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#endif /* X86_WIN32 */ - default: - FFI_ASSERT(0); - break; - } -} - -#endif - -#endif /* __x86_64__ */ - -#endif /* __i386__ */ From nnorwitz at gmail.com Tue Mar 4 16:07:14 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Mar 2008 10:07:14 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (6) Message-ID: <20080304150714.GA28301@python.psfb.org> More important issues: ---------------------- test_deque leaked [100, 100, 100] references, sum=300 test_heapq leaked [136, 125, 106] references, sum=367 test_itertools leaked [7380, 7380, 7380] references, sum=22140 test_list leaked [50, 50, 50] references, sum=150 test_set leaked [680, 680, 680] references, sum=2040 test_userlist leaked [50, 50, 50] references, sum=150 Less important issues: ---------------------- test_socketserver leaked [-131, 0, 0] references, sum=-131 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From python-checkins at python.org Tue Mar 4 15:51:09 2008 From: python-checkins at python.org (thomas.heller) Date: Tue, 4 Mar 2008 15:51:09 +0100 (CET) Subject: [Python-checkins] r61232 - in python/branches/libffi3-branch: Demo/tkinter/guido/ShellWindow.py Doc/Makefile Doc/README.txt Doc/c-api/arg.rst Doc/c-api/long.rst Doc/c-api/objbuffer.rst Doc/c-api/typeobj.rst Doc/conf.py Doc/copyright.rst Doc/distutils/builtdist.rst Doc/distutils/packageindex.rst Doc/distutils/setupscript.rst Doc/howto/advocacy.rst Doc/howto/doanddont.rst Doc/howto/functional.rst Doc/howto/regex.rst Doc/howto/sockets.rst Doc/library/basehttpserver.rst Doc/library/codecs.rst Doc/library/collections.rst Doc/library/configparser.rst Doc/library/decimal.rst Doc/library/difflib.rst Doc/library/dis.rst Doc/library/inspect.rst Doc/library/itertools.rst Doc/library/logging.rst Doc/library/mailbox.rst Doc/library/modulefinder.rst Doc/library/msilib.rst Doc/library/operator.rst Doc/library/optparse.rst Doc/library/pickle.rst Doc/library/platform.rst Doc/library/profile.rst Doc/library/random.rst Doc/library/re.rst Doc/library/signal.rst Doc/library/simplexmlrpcserver.rst Doc/library/socket.rst Doc/library/tokenize.rst Doc/library/trace.rst Doc/library/userdict.rst Doc/library/weakref.rst Doc/library/xml.dom.rst Doc/library/xml.etree.elementtree.rst 
Doc/library/xmlrpclib.rst Doc/license.rst Doc/make.bat Doc/reference/compound_stmts.rst Doc/reference/expressions.rst Doc/reference/index.rst Doc/tools/sphinxext/download.html Doc/tools/sphinxext/patchlevel.py Doc/tutorial/interpreter.rst Doc/tutorial/stdlib2.rst Doc/using/cmdline.rst Doc/using/unix.rst Doc/using/windows.rst Doc/whatsnew/2.6.rst Grammar/Grammar Include/Python-ast.h Include/graminit.h Include/longintrepr.h Include/object.h Include/patchlevel.h LICENSE Lib/BaseHTTPServer.py Lib/ConfigParser.py Lib/SimpleHTTPServer.py Lib/SocketServer.py Lib/UserDict.py Lib/_abcoll.py Lib/abc.py Lib/bsddb/test/test_associate.py Lib/bsddb/test/test_basics.py Lib/bsddb/test/test_compare.py Lib/bsddb/test/test_compat.py Lib/bsddb/test/test_cursor_pget_bug.py Lib/bsddb/test/test_dbobj.py Lib/bsddb/test/test_dbshelve.py Lib/bsddb/test/test_dbtables.py Lib/bsddb/test/test_env_close.py Lib/bsddb/test/test_get_none.py Lib/bsddb/test/test_join.py Lib/bsddb/test/test_lock.py Lib/bsddb/test/test_misc.py Lib/bsddb/test/test_pickle.py Lib/bsddb/test/test_queue.py Lib/bsddb/test/test_recno.py Lib/bsddb/test/test_sequence.py Lib/bsddb/test/test_thread.py Lib/compiler/ast.py Lib/compiler/transformer.py Lib/ctypes/test/__init__.py Lib/ctypes/test/test_checkretval.py Lib/ctypes/test/test_find.py Lib/ctypes/test/test_libc.py Lib/ctypes/test/test_loading.py Lib/ctypes/test/test_numbers.py Lib/curses/__init__.py Lib/curses/wrapper.py Lib/decimal.py Lib/distutils/bcppcompiler.py Lib/distutils/command/bdist.py Lib/distutils/command/bdist_dumb.py Lib/distutils/command/bdist_msi.py Lib/distutils/command/bdist_rpm.py Lib/distutils/command/build_py.py Lib/distutils/command/build_scripts.py Lib/distutils/command/install.py Lib/distutils/command/install_headers.py Lib/distutils/command/install_lib.py Lib/distutils/command/register.py Lib/distutils/command/sdist.py Lib/distutils/filelist.py Lib/distutils/tests/test_dist.py Lib/distutils/tests/test_sysconfig.py Lib/distutils/unixccompiler.py Lib/email/base64mime.py Lib/email/utils.py Lib/hotshot/log.py Lib/hotshot/stones.py Lib/httplib.py Lib/idlelib/MultiCall.py Lib/idlelib/NEWS.txt Lib/idlelib/RemoteDebugger.py Lib/idlelib/TreeWidget.py Lib/idlelib/UndoDelegator.py Lib/idlelib/configDialog.py Lib/idlelib/idlever.py Lib/idlelib/keybindingDialog.py Lib/idlelib/run.py Lib/imaplib.py Lib/inspect.py Lib/lib-tk/tkSimpleDialog.py Lib/logging/handlers.py Lib/ntpath.py Lib/plat-mac/MiniAEFrame.py Lib/plat-mac/aepack.py Lib/plat-mac/bgenlocations.py Lib/plat-mac/macostools.py Lib/plat-riscos/rourl2path.py Lib/popen2.py Lib/runpy.py Lib/smtplib.py Lib/socket.py Lib/sqlite3/test/dbapi.py Lib/sqlite3/test/hooks.py Lib/sqlite3/test/py25tests.py Lib/sqlite3/test/regression.py Lib/sqlite3/test/transactions.py Lib/sqlite3/test/types.py Lib/ssl.py Lib/symbol.py Lib/test/fork_wait.py Lib/test/list_tests.py Lib/test/regrtest.py Lib/test/seq_tests.py Lib/test/string_tests.py Lib/test/test_MimeWriter.py Lib/test/test___all__.py Lib/test/test_abc.py Lib/test/test_al.py Lib/test/test_applesingle.py Lib/test/test_array.py Lib/test/test_ast.py Lib/test/test_audioop.py Lib/test/test_bisect.py Lib/test/test_bsddb185.py Lib/test/test_bsddb3.py Lib/test/test_builtin.py Lib/test/test_cd.py Lib/test/test_cfgparser.py Lib/test/test_cl.py Lib/test/test_class.py Lib/test/test_cmd.py Lib/test/test_coercion.py Lib/test/test_compare.py Lib/test/test_compiler.py Lib/test/test_copy.py Lib/test/test_cpickle.py Lib/test/test_curses.py Lib/test/test_datetime.py Lib/test/test_dbm.py Lib/test/test_decimal.py 
Lib/test/test_decorators.py Lib/test/test_deque.py Lib/test/test_descrtut.py Lib/test/test_dict.py Lib/test/test_dis.py Lib/test/test_doctest.py Lib/test/test_dummy_threading.py Lib/test/test_email.py Lib/test/test_email_renamed.py Lib/test/test_eof.py Lib/test/test_extcall.py Lib/test/test_file.py Lib/test/test_fileinput.py Lib/test/test_format.py Lib/test/test_fractions.py Lib/test/test_ftplib.py Lib/test/test_future_builtins.py Lib/test/test_getargs2.py Lib/test/test_gl.py Lib/test/test_grammar.py Lib/test/test_gzip.py Lib/test/test_heapq.py Lib/test/test_htmlparser.py Lib/test/test_httplib.py Lib/test/test_imageop.py Lib/test/test_imgfile.py Lib/test/test_imp.py Lib/test/test_index.py Lib/test/test_inspect.py Lib/test/test_itertools.py Lib/test/test_largefile.py Lib/test/test_linuxaudiodev.py Lib/test/test_list.py Lib/test/test_logging.py Lib/test/test_minidom.py Lib/test/test_module.py Lib/test/test_modulefinder.py Lib/test/test_multibytecodec_support.py Lib/test/test_mutex.py Lib/test/test_operator.py Lib/test/test_optparse.py Lib/test/test_ossaudiodev.py Lib/test/test_parser.py Lib/test/test_pep247.py Lib/test/test_pickle.py Lib/test/test_pkg.py Lib/test/test_plistlib.py Lib/test/test_poll.py Lib/test/test_posix.py Lib/test/test_pyclbr.py Lib/test/test_quopri.py Lib/test/test_resource.py Lib/test/test_rfc822.py Lib/test/test_scriptpackages.py Lib/test/test_sgmllib.py Lib/test/test_shlex.py Lib/test/test_signal.py Lib/test/test_site.py Lib/test/test_smtplib.py Lib/test/test_socketserver.py Lib/test/test_sqlite.py Lib/test/test_str.py Lib/test/test_strftime.py Lib/test/test_sunaudiodev.py Lib/test/test_support.py Lib/test/test_threading.py Lib/test/test_tuple.py Lib/test/test_unicode.py Lib/test/test_unpack.py Lib/test/test_urllib.py Lib/test/test_urllib2.py Lib/test/test_urllib2_localnet.py Lib/test/test_userdict.py Lib/test/test_userlist.py Lib/test/test_userstring.py Lib/test/test_uu.py Lib/test/test_whichdb.py Lib/test/test_xml_etree.py Lib/test/test_xml_etree_c.py Lib/test/test_xmlrpc.py Lib/test/test_xpickle.py Lib/test/test_zipfile64.py Lib/threading.py Lib/token.py Lib/trace.py Lib/xml/dom/minidom.py Lib/xmlrpclib.py Mac/Demo/PICTbrowse/ICONbrowse.py Mac/Demo/PICTbrowse/PICTbrowse.py Mac/Demo/PICTbrowse/PICTbrowse2.py Mac/Demo/PICTbrowse/cicnbrowse.py Mac/Demo/PICTbrowse/oldPICTbrowse.py Mac/Demo/example1/dnslookup-1.py Mac/Demo/example2/dnslookup-2.py Mac/Demo/imgbrowse/imgbrowse.py Mac/Demo/imgbrowse/mac_image.py Mac/Demo/sound/morse.py Mac/Modules/ae/aescan.py Mac/Modules/ah/ahscan.py Mac/Modules/app/appscan.py Mac/Modules/carbonevt/CarbonEvtscan.py Mac/Modules/cf/cfscan.py Mac/Modules/cg/cgscan.py Mac/Modules/cm/cmscan.py Mac/Modules/ctl/ctlscan.py Mac/Modules/dlg/dlgscan.py Mac/Modules/drag/dragscan.py Mac/Modules/evt/evtscan.py Mac/Modules/file/filescan.py Mac/Modules/fm/fmscan.py Mac/Modules/folder/folderscan.py Mac/Modules/help/helpscan.py Mac/Modules/ibcarbon/IBCarbonscan.py Mac/Modules/icn/icnscan.py Mac/Modules/launch/launchscan.py Mac/Modules/list/listscan.py Mac/Modules/menu/menuscan.py Mac/Modules/mlte/mltescan.py Mac/Modules/osa/osascan.py Mac/Modules/qd/qdscan.py Mac/Modules/qdoffs/qdoffsscan.py Mac/Modules/qt/qtscan.py Mac/Modules/res/resscan.py Mac/Modules/scrap/scrapscan.py Mac/Modules/snd/sndscan.py Mac/Modules/te/tescan.py Mac/Modules/win/winscan.py Makefile.pre.in Misc/ACKS Misc/BeOS-setup.py Misc/HISTORY Misc/NEWS Misc/cheatsheet Modules/_ctypes/_ctypes_test.c Modules/_sqlite/connection.c Modules/_sqlite/connection.h Modules/_sqlite/cursor.c 
Modules/_sqlite/cursor.h Modules/_sqlite/microprotocols.h Modules/_sqlite/module.c Modules/_sqlite/module.h Modules/_sqlite/statement.c Modules/_sqlite/util.c Modules/_sqlite/util.h Modules/_testcapimodule.c Modules/dbmmodule.c Modules/future_builtins.c Modules/gdbmmodule.c Modules/itertoolsmodule.c Modules/operator.c Modules/parsermodule.c Modules/signalmodule.c Modules/syslogmodule.c Objects/dictobject.c Objects/fileobject.c Objects/listobject.c Objects/stringlib/string_format.h Objects/stringobject.c Objects/typeobject.c Objects/unicodeobject.c PC/VS8.0/build_tkinter.py PC/config.c PC/python_nt.rc PCbuild/_hashlib.vcproj PCbuild/_ssl.vcproj PCbuild/build_ssl.py PCbuild/build_tkinter.py PCbuild/pcbuild.sln PCbuild/pythoncore.vcproj PCbuild/readme.txt PCbuild/x64.vsprops Parser/Python.asdl Parser/asdl_c.py Parser/parser.h Parser/spark.py Python/Python-ast.c Python/ast.c Python/bltinmodule.c Python/ceval.c Python/compile.c Python/getargs.c Python/getcopyright.c Python/graminit.c Python/import.c Python/mystrtoul.c Python/peephole.c Python/symtable.c README Tools/buildbot/external.bat Tools/compiler/ast.txt Tools/compiler/astgen.py Tools/compiler/dumppyc.py Tools/faqwiz/faqw.py Tools/modulator/Tkextra.py Tools/msi/msi.py Tools/msi/uuids.py Tools/pybench/systimes.py Tools/pynche/ChipViewer.py Tools/pynche/TypeinViewer.py Tools/scripts/logmerge.py Tools/scripts/nm2def.py Tools/scripts/pindent.py Tools/scripts/pysource.py Tools/scripts/xxci.py Tools/ssl/get-remote-certificate.py Tools/unicode/gencodec.py Tools/webchecker/wcgui.py Tools/webchecker/wsgui.py setup.py Message-ID: <20080304145109.3A9D21E4026@bag.python.org> Author: thomas.heller Date: Tue Mar 4 15:50:53 2008 New Revision: 61232 Added: python/branches/libffi3-branch/Lib/sqlite3/test/py25tests.py - copied unchanged from r61226, python/trunk/Lib/sqlite3/test/py25tests.py python/branches/libffi3-branch/Lib/test/test_future_builtins.py - copied unchanged from r61226, python/trunk/Lib/test/test_future_builtins.py python/branches/libffi3-branch/Lib/test/test_mutex.py - copied unchanged from r61226, python/trunk/Lib/test/test_mutex.py python/branches/libffi3-branch/Modules/future_builtins.c - copied unchanged from r61226, python/trunk/Modules/future_builtins.c python/branches/libffi3-branch/PCbuild/_hashlib.vcproj - copied unchanged from r61226, python/trunk/PCbuild/_hashlib.vcproj Modified: python/branches/libffi3-branch/ (props changed) python/branches/libffi3-branch/Demo/tkinter/guido/ShellWindow.py python/branches/libffi3-branch/Doc/Makefile python/branches/libffi3-branch/Doc/README.txt python/branches/libffi3-branch/Doc/c-api/arg.rst python/branches/libffi3-branch/Doc/c-api/long.rst python/branches/libffi3-branch/Doc/c-api/objbuffer.rst python/branches/libffi3-branch/Doc/c-api/typeobj.rst python/branches/libffi3-branch/Doc/conf.py python/branches/libffi3-branch/Doc/copyright.rst python/branches/libffi3-branch/Doc/distutils/builtdist.rst python/branches/libffi3-branch/Doc/distutils/packageindex.rst python/branches/libffi3-branch/Doc/distutils/setupscript.rst python/branches/libffi3-branch/Doc/howto/advocacy.rst python/branches/libffi3-branch/Doc/howto/doanddont.rst python/branches/libffi3-branch/Doc/howto/functional.rst python/branches/libffi3-branch/Doc/howto/regex.rst python/branches/libffi3-branch/Doc/howto/sockets.rst python/branches/libffi3-branch/Doc/library/basehttpserver.rst python/branches/libffi3-branch/Doc/library/codecs.rst python/branches/libffi3-branch/Doc/library/collections.rst 
python/branches/libffi3-branch/Doc/library/configparser.rst python/branches/libffi3-branch/Doc/library/decimal.rst python/branches/libffi3-branch/Doc/library/difflib.rst python/branches/libffi3-branch/Doc/library/dis.rst python/branches/libffi3-branch/Doc/library/inspect.rst python/branches/libffi3-branch/Doc/library/itertools.rst python/branches/libffi3-branch/Doc/library/logging.rst python/branches/libffi3-branch/Doc/library/mailbox.rst python/branches/libffi3-branch/Doc/library/modulefinder.rst python/branches/libffi3-branch/Doc/library/msilib.rst python/branches/libffi3-branch/Doc/library/operator.rst python/branches/libffi3-branch/Doc/library/optparse.rst python/branches/libffi3-branch/Doc/library/pickle.rst python/branches/libffi3-branch/Doc/library/platform.rst python/branches/libffi3-branch/Doc/library/profile.rst python/branches/libffi3-branch/Doc/library/random.rst python/branches/libffi3-branch/Doc/library/re.rst python/branches/libffi3-branch/Doc/library/signal.rst python/branches/libffi3-branch/Doc/library/simplexmlrpcserver.rst python/branches/libffi3-branch/Doc/library/socket.rst python/branches/libffi3-branch/Doc/library/tokenize.rst python/branches/libffi3-branch/Doc/library/trace.rst python/branches/libffi3-branch/Doc/library/userdict.rst python/branches/libffi3-branch/Doc/library/weakref.rst python/branches/libffi3-branch/Doc/library/xml.dom.rst python/branches/libffi3-branch/Doc/library/xml.etree.elementtree.rst python/branches/libffi3-branch/Doc/library/xmlrpclib.rst python/branches/libffi3-branch/Doc/license.rst python/branches/libffi3-branch/Doc/make.bat python/branches/libffi3-branch/Doc/reference/compound_stmts.rst python/branches/libffi3-branch/Doc/reference/expressions.rst python/branches/libffi3-branch/Doc/reference/index.rst python/branches/libffi3-branch/Doc/tools/sphinxext/download.html python/branches/libffi3-branch/Doc/tools/sphinxext/patchlevel.py python/branches/libffi3-branch/Doc/tutorial/interpreter.rst python/branches/libffi3-branch/Doc/tutorial/stdlib2.rst python/branches/libffi3-branch/Doc/using/cmdline.rst python/branches/libffi3-branch/Doc/using/unix.rst python/branches/libffi3-branch/Doc/using/windows.rst python/branches/libffi3-branch/Doc/whatsnew/2.6.rst python/branches/libffi3-branch/Grammar/Grammar python/branches/libffi3-branch/Include/Python-ast.h python/branches/libffi3-branch/Include/graminit.h python/branches/libffi3-branch/Include/longintrepr.h python/branches/libffi3-branch/Include/object.h python/branches/libffi3-branch/Include/patchlevel.h python/branches/libffi3-branch/LICENSE python/branches/libffi3-branch/Lib/BaseHTTPServer.py python/branches/libffi3-branch/Lib/ConfigParser.py python/branches/libffi3-branch/Lib/SimpleHTTPServer.py python/branches/libffi3-branch/Lib/SocketServer.py python/branches/libffi3-branch/Lib/UserDict.py python/branches/libffi3-branch/Lib/_abcoll.py python/branches/libffi3-branch/Lib/abc.py python/branches/libffi3-branch/Lib/bsddb/test/test_associate.py python/branches/libffi3-branch/Lib/bsddb/test/test_basics.py python/branches/libffi3-branch/Lib/bsddb/test/test_compare.py python/branches/libffi3-branch/Lib/bsddb/test/test_compat.py python/branches/libffi3-branch/Lib/bsddb/test/test_cursor_pget_bug.py python/branches/libffi3-branch/Lib/bsddb/test/test_dbobj.py python/branches/libffi3-branch/Lib/bsddb/test/test_dbshelve.py python/branches/libffi3-branch/Lib/bsddb/test/test_dbtables.py python/branches/libffi3-branch/Lib/bsddb/test/test_env_close.py python/branches/libffi3-branch/Lib/bsddb/test/test_get_none.py 
python/branches/libffi3-branch/Lib/bsddb/test/test_join.py python/branches/libffi3-branch/Lib/bsddb/test/test_lock.py python/branches/libffi3-branch/Lib/bsddb/test/test_misc.py python/branches/libffi3-branch/Lib/bsddb/test/test_pickle.py python/branches/libffi3-branch/Lib/bsddb/test/test_queue.py python/branches/libffi3-branch/Lib/bsddb/test/test_recno.py python/branches/libffi3-branch/Lib/bsddb/test/test_sequence.py python/branches/libffi3-branch/Lib/bsddb/test/test_thread.py python/branches/libffi3-branch/Lib/compiler/ast.py python/branches/libffi3-branch/Lib/compiler/transformer.py python/branches/libffi3-branch/Lib/ctypes/test/__init__.py python/branches/libffi3-branch/Lib/ctypes/test/test_checkretval.py python/branches/libffi3-branch/Lib/ctypes/test/test_find.py python/branches/libffi3-branch/Lib/ctypes/test/test_libc.py python/branches/libffi3-branch/Lib/ctypes/test/test_loading.py python/branches/libffi3-branch/Lib/ctypes/test/test_numbers.py python/branches/libffi3-branch/Lib/curses/__init__.py python/branches/libffi3-branch/Lib/curses/wrapper.py python/branches/libffi3-branch/Lib/decimal.py python/branches/libffi3-branch/Lib/distutils/bcppcompiler.py python/branches/libffi3-branch/Lib/distutils/command/bdist.py python/branches/libffi3-branch/Lib/distutils/command/bdist_dumb.py python/branches/libffi3-branch/Lib/distutils/command/bdist_msi.py python/branches/libffi3-branch/Lib/distutils/command/bdist_rpm.py python/branches/libffi3-branch/Lib/distutils/command/build_py.py python/branches/libffi3-branch/Lib/distutils/command/build_scripts.py python/branches/libffi3-branch/Lib/distutils/command/install.py python/branches/libffi3-branch/Lib/distutils/command/install_headers.py python/branches/libffi3-branch/Lib/distutils/command/install_lib.py python/branches/libffi3-branch/Lib/distutils/command/register.py python/branches/libffi3-branch/Lib/distutils/command/sdist.py python/branches/libffi3-branch/Lib/distutils/filelist.py python/branches/libffi3-branch/Lib/distutils/tests/test_dist.py python/branches/libffi3-branch/Lib/distutils/tests/test_sysconfig.py python/branches/libffi3-branch/Lib/distutils/unixccompiler.py python/branches/libffi3-branch/Lib/email/base64mime.py python/branches/libffi3-branch/Lib/email/utils.py python/branches/libffi3-branch/Lib/hotshot/log.py python/branches/libffi3-branch/Lib/hotshot/stones.py python/branches/libffi3-branch/Lib/httplib.py python/branches/libffi3-branch/Lib/idlelib/MultiCall.py python/branches/libffi3-branch/Lib/idlelib/NEWS.txt python/branches/libffi3-branch/Lib/idlelib/RemoteDebugger.py python/branches/libffi3-branch/Lib/idlelib/TreeWidget.py python/branches/libffi3-branch/Lib/idlelib/UndoDelegator.py python/branches/libffi3-branch/Lib/idlelib/configDialog.py python/branches/libffi3-branch/Lib/idlelib/idlever.py python/branches/libffi3-branch/Lib/idlelib/keybindingDialog.py python/branches/libffi3-branch/Lib/idlelib/run.py python/branches/libffi3-branch/Lib/imaplib.py python/branches/libffi3-branch/Lib/inspect.py python/branches/libffi3-branch/Lib/lib-tk/tkSimpleDialog.py python/branches/libffi3-branch/Lib/logging/handlers.py python/branches/libffi3-branch/Lib/ntpath.py python/branches/libffi3-branch/Lib/plat-mac/MiniAEFrame.py python/branches/libffi3-branch/Lib/plat-mac/aepack.py python/branches/libffi3-branch/Lib/plat-mac/bgenlocations.py python/branches/libffi3-branch/Lib/plat-mac/macostools.py python/branches/libffi3-branch/Lib/plat-riscos/rourl2path.py python/branches/libffi3-branch/Lib/popen2.py 
python/branches/libffi3-branch/Lib/runpy.py python/branches/libffi3-branch/Lib/smtplib.py python/branches/libffi3-branch/Lib/socket.py python/branches/libffi3-branch/Lib/sqlite3/test/dbapi.py python/branches/libffi3-branch/Lib/sqlite3/test/hooks.py python/branches/libffi3-branch/Lib/sqlite3/test/regression.py python/branches/libffi3-branch/Lib/sqlite3/test/transactions.py python/branches/libffi3-branch/Lib/sqlite3/test/types.py python/branches/libffi3-branch/Lib/ssl.py python/branches/libffi3-branch/Lib/symbol.py python/branches/libffi3-branch/Lib/test/fork_wait.py python/branches/libffi3-branch/Lib/test/list_tests.py python/branches/libffi3-branch/Lib/test/regrtest.py python/branches/libffi3-branch/Lib/test/seq_tests.py python/branches/libffi3-branch/Lib/test/string_tests.py python/branches/libffi3-branch/Lib/test/test_MimeWriter.py python/branches/libffi3-branch/Lib/test/test___all__.py python/branches/libffi3-branch/Lib/test/test_abc.py python/branches/libffi3-branch/Lib/test/test_al.py python/branches/libffi3-branch/Lib/test/test_applesingle.py python/branches/libffi3-branch/Lib/test/test_array.py python/branches/libffi3-branch/Lib/test/test_ast.py python/branches/libffi3-branch/Lib/test/test_audioop.py python/branches/libffi3-branch/Lib/test/test_bisect.py python/branches/libffi3-branch/Lib/test/test_bsddb185.py python/branches/libffi3-branch/Lib/test/test_bsddb3.py python/branches/libffi3-branch/Lib/test/test_builtin.py python/branches/libffi3-branch/Lib/test/test_cd.py python/branches/libffi3-branch/Lib/test/test_cfgparser.py python/branches/libffi3-branch/Lib/test/test_cl.py python/branches/libffi3-branch/Lib/test/test_class.py python/branches/libffi3-branch/Lib/test/test_cmd.py python/branches/libffi3-branch/Lib/test/test_coercion.py python/branches/libffi3-branch/Lib/test/test_compare.py python/branches/libffi3-branch/Lib/test/test_compiler.py python/branches/libffi3-branch/Lib/test/test_copy.py python/branches/libffi3-branch/Lib/test/test_cpickle.py python/branches/libffi3-branch/Lib/test/test_curses.py python/branches/libffi3-branch/Lib/test/test_datetime.py python/branches/libffi3-branch/Lib/test/test_dbm.py python/branches/libffi3-branch/Lib/test/test_decimal.py python/branches/libffi3-branch/Lib/test/test_decorators.py python/branches/libffi3-branch/Lib/test/test_deque.py python/branches/libffi3-branch/Lib/test/test_descrtut.py python/branches/libffi3-branch/Lib/test/test_dict.py python/branches/libffi3-branch/Lib/test/test_dis.py python/branches/libffi3-branch/Lib/test/test_doctest.py python/branches/libffi3-branch/Lib/test/test_dummy_threading.py python/branches/libffi3-branch/Lib/test/test_email.py python/branches/libffi3-branch/Lib/test/test_email_renamed.py python/branches/libffi3-branch/Lib/test/test_eof.py python/branches/libffi3-branch/Lib/test/test_extcall.py python/branches/libffi3-branch/Lib/test/test_file.py python/branches/libffi3-branch/Lib/test/test_fileinput.py python/branches/libffi3-branch/Lib/test/test_format.py python/branches/libffi3-branch/Lib/test/test_fractions.py python/branches/libffi3-branch/Lib/test/test_ftplib.py python/branches/libffi3-branch/Lib/test/test_getargs2.py python/branches/libffi3-branch/Lib/test/test_gl.py python/branches/libffi3-branch/Lib/test/test_grammar.py python/branches/libffi3-branch/Lib/test/test_gzip.py python/branches/libffi3-branch/Lib/test/test_heapq.py python/branches/libffi3-branch/Lib/test/test_htmlparser.py python/branches/libffi3-branch/Lib/test/test_httplib.py python/branches/libffi3-branch/Lib/test/test_imageop.py 
python/branches/libffi3-branch/Lib/test/test_imgfile.py python/branches/libffi3-branch/Lib/test/test_imp.py python/branches/libffi3-branch/Lib/test/test_index.py python/branches/libffi3-branch/Lib/test/test_inspect.py python/branches/libffi3-branch/Lib/test/test_itertools.py python/branches/libffi3-branch/Lib/test/test_largefile.py python/branches/libffi3-branch/Lib/test/test_linuxaudiodev.py python/branches/libffi3-branch/Lib/test/test_list.py python/branches/libffi3-branch/Lib/test/test_logging.py python/branches/libffi3-branch/Lib/test/test_minidom.py python/branches/libffi3-branch/Lib/test/test_module.py python/branches/libffi3-branch/Lib/test/test_modulefinder.py python/branches/libffi3-branch/Lib/test/test_multibytecodec_support.py python/branches/libffi3-branch/Lib/test/test_operator.py python/branches/libffi3-branch/Lib/test/test_optparse.py python/branches/libffi3-branch/Lib/test/test_ossaudiodev.py python/branches/libffi3-branch/Lib/test/test_parser.py python/branches/libffi3-branch/Lib/test/test_pep247.py python/branches/libffi3-branch/Lib/test/test_pickle.py python/branches/libffi3-branch/Lib/test/test_pkg.py python/branches/libffi3-branch/Lib/test/test_plistlib.py python/branches/libffi3-branch/Lib/test/test_poll.py python/branches/libffi3-branch/Lib/test/test_posix.py python/branches/libffi3-branch/Lib/test/test_pyclbr.py python/branches/libffi3-branch/Lib/test/test_quopri.py python/branches/libffi3-branch/Lib/test/test_resource.py python/branches/libffi3-branch/Lib/test/test_rfc822.py python/branches/libffi3-branch/Lib/test/test_scriptpackages.py python/branches/libffi3-branch/Lib/test/test_sgmllib.py python/branches/libffi3-branch/Lib/test/test_shlex.py python/branches/libffi3-branch/Lib/test/test_signal.py python/branches/libffi3-branch/Lib/test/test_site.py python/branches/libffi3-branch/Lib/test/test_smtplib.py python/branches/libffi3-branch/Lib/test/test_socketserver.py python/branches/libffi3-branch/Lib/test/test_sqlite.py python/branches/libffi3-branch/Lib/test/test_str.py python/branches/libffi3-branch/Lib/test/test_strftime.py python/branches/libffi3-branch/Lib/test/test_sunaudiodev.py python/branches/libffi3-branch/Lib/test/test_support.py python/branches/libffi3-branch/Lib/test/test_threading.py python/branches/libffi3-branch/Lib/test/test_tuple.py python/branches/libffi3-branch/Lib/test/test_unicode.py python/branches/libffi3-branch/Lib/test/test_unpack.py python/branches/libffi3-branch/Lib/test/test_urllib.py python/branches/libffi3-branch/Lib/test/test_urllib2.py python/branches/libffi3-branch/Lib/test/test_urllib2_localnet.py python/branches/libffi3-branch/Lib/test/test_userdict.py python/branches/libffi3-branch/Lib/test/test_userlist.py python/branches/libffi3-branch/Lib/test/test_userstring.py python/branches/libffi3-branch/Lib/test/test_uu.py python/branches/libffi3-branch/Lib/test/test_whichdb.py python/branches/libffi3-branch/Lib/test/test_xml_etree.py python/branches/libffi3-branch/Lib/test/test_xml_etree_c.py python/branches/libffi3-branch/Lib/test/test_xmlrpc.py python/branches/libffi3-branch/Lib/test/test_xpickle.py python/branches/libffi3-branch/Lib/test/test_zipfile64.py python/branches/libffi3-branch/Lib/threading.py python/branches/libffi3-branch/Lib/token.py python/branches/libffi3-branch/Lib/trace.py python/branches/libffi3-branch/Lib/xml/dom/minidom.py python/branches/libffi3-branch/Lib/xmlrpclib.py python/branches/libffi3-branch/Mac/Demo/PICTbrowse/ICONbrowse.py python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse.py 
python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse2.py python/branches/libffi3-branch/Mac/Demo/PICTbrowse/cicnbrowse.py python/branches/libffi3-branch/Mac/Demo/PICTbrowse/oldPICTbrowse.py python/branches/libffi3-branch/Mac/Demo/example1/dnslookup-1.py python/branches/libffi3-branch/Mac/Demo/example2/dnslookup-2.py python/branches/libffi3-branch/Mac/Demo/imgbrowse/imgbrowse.py python/branches/libffi3-branch/Mac/Demo/imgbrowse/mac_image.py python/branches/libffi3-branch/Mac/Demo/sound/morse.py python/branches/libffi3-branch/Mac/Modules/ae/aescan.py python/branches/libffi3-branch/Mac/Modules/ah/ahscan.py python/branches/libffi3-branch/Mac/Modules/app/appscan.py python/branches/libffi3-branch/Mac/Modules/carbonevt/CarbonEvtscan.py python/branches/libffi3-branch/Mac/Modules/cf/cfscan.py python/branches/libffi3-branch/Mac/Modules/cg/cgscan.py python/branches/libffi3-branch/Mac/Modules/cm/cmscan.py python/branches/libffi3-branch/Mac/Modules/ctl/ctlscan.py python/branches/libffi3-branch/Mac/Modules/dlg/dlgscan.py python/branches/libffi3-branch/Mac/Modules/drag/dragscan.py python/branches/libffi3-branch/Mac/Modules/evt/evtscan.py python/branches/libffi3-branch/Mac/Modules/file/filescan.py python/branches/libffi3-branch/Mac/Modules/fm/fmscan.py python/branches/libffi3-branch/Mac/Modules/folder/folderscan.py python/branches/libffi3-branch/Mac/Modules/help/helpscan.py python/branches/libffi3-branch/Mac/Modules/ibcarbon/IBCarbonscan.py python/branches/libffi3-branch/Mac/Modules/icn/icnscan.py python/branches/libffi3-branch/Mac/Modules/launch/launchscan.py python/branches/libffi3-branch/Mac/Modules/list/listscan.py python/branches/libffi3-branch/Mac/Modules/menu/menuscan.py python/branches/libffi3-branch/Mac/Modules/mlte/mltescan.py python/branches/libffi3-branch/Mac/Modules/osa/osascan.py python/branches/libffi3-branch/Mac/Modules/qd/qdscan.py python/branches/libffi3-branch/Mac/Modules/qdoffs/qdoffsscan.py python/branches/libffi3-branch/Mac/Modules/qt/qtscan.py python/branches/libffi3-branch/Mac/Modules/res/resscan.py python/branches/libffi3-branch/Mac/Modules/scrap/scrapscan.py python/branches/libffi3-branch/Mac/Modules/snd/sndscan.py python/branches/libffi3-branch/Mac/Modules/te/tescan.py python/branches/libffi3-branch/Mac/Modules/win/winscan.py python/branches/libffi3-branch/Makefile.pre.in python/branches/libffi3-branch/Misc/ACKS python/branches/libffi3-branch/Misc/BeOS-setup.py python/branches/libffi3-branch/Misc/HISTORY python/branches/libffi3-branch/Misc/NEWS python/branches/libffi3-branch/Misc/cheatsheet python/branches/libffi3-branch/Modules/_ctypes/_ctypes_test.c python/branches/libffi3-branch/Modules/_sqlite/connection.c python/branches/libffi3-branch/Modules/_sqlite/connection.h python/branches/libffi3-branch/Modules/_sqlite/cursor.c python/branches/libffi3-branch/Modules/_sqlite/cursor.h python/branches/libffi3-branch/Modules/_sqlite/microprotocols.h python/branches/libffi3-branch/Modules/_sqlite/module.c python/branches/libffi3-branch/Modules/_sqlite/module.h python/branches/libffi3-branch/Modules/_sqlite/statement.c python/branches/libffi3-branch/Modules/_sqlite/util.c python/branches/libffi3-branch/Modules/_sqlite/util.h python/branches/libffi3-branch/Modules/_testcapimodule.c python/branches/libffi3-branch/Modules/dbmmodule.c python/branches/libffi3-branch/Modules/gdbmmodule.c python/branches/libffi3-branch/Modules/itertoolsmodule.c python/branches/libffi3-branch/Modules/operator.c python/branches/libffi3-branch/Modules/parsermodule.c 
python/branches/libffi3-branch/Modules/signalmodule.c python/branches/libffi3-branch/Modules/syslogmodule.c python/branches/libffi3-branch/Objects/dictobject.c python/branches/libffi3-branch/Objects/fileobject.c python/branches/libffi3-branch/Objects/listobject.c python/branches/libffi3-branch/Objects/stringlib/string_format.h python/branches/libffi3-branch/Objects/stringobject.c python/branches/libffi3-branch/Objects/typeobject.c python/branches/libffi3-branch/Objects/unicodeobject.c python/branches/libffi3-branch/PC/VS8.0/build_tkinter.py python/branches/libffi3-branch/PC/config.c python/branches/libffi3-branch/PC/python_nt.rc python/branches/libffi3-branch/PCbuild/_ssl.vcproj python/branches/libffi3-branch/PCbuild/build_ssl.py python/branches/libffi3-branch/PCbuild/build_tkinter.py python/branches/libffi3-branch/PCbuild/pcbuild.sln python/branches/libffi3-branch/PCbuild/pythoncore.vcproj python/branches/libffi3-branch/PCbuild/readme.txt python/branches/libffi3-branch/PCbuild/x64.vsprops python/branches/libffi3-branch/Parser/Python.asdl python/branches/libffi3-branch/Parser/asdl_c.py python/branches/libffi3-branch/Parser/parser.h python/branches/libffi3-branch/Parser/spark.py python/branches/libffi3-branch/Python/Python-ast.c python/branches/libffi3-branch/Python/ast.c python/branches/libffi3-branch/Python/bltinmodule.c python/branches/libffi3-branch/Python/ceval.c python/branches/libffi3-branch/Python/compile.c python/branches/libffi3-branch/Python/getargs.c python/branches/libffi3-branch/Python/getcopyright.c python/branches/libffi3-branch/Python/graminit.c python/branches/libffi3-branch/Python/import.c python/branches/libffi3-branch/Python/mystrtoul.c python/branches/libffi3-branch/Python/peephole.c python/branches/libffi3-branch/Python/symtable.c python/branches/libffi3-branch/README python/branches/libffi3-branch/Tools/buildbot/external.bat python/branches/libffi3-branch/Tools/compiler/ast.txt python/branches/libffi3-branch/Tools/compiler/astgen.py python/branches/libffi3-branch/Tools/compiler/dumppyc.py python/branches/libffi3-branch/Tools/faqwiz/faqw.py python/branches/libffi3-branch/Tools/modulator/Tkextra.py python/branches/libffi3-branch/Tools/msi/msi.py python/branches/libffi3-branch/Tools/msi/uuids.py python/branches/libffi3-branch/Tools/pybench/systimes.py python/branches/libffi3-branch/Tools/pynche/ChipViewer.py python/branches/libffi3-branch/Tools/pynche/TypeinViewer.py python/branches/libffi3-branch/Tools/scripts/logmerge.py python/branches/libffi3-branch/Tools/scripts/nm2def.py python/branches/libffi3-branch/Tools/scripts/pindent.py python/branches/libffi3-branch/Tools/scripts/pysource.py python/branches/libffi3-branch/Tools/scripts/xxci.py python/branches/libffi3-branch/Tools/ssl/get-remote-certificate.py python/branches/libffi3-branch/Tools/unicode/gencodec.py python/branches/libffi3-branch/Tools/webchecker/wcgui.py python/branches/libffi3-branch/Tools/webchecker/wsgui.py python/branches/libffi3-branch/setup.py Log: Merged revisions 60926-61231 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r60927 | raymond.hettinger | 2008-02-21 20:24:53 +0100 (Thu, 21 Feb 2008) | 1 line Update more instances of has_key(). ........ r60928 | guido.van.rossum | 2008-02-21 20:46:35 +0100 (Thu, 21 Feb 2008) | 3 lines Fix a few typos and layout glitches (more work is needed). Move 2.5 news to Misc/HISTORY. ........ r60932 | eric.smith | 2008-02-21 21:17:08 +0100 (Thu, 21 Feb 2008) | 1 line Moved test_format into the correct TestCase. ........ 
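The has_key() cleanup in r60927 above amounts to a single idiom swap: dict.has_key() is gone in Python 3.0, so membership tests move to the "in" operator. A minimal sketch of the idiom, with an invented dictionary and key purely for illustration:

    settings = {"verbose": True}

    # Old 2.x spelling removed by cleanups like r60927:
    #     if settings.has_key("verbose"):
    # 3.0-compatible spelling that works in 2.x as well:
    if "verbose" in settings:
        print("verbose mode on")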
r60936 | georg.brandl | 2008-02-21 21:33:38 +0100 (Thu, 21 Feb 2008) | 2 lines #2079: typo in userdict docs. ........ r60938 | georg.brandl | 2008-02-21 21:38:13 +0100 (Thu, 21 Feb 2008) | 2 lines Part of #2154: minimal syntax fixes in doc example snippets. ........ r60942 | raymond.hettinger | 2008-02-22 04:16:42 +0100 (Fri, 22 Feb 2008) | 1 line First draft for itertools.product(). Docs and other updates forthcoming. ........ r60955 | nick.coghlan | 2008-02-22 11:54:06 +0100 (Fri, 22 Feb 2008) | 1 line Try to make command line error messages from runpy easier to understand (and suppress traceback cruft from the implicitly invoked runpy machinery) ........ r60956 | georg.brandl | 2008-02-22 13:31:45 +0100 (Fri, 22 Feb 2008) | 2 lines A lot more typo fixes by Ori Avtalion. ........ r60957 | georg.brandl | 2008-02-22 13:56:34 +0100 (Fri, 22 Feb 2008) | 2 lines Don't reference pyshell. ........ r60958 | georg.brandl | 2008-02-22 13:57:05 +0100 (Fri, 22 Feb 2008) | 2 lines Another fix. ........ r60962 | eric.smith | 2008-02-22 17:30:22 +0100 (Fri, 22 Feb 2008) | 1 line Added bin() builtin. I'm going to check in the tests in a seperate checkin, because the builtin doesn't need to be ported to py3k, but the tests are missing in py3k and need to be merged there. ........ r60965 | eric.smith | 2008-02-22 18:43:17 +0100 (Fri, 22 Feb 2008) | 1 line Tests for bin() builtin. These need to get merged into py3k, which has no tests for bin. ........ r60968 | raymond.hettinger | 2008-02-22 20:50:06 +0100 (Fri, 22 Feb 2008) | 1 line Document itertools.product(). ........ r60969 | raymond.hettinger | 2008-02-23 03:20:41 +0100 (Sat, 23 Feb 2008) | 9 lines Improve the implementation of itertools.product() * Fix-up issues pointed-out by Neal Norwitz. * Add extensive comments. * The lz->result variable is now a tuple instead of a list. * Use fast macro getitem/setitem calls so most code is in-line. * Re-use the result tuple if available (modify in-place instead of copy). ........ r60970 | eric.smith | 2008-02-23 04:09:44 +0100 (Sat, 23 Feb 2008) | 1 line Added future_builtins, which contains PEP 3127 compatible versions of hex() and oct(). ........ r60972 | raymond.hettinger | 2008-02-23 05:03:50 +0100 (Sat, 23 Feb 2008) | 1 line Add more comments ........ r60973 | raymond.hettinger | 2008-02-23 11:04:15 +0100 (Sat, 23 Feb 2008) | 1 line Add recipe using itertools.product(). ........ r60974 | facundo.batista | 2008-02-23 13:01:13 +0100 (Sat, 23 Feb 2008) | 6 lines Issue 1881. Increased the stack limit from 500 to 1500. Also added a test for this (and because of this test you'll see in stderr a message that parser.c sends before raising MemoryError). Thanks Ralf Schmitt. ........ r60975 | facundo.batista | 2008-02-23 13:27:17 +0100 (Sat, 23 Feb 2008) | 4 lines Issue 1776581. Minor corrections to smtplib, and two small tests. Thanks Alan McIntyre. ........ r60976 | facundo.batista | 2008-02-23 13:46:10 +0100 (Sat, 23 Feb 2008) | 5 lines Issue 1781. Now ConfigParser.add_section does not let you add a DEFAULT section any more, because it duplicated sections with the rest of the machinery. Thanks Tim Lesher and Manuel Kaufmann. ........ r60978 | christian.heimes | 2008-02-23 16:01:05 +0100 (Sat, 23 Feb 2008) | 2 lines Patch #1759: Backport of PEP 3129 class decorators with some help from Georg ........ r60980 | georg.brandl | 2008-02-23 16:02:28 +0100 (Sat, 23 Feb 2008) | 2 lines #1492: allow overriding BaseHTTPServer's content type for error messages. ........ 
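The PEP 3129 backport noted in r60978 above allows decorators on class statements, mirroring function decorators. A minimal sketch of the syntax it enables; the register() helper and Plugin class are invented for the example and are not part of the patch:

    registry = []

    def register(cls):
        # record the class and hand it back unchanged, as a class decorator must
        registry.append(cls)
        return cls

    @register
    class Plugin(object):
        pass

    assert Plugin in registry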
r60982 | georg.brandl | 2008-02-23 16:06:25 +0100 (Sat, 23 Feb 2008) | 2 lines #2165: fix test_logging failure on some machines. ........ r60983 | facundo.batista | 2008-02-23 16:07:35 +0100 (Sat, 23 Feb 2008) | 6 lines Issue 1089358. Adds the siginterrupt() function, that is just a wrapper around the system call with the same name. Also added test cases, doc changes and NEWS entry. Thanks Jason and Ralf Schmitt. ........ r60984 | georg.brandl | 2008-02-23 16:11:18 +0100 (Sat, 23 Feb 2008) | 2 lines #2067: file.__exit__() now calls subclasses' close() method. ........ r60985 | georg.brandl | 2008-02-23 16:19:54 +0100 (Sat, 23 Feb 2008) | 2 lines More difflib examples. Written for GHOP by Josip Dzolonga. ........ r60987 | andrew.kuchling | 2008-02-23 16:41:51 +0100 (Sat, 23 Feb 2008) | 1 line #2072: correct documentation for .rpc_paths ........ r60988 | georg.brandl | 2008-02-23 16:43:48 +0100 (Sat, 23 Feb 2008) | 2 lines #2161: Fix opcode name. ........ r60989 | andrew.kuchling | 2008-02-23 16:49:35 +0100 (Sat, 23 Feb 2008) | 2 lines #1119331: ncurses will just call exit() if the terminal name isn't found. Call setupterm() first so that we get a Python exception instead of just existing. ........ r60990 | eric.smith | 2008-02-23 17:05:26 +0100 (Sat, 23 Feb 2008) | 1 line Removed duplicate Py_CHARMASK define. It's already defined in Python.h. ........ r60991 | andrew.kuchling | 2008-02-23 17:23:05 +0100 (Sat, 23 Feb 2008) | 4 lines #1330538: Improve comparison of xmlrpclib.DateTime and datetime instances. Remove automatic handling of datetime.date and datetime.time. This breaks backward compatibility, but python-dev discussion was strongly against this automatic conversion; see the bug for a link. ........ r60994 | andrew.kuchling | 2008-02-23 17:39:43 +0100 (Sat, 23 Feb 2008) | 1 line #835521: Add index entries for various pickle-protocol methods and attributes ........ r60995 | andrew.kuchling | 2008-02-23 18:10:46 +0100 (Sat, 23 Feb 2008) | 2 lines #1433694: minidom's .normalize() failed to set .nextSibling for last element. Fix by Malte Helmert ........ r61000 | christian.heimes | 2008-02-23 18:40:11 +0100 (Sat, 23 Feb 2008) | 1 line Patch #2167 from calvin: Remove unused imports ........ r61001 | christian.heimes | 2008-02-23 18:42:31 +0100 (Sat, 23 Feb 2008) | 1 line Patch #1957: syslogmodule: Release GIL when calling syslog(3) ........ r61002 | christian.heimes | 2008-02-23 18:52:07 +0100 (Sat, 23 Feb 2008) | 2 lines Issue #2051 and patch from Alexander Belopolsky: Permission for pyc and pyo files are inherited from the py file. ........ r61004 | georg.brandl | 2008-02-23 19:47:04 +0100 (Sat, 23 Feb 2008) | 2 lines Documentation coverage builder, part 1. ........ r61006 | andrew.kuchling | 2008-02-23 20:02:33 +0100 (Sat, 23 Feb 2008) | 1 line #1389051: IMAP module tries to read entire message in one chunk. Patch by Fredrik Lundh. ........ r61008 | andrew.kuchling | 2008-02-23 20:28:58 +0100 (Sat, 23 Feb 2008) | 1 line #1389051, #1092502: fix excessively large allocations when using read() on a socket ........ r61011 | jeffrey.yasskin | 2008-02-23 20:40:54 +0100 (Sat, 23 Feb 2008) | 13 lines Prevent classes like: class RunSelfFunction(object): def __init__(self): self.thread = threading.Thread(target=self._run) self.thread.start() def _run(self): pass from creating a permanent cycle between the object and the thread by having the Thread delete its references to the object when it completes. As an example of the effect of this bug, paramiko.Transport inherits from Thread to avoid it. 
........ r61013 | jeffrey.yasskin | 2008-02-23 21:40:35 +0100 (Sat, 23 Feb 2008) | 3 lines Followup to r61011: Also avoid the reference cycle when the Thread's target raises an exception. ........ r61017 | georg.brandl | 2008-02-23 22:59:11 +0100 (Sat, 23 Feb 2008) | 2 lines #2101: fix removeAttribute docs. ........ r61018 | georg.brandl | 2008-02-23 23:05:38 +0100 (Sat, 23 Feb 2008) | 2 lines Add examples to modulefinder docs. Written for GHOP by Josip Dzolonga. ........ r61019 | georg.brandl | 2008-02-23 23:09:24 +0100 (Sat, 23 Feb 2008) | 2 lines Use os.closerange() in popen2. ........ r61020 | georg.brandl | 2008-02-23 23:14:02 +0100 (Sat, 23 Feb 2008) | 2 lines Use os.closerange(). ........ r61021 | georg.brandl | 2008-02-23 23:35:33 +0100 (Sat, 23 Feb 2008) | 3 lines In test_heapq and test_bisect, test both the Python and the C implementation. Originally written for GHOP by Josip Dzolonga, heavily patched by me. ........ r61024 | facundo.batista | 2008-02-23 23:54:12 +0100 (Sat, 23 Feb 2008) | 3 lines Added simple test case. Thanks Benjamin Peterson. ........ r61025 | georg.brandl | 2008-02-23 23:55:18 +0100 (Sat, 23 Feb 2008) | 2 lines #1825: correctly document msilib.add_data. ........ r61027 | georg.brandl | 2008-02-24 00:02:23 +0100 (Sun, 24 Feb 2008) | 2 lines #1826: allow dotted attribute paths in operator.attrgetter. ........ r61028 | georg.brandl | 2008-02-24 00:04:35 +0100 (Sun, 24 Feb 2008) | 2 lines #1506171: added operator.methodcaller(). ........ r61029 | georg.brandl | 2008-02-24 00:25:26 +0100 (Sun, 24 Feb 2008) | 2 lines Document import ./. threading issues. #1720705. ........ r61032 | georg.brandl | 2008-02-24 00:43:01 +0100 (Sun, 24 Feb 2008) | 2 lines Specify what kind of warning -3 emits. ........ r61033 | christian.heimes | 2008-02-24 00:59:45 +0100 (Sun, 24 Feb 2008) | 1 line MS Windows doesn't have mode_t but stat.st_mode is defined as unsigned short. ........ r61034 | georg.brandl | 2008-02-24 01:03:22 +0100 (Sun, 24 Feb 2008) | 4 lines #900744: If an invalid chunked-encoding header is sent by a server, httplib will now raise IncompleteRead and close the connection instead of raising ValueError. ........ r61035 | georg.brandl | 2008-02-24 01:14:24 +0100 (Sun, 24 Feb 2008) | 2 lines #1627: httplib now ignores negative Content-Length headers. ........ r61037 | neal.norwitz | 2008-02-24 03:20:25 +0100 (Sun, 24 Feb 2008) | 2 lines map(None, ...) is not supported in 3.0. ........ r61039 | andrew.kuchling | 2008-02-24 03:39:15 +0100 (Sun, 24 Feb 2008) | 1 line Remove stray word ........ r61040 | neal.norwitz | 2008-02-24 03:40:58 +0100 (Sun, 24 Feb 2008) | 3 lines Add a little info to the 3k deprecation warnings about what to use instead. Suggested by Raymond Hettinger. ........ r61041 | facundo.batista | 2008-02-24 04:17:21 +0100 (Sun, 24 Feb 2008) | 4 lines Issue 1742669. Now %d accepts very big float numbers. Thanks Gabriel Genellina. ........ r61046 | neal.norwitz | 2008-02-24 08:21:56 +0100 (Sun, 24 Feb 2008) | 5 lines Get ctypes working on the Alpha (Tru64). The problem was that there were two module_methods and the one used depended on the order the modules were loaded. By making the test module_methods static, it is not exported and the correct version is picked up. ........ r61048 | neal.norwitz | 2008-02-24 09:27:49 +0100 (Sun, 24 Feb 2008) | 1 line Fix typo of hexidecimal ........ r61049 | christian.heimes | 2008-02-24 13:26:16 +0100 (Sun, 24 Feb 2008) | 1 line Use PY_FORMAT_SIZE_T instead of z for string formatting. Thanks Neal. ........ 
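The two operator additions above (dotted attribute paths for attrgetter in r61027, methodcaller in r61028) are easiest to see side by side. The Car and Engine classes below are invented for the demonstration; the operator calls themselves follow the 2.6 API:

    from operator import attrgetter, methodcaller

    class Engine(object):
        def __init__(self, power):
            self.power = power

    class Car(object):
        def __init__(self, power):
            self.engine = Engine(power)
        def describe(self, unit):
            return "%d %s" % (self.engine.power, unit)

    cars = [Car(90), Car(140)]
    print(list(map(attrgetter("engine.power"), cars)))      # dotted path: [90, 140]
    print(list(map(methodcaller("describe", "hp"), cars)))  # ['90 hp', '140 hp']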
r61051 | mark.dickinson | 2008-02-24 19:12:36 +0100 (Sun, 24 Feb 2008) | 2 lines Remove duplicate 'import re' in decimal.py ........ r61052 | neal.norwitz | 2008-02-24 19:47:03 +0100 (Sun, 24 Feb 2008) | 11 lines Create a db_home directory with a unique name so multiple users can run the test simultaneously. The simplest thing I found that worked on both Windows and Unix was to use the PID. It's unique so should be sufficient. This should prevent many of the spurious failures of the automated tests since they run as different users. Also cleanup the directory consistenly in the tearDown methods. It would be nice if someone ensured that the directories are always created with a consistent name. ........ r61054 | eric.smith | 2008-02-24 22:41:49 +0100 (Sun, 24 Feb 2008) | 1 line Corrected assert to check for correct type in py3k. ........ r61057 | christian.heimes | 2008-02-24 23:48:05 +0100 (Sun, 24 Feb 2008) | 2 lines Added dependency rules for Objects/stringlib/*.h stringobject, unicodeobject and the two formatters are rebuild whenever a header files changes ........ r61058 | neal.norwitz | 2008-02-25 02:45:37 +0100 (Mon, 25 Feb 2008) | 1 line Fix indentation ........ r61059 | brett.cannon | 2008-02-25 06:33:07 +0100 (Mon, 25 Feb 2008) | 2 lines Add minor markup for a string. ........ r61060 | brett.cannon | 2008-02-25 06:33:33 +0100 (Mon, 25 Feb 2008) | 2 lines Fix a minor typo in a docstring. ........ r61063 | andrew.kuchling | 2008-02-25 17:29:19 +0100 (Mon, 25 Feb 2008) | 1 line Move .setupterm() output so that we don't try to call endwin() if it fails ........ r61064 | andrew.kuchling | 2008-02-25 17:29:58 +0100 (Mon, 25 Feb 2008) | 1 line Use file descriptor for real stdout ........ r61065 | christian.heimes | 2008-02-25 18:32:07 +0100 (Mon, 25 Feb 2008) | 1 line Thomas Herve explained to me that PyCrypto depends on the constants. I'm adding the aliases because C code for Python 2.x should compile under 2.6 as well. The aliases aren't available in Python 3.x though. ........ r61067 | facundo.batista | 2008-02-25 19:06:00 +0100 (Mon, 25 Feb 2008) | 4 lines Issue 2117. Update compiler module to handle class decorators. Thanks Thomas Herve ........ r61069 | georg.brandl | 2008-02-25 21:17:56 +0100 (Mon, 25 Feb 2008) | 2 lines Rename sphinx.addons to sphinx.ext. ........ r61071 | georg.brandl | 2008-02-25 21:20:45 +0100 (Mon, 25 Feb 2008) | 2 lines Revert r61029. ........ r61072 | facundo.batista | 2008-02-25 23:33:55 +0100 (Mon, 25 Feb 2008) | 4 lines Issue 2168. gdbm and dbm needs to be iterable; this fixes a failure in the shelve module. Thanks Thomas Herve. ........ r61073 | raymond.hettinger | 2008-02-25 23:42:32 +0100 (Mon, 25 Feb 2008) | 1 line Make sure the itertools filter functions give the same performance for func=bool as func=None. ........ r61074 | raymond.hettinger | 2008-02-26 00:17:41 +0100 (Tue, 26 Feb 2008) | 1 line Revert part of r60927 which made invalid assumptions about the API offered by db modules. ........ r61075 | facundo.batista | 2008-02-26 00:46:02 +0100 (Tue, 26 Feb 2008) | 3 lines Coerced PyBool_Type to be able to compare it. ........ r61076 | raymond.hettinger | 2008-02-26 03:46:54 +0100 (Tue, 26 Feb 2008) | 1 line Docs for itertools.combinations(). Implementation in forthcoming checkin. ........ r61077 | neal.norwitz | 2008-02-26 05:50:37 +0100 (Tue, 26 Feb 2008) | 3 lines Don't use a hard coded port. This test could hang/fail if the port is in use. Speed this test up by avoiding a sleep and using the event. ........ 
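The pattern behind r61077 above generalizes: instead of a hard-coded port and a sleep(), bind to port 0 so the kernel picks a free port, and use a threading.Event to signal that the server is actually listening. The sketch below illustrates the technique with an invented toy server; it is not the test code from the checkin:

    import socket
    import threading

    ready = threading.Event()
    published = []

    def serve():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))           # port 0: the OS assigns a free port
        srv.listen(1)
        published.append(srv.getsockname())  # tell the client where to connect
        ready.set()                          # no sleep() needed on the other side
        conn, _ = srv.accept()
        conn.close()
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    ready.wait()                             # blocks until the server is listening
    client = socket.create_connection(published[0])
    client.close()
    t.join()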
r61078 | neal.norwitz | 2008-02-26 06:12:50 +0100 (Tue, 26 Feb 2008) | 1 line Whitespace normalization ........ r61079 | neal.norwitz | 2008-02-26 06:23:51 +0100 (Tue, 26 Feb 2008) | 1 line Whitespace normalization ........ r61080 | georg.brandl | 2008-02-26 07:40:10 +0100 (Tue, 26 Feb 2008) | 2 lines Banish tab. ........ r61081 | neal.norwitz | 2008-02-26 09:04:59 +0100 (Tue, 26 Feb 2008) | 7 lines Speed up this test by about 99%. Remove sleeps and replace with events. (This may fail on some slow platforms, but we can fix those cases which should be relatively isolated and easier to find now.) Move two test cases that didn't require a server to be started to a separate TestCase. These tests were taking 3 seconds which is what the timeout was set to. ........ r61082 | christian.heimes | 2008-02-26 09:18:11 +0100 (Tue, 26 Feb 2008) | 1 line The contains function raised a gcc warning. The new code is copied straight from py3k. ........ r61084 | neal.norwitz | 2008-02-26 09:21:28 +0100 (Tue, 26 Feb 2008) | 3 lines Add a timing flag to Trace so you can see where slowness occurs like waiting for socket timeouts in test_smtplib :-). ........ r61086 | christian.heimes | 2008-02-26 18:23:51 +0100 (Tue, 26 Feb 2008) | 3 lines Patch #1691070 from Roger Upole: Speed up PyArg_ParseTupleAndKeywords() and improve error msg My tests don't show the promised speed up of 10%. The code is as fast as the old code for simple cases and slightly faster for complex cases with several of args and kwargs. But the patch simplifies the code, too. ........ r61087 | georg.brandl | 2008-02-26 20:13:45 +0100 (Tue, 26 Feb 2008) | 2 lines #2194: fix some typos. ........ r61088 | raymond.hettinger | 2008-02-27 00:40:50 +0100 (Wed, 27 Feb 2008) | 1 line Add itertools.combinations(). ........ r61089 | raymond.hettinger | 2008-02-27 02:08:04 +0100 (Wed, 27 Feb 2008) | 1 line One too many decrefs. ........ r61090 | raymond.hettinger | 2008-02-27 02:08:30 +0100 (Wed, 27 Feb 2008) | 1 line Larger test range ........ r61091 | raymond.hettinger | 2008-02-27 02:44:34 +0100 (Wed, 27 Feb 2008) | 1 line Simply the sample code for combinations(). ........ r61098 | jeffrey.yasskin | 2008-02-28 05:45:36 +0100 (Thu, 28 Feb 2008) | 7 lines Move abc._Abstract into object by adding a new flag Py_TPFLAGS_IS_ABSTRACT, which forbids constructing types that have it set. The effect is to speed ./python.exe -m timeit -s 'import abc' -s 'class Foo(object): __metaclass__ = abc.ABCMeta' 'Foo()' up from 2.5us to 0.201us. This fixes issue 1762. ........ r61099 | jeffrey.yasskin | 2008-02-28 06:53:18 +0100 (Thu, 28 Feb 2008) | 3 lines Speed test_socketserver up from 28.739s to 0.226s, simplify the logic, and make sure all tests run even if some fail. ........ r61100 | jeffrey.yasskin | 2008-02-28 07:09:19 +0100 (Thu, 28 Feb 2008) | 21 lines Thread.start() used sleep(0.000001) to make sure it didn't return before the new thread had started. At least on my MacBook Pro, that wound up sleeping for a full 10ms (probably 1 jiffy). By using an Event instead, we can be absolutely certain that the thread has started, and return more quickly (217us). 
Before: $ ./python.exe -m timeit -s 'from threading import Thread' 't = Thread(); t.start(); t.join()' 100 loops, best of 3: 10.3 msec per loop $ ./python.exe -m timeit -s 'from threading import Thread; t = Thread()' 't.isAlive()' 1000000 loops, best of 3: 0.47 usec per loop After: $ ./python.exe -m timeit -s 'from threading import Thread' 't = Thread(); t.start(); t.join()' 1000 loops, best of 3: 217 usec per loop $ ./python.exe -m timeit -s 'from threading import Thread; t = Thread()' 't.isAlive()' 1000000 loops, best of 3: 0.86 usec per loop To be fair, the 10ms isn't CPU time, and other threads including the spawned one get to run during it. There are also some slightly more complicated ways to get back the .4us in isAlive() if we want. ........ r61101 | raymond.hettinger | 2008-02-28 10:23:48 +0100 (Thu, 28 Feb 2008) | 1 line Add repeat keyword argument to itertools.product(). ........ r61102 | christian.heimes | 2008-02-28 12:18:49 +0100 (Thu, 28 Feb 2008) | 1 line The empty tuple is usually a singleton with a much higher refcnt than 1 ........ r61105 | andrew.kuchling | 2008-02-28 15:03:03 +0100 (Thu, 28 Feb 2008) | 1 line #2169: make generated HTML more valid ........ r61106 | jeffrey.yasskin | 2008-02-28 19:03:15 +0100 (Thu, 28 Feb 2008) | 4 lines Prevent SocketServer.ForkingMixIn from waiting on child processes that it didn't create, in most cases. When there are max_children handlers running, it will still wait for any child process, not just handler processes. ........ r61107 | raymond.hettinger | 2008-02-28 20:41:24 +0100 (Thu, 28 Feb 2008) | 1 line Document impending updates to itertools. ........ r61108 | martin.v.loewis | 2008-02-28 20:44:22 +0100 (Thu, 28 Feb 2008) | 1 line Add 2.6aN uuids. ........ r61109 | martin.v.loewis | 2008-02-28 20:57:34 +0100 (Thu, 28 Feb 2008) | 1 line Bundle msvcr90.dll as a "private assembly". ........ r61113 | christian.heimes | 2008-02-28 22:00:45 +0100 (Thu, 28 Feb 2008) | 2 lines Windows fix for signal test - skip it earlier ........ r61116 | martin.v.loewis | 2008-02-28 23:20:50 +0100 (Thu, 28 Feb 2008) | 1 line Locate VS installation dir from environment, so that it works with the express edition. ........ r61118 | raymond.hettinger | 2008-02-28 23:30:42 +0100 (Thu, 28 Feb 2008) | 1 line Have itertools.chain() consume its inputs lazily instead of building a tuple of iterators at the outset. ........ r61119 | raymond.hettinger | 2008-02-28 23:46:41 +0100 (Thu, 28 Feb 2008) | 1 line Add alternate constructor for itertools.chain(). ........ r61123 | mark.dickinson | 2008-02-29 03:16:37 +0100 (Fri, 29 Feb 2008) | 2 lines Add __format__ method to Decimal, to support PEP 3101 ........ r61124 | raymond.hettinger | 2008-02-29 03:21:48 +0100 (Fri, 29 Feb 2008) | 1 line Handle the repeat keyword argument for itertools.product(). ........ r61125 | mark.dickinson | 2008-02-29 04:29:17 +0100 (Fri, 29 Feb 2008) | 2 lines Fix docstring typo. ........ r61128 | martin.v.loewis | 2008-02-29 17:59:21 +0100 (Fri, 29 Feb 2008) | 1 line Make _hashlib a separate project. ........ r61132 | georg.brandl | 2008-02-29 19:15:36 +0100 (Fri, 29 Feb 2008) | 2 lines Until we got downloadable docs, stop confusing viewers by talking about a nonexisting table. ........ r61133 | martin.v.loewis | 2008-02-29 19:17:23 +0100 (Fri, 29 Feb 2008) | 1 line Build db-4.4.20 with VS9; remove VS2003 build if necessary. ........ r61135 | georg.brandl | 2008-02-29 19:21:29 +0100 (Fri, 29 Feb 2008) | 2 lines #2208: allow for non-standard HHC location. ........ 
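The itertools work above is easier to read with concrete calls: r61101/r61124 add a repeat keyword to product(), and r61119 adds the chain.from_iterable() alternate constructor, with r61118 making chain() consume its inputs lazily. A short sketch; the results shown in comments are the 2.6 behaviour:

    from itertools import product, chain

    print(list(product("AB", repeat=2)))
    # [('A', 'A'), ('A', 'B'), ('B', 'A'), ('B', 'B')]

    print(list(chain.from_iterable(["AB", "CD"])))
    # ['A', 'B', 'C', 'D']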
r61136 | martin.v.loewis | 2008-02-29 19:54:45 +0100 (Fri, 29 Feb 2008) | 1 line Port build_ssl.py to 2.4; support HOST_PYTHON variable ........ r61138 | martin.v.loewis | 2008-02-29 21:26:53 +0100 (Fri, 29 Feb 2008) | 1 line Make _hashlib depend on pythoncore. ........ r61139 | martin.v.loewis | 2008-02-29 21:54:44 +0100 (Fri, 29 Feb 2008) | 1 line Package Tcl from tcltk64 on AMD64. ........ r61141 | gerhard.haering | 2008-02-29 23:08:41 +0100 (Fri, 29 Feb 2008) | 2 lines Updated to pysqlite 2.4.1. Documentation additions will come later. ........ r61143 | barry.warsaw | 2008-03-01 03:23:38 +0100 (Sat, 01 Mar 2008) | 2 lines Bump to version 2.6a1 ........ r61144 | barry.warsaw | 2008-03-01 03:26:42 +0100 (Sat, 01 Mar 2008) | 1 line bump idle version number ........ r61146 | fred.drake | 2008-03-01 03:45:07 +0100 (Sat, 01 Mar 2008) | 2 lines fix typo ........ r61147 | barry.warsaw | 2008-03-01 03:53:36 +0100 (Sat, 01 Mar 2008) | 1 line Add date to NEWS ........ r61150 | barry.warsaw | 2008-03-01 04:00:52 +0100 (Sat, 01 Mar 2008) | 1 line Give IDLE a release date ........ r61151 | barry.warsaw | 2008-03-01 04:15:20 +0100 (Sat, 01 Mar 2008) | 1 line More copyright year and version number bumps ........ r61157 | barry.warsaw | 2008-03-01 18:11:41 +0100 (Sat, 01 Mar 2008) | 2 lines Set things up for 2.6a2. ........ r61165 | georg.brandl | 2008-03-02 07:28:16 +0100 (Sun, 02 Mar 2008) | 2 lines It's 2.6 now. ........ r61166 | georg.brandl | 2008-03-02 07:32:32 +0100 (Sun, 02 Mar 2008) | 2 lines Update year. ........ r61167 | georg.brandl | 2008-03-02 07:44:08 +0100 (Sun, 02 Mar 2008) | 2 lines Make patchlevel print out the release if called as a script. ........ r61168 | georg.brandl | 2008-03-02 07:45:40 +0100 (Sun, 02 Mar 2008) | 2 lines New default basename for HTML help files. ........ r61170 | raymond.hettinger | 2008-03-02 11:59:31 +0100 (Sun, 02 Mar 2008) | 1 line Finish-up docs for combinations() and permutations() in itertools. ........ r61171 | raymond.hettinger | 2008-03-02 12:17:51 +0100 (Sun, 02 Mar 2008) | 1 line Tighten example code. ........ r61172 | raymond.hettinger | 2008-03-02 12:57:16 +0100 (Sun, 02 Mar 2008) | 1 line Simplify code for itertools.product(). ........ r61173 | raymond.hettinger | 2008-03-02 13:02:19 +0100 (Sun, 02 Mar 2008) | 1 line Handle 0-tuples which can be singletons. ........ r61174 | gerhard.haering | 2008-03-02 14:08:03 +0100 (Sun, 02 Mar 2008) | 3 lines Made sqlite3 module's regression tests work with SQLite versions that don't support "create table if not exists", yet. ........ r61175 | gerhard.haering | 2008-03-02 14:12:27 +0100 (Sun, 02 Mar 2008) | 2 lines Added note about update of sqlite3 module. ........ r61176 | georg.brandl | 2008-03-02 14:41:39 +0100 (Sun, 02 Mar 2008) | 2 lines Make clear that the constants are strings. ........ r61177 | georg.brandl | 2008-03-02 15:15:04 +0100 (Sun, 02 Mar 2008) | 2 lines Fix factual error. ........ r61183 | gregory.p.smith | 2008-03-02 21:00:53 +0100 (Sun, 02 Mar 2008) | 4 lines Modify import of test_support so that the code can also be used with a stand alone distribution of bsddb that includes its own small copy of test_support for the needed functionality on older pythons. ........ r61189 | brett.cannon | 2008-03-03 01:38:58 +0100 (Mon, 03 Mar 2008) | 5 lines Refactor test_logging to use unittest. This should finally solve the flakiness issues. Thanks to Antoine Pitrou for the patch. ........ 
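The import change described in r61183 above is the usual fallback pattern: prefer the stdlib copy of test_support, and only fall back to a bundled module when running from the standalone bsddb distribution. A rough sketch of that shape; the surrounding test module is assumed rather than taken from the checkin:

    try:
        from test import test_support   # normal stdlib layout
    except ImportError:
        import test_support             # standalone bsddb distribution's own copy

    # ...the rest of the test module then uses test_support either way.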
r61190 | jeffrey.yasskin | 2008-03-03 02:27:03 +0100 (Mon, 03 Mar 2008) | 3 lines compile.c always emits END_FINALLY after WITH_CLEANUP, so predict that in ceval.c. This is worth about a .03-.04us speedup on a simple with block. ........ r61192 | brett.cannon | 2008-03-03 03:41:40 +0100 (Mon, 03 Mar 2008) | 4 lines Move test_largefile over to using 'with' statements for open files. Also rename the driver function to test_main() instead of main_test(). ........ r61194 | brett.cannon | 2008-03-03 04:24:48 +0100 (Mon, 03 Mar 2008) | 3 lines Add a note in the main test class' docstring that the order of execution of the tests is important. ........ r61195 | brett.cannon | 2008-03-03 04:26:43 +0100 (Mon, 03 Mar 2008) | 3 lines Add a note in the main test class' docstring that the order of execution of the tests is important. ........ r61198 | brett.cannon | 2008-03-03 05:19:29 +0100 (Mon, 03 Mar 2008) | 4 lines Add test_main() functions to various tests where it was simple to do. Done so that regrtest can execute the test_main() directly instead of relying on import side-effects. ........ r61199 | neal.norwitz | 2008-03-03 05:37:45 +0100 (Mon, 03 Mar 2008) | 1 line Only DECREF if ret != NULL ........ r61203 | christian.heimes | 2008-03-03 13:40:17 +0100 (Mon, 03 Mar 2008) | 3 lines Initialized merge tracking via "svnmerge" with revisions "1-60195" from svn+ssh://pythondev at svn.python.org/python/branches/trunk-math ........ r61204 | christian.heimes | 2008-03-03 19:28:04 +0100 (Mon, 03 Mar 2008) | 1 line Since abc._Abstract was replaces by a new type flags the regression test suite fails. I've added a new function inspect.isabstract(). Is the mmethod fine or should I check if object is a instance of type or subclass of object, too? ........ r61207 | christian.heimes | 2008-03-03 21:30:29 +0100 (Mon, 03 Mar 2008) | 1 line 15 -> 16 ........ r61209 | georg.brandl | 2008-03-03 21:37:55 +0100 (Mon, 03 Mar 2008) | 2 lines There are now sixteen isfoo functions. ........ r61210 | georg.brandl | 2008-03-03 21:39:00 +0100 (Mon, 03 Mar 2008) | 2 lines 15 -> 16, the 2nd ........ r61211 | georg.brandl | 2008-03-03 22:22:47 +0100 (Mon, 03 Mar 2008) | 2 lines Actually import itertools. ........ r61212 | georg.brandl | 2008-03-03 22:31:50 +0100 (Mon, 03 Mar 2008) | 2 lines Expand a bit on genexp scopes. ........ r61213 | raymond.hettinger | 2008-03-03 23:04:55 +0100 (Mon, 03 Mar 2008) | 1 line Remove dependency on itertools -- a simple genexp suffices. ........ r61214 | raymond.hettinger | 2008-03-03 23:19:58 +0100 (Mon, 03 Mar 2008) | 1 line Issue 2226: Callable checked for the wrong abstract method. ........ r61217 | andrew.kuchling | 2008-03-04 01:40:32 +0100 (Tue, 04 Mar 2008) | 1 line Typo fix ........ r61218 | andrew.kuchling | 2008-03-04 02:30:10 +0100 (Tue, 04 Mar 2008) | 1 line Grammar fix; markup fix ........ r61219 | andrew.kuchling | 2008-03-04 02:47:38 +0100 (Tue, 04 Mar 2008) | 1 line Fix sentence fragment ........ r61220 | andrew.kuchling | 2008-03-04 02:48:26 +0100 (Tue, 04 Mar 2008) | 1 line Typo fix ........ r61221 | andrew.kuchling | 2008-03-04 02:49:37 +0100 (Tue, 04 Mar 2008) | 1 line Add versionadded tags ........ r61222 | andrew.kuchling | 2008-03-04 02:50:32 +0100 (Tue, 04 Mar 2008) | 1 line Thesis night results: add various items ........ r61224 | raymond.hettinger | 2008-03-04 05:17:08 +0100 (Tue, 04 Mar 2008) | 1 line Beef-up docs and tests for itertools. Fix-up end-case for product(). ........ 
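Taken together, r61098 (the Py_TPFLAGS_IS_ABSTRACT flag) and r61204 (inspect.isabstract()) above make abstractness a type-level property that both the constructor and inspect can see. The sketch below is assembled from the log entries, not from the patches themselves, so treat the exact behaviour as assumed:

    import abc
    import inspect

    class Base(object):
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def run(self):
            pass

    class Concrete(Base):
        def run(self):
            return "ok"

    print(inspect.isabstract(Base))       # True: still has an abstract method
    print(inspect.isabstract(Concrete))   # False
    Concrete().run()                      # fine
    # Base() would raise TypeError: can't instantiate an abstract class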
r61225 | georg.brandl | 2008-03-04 08:25:54 +0100 (Tue, 04 Mar 2008) | 2 lines Fix some patch attributions. ........ r61226 | georg.brandl | 2008-03-04 08:33:30 +0100 (Tue, 04 Mar 2008) | 2 lines #2230: document that PyArg_* leaves addresses alone on error. ........ Modified: python/branches/libffi3-branch/Demo/tkinter/guido/ShellWindow.py ============================================================================== --- python/branches/libffi3-branch/Demo/tkinter/guido/ShellWindow.py (original) +++ python/branches/libffi3-branch/Demo/tkinter/guido/ShellWindow.py Tue Mar 4 15:50:53 2008 @@ -121,11 +121,7 @@ sys.stderr.write('popen2: bad write dup\n') if os.dup(c2pwrite) <> 2: sys.stderr.write('popen2: bad write dup\n') - for i in range(3, MAXFD): - try: - os.close(i) - except: - pass + os.closerange(3, MAXFD) try: os.execvp(prog, args) finally: Modified: python/branches/libffi3-branch/Doc/Makefile ============================================================================== --- python/branches/libffi3-branch/Doc/Makefile (original) +++ python/branches/libffi3-branch/Doc/Makefile Tue Mar 4 15:50:53 2008 @@ -12,7 +12,7 @@ ALLSPHINXOPTS = -b $(BUILDER) -d build/doctrees -D latex_paper_size=$(PAPER) \ $(SPHINXOPTS) . build/$(BUILDER) -.PHONY: help checkout update build html web htmlhelp clean +.PHONY: help checkout update build html web htmlhelp clean coverage help: @echo "Please use \`make <target>' where <target> is one of" @@ -22,6 +22,7 @@ @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview over all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" + @echo " coverage to check documentation coverage for library and C API" checkout: @if [ ! -d tools/sphinx ]; then \ @@ -74,9 +75,13 @@ linkcheck: BUILDER = linkcheck linkcheck: build - @echo "Link check complete; look for any errors in the above output "\ + @echo "Link check complete; look for any errors in the above output " \ "or in build/$(BUILDER)/output.txt" +coverage: BUILDER = coverage +coverage: build + @echo "Coverage finished; see c.txt and python.txt in build/coverage" + clean: -rm -rf build/* -rm -rf tools/sphinx Modified: python/branches/libffi3-branch/Doc/README.txt ============================================================================== --- python/branches/libffi3-branch/Doc/README.txt (original) +++ python/branches/libffi3-branch/Doc/README.txt Tue Mar 4 15:50:53 2008 @@ -59,6 +59,9 @@ deprecated items in the current version. This is meant as a help for the writer of the "What's New" document. + * "coverage", which builds a coverage overview for standard library modules + and C API. + A "make update" updates the Subversion checkouts in `tools/`. @@ -115,7 +118,7 @@ as long as you don't change or remove the copyright notice: ---------------------------------------------------------------------- -Copyright (c) 2000-2007 Python Software Foundation. +Copyright (c) 2000-2008 Python Software Foundation. All rights reserved. Copyright (c) 2000 BeOpen.com. Modified: python/branches/libffi3-branch/Doc/c-api/arg.rst ============================================================================== --- python/branches/libffi3-branch/Doc/c-api/arg.rst (original) +++ python/branches/libffi3-branch/Doc/c-api/arg.rst Tue Mar 4 15:50:53 2008 @@ -208,7 +208,7 @@ :ctype:`void\*` argument that was passed to the :cfunc:`PyArg_Parse\*` function. The returned *status* should be ``1`` for a successful conversion and ``0`` if the conversion has failed.
When the conversion fails, the *converter* function - should raise an exception. + should raise an exception and leave the content of *address* unmodified. ``S`` (string) [PyStringObject \*] Like ``O`` but requires that the Python object is a string object. Raises @@ -287,9 +287,13 @@ units above, where these parameters are used as input values; they should match what is specified for the corresponding format unit in that case. -For the conversion to succeed, the *arg* object must match the format and the -format must be exhausted. On success, the :cfunc:`PyArg_Parse\*` functions -return true, otherwise they return false and raise an appropriate exception. +For the conversion to succeed, the *arg* object must match the format +and the format must be exhausted. On success, the +:cfunc:`PyArg_Parse\*` functions return true, otherwise they return +false and raise an appropriate exception. When the +:cfunc:`PyArg_Parse\*` functions fail due to conversion failure in one +of the format units, the variables at the addresses corresponding to that +and the following format units are left untouched. .. cfunction:: int PyArg_ParseTuple(PyObject *args, const char *format, ...) Modified: python/branches/libffi3-branch/Doc/c-api/long.rst ============================================================================== --- python/branches/libffi3-branch/Doc/c-api/long.rst (original) +++ python/branches/libffi3-branch/Doc/c-api/long.rst Tue Mar 4 15:50:53 2008 @@ -174,6 +174,6 @@ .. versionadded:: 1.5.2 .. versionchanged:: 2.5 - For values outside 0..LONG_MAX, both signed and unsigned integers are acccepted. + For values outside 0..LONG_MAX, both signed and unsigned integers are accepted. Modified: python/branches/libffi3-branch/Doc/c-api/objbuffer.rst ============================================================================== --- python/branches/libffi3-branch/Doc/c-api/objbuffer.rst (original) +++ python/branches/libffi3-branch/Doc/c-api/objbuffer.rst Tue Mar 4 15:50:53 2008 @@ -8,7 +8,7 @@ .. cfunction:: int PyObject_AsCharBuffer(PyObject *obj, const char **buffer, Py_ssize_t *buffer_len) - Returns a pointer to a read-only memory location useable as character- based + Returns a pointer to a read-only memory location usable as character-based input. The *obj* argument must support the single-segment character buffer interface. On success, returns ``0``, sets *buffer* to the memory location and *buffer_len* to the buffer length. Returns ``-1`` and sets a :exc:`TypeError` Modified: python/branches/libffi3-branch/Doc/c-api/typeobj.rst ============================================================================== --- python/branches/libffi3-branch/Doc/c-api/typeobj.rst (original) +++ python/branches/libffi3-branch/Doc/c-api/typeobj.rst Tue Mar 4 15:50:53 2008 @@ -573,7 +573,7 @@ The :attr:`tp_traverse` pointer is used by the garbage collector to detect reference cycles. A typical implementation of a :attr:`tp_traverse` function simply calls :cfunc:`Py_VISIT` on each of the instance's members that are Python - objects. For exampe, this is function :cfunc:`local_traverse` from the + objects. 
For example, this is function :cfunc:`local_traverse` from the :mod:`thread` extension module:: static int @@ -1160,7 +1160,7 @@ binaryfunc nb_and; binaryfunc nb_xor; binaryfunc nb_or; - coercion nb_coerce; /* Used by the coerce() funtion */ + coercion nb_coerce; /* Used by the coerce() function */ unaryfunc nb_int; unaryfunc nb_long; unaryfunc nb_float; Modified: python/branches/libffi3-branch/Doc/conf.py ============================================================================== --- python/branches/libffi3-branch/Doc/conf.py (original) +++ python/branches/libffi3-branch/Doc/conf.py Tue Mar 4 15:50:53 2008 @@ -13,7 +13,7 @@ # General configuration # --------------------- -extensions = ['sphinx.addons.refcounting'] +extensions = ['sphinx.ext.refcounting', 'sphinx.ext.coverage'] # General substitutions. project = 'Python' @@ -86,7 +86,7 @@ } # Output file base name for HTML help builder. -htmlhelp_basename = 'pydoc' +htmlhelp_basename = 'python' + release.replace('.', '') # Options for LaTeX output @@ -139,3 +139,39 @@ # Documents to append as an appendix to all manuals. latex_appendices = ['glossary', 'about', 'license', 'copyright'] + +# Options for the coverage checker +# -------------------------------- + +# The coverage checker will ignore all modules/functions/classes whose names +# match any of the following regexes (using re.match). +coverage_ignore_modules = [ + r'[T|t][k|K]', + r'Tix', + r'distutils.*', +] + +coverage_ignore_functions = [ + 'test($|_)', +] + +coverage_ignore_classes = [ +] + +# Glob patterns for C source files for C API coverage, relative to this directory. +coverage_c_path = [ + '../Include/*.h', +] + +# Regexes to find C items in the source files. +coverage_c_regexes = { + 'cfunction': (r'^PyAPI_FUNC\(.*\)\s+([^_][\w_]+)'), + 'data': (r'^PyAPI_DATA\(.*\)\s+([^_][\w_]+)'), + 'macro': (r'^#define ([^_][\w_]+)\(.*\)[\s|\\]'), +} + +# The coverage checker will ignore all C items whose names match these regexes +# (using re.match) -- the keys must be the same as in coverage_c_regexes. +coverage_ignore_c_items = { +# 'cfunction': [...] +} Modified: python/branches/libffi3-branch/Doc/copyright.rst ============================================================================== --- python/branches/libffi3-branch/Doc/copyright.rst (original) +++ python/branches/libffi3-branch/Doc/copyright.rst Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ Python and this documentation is: -Copyright ?? 2001-2007 Python Software Foundation. All rights reserved. +Copyright ?? 2001-2008 Python Software Foundation. All rights reserved. Copyright ?? 2000 BeOpen.com. All rights reserved. 
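The conf.py hunk above enables the Sphinx coverage builder and filters documented names through plain regular expressions applied with re.match. A minimal sketch of that prefix-anchored filtering, assuming a hypothetical is_ignored() helper and sample module names (only the pattern values are taken from the hunk; nothing else here is part of the patch)::

    import re

    # Ignore patterns copied from the coverage_ignore_modules list above.
    ignore_modules = [r'[T|t][k|K]', r'Tix', r'distutils.*']

    def is_ignored(name, patterns):
        # re.match anchors at the start of the string only, so these act
        # as prefix patterns rather than full-string matches.
        return any(re.match(pat, name) for pat in patterns)

    for mod in ('Tkinter', 'Tix', 'distutils.core', 'difflib'):
        print mod, is_ignored(mod, ignore_modules)
    # prints: Tkinter True / Tix True / distutils.core True / difflib False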
Modified: python/branches/libffi3-branch/Doc/distutils/builtdist.rst ============================================================================== --- python/branches/libffi3-branch/Doc/distutils/builtdist.rst (original) +++ python/branches/libffi3-branch/Doc/distutils/builtdist.rst Tue Mar 4 15:50:53 2008 @@ -195,7 +195,7 @@ | | or --- & :option:`maintainer` and | | | :option:`maintainer_email` | +------------------------------------------+----------------------------------------------+ -| Copyright | :option:`licence` | +| Copyright | :option:`license` | +------------------------------------------+----------------------------------------------+ | Url | :option:`url` | +------------------------------------------+----------------------------------------------+ Modified: python/branches/libffi3-branch/Doc/distutils/packageindex.rst ============================================================================== --- python/branches/libffi3-branch/Doc/distutils/packageindex.rst (original) +++ python/branches/libffi3-branch/Doc/distutils/packageindex.rst Tue Mar 4 15:50:53 2008 @@ -53,13 +53,13 @@ The .pypirc file ================ -The format of the :file:`.pypirc` file is formated as follows:: +The format of the :file:`.pypirc` file is as follows:: [server-login] repository: username: password: -*repository* can be ommitted and defaults to ``http://www.python.org/pypi``. +*repository* can be omitted and defaults to ``http://www.python.org/pypi``. Modified: python/branches/libffi3-branch/Doc/distutils/setupscript.rst ============================================================================== --- python/branches/libffi3-branch/Doc/distutils/setupscript.rst (original) +++ python/branches/libffi3-branch/Doc/distutils/setupscript.rst Tue Mar 4 15:50:53 2008 @@ -185,7 +185,7 @@ same base package), use the :option:`ext_package` keyword argument to :func:`setup`. For example, :: - setup(... + setup(..., ext_package='pkg', ext_modules=[Extension('foo', ['foo.c']), Extension('subpkg.bar', ['bar.c'])], @@ -214,7 +214,7 @@ This warning notwithstanding, options to SWIG can be currently passed like this:: - setup(... + setup(..., ext_modules=[Extension('_foo', ['foo.i'], swig_opts=['-modern', '-I../include'])], py_modules=['foo'], @@ -443,7 +443,7 @@ The :option:`scripts` option simply is a list of files to be handled in this way. From the PyXML setup script:: - setup(... + setup(..., scripts=['scripts/xmlproc_parse', 'scripts/xmlproc_val'] ) @@ -501,7 +501,7 @@ :option:`data_files` specifies a sequence of (*directory*, *files*) pairs in the following way:: - setup(... + setup(..., data_files=[('bitmaps', ['bm/b1.gif', 'bm/b2.gif']), ('config', ['cfg/data.cfg']), ('/etc/init.d', ['init-script'])] @@ -613,7 +613,7 @@ :option:`classifiers` are specified in a python list:: - setup(... + setup(..., classifiers=[ 'Development Status :: 4 - Beta', 'Environment :: Console', Modified: python/branches/libffi3-branch/Doc/howto/advocacy.rst ============================================================================== --- python/branches/libffi3-branch/Doc/howto/advocacy.rst (original) +++ python/branches/libffi3-branch/Doc/howto/advocacy.rst Tue Mar 4 15:50:53 2008 @@ -276,7 +276,7 @@ product in any way. * If something goes wrong, you can't sue for damages. Practically all software - licences contain this condition. + licenses contain this condition. Notice that you don't have to provide source code for anything that contains Python or is built with it. 
Also, the Python interpreter and accompanying Modified: python/branches/libffi3-branch/Doc/howto/doanddont.rst ============================================================================== --- python/branches/libffi3-branch/Doc/howto/doanddont.rst (original) +++ python/branches/libffi3-branch/Doc/howto/doanddont.rst Tue Mar 4 15:50:53 2008 @@ -114,7 +114,7 @@ This is a "don't" which is much weaker then the previous "don't"s but is still something you should not do if you don't have good reasons to do that. The reason it is usually bad idea is because you suddenly have an object which lives -in two seperate namespaces. When the binding in one namespace changes, the +in two separate namespaces. When the binding in one namespace changes, the binding in the other will not, so there will be a discrepancy between them. This happens when, for example, one module is reloaded, or changes the definition of a function at runtime. Modified: python/branches/libffi3-branch/Doc/howto/functional.rst ============================================================================== --- python/branches/libffi3-branch/Doc/howto/functional.rst (original) +++ python/branches/libffi3-branch/Doc/howto/functional.rst Tue Mar 4 15:50:53 2008 @@ -905,7 +905,7 @@ itertools.izip(['a', 'b', 'c'], (1, 2, 3)) => ('a', 1), ('b', 2), ('c', 3) -It's similiar to the built-in :func:`zip` function, but doesn't construct an +It's similar to the built-in :func:`zip` function, but doesn't construct an in-memory list and exhaust all the input iterators before returning; instead tuples are constructed and returned only if they're requested. (The technical term for this behaviour is `lazy evaluation Modified: python/branches/libffi3-branch/Doc/howto/regex.rst ============================================================================== --- python/branches/libffi3-branch/Doc/howto/regex.rst (original) +++ python/branches/libffi3-branch/Doc/howto/regex.rst Tue Mar 4 15:50:53 2008 @@ -203,7 +203,7 @@ | | | ``bc``. | +------+-----------+---------------------------------+ | 6 | ``abcb`` | Try ``b`` again. This time | -| | | but the character at the | +| | | the character at the | | | | current position is ``'b'``, so | | | | it succeeds. | +------+-----------+---------------------------------+ Modified: python/branches/libffi3-branch/Doc/howto/sockets.rst ============================================================================== --- python/branches/libffi3-branch/Doc/howto/sockets.rst (original) +++ python/branches/libffi3-branch/Doc/howto/sockets.rst Tue Mar 4 15:50:53 2008 @@ -357,7 +357,7 @@ reason to do otherwise. In return, you will get three lists. They have the sockets that are actually -readable, writable and in error. Each of these lists is a subset (possbily +readable, writable and in error. Each of these lists is a subset (possibly empty) of the corresponding list you passed in. And if you put a socket in more than one input list, it will only be (at most) in one output list. @@ -371,7 +371,7 @@ If you have a "server" socket, put it in the potential_readers list. If it comes out in the readable list, your ``accept`` will (almost certainly) work. If you have created a new socket to ``connect`` to someone else, put it in the -ptoential_writers list. If it shows up in the writable list, you have a decent +potential_writers list. If it shows up in the writable list, you have a decent chance that it has connected. 
One very nasty problem with ``select``: if somewhere in those input lists of Modified: python/branches/libffi3-branch/Doc/library/basehttpserver.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/basehttpserver.rst (original) +++ python/branches/libffi3-branch/Doc/library/basehttpserver.rst Tue Mar 4 15:50:53 2008 @@ -122,6 +122,15 @@ class variable. +.. attribute:: BaseHTTPRequestHandler.error_content_type + + Specifies the Content-Type HTTP header of error responses sent to the client. + The default value is ``'text/html'``. + + .. versionadded:: 2.6 + Previously, the content type was always ``'text/html'``. + + .. attribute:: BaseHTTPRequestHandler.protocol_version This specifies the HTTP protocol version used in responses. If set to Modified: python/branches/libffi3-branch/Doc/library/codecs.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/codecs.rst (original) +++ python/branches/libffi3-branch/Doc/library/codecs.rst Tue Mar 4 15:50:53 2008 @@ -1001,7 +1001,7 @@ +-----------------+--------------------------------+--------------------------------+ | iso8859_3 | iso-8859-3, latin3, L3 | Esperanto, Maltese | +-----------------+--------------------------------+--------------------------------+ -| iso8859_4 | iso-8859-4, latin4, L4 | Baltic languagues | +| iso8859_4 | iso-8859-4, latin4, L4 | Baltic languages | +-----------------+--------------------------------+--------------------------------+ | iso8859_5 | iso-8859-5, cyrillic | Bulgarian, Byelorussian, | | | | Macedonian, Russian, Serbian | Modified: python/branches/libffi3-branch/Doc/library/collections.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/collections.rst (original) +++ python/branches/libffi3-branch/Doc/library/collections.rst Tue Mar 4 15:50:53 2008 @@ -470,7 +470,7 @@ .. function:: namedtuple(typename, fieldnames, [verbose]) Returns a new tuple subclass named *typename*. The new subclass is used to - create tuple-like objects that have fields accessable by attribute lookup as + create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and fieldnames) and a helpful :meth:`__repr__` method which lists the tuple contents in a ``name=value`` format. @@ -536,7 +536,7 @@ >>> x, y = p # unpack like a regular tuple >>> x, y (11, 22) - >>> p.x + p.y # fields also accessable by name + >>> p.x + p.y # fields also accessible by name 33 >>> p # readable __repr__ with a name=value style Point(x=11, y=22) Modified: python/branches/libffi3-branch/Doc/library/configparser.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/configparser.rst (original) +++ python/branches/libffi3-branch/Doc/library/configparser.rst Tue Mar 4 15:50:53 2008 @@ -187,8 +187,9 @@ .. method:: RawConfigParser.add_section(section) Add a section named *section* to the instance. If a section by the given name - already exists, :exc:`DuplicateSectionError` is raised. - + already exists, :exc:`DuplicateSectionError` is raised. If the name + ``DEFAULT`` (or any of it's case-insensitive variants) is passed, + :exc:`ValueError` is raised. .. 
method:: RawConfigParser.has_section(section) Modified: python/branches/libffi3-branch/Doc/library/decimal.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/decimal.rst (original) +++ python/branches/libffi3-branch/Doc/library/decimal.rst Tue Mar 4 15:50:53 2008 @@ -1609,7 +1609,7 @@ original's two-place significance. If an application does not care about tracking significance, it is easy to -remove the exponent and trailing zeroes, losing signficance, but keeping the +remove the exponent and trailing zeroes, losing significance, but keeping the value unchanged:: >>> def remove_exponent(d): Modified: python/branches/libffi3-branch/Doc/library/difflib.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/difflib.rst (original) +++ python/branches/libffi3-branch/Doc/library/difflib.rst Tue Mar 4 15:50:53 2008 @@ -148,7 +148,27 @@ expressed in the format returned by :func:`time.ctime`. If not specified, the strings default to blanks. - :file:`Tools/scripts/diff.py` is a command-line front-end for this function. + :: + + >>> s1 = ['bacon\n', 'eggs\n', 'ham\n', 'guido\n'] + >>> s2 = ['python\n', 'eggy\n', 'hamster\n', 'guido\n'] + >>> for line in context_diff(s1, s2, fromfile='before.py', tofile='after.py'): + ... sys.stdout.write(line) + *** before.py + --- after.py + *************** + *** 1,4 **** + ! bacon + ! eggs + ! ham + guido + --- 1,4 ---- + ! python + ! eggy + ! hamster + guido + + See :ref:`difflib-interface` for a more detailed example. .. versionadded:: 2.3 @@ -265,7 +285,25 @@ expressed in the format returned by :func:`time.ctime`. If not specified, the strings default to blanks. - :file:`Tools/scripts/diff.py` is a command-line front-end for this function. + :: + + + >>> s1 = ['bacon\n', 'eggs\n', 'ham\n', 'guido\n'] + >>> s2 = ['python\n', 'eggy\n', 'hamster\n', 'guido\n'] + >>> for line in unified_diff(s1, s2, fromfile='before.py', tofile='after.py'): + ... sys.stdout.write(line) + --- before.py + +++ after.py + @@ -1,4 +1,4 @@ + -bacon + -eggs + -ham + +python + +eggy + +hamster + guido + + See :ref:`difflib-interface` for a more detailed example. .. versionadded:: 2.3 @@ -649,3 +687,75 @@ ? ++++ ^ ^ + 5. Flat is better than nested. + +.. _difflib-interface: + +A command-line interface to difflib +----------------------------------- + +This example shows how to use difflib to create a ``diff``-like utility. +It is also contained in the Python source distribution, as +:file:`Tools/scripts/diff.py`. + +:: + + """ Command line interface to difflib.py providing diffs in four formats: + + * ndiff: lists every line and highlights interline changes. + * context: highlights clusters of changes in a before/after format. + * unified: highlights clusters of changes in an inline format. + * html: generates side by side comparison with change highlights. 
+ + """ + + import sys, os, time, difflib, optparse + + def main(): + # Configure the option parser + usage = "usage: %prog [options] fromfile tofile" + parser = optparse.OptionParser(usage) + parser.add_option("-c", action="store_true", default=False, + help='Produce a context format diff (default)') + parser.add_option("-u", action="store_true", default=False, + help='Produce a unified format diff') + hlp = 'Produce HTML side by side diff (can use -c and -l in conjunction)' + parser.add_option("-m", action="store_true", default=False, help=hlp) + parser.add_option("-n", action="store_true", default=False, + help='Produce a ndiff format diff') + parser.add_option("-l", "--lines", type="int", default=3, + help='Set number of context lines (default 3)') + (options, args) = parser.parse_args() + + if len(args) == 0: + parser.print_help() + sys.exit(1) + if len(args) != 2: + parser.error("need to specify both a fromfile and tofile") + + n = options.lines + fromfile, tofile = args # as specified in the usage string + + # we're passing these as arguments to the diff function + fromdate = time.ctime(os.stat(fromfile).st_mtime) + todate = time.ctime(os.stat(tofile).st_mtime) + fromlines = open(fromfile, 'U').readlines() + tolines = open(tofile, 'U').readlines() + + if options.u: + diff = difflib.unified_diff(fromlines, tolines, fromfile, tofile, + fromdate, todate, n=n) + elif options.n: + diff = difflib.ndiff(fromlines, tolines) + elif options.m: + diff = difflib.HtmlDiff().make_file(fromlines, tolines, fromfile, + tofile, context=options.c, + numlines=n) + else: + diff = difflib.context_diff(fromlines, tolines, fromfile, tofile, + fromdate, todate, n=n) + + # we're using writelines because diff is a generator + sys.stdout.writelines(diff) + + if __name__ == '__main__': + main() Modified: python/branches/libffi3-branch/Doc/library/dis.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/dis.rst (original) +++ python/branches/libffi3-branch/Doc/library/dis.rst Tue Mar 4 15:50:53 2008 @@ -544,7 +544,7 @@ .. opcode:: STORE_NAME (namei) Implements ``name = TOS``. *namei* is the index of *name* in the attribute - :attr:`co_names` of the code object. The compiler tries to use ``STORE_LOCAL`` + :attr:`co_names` of the code object. The compiler tries to use ``STORE_FAST`` or ``STORE_GLOBAL`` if possible. Modified: python/branches/libffi3-branch/Doc/library/inspect.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/inspect.rst (original) +++ python/branches/libffi3-branch/Doc/library/inspect.rst Tue Mar 4 15:50:53 2008 @@ -28,7 +28,7 @@ ----------------- The :func:`getmembers` function retrieves the members of an object such as a -class or module. The fifteen functions whose names begin with "is" are mainly +class or module. The sixteen functions whose names begin with "is" are mainly provided as convenient choices for the second argument to :func:`getmembers`. They also help you determine when you can expect to find the following special attributes: @@ -279,10 +279,14 @@ Return true if the object is a Python generator function. + .. versionadded:: 2.6 + .. function:: isgenerator(object) Return true if the object is a generator. + .. versionadded:: 2.6 + .. function:: istraceback(object) Return true if the object is a traceback. @@ -307,6 +311,12 @@ Return true if the object is a user-defined or built-in function or method. +.. 
function:: isabstract(object) + + Return true if the object is an abstract base class. + + .. versionadded:: 2.6 + .. function:: ismethoddescriptor(object) Modified: python/branches/libffi3-branch/Doc/library/itertools.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/itertools.rst (original) +++ python/branches/libffi3-branch/Doc/library/itertools.rst Tue Mar 4 15:50:53 2008 @@ -76,6 +76,67 @@ yield element +.. function:: itertools.chain.from_iterable(iterable) + + Alternate constructor for :func:`chain`. Gets chained inputs from a + single iterable argument that is evaluated lazily. Equivalent to:: + + @classmethod + def from_iterable(iterables): + for it in iterables: + for element in it: + yield element + + .. versionadded:: 2.6 + + +.. function:: combinations(iterable, r) + + Return successive *r* length combinations of elements in the *iterable*. + + Combinations are emitted in lexicographic sort order. So, if the + input *iterable* is sorted, the combination tuples will be produced + in sorted order. + + Elements are treated as unique based on their position, not on their + value. So if the input elements are unique, there will be no repeat + values in each combination. + + Each result tuple is ordered to match the input order. So, every + combination is a subsequence of the input *iterable*. + + Equivalent to:: + + def combinations(iterable, r): + 'combinations(range(4), 3) --> (0,1,2) (0,1,3) (0,2,3) (1,2,3)' + pool = tuple(iterable) + n = len(pool) + indices = range(r) + yield tuple(pool[i] for i in indices) + while 1: + for i in reversed(range(r)): + if indices[i] != i + n - r: + break + else: + return + indices[i] += 1 + for j in range(i+1, r): + indices[j] = indices[j-1] + 1 + yield tuple(pool[i] for i in indices) + + The code for :func:`combinations` can be also expressed as a subsequence + of :func:`permutations` after filtering entries where the elements are not + in sorted order (according to their position in the input pool):: + + def combinations(iterable, r): + pool = tuple(iterable) + n = len(pool) + for indices in permutations(range(n), r): + if sorted(indices) == list(indices): + yield tuple(pool[i] for i in indices) + + .. versionadded:: 2.6 + .. function:: count([n]) Make an iterator that returns consecutive integers starting with *n*. If not @@ -302,6 +363,89 @@ .. versionadded:: 2.6 +.. function:: permutations(iterable[, r]) + + Return successive *r* length permutations of elements in the *iterable*. + + If *r* is not specified or is ``None``, then *r* defaults to the length + of the *iterable* and all possible full-length permutations + are generated. + + Permutations are emitted in lexicographic sort order. So, if the + input *iterable* is sorted, the permutation tuples will be produced + in sorted order. + + Elements are treated as unique based on their position, not on their + value. So if the input elements are unique, there will be no repeat + values in each permutation. 
+ + Equivalent to:: + + def permutations(iterable, r=None): + 'permutations(range(3), 2) --> (0,1) (0,2) (1,0) (1,2) (2,0) (2,1)' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + indices = range(n) + cycles = range(n-r+1, n+1)[::-1] + yield tuple(pool[i] for i in indices[:r]) + while n: + for i in reversed(range(r)): + cycles[i] -= 1 + if cycles[i] == 0: + indices[i:] = indices[i+1:] + indices[i:i+1] + cycles[i] = n - i + else: + j = cycles[i] + indices[i], indices[-j] = indices[-j], indices[i] + yield tuple(pool[i] for i in indices[:r]) + break + else: + return + + The code for :func:`permutations` can be also expressed as a subsequence of + :func:`product`, filtered to exclude entries with repeated elements (those + from the same position in the input pool):: + + def permutations(iterable, r=None): + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + for indices in product(range(n), repeat=r): + if len(set(indices)) == r: + yield tuple(pool[i] for i in indices) + + .. versionadded:: 2.6 + +.. function:: product(*iterables[, repeat]) + + Cartesian product of input iterables. + + Equivalent to nested for-loops in a generator expression. For example, + ``product(A, B)`` returns the same as ``((x,y) for x in A for y in B)``. + + The leftmost iterators are in the outermost for-loop, so the output tuples + cycle like an odometer (with the rightmost element changing on every + iteration). This results in a lexicographic ordering so that if the + inputs iterables are sorted, the product tuples are emitted + in sorted order. + + To compute the product of an iterable with itself, specify the number of + repetitions with the optional *repeat* keyword argument. For example, + ``product(A, repeat=4)`` means the same as ``product(A, A, A, A)``. + + This function is equivalent to the following code, except that the + actual implementation does not build up intermediate results in memory:: + + def product(*args, **kwds): + pools = map(tuple, args) * kwds.get('repeat', 1) + result = [[]] + for pool in pools: + result = [x+[y] for x in result for y in pool] + for prod in result: + yield tuple(prod) + + .. versionadded:: 2.6 .. function:: repeat(object[, times]) @@ -492,13 +636,13 @@ def ncycles(seq, n): "Returns the sequence elements n times" - return chain(*repeat(seq, n)) + return chain.from_iterable(repeat(seq, n)) def dotproduct(vec1, vec2): return sum(imap(operator.mul, vec1, vec2)) def flatten(listOfLists): - return list(chain(*listOfLists)) + return list(chain.from_iterable(listOfLists)) def repeatfunc(func, times=None, *args): """Repeat calls to func with specified arguments. @@ -507,8 +651,7 @@ """ if times is None: return starmap(func, repeat(args)) - else: - return starmap(func, repeat(args, times)) + return starmap(func, repeat(args, times)) def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." 
@@ -525,7 +668,7 @@ def roundrobin(*iterables): "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'" - # Recipe contributed by George Sakkis + # Recipe credited to George Sakkis pending = len(iterables) nexts = cycle(iter(it).next for it in iterables) while pending: @@ -536,3 +679,10 @@ pending -= 1 nexts = cycle(islice(nexts, pending)) + def powerset(iterable): + "powerset('ab') --> set([]), set(['a']), set(['b']), set(['a', 'b'])" + # Recipe credited to Eric Raymond + pairs = [(2**i, x) for i, x in enumerate(iterable)] + for n in xrange(2**len(pairs)): + yield set(x for m, x in pairs if m&n) + Modified: python/branches/libffi3-branch/Doc/library/logging.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/logging.rst (original) +++ python/branches/libffi3-branch/Doc/library/logging.rst Tue Mar 4 15:50:53 2008 @@ -43,7 +43,7 @@ It is, of course, possible to log messages with different verbosity levels or to different destinations. Support for writing log messages to files, HTTP GET/POST locations, email via SMTP, generic sockets, or OS-specific logging -mechnisms are all supported by the standard module. You can also create your +mechanisms are all supported by the standard module. You can also create your own log destination class if you have special requirements not met by any of the built-in classes. @@ -245,8 +245,8 @@ little more verbose for logging messages than using the log level convenience methods listed above, but this is how to log at custom log levels. -:func:`getLogger` returns a reference to a logger instance with a name of name -if a name is provided, or root if not. The names are period-separated +:func:`getLogger` returns a reference to a logger instance with the specified +if it it is provided, or ``root`` if not. The names are period-separated hierarchical structures. Multiple calls to :func:`getLogger` with the same name will return a reference to the same logger object. Loggers that are further down in the hierarchical list are children of loggers higher up in the list. @@ -267,7 +267,7 @@ with an :func:`addHandler` method. As an example scenario, an application may want to send all log messages to a log file, all log messages of error or higher to stdout, and all messages of critical to an email address. This scenario -requires three individual handlers where each hander is responsible for sending +requires three individual handlers where each handler is responsible for sending messages of a specific severity to a specific location. The standard library includes quite a few handler types; this tutorial uses only @@ -298,7 +298,7 @@ ^^^^^^^^^^ Formatter objects configure the final order, structure, and contents of the log -message. Unlike the base logging.Handler class, application code may +message. Unlike the base :class:`logging.Handler` class, application code may instantiate formatter classes, although you could likely subclass the formatter if your application needs special behavior. The constructor takes two optional arguments: a message format string and a date format string. If there is no @@ -1651,27 +1651,28 @@ You can use the *when* to specify the type of *interval*. 
The list of possible values is, note that they are not case sensitive: - +----------+-----------------------+ - | Value | Type of interval | - +==========+=======================+ - | S | Seconds | - +----------+-----------------------+ - | M | Minutes | - +----------+-----------------------+ - | H | Hours | - +----------+-----------------------+ - | D | Days | - +----------+-----------------------+ - | W | Week day (0=Monday) | - +----------+-----------------------+ - | midnight | Roll over at midnight | - +----------+-----------------------+ - - If *backupCount* is non-zero, the system will save old log files by appending - extensions to the filename. The extensions are date-and-time based, using the - strftime format ``%Y-%m-%d_%H-%M-%S`` or a leading portion thereof, depending on - the rollover interval. At most *backupCount* files will be kept, and if more - would be created when rollover occurs, the oldest one is deleted. + +----------------+-----------------------+ + | Value | Type of interval | + +================+=======================+ + | ``'S'`` | Seconds | + +----------------+-----------------------+ + | ``'M'`` | Minutes | + +----------------+-----------------------+ + | ``'H'`` | Hours | + +----------------+-----------------------+ + | ``'D'`` | Days | + +----------------+-----------------------+ + | ``'W'`` | Week day (0=Monday) | + +----------------+-----------------------+ + | ``'midnight'`` | Roll over at midnight | + +----------------+-----------------------+ + + The system will save old log files by appending extensions to the filename. + The extensions are date-and-time based, using the strftime format + ``%Y-%m-%d_%H-%M-%S`` or a leading portion thereof, depending on the rollover + interval. If *backupCount* is nonzero, at most *backupCount* files will be + kept, and if more would be created when rollover occurs, the oldest one is + deleted. .. method:: TimedRotatingFileHandler.doRollover() Modified: python/branches/libffi3-branch/Doc/library/mailbox.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/mailbox.rst (original) +++ python/branches/libffi3-branch/Doc/library/mailbox.rst Tue Mar 4 15:50:53 2008 @@ -433,7 +433,7 @@ original format, which is sometimes referred to as :dfn:`mboxo`. This means that the :mailheader:`Content-Length` header, if present, is ignored and that any occurrences of "From " at the beginning of a line in a message body are -transformed to ">From " when storing the message, although occurences of ">From +transformed to ">From " when storing the message, although occurrences of ">From " are not transformed to "From " when reading the message. Some :class:`Mailbox` methods implemented by :class:`mbox` deserve special @@ -581,7 +581,7 @@ .. method:: MH.close() - :class:`MH` instances do not keep any open files, so this method is equivelant + :class:`MH` instances do not keep any open files, so this method is equivalent to :meth:`unlock`. Modified: python/branches/libffi3-branch/Doc/library/modulefinder.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/modulefinder.rst (original) +++ python/branches/libffi3-branch/Doc/library/modulefinder.rst Tue Mar 4 15:50:53 2008 @@ -50,3 +50,65 @@ Analyze the contents of the *pathname* file, which must contain Python code. +.. attribute:: ModuleFinder.modules + + A dictionary mapping module names to modules. See :ref:`modulefinder-example` + + +.. 
_modulefinder-example: + +Example usage of :class:`ModuleFinder` +-------------------------------------- + +The script that is going to get analyzed later on (bacon.py):: + + import re, itertools + + try: + import baconhameggs + except ImportError: + pass + + try: + import guido.python.ham + except ImportError: + pass + + +The script that will output the report of bacon.py:: + + from modulefinder import ModuleFinder + + finder = ModuleFinder() + finder.run_script('bacon.py') + + print 'Loaded modules:' + for name, mod in finder.modules.iteritems(): + print '%s: ' % name, + print ','.join(mod.globalnames.keys()[:3]) + + print '-'*50 + print 'Modules not imported:' + print '\n'.join(finder.badmodules.iterkeys()) + +Sample output (may vary depending on the architecture):: + + Loaded modules: + _types: + copy_reg: _inverted_registry,_slotnames,__all__ + sre_compile: isstring,_sre,_optimize_unicode + _sre: + sre_constants: REPEAT_ONE,makedict,AT_END_LINE + sys: + re: __module__,finditer,_expand + itertools: + __main__: re,itertools,baconhameggs + sre_parse: __getslice__,_PATTERNENDERS,SRE_FLAG_UNICODE + array: + types: __module__,IntType,TypeType + --------------------------------------------------- + Modules not imported: + guido.python.ham + baconhameggs + + Modified: python/branches/libffi3-branch/Doc/library/msilib.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/msilib.rst (original) +++ python/branches/libffi3-branch/Doc/library/msilib.rst Tue Mar 4 15:50:53 2008 @@ -67,7 +67,7 @@ .. function:: init_database(name, schema, ProductName, ProductCode, ProductVersion, Manufacturer) - Create and return a new database *name*, initialize it with *schema*, and set + Create and return a new database *name*, initialize it with *schema*, and set the properties *ProductName*, *ProductCode*, *ProductVersion*, and *Manufacturer*. @@ -79,11 +79,17 @@ function returns. -.. function:: add_data(database, records) +.. function:: add_data(database, table, records) - Add all *records* to *database*. *records* should be a list of tuples, each one - containing all fields of a record according to the schema of the table. For - optional fields, ``None`` can be passed. + Add all *records* to the table named *table* in *database*. + + The *table* argument must be one of the predefined tables in the MSI schema, + e.g. ``'Feature'``, ``'File'``, ``'Component'``, ``'Dialog'``, ``'Control'``, + etc. + + *records* should be a list of tuples, each one containing all fields of a + record according to the schema of the table. For optional fields, + ``None`` can be passed. Field values can be int or long numbers, strings, or instances of the Binary class. Modified: python/branches/libffi3-branch/Doc/library/operator.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/operator.rst (original) +++ python/branches/libffi3-branch/Doc/library/operator.rst Tue Mar 4 15:50:53 2008 @@ -499,15 +499,21 @@ Return a callable object that fetches *attr* from its operand. If more than one attribute is requested, returns a tuple of attributes. After, - ``f=attrgetter('name')``, the call ``f(b)`` returns ``b.name``. After, - ``f=attrgetter('name', 'date')``, the call ``f(b)`` returns ``(b.name, + ``f = attrgetter('name')``, the call ``f(b)`` returns ``b.name``. After, + ``f = attrgetter('name', 'date')``, the call ``f(b)`` returns ``(b.name, b.date)``. 
+ The attribute names can also contain dots; after ``f = attrgetter('date.month')``, + the call ``f(b)`` returns ``b.date.month``. + .. versionadded:: 2.4 .. versionchanged:: 2.5 Added support for multiple attributes. + .. versionchanged:: 2.6 + Added support for dotted attributes. + .. function:: itemgetter(item[, args...]) @@ -532,6 +538,17 @@ [('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)] +.. function:: methodcaller(name[, args...]) + + Return a callable object that calls the method *name* on its operand. If + additional arguments and/or keyword arguments are given, they will be given + to the method as well. After ``f = methodcaller('name')``, the call ``f(b)`` + returns ``b.name()``. After ``f = methodcaller('name', 'foo', bar=1)``, the + call ``f(b)`` returns ``b.name('foo', bar=1)``. + + .. versionadded:: 2.6 + + .. _operator-map: Mapping Operators to Functions Modified: python/branches/libffi3-branch/Doc/library/optparse.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/optparse.rst (original) +++ python/branches/libffi3-branch/Doc/library/optparse.rst Tue Mar 4 15:50:53 2008 @@ -1633,7 +1633,7 @@ value.append(arg) del rargs[0] - setattr(parser.values, option.dest, value) + setattr(parser.values, option.dest, value) [...] parser.add_option("-c", "--callback", Modified: python/branches/libffi3-branch/Doc/library/pickle.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/pickle.rst (original) +++ python/branches/libffi3-branch/Doc/library/pickle.rst Tue Mar 4 15:50:53 2008 @@ -463,6 +463,11 @@ Pickling and unpickling extension types ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +.. index:: + single: __reduce__() (pickle protocol) + single: __reduce_ex__() (pickle protocol) + single: __safe_for_unpickling__ (pickle protocol) + When the :class:`Pickler` encounters an object of a type it knows nothing about --- such as an extension type --- it looks in two places for a hint of how to pickle it. One alternative is for the object to implement a :meth:`__reduce__` @@ -541,6 +546,10 @@ Pickling and unpickling external objects ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +.. index:: + single: persistent_id (pickle protocol) + single: persistent_load (pickle protocol) + For the benefit of object persistence, the :mod:`pickle` module supports the notion of a reference to an object outside the pickled data stream. Such objects are referenced by a "persistent id", which is just an arbitrary string @@ -630,6 +639,10 @@ Subclassing Unpicklers ---------------------- +.. index:: + single: load_global() (pickle protocol) + single: find_global() (pickle protocol) + By default, unpickling will import any class that it finds in the pickle data. You can control exactly what gets unpickled and what gets called by customizing your unpickler. Unfortunately, exactly how you do this is different depending Modified: python/branches/libffi3-branch/Doc/library/platform.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/platform.rst (original) +++ python/branches/libffi3-branch/Doc/library/platform.rst Tue Mar 4 15:50:53 2008 @@ -245,7 +245,7 @@ version)`` which default to the given parameters in case the lookup fails. 
Note that this function has intimate knowledge of how different libc versions - add symbols to the executable is probably only useable for executables compiled + add symbols to the executable is probably only usable for executables compiled using :program:`gcc`. The file is read and scanned in chunks of *chunksize* bytes. Modified: python/branches/libffi3-branch/Doc/library/profile.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/profile.rst (original) +++ python/branches/libffi3-branch/Doc/library/profile.rst Tue Mar 4 15:50:53 2008 @@ -531,7 +531,7 @@ non-parenthesized number repeats the cumulative time spent in the function at the right. - * With :mod:`cProfile`, each caller is preceeded by three numbers: the number of + * With :mod:`cProfile`, each caller is preceded by three numbers: the number of times this specific call was made, and the total and cumulative times spent in the current function while it was invoked by this specific caller. Modified: python/branches/libffi3-branch/Doc/library/random.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/random.rst (original) +++ python/branches/libffi3-branch/Doc/library/random.rst Tue Mar 4 15:50:53 2008 @@ -98,7 +98,7 @@ Change the internal state to one different from and likely far away from the current state. *n* is a non-negative integer which is used to scramble the current state vector. This is most useful in multi-threaded programs, in - conjuction with multiple instances of the :class:`Random` class: + conjunction with multiple instances of the :class:`Random` class: :meth:`setstate` or :meth:`seed` can be used to force all instances into the same internal state, and then :meth:`jumpahead` can be used to force the instances' states far apart. Modified: python/branches/libffi3-branch/Doc/library/re.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/re.rst (original) +++ python/branches/libffi3-branch/Doc/library/re.rst Tue Mar 4 15:50:53 2008 @@ -1102,7 +1102,7 @@ 'Heather Albrecht 548.326.4584 919 Park Place'] Finally, split each entry into a list with first name, last name, telephone -number, and address. We use the ``maxsplit`` paramater of :func:`split` +number, and address. We use the ``maxsplit`` parameter of :func:`split` because the address has spaces, our splitting pattern, in it:: >>> [re.split(":? ", entry, 3) for entry in entries] @@ -1112,7 +1112,7 @@ ['Heather', 'Albrecht', '548.326.4584', '919 Park Place']] The ``:?`` pattern matches the colon after the last name, so that it does not -occur in the result list. With a ``maxsplit`` of ``4``, we could seperate the +occur in the result list. With a ``maxsplit`` of ``4``, we could separate the house number from the street name:: >>> [re.split(":? ", entry, 4) for entry in entries] @@ -1144,7 +1144,7 @@ Finding all Adverbs ^^^^^^^^^^^^^^^^^^^ -:func:`findall` matches *all* occurences of a pattern, not just the first +:func:`findall` matches *all* occurrences of a pattern, not just the first one as :func:`search` does. 
For example, if one was a writer and wanted to find all of the adverbs in some text, he or she might use :func:`findall` in the following manner:: Modified: python/branches/libffi3-branch/Doc/library/signal.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/signal.rst (original) +++ python/branches/libffi3-branch/Doc/library/signal.rst Tue Mar 4 15:50:53 2008 @@ -124,6 +124,21 @@ exception to be raised. + +.. function:: siginterrupt(signalnum, flag) + + Change system call restart behaviour: if *flag* is :const:`False`, system calls + will be restarted when interrupted by signal *signalnum*, otherwise system calls will + be interrupted. Returns nothing. Availability: Unix, Mac (see the man page + :manpage:`siginterrupt(3)` for further information). + + Note that installing a signal handler with :func:`signal` will reset the restart + behaviour to interruptible by implicitly calling :cfunc:`siginterrupt` with a true *flag* + value for the given signal. + + .. versionadded:: 2.6 + + .. function:: signal(signalnum, handler) Set the handler for signal *signalnum* to the function *handler*. *handler* can Modified: python/branches/libffi3-branch/Doc/library/simplexmlrpcserver.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/simplexmlrpcserver.rst (original) +++ python/branches/libffi3-branch/Doc/library/simplexmlrpcserver.rst Tue Mar 4 15:50:53 2008 @@ -120,7 +120,7 @@ Registers the XML-RPC multicall function system.multicall. -.. attribute:: SimpleXMLRPCServer.rpc_paths +.. attribute:: SimpleXMLRPCRequestHandler.rpc_paths An attribute value that must be a tuple listing valid path portions of the URL for receiving XML-RPC requests. Requests posted to other paths will result in a @@ -136,9 +136,15 @@ Server code:: from SimpleXMLRPCServer import SimpleXMLRPCServer + from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler + + # Restrict to a particular path. + class RequestHandler(SimpleXMLRPCRequestHandler): + rpc_paths = ('/RPC2',) # Create server - server = SimpleXMLRPCServer(("localhost", 8000)) + server = SimpleXMLRPCServer(("localhost", 8000), + requestHandler=RequestHandler) server.register_introspection_functions() # Register pow() function; this will use the value of Modified: python/branches/libffi3-branch/Doc/library/socket.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/socket.rst (original) +++ python/branches/libffi3-branch/Doc/library/socket.rst Tue Mar 4 15:50:53 2008 @@ -929,5 +929,5 @@ # receive a package print s.recvfrom(65565) - # disabled promiscous mode + # disabled promiscuous mode s.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF) Modified: python/branches/libffi3-branch/Doc/library/tokenize.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/tokenize.rst (original) +++ python/branches/libffi3-branch/Doc/library/tokenize.rst Tue Mar 4 15:50:53 2008 @@ -18,7 +18,7 @@ .. function:: generate_tokens(readline) - The :func:`generate_tokens` generator requires one argment, *readline*, which + The :func:`generate_tokens` generator requires one argument, *readline*, which must be a callable object which provides the same interface as the :meth:`readline` method of built-in file objects (see section :ref:`bltin-file-objects`). 
Each call to the function should return one line of Modified: python/branches/libffi3-branch/Doc/library/trace.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/trace.rst (original) +++ python/branches/libffi3-branch/Doc/library/trace.rst Tue Mar 4 15:50:53 2008 @@ -80,7 +80,7 @@ --------------------- -.. class:: Trace([count=1[, trace=1[, countfuncs=0[, countcallers=0[, ignoremods=()[, ignoredirs=()[, infile=None[, outfile=None]]]]]]]]) +.. class:: Trace([count=1[, trace=1[, countfuncs=0[, countcallers=0[, ignoremods=()[, ignoredirs=()[, infile=None[, outfile=None[, timing=False]]]]]]]]]) Create an object to trace execution of a single statement or expression. All parameters are optional. *count* enables counting of line numbers. *trace* @@ -89,7 +89,8 @@ *ignoremods* is a list of modules or packages to ignore. *ignoredirs* is a list of directories whose modules or packages should be ignored. *infile* is the file from which to read stored count information. *outfile* is a file in which - to write updated count information. + to write updated count information. *timing* enables a timestamp relative + to when tracing was started to be displayed. .. method:: Trace.run(cmd) Modified: python/branches/libffi3-branch/Doc/library/userdict.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/userdict.rst (original) +++ python/branches/libffi3-branch/Doc/library/userdict.rst Tue Mar 4 15:50:53 2008 @@ -11,7 +11,7 @@ simplifies writing classes that need to be substitutable for dictionaries (such as the shelve module). -This also module defines a class, :class:`UserDict`, that acts as a wrapper +This module also defines a class, :class:`UserDict`, that acts as a wrapper around dictionary objects. The need for this class has been largely supplanted by the ability to subclass directly from :class:`dict` (a feature that became available starting with Python version 2.2). Prior to the introduction of Modified: python/branches/libffi3-branch/Doc/library/weakref.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/weakref.rst (original) +++ python/branches/libffi3-branch/Doc/library/weakref.rst Tue Mar 4 15:50:53 2008 @@ -63,7 +63,7 @@ class Dict(dict): pass - obj = Dict(red=1, green=2, blue=3) # this object is weak referencable + obj = Dict(red=1, green=2, blue=3) # this object is weak referenceable Extension types can easily be made to support weak references; see :ref:`weakref-support`. Modified: python/branches/libffi3-branch/Doc/library/xml.dom.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/xml.dom.rst (original) +++ python/branches/libffi3-branch/Doc/library/xml.dom.rst Tue Mar 4 15:50:53 2008 @@ -652,8 +652,8 @@ .. method:: Element.removeAttribute(name) - Remove an attribute by name. No exception is raised if there is no matching - attribute. + Remove an attribute by name. If there is no matching attribute, a + :exc:`NotFoundErr` is raised. .. 
method:: Element.removeAttributeNode(oldAttr) Modified: python/branches/libffi3-branch/Doc/library/xml.etree.elementtree.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/xml.etree.elementtree.rst (original) +++ python/branches/libffi3-branch/Doc/library/xml.etree.elementtree.rst Tue Mar 4 15:50:53 2008 @@ -421,7 +421,7 @@ .. method:: TreeBuilder.close() - Flushes the parser buffers, and returns the toplevel documen element. Returns an + Flushes the parser buffers, and returns the toplevel document element. Returns an Element instance. Modified: python/branches/libffi3-branch/Doc/library/xmlrpclib.rst ============================================================================== --- python/branches/libffi3-branch/Doc/library/xmlrpclib.rst (original) +++ python/branches/libffi3-branch/Doc/library/xmlrpclib.rst Tue Mar 4 15:50:53 2008 @@ -34,10 +34,7 @@ all clients and servers; see http://ontosys.com/xml-rpc/extensions.php for a description. The *use_datetime* flag can be used to cause date/time values to be presented as :class:`datetime.datetime` objects; this is false by default. - :class:`datetime.datetime`, :class:`datetime.date` and :class:`datetime.time` - objects may be passed to calls. :class:`datetime.date` objects are converted - with a time of "00:00:00". :class:`datetime.time` objects are converted using - today's date. + :class:`datetime.datetime` objects may be passed to calls. Both the HTTP and HTTPS transports support the URL syntax extension for HTTP Basic Authentication: ``http://user:pass at host:port/path``. The ``user:pass`` @@ -81,9 +78,7 @@ +---------------------------------+---------------------------------------------+ | :const:`dates` | in seconds since the epoch (pass in an | | | instance of the :class:`DateTime` class) or | - | | a :class:`datetime.datetime`, | - | | :class:`datetime.date` or | - | | :class:`datetime.time` instance | + | | a :class:`datetime.datetime` instance. | +---------------------------------+---------------------------------------------+ | :const:`binary data` | pass in an instance of the :class:`Binary` | | | wrapper class | @@ -221,10 +216,10 @@ DateTime Objects ---------------- -This class may be initialized with seconds since the epoch, a time tuple, an ISO -8601 time/date string, or a :class:`datetime.datetime`, :class:`datetime.date` -or :class:`datetime.time` instance. It has the following methods, supported -mainly for internal use by the marshalling/unmarshalling code: +This class may be initialized with seconds since the epoch, a time +tuple, an ISO 8601 time/date string, or a :class:`datetime.datetime` +instance. It has the following methods, supported mainly for internal +use by the marshalling/unmarshalling code: .. method:: DateTime.decode(string) @@ -507,10 +502,7 @@ ``None`` if no method name is present in the packet. If the XML-RPC packet represents a fault condition, this function will raise a :exc:`Fault` exception. The *use_datetime* flag can be used to cause date/time values to be presented as - :class:`datetime.datetime` objects; this is false by default. Note that even if - you call an XML-RPC method with :class:`datetime.date` or :class:`datetime.time` - objects, they are converted to :class:`DateTime` objects internally, so only - :class:`datetime.datetime` objects will be returned. + :class:`datetime.datetime` objects; this is false by default. .. versionchanged:: 2.5 The *use_datetime* flag was added. 
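The xmlrpclib.rst hunks above narrow the documented date/time handling to datetime.datetime only, controlled by the *use_datetime* flag. A minimal round-trip sketch of that documented behaviour, using a made-up method name ``'ping'`` (the patch itself contains no such example)::

    import datetime
    import xmlrpclib

    stamp = datetime.datetime(2008, 3, 4, 15, 50, 53)

    # Marshal a call that carries a datetime.datetime value.
    payload = xmlrpclib.dumps((stamp,), 'ping')

    # Without the flag the value is unmarshalled as xmlrpclib.DateTime;
    # with use_datetime=True it comes back as a datetime.datetime.
    params, method = xmlrpclib.loads(payload, use_datetime=True)
    print method, params[0], type(params[0])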
Modified: python/branches/libffi3-branch/Doc/license.rst ============================================================================== --- python/branches/libffi3-branch/Doc/license.rst (original) +++ python/branches/libffi3-branch/Doc/license.rst Tue Mar 4 15:50:53 2008 @@ -116,7 +116,7 @@ analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python |release| alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of - copyright, i.e., "Copyright ?? 2001-2007 Python Software Foundation; All Rights + copyright, i.e., "Copyright ?? 2001-2008 Python Software Foundation; All Rights Reserved" are retained in Python |release| alone or in any derivative version prepared by Licensee. Modified: python/branches/libffi3-branch/Doc/make.bat ============================================================================== --- python/branches/libffi3-branch/Doc/make.bat (original) +++ python/branches/libffi3-branch/Doc/make.bat Tue Mar 4 15:50:53 2008 @@ -1,8 +1,9 @@ - at echo off +@@echo off setlocal set SVNROOT=http://svn.python.org/projects -if "%PYTHON%" EQU "" set PYTHON=python25 +if "%PYTHON%" EQU "" set PYTHON=..\pcbuild\python +if "%HTMLHELP%" EQU "" set HTMLHELP=%ProgramFiles%\HTML Help Workshop\hhc.exe if "%1" EQU "" goto help if "%1" EQU "html" goto build @@ -41,7 +42,7 @@ if not exist build\%1 mkdir build\%1 if not exist build\doctrees mkdir build\doctrees cmd /C %PYTHON% tools\sphinx-build.py -b%1 -dbuild\doctrees . build\%1 -if "%1" EQU "htmlhelp" "%ProgramFiles%\HTML Help Workshop\hhc.exe" build\htmlhelp\pydoc.hhp +if "%1" EQU "htmlhelp" "%HTMLHELP%" build\htmlhelp\pydoc.hhp goto end :webrun Modified: python/branches/libffi3-branch/Doc/reference/compound_stmts.rst ============================================================================== --- python/branches/libffi3-branch/Doc/reference/compound_stmts.rst (original) +++ python/branches/libffi3-branch/Doc/reference/compound_stmts.rst Tue Mar 4 15:50:53 2008 @@ -531,7 +531,7 @@ .. rubric:: Footnotes -.. [#] The exception is propogated to the invocation stack only if there is no +.. [#] The exception is propagated to the invocation stack only if there is no :keyword:`finally` clause that negates the exception. .. [#] Currently, control "flows off the end" except in the case of an exception or the Modified: python/branches/libffi3-branch/Doc/reference/expressions.rst ============================================================================== --- python/branches/libffi3-branch/Doc/reference/expressions.rst (original) +++ python/branches/libffi3-branch/Doc/reference/expressions.rst Tue Mar 4 15:50:53 2008 @@ -230,14 +230,15 @@ evaluating the expression to yield a value that is reached the innermost block for each iteration. -Variables used in the generator expression are evaluated lazily when the -:meth:`next` method is called for generator object (in the same fashion as -normal generators). However, the leftmost :keyword:`for` clause is immediately -evaluated so that error produced by it can be seen before any other possible -error in the code that handles the generator expression. Subsequent -:keyword:`for` clauses cannot be evaluated immediately since they may depend on -the previous :keyword:`for` loop. For example: ``(x*y for x in range(10) for y -in bar(x))``. 
+Variables used in the generator expression are evaluated lazily in a separate +scope when the :meth:`next` method is called for the generator object (in the +same fashion as for normal generators). However, the :keyword:`in` expression +of the leftmost :keyword:`for` clause is immediately evaluated in the current +scope so that an error produced by it can be seen before any other possible +error in the code that handles the generator expression. Subsequent +:keyword:`for` and :keyword:`if` clauses cannot be evaluated immediately since +they may depend on the previous :keyword:`for` loop. For example: +``(x*y for x in range(10) for y in bar(x))``. The parentheses can be omitted on calls with only one argument. See section :ref:`calls` for the detail. @@ -395,7 +396,7 @@ generator, or raises :exc:`StopIteration` if the generator exits without yielding another value. When :meth:`send` is called to start the generator, it must be called with :const:`None` as the argument, because there is no - :keyword:`yield` expression that could receieve the value. + :keyword:`yield` expression that could receive the value. .. method:: generator.throw(type[, value[, traceback]]) @@ -677,7 +678,7 @@ If the syntax ``*expression`` appears in the function call, ``expression`` must evaluate to a sequence. Elements from this sequence are treated as if they were -additional positional arguments; if there are postional arguments *x1*,...,*xN* +additional positional arguments; if there are positional arguments *x1*,...,*xN* , and ``expression`` evaluates to a sequence *y1*,...,*yM*, this is equivalent to a call with M+N positional arguments *x1*,...,*xN*,*y1*,...,*yM*. Modified: python/branches/libffi3-branch/Doc/reference/index.rst ============================================================================== --- python/branches/libffi3-branch/Doc/reference/index.rst (original) +++ python/branches/libffi3-branch/Doc/reference/index.rst Tue Mar 4 15:50:53 2008 @@ -17,7 +17,7 @@ interfaces available to C/C++ programmers in detail. .. toctree:: - :maxdepth: 3 + :maxdepth: 2 introduction.rst lexical_analysis.rst Modified: python/branches/libffi3-branch/Doc/tools/sphinxext/download.html ============================================================================== --- python/branches/libffi3-branch/Doc/tools/sphinxext/download.html (original) +++ python/branches/libffi3-branch/Doc/tools/sphinxext/download.html Tue Mar 4 15:50:53 2008 @@ -5,6 +5,9 @@

Download Python {{ release }} Documentation {%- if last_updated %} (last updated on {{ last_updated }}){% endif %}

+Currently, the development documentation isn't packaged for download.

+ + + {% endblock %} Modified: python/branches/libffi3-branch/Doc/tools/sphinxext/patchlevel.py ============================================================================== --- python/branches/libffi3-branch/Doc/tools/sphinxext/patchlevel.py (original) +++ python/branches/libffi3-branch/Doc/tools/sphinxext/patchlevel.py Tue Mar 4 15:50:53 2008 @@ -66,3 +66,6 @@ print >>sys.stderr, 'Can\'t get version info from Include/patchlevel.h, ' \ 'using version of this interpreter (%s).' % release return version, release + +if __name__ == '__main__': + print get_header_version_info('.')[1] Modified: python/branches/libffi3-branch/Doc/tutorial/interpreter.rst ============================================================================== --- python/branches/libffi3-branch/Doc/tutorial/interpreter.rst (original) +++ python/branches/libffi3-branch/Doc/tutorial/interpreter.rst Tue Mar 4 15:50:53 2008 @@ -22,7 +22,7 @@ alternative location.) On Windows machines, the Python installation is usually placed in -:file:`C:\Python26`, though you can change this when you're running the +:file:`C:\\Python26`, though you can change this when you're running the installer. To add this directory to your path, you can type the following command into the command prompt in a DOS box:: @@ -102,7 +102,7 @@ before printing the first prompt:: python - Python 2.5 (#1, Feb 28 2007, 00:02:06) + Python 2.6 (#1, Feb 28 2007, 00:02:06) Type "help", "copyright", "credits" or "license" for more information. >>> Modified: python/branches/libffi3-branch/Doc/tutorial/stdlib2.rst ============================================================================== --- python/branches/libffi3-branch/Doc/tutorial/stdlib2.rst (original) +++ python/branches/libffi3-branch/Doc/tutorial/stdlib2.rst Tue Mar 4 15:50:53 2008 @@ -269,7 +269,7 @@ 0 >>> d['primary'] # entry was automatically removed Traceback (most recent call last): - File "", line 1, in -toplevel- + File "", line 1, in d['primary'] # entry was automatically removed File "C:/python26/lib/weakref.py", line 46, in __getitem__ o = self.data[key]() Modified: python/branches/libffi3-branch/Doc/using/cmdline.rst ============================================================================== --- python/branches/libffi3-branch/Doc/using/cmdline.rst (original) +++ python/branches/libffi3-branch/Doc/using/cmdline.rst Tue Mar 4 15:50:53 2008 @@ -326,6 +326,8 @@ * :func:`reduce` * :func:`reload` + Using these will emit a :exc:`DeprecationWarning`. + .. versionadded:: 2.6 Modified: python/branches/libffi3-branch/Doc/using/unix.rst ============================================================================== --- python/branches/libffi3-branch/Doc/using/unix.rst (original) +++ python/branches/libffi3-branch/Doc/using/unix.rst Tue Mar 4 15:50:53 2008 @@ -20,7 +20,7 @@ that are not available on your distro's package. You can easily compile the latest version of Python from source. -In the event Python doesn't come preinstalled and isn't in the repositories as +In the event that Python doesn't come preinstalled and isn't in the repositories as well, you can easily make packages for your own distro. Have a look at the following links: Modified: python/branches/libffi3-branch/Doc/using/windows.rst ============================================================================== --- python/branches/libffi3-branch/Doc/using/windows.rst (original) +++ python/branches/libffi3-branch/Doc/using/windows.rst Tue Mar 4 15:50:53 2008 @@ -21,7 +21,7 @@ `_ for many years. 
With ongoing development of Python, some platforms that used to be supported -earlier are not longer supported (due to the lack of users or developers). +earlier are no longer supported (due to the lack of users or developers). Check :pep:`11` for details on all unsupported platforms. * DOS and Windows 3.x are deprecated since Python 2.0 and code specific to these Modified: python/branches/libffi3-branch/Doc/whatsnew/2.6.rst ============================================================================== --- python/branches/libffi3-branch/Doc/whatsnew/2.6.rst (original) +++ python/branches/libffi3-branch/Doc/whatsnew/2.6.rst Tue Mar 4 15:50:53 2008 @@ -450,6 +450,15 @@ .. ====================================================================== +.. _pep-3101: + +PEP 3101: Advanced String Formatting +===================================================== + +XXX write this + +.. ====================================================================== + .. _pep-3110: PEP 3110: Exception-Handling Changes @@ -544,6 +553,32 @@ .. ====================================================================== +.. _pep-3127: + +PEP 3127: Integer Literal Support and Syntax +===================================================== + +XXX write this + +Python 3.0 changes the syntax for octal integer literals, and +adds supports for binary integers: 0o instad of 0, +and 0b for binary. Python 2.6 doesn't support this, but a bin() +builtin was added, and + + +New bin() built-in returns the binary form of a number. + +.. ====================================================================== + +.. _pep-3129: + +PEP 3129: Class Decorators +===================================================== + +XXX write this. + +.. ====================================================================== + .. _pep-3141: PEP 3141: A Type Hierarchy for Numbers @@ -560,7 +595,7 @@ Numbers are further divided into :class:`Exact` and :class:`Inexact`. Exact numbers can represent values precisely and operations never round off the results or introduce tiny errors that may break the -communtativity and associativity properties; inexact numbers may +commutativity and associativity properties; inexact numbers may perform such rounding or introduce small errors. Integers, long integers, and rational numbers are exact, while floating-point and complex numbers are inexact. @@ -579,7 +614,9 @@ :class:`Rational` numbers derive from :class:`Real`, have :attr:`numerator` and :attr:`denominator` properties, and can be converted to floats. Python 2.6 adds a simple rational-number class, -:class:`Fraction`, in the :mod:`fractions` module. +:class:`Fraction`, in the :mod:`fractions` module. (It's called +:class:`Fraction` instead of :class:`Rational` to avoid +a name clash with :class:`numbers.Rational`.) :class:`Integral` numbers derive from :class:`Rational`, and can be shifted left and right with ``<<`` and ``>>``, @@ -587,9 +624,9 @@ and can be used as array indexes and slice boundaries. In Python 3.0, the PEP slightly redefines the existing built-ins -:func:`math.floor`, :func:`math.ceil`, :func:`round`, and adds a new -one, :func:`trunc`, that's been backported to Python 2.6. -:func:`trunc` rounds toward zero, returning the closest +:func:`round`, :func:`math.floor`, :func:`math.ceil`, and adds a new +one, :func:`math.trunc`, that's been backported to Python 2.6. +:func:`math.trunc` rounds toward zero, returning the closest :class:`Integral` that's between the function's argument and zero. .. 
seealso:: @@ -603,7 +640,7 @@ To fill out the hierarchy of numeric types, a rational-number class has been added as the :mod:`fractions` module. Rational numbers are -represented as a fraction; rational numbers can exactly represent +represented as a fraction, and can exactly represent numbers such as two-thirds that floating-point numbers can only approximate. @@ -692,7 +729,7 @@ A numerical nicety: when creating a complex number from two floats on systems that support signed zeros (-0 and +0), the - :func:`complex()` constructor will now preserve the sign + :func:`complex` constructor will now preserve the sign of the zero. .. Patch 1507 @@ -789,6 +826,15 @@ built-in types. This speeds up checking if an object is a subclass of one of these types. (Contributed by Neal Norwitz.) +* Unicode strings now uses faster code for detecting + whitespace and line breaks; this speeds up the :meth:`split` method + by about 25% and :meth:`splitlines` by 35%. + (Contributed by Antoine Pitrou.) + +* To reduce memory usage, the garbage collector will now clear internal + free lists when garbage-collecting the highest generation of objects. + This may return memory to the OS sooner. + The net result of the 2.6 optimizations is that Python 2.6 runs the pystone benchmark around XX% faster than Python 2.5. @@ -956,15 +1002,69 @@ can also be accessed as attributes. (Contributed by Raymond Hettinger.) -* A new function in the :mod:`itertools` module: ``izip_longest(iter1, iter2, - ...[, fillvalue])`` makes tuples from each of the elements; if some of the - iterables are shorter than others, the missing values are set to *fillvalue*. - For example:: + Some new functions in the module include + :func:`isgenerator`, :func:`isgeneratorfunction`, + and :func:`isabstract`. + +* The :mod:`itertools` module gained several new functions. + + ``izip_longest(iter1, iter2, ...[, fillvalue])`` makes tuples from + each of the elements; if some of the iterables are shorter than + others, the missing values are set to *fillvalue*. For example:: itertools.izip_longest([1,2,3], [1,2,3,4,5]) -> [(1, 1), (2, 2), (3, 3), (None, 4), (None, 5)] - (Contributed by Raymond Hettinger.) + ``product(iter1, iter2, ..., [repeat=N])`` returns the Cartesian product + of the supplied iterables, a set of tuples containing + every possible combination of the elements returned from each iterable. :: + + itertools.product([1,2,3], [4,5,6]) -> + [(1, 4), (1, 5), (1, 6), + (2, 4), (2, 5), (2, 6), + (3, 4), (3, 5), (3, 6)] + + The optional *repeat* keyword argument is used for taking the + product of an iterable or a set of iterables with themselves, + repeated *N* times. With a single iterable argument, *N*-tuples + are returned:: + + itertools.product([1,2], repeat=3)) -> + [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), + (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)] + + With two iterables, *2N*-tuples are returned. :: + + itertools(product([1,2], [3,4], repeat=2) -> + [(1, 3, 1, 3), (1, 3, 1, 4), (1, 3, 2, 3), (1, 3, 2, 4), + (1, 4, 1, 3), (1, 4, 1, 4), (1, 4, 2, 3), (1, 4, 2, 4), + (2, 3, 1, 3), (2, 3, 1, 4), (2, 3, 2, 3), (2, 3, 2, 4), + (2, 4, 1, 3), (2, 4, 1, 4), (2, 4, 2, 3), (2, 4, 2, 4)] + + ``combinations(iter, r)`` returns combinations of length *r* from + the elements of *iterable*. 
:: + + itertools.combinations('123', 2) -> + [('1', '2'), ('1', '3'), ('2', '3')] + + itertools.combinations('123', 3) -> + [('1', '2', '3')] + + itertools.combinations('1234', 3) -> + [('1', '2', '3'), ('1', '2', '4'), ('1', '3', '4'), + ('2', '3', '4')] + + ``itertools.chain(*iterables)` is an existing function in + :mod:`itertools` that gained a new constructor. + ``itertools.chain.from_iterable(iterable)`` takes a single + iterable that should return other iterables. :func:`chain` will + then return all the elements of the first iterable, then + all the elements of the second, and so on. :: + + chain.from_iterable([[1,2,3], [4,5,6]]) -> + [1, 2, 3, 4, 5, 6] + + (All contributed by Raymond Hettinger.) * The :mod:`macfs` module has been removed. This in turn required the :func:`macostools.touched` function to be removed because it depended on the @@ -975,7 +1075,7 @@ * :class:`mmap` objects now have a :meth:`rfind` method that finds a substring, beginning at the end of the string and searching backwards. The :meth:`find` method - also gained a *end* parameter containing the index at which to stop + also gained an *end* parameter containing the index at which to stop the forward search. (Contributed by John Lenton.) @@ -984,6 +1084,29 @@ triggers a warning message when Python is running in 3.0-warning mode. +* The :mod:`operator` module gained a + :func:`methodcaller` function that takes a name and an optional + set of arguments, returning a callable that will call + the named function on any arguments passed to it. For example:: + + >>> # Equivalent to lambda s: s.replace('old', 'new') + >>> replacer = operator.methodcaller('replace', 'old', 'new') + >>> replacer('old wine in old bottles') + 'new wine in new bottles' + + (Contributed by Georg Brandl, after a suggestion by Gregory Petrosyan.) + + The :func:`attrgetter` function now accepts dotted names and performs + the corresponding attribute lookups:: + + >>> inst_name = operator.attrgetter('__class__.__name__') + >>> inst_name('') + 'str' + >>> inst_name(help) + '_Helper' + + (Contributed by Georg Brandl, after a suggestion by Barry Warsaw.) + * New functions in the :mod:`os` module include ``fchmod(fd, mode)``, ``fchown(fd, uid, gid)``, and ``lchmod(path, mode)``, on operating systems that support these @@ -1036,6 +1159,11 @@ .. Patch #1393667 +* The :mod:`pickletools` module now has an :func:`optimize` function + that takes a string containing a pickle and removes some unused + opcodes, returning a shorter pickle that contains the same data structure. + (Contributed by Raymond Hettinger.) + * New functions in the :mod:`posix` module: :func:`chflags` and :func:`lchflags` are wrappers for the corresponding system calls (where they're available). Constants for the flag values are defined in the :mod:`stat` module; some @@ -1099,6 +1227,10 @@ .. % Patch 1583 + The :func:`siginterrupt` function is now available from Python code, + and allows changing whether signals can interrupt system calls or not. + (Contributed by Ralf Schmitt.) + * The :mod:`smtplib` module now supports SMTP over SSL thanks to the addition of the :class:`SMTP_SSL` class. This class supports an interface identical to the existing :class:`SMTP` class. Both @@ -1201,6 +1333,18 @@ .. Patch #1537850 + A new class, :class:`SpooledTemporaryFile`, behaves like + a temporary file but stores its data in memory until a maximum size is + exceeded. On reaching that limit, the contents will be written to + an on-disk temporary file. (Contributed by Dustin J. Mitchell.) 
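A short sketch, assuming Python 2.6, of the tempfile, itertools and operator additions described in the entries above; the max_size value and the sample strings are arbitrary illustrations:

    import itertools, operator, tempfile

    # SpooledTemporaryFile keeps data in memory until max_size is exceeded,
    # then rolls over to a real temporary file on disk.
    tmp = tempfile.SpooledTemporaryFile(max_size=4096)
    tmp.write('x' * 10000)        # exceeds max_size, so the data spills to disk
    tmp.seek(0)
    data = tmp.read()
    tmp.close()

    list(itertools.product([1, 2], [3, 4]))
    # [(1, 3), (1, 4), (2, 3), (2, 4)]
    list(itertools.combinations('123', 2))
    # [('1', '2'), ('1', '3'), ('2', '3')]
    list(itertools.chain.from_iterable([[1, 2, 3], [4, 5, 6]]))
    # [1, 2, 3, 4, 5, 6]

    operator.methodcaller('replace', 'old', 'new')('old wine in old bottles')
    # 'new wine in new bottles'
    operator.attrgetter('__class__.__name__')('some string')
    # 'str'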
+ + The :class:`NamedTemporaryFile` and :class:`SpooledTemporaryFile` classes + both work as context managers, so you can write + ``with tempfile.NamedTemporaryFile() as tmp: ...``. + (Contributed by Alexander Belopolsky.) + + .. Issue #2021 + * The :mod:`test.test_support` module now contains a :func:`EnvironmentVarGuard` context manager that supports temporarily changing environment variables and @@ -1236,6 +1380,8 @@ whitespace. >>> + (Contributed by Dwayne Bailey.) + .. Patch #1581073 * The :mod:`timeit` module now accepts callables as well as strings @@ -1395,7 +1541,7 @@ .. Issue 1534 * Python's C API now includes two functions for case-insensitive string - comparisions, ``PyOS_stricmp(char*, char*)`` + comparisons, ``PyOS_stricmp(char*, char*)`` and ``PyOS_strnicmp(char*, char*, Py_ssize_t)``. (Contributed by Christian Heimes.) @@ -1415,6 +1561,12 @@ .. Patch 1530959 +* Several basic data types, such as integers and strings, maintain + internal free lists of objects that can be re-used. The data + structures for these free lists now follow a naming convention: the + variable is always named ``free_list``, the counter is always named + ``numfree``, and a macro :cmacro:`Py_MAXFREELIST` is + always defined. .. ====================================================================== @@ -1511,6 +1663,15 @@ .. Issue 1706815 +* The :mod:`xmlrpclib` module no longer automatically converts + :class:`datetime.date` and :class:`datetime.time` to the + :class:`xmlrpclib.DateTime` type; the conversion semantics were + not necessarily correct for all applications. Code using + :mod:`xmlrpclib` should convert :class:`date` and :class:`time` + instances. + + .. Issue 1330538 + .. ====================================================================== Modified: python/branches/libffi3-branch/Grammar/Grammar ============================================================================== --- python/branches/libffi3-branch/Grammar/Grammar (original) +++ python/branches/libffi3-branch/Grammar/Grammar Tue Mar 4 15:50:53 2008 @@ -33,7 +33,8 @@ decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ -funcdef: [decorators] 'def' NAME parameters ':' suite +decorated: decorators (classdef | funcdef) +funcdef: 'def' NAME parameters ':' suite parameters: '(' [varargslist] ')' varargslist: ((fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | @@ -73,7 +74,7 @@ exec_stmt: 'exec' expr ['in' test [',' test]] assert_stmt: 'assert' test [',' test] -compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef +compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] Modified: python/branches/libffi3-branch/Include/Python-ast.h ============================================================================== --- python/branches/libffi3-branch/Include/Python-ast.h (original) +++ python/branches/libffi3-branch/Include/Python-ast.h Tue Mar 4 15:50:53 2008 @@ -73,13 +73,14 @@ identifier name; arguments_ty args; asdl_seq *body; - asdl_seq *decorators; + asdl_seq *decorator_list; } FunctionDef; struct { identifier name; asdl_seq *bases; asdl_seq *body; + asdl_seq *decorator_list; } ClassDef; struct { @@ -359,11 +360,12 @@ mod_ty _Py_Suite(asdl_seq * body, PyArena *arena); #define FunctionDef(a0, a1, a2, a3, a4, a5, a6) 
_Py_FunctionDef(a0, a1, a2, a3, a4, a5, a6) stmt_ty _Py_FunctionDef(identifier name, arguments_ty args, asdl_seq * body, - asdl_seq * decorators, int lineno, int col_offset, + asdl_seq * decorator_list, int lineno, int col_offset, PyArena *arena); -#define ClassDef(a0, a1, a2, a3, a4, a5) _Py_ClassDef(a0, a1, a2, a3, a4, a5) -stmt_ty _Py_ClassDef(identifier name, asdl_seq * bases, asdl_seq * body, int - lineno, int col_offset, PyArena *arena); +#define ClassDef(a0, a1, a2, a3, a4, a5, a6) _Py_ClassDef(a0, a1, a2, a3, a4, a5, a6) +stmt_ty _Py_ClassDef(identifier name, asdl_seq * bases, asdl_seq * body, + asdl_seq * decorator_list, int lineno, int col_offset, + PyArena *arena); #define Return(a0, a1, a2, a3) _Py_Return(a0, a1, a2, a3) stmt_ty _Py_Return(expr_ty value, int lineno, int col_offset, PyArena *arena); #define Delete(a0, a1, a2, a3) _Py_Delete(a0, a1, a2, a3) Modified: python/branches/libffi3-branch/Include/graminit.h ============================================================================== --- python/branches/libffi3-branch/Include/graminit.h (original) +++ python/branches/libffi3-branch/Include/graminit.h Tue Mar 4 15:50:53 2008 @@ -3,82 +3,83 @@ #define eval_input 258 #define decorator 259 #define decorators 260 -#define funcdef 261 -#define parameters 262 -#define varargslist 263 -#define fpdef 264 -#define fplist 265 -#define stmt 266 -#define simple_stmt 267 -#define small_stmt 268 -#define expr_stmt 269 -#define augassign 270 -#define print_stmt 271 -#define del_stmt 272 -#define pass_stmt 273 -#define flow_stmt 274 -#define break_stmt 275 -#define continue_stmt 276 -#define return_stmt 277 -#define yield_stmt 278 -#define raise_stmt 279 -#define import_stmt 280 -#define import_name 281 -#define import_from 282 -#define import_as_name 283 -#define dotted_as_name 284 -#define import_as_names 285 -#define dotted_as_names 286 -#define dotted_name 287 -#define global_stmt 288 -#define exec_stmt 289 -#define assert_stmt 290 -#define compound_stmt 291 -#define if_stmt 292 -#define while_stmt 293 -#define for_stmt 294 -#define try_stmt 295 -#define with_stmt 296 -#define with_var 297 -#define except_clause 298 -#define suite 299 -#define testlist_safe 300 -#define old_test 301 -#define old_lambdef 302 -#define test 303 -#define or_test 304 -#define and_test 305 -#define not_test 306 -#define comparison 307 -#define comp_op 308 -#define expr 309 -#define xor_expr 310 -#define and_expr 311 -#define shift_expr 312 -#define arith_expr 313 -#define term 314 -#define factor 315 -#define power 316 -#define atom 317 -#define listmaker 318 -#define testlist_gexp 319 -#define lambdef 320 -#define trailer 321 -#define subscriptlist 322 -#define subscript 323 -#define sliceop 324 -#define exprlist 325 -#define testlist 326 -#define dictmaker 327 -#define classdef 328 -#define arglist 329 -#define argument 330 -#define list_iter 331 -#define list_for 332 -#define list_if 333 -#define gen_iter 334 -#define gen_for 335 -#define gen_if 336 -#define testlist1 337 -#define encoding_decl 338 -#define yield_expr 339 +#define decorated 261 +#define funcdef 262 +#define parameters 263 +#define varargslist 264 +#define fpdef 265 +#define fplist 266 +#define stmt 267 +#define simple_stmt 268 +#define small_stmt 269 +#define expr_stmt 270 +#define augassign 271 +#define print_stmt 272 +#define del_stmt 273 +#define pass_stmt 274 +#define flow_stmt 275 +#define break_stmt 276 +#define continue_stmt 277 +#define return_stmt 278 +#define yield_stmt 279 +#define raise_stmt 280 +#define import_stmt 281 
+#define import_name 282 +#define import_from 283 +#define import_as_name 284 +#define dotted_as_name 285 +#define import_as_names 286 +#define dotted_as_names 287 +#define dotted_name 288 +#define global_stmt 289 +#define exec_stmt 290 +#define assert_stmt 291 +#define compound_stmt 292 +#define if_stmt 293 +#define while_stmt 294 +#define for_stmt 295 +#define try_stmt 296 +#define with_stmt 297 +#define with_var 298 +#define except_clause 299 +#define suite 300 +#define testlist_safe 301 +#define old_test 302 +#define old_lambdef 303 +#define test 304 +#define or_test 305 +#define and_test 306 +#define not_test 307 +#define comparison 308 +#define comp_op 309 +#define expr 310 +#define xor_expr 311 +#define and_expr 312 +#define shift_expr 313 +#define arith_expr 314 +#define term 315 +#define factor 316 +#define power 317 +#define atom 318 +#define listmaker 319 +#define testlist_gexp 320 +#define lambdef 321 +#define trailer 322 +#define subscriptlist 323 +#define subscript 324 +#define sliceop 325 +#define exprlist 326 +#define testlist 327 +#define dictmaker 328 +#define classdef 329 +#define arglist 330 +#define argument 331 +#define list_iter 332 +#define list_for 333 +#define list_if 334 +#define gen_iter 335 +#define gen_for 336 +#define gen_if 337 +#define testlist1 338 +#define encoding_decl 339 +#define yield_expr 340 Modified: python/branches/libffi3-branch/Include/longintrepr.h ============================================================================== --- python/branches/libffi3-branch/Include/longintrepr.h (original) +++ python/branches/libffi3-branch/Include/longintrepr.h Tue Mar 4 15:50:53 2008 @@ -28,8 +28,13 @@ #define PyLong_BASE ((digit)1 << PyLong_SHIFT) #define PyLong_MASK ((int)(PyLong_BASE - 1)) +/* b/w compatibility with Python 2.5 */ +#define SHIFT PyLong_SHIFT +#define BASE PyLong_BASE +#define MASK PyLong_MASK + #if PyLong_SHIFT % 5 != 0 -#error "longobject.c requires that SHIFT be divisible by 5" +#error "longobject.c requires that PyLong_SHIFT be divisible by 5" #endif /* Long integer representation. Modified: python/branches/libffi3-branch/Include/object.h ============================================================================== --- python/branches/libffi3-branch/Include/object.h (original) +++ python/branches/libffi3-branch/Include/object.h Tue Mar 4 15:50:53 2008 @@ -537,6 +537,9 @@ #define Py_TPFLAGS_HAVE_VERSION_TAG (1L<<18) #define Py_TPFLAGS_VALID_VERSION_TAG (1L<<19) +/* Type is abstract and cannot be instantiated */ +#define Py_TPFLAGS_IS_ABSTRACT (1L<<20) + /* These flags are used to determine if a type is a subclass. 
*/ #define Py_TPFLAGS_INT_SUBCLASS (1L<<23) #define Py_TPFLAGS_LONG_SUBCLASS (1L<<24) Modified: python/branches/libffi3-branch/Include/patchlevel.h ============================================================================== --- python/branches/libffi3-branch/Include/patchlevel.h (original) +++ python/branches/libffi3-branch/Include/patchlevel.h Tue Mar 4 15:50:53 2008 @@ -23,10 +23,10 @@ #define PY_MINOR_VERSION 6 #define PY_MICRO_VERSION 0 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_ALPHA -#define PY_RELEASE_SERIAL 0 +#define PY_RELEASE_SERIAL 1 /* Version as a string */ -#define PY_VERSION "2.6a0" +#define PY_VERSION "2.6a1+" /* Subversion Revision number of this file (not of the repository) */ #define PY_PATCHLEVEL_REVISION "$Revision$" Modified: python/branches/libffi3-branch/LICENSE ============================================================================== --- python/branches/libffi3-branch/LICENSE (original) +++ python/branches/libffi3-branch/LICENSE Tue Mar 4 15:50:53 2008 @@ -55,6 +55,7 @@ 2.4.4 2.4.3 2006 PSF yes 2.5 2.4 2006 PSF yes 2.5.1 2.5 2007 PSF yes + 2.6 2.5 2008 PSF yes Footnotes: @@ -90,7 +91,7 @@ prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) -2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software Foundation; +2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. Modified: python/branches/libffi3-branch/Lib/BaseHTTPServer.py ============================================================================== --- python/branches/libffi3-branch/Lib/BaseHTTPServer.py (original) +++ python/branches/libffi3-branch/Lib/BaseHTTPServer.py Tue Mar 4 15:50:53 2008 @@ -76,7 +76,7 @@ import mimetools import SocketServer -# Default error message +# Default error message template DEFAULT_ERROR_MESSAGE = """\ Error response @@ -89,6 +89,8 @@ """ +DEFAULT_ERROR_CONTENT_TYPE = "text/html" + def _quote_html(html): return html.replace("&", "&").replace("<", "<").replace(">", ">") @@ -342,13 +344,14 @@ content = (self.error_message_format % {'code': code, 'message': _quote_html(message), 'explain': explain}) self.send_response(code, message) - self.send_header("Content-Type", "text/html") + self.send_header("Content-Type", self.error_content_type) self.send_header('Connection', 'close') self.end_headers() if self.command != 'HEAD' and code >= 200 and code not in (204, 304): self.wfile.write(content) error_message_format = DEFAULT_ERROR_MESSAGE + error_content_type = DEFAULT_ERROR_CONTENT_TYPE def send_response(self, code, message=None): """Send the response header and log the response code. Modified: python/branches/libffi3-branch/Lib/ConfigParser.py ============================================================================== --- python/branches/libffi3-branch/Lib/ConfigParser.py (original) +++ python/branches/libffi3-branch/Lib/ConfigParser.py Tue Mar 4 15:50:53 2008 @@ -235,8 +235,12 @@ """Create a new section in the configuration. Raise DuplicateSectionError if a section by the specified name - already exists. + already exists. Raise ValueError if name is DEFAULT or any of it's + case-insensitive variants. 
""" + if section.lower() == "default": + raise ValueError, 'Invalid section name: %s' % section + if section in self._sections: raise DuplicateSectionError(section) self._sections[section] = self._dict() Modified: python/branches/libffi3-branch/Lib/SimpleHTTPServer.py ============================================================================== --- python/branches/libffi3-branch/Lib/SimpleHTTPServer.py (original) +++ python/branches/libffi3-branch/Lib/SimpleHTTPServer.py Tue Mar 4 15:50:53 2008 @@ -14,7 +14,6 @@ import posixpath import BaseHTTPServer import urllib -import urlparse import cgi import shutil import mimetypes @@ -113,8 +112,9 @@ list.sort(key=lambda a: a.lower()) f = StringIO() displaypath = cgi.escape(urllib.unquote(self.path)) - f.write("Directory listing for %s\n" % displaypath) - f.write("

Directory listing for %s

\n" % displaypath) + f.write('') + f.write("\nDirectory listing for %s\n" % displaypath) + f.write("\n

Directory listing for %s

\n" % displaypath) f.write("
\n
    \n") for name in list: fullname = os.path.join(path, name) @@ -128,7 +128,7 @@ # Note: a link to a directory displays with @ and links with / f.write('
  • %s\n' % (urllib.quote(linkname), cgi.escape(displayname))) - f.write("
\n
\n") + f.write("\n
\n\n\n") length = f.tell() f.seek(0) self.send_response(200) Modified: python/branches/libffi3-branch/Lib/SocketServer.py ============================================================================== --- python/branches/libffi3-branch/Lib/SocketServer.py (original) +++ python/branches/libffi3-branch/Lib/SocketServer.py Tue Mar 4 15:50:53 2008 @@ -440,18 +440,30 @@ def collect_children(self): """Internal routine to wait for children that have exited.""" - while self.active_children: - if len(self.active_children) < self.max_children: - options = os.WNOHANG - else: - # If the maximum number of children are already - # running, block while waiting for a child to exit - options = 0 + if self.active_children is None: return + while len(self.active_children) >= self.max_children: + # XXX: This will wait for any child process, not just ones + # spawned by this library. This could confuse other + # libraries that expect to be able to wait for their own + # children. try: - pid, status = os.waitpid(0, options) + pid, status = os.waitpid(0, options=0) except os.error: pid = None - if not pid: break + if pid not in self.active_children: continue + self.active_children.remove(pid) + + # XXX: This loop runs more system calls than it ought + # to. There should be a way to put the active_children into a + # process group and then use os.waitpid(-pgid) to wait for any + # of that set, but I couldn't find a way to allocate pgids + # that couldn't collide. + for child in self.active_children: + try: + pid, status = os.waitpid(child, os.WNOHANG) + except os.error: + pid = None + if not pid: continue try: self.active_children.remove(pid) except ValueError, e: Modified: python/branches/libffi3-branch/Lib/UserDict.py ============================================================================== --- python/branches/libffi3-branch/Lib/UserDict.py (original) +++ python/branches/libffi3-branch/Lib/UserDict.py Tue Mar 4 15:50:53 2008 @@ -41,7 +41,7 @@ def iterkeys(self): return self.data.iterkeys() def itervalues(self): return self.data.itervalues() def values(self): return self.data.values() - def has_key(self, key): return self.data.has_key(key) + def has_key(self, key): return key in self.data def update(self, dict=None, **kwargs): if dict is None: pass @@ -59,7 +59,7 @@ return failobj return self[key] def setdefault(self, key, failobj=None): - if not self.has_key(key): + if key not in self: self[key] = failobj return self[key] def pop(self, key, *args): Modified: python/branches/libffi3-branch/Lib/_abcoll.py ============================================================================== --- python/branches/libffi3-branch/Lib/_abcoll.py (original) +++ python/branches/libffi3-branch/Lib/_abcoll.py Tue Mar 4 15:50:53 2008 @@ -107,7 +107,7 @@ __metaclass__ = ABCMeta @abstractmethod - def __contains__(self, x): + def __call__(self, *args, **kwds): return False @classmethod @@ -188,7 +188,8 @@ def __or__(self, other): if not isinstance(other, Iterable): return NotImplemented - return self._from_iterable(itertools.chain(self, other)) + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) def __sub__(self, other): if not isinstance(other, Set): Modified: python/branches/libffi3-branch/Lib/abc.py ============================================================================== --- python/branches/libffi3-branch/Lib/abc.py (original) +++ python/branches/libffi3-branch/Lib/abc.py Tue Mar 4 15:50:53 2008 @@ -51,52 +51,6 @@ __isabstractmethod__ = True -class _Abstract(object): - - """Helper class 
inserted into the bases by ABCMeta (using _fix_bases()). - - You should never need to explicitly subclass this class. - - There should never be a base class between _Abstract and object. - """ - - def __new__(cls, *args, **kwds): - am = cls.__dict__.get("__abstractmethods__") - if am: - raise TypeError("Can't instantiate abstract class %s " - "with abstract methods %s" % - (cls.__name__, ", ".join(sorted(am)))) - if (args or kwds) and cls.__init__ is object.__init__: - raise TypeError("Can't pass arguments to __new__ " - "without overriding __init__") - return super(_Abstract, cls).__new__(cls) - - @classmethod - def __subclasshook__(cls, subclass): - """Abstract classes can override this to customize issubclass(). - - This is invoked early on by __subclasscheck__() below. It - should return True, False or NotImplemented. If it returns - NotImplemented, the normal algorithm is used. Otherwise, it - overrides the normal algorithm (and the outcome is cached). - """ - return NotImplemented - - -def _fix_bases(bases): - """Helper method that inserts _Abstract in the bases if needed.""" - for base in bases: - if issubclass(base, _Abstract): - # _Abstract is already a base (maybe indirectly) - return bases - if object in bases: - # Replace object with _Abstract - return tuple([_Abstract if base is object else base - for base in bases]) - # Append _Abstract to the end - return bases + (_Abstract,) - - class ABCMeta(type): """Metaclass for defining Abstract Base Classes (ABCs). @@ -119,7 +73,6 @@ _abc_invalidation_counter = 0 def __new__(mcls, name, bases, namespace): - bases = _fix_bases(bases) cls = super(ABCMeta, mcls).__new__(mcls, name, bases, namespace) # Compute set of abstract method names abstracts = set(name @@ -130,7 +83,7 @@ value = getattr(cls, name, None) if getattr(value, "__isabstractmethod__", False): abstracts.add(name) - cls.__abstractmethods__ = abstracts + cls.__abstractmethods__ = frozenset(abstracts) # Set up inheritance registry cls._abc_registry = set() cls._abc_cache = set() Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_associate.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_associate.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_associate.py Tue Mar 4 15:50:53 2008 @@ -23,6 +23,11 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -91,7 +96,7 @@ class AssociateErrorTestCase(unittest.TestCase): def setUp(self): self.filename = self.__class__.__name__ + '.db' - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) @@ -106,11 +111,7 @@ def tearDown(self): self.env.close() self.env = None - import glob - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) - + test_support.rmtree(self.homeDir) def test00_associateDBError(self): if verbose: @@ -151,7 +152,7 @@ def setUp(self): self.filename = self.__class__.__name__ + '.db' - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_basics.py 
============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_basics.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_basics.py Tue Mar 4 15:50:53 2008 @@ -4,9 +4,7 @@ """ import os -import sys import errno -import shutil import string import tempfile from pprint import pprint @@ -20,6 +18,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + from test_all import verbose DASH = '-' @@ -54,13 +57,9 @@ def setUp(self): if self.useEnv: - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir - try: - shutil.rmtree(homeDir) - except OSError, e: - # unix returns ENOENT, windows returns ESRCH - if e.errno not in (errno.ENOENT, errno.ESRCH): raise + test_support.rmtree(homeDir) os.mkdir(homeDir) try: self.env = db.DBEnv() @@ -74,7 +73,7 @@ tempfile.tempdir = None # Yes, a bare except is intended, since we're re-raising the exc. except: - shutil.rmtree(homeDir) + test_support.rmtree(homeDir) raise else: self.env = None @@ -98,8 +97,8 @@ def tearDown(self): self.d.close() if self.env is not None: + test_support.rmtree(self.homeDir) self.env.close() - shutil.rmtree(self.homeDir) ## Make a new DBEnv to remove the env files from the home dir. ## (It can't be done while the env is open, nor after it has been ## closed, so we make a new one to do it.) Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_compare.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_compare.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_compare.py Tue Mar 4 15:50:53 2008 @@ -15,6 +15,11 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + lexical_cmp = cmp def lowercase_cmp(left, right): @@ -52,7 +57,7 @@ def setUp (self): self.filename = self.__class__.__name__ + '.db' - homeDir = os.path.join (tempfile.gettempdir(), 'db_home') + homeDir = os.path.join (tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir (homeDir) @@ -70,8 +75,7 @@ if self.env is not None: self.env.close () self.env = None - import glob - map (os.remove, glob.glob (os.path.join (self.homeDir, '*'))) + test_support.rmtree(self.homeDir) def addDataToDB (self, data): i = 0 Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_compat.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_compat.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_compat.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,7 @@ regression test suite. 
""" -import sys, os, string +import os, string import unittest import tempfile Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_cursor_pget_bug.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_cursor_pget_bug.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_cursor_pget_bug.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ import unittest import tempfile -import sys, os, glob +import os, glob try: # For Pythons w/distutils pybsddb @@ -9,6 +9,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -17,7 +22,7 @@ db_name = 'test-cursor_pget.db' def setUp(self): - self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) try: os.mkdir(self.homeDir) except os.error: @@ -42,9 +47,7 @@ del self.secondary_db del self.primary_db del self.env - for file in glob.glob(os.path.join(self.homeDir, '*')): - os.remove(file) - os.removedirs(self.homeDir) + test_support.rmtree(self.homeDir) def test_pget(self): cursor = self.secondary_db.cursor() Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_dbobj.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_dbobj.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_dbobj.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ -import sys, os, string +import os, string import unittest -import glob import tempfile try: @@ -11,6 +10,11 @@ # For Python 2.3 from bsddb import db, dbobj +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -20,7 +24,7 @@ db_name = 'test-dbobj.db' def setUp(self): - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) except os.error: pass @@ -30,9 +34,7 @@ del self.db if hasattr(self, 'env'): del self.env - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) + test_support.rmtree(self.homeDir) def test01_both(self): class TestDBEnv(dbobj.DBEnv): pass Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_dbshelve.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_dbshelve.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_dbshelve.py Tue Mar 4 15:50:53 2008 @@ -2,9 +2,8 @@ TestCases for checking dbShelve objects. 
""" -import sys, os, string +import os, string import tempfile, random -from pprint import pprint from types import * import unittest @@ -15,6 +14,11 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + from test_all import verbose @@ -246,7 +250,7 @@ class BasicEnvShelveTestCase(DBShelveTestCase): def do_open(self): self.homeDir = homeDir = os.path.join( - tempfile.gettempdir(), 'db_home') + tempfile.gettempdir(), 'db_home%d'%os.getpid()) try: os.mkdir(homeDir) except os.error: pass self.env = db.DBEnv() @@ -263,12 +267,8 @@ def tearDown(self): + test_support.rmtree(self.homeDir) self.do_close() - import glob - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) - class EnvBTreeShelveTestCase(BasicEnvShelveTestCase): Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_dbtables.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_dbtables.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_dbtables.py Tue Mar 4 15:50:53 2008 @@ -20,9 +20,8 @@ # # $Id$ -import sys, os, re +import os, re import tempfile -import shutil try: import cPickle pickle = cPickle @@ -40,6 +39,10 @@ # For Python 2.3 from bsddb import db, dbtables +try: + from bsddb3 import test_support +except ImportError: + from test import test_support #---------------------------------------------------------------------- @@ -58,7 +61,7 @@ def tearDown(self): self.tdb.close() - shutil.rmtree(self.testHomeDir) + test_support.rmtree(self.testHomeDir) def test01(self): tabname = "test01" Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_env_close.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_env_close.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_env_close.py Tue Mar 4 15:50:53 2008 @@ -3,9 +3,7 @@ """ import os -import sys import tempfile -import glob import unittest try: @@ -15,6 +13,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + from test_all import verbose # We're going to get warnings in this module about trying to close the db when @@ -33,7 +36,7 @@ class DBEnvClosedEarlyCrash(unittest.TestCase): def setUp(self): - self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) try: os.mkdir(self.homeDir) except os.error: pass tempfile.tempdir = self.homeDir @@ -41,10 +44,7 @@ tempfile.tempdir = None def tearDown(self): - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) - + test_support.rmtree(self.homeDir) def test01_close_dbenv_before_db(self): dbenv = db.DBEnv() Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_get_none.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_get_none.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_get_none.py Tue Mar 4 15:50:53 2008 @@ -2,9 +2,8 @@ TestCases for checking set_get_returns_none. 
""" -import sys, os, string +import os, string import tempfile -from pprint import pprint import unittest try: Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_join.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_join.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_join.py Tue Mar 4 15:50:53 2008 @@ -1,10 +1,8 @@ """TestCases for using the DB.join and DBCursor.join_item methods. """ -import sys, os, string +import os import tempfile -import time -from pprint import pprint try: from threading import Thread, currentThread @@ -22,6 +20,10 @@ # For Python 2.3 from bsddb import db, dbshelve +try: + from bsddb3 import test_support +except ImportError: + from test import test_support #---------------------------------------------------------------------- @@ -49,7 +51,7 @@ def setUp(self): self.filename = self.__class__.__name__ + '.db' - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) except os.error: pass @@ -58,10 +60,7 @@ def tearDown(self): self.env.close() - import glob - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) + test_support.rmtree(self.homeDir) def test01_join(self): if verbose: Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_lock.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_lock.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_lock.py Tue Mar 4 15:50:53 2008 @@ -2,10 +2,6 @@ TestCases for testing the locking sub-system. """ -import os -from pprint import pprint -import shutil -import sys import tempfile import time @@ -26,6 +22,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -40,7 +41,7 @@ def tearDown(self): self.env.close() - shutil.rmtree(self.homeDir) + test_support.rmtree(self.homeDir) def test01_simple(self): Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_misc.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_misc.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_misc.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ """ import os -import sys import unittest import tempfile @@ -13,12 +12,17 @@ # For Python 2.3 from bsddb import db, dbshelve, hashopen +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- class MiscTestCase(unittest.TestCase): def setUp(self): self.filename = self.__class__.__name__ + '.db' - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) @@ -26,12 +30,8 @@ pass def tearDown(self): - try: - os.remove(self.filename) - except OSError: - pass - import shutil - shutil.rmtree(self.homeDir) + test_support.unlink(self.filename) + test_support.rmtree(self.homeDir) def test01_badpointer(self): dbs = dbshelve.open(self.filename) Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_pickle.py 
============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_pickle.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_pickle.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,5 @@ -import sys, os, string +import os import pickle try: import cPickle @@ -7,7 +7,6 @@ cPickle = None import unittest import tempfile -import glob try: # For Pythons w/distutils pybsddb @@ -16,6 +15,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -25,7 +29,7 @@ db_name = 'test-dbobj.db' def setUp(self): - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) except os.error: pass @@ -35,9 +39,7 @@ del self.db if hasattr(self, 'env'): del self.env - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) + test_support.rmtree(self.homeDir) def _base_test_pickle_DBError(self, pickle): self.env = db.DBEnv() Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_queue.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_queue.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_queue.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,7 @@ TestCases for exercising a Queue DB. """ -import sys, os, string +import os, string import tempfile from pprint import pprint import unittest Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_recno.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_recno.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_recno.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ """ import os -import sys import errno import tempfile from pprint import pprint @@ -17,6 +16,11 @@ # For Python 2.3 from bsddb import db +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' @@ -25,12 +29,12 @@ class SimpleRecnoTestCase(unittest.TestCase): def setUp(self): self.filename = tempfile.mktemp() + self.homeDir = None def tearDown(self): - try: - os.remove(self.filename) - except OSError, e: - if e.errno <> errno.EEXIST: raise + test_support.unlink(self.filename) + if self.homeDir: + test_support.rmtree(self.homeDir) def test01_basic(self): d = db.DB() @@ -203,7 +207,8 @@ just a line in the file, but you can set a different record delimiter if needed. 
""" - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) + self.homeDir = homeDir source = os.path.join(homeDir, 'test_recno.txt') if not os.path.isdir(homeDir): os.mkdir(homeDir) Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_sequence.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_sequence.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_sequence.py Tue Mar 4 15:50:53 2008 @@ -1,8 +1,6 @@ import unittest import os -import sys import tempfile -import glob try: # For Pythons w/distutils pybsddb @@ -10,13 +8,16 @@ except ImportError: from bsddb import db -from test_all import verbose +try: + from bsddb3 import test_support +except ImportError: + from test import test_support class DBSequenceTest(unittest.TestCase): def setUp(self): self.int_32_max = 0x100000000 - self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) try: os.mkdir(self.homeDir) except os.error: @@ -41,9 +42,7 @@ self.dbenv.close() del self.dbenv - files = glob.glob(os.path.join(self.homeDir, '*')) - for file in files: - os.remove(file) + test_support.rmtree(self.homeDir) def test_get(self): self.seq = db.DBSequence(self.d, flags=0) Modified: python/branches/libffi3-branch/Lib/bsddb/test/test_thread.py ============================================================================== --- python/branches/libffi3-branch/Lib/bsddb/test/test_thread.py (original) +++ python/branches/libffi3-branch/Lib/bsddb/test/test_thread.py Tue Mar 4 15:50:53 2008 @@ -5,9 +5,7 @@ import sys import time import errno -import shutil import tempfile -from pprint import pprint from random import random try: @@ -40,6 +38,11 @@ # For Python 2.3 from bsddb import db, dbutils +try: + from bsddb3 import test_support +except ImportError: + from test import test_support + #---------------------------------------------------------------------- @@ -53,7 +56,7 @@ if verbose: dbutils._deadlock_VerboseFile = sys.stdout - homeDir = os.path.join(tempfile.gettempdir(), 'db_home') + homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid()) self.homeDir = homeDir try: os.mkdir(homeDir) @@ -70,12 +73,9 @@ self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE) def tearDown(self): + test_support.rmtree(self.homeDir) self.d.close() self.env.close() - try: - shutil.rmtree(self.homeDir) - except OSError, e: - if e.errno != errno.EEXIST: raise def setEnvOpts(self): pass Modified: python/branches/libffi3-branch/Lib/compiler/ast.py ============================================================================== --- python/branches/libffi3-branch/Lib/compiler/ast.py (original) +++ python/branches/libffi3-branch/Lib/compiler/ast.py Tue Mar 4 15:50:53 2008 @@ -308,11 +308,12 @@ return "CallFunc(%s, %s, %s, %s)" % (repr(self.node), repr(self.args), repr(self.star_args), repr(self.dstar_args)) class Class(Node): - def __init__(self, name, bases, doc, code, lineno=None): + def __init__(self, name, bases, doc, code, decorators = None, lineno=None): self.name = name self.bases = bases self.doc = doc self.code = code + self.decorators = decorators self.lineno = lineno def getChildren(self): @@ -321,16 +322,19 @@ children.extend(flatten(self.bases)) children.append(self.doc) children.append(self.code) + children.append(self.decorators) return tuple(children) def 
getChildNodes(self): nodelist = [] nodelist.extend(flatten_nodes(self.bases)) nodelist.append(self.code) + if self.decorators is not None: + nodelist.append(self.decorators) return tuple(nodelist) def __repr__(self): - return "Class(%s, %s, %s, %s)" % (repr(self.name), repr(self.bases), repr(self.doc), repr(self.code)) + return "Class(%s, %s, %s, %s, %s)" % (repr(self.name), repr(self.bases), repr(self.doc), repr(self.code), repr(self.decorators)) class Compare(Node): def __init__(self, expr, ops, lineno=None): Modified: python/branches/libffi3-branch/Lib/compiler/transformer.py ============================================================================== --- python/branches/libffi3-branch/Lib/compiler/transformer.py (original) +++ python/branches/libffi3-branch/Lib/compiler/transformer.py Tue Mar 4 15:50:53 2008 @@ -29,7 +29,6 @@ import parser import symbol import token -import sys class WalkerError(StandardError): pass @@ -233,6 +232,18 @@ items.append(self.decorator(dec_nodelist[1:])) return Decorators(items) + def decorated(self, nodelist): + assert nodelist[0][0] == symbol.decorators + if nodelist[1][0] == symbol.funcdef: + n = [nodelist[0]] + list(nodelist[1][1:]) + return self.funcdef(n) + elif nodelist[1][0] == symbol.classdef: + decorators = self.decorators(nodelist[0][1:]) + cls = self.classdef(nodelist[1][1:]) + cls.decorators = decorators + return cls + raise WalkerError() + def funcdef(self, nodelist): # -6 -5 -4 -3 -2 -1 # funcdef: [decorators] 'def' NAME parameters ':' suite Modified: python/branches/libffi3-branch/Lib/ctypes/test/__init__.py ============================================================================== --- python/branches/libffi3-branch/Lib/ctypes/test/__init__.py (original) +++ python/branches/libffi3-branch/Lib/ctypes/test/__init__.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,4 @@ -import glob, os, sys, unittest, getopt, time +import os, sys, unittest, getopt, time use_resources = [] Modified: python/branches/libffi3-branch/Lib/ctypes/test/test_checkretval.py ============================================================================== --- python/branches/libffi3-branch/Lib/ctypes/test/test_checkretval.py (original) +++ python/branches/libffi3-branch/Lib/ctypes/test/test_checkretval.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import unittest -import sys from ctypes import * Modified: python/branches/libffi3-branch/Lib/ctypes/test/test_find.py ============================================================================== --- python/branches/libffi3-branch/Lib/ctypes/test/test_find.py (original) +++ python/branches/libffi3-branch/Lib/ctypes/test/test_find.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,5 @@ import unittest -import os, sys +import sys from ctypes import * from ctypes.util import find_library from ctypes.test import is_resource_enabled Modified: python/branches/libffi3-branch/Lib/ctypes/test/test_libc.py ============================================================================== --- python/branches/libffi3-branch/Lib/ctypes/test/test_libc.py (original) +++ python/branches/libffi3-branch/Lib/ctypes/test/test_libc.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,3 @@ -import sys, os import unittest from ctypes import * Modified: python/branches/libffi3-branch/Lib/ctypes/test/test_loading.py ============================================================================== --- python/branches/libffi3-branch/Lib/ctypes/test/test_loading.py (original) +++ python/branches/libffi3-branch/Lib/ctypes/test/test_loading.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ from ctypes import 
* import sys, unittest -import os, StringIO +import os from ctypes.util import find_library from ctypes.test import is_resource_enabled Modified: python/branches/libffi3-branch/Lib/ctypes/test/test_numbers.py ============================================================================== --- python/branches/libffi3-branch/Lib/ctypes/test/test_numbers.py (original) +++ python/branches/libffi3-branch/Lib/ctypes/test/test_numbers.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ from ctypes import * import unittest -import sys, struct +import struct def valid_ranges(*types): # given a sequence of numeric types, collect their _type_ Modified: python/branches/libffi3-branch/Lib/curses/__init__.py ============================================================================== --- python/branches/libffi3-branch/Lib/curses/__init__.py (original) +++ python/branches/libffi3-branch/Lib/curses/__init__.py Tue Mar 4 15:50:53 2008 @@ -14,6 +14,8 @@ from _curses import * from curses.wrapper import wrapper +import os as _os +import sys as _sys # Some constants, most notably the ACS_* ones, are only added to the C # _curses module's dictionary after initscr() is called. (Some @@ -25,6 +27,10 @@ def initscr(): import _curses, curses + # we call setupterm() here because it raises an error + # instead of calling exit() in error cases. + setupterm(term=_os.environ.get("TERM", "unknown"), + fd=_sys.__stdout__.fileno()) stdscr = _curses.initscr() for key, value in _curses.__dict__.items(): if key[0:4] == 'ACS_' or key in ('LINES', 'COLS'): Modified: python/branches/libffi3-branch/Lib/curses/wrapper.py ============================================================================== --- python/branches/libffi3-branch/Lib/curses/wrapper.py (original) +++ python/branches/libffi3-branch/Lib/curses/wrapper.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,7 @@ """ -import sys, curses +import curses def wrapper(func, *args, **kwds): """Wrapper function that initializes curses and calls another function, Modified: python/branches/libffi3-branch/Lib/decimal.py ============================================================================== --- python/branches/libffi3-branch/Lib/decimal.py (original) +++ python/branches/libffi3-branch/Lib/decimal.py Tue Mar 4 15:50:53 2008 @@ -2380,6 +2380,29 @@ coeff = str(int(coeff)+1) return _dec_from_triple(self._sign, coeff, exp) + def _round(self, places, rounding): + """Round a nonzero, nonspecial Decimal to a fixed number of + significant figures, using the given rounding mode. + + Infinities, NaNs and zeros are returned unaltered. + + This operation is quiet: it raises no flags, and uses no + information from the context. + + """ + if places <= 0: + raise ValueError("argument should be at least 1 in _round") + if self._is_special or not self: + return Decimal(self) + ans = self._rescale(self.adjusted()+1-places, rounding) + # it can happen that the rescale alters the adjusted exponent; + # for example when rounding 99.97 to 3 significant figures. + # When this happens we end up with an extra 0 at the end of + # the number; a second rescale fixes this. + if ans.adjusted() != self.adjusted(): + ans = ans._rescale(ans.adjusted()+1-places, rounding) + return ans + def to_integral_exact(self, rounding=None, context=None): """Rounds to a nearby integer. @@ -3431,6 +3454,95 @@ return self # My components are also immutable return self.__class__(str(self)) + # PEP 3101 support. 
See also _parse_format_specifier and _format_align + def __format__(self, specifier, context=None): + """Format a Decimal instance according to the given specifier. + + The specifier should be a standard format specifier, with the + form described in PEP 3101. Formatting types 'e', 'E', 'f', + 'F', 'g', 'G', and '%' are supported. If the formatting type + is omitted it defaults to 'g' or 'G', depending on the value + of context.capitals. + + At this time the 'n' format specifier type (which is supposed + to use the current locale) is not supported. + """ + + # Note: PEP 3101 says that if the type is not present then + # there should be at least one digit after the decimal point. + # We take the liberty of ignoring this requirement for + # Decimal---it's presumably there to make sure that + # format(float, '') behaves similarly to str(float). + if context is None: + context = getcontext() + + spec = _parse_format_specifier(specifier) + + # special values don't care about the type or precision... + if self._is_special: + return _format_align(str(self), spec) + + # a type of None defaults to 'g' or 'G', depending on context + # if type is '%', adjust exponent of self accordingly + if spec['type'] is None: + spec['type'] = ['g', 'G'][context.capitals] + elif spec['type'] == '%': + self = _dec_from_triple(self._sign, self._int, self._exp+2) + + # round if necessary, taking rounding mode from the context + rounding = context.rounding + precision = spec['precision'] + if precision is not None: + if spec['type'] in 'eE': + self = self._round(precision+1, rounding) + elif spec['type'] in 'gG': + if len(self._int) > precision: + self = self._round(precision, rounding) + elif spec['type'] in 'fF%': + self = self._rescale(-precision, rounding) + # special case: zeros with a positive exponent can't be + # represented in fixed point; rescale them to 0e0. + elif not self and self._exp > 0 and spec['type'] in 'fF%': + self = self._rescale(0, rounding) + + # figure out placement of the decimal point + leftdigits = self._exp + len(self._int) + if spec['type'] in 'fF%': + dotplace = leftdigits + elif spec['type'] in 'eE': + if not self and precision is not None: + dotplace = 1 - precision + else: + dotplace = 1 + elif spec['type'] in 'gG': + if self._exp <= 0 and leftdigits > -6: + dotplace = leftdigits + else: + dotplace = 1 + + # figure out main part of numeric string... + if dotplace <= 0: + num = '0.' + '0'*(-dotplace) + self._int + elif dotplace >= len(self._int): + # make sure we're not padding a '0' with extra zeros on the right + assert dotplace==len(self._int) or self._int != '0' + num = self._int + '0'*(dotplace-len(self._int)) + else: + num = self._int[:dotplace] + '.' + self._int[dotplace:] + + # ...then the trailing exponent, or trailing '%' + if leftdigits != dotplace or spec['type'] in 'eE': + echar = {'E': 'E', 'e': 'e', 'G': 'E', 'g': 'e'}[spec['type']] + num = num + "{0}{1:+}".format(echar, leftdigits-dotplace) + elif spec['type'] == '%': + num = num + '%' + + # add sign + if self._sign == 1: + num = '-' + num + return _format_align(num, spec) + + def _dec_from_triple(sign, coefficient, exponent, special=False): """Create a decimal instance directly, without any validation, normalization (e.g. removal of leading zeros) or argument @@ -5212,8 +5324,7 @@ ##### crud for parsing strings ############################################# -import re - +# # Regular expression used for parsing numeric strings. 
Additional # comments: # @@ -5251,8 +5362,136 @@ _all_zeros = re.compile('0*$').match _exact_half = re.compile('50*$').match + +##### PEP3101 support functions ############################################## +# The functions parse_format_specifier and format_align have little to do +# with the Decimal class, and could potentially be reused for other pure +# Python numeric classes that want to implement __format__ +# +# A format specifier for Decimal looks like: +# +# [[fill]align][sign][0][minimumwidth][.precision][type] +# + +_parse_format_specifier_regex = re.compile(r"""\A +(?: + (?P.)? + (?P[<>=^]) +)? +(?P[-+ ])? +(?P0)? +(?P(?!0)\d+)? +(?:\.(?P0|(?!0)\d+))? +(?P[eEfFgG%])? +\Z +""", re.VERBOSE) + del re +def _parse_format_specifier(format_spec): + """Parse and validate a format specifier. + + Turns a standard numeric format specifier into a dict, with the + following entries: + + fill: fill character to pad field to minimum width + align: alignment type, either '<', '>', '=' or '^' + sign: either '+', '-' or ' ' + minimumwidth: nonnegative integer giving minimum width + precision: nonnegative integer giving precision, or None + type: one of the characters 'eEfFgG%', or None + unicode: either True or False (always True for Python 3.x) + + """ + m = _parse_format_specifier_regex.match(format_spec) + if m is None: + raise ValueError("Invalid format specifier: " + format_spec) + + # get the dictionary + format_dict = m.groupdict() + + # defaults for fill and alignment + fill = format_dict['fill'] + align = format_dict['align'] + if format_dict.pop('zeropad') is not None: + # in the face of conflict, refuse the temptation to guess + if fill is not None and fill != '0': + raise ValueError("Fill character conflicts with '0'" + " in format specifier: " + format_spec) + if align is not None and align != '=': + raise ValueError("Alignment conflicts with '0' in " + "format specifier: " + format_spec) + fill = '0' + align = '=' + format_dict['fill'] = fill or ' ' + format_dict['align'] = align or '<' + + if format_dict['sign'] is None: + format_dict['sign'] = '-' + + # turn minimumwidth and precision entries into integers. + # minimumwidth defaults to 0; precision remains None if not given + format_dict['minimumwidth'] = int(format_dict['minimumwidth'] or '0') + if format_dict['precision'] is not None: + format_dict['precision'] = int(format_dict['precision']) + + # if format type is 'g' or 'G' then a precision of 0 makes little + # sense; convert it to 1. Same if format type is unspecified. + if format_dict['precision'] == 0: + if format_dict['type'] in 'gG' or format_dict['type'] is None: + format_dict['precision'] = 1 + + # record whether return type should be str or unicode + format_dict['unicode'] = isinstance(format_spec, unicode) + + return format_dict + +def _format_align(body, spec_dict): + """Given an unpadded, non-aligned numeric string, add padding and + aligment to conform with the given format specifier dictionary (as + output from parse_format_specifier). + + It's assumed that if body is negative then it starts with '-'. + Any leading sign ('-' or '+') is stripped from the body before + applying the alignment and padding rules, and replaced in the + appropriate position. + + """ + # figure out the sign; we only examine the first character, so if + # body has leading whitespace the results may be surprising. 
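A minimal usage sketch of the Decimal.__format__ support added above, assuming the default decimal context; the example values are illustrative and are not part of the archived diff:

    # Illustrative only: exercising the new __format__ via the format() builtin.
    from decimal import Decimal

    print format(Decimal("123.456"), ".2f")   # 123.46 (fixed point, two places)
    print format(Decimal("0.0001"), ".3e")    # 1.000e-4 (scientific, unpadded exponent)
    print format(Decimal("-12.5"), ">12")     # right-aligned in a field of width 12
    print format(Decimal("0.25"), ".1%")      # 25.0% (percent type rescales by 100)
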
+ if len(body) > 0 and body[0] in '-+': + sign = body[0] + body = body[1:] + else: + sign = '' + + if sign != '-': + if spec_dict['sign'] in ' +': + sign = spec_dict['sign'] + else: + sign = '' + + # how much extra space do we have to play with? + minimumwidth = spec_dict['minimumwidth'] + fill = spec_dict['fill'] + padding = fill*(max(minimumwidth - (len(sign+body)), 0)) + + align = spec_dict['align'] + if align == '<': + result = padding + sign + body + elif align == '>': + result = sign + body + padding + elif align == '=': + result = sign + padding + body + else: #align == '^' + half = len(padding)//2 + result = padding[:half] + sign + body + padding[half:] + + # make sure that result is unicode if necessary + if spec_dict['unicode']: + result = unicode(result) + + return result ##### Useful Constants (internal use only) ################################ Modified: python/branches/libffi3-branch/Lib/distutils/bcppcompiler.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/bcppcompiler.py (original) +++ python/branches/libffi3-branch/Lib/distutils/bcppcompiler.py Tue Mar 4 15:50:53 2008 @@ -16,7 +16,7 @@ __revision__ = "$Id$" -import sys, os +import os from distutils.errors import \ DistutilsExecError, DistutilsPlatformError, \ CompileError, LibError, LinkError, UnknownFileError Modified: python/branches/libffi3-branch/Lib/distutils/command/bdist.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/bdist.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/bdist.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,7 @@ __revision__ = "$Id$" -import os, string +import os from types import * from distutils.core import Command from distutils.errors import * Modified: python/branches/libffi3-branch/Lib/distutils/command/bdist_dumb.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/bdist_dumb.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/bdist_dumb.py Tue Mar 4 15:50:53 2008 @@ -11,7 +11,7 @@ import os from distutils.core import Command from distutils.util import get_platform -from distutils.dir_util import create_tree, remove_tree, ensure_relative +from distutils.dir_util import remove_tree, ensure_relative from distutils.errors import * from distutils.sysconfig import get_python_version from distutils import log Modified: python/branches/libffi3-branch/Lib/distutils/command/bdist_msi.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/bdist_msi.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/bdist_msi.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,7 @@ Implements the bdist_msi command. 
""" -import sys, os, string +import sys, os from distutils.core import Command from distutils.util import get_platform from distutils.dir_util import remove_tree Modified: python/branches/libffi3-branch/Lib/distutils/command/bdist_rpm.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/bdist_rpm.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/bdist_rpm.py Tue Mar 4 15:50:53 2008 @@ -8,7 +8,6 @@ __revision__ = "$Id$" import sys, os, string -import glob from types import * from distutils.core import Command from distutils.debug import DEBUG Modified: python/branches/libffi3-branch/Lib/distutils/command/build_py.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/build_py.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/build_py.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ __revision__ = "$Id$" -import sys, string, os +import string, os from types import * from glob import glob Modified: python/branches/libffi3-branch/Lib/distutils/command/build_scripts.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/build_scripts.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/build_scripts.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ __revision__ = "$Id$" -import sys, os, re +import os, re from stat import ST_MODE from distutils import sysconfig from distutils.core import Command Modified: python/branches/libffi3-branch/Lib/distutils/command/install.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/install.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/install.py Tue Mar 4 15:50:53 2008 @@ -17,7 +17,6 @@ from distutils.file_util import write_file from distutils.util import convert_path, subst_vars, change_root from distutils.errors import DistutilsOptionError -from glob import glob if sys.version < "2.2": WINDOWS_SCHEME = { Modified: python/branches/libffi3-branch/Lib/distutils/command/install_headers.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/install_headers.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/install_headers.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,6 @@ __revision__ = "$Id$" -import os from distutils.core import Command Modified: python/branches/libffi3-branch/Lib/distutils/command/install_lib.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/install_lib.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/install_lib.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,7 @@ __revision__ = "$Id$" -import sys, os, string +import os from types import IntType from distutils.core import Command from distutils.errors import DistutilsOptionError Modified: python/branches/libffi3-branch/Lib/distutils/command/register.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/register.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/register.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,7 @@ __revision__ = "$Id$" -import sys, os, string, urllib2, getpass, urlparse +import 
os, string, urllib2, getpass, urlparse import StringIO, ConfigParser from distutils.core import Command Modified: python/branches/libffi3-branch/Lib/distutils/command/sdist.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/command/sdist.py (original) +++ python/branches/libffi3-branch/Lib/distutils/command/sdist.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ __revision__ = "$Id$" -import sys, os, string +import os, string from types import * from glob import glob from distutils.core import Command Modified: python/branches/libffi3-branch/Lib/distutils/filelist.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/filelist.py (original) +++ python/branches/libffi3-branch/Lib/distutils/filelist.py Tue Mar 4 15:50:53 2008 @@ -11,7 +11,6 @@ import os, string, re import fnmatch from types import * -from glob import glob from distutils.util import convert_path from distutils.errors import DistutilsTemplateError, DistutilsInternalError from distutils import log Modified: python/branches/libffi3-branch/Lib/distutils/tests/test_dist.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/tests/test_dist.py (original) +++ python/branches/libffi3-branch/Lib/distutils/tests/test_dist.py Tue Mar 4 15:50:53 2008 @@ -3,10 +3,8 @@ import distutils.cmd import distutils.dist import os -import shutil import StringIO import sys -import tempfile import unittest from test.test_support import TESTFN Modified: python/branches/libffi3-branch/Lib/distutils/tests/test_sysconfig.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/tests/test_sysconfig.py (original) +++ python/branches/libffi3-branch/Lib/distutils/tests/test_sysconfig.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ from distutils import sysconfig import os -import sys import unittest from test.test_support import TESTFN Modified: python/branches/libffi3-branch/Lib/distutils/unixccompiler.py ============================================================================== --- python/branches/libffi3-branch/Lib/distutils/unixccompiler.py (original) +++ python/branches/libffi3-branch/Lib/distutils/unixccompiler.py Tue Mar 4 15:50:53 2008 @@ -17,7 +17,6 @@ import os, sys from types import StringType, NoneType -from copy import copy from distutils import sysconfig from distutils.dep_util import newer Modified: python/branches/libffi3-branch/Lib/email/base64mime.py ============================================================================== --- python/branches/libffi3-branch/Lib/email/base64mime.py (original) +++ python/branches/libffi3-branch/Lib/email/base64mime.py Tue Mar 4 15:50:53 2008 @@ -35,7 +35,6 @@ 'header_encode', ] -import re from binascii import b2a_base64, a2b_base64 from email.utils import fix_eols Modified: python/branches/libffi3-branch/Lib/email/utils.py ============================================================================== --- python/branches/libffi3-branch/Lib/email/utils.py (original) +++ python/branches/libffi3-branch/Lib/email/utils.py Tue Mar 4 15:50:53 2008 @@ -27,7 +27,6 @@ import socket import urllib import warnings -from cStringIO import StringIO from email._parseaddr import quote from email._parseaddr import AddressList as _AddressList Modified: python/branches/libffi3-branch/Lib/hotshot/log.py 
============================================================================== --- python/branches/libffi3-branch/Lib/hotshot/log.py (original) +++ python/branches/libffi3-branch/Lib/hotshot/log.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ import os.path import parser import symbol -import sys from _hotshot import \ WHAT_ENTER, \ Modified: python/branches/libffi3-branch/Lib/hotshot/stones.py ============================================================================== --- python/branches/libffi3-branch/Lib/hotshot/stones.py (original) +++ python/branches/libffi3-branch/Lib/hotshot/stones.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ import errno import hotshot import hotshot.stats -import os import sys import test.pystone Modified: python/branches/libffi3-branch/Lib/httplib.py ============================================================================== --- python/branches/libffi3-branch/Lib/httplib.py (original) +++ python/branches/libffi3-branch/Lib/httplib.py Tue Mar 4 15:50:53 2008 @@ -66,7 +66,6 @@ Req-sent-unread-response _CS_REQ_SENT """ -import errno import mimetools import socket from urlparse import urlsplit @@ -439,6 +438,9 @@ self.length = int(length) except ValueError: self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None else: self.length = None @@ -547,7 +549,13 @@ i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions - chunk_left = int(line, 16) + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(value) if chunk_left == 0: break if amt is None: Modified: python/branches/libffi3-branch/Lib/idlelib/MultiCall.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/MultiCall.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/MultiCall.py Tue Mar 4 15:50:53 2008 @@ -30,7 +30,6 @@ """ import sys -import os import string import re import Tkinter Modified: python/branches/libffi3-branch/Lib/idlelib/NEWS.txt ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/NEWS.txt (original) +++ python/branches/libffi3-branch/Lib/idlelib/NEWS.txt Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ What's New in IDLE 2.6a1? 
========================= -*Release date: XX-XXX-2008* +*Release date: 29-Feb-2008* - Configured selection highlighting colors were ignored; updating highlighting in the config dialog would cause non-Python files to be colored as if they Modified: python/branches/libffi3-branch/Lib/idlelib/RemoteDebugger.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/RemoteDebugger.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/RemoteDebugger.py Tue Mar 4 15:50:53 2008 @@ -20,7 +20,6 @@ """ -import sys import types import rpc import Debugger Modified: python/branches/libffi3-branch/Lib/idlelib/TreeWidget.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/TreeWidget.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/TreeWidget.py Tue Mar 4 15:50:53 2008 @@ -15,7 +15,6 @@ # - optimize tree redraw after expand of subnode import os -import sys from Tkinter import * import imp Modified: python/branches/libffi3-branch/Lib/idlelib/UndoDelegator.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/UndoDelegator.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/UndoDelegator.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,3 @@ -import sys import string from Tkinter import * from Delegator import Delegator Modified: python/branches/libffi3-branch/Lib/idlelib/configDialog.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/configDialog.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/configDialog.py Tue Mar 4 15:50:53 2008 @@ -11,7 +11,7 @@ """ from Tkinter import * import tkMessageBox, tkColorChooser, tkFont -import string, copy +import string from configHandler import idleConf from dynOptionMenuWidget import DynOptionMenu Modified: python/branches/libffi3-branch/Lib/idlelib/idlever.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/idlever.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/idlever.py Tue Mar 4 15:50:53 2008 @@ -1 +1 @@ -IDLE_VERSION = "2.6a0" +IDLE_VERSION = "2.6a1" Modified: python/branches/libffi3-branch/Lib/idlelib/keybindingDialog.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/keybindingDialog.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/keybindingDialog.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,7 @@ """ from Tkinter import * import tkMessageBox -import string, os +import string class GetKeysDialog(Toplevel): def __init__(self,parent,title,action,currentKeySequences): Modified: python/branches/libffi3-branch/Lib/idlelib/run.py ============================================================================== --- python/branches/libffi3-branch/Lib/idlelib/run.py (original) +++ python/branches/libffi3-branch/Lib/idlelib/run.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import sys -import os import linecache import time import socket Modified: python/branches/libffi3-branch/Lib/imaplib.py ============================================================================== --- python/branches/libffi3-branch/Lib/imaplib.py (original) +++ python/branches/libffi3-branch/Lib/imaplib.py Tue Mar 4 15:50:53 2008 @@ -1156,7 +1156,7 @@ chunks = [] read = 0 while read < size: - data = 
self.sslobj.read(size-read) + data = self.sslobj.read(min(size-read, 16384)) read += len(data) chunks.append(data) Modified: python/branches/libffi3-branch/Lib/inspect.py ============================================================================== --- python/branches/libffi3-branch/Lib/inspect.py (original) +++ python/branches/libffi3-branch/Lib/inspect.py Tue Mar 4 15:50:53 2008 @@ -38,11 +38,15 @@ import imp import tokenize import linecache +from abc import ABCMeta from operator import attrgetter from collections import namedtuple from compiler.consts import (CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_GENERATOR) +# See Include/object.h +TPFLAGS_IS_ABSTRACT = 1 << 20 + # ----------------------------------------------------------- type-checking def ismodule(object): """Return true if the object is a module. @@ -241,6 +245,10 @@ """Return true if the object is a generator object.""" return isinstance(object, types.GeneratorType) +def isabstract(object): + """Return true if the object is an abstract base class (ABC).""" + return object.__flags__ & TPFLAGS_IS_ABSTRACT + def getmembers(object, predicate=None): """Return all members of an object as (name, value) pairs sorted by name. Optionally, only return members that satisfy a given predicate.""" Modified: python/branches/libffi3-branch/Lib/lib-tk/tkSimpleDialog.py ============================================================================== --- python/branches/libffi3-branch/Lib/lib-tk/tkSimpleDialog.py (original) +++ python/branches/libffi3-branch/Lib/lib-tk/tkSimpleDialog.py Tue Mar 4 15:50:53 2008 @@ -26,7 +26,6 @@ ''' from Tkinter import * -import os class Dialog(Toplevel): Modified: python/branches/libffi3-branch/Lib/logging/handlers.py ============================================================================== --- python/branches/libffi3-branch/Lib/logging/handlers.py (original) +++ python/branches/libffi3-branch/Lib/logging/handlers.py Tue Mar 4 15:50:53 2008 @@ -27,7 +27,7 @@ To use, simply 'import logging' and log away! """ -import sys, logging, socket, types, os, string, cPickle, struct, time, glob +import logging, socket, types, os, string, cPickle, struct, time, glob from stat import ST_DEV, ST_INO try: Modified: python/branches/libffi3-branch/Lib/ntpath.py ============================================================================== --- python/branches/libffi3-branch/Lib/ntpath.py (original) +++ python/branches/libffi3-branch/Lib/ntpath.py Tue Mar 4 15:50:53 2008 @@ -6,8 +6,8 @@ """ import os -import stat import sys +import stat import genericpath from genericpath import * Modified: python/branches/libffi3-branch/Lib/plat-mac/MiniAEFrame.py ============================================================================== --- python/branches/libffi3-branch/Lib/plat-mac/MiniAEFrame.py (original) +++ python/branches/libffi3-branch/Lib/plat-mac/MiniAEFrame.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,6 @@ only suitable for the simplest of AppleEvent servers. 
""" -import sys import traceback import MacOS from Carbon import AE Modified: python/branches/libffi3-branch/Lib/plat-mac/aepack.py ============================================================================== --- python/branches/libffi3-branch/Lib/plat-mac/aepack.py (original) +++ python/branches/libffi3-branch/Lib/plat-mac/aepack.py Tue Mar 4 15:50:53 2008 @@ -13,18 +13,14 @@ # import struct -import string import types -from string import strip from types import * from Carbon import AE from Carbon.AppleEvents import * import MacOS import Carbon.File -import StringIO import aetypes from aetypes import mkenum, ObjectSpecifier -import os # These ones seem to be missing from AppleEvents # (they're in AERegistry.h) Modified: python/branches/libffi3-branch/Lib/plat-mac/bgenlocations.py ============================================================================== --- python/branches/libffi3-branch/Lib/plat-mac/bgenlocations.py (original) +++ python/branches/libffi3-branch/Lib/plat-mac/bgenlocations.py Tue Mar 4 15:50:53 2008 @@ -5,7 +5,7 @@ # but mac-style for MacPython, whether running on OS9 or OSX. # -import sys, os +import os Error = "bgenlocations.Error" # Modified: python/branches/libffi3-branch/Lib/plat-mac/macostools.py ============================================================================== --- python/branches/libffi3-branch/Lib/plat-mac/macostools.py (original) +++ python/branches/libffi3-branch/Lib/plat-mac/macostools.py Tue Mar 4 15:50:53 2008 @@ -7,9 +7,7 @@ from Carbon import Res from Carbon import File, Files import os -import sys import MacOS -import time try: openrf = MacOS.openrf except AttributeError: Modified: python/branches/libffi3-branch/Lib/plat-riscos/rourl2path.py ============================================================================== --- python/branches/libffi3-branch/Lib/plat-riscos/rourl2path.py (original) +++ python/branches/libffi3-branch/Lib/plat-riscos/rourl2path.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,6 @@ import string import urllib -import os __all__ = ["url2pathname","pathname2url"] Modified: python/branches/libffi3-branch/Lib/popen2.py ============================================================================== --- python/branches/libffi3-branch/Lib/popen2.py (original) +++ python/branches/libffi3-branch/Lib/popen2.py Tue Mar 4 15:50:53 2008 @@ -82,11 +82,7 @@ def _run_child(self, cmd): if isinstance(cmd, basestring): cmd = ['/bin/sh', '-c', cmd] - for i in xrange(3, MAXFD): - try: - os.close(i) - except OSError: - pass + os.closerange(3, MAXFD) try: os.execvp(cmd[0], cmd) finally: Modified: python/branches/libffi3-branch/Lib/runpy.py ============================================================================== --- python/branches/libffi3-branch/Lib/runpy.py (original) +++ python/branches/libffi3-branch/Lib/runpy.py Tue Mar 4 15:50:53 2008 @@ -89,6 +89,9 @@ # XXX ncoghlan: Should this be documented and made public? 
+# (Current thoughts: don't repeat the mistake that lead to its +# creation when run_module() no longer met the needs of +# mainmodule.c, but couldn't be changed because it was public) def _run_module_as_main(mod_name, set_argv0=True): """Runs the designated module in the __main__ namespace @@ -96,7 +99,20 @@ __file__ __loader__ """ - loader, code, fname = _get_module_details(mod_name) + try: + loader, code, fname = _get_module_details(mod_name) + except ImportError as exc: + # Try to provide a good error message + # for directories, zip files and the -m switch + if set_argv0: + # For -m switch, just disply the exception + info = str(exc) + else: + # For directories/zipfiles, let the user + # know what the code was looking for + info = "can't find '__main__.py' in %r" % sys.argv[0] + msg = "%s: %s" % (sys.executable, info) + sys.exit(msg) pkg_name = mod_name.rpartition('.')[0] main_globals = sys.modules["__main__"].__dict__ if set_argv0: Modified: python/branches/libffi3-branch/Lib/smtplib.py ============================================================================== --- python/branches/libffi3-branch/Lib/smtplib.py (original) +++ python/branches/libffi3-branch/Lib/smtplib.py Tue Mar 4 15:50:53 2008 @@ -298,7 +298,7 @@ def send(self, str): """Send `str' to the server.""" if self.debuglevel > 0: print>>stderr, 'send:', repr(str) - if self.sock: + if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) except socket.error: @@ -486,7 +486,7 @@ vrfy=verify def expn(self, address): - """SMTP 'verify' command -- checks for address validity.""" + """SMTP 'expn' command -- expands a mailing list.""" self.putcmd("expn", quoteaddr(address)) return self.getreply() Modified: python/branches/libffi3-branch/Lib/socket.py ============================================================================== --- python/branches/libffi3-branch/Lib/socket.py (original) +++ python/branches/libffi3-branch/Lib/socket.py Tue Mar 4 15:50:53 2008 @@ -328,7 +328,7 @@ self._rbuf = "" while True: left = size - buf_len - recv_size = max(self._rbufsize, left) + recv_size = min(self._rbufsize, left) data = self._sock.recv(recv_size) if not data: break Modified: python/branches/libffi3-branch/Lib/sqlite3/test/dbapi.py ============================================================================== --- python/branches/libffi3-branch/Lib/sqlite3/test/dbapi.py (original) +++ python/branches/libffi3-branch/Lib/sqlite3/test/dbapi.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ #-*- coding: ISO-8859-1 -*- # pysqlite2/test/dbapi.py: tests for DB-API compliance # -# Copyright (C) 2004-2005 Gerhard H?ring +# Copyright (C) 2004-2007 Gerhard H?ring # # This file is part of pysqlite. # @@ -22,6 +22,7 @@ # 3. This notice may not be removed or altered from any source distribution. 
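The sqlite3 test additions that follow exercise qmark placeholders bound from arbitrary sequences and named placeholders bound from arbitrary mappings; a minimal stand-alone sketch of that DB-API usage, assuming an in-memory database (illustrative, not part of the archived diff):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table test(name)")
    con.execute("insert into test(name) values ('foo')")
    # qmark style: any sequence works as the parameter container, not just a tuple
    print con.execute("select name from test where name=?", ["foo"]).fetchone()
    # named style: any mapping works, including dict subclasses
    print con.execute("select name from test where name=:name", {"name": "foo"}).fetchone()
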
import unittest +import sys import threading import sqlite3 as sqlite @@ -223,12 +224,45 @@ except sqlite.ProgrammingError: pass + def CheckExecuteParamList(self): + self.cu.execute("insert into test(name) values ('foo')") + self.cu.execute("select name from test where name=?", ["foo"]) + row = self.cu.fetchone() + self.failUnlessEqual(row[0], "foo") + + def CheckExecuteParamSequence(self): + class L(object): + def __len__(self): + return 1 + def __getitem__(self, x): + assert x == 0 + return "foo" + + self.cu.execute("insert into test(name) values ('foo')") + self.cu.execute("select name from test where name=?", L()) + row = self.cu.fetchone() + self.failUnlessEqual(row[0], "foo") + def CheckExecuteDictMapping(self): self.cu.execute("insert into test(name) values ('foo')") self.cu.execute("select name from test where name=:name", {"name": "foo"}) row = self.cu.fetchone() self.failUnlessEqual(row[0], "foo") + def CheckExecuteDictMapping_Mapping(self): + # Test only works with Python 2.5 or later + if sys.version_info < (2, 5, 0): + return + + class D(dict): + def __missing__(self, key): + return "foo" + + self.cu.execute("insert into test(name) values ('foo')") + self.cu.execute("select name from test where name=:name", D()) + row = self.cu.fetchone() + self.failUnlessEqual(row[0], "foo") + def CheckExecuteDictMappingTooLittleArgs(self): self.cu.execute("insert into test(name) values ('foo')") try: @@ -378,6 +412,12 @@ res = self.cu.fetchmany(100) self.failUnlessEqual(res, []) + def CheckFetchmanyKwArg(self): + """Checks if fetchmany works with keyword arguments""" + self.cu.execute("select name from test") + res = self.cu.fetchmany(size=100) + self.failUnlessEqual(len(res), 1) + def CheckFetchall(self): self.cu.execute("select name from test") res = self.cu.fetchall() Modified: python/branches/libffi3-branch/Lib/sqlite3/test/hooks.py ============================================================================== --- python/branches/libffi3-branch/Lib/sqlite3/test/hooks.py (original) +++ python/branches/libffi3-branch/Lib/sqlite3/test/hooks.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ #-*- coding: ISO-8859-1 -*- # pysqlite2/test/hooks.py: tests for various SQLite-specific hooks # -# Copyright (C) 2006 Gerhard H?ring +# Copyright (C) 2006-2007 Gerhard H?ring # # This file is part of pysqlite. # @@ -105,9 +105,80 @@ if not e.args[0].startswith("no such collation sequence"): self.fail("wrong OperationalError raised") +class ProgressTests(unittest.TestCase): + def CheckProgressHandlerUsed(self): + """ + Test that the progress handler is invoked once it is set. + """ + con = sqlite.connect(":memory:") + progress_calls = [] + def progress(): + progress_calls.append(None) + return 0 + con.set_progress_handler(progress, 1) + con.execute(""" + create table foo(a, b) + """) + self.failUnless(progress_calls) + + + def CheckOpcodeCount(self): + """ + Test that the opcode argument is respected. + """ + con = sqlite.connect(":memory:") + progress_calls = [] + def progress(): + progress_calls.append(None) + return 0 + con.set_progress_handler(progress, 1) + curs = con.cursor() + curs.execute(""" + create table foo (a, b) + """) + first_count = len(progress_calls) + progress_calls = [] + con.set_progress_handler(progress, 2) + curs.execute(""" + create table bar (a, b) + """) + second_count = len(progress_calls) + self.failUnless(first_count > second_count) + + def CheckCancelOperation(self): + """ + Test that returning a non-zero value stops the operation in progress. 
+ """ + con = sqlite.connect(":memory:") + progress_calls = [] + def progress(): + progress_calls.append(None) + return 1 + con.set_progress_handler(progress, 1) + curs = con.cursor() + self.assertRaises( + sqlite.OperationalError, + curs.execute, + "create table bar (a, b)") + + def CheckClearHandler(self): + """ + Test that setting the progress handler to None clears the previously set handler. + """ + con = sqlite.connect(":memory:") + action = 0 + def progress(): + action = 1 + return 0 + con.set_progress_handler(progress, 1) + con.set_progress_handler(None, 1) + con.execute("select 1 union select 2 union select 3").fetchall() + self.failUnlessEqual(action, 0, "progress handler was not cleared") + def suite(): collation_suite = unittest.makeSuite(CollationTests, "Check") - return unittest.TestSuite((collation_suite,)) + progress_suite = unittest.makeSuite(ProgressTests, "Check") + return unittest.TestSuite((collation_suite, progress_suite)) def test(): runner = unittest.TextTestRunner() Modified: python/branches/libffi3-branch/Lib/sqlite3/test/regression.py ============================================================================== --- python/branches/libffi3-branch/Lib/sqlite3/test/regression.py (original) +++ python/branches/libffi3-branch/Lib/sqlite3/test/regression.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ #-*- coding: ISO-8859-1 -*- # pysqlite2/test/regression.py: pysqlite regression tests # -# Copyright (C) 2006 Gerhard H?ring +# Copyright (C) 2006-2007 Gerhard H?ring # # This file is part of pysqlite. # @@ -21,6 +21,7 @@ # misrepresented as being the original software. # 3. This notice may not be removed or altered from any source distribution. +import datetime import unittest import sqlite3 as sqlite @@ -79,6 +80,80 @@ cur.fetchone() cur.fetchone() + def CheckStatementFinalizationOnCloseDb(self): + # pysqlite versions <= 2.3.3 only finalized statements in the statement + # cache when closing the database. statements that were still + # referenced in cursors weren't closed an could provoke " + # "OperationalError: Unable to close due to unfinalised statements". + con = sqlite.connect(":memory:") + cursors = [] + # default statement cache size is 100 + for i in range(105): + cur = con.cursor() + cursors.append(cur) + cur.execute("select 1 x union select " + str(i)) + con.close() + + def CheckOnConflictRollback(self): + if sqlite.sqlite_version_info < (3, 2, 2): + return + con = sqlite.connect(":memory:") + con.execute("create table foo(x, unique(x) on conflict rollback)") + con.execute("insert into foo(x) values (1)") + try: + con.execute("insert into foo(x) values (1)") + except sqlite.DatabaseError: + pass + con.execute("insert into foo(x) values (2)") + try: + con.commit() + except sqlite.OperationalError: + self.fail("pysqlite knew nothing about the implicit ROLLBACK") + + def CheckWorkaroundForBuggySqliteTransferBindings(self): + """ + pysqlite would crash with older SQLite versions unless + a workaround is implemented. + """ + self.con.execute("create table foo(bar)") + self.con.execute("drop table foo") + self.con.execute("create table foo(bar)") + + def CheckEmptyStatement(self): + """ + pysqlite used to segfault with SQLite versions 3.5.x. These return NULL + for "no-operation" statements + """ + self.con.execute("") + + def CheckUnicodeConnect(self): + """ + With pysqlite 2.4.0 you needed to use a string or a APSW connection + object for opening database connections. + + Formerly, both bytestrings and unicode strings used to work. 
+ + Let's make sure unicode strings work in the future. + """ + con = sqlite.connect(u":memory:") + con.close() + + def CheckTypeMapUsage(self): + """ + pysqlite until 2.4.1 did not rebuild the row_cast_map when recompiling + a statement. This test exhibits the problem. + """ + SELECT = "select * from foo" + con = sqlite.connect(":memory:",detect_types=sqlite.PARSE_DECLTYPES) + con.execute("create table foo(bar timestamp)") + con.execute("insert into foo(bar) values (?)", (datetime.datetime.now(),)) + con.execute(SELECT) + con.execute("drop table foo") + con.execute("create table foo(bar integer)") + con.execute("insert into foo(bar) values (5)") + con.execute(SELECT) + + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) Modified: python/branches/libffi3-branch/Lib/sqlite3/test/transactions.py ============================================================================== --- python/branches/libffi3-branch/Lib/sqlite3/test/transactions.py (original) +++ python/branches/libffi3-branch/Lib/sqlite3/test/transactions.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ #-*- coding: ISO-8859-1 -*- # pysqlite2/test/transactions.py: tests transactions # -# Copyright (C) 2005 Gerhard H?ring +# Copyright (C) 2005-2007 Gerhard H?ring # # This file is part of pysqlite. # @@ -21,6 +21,7 @@ # misrepresented as being the original software. # 3. This notice may not be removed or altered from any source distribution. +import sys import os, unittest import sqlite3 as sqlite @@ -119,6 +120,23 @@ except: self.fail("should have raised an OperationalError") + def CheckLocking(self): + """ + This tests the improved concurrency with pysqlite 2.3.4. You needed + to roll back con2 before you could commit con1. + """ + self.cur1.execute("create table test(i)") + self.cur1.execute("insert into test(i) values (5)") + try: + self.cur2.execute("insert into test(i) values (5)") + self.fail("should have raised an OperationalError") + except sqlite.OperationalError: + pass + except: + self.fail("should have raised an OperationalError") + # NO self.con2.rollback() HERE!!! + self.con1.commit() + class SpecialCommandTests(unittest.TestCase): def setUp(self): self.con = sqlite.connect(":memory:") Modified: python/branches/libffi3-branch/Lib/sqlite3/test/types.py ============================================================================== --- python/branches/libffi3-branch/Lib/sqlite3/test/types.py (original) +++ python/branches/libffi3-branch/Lib/sqlite3/test/types.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ #-*- coding: ISO-8859-1 -*- # pysqlite2/test/types.py: tests for type conversion and detection # -# Copyright (C) 2005 Gerhard H?ring +# Copyright (C) 2005-2007 Gerhard H?ring # # This file is part of pysqlite. # @@ -21,7 +21,7 @@ # misrepresented as being the original software. # 3. This notice may not be removed or altered from any source distribution. -import bz2, datetime +import zlib, datetime import unittest import sqlite3 as sqlite @@ -287,7 +287,7 @@ class BinaryConverterTests(unittest.TestCase): def convert(s): - return bz2.decompress(s) + return zlib.decompress(s) convert = staticmethod(convert) def setUp(self): @@ -299,7 +299,7 @@ def CheckBinaryInputForConverter(self): testdata = "abcdefg" * 10 - result = self.con.execute('select ? as "x [bin]"', (buffer(bz2.compress(testdata)),)).fetchone()[0] + result = self.con.execute('select ? 
as "x [bin]"', (buffer(zlib.compress(testdata)),)).fetchone()[0] self.failUnlessEqual(testdata, result) class DateTimeTests(unittest.TestCase): @@ -331,7 +331,8 @@ if sqlite.sqlite_version_info < (3, 1): return - now = datetime.datetime.utcnow() + # SQLite's current_timestamp uses UTC time, while datetime.datetime.now() uses local time. + now = datetime.datetime.now() self.cur.execute("insert into test(ts) values (current_timestamp)") self.cur.execute("select ts from test") ts = self.cur.fetchone()[0] Modified: python/branches/libffi3-branch/Lib/ssl.py ============================================================================== --- python/branches/libffi3-branch/Lib/ssl.py (original) +++ python/branches/libffi3-branch/Lib/ssl.py Tue Mar 4 15:50:53 2008 @@ -55,7 +55,7 @@ PROTOCOL_TLSv1 """ -import os, sys, textwrap +import textwrap import _ssl # if we can't import it, let the error propagate Modified: python/branches/libffi3-branch/Lib/symbol.py ============================================================================== --- python/branches/libffi3-branch/Lib/symbol.py (original) +++ python/branches/libffi3-branch/Lib/symbol.py Tue Mar 4 15:50:53 2008 @@ -15,85 +15,86 @@ eval_input = 258 decorator = 259 decorators = 260 -funcdef = 261 -parameters = 262 -varargslist = 263 -fpdef = 264 -fplist = 265 -stmt = 266 -simple_stmt = 267 -small_stmt = 268 -expr_stmt = 269 -augassign = 270 -print_stmt = 271 -del_stmt = 272 -pass_stmt = 273 -flow_stmt = 274 -break_stmt = 275 -continue_stmt = 276 -return_stmt = 277 -yield_stmt = 278 -raise_stmt = 279 -import_stmt = 280 -import_name = 281 -import_from = 282 -import_as_name = 283 -dotted_as_name = 284 -import_as_names = 285 -dotted_as_names = 286 -dotted_name = 287 -global_stmt = 288 -exec_stmt = 289 -assert_stmt = 290 -compound_stmt = 291 -if_stmt = 292 -while_stmt = 293 -for_stmt = 294 -try_stmt = 295 -with_stmt = 296 -with_var = 297 -except_clause = 298 -suite = 299 -testlist_safe = 300 -old_test = 301 -old_lambdef = 302 -test = 303 -or_test = 304 -and_test = 305 -not_test = 306 -comparison = 307 -comp_op = 308 -expr = 309 -xor_expr = 310 -and_expr = 311 -shift_expr = 312 -arith_expr = 313 -term = 314 -factor = 315 -power = 316 -atom = 317 -listmaker = 318 -testlist_gexp = 319 -lambdef = 320 -trailer = 321 -subscriptlist = 322 -subscript = 323 -sliceop = 324 -exprlist = 325 -testlist = 326 -dictmaker = 327 -classdef = 328 -arglist = 329 -argument = 330 -list_iter = 331 -list_for = 332 -list_if = 333 -gen_iter = 334 -gen_for = 335 -gen_if = 336 -testlist1 = 337 -encoding_decl = 338 -yield_expr = 339 +decorated = 261 +funcdef = 262 +parameters = 263 +varargslist = 264 +fpdef = 265 +fplist = 266 +stmt = 267 +simple_stmt = 268 +small_stmt = 269 +expr_stmt = 270 +augassign = 271 +print_stmt = 272 +del_stmt = 273 +pass_stmt = 274 +flow_stmt = 275 +break_stmt = 276 +continue_stmt = 277 +return_stmt = 278 +yield_stmt = 279 +raise_stmt = 280 +import_stmt = 281 +import_name = 282 +import_from = 283 +import_as_name = 284 +dotted_as_name = 285 +import_as_names = 286 +dotted_as_names = 287 +dotted_name = 288 +global_stmt = 289 +exec_stmt = 290 +assert_stmt = 291 +compound_stmt = 292 +if_stmt = 293 +while_stmt = 294 +for_stmt = 295 +try_stmt = 296 +with_stmt = 297 +with_var = 298 +except_clause = 299 +suite = 300 +testlist_safe = 301 +old_test = 302 +old_lambdef = 303 +test = 304 +or_test = 305 +and_test = 306 +not_test = 307 +comparison = 308 +comp_op = 309 +expr = 310 +xor_expr = 311 +and_expr = 312 +shift_expr = 313 +arith_expr = 314 +term = 315 +factor = 316 
+power = 317 +atom = 318 +listmaker = 319 +testlist_gexp = 320 +lambdef = 321 +trailer = 322 +subscriptlist = 323 +subscript = 324 +sliceop = 325 +exprlist = 326 +testlist = 327 +dictmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +gen_iter = 335 +gen_for = 336 +gen_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} Modified: python/branches/libffi3-branch/Lib/test/fork_wait.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/fork_wait.py (original) +++ python/branches/libffi3-branch/Lib/test/fork_wait.py Tue Mar 4 15:50:53 2008 @@ -13,7 +13,6 @@ """ import os, sys, time, thread, unittest -from test.test_support import TestSkipped LONGSLEEP = 2 SHORTSLEEP = 0.5 Modified: python/branches/libffi3-branch/Lib/test/list_tests.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/list_tests.py (original) +++ python/branches/libffi3-branch/Lib/test/list_tests.py Tue Mar 4 15:50:53 2008 @@ -5,7 +5,6 @@ import sys import os -import unittest from test import test_support, seq_tests class CommonTest(seq_tests.CommonTest): Modified: python/branches/libffi3-branch/Lib/test/regrtest.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/regrtest.py (original) +++ python/branches/libffi3-branch/Lib/test/regrtest.py Tue Mar 4 15:50:53 2008 @@ -129,6 +129,7 @@ import re import cStringIO import traceback +from inspect import isabstract # I see no other way to suppress these warnings; # putting them in test_grammar.py has no effect: @@ -649,7 +650,6 @@ def dash_R(the_module, test, indirect_test, huntrleaks): # This code is hackish and inelegant, but it seems to do the job. import copy_reg, _abcoll - from abc import _Abstract if not hasattr(sys, 'gettotalrefcount'): raise Exception("Tracking reference leaks requires a debug build " @@ -661,7 +661,7 @@ pic = sys.path_importer_cache.copy() abcs = {} for abc in [getattr(_abcoll, a) for a in _abcoll.__all__]: - if not issubclass(abc, _Abstract): + if not isabstract(abc): continue for obj in abc.__subclasses__() + [abc]: abcs[obj] = obj._abc_registry.copy() @@ -699,7 +699,6 @@ import _strptime, linecache, dircache import urlparse, urllib, urllib2, mimetypes, doctest import struct, filecmp, _abcoll - from abc import _Abstract from distutils.dir_util import _path_created # Restore some original values. @@ -714,7 +713,7 @@ # Clear ABC registries, restoring previously saved ABC registries. 
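A minimal sketch of how the inspect.isabstract() helper introduced earlier in this patch behaves; the Base and Impl classes below are hypothetical and exist only for illustration:

    import inspect
    from abc import ABCMeta, abstractmethod

    class Base(object):
        __metaclass__ = ABCMeta        # Python 2 metaclass syntax, matching this branch
        @abstractmethod
        def run(self):
            pass

    class Impl(Base):
        def run(self):
            return 42

    assert inspect.isabstract(Base)        # unimplemented abstract method, so abstract
    assert not inspect.isabstract(Impl)    # all abstract methods overridden
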
for abc in [getattr(_abcoll, a) for a in _abcoll.__all__]: - if not issubclass(abc, _Abstract): + if not isabstract(abc): continue for obj in abc.__subclasses__() + [abc]: obj._abc_registry = abcs.get(obj, {}).copy() Modified: python/branches/libffi3-branch/Lib/test/seq_tests.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/seq_tests.py (original) +++ python/branches/libffi3-branch/Lib/test/seq_tests.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,6 @@ """ import unittest -from test import test_support import sys # Various iterables Modified: python/branches/libffi3-branch/Lib/test/string_tests.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/string_tests.py (original) +++ python/branches/libffi3-branch/Lib/test/string_tests.py Tue Mar 4 15:50:53 2008 @@ -1033,7 +1033,14 @@ # unicode raises ValueError, str raises OverflowError self.checkraises((ValueError, OverflowError), '%c', '__mod__', ordinal) + longvalue = sys.maxint + 10L + slongvalue = str(longvalue) + if slongvalue[-1] in ("L","l"): slongvalue = slongvalue[:-1] self.checkequal(' 42', '%3ld', '__mod__', 42) + self.checkequal('42', '%d', '__mod__', 42L) + self.checkequal('42', '%d', '__mod__', 42.0) + self.checkequal(slongvalue, '%d', '__mod__', longvalue) + self.checkcall('%d', '__mod__', float(longvalue)) self.checkequal('0042.00', '%07.2f', '__mod__', 42) self.checkequal('0042.00', '%07.2F', '__mod__', 42) @@ -1043,6 +1050,8 @@ self.checkraises(TypeError, '%c', '__mod__', (None,)) self.checkraises(ValueError, '%(foo', '__mod__', {}) self.checkraises(TypeError, '%(foo)s %(bar)s', '__mod__', ('foo', 42)) + self.checkraises(TypeError, '%d', '__mod__', "42") # not numeric + self.checkraises(TypeError, '%d', '__mod__', (42+0j)) # no int/long conversion provided # argument names with properly nested brackets are supported self.checkequal('bar', '%((foo))s', '__mod__', {'(foo)': 'bar'}) Modified: python/branches/libffi3-branch/Lib/test/test_MimeWriter.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_MimeWriter.py (original) +++ python/branches/libffi3-branch/Lib/test/test_MimeWriter.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,7 @@ """ -import unittest, sys, StringIO +import unittest, StringIO from test.test_support import run_unittest import warnings Modified: python/branches/libffi3-branch/Lib/test/test___all__.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test___all__.py (original) +++ python/branches/libffi3-branch/Lib/test/test___all__.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,5 @@ import unittest -from test.test_support import verbose, run_unittest +from test.test_support import run_unittest import sys import warnings Modified: python/branches/libffi3-branch/Lib/test/test_abc.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_abc.py (original) +++ python/branches/libffi3-branch/Lib/test/test_abc.py Tue Mar 4 15:50:53 2008 @@ -3,11 +3,11 @@ """Unit tests for abc.py.""" -import sys import unittest from test import test_support import abc +from inspect import isabstract class TestABC(unittest.TestCase): @@ -44,19 +44,23 @@ def bar(self): pass # concrete self.assertEqual(C.__abstractmethods__, set(["foo"])) self.assertRaises(TypeError, C) # because foo is abstract + 
self.assert_(isabstract(C)) class D(C): def bar(self): pass # concrete override of concrete self.assertEqual(D.__abstractmethods__, set(["foo"])) self.assertRaises(TypeError, D) # because foo is still abstract + self.assert_(isabstract(D)) class E(D): def foo(self): pass self.assertEqual(E.__abstractmethods__, set()) E() # now foo is concrete, too + self.failIf(isabstract(E)) class F(E): @abstractthing def bar(self): pass # abstract override of concrete self.assertEqual(F.__abstractmethods__, set(["bar"])) self.assertRaises(TypeError, F) # because bar is abstract now + self.assert_(isabstract(F)) def test_subclass_oldstyle_class(self): class A: Modified: python/branches/libffi3-branch/Lib/test/test_al.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_al.py (original) +++ python/branches/libffi3-branch/Lib/test/test_al.py Tue Mar 4 15:50:53 2008 @@ -11,7 +11,7 @@ # This is a very unobtrusive test for the existence of the al module and all its # attributes. More comprehensive examples can be found in Demo/al -def main(): +def test_main(): # touch all the attributes of al without doing anything if verbose: print 'Touching al module attributes...' @@ -20,4 +20,6 @@ print 'touching: ', attr getattr(al, attr) -main() + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_applesingle.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_applesingle.py (original) +++ python/branches/libffi3-branch/Lib/test/test_applesingle.py Tue Mar 4 15:50:53 2008 @@ -5,7 +5,6 @@ import Carbon.File import MacOS import os -import sys from test import test_support import struct import applesingle Modified: python/branches/libffi3-branch/Lib/test/test_array.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_array.py (original) +++ python/branches/libffi3-branch/Lib/test/test_array.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ import unittest from test import test_support from weakref import proxy -import array, cStringIO, math +import array, cStringIO from cPickle import loads, dumps class ArraySubclass(array.array): Modified: python/branches/libffi3-branch/Lib/test/test_ast.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_ast.py (original) +++ python/branches/libffi3-branch/Lib/test/test_ast.py Tue Mar 4 15:50:53 2008 @@ -156,7 +156,7 @@ #### EVERYTHING BELOW IS GENERATED ##### exec_results = [ ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Pass', (1, 9))], [])]), -('Module', [('ClassDef', (1, 0), 'C', [], [('Pass', (1, 8))])]), +('Module', [('ClassDef', (1, 0), 'C', [], [('Pass', (1, 8))], [])]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Return', (1, 8), ('Num', (1, 15), 1))], [])]), ('Module', [('Delete', (1, 0), [('Name', (1, 4), 'v', ('Del',))])]), ('Module', [('Assign', (1, 0), [('Name', (1, 0), 'v', ('Store',))], ('Num', (1, 4), 1))]), Modified: python/branches/libffi3-branch/Lib/test/test_audioop.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_audioop.py (original) +++ python/branches/libffi3-branch/Lib/test/test_audioop.py Tue Mar 4 15:50:53 2008 @@ -269,7 +269,7 @@ if not rv: print 'Test FAILED for 
audioop.'+name+'()' -def testall(): +def test_main(): data = [gendata1(), gendata2(), gendata4()] names = dir(audioop) # We know there is a routine 'add' @@ -279,4 +279,8 @@ routines.append(n) for n in routines: testone(n, data) -testall() + + + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_bisect.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_bisect.py (original) +++ python/branches/libffi3-branch/Lib/test/test_bisect.py Tue Mar 4 15:50:53 2008 @@ -1,91 +1,113 @@ +import sys import unittest from test import test_support -from bisect import bisect_right, bisect_left, insort_left, insort_right, insort, bisect from UserList import UserList +# We do a bit of trickery here to be able to test both the C implementation +# and the Python implementation of the module. + +# Make it impossible to import the C implementation anymore. +sys.modules['_bisect'] = 0 +# We must also handle the case that bisect was imported before. +if 'bisect' in sys.modules: + del sys.modules['bisect'] + +# Now we can import the module and get the pure Python implementation. +import bisect as py_bisect + +# Restore everything to normal. +del sys.modules['_bisect'] +del sys.modules['bisect'] + +# This is now the module with the C implementation. +import bisect as c_bisect + + class TestBisect(unittest.TestCase): + module = None - precomputedCases = [ - (bisect_right, [], 1, 0), - (bisect_right, [1], 0, 0), - (bisect_right, [1], 1, 1), - (bisect_right, [1], 2, 1), - (bisect_right, [1, 1], 0, 0), - (bisect_right, [1, 1], 1, 2), - (bisect_right, [1, 1], 2, 2), - (bisect_right, [1, 1, 1], 0, 0), - (bisect_right, [1, 1, 1], 1, 3), - (bisect_right, [1, 1, 1], 2, 3), - (bisect_right, [1, 1, 1, 1], 0, 0), - (bisect_right, [1, 1, 1, 1], 1, 4), - (bisect_right, [1, 1, 1, 1], 2, 4), - (bisect_right, [1, 2], 0, 0), - (bisect_right, [1, 2], 1, 1), - (bisect_right, [1, 2], 1.5, 1), - (bisect_right, [1, 2], 2, 2), - (bisect_right, [1, 2], 3, 2), - (bisect_right, [1, 1, 2, 2], 0, 0), - (bisect_right, [1, 1, 2, 2], 1, 2), - (bisect_right, [1, 1, 2, 2], 1.5, 2), - (bisect_right, [1, 1, 2, 2], 2, 4), - (bisect_right, [1, 1, 2, 2], 3, 4), - (bisect_right, [1, 2, 3], 0, 0), - (bisect_right, [1, 2, 3], 1, 1), - (bisect_right, [1, 2, 3], 1.5, 1), - (bisect_right, [1, 2, 3], 2, 2), - (bisect_right, [1, 2, 3], 2.5, 2), - (bisect_right, [1, 2, 3], 3, 3), - (bisect_right, [1, 2, 3], 4, 3), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 0, 0), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1, 1), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1.5, 1), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2, 3), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2.5, 3), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3, 6), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3.5, 6), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 4, 10), - (bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 5, 10), - - (bisect_left, [], 1, 0), - (bisect_left, [1], 0, 0), - (bisect_left, [1], 1, 0), - (bisect_left, [1], 2, 1), - (bisect_left, [1, 1], 0, 0), - (bisect_left, [1, 1], 1, 0), - (bisect_left, [1, 1], 2, 2), - (bisect_left, [1, 1, 1], 0, 0), - (bisect_left, [1, 1, 1], 1, 0), - (bisect_left, [1, 1, 1], 2, 3), - (bisect_left, [1, 1, 1, 1], 0, 0), - (bisect_left, [1, 1, 1, 1], 1, 0), - (bisect_left, [1, 1, 1, 1], 2, 4), - (bisect_left, [1, 2], 0, 0), - (bisect_left, [1, 2], 1, 0), - (bisect_left, [1, 2], 1.5, 1), - (bisect_left, [1, 
2], 2, 1), - (bisect_left, [1, 2], 3, 2), - (bisect_left, [1, 1, 2, 2], 0, 0), - (bisect_left, [1, 1, 2, 2], 1, 0), - (bisect_left, [1, 1, 2, 2], 1.5, 2), - (bisect_left, [1, 1, 2, 2], 2, 2), - (bisect_left, [1, 1, 2, 2], 3, 4), - (bisect_left, [1, 2, 3], 0, 0), - (bisect_left, [1, 2, 3], 1, 0), - (bisect_left, [1, 2, 3], 1.5, 1), - (bisect_left, [1, 2, 3], 2, 1), - (bisect_left, [1, 2, 3], 2.5, 2), - (bisect_left, [1, 2, 3], 3, 2), - (bisect_left, [1, 2, 3], 4, 3), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 0, 0), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1, 0), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1.5, 1), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2, 1), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2.5, 3), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3, 3), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3.5, 6), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 4, 6), - (bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 5, 10) - ] + def setUp(self): + self.precomputedCases = [ + (self.module.bisect_right, [], 1, 0), + (self.module.bisect_right, [1], 0, 0), + (self.module.bisect_right, [1], 1, 1), + (self.module.bisect_right, [1], 2, 1), + (self.module.bisect_right, [1, 1], 0, 0), + (self.module.bisect_right, [1, 1], 1, 2), + (self.module.bisect_right, [1, 1], 2, 2), + (self.module.bisect_right, [1, 1, 1], 0, 0), + (self.module.bisect_right, [1, 1, 1], 1, 3), + (self.module.bisect_right, [1, 1, 1], 2, 3), + (self.module.bisect_right, [1, 1, 1, 1], 0, 0), + (self.module.bisect_right, [1, 1, 1, 1], 1, 4), + (self.module.bisect_right, [1, 1, 1, 1], 2, 4), + (self.module.bisect_right, [1, 2], 0, 0), + (self.module.bisect_right, [1, 2], 1, 1), + (self.module.bisect_right, [1, 2], 1.5, 1), + (self.module.bisect_right, [1, 2], 2, 2), + (self.module.bisect_right, [1, 2], 3, 2), + (self.module.bisect_right, [1, 1, 2, 2], 0, 0), + (self.module.bisect_right, [1, 1, 2, 2], 1, 2), + (self.module.bisect_right, [1, 1, 2, 2], 1.5, 2), + (self.module.bisect_right, [1, 1, 2, 2], 2, 4), + (self.module.bisect_right, [1, 1, 2, 2], 3, 4), + (self.module.bisect_right, [1, 2, 3], 0, 0), + (self.module.bisect_right, [1, 2, 3], 1, 1), + (self.module.bisect_right, [1, 2, 3], 1.5, 1), + (self.module.bisect_right, [1, 2, 3], 2, 2), + (self.module.bisect_right, [1, 2, 3], 2.5, 2), + (self.module.bisect_right, [1, 2, 3], 3, 3), + (self.module.bisect_right, [1, 2, 3], 4, 3), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 0, 0), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1, 1), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1.5, 1), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2, 3), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2.5, 3), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3, 6), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3.5, 6), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 4, 10), + (self.module.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 5, 10), + + (self.module.bisect_left, [], 1, 0), + (self.module.bisect_left, [1], 0, 0), + (self.module.bisect_left, [1], 1, 0), + (self.module.bisect_left, [1], 2, 1), + (self.module.bisect_left, [1, 1], 0, 0), + (self.module.bisect_left, [1, 1], 1, 0), + (self.module.bisect_left, [1, 1], 2, 2), + (self.module.bisect_left, [1, 1, 1], 0, 0), + (self.module.bisect_left, [1, 1, 1], 1, 0), + (self.module.bisect_left, [1, 1, 1], 2, 3), + (self.module.bisect_left, [1, 1, 1, 1], 0, 0), + (self.module.bisect_left, [1, 1, 1, 1], 
1, 0), + (self.module.bisect_left, [1, 1, 1, 1], 2, 4), + (self.module.bisect_left, [1, 2], 0, 0), + (self.module.bisect_left, [1, 2], 1, 0), + (self.module.bisect_left, [1, 2], 1.5, 1), + (self.module.bisect_left, [1, 2], 2, 1), + (self.module.bisect_left, [1, 2], 3, 2), + (self.module.bisect_left, [1, 1, 2, 2], 0, 0), + (self.module.bisect_left, [1, 1, 2, 2], 1, 0), + (self.module.bisect_left, [1, 1, 2, 2], 1.5, 2), + (self.module.bisect_left, [1, 1, 2, 2], 2, 2), + (self.module.bisect_left, [1, 1, 2, 2], 3, 4), + (self.module.bisect_left, [1, 2, 3], 0, 0), + (self.module.bisect_left, [1, 2, 3], 1, 0), + (self.module.bisect_left, [1, 2, 3], 1.5, 1), + (self.module.bisect_left, [1, 2, 3], 2, 1), + (self.module.bisect_left, [1, 2, 3], 2.5, 2), + (self.module.bisect_left, [1, 2, 3], 3, 2), + (self.module.bisect_left, [1, 2, 3], 4, 3), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 0, 0), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1, 0), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1.5, 1), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2, 1), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2.5, 3), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3, 3), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3.5, 6), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 4, 6), + (self.module.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 5, 10) + ] def test_precomputed(self): for func, data, elem, expected in self.precomputedCases: @@ -98,12 +120,12 @@ data = [randrange(0, n, 2) for j in xrange(i)] data.sort() elem = randrange(-1, n+1) - ip = bisect_left(data, elem) + ip = self.module.bisect_left(data, elem) if ip < len(data): self.failUnless(elem <= data[ip]) if ip > 0: self.failUnless(data[ip-1] < elem) - ip = bisect_right(data, elem) + ip = self.module.bisect_right(data, elem) if ip < len(data): self.failUnless(elem < data[ip]) if ip > 0: @@ -117,32 +139,39 @@ hi = min(len(data), hi) ip = func(data, elem, lo, hi) self.failUnless(lo <= ip <= hi) - if func is bisect_left and ip < hi: + if func is self.module.bisect_left and ip < hi: self.failUnless(elem <= data[ip]) - if func is bisect_left and ip > lo: + if func is self.module.bisect_left and ip > lo: self.failUnless(data[ip-1] < elem) - if func is bisect_right and ip < hi: + if func is self.module.bisect_right and ip < hi: self.failUnless(elem < data[ip]) - if func is bisect_right and ip > lo: + if func is self.module.bisect_right and ip > lo: self.failUnless(data[ip-1] <= elem) self.assertEqual(ip, max(lo, min(hi, expected))) def test_backcompatibility(self): - self.assertEqual(bisect, bisect_right) + self.assertEqual(self.module.bisect, self.module.bisect_right) def test_keyword_args(self): data = [10, 20, 30, 40, 50] - self.assertEqual(bisect_left(a=data, x=25, lo=1, hi=3), 2) - self.assertEqual(bisect_right(a=data, x=25, lo=1, hi=3), 2) - self.assertEqual(bisect(a=data, x=25, lo=1, hi=3), 2) - insort_left(a=data, x=25, lo=1, hi=3) - insort_right(a=data, x=25, lo=1, hi=3) - insort(a=data, x=25, lo=1, hi=3) + self.assertEqual(self.module.bisect_left(a=data, x=25, lo=1, hi=3), 2) + self.assertEqual(self.module.bisect_right(a=data, x=25, lo=1, hi=3), 2) + self.assertEqual(self.module.bisect(a=data, x=25, lo=1, hi=3), 2) + self.module.insort_left(a=data, x=25, lo=1, hi=3) + self.module.insort_right(a=data, x=25, lo=1, hi=3) + self.module.insort(a=data, x=25, lo=1, hi=3) self.assertEqual(data, [10, 20, 25, 25, 25, 30, 40, 50]) +class 
TestBisectPython(TestBisect): + module = py_bisect + +class TestBisectC(TestBisect): + module = c_bisect + #============================================================================== class TestInsort(unittest.TestCase): + module = None def test_vsBuiltinSort(self, n=500): from random import choice @@ -150,14 +179,20 @@ for i in xrange(n): digit = choice("0123456789") if digit in "02468": - f = insort_left + f = self.module.insort_left else: - f = insort_right + f = self.module.insort_right f(insorted, digit) self.assertEqual(sorted(insorted), insorted) def test_backcompatibility(self): - self.assertEqual(insort, insort_right) + self.assertEqual(self.module.insort, self.module.insort_right) + +class TestInsortPython(TestInsort): + module = py_bisect + +class TestInsortC(TestInsort): + module = c_bisect #============================================================================== @@ -178,32 +213,44 @@ raise ZeroDivisionError class TestErrorHandling(unittest.TestCase): + module = None def test_non_sequence(self): - for f in (bisect_left, bisect_right, insort_left, insort_right): + for f in (self.module.bisect_left, self.module.bisect_right, + self.module.insort_left, self.module.insort_right): self.assertRaises(TypeError, f, 10, 10) def test_len_only(self): - for f in (bisect_left, bisect_right, insort_left, insort_right): + for f in (self.module.bisect_left, self.module.bisect_right, + self.module.insort_left, self.module.insort_right): self.assertRaises(AttributeError, f, LenOnly(), 10) def test_get_only(self): - for f in (bisect_left, bisect_right, insort_left, insort_right): + for f in (self.module.bisect_left, self.module.bisect_right, + self.module.insort_left, self.module.insort_right): self.assertRaises(AttributeError, f, GetOnly(), 10) def test_cmp_err(self): seq = [CmpErr(), CmpErr(), CmpErr()] - for f in (bisect_left, bisect_right, insort_left, insort_right): + for f in (self.module.bisect_left, self.module.bisect_right, + self.module.insort_left, self.module.insort_right): self.assertRaises(ZeroDivisionError, f, seq, 10) def test_arg_parsing(self): - for f in (bisect_left, bisect_right, insort_left, insort_right): + for f in (self.module.bisect_left, self.module.bisect_right, + self.module.insort_left, self.module.insort_right): self.assertRaises(TypeError, f, 10) +class TestErrorHandlingPython(TestErrorHandling): + module = py_bisect + +class TestErrorHandlingC(TestErrorHandling): + module = c_bisect + #============================================================================== libreftest = """ -Example from the Library Reference: Doc/lib/libbisect.tex +Example from the Library Reference: Doc/library/bisect.rst The bisect() function is generally useful for categorizing numeric data. 
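The doctest these context lines introduce is the classic grade-lookup example from the bisect documentation (the next context line says as much); as a self-contained refresher, here is a sketch of that idea, with the usual documentation breakpoints and letters assumed rather than copied from the hunk, which the diff elides:

    from bisect import bisect

    def grade(total, breakpoints=(60, 70, 80, 90), grades='FDCBA'):
        # bisect() counts how many breakpoints the total reaches, and that
        # count indexes straight into the string of letter grades.
        return grades[bisect(breakpoints, total)]

    assert [grade(t) for t in (33, 60, 75, 89, 90, 100)] == list('FDCBAA')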
This example uses bisect() to look up a letter grade for an exam total @@ -229,12 +276,10 @@ def test_main(verbose=None): from test import test_bisect - from types import BuiltinFunctionType - import sys - test_classes = [TestBisect, TestInsort] - if isinstance(bisect_left, BuiltinFunctionType): - test_classes.append(TestErrorHandling) + test_classes = [TestBisectPython, TestBisectC, + TestInsortPython, TestInsortC, + TestErrorHandlingPython, TestErrorHandlingC] test_support.run_unittest(*test_classes) test_support.run_doctest(test_bisect, verbose) Modified: python/branches/libffi3-branch/Lib/test/test_bsddb185.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_bsddb185.py (original) +++ python/branches/libffi3-branch/Lib/test/test_bsddb185.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ testing suite. """ -from test.test_support import verbose, run_unittest, findfile +from test.test_support import run_unittest, findfile import unittest import bsddb185 import anydbm Modified: python/branches/libffi3-branch/Lib/test/test_bsddb3.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_bsddb3.py (original) +++ python/branches/libffi3-branch/Lib/test/test_bsddb3.py Tue Mar 4 15:50:53 2008 @@ -2,10 +2,12 @@ """ Run all test cases. """ +import os import sys +import tempfile import time import unittest -from test.test_support import requires, verbose, run_unittest, unlink +from test.test_support import requires, verbose, run_unittest, unlink, rmtree # When running as a script instead of within the regrtest framework, skip the # requires test, since it's obvious we want to run them. @@ -85,6 +87,15 @@ # For invocation through regrtest def test_main(): run_unittest(suite()) + db_home = os.path.join(tempfile.gettempdir(), 'db_home') + # The only reason to remove db_home is in case if there is an old + # one lying around. This might be by a different user, so just + # ignore errors. We should always make a unique name now. 
+ try: + rmtree(db_home) + except: + pass + rmtree('db_home%d' % os.getpid()) # For invocation as a script if __name__ == '__main__': Modified: python/branches/libffi3-branch/Lib/test/test_builtin.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_builtin.py (original) +++ python/branches/libffi3-branch/Lib/test/test_builtin.py Tue Mar 4 15:50:53 2008 @@ -1973,45 +1973,6 @@ return i self.assertRaises(ValueError, zip, BadSeq(), BadSeq()) -class TestSorted(unittest.TestCase): - - def test_basic(self): - data = range(100) - copy = data[:] - random.shuffle(copy) - self.assertEqual(data, sorted(copy)) - self.assertNotEqual(data, copy) - - data.reverse() - random.shuffle(copy) - self.assertEqual(data, sorted(copy, cmp=lambda x, y: cmp(y,x))) - self.assertNotEqual(data, copy) - random.shuffle(copy) - self.assertEqual(data, sorted(copy, key=lambda x: -x)) - self.assertNotEqual(data, copy) - random.shuffle(copy) - self.assertEqual(data, sorted(copy, reverse=1)) - self.assertNotEqual(data, copy) - - def test_inputtypes(self): - s = 'abracadabra' - types = [list, tuple] - if have_unicode: - types.insert(0, unicode) - for T in types: - self.assertEqual(sorted(s), sorted(T(s))) - - s = ''.join(dict.fromkeys(s).keys()) # unique letters only - types = [set, frozenset, list, tuple, dict.fromkeys] - if have_unicode: - types.insert(0, unicode) - for T in types: - self.assertEqual(sorted(s), sorted(T(s))) - - def test_baddecorator(self): - data = 'The quick Brown fox Jumped over The lazy Dog'.split() - self.assertRaises(TypeError, sorted, data, None, lambda x,y: 0) - def test_format(self): # Test the basic machinery of the format() builtin. Don't test # the specifics of the various formatters @@ -2107,6 +2068,54 @@ class DerivedFromStr(str): pass self.assertEqual(format(0, DerivedFromStr('10')), ' 0') + def test_bin(self): + self.assertEqual(bin(0), '0b0') + self.assertEqual(bin(1), '0b1') + self.assertEqual(bin(-1), '-0b1') + self.assertEqual(bin(2**65), '0b1' + '0' * 65) + self.assertEqual(bin(2**65-1), '0b' + '1' * 65) + self.assertEqual(bin(-(2**65)), '-0b1' + '0' * 65) + self.assertEqual(bin(-(2**65-1)), '-0b' + '1' * 65) + +class TestSorted(unittest.TestCase): + + def test_basic(self): + data = range(100) + copy = data[:] + random.shuffle(copy) + self.assertEqual(data, sorted(copy)) + self.assertNotEqual(data, copy) + + data.reverse() + random.shuffle(copy) + self.assertEqual(data, sorted(copy, cmp=lambda x, y: cmp(y,x))) + self.assertNotEqual(data, copy) + random.shuffle(copy) + self.assertEqual(data, sorted(copy, key=lambda x: -x)) + self.assertNotEqual(data, copy) + random.shuffle(copy) + self.assertEqual(data, sorted(copy, reverse=1)) + self.assertNotEqual(data, copy) + + def test_inputtypes(self): + s = 'abracadabra' + types = [list, tuple] + if have_unicode: + types.insert(0, unicode) + for T in types: + self.assertEqual(sorted(s), sorted(T(s))) + + s = ''.join(dict.fromkeys(s).keys()) # unique letters only + types = [set, frozenset, list, tuple, dict.fromkeys] + if have_unicode: + types.insert(0, unicode) + for T in types: + self.assertEqual(sorted(s), sorted(T(s))) + + def test_baddecorator(self): + data = 'The quick Brown fox Jumped over The lazy Dog'.split() + self.assertRaises(TypeError, sorted, data, None, lambda x,y: 0) + def test_main(verbose=None): test_classes = (BuiltinTest, TestSorted) Modified: python/branches/libffi3-branch/Lib/test/test_cd.py 
============================================================================== --- python/branches/libffi3-branch/Lib/test/test_cd.py (original) +++ python/branches/libffi3-branch/Lib/test/test_cd.py Tue Mar 4 15:50:53 2008 @@ -14,7 +14,7 @@ # attributes. More comprehensive examples can be found in Demo/cd and # require that you have a CD and a CD ROM drive -def main(): +def test_main(): # touch all the attributes of cd without doing anything if verbose: print 'Touching cd module attributes...' @@ -23,4 +23,7 @@ print 'touching: ', attr getattr(cd, attr) -main() + + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_cfgparser.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_cfgparser.py (original) +++ python/branches/libffi3-branch/Lib/test/test_cfgparser.py Tue Mar 4 15:50:53 2008 @@ -446,6 +446,14 @@ self.assertRaises(TypeError, cf.set, "sect", "option2", 1.0) self.assertRaises(TypeError, cf.set, "sect", "option2", object()) + def test_add_section_default_1(self): + cf = self.newconfig() + self.assertRaises(ValueError, cf.add_section, "default") + + def test_add_section_default_2(self): + cf = self.newconfig() + self.assertRaises(ValueError, cf.add_section, "DEFAULT") + class SortedTestCase(RawConfigParserTestCase): def newconfig(self, defaults=None): self.cf = self.config_class(defaults=defaults, dict_type=SortedDict) Modified: python/branches/libffi3-branch/Lib/test/test_cl.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_cl.py (original) +++ python/branches/libffi3-branch/Lib/test/test_cl.py Tue Mar 4 15:50:53 2008 @@ -66,7 +66,7 @@ # This is a very inobtrusive test for the existence of the cl # module and all its attributes. -def main(): +def test_main(): # touch all the attributes of al without doing anything if verbose: print 'Touching cl module attributes...' @@ -75,4 +75,7 @@ print 'touching: ', attr getattr(cl, attr) -main() + + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_class.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_class.py (original) +++ python/branches/libffi3-branch/Lib/test/test_class.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ "Test the functionality of Python classes implementing operators." 
import unittest -import sys from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_cmd.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_cmd.py (original) +++ python/branches/libffi3-branch/Lib/test/test_cmd.py Tue Mar 4 15:50:53 2008 @@ -5,7 +5,6 @@ """ -from test import test_support import cmd import sys @@ -170,7 +169,7 @@ from test import test_support, test_cmd test_support.run_doctest(test_cmd, verbose) -import trace, sys,re,StringIO +import trace, sys def test_coverage(coverdir): tracer=trace.Trace(ignoredirs=[sys.prefix, sys.exec_prefix,], trace=0, count=1) Modified: python/branches/libffi3-branch/Lib/test/test_coercion.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_coercion.py (original) +++ python/branches/libffi3-branch/Lib/test/test_coercion.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import copy -import sys import warnings import unittest from test.test_support import run_unittest, TestFailed Modified: python/branches/libffi3-branch/Lib/test/test_compare.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_compare.py (original) +++ python/branches/libffi3-branch/Lib/test/test_compare.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,3 @@ -import sys import unittest from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_compiler.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_compiler.py (original) +++ python/branches/libffi3-branch/Lib/test/test_compiler.py Tue Mar 4 15:50:53 2008 @@ -52,7 +52,8 @@ compiler.compile(buf, basename, "exec") except Exception, e: args = list(e.args) - args[0] += "[in file %s]" % basename + args.append("in file %s]" % basename) + #args[0] += "[in file %s]" % basename e.args = tuple(args) raise Modified: python/branches/libffi3-branch/Lib/test/test_copy.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_copy.py (original) +++ python/branches/libffi3-branch/Lib/test/test_copy.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ """Unit tests for the copy module.""" -import sys import copy import copy_reg Modified: python/branches/libffi3-branch/Lib/test/test_cpickle.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_cpickle.py (original) +++ python/branches/libffi3-branch/Lib/test/test_cpickle.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import cPickle -import unittest from cStringIO import StringIO from test.pickletester import AbstractPickleTests, AbstractPickleModuleTests from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_curses.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_curses.py (original) +++ python/branches/libffi3-branch/Lib/test/test_curses.py Tue Mar 4 15:50:53 2008 @@ -269,13 +269,12 @@ curses.wrapper(main) unit_tests() else: + # testing setupterm() inside initscr/endwin + # causes terminal breakage + curses.setupterm(fd=sys.__stdout__.fileno()) try: - # testing setupterm() inside initscr/endwin - # causes terminal breakage - curses.setupterm(fd=sys.__stdout__.fileno()) stdscr = curses.initscr() main(stdscr) 
finally: curses.endwin() - unit_tests() Modified: python/branches/libffi3-branch/Lib/test/test_datetime.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_datetime.py (original) +++ python/branches/libffi3-branch/Lib/test/test_datetime.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,6 @@ """ import os -import sys import pickle import cPickle import unittest Modified: python/branches/libffi3-branch/Lib/test/test_dbm.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_dbm.py (original) +++ python/branches/libffi3-branch/Lib/test/test_dbm.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,6 @@ Roger E. Masse """ import os -import random import dbm from dbm import error from test.test_support import verbose, verify, TestSkipped, TESTFN @@ -44,12 +43,18 @@ d = dbm.open(filename, 'n') d.close() -cleanup() -try: - test_keys() - test_modes() -except: +def test_main(): cleanup() - raise + try: + test_keys() + test_modes() + except: + cleanup() + raise -cleanup() + cleanup() + + + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_decimal.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_decimal.py (original) +++ python/branches/libffi3-branch/Lib/test/test_decimal.py Tue Mar 4 15:50:53 2008 @@ -615,6 +615,98 @@ self.assertEqual(eval('Decimal(10)' + sym + 'E()'), '10' + rop + 'str') +class DecimalFormatTest(unittest.TestCase): + '''Unit tests for the format function.''' + def test_formatting(self): + # triples giving a format, a Decimal, and the expected result + test_values = [ + ('e', '0E-15', '0e-15'), + ('e', '2.3E-15', '2.3e-15'), + ('e', '2.30E+2', '2.30e+2'), # preserve significant zeros + ('e', '2.30000E-15', '2.30000e-15'), + ('e', '1.23456789123456789e40', '1.23456789123456789e+40'), + ('e', '1.5', '1.5e+0'), + ('e', '0.15', '1.5e-1'), + ('e', '0.015', '1.5e-2'), + ('e', '0.0000000000015', '1.5e-12'), + ('e', '15.0', '1.50e+1'), + ('e', '-15', '-1.5e+1'), + ('e', '0', '0e+0'), + ('e', '0E1', '0e+1'), + ('e', '0.0', '0e-1'), + ('e', '0.00', '0e-2'), + ('.6e', '0E-15', '0.000000e-9'), + ('.6e', '0', '0.000000e+6'), + ('.6e', '9.999999', '9.999999e+0'), + ('.6e', '9.9999999', '1.000000e+1'), + ('.6e', '-1.23e5', '-1.230000e+5'), + ('.6e', '1.23456789e-3', '1.234568e-3'), + ('f', '0', '0'), + ('f', '0.0', '0.0'), + ('f', '0E-2', '0.00'), + ('f', '0.00E-8', '0.0000000000'), + ('f', '0E1', '0'), # loses exponent information + ('f', '3.2E1', '32'), + ('f', '3.2E2', '320'), + ('f', '3.20E2', '320'), + ('f', '3.200E2', '320.0'), + ('f', '3.2E-6', '0.0000032'), + ('.6f', '0E-15', '0.000000'), # all zeros treated equally + ('.6f', '0E1', '0.000000'), + ('.6f', '0', '0.000000'), + ('.0f', '0', '0'), # no decimal point + ('.0f', '0e-2', '0'), + ('.0f', '3.14159265', '3'), + ('.1f', '3.14159265', '3.1'), + ('.4f', '3.14159265', '3.1416'), + ('.6f', '3.14159265', '3.141593'), + ('.7f', '3.14159265', '3.1415926'), # round-half-even! 
+ ('.8f', '3.14159265', '3.14159265'), + ('.9f', '3.14159265', '3.141592650'), + + ('g', '0', '0'), + ('g', '0.0', '0.0'), + ('g', '0E1', '0e+1'), + ('G', '0E1', '0E+1'), + ('g', '0E-5', '0.00000'), + ('g', '0E-6', '0.000000'), + ('g', '0E-7', '0e-7'), + ('g', '-0E2', '-0e+2'), + ('.0g', '3.14159265', '3'), # 0 sig fig -> 1 sig fig + ('.1g', '3.14159265', '3'), + ('.2g', '3.14159265', '3.1'), + ('.5g', '3.14159265', '3.1416'), + ('.7g', '3.14159265', '3.141593'), + ('.8g', '3.14159265', '3.1415926'), # round-half-even! + ('.9g', '3.14159265', '3.14159265'), + ('.10g', '3.14159265', '3.14159265'), # don't pad + + ('%', '0E1', '0%'), + ('%', '0E0', '0%'), + ('%', '0E-1', '0%'), + ('%', '0E-2', '0%'), + ('%', '0E-3', '0.0%'), + ('%', '0E-4', '0.00%'), + + ('.3%', '0', '0.000%'), # all zeros treated equally + ('.3%', '0E10', '0.000%'), + ('.3%', '0E-10', '0.000%'), + ('.3%', '2.34', '234.000%'), + ('.3%', '1.234567', '123.457%'), + ('.0%', '1.23', '123%'), + + ('e', 'NaN', 'NaN'), + ('f', '-NaN123', '-NaN123'), + ('+g', 'NaN456', '+NaN456'), + ('.3e', 'Inf', 'Infinity'), + ('.16f', '-Inf', '-Infinity'), + ('.0g', '-sNaN', '-sNaN'), + + ('', '1.00', '1.00'), + ] + for fmt, d, result in test_values: + self.assertEqual(format(Decimal(d), fmt), result) + class DecimalArithmeticOperatorsTest(unittest.TestCase): '''Unit tests for all arithmetic operators, binary and unary.''' @@ -1363,6 +1455,7 @@ DecimalExplicitConstructionTest, DecimalImplicitConstructionTest, DecimalArithmeticOperatorsTest, + DecimalFormatTest, DecimalUseOfContextTest, DecimalUsabilityTest, DecimalPythonAPItests, Modified: python/branches/libffi3-branch/Lib/test/test_decorators.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_decorators.py (original) +++ python/branches/libffi3-branch/Lib/test/test_decorators.py Tue Mar 4 15:50:53 2008 @@ -266,8 +266,44 @@ self.assertEqual(bar(), 42) self.assertEqual(actions, expected_actions) +class TestClassDecorators(unittest.TestCase): + + def test_simple(self): + def plain(x): + x.extra = 'Hello' + return x + @plain + class C(object): pass + self.assertEqual(C.extra, 'Hello') + + def test_double(self): + def ten(x): + x.extra = 10 + return x + def add_five(x): + x.extra += 5 + return x + + @add_five + @ten + class C(object): pass + self.assertEqual(C.extra, 15) + + def test_order(self): + def applied_first(x): + x.extra = 'first' + return x + def applied_second(x): + x.extra = 'second' + return x + @applied_second + @applied_first + class C(object): pass + self.assertEqual(C.extra, 'second') + def test_main(): test_support.run_unittest(TestDecorators) + test_support.run_unittest(TestClassDecorators) if __name__=="__main__": test_main() Modified: python/branches/libffi3-branch/Lib/test/test_deque.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_deque.py (original) +++ python/branches/libffi3-branch/Lib/test/test_deque.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,6 @@ from weakref import proxy import copy import cPickle as pickle -from cStringIO import StringIO import random import os Modified: python/branches/libffi3-branch/Lib/test/test_descrtut.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_descrtut.py (original) +++ python/branches/libffi3-branch/Lib/test/test_descrtut.py Tue Mar 4 15:50:53 2008 @@ -209,6 +209,7 @@ '__setitem__', '__setslice__', 
'__str__', + '__subclasshook__', 'append', 'count', 'extend', Modified: python/branches/libffi3-branch/Lib/test/test_dict.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_dict.py (original) +++ python/branches/libffi3-branch/Lib/test/test_dict.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ import unittest from test import test_support -import sys, UserDict, cStringIO, random, string +import UserDict, random, string class DictTest(unittest.TestCase): Modified: python/branches/libffi3-branch/Lib/test/test_dis.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_dis.py (original) +++ python/branches/libffi3-branch/Lib/test/test_dis.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ # Minimal tests for dis module -from test.test_support import verbose, run_unittest +from test.test_support import run_unittest import unittest import sys import dis Modified: python/branches/libffi3-branch/Lib/test/test_doctest.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_doctest.py (original) +++ python/branches/libffi3-branch/Lib/test/test_doctest.py Tue Mar 4 15:50:53 2008 @@ -2418,7 +2418,7 @@ from test import test_doctest test_support.run_doctest(test_doctest, verbosity=True) -import trace, sys, re, StringIO +import trace, sys def test_coverage(coverdir): tracer = trace.Trace(ignoredirs=[sys.prefix, sys.exec_prefix,], trace=0, count=1) Modified: python/branches/libffi3-branch/Lib/test/test_dummy_threading.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_dummy_threading.py (original) +++ python/branches/libffi3-branch/Lib/test/test_dummy_threading.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,6 @@ # Create a bunch of threads, let each do some work, wait until all are done from test.test_support import verbose -import random import dummy_threading as _threading import time Modified: python/branches/libffi3-branch/Lib/test/test_email.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_email.py (original) +++ python/branches/libffi3-branch/Lib/test/test_email.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Copyright (C) 2001,2002 Python Software Foundation # email package unit tests -import unittest # The specific tests now live in Lib/email/test from email.test.test_email import suite from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_email_renamed.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_email_renamed.py (original) +++ python/branches/libffi3-branch/Lib/test/test_email_renamed.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Copyright (C) 2001-2006 Python Software Foundation # email package unit tests -import unittest # The specific tests now live in Lib/email/test from email.test.test_email_renamed import suite from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_eof.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_eof.py (original) +++ python/branches/libffi3-branch/Lib/test/test_eof.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ #! 
/usr/bin/env python """test script for a few new invalid token catches""" -import os import unittest from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_extcall.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_extcall.py (original) +++ python/branches/libffi3-branch/Lib/test/test_extcall.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,4 @@ -from test.test_support import verify, verbose, TestFailed, sortdict +from test.test_support import verify, TestFailed, sortdict from UserList import UserList from UserDict import UserDict Modified: python/branches/libffi3-branch/Lib/test/test_file.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_file.py (original) +++ python/branches/libffi3-branch/Lib/test/test_file.py Tue Mar 4 15:50:53 2008 @@ -322,12 +322,28 @@ finally: os.unlink(TESTFN) +class FileSubclassTests(unittest.TestCase): + + def testExit(self): + # test that exiting with context calls subclass' close + class C(file): + def __init__(self, *args): + self.subclass_closed = False + file.__init__(self, *args) + def close(self): + self.subclass_closed = True + file.close(self) + + with C(TESTFN, 'w') as f: + pass + self.failUnless(f.subclass_closed) + def test_main(): # Historically, these tests have been sloppy about removing TESTFN. # So get rid of it no matter what. try: - run_unittest(AutoFileTests, OtherFileTests) + run_unittest(AutoFileTests, OtherFileTests, FileSubclassTests) finally: if os.path.exists(TESTFN): os.unlink(TESTFN) Modified: python/branches/libffi3-branch/Lib/test/test_fileinput.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_fileinput.py (original) +++ python/branches/libffi3-branch/Lib/test/test_fileinput.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ import unittest from test.test_support import verbose, TESTFN, run_unittest from test.test_support import unlink as safe_unlink -import sys, os, re +import sys, re from StringIO import StringIO from fileinput import FileInput, hook_encoded Modified: python/branches/libffi3-branch/Lib/test/test_format.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_format.py (original) +++ python/branches/libffi3-branch/Lib/test/test_format.py Tue Mar 4 15:50:53 2008 @@ -11,7 +11,7 @@ overflowok = 1 overflowrequired = 0 -def testformat(formatstr, args, output=None): +def testformat(formatstr, args, output=None, limit=None): if verbose: if output: print "%s %% %s =? %s ..." %\ @@ -31,7 +31,18 @@ print 'no' print "overflow expected on %s %% %s" % \ (repr(formatstr), repr(args)) - elif output and result != output: + elif output and limit is None and result != output: + if verbose: + print 'no' + print "%s %% %s == %s != %s" % \ + (repr(formatstr), repr(args), repr(result), repr(output)) + # when 'limit' is specified, it determines how many characters + # must match exactly; lengths must always match. 
+ # ex: limit=5, '12345678' matches '12345___' + # (mainly for floating point format tests for which an exact match + # can't be guaranteed due to rounding and representation errors) + elif output and limit is not None and ( + len(result)!=len(output) or result[:limit]!=output[:limit]): if verbose: print 'no' print "%s %% %s == %s != %s" % \ @@ -98,6 +109,7 @@ testboth("%.30d", big, "123456789012345678901234567890") testboth("%.31d", big, "0123456789012345678901234567890") testboth("%32.31d", big, " 0123456789012345678901234567890") +testboth("%d", float(big), "123456________________________", 6) big = 0x1234567890abcdef12345L # 21 hex digits testboth("%x", big, "1234567890abcdef12345") @@ -135,6 +147,7 @@ testboth("%#+027.23X", big, "+0X0001234567890ABCDEF12345") # same, except no 0 flag testboth("%#+27.23X", big, " +0X001234567890ABCDEF12345") +testboth("%x", float(big), "123456_______________", 6) big = 012345670123456701234567012345670L # 32 octal digits testboth("%o", big, "12345670123456701234567012345670") @@ -175,16 +188,19 @@ testboth("%034.33o", big, "0012345670123456701234567012345670") # base marker shouldn't change that testboth("%0#34.33o", big, "0012345670123456701234567012345670") +testboth("%o", float(big), "123456__________________________", 6) # Some small ints, in both Python int and long flavors). testboth("%d", 42, "42") testboth("%d", -42, "-42") testboth("%d", 42L, "42") testboth("%d", -42L, "-42") +testboth("%d", 42.0, "42") testboth("%#x", 1, "0x1") testboth("%#x", 1L, "0x1") testboth("%#X", 1, "0X1") testboth("%#X", 1L, "0X1") +testboth("%#x", 1.0, "0x1") testboth("%#o", 1, "01") testboth("%#o", 1L, "01") testboth("%#o", 0, "0") @@ -202,11 +218,13 @@ testboth("%x", -0x42, "-42") testboth("%x", 0x42L, "42") testboth("%x", -0x42L, "-42") +testboth("%x", float(0x42), "42") testboth("%o", 042, "42") testboth("%o", -042, "-42") testboth("%o", 042L, "42") testboth("%o", -042L, "-42") +testboth("%o", float(042), "42") # Test exception for unknown format characters if verbose: @@ -235,7 +253,7 @@ test_exc(unicode('abc %\u3000','raw-unicode-escape'), 1, ValueError, "unsupported format character '?' (0x3000) at index 5") -test_exc('%d', '1', TypeError, "int argument required, not str") +test_exc('%d', '1', TypeError, "%d format: a number is required, not str") test_exc('%g', '1', TypeError, "float argument required, not str") test_exc('no format', '1', TypeError, "not all arguments converted during string formatting") Modified: python/branches/libffi3-branch/Lib/test/test_fractions.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_fractions.py (original) +++ python/branches/libffi3-branch/Lib/test/test_fractions.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ """Tests for Lib/fractions.py.""" from decimal import Decimal -from test.test_support import run_unittest, verbose +from test.test_support import run_unittest import math import operator import fractions Modified: python/branches/libffi3-branch/Lib/test/test_ftplib.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_ftplib.py (original) +++ python/branches/libffi3-branch/Lib/test/test_ftplib.py Tue Mar 4 15:50:53 2008 @@ -6,32 +6,47 @@ from unittest import TestCase from test import test_support +server_port = None + +# This function sets the evt 3 times: +# 1) when the connection is ready to be accepted. 
+# 2) when it is safe for the caller to close the connection +# 3) when we have closed the socket def server(evt): + global server_port serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) serv.settimeout(3) serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - serv.bind(("", 9091)) + server_port = test_support.bind_port(serv, "", 9091) serv.listen(5) + # (1) Signal the caller that we are ready to accept the connection. + evt.set() try: conn, addr = serv.accept() except socket.timeout: pass else: conn.send("1 Hola mundo\n") + # (2) Signal the caller that it is safe to close the socket. + evt.set() conn.close() finally: serv.close() + # (3) Signal the caller that we are done. evt.set() class GeneralTests(TestCase): def setUp(self): - ftplib.FTP.port = 9091 self.evt = threading.Event() threading.Thread(target=server, args=(self.evt,)).start() - time.sleep(.1) + # Wait for the server to be ready. + self.evt.wait() + self.evt.clear() + ftplib.FTP.port = server_port def tearDown(self): + # Wait on the closing of the socket (this shouldn't be necessary). self.evt.wait() def testBasic(self): @@ -40,30 +55,35 @@ # connects ftp = ftplib.FTP("localhost") + self.evt.wait() ftp.sock.close() def testTimeoutDefault(self): # default ftp = ftplib.FTP("localhost") self.assertTrue(ftp.sock.gettimeout() is None) + self.evt.wait() ftp.sock.close() def testTimeoutValue(self): # a value ftp = ftplib.FTP("localhost", timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) + self.evt.wait() ftp.sock.close() def testTimeoutConnect(self): ftp = ftplib.FTP() ftp.connect("localhost", timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) + self.evt.wait() ftp.sock.close() def testTimeoutDifferentOrder(self): ftp = ftplib.FTP(timeout=30) ftp.connect("localhost") self.assertEqual(ftp.sock.gettimeout(), 30) + self.evt.wait() ftp.sock.close() def testTimeoutDirectAccess(self): @@ -71,6 +91,7 @@ ftp.timeout = 30 ftp.connect("localhost") self.assertEqual(ftp.sock.gettimeout(), 30) + self.evt.wait() ftp.sock.close() def testTimeoutNone(self): @@ -82,10 +103,10 @@ finally: socket.setdefaulttimeout(previous) self.assertEqual(ftp.sock.gettimeout(), 30) + self.evt.wait() ftp.close() - def test_main(verbose=None): test_support.run_unittest(GeneralTests) Modified: python/branches/libffi3-branch/Lib/test/test_getargs2.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_getargs2.py (original) +++ python/branches/libffi3-branch/Lib/test/test_getargs2.py Tue Mar 4 15:50:53 2008 @@ -1,8 +1,8 @@ import unittest from test import test_support -import sys +from _testcapi import getargs_keywords -import warnings, re +import warnings warnings.filterwarnings("ignore", category=DeprecationWarning, message=".*integer argument expected, got float", @@ -249,9 +249,57 @@ raise ValueError self.assertRaises(TypeError, getargs_tuple, 1, seq()) +class Keywords_TestCase(unittest.TestCase): + def test_positional_args(self): + # using all positional args + self.assertEquals( + getargs_keywords((1,2), 3, (4,(5,6)), (7,8,9), 10), + (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) + ) + def test_mixed_args(self): + # positional and keyword args + self.assertEquals( + getargs_keywords((1,2), 3, (4,(5,6)), arg4=(7,8,9), arg5=10), + (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) + ) + def test_keyword_args(self): + # all keywords + self.assertEquals( + getargs_keywords(arg1=(1,2), arg2=3, arg3=(4,(5,6)), arg4=(7,8,9), arg5=10), + (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) + ) + def test_optional_args(self): + # 
missing optional keyword args, skipping tuples + self.assertEquals( + getargs_keywords(arg1=(1,2), arg2=3, arg5=10), + (1, 2, 3, -1, -1, -1, -1, -1, -1, 10) + ) + def test_required_args(self): + # required arg missing + try: + getargs_keywords(arg1=(1,2)) + except TypeError, err: + self.assertEquals(str(err), "Required argument 'arg2' (pos 2) not found") + else: + self.fail('TypeError should have been raised') + def test_too_many_args(self): + try: + getargs_keywords((1,2),3,(4,(5,6)),(7,8,9),10,111) + except TypeError, err: + self.assertEquals(str(err), "function takes at most 5 arguments (6 given)") + else: + self.fail('TypeError should have been raised') + def test_invalid_keyword(self): + # extraneous keyword arg + try: + getargs_keywords((1,2),3,arg5=10,arg666=666) + except TypeError, err: + self.assertEquals(str(err), "'arg666' is an invalid keyword argument for this function") + else: + self.fail('TypeError should have been raised') def test_main(): - tests = [Signed_TestCase, Unsigned_TestCase, Tuple_TestCase] + tests = [Signed_TestCase, Unsigned_TestCase, Tuple_TestCase, Keywords_TestCase] try: from _testcapi import getargs_L, getargs_K except ImportError: Modified: python/branches/libffi3-branch/Lib/test/test_gl.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_gl.py (original) +++ python/branches/libffi3-branch/Lib/test/test_gl.py Tue Mar 4 15:50:53 2008 @@ -81,7 +81,7 @@ 'xfpt4s', 'xfpti', 'xfpts', 'zbuffer', 'zclear', 'zdraw', 'zfunction', 'zsource', 'zwritemask'] -def main(): +def test_main(): # insure that we at least have an X display before continuing. import os try: @@ -147,4 +147,6 @@ print 'winclose' gl.winclose(w) -main() + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_grammar.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_grammar.py (original) +++ python/branches/libffi3-branch/Lib/test/test_grammar.py Tue Mar 4 15:50:53 2008 @@ -779,6 +779,16 @@ def meth1(self): pass def meth2(self, arg): pass def meth3(self, a1, a2): pass + # decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE + # decorators: decorator+ + # decorated: decorators (classdef | funcdef) + def class_decorator(x): + x.decorated = True + return x + @class_decorator + class G: + pass + self.assertEqual(G.decorated, True) def testListcomps(self): # list comprehension tests Modified: python/branches/libffi3-branch/Lib/test/test_gzip.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_gzip.py (original) +++ python/branches/libffi3-branch/Lib/test/test_gzip.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ import unittest from test import test_support -import sys, os +import os import gzip Modified: python/branches/libffi3-branch/Lib/test/test_heapq.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_heapq.py (original) +++ python/branches/libffi3-branch/Lib/test/test_heapq.py Tue Mar 4 15:50:53 2008 @@ -1,21 +1,32 @@ """Unittests for heapq.""" -from heapq import heappush, heappop, heapify, heapreplace, merge, nlargest, nsmallest import random import unittest from test import test_support import sys +# We do a bit of trickery here to be able to test both the C implementation +# and the Python implementation of the module. 
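The test_bisect.py and test_heapq.py changes in this merge rely on the same import trick to exercise the pure-Python and the C implementations of one module side by side. Restated outside the diff as a runnable snippet, and assuming only that the accelerator is named _heapq and that heapq falls back to its Python code when importing _heapq fails:

    import sys

    # Block the C accelerator: a non-module value in sys.modules makes the
    # "from _heapq import ..." inside heapq fail, so the pure-Python
    # definitions survive.
    sys.modules['_heapq'] = 0
    sys.modules.pop('heapq', None)      # forget any earlier import of heapq

    import heapq as py_heapq            # pure-Python implementation

    # Undo the blocking and re-import to get the accelerated module back.
    del sys.modules['_heapq']
    del sys.modules['heapq']

    import heapq as c_heapq             # C implementation, where available

    data = [5, 1, 4, 2, 3]
    assert py_heapq.nsmallest(2, data) == c_heapq.nsmallest(2, data) == [1, 2]

The test classes in the patch then carry the module under test as a class attribute (module = py_heapq versus module = c_heapq), so each TestCase body is written once and run against both implementations.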
+ +# Make it impossible to import the C implementation anymore. +sys.modules['_heapq'] = 0 +# We must also handle the case that heapq was imported before. +if 'heapq' in sys.modules: + del sys.modules['heapq'] + +# Now we can import the module and get the pure Python implementation. +import heapq as py_heapq + +# Restore everything to normal. +del sys.modules['_heapq'] +del sys.modules['heapq'] + +# This is now the module with the C implementation. +import heapq as c_heapq -def heapiter(heap): - # An iterator returning a heap's elements, smallest-first. - try: - while 1: - yield heappop(heap) - except IndexError: - pass class TestHeap(unittest.TestCase): + module = None def test_push_pop(self): # 1) Push 256 random numbers and pop them off, verifying all's OK. @@ -25,11 +36,11 @@ for i in range(256): item = random.random() data.append(item) - heappush(heap, item) + self.module.heappush(heap, item) self.check_invariant(heap) results = [] while heap: - item = heappop(heap) + item = self.module.heappop(heap) self.check_invariant(heap) results.append(item) data_sorted = data[:] @@ -38,10 +49,10 @@ # 2) Check that the invariant holds for a sorted array self.check_invariant(results) - self.assertRaises(TypeError, heappush, []) + self.assertRaises(TypeError, self.module.heappush, []) try: - self.assertRaises(TypeError, heappush, None, None) - self.assertRaises(TypeError, heappop, None) + self.assertRaises(TypeError, self.module.heappush, None, None) + self.assertRaises(TypeError, self.module.heappop, None) except AttributeError: pass @@ -55,21 +66,29 @@ def test_heapify(self): for size in range(30): heap = [random.random() for dummy in range(size)] - heapify(heap) + self.module.heapify(heap) self.check_invariant(heap) - self.assertRaises(TypeError, heapify, None) + self.assertRaises(TypeError, self.module.heapify, None) def test_naive_nbest(self): data = [random.randrange(2000) for i in range(1000)] heap = [] for item in data: - heappush(heap, item) + self.module.heappush(heap, item) if len(heap) > 10: - heappop(heap) + self.module.heappop(heap) heap.sort() self.assertEqual(heap, sorted(data)[-10:]) + def heapiter(self, heap): + # An iterator returning a heap's elements, smallest-first. + try: + while 1: + yield self.module.heappop(heap) + except IndexError: + pass + def test_nbest(self): # Less-naive "N-best" algorithm, much faster (if len(data) is big # enough ) than sorting all of data. However, if we had a max @@ -78,15 +97,15 @@ # (10 log-time steps). 
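The comment above describes the bounded-heap "N-best" pattern; as a standalone restatement (the name nbest and the sample sizes below are illustrative, not part of the patch): keep a min-heap of the n best items seen so far, so a newcomer only has to beat heap[0], and heapreplace() does the pop-and-push in one O(log n) step.

    import heapq
    import random

    def nbest(data, n):
        # Min-heap of the n largest items seen so far; heap[0] is the
        # weakest of the current winners.
        heap = data[:n]
        heapq.heapify(heap)
        for item in data[n:]:
            if item > heap[0]:
                heapq.heapreplace(heap, item)
        return sorted(heap)

    data = [random.randrange(2000) for i in range(1000)]
    assert nbest(data, 10) == sorted(data)[-10:]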
data = [random.randrange(2000) for i in range(1000)] heap = data[:10] - heapify(heap) + self.module.heapify(heap) for item in data[10:]: if item > heap[0]: # this gets rarer the longer we run - heapreplace(heap, item) - self.assertEqual(list(heapiter(heap)), sorted(data)[-10:]) + self.module.heapreplace(heap, item) + self.assertEqual(list(self.heapiter(heap)), sorted(data)[-10:]) - self.assertRaises(TypeError, heapreplace, None) - self.assertRaises(TypeError, heapreplace, None, None) - self.assertRaises(IndexError, heapreplace, [], None) + self.assertRaises(TypeError, self.module.heapreplace, None) + self.assertRaises(TypeError, self.module.heapreplace, None, None) + self.assertRaises(IndexError, self.module.heapreplace, [], None) def test_heapsort(self): # Exercise everything with repeated heapsort checks @@ -95,12 +114,12 @@ data = [random.randrange(25) for i in range(size)] if trial & 1: # Half of the time, use heapify heap = data[:] - heapify(heap) + self.module.heapify(heap) else: # The rest of the time, use heappush heap = [] for item in data: - heappush(heap, item) - heap_sorted = [heappop(heap) for i in range(size)] + self.module.heappush(heap, item) + heap_sorted = [self.module.heappop(heap) for i in range(size)] self.assertEqual(heap_sorted, sorted(data)) def test_merge(self): @@ -108,8 +127,8 @@ for i in xrange(random.randrange(5)): row = sorted(random.randrange(1000) for j in range(random.randrange(10))) inputs.append(row) - self.assertEqual(sorted(chain(*inputs)), list(merge(*inputs))) - self.assertEqual(list(merge()), []) + self.assertEqual(sorted(chain(*inputs)), list(self.module.merge(*inputs))) + self.assertEqual(list(self.module.merge()), []) def test_merge_stability(self): class Int(int): @@ -123,25 +142,32 @@ inputs[stream].append(obj) for stream in inputs: stream.sort() - result = [i.pair for i in merge(*inputs)] + result = [i.pair for i in self.module.merge(*inputs)] self.assertEqual(result, sorted(result)) def test_nsmallest(self): data = [(random.randrange(2000), i) for i in range(1000)] for f in (None, lambda x: x[0] * 547 % 2000): for n in (0, 1, 2, 10, 100, 400, 999, 1000, 1100): - self.assertEqual(nsmallest(n, data), sorted(data)[:n]) - self.assertEqual(nsmallest(n, data, key=f), + self.assertEqual(self.module.nsmallest(n, data), sorted(data)[:n]) + self.assertEqual(self.module.nsmallest(n, data, key=f), sorted(data, key=f)[:n]) def test_nlargest(self): data = [(random.randrange(2000), i) for i in range(1000)] for f in (None, lambda x: x[0] * 547 % 2000): for n in (0, 1, 2, 10, 100, 400, 999, 1000, 1100): - self.assertEqual(nlargest(n, data), sorted(data, reverse=True)[:n]) - self.assertEqual(nlargest(n, data, key=f), + self.assertEqual(self.module.nlargest(n, data), + sorted(data, reverse=True)[:n]) + self.assertEqual(self.module.nlargest(n, data, key=f), sorted(data, key=f, reverse=True)[:n]) +class TestHeapPython(TestHeap): + module = py_heapq + +class TestHeapC(TestHeap): + module = c_heapq + #============================================================================== @@ -238,44 +264,49 @@ return chain(imap(lambda x:x, R(Ig(G(seqn))))) class TestErrorHandling(unittest.TestCase): + # only for C implementation + module = c_heapq def test_non_sequence(self): - for f in (heapify, heappop): + for f in (self.module.heapify, self.module.heappop): self.assertRaises(TypeError, f, 10) - for f in (heappush, heapreplace, nlargest, nsmallest): + for f in (self.module.heappush, self.module.heapreplace, + self.module.nlargest, self.module.nsmallest): 
self.assertRaises(TypeError, f, 10, 10) def test_len_only(self): - for f in (heapify, heappop): + for f in (self.module.heapify, self.module.heappop): self.assertRaises(TypeError, f, LenOnly()) - for f in (heappush, heapreplace): + for f in (self.module.heappush, self.module.heapreplace): self.assertRaises(TypeError, f, LenOnly(), 10) - for f in (nlargest, nsmallest): + for f in (self.module.nlargest, self.module.nsmallest): self.assertRaises(TypeError, f, 2, LenOnly()) def test_get_only(self): - for f in (heapify, heappop): + for f in (self.module.heapify, self.module.heappop): self.assertRaises(TypeError, f, GetOnly()) - for f in (heappush, heapreplace): + for f in (self.module.heappush, self.module.heapreplace): self.assertRaises(TypeError, f, GetOnly(), 10) - for f in (nlargest, nsmallest): + for f in (self.module.nlargest, self.module.nsmallest): self.assertRaises(TypeError, f, 2, GetOnly()) def test_get_only(self): seq = [CmpErr(), CmpErr(), CmpErr()] - for f in (heapify, heappop): + for f in (self.module.heapify, self.module.heappop): self.assertRaises(ZeroDivisionError, f, seq) - for f in (heappush, heapreplace): + for f in (self.module.heappush, self.module.heapreplace): self.assertRaises(ZeroDivisionError, f, seq, 10) - for f in (nlargest, nsmallest): + for f in (self.module.nlargest, self.module.nsmallest): self.assertRaises(ZeroDivisionError, f, 2, seq) def test_arg_parsing(self): - for f in (heapify, heappop, heappush, heapreplace, nlargest, nsmallest): + for f in (self.module.heapify, self.module.heappop, + self.module.heappush, self.module.heapreplace, + self.module.nlargest, self.module.nsmallest): self.assertRaises(TypeError, f, 10) def test_iterable_args(self): - for f in (nlargest, nsmallest): + for f in (self.module.nlargest, self.module.nsmallest): for s in ("123", "", range(1000), ('do', 1.2), xrange(2000,2200,5)): for g in (G, I, Ig, L, R): self.assertEqual(f(2, g(s)), f(2,s)) @@ -284,15 +315,14 @@ self.assertRaises(TypeError, f, 2, N(s)) self.assertRaises(ZeroDivisionError, f, 2, E(s)) + #============================================================================== def test_main(verbose=None): from types import BuiltinFunctionType - test_classes = [TestHeap] - if isinstance(heapify, BuiltinFunctionType): - test_classes.append(TestErrorHandling) + test_classes = [TestHeapPython, TestHeapC, TestErrorHandling] test_support.run_unittest(*test_classes) # verify reference counting Modified: python/branches/libffi3-branch/Lib/test/test_htmlparser.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_htmlparser.py (original) +++ python/branches/libffi3-branch/Lib/test/test_htmlparser.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ import HTMLParser import pprint -import sys import unittest from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_httplib.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_httplib.py (original) +++ python/branches/libffi3-branch/Lib/test/test_httplib.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ import httplib import StringIO -import sys import socket from unittest import TestCase @@ -157,6 +156,42 @@ conn.request('GET', '/foo', body) self.assertTrue(sock.data.startswith(expected)) + def test_chunked(self): + chunked_start = ( + 'HTTP/1.1 200 OK\r\n' + 'Transfer-Encoding: chunked\r\n\r\n' + 'a\r\n' + 'hello worl\r\n' + '1\r\n' + 'd\r\n' + ) + sock = 
FakeSocket(chunked_start + '0\r\n') + resp = httplib.HTTPResponse(sock, method="GET") + resp.begin() + self.assertEquals(resp.read(), 'hello world') + resp.close() + + for x in ('', 'foo\r\n'): + sock = FakeSocket(chunked_start + x) + resp = httplib.HTTPResponse(sock, method="GET") + resp.begin() + try: + resp.read() + except httplib.IncompleteRead, i: + self.assertEquals(i.partial, 'hello world') + else: + self.fail('IncompleteRead expected') + finally: + resp.close() + + def test_negative_content_length(self): + sock = FakeSocket('HTTP/1.1 200 OK\r\nContent-Length: -1\r\n\r\nHello\r\n') + resp = httplib.HTTPResponse(sock, method="GET") + resp.begin() + self.assertEquals(resp.read(), 'Hello\r\n') + resp.close() + + class OfflineTest(TestCase): def test_responses(self): self.assertEquals(httplib.responses[httplib.NOT_FOUND], "Not Found") Modified: python/branches/libffi3-branch/Lib/test/test_imageop.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_imageop.py (original) +++ python/branches/libffi3-branch/Lib/test/test_imageop.py Tue Mar 4 15:50:53 2008 @@ -11,7 +11,7 @@ import warnings -def main(): +def test_main(): # Create binary test files uu.decode(get_qualified_path('testrgb'+os.extsep+'uue'), 'test'+os.extsep+'rgb') @@ -145,4 +145,5 @@ return fullname return name -main() +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_imgfile.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_imgfile.py (original) +++ python/branches/libffi3-branch/Lib/test/test_imgfile.py Tue Mar 4 15:50:53 2008 @@ -6,23 +6,9 @@ from test.test_support import verbose, unlink, findfile -import imgfile, uu, os +import imgfile, uu -def main(): - - uu.decode(findfile('testrgb.uue'), 'test.rgb') - uu.decode(findfile('greyrgb.uue'), 'greytest.rgb') - - # Test a 3 byte color image - testimage('test.rgb') - - # Test a 1 byte greyscale image - testimage('greytest.rgb') - - unlink('test.rgb') - unlink('greytest.rgb') - def testimage(name): """Run through the imgfile's battery of possible methods on the image passed in name. 
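
Both the test_imageop and test_imgfile hunks in this message make the same kind of change: a module-level main() that used to run as a side effect of importing the file becomes a test_main() entry point guarded by if __name__ == '__main__'. A minimal sketch of the resulting layout is below; the class and test names are placeholders rather than anything from these files, and test_support.run_unittest is the same helper the other converted tests in this branch call.

    from test import test_support
    import unittest

    class ExampleTests(unittest.TestCase):
        # Placeholder test case; stands in for the real test functions.
        def test_something(self):
            self.assertEqual(1 + 1, 2)

    def test_main():
        # The regression-test driver imports the module and calls
        # test_main() itself, so nothing runs at import time.
        test_support.run_unittest(ExampleTests)

    if __name__ == '__main__':
        test_main()

With this shape, importing the module is free of side effects and running the file directly still executes the tests.
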
@@ -113,4 +99,20 @@ os.unlink(outputfile) -main() + +def test_main(): + + uu.decode(findfile('testrgb.uue'), 'test.rgb') + uu.decode(findfile('greyrgb.uue'), 'greytest.rgb') + + # Test a 3 byte color image + testimage('test.rgb') + + # Test a 1 byte greyscale image + testimage('greytest.rgb') + + unlink('test.rgb') + unlink('greytest.rgb') + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_imp.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_imp.py (original) +++ python/branches/libffi3-branch/Lib/test/test_imp.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import imp -import thread import unittest from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_index.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_index.py (original) +++ python/branches/libffi3-branch/Lib/test/test_index.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ import unittest from test import test_support import operator -import sys from sys import maxint maxsize = test_support.MAX_Py_ssize_t minsize = -maxsize-1 Modified: python/branches/libffi3-branch/Lib/test/test_inspect.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_inspect.py (original) +++ python/branches/libffi3-branch/Lib/test/test_inspect.py Tue Mar 4 15:50:53 2008 @@ -50,11 +50,11 @@ yield i class TestPredicates(IsTestBase): - def test_fifteen(self): + def test_sixteen(self): count = len(filter(lambda x:x.startswith('is'), dir(inspect))) # This test is here for remember you to update Doc/library/inspect.rst - # which claims there are 15 such functions - expected = 15 + # which claims there are 16 such functions + expected = 16 err_msg = "There are %d (not %d) is* functions" % (count, expected) self.assertEqual(count, expected, err_msg) Modified: python/branches/libffi3-branch/Lib/test/test_itertools.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_itertools.py (original) +++ python/branches/libffi3-branch/Lib/test/test_itertools.py Tue Mar 4 15:50:53 2008 @@ -40,13 +40,150 @@ 'Convenience function for partially consuming a long of infinite iterable' return list(islice(seq, n)) +def prod(iterable): + return reduce(operator.mul, iterable, 1) + +def fact(n): + 'Factorial' + return prod(range(1, n+1)) + +def permutations(iterable, r=None): + # XXX use this until real permutations code is added + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + for indices in product(range(n), repeat=r): + if len(set(indices)) == r: + yield tuple(pool[i] for i in indices) + class TestBasicOps(unittest.TestCase): def test_chain(self): self.assertEqual(list(chain('abc', 'def')), list('abcdef')) self.assertEqual(list(chain('abc')), list('abc')) self.assertEqual(list(chain('')), []) self.assertEqual(take(4, chain('abc', 'def')), list('abcd')) - self.assertRaises(TypeError, chain, 2, 3) + self.assertRaises(TypeError, list,chain(2, 3)) + + def test_chain_from_iterable(self): + self.assertEqual(list(chain.from_iterable(['abc', 'def'])), list('abcdef')) + self.assertEqual(list(chain.from_iterable(['abc'])), list('abc')) + self.assertEqual(list(chain.from_iterable([''])), []) + self.assertEqual(take(4, chain.from_iterable(['abc', 'def'])), list('abcd')) + 
self.assertRaises(TypeError, list, chain.from_iterable([2, 3])) + + def test_combinations(self): + self.assertRaises(TypeError, combinations, 'abc') # missing r argument + self.assertRaises(TypeError, combinations, 'abc', 2, 1) # too many arguments + self.assertRaises(TypeError, combinations, None) # pool is not iterable + self.assertRaises(ValueError, combinations, 'abc', -2) # r is negative + self.assertRaises(ValueError, combinations, 'abc', 32) # r is too big + self.assertEqual(list(combinations(range(4), 3)), + [(0,1,2), (0,1,3), (0,2,3), (1,2,3)]) + + def combinations1(iterable, r): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + indices = range(r) + yield tuple(pool[i] for i in indices) + while 1: + for i in reversed(range(r)): + if indices[i] != i + n - r: + break + else: + return + indices[i] += 1 + for j in range(i+1, r): + indices[j] = indices[j-1] + 1 + yield tuple(pool[i] for i in indices) + + def combinations2(iterable, r): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + for indices in permutations(range(n), r): + if sorted(indices) == list(indices): + yield tuple(pool[i] for i in indices) + + for n in range(7): + values = [5*x-12 for x in range(n)] + for r in range(n+1): + result = list(combinations(values, r)) + self.assertEqual(len(result), fact(n) / fact(r) / fact(n-r)) # right number of combs + self.assertEqual(len(result), len(set(result))) # no repeats + self.assertEqual(result, sorted(result)) # lexicographic order + for c in result: + self.assertEqual(len(c), r) # r-length combinations + self.assertEqual(len(set(c)), r) # no duplicate elements + self.assertEqual(list(c), sorted(c)) # keep original ordering + self.assert_(all(e in values for e in c)) # elements taken from input iterable + self.assertEqual(result, list(combinations1(values, r))) # matches first pure python version + self.assertEqual(result, list(combinations2(values, r))) # matches first pure python version + + # Test implementation detail: tuple re-use + self.assertEqual(len(set(map(id, combinations('abcde', 3)))), 1) + self.assertNotEqual(len(set(map(id, list(combinations('abcde', 3))))), 1) + + def test_permutations(self): + self.assertRaises(TypeError, permutations) # too few arguments + self.assertRaises(TypeError, permutations, 'abc', 2, 1) # too many arguments +## self.assertRaises(TypeError, permutations, None) # pool is not iterable +## self.assertRaises(ValueError, permutations, 'abc', -2) # r is negative +## self.assertRaises(ValueError, permutations, 'abc', 32) # r is too big + self.assertEqual(list(permutations(range(3), 2)), + [(0,1), (0,2), (1,0), (1,2), (2,0), (2,1)]) + + def permutations1(iterable, r=None): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + indices = range(n) + cycles = range(n-r+1, n+1)[::-1] + yield tuple(pool[i] for i in indices[:r]) + while n: + for i in reversed(range(r)): + cycles[i] -= 1 + if cycles[i] == 0: + indices[i:] = indices[i+1:] + indices[i:i+1] + cycles[i] = n - i + else: + j = cycles[i] + indices[i], indices[-j] = indices[-j], indices[i] + yield tuple(pool[i] for i in indices[:r]) + break + else: + return + + def permutations2(iterable, r=None): + 'Pure python version shown in the docs' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + for indices in product(range(n), repeat=r): + if len(set(indices)) == r: + yield tuple(pool[i] for i in indices) + + for n in range(7): + values = [5*x-12 for x in 
range(n)] + for r in range(n+1): + result = list(permutations(values, r)) + self.assertEqual(len(result), fact(n) / fact(n-r)) # right number of perms + self.assertEqual(len(result), len(set(result))) # no repeats + self.assertEqual(result, sorted(result)) # lexicographic order + for p in result: + self.assertEqual(len(p), r) # r-length permutations + self.assertEqual(len(set(p)), r) # no duplicate elements + self.assert_(all(e in values for e in p)) # elements taken from input iterable + self.assertEqual(result, list(permutations1(values, r))) # matches first pure python version + self.assertEqual(result, list(permutations2(values, r))) # matches first pure python version + if r == n: + self.assertEqual(result, list(permutations(values, None))) # test r as None + self.assertEqual(result, list(permutations(values))) # test default r + + # Test implementation detail: tuple re-use +## self.assertEqual(len(set(map(id, permutations('abcde', 3)))), 1) + self.assertNotEqual(len(set(map(id, list(permutations('abcde', 3))))), 1) def test_count(self): self.assertEqual(zip('abc',count()), [('a', 0), ('b', 1), ('c', 2)]) @@ -171,6 +308,7 @@ def test_ifilter(self): self.assertEqual(list(ifilter(isEven, range(6))), [0,2,4]) self.assertEqual(list(ifilter(None, [0,1,0,2,0])), [1,2]) + self.assertEqual(list(ifilter(bool, [0,1,0,2,0])), [1,2]) self.assertEqual(take(4, ifilter(isEven, count())), [0,2,4,6]) self.assertRaises(TypeError, ifilter) self.assertRaises(TypeError, ifilter, lambda x:x) @@ -181,6 +319,7 @@ def test_ifilterfalse(self): self.assertEqual(list(ifilterfalse(isEven, range(6))), [1,3,5]) self.assertEqual(list(ifilterfalse(None, [0,1,0,2,0])), [0,0,0]) + self.assertEqual(list(ifilterfalse(bool, [0,1,0,2,0])), [0,0,0]) self.assertEqual(take(4, ifilterfalse(isEven, count())), [1,3,5,7]) self.assertRaises(TypeError, ifilterfalse) self.assertRaises(TypeError, ifilterfalse, lambda x:x) @@ -253,6 +392,34 @@ ids = map(id, list(izip_longest('abc', 'def'))) self.assertEqual(len(dict.fromkeys(ids)), len(ids)) + def test_product(self): + for args, result in [ + ([], [()]), # zero iterables + (['ab'], [('a',), ('b',)]), # one iterable + ([range(2), range(3)], [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)]), # two iterables + ([range(0), range(2), range(3)], []), # first iterable with zero length + ([range(2), range(0), range(3)], []), # middle iterable with zero length + ([range(2), range(3), range(0)], []), # last iterable with zero length + ]: + self.assertEqual(list(product(*args)), result) + for r in range(4): + self.assertEqual(list(product(*(args*r))), + list(product(*args, **dict(repeat=r)))) + self.assertEqual(len(list(product(*[range(7)]*6))), 7**6) + self.assertRaises(TypeError, product, range(6), None) + argtypes = ['', 'abc', '', xrange(0), xrange(4), dict(a=1, b=2, c=3), + set('abcdefg'), range(11), tuple(range(13))] + for i in range(100): + args = [random.choice(argtypes) for j in range(random.randrange(5))] + expected_len = prod(map(len, args)) + self.assertEqual(len(list(product(*args))), expected_len) + args = map(iter, args) + self.assertEqual(len(list(product(*args))), expected_len) + + # Test implementation detail: tuple re-use + self.assertEqual(len(set(map(id, product('abc', 'def')))), 1) + self.assertNotEqual(len(set(map(id, list(product('abc', 'def'))))), 1) + def test_repeat(self): self.assertEqual(zip(xrange(3),repeat('a')), [(0, 'a'), (1, 'a'), (2, 'a')]) @@ -619,10 +786,16 @@ for g in (G, I, Ig, S, L, R): self.assertEqual(list(chain(g(s))), list(g(s))) 
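
The assertion rewritten just below, from assertRaises(TypeError, chain, X(s)) to assertRaises(TypeError, list, chain(X(s))), matches the earlier change for chain(2, 3): the bad argument is apparently no longer rejected when the iterator is constructed, only when it is consumed. A small self-contained sketch of that testing idiom (the class name is a placeholder, not part of the diff):

    import unittest
    from itertools import chain

    class LazyErrorSketch(unittest.TestCase):
        def test_error_surfaces_at_iteration(self):
            # Building the chain with non-iterable arguments does not raise;
            # the TypeError is expected where the values are consumed.
            it = chain(2, 3)
            self.assertRaises(TypeError, list, it)

    if __name__ == '__main__':
        unittest.main()

Wrapping the consumption in list() keeps assertRaises pointed at the call that is actually expected to fail.
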
self.assertEqual(list(chain(g(s), g(s))), list(g(s))+list(g(s))) - self.assertRaises(TypeError, chain, X(s)) + self.assertRaises(TypeError, list, chain(X(s))) self.assertRaises(TypeError, list, chain(N(s))) self.assertRaises(ZeroDivisionError, list, chain(E(s))) + def test_product(self): + for s in ("123", "", range(1000), ('do', 1.2), xrange(2000,2200,5)): + self.assertRaises(TypeError, product, X(s)) + self.assertRaises(TypeError, product, N(s)) + self.assertRaises(ZeroDivisionError, product, E(s)) + def test_cycle(self): for s in ("123", "", range(1000), ('do', 1.2), xrange(2000,2200,5)): for g in (G, I, Ig, S, L, R): Modified: python/branches/libffi3-branch/Lib/test/test_largefile.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_largefile.py (original) +++ python/branches/libffi3-branch/Lib/test/test_largefile.py Tue Mar 4 15:50:53 2008 @@ -24,13 +24,17 @@ class TestCase(unittest.TestCase): """Test that each file function works as expected for a large (i.e. > 2GB, do we have to check > 4GB) files. + + NOTE: the order of execution of the test methods is important! test_seek + must run first to create the test file. File cleanup must also be handled + outside the test instances because of this. + """ def test_seek(self): if verbose: print 'create large file via seek (may be sparse file) ...' - f = open(TESTFN, 'wb') - try: + with open(TESTFN, 'wb') as f: f.write('z') f.seek(0) f.seek(size) @@ -39,8 +43,6 @@ if verbose: print 'check file size with os.fstat' self.assertEqual(os.fstat(f.fileno())[stat.ST_SIZE], size+1) - finally: - f.close() def test_osstat(self): if verbose: @@ -50,8 +52,7 @@ def test_seek_read(self): if verbose: print 'play around with seek() and read() with the built largefile' - f = open(TESTFN, 'rb') - try: + with open(TESTFN, 'rb') as f: self.assertEqual(f.tell(), 0) self.assertEqual(f.read(1), 'z') self.assertEqual(f.tell(), 1) @@ -80,14 +81,11 @@ f.seek(-size-1, 1) self.assertEqual(f.read(1), 'z') self.assertEqual(f.tell(), 1) - finally: - f.close() def test_lseek(self): if verbose: print 'play around with os.lseek() with the built largefile' - f = open(TESTFN, 'rb') - try: + with open(TESTFN, 'rb') as f: self.assertEqual(os.lseek(f.fileno(), 0, 0), 0) self.assertEqual(os.lseek(f.fileno(), 42, 0), 42) self.assertEqual(os.lseek(f.fileno(), 42, 1), 84) @@ -98,18 +96,15 @@ self.assertEqual(os.lseek(f.fileno(), size, 0), size) # the 'a' that was written at the end of file above self.assertEqual(f.read(1), 'a') - finally: - f.close() def test_truncate(self): if verbose: print 'try truncate' - f = open(TESTFN, 'r+b') - # this is already decided before start running the test suite - # but we do it anyway for extra protection - if not hasattr(f, 'truncate'): - raise TestSkipped, "open().truncate() not available on this system" - try: + with open(TESTFN, 'r+b') as f: + # this is already decided before start running the test suite + # but we do it anyway for extra protection + if not hasattr(f, 'truncate'): + raise TestSkipped, "open().truncate() not available on this system" f.seek(0, 2) # else we've lost track of the true size self.assertEqual(f.tell(), size+1) @@ -135,11 +130,9 @@ f.truncate(1) self.assertEqual(f.tell(), 0) # else pointer moved self.assertEqual(len(f.read()), 1) # else wasn't truncated - finally: - f.close() -def main_test(): +def test_main(): # On Windows and Mac OSX this test comsumes large resources; It # takes a long time to build the >2GB file and takes >2GB of disk # 
space therefore the resource must be enabled to run this test. @@ -170,14 +163,15 @@ suite.addTest(TestCase('test_osstat')) suite.addTest(TestCase('test_seek_read')) suite.addTest(TestCase('test_lseek')) - f = open(TESTFN, 'w') - if hasattr(f, 'truncate'): - suite.addTest(TestCase('test_truncate')) - f.close() - unlink(TESTFN) - run_unittest(suite) + with open(TESTFN, 'w') as f: + if hasattr(f, 'truncate'): + suite.addTest(TestCase('test_truncate')) unlink(TESTFN) + try: + run_unittest(suite) + finally: + unlink(TESTFN) if __name__ == '__main__': - main_test() + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_linuxaudiodev.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_linuxaudiodev.py (original) +++ python/branches/libffi3-branch/Lib/test/test_linuxaudiodev.py Tue Mar 4 15:50:53 2008 @@ -4,13 +4,9 @@ from test.test_support import findfile, TestSkipped, run_unittest import errno -import fcntl import linuxaudiodev -import os import sys -import select import sunaudio -import time import audioop import unittest Modified: python/branches/libffi3-branch/Lib/test/test_list.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_list.py (original) +++ python/branches/libffi3-branch/Lib/test/test_list.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,3 @@ -import unittest import sys from test import test_support, list_tests Modified: python/branches/libffi3-branch/Lib/test/test_logging.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_logging.py (original) +++ python/branches/libffi3-branch/Lib/test/test_logging.py Tue Mar 4 15:50:53 2008 @@ -1,2016 +1,702 @@ #!/usr/bin/env python -""" -Test 0 -====== - -Some preliminaries: ->>> import sys ->>> import logging ->>> def nextmessage(): -... global msgcount -... rv = "Message %d" % msgcount -... msgcount = msgcount + 1 -... return rv - -Set a few variables, then go through the logger autoconfig and set the default threshold. ->>> msgcount = 0 ->>> FINISH_UP = "Finish up, it's closing time. Messages should bear numbers 0 through 24." ->>> logging.basicConfig(stream=sys.stdout) ->>> rootLogger = logging.getLogger("") ->>> rootLogger.setLevel(logging.DEBUG) - -Now, create a bunch of loggers, and set their thresholds. ->>> ERR = logging.getLogger("ERR0") ->>> ERR.setLevel(logging.ERROR) ->>> INF = logging.getLogger("INFO0") ->>> INF.setLevel(logging.INFO) ->>> INF_ERR = logging.getLogger("INFO0.ERR") ->>> INF_ERR.setLevel(logging.ERROR) ->>> DEB = logging.getLogger("DEB0") ->>> DEB.setLevel(logging.DEBUG) ->>> INF_UNDEF = logging.getLogger("INFO0.UNDEF") ->>> INF_ERR_UNDEF = logging.getLogger("INFO0.ERR.UNDEF") ->>> UNDEF = logging.getLogger("UNDEF0") ->>> GRANDCHILD = logging.getLogger("INFO0.BADPARENT.UNDEF") ->>> CHILD = logging.getLogger("INFO0.BADPARENT") - - -And finally, run all the tests. 
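
The doctest calls that follow all exercise one rule: a record is emitted only when its level is at or above the logger's effective level, which is why ERR (set to ERROR) later drops its warn/info/debug messages while DEB (set to DEBUG) emits everything. A tiny stand-alone illustration of that rule, using a placeholder logger name rather than the names in the doctest:

    import logging, sys

    logging.basicConfig(stream=sys.stdout)
    err = logging.getLogger("sketch.ERR")
    err.setLevel(logging.ERROR)

    err.error("emitted: ERROR meets the ERROR threshold")
    err.warning("dropped: WARNING is below the ERROR threshold")  # no output

The handler-level and filter variations later in the doctest narrow the output further in the same way, by rejecting records after the logger itself has accepted them.
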
- ->>> ERR.log(logging.FATAL, nextmessage()) -CRITICAL:ERR0:Message 0 - ->>> ERR.error(nextmessage()) -ERROR:ERR0:Message 1 - ->>> INF.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0:Message 2 - ->>> INF.error(nextmessage()) -ERROR:INFO0:Message 3 - ->>> INF.warn(nextmessage()) -WARNING:INFO0:Message 4 - ->>> INF.info(nextmessage()) -INFO:INFO0:Message 5 - ->>> INF_UNDEF.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.UNDEF:Message 6 - ->>> INF_UNDEF.error(nextmessage()) -ERROR:INFO0.UNDEF:Message 7 - ->>> INF_UNDEF.warn (nextmessage()) -WARNING:INFO0.UNDEF:Message 8 - ->>> INF_UNDEF.info (nextmessage()) -INFO:INFO0.UNDEF:Message 9 - ->>> INF_ERR.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.ERR:Message 10 - ->>> INF_ERR.error(nextmessage()) -ERROR:INFO0.ERR:Message 11 - ->>> INF_ERR_UNDEF.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.ERR.UNDEF:Message 12 - ->>> INF_ERR_UNDEF.error(nextmessage()) -ERROR:INFO0.ERR.UNDEF:Message 13 - ->>> DEB.log(logging.FATAL, nextmessage()) -CRITICAL:DEB0:Message 14 - ->>> DEB.error(nextmessage()) -ERROR:DEB0:Message 15 - ->>> DEB.warn (nextmessage()) -WARNING:DEB0:Message 16 - ->>> DEB.info (nextmessage()) -INFO:DEB0:Message 17 - ->>> DEB.debug(nextmessage()) -DEBUG:DEB0:Message 18 - ->>> UNDEF.log(logging.FATAL, nextmessage()) -CRITICAL:UNDEF0:Message 19 - ->>> UNDEF.error(nextmessage()) -ERROR:UNDEF0:Message 20 - ->>> UNDEF.warn (nextmessage()) -WARNING:UNDEF0:Message 21 - ->>> UNDEF.info (nextmessage()) -INFO:UNDEF0:Message 22 - ->>> GRANDCHILD.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.BADPARENT.UNDEF:Message 23 - ->>> CHILD.log(logging.FATAL, nextmessage()) -CRITICAL:INFO0.BADPARENT:Message 24 - -These should not log: - ->>> ERR.warn(nextmessage()) - ->>> ERR.info(nextmessage()) - ->>> ERR.debug(nextmessage()) - ->>> INF.debug(nextmessage()) - ->>> INF_UNDEF.debug(nextmessage()) - ->>> INF_ERR.warn(nextmessage()) - ->>> INF_ERR.info(nextmessage()) - ->>> INF_ERR.debug(nextmessage()) - ->>> INF_ERR_UNDEF.warn(nextmessage()) - ->>> INF_ERR_UNDEF.info(nextmessage()) - ->>> INF_ERR_UNDEF.debug(nextmessage()) - ->>> INF.info(FINISH_UP) -INFO:INFO0:Finish up, it's closing time. Messages should bear numbers 0 through 24. - -Test 1 -====== - ->>> import sys, logging ->>> logging.basicConfig(stream=sys.stdout) - -First, we define our levels. There can be as many as you want - the only -limitations are that they should be integers, the lowest should be > 0 and -larger values mean less information being logged. If you need specific -level values which do not fit into these limitations, you can use a -mapping dictionary to convert between your application levels and the -logging system. - ->>> SILENT = 10 ->>> TACITURN = 9 ->>> TERSE = 8 ->>> EFFUSIVE = 7 ->>> SOCIABLE = 6 ->>> VERBOSE = 5 ->>> TALKATIVE = 4 ->>> GARRULOUS = 3 ->>> CHATTERBOX = 2 ->>> BORING = 1 ->>> LEVEL_RANGE = range(BORING, SILENT + 1) - - -Next, we define names for our levels. You don't need to do this - in which - case the system will use "Level n" to denote the text for the level. -' - - ->>> my_logging_levels = { -... SILENT : 'SILENT', -... TACITURN : 'TACITURN', -... TERSE : 'TERSE', -... EFFUSIVE : 'EFFUSIVE', -... SOCIABLE : 'SOCIABLE', -... VERBOSE : 'VERBOSE', -... TALKATIVE : 'TALKATIVE', -... GARRULOUS : 'GARRULOUS', -... CHATTERBOX : 'CHATTERBOX', -... BORING : 'BORING', -... } - - -Now, to demonstrate filtering: suppose for some perverse reason we only -want to print out all except GARRULOUS messages. We create a filter for -this purpose... 
- ->>> class SpecificLevelFilter(logging.Filter): -... def __init__(self, lvl): -... self.level = lvl -... -... def filter(self, record): -... return self.level != record.levelno - ->>> class GarrulousFilter(SpecificLevelFilter): -... def __init__(self): -... SpecificLevelFilter.__init__(self, GARRULOUS) - - -Now, demonstrate filtering at the logger. This time, use a filter -which excludes SOCIABLE and TACITURN messages. Note that GARRULOUS events -are still excluded. - - ->>> class VerySpecificFilter(logging.Filter): -... def filter(self, record): -... return record.levelno not in [SOCIABLE, TACITURN] - ->>> SHOULD1 = "This should only be seen at the '%s' logging level (or lower)" - -Configure the logger, and tell the logging system to associate names with our levels. ->>> logging.basicConfig(stream=sys.stdout) ->>> rootLogger = logging.getLogger("") ->>> rootLogger.setLevel(logging.DEBUG) ->>> for lvl in my_logging_levels.keys(): -... logging.addLevelName(lvl, my_logging_levels[lvl]) ->>> log = logging.getLogger("") ->>> hdlr = log.handlers[0] ->>> from test_logging import message - -Set the logging level to each different value and call the utility -function to log events. In the output, you should see that each time -round the loop, the number of logging events which are actually output -decreases. - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) -BORING:root:This should only be seen at the '1' logging level (or lower) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) -GARRULOUS:root:This should only be seen at the '3' logging level (or lower) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) -GARRULOUS:root:This should only be seen at the '3' logging level (or lower) - ->>> log.log(4, "This should only be 
seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) -GARRULOUS:root:This should only be seen at the '3' logging level (or lower) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the 
'%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) 
- ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or 
lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> hdlr.setLevel(SOCIABLE) - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> 
log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' 
logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level 
(or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> - ->>> hdlr.setLevel(0) - ->>> garr = GarrulousFilter() - ->>> hdlr.addFilter(garr) - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) -BORING:root:This should only be seen at the '1' logging level (or lower) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) 
-TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) 
-EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the 
'%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) -SOCIABLE:root:This should only be seen at the '6' logging level (or lower) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen 
at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) -TACITURN:root:This should only be seen at the '9' logging level (or lower) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> spec = VerySpecificFilter() - ->>> log.addFilter(spec) - ->>> log.setLevel(1) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) -BORING:root:This should only be seen at the '1' logging level (or lower) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This 
should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(2) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) -CHATTERBOX:root:This should only be seen at the '2' logging level (or lower) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(3) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(4) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) -TALKATIVE:root:This should only be seen at the '4' logging level (or lower) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This 
should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(5) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) +# +# Copyright 2001-2004 by Vinay Sajip. All Rights Reserved. +# +# Permission to use, copy, modify, and distribute this software and its +# documentation for any purpose and without fee is hereby granted, +# provided that the above copyright notice appear in all copies and that +# both that copyright notice and this permission notice appear in +# supporting documentation, and that the name of Vinay Sajip +# not be used in advertising or publicity pertaining to distribution +# of the software without specific, written prior permission. +# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING +# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR +# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER +# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT +# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) +"""Test harness for the logging module. Run all tests. 
->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) -VERBOSE:root:This should only be seen at the '5' logging level (or lower) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(6) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(7) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) -EFFUSIVE:root:This should only be seen at the '7' logging level (or lower) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(8) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the 
'%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) -TERSE:root:This should only be seen at the '8' logging level (or lower) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(9) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(10) - ->>> log.log(1, "This should only be seen at the '%s' logging level (or lower)", BORING) - ->>> log.log(2, "This should only be seen at the '%s' logging level (or lower)", CHATTERBOX) - ->>> log.log(3, "This should only be seen at the '%s' logging level (or lower)", GARRULOUS) - ->>> log.log(4, "This should only be seen at the '%s' logging level (or lower)", TALKATIVE) - ->>> log.log(5, "This should only be seen at the '%s' logging level (or lower)", VERBOSE) - ->>> log.log(6, "This should only be seen at the '%s' logging level (or lower)", SOCIABLE) - ->>> log.log(7, "This should only be seen at the '%s' logging level (or lower)", EFFUSIVE) - ->>> log.log(8, "This should only be seen at the '%s' logging level (or lower)", TERSE) - ->>> log.log(9, "This should only be seen at the '%s' logging level (or lower)", TACITURN) - ->>> log.log(10, "This should only be seen at the '%s' logging level (or lower)", SILENT) -SILENT:root:This should only be seen at the '10' logging level (or lower) - ->>> log.setLevel(0) - ->>> log.removeFilter(spec) - ->>> hdlr.removeFilter(garr) - ->>> logging.addLevelName(logging.DEBUG, "DEBUG") - - -Test 2 -====== -Test memory handlers. These are basically buffers for log messages: they take so many messages, and then print them all. 
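The deleted "Test 2" text above describes the memory handler contract; in concrete terms that is logging.handlers.MemoryHandler, which buffers records until either its capacity is reached or a record at or above its flush level arrives, and then forwards the whole buffer to a target handler. A minimal standalone sketch (logger name, capacity and format are illustrative):

import logging
import logging.handlers
import sys

target = logging.StreamHandler(sys.stdout)
target.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

# Buffer up to 10 records; flush as soon as a WARNING-or-higher record arrives.
mem = logging.handlers.MemoryHandler(10, logging.WARNING, target)

logger = logging.getLogger("memdemo")
logger.setLevel(logging.DEBUG)
logger.addHandler(mem)

logger.debug("buffered")          # nothing is written yet
logger.info("still buffered")     # nothing is written yet
logger.warning("flush trigger")   # all three records are written now

mem.close()                       # close() flushes anything still buffered
logger.removeHandler(mem)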
- ->>> import logging.handlers - ->>> sys.stderr = sys.stdout ->>> logger = logging.getLogger("") ->>> sh = logger.handlers[0] ->>> sh.close() ->>> logger.removeHandler(sh) ->>> mh = logging.handlers.MemoryHandler(10,logging.WARNING, sh) ->>> logger.setLevel(logging.DEBUG) ->>> logger.addHandler(mh) - ->>> logger.debug("Debug message") - --- logging at INFO, nothing should be seen yet -- - ->>> logger.info("Info message") - --- logging at WARNING, 3 messages should be seen -- - ->>> logger.warn("Warn message") -DEBUG:root:Debug message -INFO:root:Info message -WARNING:root:Warn message - ->>> logger.info("Info index = 0") - ->>> logger.info("Info index = 1") - ->>> logger.info("Info index = 2") - ->>> logger.info("Info index = 3") - ->>> logger.info("Info index = 4") - ->>> logger.info("Info index = 5") - ->>> logger.info("Info index = 6") - ->>> logger.info("Info index = 7") - ->>> logger.info("Info index = 8") - ->>> logger.info("Info index = 9") -INFO:root:Info index = 0 -INFO:root:Info index = 1 -INFO:root:Info index = 2 -INFO:root:Info index = 3 -INFO:root:Info index = 4 -INFO:root:Info index = 5 -INFO:root:Info index = 6 -INFO:root:Info index = 7 -INFO:root:Info index = 8 -INFO:root:Info index = 9 - ->>> logger.info("Info index = 10") - ->>> logger.info("Info index = 11") - ->>> logger.info("Info index = 12") - ->>> logger.info("Info index = 13") - ->>> logger.info("Info index = 14") - ->>> logger.info("Info index = 15") - ->>> logger.info("Info index = 16") - ->>> logger.info("Info index = 17") - ->>> logger.info("Info index = 18") - ->>> logger.info("Info index = 19") -INFO:root:Info index = 10 -INFO:root:Info index = 11 -INFO:root:Info index = 12 -INFO:root:Info index = 13 -INFO:root:Info index = 14 -INFO:root:Info index = 15 -INFO:root:Info index = 16 -INFO:root:Info index = 17 -INFO:root:Info index = 18 -INFO:root:Info index = 19 - ->>> logger.info("Info index = 20") - ->>> logger.info("Info index = 21") - ->>> logger.info("Info index = 22") - ->>> logger.info("Info index = 23") - ->>> logger.info("Info index = 24") - ->>> logger.info("Info index = 25") - ->>> logger.info("Info index = 26") - ->>> logger.info("Info index = 27") - ->>> logger.info("Info index = 28") - ->>> logger.info("Info index = 29") -INFO:root:Info index = 20 -INFO:root:Info index = 21 -INFO:root:Info index = 22 -INFO:root:Info index = 23 -INFO:root:Info index = 24 -INFO:root:Info index = 25 -INFO:root:Info index = 26 -INFO:root:Info index = 27 -INFO:root:Info index = 28 -INFO:root:Info index = 29 - ->>> logger.info("Info index = 30") - ->>> logger.info("Info index = 31") - ->>> logger.info("Info index = 32") - ->>> logger.info("Info index = 33") - ->>> logger.info("Info index = 34") - ->>> logger.info("Info index = 35") - ->>> logger.info("Info index = 36") - ->>> logger.info("Info index = 37") - ->>> logger.info("Info index = 38") - ->>> logger.info("Info index = 39") -INFO:root:Info index = 30 -INFO:root:Info index = 31 -INFO:root:Info index = 32 -INFO:root:Info index = 33 -INFO:root:Info index = 34 -INFO:root:Info index = 35 -INFO:root:Info index = 36 -INFO:root:Info index = 37 -INFO:root:Info index = 38 -INFO:root:Info index = 39 - ->>> logger.info("Info index = 40") - ->>> logger.info("Info index = 41") - ->>> logger.info("Info index = 42") - ->>> logger.info("Info index = 43") - ->>> logger.info("Info index = 44") - ->>> logger.info("Info index = 45") - ->>> logger.info("Info index = 46") - ->>> logger.info("Info index = 47") - ->>> logger.info("Info index = 48") - ->>> logger.info("Info index = 49") 
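Elsewhere in this patch (the deleted "Test 3" transcript and the new BasicFilterTest) the name-prefix semantics of logging.Filter are exercised: a Filter("a.b") passes records logged through "a.b" or any of its descendants and rejects everything else. A minimal standalone sketch, with illustrative logger names:

import logging
import sys

logging.basicConfig(stream=sys.stdout,
                    format="%(levelname)s:%(name)s:%(message)s")
root = logging.getLogger()
root.setLevel(logging.DEBUG)
handler = root.handlers[0]
handler.addFilter(logging.Filter("a.b"))

logging.getLogger("a").info("dropped")       # parent, not a descendant
logging.getLogger("a.b").info("passed")
logging.getLogger("a.b.c").info("passed")
logging.getLogger("a.bb").info("dropped")    # "a.bb" is not under "a.b"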
-INFO:root:Info index = 40 -INFO:root:Info index = 41 -INFO:root:Info index = 42 -INFO:root:Info index = 43 -INFO:root:Info index = 44 -INFO:root:Info index = 45 -INFO:root:Info index = 46 -INFO:root:Info index = 47 -INFO:root:Info index = 48 -INFO:root:Info index = 49 - ->>> logger.info("Info index = 50") - ->>> logger.info("Info index = 51") - ->>> logger.info("Info index = 52") - ->>> logger.info("Info index = 53") - ->>> logger.info("Info index = 54") - ->>> logger.info("Info index = 55") - ->>> logger.info("Info index = 56") - ->>> logger.info("Info index = 57") - ->>> logger.info("Info index = 58") - ->>> logger.info("Info index = 59") -INFO:root:Info index = 50 -INFO:root:Info index = 51 -INFO:root:Info index = 52 -INFO:root:Info index = 53 -INFO:root:Info index = 54 -INFO:root:Info index = 55 -INFO:root:Info index = 56 -INFO:root:Info index = 57 -INFO:root:Info index = 58 -INFO:root:Info index = 59 - ->>> logger.info("Info index = 60") - ->>> logger.info("Info index = 61") - ->>> logger.info("Info index = 62") - ->>> logger.info("Info index = 63") - ->>> logger.info("Info index = 64") - ->>> logger.info("Info index = 65") - ->>> logger.info("Info index = 66") - ->>> logger.info("Info index = 67") - ->>> logger.info("Info index = 68") - ->>> logger.info("Info index = 69") -INFO:root:Info index = 60 -INFO:root:Info index = 61 -INFO:root:Info index = 62 -INFO:root:Info index = 63 -INFO:root:Info index = 64 -INFO:root:Info index = 65 -INFO:root:Info index = 66 -INFO:root:Info index = 67 -INFO:root:Info index = 68 -INFO:root:Info index = 69 - ->>> logger.info("Info index = 70") - ->>> logger.info("Info index = 71") - ->>> logger.info("Info index = 72") - ->>> logger.info("Info index = 73") - ->>> logger.info("Info index = 74") - ->>> logger.info("Info index = 75") - ->>> logger.info("Info index = 76") - ->>> logger.info("Info index = 77") - ->>> logger.info("Info index = 78") - ->>> logger.info("Info index = 79") -INFO:root:Info index = 70 -INFO:root:Info index = 71 -INFO:root:Info index = 72 -INFO:root:Info index = 73 -INFO:root:Info index = 74 -INFO:root:Info index = 75 -INFO:root:Info index = 76 -INFO:root:Info index = 77 -INFO:root:Info index = 78 -INFO:root:Info index = 79 - ->>> logger.info("Info index = 80") - ->>> logger.info("Info index = 81") - ->>> logger.info("Info index = 82") - ->>> logger.info("Info index = 83") - ->>> logger.info("Info index = 84") - ->>> logger.info("Info index = 85") - ->>> logger.info("Info index = 86") - ->>> logger.info("Info index = 87") - ->>> logger.info("Info index = 88") - ->>> logger.info("Info index = 89") -INFO:root:Info index = 80 -INFO:root:Info index = 81 -INFO:root:Info index = 82 -INFO:root:Info index = 83 -INFO:root:Info index = 84 -INFO:root:Info index = 85 -INFO:root:Info index = 86 -INFO:root:Info index = 87 -INFO:root:Info index = 88 -INFO:root:Info index = 89 - ->>> logger.info("Info index = 90") - ->>> logger.info("Info index = 91") - ->>> logger.info("Info index = 92") - ->>> logger.info("Info index = 93") - ->>> logger.info("Info index = 94") - ->>> logger.info("Info index = 95") - ->>> logger.info("Info index = 96") - ->>> logger.info("Info index = 97") - ->>> logger.info("Info index = 98") - ->>> logger.info("Info index = 99") -INFO:root:Info index = 90 -INFO:root:Info index = 91 -INFO:root:Info index = 92 -INFO:root:Info index = 93 -INFO:root:Info index = 94 -INFO:root:Info index = 95 -INFO:root:Info index = 96 -INFO:root:Info index = 97 -INFO:root:Info index = 98 -INFO:root:Info index = 99 - ->>> logger.info("Info index = 
100") - ->>> logger.info("Info index = 101") - ->>> mh.close() -INFO:root:Info index = 100 -INFO:root:Info index = 101 - ->>> logger.removeHandler(mh) ->>> logger.addHandler(sh) - - - -Test 3 -====== - ->>> import sys, logging ->>> sys.stderr = sys ->>> logging.basicConfig() ->>> FILTER = "a.b" ->>> root = logging.getLogger() ->>> root.setLevel(logging.DEBUG) ->>> hand = root.handlers[0] - ->>> logging.getLogger("a").info("Info 1") -INFO:a:Info 1 - ->>> logging.getLogger("a.b").info("Info 2") -INFO:a.b:Info 2 - ->>> logging.getLogger("a.c").info("Info 3") -INFO:a.c:Info 3 - ->>> logging.getLogger("a.b.c").info("Info 4") -INFO:a.b.c:Info 4 - ->>> logging.getLogger("a.b.c.d").info("Info 5") -INFO:a.b.c.d:Info 5 - ->>> logging.getLogger("a.bb.c").info("Info 6") -INFO:a.bb.c:Info 6 - ->>> logging.getLogger("b").info("Info 7") -INFO:b:Info 7 - ->>> logging.getLogger("b.a").info("Info 8") -INFO:b.a:Info 8 - ->>> logging.getLogger("c.a.b").info("Info 9") -INFO:c.a.b:Info 9 - ->>> logging.getLogger("a.bb").info("Info 10") -INFO:a.bb:Info 10 - -Filtered with 'a.b'... - ->>> filt = logging.Filter(FILTER) - ->>> hand.addFilter(filt) - ->>> logging.getLogger("a").info("Info 1") - ->>> logging.getLogger("a.b").info("Info 2") -INFO:a.b:Info 2 - ->>> logging.getLogger("a.c").info("Info 3") - ->>> logging.getLogger("a.b.c").info("Info 4") -INFO:a.b.c:Info 4 +Copyright (C) 2001-2002 Vinay Sajip. All Rights Reserved. +""" ->>> logging.getLogger("a.b.c.d").info("Info 5") -INFO:a.b.c.d:Info 5 +import logging +import logging.handlers +import logging.config + +import copy +import cPickle +import cStringIO +import gc +import os +import re +import select +import socket +from SocketServer import ThreadingTCPServer, StreamRequestHandler +import string +import struct +import sys +import tempfile +from test.test_support import captured_stdout, run_with_locale, run_unittest +import textwrap +import threading +import time +import types +import unittest +import weakref + + +class BaseTest(unittest.TestCase): + + """Base class for logging tests.""" + + log_format = "%(name)s -> %(levelname)s: %(message)s" + expected_log_pat = r"^([\w.]+) -> ([\w]+): ([\d]+)$" + message_num = 0 + + def setUp(self): + """Setup the default logging stream to an internal StringIO instance, + so that we can examine log output as we want.""" + logger_dict = logging.getLogger().manager.loggerDict + logging._acquireLock() + try: + self.saved_handlers = logging._handlers.copy() + self.saved_handler_list = logging._handlerList[:] + self.saved_loggers = logger_dict.copy() + self.saved_level_names = logging._levelNames.copy() + finally: + logging._releaseLock() ->>> logging.getLogger("a.bb.c").info("Info 6") + self.root_logger = logging.getLogger("") + self.original_logging_level = self.root_logger.getEffectiveLevel() ->>> logging.getLogger("b").info("Info 7") + self.stream = cStringIO.StringIO() + self.root_logger.setLevel(logging.DEBUG) + self.root_hdlr = logging.StreamHandler(self.stream) + self.root_formatter = logging.Formatter(self.log_format) + self.root_hdlr.setFormatter(self.root_formatter) + self.root_logger.addHandler(self.root_hdlr) + + def tearDown(self): + """Remove our logging stream, and restore the original logging + level.""" + self.stream.close() + self.root_logger.removeHandler(self.root_hdlr) + self.root_logger.setLevel(self.original_logging_level) + logging._acquireLock() + try: + logging._levelNames.clear() + logging._levelNames.update(self.saved_level_names) + logging._handlers.clear() + 
logging._handlers.update(self.saved_handlers) + logging._handlerList[:] = self.saved_handler_list + loggerDict = logging.getLogger().manager.loggerDict + loggerDict.clear() + loggerDict.update(self.saved_loggers) + finally: + logging._releaseLock() ->>> logging.getLogger("b.a").info("Info 8") + def assert_log_lines(self, expected_values, stream=None): + """Match the collected log lines against the regular expression + self.expected_log_pat, and compare the extracted group values to + the expected_values list of tuples.""" + stream = stream or self.stream + pat = re.compile(self.expected_log_pat) + try: + stream.reset() + actual_lines = stream.readlines() + except AttributeError: + # StringIO.StringIO lacks a reset() method. + actual_lines = stream.getvalue().splitlines() + self.assertEquals(len(actual_lines), len(expected_values)) + for actual, expected in zip(actual_lines, expected_values): + match = pat.search(actual) + if not match: + self.fail("Log line does not match expected pattern:\n" + + actual) + self.assertEquals(tuple(match.groups()), expected) + s = stream.read() + if s: + self.fail("Remaining output at end of log stream:\n" + s) + + def next_message(self): + """Generate a message consisting solely of an auto-incrementing + integer.""" + self.message_num += 1 + return "%d" % self.message_num + + +class BuiltinLevelsTest(BaseTest): + """Test builtin levels and their inheritance.""" + + def test_flat(self): + #Logging levels in a flat logger namespace. + m = self.next_message + + ERR = logging.getLogger("ERR") + ERR.setLevel(logging.ERROR) + INF = logging.getLogger("INF") + INF.setLevel(logging.INFO) + DEB = logging.getLogger("DEB") + DEB.setLevel(logging.DEBUG) + + # These should log. + ERR.log(logging.CRITICAL, m()) + ERR.error(m()) + + INF.log(logging.CRITICAL, m()) + INF.error(m()) + INF.warn(m()) + INF.info(m()) + + DEB.log(logging.CRITICAL, m()) + DEB.error(m()) + DEB.warn (m()) + DEB.info (m()) + DEB.debug(m()) + + # These should not log. + ERR.warn(m()) + ERR.info(m()) + ERR.debug(m()) + + INF.debug(m()) + + self.assert_log_lines([ + ('ERR', 'CRITICAL', '1'), + ('ERR', 'ERROR', '2'), + ('INF', 'CRITICAL', '3'), + ('INF', 'ERROR', '4'), + ('INF', 'WARNING', '5'), + ('INF', 'INFO', '6'), + ('DEB', 'CRITICAL', '7'), + ('DEB', 'ERROR', '8'), + ('DEB', 'WARNING', '9'), + ('DEB', 'INFO', '10'), + ('DEB', 'DEBUG', '11'), + ]) + + def test_nested_explicit(self): + # Logging levels in a nested namespace, all explicitly set. + m = self.next_message + + INF = logging.getLogger("INF") + INF.setLevel(logging.INFO) + INF_ERR = logging.getLogger("INF.ERR") + INF_ERR.setLevel(logging.ERROR) + + # These should log. + INF_ERR.log(logging.CRITICAL, m()) + INF_ERR.error(m()) + + # These should not log. + INF_ERR.warn(m()) + INF_ERR.info(m()) + INF_ERR.debug(m()) + + self.assert_log_lines([ + ('INF.ERR', 'CRITICAL', '1'), + ('INF.ERR', 'ERROR', '2'), + ]) + + def test_nested_inherited(self): + #Logging levels in a nested namespace, inherited from parent loggers. + m = self.next_message + + INF = logging.getLogger("INF") + INF.setLevel(logging.INFO) + INF_ERR = logging.getLogger("INF.ERR") + INF_ERR.setLevel(logging.ERROR) + INF_UNDEF = logging.getLogger("INF.UNDEF") + INF_ERR_UNDEF = logging.getLogger("INF.ERR.UNDEF") + UNDEF = logging.getLogger("UNDEF") + + # These should log. + INF_UNDEF.log(logging.CRITICAL, m()) + INF_UNDEF.error(m()) + INF_UNDEF.warn(m()) + INF_UNDEF.info(m()) + INF_ERR_UNDEF.log(logging.CRITICAL, m()) + INF_ERR_UNDEF.error(m()) + + # These should not log. 
+ INF_UNDEF.debug(m()) + INF_ERR_UNDEF.warn(m()) + INF_ERR_UNDEF.info(m()) + INF_ERR_UNDEF.debug(m()) + + self.assert_log_lines([ + ('INF.UNDEF', 'CRITICAL', '1'), + ('INF.UNDEF', 'ERROR', '2'), + ('INF.UNDEF', 'WARNING', '3'), + ('INF.UNDEF', 'INFO', '4'), + ('INF.ERR.UNDEF', 'CRITICAL', '5'), + ('INF.ERR.UNDEF', 'ERROR', '6'), + ]) + + def test_nested_with_virtual_parent(self): + # Logging levels when some parent does not exist yet. + m = self.next_message + + INF = logging.getLogger("INF") + GRANDCHILD = logging.getLogger("INF.BADPARENT.UNDEF") + CHILD = logging.getLogger("INF.BADPARENT") + INF.setLevel(logging.INFO) + + # These should log. + GRANDCHILD.log(logging.FATAL, m()) + GRANDCHILD.info(m()) + CHILD.log(logging.FATAL, m()) + CHILD.info(m()) + + # These should not log. + GRANDCHILD.debug(m()) + CHILD.debug(m()) + + self.assert_log_lines([ + ('INF.BADPARENT.UNDEF', 'CRITICAL', '1'), + ('INF.BADPARENT.UNDEF', 'INFO', '2'), + ('INF.BADPARENT', 'CRITICAL', '3'), + ('INF.BADPARENT', 'INFO', '4'), + ]) + + +class BasicFilterTest(BaseTest): + + """Test the bundled Filter class.""" + + def test_filter(self): + # Only messages satisfying the specified criteria pass through the + # filter. + filter_ = logging.Filter("spam.eggs") + handler = self.root_logger.handlers[0] + try: + handler.addFilter(filter_) + spam = logging.getLogger("spam") + spam_eggs = logging.getLogger("spam.eggs") + spam_eggs_fish = logging.getLogger("spam.eggs.fish") + spam_bakedbeans = logging.getLogger("spam.bakedbeans") + + spam.info(self.next_message()) + spam_eggs.info(self.next_message()) # Good. + spam_eggs_fish.info(self.next_message()) # Good. + spam_bakedbeans.info(self.next_message()) + + self.assert_log_lines([ + ('spam.eggs', 'INFO', '2'), + ('spam.eggs.fish', 'INFO', '3'), + ]) + finally: + handler.removeFilter(filter_) ->>> logging.getLogger("c.a.b").info("Info 9") ->>> logging.getLogger("a.bb").info("Info 10") +# +# First, we define our levels. There can be as many as you want - the only +# limitations are that they should be integers, the lowest should be > 0 and +# larger values mean less information being logged. If you need specific +# level values which do not fit into these limitations, you can use a +# mapping dictionary to convert between your application levels and the +# logging system. +# +SILENT = 120 +TACITURN = 119 +TERSE = 118 +EFFUSIVE = 117 +SOCIABLE = 116 +VERBOSE = 115 +TALKATIVE = 114 +GARRULOUS = 113 +CHATTERBOX = 112 +BORING = 111 + +LEVEL_RANGE = range(BORING, SILENT + 1) + +# +# Next, we define names for our levels. You don't need to do this - in which +# case the system will use "Level n" to denote the text for the level. +# +my_logging_levels = { + SILENT : 'Silent', + TACITURN : 'Taciturn', + TERSE : 'Terse', + EFFUSIVE : 'Effusive', + SOCIABLE : 'Sociable', + VERBOSE : 'Verbose', + TALKATIVE : 'Talkative', + GARRULOUS : 'Garrulous', + CHATTERBOX : 'Chatterbox', + BORING : 'Boring', +} + +class GarrulousFilter(logging.Filter): + + """A filter which blocks garrulous messages.""" + + def filter(self, record): + return record.levelno != GARRULOUS + +class VerySpecificFilter(logging.Filter): + + """A filter which blocks sociable and taciturn messages.""" + + def filter(self, record): + return record.levelno not in [SOCIABLE, TACITURN] + + +class CustomLevelsAndFiltersTest(BaseTest): + + """Test various filtering possibilities with custom logging levels.""" + + # Skip the logger name group. 
+ expected_log_pat = r"^[\w.]+ -> ([\w]+): ([\d]+)$" + + def setUp(self): + BaseTest.setUp(self) + for k, v in my_logging_levels.items(): + logging.addLevelName(k, v) + + def log_at_all_levels(self, logger): + for lvl in LEVEL_RANGE: + logger.log(lvl, self.next_message()) + + def test_logger_filter(self): + # Filter at logger level. + self.root_logger.setLevel(VERBOSE) + # Levels >= 'Verbose' are good. + self.log_at_all_levels(self.root_logger) + self.assert_log_lines([ + ('Verbose', '5'), + ('Sociable', '6'), + ('Effusive', '7'), + ('Terse', '8'), + ('Taciturn', '9'), + ('Silent', '10'), + ]) + + def test_handler_filter(self): + # Filter at handler level. + self.root_logger.handlers[0].setLevel(SOCIABLE) + try: + # Levels >= 'Sociable' are good. + self.log_at_all_levels(self.root_logger) + self.assert_log_lines([ + ('Sociable', '6'), + ('Effusive', '7'), + ('Terse', '8'), + ('Taciturn', '9'), + ('Silent', '10'), + ]) + finally: + self.root_logger.handlers[0].setLevel(logging.NOTSET) ->>> hand.removeFilter(filt) + def test_specific_filters(self): + # Set a specific filter object on the handler, and then add another + # filter object on the logger itself. + handler = self.root_logger.handlers[0] + specific_filter = None + garr = GarrulousFilter() + handler.addFilter(garr) + try: + self.log_at_all_levels(self.root_logger) + first_lines = [ + # Notice how 'Garrulous' is missing + ('Boring', '1'), + ('Chatterbox', '2'), + ('Talkative', '4'), + ('Verbose', '5'), + ('Sociable', '6'), + ('Effusive', '7'), + ('Terse', '8'), + ('Taciturn', '9'), + ('Silent', '10'), + ] + self.assert_log_lines(first_lines) + + specific_filter = VerySpecificFilter() + self.root_logger.addFilter(specific_filter) + self.log_at_all_levels(self.root_logger) + self.assert_log_lines(first_lines + [ + # Not only 'Garrulous' is still missing, but also 'Sociable' + # and 'Taciturn' + ('Boring', '11'), + ('Chatterbox', '12'), + ('Talkative', '14'), + ('Verbose', '15'), + ('Effusive', '17'), + ('Terse', '18'), + ('Silent', '20'), + ]) + finally: + if specific_filter: + self.root_logger.removeFilter(specific_filter) + handler.removeFilter(garr) + + +class MemoryHandlerTest(BaseTest): + + """Tests for the MemoryHandler.""" + + # Do not bother with a logger name group. + expected_log_pat = r"^[\w.]+ -> ([\w]+): ([\d]+)$" + + def setUp(self): + BaseTest.setUp(self) + self.mem_hdlr = logging.handlers.MemoryHandler(10, logging.WARNING, + self.root_hdlr) + self.mem_logger = logging.getLogger('mem') + self.mem_logger.propagate = 0 + self.mem_logger.addHandler(self.mem_hdlr) + + def tearDown(self): + self.mem_hdlr.close() + + def test_flush(self): + # The memory handler flushes to its target handler based on specific + # criteria (message count and message level). + self.mem_logger.debug(self.next_message()) + self.assert_log_lines([]) + self.mem_logger.info(self.next_message()) + self.assert_log_lines([]) + # This will flush because the level is >= logging.WARNING + self.mem_logger.warn(self.next_message()) + lines = [ + ('DEBUG', '1'), + ('INFO', '2'), + ('WARNING', '3'), + ] + self.assert_log_lines(lines) + for n in (4, 14): + for i in range(9): + self.mem_logger.debug(self.next_message()) + self.assert_log_lines(lines) + # This will flush because it's the 10th message since the last + # flush. 
+ self.mem_logger.debug(self.next_message()) + lines = lines + [('DEBUG', str(i)) for i in range(n, n + 10)] + self.assert_log_lines(lines) + self.mem_logger.debug(self.next_message()) + self.assert_log_lines(lines) -Test 4 -====== ->>> import sys, logging, logging.handlers, string ->>> import tempfile, logging.config, os, test.test_support ->>> sys.stderr = sys.stdout ->>> from test_logging import config0, config1 +class ExceptionFormatter(logging.Formatter): + """A special exception formatter.""" + def formatException(self, ei): + return "Got a [%s]" % ei[0].__name__ -config2 has a subtle configuration error that should be reported ->>> config2 = string.replace(config1, "sys.stdout", "sys.stbout") -config3 has a less subtle configuration error ->>> config3 = string.replace(config1, "formatter=form1", "formatter=misspelled_name") +class ConfigFileTest(BaseTest): ->>> def test4(conf): -... loggerDict = logging.getLogger().manager.loggerDict -... logging._acquireLock() -... try: -... saved_handlers = logging._handlers.copy() -... saved_handler_list = logging._handlerList[:] -... saved_loggers = loggerDict.copy() -... finally: -... logging._releaseLock() -... try: -... fn = test.test_support.TESTFN -... f = open(fn, "w") -... f.write(conf) -... f.close() -... try: -... logging.config.fileConfig(fn) -... #call again to make sure cleanup is correct -... logging.config.fileConfig(fn) -... except: -... t = sys.exc_info()[0] -... message(str(t)) -... else: -... message('ok.') -... os.remove(fn) -... finally: -... logging._acquireLock() -... try: -... logging._handlers.clear() -... logging._handlers.update(saved_handlers) -... logging._handlerList[:] = saved_handler_list -... loggerDict = logging.getLogger().manager.loggerDict -... loggerDict.clear() -... loggerDict.update(saved_loggers) -... finally: -... logging._releaseLock() + """Reading logging config from a .ini-style config file.""" ->>> test4(config0) -ok. + expected_log_pat = r"^([\w]+) \+\+ ([\w]+)$" ->>> test4(config1) -ok. + # config0 is a standard configuration. + config0 = """ + [loggers] + keys=root ->>> test4(config2) - + [handlers] + keys=hand1 ->>> test4(config3) - + [formatters] + keys=form1 ->>> import test_logging ->>> test_logging.test5() -ERROR:root:just testing -... Don't panic! 
+ [logger_root] + level=WARNING + handlers=hand1 + [handler_hand1] + class=StreamHandler + level=NOTSET + formatter=form1 + args=(sys.stdout,) -Test Main -========= ->>> import select ->>> import os, sys, string, struct, types, cPickle, cStringIO ->>> import socket, tempfile, threading, time ->>> import logging, logging.handlers, logging.config ->>> import test_logging + [formatter_form1] + format=%(levelname)s ++ %(message)s + datefmt= + """ ->>> test_logging.test_main_inner() -ERR -> CRITICAL: Message 0 (via logrecv.tcp.ERR) -ERR -> ERROR: Message 1 (via logrecv.tcp.ERR) -INF -> CRITICAL: Message 2 (via logrecv.tcp.INF) -INF -> ERROR: Message 3 (via logrecv.tcp.INF) -INF -> WARNING: Message 4 (via logrecv.tcp.INF) -INF -> INFO: Message 5 (via logrecv.tcp.INF) -INF.UNDEF -> CRITICAL: Message 6 (via logrecv.tcp.INF.UNDEF) -INF.UNDEF -> ERROR: Message 7 (via logrecv.tcp.INF.UNDEF) -INF.UNDEF -> WARNING: Message 8 (via logrecv.tcp.INF.UNDEF) -INF.UNDEF -> INFO: Message 9 (via logrecv.tcp.INF.UNDEF) -INF.ERR -> CRITICAL: Message 10 (via logrecv.tcp.INF.ERR) -INF.ERR -> ERROR: Message 11 (via logrecv.tcp.INF.ERR) -INF.ERR.UNDEF -> CRITICAL: Message 12 (via logrecv.tcp.INF.ERR.UNDEF) -INF.ERR.UNDEF -> ERROR: Message 13 (via logrecv.tcp.INF.ERR.UNDEF) -DEB -> CRITICAL: Message 14 (via logrecv.tcp.DEB) -DEB -> ERROR: Message 15 (via logrecv.tcp.DEB) -DEB -> WARNING: Message 16 (via logrecv.tcp.DEB) -DEB -> INFO: Message 17 (via logrecv.tcp.DEB) -DEB -> DEBUG: Message 18 (via logrecv.tcp.DEB) -UNDEF -> CRITICAL: Message 19 (via logrecv.tcp.UNDEF) -UNDEF -> ERROR: Message 20 (via logrecv.tcp.UNDEF) -UNDEF -> WARNING: Message 21 (via logrecv.tcp.UNDEF) -UNDEF -> INFO: Message 22 (via logrecv.tcp.UNDEF) -INF.BADPARENT.UNDEF -> CRITICAL: Message 23 (via logrecv.tcp.INF.BADPARENT.UNDEF) -INF.BADPARENT -> CRITICAL: Message 24 (via logrecv.tcp.INF.BADPARENT) -INF -> INFO: Finish up, it's closing time. Messages should bear numbers 0 through 24. (via logrecv.tcp.INF) - -""" -import select -import os, sys, string, struct, types, cPickle, cStringIO -import socket, tempfile, threading, time -import logging, logging.handlers, logging.config, test.test_support + # config1 adds a little to the standard configuration. + config1 = """ + [loggers] + keys=root,parser + + [handlers] + keys=hand1 + + [formatters] + keys=form1 + + [logger_root] + level=WARNING + handlers= + + [logger_parser] + level=DEBUG + handlers=hand1 + propagate=1 + qualname=compiler.parser + + [handler_hand1] + class=StreamHandler + level=NOTSET + formatter=form1 + args=(sys.stdout,) + + [formatter_form1] + format=%(levelname)s ++ %(message)s + datefmt= + """ + # config2 has a subtle configuration error that should be reported + config2 = config1.replace("sys.stdout", "sys.stbout") -BANNER = "-- %-10s %-6s ---------------------------------------------------\n" + # config3 has a less subtle configuration error + config3 = config1.replace("formatter=form1", "formatter=misspelled_name") -FINISH_UP = "Finish up, it's closing time. Messages should bear numbers 0 through 24." 
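config0 through config3 above all pass through logging.config.fileConfig(), which reads an ini-style file naming loggers, handlers and formatters; the args= line can say sys.stdout because those values are evaluated with the logging module's namespace available. A minimal standalone round trip, whose temp-file handling only loosely mirrors apply_config():

import logging
import logging.config
import os
import tempfile

CONF = """\
[loggers]
keys=root

[handlers]
keys=hand1

[formatters]
keys=form1

[logger_root]
level=WARNING
handlers=hand1

[handler_hand1]
class=StreamHandler
level=NOTSET
formatter=form1
args=(sys.stdout,)

[formatter_form1]
format=%(levelname)s ++ %(message)s
datefmt=
"""

fd, fn = tempfile.mkstemp(".ini")
os.close(fd)
try:
    f = open(fn, "w")
    f.write(CONF)
    f.close()
    logging.config.fileConfig(fn)      # raises if the file is malformed
    logging.getLogger().error("routed through hand1 to sys.stdout")
finally:
    os.remove(fn)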
-#---------------------------------------------------------------------------- -# Test 0 -#---------------------------------------------------------------------------- + # config4 specifies a custom formatter class to be loaded + config4 = """ + [loggers] + keys=root + + [handlers] + keys=hand1 + + [formatters] + keys=form1 + + [logger_root] + level=NOTSET + handlers=hand1 + + [handler_hand1] + class=StreamHandler + level=NOTSET + formatter=form1 + args=(sys.stdout,) + + [formatter_form1] + class=""" + __name__ + """.ExceptionFormatter + format=%(levelname)s:%(name)s:%(message)s + datefmt= + """ -msgcount = 0 + def apply_config(self, conf): + try: + fn = tempfile.mktemp(".ini") + f = open(fn, "w") + f.write(textwrap.dedent(conf)) + f.close() + logging.config.fileConfig(fn) + finally: + os.remove(fn) -def nextmessage(): - global msgcount - rv = "Message %d" % msgcount - msgcount = msgcount + 1 - return rv + def test_config0_ok(self): + # A simple config file which overrides the default settings. + with captured_stdout() as output: + self.apply_config(self.config0) + logger = logging.getLogger() + # Won't output anything + logger.info(self.next_message()) + # Outputs a message + logger.error(self.next_message()) + self.assert_log_lines([ + ('ERROR', '2'), + ], stream=output) + # Original logger output is empty. + self.assert_log_lines([]) + + def test_config1_ok(self): + # A config file defining a sub-parser as well. + with captured_stdout() as output: + self.apply_config(self.config1) + logger = logging.getLogger("compiler.parser") + # Both will output a message + logger.info(self.next_message()) + logger.error(self.next_message()) + self.assert_log_lines([ + ('INFO', '1'), + ('ERROR', '2'), + ], stream=output) + # Original logger output is empty. + self.assert_log_lines([]) + + def test_config2_failure(self): + # A simple config file which overrides the default settings. + self.assertRaises(StandardError, self.apply_config, self.config2) + + def test_config3_failure(self): + # A simple config file which overrides the default settings. + self.assertRaises(StandardError, self.apply_config, self.config3) + + def test_config4_ok(self): + # A config file specifying a custom formatter class. + with captured_stdout() as output: + self.apply_config(self.config4) + logger = logging.getLogger() + try: + raise RuntimeError() + except RuntimeError: + logging.exception("just testing") + sys.stdout.seek(0) + self.assertEquals(output.getvalue(), + "ERROR:root:just testing\nGot a [RuntimeError]\n") + # Original logger output is empty + self.assert_log_lines([]) -#---------------------------------------------------------------------------- -# Log receiver -#---------------------------------------------------------------------------- -TIMEOUT = 10 +class LogRecordStreamHandler(StreamRequestHandler): -from SocketServer import ThreadingTCPServer, StreamRequestHandler + """Handler for a streaming logging request. It saves the log message in the + TCP server's 'log_output' attribute.""" -class LogRecordStreamHandler(StreamRequestHandler): - """ - Handler for a streaming logging request. It basically logs the record - using whatever logging policy is configured locally. - """ + TCP_LOG_END = "!!!END!!!" def handle(self): - """ - Handle multiple requests - each expected to be a 4-byte length, + """Handle multiple requests - each expected to be of 4-byte length, followed by the LogRecord in pickle format. Logs the record - according to whatever policy is configured locally. 
- """ - while 1: - try: - chunk = self.connection.recv(4) - if len(chunk) < 4: - break - slen = struct.unpack(">L", chunk)[0] - chunk = self.connection.recv(slen) - while len(chunk) < slen: - chunk = chunk + self.connection.recv(slen - len(chunk)) - obj = self.unPickle(chunk) - record = logging.makeLogRecord(obj) - self.handleLogRecord(record) - except: - raise + according to whatever policy is configured locally.""" + while True: + chunk = self.connection.recv(4) + if len(chunk) < 4: + break + slen = struct.unpack(">L", chunk)[0] + chunk = self.connection.recv(slen) + while len(chunk) < slen: + chunk = chunk + self.connection.recv(slen - len(chunk)) + obj = self.unpickle(chunk) + record = logging.makeLogRecord(obj) + self.handle_log_record(record) - def unPickle(self, data): + def unpickle(self, data): return cPickle.loads(data) - def handleLogRecord(self, record): - logname = "logrecv.tcp." + record.name - #If the end-of-messages sentinel is seen, tell the server to terminate - if record.msg == FINISH_UP: + def handle_log_record(self, record): + # If the end-of-messages sentinel is seen, tell the server to + # terminate. + if self.TCP_LOG_END in record.msg: self.server.abort = 1 - record.msg = record.msg + " (via " + logname + ")" - logger = logging.getLogger("logrecv") - logger.handle(record) - -# The server sets socketDataProcessed when it's done. -socketDataProcessed = threading.Event() -#---------------------------------------------------------------------------- -# Test 5 -#---------------------------------------------------------------------------- - -test5_config = """ -[loggers] -keys=root - -[handlers] -keys=hand1 - -[formatters] -keys=form1 - -[logger_root] -level=NOTSET -handlers=hand1 - -[handler_hand1] -class=StreamHandler -level=NOTSET -formatter=form1 -args=(sys.stdout,) - -[formatter_form1] -class=test.test_logging.FriendlyFormatter -format=%(levelname)s:%(name)s:%(message)s -datefmt= -""" - -class FriendlyFormatter (logging.Formatter): - def formatException(self, ei): - return "%s... Don't panic!" % str(ei[0]) - - -def test5(): - loggerDict = logging.getLogger().manager.loggerDict - logging._acquireLock() - try: - saved_handlers = logging._handlers.copy() - saved_handler_list = logging._handlerList[:] - saved_loggers = loggerDict.copy() - finally: - logging._releaseLock() - try: - fn = test.test_support.TESTFN - f = open(fn, "w") - f.write(test5_config) - f.close() - logging.config.fileConfig(fn) - try: - raise KeyError - except KeyError: - logging.exception("just testing") - os.remove(fn) - hdlr = logging.getLogger().handlers[0] - logging.getLogger().handlers.remove(hdlr) - finally: - logging._acquireLock() - try: - logging._handlers.clear() - logging._handlers.update(saved_handlers) - logging._handlerList[:] = saved_handler_list - loggerDict = logging.getLogger().manager.loggerDict - loggerDict.clear() - loggerDict.update(saved_loggers) - finally: - logging._releaseLock() + return + self.server.log_output += record.msg + "\n" class LogRecordSocketReceiver(ThreadingTCPServer): - """ - A simple-minded TCP socket-based logging receiver suitable for test - purposes. 
- """ + + """A simple-minded TCP socket-based logging receiver suitable for test + purposes.""" allow_reuse_address = 1 + log_output = "" def __init__(self, host='localhost', port=logging.handlers.DEFAULT_TCP_LOGGING_PORT, handler=LogRecordStreamHandler): ThreadingTCPServer.__init__(self, (host, port), handler) - self.abort = 0 - self.timeout = 1 + self.abort = False + self.timeout = 0.1 + self.finished = threading.Event() def serve_until_stopped(self): while not self.abort: @@ -2018,227 +704,119 @@ self.timeout) if rd: self.handle_request() + # Notify the main thread that we're about to exit + self.finished.set() # close the listen socket self.server_close() - def process_request(self, request, client_address): - #import threading - t = threading.Thread(target = self.finish_request, - args = (request, client_address)) - t.start() - -def runTCP(tcpserver): - tcpserver.serve_until_stopped() - -def banner(nm, typ): - sep = BANNER % (nm, typ) - sys.stdout.write(sep) - sys.stdout.flush() - -def test0(): - ERR = logging.getLogger("ERR") - ERR.setLevel(logging.ERROR) - INF = logging.getLogger("INF") - INF.setLevel(logging.INFO) - INF_ERR = logging.getLogger("INF.ERR") - INF_ERR.setLevel(logging.ERROR) - DEB = logging.getLogger("DEB") - DEB.setLevel(logging.DEBUG) - - INF_UNDEF = logging.getLogger("INF.UNDEF") - INF_ERR_UNDEF = logging.getLogger("INF.ERR.UNDEF") - UNDEF = logging.getLogger("UNDEF") - - GRANDCHILD = logging.getLogger("INF.BADPARENT.UNDEF") - CHILD = logging.getLogger("INF.BADPARENT") - - #These should log - ERR.log(logging.FATAL, nextmessage()) - ERR.error(nextmessage()) - - INF.log(logging.FATAL, nextmessage()) - INF.error(nextmessage()) - INF.warn(nextmessage()) - INF.info(nextmessage()) - - INF_UNDEF.log(logging.FATAL, nextmessage()) - INF_UNDEF.error(nextmessage()) - INF_UNDEF.warn (nextmessage()) - INF_UNDEF.info (nextmessage()) - - INF_ERR.log(logging.FATAL, nextmessage()) - INF_ERR.error(nextmessage()) - - INF_ERR_UNDEF.log(logging.FATAL, nextmessage()) - INF_ERR_UNDEF.error(nextmessage()) - - DEB.log(logging.FATAL, nextmessage()) - DEB.error(nextmessage()) - DEB.warn (nextmessage()) - DEB.info (nextmessage()) - DEB.debug(nextmessage()) - - UNDEF.log(logging.FATAL, nextmessage()) - UNDEF.error(nextmessage()) - UNDEF.warn (nextmessage()) - UNDEF.info (nextmessage()) - - GRANDCHILD.log(logging.FATAL, nextmessage()) - CHILD.log(logging.FATAL, nextmessage()) - - #These should not log - ERR.warn(nextmessage()) - ERR.info(nextmessage()) - ERR.debug(nextmessage()) - - INF.debug(nextmessage()) - INF_UNDEF.debug(nextmessage()) - - INF_ERR.warn(nextmessage()) - INF_ERR.info(nextmessage()) - INF_ERR.debug(nextmessage()) - INF_ERR_UNDEF.warn(nextmessage()) - INF_ERR_UNDEF.info(nextmessage()) - INF_ERR_UNDEF.debug(nextmessage()) - - INF.info(FINISH_UP) - -def test_main_inner(): - rootLogger = logging.getLogger("") - rootLogger.setLevel(logging.DEBUG) - - # Find an unused port number - port = logging.handlers.DEFAULT_TCP_LOGGING_PORT - while port < logging.handlers.DEFAULT_TCP_LOGGING_PORT+100: - try: - tcpserver = LogRecordSocketReceiver(port=port) - except socket.error: - port += 1 - else: - break - else: - raise ImportError, "Could not find unused port" - - - #Set up a handler such that all events are sent via a socket to the log - #receiver (logrecv). 
- #The handler will only be added to the rootLogger for some of the tests - shdlr = logging.handlers.SocketHandler('localhost', port) - rootLogger.addHandler(shdlr) - - #Configure the logger for logrecv so events do not propagate beyond it. - #The sockLogger output is buffered in memory until the end of the test, - #and printed at the end. - sockOut = cStringIO.StringIO() - sockLogger = logging.getLogger("logrecv") - sockLogger.setLevel(logging.DEBUG) - sockhdlr = logging.StreamHandler(sockOut) - sockhdlr.setFormatter(logging.Formatter( - "%(name)s -> %(levelname)s: %(message)s")) - sockLogger.addHandler(sockhdlr) - sockLogger.propagate = 0 - - #Set up servers - threads = [] - #sys.stdout.write("About to start TCP server...\n") - threads.append(threading.Thread(target=runTCP, args=(tcpserver,))) - - for thread in threads: - thread.start() - try: - test0() - - # XXX(nnorwitz): Try to fix timing related test failures. - # This sleep gives us some extra time to read messages. - # The test generally only fails on Solaris without this sleep. - #time.sleep(2.0) - shdlr.close() - rootLogger.removeHandler(shdlr) - - finally: - #wait for TCP receiver to terminate -# socketDataProcessed.wait() - # ensure the server dies - tcpserver.abort = 1 - for thread in threads: - thread.join(2.0) - print(sockOut.getvalue()) - sockOut.close() - sockLogger.removeHandler(sockhdlr) - sockhdlr.close() - sys.stdout.flush() - -# config0 is a standard configuration. -config0 = """ -[loggers] -keys=root - -[handlers] -keys=hand1 - -[formatters] -keys=form1 - -[logger_root] -level=NOTSET -handlers=hand1 - -[handler_hand1] -class=StreamHandler -level=NOTSET -formatter=form1 -args=(sys.stdout,) - -[formatter_form1] -format=%(levelname)s:%(name)s:%(message)s -datefmt= -""" -# config1 adds a little to the standard configuration. 
-config1 = """ -[loggers] -keys=root,parser - -[handlers] -keys=hand1 - -[formatters] -keys=form1 - -[logger_root] -level=NOTSET -handlers=hand1 - -[logger_parser] -level=DEBUG -handlers=hand1 -propagate=1 -qualname=compiler.parser - -[handler_hand1] -class=StreamHandler -level=NOTSET -formatter=form1 -args=(sys.stdout,) - -[formatter_form1] -format=%(levelname)s:%(name)s:%(message)s -datefmt= -""" +class SocketHandlerTest(BaseTest): -def message(s): - sys.stdout.write("%s\n" % s) + """Test for SocketHandler objects.""" -# config2 has a subtle configuration error that should be reported -config2 = string.replace(config1, "sys.stdout", "sys.stbout") + def setUp(self): + """Set up a TCP server to receive log messages, and a SocketHandler + pointing to that server's address and port.""" + BaseTest.setUp(self) + self.tcpserver = LogRecordSocketReceiver(port=0) + self.port = self.tcpserver.socket.getsockname()[1] + self.threads = [ + threading.Thread(target=self.tcpserver.serve_until_stopped)] + for thread in self.threads: + thread.start() + + self.sock_hdlr = logging.handlers.SocketHandler('localhost', self.port) + self.sock_hdlr.setFormatter(self.root_formatter) + self.root_logger.removeHandler(self.root_logger.handlers[0]) + self.root_logger.addHandler(self.sock_hdlr) -# config3 has a less subtle configuration error -config3 = string.replace( - config1, "formatter=form1", "formatter=misspelled_name") + def tearDown(self): + """Shutdown the TCP server.""" + try: + self.tcpserver.abort = True + del self.tcpserver + self.root_logger.removeHandler(self.sock_hdlr) + self.sock_hdlr.close() + for thread in self.threads: + thread.join(2.0) + finally: + BaseTest.tearDown(self) + def get_output(self): + """Get the log output as received by the TCP server.""" + # Signal the TCP receiver and wait for it to terminate. + self.root_logger.critical(LogRecordStreamHandler.TCP_LOG_END) + self.tcpserver.finished.wait(2.0) + return self.tcpserver.log_output + + def test_output(self): + # The log message sent to the SocketHandler is properly received. + logger = logging.getLogger("tcp") + logger.error("spam") + logger.debug("eggs") + self.assertEquals(self.get_output(), "spam\neggs\n") + + +class MemoryTest(BaseTest): + + """Test memory persistence of logger objects.""" + + def setUp(self): + """Create a dict to remember potentially destroyed objects.""" + BaseTest.setUp(self) + self._survivors = {} + + def _watch_for_survival(self, *args): + """Watch the given objects for survival, by creating weakrefs to + them.""" + for obj in args: + key = id(obj), repr(obj) + self._survivors[key] = weakref.ref(obj) + + def _assert_survival(self): + """Assert that all objects watched for survival have survived.""" + # Trigger cycle breaking. + gc.collect() + dead = [] + for (id_, repr_), ref in self._survivors.items(): + if ref() is None: + dead.append(repr_) + if dead: + self.fail("%d objects should have survived " + "but have been destroyed: %s" % (len(dead), ", ".join(dead))) + + def test_persistent_loggers(self): + # Logger objects are persistent and retain their configuration, even + # if visible references are destroyed. + self.root_logger.setLevel(logging.INFO) + foo = logging.getLogger("foo") + self._watch_for_survival(foo) + foo.setLevel(logging.DEBUG) + self.root_logger.debug(self.next_message()) + foo.debug(self.next_message()) + self.assert_log_lines([ + ('foo', 'DEBUG', '2'), + ]) + del foo + # foo has survived. + self._assert_survival() + # foo has retained its settings. 
+ bar = logging.getLogger("foo") + bar.debug(self.next_message()) + self.assert_log_lines([ + ('foo', 'DEBUG', '2'), + ('foo', 'DEBUG', '3'), + ]) + + +# Set the locale to the platform-dependent default. I have no idea +# why the test does this, but in any case we save the current locale +# first and restore it at the end. + at run_with_locale('LC_ALL', '') def test_main(): - from test import test_support, test_logging - test_support.run_doctest(test_logging) + run_unittest(BuiltinLevelsTest, BasicFilterTest, + CustomLevelsAndFiltersTest, MemoryHandlerTest, + ConfigFileTest, SocketHandlerTest, MemoryTest) -if __name__=="__main__": +if __name__ == "__main__": test_main() Modified: python/branches/libffi3-branch/Lib/test/test_minidom.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_minidom.py (original) +++ python/branches/libffi3-branch/Lib/test/test_minidom.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,6 @@ import os import sys import pickle -import traceback from StringIO import StringIO from test.test_support import verbose, run_unittest, TestSkipped import unittest @@ -791,6 +790,14 @@ "testNormalize -- single empty node removed") doc.unlink() + def testBug1433694(self): + doc = parseString("t") + node = doc.documentElement + node.childNodes[1].nodeValue = "" + node.normalize() + self.confirm(node.childNodes[-1].nextSibling == None, + "Final child's .nextSibling should be None") + def testSiblings(self): doc = parseString("text?") root = doc.documentElement Modified: python/branches/libffi3-branch/Lib/test/test_module.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_module.py (original) +++ python/branches/libffi3-branch/Lib/test/test_module.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ # Test the module type import unittest -from test.test_support import verbose, run_unittest +from test.test_support import run_unittest import sys ModuleType = type(sys) Modified: python/branches/libffi3-branch/Lib/test/test_modulefinder.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_modulefinder.py (original) +++ python/branches/libffi3-branch/Lib/test/test_modulefinder.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,5 @@ import __future__ -import sys, os +import os import unittest import distutils.dir_util import tempfile Modified: python/branches/libffi3-branch/Lib/test/test_multibytecodec_support.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_multibytecodec_support.py (original) +++ python/branches/libffi3-branch/Lib/test/test_multibytecodec_support.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ # Common Unittest Routines for CJK codecs # -import sys, codecs, os.path +import sys, codecs import unittest, re from test import test_support from StringIO import StringIO Modified: python/branches/libffi3-branch/Lib/test/test_operator.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_operator.py (original) +++ python/branches/libffi3-branch/Lib/test/test_operator.py Tue Mar 4 15:50:53 2008 @@ -386,6 +386,26 @@ raise SyntaxError self.failUnlessRaises(SyntaxError, operator.attrgetter('foo'), C()) + # recursive gets + a = A() + a.name = 'arthur' + a.child = A() + a.child.name = 'thomas' + f = operator.attrgetter('child.name') + 
self.assertEqual(f(a), 'thomas') + self.assertRaises(AttributeError, f, a.child) + f = operator.attrgetter('name', 'child.name') + self.assertEqual(f(a), ('arthur', 'thomas')) + f = operator.attrgetter('name', 'child.name', 'child.child.name') + self.assertRaises(AttributeError, f, a) + + a.child.child = A() + a.child.child.name = 'johnson' + f = operator.attrgetter('child.child.name') + self.assertEqual(f(a), 'johnson') + f = operator.attrgetter('name', 'child.name', 'child.child.name') + self.assertEqual(f(a), ('arthur', 'thomas', 'johnson')) + def test_itemgetter(self): a = 'ABCDE' f = operator.itemgetter(2) @@ -420,6 +440,24 @@ self.assertEqual(operator.itemgetter(2,10,5)(data), ('2', '10', '5')) self.assertRaises(TypeError, operator.itemgetter(2, 'x', 5), data) + def test_methodcaller(self): + self.assertRaises(TypeError, operator.methodcaller) + class A: + def foo(self, *args, **kwds): + return args[0] + args[1] + def bar(self, f=42): + return f + a = A() + f = operator.methodcaller('foo') + self.assertRaises(IndexError, f, a) + f = operator.methodcaller('foo', 1, 2) + self.assertEquals(f(a), 3) + f = operator.methodcaller('bar') + self.assertEquals(f(a), 42) + self.assertRaises(TypeError, f, a, a) + f = operator.methodcaller('bar', f=5) + self.assertEquals(f(a), 5) + def test_inplace(self): class C(object): def __iadd__ (self, other): return "iadd" Modified: python/branches/libffi3-branch/Lib/test/test_optparse.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_optparse.py (original) +++ python/branches/libffi3-branch/Lib/test/test_optparse.py Tue Mar 4 15:50:53 2008 @@ -16,7 +16,6 @@ import unittest from StringIO import StringIO -from pprint import pprint from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_ossaudiodev.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_ossaudiodev.py (original) +++ python/branches/libffi3-branch/Lib/test/test_ossaudiodev.py Tue Mar 4 15:50:53 2008 @@ -1,14 +1,11 @@ from test import test_support test_support.requires('audio') -from test.test_support import verbose, findfile, TestSkipped +from test.test_support import findfile, TestSkipped import errno -import fcntl import ossaudiodev -import os import sys -import select import sunaudio import time import audioop Modified: python/branches/libffi3-branch/Lib/test/test_parser.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_parser.py (original) +++ python/branches/libffi3-branch/Lib/test/test_parser.py Tue Mar 4 15:50:53 2008 @@ -480,11 +480,28 @@ st = parser.suite('a = u"\u1"') self.assertRaises(SyntaxError, parser.compilest, st) +class ParserStackLimitTestCase(unittest.TestCase): + """try to push the parser to/over it's limits. 
+ see http://bugs.python.org/issue1881 for a discussion + """ + def _nested_expression(self, level): + return "["*level+"]"*level + + def test_deeply_nested_list(self): + e = self._nested_expression(99) + st = parser.expr(e) + st.compile() + + def test_trigger_memory_error(self): + e = self._nested_expression(100) + self.assertRaises(MemoryError, parser.expr, e) + def test_main(): test_support.run_unittest( RoundtripLegalSyntaxTestCase, IllegalSyntaxTestCase, CompileTestCase, + ParserStackLimitTestCase, ) Modified: python/branches/libffi3-branch/Lib/test/test_pep247.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_pep247.py (original) +++ python/branches/libffi3-branch/Lib/test/test_pep247.py Tue Mar 4 15:50:53 2008 @@ -10,6 +10,8 @@ DeprecationWarning) import md5, sha, hmac +from test.test_support import verbose + def check_hash_module(module, key=None): assert hasattr(module, 'digest_size'), "Must have digest_size" @@ -47,10 +49,15 @@ hd2 += "%02x" % ord(byte) assert hd2 == hexdigest, "hexdigest doesn't appear correct" - print 'Module', module.__name__, 'seems to comply with PEP 247' + if verbose: + print 'Module', module.__name__, 'seems to comply with PEP 247' -if __name__ == '__main__': +def test_main(): check_hash_module(md5) check_hash_module(sha) check_hash_module(hmac, key='abc') + + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_pickle.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_pickle.py (original) +++ python/branches/libffi3-branch/Lib/test/test_pickle.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import pickle -import unittest from cStringIO import StringIO from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_pkg.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_pkg.py (original) +++ python/branches/libffi3-branch/Lib/test/test_pkg.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,6 @@ import os import tempfile import textwrap -import traceback import unittest from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_plistlib.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_plistlib.py (original) +++ python/branches/libffi3-branch/Lib/test/test_plistlib.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,6 @@ import unittest import plistlib import os -import time import datetime from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_poll.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_poll.py (original) +++ python/branches/libffi3-branch/Lib/test/test_poll.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ # Test case for the os.poll() function -import sys, os, select, random, unittest +import os, select, random, unittest from test.test_support import TestSkipped, TESTFN, run_unittest try: Modified: python/branches/libffi3-branch/Lib/test/test_posix.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_posix.py (original) +++ python/branches/libffi3-branch/Lib/test/test_posix.py Tue Mar 4 15:50:53 2008 @@ -9,7 +9,6 @@ import time import os -import sys import unittest import warnings 
warnings.filterwarnings('ignore', '.* potential security risk .*', Modified: python/branches/libffi3-branch/Lib/test/test_pyclbr.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_pyclbr.py (original) +++ python/branches/libffi3-branch/Lib/test/test_pyclbr.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,7 @@ Nick Mathewson ''' from test.test_support import run_unittest -import unittest, sys +import sys from types import ClassType, FunctionType, MethodType, BuiltinFunctionType import pyclbr from unittest import TestCase Modified: python/branches/libffi3-branch/Lib/test/test_quopri.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_quopri.py (original) +++ python/branches/libffi3-branch/Lib/test/test_quopri.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ from test import test_support import unittest -import sys, os, cStringIO, subprocess +import sys, cStringIO, subprocess import quopri Modified: python/branches/libffi3-branch/Lib/test/test_resource.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_resource.py (original) +++ python/branches/libffi3-branch/Lib/test/test_resource.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ import unittest from test import test_support -import os import resource import time Modified: python/branches/libffi3-branch/Lib/test/test_rfc822.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_rfc822.py (original) +++ python/branches/libffi3-branch/Lib/test/test_rfc822.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ import rfc822 -import sys import unittest from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_scriptpackages.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_scriptpackages.py (original) +++ python/branches/libffi3-branch/Lib/test/test_scriptpackages.py Tue Mar 4 15:50:53 2008 @@ -1,9 +1,6 @@ # Copyright (C) 2003 Python Software Foundation import unittest -import os -import sys -import tempfile from test import test_support import aetools Modified: python/branches/libffi3-branch/Lib/test/test_sgmllib.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_sgmllib.py (original) +++ python/branches/libffi3-branch/Lib/test/test_sgmllib.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,3 @@ -import htmlentitydefs import pprint import re import sgmllib @@ -116,7 +115,7 @@ try: events = self.get_events(source) except: - import sys + #import sys #print >>sys.stderr, pprint.pformat(self.events) raise if events != expected_events: Modified: python/branches/libffi3-branch/Lib/test/test_shlex.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_shlex.py (original) +++ python/branches/libffi3-branch/Lib/test/test_shlex.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # -*- coding: iso-8859-1 -*- import unittest -import os, sys import shlex from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_signal.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_signal.py (original) +++ 
python/branches/libffi3-branch/Lib/test/test_signal.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,12 @@ import unittest from test import test_support import signal -import os, sys, time +import sys, os, time, errno + +if sys.platform[:3] in ('win', 'os2') or sys.platform == 'riscos': + raise test_support.TestSkipped("Can't test signal on %s" % \ + sys.platform) + class HandlerBCalled(Exception): pass @@ -210,14 +215,54 @@ os.close(self.write) signal.signal(signal.SIGALRM, self.alrm) +class SiginterruptTest(unittest.TestCase): + signum = signal.SIGUSR1 + def readpipe_interrupted(self, cb): + r, w = os.pipe() + ppid = os.getpid() + pid = os.fork() + + oldhandler = signal.signal(self.signum, lambda x,y: None) + cb() + if pid==0: + # child code: sleep, kill, sleep. and then exit, + # which closes the pipe from which the parent process reads + try: + time.sleep(0.2) + os.kill(ppid, self.signum) + time.sleep(0.2) + finally: + os._exit(0) -def test_main(): - if sys.platform[:3] in ('win', 'os2') or sys.platform == 'riscos': - raise test_support.TestSkipped("Can't test signal on %s" % \ - sys.platform) + try: + os.close(w) + try: + d=os.read(r, 1) + return False + except OSError, err: + if err.errno != errno.EINTR: + raise + return True + finally: + signal.signal(self.signum, oldhandler) + os.waitpid(pid, 0) + + def test_without_siginterrupt(self): + i=self.readpipe_interrupted(lambda: None) + self.assertEquals(i, True) + + def test_siginterrupt_on(self): + i=self.readpipe_interrupted(lambda: signal.siginterrupt(self.signum, 1)) + self.assertEquals(i, True) + + def test_siginterrupt_off(self): + i=self.readpipe_interrupted(lambda: signal.siginterrupt(self.signum, 0)) + self.assertEquals(i, False) + +def test_main(): test_support.run_unittest(BasicSignalTests, InterProcessSignalTests, - WakeupSignalTests) + WakeupSignalTests, SiginterruptTest) if __name__ == "__main__": Modified: python/branches/libffi3-branch/Lib/test/test_site.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_site.py (original) +++ python/branches/libffi3-branch/Lib/test/test_site.py Tue Mar 4 15:50:53 2008 @@ -5,12 +5,11 @@ """ import unittest -from test.test_support import TestSkipped, TestFailed, run_unittest, TESTFN +from test.test_support import TestSkipped, run_unittest, TESTFN import __builtin__ import os import sys import encodings -import tempfile # Need to make sure to not import 'site' if someone specified ``-S`` at the # command-line. Detect this by just making sure 'site' has not been imported # already. 
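(Editor's sketch, not part of any checked-in patch.) The SiginterruptTest added to test_signal.py above checks that signal.siginterrupt() decides whether a system call blocked in os.read() fails with EINTR when a signal arrives (flag 1, which is also the behaviour for handlers installed from Python) or is transparently restarted (flag 0). The standalone sketch below only illustrates that same mechanism on a POSIX system; the helper name interrupted_read is invented for the example and is not a helper from the test.

import errno
import os
import signal
import time

def interrupted_read(flag):
    # Illustrative only: fork a child that signals the parent while the
    # parent is blocked reading from a pipe, and report whether the read
    # was interrupted (OSError with EINTR) or restarted until EOF.
    r, w = os.pipe()
    old = signal.signal(signal.SIGUSR1, lambda signum, frame: None)
    signal.siginterrupt(signal.SIGUSR1, flag)
    pid = os.fork()
    if pid == 0:
        # Child: give the parent time to block, signal it, then exit,
        # which closes the child's copy of the write end of the pipe.
        try:
            time.sleep(0.2)
            os.kill(os.getppid(), signal.SIGUSR1)
            time.sleep(0.2)
        finally:
            os._exit(0)
    os.close(w)
    try:
        os.read(r, 1)
        return False              # read was restarted and returned EOF
    except OSError, e:
        if e.errno != errno.EINTR:
            raise
        return True               # read was interrupted by the signal
    finally:
        signal.signal(signal.SIGUSR1, old)
        os.waitpid(pid, 0)
        os.close(r)

if __name__ == '__main__':
    print interrupted_read(1), interrupted_read(0)   # expected: True False
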
Modified: python/branches/libffi3-branch/Lib/test/test_smtplib.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_smtplib.py (original) +++ python/branches/libffi3-branch/Lib/test/test_smtplib.py Tue Mar 4 15:50:53 2008 @@ -18,14 +18,15 @@ PORT = None def server(evt, buf): + serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + serv.settimeout(1) + serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + serv.bind(("", 0)) + global PORT + PORT = serv.getsockname()[1] + serv.listen(5) + evt.set() try: - serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - serv.settimeout(3) - serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - serv.bind(("", 0)) - global PORT - PORT = serv.getsockname()[1] - serv.listen(5) conn, addr = serv.accept() except socket.timeout: pass @@ -38,7 +39,6 @@ buf = buf[sent:] n -= 1 - time.sleep(0.01) conn.close() finally: @@ -52,16 +52,8 @@ self.evt = threading.Event() servargs = (self.evt, "220 Hola mundo\n") threading.Thread(target=server, args=servargs).start() - - # wait until server thread has assigned a port number - n = 500 - while PORT is None and n > 0: - time.sleep(0.01) - n -= 1 - - # wait a little longer (sometimes connections are refused - # on slow machines without this additional wait) - time.sleep(0.5) + self.evt.wait() + self.evt.clear() def tearDown(self): self.evt.wait() @@ -76,28 +68,12 @@ smtp = smtplib.SMTP("%s:%s" % (HOST, PORT)) smtp.sock.close() - def testNotConnected(self): - # Test various operations on an unconnected SMTP object that - # should raise exceptions (at present the attempt in SMTP.send - # to reference the nonexistent 'sock' attribute of the SMTP object - # causes an AttributeError) - smtp = smtplib.SMTP() - self.assertRaises(AttributeError, smtp.ehlo) - self.assertRaises(AttributeError, smtp.send, 'test msg') - def testLocalHostName(self): # check that supplied local_hostname is used smtp = smtplib.SMTP(HOST, PORT, local_hostname="testhost") self.assertEqual(smtp.local_hostname, "testhost") smtp.sock.close() - def testNonnumericPort(self): - # check that non-numeric port raises socket.error - self.assertRaises(socket.error, smtplib.SMTP, - "localhost", "bogus") - self.assertRaises(socket.error, smtplib.SMTP, - "localhost:bogus") - def testTimeoutDefault(self): # default smtp = smtplib.SMTP(HOST, PORT) @@ -127,6 +103,7 @@ serv = server_class(("", 0), ('nowhere', -1)) global PORT PORT = serv.getsockname()[1] + serv_evt.set() try: if hasattr(select, 'poll'): @@ -149,12 +126,12 @@ except socket.timeout: pass finally: - # allow some time for the client to read the result - time.sleep(0.5) - serv.close() + if not client_evt.isSet(): + # allow some time for the client to read the result + time.sleep(0.5) + serv.close() asyncore.close_all() PORT = None - time.sleep(0.5) serv_evt.set() MSG_BEGIN = '---------- MESSAGE FOLLOWS ----------\n' @@ -180,14 +157,8 @@ threading.Thread(target=debugging_server, args=serv_args).start() # wait until server thread has assigned a port number - n = 500 - while PORT is None and n > 0: - time.sleep(0.01) - n -= 1 - - # wait a little longer (sometimes connections are refused - # on slow machines without this additional wait) - time.sleep(0.5) + self.serv_evt.wait() + self.serv_evt.clear() def tearDown(self): # indicate that the client is finished @@ -257,6 +228,26 @@ self.assertEqual(self.output.getvalue(), mexpect) +class NonConnectingTests(TestCase): + + def testNotConnected(self): + # Test various 
operations on an unconnected SMTP object that + # should raise exceptions (at present the attempt in SMTP.send + # to reference the nonexistent 'sock' attribute of the SMTP object + # causes an AttributeError) + smtp = smtplib.SMTP() + self.assertRaises(smtplib.SMTPServerDisconnected, smtp.ehlo) + self.assertRaises(smtplib.SMTPServerDisconnected, + smtp.send, 'test msg') + + def testNonnumericPort(self): + # check that non-numeric port raises socket.error + self.assertRaises(socket.error, smtplib.SMTP, + "localhost", "bogus") + self.assertRaises(socket.error, smtplib.SMTP, + "localhost:bogus") + + # test response of client to a non-successful HELO message class BadHELOServerTests(TestCase): @@ -268,16 +259,8 @@ self.evt = threading.Event() servargs = (self.evt, "199 no hello for you!\n") threading.Thread(target=server, args=servargs).start() - - # wait until server thread has assigned a port number - n = 500 - while PORT is None and n > 0: - time.sleep(0.01) - n -= 1 - - # wait a little longer (sometimes connections are refused - # on slow machines without this additional wait) - time.sleep(0.5) + self.evt.wait() + self.evt.clear() def tearDown(self): self.evt.wait() @@ -354,14 +337,8 @@ threading.Thread(target=debugging_server, args=serv_args).start() # wait until server thread has assigned a port number - n = 500 - while PORT is None and n > 0: - time.sleep(0.01) - n -= 1 - - # wait a little longer (sometimes connections are refused - # on slow machines without this additional wait) - time.sleep(0.5) + self.serv_evt.wait() + self.serv_evt.clear() def tearDown(self): # indicate that the client is finished @@ -426,6 +403,7 @@ def test_main(verbose=None): test_support.run_unittest(GeneralTests, DebuggingServerTests, + NonConnectingTests, BadHELOServerTests, SMTPSimTests) if __name__ == '__main__': Modified: python/branches/libffi3-branch/Lib/test/test_socketserver.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_socketserver.py (original) +++ python/branches/libffi3-branch/Lib/test/test_socketserver.py Tue Mar 4 15:50:53 2008 @@ -2,14 +2,16 @@ Test suite for SocketServer.py. 
""" -import os -import socket +import contextlib import errno import imp +import os import select -import time +import signal +import socket +import tempfile import threading -from functools import wraps +import time import unittest import SocketServer @@ -20,7 +22,6 @@ test.test_support.requires("network") NREQ = 3 -DELAY = 0.5 TEST_STR = "hello world\n" HOST = "localhost" @@ -28,14 +29,6 @@ HAVE_FORKING = hasattr(os, "fork") and os.name != "os2" -class MyMixinHandler: - def handle(self): - time.sleep(DELAY) - line = self.rfile.readline() - time.sleep(DELAY) - self.wfile.write(line) - - def receive(sock, n, timeout=20): r, w, x = select.select([sock], [], [], timeout) if sock in r: @@ -43,14 +36,6 @@ else: raise RuntimeError, "timed out on %r" % (sock,) - -class MyStreamHandler(MyMixinHandler, SocketServer.StreamRequestHandler): - pass - -class MyDatagramHandler(MyMixinHandler, - SocketServer.DatagramRequestHandler): - pass - if HAVE_UNIX_SOCKETS: class ForkingUnixStreamServer(SocketServer.ForkingMixIn, SocketServer.UnixStreamServer): @@ -85,48 +70,40 @@ pass if verbose: print "thread: creating server" svr = svrcls(self.__addr, self.__hdlrcls) - # pull the address out of the server in case it changed - # this can happen if another process is using the port - addr = svr.server_address - if addr: - self.__addr = addr - if self.__addr != svr.socket.getsockname(): - raise RuntimeError('server_address was %s, expected %s' % - (self.__addr, svr.socket.getsockname())) + # We had the OS pick a port, so pull the real address out of + # the server. + self.addr = svr.server_address + self.port = self.addr[1] + if self.addr != svr.socket.getsockname(): + raise RuntimeError('server_address was %s, expected %s' % + (self.addr, svr.socket.getsockname())) self.ready.set() if verbose: print "thread: serving three times" svr.serve_a_few() if verbose: print "thread: done" -class ForgivingTCPServer(SocketServer.TCPServer): - # prevent errors if another process is using the port we want - def server_bind(self): - host, default_port = self.server_address - # this code shamelessly stolen from test.test_support - # the ports were changed to protect the innocent - import sys - for port in [default_port, 3434, 8798, 23833]: - try: - self.server_address = host, port - SocketServer.TCPServer.server_bind(self) - break - except socket.error, (err, msg): - if err != errno.EADDRINUSE: - raise - print >> sys.__stderr__, \ - "WARNING: failed to listen on port %d, trying another: " % port + at contextlib.contextmanager +def simple_subprocess(testcase): + pid = os.fork() + if pid == 0: + # Don't throw an exception; it would be caught by the test harness. + os._exit(72) + yield None + pid2, status = os.waitpid(pid, 0) + testcase.assertEquals(pid2, pid) + testcase.assertEquals(72 << 8, status) class SocketServerTest(unittest.TestCase): """Test all socket servers.""" def setUp(self): + signal.alarm(20) # Kill deadlocks after 20 seconds. self.port_seed = 0 self.test_files = [] def tearDown(self): - time.sleep(DELAY) reap_children() for fn in self.test_files: @@ -135,16 +112,18 @@ except os.error: pass self.test_files[:] = [] - - def pickport(self): - self.port_seed += 1 - return 10000 + (os.getpid() % 1000)*10 + self.port_seed + signal.alarm(0) # Didn't deadlock. def pickaddr(self, proto): if proto == socket.AF_INET: - return (HOST, self.pickport()) + return (HOST, 0) else: - fn = TEST_FILE + str(self.pickport()) + # XXX: We need a way to tell AF_UNIX to pick its own name + # like AF_INET provides port==0. 
+ dir = None + if os.name == 'os2': + dir = '\socket' + fn = tempfile.mktemp(prefix='unix_socket.', dir=dir) if os.name == 'os2': # AF_UNIX socket names on OS/2 require a specific prefix # which can't include a drive letter and must also use @@ -153,7 +132,6 @@ fn = fn[2:] if fn[0] in (os.sep, os.altsep): fn = fn[1:] - fn = os.path.join('\socket', fn) if os.sep == '/': fn = fn.replace(os.sep, os.altsep) else: @@ -161,25 +139,30 @@ self.test_files.append(fn) return fn - def run_servers(self, proto, servers, hdlrcls, testfunc): - for svrcls in servers: - addr = self.pickaddr(proto) - if verbose: - print "ADDR =", addr - print "CLASS =", svrcls - t = ServerThread(addr, svrcls, hdlrcls) - if verbose: print "server created" - t.start() - if verbose: print "server running" - for i in range(NREQ): - t.ready.wait(10*DELAY) - self.assert_(t.ready.isSet(), - "Server not ready within a reasonable time") - if verbose: print "test client", i - testfunc(proto, addr) - if verbose: print "waiting for server" - t.join() - if verbose: print "done" + def run_server(self, svrcls, hdlrbase, testfunc): + class MyHandler(hdlrbase): + def handle(self): + line = self.rfile.readline() + self.wfile.write(line) + + addr = self.pickaddr(svrcls.address_family) + if verbose: + print "ADDR =", addr + print "CLASS =", svrcls + t = ServerThread(addr, svrcls, MyHandler) + if verbose: print "server created" + t.start() + if verbose: print "server running" + t.ready.wait(10) + self.assert_(t.ready.isSet(), + "%s not ready within a reasonable time" % svrcls) + addr = t.addr + for i in range(NREQ): + if verbose: print "test client", i + testfunc(svrcls.address_family, addr) + if verbose: print "waiting for server" + t.join() + if verbose: print "done" def stream_examine(self, proto, addr): s = socket.socket(proto, socket.SOCK_STREAM) @@ -202,47 +185,77 @@ self.assertEquals(buf, TEST_STR) s.close() - def test_TCPServers(self): - # Test SocketServer.TCPServer - servers = [ForgivingTCPServer, SocketServer.ThreadingTCPServer] - if HAVE_FORKING: - servers.append(SocketServer.ForkingTCPServer) - self.run_servers(socket.AF_INET, servers, - MyStreamHandler, self.stream_examine) - - def test_UDPServers(self): - # Test SocketServer.UDPServer - servers = [SocketServer.UDPServer, - SocketServer.ThreadingUDPServer] - if HAVE_FORKING: - servers.append(SocketServer.ForkingUDPServer) - self.run_servers(socket.AF_INET, servers, MyDatagramHandler, - self.dgram_examine) - - def test_stream_servers(self): - # Test SocketServer's stream servers - if not HAVE_UNIX_SOCKETS: - return - servers = [SocketServer.UnixStreamServer, - SocketServer.ThreadingUnixStreamServer] + def test_TCPServer(self): + self.run_server(SocketServer.TCPServer, + SocketServer.StreamRequestHandler, + self.stream_examine) + + def test_ThreadingTCPServer(self): + self.run_server(SocketServer.ThreadingTCPServer, + SocketServer.StreamRequestHandler, + self.stream_examine) + + if HAVE_FORKING: + def test_ForkingTCPServer(self): + with simple_subprocess(self): + self.run_server(SocketServer.ForkingTCPServer, + SocketServer.StreamRequestHandler, + self.stream_examine) + + if HAVE_UNIX_SOCKETS: + def test_UnixStreamServer(self): + self.run_server(SocketServer.UnixStreamServer, + SocketServer.StreamRequestHandler, + self.stream_examine) + + def test_ThreadingUnixStreamServer(self): + self.run_server(SocketServer.ThreadingUnixStreamServer, + SocketServer.StreamRequestHandler, + self.stream_examine) + if HAVE_FORKING: - servers.append(ForkingUnixStreamServer) - 
self.run_servers(socket.AF_UNIX, servers, MyStreamHandler, - self.stream_examine) + def test_ForkingUnixStreamServer(self): + with simple_subprocess(self): + self.run_server(ForkingUnixStreamServer, + SocketServer.StreamRequestHandler, + self.stream_examine) + + def test_UDPServer(self): + self.run_server(SocketServer.UDPServer, + SocketServer.DatagramRequestHandler, + self.dgram_examine) + + def test_ThreadingUDPServer(self): + self.run_server(SocketServer.ThreadingUDPServer, + SocketServer.DatagramRequestHandler, + self.dgram_examine) + + if HAVE_FORKING: + def test_ForkingUDPServer(self): + with simple_subprocess(self): + self.run_server(SocketServer.ForkingUDPServer, + SocketServer.DatagramRequestHandler, + self.dgram_examine) # Alas, on Linux (at least) recvfrom() doesn't return a meaningful # client address so this cannot work: - # def test_dgram_servers(self): - # # Test SocketServer.UnixDatagramServer - # if not HAVE_UNIX_SOCKETS: - # return - # servers = [SocketServer.UnixDatagramServer, - # SocketServer.ThreadingUnixDatagramServer] + # if HAVE_UNIX_SOCKETS: + # def test_UnixDatagramServer(self): + # self.run_server(SocketServer.UnixDatagramServer, + # SocketServer.DatagramRequestHandler, + # self.dgram_examine) + # + # def test_ThreadingUnixDatagramServer(self): + # self.run_server(SocketServer.ThreadingUnixDatagramServer, + # SocketServer.DatagramRequestHandler, + # self.dgram_examine) + # # if HAVE_FORKING: - # servers.append(ForkingUnixDatagramServer) - # self.run_servers(socket.AF_UNIX, servers, MyDatagramHandler, - # self.dgram_examine) + # def test_ForkingUnixDatagramServer(self): + # self.run_server(SocketServer.ForkingUnixDatagramServer, + # SocketServer.DatagramRequestHandler, + # self.dgram_examine) def test_main(): @@ -254,3 +267,4 @@ if __name__ == "__main__": test_main() + signal.alarm(3) # Shutdown shouldn't take more than 3 seconds. 
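(Editor's sketch, not part of any checked-in patch.) The test_socketserver.py rewrite above replaces hand-picked port numbers with the simpler pattern of binding to port 0 and reading the kernel-assigned address back from server_address, with a trivial line-echo handler. A minimal standalone sketch of that pattern follows, using Python 2 module names; EchoHandler and the client code are illustrative and are not helpers from the test itself.

import socket
import threading
import SocketServer

class EchoHandler(SocketServer.StreamRequestHandler):
    # Echo one line back to the client, like the MyHandler used in run_server().
    def handle(self):
        self.wfile.write(self.rfile.readline())

# Bind to port 0 so the OS picks a free port, then read the real address back.
server = SocketServer.TCPServer(("localhost", 0), EchoHandler)
host, port = server.server_address

t = threading.Thread(target=server.handle_request)   # serve a single request
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall("hello world\n")
reply = client.makefile("rb").readline()
client.close()
t.join()
server.server_close()
assert reply == "hello world\n"

Binding to port 0 is what lets the rewritten test drop the old pickport() logic and the ForgivingTCPServer retry loop: the OS guarantees a free port, and the caller learns it from server_address afterwards.
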
Modified: python/branches/libffi3-branch/Lib/test/test_sqlite.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_sqlite.py (original) +++ python/branches/libffi3-branch/Lib/test/test_sqlite.py Tue Mar 4 15:50:53 2008 @@ -1,17 +1,16 @@ from test.test_support import run_unittest, TestSkipped -import unittest try: import _sqlite3 except ImportError: raise TestSkipped('no sqlite available') -from sqlite3.test import (dbapi, types, userfunctions, +from sqlite3.test import (dbapi, types, userfunctions, py25tests, factory, transactions, hooks, regression) def test_main(): run_unittest(dbapi.suite(), types.suite(), userfunctions.suite(), - factory.suite(), transactions.suite(), hooks.suite(), - regression.suite()) + py25tests.suite(), factory.suite(), transactions.suite(), + hooks.suite(), regression.suite()) if __name__ == "__main__": test_main() Modified: python/branches/libffi3-branch/Lib/test/test_str.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_str.py (original) +++ python/branches/libffi3-branch/Lib/test/test_str.py Tue Mar 4 15:50:53 2008 @@ -1,5 +1,4 @@ -import unittest import struct import sys from test import test_support, string_tests Modified: python/branches/libffi3-branch/Lib/test/test_strftime.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_strftime.py (original) +++ python/branches/libffi3-branch/Lib/test/test_strftime.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,7 @@ # Sanity checker for time.strftime -import time, calendar, sys, os, re +import time, calendar, sys, re from test.test_support import verbose def main(): Modified: python/branches/libffi3-branch/Lib/test/test_sunaudiodev.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_sunaudiodev.py (original) +++ python/branches/libffi3-branch/Lib/test/test_sunaudiodev.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,4 @@ -from test.test_support import verbose, findfile, TestFailed, TestSkipped +from test.test_support import findfile, TestFailed, TestSkipped import sunaudiodev import os @@ -22,7 +22,11 @@ a.write(data) a.close() -def test(): + +def test_main(): play_sound_file(findfile('audiotest.au')) -test() + + +if __name__ == '__main__': + test_main() Modified: python/branches/libffi3-branch/Lib/test/test_support.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_support.py (original) +++ python/branches/libffi3-branch/Lib/test/test_support.py Tue Mar 4 15:50:53 2008 @@ -9,8 +9,8 @@ import sys import os import os.path +import shutil import warnings -import types import unittest class Error(Exception): @@ -65,6 +65,14 @@ except OSError: pass +def rmtree(path): + try: + shutil.rmtree(path) + except OSError, e: + # Unix returns ENOENT, Windows returns ESRCH. + if e.errno not in (errno.ENOENT, errno.ESRCH): + raise + def forget(modname): '''"Forget" a module was ever imported by removing it from sys.modules and deleting any .pyc and .pyo files.''' @@ -97,7 +105,7 @@ def bind_port(sock, host='', preferred_port=54321): """Try to bind the sock to a port. If we are running multiple - tests and we don't try multiple ports, the test can fails. This + tests and we don't try multiple ports, the test can fail. 
This makes the test more robust.""" # Find some random ports that hopefully no one is listening on. Modified: python/branches/libffi3-branch/Lib/test/test_threading.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_threading.py (original) +++ python/branches/libffi3-branch/Lib/test/test_threading.py Tue Mar 4 15:50:53 2008 @@ -8,6 +8,7 @@ import thread import time import unittest +import weakref # A trivial mutable counter. class Counter(object): @@ -253,6 +254,33 @@ finally: sys.setcheckinterval(old_interval) + def test_no_refcycle_through_target(self): + class RunSelfFunction(object): + def __init__(self, should_raise): + # The links in this refcycle from Thread back to self + # should be cleaned up when the thread completes. + self.should_raise = should_raise + self.thread = threading.Thread(target=self._run, + args=(self,), + kwargs={'yet_another':self}) + self.thread.start() + + def _run(self, other_ref, yet_another): + if self.should_raise: + raise SystemExit + + cyclic_object = RunSelfFunction(should_raise=False) + weak_cyclic_object = weakref.ref(cyclic_object) + cyclic_object.thread.join() + del cyclic_object + self.assertEquals(None, weak_cyclic_object()) + + raising_cyclic_object = RunSelfFunction(should_raise=True) + weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) + raising_cyclic_object.thread.join() + del raising_cyclic_object + self.assertEquals(None, weak_raising_cyclic_object()) + class ThreadingExceptionTests(unittest.TestCase): # A RuntimeError should be raised if Thread.start() is called Modified: python/branches/libffi3-branch/Lib/test/test_tuple.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_tuple.py (original) +++ python/branches/libffi3-branch/Lib/test/test_tuple.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,3 @@ -import unittest from test import test_support, seq_tests class TupleTest(seq_tests.CommonTest): Modified: python/branches/libffi3-branch/Lib/test/test_unicode.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_unicode.py (original) +++ python/branches/libffi3-branch/Lib/test/test_unicode.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ (c) Copyright CNRI, All Rights Reserved. NO WARRANTY. 
"""#" -import unittest, sys, struct, codecs, new +import sys, struct, codecs from test import test_support, string_tests # Error handling (bad decoder return) Modified: python/branches/libffi3-branch/Lib/test/test_unpack.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_unpack.py (original) +++ python/branches/libffi3-branch/Lib/test/test_unpack.py Tue Mar 4 15:50:53 2008 @@ -122,7 +122,6 @@ __test__ = {'doctests' : doctests} def test_main(verbose=False): - import sys from test import test_support from test import test_unpack test_support.run_doctest(test_unpack, verbose) Modified: python/branches/libffi3-branch/Lib/test/test_urllib.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_urllib.py (original) +++ python/branches/libffi3-branch/Lib/test/test_urllib.py Tue Mar 4 15:50:53 2008 @@ -8,10 +8,6 @@ import mimetools import tempfile import StringIO -import ftplib -import threading -import socket -import time def hexescape(char): """Escape char as RFC 2396 specifies""" Modified: python/branches/libffi3-branch/Lib/test/test_urllib2.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_urllib2.py (original) +++ python/branches/libffi3-branch/Lib/test/test_urllib2.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ import unittest from test import test_support -import os, socket +import os import StringIO import urllib2 @@ -589,7 +589,7 @@ self.assertEqual(int(headers["Content-length"]), len(data)) def test_file(self): - import time, rfc822, socket + import rfc822, socket h = urllib2.FileHandler() o = h.parent = MockOpener() @@ -993,7 +993,7 @@ def _test_basic_auth(self, opener, auth_handler, auth_header, realm, http_handler, password_manager, request_url, protected_url): - import base64, httplib + import base64 user, password = "wile", "coyote" # .add_password() fed through to password manager Modified: python/branches/libffi3-branch/Lib/test/test_urllib2_localnet.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_urllib2_localnet.py (original) +++ python/branches/libffi3-branch/Lib/test/test_urllib2_localnet.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ #!/usr/bin/env python -import sys import threading import urlparse import urllib2 Modified: python/branches/libffi3-branch/Lib/test/test_userdict.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_userdict.py (original) +++ python/branches/libffi3-branch/Lib/test/test_userdict.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # Check every path through every method of UserDict -import unittest from test import test_support, mapping_tests import UserDict Modified: python/branches/libffi3-branch/Lib/test/test_userlist.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_userlist.py (original) +++ python/branches/libffi3-branch/Lib/test/test_userlist.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Check every path through every method of UserList from UserList import UserList -import unittest from test import test_support, list_tests class UserListTest(list_tests.CommonTest): Modified: python/branches/libffi3-branch/Lib/test/test_userstring.py 
============================================================================== --- python/branches/libffi3-branch/Lib/test/test_userstring.py (original) +++ python/branches/libffi3-branch/Lib/test/test_userstring.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ # UserString is a wrapper around the native builtin string type. # UserString instances should behave similar to builtin string objects. -import unittest import string from test import test_support, string_tests Modified: python/branches/libffi3-branch/Lib/test/test_uu.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_uu.py (original) +++ python/branches/libffi3-branch/Lib/test/test_uu.py Tue Mar 4 15:50:53 2008 @@ -8,7 +8,6 @@ import sys, os, uu, cStringIO import uu -from StringIO import StringIO plaintext = "The smooth-scaled python crept over the sleeping dog\n" Modified: python/branches/libffi3-branch/Lib/test/test_whichdb.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_whichdb.py (original) +++ python/branches/libffi3-branch/Lib/test/test_whichdb.py Tue Mar 4 15:50:53 2008 @@ -8,7 +8,6 @@ import unittest import whichdb import anydbm -import tempfile import glob _fname = test.test_support.TESTFN Modified: python/branches/libffi3-branch/Lib/test/test_xml_etree.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_xml_etree.py (original) +++ python/branches/libffi3-branch/Lib/test/test_xml_etree.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,8 @@ # all included components work as they should. For a more extensive # test suite, see the selftest script in the ElementTree distribution. -import doctest, sys +import doctest +import sys from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_xml_etree_c.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_xml_etree_c.py (original) +++ python/branches/libffi3-branch/Lib/test/test_xml_etree_c.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,7 @@ # xml.etree test for cElementTree -import doctest, sys +import doctest +import sys from test import test_support Modified: python/branches/libffi3-branch/Lib/test/test_xmlrpc.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_xmlrpc.py (original) +++ python/branches/libffi3-branch/Lib/test/test_xmlrpc.py Tue Mar 4 15:50:53 2008 @@ -33,10 +33,6 @@ (2005, 02, 10, 11, 41, 23, 0, 1, -1)), 'datetime3': xmlrpclib.DateTime( datetime.datetime(2005, 02, 10, 11, 41, 23)), - 'datetime4': xmlrpclib.DateTime( - datetime.date(2005, 02, 10)), - 'datetime5': xmlrpclib.DateTime( - datetime.time(11, 41, 23)), }] class XMLRPCTestCase(unittest.TestCase): @@ -59,34 +55,14 @@ (newdt,), m = xmlrpclib.loads(s, use_datetime=0) self.assertEquals(newdt, xmlrpclib.DateTime('20050210T11:41:23')) - def test_dump_bare_date(self): - # This checks that an unwrapped datetime.date object can be handled - # by the marshalling code. 
This can't be done via test_dump_load() - # since the unmarshaller produces a datetime object - d = datetime.datetime(2005, 02, 10, 11, 41, 23).date() - s = xmlrpclib.dumps((d,)) - (newd,), m = xmlrpclib.loads(s, use_datetime=1) - self.assertEquals(newd.date(), d) - self.assertEquals(newd.time(), datetime.time(0, 0, 0)) - self.assertEquals(m, None) - - (newdt,), m = xmlrpclib.loads(s, use_datetime=0) - self.assertEquals(newdt, xmlrpclib.DateTime('20050210T00:00:00')) - - def test_dump_bare_time(self): - # This checks that an unwrapped datetime.time object can be handled - # by the marshalling code. This can't be done via test_dump_load() - # since the unmarshaller produces a datetime object - t = datetime.datetime(2005, 02, 10, 11, 41, 23).time() - s = xmlrpclib.dumps((t,)) - (newt,), m = xmlrpclib.loads(s, use_datetime=1) - today = datetime.datetime.now().date().strftime("%Y%m%d") - self.assertEquals(newt.time(), t) - self.assertEquals(newt.date(), datetime.datetime.now().date()) - self.assertEquals(m, None) - - (newdt,), m = xmlrpclib.loads(s, use_datetime=0) - self.assertEquals(newdt, xmlrpclib.DateTime('%sT11:41:23'%today)) + def test_cmp_datetime_DateTime(self): + now = datetime.datetime.now() + dt = xmlrpclib.DateTime(now.timetuple()) + self.assert_(dt == now) + self.assert_(now == dt) + then = now + datetime.timedelta(seconds=4) + self.assert_(then >= dt) + self.assert_(dt < then) def test_bug_1164912 (self): d = xmlrpclib.DateTime() @@ -242,21 +218,6 @@ t = xmlrpclib.DateTime(d) self.assertEqual(str(t), '20070102T03:04:05') - def test_datetime_date(self): - d = datetime.date(2007,9,8) - t = xmlrpclib.DateTime(d) - self.assertEqual(str(t), '20070908T00:00:00') - - def test_datetime_time(self): - d = datetime.time(13,17,19) - # allow for date rollover by checking today's or tomorrow's dates - dd1 = datetime.datetime.now().date() - dd2 = dd1 + datetime.timedelta(days=1) - vals = (dd1.strftime('%Y%m%dT13:17:19'), - dd2.strftime('%Y%m%dT13:17:19')) - t = xmlrpclib.DateTime(d) - self.assertEqual(str(t) in vals, True) - def test_repr(self): d = datetime.datetime(2007,1,2,3,4,5) t = xmlrpclib.DateTime(d) Modified: python/branches/libffi3-branch/Lib/test/test_xpickle.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_xpickle.py (original) +++ python/branches/libffi3-branch/Lib/test/test_xpickle.py Tue Mar 4 15:50:53 2008 @@ -5,7 +5,6 @@ import pickle import cPickle -import unittest from test import test_support from test.pickletester import AbstractPickleTests Modified: python/branches/libffi3-branch/Lib/test/test_zipfile64.py ============================================================================== --- python/branches/libffi3-branch/Lib/test/test_zipfile64.py (original) +++ python/branches/libffi3-branch/Lib/test/test_zipfile64.py Tue Mar 4 15:50:53 2008 @@ -20,7 +20,6 @@ import time import sys -from StringIO import StringIO from tempfile import TemporaryFile from test.test_support import TESTFN, run_unittest Modified: python/branches/libffi3-branch/Lib/threading.py ============================================================================== --- python/branches/libffi3-branch/Lib/threading.py (original) +++ python/branches/libffi3-branch/Lib/threading.py Tue Mar 4 15:50:53 2008 @@ -404,7 +404,7 @@ self.__args = args self.__kwargs = kwargs self.__daemonic = self._set_daemon() - self.__started = False + self.__started = Event() self.__stopped = False self.__block = Condition(Lock()) self.__initialized = 
True @@ -419,7 +419,7 @@ def __repr__(self): assert self.__initialized, "Thread.__init__() was not called" status = "initial" - if self.__started: + if self.__started.isSet(): status = "started" if self.__stopped: status = "stopped" @@ -430,7 +430,7 @@ def start(self): if not self.__initialized: raise RuntimeError("thread.__init__() not called") - if self.__started: + if self.__started.isSet(): raise RuntimeError("thread already started") if __debug__: self._note("%s.start(): starting thread", self) @@ -438,12 +438,16 @@ _limbo[self] = self _active_limbo_lock.release() _start_new_thread(self.__bootstrap, ()) - self.__started = True - _sleep(0.000001) # 1 usec, to let the thread run (Solaris hack) + self.__started.wait() def run(self): - if self.__target: - self.__target(*self.__args, **self.__kwargs) + try: + if self.__target: + self.__target(*self.__args, **self.__kwargs) + finally: + # Avoid a refcycle if the thread is running a function with + # an argument that has a member that points to the thread. + del self.__target, self.__args, self.__kwargs def __bootstrap(self): # Wrapper around the real bootstrap code that ignores @@ -467,7 +471,7 @@ def __bootstrap_inner(self): try: - self.__started = True + self.__started.set() _active_limbo_lock.acquire() _active[_get_ident()] = self del _limbo[self] @@ -576,7 +580,7 @@ def join(self, timeout=None): if not self.__initialized: raise RuntimeError("Thread.__init__() not called") - if not self.__started: + if not self.__started.isSet(): raise RuntimeError("cannot join thread before it is started") if self is currentThread(): raise RuntimeError("cannot join current thread") @@ -616,7 +620,7 @@ def isAlive(self): assert self.__initialized, "Thread.__init__() not called" - return self.__started and not self.__stopped + return self.__started.isSet() and not self.__stopped def isDaemon(self): assert self.__initialized, "Thread.__init__() not called" @@ -625,7 +629,7 @@ def setDaemon(self, daemonic): if not self.__initialized: raise RuntimeError("Thread.__init__() not called") - if self.__started: + if self.__started.isSet(): raise RuntimeError("cannot set daemon status of active thread"); self.__daemonic = daemonic @@ -667,7 +671,7 @@ def __init__(self): Thread.__init__(self, name="MainThread") - self._Thread__started = True + self._Thread__started.set() _active_limbo_lock.acquire() _active[_get_ident()] = self _active_limbo_lock.release() @@ -713,7 +717,7 @@ # instance is immortal, that's bad, so release this resource. del self._Thread__block - self._Thread__started = True + self._Thread__started.set() _active_limbo_lock.acquire() _active[_get_ident()] = self _active_limbo_lock.release() Modified: python/branches/libffi3-branch/Lib/token.py ============================================================================== --- python/branches/libffi3-branch/Lib/token.py (original) +++ python/branches/libffi3-branch/Lib/token.py Tue Mar 4 15:50:53 2008 @@ -71,6 +71,7 @@ for _name, _value in globals().items(): if type(_value) is type(0): tok_name[_value] = _name +del _name, _value def ISTERMINAL(x): Modified: python/branches/libffi3-branch/Lib/trace.py ============================================================================== --- python/branches/libffi3-branch/Lib/trace.py (original) +++ python/branches/libffi3-branch/Lib/trace.py Tue Mar 4 15:50:53 2008 @@ -53,6 +53,7 @@ import re import sys import threading +import time import token import tokenize import types @@ -98,6 +99,8 @@ with '>>>>>> '. 
-s, --summary Write a brief summary on stdout for each file. (Can only be used with --count or --report.) +-g, --timing Prefix each line with the time since the program started. + Only used while tracing. Filters, may be repeated multiple times: --ignore-module= Ignore the given module(s) and its submodules @@ -435,7 +438,8 @@ class Trace: def __init__(self, count=1, trace=1, countfuncs=0, countcallers=0, - ignoremods=(), ignoredirs=(), infile=None, outfile=None): + ignoremods=(), ignoredirs=(), infile=None, outfile=None, + timing=False): """ @param count true iff it should count number of times each line is executed @@ -451,6 +455,7 @@ @param infile file from which to read stored counts to be added into the results @param outfile file in which to write the results + @param timing true iff timing information be displayed """ self.infile = infile self.outfile = outfile @@ -463,6 +468,9 @@ self._calledfuncs = {} self._callers = {} self._caller_cache = {} + self.start_time = None + if timing: + self.start_time = time.time() if countcallers: self.globaltrace = self.globaltrace_trackcallers elif countfuncs: @@ -613,6 +621,8 @@ key = filename, lineno self.counts[key] = self.counts.get(key, 0) + 1 + if self.start_time: + print '%.2f' % (time.time() - self.start_time), bname = os.path.basename(filename) print "%s(%d): %s" % (bname, lineno, linecache.getline(filename, lineno)), @@ -624,6 +634,8 @@ filename = frame.f_code.co_filename lineno = frame.f_lineno + if self.start_time: + print '%.2f' % (time.time() - self.start_time), bname = os.path.basename(filename) print "%s(%d): %s" % (bname, lineno, linecache.getline(filename, lineno)), @@ -653,13 +665,13 @@ if argv is None: argv = sys.argv try: - opts, prog_argv = getopt.getopt(argv[1:], "tcrRf:d:msC:lT", + opts, prog_argv = getopt.getopt(argv[1:], "tcrRf:d:msC:lTg", ["help", "version", "trace", "count", "report", "no-report", "summary", "file=", "missing", "ignore-module=", "ignore-dir=", "coverdir=", "listfuncs", - "trackcalls"]) + "trackcalls", "timing"]) except getopt.error, msg: sys.stderr.write("%s: %s\n" % (sys.argv[0], msg)) @@ -679,6 +691,7 @@ summary = 0 listfuncs = False countcallers = False + timing = False for opt, val in opts: if opt == "--help": @@ -697,6 +710,10 @@ listfuncs = True continue + if opt == "-g" or opt == "--timing": + timing = True + continue + if opt == "-t" or opt == "--trace": trace = 1 continue @@ -779,7 +796,7 @@ t = Trace(count, trace, countfuncs=listfuncs, countcallers=countcallers, ignoremods=ignore_modules, ignoredirs=ignore_dirs, infile=counts_file, - outfile=counts_file) + outfile=counts_file, timing=timing) try: t.run('execfile(%r)' % (progname,)) except IOError, err: Modified: python/branches/libffi3-branch/Lib/xml/dom/minidom.py ============================================================================== --- python/branches/libffi3-branch/Lib/xml/dom/minidom.py (original) +++ python/branches/libffi3-branch/Lib/xml/dom/minidom.py Tue Mar 4 15:50:53 2008 @@ -203,6 +203,8 @@ L.append(child) if child.nodeType == Node.ELEMENT_NODE: child.normalize() + if L: + L[-1].nextSibling = None self.childNodes[:] = L def cloneNode(self, deep): Modified: python/branches/libffi3-branch/Lib/xmlrpclib.py ============================================================================== --- python/branches/libffi3-branch/Lib/xmlrpclib.py (original) +++ python/branches/libffi3-branch/Lib/xmlrpclib.py Tue Mar 4 15:50:53 2008 @@ -357,13 +357,6 @@ if datetime and isinstance(value, datetime.datetime): self.value = 
value.strftime("%Y%m%dT%H:%M:%S") return - if datetime and isinstance(value, datetime.date): - self.value = value.strftime("%Y%m%dT%H:%M:%S") - return - if datetime and isinstance(value, datetime.time): - today = datetime.datetime.now().strftime("%Y%m%d") - self.value = value.strftime(today+"T%H:%M:%S") - return if not isinstance(value, (TupleType, time.struct_time)): if value == 0: value = time.time() @@ -371,10 +364,57 @@ value = time.strftime("%Y%m%dT%H:%M:%S", value) self.value = value - def __cmp__(self, other): + def make_comparable(self, other): if isinstance(other, DateTime): - other = other.value - return cmp(self.value, other) + s = self.value + o = other.value + elif datetime and isinstance(other, datetime.datetime): + s = self.value + o = other.strftime("%Y%m%dT%H:%M:%S") + elif isinstance(other, (str, unicode)): + s = self.value + o = other + elif hasattr(other, "timetuple"): + s = self.timetuple() + o = other.timetuple() + else: + otype = (hasattr(other, "__class__") + and other.__class__.__name__ + or type(other)) + raise TypeError("Can't compare %s and %s" % + (self.__class__.__name__, otype)) + return s, o + + def __lt__(self, other): + s, o = self.make_comparable(other) + return s < o + + def __le__(self, other): + s, o = self.make_comparable(other) + return s <= o + + def __gt__(self, other): + s, o = self.make_comparable(other) + return s > o + + def __ge__(self, other): + s, o = self.make_comparable(other) + return s >= o + + def __eq__(self, other): + s, o = self.make_comparable(other) + return s == o + + def __ne__(self, other): + s, o = self.make_comparable(other) + return s != o + + def timetuple(self): + return time.strptime(self.value, "%Y%m%dT%H:%M:%S") + + def __cmp__(self, other): + s, o = self.make_comparable(other) + return cmp(s, o) ## # Get date/time value. 
@@ -736,19 +776,6 @@ write("\n") dispatch[datetime.datetime] = dump_datetime - def dump_date(self, value, write): - write("") - write(value.strftime("%Y%m%dT00:00:00")) - write("\n") - dispatch[datetime.date] = dump_date - - def dump_time(self, value, write): - write("") - write(datetime.datetime.now().date().strftime("%Y%m%dT")) - write(value.strftime("%H:%M:%S")) - write("\n") - dispatch[datetime.time] = dump_time - def dump_instance(self, value, write): # check for special wrappers if value.__class__ in WRAPPERS: Modified: python/branches/libffi3-branch/Mac/Demo/PICTbrowse/ICONbrowse.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/PICTbrowse/ICONbrowse.py (original) +++ python/branches/libffi3-branch/Mac/Demo/PICTbrowse/ICONbrowse.py Tue Mar 4 15:50:53 2008 @@ -7,8 +7,6 @@ from Carbon import Win from Carbon import Controls from Carbon import List -import sys -import struct from Carbon import Icn import macresource Modified: python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse.py (original) +++ python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,6 @@ from Carbon import Win from Carbon import Controls from Carbon import List -import sys import struct import macresource Modified: python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse2.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse2.py (original) +++ python/branches/libffi3-branch/Mac/Demo/PICTbrowse/PICTbrowse2.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,6 @@ from Carbon import Win from Carbon import Controls from Carbon import List -import sys import struct import macresource Modified: python/branches/libffi3-branch/Mac/Demo/PICTbrowse/cicnbrowse.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/PICTbrowse/cicnbrowse.py (original) +++ python/branches/libffi3-branch/Mac/Demo/PICTbrowse/cicnbrowse.py Tue Mar 4 15:50:53 2008 @@ -7,8 +7,6 @@ from Carbon import Win from Carbon import Controls from Carbon import List -import sys -import struct from Carbon import Icn import macresource Modified: python/branches/libffi3-branch/Mac/Demo/PICTbrowse/oldPICTbrowse.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/PICTbrowse/oldPICTbrowse.py (original) +++ python/branches/libffi3-branch/Mac/Demo/PICTbrowse/oldPICTbrowse.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,6 @@ from Carbon import Qd from Carbon import Win from Carbon import List -import sys import struct import macresource Modified: python/branches/libffi3-branch/Mac/Demo/example1/dnslookup-1.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/example1/dnslookup-1.py (original) +++ python/branches/libffi3-branch/Mac/Demo/example1/dnslookup-1.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,6 @@ import EasyDialogs from Carbon import Res from Carbon import Dlg -import sys import socket import string import macresource Modified: python/branches/libffi3-branch/Mac/Demo/example2/dnslookup-2.py ============================================================================== --- 
python/branches/libffi3-branch/Mac/Demo/example2/dnslookup-2.py (original) +++ python/branches/libffi3-branch/Mac/Demo/example2/dnslookup-2.py Tue Mar 4 15:50:53 2008 @@ -2,7 +2,6 @@ import EasyDialogs from Carbon import Res from Carbon import Dlg -import sys import socket import string import macresource Modified: python/branches/libffi3-branch/Mac/Demo/imgbrowse/imgbrowse.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/imgbrowse/imgbrowse.py (original) +++ python/branches/libffi3-branch/Mac/Demo/imgbrowse/imgbrowse.py Tue Mar 4 15:50:53 2008 @@ -7,11 +7,9 @@ from Carbon import QuickDraw from Carbon import Win #ifrom Carbon mport List -import sys import struct import img import imgformat -import struct import mac_image Modified: python/branches/libffi3-branch/Mac/Demo/imgbrowse/mac_image.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/imgbrowse/mac_image.py (original) +++ python/branches/libffi3-branch/Mac/Demo/imgbrowse/mac_image.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ """mac_image - Helper routines (hacks) for images""" import imgformat from Carbon import Qd -import time import struct import MacOS Modified: python/branches/libffi3-branch/Mac/Demo/sound/morse.py ============================================================================== --- python/branches/libffi3-branch/Mac/Demo/sound/morse.py (original) +++ python/branches/libffi3-branch/Mac/Demo/sound/morse.py Tue Mar 4 15:50:53 2008 @@ -1,4 +1,4 @@ -import sys, math, audiodev +import sys, math DOT = 30 DAH = 80 Modified: python/branches/libffi3-branch/Mac/Modules/ae/aescan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/ae/aescan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/ae/aescan.py Tue Mar 4 15:50:53 2008 @@ -3,8 +3,6 @@ # (Should learn how to tell the compiler to compile it as well.) import sys -import os -import string import MacOS from bgenlocations import TOOLBOXDIR, BGENDIR Modified: python/branches/libffi3-branch/Mac/Modules/ah/ahscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/ah/ahscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/ah/ahscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner_OSX Modified: python/branches/libffi3-branch/Mac/Modules/app/appscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/app/appscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/app/appscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. 
import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/carbonevt/CarbonEvtscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/carbonevt/CarbonEvtscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/carbonevt/CarbonEvtscan.py Tue Mar 4 15:50:53 2008 @@ -1,8 +1,6 @@ # IBCarbonscan.py import sys -import os -import string import MacOS import sys Modified: python/branches/libffi3-branch/Mac/Modules/cf/cfscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/cf/cfscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/cf/cfscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner_OSX Modified: python/branches/libffi3-branch/Mac/Modules/cg/cgscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/cg/cgscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/cg/cgscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner_OSX Modified: python/branches/libffi3-branch/Mac/Modules/cm/cmscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/cm/cmscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/cm/cmscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/ctl/ctlscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/ctl/ctlscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/ctl/ctlscan.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # Scan , generating ctlgen.py. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/dlg/dlgscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/dlg/dlgscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/dlg/dlgscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/drag/dragscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/drag/dragscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/drag/dragscan.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # Scan , generating draggen.py. 
import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR, INCLUDEDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/evt/evtscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/evt/evtscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/evt/evtscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/file/filescan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/file/filescan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/file/filescan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner_OSX Modified: python/branches/libffi3-branch/Mac/Modules/fm/fmscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/fm/fmscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/fm/fmscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/folder/folderscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/folder/folderscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/folder/folderscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner_OSX Modified: python/branches/libffi3-branch/Mac/Modules/help/helpscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/help/helpscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/help/helpscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. 
import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/ibcarbon/IBCarbonscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/ibcarbon/IBCarbonscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/ibcarbon/IBCarbonscan.py Tue Mar 4 15:50:53 2008 @@ -1,8 +1,6 @@ # IBCarbonscan.py import sys -import os -import string from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/icn/icnscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/icn/icnscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/icn/icnscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/launch/launchscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/launch/launchscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/launch/launchscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/list/listscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/list/listscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/list/listscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/menu/menuscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/menu/menuscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/menu/menuscan.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # Scan , generating menugen.py. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/mlte/mltescan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/mlte/mltescan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/mlte/mltescan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. 
import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner_OSX Modified: python/branches/libffi3-branch/Mac/Modules/osa/osascan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/osa/osascan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/osa/osascan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/qd/qdscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/qd/qdscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/qd/qdscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/qdoffs/qdoffsscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/qdoffs/qdoffsscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/qdoffs/qdoffsscan.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/qt/qtscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/qt/qtscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/qt/qtscan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/res/resscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/res/resscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/res/resscan.py Tue Mar 4 15:50:53 2008 @@ -3,8 +3,6 @@ # (Should learn how to tell the compiler to compile it as well.) import sys -import os -import string import MacOS from bgenlocations import TOOLBOXDIR, BGENDIR Modified: python/branches/libffi3-branch/Mac/Modules/scrap/scrapscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/scrap/scrapscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/scrap/scrapscan.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,6 @@ # generates a boilerplate to be edited by hand. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/snd/sndscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/snd/sndscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/snd/sndscan.py Tue Mar 4 15:50:53 2008 @@ -3,7 +3,6 @@ # (Should learn how to tell the compiler to compile it as well.) 
import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Mac/Modules/te/tescan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/te/tescan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/te/tescan.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) from scantools import Scanner Modified: python/branches/libffi3-branch/Mac/Modules/win/winscan.py ============================================================================== --- python/branches/libffi3-branch/Mac/Modules/win/winscan.py (original) +++ python/branches/libffi3-branch/Mac/Modules/win/winscan.py Tue Mar 4 15:50:53 2008 @@ -1,6 +1,5 @@ # Scan an Apple header file, generating a Python file of generator calls. import sys -import os from bgenlocations import TOOLBOXDIR, BGENDIR sys.path.append(BGENDIR) Modified: python/branches/libffi3-branch/Makefile.pre.in ============================================================================== --- python/branches/libffi3-branch/Makefile.pre.in (original) +++ python/branches/libffi3-branch/Makefile.pre.in Tue Mar 4 15:50:53 2008 @@ -517,27 +517,27 @@ Objects/unicodectype.o: $(srcdir)/Objects/unicodectype.c \ $(srcdir)/Objects/unicodetype_db.h +STRINGLIB_HEADERS= \ + $(srcdir)/Objects/stringlib/count.h \ + $(srcdir)/Objects/stringlib/fastsearch.h \ + $(srcdir)/Objects/stringlib/find.h \ + $(srcdir)/Objects/stringlib/formatter.h \ + $(srcdir)/Objects/stringlib/partition.h \ + $(srcdir)/Objects/stringlib/stringdefs.h \ + $(srcdir)/Objects/stringlib/string_format.h \ + $(srcdir)/Objects/stringlib/unicodedefs.h + Objects/unicodeobject.o: $(srcdir)/Objects/unicodeobject.c \ - $(srcdir)/Objects/stringlib/string_format.h \ - $(srcdir)/Objects/stringlib/unicodedefs.h \ - $(srcdir)/Objects/stringlib/fastsearch.h \ - $(srcdir)/Objects/stringlib/count.h \ - $(srcdir)/Objects/stringlib/find.h \ - $(srcdir)/Objects/stringlib/partition.h + $(STRINGLIB_HEADERS) Objects/stringobject.o: $(srcdir)/Objects/stringobject.c \ - $(srcdir)/Objects/stringlib/string_format.h \ - $(srcdir)/Objects/stringlib/stringdefs.h \ - $(srcdir)/Objects/stringlib/fastsearch.h \ - $(srcdir)/Objects/stringlib/count.h \ - $(srcdir)/Objects/stringlib/find.h \ - $(srcdir)/Objects/stringlib/partition.h + $(STRINGLIB_HEADERS) Python/formatter_unicode.o: $(srcdir)/Python/formatter_unicode.c \ - $(srcdir)/Objects/stringlib/formatter.h + $(STRINGLIB_HEADERS) Python/formatter_string.o: $(srcdir)/Python/formatter_string.c \ - $(srcdir)/Objects/stringlib/formatter.h + $(STRINGLIB_HEADERS) ############################################################################ # Header files Modified: python/branches/libffi3-branch/Misc/ACKS ============================================================================== --- python/branches/libffi3-branch/Misc/ACKS (original) +++ python/branches/libffi3-branch/Misc/ACKS Tue Mar 4 15:50:53 2008 @@ -274,6 +274,7 @@ Rycharde Hawkes Jochen Hayek Thomas Heller +Malte Helmert Lance Finn Helsten Jonathan Hendry James Henstridge @@ -523,6 +524,7 @@ Fran?ois Pinard Zach Pincus Michael Piotrowski +Antoine Pitrou Michael Pomraning Iustin Pop John Popplewell Modified: python/branches/libffi3-branch/Misc/BeOS-setup.py ============================================================================== --- 
python/branches/libffi3-branch/Misc/BeOS-setup.py (original) +++ python/branches/libffi3-branch/Misc/BeOS-setup.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ __version__ = "special BeOS after 1.37" -import sys, os, getopt +import sys, os from distutils import sysconfig from distutils import text_file from distutils.errors import * Modified: python/branches/libffi3-branch/Misc/HISTORY ============================================================================== --- python/branches/libffi3-branch/Misc/HISTORY (original) +++ python/branches/libffi3-branch/Misc/HISTORY Tue Mar 4 15:50:53 2008 @@ -3,11 +3,2150 @@ This file contains the release messages for previous Python releases. As you read on you go back to the dark ages of Python's history. +(Note: news about 2.5c2 and later 2.5 releases is in the Misc/NEWS +file of the release25-maint branch.) ====================================================================== +What's New in Python 2.5 release candidate 1? +============================================= + +*Release date: 17-AUG-2006* + +Core and builtins +----------------- + +- Unicode objects will no longer raise an exception when being + compared equal or unequal to a string and a UnicodeDecodeError + exception occurs, e.g. as result of a decoding failure. + + Instead, the equal (==) and unequal (!=) comparison operators will + now issue a UnicodeWarning and interpret the two objects as + unequal. The UnicodeWarning can be filtered as desired using + the warning framework, e.g. silenced completely, turned into an + exception, logged, etc. + + Note that compare operators other than equal and unequal will still + raise UnicodeDecodeError exceptions as they've always done. + +- Fix segfault when doing string formatting on subclasses of long. + +- Fix bug related to __len__ functions using values > 2**32 on 64-bit machines + with new-style classes. + +- Fix bug related to __len__ functions returning negative values with + classic classes. + +- Patch #1538606, Fix __index__() clipping. There were some problems + discovered with the API and how integers that didn't fit into Py_ssize_t + were handled. This patch attempts to provide enough alternatives + to effectively use __index__. + +- Bug #1536021: __hash__ may now return long int; the final hash + value is obtained by invoking hash on the long int. + +- Bug #1536786: buffer comparison could emit a RuntimeWarning. + +- Bug #1535165: fixed a segfault in input() and raw_input() when + sys.stdin is closed. + +- On Windows, the PyErr_Warn function is now exported from + the Python dll again. + +- Bug #1191458: tracing over for loops now produces a line event + on each iteration. Fixing this problem required changing the .pyc + magic number. This means that .pyc files generated before 2.5c1 + will be regenerated. + +- Bug #1333982: string/number constants were inappropriately stored + in the byte code and co_consts even if they were not used, ie + immediately popped off the stack. + +- Fixed a reference-counting problem in property(). + + +Library +------- + +- Fix a bug in the ``compiler`` package that caused invalid code to be + generated for generator expressions. + +- The distutils version has been changed to 2.5.0. The change to + keep it programmatically in sync with the Python version running + the code (introduced in 2.5b3) has been reverted. It will continue + to be maintained manually as static string literal. 
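A short sketch of the warning-framework filtering suggested by the UnicodeWarning entry above, assuming a Python 2.5+ interpreter whose default codec is ASCII; the byte string used here is illustrative:

    import warnings

    # Escalate the new UnicodeWarning into an exception; pass "ignore" instead
    # of "error" to silence it completely.
    warnings.filterwarnings("error", category=UnicodeWarning)

    try:
        u"caf\xe9" == "caf\xe9"      # the byte string cannot be decoded as ASCII
    except UnicodeWarning:
        print "comparison would otherwise just evaluate to unequal"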
+ +- If the Python part of a ctypes callback function returns None, + and this cannot be converted to the required C type, an exception is + printed with PyErr_WriteUnraisable. Before this change, the C + callback returned arbitrary values to the calling code. + +- The __repr__ method of a NULL ctypes.py_object() no longer raises + an exception. + +- uuid.UUID now has a bytes_le attribute. This returns the UUID in + little-endian byte order for Windows. In addition, uuid.py gained some + workarounds for clocks with low resolution, to stop the code yielding + duplicate UUIDs. + +- Patch #1540892: site.py Quitter() class attempts to close sys.stdin + before raising SystemExit, allowing IDLE to honor quit() and exit(). + +- Bug #1224621: make tabnanny recognize IndentationErrors raised by tokenize. + +- Patch #1536071: trace.py should now find the full module name of a + file correctly even on Windows. + +- logging's atexit hook now runs even if the rest of the module has + already been cleaned up. + +- Bug #1112549, fix DoS attack on cgi.FieldStorage. + +- Bug #1531405, format_exception no longer raises an exception if + str(exception) raised an exception. + +- Fix a bug in the ``compiler`` package that caused invalid code to be + generated for nested functions. + + +Extension Modules +----------------- + +- Patch #1511317: don't crash on invalid hostname (alias) info. + +- Patch #1535500: fix segfault in BZ2File.writelines and make sure it + raises the correct exceptions. + +- Patch # 1536908: enable building ctypes on OpenBSD/AMD64. The + '-no-stack-protector' compiler flag for OpenBSD has been removed. + +- Patch #1532975 was applied, which fixes Bug #1533481: ctypes now + uses the _as_parameter_ attribute when objects are passed to foreign + function calls. The ctypes version number was changed to 1.0.1. + +- Bug #1530559, struct.pack raises TypeError where it used to convert. + Passing float arguments to struct.pack when integers are expected + now triggers a DeprecationWarning. + + +Tests +----- + +- test_socketserver should now work on cygwin and not fail sporadically + on other platforms. + +- test_mailbox should now work on cygwin versions 2006-08-10 and later. + +- Bug #1535182: really test the xreadlines() method of bz2 objects. + +- test_threading now skips testing alternate thread stack sizes on + platforms that don't support changing thread stack size. + + +Documentation +------------- + +- Patch #1534922: unittest docs were corrected and enhanced. + + +Build +----- + +- Bug #1535502, build _hashlib on Windows, and use masm assembler + code in OpenSSL. + +- Bug #1534738, win32 debug version of _msi should be _msi_d.pyd. + +- Bug #1530448, ctypes build failure on Solaris 10 was fixed. + + +C API +----- + +- New API for Unicode rich comparisons: PyUnicode_RichCompare() + +- Bug #1069160. Internal correctness changes were made to + ``PyThreadState_SetAsyncExc()``. A test case was added, and + the documentation was changed to state that the return value + is always 1 (normal) or 0 (if the specified thread wasn't found). + + +What's New in Python 2.5 beta 3? +================================ + +*Release date: 03-AUG-2006* + +Core and builtins +----------------- + +- _PyWeakref_GetWeakrefCount() now returns a Py_ssize_t; it previously + returned a long (see PEP 353). + +- Bug #1515471: string.replace() accepts character buffers again. + +- Add PyErr_WarnEx() so C code can pass the stacklevel to warnings.warn(). + This provides the proper warning for struct.pack(). 
+ PyErr_Warn() is now deprecated in favor of PyErr_WarnEx(). + +- Patch #1531113: Fix augmented assignment with yield expressions. + Also fix a SystemError when trying to assign to yield expressions. + +- Bug #1529871: The speed enhancement patch #921466 broke Python's compliance + with PEP 302. This was fixed by adding an ``imp.NullImporter`` type that is + used in ``sys.path_importer_cache`` to cache non-directory paths and avoid + excessive filesystem operations during imports. + +- Bug #1521947: When checking for overflow, ``PyOS_strtol()`` used some + operations on signed longs that are formally undefined by C. + Unfortunately, at least one compiler now cares about that, so complicated + the code to make that compiler happy again. + +- Bug #1524310: Properly report errors from FindNextFile in os.listdir. + +- Patch #1232023: Stop including current directory in search + path on Windows. + +- Fix some potential crashes found with failmalloc. + +- Fix warnings reported by Klocwork's static analysis tool. + +- Bug #1512814, Fix incorrect lineno's when code within a function + had more than 255 blank lines. + +- Patch #1521179: Python now accepts the standard options ``--help`` and + ``--version`` as well as ``/?`` on Windows. + +- Bug #1520864: unpacking singleton tuples in a 'for' loop (for x, in) works + again. Fixing this problem required changing the .pyc magic number. + This means that .pyc files generated before 2.5b3 will be regenerated. + +- Bug #1524317: Compiling Python ``--without-threads`` failed. + The Python core compiles again, and, in a build without threads, the + new ``sys._current_frames()`` returns a dictionary with one entry, + mapping the faux "thread id" 0 to the current frame. + +- Bug #1525447: build on MacOS X on a case-sensitive filesystem. + + +Library +------- + +- Fix #1693149. Now you can pass several modules separated by + comma to trace.py in the same --ignore-module option. + +- Correction of patch #1455898: In the mbcs decoder, set final=False + for stream decoder, but final=True for the decode function. + +- os.urandom no longer masks unrelated exceptions like SystemExit or + KeyboardInterrupt. + +- Bug #1525866: Don't copy directory stat times in + shutil.copytree on Windows + +- Bug #1002398: The documentation for os.path.sameopenfile now correctly + refers to file descriptors, not file objects. + +- The renaming of the xml package to xmlcore, and the import hackery done + to make it appear at both names, has been removed. Bug #1511497, + #1513611, and probably others. + +- Bug #1441397: The compiler module now recognizes module and function + docstrings correctly as it did in Python 2.4. + +- Bug #1529297: The rewrite of doctest for Python 2.4 unintentionally + lost that tests are sorted by name before being run. This rarely + matters for well-written tests, but can create baffling symptoms if + side effects from one test to the next affect outcomes. ``DocTestFinder`` + has been changed to sort the list of tests it returns. + +- The distutils version has been changed to 2.5.0, and is now kept + in sync with sys.version_info[:3]. + +- Bug #978833: Really close underlying socket in _socketobject.close. + +- Bug #1459963: urllib and urllib2 now normalize HTTP header names with + title(). + +- Patch #1525766: In pkgutil.walk_packages, correctly pass the onerror callback + to recursive calls and call it with the failing package name. + +- Bug #1525817: Don't truncate short lines in IDLE's tool tips. 
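A hedged sketch of the onerror callback behaviour described in the pkgutil.walk_packages entry above; walking the stdlib xml package keeps the example small, and the callback name is arbitrary:

    import pkgutil
    import xml

    def report_failure(pkgname):
        # Called with the name of any package whose import fails during the walk.
        print "skipping unimportable package:", pkgname

    for importer, name, ispkg in pkgutil.walk_packages(xml.__path__, "xml.",
                                                       onerror=report_failure):
        print name, ispkg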
+ +- Patch #1515343: Fix printing of deprecated string exceptions with a + value in the traceback module. + +- Resync optparse with Optik 1.5.3: minor tweaks for/to tests. + +- Patch #1524429: Use repr() instead of backticks in Tkinter again. + +- Bug #1520914: Change time.strftime() to accept a zero for any position in its + argument tuple. For arguments where zero is illegal, the value is forced to + the minimum value that is correct. This is to support an undocumented but + common way people used to fill in inconsequential information in the time + tuple pre-2.4. + +- Patch #1220874: Update the binhex module for Mach-O. + +- The email package has improved RFC 2231 support, specifically for + recognizing the difference between encoded (name*0*=) and non-encoded + (name*0=) parameter continuations. This may change the types of + values returned from email.message.Message.get_param() and friends. + Specifically in some cases where non-encoded continuations were used, + get_param() used to return a 3-tuple of (None, None, string) whereas now it + will just return the string (since non-encoded continuations don't have + charset and language parts). + + Also, whereas % values were decoded in all parameter continuations, they are + now only decoded in encoded parameter parts. + +- Bug #1517990: IDLE keybindings on MacOS X now work correctly + +- Bug #1517996: IDLE now longer shows the default Tk menu when a + path browser, class browser or debugger is the frontmost window on MacOS X + +- Patch #1520294: Support for getset and member descriptors in types.py, + inspect.py, and pydoc.py. Specifically, this allows for querying the type + of an object against these built-in types and more importantly, for getting + their docstrings printed in the interactive interpreter's help() function. + + +Extension Modules +----------------- + +- Patch #1519025 and bug #926423: If a KeyboardInterrupt occurs during + a socket operation on a socket with a timeout, the exception will be + caught correctly. Previously, the exception was not caught. + +- Patch #1529514: The _ctypes extension is now compiled on more + openbsd target platforms. + +- The ``__reduce__()`` method of the new ``collections.defaultdict`` had + a memory leak, affecting pickles and deep copies. + +- Bug #1471938: Fix curses module build problem on Solaris 8; patch by + Paul Eggert. + +- Patch #1448199: Release interpreter lock in _winreg.ConnectRegistry. + +- Patch #1521817: Index range checking on ctypes arrays containing + exactly one element enabled again. This allows iterating over these + arrays, without the need to check the array size before. + +- Bug #1521375: When the code in ctypes.util.find_library was + run with root privileges, it could overwrite or delete + /dev/null in certain cases; this is now fixed. + +- Bug #1467450: On Mac OS X 10.3, RTLD_GLOBAL is now used as the + default mode for loading shared libraries in ctypes. + +- Because of a misspelled preprocessor symbol, ctypes was always + compiled without thread support; this is now fixed. + +- pybsddb Bug #1527939: bsddb module DBEnv dbremove and dbrename + methods now allow their database parameter to be None as the + sleepycat API allows. + +- Bug #1526460: Fix socketmodule compile on NetBSD as it has a different + bluetooth API compared with Linux and FreeBSD. + +Tests +----- + +- Bug #1501330: Change test_ossaudiodev to be much more tolerant in terms of + how long the test file should take to play. 
Now accepts taking 2.93 secs + (exact time) +/- 10% instead of the hard-coded 3.1 sec. + +- Patch #1529686: The standard tests ``test_defaultdict``, ``test_iterlen``, + ``test_uuid`` and ``test_email_codecs`` didn't actually run any tests when + run via ``regrtest.py``. Now they do. + +Build +----- + +- Bug #1439538: Drop usage of test -e in configure as it is not portable. + +Mac +--- + +- PythonLauncher now works correctly when the path to the script contains + characters that are treated specially by the shell (such as quotes). + +- Bug #1527397: PythonLauncher now launches scripts with the working directory + set to the directory that contains the script instead of the user home + directory. That latter was an implementation accident and not what users + expect. + + +What's New in Python 2.5 beta 2? +================================ + +*Release date: 11-JUL-2006* + +Core and builtins +----------------- + +- Bug #1441486: The literal representation of -(sys.maxint - 1) + again evaluates to a int object, not a long. + +- Bug #1501934: The scope of global variables that are locally assigned + using augmented assignment is now correctly determined. + +- Bug #927248: Recursive method-wrapper objects can now safely + be released. + +- Bug #1417699: Reject locale-specific decimal point in float() + and atof(). + +- Bug #1511381: codec_getstreamcodec() in codec.c is corrected to + omit a default "error" argument for NULL pointer. This allows + the parser to take a codec from cjkcodecs again. + +- Bug #1519018: 'as' is now validated properly in import statements. + +- On 64 bit systems, int literals that use less than 64 bits are + now ints rather than longs. + +- Bug #1512814, Fix incorrect lineno's when code at module scope + started after line 256. + +- New function ``sys._current_frames()`` returns a dict mapping thread + id to topmost thread stack frame. This is for expert use, and is + especially useful for debugging application deadlocks. The functionality + was previously available in Fazal Majid's ``threadframe`` extension + module, but it wasn't possible to do this in a wholly threadsafe way from + an extension. + +Library +------- + +- Bug #1257728: Mention Cygwin in distutils error message about a missing + VS 2003. + +- Patch #1519566: Update turtle demo, make begin_fill idempotent. + +- Bug #1508010: msvccompiler now requires the DISTUTILS_USE_SDK + environment variable to be set in order to the SDK environment + for finding the compiler, include files, etc. + +- Bug #1515998: Properly generate logical ids for files in bdist_msi. + +- warnings.py now ignores ImportWarning by default + +- string.Template() now correctly handles tuple-values. Previously, + multi-value tuples would raise an exception and single-value tuples would + be treated as the value they contain, instead. + +- Bug #822974: Honor timeout in telnetlib.{expect,read_until} + even if some data are received. + +- Bug #1267547: Put proper recursive setup.py call into the + spec file generated by bdist_rpm. + +- Bug #1514693: Update turtle's heading when switching between + degrees and radians. + +- Reimplement turtle.circle using a polyline, to allow correct + filling of arcs. + +- Bug #1514703: Only setup canvas window in turtle when the canvas + is created. + +- Bug #1513223: .close() of a _socketobj now releases the underlying + socket again, which then gets closed as it becomes unreferenced. + +- Bug #1504333: Make sgmllib support angle brackets in quoted + attribute values. 
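A small sketch of the deadlock-debugging use suggested by the sys._current_frames() entry above (Python 2.5+); dump_all_stacks is an illustrative helper, not a stdlib function:

    import sys
    import threading
    import traceback

    def dump_all_stacks():
        # sys._current_frames() maps thread id -> topmost frame for every
        # thread currently running in this interpreter.
        for thread_id, frame in sys._current_frames().items():
            print "--- thread %s ---" % thread_id
            traceback.print_stack(frame)

    worker = threading.Thread(target=lambda: threading.Event().wait(1.0))
    worker.start()
    dump_all_stacks()    # prints a stack for the main thread and the worker
    worker.join()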
+ +- Bug #853506: Fix IPv6 address parsing in unquoted attributes in + sgmllib ('[' and ']' were not accepted). + +- Fix a bug in the turtle module's end_fill function. + +- Bug #1510580: The 'warnings' module improperly required that a Warning + category be either a types.ClassType and a subclass of Warning. The proper + check is just that it is a subclass with Warning as the documentation states. + +- The compiler module now correctly compiles the new try-except-finally + statement (bug #1509132). + +- The wsgiref package is now installed properly on Unix. + +- A bug was fixed in logging.config.fileConfig() which caused a crash on + shutdown when fileConfig() was called multiple times. + +- The sqlite3 module did cut off data from the SQLite database at the first + null character before sending it to a custom converter. This has been fixed + now. + +Extension Modules +----------------- + +- #1494314: Fix a regression with high-numbered sockets in 2.4.3. This + means that select() on sockets > FD_SETSIZE (typically 1024) work again. + The patch makes sockets use poll() internally where available. + +- Assigning None to pointer type fields in ctypes structures possible + overwrote the wrong fields, this is fixed now. + +- Fixed a segfault in _ctypes when ctypes.wintypes were imported + on non-Windows platforms. + +- Bug #1518190: The ctypes.c_void_p constructor now accepts any + integer or long, without range checking. + +- Patch #1517790: It is now possible to use custom objects in the ctypes + foreign function argtypes sequence as long as they provide a from_param + method, no longer is it required that the object is a ctypes type. + +- The '_ctypes' extension module now works when Python is configured + with the --without-threads option. + +- Bug #1513646: os.access on Windows now correctly determines write + access, again. + +- Bug #1512695: cPickle.loads could crash if it was interrupted with + a KeyboardInterrupt. + +- Bug #1296433: parsing XML with a non-default encoding and + a CharacterDataHandler could crash the interpreter in pyexpat. + +- Patch #1516912: improve Modules support for OpenVMS. + +Build +----- + +- Automate Windows build process for the Win64 SSL module. + +- 'configure' now detects the zlib library the same way as distutils. + Previously, the slight difference could cause compilation errors of the + 'zlib' module on systems with more than one version of zlib. + +- The MSI compileall step was fixed to also support a TARGETDIR + with spaces in it. + +- Bug #1517388: sqlite3.dll is now installed on Windows independent + of Tcl/Tk. + +- Bug #1513032: 'make install' failed on FreeBSD 5.3 due to lib-old + trying to be installed even though it's empty. + +Tests +----- + +- Call os.waitpid() at the end of tests that spawn child processes in order + to minimize resources (zombies). + +Documentation +------------- + +- Cover ImportWarning, PendingDeprecationWarning and simplefilter() in the + documentation for the warnings module. + +- Patch #1509163: MS Toolkit Compiler no longer available. + +- Patch #1504046: Add documentation for xml.etree. + + +What's New in Python 2.5 beta 1? +================================ + +*Release date: 20-JUN-2006* + +Core and builtins +----------------- + +- Patch #1507676: Error messages returned by invalid abstract object operations + (such as iterating over an integer) have been improved and now include the + type of the offending object to help with debugging. 
+ +- Bug #992017: A classic class that defined a __coerce__() method that returned + its arguments swapped would infinitely recurse and segfault the interpreter. + +- Fix the socket tests so they can be run concurrently. + +- Removed 5 integers from C frame objects (PyFrameObject). + f_nlocals, f_ncells, f_nfreevars, f_stack_size, f_restricted. + +- Bug #532646: object.__call__() will continue looking for the __call__ + attribute on objects until one without one is found. This leads to recursion + when you take a class and set its __call__ attribute to an instance of the + class. Originally fixed for classic classes, but this fix is for new-style. + Removes the infinite_rec_3 crasher. + +- The string and unicode methods startswith() and endswith() now accept + a tuple of prefixes/suffixes to look for. Implements RFE #1491485. + +- Buffer objects, at the C level, never used the char buffer + implementation even when the char buffer for the wrapped object was + explicitly requested (originally returned the read or write buffer). + Now a TypeError is raised if the char buffer is not present but is + requested. + +- Patch #1346214: Statements like "if 0: suite" are now again optimized + away like they were in Python 2.4. + +- Builtin exceptions are now full-blown new-style classes instead of + instances pretending to be classes, which speeds up exception handling + by about 80% in comparison to 2.5a2. + +- Patch #1494554: Update unicodedata.numeric and unicode.isnumeric to + Unicode 4.1. + +- Patch #921466: sys.path_importer_cache is now used to cache valid and + invalid file paths for the built-in import machinery which leads to + fewer open calls on startup. + +- Patch #1442927: ``long(str, base)`` is now up to 6x faster for non-power- + of-2 bases. The largest speedup is for inputs with about 1000 decimal + digits. Conversion from non-power-of-2 bases remains quadratic-time in + the number of input digits (it was and remains linear-time for bases + 2, 4, 8, 16 and 32). + +- Bug #1334662: ``int(string, base)`` could deliver a wrong answer + when ``base`` was not 2, 4, 8, 10, 16 or 32, and ``string`` represented + an integer close to ``sys.maxint``. This was repaired by patch + #1335972, which also gives a nice speedup. + +- Patch #1337051: reduced size of frame objects. + +- PyErr_NewException now accepts a tuple of base classes as its + "base" parameter. + +- Patch #876206: function call speedup by retaining allocated frame + objects. + +- Bug #1462152: file() now checks more thoroughly for invalid mode + strings and removes a possible "U" before passing the mode to the + C library function. + +- Patch #1488312, Fix memory alignment problem on SPARC in unicode + +- Bug #1487966: Fix SystemError with conditional expression in assignment + +- WindowsError now has two error code attributes: errno, which carries + the error values from errno.h, and winerror, which carries the error + values from winerror.h. Previous versions put the winerror.h values + (from GetLastError()) into the errno attribute. + +- Patch #1475845: Raise IndentationError for unexpected indent. + +- Patch #1479181: split open() and file() from being aliases for each other. + +- Patch #1497053 & bug #1275608: Exceptions occurring in ``__eq__()`` + methods were always silently ignored by dictionaries when comparing keys. + They are now passed through (except when using the C API function + ``PyDict_GetItem()``, whose semantics did not change). 
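A tiny example of the tuple form of startswith()/endswith() described in the RFE #1491485 entry above (Python 2.5+); the file name is made up:

    filename = "photo_001.jpeg"
    print filename.endswith((".jpg", ".jpeg", ".png"))    # True
    print filename.startswith(("img_", "photo_"))         # True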
+ +- Bug #1456209: In some obscure cases it was possible for a class with a + custom ``__eq__()`` method to confuse dict internals when class instances + were used as a dict's keys and the ``__eq__()`` method mutated the dict. + No, you don't have any code that did this ;-) + +Extension Modules +----------------- + +- Bug #1295808: expat symbols should be namespaced in pyexpat + +- Patch #1462338: Upgrade pyexpat to expat 2.0.0 + +- Change binascii.hexlify to accept a read-only buffer instead of only a char + buffer and actually follow its documentation. + +- Fixed a potentially invalid memory access of CJKCodecs' shift-jis decoder. + +- Patch #1478788 (modified version): The functional extension module has + been renamed to _functools and a functools Python wrapper module added. + This provides a home for additional function related utilities that are + not specifically about functional programming. See PEP 309. + +- Patch #1493701: performance enhancements for struct module. + +- Patch #1490224: time.altzone is now set correctly on Cygwin. + +- Patch #1435422: zlib's compress and decompress objects now have a + copy() method. + +- Patch #1454481: thread stack size is now tunable at runtime for thread + enabled builds on Windows and systems with Posix threads support. + +- On Win32, os.listdir now supports arbitrarily-long Unicode path names + (up to the system limit of 32K characters). + +- Use Win32 API to implement os.{access,chdir,chmod,mkdir,remove,rename,rmdir,utime}. + As a result, these functions now raise WindowsError instead of OSError. + +- ``time.clock()`` on Win64 should use the high-performance Windows + ``QueryPerformanceCounter()`` now (as was already the case on 32-bit + Windows platforms). + +- Calling Tk_Init twice is refused if the first call failed as that + may deadlock. + +- bsddb: added the DB_ARCH_REMOVE flag and fixed db.DBEnv.log_archive() to + accept it without potentially using an uninitialized pointer. + +- bsddb: added support for the DBEnv.log_stat() and DBEnv.lsn_reset() methods + assuming BerkeleyDB >= 4.0 and 4.4 respectively. [pybsddb project SF + patch numbers 1494885 and 1494902] + +- bsddb: added an interface for the BerkeleyDB >= 4.3 DBSequence class. + [pybsddb project SF patch number 1466734] + +- bsddb: fix DBCursor.pget() bug with keyword argument names when no data + parameter is supplied. [SF pybsddb bug #1477863] + +- bsddb: the __len__ method of a DB object has been fixed to return correct + results. It could previously incorrectly return 0 in some cases. + Fixes SF bug 1493322 (pybsddb bug 1184012). + +- bsddb: the bsddb.dbtables Modify method now raises the proper error and + aborts the db transaction safely when a modifier callback fails. + Fixes SF python patch/bug #1408584. + +- bsddb: multithreaded DB access using the simple bsddb module interface + now works reliably. It has been updated to use automatic BerkeleyDB + deadlock detection and the bsddb.dbutils.DeadlockWrap wrapper to retry + database calls that would previously deadlock. [SF python bug #775414] + +- Patch #1446489: add support for the ZIP64 extensions to zipfile. + +- Patch #1506645: add Python wrappers for the curses functions + is_term_resized, resize_term and resizeterm. + +Library +------- + +- Patch #815924: Restore ability to pass type= and icon= in tkMessageBox + functions. + +- Patch #812986: Update turtle output even if not tracing. + +- Patch #1494750: Destroy master after deleting children in + Tkinter.BaseWidget. 
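A brief sketch of the copy() method on zlib compression objects noted above (patch #1435422), which lets one compressed prefix be shared by two diverging streams; the payload strings are illustrative:

    import zlib

    c = zlib.compressobj()
    prefix = c.compress("common header ")
    fork = c.copy()                       # snapshot of the compressor state

    stream_a = prefix + c.compress("first tail") + c.flush()
    stream_b = prefix + fork.compress("second tail") + fork.flush()

    print zlib.decompress(stream_a)       # common header first tail
    print zlib.decompress(stream_b)       # common header second tail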
+ +- Patch #1096231: Add ``default`` argument to Tkinter.Wm.wm_iconbitmap. + +- Patch #763580: Add name and value arguments to Tkinter variable + classes. + +- Bug #1117556: SimpleHTTPServer now tries to find and use the system's + mime.types file for determining MIME types. + +- Bug #1339007: Shelf objects now don't raise an exception in their + __del__ method when initialization failed. + +- Patch #1455898: The MBCS codec now supports the incremental mode for + double-byte encodings. + +- ``difflib``'s ``SequenceMatcher.get_matching_blocks()`` was changed to + guarantee that adjacent triples in the return list always describe + non-adjacent blocks. Previously, a pair of matching blocks could end + up being described by multiple adjacent triples that formed a partition + of the matching pair. + +- Bug #1498146: fix optparse to handle Unicode strings in option help, + description, and epilog. + +- Bug #1366250: minor optparse documentation error. + +- Bug #1361643: fix textwrap.dedent() so it handles tabs appropriately; + clarify docs. + +- The wsgiref package has been added to the standard library. + +- The functions update_wrapper() and wraps() have been added to the functools + module. These make it easier to copy relevant metadata from the original + function when writing wrapper functions. + +- The optional ``isprivate`` argument to ``doctest.testmod()``, and the + ``doctest.is_private()`` function, both deprecated in 2.4, were removed. + +- Patch #1359618: Speed up charmap encoder by using a trie structure + for lookup. + +- The functions in the ``pprint`` module now sort dictionaries by key + before computing the display. Before 2.5, ``pprint`` sorted a dictionary + if and only if its display required more than one line, although that + wasn't documented. The new behavior increases predictability; e.g., + using ``pprint.pprint(a_dict)`` in a doctest is now reliable. + +- Patch #1497027: try HTTP digest auth before basic auth in urllib2 + (thanks for J. J. Lee). + +- Patch #1496206: improve urllib2 handling of passwords with respect to + default HTTP and HTTPS ports. + +- Patch #1080727: add "encoding" parameter to doctest.DocFileSuite. + +- Patch #1281707: speed up gzip.readline. + +- Patch #1180296: Two new functions were added to the locale module: + format_string() to get the effect of "format % items" but locale-aware, + and currency() to format a monetary number with currency sign. + +- Patch #1486962: Several bugs in the turtle Tk demo module were fixed + and several features added, such as speed and geometry control. + +- Patch #1488881: add support for external file objects in bz2 compressed + tarfiles. + +- Patch #721464: pdb.Pdb instances can now be given explicit stdin and + stdout arguments, making it possible to redirect input and output + for remote debugging. + +- Patch #1484695: Update the tarfile module to version 0.8. This fixes + a couple of issues, notably handling of long file names using the + GNU LONGNAME extension. + +- Patch #1478292. ``doctest.register_optionflag(name)`` shouldn't create a + new flag when ``name`` is already the name of an option flag. + +- Bug #1385040: don't allow "def foo(a=1, b): pass" in the compiler + package. + +- Patch #1472854: make the rlcompleter.Completer class usable on non- + UNIX platforms. + +- Patch #1470846: fix urllib2 ProxyBasicAuthHandler. + +- Bug #1472827: correctly escape newlines and tabs in attribute values in + the saxutils.XMLGenerator class. 
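A minimal sketch of the functools.update_wrapper()/wraps() helpers mentioned above (Python 2.5+); the traced decorator and greet function are made-up examples:

    import functools

    def traced(func):
        @functools.wraps(func)            # copy func's __name__ and __doc__
        def wrapper(*args, **kwargs):
            print "calling", func.__name__
            return func(*args, **kwargs)
        return wrapper

    @traced
    def greet(name):
        "Return a greeting."
        return "Hello, %s" % name

    print greet("world")      # prints "calling greet", then "Hello, world"
    print greet.__name__      # "greet", not "wrapper"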
+ + +Build +----- + +- Bug #1502728: Correctly link against librt library on HP-UX. + +- OpenBSD 3.9 is supported now. + +- Patch #1492356: Port to Windows CE. + +- Bug/Patch #1481770: Use .so extension for shared libraries on HP-UX for ia64. + +- Patch #1471883: Add --enable-universalsdk. + +C API +----- + +Tests +----- + +Tools +----- + +Documentation +------------- + + + +What's New in Python 2.5 alpha 2? +================================= + +*Release date: 27-APR-2006* + +Core and builtins +----------------- + +- Bug #1465834: 'bdist_wininst preinstall script support' was fixed + by converting these apis from macros into exported functions again: + + PyParser_SimpleParseFile PyParser_SimpleParseString PyRun_AnyFile + PyRun_AnyFileEx PyRun_AnyFileFlags PyRun_File PyRun_FileEx + PyRun_FileFlags PyRun_InteractiveLoop PyRun_InteractiveOne + PyRun_SimpleFile PyRun_SimpleFileEx PyRun_SimpleString + PyRun_String Py_CompileString + +- Under COUNT_ALLOCS, types are not necessarily immortal anymore. + +- All uses of PyStructSequence_InitType have been changed to initialize + the type objects only once, even if the interpreter is initialized + multiple times. + +- Bug #1454485, array.array('u') could crash the interpreter. This was + due to PyArgs_ParseTuple(args, 'u#', ...) trying to convert buffers (strings) + to unicode when it didn't make sense. 'u#' now requires a unicode string. + +- Py_UNICODE is unsigned. It was always documented as unsigned, but + due to a bug had a signed value in previous versions. + +- Patch #837242: ``id()`` of any Python object always gives a positive + number now, which might be a long integer. ``PyLong_FromVoidPtr`` and + ``PyLong_AsVoidPtr`` have been changed accordingly. Note that it has + never been correct to implement a ``__hash()__`` method that returns the + ``id()`` of an object: + + def __hash__(self): + return id(self) # WRONG + + because a hash result must be a (short) Python int but it was always + possible for ``id()`` to return a Python long. However, because ``id()`` + could return negative values before, on a 32-bit box an ``id()`` result + was always usable as a hash value before this patch. That's no longer + necessarily so. + +- Python on OS X 10.3 and above now uses dlopen() (via dynload_shlib.c) + to load extension modules and now provides the dl module. As a result, + sys.setdlopenflags() now works correctly on these systems. (SF patch + #1454844) + +- Patch #1463867: enhanced garbage collection to allow cleanup of cycles + involving generators that have paused outside of any ``try`` or ``with`` + blocks. (In 2.5a1, a paused generator that was part of a reference + cycle could not be garbage collected, regardless of whether it was + paused in a ``try`` or ``with`` block.) + +Extension Modules +----------------- + +- Patch #1191065: Fix preprocessor problems on systems where recvfrom + is a macro. + +- Bug #1467952: os.listdir() now correctly raises an error if readdir() + fails with an error condition. + +- Fixed bsddb.db.DBError derived exceptions so they can be unpickled. + +- Bug #1117761: bsddb.*open() no longer raises an exception when using + the cachesize parameter. + +- Bug #1149413: bsddb.*open() no longer raises an exception when using + a temporary db (file=None) with the 'n' flag to truncate on open. + +- Bug #1332852: bsddb module minimum BerkeleyDB version raised to 3.3 + as older versions cause excessive test failures. 
+ +- Patch #1062014: AF_UNIX sockets under Linux have a special + abstract namespace that is now fully supported. + +Library +------- + +- Bug #1223937: subprocess.CalledProcessError reports the exit status + of the process using the returncode attribute, instead of + abusing errno. + +- Patch #1475231: ``doctest`` has a new ``SKIP`` option, which causes + a doctest to be skipped (the code is not run, and the expected output + or exception is ignored). + +- Fixed contextlib.nested to cope with exceptions being raised and + caught inside exit handlers. + +- Updated optparse module to Optik 1.5.1 (allow numeric constants in + hex, octal, or binary; add ``append_const`` action; keep going if + gettext cannot be imported; added ``OptionParser.destroy()`` method; + added ``epilog`` for better help generation). + +- Bug #1473760: ``tempfile.TemporaryFile()`` could hang on Windows, when + called from a thread spawned as a side effect of importing a module. + +- The pydoc module now supports documenting packages contained in + .zip or .egg files. + +- The pkgutil module now has several new utility functions, such + as ``walk_packages()`` to support working with packages that are either + in the filesystem or zip files. + +- The mailbox module can now modify and delete messages from + mailboxes, in addition to simply reading them. Thanks to Gregory + K. Johnson for writing the code, and to the 2005 Google Summer of + Code for funding his work. + +- The ``__del__`` method of class ``local`` in module ``_threading_local`` + returned before accomplishing any of its intended cleanup. + +- Patch #790710: Add breakpoint command lists in pdb. + +- Patch #1063914: Add Tkinter.Misc.clipboard_get(). + +- Patch #1191700: Adjust column alignment in bdb breakpoint lists. + +- SimpleXMLRPCServer relied on the fcntl module, which is unavailable on + Windows. Bug #1469163. + +- The warnings, linecache, inspect, traceback, site, and doctest modules + were updated to work correctly with modules imported from zipfiles or + via other PEP 302 __loader__ objects. + +- Patch #1467770: Reduce usage of subprocess._active to processes which + the application hasn't waited on. + +- Patch #1462222: Fix Tix.Grid. + +- Fix exception when doing glob.glob('anything*/') + +- The pstats.Stats class accepts an optional stream keyword argument to + direct output to an alternate file-like object. + +Build +----- + +- The Makefile now has a reindent target, which runs reindent.py on + the library. + +- Patch #1470875: Building Python with MS Free Compiler + +- Patch #1161914: Add a python-config script. + +- Patch #1324762:Remove ccpython.cc; replace --with-cxx with + --with-cxx-main. Link with C++ compiler only if --with-cxx-main was + specified. (Can be overridden by explicitly setting LINKCC.) Decouple + CXX from --with-cxx-main, see description in README. + +- Patch #1429775: Link extension modules with the shared libpython. + +- Fixed a libffi build problem on MIPS systems. + +- ``PyString_FromFormat``, ``PyErr_Format``, and ``PyString_FromFormatV`` + now accept formats "%u" for unsigned ints, "%lu" for unsigned longs, + and "%zu" for unsigned integers of type ``size_t``. + +Tests +----- + +- test_contextlib now checks contextlib.nested can cope with exceptions + being raised and caught inside exit handlers. + +- test_cmd_line now checks operation of the -m and -c command switches + +- The test_contextlib test in 2.5a1 wasn't actually run unless you ran + it separately and by hand. 
It also wasn't cleaning up its changes to + the current Decimal context. + +- regrtest.py now has a -M option to run tests that test the new limits of + containers, on 64-bit architectures. Running these tests is only sensible + on 64-bit machines with more than two gigabytes of memory. The argument + passed is the maximum amount of memory for the tests to use. + +Tools +----- + +- Added the Python benchmark suite pybench to the Tools/ directory; + contributed by Marc-Andre Lemburg. + +Documentation +------------- + +- Patch #1473132: Improve docs for ``tp_clear`` and ``tp_traverse``. + +- PEP 343: Added Context Types section to the library reference + and attempted to bring other PEP 343 related documentation into + line with the implementation and/or python-dev discussions. + +- Bug #1337990: clarified that ``doctest`` does not support examples + requiring both expected output and an exception. + + +What's New in Python 2.5 alpha 1? +================================= + +*Release date: 05-APR-2006* + +Core and builtins +----------------- + +- PEP 338: -m command line switch now delegates to runpy.run_module + allowing it to support modules in packages and zipfiles + +- On Windows, .DLL is not an accepted file name extension for + extension modules anymore; extensions are only found if they + end in .PYD. + +- Bug #1421664: sys.stderr.encoding is now set to the same value as + sys.stdout.encoding. + +- __import__ accepts keyword arguments. + +- Patch #1460496: round() now accepts keyword arguments. + +- Fixed bug #1459029 - unicode reprs were double-escaped. + +- Patch #1396919: The system scope threads are reenabled on FreeBSD + 5.4 and later versions. + +- Bug #1115379: Compiling a Unicode string with an encoding declaration + now gives a SyntaxError. + +- Previously, Python code had no easy way to access the contents of a + cell object. Now, a ``cell_contents`` attribute has been added + (closes patch #1170323). + +- Patch #1123430: Python's small-object allocator now returns an arena to + the system ``free()`` when all memory within an arena becomes unused + again. Prior to Python 2.5, arenas (256KB chunks of memory) were never + freed. Some applications will see a drop in virtual memory size now, + especially long-running applications that, from time to time, temporarily + use a large number of small objects. Note that when Python returns an + arena to the platform C's ``free()``, there's no guarantee that the + platform C library will in turn return that memory to the operating system. + The effect of the patch is to stop making that impossible, and in tests it + appears to be effective at least on Microsoft C and gcc-based systems. + Thanks to Evan Jones for hard work and patience. + +- Patch #1434038: property() now uses the getter's docstring if there is + no "doc" argument given. This makes it possible to legitimately use + property() as a decorator to produce a read-only property. + +- PEP 357, patch 1436368: add an __index__ method to int/long and a matching + nb_index slot to the PyNumberMethods struct. The slot is consulted instead + of requiring an int or long in slicing and a few other contexts, enabling + other objects (e.g. Numeric Python's integers) to be used as slice indices. + +- Fixed various bugs reported by Coverity's Prevent tool. + +- PEP 352, patch #1104669: Make exceptions new-style objects. Introduced the + new exception base class, BaseException, which has a new message attribute. + KeyboardInterrupt and SystemExit to directly inherit from BaseException now. 
+ Raising a string exception now raises a DeprecationWarning. + +- Patch #1438387, PEP 328: relative and absolute imports. Imports can now be + explicitly relative, using 'from .module import name' to mean 'from the same + package as this module is in. Imports without dots still default to the + old relative-then-absolute, unless 'from __future__ import + absolute_import' is used. + +- Properly check if 'warnings' raises an exception (usually when a filter set + to "error" is triggered) when raising a warning for raising string + exceptions. + +- CO_GENERATOR_ALLOWED is no longer defined. This behavior is the default. + The name was removed from Include/code.h. + +- PEP 308: conditional expressions were added: (x if cond else y). + +- Patch 1433928: + - The copy module now "copies" function objects (as atomic objects). + - dict.__getitem__ now looks for a __missing__ hook before raising + KeyError. + +- PEP 343: with statement implemented. Needs ``from __future__ import + with_statement``. Use of 'with' as a variable will generate a warning. + Use of 'as' as a variable will also generate a warning (unless it's + part of an import statement). + The following objects have __context__ methods: + - The built-in file type. + - The thread.LockType type. + - The following types defined by the threading module: + Lock, RLock, Condition, Semaphore, BoundedSemaphore. + - The decimal.Context class. + +- Fix the encodings package codec search function to only search + inside its own package. Fixes problem reported in patch #1433198. + + Note: Codec packages should implement and register their own + codec search function. PEP 100 has the details. + +- PEP 353: Using ``Py_ssize_t`` as the index type. + +- ``PYMALLOC_DEBUG`` builds now add ``4*sizeof(size_t)`` bytes of debugging + info to each allocated block, since the ``Py_ssize_t`` changes (PEP 353) + now allow Python to make use of memory blocks exceeding 2**32 bytes for + some purposes on 64-bit boxes. A ``PYMALLOC_DEBUG`` build was limited + to 4-byte allocations before. + +- Patch #1400181, fix unicode string formatting to not use the locale. + This is how string objects work. u'%f' could use , instead of . + for the decimal point. Now both strings and unicode always use periods. + +- Bug #1244610, #1392915, fix build problem on OpenBSD 3.7 and 3.8. + configure would break checking curses.h. + +- Bug #959576: The pwd module is now builtin. This allows Python to be + built on UNIX platforms without $HOME set. + +- Bug #1072182, fix some potential problems if characters are signed. + +- Bug #889500, fix line number on SyntaxWarning for global declarations. + +- Bug #1378022, UTF-8 files with a leading BOM crashed the interpreter. + +- Support for converting hex strings to floats no longer works. + This was not portable. float('0x3') now raises a ValueError. + +- Patch #1382163: Expose Subversion revision number to Python. New C API + function Py_GetBuildNumber(). New attribute sys.subversion. Build number + is now displayed in interactive prompt banner. + +- Implementation of PEP 341 - Unification of try/except and try/finally. + "except" clauses can now be written together with a "finally" clause in + one try statement instead of two nested ones. Patch #1355913. + +- Bug #1379994: Builtin unicode_escape and raw_unicode_escape codec + now encodes backslash correctly. + +- Patch #1350409: Work around signal handling bug in Visual Studio 2005. + +- Bug #1281408: Py_BuildValue now works correctly even with unsigned longs + and long longs. 
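The ``__missing__`` hook added by patch 1433928 above can be exercised by subclassing ``dict``; a minimal sketch (the ``ZeroDict`` name is invented for the example)::

    class ZeroDict(dict):
        """dict subclass whose absent keys read as 0 via the new hook."""
        def __missing__(self, key):
            # Called by dict.__getitem__ before it would raise KeyError.
            return 0

    counts = ZeroDict()
    for word in "the quick brown fox jumps over the lazy dog the".split():
        counts[word] = counts[word] + 1     # no KeyError for unseen words

    print(counts["the"])        # 3
    print(counts["missing"])    # 0, computed by __missing__, not stored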
+ +- SF Bug #1350188, "setdlopenflags" leads to crash upon "import" + It was possible for dlerror() to return a NULL pointer, so + it will now use a default error message in this case. + +- Replaced most Unicode charmap codecs with new ones using the + new Unicode translate string feature in the builtin charmap + codec; the codecs were created from the mapping tables available + at ftp.unicode.org and contain a few updates (e.g. the Mac OS + encodings now include a mapping for the Apple logo) + +- Added a few more codecs for Mac OS encodings + +- Sped up some Unicode operations. + +- A new AST parser implementation was completed. The abstract + syntax tree is available for read-only (non-compile) access + to Python code; an _ast module was added. + +- SF bug #1167751: fix incorrect code being produced for generator expressions. + The following code now raises a SyntaxError: foo(a = i for i in range(10)) + +- SF Bug #976608: fix SystemError when mtime of an imported file is -1. + +- SF Bug #887946: fix segfault when redirecting stdin from a directory. + Provide a warning when a directory is passed on the command line. + +- Fix segfault with invalid coding. + +- SF bug #772896: unknown encoding results in MemoryError. + +- All iterators now have a Boolean value of True. Formerly, some iterators + supported a __len__() method which evaluated to False when the iterator + was empty. + +- On 64-bit platforms, when __len__() returns a value that cannot be + represented as a C int, raise OverflowError. + +- test__locale is skipped on OS X < 10.4 (only partial locale support is + present). + +- SF bug #893549: parsing keyword arguments was broken with a few format + codes. + +- Changes donated by Elemental Security to make it work on AIX 5.3 + with IBM's 64-bit compiler (SF patch #1284289). This also closes SF + bug #105470: test_pwd fails on 64bit system (Opteron). + +- Changes donated by Elemental Security to make it work on HP-UX 11 on + Itanium2 with HP's 64-bit compiler (SF patch #1225212). + +- Disallow keyword arguments for type constructors that don't use them + (fixes bug #1119418). + +- Forward UnicodeDecodeError into SyntaxError for source encoding errors. + +- SF bug #900092: When tracing (e.g. for hotshot), restore 'return' events for + exceptions that cause a function to exit. + +- The implementation of set() and frozenset() was revised to use its + own internal data structure. Memory consumption is reduced by 1/3 + and there are modest speed-ups as well. The API is unchanged. + +- SF bug #1238681: freed pointer is used in longobject.c:long_pow(). + +- SF bug #1229429: PyObject_CallMethod failed to decrement some + reference counts in some error exit cases. + +- SF bug #1185883: Python's small-object memory allocator took over + a block managed by the platform C library whenever a realloc specified + a small new size. However, there's no portable way to know then how + much of the address space following the pointer is valid, so there's no + portable way to copy data from the C-managed block into Python's + small-object space without risking a memory fault. Python's small-object + realloc now leaves such blocks under the control of the platform C + realloc. + +- SF bug #1232517: An overflow error was not detected properly when + attempting to convert a large float to an int in os.utime(). + +- SF bug #1224347: hex longs now print with lowercase letters just + like their int counterparts. 
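Regarding the entry above that all iterators now have a Boolean value of True: truth-testing can no longer detect exhaustion, so code should probe for the next element explicitly, as this small sketch (assuming Python 2.5) shows::

    it = iter([])                   # an already-empty iterator

    print(bool(it))                 # True, even though nothing is left

    # Probe for the next element explicitly instead:
    try:
        item = it.next()            # Python 2 iterator protocol
    except StopIteration:
        item = None                 # exhausted
    print(item)                     # None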
+ +- SF bug #1163563: the original fix for bug #1010677 ("thread Module + Breaks PyGILState_Ensure()") broke badly in the case of multiple + interpreter states; back out that fix and do a better job (see + http://mail.python.org/pipermail/python-dev/2005-June/054258.html + for a longer write-up of the problem). +
+- SF patch #1180995: marshal now uses a binary format by default when + serializing floats. +
+- SF patch #1181301: on platforms that appear to use IEEE 754 floats, + the routines that promise to produce IEEE 754 binary representations + of floats now simply copy bytes around. +
+- bug #967182: disallow opening files with 'wU' or 'aU' as specified by PEP + 278. +
+- patch #1109424: int, long, float, complex, and unicode now check for the + proper magic slot for type conversions when subclassed. Previously the + magic slot was ignored during conversion. Semantics now match the way + subclasses of str always behaved. int/long/float, conversion of an instance + to the base class has been moved to the proper nb_* magic slot and out of + PyNumber_*(). + Thanks Walter Dörwald. +
+- Descriptors defined in C with a PyGetSetDef structure, where the setter is + NULL, now raise an AttributeError when attempting to set or delete the + attribute. Previously a TypeError was raised, but this was inconsistent + with the equivalent pure-Python implementation. +
+- It is now safe to call PyGILState_Release() before + PyEval_InitThreads() (note that if there is reason to believe there + are multiple threads around you still must call PyEval_InitThreads() + before using the Python API; this fix is for extension modules that + have no way of knowing if Python is multi-threaded yet). +
+- Typing Ctrl-C whilst raw_input() was waiting in a build with threads + disabled caused a crash. +
+- Bug #1165306: instancemethod_new allowed the creation of a method + with im_class == im_self == NULL, which caused a crash when called. +
+- Move exception finalisation later in the shutdown process - this + fixes the crash seen in bug #1165761. +
+- Added two new builtins, any() and all(). +
+- Defining a class with empty parentheses is now allowed + (e.g., ``class C(): pass`` is no longer a syntax error). + Patch #1176012 added support to the 'parser' module and 'compiler' package + (thanks to logistix for that added support). +
+- Patch #1115086: Support PY_LONGLONG in structmember. +
+- Bug #1155938: new style classes did not check that __init__() was + returning None. +
+- Patch #802188: Report characters after line continuation character + ('\') with a specific error message. +
+- Bug #723201: Raise a TypeError for passing bad objects to 'L' format. +
+- Bug #1124295: the __name__ attribute of file objects was + inadvertently made inaccessible in restricted mode. +
+- Bug #1074011: closing sys.std{out,err} now causes a flush() and + an ferror() call. +
+- min() and max() now support key= arguments with the same meaning as in + list.sort(). +
+- The peephole optimizer now performs simple constant folding in expressions: + (2+3) --> (5). +
+- set and frozenset objects can now be marshalled. SF #1098985. +
+- Bug #1077106: Poor argument checking could cause memory corruption + in calls to os.read(). +
+- The parser did not complain about future statements in illegal + positions. It once again reports a syntax error if a future + statement occurs after anything other than a doc string.
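A quick sketch of the new ``key=`` support in ``min()``/``max()`` and the new ``any()``/``all()`` builtins mentioned above (the sample data is made up)::

    words = ["banana", "Fig", "apple", "Date"]

    # key= has the same meaning as in list.sort(): compare derived values.
    print(max(words, key=len))              # banana
    print(min(words, key=str.lower))        # apple

    # any()/all() short-circuit over arbitrary iterables.
    print(any(len(w) > 5 for w in words))   # True
    print(all(w.istitle() for w in words))  # False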
+ +- Change the %s format specifier for str objects so that it returns a + unicode instance if the argument is not an instance of basestring and + calling __str__ on the argument returns a unicode instance. + +- Patch #1413181: changed ``PyThreadState_Delete()`` to forget about the + current thread state when the auto-GIL-state machinery knows about + it (since the thread state is being deleted, continuing to remember it + can't help, but can hurt if another thread happens to get created with + the same thread id). + +Extension Modules +----------------- + +- Patch #1380952: fix SSL objects timing out on consecutive read()s + +- Patch #1309579: wait3 and wait4 were added to the posix module. + +- Patch #1231053: The audioop module now supports encoding/decoding of alaw. + In addition, the existing ulaw code was updated. + +- RFE #567972: Socket objects' family, type and proto properties are + now exposed via new attributes. + +- Everything under lib-old was removed. This includes the following modules: + Para, addpack, cmp, cmpcache, codehack, dircmp, dump, find, fmt, grep, + lockfile, newdir, ni, packmail, poly, rand, statcache, tb, tzparse, + util, whatsound, whrandom, zmod + +- The following modules were removed: regsub, reconvert, regex, regex_syntax. + +- re and sre were swapped, so help(re) provides full help. importing sre + is deprecated. The undocumented re.engine variable no longer exists. + +- Bug #1448490: Fixed a bug that ISO-2022 codecs could not handle + SS2 (single-shift 2) escape sequences correctly. + +- The unicodedata module was updated to the 4.1 version of the Unicode + database. The 3.2 version is still available as unicodedata.db_3_2_0 + for applications that require this specific version (such as IDNA). + +- The timing module is no longer built by default. It was deprecated + in PEP 4 in Python 2.0 or earlier. + +- Patch 1433928: Added a new type, defaultdict, to the collections module. + This uses the new __missing__ hook behavior added to dict (see above). + +- Bug #854823: socketmodule now builds on Sun platforms even when + INET_ADDRSTRLEN is not defined. + +- Patch #1393157: os.startfile() now has an optional argument to specify + a "command verb" to invoke on the file. + +- Bug #876637, prevent stack corruption when socket descriptor + is larger than FD_SETSIZE. + +- Patch #1407135, bug #1424041: harmonize mmap behavior of anonymous memory. + mmap.mmap(-1, size) now returns anonymous memory in both Unix and Windows. + mmap.mmap(0, size) should not be used on Windows for anonymous memory. + +- Patch #1422385: The nis module now supports access to domains other + than the system default domain. + +- Use Win32 API to implement os.stat/fstat. As a result, subsecond timestamps + are reported, the limit on path name lengths is removed, and stat reports + WindowsError now (instead of OSError). + +- Add bsddb.db.DBEnv.set_tx_timestamp allowing time based database recovery. + +- Bug #1413192, fix seg fault in bsddb if a transaction was deleted + before the env. + +- Patch #1103116: Basic AF_NETLINK support. + +- Bug #1402308, (possible) segfault when using mmap.mmap(-1, ...) + +- Bug #1400822, _curses over{lay,write} doesn't work when passing 6 ints. + Also fix ungetmouse() which did not accept arguments properly. + The code now conforms to the documented signature. + +- Bug #1400115, Fix segfault when calling curses.panel.userptr() + without prior setting of the userptr. + +- Fix 64-bit problems in bsddb. + +- Patch #1365916: fix some unsafe 64-bit mmap methods. 
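The new ``collections.defaultdict`` type mentioned above builds on the ``__missing__`` hook; a minimal usage sketch::

    from collections import defaultdict

    # Missing keys are created on demand by calling the factory (here, list).
    by_initial = defaultdict(list)
    for word in ["ant", "bee", "asp", "bat"]:
        by_initial[word[0]].append(word)

    print(dict(by_initial))     # {'a': ['ant', 'asp'], 'b': ['bee', 'bat']}
                                # (key order may vary)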
+ +- Bug #1290333: Added a workaround for cjkcodecs' _codecs_cn build + problem on AIX. + +- Bug #869197: os.setgroups rejects long integer arguments + +- Bug #1346533, select.poll() doesn't raise an error if timeout > sys.maxint + +- Bug #1344508, Fix UNIX mmap leaking file descriptors + +- Patch #1338314, Bug #1336623: fix tarfile so it can extract + REGTYPE directories from tarfiles written by old programs. + +- Patch #1407992, fixes broken bsddb module db associate when using + BerkeleyDB 3.3, 4.0 or 4.1. + +- Get bsddb module to build with BerkeleyDB version 4.4 + +- Get bsddb module to build with BerkeleyDB version 3.2 + +- Patch #1309009, Fix segfault in pyexpat when the XML document is in latin_1, + but Python incorrectly assumes it is in UTF-8 format + +- Fix parse errors in the readline module when compiling without threads. + +- Patch #1288833: Removed thread lock from socket.getaddrinfo on + FreeBSD 5.3 and later versions which got thread-safe getaddrinfo(3). + +- Patches #1298449 and #1298499: Add some missing checks for error + returns in cStringIO.c. + +- Patch #1297028: fix segfault if call type on MultibyteCodec, + MultibyteStreamReader, or MultibyteStreamWriter + +- Fix memory leak in posix.access(). + +- Patch #1213831: Fix typo in unicodedata._getcode. + +- Bug #1007046: os.startfile() did not accept unicode strings encoded in + the file system encoding. + +- Patch #756021: Special-case socket.inet_aton('255.255.255.255') for + platforms that don't have inet_aton(). + +- Bug #1215928: Fix bz2.BZ2File.seek() for 64-bit file offsets. + +- Bug #1191043: Fix bz2.BZ2File.(x)readlines for files containing one + line without newlines. + +- Bug #728515: mmap.resize() now resizes the file on Unix as it did + on Windows. + +- Patch #1180695: Add nanosecond stat resolution, and st_gen, + st_birthtime for FreeBSD. + +- Patch #1231069: The fcntl.ioctl function now uses the 'I' code for + the request code argument, which results in more C-like behaviour + for large or negative values. + +- Bug #1234979: For the argument of thread.Lock.acquire, the Windows + implementation treated all integer values except 1 as false. + +- Bug #1194181: bz2.BZ2File didn't handle mode 'U' correctly. + +- Patch #1212117: os.stat().st_flags is now accessible as a attribute + if available on the platform. + +- Patch #1103951: Expose O_SHLOCK and O_EXLOCK in the posix module if + available on the platform. + +- Bug #1166660: The readline module could segfault if hook functions + were set in a different thread than that which called readline. + +- collections.deque objects now support a remove() method. + +- operator.itemgetter() and operator.attrgetter() now support retrieving + multiple fields. This provides direct support for sorting on multiple + keys (primary, secondary, etc). + +- os.access now supports Unicode path names on non-Win32 systems. + +- Patches #925152, #1118602: Avoid reading after the end of the buffer + in pyexpat.GetInputContext. + +- Patches #749830, #1144555: allow UNIX mmap size to default to current + file size. + +- Added functional.partial(). See PEP309. + +- Patch #1093585: raise a ValueError for negative history items in readline. + {remove_history,replace_history} + +- The spwd module has been added, allowing access to the shadow password + database. + +- stat_float_times is now True. + +- array.array objects are now picklable. + +- the cPickle module no longer accepts the deprecated None option in the + args tuple returned by __reduce__(). 
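To illustrate the multi-field support in ``operator.itemgetter()``/``attrgetter()`` noted above, a short sketch (the sample records are invented)::

    from operator import itemgetter

    rows = [("alice", 3, 92.5), ("bob", 1, 88.0), ("alice", 1, 75.0)]

    # Several indices yield a tuple, giving a (primary, secondary) sort key.
    rows.sort(key=itemgetter(0, 1))
    print(rows)     # [('alice', 1, 75.0), ('alice', 3, 92.5), ('bob', 1, 88.0)]

    # The same getter can project just those fields from one record:
    print(itemgetter(0, 2)(rows[0]))    # ('alice', 75.0)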
+ +- itertools.islice() now accepts None for the start and step arguments. + This allows islice() to work more readily with slices: + islice(s.start, s.stop, s.step) + +- datetime.datetime() now has a strptime class method which can be used to + create datetime object using a string and format. + +- Patch #1117961: Replace the MD5 implementation from RSA Data Security Inc + with the implementation from http://sourceforge.net/projects/libmd5-rfc/. + +Library +------- + +- Patch #1388073: Numerous __-prefixed attributes of unittest.TestCase have + been renamed to have only a single underscore prefix. This was done to + make subclassing easier. + +- PEP 338: new module runpy defines a run_module function to support + executing modules which provide access to source code or a code object + via the PEP 302 import mechanisms. + +- The email module's parsedate_tz function now sets the daylight savings + flag to -1 (unknown) since it can't tell from the date whether it should + be set. + +- Patch #624325: urlparse.urlparse() and urlparse.urlsplit() results + now sport attributes that provide access to the parts of the result. + +- Patch #1462498: sgmllib now handles entity and character references + in attribute values. + +- Added the sqlite3 package. This is based on pysqlite2.1.3, and provides + a DB-API interface in the standard library. You'll need sqlite 3.0.8 or + later to build this - if you have an earlier version, the C extension + module will not be built. + +- Bug #1460340: ``random.sample(dict)`` failed in various ways. Dicts + aren't officially supported here, and trying to use them will probably + raise an exception some day. But dicts have been allowed, and "mostly + worked", so support for them won't go away without warning. + +- Bug #1445068: getpass.getpass() can now be given an explicit stream + argument to specify where to write the prompt. + +- Patch #1462313, bug #1443328: the pickle modules now can handle classes + that have __private names in their __slots__. + +- Bug #1250170: mimetools now handles socket.gethostname() failures gracefully. + +- patch #1457316: "setup.py upload" now supports --identity to select the + key to be used for signing the uploaded code. + +- Queue.Queue objects now support .task_done() and .join() methods + to make it easier to monitor when daemon threads have completed + processing all enqueued tasks. Patch #1455676. + +- popen2.Popen objects now preserve the command in a .cmd attribute. + +- Added the ctypes ffi package. + +- email 4.0 package now integrated. This is largely the same as the email 3.0 + package that was included in Python 2.3, except that PEP 8 module names are + now used (e.g. mail.message instead of email.Message). The MIME classes + have been moved to a subpackage (e.g. email.mime.text instead of + email.MIMEText). The old names are still supported for now. Several + deprecated Message methods have been removed and lots of bugs have been + fixed. More details can be found in the email package documentation. + +- Patches #1436130/#1443155: codecs.lookup() now returns a CodecInfo object + (a subclass of tuple) that provides incremental decoders and encoders + (a way to use stateful codecs without the stream API). Python functions + codecs.getincrementaldecoder() and codecs.getincrementalencoder() as well + as C functions PyCodec_IncrementalEncoder() and PyCodec_IncrementalDecoder() + have been added. 
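A minimal sketch of the new ``datetime.strptime()`` class method mentioned above (the format string and sample date are chosen arbitrarily)::

    from datetime import datetime

    # Parse a string directly into a datetime; previously this required
    # time.strptime() plus a manual conversion.
    stamp = datetime.strptime("2006-04-27 14:30", "%Y-%m-%d %H:%M")
    print(stamp)                        # 2006-04-27 14:30:00
    print(stamp.strftime("%d-%b-%Y"))   # 27-Apr-2006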
+ +- Patch #1359365: Calling next() on a closed StringIO.String object raises + a ValueError instead of a StopIteration now (like file and cString.String do). + cStringIO.StringIO.isatty() will raise a ValueError now if close() has been + called before (like file and StringIO.StringIO do). + +- A regrtest option -w was added to re-run failed tests in verbose mode. + +- Patch #1446372: quit and exit can now be called from the interactive + interpreter to exit. + +- The function get_count() has been added to the gc module, and gc.collect() + grew an optional 'generation' argument. + +- A library msilib to generate Windows Installer files, and a distutils + command bdist_msi have been added. + +- PEP 343: new module contextlib.py defines decorator @contextmanager + and helpful context managers nested() and closing(). + +- The compiler package now supports future imports after the module docstring. + +- Bug #1413790: zipfile now sanitizes absolute archive names that are + not allowed by the specs. + +- Patch #1215184: FileInput now can be given an opening hook which can + be used to control how files are opened. + +- Patch #1212287: fileinput.input() now has a mode parameter for + specifying the file mode input files should be opened with. + +- Patch #1215184: fileinput now has a fileno() function for getting the + current file number. + +- Patch #1349274: gettext.install() now optionally installs additional + translation functions other than _() in the builtin namespace. + +- Patch #1337756: fileinput now accepts Unicode filenames. + +- Patch #1373643: The chunk module can now read chunks larger than + two gigabytes. + +- Patch #1417555: SimpleHTTPServer now returns Last-Modified headers. + +- Bug #1430298: It is now possible to send a mail with an empty + return address using smtplib. + +- Bug #1432260: The names of lambda functions are now properly displayed + in pydoc. + +- Patch #1412872: zipfile now sets the creator system to 3 (Unix) + unless the system is Win32. + +- Patch #1349118: urllib now supports user:pass@ style proxy + specifications, raises IOErrors when proxies for unsupported protocols + are defined, and uses the https proxy on https redirections. + +- Bug #902075: urllib2 now supports 'host:port' style proxy specifications. + +- Bug #1407902: Add support for sftp:// URIs to urlparse. + +- Bug #1371247: Update Windows locale identifiers in locale.py. + +- Bug #1394565: SimpleHTTPServer now doesn't choke on query parameters + any more. + +- Bug #1403410: The warnings module now doesn't get confused + when it can't find out the module name it generates a warning for. + +- Patch #1177307: Added a new codec utf_8_sig for UTF-8 with a BOM signature. + +- Patch #1157027: cookielib mishandles RFC 2109 cookies in Netscape mode + +- Patch #1117398: cookielib.LWPCookieJar and .MozillaCookieJar now raise + LoadError as documented, instead of IOError. For compatibility, + LoadError subclasses IOError. + +- Added the hashlib module. It provides secure hash functions for MD5 and + SHA1, 224, 256, 384, and 512. Note that recent developments make the + historic MD5 and SHA1 unsuitable for cryptographic-strength applications. + In + Ronald L. Rivest offered this advice for Python: + + "The consensus of researchers in this area (at least as + expressed at the NIST Hash Function Workshop 10/31/05), + is that SHA-256 is a good choice for the time being, but + that research should continue, and other alternatives may + arise from this research. The larger SHA's also seem OK." 
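A short usage sketch for the new ``hashlib`` module described above, using SHA-256 as the quoted advice suggests (the input string is arbitrary)::

    import hashlib

    h = hashlib.sha256()
    h.update("python-checkins")     # Python 2 str objects are byte strings
    print(h.hexdigest())            # 64-character hex digest

    # Older algorithms remain available, e.g. for non-cryptographic checksums:
    print(hashlib.md5("python-checkins").hexdigest())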
+ +- Added a subset of Fredrik Lundh's ElementTree package. Available + modules are xml.etree.ElementTree, xml.etree.ElementPath, and + xml.etree.ElementInclude, from ElementTree 1.2.6. + +- Patch #1162825: Support non-ASCII characters in IDLE window titles. + +- Bug #1365984: urllib now opens "data:" URLs again. + +- Patch #1314396: prevent deadlock for threading.Thread.join() when an exception + is raised within the method itself on a previous call (e.g., passing in an + illegal argument) + +- Bug #1340337: change time.strptime() to always return ValueError when there + is an error in the format string. + +- Patch #754022: Greatly enhanced webbrowser.py (by Oleg Broytmann). + +- Bug #729103: pydoc.py: Fix docother() method to accept additional + "parent" argument. + +- Patch #1300515: xdrlib.py: Fix pack_fstring() to really use null bytes + for padding. + +- Bug #1296004: httplib.py: Limit maximal amount of data read from the + socket to avoid a MemoryError on Windows. + +- Patch #1166948: locale.py: Prefer LC_ALL, LC_CTYPE and LANG over LANGUAGE + to get the correct encoding. + +- Patch #1166938: locale.py: Parse LANGUAGE as a colon separated list of + languages. + +- Patch #1268314: Cache lines in StreamReader.readlines for performance. + +- Bug #1290505: Fix clearing the regex cache for time.strptime(). + +- Bug #1167128: Fix size of a symlink in a tarfile to be 0. + +- Patch #810023: Fix off-by-one bug in urllib.urlretrieve reporthook + functionality. + +- Bug #1163178: Make IDNA return an empty string when the input is empty. + +- Patch #848017: Make Cookie more RFC-compliant. Use CRLF as default output + separator and do not output trailing semicolon. + +- Patch #1062060: urllib.urlretrieve() now raises a new exception, named + ContentTooShortException, when the actually downloaded size does not + match the Content-Length header. + +- Bug #1121494: distutils.dir_utils.mkpath now accepts Unicode strings. + +- Bug #1178484: Return complete lines from codec stream readers + even if there is an exception in later lines, resulting in + correct line numbers for decoding errors in source code. + +- Bug #1192315: Disallow negative arguments to clear() in pdb. + +- Patch #827386: Support absolute source paths in msvccompiler.py. + +- Patch #1105730: Apply the new implementation of commonprefix in posixpath + to ntpath, macpath, os2emxpath and riscospath. + +- Fix a problem in Tkinter introduced by SF patch #869468: delete bogus + __hasattr__ and __delattr__ methods on class Tk that were breaking + Tkdnd. + +- Bug #1015140: disambiguated the term "article id" in nntplib docs and + docstrings to either "article number" or "message id". + +- Bug #1238170: threading.Thread.__init__ no longer has "kwargs={}" as a + parameter, but uses the usual "kwargs=None". + +- textwrap now processes text chunks at O(n) speed instead of O(n**2). + Patch #1209527 (Contributed by Connelly). + +- urllib2 has now an attribute 'httpresponses' mapping from HTTP status code + to W3C name (404 -> 'Not Found'). RFE #1216944. + +- Bug #1177468: Don't cache the /dev/urandom file descriptor for os.urandom, + as this can cause problems with apps closing all file descriptors. + +- Bug #839151: Fix an attempt to access sys.argv in the warnings module; + it can be missing in embedded interpreters + +- Bug #1155638: Fix a bug which affected HTTP 0.9 responses in httplib. + +- Bug #1100201: Cross-site scripting was possible on BaseHTTPServer via + error messages. + +- Bug #1108948: Cookie.py produced invalid JavaScript code. 
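To show the newly bundled ElementTree modules mentioned above in action, a minimal parse-and-read sketch (the XML snippet is made up)::

    from xml.etree import ElementTree

    xml = "<config><option name='debug'>true</option></config>"
    root = ElementTree.fromstring(xml)

    option = root.find("option")
    print(option.get("name"))   # debug
    print(option.text)          # true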
+ +- The tokenize module now detects and reports indentation errors. + Bug #1224621. + +- The tokenize module has a new untokenize() function to support a full + roundtrip from lexed tokens back to Python source code. In addition, + the generate_tokens() function now accepts a callable argument that + terminates by raising StopIteration. + +- Bug #1196315: fix weakref.WeakValueDictionary constructor. + +- Bug #1213894: os.path.realpath didn't resolve symlinks that were the first + component of the path. + +- Patch #1120353: The xmlrpclib module provides better, more transparent, + support for datetime.{datetime,date,time} objects. With use_datetime set + to True, applications shouldn't have to fiddle with the DateTime wrapper + class at all. + +- distutils.commands.upload was added to support uploading distribution + files to PyPI. + +- distutils.commands.register now encodes the data as UTF-8 before posting + them to PyPI. + +- decimal operator and comparison methods now return NotImplemented + instead of raising a TypeError when interacting with other types. This + allows other classes to implement __radd__ style methods and have them + work as expected. + +- Bug #1163325: Decimal infinities failed to hash. Attempting to + hash a NaN raised an InvalidOperation instead of a TypeError. + +- Patch #918101: Add tarfile open mode r|* for auto-detection of the + stream compression; add, for symmetry reasons, r:* as a synonym of r. + +- Patch #1043890: Add extractall method to tarfile. + +- Patch #1075887: Don't require MSVC in distutils if there is nothing + to build. + +- Patch #1103407: Properly deal with tarfile iterators when untarring + symbolic links on Windows. + +- Patch #645894: Use getrusage for computing the time consumption in + profile.py if available. + +- Patch #1046831: Use get_python_version where appropriate in sysconfig.py. + +- Patch #1117454: Remove code to special-case cookies without values + in LWPCookieJar. + +- Patch #1117339: Add cookielib special name tests. + +- Patch #1112812: Make bsddb/__init__.py more friendly for modulefinder. + +- Patch #1110248: SYNC_FLUSH the zlib buffer for GZipFile.flush. + +- Patch #1107973: Allow to iterate over the lines of a tarfile.ExFileObject. + +- Patch #1104111: Alter setup.py --help and --help-commands. + +- Patch #1121234: Properly cleanup _exit and tkerror commands. + +- Patch #1049151: xdrlib now unpacks booleans as True or False. + +- Fixed bug in a NameError bug in cookielib. Patch #1116583. + +- Applied a security fix to SimpleXMLRPCserver (PSF-2005-001). This + disables recursive traversal through instance attributes, which can + be exploited in various ways. + +- Bug #1222790: in SimpleXMLRPCServer, set the reuse-address and close-on-exec + flags on the HTTP listening socket. + +- Bug #792570: SimpleXMLRPCServer had problems if the request grew too large. + Fixed by reading the HTTP body in chunks instead of one big socket.read(). + +- Patches #893642, #1039083: add allow_none, encoding arguments to + constructors of SimpleXMLRPCServer and CGIXMLRPCRequestHandler. + +- Bug #1110478: Revert os.environ.update to do putenv again. + +- Bug #1103844: fix distutils.install.dump_dirs() with negated options. + +- os.{SEEK_SET, SEEK_CUR, SEEK_END} have been added for convenience. + +- Enhancements to the csv module: + + + Dialects are now validated by the underlying C code, better + reflecting its capabilities, and improving its compliance with + PEP 305. 
+ + Dialect parameter parsing has been re-implemented to improve error + reporting. + + quotechar=None and quoting=QUOTE_NONE now work the way PEP 305 + dictates. + + the parser now removes the escapechar prefix from escaped characters. + + when quoting=QUOTE_NONNUMERIC, the writer now tests for numeric + types, rather than any object that can be represented as a numeric. + + when quoting=QUOTE_NONNUMERIC, the reader now casts unquoted fields + to floats. + + reader now allows \r characters to be quoted (previously it only allowed + \n to be quoted). + + writer doublequote handling improved. + + Dialect classes passed to the module are no longer instantiated by + the module before being parsed (the former validation scheme required + this, but the mechanism was unreliable). + + The dialect registry now contains instances of the internal + C-coded dialect type, rather than references to python objects. + + the internal c-coded dialect type is now immutable. + + register_dialect now accepts the same keyword dialect specifications + as the reader and writer, allowing the user to register dialects + without first creating a dialect class. + + a configurable limit to the size of parsed fields has been added - + previously, an unmatched quote character could result in the entire + file being read into the field buffer before an error was reported. + + A new module method csv.field_size_limit() has been added that sets + the parser field size limit (returning the former limit). The initial + limit is 128kB. + + A line_num attribute has been added to the reader object, which tracks + the number of lines read from the source iterator. This is not + the same as the number of records returned, as records can span + multiple lines. + + reader and writer objects were not being registered with the cyclic-GC. + This has been fixed. + +- _DummyThread objects in the threading module now delete self.__block that is + inherited from _Thread since it uses up a lock allocated by 'thread'. The + lock primitives tend to be limited in number and thus should not be wasted on + a _DummyThread object. Fixes bug #1089632. + +- The imghdr module now detects Exif files. + +- StringIO.truncate() now correctly adjusts the size attribute. + (Bug #951915). + +- locale.py now uses an updated locale alias table (built using + Tools/i18n/makelocalealias.py, a tool to parse the X11 locale + alias file); the encoding lookup was enhanced to use Python's + encoding alias table. + +- moved deprecated modules to Lib/lib-old: whrandom, tzparse, statcache. + +- the pickle module no longer accepts the deprecated None option in the + args tuple returned by __reduce__(). + +- optparse now optionally imports gettext. This allows its use in setup.py. + +- the pickle module no longer uses the deprecated bin parameter. + +- the shelve module no longer uses the deprecated binary parameter. + +- the pstats module no longer uses the deprecated ignore() method. + +- the filecmp module no longer uses the deprecated use_statcache argument. + +- unittest.TestCase.run() and unittest.TestSuite.run() can now be successfully + extended or overridden by subclasses. Formerly, the subclassed method would + be ignored by the rest of the module. (Bug #1078905). + +- heapq.nsmallest() and heapq.nlargest() now support key= arguments with + the same meaning as in list.sort(). + +- Bug #1076985: ``codecs.StreamReader.readline()`` now calls ``read()`` only + once when a size argument is given. 
This prevents a buffer overflow in the + tokenizer with very long source lines. + +- Bug #1083110: ``zlib.decompress.flush()`` would segfault if called + immediately after creating the object, without any intervening + ``.decompress()`` calls. + +- The reconvert.quote function can now emit triple-quoted strings. The + reconvert module now has some simple documentation. + +- ``UserString.MutableString`` now supports negative indices in + ``__setitem__`` and ``__delitem__`` + +- Bug #1149508: ``textwrap`` now handles hyphenated numbers (eg. "2004-03-05") + correctly. + +- Partial fixes for SF bugs #1163244 and #1175396: If a chunk read by + ``codecs.StreamReader.readline()`` has a trailing "\r", read one more + character even if the user has passed a size parameter to get a proper + line ending. Remove the special handling of a "\r\n" that has been split + between two lines. + +- Bug #1251300: On UCS-4 builds the "unicode-internal" codec will now complain + about illegal code points. The codec now supports PEP 293 style error + handlers. + +- Bug #1235646: ``codecs.StreamRecoder.next()`` now reencodes the data it reads + from the input stream, so that the output is a byte string in the correct + encoding instead of a unicode string. + +- Bug #1202493: Fixing SRE parser to handle '{}' as perl does, rather than + considering it exactly like a '*'. + +- Bug #1245379: Add "unicode-1-1-utf-7" as an alias for "utf-7" to + ``encodings.aliases``. + +- ` uu.encode()`` and ``uu.decode()`` now support unicode filenames. + +- Patch #1413711: Certain patterns of differences were making difflib + touch the recursion limit. + +- Bug #947906: An object oriented interface has been added to the calendar + module. It's possible to generate HTML calendar now and the module can be + called as a script (e.g. via ``python -mcalendar``). Localized month and + weekday names can be ouput (even if an exotic encoding is used) using + special classes that use unicode. + +Build +----- + +- Fix test_float, test_long, and test_struct failures on Tru64 with gcc + by using -mieee gcc option. + +- Patch #1432345: Make python compile on DragonFly. + +- Build support for Win64-AMD64 was added. + +- Patch #1428494: Prefer linking against ncursesw over ncurses library. + +- Patch #881820: look for openpty and forkpty also in libbsd. + +- The sources of zlib are now part of the Python distribution (zlib 1.2.3). + The zlib module is now builtin on Windows. + +- Use -xcode=pic32 for CCSHARED on Solaris with SunPro. + +- Bug #1189330: configure did not correctly determine the necessary + value of LINKCC if python was built with GCC 4.0. + +- Upgrade Windows build to zlib 1.2.3 which eliminates a potential security + vulnerability in zlib 1.2.1 and 1.2.2. + +- EXTRA_CFLAGS has been introduced as an environment variable to hold compiler + flags that change binary compatibility. Changes were also made to + distutils.sysconfig to also use the environment variable when used during + compilation of the interpreter and of C extensions through distutils. + +- SF patch 1171735: Darwin 8's headers are anal about POSIX compliance, + and linking has changed (prebinding is now deprecated, and libcc_dynamic + no longer exists). This configure patch makes things right. + +- Bug #1158607: Build with --disable-unicode again. + +- spwdmodule.c is built only if either HAVE_GETSPNAM or HAVE_HAVE_GETSPENT is + defined. Discovered as a result of not being able to build on OS X. 
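As a brief illustration of the object-oriented ``calendar`` interface from bug #947906 above (the year and month are arbitrary)::

    import calendar

    # TextCalendar and HTMLCalendar are part of the new object-oriented interface.
    text_cal = calendar.TextCalendar(calendar.SUNDAY)   # weeks start on Sunday
    print(text_cal.formatmonth(2006, 4))                # plain-text month grid

    html_cal = calendar.HTMLCalendar()
    print(html_cal.formatmonth(2006, 4)[:40])           # start of an HTML table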
+ +- setup.py now uses the directories specified in LDFLAGS using the -L option + and in CPPFLAGS using the -I option for adding library and include + directories, respectively, for compiling extension modules against. This has + led to the core being compiled using the values in CPPFLAGS. It also removes + the need for the special-casing of both DarwinPorts and Fink for darwin since + the proper directories can be specified in LDFLAGS (``-L/sw/lib`` for Fink, + ``-L/opt/local/lib`` for DarwinPorts) and CPPFLAGS (``-I/sw/include`` for + Fink, ``-I/opt/local/include`` for DarwinPorts). + +- Test in configure.in that checks for tzset no longer dependent on tm->tm_zone + to exist in the struct (not required by either ISO C nor the UNIX 2 spec). + Tests for sanity in tzname when HAVE_TZNAME defined were also defined. + Closes bug #1096244. Thanks Gregory Bond. + +C API +----- + +- ``PyMem_{Del, DEL}`` and ``PyMem_{Free, FREE}`` no longer map to + ``PyObject_{Free, FREE}``. They map to the system ``free()`` now. If memory + is obtained via the ``PyObject_`` family, it must be released via the + ``PyObject_`` family, and likewise for the ``PyMem_`` family. This has + always been officially true, but when Python's small-object allocator was + introduced, an attempt was made to cater to a few extension modules + discovered at the time that obtained memory via ``PyObject_New`` but + released it via ``PyMem_DEL``. It's years later, and if such code still + exists it will fail now (probably with segfaults, but calling wrong + low-level memory management functions can yield many symptoms). + +- Added a C API for set and frozenset objects. + +- Removed PyRange_New(). + +- Patch #1313939: PyUnicode_DecodeCharmap() accepts a unicode string as the + mapping argument now. This string is used as a mapping table. Byte values + greater than the length of the string and 0xFFFE are treated as undefined + mappings. + + +Tests +----- + +- In test_os, st_?time is now truncated before comparing it with ST_?TIME. + +- Patch #1276356: New resource "urlfetch" is implemented. This enables + even impatient people to run tests that require remote files. + + +Documentation +------------- + +- Bug #1402224: Add warning to dl docs about crashes. + +- Bug #1396471: Document that Windows' ftell() can return invalid + values for text files with UNIX-style line endings. + +- Bug #1274828: Document os.path.splitunc(). + +- Bug #1190204: Clarify which directories are searched by site.py. + +- Bug #1193849: Clarify os.path.expanduser() documentation. + +- Bug #1243192: re.UNICODE and re.LOCALE affect \d, \D, \s and \S. + +- Bug #755617: Document the effects of os.chown() on Windows. + +- Patch #1180012: The documentation for modulefinder is now in the library reference. + +- Patch #1213031: Document that os.chown() accepts argument values of -1. + +- Bug #1190563: Document os.waitpid() return value with WNOHANG flag. + +- Bug #1175022: Correct the example code for property(). + +- Document the IterableUserDict class in the UserDict module. + Closes bug #1166582. + +- Remove all latent references for "Macintosh" that referred to semantics for + Mac OS 9 and change to reflect the state for OS X. + Closes patch #1095802. Thanks Jack Jansen. + +Mac +--- + + +New platforms +------------- + +- FreeBSD 7 support is added. + + +Tools/Demos +----------- + +- Created Misc/Vim/vim_syntax.py to auto-generate a python.vim file in that + directory for syntax highlighting in Vim. 
Vim directory was added and placed + vimrc to it (was previous up a level). + +- Added two new files to Tools/scripts: pysource.py, which recursively + finds Python source files, and findnocoding.py, which finds Python + source files that need an encoding declaration. + Patch #784089, credits to Oleg Broytmann. + +- Bug #1072853: pindent.py used an uninitialized variable. + +- Patch #1177597: Correct Complex.__init__. + +- Fixed a display glitch in Pynche, which could cause the right arrow to + wiggle over by a pixel. + + What's New in Python 2.4 final? =============================== Modified: python/branches/libffi3-branch/Misc/NEWS ============================================================================== --- python/branches/libffi3-branch/Misc/NEWS (original) +++ python/branches/libffi3-branch/Misc/NEWS Tue Mar 4 15:50:53 2008 @@ -4,14 +4,38 @@ (editors: check NEWS.help for information about editing NEWS using ReST.) -What's New in Python 2.6 alpha 1? +What's New in Python 2.6 alpha 2? ================================= *Release date: XX-XXX-2008* + +What's New in Python 2.6 alpha 1? +================================= + +*Release date: 29-Feb-2008* + Core and builtins ----------------- +- Issue #2051: pyc and pyo files are no longer created with permission + 644. The mode is now inherited from the py file. + +- Issue #2067: file.__exit__() now calls subclasses' close() method. + +- Patch #1759: Backport of PEP 3129 class decorators. + +- Issue #1881: An internal parser limit has been increased. Also see + issue 215555 for a discussion. + +- Added the future_builtins module, which contains hex() and oct(). + These are the PEP 3127 version of these functions, designed to be + compatible with the hex() and oct() builtins from Python 3.0. They + differ slightly in their output formats from the existing, unchanged + Python 2.6 builtins. The expected usage of the future_builtins + module is: + from future_builtins import hex, oct + - Issue #1600: Modifed PyOS_ascii_formatd to use at most 2 digit exponents for exponents with absolute value < 100. Follows C99 standard. This is a change on Windows, which would use 3 digits. @@ -35,7 +59,7 @@ - Fixed repr() and str() of complex numbers with infinity or nan as real or imaginary part. -- Clear all free list during a gc.collect() of the highest generation in order +- Clear all free lists during a gc.collect() of the highest generation in order to allow pymalloc to free more arenas. Python may give back memory to the OS earlier. @@ -423,6 +447,24 @@ Library ------- +- Add inspect.isabstract(object) to fix bug #2223 + +- Add a __format__ method to Decimal, to support PEP 3101. + +- Add a timing parameter when using trace.Trace to print out timestamps. + +- #1627: httplib now ignores negative Content-Length headers. + +- #900744: If an invalid chunked-encoding header is sent by a server, + httplib will now raise IncompleteRead and close the connection instead + of raising ValueError. + +- #1492: The content type of BaseHTTPServer error messages can now be + overridden. + +- Issue 1781: ConfigParser now does not let you add the "default" section + (ignore-case) + - Removed uses of dict.has_key() from distutils, and uses of callable() from copy_reg.py, so the interpreter now starts up without warnings when '-3' is given. More work like this needs to @@ -635,6 +677,11 @@ - itertools.count() is no longer bounded to LONG_MAX. Formerly, it raised an OverflowError. Now, automatically shifts from ints to longs. 
+- Added itertools.product() which forms the Cartesian product of + the input iterables. + +- Added itertools.combinations(). + - Patch #1541463: optimize performance of cgi.FieldStorage operations. - Decimal is fully updated to the latest Decimal Specification (v1.66). @@ -1057,14 +1104,14 @@ written to disk after calling .flush() or .close(). (Patch by David Watson.) -- Patch #1592250: Add elidge argument to Tkinter.Text.search. +- Patch #1592250: Add elide argument to Tkinter.Text.search. -- Patch #838546: Make terminal become controlling in pty.fork() +- Patch #838546: Make terminal become controlling in pty.fork(). - Patch #1351744: Add askyesnocancel helper for tkMessageBox. - Patch #1060577: Extract list of RPM files from spec file in - bdist_rpm + bdist_rpm. - Bug #1586613: fix zlib and bz2 codecs' incremental en/decoders. @@ -1148,7 +1195,8 @@ - idle: Honor the "Cancel" action in the save dialog (Debian bug #299092). - Fix utf-8-sig incremental decoder, which didn't recognise a BOM when the - first chunk fed to the decoder started with a BOM, but was longer than 3 bytes. + first chunk fed to the decoder started with a BOM, but was longer than 3 + bytes. - The implementation of UnicodeError objects has been simplified (start and end attributes are now stored directly as Py_ssize_t members). @@ -1163,10 +1211,17 @@ does not claim to support starttls. Adds the SMTP.ehlo_or_helo_if_needed() method. Patch contributed by Bill Fenner. +- Patch #1089358: Add signal.siginterrupt, a wrapper around siginterrupt(3). Extension Modules ----------------- +- Patch #1506171: added operator.methodcaller(). + +- Patch #1826: operator.attrgetter() now supports dotted attribute paths. + +- Patch #1957: syslogmodule: Release GIL when calling syslog(3) + - #2112: mmap.error is now a subclass of EnvironmentError and not a direct EnvironmentError @@ -1202,6 +1257,8 @@ - itertools.starmap() now accepts any iterable input. Previously, it required the function inputs to be tuples. +- itertools.chain() now has an alternate constructor, chain.from_iterable(). + - Issue #1646: Make socket support TIPC. The socket module now has support for TIPC under Linux, see http://tipc.sf.net/ for more information. @@ -1359,10 +1416,12 @@ - bsddb module: Fix memory leak when using database cursors on databases without a DBEnv. +- The sqlite3 module was updated to pysqlite 2.4.1. + Tests ----- -- Refactor test_logging to use doctest. +- Refactor test_logging to use unittest. - Refactor test_profile and test_cprofile to use the same code to profile. @@ -1648,2143 +1707,6 @@ lead to the deprecation of macostools.touched() as it relied solely on macfs and was a no-op under OS X. - -What's New in Python 2.5 release candidate 1? -============================================= - -*Release date: 17-AUG-2006* - -Core and builtins ------------------ - -- Unicode objects will no longer raise an exception when being - compared equal or unequal to a string and a UnicodeDecodeError - exception occurs, e.g. as result of a decoding failure. - - Instead, the equal (==) and unequal (!=) comparison operators will - now issue a UnicodeWarning and interpret the two objects as - unequal. The UnicodeWarning can be filtered as desired using - the warning framework, e.g. silenced completely, turned into an - exception, logged, etc. - - Note that compare operators other than equal and unequal will still - raise UnicodeDecodeError exceptions as they've always done. - -- Fix segfault when doing string formatting on subclasses of long. 
- -- Fix bug related to __len__ functions using values > 2**32 on 64-bit machines - with new-style classes. - -- Fix bug related to __len__ functions returning negative values with - classic classes. - -- Patch #1538606, Fix __index__() clipping. There were some problems - discovered with the API and how integers that didn't fit into Py_ssize_t - were handled. This patch attempts to provide enough alternatives - to effectively use __index__. - -- Bug #1536021: __hash__ may now return long int; the final hash - value is obtained by invoking hash on the long int. - -- Bug #1536786: buffer comparison could emit a RuntimeWarning. - -- Bug #1535165: fixed a segfault in input() and raw_input() when - sys.stdin is closed. - -- On Windows, the PyErr_Warn function is now exported from - the Python dll again. - -- Bug #1191458: tracing over for loops now produces a line event - on each iteration. Fixing this problem required changing the .pyc - magic number. This means that .pyc files generated before 2.5c1 - will be regenerated. - -- Bug #1333982: string/number constants were inappropriately stored - in the byte code and co_consts even if they were not used, ie - immediately popped off the stack. - -- Fixed a reference-counting problem in property(). - - -Library -------- - -- Fix a bug in the ``compiler`` package that caused invalid code to be - generated for generator expressions. - -- The distutils version has been changed to 2.5.0. The change to - keep it programmatically in sync with the Python version running - the code (introduced in 2.5b3) has been reverted. It will continue - to be maintained manually as static string literal. - -- If the Python part of a ctypes callback function returns None, - and this cannot be converted to the required C type, an exception is - printed with PyErr_WriteUnraisable. Before this change, the C - callback returned arbitrary values to the calling code. - -- The __repr__ method of a NULL ctypes.py_object() no longer raises - an exception. - -- uuid.UUID now has a bytes_le attribute. This returns the UUID in - little-endian byte order for Windows. In addition, uuid.py gained some - workarounds for clocks with low resolution, to stop the code yielding - duplicate UUIDs. - -- Patch #1540892: site.py Quitter() class attempts to close sys.stdin - before raising SystemExit, allowing IDLE to honor quit() and exit(). - -- Bug #1224621: make tabnanny recognize IndentationErrors raised by tokenize. - -- Patch #1536071: trace.py should now find the full module name of a - file correctly even on Windows. - -- logging's atexit hook now runs even if the rest of the module has - already been cleaned up. - -- Bug #1112549, fix DoS attack on cgi.FieldStorage. - -- Bug #1531405, format_exception no longer raises an exception if - str(exception) raised an exception. - -- Fix a bug in the ``compiler`` package that caused invalid code to be - generated for nested functions. - - -Extension Modules ------------------ - -- Patch #1511317: don't crash on invalid hostname (alias) info. - -- Patch #1535500: fix segfault in BZ2File.writelines and make sure it - raises the correct exceptions. - -- Patch # 1536908: enable building ctypes on OpenBSD/AMD64. The - '-no-stack-protector' compiler flag for OpenBSD has been removed. - -- Patch #1532975 was applied, which fixes Bug #1533481: ctypes now - uses the _as_parameter_ attribute when objects are passed to foreign - function calls. The ctypes version number was changed to 1.0.1. 
- -- Bug #1530559, struct.pack raises TypeError where it used to convert. - Passing float arguments to struct.pack when integers are expected - now triggers a DeprecationWarning. - - -Tests ------ - -- test_socketserver should now work on cygwin and not fail sporadically - on other platforms. - -- test_mailbox should now work on cygwin versions 2006-08-10 and later. - -- Bug #1535182: really test the xreadlines() method of bz2 objects. - -- test_threading now skips testing alternate thread stack sizes on - platforms that don't support changing thread stack size. - - -Documentation -------------- - -- Patch #1534922: unittest docs were corrected and enhanced. - - -Build ------ - -- Bug #1535502, build _hashlib on Windows, and use masm assembler - code in OpenSSL. - -- Bug #1534738, win32 debug version of _msi should be _msi_d.pyd. - -- Bug #1530448, ctypes build failure on Solaris 10 was fixed. - - -C API ------ - -- New API for Unicode rich comparisons: PyUnicode_RichCompare() - -- Bug #1069160. Internal correctness changes were made to - ``PyThreadState_SetAsyncExc()``. A test case was added, and - the documentation was changed to state that the return value - is always 1 (normal) or 0 (if the specified thread wasn't found). - - -What's New in Python 2.5 beta 3? -================================ - -*Release date: 03-AUG-2006* - -Core and builtins ------------------ - -- _PyWeakref_GetWeakrefCount() now returns a Py_ssize_t; it previously - returned a long (see PEP 353). - -- Bug #1515471: string.replace() accepts character buffers again. - -- Add PyErr_WarnEx() so C code can pass the stacklevel to warnings.warn(). - This provides the proper warning for struct.pack(). - PyErr_Warn() is now deprecated in favor of PyErr_WarnEx(). - -- Patch #1531113: Fix augmented assignment with yield expressions. - Also fix a SystemError when trying to assign to yield expressions. - -- Bug #1529871: The speed enhancement patch #921466 broke Python's compliance - with PEP 302. This was fixed by adding an ``imp.NullImporter`` type that is - used in ``sys.path_importer_cache`` to cache non-directory paths and avoid - excessive filesystem operations during imports. - -- Bug #1521947: When checking for overflow, ``PyOS_strtol()`` used some - operations on signed longs that are formally undefined by C. - Unfortunately, at least one compiler now cares about that, so complicated - the code to make that compiler happy again. - -- Bug #1524310: Properly report errors from FindNextFile in os.listdir. - -- Patch #1232023: Stop including current directory in search - path on Windows. - -- Fix some potential crashes found with failmalloc. - -- Fix warnings reported by Klocwork's static analysis tool. - -- Bug #1512814, Fix incorrect lineno's when code within a function - had more than 255 blank lines. - -- Patch #1521179: Python now accepts the standard options ``--help`` and - ``--version`` as well as ``/?`` on Windows. - -- Bug #1520864: unpacking singleton tuples in a 'for' loop (for x, in) works - again. Fixing this problem required changing the .pyc magic number. - This means that .pyc files generated before 2.5b3 will be regenerated. - -- Bug #1524317: Compiling Python ``--without-threads`` failed. - The Python core compiles again, and, in a build without threads, the - new ``sys._current_frames()`` returns a dictionary with one entry, - mapping the faux "thread id" 0 to the current frame. - -- Bug #1525447: build on MacOS X on a case-sensitive filesystem. - - -Library -------- - -- Fix #1693149. 
Now you can pass several modules separated by - comma to trace.py in the same --ignore-module option. - -- Correction of patch #1455898: In the mbcs decoder, set final=False - for stream decoder, but final=True for the decode function. - -- os.urandom no longer masks unrelated exceptions like SystemExit or - KeyboardInterrupt. - -- Bug #1525866: Don't copy directory stat times in - shutil.copytree on Windows - -- Bug #1002398: The documentation for os.path.sameopenfile now correctly - refers to file descriptors, not file objects. - -- The renaming of the xml package to xmlcore, and the import hackery done - to make it appear at both names, has been removed. Bug #1511497, - #1513611, and probably others. - -- Bug #1441397: The compiler module now recognizes module and function - docstrings correctly as it did in Python 2.4. - -- Bug #1529297: The rewrite of doctest for Python 2.4 unintentionally - lost that tests are sorted by name before being run. This rarely - matters for well-written tests, but can create baffling symptoms if - side effects from one test to the next affect outcomes. ``DocTestFinder`` - has been changed to sort the list of tests it returns. - -- The distutils version has been changed to 2.5.0, and is now kept - in sync with sys.version_info[:3]. - -- Bug #978833: Really close underlying socket in _socketobject.close. - -- Bug #1459963: urllib and urllib2 now normalize HTTP header names with - title(). - -- Patch #1525766: In pkgutil.walk_packages, correctly pass the onerror callback - to recursive calls and call it with the failing package name. - -- Bug #1525817: Don't truncate short lines in IDLE's tool tips. - -- Patch #1515343: Fix printing of deprecated string exceptions with a - value in the traceback module. - -- Resync optparse with Optik 1.5.3: minor tweaks for/to tests. - -- Patch #1524429: Use repr() instead of backticks in Tkinter again. - -- Bug #1520914: Change time.strftime() to accept a zero for any position in its - argument tuple. For arguments where zero is illegal, the value is forced to - the minimum value that is correct. This is to support an undocumented but - common way people used to fill in inconsequential information in the time - tuple pre-2.4. - -- Patch #1220874: Update the binhex module for Mach-O. - -- The email package has improved RFC 2231 support, specifically for - recognizing the difference between encoded (name*0*=) and non-encoded - (name*0=) parameter continuations. This may change the types of - values returned from email.message.Message.get_param() and friends. - Specifically in some cases where non-encoded continuations were used, - get_param() used to return a 3-tuple of (None, None, string) whereas now it - will just return the string (since non-encoded continuations don't have - charset and language parts). - - Also, whereas % values were decoded in all parameter continuations, they are - now only decoded in encoded parameter parts. - -- Bug #1517990: IDLE keybindings on MacOS X now work correctly - -- Bug #1517996: IDLE now longer shows the default Tk menu when a - path browser, class browser or debugger is the frontmost window on MacOS X - -- Patch #1520294: Support for getset and member descriptors in types.py, - inspect.py, and pydoc.py. Specifically, this allows for querying the type - of an object against these built-in types and more importantly, for getting - their docstrings printed in the interactive interpreter's help() function. 
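
The descriptor entry just above can be exercised directly from Python; a brief sketch (class ``C`` is invented):

    import inspect, types

    class C(object):
        __slots__ = ('x',)      # __slots__ creates a member descriptor for 'x'

    inspect.ismemberdescriptor(C.__dict__['x'])                # True
    isinstance(C.__dict__['x'], types.MemberDescriptorType)    # True
    inspect.isgetsetdescriptor(type.__dict__['__dict__'])      # True
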
- - -Extension Modules ------------------ - -- Patch #1519025 and bug #926423: If a KeyboardInterrupt occurs during - a socket operation on a socket with a timeout, the exception will be - caught correctly. Previously, the exception was not caught. - -- Patch #1529514: The _ctypes extension is now compiled on more - openbsd target platforms. - -- The ``__reduce__()`` method of the new ``collections.defaultdict`` had - a memory leak, affecting pickles and deep copies. - -- Bug #1471938: Fix curses module build problem on Solaris 8; patch by - Paul Eggert. - -- Patch #1448199: Release interpreter lock in _winreg.ConnectRegistry. - -- Patch #1521817: Index range checking on ctypes arrays containing - exactly one element enabled again. This allows iterating over these - arrays, without the need to check the array size before. - -- Bug #1521375: When the code in ctypes.util.find_library was - run with root privileges, it could overwrite or delete - /dev/null in certain cases; this is now fixed. - -- Bug #1467450: On Mac OS X 10.3, RTLD_GLOBAL is now used as the - default mode for loading shared libraries in ctypes. - -- Because of a misspelled preprocessor symbol, ctypes was always - compiled without thread support; this is now fixed. - -- pybsddb Bug #1527939: bsddb module DBEnv dbremove and dbrename - methods now allow their database parameter to be None as the - sleepycat API allows. - -- Bug #1526460: Fix socketmodule compile on NetBSD as it has a different - bluetooth API compared with Linux and FreeBSD. - -Tests ------ - -- Bug #1501330: Change test_ossaudiodev to be much more tolerant in terms of - how long the test file should take to play. Now accepts taking 2.93 secs - (exact time) +/- 10% instead of the hard-coded 3.1 sec. - -- Patch #1529686: The standard tests ``test_defaultdict``, ``test_iterlen``, - ``test_uuid`` and ``test_email_codecs`` didn't actually run any tests when - run via ``regrtest.py``. Now they do. - -Build ------ - -- Bug #1439538: Drop usage of test -e in configure as it is not portable. - -Mac ---- - -- PythonLauncher now works correctly when the path to the script contains - characters that are treated specially by the shell (such as quotes). - -- Bug #1527397: PythonLauncher now launches scripts with the working directory - set to the directory that contains the script instead of the user home - directory. That latter was an implementation accident and not what users - expect. - - -What's New in Python 2.5 beta 2? -================================ - -*Release date: 11-JUL-2006* - -Core and builtins ------------------ - -- Bug #1441486: The literal representation of -(sys.maxint - 1) - again evaluates to a int object, not a long. - -- Bug #1501934: The scope of global variables that are locally assigned - using augmented assignment is now correctly determined. - -- Bug #927248: Recursive method-wrapper objects can now safely - be released. - -- Bug #1417699: Reject locale-specific decimal point in float() - and atof(). - -- Bug #1511381: codec_getstreamcodec() in codec.c is corrected to - omit a default "error" argument for NULL pointer. This allows - the parser to take a codec from cjkcodecs again. - -- Bug #1519018: 'as' is now validated properly in import statements. - -- On 64 bit systems, int literals that use less than 64 bits are - now ints rather than longs. - -- Bug #1512814, Fix incorrect lineno's when code at module scope - started after line 256. 
- -- New function ``sys._current_frames()`` returns a dict mapping thread - id to topmost thread stack frame. This is for expert use, and is - especially useful for debugging application deadlocks. The functionality - was previously available in Fazal Majid's ``threadframe`` extension - module, but it wasn't possible to do this in a wholly threadsafe way from - an extension. - -Library -------- - -- Bug #1257728: Mention Cygwin in distutils error message about a missing - VS 2003. - -- Patch #1519566: Update turtle demo, make begin_fill idempotent. - -- Bug #1508010: msvccompiler now requires the DISTUTILS_USE_SDK - environment variable to be set in order to the SDK environment - for finding the compiler, include files, etc. - -- Bug #1515998: Properly generate logical ids for files in bdist_msi. - -- warnings.py now ignores ImportWarning by default - -- string.Template() now correctly handles tuple-values. Previously, - multi-value tuples would raise an exception and single-value tuples would - be treated as the value they contain, instead. - -- Bug #822974: Honor timeout in telnetlib.{expect,read_until} - even if some data are received. - -- Bug #1267547: Put proper recursive setup.py call into the - spec file generated by bdist_rpm. - -- Bug #1514693: Update turtle's heading when switching between - degrees and radians. - -- Reimplement turtle.circle using a polyline, to allow correct - filling of arcs. - -- Bug #1514703: Only setup canvas window in turtle when the canvas - is created. - -- Bug #1513223: .close() of a _socketobj now releases the underlying - socket again, which then gets closed as it becomes unreferenced. - -- Bug #1504333: Make sgmllib support angle brackets in quoted - attribute values. - -- Bug #853506: Fix IPv6 address parsing in unquoted attributes in - sgmllib ('[' and ']' were not accepted). - -- Fix a bug in the turtle module's end_fill function. - -- Bug #1510580: The 'warnings' module improperly required that a Warning - category be either a types.ClassType and a subclass of Warning. The proper - check is just that it is a subclass with Warning as the documentation states. - -- The compiler module now correctly compiles the new try-except-finally - statement (bug #1509132). - -- The wsgiref package is now installed properly on Unix. - -- A bug was fixed in logging.config.fileConfig() which caused a crash on - shutdown when fileConfig() was called multiple times. - -- The sqlite3 module did cut off data from the SQLite database at the first - null character before sending it to a custom converter. This has been fixed - now. - -Extension Modules ------------------ - -- #1494314: Fix a regression with high-numbered sockets in 2.4.3. This - means that select() on sockets > FD_SETSIZE (typically 1024) work again. - The patch makes sockets use poll() internally where available. - -- Assigning None to pointer type fields in ctypes structures possible - overwrote the wrong fields, this is fixed now. - -- Fixed a segfault in _ctypes when ctypes.wintypes were imported - on non-Windows platforms. - -- Bug #1518190: The ctypes.c_void_p constructor now accepts any - integer or long, without range checking. - -- Patch #1517790: It is now possible to use custom objects in the ctypes - foreign function argtypes sequence as long as they provide a from_param - method, no longer is it required that the object is a ctypes type. - -- The '_ctypes' extension module now works when Python is configured - with the --without-threads option. 
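
A rough sketch of how the ``sys._current_frames()`` addition noted earlier in this hunk can be used when hunting a deadlock; the helper name is invented:

    import sys, traceback

    def dump_thread_stacks(out=sys.stderr):
        """Write a traceback for every running thread (handy when threads hang)."""
        for thread_id, frame in sys._current_frames().items():
            out.write("\n--- thread %s ---\n" % thread_id)
            traceback.print_stack(frame, file=out)
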
- -- Bug #1513646: os.access on Windows now correctly determines write - access, again. - -- Bug #1512695: cPickle.loads could crash if it was interrupted with - a KeyboardInterrupt. - -- Bug #1296433: parsing XML with a non-default encoding and - a CharacterDataHandler could crash the interpreter in pyexpat. - -- Patch #1516912: improve Modules support for OpenVMS. - -Build ------ - -- Automate Windows build process for the Win64 SSL module. - -- 'configure' now detects the zlib library the same way as distutils. - Previously, the slight difference could cause compilation errors of the - 'zlib' module on systems with more than one version of zlib. - -- The MSI compileall step was fixed to also support a TARGETDIR - with spaces in it. - -- Bug #1517388: sqlite3.dll is now installed on Windows independent - of Tcl/Tk. - -- Bug #1513032: 'make install' failed on FreeBSD 5.3 due to lib-old - trying to be installed even though it's empty. - -Tests ------ - -- Call os.waitpid() at the end of tests that spawn child processes in order - to minimize resources (zombies). - -Documentation -------------- - -- Cover ImportWarning, PendingDeprecationWarning and simplefilter() in the - documentation for the warnings module. - -- Patch #1509163: MS Toolkit Compiler no longer available. - -- Patch #1504046: Add documentation for xml.etree. - - -What's New in Python 2.5 beta 1? -================================ - -*Release date: 20-JUN-2006* - -Core and builtins ------------------ - -- Patch #1507676: Error messages returned by invalid abstract object operations - (such as iterating over an integer) have been improved and now include the - type of the offending object to help with debugging. - -- Bug #992017: A classic class that defined a __coerce__() method that returned - its arguments swapped would infinitely recurse and segfault the interpreter. - -- Fix the socket tests so they can be run concurrently. - -- Removed 5 integers from C frame objects (PyFrameObject). - f_nlocals, f_ncells, f_nfreevars, f_stack_size, f_restricted. - -- Bug #532646: object.__call__() will continue looking for the __call__ - attribute on objects until one without one is found. This leads to recursion - when you take a class and set its __call__ attribute to an instance of the - class. Originally fixed for classic classes, but this fix is for new-style. - Removes the infinite_rec_3 crasher. - -- The string and unicode methods startswith() and endswith() now accept - a tuple of prefixes/suffixes to look for. Implements RFE #1491485. - -- Buffer objects, at the C level, never used the char buffer - implementation even when the char buffer for the wrapped object was - explicitly requested (originally returned the read or write buffer). - Now a TypeError is raised if the char buffer is not present but is - requested. - -- Patch #1346214: Statements like "if 0: suite" are now again optimized - away like they were in Python 2.4. - -- Builtin exceptions are now full-blown new-style classes instead of - instances pretending to be classes, which speeds up exception handling - by about 80% in comparison to 2.5a2. - -- Patch #1494554: Update unicodedata.numeric and unicode.isnumeric to - Unicode 4.1. - -- Patch #921466: sys.path_importer_cache is now used to cache valid and - invalid file paths for the built-in import machinery which leads to - fewer open calls on startup. - -- Patch #1442927: ``long(str, base)`` is now up to 6x faster for non-power- - of-2 bases. The largest speedup is for inputs with about 1000 decimal - digits. 
Conversion from non-power-of-2 bases remains quadratic-time in - the number of input digits (it was and remains linear-time for bases - 2, 4, 8, 16 and 32). - -- Bug #1334662: ``int(string, base)`` could deliver a wrong answer - when ``base`` was not 2, 4, 8, 10, 16 or 32, and ``string`` represented - an integer close to ``sys.maxint``. This was repaired by patch - #1335972, which also gives a nice speedup. - -- Patch #1337051: reduced size of frame objects. - -- PyErr_NewException now accepts a tuple of base classes as its - "base" parameter. - -- Patch #876206: function call speedup by retaining allocated frame - objects. - -- Bug #1462152: file() now checks more thoroughly for invalid mode - strings and removes a possible "U" before passing the mode to the - C library function. - -- Patch #1488312, Fix memory alignment problem on SPARC in unicode - -- Bug #1487966: Fix SystemError with conditional expression in assignment - -- WindowsError now has two error code attributes: errno, which carries - the error values from errno.h, and winerror, which carries the error - values from winerror.h. Previous versions put the winerror.h values - (from GetLastError()) into the errno attribute. - -- Patch #1475845: Raise IndentationError for unexpected indent. - -- Patch #1479181: split open() and file() from being aliases for each other. - -- Patch #1497053 & bug #1275608: Exceptions occurring in ``__eq__()`` - methods were always silently ignored by dictionaries when comparing keys. - They are now passed through (except when using the C API function - ``PyDict_GetItem()``, whose semantics did not change). - -- Bug #1456209: In some obscure cases it was possible for a class with a - custom ``__eq__()`` method to confuse dict internals when class instances - were used as a dict's keys and the ``__eq__()`` method mutated the dict. - No, you don't have any code that did this ;-) - -Extension Modules ------------------ - -- Bug #1295808: expat symbols should be namespaced in pyexpat - -- Patch #1462338: Upgrade pyexpat to expat 2.0.0 - -- Change binascii.hexlify to accept a read-only buffer instead of only a char - buffer and actually follow its documentation. - -- Fixed a potentially invalid memory access of CJKCodecs' shift-jis decoder. - -- Patch #1478788 (modified version): The functional extension module has - been renamed to _functools and a functools Python wrapper module added. - This provides a home for additional function related utilities that are - not specifically about functional programming. See PEP 309. - -- Patch #1493701: performance enhancements for struct module. - -- Patch #1490224: time.altzone is now set correctly on Cygwin. - -- Patch #1435422: zlib's compress and decompress objects now have a - copy() method. - -- Patch #1454481: thread stack size is now tunable at runtime for thread - enabled builds on Windows and systems with Posix threads support. - -- On Win32, os.listdir now supports arbitrarily-long Unicode path names - (up to the system limit of 32K characters). - -- Use Win32 API to implement os.{access,chdir,chmod,mkdir,remove,rename,rmdir,utime}. - As a result, these functions now raise WindowsError instead of OSError. - -- ``time.clock()`` on Win64 should use the high-performance Windows - ``QueryPerformanceCounter()`` now (as was already the case on 32-bit - Windows platforms). - -- Calling Tk_Init twice is refused if the first call failed as that - may deadlock. 
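
One entry above notes that zlib compression objects grew a ``copy()`` method; a small sketch of why that is useful (the data is made up):

    import zlib

    shared_prefix = "common header " * 100
    c = zlib.compressobj()
    prefix_bytes = c.compress(shared_prefix)     # compress the shared part once

    fork = c.copy()                              # snapshot the compressor state
    doc_a = prefix_bytes + c.compress("body A") + c.flush()
    doc_b = prefix_bytes + fork.compress("body B") + fork.flush()

    zlib.decompress(doc_a).endswith("body A")    # True
    zlib.decompress(doc_b).endswith("body B")    # True
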
- -- bsddb: added the DB_ARCH_REMOVE flag and fixed db.DBEnv.log_archive() to - accept it without potentially using an uninitialized pointer. - -- bsddb: added support for the DBEnv.log_stat() and DBEnv.lsn_reset() methods - assuming BerkeleyDB >= 4.0 and 4.4 respectively. [pybsddb project SF - patch numbers 1494885 and 1494902] - -- bsddb: added an interface for the BerkeleyDB >= 4.3 DBSequence class. - [pybsddb project SF patch number 1466734] - -- bsddb: fix DBCursor.pget() bug with keyword argument names when no data - parameter is supplied. [SF pybsddb bug #1477863] - -- bsddb: the __len__ method of a DB object has been fixed to return correct - results. It could previously incorrectly return 0 in some cases. - Fixes SF bug 1493322 (pybsddb bug 1184012). - -- bsddb: the bsddb.dbtables Modify method now raises the proper error and - aborts the db transaction safely when a modifier callback fails. - Fixes SF python patch/bug #1408584. - -- bsddb: multithreaded DB access using the simple bsddb module interface - now works reliably. It has been updated to use automatic BerkeleyDB - deadlock detection and the bsddb.dbutils.DeadlockWrap wrapper to retry - database calls that would previously deadlock. [SF python bug #775414] - -- Patch #1446489: add support for the ZIP64 extensions to zipfile. - -- Patch #1506645: add Python wrappers for the curses functions - is_term_resized, resize_term and resizeterm. - -Library -------- - -- Patch #815924: Restore ability to pass type= and icon= in tkMessageBox - functions. - -- Patch #812986: Update turtle output even if not tracing. - -- Patch #1494750: Destroy master after deleting children in - Tkinter.BaseWidget. - -- Patch #1096231: Add ``default`` argument to Tkinter.Wm.wm_iconbitmap. - -- Patch #763580: Add name and value arguments to Tkinter variable - classes. - -- Bug #1117556: SimpleHTTPServer now tries to find and use the system's - mime.types file for determining MIME types. - -- Bug #1339007: Shelf objects now don't raise an exception in their - __del__ method when initialization failed. - -- Patch #1455898: The MBCS codec now supports the incremental mode for - double-byte encodings. - -- ``difflib``'s ``SequenceMatcher.get_matching_blocks()`` was changed to - guarantee that adjacent triples in the return list always describe - non-adjacent blocks. Previously, a pair of matching blocks could end - up being described by multiple adjacent triples that formed a partition - of the matching pair. - -- Bug #1498146: fix optparse to handle Unicode strings in option help, - description, and epilog. - -- Bug #1366250: minor optparse documentation error. - -- Bug #1361643: fix textwrap.dedent() so it handles tabs appropriately; - clarify docs. - -- The wsgiref package has been added to the standard library. - -- The functions update_wrapper() and wraps() have been added to the functools - module. These make it easier to copy relevant metadata from the original - function when writing wrapper functions. - -- The optional ``isprivate`` argument to ``doctest.testmod()``, and the - ``doctest.is_private()`` function, both deprecated in 2.4, were removed. - -- Patch #1359618: Speed up charmap encoder by using a trie structure - for lookup. - -- The functions in the ``pprint`` module now sort dictionaries by key - before computing the display. Before 2.5, ``pprint`` sorted a dictionary - if and only if its display required more than one line, although that - wasn't documented. 
The new behavior increases predictability; e.g., - using ``pprint.pprint(a_dict)`` in a doctest is now reliable. - -- Patch #1497027: try HTTP digest auth before basic auth in urllib2 - (thanks for J. J. Lee). - -- Patch #1496206: improve urllib2 handling of passwords with respect to - default HTTP and HTTPS ports. - -- Patch #1080727: add "encoding" parameter to doctest.DocFileSuite. - -- Patch #1281707: speed up gzip.readline. - -- Patch #1180296: Two new functions were added to the locale module: - format_string() to get the effect of "format % items" but locale-aware, - and currency() to format a monetary number with currency sign. - -- Patch #1486962: Several bugs in the turtle Tk demo module were fixed - and several features added, such as speed and geometry control. - -- Patch #1488881: add support for external file objects in bz2 compressed - tarfiles. - -- Patch #721464: pdb.Pdb instances can now be given explicit stdin and - stdout arguments, making it possible to redirect input and output - for remote debugging. - -- Patch #1484695: Update the tarfile module to version 0.8. This fixes - a couple of issues, notably handling of long file names using the - GNU LONGNAME extension. - -- Patch #1478292. ``doctest.register_optionflag(name)`` shouldn't create a - new flag when ``name`` is already the name of an option flag. - -- Bug #1385040: don't allow "def foo(a=1, b): pass" in the compiler - package. - -- Patch #1472854: make the rlcompleter.Completer class usable on non- - UNIX platforms. - -- Patch #1470846: fix urllib2 ProxyBasicAuthHandler. - -- Bug #1472827: correctly escape newlines and tabs in attribute values in - the saxutils.XMLGenerator class. - - -Build ------ - -- Bug #1502728: Correctly link against librt library on HP-UX. - -- OpenBSD 3.9 is supported now. - -- Patch #1492356: Port to Windows CE. - -- Bug/Patch #1481770: Use .so extension for shared libraries on HP-UX for ia64. - -- Patch #1471883: Add --enable-universalsdk. - -C API ------ - -Tests ------ - -Tools ------ - -Documentation -------------- - - - -What's New in Python 2.5 alpha 2? -================================= - -*Release date: 27-APR-2006* - -Core and builtins ------------------ - -- Bug #1465834: 'bdist_wininst preinstall script support' was fixed - by converting these apis from macros into exported functions again: - - PyParser_SimpleParseFile PyParser_SimpleParseString PyRun_AnyFile - PyRun_AnyFileEx PyRun_AnyFileFlags PyRun_File PyRun_FileEx - PyRun_FileFlags PyRun_InteractiveLoop PyRun_InteractiveOne - PyRun_SimpleFile PyRun_SimpleFileEx PyRun_SimpleString - PyRun_String Py_CompileString - -- Under COUNT_ALLOCS, types are not necessarily immortal anymore. - -- All uses of PyStructSequence_InitType have been changed to initialize - the type objects only once, even if the interpreter is initialized - multiple times. - -- Bug #1454485, array.array('u') could crash the interpreter. This was - due to PyArgs_ParseTuple(args, 'u#', ...) trying to convert buffers (strings) - to unicode when it didn't make sense. 'u#' now requires a unicode string. - -- Py_UNICODE is unsigned. It was always documented as unsigned, but - due to a bug had a signed value in previous versions. - -- Patch #837242: ``id()`` of any Python object always gives a positive - number now, which might be a long integer. ``PyLong_FromVoidPtr`` and - ``PyLong_AsVoidPtr`` have been changed accordingly. 
Note that it has - never been correct to implement a ``__hash()__`` method that returns the - ``id()`` of an object: - - def __hash__(self): - return id(self) # WRONG - - because a hash result must be a (short) Python int but it was always - possible for ``id()`` to return a Python long. However, because ``id()`` - could return negative values before, on a 32-bit box an ``id()`` result - was always usable as a hash value before this patch. That's no longer - necessarily so. - -- Python on OS X 10.3 and above now uses dlopen() (via dynload_shlib.c) - to load extension modules and now provides the dl module. As a result, - sys.setdlopenflags() now works correctly on these systems. (SF patch - #1454844) - -- Patch #1463867: enhanced garbage collection to allow cleanup of cycles - involving generators that have paused outside of any ``try`` or ``with`` - blocks. (In 2.5a1, a paused generator that was part of a reference - cycle could not be garbage collected, regardless of whether it was - paused in a ``try`` or ``with`` block.) - -Extension Modules ------------------ - -- Patch #1191065: Fix preprocessor problems on systems where recvfrom - is a macro. - -- Bug #1467952: os.listdir() now correctly raises an error if readdir() - fails with an error condition. - -- Fixed bsddb.db.DBError derived exceptions so they can be unpickled. - -- Bug #1117761: bsddb.*open() no longer raises an exception when using - the cachesize parameter. - -- Bug #1149413: bsddb.*open() no longer raises an exception when using - a temporary db (file=None) with the 'n' flag to truncate on open. - -- Bug #1332852: bsddb module minimum BerkeleyDB version raised to 3.3 - as older versions cause excessive test failures. - -- Patch #1062014: AF_UNIX sockets under Linux have a special - abstract namespace that is now fully supported. - -Library -------- - -- Bug #1223937: subprocess.CalledProcessError reports the exit status - of the process using the returncode attribute, instead of - abusing errno. - -- Patch #1475231: ``doctest`` has a new ``SKIP`` option, which causes - a doctest to be skipped (the code is not run, and the expected output - or exception is ignored). - -- Fixed contextlib.nested to cope with exceptions being raised and - caught inside exit handlers. - -- Updated optparse module to Optik 1.5.1 (allow numeric constants in - hex, octal, or binary; add ``append_const`` action; keep going if - gettext cannot be imported; added ``OptionParser.destroy()`` method; - added ``epilog`` for better help generation). - -- Bug #1473760: ``tempfile.TemporaryFile()`` could hang on Windows, when - called from a thread spawned as a side effect of importing a module. - -- The pydoc module now supports documenting packages contained in - .zip or .egg files. - -- The pkgutil module now has several new utility functions, such - as ``walk_packages()`` to support working with packages that are either - in the filesystem or zip files. - -- The mailbox module can now modify and delete messages from - mailboxes, in addition to simply reading them. Thanks to Gregory - K. Johnson for writing the code, and to the 2005 Google Summer of - Code for funding his work. - -- The ``__del__`` method of class ``local`` in module ``_threading_local`` - returned before accomplishing any of its intended cleanup. - -- Patch #790710: Add breakpoint command lists in pdb. - -- Patch #1063914: Add Tkinter.Misc.clipboard_get(). - -- Patch #1191700: Adjust column alignment in bdb breakpoint lists. 
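
The ``subprocess.CalledProcessError`` entry earlier in this hunk can be observed with a trivially failing command; this sketch assumes a Unix ``false`` binary on PATH:

    import subprocess

    try:
        subprocess.check_call(["false"])          # exits with status 1
    except subprocess.CalledProcessError, exc:
        exc.returncode                            # 1 -- exit status, no errno abuse
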
- -- SimpleXMLRPCServer relied on the fcntl module, which is unavailable on - Windows. Bug #1469163. - -- The warnings, linecache, inspect, traceback, site, and doctest modules - were updated to work correctly with modules imported from zipfiles or - via other PEP 302 __loader__ objects. - -- Patch #1467770: Reduce usage of subprocess._active to processes which - the application hasn't waited on. - -- Patch #1462222: Fix Tix.Grid. - -- Fix exception when doing glob.glob('anything*/') - -- The pstats.Stats class accepts an optional stream keyword argument to - direct output to an alternate file-like object. - -Build ------ - -- The Makefile now has a reindent target, which runs reindent.py on - the library. - -- Patch #1470875: Building Python with MS Free Compiler - -- Patch #1161914: Add a python-config script. - -- Patch #1324762:Remove ccpython.cc; replace --with-cxx with - --with-cxx-main. Link with C++ compiler only if --with-cxx-main was - specified. (Can be overridden by explicitly setting LINKCC.) Decouple - CXX from --with-cxx-main, see description in README. - -- Patch #1429775: Link extension modules with the shared libpython. - -- Fixed a libffi build problem on MIPS systems. - -- ``PyString_FromFormat``, ``PyErr_Format``, and ``PyString_FromFormatV`` - now accept formats "%u" for unsigned ints, "%lu" for unsigned longs, - and "%zu" for unsigned integers of type ``size_t``. - -Tests ------ - -- test_contextlib now checks contextlib.nested can cope with exceptions - being raised and caught inside exit handlers. - -- test_cmd_line now checks operation of the -m and -c command switches - -- The test_contextlib test in 2.5a1 wasn't actually run unless you ran - it separately and by hand. It also wasn't cleaning up its changes to - the current Decimal context. - -- regrtest.py now has a -M option to run tests that test the new limits of - containers, on 64-bit architectures. Running these tests is only sensible - on 64-bit machines with more than two gigabytes of memory. The argument - passed is the maximum amount of memory for the tests to use. - -Tools ------ - -- Added the Python benchmark suite pybench to the Tools/ directory; - contributed by Marc-Andre Lemburg. - -Documentation -------------- - -- Patch #1473132: Improve docs for ``tp_clear`` and ``tp_traverse``. - -- PEP 343: Added Context Types section to the library reference - and attempted to bring other PEP 343 related documentation into - line with the implementation and/or python-dev discussions. - -- Bug #1337990: clarified that ``doctest`` does not support examples - requiring both expected output and an exception. - - -What's New in Python 2.5 alpha 1? -================================= - -*Release date: 05-APR-2006* - -Core and builtins ------------------ - -- PEP 338: -m command line switch now delegates to runpy.run_module - allowing it to support modules in packages and zipfiles - -- On Windows, .DLL is not an accepted file name extension for - extension modules anymore; extensions are only found if they - end in .PYD. - -- Bug #1421664: sys.stderr.encoding is now set to the same value as - sys.stdout.encoding. - -- __import__ accepts keyword arguments. - -- Patch #1460496: round() now accepts keyword arguments. - -- Fixed bug #1459029 - unicode reprs were double-escaped. - -- Patch #1396919: The system scope threads are reenabled on FreeBSD - 5.4 and later versions. - -- Bug #1115379: Compiling a Unicode string with an encoding declaration - now gives a SyntaxError. 
- -- Previously, Python code had no easy way to access the contents of a - cell object. Now, a ``cell_contents`` attribute has been added - (closes patch #1170323). - -- Patch #1123430: Python's small-object allocator now returns an arena to - the system ``free()`` when all memory within an arena becomes unused - again. Prior to Python 2.5, arenas (256KB chunks of memory) were never - freed. Some applications will see a drop in virtual memory size now, - especially long-running applications that, from time to time, temporarily - use a large number of small objects. Note that when Python returns an - arena to the platform C's ``free()``, there's no guarantee that the - platform C library will in turn return that memory to the operating system. - The effect of the patch is to stop making that impossible, and in tests it - appears to be effective at least on Microsoft C and gcc-based systems. - Thanks to Evan Jones for hard work and patience. - -- Patch #1434038: property() now uses the getter's docstring if there is - no "doc" argument given. This makes it possible to legitimately use - property() as a decorator to produce a read-only property. - -- PEP 357, patch 1436368: add an __index__ method to int/long and a matching - nb_index slot to the PyNumberMethods struct. The slot is consulted instead - of requiring an int or long in slicing and a few other contexts, enabling - other objects (e.g. Numeric Python's integers) to be used as slice indices. - -- Fixed various bugs reported by Coverity's Prevent tool. - -- PEP 352, patch #1104669: Make exceptions new-style objects. Introduced the - new exception base class, BaseException, which has a new message attribute. - KeyboardInterrupt and SystemExit to directly inherit from BaseException now. - Raising a string exception now raises a DeprecationWarning. - -- Patch #1438387, PEP 328: relative and absolute imports. Imports can now be - explicitly relative, using 'from .module import name' to mean 'from the same - package as this module is in. Imports without dots still default to the - old relative-then-absolute, unless 'from __future__ import - absolute_import' is used. - -- Properly check if 'warnings' raises an exception (usually when a filter set - to "error" is triggered) when raising a warning for raising string - exceptions. - -- CO_GENERATOR_ALLOWED is no longer defined. This behavior is the default. - The name was removed from Include/code.h. - -- PEP 308: conditional expressions were added: (x if cond else y). - -- Patch 1433928: - - The copy module now "copies" function objects (as atomic objects). - - dict.__getitem__ now looks for a __missing__ hook before raising - KeyError. - -- PEP 343: with statement implemented. Needs ``from __future__ import - with_statement``. Use of 'with' as a variable will generate a warning. - Use of 'as' as a variable will also generate a warning (unless it's - part of an import statement). - The following objects have __context__ methods: - - The built-in file type. - - The thread.LockType type. - - The following types defined by the threading module: - Lock, RLock, Condition, Semaphore, BoundedSemaphore. - - The decimal.Context class. - -- Fix the encodings package codec search function to only search - inside its own package. Fixes problem reported in patch #1433198. - - Note: Codec packages should implement and register their own - codec search function. PEP 100 has the details. - -- PEP 353: Using ``Py_ssize_t`` as the index type. 
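
The property() change noted above is what makes the read-only-property decorator idiom work cleanly; a short sketch with an invented ``Circle`` class:

    class Circle(object):
        def __init__(self, radius):
            self._radius = radius

        @property
        def area(self):
            "Area of the circle (read-only)."    # reused as the property's docstring
            return 3.14159 * self._radius ** 2

    Circle(2).area          # 12.56636
    Circle.area.__doc__     # 'Area of the circle (read-only).'
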
- -- ``PYMALLOC_DEBUG`` builds now add ``4*sizeof(size_t)`` bytes of debugging - info to each allocated block, since the ``Py_ssize_t`` changes (PEP 353) - now allow Python to make use of memory blocks exceeding 2**32 bytes for - some purposes on 64-bit boxes. A ``PYMALLOC_DEBUG`` build was limited - to 4-byte allocations before. - -- Patch #1400181, fix unicode string formatting to not use the locale. - This is how string objects work. u'%f' could use , instead of . - for the decimal point. Now both strings and unicode always use periods. - -- Bug #1244610, #1392915, fix build problem on OpenBSD 3.7 and 3.8. - configure would break checking curses.h. - -- Bug #959576: The pwd module is now builtin. This allows Python to be - built on UNIX platforms without $HOME set. - -- Bug #1072182, fix some potential problems if characters are signed. - -- Bug #889500, fix line number on SyntaxWarning for global declarations. - -- Bug #1378022, UTF-8 files with a leading BOM crashed the interpreter. - -- Support for converting hex strings to floats no longer works. - This was not portable. float('0x3') now raises a ValueError. - -- Patch #1382163: Expose Subversion revision number to Python. New C API - function Py_GetBuildNumber(). New attribute sys.subversion. Build number - is now displayed in interactive prompt banner. - -- Implementation of PEP 341 - Unification of try/except and try/finally. - "except" clauses can now be written together with a "finally" clause in - one try statement instead of two nested ones. Patch #1355913. - -- Bug #1379994: Builtin unicode_escape and raw_unicode_escape codec - now encodes backslash correctly. - -- Patch #1350409: Work around signal handling bug in Visual Studio 2005. - -- Bug #1281408: Py_BuildValue now works correctly even with unsigned longs - and long longs. - -- SF Bug #1350188, "setdlopenflags" leads to crash upon "import" - It was possible for dlerror() to return a NULL pointer, so - it will now use a default error message in this case. - -- Replaced most Unicode charmap codecs with new ones using the - new Unicode translate string feature in the builtin charmap - codec; the codecs were created from the mapping tables available - at ftp.unicode.org and contain a few updates (e.g. the Mac OS - encodings now include a mapping for the Apple logo) - -- Added a few more codecs for Mac OS encodings - -- Sped up some Unicode operations. - -- A new AST parser implementation was completed. The abstract - syntax tree is available for read-only (non-compile) access - to Python code; an _ast module was added. - -- SF bug #1167751: fix incorrect code being produced for generator expressions. - The following code now raises a SyntaxError: foo(a = i for i in range(10)) - -- SF Bug #976608: fix SystemError when mtime of an imported file is -1. - -- SF Bug #887946: fix segfault when redirecting stdin from a directory. - Provide a warning when a directory is passed on the command line. - -- Fix segfault with invalid coding. - -- SF bug #772896: unknown encoding results in MemoryError. - -- All iterators now have a Boolean value of True. Formerly, some iterators - supported a __len__() method which evaluated to False when the iterator - was empty. - -- On 64-bit platforms, when __len__() returns a value that cannot be - represented as a C int, raise OverflowError. - -- test__locale is skipped on OS X < 10.4 (only partial locale support is - present). - -- SF bug #893549: parsing keyword arguments was broken with a few format - codes. 
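
The PEP 341 entry above collapses the two historic nested forms into a single statement; a sketch using placeholder functions (``fetch``, ``log_error`` and ``cleanup`` are invented):

    def fetch():
        raise IOError("simulated failure")    # stand-in for real work

    def log_error(exc):
        pass

    def cleanup():
        pass

    try:
        result = fetch()
    except IOError, exc:          # the finally clause now sits on the same try statement
        log_error(exc)
    finally:
        cleanup()                 # runs whether or not the except clause fired
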
- -- Changes donated by Elemental Security to make it work on AIX 5.3 - with IBM's 64-bit compiler (SF patch #1284289). This also closes SF - bug #105470: test_pwd fails on 64bit system (Opteron). - -- Changes donated by Elemental Security to make it work on HP-UX 11 on - Itanium2 with HP's 64-bit compiler (SF patch #1225212). - -- Disallow keyword arguments for type constructors that don't use them - (fixes bug #1119418). - -- Forward UnicodeDecodeError into SyntaxError for source encoding errors. - -- SF bug #900092: When tracing (e.g. for hotshot), restore 'return' events for - exceptions that cause a function to exit. - -- The implementation of set() and frozenset() was revised to use its - own internal data structure. Memory consumption is reduced by 1/3 - and there are modest speed-ups as well. The API is unchanged. - -- SF bug #1238681: freed pointer is used in longobject.c:long_pow(). - -- SF bug #1229429: PyObject_CallMethod failed to decrement some - reference counts in some error exit cases. - -- SF bug #1185883: Python's small-object memory allocator took over - a block managed by the platform C library whenever a realloc specified - a small new size. However, there's no portable way to know then how - much of the address space following the pointer is valid, so there's no - portable way to copy data from the C-managed block into Python's - small-object space without risking a memory fault. Python's small-object - realloc now leaves such blocks under the control of the platform C - realloc. - -- SF bug #1232517: An overflow error was not detected properly when - attempting to convert a large float to an int in os.utime(). - -- SF bug #1224347: hex longs now print with lowercase letters just - like their int counterparts. - -- SF bug #1163563: the original fix for bug #1010677 ("thread Module - Breaks PyGILState_Ensure()") broke badly in the case of multiple - interpreter states; back out that fix and do a better job (see - http://mail.python.org/pipermail/python-dev/2005-June/054258.html - for a longer write-up of the problem). - -- SF patch #1180995: marshal now uses a binary format by default when - serializing floats. - -- SF patch #1181301: on platforms that appear to use IEEE 754 floats, - the routines that promise to produce IEEE 754 binary representations - of floats now simply copy bytes around. - -- bug #967182: disallow opening files with 'wU' or 'aU' as specified by PEP - 278. - -- patch #1109424: int, long, float, complex, and unicode now check for the - proper magic slot for type conversions when subclassed. Previously the - magic slot was ignored during conversion. Semantics now match the way - subclasses of str always behaved. int/long/float, conversion of an instance - to the base class has been moved to the proper nb_* magic slot and out of - PyNumber_*(). - Thanks Walter D???rwald. - -- Descriptors defined in C with a PyGetSetDef structure, where the setter is - NULL, now raise an AttributeError when attempting to set or delete the - attribute. Previously a TypeError was raised, but this was inconsistent - with the equivalent pure-Python implementation. - -- It is now safe to call PyGILState_Release() before - PyEval_InitThreads() (note that if there is reason to believe there - are multiple threads around you still must call PyEval_InitThreads() - before using the Python API; this fix is for extension modules that - have no way of knowing if Python is multi-threaded yet). 
- -- Typing Ctrl-C whilst raw_input() was waiting in a build with threads - disabled caused a crash. - -- Bug #1165306: instancemethod_new allowed the creation of a method - with im_class == im_self == NULL, which caused a crash when called. - -- Move exception finalisation later in the shutdown process - this - fixes the crash seen in bug #1165761 - -- Added two new builtins, any() and all(). - -- Defining a class with empty parentheses is now allowed - (e.g., ``class C(): pass`` is no longer a syntax error). - Patch #1176012 added support to the 'parser' module and 'compiler' package - (thanks to logistix for that added support). - -- Patch #1115086: Support PY_LONGLONG in structmember. - -- Bug #1155938: new style classes did not check that __init__() was - returning None. - -- Patch #802188: Report characters after line continuation character - ('\') with a specific error message. - -- Bug #723201: Raise a TypeError for passing bad objects to 'L' format. - -- Bug #1124295: the __name__ attribute of file objects was - inadvertently made inaccessible in restricted mode. - -- Bug #1074011: closing sys.std{out,err} now causes a flush() and - an ferror() call. - -- min() and max() now support key= arguments with the same meaning as in - list.sort(). - -- The peephole optimizer now performs simple constant folding in expressions: - (2+3) --> (5). - -- set and frozenset objects can now be marshalled. SF #1098985. - -- Bug #1077106: Poor argument checking could cause memory corruption - in calls to os.read(). - -- The parser did not complain about future statements in illegal - positions. It once again reports a syntax error if a future - statement occurs after anything other than a doc string. - -- Change the %s format specifier for str objects so that it returns a - unicode instance if the argument is not an instance of basestring and - calling __str__ on the argument returns a unicode instance. - -- Patch #1413181: changed ``PyThreadState_Delete()`` to forget about the - current thread state when the auto-GIL-state machinery knows about - it (since the thread state is being deleted, continuing to remember it - can't help, but can hurt if another thread happens to get created with - the same thread id). - -Extension Modules ------------------ - -- Patch #1380952: fix SSL objects timing out on consecutive read()s - -- Patch #1309579: wait3 and wait4 were added to the posix module. - -- Patch #1231053: The audioop module now supports encoding/decoding of alaw. - In addition, the existing ulaw code was updated. - -- RFE #567972: Socket objects' family, type and proto properties are - now exposed via new attributes. - -- Everything under lib-old was removed. This includes the following modules: - Para, addpack, cmp, cmpcache, codehack, dircmp, dump, find, fmt, grep, - lockfile, newdir, ni, packmail, poly, rand, statcache, tb, tzparse, - util, whatsound, whrandom, zmod - -- The following modules were removed: regsub, reconvert, regex, regex_syntax. - -- re and sre were swapped, so help(re) provides full help. importing sre - is deprecated. The undocumented re.engine variable no longer exists. - -- Bug #1448490: Fixed a bug that ISO-2022 codecs could not handle - SS2 (single-shift 2) escape sequences correctly. - -- The unicodedata module was updated to the 4.1 version of the Unicode - database. The 3.2 version is still available as unicodedata.db_3_2_0 - for applications that require this specific version (such as IDNA). - -- The timing module is no longer built by default. 
It was deprecated - in PEP 4 in Python 2.0 or earlier. - -- Patch 1433928: Added a new type, defaultdict, to the collections module. - This uses the new __missing__ hook behavior added to dict (see above). - -- Bug #854823: socketmodule now builds on Sun platforms even when - INET_ADDRSTRLEN is not defined. - -- Patch #1393157: os.startfile() now has an optional argument to specify - a "command verb" to invoke on the file. - -- Bug #876637, prevent stack corruption when socket descriptor - is larger than FD_SETSIZE. - -- Patch #1407135, bug #1424041: harmonize mmap behavior of anonymous memory. - mmap.mmap(-1, size) now returns anonymous memory in both Unix and Windows. - mmap.mmap(0, size) should not be used on Windows for anonymous memory. - -- Patch #1422385: The nis module now supports access to domains other - than the system default domain. - -- Use Win32 API to implement os.stat/fstat. As a result, subsecond timestamps - are reported, the limit on path name lengths is removed, and stat reports - WindowsError now (instead of OSError). - -- Add bsddb.db.DBEnv.set_tx_timestamp allowing time based database recovery. - -- Bug #1413192, fix seg fault in bsddb if a transaction was deleted - before the env. - -- Patch #1103116: Basic AF_NETLINK support. - -- Bug #1402308, (possible) segfault when using mmap.mmap(-1, ...) - -- Bug #1400822, _curses over{lay,write} doesn't work when passing 6 ints. - Also fix ungetmouse() which did not accept arguments properly. - The code now conforms to the documented signature. - -- Bug #1400115, Fix segfault when calling curses.panel.userptr() - without prior setting of the userptr. - -- Fix 64-bit problems in bsddb. - -- Patch #1365916: fix some unsafe 64-bit mmap methods. - -- Bug #1290333: Added a workaround for cjkcodecs' _codecs_cn build - problem on AIX. - -- Bug #869197: os.setgroups rejects long integer arguments - -- Bug #1346533, select.poll() doesn't raise an error if timeout > sys.maxint - -- Bug #1344508, Fix UNIX mmap leaking file descriptors - -- Patch #1338314, Bug #1336623: fix tarfile so it can extract - REGTYPE directories from tarfiles written by old programs. - -- Patch #1407992, fixes broken bsddb module db associate when using - BerkeleyDB 3.3, 4.0 or 4.1. - -- Get bsddb module to build with BerkeleyDB version 4.4 - -- Get bsddb module to build with BerkeleyDB version 3.2 - -- Patch #1309009, Fix segfault in pyexpat when the XML document is in latin_1, - but Python incorrectly assumes it is in UTF-8 format - -- Fix parse errors in the readline module when compiling without threads. - -- Patch #1288833: Removed thread lock from socket.getaddrinfo on - FreeBSD 5.3 and later versions which got thread-safe getaddrinfo(3). - -- Patches #1298449 and #1298499: Add some missing checks for error - returns in cStringIO.c. - -- Patch #1297028: fix segfault if call type on MultibyteCodec, - MultibyteStreamReader, or MultibyteStreamWriter - -- Fix memory leak in posix.access(). - -- Patch #1213831: Fix typo in unicodedata._getcode. - -- Bug #1007046: os.startfile() did not accept unicode strings encoded in - the file system encoding. - -- Patch #756021: Special-case socket.inet_aton('255.255.255.255') for - platforms that don't have inet_aton(). - -- Bug #1215928: Fix bz2.BZ2File.seek() for 64-bit file offsets. - -- Bug #1191043: Fix bz2.BZ2File.(x)readlines for files containing one - line without newlines. - -- Bug #728515: mmap.resize() now resizes the file on Unix as it did - on Windows. 
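
A tiny sketch of the ``defaultdict`` type mentioned above; the sample text is made up:

    from collections import defaultdict

    counts = defaultdict(int)     # missing keys are created via the new __missing__ hook
    for word in "the quick brown fox jumps over the lazy dog the".split():
        counts[word] += 1

    counts["the"]                 # 3
    counts["cat"]                 # 0 -- created on first access
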
- -- Patch #1180695: Add nanosecond stat resolution, and st_gen, - st_birthtime for FreeBSD. - -- Patch #1231069: The fcntl.ioctl function now uses the 'I' code for - the request code argument, which results in more C-like behaviour - for large or negative values. - -- Bug #1234979: For the argument of thread.Lock.acquire, the Windows - implementation treated all integer values except 1 as false. - -- Bug #1194181: bz2.BZ2File didn't handle mode 'U' correctly. - -- Patch #1212117: os.stat().st_flags is now accessible as a attribute - if available on the platform. - -- Patch #1103951: Expose O_SHLOCK and O_EXLOCK in the posix module if - available on the platform. - -- Bug #1166660: The readline module could segfault if hook functions - were set in a different thread than that which called readline. - -- collections.deque objects now support a remove() method. - -- operator.itemgetter() and operator.attrgetter() now support retrieving - multiple fields. This provides direct support for sorting on multiple - keys (primary, secondary, etc). - -- os.access now supports Unicode path names on non-Win32 systems. - -- Patches #925152, #1118602: Avoid reading after the end of the buffer - in pyexpat.GetInputContext. - -- Patches #749830, #1144555: allow UNIX mmap size to default to current - file size. - -- Added functional.partial(). See PEP309. - -- Patch #1093585: raise a ValueError for negative history items in readline. - {remove_history,replace_history} - -- The spwd module has been added, allowing access to the shadow password - database. - -- stat_float_times is now True. - -- array.array objects are now picklable. - -- the cPickle module no longer accepts the deprecated None option in the - args tuple returned by __reduce__(). - -- itertools.islice() now accepts None for the start and step arguments. - This allows islice() to work more readily with slices: - islice(s.start, s.stop, s.step) - -- datetime.datetime() now has a strptime class method which can be used to - create datetime object using a string and format. - -- Patch #1117961: Replace the MD5 implementation from RSA Data Security Inc - with the implementation from http://sourceforge.net/projects/libmd5-rfc/. - -Library -------- - -- Patch #1388073: Numerous __-prefixed attributes of unittest.TestCase have - been renamed to have only a single underscore prefix. This was done to - make subclassing easier. - -- PEP 338: new module runpy defines a run_module function to support - executing modules which provide access to source code or a code object - via the PEP 302 import mechanisms. - -- The email module's parsedate_tz function now sets the daylight savings - flag to -1 (unknown) since it can't tell from the date whether it should - be set. - -- Patch #624325: urlparse.urlparse() and urlparse.urlsplit() results - now sport attributes that provide access to the parts of the result. - -- Patch #1462498: sgmllib now handles entity and character references - in attribute values. - -- Added the sqlite3 package. This is based on pysqlite2.1.3, and provides - a DB-API interface in the standard library. You'll need sqlite 3.0.8 or - later to build this - if you have an earlier version, the C extension - module will not be built. - -- Bug #1460340: ``random.sample(dict)`` failed in various ways. Dicts - aren't officially supported here, and trying to use them will probably - raise an exception some day. But dicts have been allowed, and "mostly - worked", so support for them won't go away without warning. 
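
The new sqlite3 package noted above exposes the usual DB-API calls; a minimal in-memory sketch (table and values are invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE release (version TEXT, date TEXT)")
    conn.execute("INSERT INTO release VALUES (?, ?)", ("2.5a1", "05-APR-2006"))
    conn.commit()

    for row in conn.execute("SELECT version, date FROM release"):
        row                        # (u'2.5a1', u'05-APR-2006')
    conn.close()
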
- -- Bug #1445068: getpass.getpass() can now be given an explicit stream - argument to specify where to write the prompt. - -- Patch #1462313, bug #1443328: the pickle modules now can handle classes - that have __private names in their __slots__. - -- Bug #1250170: mimetools now handles socket.gethostname() failures gracefully. - -- patch #1457316: "setup.py upload" now supports --identity to select the - key to be used for signing the uploaded code. - -- Queue.Queue objects now support .task_done() and .join() methods - to make it easier to monitor when daemon threads have completed - processing all enqueued tasks. Patch #1455676. - -- popen2.Popen objects now preserve the command in a .cmd attribute. - -- Added the ctypes ffi package. - -- email 4.0 package now integrated. This is largely the same as the email 3.0 - package that was included in Python 2.3, except that PEP 8 module names are - now used (e.g. mail.message instead of email.Message). The MIME classes - have been moved to a subpackage (e.g. email.mime.text instead of - email.MIMEText). The old names are still supported for now. Several - deprecated Message methods have been removed and lots of bugs have been - fixed. More details can be found in the email package documentation. - -- Patches #1436130/#1443155: codecs.lookup() now returns a CodecInfo object - (a subclass of tuple) that provides incremental decoders and encoders - (a way to use stateful codecs without the stream API). Python functions - codecs.getincrementaldecoder() and codecs.getincrementalencoder() as well - as C functions PyCodec_IncrementalEncoder() and PyCodec_IncrementalDecoder() - have been added. - -- Patch #1359365: Calling next() on a closed StringIO.String object raises - a ValueError instead of a StopIteration now (like file and cString.String do). - cStringIO.StringIO.isatty() will raise a ValueError now if close() has been - called before (like file and StringIO.StringIO do). - -- A regrtest option -w was added to re-run failed tests in verbose mode. - -- Patch #1446372: quit and exit can now be called from the interactive - interpreter to exit. - -- The function get_count() has been added to the gc module, and gc.collect() - grew an optional 'generation' argument. - -- A library msilib to generate Windows Installer files, and a distutils - command bdist_msi have been added. - -- PEP 343: new module contextlib.py defines decorator @contextmanager - and helpful context managers nested() and closing(). - -- The compiler package now supports future imports after the module docstring. - -- Bug #1413790: zipfile now sanitizes absolute archive names that are - not allowed by the specs. - -- Patch #1215184: FileInput now can be given an opening hook which can - be used to control how files are opened. - -- Patch #1212287: fileinput.input() now has a mode parameter for - specifying the file mode input files should be opened with. - -- Patch #1215184: fileinput now has a fileno() function for getting the - current file number. - -- Patch #1349274: gettext.install() now optionally installs additional - translation functions other than _() in the builtin namespace. - -- Patch #1337756: fileinput now accepts Unicode filenames. - -- Patch #1373643: The chunk module can now read chunks larger than - two gigabytes. - -- Patch #1417555: SimpleHTTPServer now returns Last-Modified headers. - -- Bug #1430298: It is now possible to send a mail with an empty - return address using smtplib. 
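
The Queue ``task_done()``/``join()`` entry above is aimed at exactly this pattern; a sketch with an invented ``handle()`` step:

    import Queue, threading

    def handle(item):
        pass                      # stand-in for real per-item work

    q = Queue.Queue()

    def worker():
        while True:
            item = q.get()
            handle(item)
            q.task_done()         # mark one fetched task as finished

    for _ in range(4):
        t = threading.Thread(target=worker)
        t.setDaemon(True)         # daemon workers; q.join() tells us when the work is done
        t.start()

    for item in range(100):
        q.put(item)
    q.join()                      # blocks until every enqueued item was task_done()'d
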
- -- Bug #1432260: The names of lambda functions are now properly displayed - in pydoc. - -- Patch #1412872: zipfile now sets the creator system to 3 (Unix) - unless the system is Win32. - -- Patch #1349118: urllib now supports user:pass@ style proxy - specifications, raises IOErrors when proxies for unsupported protocols - are defined, and uses the https proxy on https redirections. - -- Bug #902075: urllib2 now supports 'host:port' style proxy specifications. - -- Bug #1407902: Add support for sftp:// URIs to urlparse. - -- Bug #1371247: Update Windows locale identifiers in locale.py. - -- Bug #1394565: SimpleHTTPServer now doesn't choke on query parameters - any more. - -- Bug #1403410: The warnings module now doesn't get confused - when it can't find out the module name it generates a warning for. - -- Patch #1177307: Added a new codec utf_8_sig for UTF-8 with a BOM signature. - -- Patch #1157027: cookielib mishandles RFC 2109 cookies in Netscape mode - -- Patch #1117398: cookielib.LWPCookieJar and .MozillaCookieJar now raise - LoadError as documented, instead of IOError. For compatibility, - LoadError subclasses IOError. - -- Added the hashlib module. It provides secure hash functions for MD5 and - SHA1, 224, 256, 384, and 512. Note that recent developments make the - historic MD5 and SHA1 unsuitable for cryptographic-strength applications. - In - Ronald L. Rivest offered this advice for Python: - - "The consensus of researchers in this area (at least as - expressed at the NIST Hash Function Workshop 10/31/05), - is that SHA-256 is a good choice for the time being, but - that research should continue, and other alternatives may - arise from this research. The larger SHA's also seem OK." - -- Added a subset of Fredrik Lundh's ElementTree package. Available - modules are xml.etree.ElementTree, xml.etree.ElementPath, and - xml.etree.ElementInclude, from ElementTree 1.2.6. - -- Patch #1162825: Support non-ASCII characters in IDLE window titles. - -- Bug #1365984: urllib now opens "data:" URLs again. - -- Patch #1314396: prevent deadlock for threading.Thread.join() when an exception - is raised within the method itself on a previous call (e.g., passing in an - illegal argument) - -- Bug #1340337: change time.strptime() to always return ValueError when there - is an error in the format string. - -- Patch #754022: Greatly enhanced webbrowser.py (by Oleg Broytmann). - -- Bug #729103: pydoc.py: Fix docother() method to accept additional - "parent" argument. - -- Patch #1300515: xdrlib.py: Fix pack_fstring() to really use null bytes - for padding. - -- Bug #1296004: httplib.py: Limit maximal amount of data read from the - socket to avoid a MemoryError on Windows. - -- Patch #1166948: locale.py: Prefer LC_ALL, LC_CTYPE and LANG over LANGUAGE - to get the correct encoding. - -- Patch #1166938: locale.py: Parse LANGUAGE as a colon separated list of - languages. - -- Patch #1268314: Cache lines in StreamReader.readlines for performance. - -- Bug #1290505: Fix clearing the regex cache for time.strptime(). - -- Bug #1167128: Fix size of a symlink in a tarfile to be 0. - -- Patch #810023: Fix off-by-one bug in urllib.urlretrieve reporthook - functionality. - -- Bug #1163178: Make IDNA return an empty string when the input is empty. - -- Patch #848017: Make Cookie more RFC-compliant. Use CRLF as default output - separator and do not output trailing semicolon. 
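For the hashlib entry above, usage looks roughly like this (the input strings are arbitrary examples):

    import hashlib

    # named constructors exist for md5, sha1, sha224, sha256, sha384 and sha512
    print hashlib.sha256("Nobody inspects the spammish repetition").hexdigest()

    # the generic constructor also takes the algorithm name as a string
    h = hashlib.new("sha512")
    h.update("some data")
    print h.hexdigest()

Per the quoted advice, SHA-256 or larger is preferred over the historic MD5 and SHA-1 for cryptographic-strength uses.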
- -- Patch #1062060: urllib.urlretrieve() now raises a new exception, named - ContentTooShortException, when the actually downloaded size does not - match the Content-Length header. - -- Bug #1121494: distutils.dir_utils.mkpath now accepts Unicode strings. - -- Bug #1178484: Return complete lines from codec stream readers - even if there is an exception in later lines, resulting in - correct line numbers for decoding errors in source code. - -- Bug #1192315: Disallow negative arguments to clear() in pdb. - -- Patch #827386: Support absolute source paths in msvccompiler.py. - -- Patch #1105730: Apply the new implementation of commonprefix in posixpath - to ntpath, macpath, os2emxpath and riscospath. - -- Fix a problem in Tkinter introduced by SF patch #869468: delete bogus - __hasattr__ and __delattr__ methods on class Tk that were breaking - Tkdnd. - -- Bug #1015140: disambiguated the term "article id" in nntplib docs and - docstrings to either "article number" or "message id". - -- Bug #1238170: threading.Thread.__init__ no longer has "kwargs={}" as a - parameter, but uses the usual "kwargs=None". - -- textwrap now processes text chunks at O(n) speed instead of O(n**2). - Patch #1209527 (Contributed by Connelly). - -- urllib2 has now an attribute 'httpresponses' mapping from HTTP status code - to W3C name (404 -> 'Not Found'). RFE #1216944. - -- Bug #1177468: Don't cache the /dev/urandom file descriptor for os.urandom, - as this can cause problems with apps closing all file descriptors. - -- Bug #839151: Fix an attempt to access sys.argv in the warnings module; - it can be missing in embedded interpreters - -- Bug #1155638: Fix a bug which affected HTTP 0.9 responses in httplib. - -- Bug #1100201: Cross-site scripting was possible on BaseHTTPServer via - error messages. - -- Bug #1108948: Cookie.py produced invalid JavaScript code. - -- The tokenize module now detects and reports indentation errors. - Bug #1224621. - -- The tokenize module has a new untokenize() function to support a full - roundtrip from lexed tokens back to Python source code. In addition, - the generate_tokens() function now accepts a callable argument that - terminates by raising StopIteration. - -- Bug #1196315: fix weakref.WeakValueDictionary constructor. - -- Bug #1213894: os.path.realpath didn't resolve symlinks that were the first - component of the path. - -- Patch #1120353: The xmlrpclib module provides better, more transparent, - support for datetime.{datetime,date,time} objects. With use_datetime set - to True, applications shouldn't have to fiddle with the DateTime wrapper - class at all. - -- distutils.commands.upload was added to support uploading distribution - files to PyPI. - -- distutils.commands.register now encodes the data as UTF-8 before posting - them to PyPI. - -- decimal operator and comparison methods now return NotImplemented - instead of raising a TypeError when interacting with other types. This - allows other classes to implement __radd__ style methods and have them - work as expected. - -- Bug #1163325: Decimal infinities failed to hash. Attempting to - hash a NaN raised an InvalidOperation instead of a TypeError. - -- Patch #918101: Add tarfile open mode r|* for auto-detection of the - stream compression; add, for symmetry reasons, r:* as a synonym of r. - -- Patch #1043890: Add extractall method to tarfile. - -- Patch #1075887: Don't require MSVC in distutils if there is nothing - to build. 
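A short sketch of the two tarfile additions noted above (extractall() and the "r|*" auto-detecting stream mode); the archive name "example.tar.gz" and the target directory are hypothetical:

    import tarfile

    # extractall() unpacks every member of the archive in one call
    archive = tarfile.open("example.tar.gz")
    archive.extractall(path="unpacked")
    archive.close()

    # "r|*" auto-detects the compression when reading a non-seekable stream;
    # "r:*" is the equivalent spelling for ordinary seekable files
    stream = tarfile.open("example.tar.gz", "r|*")
    info = stream.next()          # stream mode: members come strictly in order
    if info is not None:
        print info.name
    stream.close()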
- -- Patch #1103407: Properly deal with tarfile iterators when untarring - symbolic links on Windows. - -- Patch #645894: Use getrusage for computing the time consumption in - profile.py if available. - -- Patch #1046831: Use get_python_version where appropriate in sysconfig.py. - -- Patch #1117454: Remove code to special-case cookies without values - in LWPCookieJar. - -- Patch #1117339: Add cookielib special name tests. - -- Patch #1112812: Make bsddb/__init__.py more friendly for modulefinder. - -- Patch #1110248: SYNC_FLUSH the zlib buffer for GZipFile.flush. - -- Patch #1107973: Allow to iterate over the lines of a tarfile.ExFileObject. - -- Patch #1104111: Alter setup.py --help and --help-commands. - -- Patch #1121234: Properly cleanup _exit and tkerror commands. - -- Patch #1049151: xdrlib now unpacks booleans as True or False. - -- Fixed bug in a NameError bug in cookielib. Patch #1116583. - -- Applied a security fix to SimpleXMLRPCserver (PSF-2005-001). This - disables recursive traversal through instance attributes, which can - be exploited in various ways. - -- Bug #1222790: in SimpleXMLRPCServer, set the reuse-address and close-on-exec - flags on the HTTP listening socket. - -- Bug #792570: SimpleXMLRPCServer had problems if the request grew too large. - Fixed by reading the HTTP body in chunks instead of one big socket.read(). - -- Patches #893642, #1039083: add allow_none, encoding arguments to - constructors of SimpleXMLRPCServer and CGIXMLRPCRequestHandler. - -- Bug #1110478: Revert os.environ.update to do putenv again. - -- Bug #1103844: fix distutils.install.dump_dirs() with negated options. - -- os.{SEEK_SET, SEEK_CUR, SEEK_END} have been added for convenience. - -- Enhancements to the csv module: - - + Dialects are now validated by the underlying C code, better - reflecting its capabilities, and improving its compliance with - PEP 305. - + Dialect parameter parsing has been re-implemented to improve error - reporting. - + quotechar=None and quoting=QUOTE_NONE now work the way PEP 305 - dictates. - + the parser now removes the escapechar prefix from escaped characters. - + when quoting=QUOTE_NONNUMERIC, the writer now tests for numeric - types, rather than any object that can be represented as a numeric. - + when quoting=QUOTE_NONNUMERIC, the reader now casts unquoted fields - to floats. - + reader now allows \r characters to be quoted (previously it only allowed - \n to be quoted). - + writer doublequote handling improved. - + Dialect classes passed to the module are no longer instantiated by - the module before being parsed (the former validation scheme required - this, but the mechanism was unreliable). - + The dialect registry now contains instances of the internal - C-coded dialect type, rather than references to python objects. - + the internal c-coded dialect type is now immutable. - + register_dialect now accepts the same keyword dialect specifications - as the reader and writer, allowing the user to register dialects - without first creating a dialect class. - + a configurable limit to the size of parsed fields has been added - - previously, an unmatched quote character could result in the entire - file being read into the field buffer before an error was reported. - + A new module method csv.field_size_limit() has been added that sets - the parser field size limit (returning the former limit). The initial - limit is 128kB. - + A line_num attribute has been added to the reader object, which tracks - the number of lines read from the source iterator. 
This is not - the same as the number of records returned, as records can span - multiple lines. - + reader and writer objects were not being registered with the cyclic-GC. - This has been fixed. - -- _DummyThread objects in the threading module now delete self.__block that is - inherited from _Thread since it uses up a lock allocated by 'thread'. The - lock primitives tend to be limited in number and thus should not be wasted on - a _DummyThread object. Fixes bug #1089632. - -- The imghdr module now detects Exif files. - -- StringIO.truncate() now correctly adjusts the size attribute. - (Bug #951915). - -- locale.py now uses an updated locale alias table (built using - Tools/i18n/makelocalealias.py, a tool to parse the X11 locale - alias file); the encoding lookup was enhanced to use Python's - encoding alias table. - -- moved deprecated modules to Lib/lib-old: whrandom, tzparse, statcache. - -- the pickle module no longer accepts the deprecated None option in the - args tuple returned by __reduce__(). - -- optparse now optionally imports gettext. This allows its use in setup.py. - -- the pickle module no longer uses the deprecated bin parameter. - -- the shelve module no longer uses the deprecated binary parameter. - -- the pstats module no longer uses the deprecated ignore() method. - -- the filecmp module no longer uses the deprecated use_statcache argument. - -- unittest.TestCase.run() and unittest.TestSuite.run() can now be successfully - extended or overridden by subclasses. Formerly, the subclassed method would - be ignored by the rest of the module. (Bug #1078905). - -- heapq.nsmallest() and heapq.nlargest() now support key= arguments with - the same meaning as in list.sort(). - -- Bug #1076985: ``codecs.StreamReader.readline()`` now calls ``read()`` only - once when a size argument is given. This prevents a buffer overflow in the - tokenizer with very long source lines. - -- Bug #1083110: ``zlib.decompress.flush()`` would segfault if called - immediately after creating the object, without any intervening - ``.decompress()`` calls. - -- The reconvert.quote function can now emit triple-quoted strings. The - reconvert module now has some simple documentation. - -- ``UserString.MutableString`` now supports negative indices in - ``__setitem__`` and ``__delitem__`` - -- Bug #1149508: ``textwrap`` now handles hyphenated numbers (eg. "2004-03-05") - correctly. - -- Partial fixes for SF bugs #1163244 and #1175396: If a chunk read by - ``codecs.StreamReader.readline()`` has a trailing "\r", read one more - character even if the user has passed a size parameter to get a proper - line ending. Remove the special handling of a "\r\n" that has been split - between two lines. - -- Bug #1251300: On UCS-4 builds the "unicode-internal" codec will now complain - about illegal code points. The codec now supports PEP 293 style error - handlers. - -- Bug #1235646: ``codecs.StreamRecoder.next()`` now reencodes the data it reads - from the input stream, so that the output is a byte string in the correct - encoding instead of a unicode string. - -- Bug #1202493: Fixing SRE parser to handle '{}' as perl does, rather than - considering it exactly like a '*'. - -- Bug #1245379: Add "unicode-1-1-utf-7" as an alias for "utf-7" to - ``encodings.aliases``. - -- ` uu.encode()`` and ``uu.decode()`` now support unicode filenames. - -- Patch #1413711: Certain patterns of differences were making difflib - touch the recursion limit. - -- Bug #947906: An object oriented interface has been added to the calendar - module. 
It's possible to generate HTML calendar now and the module can be - called as a script (e.g. via ``python -mcalendar``). Localized month and - weekday names can be ouput (even if an exotic encoding is used) using - special classes that use unicode. - -Build ------ - -- Fix test_float, test_long, and test_struct failures on Tru64 with gcc - by using -mieee gcc option. - -- Patch #1432345: Make python compile on DragonFly. - -- Build support for Win64-AMD64 was added. - -- Patch #1428494: Prefer linking against ncursesw over ncurses library. - -- Patch #881820: look for openpty and forkpty also in libbsd. - -- The sources of zlib are now part of the Python distribution (zlib 1.2.3). - The zlib module is now builtin on Windows. - -- Use -xcode=pic32 for CCSHARED on Solaris with SunPro. - -- Bug #1189330: configure did not correctly determine the necessary - value of LINKCC if python was built with GCC 4.0. - -- Upgrade Windows build to zlib 1.2.3 which eliminates a potential security - vulnerability in zlib 1.2.1 and 1.2.2. - -- EXTRA_CFLAGS has been introduced as an environment variable to hold compiler - flags that change binary compatibility. Changes were also made to - distutils.sysconfig to also use the environment variable when used during - compilation of the interpreter and of C extensions through distutils. - -- SF patch 1171735: Darwin 8's headers are anal about POSIX compliance, - and linking has changed (prebinding is now deprecated, and libcc_dynamic - no longer exists). This configure patch makes things right. - -- Bug #1158607: Build with --disable-unicode again. - -- spwdmodule.c is built only if either HAVE_GETSPNAM or HAVE_HAVE_GETSPENT is - defined. Discovered as a result of not being able to build on OS X. - -- setup.py now uses the directories specified in LDFLAGS using the -L option - and in CPPFLAGS using the -I option for adding library and include - directories, respectively, for compiling extension modules against. This has - led to the core being compiled using the values in CPPFLAGS. It also removes - the need for the special-casing of both DarwinPorts and Fink for darwin since - the proper directories can be specified in LDFLAGS (``-L/sw/lib`` for Fink, - ``-L/opt/local/lib`` for DarwinPorts) and CPPFLAGS (``-I/sw/include`` for - Fink, ``-I/opt/local/include`` for DarwinPorts). - -- Test in configure.in that checks for tzset no longer dependent on tm->tm_zone - to exist in the struct (not required by either ISO C nor the UNIX 2 spec). - Tests for sanity in tzname when HAVE_TZNAME defined were also defined. - Closes bug #1096244. Thanks Gregory Bond. - -C API ------ - -- ``PyMem_{Del, DEL}`` and ``PyMem_{Free, FREE}`` no longer map to - ``PyObject_{Free, FREE}``. They map to the system ``free()`` now. If memory - is obtained via the ``PyObject_`` family, it must be released via the - ``PyObject_`` family, and likewise for the ``PyMem_`` family. This has - always been officially true, but when Python's small-object allocator was - introduced, an attempt was made to cater to a few extension modules - discovered at the time that obtained memory via ``PyObject_New`` but - released it via ``PyMem_DEL``. It's years later, and if such code still - exists it will fail now (probably with segfaults, but calling wrong - low-level memory management functions can yield many symptoms). - -- Added a C API for set and frozenset objects. - -- Removed PyRange_New(). - -- Patch #1313939: PyUnicode_DecodeCharmap() accepts a unicode string as the - mapping argument now. 
This string is used as a mapping table. Byte values - greater than the length of the string and 0xFFFE are treated as undefined - mappings. - - -Tests ------ - -- In test_os, st_?time is now truncated before comparing it with ST_?TIME. - -- Patch #1276356: New resource "urlfetch" is implemented. This enables - even impatient people to run tests that require remote files. - - -Documentation -------------- - -- Bug #1402224: Add warning to dl docs about crashes. - -- Bug #1396471: Document that Windows' ftell() can return invalid - values for text files with UNIX-style line endings. - -- Bug #1274828: Document os.path.splitunc(). - -- Bug #1190204: Clarify which directories are searched by site.py. - -- Bug #1193849: Clarify os.path.expanduser() documentation. - -- Bug #1243192: re.UNICODE and re.LOCALE affect \d, \D, \s and \S. - -- Bug #755617: Document the effects of os.chown() on Windows. - -- Patch #1180012: The documentation for modulefinder is now in the library reference. - -- Patch #1213031: Document that os.chown() accepts argument values of -1. - -- Bug #1190563: Document os.waitpid() return value with WNOHANG flag. - -- Bug #1175022: Correct the example code for property(). - -- Document the IterableUserDict class in the UserDict module. - Closes bug #1166582. - -- Remove all latent references for "Macintosh" that referred to semantics for - Mac OS 9 and change to reflect the state for OS X. - Closes patch #1095802. Thanks Jack Jansen. - -Mac ---- - - -New platforms -------------- - -- FreeBSD 7 support is added. - - -Tools/Demos ------------ - -- Created Misc/Vim/vim_syntax.py to auto-generate a python.vim file in that - directory for syntax highlighting in Vim. Vim directory was added and placed - vimrc to it (was previous up a level). - -- Added two new files to Tools/scripts: pysource.py, which recursively - finds Python source files, and findnocoding.py, which finds Python - source files that need an encoding declaration. - Patch #784089, credits to Oleg Broytmann. - -- Bug #1072853: pindent.py used an uninitialized variable. - -- Patch #1177597: Correct Complex.__init__. - -- Fixed a display glitch in Pynche, which could cause the right arrow to - wiggle over by a pixel. - ---- **(For information about older versions, consult the HISTORY file.)** Modified: python/branches/libffi3-branch/Misc/cheatsheet ============================================================================== --- python/branches/libffi3-branch/Misc/cheatsheet (original) +++ python/branches/libffi3-branch/Misc/cheatsheet Tue Mar 4 15:50:53 2008 @@ -565,8 +565,8 @@ i Signed integer decimal. o Unsigned octal. u Unsigned decimal. -x Unsigned hexidecimal (lowercase). -X Unsigned hexidecimal (uppercase). +x Unsigned hexadecimal (lowercase). +X Unsigned hexadecimal (uppercase). e Floating point exponential format (lowercase). E Floating point exponential format (uppercase). f Floating point decimal format. 
Modified: python/branches/libffi3-branch/Modules/_ctypes/_ctypes_test.c ============================================================================== --- python/branches/libffi3-branch/Modules/_ctypes/_ctypes_test.c (original) +++ python/branches/libffi3-branch/Modules/_ctypes/_ctypes_test.c Tue Mar 4 15:50:53 2008 @@ -411,7 +411,7 @@ return 0; } -PyMethodDef module_methods[] = { +static PyMethodDef module_methods[] = { /* {"get_last_tf_arg_s", get_last_tf_arg_s, METH_NOARGS}, {"get_last_tf_arg_u", get_last_tf_arg_u, METH_NOARGS}, */ Modified: python/branches/libffi3-branch/Modules/_sqlite/connection.c ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/connection.c (original) +++ python/branches/libffi3-branch/Modules/_sqlite/connection.c Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* connection.c - the connection type * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. * @@ -32,6 +32,9 @@ #include "pythread.h" +#define ACTION_FINALIZE 1 +#define ACTION_RESET 2 + static int pysqlite_connection_set_isolation_level(pysqlite_Connection* self, PyObject* isolation_level); @@ -51,7 +54,7 @@ { static char *kwlist[] = {"database", "timeout", "detect_types", "isolation_level", "check_same_thread", "factory", "cached_statements", NULL, NULL}; - char* database; + PyObject* database; int detect_types = 0; PyObject* isolation_level = NULL; PyObject* factory = NULL; @@ -59,11 +62,15 @@ int cached_statements = 100; double timeout = 5.0; int rc; + PyObject* class_attr = NULL; + PyObject* class_attr_str = NULL; + int is_apsw_connection = 0; + PyObject* database_utf8; - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|diOiOi", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|diOiOi", kwlist, &database, &timeout, &detect_types, &isolation_level, &check_same_thread, &factory, &cached_statements)) { - return -1; + return -1; } self->begin_statement = NULL; @@ -77,13 +84,53 @@ Py_INCREF(&PyUnicode_Type); self->text_factory = (PyObject*)&PyUnicode_Type; - Py_BEGIN_ALLOW_THREADS - rc = sqlite3_open(database, &self->db); - Py_END_ALLOW_THREADS + if (PyString_Check(database) || PyUnicode_Check(database)) { + if (PyString_Check(database)) { + database_utf8 = database; + Py_INCREF(database_utf8); + } else { + database_utf8 = PyUnicode_AsUTF8String(database); + if (!database_utf8) { + return -1; + } + } - if (rc != SQLITE_OK) { - _pysqlite_seterror(self->db); - return -1; + Py_BEGIN_ALLOW_THREADS + rc = sqlite3_open(PyString_AsString(database_utf8), &self->db); + Py_END_ALLOW_THREADS + + Py_DECREF(database_utf8); + + if (rc != SQLITE_OK) { + _pysqlite_seterror(self->db, NULL); + return -1; + } + } else { + /* Create a pysqlite connection from a APSW connection */ + class_attr = PyObject_GetAttrString(database, "__class__"); + if (class_attr) { + class_attr_str = PyObject_Str(class_attr); + if (class_attr_str) { + if (strcmp(PyString_AsString(class_attr_str), "") == 0) { + /* In the APSW Connection object, the first entry after + * PyObject_HEAD is the sqlite3* we want to get hold of. 
+ * Luckily, this is the same layout as we have in our + * pysqlite_Connection */ + self->db = ((pysqlite_Connection*)database)->db; + + Py_INCREF(database); + self->apsw_connection = database; + is_apsw_connection = 1; + } + } + } + Py_XDECREF(class_attr_str); + Py_XDECREF(class_attr); + + if (!is_apsw_connection) { + PyErr_SetString(PyExc_ValueError, "database parameter must be string or APSW Connection object"); + return -1; + } } if (!isolation_level) { @@ -169,7 +216,8 @@ self->statement_cache->decref_factory = 0; } -void pysqlite_reset_all_statements(pysqlite_Connection* self) +/* action in (ACTION_RESET, ACTION_FINALIZE) */ +void pysqlite_do_all_statements(pysqlite_Connection* self, int action) { int i; PyObject* weakref; @@ -179,13 +227,19 @@ weakref = PyList_GetItem(self->statements, i); statement = PyWeakref_GetObject(weakref); if (statement != Py_None) { - (void)pysqlite_statement_reset((pysqlite_Statement*)statement); + if (action == ACTION_RESET) { + (void)pysqlite_statement_reset((pysqlite_Statement*)statement); + } else { + (void)pysqlite_statement_finalize((pysqlite_Statement*)statement); + } } } } void pysqlite_connection_dealloc(pysqlite_Connection* self) { + PyObject* ret = NULL; + Py_XDECREF(self->statement_cache); /* Clean up if user has not called .close() explicitly. */ @@ -193,6 +247,10 @@ Py_BEGIN_ALLOW_THREADS sqlite3_close(self->db); Py_END_ALLOW_THREADS + } else if (self->apsw_connection) { + ret = PyObject_CallMethod(self->apsw_connection, "close", ""); + Py_XDECREF(ret); + Py_XDECREF(self->apsw_connection); } if (self->begin_statement) { @@ -205,7 +263,7 @@ Py_XDECREF(self->collations); Py_XDECREF(self->statements); - Py_TYPE(self)->tp_free((PyObject*)self); + self->ob_type->tp_free((PyObject*)self); } PyObject* pysqlite_connection_cursor(pysqlite_Connection* self, PyObject* args, PyObject* kwargs) @@ -241,24 +299,33 @@ PyObject* pysqlite_connection_close(pysqlite_Connection* self, PyObject* args) { + PyObject* ret; int rc; if (!pysqlite_check_thread(self)) { return NULL; } - pysqlite_flush_statement_cache(self); + pysqlite_do_all_statements(self, ACTION_FINALIZE); if (self->db) { - Py_BEGIN_ALLOW_THREADS - rc = sqlite3_close(self->db); - Py_END_ALLOW_THREADS - - if (rc != SQLITE_OK) { - _pysqlite_seterror(self->db); - return NULL; - } else { + if (self->apsw_connection) { + ret = PyObject_CallMethod(self->apsw_connection, "close", ""); + Py_XDECREF(ret); + Py_XDECREF(self->apsw_connection); + self->apsw_connection = NULL; self->db = NULL; + } else { + Py_BEGIN_ALLOW_THREADS + rc = sqlite3_close(self->db); + Py_END_ALLOW_THREADS + + if (rc != SQLITE_OK) { + _pysqlite_seterror(self->db, NULL); + return NULL; + } else { + self->db = NULL; + } } } @@ -292,7 +359,7 @@ Py_END_ALLOW_THREADS if (rc != SQLITE_OK) { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, statement); goto error; } @@ -300,7 +367,7 @@ if (rc == SQLITE_DONE) { self->inTransaction = 1; } else { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, statement); } Py_BEGIN_ALLOW_THREADS @@ -308,7 +375,7 @@ Py_END_ALLOW_THREADS if (rc != SQLITE_OK && !PyErr_Occurred()) { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, NULL); } error: @@ -335,7 +402,7 @@ rc = sqlite3_prepare(self->db, "COMMIT", -1, &statement, &tail); Py_END_ALLOW_THREADS if (rc != SQLITE_OK) { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, NULL); goto error; } @@ -343,14 +410,14 @@ if (rc == SQLITE_DONE) { self->inTransaction = 0; } else { - _pysqlite_seterror(self->db); + 
_pysqlite_seterror(self->db, statement); } Py_BEGIN_ALLOW_THREADS rc = sqlite3_finalize(statement); Py_END_ALLOW_THREADS if (rc != SQLITE_OK && !PyErr_Occurred()) { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, NULL); } } @@ -375,13 +442,13 @@ } if (self->inTransaction) { - pysqlite_reset_all_statements(self); + pysqlite_do_all_statements(self, ACTION_RESET); Py_BEGIN_ALLOW_THREADS rc = sqlite3_prepare(self->db, "ROLLBACK", -1, &statement, &tail); Py_END_ALLOW_THREADS if (rc != SQLITE_OK) { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, NULL); goto error; } @@ -389,14 +456,14 @@ if (rc == SQLITE_DONE) { self->inTransaction = 0; } else { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, statement); } Py_BEGIN_ALLOW_THREADS rc = sqlite3_finalize(statement); Py_END_ALLOW_THREADS if (rc != SQLITE_OK && !PyErr_Occurred()) { - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, NULL); } } @@ -762,6 +829,33 @@ return rc; } +static int _progress_handler(void* user_arg) +{ + int rc; + PyObject *ret; + PyGILState_STATE gilstate; + + gilstate = PyGILState_Ensure(); + ret = PyObject_CallFunction((PyObject*)user_arg, ""); + + if (!ret) { + if (_enable_callback_tracebacks) { + PyErr_Print(); + } else { + PyErr_Clear(); + } + + /* abort query if error occured */ + rc = 1; + } else { + rc = (int)PyObject_IsTrue(ret); + Py_DECREF(ret); + } + + PyGILState_Release(gilstate); + return rc; +} + PyObject* pysqlite_connection_set_authorizer(pysqlite_Connection* self, PyObject* args, PyObject* kwargs) { PyObject* authorizer_cb; @@ -787,6 +881,30 @@ } } +PyObject* pysqlite_connection_set_progress_handler(pysqlite_Connection* self, PyObject* args, PyObject* kwargs) +{ + PyObject* progress_handler; + int n; + + static char *kwlist[] = { "progress_handler", "n", NULL }; + + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "Oi:set_progress_handler", + kwlist, &progress_handler, &n)) { + return NULL; + } + + if (progress_handler == Py_None) { + /* None clears the progress handler previously set */ + sqlite3_progress_handler(self->db, 0, 0, (void*)0); + } else { + sqlite3_progress_handler(self->db, n, _progress_handler, progress_handler); + PyDict_SetItem(self->function_pinboard, progress_handler, Py_None); + } + + Py_INCREF(Py_None); + return Py_None; +} + int pysqlite_check_thread(pysqlite_Connection* self) { if (self->check_same_thread) { @@ -892,7 +1010,8 @@ } else if (rc == PYSQLITE_SQL_WRONG_TYPE) { PyErr_SetString(pysqlite_Warning, "SQL is of wrong type. Must be string or unicode."); } else { - _pysqlite_seterror(self->db); + (void)pysqlite_statement_reset(statement); + _pysqlite_seterror(self->db, NULL); } Py_DECREF(statement); @@ -1134,7 +1253,7 @@ (callable != Py_None) ? pysqlite_collation_callback : NULL); if (rc != SQLITE_OK) { PyDict_DelItem(self->collations, uppercase_name); - _pysqlite_seterror(self->db); + _pysqlite_seterror(self->db, NULL); goto finally; } @@ -1151,6 +1270,44 @@ return retval; } +/* Called when the connection is used as a context manager. Returns itself as a + * convenience to the caller. */ +static PyObject * +pysqlite_connection_enter(pysqlite_Connection* self, PyObject* args) +{ + Py_INCREF(self); + return (PyObject*)self; +} + +/** Called when the connection is used as a context manager. If there was any + * exception, a rollback takes place; otherwise we commit. 
*/ +static PyObject * +pysqlite_connection_exit(pysqlite_Connection* self, PyObject* args) +{ + PyObject* exc_type, *exc_value, *exc_tb; + char* method_name; + PyObject* result; + + if (!PyArg_ParseTuple(args, "OOO", &exc_type, &exc_value, &exc_tb)) { + return NULL; + } + + if (exc_type == Py_None && exc_value == Py_None && exc_tb == Py_None) { + method_name = "commit"; + } else { + method_name = "rollback"; + } + + result = PyObject_CallMethod((PyObject*)self, method_name, ""); + if (!result) { + return NULL; + } + Py_DECREF(result); + + Py_INCREF(Py_False); + return Py_False; +} + static char connection_doc[] = PyDoc_STR("SQLite database connection object."); @@ -1175,6 +1332,8 @@ PyDoc_STR("Creates a new aggregate. Non-standard.")}, {"set_authorizer", (PyCFunction)pysqlite_connection_set_authorizer, METH_VARARGS|METH_KEYWORDS, PyDoc_STR("Sets authorizer callback. Non-standard.")}, + {"set_progress_handler", (PyCFunction)pysqlite_connection_set_progress_handler, METH_VARARGS|METH_KEYWORDS, + PyDoc_STR("Sets progress handler callback. Non-standard.")}, {"execute", (PyCFunction)pysqlite_connection_execute, METH_VARARGS, PyDoc_STR("Executes a SQL statement. Non-standard.")}, {"executemany", (PyCFunction)pysqlite_connection_executemany, METH_VARARGS, @@ -1185,6 +1344,10 @@ PyDoc_STR("Creates a collation function. Non-standard.")}, {"interrupt", (PyCFunction)pysqlite_connection_interrupt, METH_NOARGS, PyDoc_STR("Abort any pending database operation. Non-standard.")}, + {"__enter__", (PyCFunction)pysqlite_connection_enter, METH_NOARGS, + PyDoc_STR("For context manager. Non-standard.")}, + {"__exit__", (PyCFunction)pysqlite_connection_exit, METH_VARARGS, + PyDoc_STR("For context manager. Non-standard.")}, {NULL, NULL} }; Modified: python/branches/libffi3-branch/Modules/_sqlite/connection.h ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/connection.h (original) +++ python/branches/libffi3-branch/Modules/_sqlite/connection.h Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* connection.h - definitions for the connection type * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. * @@ -95,6 +95,11 @@ /* a dictionary of registered collation name => collation callable mappings */ PyObject* collations; + /* if our connection was created from a APSW connection, we keep a + * reference to the APSW connection around and get rid of it in our + * destructor */ + PyObject* apsw_connection; + /* Exception objects */ PyObject* Warning; PyObject* Error; Modified: python/branches/libffi3-branch/Modules/_sqlite/cursor.c ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/cursor.c (original) +++ python/branches/libffi3-branch/Modules/_sqlite/cursor.c Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* cursor.c - the cursor type * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. 
* @@ -80,7 +80,7 @@ if (!PyArg_ParseTuple(args, "O!", &pysqlite_ConnectionType, &connection)) { - return -1; + return -1; } Py_INCREF(connection); @@ -435,7 +435,7 @@ if (multiple) { /* executemany() */ if (!PyArg_ParseTuple(args, "OO", &operation, &second_argument)) { - return NULL; + return NULL; } if (!PyString_Check(operation) && !PyUnicode_Check(operation)) { @@ -457,7 +457,7 @@ } else { /* execute() */ if (!PyArg_ParseTuple(args, "O|O", &operation, &second_argument)) { - return NULL; + return NULL; } if (!PyString_Check(operation) && !PyUnicode_Check(operation)) { @@ -506,16 +506,47 @@ operation_cstr = PyString_AsString(operation_bytestr); } - /* reset description and rowcount */ + /* reset description */ Py_DECREF(self->description); Py_INCREF(Py_None); self->description = Py_None; - Py_DECREF(self->rowcount); - self->rowcount = PyInt_FromLong(-1L); - if (!self->rowcount) { + func_args = PyTuple_New(1); + if (!func_args) { goto error; } + Py_INCREF(operation); + if (PyTuple_SetItem(func_args, 0, operation) != 0) { + goto error; + } + + if (self->statement) { + (void)pysqlite_statement_reset(self->statement); + Py_DECREF(self->statement); + } + + self->statement = (pysqlite_Statement*)pysqlite_cache_get(self->connection->statement_cache, func_args); + Py_DECREF(func_args); + + if (!self->statement) { + goto error; + } + + if (self->statement->in_use) { + Py_DECREF(self->statement); + self->statement = PyObject_New(pysqlite_Statement, &pysqlite_StatementType); + if (!self->statement) { + goto error; + } + rc = pysqlite_statement_create(self->statement, self->connection, operation); + if (rc != SQLITE_OK) { + self->statement = 0; + goto error; + } + } + + pysqlite_statement_reset(self->statement); + pysqlite_statement_mark_dirty(self->statement); statement_type = detect_statement_type(operation_cstr); if (self->connection->begin_statement) { @@ -553,43 +584,6 @@ } } - func_args = PyTuple_New(1); - if (!func_args) { - goto error; - } - Py_INCREF(operation); - if (PyTuple_SetItem(func_args, 0, operation) != 0) { - goto error; - } - - if (self->statement) { - (void)pysqlite_statement_reset(self->statement); - Py_DECREF(self->statement); - } - - self->statement = (pysqlite_Statement*)pysqlite_cache_get(self->connection->statement_cache, func_args); - Py_DECREF(func_args); - - if (!self->statement) { - goto error; - } - - if (self->statement->in_use) { - Py_DECREF(self->statement); - self->statement = PyObject_New(pysqlite_Statement, &pysqlite_StatementType); - if (!self->statement) { - goto error; - } - rc = pysqlite_statement_create(self->statement, self->connection, operation); - if (rc != SQLITE_OK) { - self->statement = 0; - goto error; - } - } - - pysqlite_statement_reset(self->statement); - pysqlite_statement_mark_dirty(self->statement); - while (1) { parameters = PyIter_Next(parameters_iter); if (!parameters) { @@ -603,11 +597,6 @@ goto error; } - if (pysqlite_build_row_cast_map(self) != 0) { - PyErr_SetString(pysqlite_OperationalError, "Error while building row_cast_map"); - goto error; - } - /* Keep trying the SQL statement until the schema stops changing. */ while (1) { /* Actually execute the SQL statement. */ @@ -626,7 +615,8 @@ continue; } else { /* If the database gave us an error, promote it to Python. 
*/ - _pysqlite_seterror(self->connection->db); + (void)pysqlite_statement_reset(self->statement); + _pysqlite_seterror(self->connection->db, NULL); goto error; } } else { @@ -638,17 +628,23 @@ PyErr_Clear(); } } - _pysqlite_seterror(self->connection->db); + (void)pysqlite_statement_reset(self->statement); + _pysqlite_seterror(self->connection->db, NULL); goto error; } } - if (rc == SQLITE_ROW || (rc == SQLITE_DONE && statement_type == STATEMENT_SELECT)) { - Py_BEGIN_ALLOW_THREADS - numcols = sqlite3_column_count(self->statement->st); - Py_END_ALLOW_THREADS + if (pysqlite_build_row_cast_map(self) != 0) { + PyErr_SetString(pysqlite_OperationalError, "Error while building row_cast_map"); + goto error; + } + if (rc == SQLITE_ROW || (rc == SQLITE_DONE && statement_type == STATEMENT_SELECT)) { if (self->description == Py_None) { + Py_BEGIN_ALLOW_THREADS + numcols = sqlite3_column_count(self->statement->st); + Py_END_ALLOW_THREADS + Py_DECREF(self->description); self->description = PyTuple_New(numcols); if (!self->description) { @@ -689,15 +685,11 @@ case STATEMENT_DELETE: case STATEMENT_INSERT: case STATEMENT_REPLACE: - Py_BEGIN_ALLOW_THREADS rowcount += (long)sqlite3_changes(self->connection->db); - Py_END_ALLOW_THREADS - Py_DECREF(self->rowcount); - self->rowcount = PyInt_FromLong(rowcount); } Py_DECREF(self->lastrowid); - if (statement_type == STATEMENT_INSERT) { + if (!multiple && statement_type == STATEMENT_INSERT) { Py_BEGIN_ALLOW_THREADS lastrowid = sqlite3_last_insert_rowid(self->connection->db); Py_END_ALLOW_THREADS @@ -714,14 +706,27 @@ } error: + /* just to be sure (implicit ROLLBACKs with ON CONFLICT ROLLBACK/OR + * ROLLBACK could have happened */ + #ifdef SQLITE_VERSION_NUMBER + #if SQLITE_VERSION_NUMBER >= 3002002 + self->connection->inTransaction = !sqlite3_get_autocommit(self->connection->db); + #endif + #endif + Py_XDECREF(operation_bytestr); Py_XDECREF(parameters); Py_XDECREF(parameters_iter); Py_XDECREF(parameters_list); if (PyErr_Occurred()) { + Py_DECREF(self->rowcount); + self->rowcount = PyInt_FromLong(-1L); return NULL; } else { + Py_DECREF(self->rowcount); + self->rowcount = PyInt_FromLong(rowcount); + Py_INCREF(self); return (PyObject*)self; } @@ -748,7 +753,7 @@ int statement_completed = 0; if (!PyArg_ParseTuple(args, "O", &script_obj)) { - return NULL; + return NULL; } if (!pysqlite_check_thread(self->connection) || !pysqlite_check_connection(self->connection)) { @@ -788,7 +793,7 @@ &statement, &script_cstr); if (rc != SQLITE_OK) { - _pysqlite_seterror(self->connection->db); + _pysqlite_seterror(self->connection->db, NULL); goto error; } @@ -796,17 +801,18 @@ rc = SQLITE_ROW; while (rc == SQLITE_ROW) { rc = _sqlite_step_with_busyhandler(statement, self->connection); + /* TODO: we probably need more error handling here */ } if (rc != SQLITE_DONE) { (void)sqlite3_finalize(statement); - _pysqlite_seterror(self->connection->db); + _pysqlite_seterror(self->connection->db, NULL); goto error; } rc = sqlite3_finalize(statement); if (rc != SQLITE_OK) { - _pysqlite_seterror(self->connection->db); + _pysqlite_seterror(self->connection->db, NULL); goto error; } } @@ -864,8 +870,9 @@ if (self->statement) { rc = _sqlite_step_with_busyhandler(self->statement->st, self->connection); if (rc != SQLITE_DONE && rc != SQLITE_ROW) { + (void)pysqlite_statement_reset(self->statement); Py_DECREF(next_row); - _pysqlite_seterror(self->connection->db); + _pysqlite_seterror(self->connection->db, NULL); return NULL; } @@ -890,15 +897,17 @@ return row; } -PyObject* 
pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args) +PyObject* pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args, PyObject* kwargs) { + static char *kwlist[] = {"size", NULL, NULL}; + PyObject* row; PyObject* list; int maxrows = self->arraysize; int counter = 0; - if (!PyArg_ParseTuple(args, "|i", &maxrows)) { - return NULL; + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:fetchmany", kwlist, &maxrows)) { + return NULL; } list = PyList_New(0); @@ -992,7 +1001,7 @@ PyDoc_STR("Executes a multiple SQL statements at once. Non-standard.")}, {"fetchone", (PyCFunction)pysqlite_cursor_fetchone, METH_NOARGS, PyDoc_STR("Fetches one row from the resultset.")}, - {"fetchmany", (PyCFunction)pysqlite_cursor_fetchmany, METH_VARARGS, + {"fetchmany", (PyCFunction)pysqlite_cursor_fetchmany, METH_VARARGS|METH_KEYWORDS, PyDoc_STR("Fetches several rows from the resultset.")}, {"fetchall", (PyCFunction)pysqlite_cursor_fetchall, METH_NOARGS, PyDoc_STR("Fetches all rows from the resultset.")}, Modified: python/branches/libffi3-branch/Modules/_sqlite/cursor.h ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/cursor.h (original) +++ python/branches/libffi3-branch/Modules/_sqlite/cursor.h Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* cursor.h - definitions for the cursor type * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. * @@ -60,7 +60,7 @@ PyObject* pysqlite_cursor_getiter(pysqlite_Cursor *self); PyObject* pysqlite_cursor_iternext(pysqlite_Cursor *self); PyObject* pysqlite_cursor_fetchone(pysqlite_Cursor* self, PyObject* args); -PyObject* pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args); +PyObject* pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args, PyObject* kwargs); PyObject* pysqlite_cursor_fetchall(pysqlite_Cursor* self, PyObject* args); PyObject* pysqlite_noop(pysqlite_Connection* self, PyObject* args); PyObject* pysqlite_cursor_close(pysqlite_Cursor* self, PyObject* args); Modified: python/branches/libffi3-branch/Modules/_sqlite/microprotocols.h ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/microprotocols.h (original) +++ python/branches/libffi3-branch/Modules/_sqlite/microprotocols.h Tue Mar 4 15:50:53 2008 @@ -28,10 +28,6 @@ #include -#ifdef __cplusplus -extern "C" { -#endif - /** adapters registry **/ extern PyObject *psyco_adapters; Modified: python/branches/libffi3-branch/Modules/_sqlite/module.c ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/module.c (original) +++ python/branches/libffi3-branch/Modules/_sqlite/module.c Tue Mar 4 15:50:53 2008 @@ -1,25 +1,25 @@ - /* module.c - the module itself - * - * Copyright (C) 2004-2006 Gerhard H?ring - * - * This file is part of pysqlite. - * - * This software is provided 'as-is', without any express or implied - * warranty. In no event will the authors be held liable for any damages - * arising from the use of this software. - * - * Permission is granted to anyone to use this software for any purpose, - * including commercial applications, and to alter it and redistribute it - * freely, subject to the following restrictions: - * - * 1. The origin of this software must not be misrepresented; you must not - * claim that you wrote the original software. 
If you use this software - * in a product, an acknowledgment in the product documentation would be - * appreciated but is not required. - * 2. Altered source versions must be plainly marked as such, and must not be - * misrepresented as being the original software. - * 3. This notice may not be removed or altered from any source distribution. - */ +/* module.c - the module itself + * + * Copyright (C) 2004-2007 Gerhard H?ring + * + * This file is part of pysqlite. + * + * This software is provided 'as-is', without any express or implied + * warranty. In no event will the authors be held liable for any damages + * arising from the use of this software. + * + * Permission is granted to anyone to use this software for any purpose, + * including commercial applications, and to alter it and redistribute it + * freely, subject to the following restrictions: + * + * 1. The origin of this software must not be misrepresented; you must not + * claim that you wrote the original software. If you use this software + * in a product, an acknowledgment in the product documentation would be + * appreciated but is not required. + * 2. Altered source versions must be plainly marked as such, and must not be + * misrepresented as being the original software. + * 3. This notice may not be removed or altered from any source distribution. + */ #include "connection.h" #include "statement.h" @@ -41,6 +41,7 @@ PyObject* converters; int _enable_callback_tracebacks; +int pysqlite_BaseTypeAdapted; static PyObject* module_connect(PyObject* self, PyObject* args, PyObject* kwargs) @@ -50,7 +51,7 @@ * connection.c and must always be copied from there ... */ static char *kwlist[] = {"database", "timeout", "detect_types", "isolation_level", "check_same_thread", "factory", "cached_statements", NULL, NULL}; - char* database; + PyObject* database; int detect_types = 0; PyObject* isolation_level; PyObject* factory = NULL; @@ -60,7 +61,7 @@ PyObject* result; - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|diOiOi", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|diOiOi", kwlist, &database, &timeout, &detect_types, &isolation_level, &check_same_thread, &factory, &cached_statements)) { return NULL; @@ -133,6 +134,13 @@ return NULL; } + /* a basic type is adapted; there's a performance optimization if that's not the case + * (99 % of all usages) */ + if (type == &PyInt_Type || type == &PyLong_Type || type == &PyFloat_Type + || type == &PyString_Type || type == &PyUnicode_Type || type == &PyBuffer_Type) { + pysqlite_BaseTypeAdapted = 1; + } + microprotocols_add(type, (PyObject*)&pysqlite_PrepareProtocolType, caster); Py_INCREF(Py_None); @@ -379,6 +387,8 @@ _enable_callback_tracebacks = 0; + pysqlite_BaseTypeAdapted = 0; + /* Original comment form _bsddb.c in the Python core. This is also still * needed nowadays for Python 2.3/2.4. * Modified: python/branches/libffi3-branch/Modules/_sqlite/module.h ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/module.h (original) +++ python/branches/libffi3-branch/Modules/_sqlite/module.h Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* module.h - definitions for the module * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. 
* @@ -25,7 +25,7 @@ #define PYSQLITE_MODULE_H #include "Python.h" -#define PYSQLITE_VERSION "2.3.3" +#define PYSQLITE_VERSION "2.4.1" extern PyObject* pysqlite_Error; extern PyObject* pysqlite_Warning; @@ -51,6 +51,7 @@ extern PyObject* converters; extern int _enable_callback_tracebacks; +extern int pysqlite_BaseTypeAdapted; #define PARSE_DECLTYPES 1 #define PARSE_COLNAMES 2 Modified: python/branches/libffi3-branch/Modules/_sqlite/statement.c ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/statement.c (original) +++ python/branches/libffi3-branch/Modules/_sqlite/statement.c Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* statement.c - the statement type * - * Copyright (C) 2005-2006 Gerhard H?ring + * Copyright (C) 2005-2007 Gerhard H?ring * * This file is part of pysqlite. * @@ -40,6 +40,16 @@ NORMAL } parse_remaining_sql_state; +typedef enum { + TYPE_INT, + TYPE_LONG, + TYPE_FLOAT, + TYPE_STRING, + TYPE_UNICODE, + TYPE_BUFFER, + TYPE_UNKNOWN +} parameter_type; + int pysqlite_statement_create(pysqlite_Statement* self, pysqlite_Connection* connection, PyObject* sql) { const char* tail; @@ -97,42 +107,96 @@ char* string; Py_ssize_t buflen; PyObject* stringval; + parameter_type paramtype; if (parameter == Py_None) { rc = sqlite3_bind_null(self->st, pos); + goto final; + } + + if (PyInt_CheckExact(parameter)) { + paramtype = TYPE_INT; + } else if (PyLong_CheckExact(parameter)) { + paramtype = TYPE_LONG; + } else if (PyFloat_CheckExact(parameter)) { + paramtype = TYPE_FLOAT; + } else if (PyString_CheckExact(parameter)) { + paramtype = TYPE_STRING; + } else if (PyUnicode_CheckExact(parameter)) { + paramtype = TYPE_UNICODE; + } else if (PyBuffer_Check(parameter)) { + paramtype = TYPE_BUFFER; } else if (PyInt_Check(parameter)) { - longval = PyInt_AsLong(parameter); - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longval); -#ifdef HAVE_LONG_LONG + paramtype = TYPE_INT; } else if (PyLong_Check(parameter)) { - longlongval = PyLong_AsLongLong(parameter); - /* in the overflow error case, longlongval is -1, and an exception is set */ - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longlongval); -#endif + paramtype = TYPE_LONG; } else if (PyFloat_Check(parameter)) { - rc = sqlite3_bind_double(self->st, pos, PyFloat_AsDouble(parameter)); - } else if (PyBuffer_Check(parameter)) { - if (PyObject_AsCharBuffer(parameter, &buffer, &buflen) == 0) { - rc = sqlite3_bind_blob(self->st, pos, buffer, buflen, SQLITE_TRANSIENT); - } else { - PyErr_SetString(PyExc_ValueError, "could not convert BLOB to buffer"); - rc = -1; - } - } else if PyString_Check(parameter) { - string = PyString_AsString(parameter); - rc = sqlite3_bind_text(self->st, pos, string, -1, SQLITE_TRANSIENT); - } else if PyUnicode_Check(parameter) { - stringval = PyUnicode_AsUTF8String(parameter); - string = PyString_AsString(stringval); - rc = sqlite3_bind_text(self->st, pos, string, -1, SQLITE_TRANSIENT); - Py_DECREF(stringval); + paramtype = TYPE_FLOAT; + } else if (PyString_Check(parameter)) { + paramtype = TYPE_STRING; + } else if (PyUnicode_Check(parameter)) { + paramtype = TYPE_UNICODE; } else { - rc = -1; + paramtype = TYPE_UNKNOWN; } + switch (paramtype) { + case TYPE_INT: + longval = PyInt_AsLong(parameter); + rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longval); + break; +#ifdef HAVE_LONG_LONG + case TYPE_LONG: + longlongval = PyLong_AsLongLong(parameter); + /* in the overflow error case, longlongval is -1, and an exception is set */ + rc = 
sqlite3_bind_int64(self->st, pos, (sqlite_int64)longlongval); + break; +#endif + case TYPE_FLOAT: + rc = sqlite3_bind_double(self->st, pos, PyFloat_AsDouble(parameter)); + break; + case TYPE_STRING: + string = PyString_AS_STRING(parameter); + rc = sqlite3_bind_text(self->st, pos, string, -1, SQLITE_TRANSIENT); + break; + case TYPE_UNICODE: + stringval = PyUnicode_AsUTF8String(parameter); + string = PyString_AsString(stringval); + rc = sqlite3_bind_text(self->st, pos, string, -1, SQLITE_TRANSIENT); + Py_DECREF(stringval); + break; + case TYPE_BUFFER: + if (PyObject_AsCharBuffer(parameter, &buffer, &buflen) == 0) { + rc = sqlite3_bind_blob(self->st, pos, buffer, buflen, SQLITE_TRANSIENT); + } else { + PyErr_SetString(PyExc_ValueError, "could not convert BLOB to buffer"); + rc = -1; + } + break; + case TYPE_UNKNOWN: + rc = -1; + } + +final: return rc; } +/* returns 0 if the object is one of Python's internal ones that don't need to be adapted */ +static int _need_adapt(PyObject* obj) +{ + if (pysqlite_BaseTypeAdapted) { + return 1; + } + + if (PyInt_CheckExact(obj) || PyLong_CheckExact(obj) + || PyFloat_CheckExact(obj) || PyString_CheckExact(obj) + || PyUnicode_CheckExact(obj) || PyBuffer_Check(obj)) { + return 0; + } else { + return 1; + } +} + void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters) { PyObject* current_param; @@ -147,7 +211,55 @@ num_params_needed = sqlite3_bind_parameter_count(self->st); Py_END_ALLOW_THREADS - if (PyDict_Check(parameters)) { + if (PyTuple_CheckExact(parameters) || PyList_CheckExact(parameters) || (!PyDict_Check(parameters) && PySequence_Check(parameters))) { + /* parameters passed as sequence */ + if (PyTuple_CheckExact(parameters)) { + num_params = PyTuple_GET_SIZE(parameters); + } else if (PyList_CheckExact(parameters)) { + num_params = PyList_GET_SIZE(parameters); + } else { + num_params = PySequence_Size(parameters); + } + if (num_params != num_params_needed) { + PyErr_Format(pysqlite_ProgrammingError, "Incorrect number of bindings supplied. 
The current statement uses %d, and there are %d supplied.", + num_params_needed, num_params); + return; + } + for (i = 0; i < num_params; i++) { + if (PyTuple_CheckExact(parameters)) { + current_param = PyTuple_GET_ITEM(parameters, i); + Py_XINCREF(current_param); + } else if (PyList_CheckExact(parameters)) { + current_param = PyList_GET_ITEM(parameters, i); + Py_XINCREF(current_param); + } else { + current_param = PySequence_GetItem(parameters, i); + } + if (!current_param) { + return; + } + + if (!_need_adapt(current_param)) { + adapted = current_param; + } else { + adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL); + if (adapted) { + Py_DECREF(current_param); + } else { + PyErr_Clear(); + adapted = current_param; + } + } + + rc = pysqlite_statement_bind_parameter(self, i + 1, adapted); + Py_DECREF(adapted); + + if (rc != SQLITE_OK) { + PyErr_Format(pysqlite_InterfaceError, "Error binding parameter %d - probably unsupported type.", i); + return; + } + } + } else if (PyDict_Check(parameters)) { /* parameters passed as dictionary */ for (i = 1; i <= num_params_needed; i++) { Py_BEGIN_ALLOW_THREADS @@ -159,19 +271,27 @@ } binding_name++; /* skip first char (the colon) */ - current_param = PyDict_GetItemString(parameters, binding_name); + if (PyDict_CheckExact(parameters)) { + current_param = PyDict_GetItemString(parameters, binding_name); + Py_XINCREF(current_param); + } else { + current_param = PyMapping_GetItemString(parameters, (char*)binding_name); + } if (!current_param) { PyErr_Format(pysqlite_ProgrammingError, "You did not supply a value for binding %d.", i); return; } - Py_INCREF(current_param); - adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL); - if (adapted) { - Py_DECREF(current_param); - } else { - PyErr_Clear(); + if (!_need_adapt(current_param)) { adapted = current_param; + } else { + adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL); + if (adapted) { + Py_DECREF(current_param); + } else { + PyErr_Clear(); + adapted = current_param; + } } rc = pysqlite_statement_bind_parameter(self, i, adapted); @@ -183,35 +303,7 @@ } } } else { - /* parameters passed as sequence */ - num_params = PySequence_Length(parameters); - if (num_params != num_params_needed) { - PyErr_Format(pysqlite_ProgrammingError, "Incorrect number of bindings supplied. 
The current statement uses %d, and there are %d supplied.", - num_params_needed, num_params); - return; - } - for (i = 0; i < num_params; i++) { - current_param = PySequence_GetItem(parameters, i); - if (!current_param) { - return; - } - adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL); - - if (adapted) { - Py_DECREF(current_param); - } else { - PyErr_Clear(); - adapted = current_param; - } - - rc = pysqlite_statement_bind_parameter(self, i + 1, adapted); - Py_DECREF(adapted); - - if (rc != SQLITE_OK) { - PyErr_Format(pysqlite_InterfaceError, "Error binding parameter %d - probably unsupported type.", i); - return; - } - } + PyErr_SetString(PyExc_ValueError, "parameters are of unsupported type"); } } Modified: python/branches/libffi3-branch/Modules/_sqlite/util.c ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/util.c (original) +++ python/branches/libffi3-branch/Modules/_sqlite/util.c Tue Mar 4 15:50:53 2008 @@ -1,6 +1,6 @@ /* util.c - various utility functions * - * Copyright (C) 2005-2006 Gerhard H?ring + * Copyright (C) 2005-2007 Gerhard H?ring * * This file is part of pysqlite. * @@ -45,10 +45,15 @@ * Checks the SQLite error code and sets the appropriate DB-API exception. * Returns the error code (0 means no error occurred). */ -int _pysqlite_seterror(sqlite3* db) +int _pysqlite_seterror(sqlite3* db, sqlite3_stmt* st) { int errorcode; + /* SQLite often doesn't report anything useful, unless you reset the statement first */ + if (st != NULL) { + (void)sqlite3_reset(st); + } + errorcode = sqlite3_errcode(db); switch (errorcode) Modified: python/branches/libffi3-branch/Modules/_sqlite/util.h ============================================================================== --- python/branches/libffi3-branch/Modules/_sqlite/util.h (original) +++ python/branches/libffi3-branch/Modules/_sqlite/util.h Tue Mar 4 15:50:53 2008 @@ -34,5 +34,5 @@ * Checks the SQLite error code and sets the appropriate DB-API exception. * Returns the error code (0 means no error occurred). */ -int _pysqlite_seterror(sqlite3* db); +int _pysqlite_seterror(sqlite3* db, sqlite3_stmt* st); #endif Modified: python/branches/libffi3-branch/Modules/_testcapimodule.c ============================================================================== --- python/branches/libffi3-branch/Modules/_testcapimodule.c (original) +++ python/branches/libffi3-branch/Modules/_testcapimodule.c Tue Mar 4 15:50:53 2008 @@ -306,6 +306,22 @@ return Py_BuildValue("iii", a, b, c); } +/* test PyArg_ParseTupleAndKeywords */ +static PyObject *getargs_keywords(PyObject *self, PyObject *args, PyObject *kwargs) +{ + static char *keywords[] = {"arg1","arg2","arg3","arg4","arg5", NULL}; + static char *fmt="(ii)i|(i(ii))(iii)i"; + int int_args[10]={-1, -1, -1, -1, -1, -1, -1, -1, -1, -1}; + + if (!PyArg_ParseTupleAndKeywords(args, kwargs, fmt, keywords, + &int_args[0], &int_args[1], &int_args[2], &int_args[3], &int_args[4], + &int_args[5], &int_args[6], &int_args[7], &int_args[8], &int_args[9])) + return NULL; + return Py_BuildValue("iiiiiiiiii", + int_args[0], int_args[1], int_args[2], int_args[3], int_args[4], + int_args[5], int_args[6], int_args[7], int_args[8], int_args[9]); +} + /* Functions to call PyArg_ParseTuple with integer format codes, and return the result. 
*/ @@ -732,6 +748,8 @@ PyDoc_STR("This is a pretty normal docstring.")}, {"getargs_tuple", getargs_tuple, METH_VARARGS}, + {"getargs_keywords", (PyCFunction)getargs_keywords, + METH_VARARGS|METH_KEYWORDS}, {"getargs_b", getargs_b, METH_VARARGS}, {"getargs_B", getargs_B, METH_VARARGS}, {"getargs_H", getargs_H, METH_VARARGS}, Modified: python/branches/libffi3-branch/Modules/dbmmodule.c ============================================================================== --- python/branches/libffi3-branch/Modules/dbmmodule.c (original) +++ python/branches/libffi3-branch/Modules/dbmmodule.c Tue Mar 4 15:50:53 2008 @@ -161,6 +161,37 @@ return 0; } +static int +dbm_contains(register dbmobject *dp, PyObject *v) +{ + datum key, val; + + if (PyString_AsStringAndSize(v, &key.dptr, &key.dsize)) { + return -1; + } + + /* Expand check_dbmobject_open to return -1 */ + if (dp->di_dbm == NULL) { + PyErr_SetString(DbmError, "DBM object has already been closed"); + return -1; + } + val = dbm_fetch(dp->di_dbm, key); + return val.dptr != NULL; +} + +static PySequenceMethods dbm_as_sequence = { + (lenfunc)dbm_length, /*_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + (objobjproc)dbm_contains, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0 /*sq_inplace_repeat*/ +}; + static PyMappingMethods dbm_as_mapping = { (lenfunc)dbm_length, /*mp_length*/ (binaryfunc)dbm_subscript, /*mp_subscript*/ @@ -313,8 +344,15 @@ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ + &dbm_as_sequence, /*tp_as_sequence*/ &dbm_as_mapping, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + 0, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT, /*tp_xxx4*/ }; /* ----------------------------------------------------------------- */ Modified: python/branches/libffi3-branch/Modules/gdbmmodule.c ============================================================================== --- python/branches/libffi3-branch/Modules/gdbmmodule.c (original) +++ python/branches/libffi3-branch/Modules/gdbmmodule.c Tue Mar 4 15:50:53 2008 @@ -178,6 +178,40 @@ return 0; } +static int +dbm_contains(register dbmobject *dp, PyObject *arg) +{ + datum key; + + if ((dp)->di_dbm == NULL) { + PyErr_SetString(DbmError, + "GDBM object has already been closed"); + return -1; + } + if (!PyString_Check(arg)) { + PyErr_Format(PyExc_TypeError, + "gdbm key must be string, not %.100s", + arg->ob_type->tp_name); + return -1; + } + key.dptr = PyString_AS_STRING(arg); + key.dsize = PyString_GET_SIZE(arg); + return gdbm_exists(dp->di_dbm, key); +} + +static PySequenceMethods dbm_as_sequence = { + (lenfunc)dbm_length, /*_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + (objobjproc)dbm_contains, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0 /*sq_inplace_repeat*/ +}; + static PyMappingMethods dbm_as_mapping = { (lenfunc)dbm_length, /*mp_length*/ (binaryfunc)dbm_subscript, /*mp_subscript*/ @@ -381,7 +415,7 @@ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ + &dbm_as_sequence, /*tp_as_sequence*/ &dbm_as_mapping, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ @@ -389,7 +423,7 @@ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ - 0, /*tp_xxx4*/ + Py_TPFLAGS_DEFAULT, /*tp_xxx4*/ gdbm_object__doc__, /*tp_doc*/ }; Modified: python/branches/libffi3-branch/Modules/itertoolsmodule.c 
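Before the itertools changes that follow, note that the dbmmodule.c and gdbmmodule.c hunks above fill in the sq_contains slot, so membership tests work directly on open databases. A small sketch of the effect (the file path is illustrative; dbm availability is platform-dependent):

    import dbm

    db = dbm.open("/tmp/contains-demo", "c")   # "c" creates the file if needed
    db["spam"] = "eggs"

    # Previously this needed db.has_key("spam") or a try/except around db["spam"];
    # with sq_contains wired up, the in operator works on the database object.
    print "spam" in db          # True
    print "missing" in db       # False
    db.close()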
============================================================================== --- python/branches/libffi3-branch/Modules/itertoolsmodule.c (original) +++ python/branches/libffi3-branch/Modules/itertoolsmodule.c Tue Mar 4 15:50:53 2008 @@ -1601,92 +1601,104 @@ typedef struct { PyObject_HEAD - Py_ssize_t tuplesize; - Py_ssize_t iternum; /* which iterator is active */ - PyObject *ittuple; /* tuple of iterators */ + PyObject *source; /* Iterator over input iterables */ + PyObject *active; /* Currently running input iterator */ } chainobject; static PyTypeObject chain_type; +static PyObject * +chain_new_internal(PyTypeObject *type, PyObject *source) +{ + chainobject *lz; + + lz = (chainobject *)type->tp_alloc(type, 0); + if (lz == NULL) { + Py_DECREF(source); + return NULL; + } + + lz->source = source; + lz->active = NULL; + return (PyObject *)lz; +} + static PyObject * chain_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - chainobject *lz; - Py_ssize_t tuplesize = PySequence_Length(args); - Py_ssize_t i; - PyObject *ittuple; + PyObject *source; if (type == &chain_type && !_PyArg_NoKeywords("chain()", kwds)) return NULL; - - /* obtain iterators */ - assert(PyTuple_Check(args)); - ittuple = PyTuple_New(tuplesize); - if (ittuple == NULL) + + source = PyObject_GetIter(args); + if (source == NULL) return NULL; - for (i=0; i < tuplesize; ++i) { - PyObject *item = PyTuple_GET_ITEM(args, i); - PyObject *it = PyObject_GetIter(item); - if (it == NULL) { - if (PyErr_ExceptionMatches(PyExc_TypeError)) - PyErr_Format(PyExc_TypeError, - "chain argument #%zd must support iteration", - i+1); - Py_DECREF(ittuple); - return NULL; - } - PyTuple_SET_ITEM(ittuple, i, it); - } - /* create chainobject structure */ - lz = (chainobject *)type->tp_alloc(type, 0); - if (lz == NULL) { - Py_DECREF(ittuple); - return NULL; - } + return chain_new_internal(type, source); +} - lz->ittuple = ittuple; - lz->iternum = 0; - lz->tuplesize = tuplesize; +static PyObject * +chain_new_from_iterable(PyTypeObject *type, PyObject *arg) +{ + PyObject *source; + + source = PyObject_GetIter(arg); + if (source == NULL) + return NULL; - return (PyObject *)lz; + return chain_new_internal(type, source); } static void chain_dealloc(chainobject *lz) { PyObject_GC_UnTrack(lz); - Py_XDECREF(lz->ittuple); + Py_XDECREF(lz->active); + Py_XDECREF(lz->source); Py_TYPE(lz)->tp_free(lz); } static int chain_traverse(chainobject *lz, visitproc visit, void *arg) { - Py_VISIT(lz->ittuple); + Py_VISIT(lz->source); + Py_VISIT(lz->active); return 0; } static PyObject * chain_next(chainobject *lz) { - PyObject *it; PyObject *item; - while (lz->iternum < lz->tuplesize) { - it = PyTuple_GET_ITEM(lz->ittuple, lz->iternum); - item = PyIter_Next(it); - if (item != NULL) - return item; - if (PyErr_Occurred()) { - if (PyErr_ExceptionMatches(PyExc_StopIteration)) - PyErr_Clear(); - else - return NULL; + if (lz->source == NULL) + return NULL; /* already stopped */ + + if (lz->active == NULL) { + PyObject *iterable = PyIter_Next(lz->source); + if (iterable == NULL) { + Py_CLEAR(lz->source); + return NULL; /* no more input sources */ + } + lz->active = PyObject_GetIter(iterable); + if (lz->active == NULL) { + Py_DECREF(iterable); + Py_CLEAR(lz->source); + return NULL; /* input not iterable */ } - lz->iternum++; } - return NULL; + item = PyIter_Next(lz->active); + if (item != NULL) + return item; + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_StopIteration)) + PyErr_Clear(); + else + return NULL; /* input raised an exception */ + } + 
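The rewritten chain object above keeps a source iterator of input iterables plus the currently active iterator, pulling new inputs lazily instead of building a tuple of iterators up front; that is what makes the from_iterable constructor registered just below possible. A short sketch of the observable behaviour:

    from itertools import chain

    # Classic form: the input iterables are fixed at call time.
    print list(chain("ab", range(2)))          # ['a', 'b', 0, 1]

    # New alternate constructor: a single iterable of iterables, consumed
    # lazily, so it also works when the inputs come from a generator.
    def rows():
        yield "ab"
        yield "cd"

    print list(chain.from_iterable(rows()))    # ['a', 'b', 'c', 'd']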
Py_CLEAR(lz->active); + return chain_next(lz); /* recurse and use next active */ } PyDoc_STRVAR(chain_doc, @@ -1696,6 +1708,18 @@ first iterable until it is exhausted, then elements from the next\n\ iterable, until all of the iterables are exhausted."); +PyDoc_STRVAR(chain_from_iterable_doc, +"chain.from_iterable(iterable) --> chain object\n\ +\n\ +Alternate chain() contructor taking a single iterable argument\n\ +that evaluates lazily."); + +static PyMethodDef chain_methods[] = { + {"from_iterable", (PyCFunction) chain_new_from_iterable, METH_O | METH_CLASS, + chain_from_iterable_doc}, + {NULL, NULL} /* sentinel */ +}; + static PyTypeObject chain_type = { PyVarObject_HEAD_INIT(NULL, 0) "itertools.chain", /* tp_name */ @@ -1726,7 +1750,7 @@ 0, /* tp_weaklistoffset */ PyObject_SelfIter, /* tp_iter */ (iternextfunc)chain_next, /* tp_iternext */ - 0, /* tp_methods */ + chain_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ @@ -1741,6 +1765,479 @@ }; +/* product object ************************************************************/ + +typedef struct { + PyObject_HEAD + PyObject *pools; /* tuple of pool tuples */ + Py_ssize_t *indices; /* one index per pool */ + PyObject *result; /* most recently returned result tuple */ + int stopped; /* set to 1 when the product iterator is exhausted */ +} productobject; + +static PyTypeObject product_type; + +static PyObject * +product_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +{ + productobject *lz; + Py_ssize_t nargs, npools, repeat=1; + PyObject *pools = NULL; + Py_ssize_t *indices = NULL; + Py_ssize_t i; + + if (kwds != NULL) { + char *kwlist[] = {"repeat", 0}; + PyObject *tmpargs = PyTuple_New(0); + if (tmpargs == NULL) + return NULL; + if (!PyArg_ParseTupleAndKeywords(tmpargs, kwds, "|n:product", kwlist, &repeat)) { + Py_DECREF(tmpargs); + return NULL; + } + Py_DECREF(tmpargs); + if (repeat < 0) { + PyErr_SetString(PyExc_ValueError, + "repeat argument cannot be negative"); + return NULL; + } + } + + assert(PyTuple_Check(args)); + nargs = (repeat == 0) ? 
0 : PyTuple_GET_SIZE(args); + npools = nargs * repeat; + + indices = PyMem_Malloc(npools * sizeof(Py_ssize_t)); + if (indices == NULL) { + PyErr_NoMemory(); + goto error; + } + + pools = PyTuple_New(npools); + if (pools == NULL) + goto error; + + for (i=0; i < nargs ; ++i) { + PyObject *item = PyTuple_GET_ITEM(args, i); + PyObject *pool = PySequence_Tuple(item); + if (pool == NULL) + goto error; + PyTuple_SET_ITEM(pools, i, pool); + indices[i] = 0; + } + for ( ; i < npools; ++i) { + PyObject *pool = PyTuple_GET_ITEM(pools, i - nargs); + Py_INCREF(pool); + PyTuple_SET_ITEM(pools, i, pool); + indices[i] = 0; + } + + /* create productobject structure */ + lz = (productobject *)type->tp_alloc(type, 0); + if (lz == NULL) + goto error; + + lz->pools = pools; + lz->indices = indices; + lz->result = NULL; + lz->stopped = 0; + + return (PyObject *)lz; + +error: + if (indices != NULL) + PyMem_Free(indices); + Py_XDECREF(pools); + return NULL; +} + +static void +product_dealloc(productobject *lz) +{ + PyObject_GC_UnTrack(lz); + Py_XDECREF(lz->pools); + Py_XDECREF(lz->result); + PyMem_Free(lz->indices); + Py_TYPE(lz)->tp_free(lz); +} + +static int +product_traverse(productobject *lz, visitproc visit, void *arg) +{ + Py_VISIT(lz->pools); + Py_VISIT(lz->result); + return 0; +} + +static PyObject * +product_next(productobject *lz) +{ + PyObject *pool; + PyObject *elem; + PyObject *oldelem; + PyObject *pools = lz->pools; + PyObject *result = lz->result; + Py_ssize_t npools = PyTuple_GET_SIZE(pools); + Py_ssize_t i; + + if (lz->stopped) + return NULL; + + if (result == NULL) { + /* On the first pass, return an initial tuple filled with the + first element from each pool. */ + result = PyTuple_New(npools); + if (result == NULL) + goto empty; + lz->result = result; + for (i=0; i < npools; i++) { + pool = PyTuple_GET_ITEM(pools, i); + if (PyTuple_GET_SIZE(pool) == 0) + goto empty; + elem = PyTuple_GET_ITEM(pool, 0); + Py_INCREF(elem); + PyTuple_SET_ITEM(result, i, elem); + } + } else { + Py_ssize_t *indices = lz->indices; + + /* Copy the previous result tuple or re-use it if available */ + if (Py_REFCNT(result) > 1) { + PyObject *old_result = result; + result = PyTuple_New(npools); + if (result == NULL) + goto empty; + lz->result = result; + for (i=0; i < npools; i++) { + elem = PyTuple_GET_ITEM(old_result, i); + Py_INCREF(elem); + PyTuple_SET_ITEM(result, i, elem); + } + Py_DECREF(old_result); + } + /* Now, we've got the only copy so we can update it in-place */ + assert (npools==0 || Py_REFCNT(result) == 1); + + /* Update the pool indices right-to-left. Only advance to the + next pool when the previous one rolls-over */ + for (i=npools-1 ; i >= 0 ; i--) { + pool = PyTuple_GET_ITEM(pools, i); + indices[i]++; + if (indices[i] == PyTuple_GET_SIZE(pool)) { + /* Roll-over and advance to next pool */ + indices[i] = 0; + elem = PyTuple_GET_ITEM(pool, 0); + Py_INCREF(elem); + oldelem = PyTuple_GET_ITEM(result, i); + PyTuple_SET_ITEM(result, i, elem); + Py_DECREF(oldelem); + } else { + /* No rollover. Just increment and stop here. */ + elem = PyTuple_GET_ITEM(pool, indices[i]); + Py_INCREF(elem); + oldelem = PyTuple_GET_ITEM(result, i); + PyTuple_SET_ITEM(result, i, elem); + Py_DECREF(oldelem); + break; + } + } + + /* If i is negative, then the indices have all rolled-over + and we're done. 
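product_next above advances the per-pool indices right to left like an odometer, so the rightmost input varies fastest in the output, while product_new duplicates the pools when the repeat keyword is given. A brief sketch of the resulting behaviour:

    from itertools import product

    # Cartesian product in odometer order: the rightmost iterable cycles fastest.
    print list(product("ab", range(2)))
    # [('a', 0), ('a', 1), ('b', 0), ('b', 1)]

    # repeat=N is equivalent to passing the same pools N times.
    print list(product((0, 1), repeat=2))
    # [(0, 0), (0, 1), (1, 0), (1, 1)]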
*/ + if (i < 0) + goto empty; + } + + Py_INCREF(result); + return result; + +empty: + lz->stopped = 1; + return NULL; +} + +PyDoc_STRVAR(product_doc, +"product(*iterables) --> product object\n\ +\n\ +Cartesian product of input iterables. Equivalent to nested for-loops.\n\n\ +For example, product(A, B) returns the same as: ((x,y) for x in A for y in B).\n\ +The leftmost iterators are in the outermost for-loop, so the output tuples\n\ +cycle in a manner similar to an odometer (with the rightmost element changing\n\ +on every iteration).\n\n\ +product('ab', range(3)) --> ('a',0) ('a',1) ('a',2) ('b',0) ('b',1) ('b',2)\n\ +product((0,1), (0,1), (0,1)) --> (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) ..."); + +static PyTypeObject product_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "itertools.product", /* tp_name */ + sizeof(productobject), /* tp_basicsize */ + 0, /* tp_itemsize */ + /* methods */ + (destructor)product_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_BASETYPE, /* tp_flags */ + product_doc, /* tp_doc */ + (traverseproc)product_traverse, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + PyObject_SelfIter, /* tp_iter */ + (iternextfunc)product_next, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + product_new, /* tp_new */ + PyObject_GC_Del, /* tp_free */ +}; + + +/* combinations object ************************************************************/ + +typedef struct { + PyObject_HEAD + PyObject *pool; /* input converted to a tuple */ + Py_ssize_t *indices; /* one index per result element */ + PyObject *result; /* most recently returned result tuple */ + Py_ssize_t r; /* size of result tuple */ + int stopped; /* set to 1 when the combinations iterator is exhausted */ +} combinationsobject; + +static PyTypeObject combinations_type; + +static PyObject * +combinations_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +{ + combinationsobject *co; + Py_ssize_t n; + Py_ssize_t r; + PyObject *pool = NULL; + PyObject *iterable = NULL; + Py_ssize_t *indices = NULL; + Py_ssize_t i; + static char *kwargs[] = {"iterable", "r", NULL}; + + if (!PyArg_ParseTupleAndKeywords(args, kwds, "On:combinations", kwargs, + &iterable, &r)) + return NULL; + + pool = PySequence_Tuple(iterable); + if (pool == NULL) + goto error; + n = PyTuple_GET_SIZE(pool); + if (r < 0) { + PyErr_SetString(PyExc_ValueError, "r must be non-negative"); + goto error; + } + if (r > n) { + PyErr_SetString(PyExc_ValueError, "r cannot be bigger than the iterable"); + goto error; + } + + indices = PyMem_Malloc(r * sizeof(Py_ssize_t)); + if (indices == NULL) { + PyErr_NoMemory(); + goto error; + } + + for (i=0 ; itp_alloc(type, 0); + if (co == NULL) + goto error; + + co->pool = pool; + co->indices = indices; + co->result = NULL; + co->r = r; + co->stopped = 0; + + return (PyObject *)co; + +error: + if (indices != NULL) + PyMem_Free(indices); + Py_XDECREF(pool); + return NULL; +} + +static void +combinations_dealloc(combinationsobject *co) +{ + 
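combinations_new above validates r against the pool size and seeds a sorted index vector that combinations_next then advances lexicographically. A small usage sketch; the error case reflects the validation as committed here (later itertools releases return an empty iterator instead):

    from itertools import combinations

    # r-length subsequences in lexicographic order of the input indices.
    print list(combinations(range(4), 3))
    # [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

    # At this revision, r < 0 or r larger than the input length raises ValueError.
    try:
        combinations("abc", 5)
    except ValueError, exc:
        print exc        # r cannot be bigger than the iterable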
PyObject_GC_UnTrack(co); + Py_XDECREF(co->pool); + Py_XDECREF(co->result); + PyMem_Free(co->indices); + Py_TYPE(co)->tp_free(co); +} + +static int +combinations_traverse(combinationsobject *co, visitproc visit, void *arg) +{ + Py_VISIT(co->pool); + Py_VISIT(co->result); + return 0; +} + +static PyObject * +combinations_next(combinationsobject *co) +{ + PyObject *elem; + PyObject *oldelem; + PyObject *pool = co->pool; + Py_ssize_t *indices = co->indices; + PyObject *result = co->result; + Py_ssize_t n = PyTuple_GET_SIZE(pool); + Py_ssize_t r = co->r; + Py_ssize_t i, j, index; + + if (co->stopped) + return NULL; + + if (result == NULL) { + /* On the first pass, initialize result tuple using the indices */ + result = PyTuple_New(r); + if (result == NULL) + goto empty; + co->result = result; + for (i=0; i 1) { + PyObject *old_result = result; + result = PyTuple_New(r); + if (result == NULL) + goto empty; + co->result = result; + for (i=0; i= 0 && indices[i] == i+n-r ; i--) + ; + + /* If i is negative, then the indices are all at + their maximum value and we're done. */ + if (i < 0) + goto empty; + + /* Increment the current index which we know is not at its + maximum. Then move back to the right setting each index + to its lowest possible value (one higher than the index + to its left -- this maintains the sort order invariant). */ + indices[i]++; + for (j=i+1 ; jstopped = 1; + return NULL; +} + +PyDoc_STRVAR(combinations_doc, +"combinations(iterables) --> combinations object\n\ +\n\ +Return successive r-length combinations of elements in the iterable.\n\n\ +combinations(range(4), 3) --> (0,1,2), (0,1,3), (0,2,3), (1,2,3)"); + +static PyTypeObject combinations_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "itertools.combinations", /* tp_name */ + sizeof(combinationsobject), /* tp_basicsize */ + 0, /* tp_itemsize */ + /* methods */ + (destructor)combinations_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_BASETYPE, /* tp_flags */ + combinations_doc, /* tp_doc */ + (traverseproc)combinations_traverse, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + PyObject_SelfIter, /* tp_iter */ + (iternextfunc)combinations_next, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + combinations_new, /* tp_new */ + PyObject_GC_Del, /* tp_free */ +}; + + /* ifilter object ************************************************************/ typedef struct { @@ -1814,7 +2311,7 @@ if (item == NULL) return NULL; - if (lz->func == Py_None) { + if (lz->func == Py_None || lz->func == (PyObject *)&PyBool_Type) { ok = PyObject_IsTrue(item); } else { PyObject *good; @@ -1958,7 +2455,7 @@ if (item == NULL) return NULL; - if (lz->func == Py_None) { + if (lz->func == Py_None || lz->func == (PyObject *)&PyBool_Type) { ok = PyObject_IsTrue(item); } else { PyObject *good; @@ -2785,6 +3282,7 @@ PyObject *m; char *name; PyTypeObject *typelist[] = { + &combinations_type, &cycle_type, &dropwhile_type, &takewhile_type, @@ -2796,7 +3294,8 @@ &ifilterfalse_type, 
&count_type, &izip_type, - &iziplongest_type, + &iziplongest_type, + &product_type, &repeat_type, &groupby_type, NULL Modified: python/branches/libffi3-branch/Modules/operator.c ============================================================================== --- python/branches/libffi3-branch/Modules/operator.c (original) +++ python/branches/libffi3-branch/Modules/operator.c Tue Mar 4 15:50:53 2008 @@ -496,6 +496,49 @@ } static PyObject * +dotted_getattr(PyObject *obj, PyObject *attr) +{ + char *s, *p; + +#ifdef Py_USING_UNICODE + if (PyUnicode_Check(attr)) { + attr = _PyUnicode_AsDefaultEncodedString(attr, NULL); + if (attr == NULL) + return NULL; + } +#endif + + if (!PyString_Check(attr)) { + PyErr_SetString(PyExc_TypeError, + "attribute name must be a string"); + return NULL; + } + + s = PyString_AS_STRING(attr); + Py_INCREF(obj); + for (;;) { + PyObject *newobj, *str; + p = strchr(s, '.'); + str = p ? PyString_FromStringAndSize(s, (p-s)) : + PyString_FromString(s); + if (str == NULL) { + Py_DECREF(obj); + return NULL; + } + newobj = PyObject_GetAttr(obj, str); + Py_DECREF(str); + Py_DECREF(obj); + if (newobj == NULL) + return NULL; + obj = newobj; + if (p == NULL) break; + s = p+1; + } + + return obj; +} + +static PyObject * attrgetter_call(attrgetterobject *ag, PyObject *args, PyObject *kw) { PyObject *obj, *result; @@ -504,7 +547,7 @@ if (!PyArg_UnpackTuple(args, "attrgetter", 1, 1, &obj)) return NULL; if (ag->nattrs == 1) - return PyObject_GetAttr(obj, ag->attr); + return dotted_getattr(obj, ag->attr); assert(PyTuple_Check(ag->attr)); assert(PyTuple_GET_SIZE(ag->attr) == nattrs); @@ -516,7 +559,7 @@ for (i=0 ; i < nattrs ; i++) { PyObject *attr, *val; attr = PyTuple_GET_ITEM(ag->attr, i); - val = PyObject_GetAttr(obj, attr); + val = dotted_getattr(obj, attr); if (val == NULL) { Py_DECREF(result); return NULL; @@ -531,7 +574,9 @@ \n\ Return a callable object that fetches the given attribute(s) from its operand.\n\ After, f=attrgetter('name'), the call f(r) returns r.name.\n\ -After, g=attrgetter('name', 'date'), the call g(r) returns (r.name, r.date)."); +After, g=attrgetter('name', 'date'), the call g(r) returns (r.name, r.date).\n\ +After, h=attrgetter('name.first', 'name.last'), the call h(r) returns\n\ +(r.name.first, r.name.last)."); static PyTypeObject attrgetter_type = { PyVarObject_HEAD_INIT(NULL, 0) @@ -575,6 +620,139 @@ attrgetter_new, /* tp_new */ 0, /* tp_free */ }; + + +/* methodcaller object **********************************************************/ + +typedef struct { + PyObject_HEAD + PyObject *name; + PyObject *args; + PyObject *kwds; +} methodcallerobject; + +static PyTypeObject methodcaller_type; + +static PyObject * +methodcaller_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +{ + methodcallerobject *mc; + PyObject *name, *newargs; + + if (PyTuple_GET_SIZE(args) < 1) { + PyErr_SetString(PyExc_TypeError, "methodcaller needs at least " + "one argument, the method name"); + return NULL; + } + + /* create methodcallerobject structure */ + mc = PyObject_GC_New(methodcallerobject, &methodcaller_type); + if (mc == NULL) + return NULL; + + newargs = PyTuple_GetSlice(args, 1, PyTuple_GET_SIZE(args)); + if (newargs == NULL) { + Py_DECREF(mc); + return NULL; + } + mc->args = newargs; + + name = PyTuple_GET_ITEM(args, 0); + Py_INCREF(name); + mc->name = name; + + Py_XINCREF(kwds); + mc->kwds = kwds; + + PyObject_GC_Track(mc); + return (PyObject *)mc; +} + +static void +methodcaller_dealloc(methodcallerobject *mc) +{ + PyObject_GC_UnTrack(mc); + Py_XDECREF(mc->name); + 
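dotted_getattr above resolves each '.'-separated component with PyObject_GetAttr, so attrgetter now accepts nested attribute paths, and the methodcaller object whose implementation starts here is the call-a-named-method counterpart. A hedged sketch; the toy classes are purely illustrative:

    from operator import attrgetter, methodcaller

    class Name(object):
        def __init__(self, first, last):
            self.first, self.last = first, last

    class Person(object):
        def __init__(self, name, age):
            self.name, self.age = name, age

    p = Person(Name("Ada", "Lovelace"), 36)

    # Single dotted path: equivalent to lambda r: r.name.last
    print attrgetter("name.last")(p)              # Lovelace

    # Several attributes still come back as a tuple, each resolved per path.
    print attrgetter("name.first", "age")(p)      # ('Ada', 36)

    # methodcaller('split', ',')(s) calls s.split(',')
    print methodcaller("split", ",")("a,b,c")     # ['a', 'b', 'c']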
Py_XDECREF(mc->args); + Py_XDECREF(mc->kwds); + PyObject_GC_Del(mc); +} + +static int +methodcaller_traverse(methodcallerobject *mc, visitproc visit, void *arg) +{ + Py_VISIT(mc->args); + Py_VISIT(mc->kwds); + return 0; +} + +static PyObject * +methodcaller_call(methodcallerobject *mc, PyObject *args, PyObject *kw) +{ + PyObject *method, *obj, *result; + + if (!PyArg_UnpackTuple(args, "methodcaller", 1, 1, &obj)) + return NULL; + method = PyObject_GetAttr(obj, mc->name); + if (method == NULL) + return NULL; + result = PyObject_Call(method, mc->args, mc->kwds); + Py_DECREF(method); + return result; +} + +PyDoc_STRVAR(methodcaller_doc, +"methodcaller(name, ...) --> methodcaller object\n\ +\n\ +Return a callable object that calls the given method on its operand.\n\ +After, f = methodcaller('name'), the call f(r) returns r.name().\n\ +After, g = methodcaller('name', 'date', foo=1), the call g(r) returns\n\ +r.name('date', foo=1)."); + +static PyTypeObject methodcaller_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "operator.methodcaller", /* tp_name */ + sizeof(methodcallerobject), /* tp_basicsize */ + 0, /* tp_itemsize */ + /* methods */ + (destructor)methodcaller_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + (ternaryfunc)methodcaller_call, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ + methodcaller_doc, /* tp_doc */ + (traverseproc)methodcaller_traverse, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + methodcaller_new, /* tp_new */ + 0, /* tp_free */ +}; + + /* Initialization function for the module (*must* be called initoperator) */ PyMODINIT_FUNC @@ -597,4 +775,9 @@ return; Py_INCREF(&attrgetter_type); PyModule_AddObject(m, "attrgetter", (PyObject *)&attrgetter_type); + + if (PyType_Ready(&methodcaller_type) < 0) + return; + Py_INCREF(&methodcaller_type); + PyModule_AddObject(m, "methodcaller", (PyObject *)&methodcaller_type); } Modified: python/branches/libffi3-branch/Modules/parsermodule.c ============================================================================== --- python/branches/libffi3-branch/Modules/parsermodule.c (original) +++ python/branches/libffi3-branch/Modules/parsermodule.c Tue Mar 4 15:50:53 2008 @@ -1498,7 +1498,7 @@ /* compound_stmt: - * if_stmt | while_stmt | for_stmt | try_stmt | funcdef | classdef + * if_stmt | while_stmt | for_stmt | try_stmt | funcdef | classdef | decorated */ static int validate_compound_stmt(node *tree) @@ -1517,7 +1517,8 @@ || (ntype == for_stmt) || (ntype == try_stmt) || (ntype == funcdef) - || (ntype == classdef)) + || (ntype == classdef) + || (ntype == decorated)) res = validate_node(tree); else { res = 0; @@ -1527,7 +1528,6 @@ return (res); } - static int validate_yield_or_testlist(node *tree) { @@ -2558,28 +2558,40 @@ /* funcdef: * - * -6 -5 -4 -3 -2 -1 - * [decorators] 'def' NAME parameters ':' suite + * -5 -4 -3 -2 -1 + * 'def' NAME parameters ':' suite */ static int validate_funcdef(node *tree) { int nch = NCH(tree); int ok = 
(validate_ntype(tree, funcdef) - && ((nch == 5) || (nch == 6)) + && (nch == 5) && validate_name(RCHILD(tree, -5), "def") && validate_ntype(RCHILD(tree, -4), NAME) && validate_colon(RCHILD(tree, -2)) && validate_parameters(RCHILD(tree, -3)) && validate_suite(RCHILD(tree, -1))); - - if (ok && (nch == 6)) - ok = validate_decorators(CHILD(tree, 0)); - return ok; } +/* decorated + * decorators (classdef | funcdef) + */ +static int +validate_decorated(node *tree) +{ + int nch = NCH(tree); + int ok = (validate_ntype(tree, decorated) + && (nch == 2) + && validate_decorators(RCHILD(tree, -2)) + && (validate_funcdef(RCHILD(tree, -1)) + || validate_class(RCHILD(tree, -1))) + ); + return ok; +} + static int validate_lambdef(node *tree) { @@ -2923,6 +2935,9 @@ case classdef: res = validate_class(tree); break; + case decorated: + res = validate_decorated(tree); + break; /* * "Trivial" parse tree nodes. * (Why did I call these trivial?) Modified: python/branches/libffi3-branch/Modules/signalmodule.c ============================================================================== --- python/branches/libffi3-branch/Modules/signalmodule.c (original) +++ python/branches/libffi3-branch/Modules/signalmodule.c Tue Mar 4 15:50:53 2008 @@ -272,6 +272,36 @@ None -- if an unknown handler is in effect\n\ anything else -- the callable Python object used as a handler"); +#ifdef HAVE_SIGINTERRUPT +PyDoc_STRVAR(siginterrupt_doc, +"siginterrupt(sig, flag) -> None\n\ +change system call restart behaviour: if flag is False, system calls\n\ +will be restarted when interrupted by signal sig, else system calls\n\ +will be interrupted."); + +static PyObject * +signal_siginterrupt(PyObject *self, PyObject *args) +{ + int sig_num; + int flag; + + if (!PyArg_ParseTuple(args, "ii:siginterrupt", &sig_num, &flag)) + return NULL; + if (sig_num < 1 || sig_num >= NSIG) { + PyErr_SetString(PyExc_ValueError, + "signal number out of range"); + return NULL; + } + if (siginterrupt(sig_num, flag)<0) { + PyErr_SetFromErrno(PyExc_RuntimeError); + return NULL; + } + + Py_INCREF(Py_None); + return Py_None; +} + +#endif static PyObject * signal_set_wakeup_fd(PyObject *self, PyObject *args) @@ -325,6 +355,9 @@ {"signal", signal_signal, METH_VARARGS, signal_doc}, {"getsignal", signal_getsignal, METH_VARARGS, getsignal_doc}, {"set_wakeup_fd", signal_set_wakeup_fd, METH_VARARGS, set_wakeup_fd_doc}, +#ifdef HAVE_SIGINTERRUPT + {"siginterrupt", signal_siginterrupt, METH_VARARGS, siginterrupt_doc}, +#endif #ifdef HAVE_PAUSE {"pause", (PyCFunction)signal_pause, METH_NOARGS,pause_doc}, Modified: python/branches/libffi3-branch/Modules/syslogmodule.c ============================================================================== --- python/branches/libffi3-branch/Modules/syslogmodule.c (original) +++ python/branches/libffi3-branch/Modules/syslogmodule.c Tue Mar 4 15:50:53 2008 @@ -92,7 +92,9 @@ return NULL; } + Py_BEGIN_ALLOW_THREADS; syslog(priority, "%s", message); + Py_END_ALLOW_THREADS; Py_INCREF(Py_None); return Py_None; } Modified: python/branches/libffi3-branch/Objects/dictobject.c ============================================================================== --- python/branches/libffi3-branch/Objects/dictobject.c (original) +++ python/branches/libffi3-branch/Objects/dictobject.c Tue Mar 4 15:50:53 2008 @@ -171,8 +171,10 @@ static void show_alloc(void) { - fprintf(stderr, "Dict allocations: %zd\n", count_alloc); - fprintf(stderr, "Dict reuse through freelist: %zd\n", count_reuse); + fprintf(stderr, "Dict allocations: %" PY_FORMAT_SIZE_T "d\n", + 
count_alloc); + fprintf(stderr, "Dict reuse through freelist: %" PY_FORMAT_SIZE_T + "d\n", count_reuse); fprintf(stderr, "%.2f%% reuse rate\n\n", (100.0*count_reuse/(count_alloc+count_reuse))); } Modified: python/branches/libffi3-branch/Objects/fileobject.c ============================================================================== --- python/branches/libffi3-branch/Objects/fileobject.c (original) +++ python/branches/libffi3-branch/Objects/fileobject.c Tue Mar 4 15:50:53 2008 @@ -1660,9 +1660,9 @@ } static PyObject * -file_exit(PyFileObject *f, PyObject *args) +file_exit(PyObject *f, PyObject *args) { - PyObject *ret = file_close(f); + PyObject *ret = PyObject_CallMethod(f, "close", NULL); if (!ret) /* If error occurred, pass through */ return NULL; Modified: python/branches/libffi3-branch/Objects/listobject.c ============================================================================== --- python/branches/libffi3-branch/Objects/listobject.c (original) +++ python/branches/libffi3-branch/Objects/listobject.c Tue Mar 4 15:50:53 2008 @@ -72,8 +72,10 @@ static void show_alloc(void) { - fprintf(stderr, "List allocations: %zd\n", count_alloc); - fprintf(stderr, "List reuse through freelist: %zd\n", count_reuse); + fprintf(stderr, "List allocations: %" PY_FORMAT_SIZE_T "d\n", + count_alloc); + fprintf(stderr, "List reuse through freelist: %" PY_FORMAT_SIZE_T + "d\n", count_reuse); fprintf(stderr, "%.2f%% reuse rate\n\n", (100.0*count_reuse/(count_alloc+count_reuse))); } Modified: python/branches/libffi3-branch/Objects/stringlib/string_format.h ============================================================================== --- python/branches/libffi3-branch/Objects/stringlib/string_format.h (original) +++ python/branches/libffi3-branch/Objects/stringlib/string_format.h Tue Mar 4 15:50:53 2008 @@ -494,7 +494,7 @@ goto done; #if PY_VERSION_HEX >= 0x03000000 - assert(PyString_Check(result)); + assert(PyUnicode_Check(result)); #else assert(PyString_Check(result) || PyUnicode_Check(result)); Modified: python/branches/libffi3-branch/Objects/stringobject.c ============================================================================== --- python/branches/libffi3-branch/Objects/stringobject.c (original) +++ python/branches/libffi3-branch/Objects/stringobject.c Tue Mar 4 15:50:53 2008 @@ -4585,6 +4585,7 @@ int prec = -1; int c = '\0'; int fill; + int isnumok; PyObject *v = NULL; PyObject *temp = NULL; char *pbuf; @@ -4786,23 +4787,52 @@ case 'X': if (c == 'i') c = 'd'; - if (PyLong_Check(v)) { - int ilen; - temp = _PyString_FormatLong(v, flags, - prec, c, &pbuf, &ilen); - len = ilen; - if (!temp) - goto error; - sign = 1; + isnumok = 0; + if (PyNumber_Check(v)) { + PyObject *iobj=NULL; + + if (PyInt_Check(v) || (PyLong_Check(v))) { + iobj = v; + Py_INCREF(iobj); + } + else { + iobj = PyNumber_Int(v); + if (iobj==NULL) iobj = PyNumber_Long(v); + } + if (iobj!=NULL) { + if (PyInt_Check(iobj)) { + isnumok = 1; + pbuf = formatbuf; + len = formatint(pbuf, + sizeof(formatbuf), + flags, prec, c, iobj); + Py_DECREF(iobj); + if (len < 0) + goto error; + sign = 1; + } + else if (PyLong_Check(iobj)) { + int ilen; + + isnumok = 1; + temp = _PyString_FormatLong(iobj, flags, + prec, c, &pbuf, &ilen); + Py_DECREF(iobj); + len = ilen; + if (!temp) + goto error; + sign = 1; + } + else { + Py_DECREF(iobj); + } + } } - else { - pbuf = formatbuf; - len = formatint(pbuf, - sizeof(formatbuf), - flags, prec, c, v); - if (len < 0) - goto error; - sign = 1; + if (!isnumok) { + PyErr_Format(PyExc_TypeError, + "%%%c format: a 
number is required, " + "not %.200s", c, Py_TYPE(v)->tp_name); + goto error; } if (flags & F_ZERO) fill = '0'; Modified: python/branches/libffi3-branch/Objects/typeobject.c ============================================================================== --- python/branches/libffi3-branch/Objects/typeobject.c (original) +++ python/branches/libffi3-branch/Objects/typeobject.c Tue Mar 4 15:50:53 2008 @@ -306,6 +306,40 @@ } static PyObject * +type_abstractmethods(PyTypeObject *type, void *context) +{ + PyObject *mod = PyDict_GetItemString(type->tp_dict, + "__abstractmethods__"); + if (!mod) { + PyErr_Format(PyExc_AttributeError, "__abstractmethods__"); + return NULL; + } + Py_XINCREF(mod); + return mod; +} + +static int +type_set_abstractmethods(PyTypeObject *type, PyObject *value, void *context) +{ + /* __abstractmethods__ should only be set once on a type, in + abc.ABCMeta.__new__, so this function doesn't do anything + special to update subclasses. + */ + int res = PyDict_SetItemString(type->tp_dict, + "__abstractmethods__", value); + if (res == 0) { + type_modified(type); + if (value && PyObject_IsTrue(value)) { + type->tp_flags |= Py_TPFLAGS_IS_ABSTRACT; + } + else { + type->tp_flags &= ~Py_TPFLAGS_IS_ABSTRACT; + } + } + return res; +} + +static PyObject * type_get_bases(PyTypeObject *type, void *context) { Py_INCREF(type->tp_bases); @@ -542,6 +576,8 @@ {"__name__", (getter)type_name, (setter)type_set_name, NULL}, {"__bases__", (getter)type_get_bases, (setter)type_set_bases, NULL}, {"__module__", (getter)type_module, (setter)type_set_module, NULL}, + {"__abstractmethods__", (getter)type_abstractmethods, + (setter)type_set_abstractmethods, NULL}, {"__dict__", (getter)type_dict, NULL, NULL}, {"__doc__", (getter)type_get_doc, NULL, NULL}, {0} @@ -2749,6 +2785,56 @@ } if (err < 0) return NULL; + + if (type->tp_flags & Py_TPFLAGS_IS_ABSTRACT) { + static PyObject *comma = NULL; + PyObject *abstract_methods = NULL; + PyObject *builtins; + PyObject *sorted; + PyObject *sorted_methods = NULL; + PyObject *joined = NULL; + const char *joined_str; + + /* Compute ", ".join(sorted(type.__abstractmethods__)) + into joined. */ + abstract_methods = type_abstractmethods(type, NULL); + if (abstract_methods == NULL) + goto error; + builtins = PyEval_GetBuiltins(); + if (builtins == NULL) + goto error; + sorted = PyDict_GetItemString(builtins, "sorted"); + if (sorted == NULL) + goto error; + sorted_methods = PyObject_CallFunctionObjArgs(sorted, + abstract_methods, + NULL); + if (sorted_methods == NULL) + goto error; + if (comma == NULL) { + comma = PyString_InternFromString(", "); + if (comma == NULL) + goto error; + } + joined = PyObject_CallMethod(comma, "join", + "O", sorted_methods); + if (joined == NULL) + goto error; + joined_str = PyString_AsString(joined); + if (joined_str == NULL) + goto error; + + PyErr_Format(PyExc_TypeError, + "Can't instantiate abstract class %s " + "with abstract methods %s", + type->tp_name, + joined_str); + error: + Py_XDECREF(joined); + Py_XDECREF(sorted_methods); + Py_XDECREF(abstract_methods); + return NULL; + } return type->tp_alloc(type, 0); } @@ -3210,6 +3296,21 @@ return _common_reduce(self, proto); } +static PyObject * +object_subclasshook(PyObject *cls, PyObject *args) +{ + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; +} + +PyDoc_STRVAR(object_subclasshook_doc, +"Abstract classes can override this to customize issubclass().\n" +"\n" +"This is invoked early on by abc.ABCMeta.__subclasscheck__().\n" +"It should return True, False or NotImplemented. 
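The __abstractmethods__ descriptor and the instantiation check above are the C-level half of the abc machinery: abc.ABCMeta stores the set of abstract names, which flips Py_TPFLAGS_IS_ABSTRACT, and object.__new__ then refuses to create instances while any of them remain unimplemented. A sketch of the Python-level effect:

    from abc import ABCMeta, abstractmethod

    class Shape(object):
        __metaclass__ = ABCMeta           # Python 2 spelling

        @abstractmethod
        def area(self):
            raise NotImplementedError

    try:
        Shape()
    except TypeError, exc:
        # Can't instantiate abstract class Shape with abstract methods area
        print exc

    class Square(Shape):
        def __init__(self, side):
            self.side = side
        def area(self):
            return self.side * self.side

    print Square(3).area()                # 9 -- concrete subclasses work normally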
If it returns\n" +"NotImplemented, the normal algorithm is used. Otherwise, it\n" +"overrides the normal algorithm (and the outcome is cached).\n"); + /* from PEP 3101, this code implements: @@ -3259,6 +3360,8 @@ PyDoc_STR("helper for pickle")}, {"__reduce__", object_reduce, METH_VARARGS, PyDoc_STR("helper for pickle")}, + {"__subclasshook__", object_subclasshook, METH_CLASS | METH_VARARGS, + object_subclasshook_doc}, {"__format__", object_format, METH_VARARGS, PyDoc_STR("default object formatter")}, {0} Modified: python/branches/libffi3-branch/Objects/unicodeobject.c ============================================================================== --- python/branches/libffi3-branch/Objects/unicodeobject.c (original) +++ python/branches/libffi3-branch/Objects/unicodeobject.c Tue Mar 4 15:50:53 2008 @@ -8334,6 +8334,7 @@ int prec = -1; Py_UNICODE c = '\0'; Py_UNICODE fill; + int isnumok; PyObject *v = NULL; PyObject *temp = NULL; Py_UNICODE *pbuf; @@ -8546,21 +8547,49 @@ case 'X': if (c == 'i') c = 'd'; - if (PyLong_Check(v)) { - temp = formatlong(v, flags, prec, c); - if (!temp) - goto onError; - pbuf = PyUnicode_AS_UNICODE(temp); - len = PyUnicode_GET_SIZE(temp); - sign = 1; + isnumok = 0; + if (PyNumber_Check(v)) { + PyObject *iobj=NULL; + + if (PyInt_Check(v) || (PyLong_Check(v))) { + iobj = v; + Py_INCREF(iobj); + } + else { + iobj = PyNumber_Int(v); + if (iobj==NULL) iobj = PyNumber_Long(v); + } + if (iobj!=NULL) { + if (PyInt_Check(iobj)) { + isnumok = 1; + pbuf = formatbuf; + len = formatint(pbuf, sizeof(formatbuf)/sizeof(Py_UNICODE), + flags, prec, c, iobj); + Py_DECREF(iobj); + if (len < 0) + goto onError; + sign = 1; + } + else if (PyLong_Check(iobj)) { + isnumok = 1; + temp = formatlong(iobj, flags, prec, c); + Py_DECREF(iobj); + if (!temp) + goto onError; + pbuf = PyUnicode_AS_UNICODE(temp); + len = PyUnicode_GET_SIZE(temp); + sign = 1; + } + else { + Py_DECREF(iobj); + } + } } - else { - pbuf = formatbuf; - len = formatint(pbuf, sizeof(formatbuf)/sizeof(Py_UNICODE), - flags, prec, c, v); - if (len < 0) + if (!isnumok) { + PyErr_Format(PyExc_TypeError, + "%%%c format: a number is required, " + "not %.200s", c, Py_TYPE(v)->tp_name); goto onError; - sign = 1; } if (flags & F_ZERO) fill = '0'; Modified: python/branches/libffi3-branch/PC/VS8.0/build_tkinter.py ============================================================================== --- python/branches/libffi3-branch/PC/VS8.0/build_tkinter.py (original) +++ python/branches/libffi3-branch/PC/VS8.0/build_tkinter.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,6 @@ import os import sys -import shutil here = os.path.abspath(os.path.dirname(__file__)) par = os.path.pardir Modified: python/branches/libffi3-branch/PC/config.c ============================================================================== --- python/branches/libffi3-branch/PC/config.c (original) +++ python/branches/libffi3-branch/PC/config.c Tue Mar 4 15:50:53 2008 @@ -12,6 +12,7 @@ extern void initbinascii(void); extern void initcmath(void); extern void initerrno(void); +extern void initfuture_builtins(void); extern void initgc(void); #ifndef MS_WINI64 extern void initimageop(void); @@ -84,6 +85,7 @@ {"binascii", initbinascii}, {"cmath", initcmath}, {"errno", initerrno}, + {"future_builtins", initfuture_builtins}, {"gc", initgc}, #ifndef MS_WINI64 {"imageop", initimageop}, Modified: python/branches/libffi3-branch/PC/python_nt.rc ============================================================================== --- python/branches/libffi3-branch/PC/python_nt.rc (original) +++ 
python/branches/libffi3-branch/PC/python_nt.rc Tue Mar 4 15:50:53 2008 @@ -61,7 +61,7 @@ VALUE "FileDescription", "Python Core\0" VALUE "FileVersion", PYTHON_VERSION VALUE "InternalName", "Python DLL\0" - VALUE "LegalCopyright", "Copyright ? 2001-2007 Python Software Foundation. Copyright ? 2000 BeOpen.com. Copyright ? 1995-2001 CNRI. Copyright ? 1991-1995 SMC.\0" + VALUE "LegalCopyright", "Copyright ? 2001-2008 Python Software Foundation. Copyright ? 2000 BeOpen.com. Copyright ? 1995-2001 CNRI. Copyright ? 1991-1995 SMC.\0" VALUE "OriginalFilename", PYTHON_DLL_NAME "\0" VALUE "ProductName", "Python\0" VALUE "ProductVersion", PYTHON_VERSION Modified: python/branches/libffi3-branch/PCbuild/_ssl.vcproj ============================================================================== --- python/branches/libffi3-branch/PCbuild/_ssl.vcproj (original) +++ python/branches/libffi3-branch/PCbuild/_ssl.vcproj Tue Mar 4 15:50:53 2008 @@ -27,7 +27,7 @@ > - - Modified: python/branches/libffi3-branch/PCbuild/build_ssl.py ============================================================================== --- python/branches/libffi3-branch/PCbuild/build_ssl.py (original) +++ python/branches/libffi3-branch/PCbuild/build_ssl.py Tue Mar 4 15:50:53 2008 @@ -102,8 +102,11 @@ """ if not os.path.isfile(m32): return - with open(m32) as fin: - with open(makefile, 'w') as fout: + # 2.4 compatibility + fin = open(m32) + if 1: # with open(m32) as fin: + fout = open(makefile, 'w') + if 1: # with open(makefile, 'w') as fout: for line in fin: line = line.replace("=tmp32", "=tmp64") line = line.replace("=out32", "=out64") @@ -121,9 +124,13 @@ """ if not os.path.isfile(makefile): return - with open(makefile) as fin: + # 2.4 compatibility + fin = open(makefile) + if 1: # with open(makefile) as fin: lines = fin.readlines() - with open(makefile, 'w') as fout: + fin.close() + fout = open(makefile, 'w') + if 1: # with open(makefile, 'w') as fout: for line in lines: if line.startswith("PERL="): continue @@ -139,6 +146,7 @@ line = line + noalgo line = line + '\n' fout.write(line) + fout.close() def run_configure(configure, do_script): print("perl Configure "+configure) Modified: python/branches/libffi3-branch/PCbuild/build_tkinter.py ============================================================================== --- python/branches/libffi3-branch/PCbuild/build_tkinter.py (original) +++ python/branches/libffi3-branch/PCbuild/build_tkinter.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,6 @@ import os import sys -import shutil here = os.path.abspath(os.path.dirname(__file__)) par = os.path.pardir Modified: python/branches/libffi3-branch/PCbuild/pcbuild.sln ============================================================================== --- python/branches/libffi3-branch/PCbuild/pcbuild.sln (original) +++ python/branches/libffi3-branch/PCbuild/pcbuild.sln Tue Mar 4 15:50:53 2008 @@ -108,6 +108,12 @@ EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "bdist_wininst", "bdist_wininst.vcproj", "{EB1C19C1-1F18-421E-9735-CAEE69DC6A3C}" EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "_hashlib", "_hashlib.vcproj", "{447F05A8-F581-4CAC-A466-5AC7936E207E}" + ProjectSection(ProjectDependencies) = postProject + {B11D750F-CD1F-4A96-85CE-E69A5C5259F9} = {B11D750F-CD1F-4A96-85CE-E69A5C5259F9} + {CF7AC3D1-E2DF-41D2-BEA6-1E2556CDEA26} = {CF7AC3D1-E2DF-41D2-BEA6-1E2556CDEA26} + EndProjectSection +EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Win32 = Debug|Win32 @@ -464,6 +470,22 @@ 
{EB1C19C1-1F18-421E-9735-CAEE69DC6A3C}.PGUpdate|x64.ActiveCfg = Release|Win32 {EB1C19C1-1F18-421E-9735-CAEE69DC6A3C}.Release|Win32.ActiveCfg = Release|Win32 {EB1C19C1-1F18-421E-9735-CAEE69DC6A3C}.Release|x64.ActiveCfg = Release|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Debug|Win32.ActiveCfg = Debug|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Debug|Win32.Build.0 = Debug|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Debug|x64.ActiveCfg = Debug|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Debug|x64.Build.0 = Debug|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGInstrument|Win32.ActiveCfg = PGInstrument|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGInstrument|Win32.Build.0 = PGInstrument|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGInstrument|x64.ActiveCfg = PGInstrument|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGInstrument|x64.Build.0 = PGInstrument|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGUpdate|Win32.ActiveCfg = PGUpdate|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGUpdate|Win32.Build.0 = PGUpdate|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGUpdate|x64.ActiveCfg = PGUpdate|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.PGUpdate|x64.Build.0 = PGUpdate|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Release|Win32.ActiveCfg = Release|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Release|Win32.Build.0 = Release|Win32 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Release|x64.ActiveCfg = Release|x64 + {447F05A8-F581-4CAC-A466-5AC7936E207E}.Release|x64.Build.0 = Release|x64 EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE Modified: python/branches/libffi3-branch/PCbuild/pythoncore.vcproj ============================================================================== --- python/branches/libffi3-branch/PCbuild/pythoncore.vcproj (original) +++ python/branches/libffi3-branch/PCbuild/pythoncore.vcproj Tue Mar 4 15:50:53 2008 @@ -1051,6 +1051,10 @@ > + + Modified: python/branches/libffi3-branch/PCbuild/readme.txt ============================================================================== --- python/branches/libffi3-branch/PCbuild/readme.txt (original) +++ python/branches/libffi3-branch/PCbuild/readme.txt Tue Mar 4 15:50:53 2008 @@ -303,7 +303,8 @@ ------------------ The build process for AMD64 / x64 is very similar to standard builds. You just -have to set x64 as platform. +have to set x64 as platform. In addition, the HOST_PYTHON environment variable +must point to a Python interpreter (at least 2.4), to support cross-compilation. 
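The build_ssl.py hunks above unroll the with-statements because that script also has to run under Python 2.4, which predates the with statement. The conventional 2.4-safe spelling is an explicit open paired with a close (typically in try/finally); a small sketch using an illustrative scratch file:

    import os, tempfile

    fd, path = tempfile.mkstemp()         # throwaway file for the example
    os.write(fd, "OPTS=out32\n")
    os.close(fd)

    # 2.5+ spelling removed by the diff:  with open(path) as fin: data = fin.read()
    # 2.4-compatible equivalent:
    fin = open(path)
    try:
        data = fin.read()
    finally:
        fin.close()

    print data.replace("out32", "out64").strip()
    os.remove(path)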
Building Python Using the free MS Toolkit Compiler -------------------------------------------------- Modified: python/branches/libffi3-branch/PCbuild/x64.vsprops ============================================================================== --- python/branches/libffi3-branch/PCbuild/x64.vsprops (original) +++ python/branches/libffi3-branch/PCbuild/x64.vsprops Tue Mar 4 15:50:53 2008 @@ -15,4 +15,8 @@ Name="VCLinkerTool" TargetMachine="17" /> + Modified: python/branches/libffi3-branch/Parser/Python.asdl ============================================================================== --- python/branches/libffi3-branch/Parser/Python.asdl (original) +++ python/branches/libffi3-branch/Parser/Python.asdl Tue Mar 4 15:50:53 2008 @@ -10,8 +10,8 @@ | Suite(stmt* body) stmt = FunctionDef(identifier name, arguments args, - stmt* body, expr* decorators) - | ClassDef(identifier name, expr* bases, stmt* body) + stmt* body, expr* decorator_list) + | ClassDef(identifier name, expr* bases, stmt* body, expr *decorator_list) | Return(expr? value) | Delete(expr* targets) Modified: python/branches/libffi3-branch/Parser/asdl_c.py ============================================================================== --- python/branches/libffi3-branch/Parser/asdl_c.py (original) +++ python/branches/libffi3-branch/Parser/asdl_c.py Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ # TO DO # handle fields that have a type but no name -import os, sys, traceback +import os, sys import asdl Modified: python/branches/libffi3-branch/Parser/parser.h ============================================================================== --- python/branches/libffi3-branch/Parser/parser.h (original) +++ python/branches/libffi3-branch/Parser/parser.h Tue Mar 4 15:50:53 2008 @@ -7,7 +7,7 @@ /* Parser interface */ -#define MAXSTACK 500 +#define MAXSTACK 1500 typedef struct { int s_state; /* State in current DFA */ Modified: python/branches/libffi3-branch/Parser/spark.py ============================================================================== --- python/branches/libffi3-branch/Parser/spark.py (original) +++ python/branches/libffi3-branch/Parser/spark.py Tue Mar 4 15:50:53 2008 @@ -22,7 +22,6 @@ __version__ = 'SPARK-0.7 (pre-alpha-5)' import re -import sys import string def _namelist(instance): Modified: python/branches/libffi3-branch/Python/Python-ast.c ============================================================================== --- python/branches/libffi3-branch/Python/Python-ast.c (original) +++ python/branches/libffi3-branch/Python/Python-ast.c Tue Mar 4 15:50:53 2008 @@ -2,7 +2,7 @@ /* - __version__ 53731. + __version__ 60978. 
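The asdl change above renames FunctionDef's decorators field to decorator_list and adds the same field to ClassDef, and the regenerated Python-ast.c that follows reflects that. The new layout is visible from Python through the _ast module and the PyCF_ONLY_AST compile flag (registered later in this file):

    import _ast

    func = compile("@staticmethod\ndef f():\n    pass\n",
                   "<example>", "exec", _ast.PyCF_ONLY_AST).body[0]
    print type(func).__name__                                  # FunctionDef
    print [type(d).__name__ for d in func.decorator_list]      # ['Name']

    cls = compile("@register\nclass C(object): pass\n",
                  "<example>", "exec", _ast.PyCF_ONLY_AST).body[0]
    print type(cls).__name__, len(cls.decorator_list)          # ClassDef 1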
This module must be committed separately after each AST grammar change; The __version__ number is set to the revision number of the commit @@ -42,13 +42,14 @@ "name", "args", "body", - "decorators", + "decorator_list", }; static PyTypeObject *ClassDef_type; static char *ClassDef_fields[]={ "name", "bases", "body", + "decorator_list", }; static PyTypeObject *Return_type; static char *Return_fields[]={ @@ -469,7 +470,7 @@ FunctionDef_type = make_type("FunctionDef", stmt_type, FunctionDef_fields, 4); if (!FunctionDef_type) return 0; - ClassDef_type = make_type("ClassDef", stmt_type, ClassDef_fields, 3); + ClassDef_type = make_type("ClassDef", stmt_type, ClassDef_fields, 4); if (!ClassDef_type) return 0; Return_type = make_type("Return", stmt_type, Return_fields, 1); if (!Return_type) return 0; @@ -790,7 +791,7 @@ stmt_ty FunctionDef(identifier name, arguments_ty args, asdl_seq * body, asdl_seq * - decorators, int lineno, int col_offset, PyArena *arena) + decorator_list, int lineno, int col_offset, PyArena *arena) { stmt_ty p; if (!name) { @@ -810,15 +811,15 @@ p->v.FunctionDef.name = name; p->v.FunctionDef.args = args; p->v.FunctionDef.body = body; - p->v.FunctionDef.decorators = decorators; + p->v.FunctionDef.decorator_list = decorator_list; p->lineno = lineno; p->col_offset = col_offset; return p; } stmt_ty -ClassDef(identifier name, asdl_seq * bases, asdl_seq * body, int lineno, int - col_offset, PyArena *arena) +ClassDef(identifier name, asdl_seq * bases, asdl_seq * body, asdl_seq * + decorator_list, int lineno, int col_offset, PyArena *arena) { stmt_ty p; if (!name) { @@ -833,6 +834,7 @@ p->v.ClassDef.name = name; p->v.ClassDef.bases = bases; p->v.ClassDef.body = body; + p->v.ClassDef.decorator_list = decorator_list; p->lineno = lineno; p->col_offset = col_offset; return p; @@ -1906,9 +1908,11 @@ if (PyObject_SetAttrString(result, "body", value) == -1) goto failed; Py_DECREF(value); - value = ast2obj_list(o->v.FunctionDef.decorators, ast2obj_expr); + value = ast2obj_list(o->v.FunctionDef.decorator_list, + ast2obj_expr); if (!value) goto failed; - if (PyObject_SetAttrString(result, "decorators", value) == -1) + if (PyObject_SetAttrString(result, "decorator_list", value) == + -1) goto failed; Py_DECREF(value); break; @@ -1930,6 +1934,13 @@ if (PyObject_SetAttrString(result, "body", value) == -1) goto failed; Py_DECREF(value); + value = ast2obj_list(o->v.ClassDef.decorator_list, + ast2obj_expr); + if (!value) goto failed; + if (PyObject_SetAttrString(result, "decorator_list", value) == + -1) + goto failed; + Py_DECREF(value); break; case Return_kind: result = PyType_GenericNew(Return_type, NULL, NULL); @@ -2947,7 +2958,7 @@ if (PyDict_SetItemString(d, "AST", (PyObject*)AST_type) < 0) return; if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0) return; - if (PyModule_AddStringConstant(m, "__version__", "53731") < 0) + if (PyModule_AddStringConstant(m, "__version__", "60978") < 0) return; if (PyDict_SetItemString(d, "mod", (PyObject*)mod_type) < 0) return; if (PyDict_SetItemString(d, "Module", (PyObject*)Module_type) < 0) Modified: python/branches/libffi3-branch/Python/ast.c ============================================================================== --- python/branches/libffi3-branch/Python/ast.c (original) +++ python/branches/libffi3-branch/Python/ast.c Tue Mar 4 15:50:53 2008 @@ -29,6 +29,7 @@ static asdl_seq *ast_for_exprlist(struct compiling *, const node *, expr_context_ty); static expr_ty ast_for_testlist(struct compiling *, const node *); +static stmt_ty 
ast_for_classdef(struct compiling *, const node *, asdl_seq *); static expr_ty ast_for_testlist_gexp(struct compiling *, const node *); /* Note different signature for ast_for_call */ @@ -828,27 +829,16 @@ } static stmt_ty -ast_for_funcdef(struct compiling *c, const node *n) +ast_for_funcdef(struct compiling *c, const node *n, asdl_seq *decorator_seq) { - /* funcdef: 'def' [decorators] NAME parameters ':' suite */ + /* funcdef: 'def' NAME parameters ':' suite */ identifier name; arguments_ty args; asdl_seq *body; - asdl_seq *decorator_seq = NULL; - int name_i; + int name_i = 1; REQ(n, funcdef); - if (NCH(n) == 6) { /* decorators are present */ - decorator_seq = ast_for_decorators(c, CHILD(n, 0)); - if (!decorator_seq) - return NULL; - name_i = 2; - } - else { - name_i = 1; - } - name = NEW_IDENTIFIER(CHILD(n, name_i)); if (!name) return NULL; @@ -867,6 +857,36 @@ n->n_col_offset, c->c_arena); } +static stmt_ty +ast_for_decorated(struct compiling *c, const node *n) +{ + /* decorated: decorators (classdef | funcdef) */ + stmt_ty thing = NULL; + asdl_seq *decorator_seq = NULL; + + REQ(n, decorated); + + decorator_seq = ast_for_decorators(c, CHILD(n, 0)); + if (!decorator_seq) + return NULL; + + assert(TYPE(CHILD(n, 1)) == funcdef || + TYPE(CHILD(n, 1)) == classdef); + + if (TYPE(CHILD(n, 1)) == funcdef) { + thing = ast_for_funcdef(c, CHILD(n, 1), decorator_seq); + } else if (TYPE(CHILD(n, 1)) == classdef) { + thing = ast_for_classdef(c, CHILD(n, 1), decorator_seq); + } + /* we count the decorators in when talking about the class' or + function's line number */ + if (thing) { + thing->lineno = LINENO(n); + thing->col_offset = n->n_col_offset; + } + return thing; +} + static expr_ty ast_for_lambdef(struct compiling *c, const node *n) { @@ -2968,7 +2988,7 @@ } static stmt_ty -ast_for_classdef(struct compiling *c, const node *n) +ast_for_classdef(struct compiling *c, const node *n, asdl_seq *decorator_seq) { /* classdef: 'class' NAME ['(' testlist ')'] ':' suite */ asdl_seq *bases, *s; @@ -2984,16 +3004,16 @@ s = ast_for_suite(c, CHILD(n, 3)); if (!s) return NULL; - return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, LINENO(n), - n->n_col_offset, c->c_arena); + return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, decorator_seq, + LINENO(n), n->n_col_offset, c->c_arena); } /* check for empty base list */ if (TYPE(CHILD(n,3)) == RPAR) { s = ast_for_suite(c, CHILD(n,5)); if (!s) return NULL; - return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, LINENO(n), - n->n_col_offset, c->c_arena); + return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, decorator_seq, + LINENO(n), n->n_col_offset, c->c_arena); } /* else handle the base class list */ @@ -3004,8 +3024,8 @@ s = ast_for_suite(c, CHILD(n, 6)); if (!s) return NULL; - return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), bases, s, LINENO(n), - n->n_col_offset, c->c_arena); + return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), bases, s, decorator_seq, + LINENO(n), n->n_col_offset, c->c_arena); } static stmt_ty @@ -3054,7 +3074,7 @@ } else { /* compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt - | funcdef | classdef + | funcdef | classdef | decorated */ node *ch = CHILD(n, 0); REQ(n, compound_stmt); @@ -3070,9 +3090,11 @@ case with_stmt: return ast_for_with_stmt(c, ch); case funcdef: - return ast_for_funcdef(c, ch); + return ast_for_funcdef(c, ch, NULL); case classdef: - return ast_for_classdef(c, ch); + return ast_for_classdef(c, ch, NULL); + case decorated: + return ast_for_decorated(c, ch); default: PyErr_Format(PyExc_SystemError, "unhandled small_stmt: 
TYPE=%d NCH=%d\n", Modified: python/branches/libffi3-branch/Python/bltinmodule.c ============================================================================== --- python/branches/libffi3-branch/Python/bltinmodule.c (original) +++ python/branches/libffi3-branch/Python/bltinmodule.c Tue Mar 4 15:50:53 2008 @@ -166,7 +166,7 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "apply() not supported in 3.x") < 0) + "apply() not supported in 3.x. Use func(*args, **kwargs).") < 0) return NULL; if (!PyArg_UnpackTuple(args, "apply", 1, 3, &func, &alist, &kwdict)) @@ -209,11 +209,23 @@ static PyObject * +builtin_bin(PyObject *self, PyObject *v) +{ + return PyNumber_ToBase(v, 2); +} + +PyDoc_STRVAR(bin_doc, +"bin(number) -> string\n\ +\n\ +Return the binary representation of an integer or long integer."); + + +static PyObject * builtin_callable(PyObject *self, PyObject *v) { if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "callable() not supported in 3.x") < 0) + "callable() not supported in 3.x. Use hasattr(o, '__call__').") < 0) return NULL; return PyBool_FromLong((long)PyCallable_Check(v)); } @@ -672,7 +684,7 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "execfile() not supported in 3.x") < 0) + "execfile() not supported in 3.x. Use exec().") < 0) return NULL; if (!PyArg_ParseTuple(args, "s|O!O:execfile", @@ -897,9 +909,15 @@ func = PyTuple_GetItem(args, 0); n--; - if (func == Py_None && n == 1) { - /* map(None, S) is the same as list(S). */ - return PySequence_List(PyTuple_GetItem(args, 1)); + if (func == Py_None) { + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "map(None, ...) not supported in 3.x. Use list(...).") < 0) + return NULL; + if (n == 1) { + /* map(None, S) is the same as list(S). */ + return PySequence_List(PyTuple_GetItem(args, 1)); + } } /* Get space for sequence descriptors. 
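The bltinmodule.c hunks above add a bin() builtin on top of PyNumber_ToBase and give the Py3k DeprecationWarnings concrete replacement hints. A brief sketch of the new builtin and of the spellings the updated warnings recommend:

    # New builtin: binary representation of an integer or long.
    print bin(10)                      # 0b1010
    print bin(-3)                      # -0b11

    # Replacements suggested by the -3 warnings above:
    def add(a, b):
        return a + b
    args = (1, 2)
    print add(*args)                   # instead of apply(add, args)
    print hasattr(add, "__call__")     # instead of callable(add)
    print list("abc")                  # instead of map(None, "abc")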
Must NULL out the iterator @@ -2366,6 +2384,7 @@ {"all", builtin_all, METH_O, all_doc}, {"any", builtin_any, METH_O, any_doc}, {"apply", builtin_apply, METH_VARARGS, apply_doc}, + {"bin", builtin_bin, METH_O, bin_doc}, {"callable", builtin_callable, METH_O, callable_doc}, {"chr", builtin_chr, METH_VARARGS, chr_doc}, {"cmp", builtin_cmp, METH_VARARGS, cmp_doc}, Modified: python/branches/libffi3-branch/Python/ceval.c ============================================================================== --- python/branches/libffi3-branch/Python/ceval.c (original) +++ python/branches/libffi3-branch/Python/ceval.c Tue Mar 4 15:50:53 2008 @@ -1694,6 +1694,7 @@ } continue; + PREDICTED(END_FINALLY); case END_FINALLY: v = POP(); if (PyInt_Check(v)) { @@ -2302,6 +2303,7 @@ x = POP(); Py_DECREF(x); } + PREDICT(END_FINALLY); break; } Modified: python/branches/libffi3-branch/Python/compile.c ============================================================================== --- python/branches/libffi3-branch/Python/compile.c (original) +++ python/branches/libffi3-branch/Python/compile.c Tue Mar 4 15:50:53 2008 @@ -1362,7 +1362,7 @@ PyCodeObject *co; PyObject *first_const = Py_None; arguments_ty args = s->v.FunctionDef.args; - asdl_seq* decos = s->v.FunctionDef.decorators; + asdl_seq* decos = s->v.FunctionDef.decorator_list; stmt_ty st; int i, n, docstring; @@ -1413,9 +1413,14 @@ static int compiler_class(struct compiler *c, stmt_ty s) { - int n; + int n, i; PyCodeObject *co; PyObject *str; + asdl_seq* decos = s->v.ClassDef.decorator_list; + + if (!compiler_decorators(c, decos)) + return 0; + /* push class name on stack, needed by BUILD_CLASS */ ADDOP_O(c, LOAD_CONST, s->v.ClassDef.name, consts); /* push the tuple of base classes on the stack */ @@ -1461,6 +1466,10 @@ ADDOP_I(c, CALL_FUNCTION, 0); ADDOP(c, BUILD_CLASS); + /* apply decorators */ + for (i = 0; i < asdl_seq_LEN(decos); i++) { + ADDOP_I(c, CALL_FUNCTION, 1); + } if (!compiler_nameop(c, s->v.ClassDef.name, Store)) return 0; return 1; @@ -2314,7 +2323,7 @@ return compiler_error(c, "can not assign to __debug__"); } -mangled = _Py_Mangle(c->u->u_private, name); + mangled = _Py_Mangle(c->u->u_private, name); if (!mangled) return 0; Modified: python/branches/libffi3-branch/Python/getargs.c ============================================================================== --- python/branches/libffi3-branch/Python/getargs.c (original) +++ python/branches/libffi3-branch/Python/getargs.c Tue Mar 4 15:50:53 2008 @@ -154,7 +154,7 @@ PyMem_FREE(ptr); return -1; } - if(PyList_Append(*freelist, cobj)) { + if (PyList_Append(*freelist, cobj)) { PyMem_FREE(ptr); Py_DECREF(cobj); return -1; @@ -166,8 +166,8 @@ static int cleanreturn(int retval, PyObject *freelist) { - if(freelist) { - if((retval) == 0) { + if (freelist) { + if (retval == 0) { Py_ssize_t len = PyList_GET_SIZE(freelist), i; for (i = 0; i < len; i++) PyMem_FREE(PyCObject_AsVoidPtr( @@ -708,7 +708,7 @@ case 'L': {/* PY_LONG_LONG */ PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if( ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { + if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { return converterr("long", arg, msgbuf, bufsize); } else { *p = ival; @@ -998,7 +998,7 @@ "(memory error)", arg, msgbuf, bufsize); } - if(addcleanup(*buffer, freelist)) { + if (addcleanup(*buffer, freelist)) { Py_DECREF(s); return converterr( "(cleanup problem)", @@ -1043,7 +1043,7 @@ return converterr("(memory error)", arg, msgbuf, bufsize); } - if(addcleanup(*buffer, freelist)) { + 
if (addcleanup(*buffer, freelist)) { Py_DECREF(s); return converterr("(cleanup problem)", arg, msgbuf, bufsize); @@ -1345,6 +1345,7 @@ return retval; } +#define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, @@ -1352,13 +1353,10 @@ { char msgbuf[512]; int levels[32]; - const char *fname, *message; - int min, max; - const char *formatsave; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; int i, len, nargs, nkeywords; - const char *msg; - char **p; - PyObject *freelist = NULL; + PyObject *freelist = NULL, *current_arg; assert(args != NULL && PyTuple_Check(args)); assert(keywords == NULL || PyDict_Check(keywords)); @@ -1366,168 +1364,108 @@ assert(kwlist != NULL); assert(p_va != NULL); - /* Search the format: - message <- error msg, if any (else NULL). - fname <- routine name, if any (else NULL). - min <- # of required arguments, or -1 if all are required. - max <- most arguments (required + optional). - Check that kwlist has a non-NULL entry for each arg. - Raise error if a tuple arg spec is found. - */ - fname = message = NULL; - formatsave = format; - p = kwlist; - min = -1; - max = 0; - while ((i = *format++) != '\0') { - if (isalpha(Py_CHARMASK(i)) && i != 'e') { - max++; - if (*p == NULL) { - PyErr_SetString(PyExc_RuntimeError, - "more argument specifiers than " - "keyword list entries"); - return 0; - } - p++; - } - else if (i == '|') - min = max; - else if (i == ':') { - fname = format; - break; - } - else if (i == ';') { - message = format; - break; - } - else if (i == '(') { - PyErr_SetString(PyExc_RuntimeError, - "tuple found in format when using keyword " - "arguments"); - return 0; - } - } - format = formatsave; - if (*p != NULL) { - PyErr_SetString(PyExc_RuntimeError, - "more keyword list entries than " - "argument specifiers"); - return 0; - } - if (min < 0) { - /* All arguments are required. */ - min = max; + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; } - - nargs = PyTuple_GET_SIZE(args); - nkeywords = keywords == NULL ? 0 : PyDict_Size(keywords); - - /* make sure there are no duplicate values for an argument; - its not clear when to use the term "keyword argument vs. 
- keyword parameter in messages */ - if (nkeywords > 0) { - for (i = 0; i < nargs; i++) { - const char *thiskw = kwlist[i]; - if (thiskw == NULL) - break; - if (PyDict_GetItemString(keywords, thiskw)) { - PyErr_Format(PyExc_TypeError, - "keyword parameter '%s' was given " - "by position and by name", - thiskw); - return 0; - } - else if (PyErr_Occurred()) - return 0; - } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; } - /* required arguments missing from args can be supplied by keyword - arguments; set len to the number of positional arguments, and, - if that's less than the minimum required, add in the number of - required arguments that are supplied by keywords */ - len = nargs; - if (nkeywords > 0 && nargs < min) { - for (i = nargs; i < min; i++) { - if (PyDict_GetItemString(keywords, kwlist[i])) - len++; - else if (PyErr_Occurred()) - return 0; - } - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* make sure we got an acceptable number of arguments; the message - is a little confusing with keywords since keyword arguments - which are supplied, but don't match the required arguments - are not included in the "%d given" part of the message - XXX and this isn't a bug!? */ - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes %s %d argument%s " - "(%d given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? "exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - len); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); return 0; } - /* convert the positional arguments */ - for (i = 0; i < nargs; i++) { - if (*format == '|') + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, sizeof(msgbuf), - &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); return cleanreturn(0, freelist); } - } - - /* handle no keyword parameters in call */ - if (nkeywords == 0) - return cleanreturn(1, freelist); - - /* convert the keyword arguments; this uses the format - string where it was left after processing args */ - for (i = nargs; i < max; i++) { - PyObject *item; - if (*format == '|') - format++; - item = PyDict_GetItemString(keywords, kwlist[i]); - if (item != NULL) { - Py_INCREF(item); - msg = convertitem(item, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - Py_DECREF(item); - if (msg) { - seterror(i+1, msg, levels, fname, message); + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); return cleanreturn(0, freelist); } - --nkeywords; - if (nkeywords == 0) - break; } - else if (PyErr_Occurred()) + else if (nkeywords && PyErr_Occurred()) return cleanreturn(0, freelist); - else { - msg = skipitem(&format, p_va, flags); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); + + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); if (msg) { - levels[0] = 0; - seterror(i+1, msg, levels, fname, message); + seterror(i+1, msg, levels, fname, custom_msg); return cleanreturn(0, freelist); } + continue; + } + + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); + + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); } } + if (!IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } + /* make sure there are no extraneous keyword arguments */ if (nkeywords > 0) { PyObject *key, *value; @@ -1541,7 +1479,7 @@ return cleanreturn(0, freelist); } ks = PyString_AsString(key); - for (i = 0; i < max; i++) { + for (i = 0; i < len; i++) { if (!strcmp(ks, kwlist[i])) { match = 1; break; @@ -1564,7 +1502,7 @@ static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; + const char *format = *p_format; char c = *format++; switch (c) { @@ -1671,16 +1609,33 @@ } break; } - + + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } + + case ')': + return "Unmatched right paren in format string"; + default: err: return "impossible"; } - /* The "(...)" format code for tuples is not handled here because - * it is not allowed with keyword args. */ - *p_format = format; return NULL; } Modified: python/branches/libffi3-branch/Python/getcopyright.c ============================================================================== --- python/branches/libffi3-branch/Python/getcopyright.c (original) +++ python/branches/libffi3-branch/Python/getcopyright.c Tue Mar 4 15:50:53 2008 @@ -4,7 +4,7 @@ static char cprt[] = "\ -Copyright (c) 2001-2007 Python Software Foundation.\n\ +Copyright (c) 2001-2008 Python Software Foundation.\n\ All Rights Reserved.\n\ \n\ Copyright (c) 2000 BeOpen.com.\n\ Modified: python/branches/libffi3-branch/Python/graminit.c ============================================================================== --- python/branches/libffi3-branch/Python/graminit.c (original) +++ python/branches/libffi3-branch/Python/graminit.c Tue Mar 4 15:50:53 2008 @@ -86,176 +86,184 @@ {1, arcs_4_0}, {2, arcs_4_1}, }; -static arc arcs_5_0[2] = { +static arc arcs_5_0[1] = { {16, 1}, - {18, 2}, }; -static arc arcs_5_1[1] = { +static arc arcs_5_1[2] = { {18, 2}, + {19, 2}, }; static arc arcs_5_2[1] = { - {19, 3}, -}; -static arc arcs_5_3[1] = { - {20, 4}, -}; -static arc arcs_5_4[1] = { - {21, 5}, -}; -static arc arcs_5_5[1] = { - {22, 6}, -}; -static arc arcs_5_6[1] = { - {0, 6}, + {0, 2}, }; -static state states_5[7] = { - {2, arcs_5_0}, - {1, arcs_5_1}, +static state states_5[3] = { + {1, arcs_5_0}, + {2, arcs_5_1}, {1, arcs_5_2}, - {1, arcs_5_3}, - {1, arcs_5_4}, - {1, arcs_5_5}, - {1, arcs_5_6}, }; static arc arcs_6_0[1] = { - {13, 1}, + {20, 1}, }; -static arc arcs_6_1[2] = { - {23, 2}, - {15, 3}, +static arc arcs_6_1[1] = { + {21, 2}, }; static arc arcs_6_2[1] = { - {15, 3}, + {22, 3}, }; static arc arcs_6_3[1] = { - {0, 3}, + {23, 4}, }; -static state states_6[4] = { +static arc arcs_6_4[1] = { + {24, 5}, +}; +static arc arcs_6_5[1] = { + {0, 5}, +}; +static state states_6[6] = { {1, arcs_6_0}, - {2, arcs_6_1}, + {1, arcs_6_1}, {1, arcs_6_2}, {1, arcs_6_3}, + {1, arcs_6_4}, + {1, arcs_6_5}, }; -static arc arcs_7_0[3] = 
{ - {24, 1}, - {28, 2}, - {29, 3}, +static arc arcs_7_0[1] = { + {13, 1}, }; -static arc arcs_7_1[3] = { - {25, 4}, - {27, 5}, - {0, 1}, +static arc arcs_7_1[2] = { + {25, 2}, + {15, 3}, }; static arc arcs_7_2[1] = { - {19, 6}, + {15, 3}, }; static arc arcs_7_3[1] = { - {19, 7}, + {0, 3}, +}; +static state states_7[4] = { + {1, arcs_7_0}, + {2, arcs_7_1}, + {1, arcs_7_2}, + {1, arcs_7_3}, }; -static arc arcs_7_4[1] = { - {26, 8}, +static arc arcs_8_0[3] = { + {26, 1}, + {30, 2}, + {31, 3}, }; -static arc arcs_7_5[4] = { - {24, 1}, - {28, 2}, - {29, 3}, +static arc arcs_8_1[3] = { + {27, 4}, + {29, 5}, + {0, 1}, +}; +static arc arcs_8_2[1] = { + {21, 6}, +}; +static arc arcs_8_3[1] = { + {21, 7}, +}; +static arc arcs_8_4[1] = { + {28, 8}, +}; +static arc arcs_8_5[4] = { + {26, 1}, + {30, 2}, + {31, 3}, {0, 5}, }; -static arc arcs_7_6[2] = { - {27, 9}, +static arc arcs_8_6[2] = { + {29, 9}, {0, 6}, }; -static arc arcs_7_7[1] = { +static arc arcs_8_7[1] = { {0, 7}, }; -static arc arcs_7_8[2] = { - {27, 5}, +static arc arcs_8_8[2] = { + {29, 5}, {0, 8}, }; -static arc arcs_7_9[1] = { - {29, 3}, +static arc arcs_8_9[1] = { + {31, 3}, }; -static state states_7[10] = { - {3, arcs_7_0}, - {3, arcs_7_1}, - {1, arcs_7_2}, - {1, arcs_7_3}, - {1, arcs_7_4}, - {4, arcs_7_5}, - {2, arcs_7_6}, - {1, arcs_7_7}, - {2, arcs_7_8}, - {1, arcs_7_9}, +static state states_8[10] = { + {3, arcs_8_0}, + {3, arcs_8_1}, + {1, arcs_8_2}, + {1, arcs_8_3}, + {1, arcs_8_4}, + {4, arcs_8_5}, + {2, arcs_8_6}, + {1, arcs_8_7}, + {2, arcs_8_8}, + {1, arcs_8_9}, }; -static arc arcs_8_0[2] = { - {19, 1}, +static arc arcs_9_0[2] = { + {21, 1}, {13, 2}, }; -static arc arcs_8_1[1] = { +static arc arcs_9_1[1] = { {0, 1}, }; -static arc arcs_8_2[1] = { - {30, 3}, +static arc arcs_9_2[1] = { + {32, 3}, }; -static arc arcs_8_3[1] = { +static arc arcs_9_3[1] = { {15, 1}, }; -static state states_8[4] = { - {2, arcs_8_0}, - {1, arcs_8_1}, - {1, arcs_8_2}, - {1, arcs_8_3}, +static state states_9[4] = { + {2, arcs_9_0}, + {1, arcs_9_1}, + {1, arcs_9_2}, + {1, arcs_9_3}, }; -static arc arcs_9_0[1] = { - {24, 1}, +static arc arcs_10_0[1] = { + {26, 1}, }; -static arc arcs_9_1[2] = { - {27, 2}, +static arc arcs_10_1[2] = { + {29, 2}, {0, 1}, }; -static arc arcs_9_2[2] = { - {24, 1}, +static arc arcs_10_2[2] = { + {26, 1}, {0, 2}, }; -static state states_9[3] = { - {1, arcs_9_0}, - {2, arcs_9_1}, - {2, arcs_9_2}, +static state states_10[3] = { + {1, arcs_10_0}, + {2, arcs_10_1}, + {2, arcs_10_2}, }; -static arc arcs_10_0[2] = { +static arc arcs_11_0[2] = { {3, 1}, {4, 1}, }; -static arc arcs_10_1[1] = { +static arc arcs_11_1[1] = { {0, 1}, }; -static state states_10[2] = { - {2, arcs_10_0}, - {1, arcs_10_1}, +static state states_11[2] = { + {2, arcs_11_0}, + {1, arcs_11_1}, }; -static arc arcs_11_0[1] = { - {31, 1}, +static arc arcs_12_0[1] = { + {33, 1}, }; -static arc arcs_11_1[2] = { - {32, 2}, +static arc arcs_12_1[2] = { + {34, 2}, {2, 3}, }; -static arc arcs_11_2[2] = { - {31, 1}, +static arc arcs_12_2[2] = { + {33, 1}, {2, 3}, }; -static arc arcs_11_3[1] = { +static arc arcs_12_3[1] = { {0, 3}, }; -static state states_11[4] = { - {1, arcs_11_0}, - {2, arcs_11_1}, - {2, arcs_11_2}, - {1, arcs_11_3}, +static state states_12[4] = { + {1, arcs_12_0}, + {2, arcs_12_1}, + {2, arcs_12_2}, + {1, arcs_12_3}, }; -static arc arcs_12_0[9] = { - {33, 1}, - {34, 1}, +static arc arcs_13_0[9] = { {35, 1}, {36, 1}, {37, 1}, @@ -263,48 +271,48 @@ {39, 1}, {40, 1}, {41, 1}, + {42, 1}, + {43, 1}, }; -static arc arcs_12_1[1] = { +static arc arcs_13_1[1] = 
{ {0, 1}, }; -static state states_12[2] = { - {9, arcs_12_0}, - {1, arcs_12_1}, +static state states_13[2] = { + {9, arcs_13_0}, + {1, arcs_13_1}, }; -static arc arcs_13_0[1] = { +static arc arcs_14_0[1] = { {9, 1}, }; -static arc arcs_13_1[3] = { - {42, 2}, - {25, 3}, +static arc arcs_14_1[3] = { + {44, 2}, + {27, 3}, {0, 1}, }; -static arc arcs_13_2[2] = { - {43, 4}, +static arc arcs_14_2[2] = { + {45, 4}, {9, 4}, }; -static arc arcs_13_3[2] = { - {43, 5}, +static arc arcs_14_3[2] = { + {45, 5}, {9, 5}, }; -static arc arcs_13_4[1] = { +static arc arcs_14_4[1] = { {0, 4}, }; -static arc arcs_13_5[2] = { - {25, 3}, +static arc arcs_14_5[2] = { + {27, 3}, {0, 5}, }; -static state states_13[6] = { - {1, arcs_13_0}, - {3, arcs_13_1}, - {2, arcs_13_2}, - {2, arcs_13_3}, - {1, arcs_13_4}, - {2, arcs_13_5}, +static state states_14[6] = { + {1, arcs_14_0}, + {3, arcs_14_1}, + {2, arcs_14_2}, + {2, arcs_14_3}, + {1, arcs_14_4}, + {2, arcs_14_5}, }; -static arc arcs_14_0[12] = { - {44, 1}, - {45, 1}, +static arc arcs_15_0[12] = { {46, 1}, {47, 1}, {48, 1}, @@ -315,109 +323,101 @@ {53, 1}, {54, 1}, {55, 1}, + {56, 1}, + {57, 1}, }; -static arc arcs_14_1[1] = { +static arc arcs_15_1[1] = { {0, 1}, }; -static state states_14[2] = { - {12, arcs_14_0}, - {1, arcs_14_1}, +static state states_15[2] = { + {12, arcs_15_0}, + {1, arcs_15_1}, }; -static arc arcs_15_0[1] = { - {56, 1}, +static arc arcs_16_0[1] = { + {58, 1}, }; -static arc arcs_15_1[3] = { - {26, 2}, - {57, 3}, +static arc arcs_16_1[3] = { + {28, 2}, + {59, 3}, {0, 1}, }; -static arc arcs_15_2[2] = { - {27, 4}, +static arc arcs_16_2[2] = { + {29, 4}, {0, 2}, }; -static arc arcs_15_3[1] = { - {26, 5}, +static arc arcs_16_3[1] = { + {28, 5}, }; -static arc arcs_15_4[2] = { - {26, 2}, +static arc arcs_16_4[2] = { + {28, 2}, {0, 4}, }; -static arc arcs_15_5[2] = { - {27, 6}, +static arc arcs_16_5[2] = { + {29, 6}, {0, 5}, }; -static arc arcs_15_6[1] = { - {26, 7}, +static arc arcs_16_6[1] = { + {28, 7}, }; -static arc arcs_15_7[2] = { - {27, 8}, +static arc arcs_16_7[2] = { + {29, 8}, {0, 7}, }; -static arc arcs_15_8[2] = { - {26, 7}, +static arc arcs_16_8[2] = { + {28, 7}, {0, 8}, }; -static state states_15[9] = { - {1, arcs_15_0}, - {3, arcs_15_1}, - {2, arcs_15_2}, - {1, arcs_15_3}, - {2, arcs_15_4}, - {2, arcs_15_5}, - {1, arcs_15_6}, - {2, arcs_15_7}, - {2, arcs_15_8}, -}; -static arc arcs_16_0[1] = { - {58, 1}, -}; -static arc arcs_16_1[1] = { - {59, 2}, -}; -static arc arcs_16_2[1] = { - {0, 2}, -}; -static state states_16[3] = { +static state states_16[9] = { {1, arcs_16_0}, - {1, arcs_16_1}, - {1, arcs_16_2}, + {3, arcs_16_1}, + {2, arcs_16_2}, + {1, arcs_16_3}, + {2, arcs_16_4}, + {2, arcs_16_5}, + {1, arcs_16_6}, + {2, arcs_16_7}, + {2, arcs_16_8}, }; static arc arcs_17_0[1] = { {60, 1}, }; static arc arcs_17_1[1] = { - {0, 1}, + {61, 2}, }; -static state states_17[2] = { +static arc arcs_17_2[1] = { + {0, 2}, +}; +static state states_17[3] = { {1, arcs_17_0}, {1, arcs_17_1}, + {1, arcs_17_2}, }; -static arc arcs_18_0[5] = { - {61, 1}, +static arc arcs_18_0[1] = { {62, 1}, - {63, 1}, - {64, 1}, - {65, 1}, }; static arc arcs_18_1[1] = { {0, 1}, }; static state states_18[2] = { - {5, arcs_18_0}, + {1, arcs_18_0}, {1, arcs_18_1}, }; -static arc arcs_19_0[1] = { +static arc arcs_19_0[5] = { + {63, 1}, + {64, 1}, + {65, 1}, {66, 1}, + {67, 1}, }; static arc arcs_19_1[1] = { {0, 1}, }; static state states_19[2] = { - {1, arcs_19_0}, + {5, arcs_19_0}, {1, arcs_19_1}, }; static arc arcs_20_0[1] = { - {67, 1}, + {68, 1}, }; static arc 
arcs_20_1[1] = { {0, 1}, @@ -427,155 +427,146 @@ {1, arcs_20_1}, }; static arc arcs_21_0[1] = { - {68, 1}, + {69, 1}, }; -static arc arcs_21_1[2] = { - {9, 2}, +static arc arcs_21_1[1] = { {0, 1}, }; -static arc arcs_21_2[1] = { - {0, 2}, -}; -static state states_21[3] = { +static state states_21[2] = { {1, arcs_21_0}, - {2, arcs_21_1}, - {1, arcs_21_2}, + {1, arcs_21_1}, }; static arc arcs_22_0[1] = { - {43, 1}, + {70, 1}, }; -static arc arcs_22_1[1] = { +static arc arcs_22_1[2] = { + {9, 2}, {0, 1}, }; -static state states_22[2] = { +static arc arcs_22_2[1] = { + {0, 2}, +}; +static state states_22[3] = { {1, arcs_22_0}, - {1, arcs_22_1}, + {2, arcs_22_1}, + {1, arcs_22_2}, }; static arc arcs_23_0[1] = { - {69, 1}, + {45, 1}, }; -static arc arcs_23_1[2] = { - {26, 2}, +static arc arcs_23_1[1] = { {0, 1}, }; -static arc arcs_23_2[2] = { - {27, 3}, - {0, 2}, +static state states_23[2] = { + {1, arcs_23_0}, + {1, arcs_23_1}, }; -static arc arcs_23_3[1] = { - {26, 4}, +static arc arcs_24_0[1] = { + {71, 1}, }; -static arc arcs_23_4[2] = { - {27, 5}, - {0, 4}, +static arc arcs_24_1[2] = { + {28, 2}, + {0, 1}, }; -static arc arcs_23_5[1] = { - {26, 6}, +static arc arcs_24_2[2] = { + {29, 3}, + {0, 2}, }; -static arc arcs_23_6[1] = { - {0, 6}, +static arc arcs_24_3[1] = { + {28, 4}, }; -static state states_23[7] = { - {1, arcs_23_0}, - {2, arcs_23_1}, - {2, arcs_23_2}, - {1, arcs_23_3}, - {2, arcs_23_4}, - {1, arcs_23_5}, - {1, arcs_23_6}, +static arc arcs_24_4[2] = { + {29, 5}, + {0, 4}, }; -static arc arcs_24_0[2] = { - {70, 1}, - {71, 1}, +static arc arcs_24_5[1] = { + {28, 6}, }; -static arc arcs_24_1[1] = { - {0, 1}, +static arc arcs_24_6[1] = { + {0, 6}, }; -static state states_24[2] = { - {2, arcs_24_0}, - {1, arcs_24_1}, +static state states_24[7] = { + {1, arcs_24_0}, + {2, arcs_24_1}, + {2, arcs_24_2}, + {1, arcs_24_3}, + {2, arcs_24_4}, + {1, arcs_24_5}, + {1, arcs_24_6}, }; -static arc arcs_25_0[1] = { +static arc arcs_25_0[2] = { {72, 1}, + {73, 1}, }; static arc arcs_25_1[1] = { - {73, 2}, -}; -static arc arcs_25_2[1] = { - {0, 2}, + {0, 1}, }; -static state states_25[3] = { - {1, arcs_25_0}, +static state states_25[2] = { + {2, arcs_25_0}, {1, arcs_25_1}, - {1, arcs_25_2}, }; static arc arcs_26_0[1] = { {74, 1}, }; -static arc arcs_26_1[2] = { +static arc arcs_26_1[1] = { {75, 2}, +}; +static arc arcs_26_2[1] = { + {0, 2}, +}; +static state states_26[3] = { + {1, arcs_26_0}, + {1, arcs_26_1}, + {1, arcs_26_2}, +}; +static arc arcs_27_0[1] = { + {76, 1}, +}; +static arc arcs_27_1[2] = { + {77, 2}, {12, 3}, }; -static arc arcs_26_2[3] = { - {75, 2}, +static arc arcs_27_2[3] = { + {77, 2}, {12, 3}, - {72, 4}, + {74, 4}, }; -static arc arcs_26_3[1] = { - {72, 4}, +static arc arcs_27_3[1] = { + {74, 4}, }; -static arc arcs_26_4[3] = { - {28, 5}, +static arc arcs_27_4[3] = { + {30, 5}, {13, 6}, - {76, 5}, + {78, 5}, }; -static arc arcs_26_5[1] = { +static arc arcs_27_5[1] = { {0, 5}, }; -static arc arcs_26_6[1] = { - {76, 7}, +static arc arcs_27_6[1] = { + {78, 7}, }; -static arc arcs_26_7[1] = { +static arc arcs_27_7[1] = { {15, 5}, }; -static state states_26[8] = { - {1, arcs_26_0}, - {2, arcs_26_1}, - {3, arcs_26_2}, - {1, arcs_26_3}, - {3, arcs_26_4}, - {1, arcs_26_5}, - {1, arcs_26_6}, - {1, arcs_26_7}, -}; -static arc arcs_27_0[1] = { - {19, 1}, -}; -static arc arcs_27_1[2] = { - {78, 2}, - {0, 1}, -}; -static arc arcs_27_2[1] = { - {19, 3}, -}; -static arc arcs_27_3[1] = { - {0, 3}, -}; -static state states_27[4] = { +static state states_27[8] = { {1, arcs_27_0}, {2, arcs_27_1}, 
- {1, arcs_27_2}, + {3, arcs_27_2}, {1, arcs_27_3}, + {3, arcs_27_4}, + {1, arcs_27_5}, + {1, arcs_27_6}, + {1, arcs_27_7}, }; static arc arcs_28_0[1] = { - {12, 1}, + {21, 1}, }; static arc arcs_28_1[2] = { - {78, 2}, + {80, 2}, {0, 1}, }; static arc arcs_28_2[1] = { - {19, 3}, + {21, 3}, }; static arc arcs_28_3[1] = { {0, 3}, @@ -587,37 +578,45 @@ {1, arcs_28_3}, }; static arc arcs_29_0[1] = { - {77, 1}, + {12, 1}, }; static arc arcs_29_1[2] = { - {27, 2}, + {80, 2}, {0, 1}, }; -static arc arcs_29_2[2] = { - {77, 1}, - {0, 2}, +static arc arcs_29_2[1] = { + {21, 3}, +}; +static arc arcs_29_3[1] = { + {0, 3}, }; -static state states_29[3] = { +static state states_29[4] = { {1, arcs_29_0}, {2, arcs_29_1}, - {2, arcs_29_2}, + {1, arcs_29_2}, + {1, arcs_29_3}, }; static arc arcs_30_0[1] = { {79, 1}, }; static arc arcs_30_1[2] = { - {27, 0}, + {29, 2}, {0, 1}, }; -static state states_30[2] = { +static arc arcs_30_2[2] = { + {79, 1}, + {0, 2}, +}; +static state states_30[3] = { {1, arcs_30_0}, {2, arcs_30_1}, + {2, arcs_30_2}, }; static arc arcs_31_0[1] = { - {19, 1}, + {81, 1}, }; static arc arcs_31_1[2] = { - {75, 0}, + {29, 0}, {0, 1}, }; static state states_31[2] = { @@ -625,148 +624,125 @@ {2, arcs_31_1}, }; static arc arcs_32_0[1] = { - {80, 1}, -}; -static arc arcs_32_1[1] = { - {19, 2}, + {21, 1}, }; -static arc arcs_32_2[2] = { - {27, 1}, - {0, 2}, +static arc arcs_32_1[2] = { + {77, 0}, + {0, 1}, }; -static state states_32[3] = { +static state states_32[2] = { {1, arcs_32_0}, - {1, arcs_32_1}, - {2, arcs_32_2}, + {2, arcs_32_1}, }; static arc arcs_33_0[1] = { - {81, 1}, + {82, 1}, }; static arc arcs_33_1[1] = { - {82, 2}, + {21, 2}, }; static arc arcs_33_2[2] = { - {83, 3}, + {29, 1}, {0, 2}, }; -static arc arcs_33_3[1] = { - {26, 4}, -}; -static arc arcs_33_4[2] = { - {27, 5}, - {0, 4}, -}; -static arc arcs_33_5[1] = { - {26, 6}, -}; -static arc arcs_33_6[1] = { - {0, 6}, -}; -static state states_33[7] = { +static state states_33[3] = { {1, arcs_33_0}, {1, arcs_33_1}, {2, arcs_33_2}, - {1, arcs_33_3}, - {2, arcs_33_4}, - {1, arcs_33_5}, - {1, arcs_33_6}, }; static arc arcs_34_0[1] = { - {84, 1}, + {83, 1}, }; static arc arcs_34_1[1] = { - {26, 2}, + {84, 2}, }; static arc arcs_34_2[2] = { - {27, 3}, + {85, 3}, {0, 2}, }; static arc arcs_34_3[1] = { - {26, 4}, + {28, 4}, }; -static arc arcs_34_4[1] = { +static arc arcs_34_4[2] = { + {29, 5}, {0, 4}, }; -static state states_34[5] = { +static arc arcs_34_5[1] = { + {28, 6}, +}; +static arc arcs_34_6[1] = { + {0, 6}, +}; +static state states_34[7] = { {1, arcs_34_0}, {1, arcs_34_1}, {2, arcs_34_2}, {1, arcs_34_3}, - {1, arcs_34_4}, + {2, arcs_34_4}, + {1, arcs_34_5}, + {1, arcs_34_6}, }; -static arc arcs_35_0[7] = { - {85, 1}, +static arc arcs_35_0[1] = { {86, 1}, - {87, 1}, - {88, 1}, - {89, 1}, - {17, 1}, - {90, 1}, }; static arc arcs_35_1[1] = { - {0, 1}, -}; -static state states_35[2] = { - {7, arcs_35_0}, - {1, arcs_35_1}, -}; -static arc arcs_36_0[1] = { - {91, 1}, -}; -static arc arcs_36_1[1] = { - {26, 2}, + {28, 2}, }; -static arc arcs_36_2[1] = { - {21, 3}, +static arc arcs_35_2[2] = { + {29, 3}, + {0, 2}, }; -static arc arcs_36_3[1] = { - {22, 4}, +static arc arcs_35_3[1] = { + {28, 4}, }; -static arc arcs_36_4[3] = { - {92, 1}, - {93, 5}, +static arc arcs_35_4[1] = { {0, 4}, }; -static arc arcs_36_5[1] = { - {21, 6}, +static state states_35[5] = { + {1, arcs_35_0}, + {1, arcs_35_1}, + {2, arcs_35_2}, + {1, arcs_35_3}, + {1, arcs_35_4}, }; -static arc arcs_36_6[1] = { - {22, 7}, +static arc arcs_36_0[8] = { + {87, 1}, + {88, 
1}, + {89, 1}, + {90, 1}, + {91, 1}, + {19, 1}, + {18, 1}, + {17, 1}, }; -static arc arcs_36_7[1] = { - {0, 7}, +static arc arcs_36_1[1] = { + {0, 1}, }; -static state states_36[8] = { - {1, arcs_36_0}, +static state states_36[2] = { + {8, arcs_36_0}, {1, arcs_36_1}, - {1, arcs_36_2}, - {1, arcs_36_3}, - {3, arcs_36_4}, - {1, arcs_36_5}, - {1, arcs_36_6}, - {1, arcs_36_7}, }; static arc arcs_37_0[1] = { - {94, 1}, + {92, 1}, }; static arc arcs_37_1[1] = { - {26, 2}, + {28, 2}, }; static arc arcs_37_2[1] = { - {21, 3}, + {23, 3}, }; static arc arcs_37_3[1] = { - {22, 4}, + {24, 4}, }; -static arc arcs_37_4[2] = { - {93, 5}, +static arc arcs_37_4[3] = { + {93, 1}, + {94, 5}, {0, 4}, }; static arc arcs_37_5[1] = { - {21, 6}, + {23, 6}, }; static arc arcs_37_6[1] = { - {22, 7}, + {24, 7}, }; static arc arcs_37_7[1] = { {0, 7}, @@ -776,7 +752,7 @@ {1, arcs_37_1}, {1, arcs_37_2}, {1, arcs_37_3}, - {2, arcs_37_4}, + {3, arcs_37_4}, {1, arcs_37_5}, {1, arcs_37_6}, {1, arcs_37_7}, @@ -785,373 +761,397 @@ {95, 1}, }; static arc arcs_38_1[1] = { - {59, 2}, + {28, 2}, }; static arc arcs_38_2[1] = { - {83, 3}, + {23, 3}, }; static arc arcs_38_3[1] = { - {9, 4}, + {24, 4}, }; -static arc arcs_38_4[1] = { - {21, 5}, +static arc arcs_38_4[2] = { + {94, 5}, + {0, 4}, }; static arc arcs_38_5[1] = { - {22, 6}, + {23, 6}, }; -static arc arcs_38_6[2] = { - {93, 7}, - {0, 6}, +static arc arcs_38_6[1] = { + {24, 7}, }; static arc arcs_38_7[1] = { - {21, 8}, -}; -static arc arcs_38_8[1] = { - {22, 9}, -}; -static arc arcs_38_9[1] = { - {0, 9}, + {0, 7}, }; -static state states_38[10] = { +static state states_38[8] = { {1, arcs_38_0}, {1, arcs_38_1}, {1, arcs_38_2}, {1, arcs_38_3}, - {1, arcs_38_4}, + {2, arcs_38_4}, {1, arcs_38_5}, - {2, arcs_38_6}, + {1, arcs_38_6}, {1, arcs_38_7}, - {1, arcs_38_8}, - {1, arcs_38_9}, }; static arc arcs_39_0[1] = { {96, 1}, }; static arc arcs_39_1[1] = { - {21, 2}, + {61, 2}, }; static arc arcs_39_2[1] = { - {22, 3}, + {85, 3}, }; -static arc arcs_39_3[2] = { - {97, 4}, - {98, 5}, +static arc arcs_39_3[1] = { + {9, 4}, }; static arc arcs_39_4[1] = { - {21, 6}, + {23, 5}, }; static arc arcs_39_5[1] = { - {21, 7}, + {24, 6}, }; -static arc arcs_39_6[1] = { - {22, 8}, +static arc arcs_39_6[2] = { + {94, 7}, + {0, 6}, }; static arc arcs_39_7[1] = { - {22, 9}, + {23, 8}, }; -static arc arcs_39_8[4] = { - {97, 4}, - {93, 10}, - {98, 5}, - {0, 8}, +static arc arcs_39_8[1] = { + {24, 9}, }; static arc arcs_39_9[1] = { {0, 9}, }; -static arc arcs_39_10[1] = { - {21, 11}, -}; -static arc arcs_39_11[1] = { - {22, 12}, -}; -static arc arcs_39_12[2] = { - {98, 5}, - {0, 12}, -}; -static state states_39[13] = { +static state states_39[10] = { {1, arcs_39_0}, {1, arcs_39_1}, {1, arcs_39_2}, - {2, arcs_39_3}, + {1, arcs_39_3}, {1, arcs_39_4}, {1, arcs_39_5}, - {1, arcs_39_6}, + {2, arcs_39_6}, {1, arcs_39_7}, - {4, arcs_39_8}, + {1, arcs_39_8}, {1, arcs_39_9}, - {1, arcs_39_10}, - {1, arcs_39_11}, - {2, arcs_39_12}, }; static arc arcs_40_0[1] = { - {99, 1}, + {97, 1}, }; static arc arcs_40_1[1] = { - {26, 2}, + {23, 2}, }; -static arc arcs_40_2[2] = { - {100, 3}, - {21, 4}, +static arc arcs_40_2[1] = { + {24, 3}, }; -static arc arcs_40_3[1] = { - {21, 4}, +static arc arcs_40_3[2] = { + {98, 4}, + {99, 5}, }; static arc arcs_40_4[1] = { - {22, 5}, + {23, 6}, }; static arc arcs_40_5[1] = { - {0, 5}, + {23, 7}, }; -static state states_40[6] = { +static arc arcs_40_6[1] = { + {24, 8}, +}; +static arc arcs_40_7[1] = { + {24, 9}, +}; +static arc arcs_40_8[4] = { + {98, 4}, + {94, 10}, + {99, 5}, + {0, 
8}, +}; +static arc arcs_40_9[1] = { + {0, 9}, +}; +static arc arcs_40_10[1] = { + {23, 11}, +}; +static arc arcs_40_11[1] = { + {24, 12}, +}; +static arc arcs_40_12[2] = { + {99, 5}, + {0, 12}, +}; +static state states_40[13] = { {1, arcs_40_0}, {1, arcs_40_1}, - {2, arcs_40_2}, - {1, arcs_40_3}, + {1, arcs_40_2}, + {2, arcs_40_3}, {1, arcs_40_4}, {1, arcs_40_5}, + {1, arcs_40_6}, + {1, arcs_40_7}, + {4, arcs_40_8}, + {1, arcs_40_9}, + {1, arcs_40_10}, + {1, arcs_40_11}, + {2, arcs_40_12}, }; static arc arcs_41_0[1] = { - {78, 1}, + {100, 1}, }; static arc arcs_41_1[1] = { - {82, 2}, + {28, 2}, }; -static arc arcs_41_2[1] = { - {0, 2}, +static arc arcs_41_2[2] = { + {101, 3}, + {23, 4}, +}; +static arc arcs_41_3[1] = { + {23, 4}, +}; +static arc arcs_41_4[1] = { + {24, 5}, +}; +static arc arcs_41_5[1] = { + {0, 5}, }; -static state states_41[3] = { +static state states_41[6] = { {1, arcs_41_0}, {1, arcs_41_1}, - {1, arcs_41_2}, + {2, arcs_41_2}, + {1, arcs_41_3}, + {1, arcs_41_4}, + {1, arcs_41_5}, }; static arc arcs_42_0[1] = { - {101, 1}, + {80, 1}, }; -static arc arcs_42_1[2] = { - {26, 2}, - {0, 1}, +static arc arcs_42_1[1] = { + {84, 2}, }; -static arc arcs_42_2[3] = { - {78, 3}, - {27, 3}, +static arc arcs_42_2[1] = { {0, 2}, }; -static arc arcs_42_3[1] = { - {26, 4}, -}; -static arc arcs_42_4[1] = { - {0, 4}, -}; -static state states_42[5] = { +static state states_42[3] = { {1, arcs_42_0}, - {2, arcs_42_1}, - {3, arcs_42_2}, - {1, arcs_42_3}, - {1, arcs_42_4}, + {1, arcs_42_1}, + {1, arcs_42_2}, }; -static arc arcs_43_0[2] = { - {3, 1}, - {2, 2}, +static arc arcs_43_0[1] = { + {102, 1}, }; -static arc arcs_43_1[1] = { +static arc arcs_43_1[2] = { + {28, 2}, {0, 1}, }; -static arc arcs_43_2[1] = { - {102, 3}, +static arc arcs_43_2[3] = { + {80, 3}, + {29, 3}, + {0, 2}, }; static arc arcs_43_3[1] = { - {6, 4}, + {28, 4}, }; -static arc arcs_43_4[2] = { - {6, 4}, - {103, 1}, +static arc arcs_43_4[1] = { + {0, 4}, }; static state states_43[5] = { - {2, arcs_43_0}, - {1, arcs_43_1}, - {1, arcs_43_2}, + {1, arcs_43_0}, + {2, arcs_43_1}, + {3, arcs_43_2}, {1, arcs_43_3}, - {2, arcs_43_4}, + {1, arcs_43_4}, }; -static arc arcs_44_0[1] = { - {105, 1}, +static arc arcs_44_0[2] = { + {3, 1}, + {2, 2}, }; -static arc arcs_44_1[2] = { - {27, 2}, +static arc arcs_44_1[1] = { {0, 1}, }; static arc arcs_44_2[1] = { - {105, 3}, + {103, 3}, }; -static arc arcs_44_3[2] = { - {27, 4}, - {0, 3}, +static arc arcs_44_3[1] = { + {6, 4}, }; static arc arcs_44_4[2] = { - {105, 3}, - {0, 4}, + {6, 4}, + {104, 1}, }; static state states_44[5] = { - {1, arcs_44_0}, - {2, arcs_44_1}, + {2, arcs_44_0}, + {1, arcs_44_1}, {1, arcs_44_2}, - {2, arcs_44_3}, + {1, arcs_44_3}, {2, arcs_44_4}, }; -static arc arcs_45_0[2] = { +static arc arcs_45_0[1] = { {106, 1}, - {107, 1}, }; -static arc arcs_45_1[1] = { +static arc arcs_45_1[2] = { + {29, 2}, {0, 1}, }; -static state states_45[2] = { - {2, arcs_45_0}, - {1, arcs_45_1}, +static arc arcs_45_2[1] = { + {106, 3}, }; -static arc arcs_46_0[1] = { - {108, 1}, +static arc arcs_45_3[2] = { + {29, 4}, + {0, 3}, }; -static arc arcs_46_1[2] = { - {23, 2}, - {21, 3}, +static arc arcs_45_4[2] = { + {106, 3}, + {0, 4}, }; -static arc arcs_46_2[1] = { - {21, 3}, +static state states_45[5] = { + {1, arcs_45_0}, + {2, arcs_45_1}, + {1, arcs_45_2}, + {2, arcs_45_3}, + {2, arcs_45_4}, }; -static arc arcs_46_3[1] = { - {105, 4}, +static arc arcs_46_0[2] = { + {107, 1}, + {108, 1}, }; -static arc arcs_46_4[1] = { - {0, 4}, +static arc arcs_46_1[1] = { + {0, 1}, }; -static state 
states_46[5] = { - {1, arcs_46_0}, - {2, arcs_46_1}, - {1, arcs_46_2}, - {1, arcs_46_3}, - {1, arcs_46_4}, +static state states_46[2] = { + {2, arcs_46_0}, + {1, arcs_46_1}, }; -static arc arcs_47_0[2] = { - {106, 1}, - {109, 2}, +static arc arcs_47_0[1] = { + {109, 1}, }; static arc arcs_47_1[2] = { - {91, 3}, - {0, 1}, + {25, 2}, + {23, 3}, }; static arc arcs_47_2[1] = { - {0, 2}, + {23, 3}, }; static arc arcs_47_3[1] = { {106, 4}, }; static arc arcs_47_4[1] = { - {93, 5}, -}; -static arc arcs_47_5[1] = { - {26, 2}, + {0, 4}, }; -static state states_47[6] = { - {2, arcs_47_0}, +static state states_47[5] = { + {1, arcs_47_0}, {2, arcs_47_1}, {1, arcs_47_2}, {1, arcs_47_3}, {1, arcs_47_4}, - {1, arcs_47_5}, }; -static arc arcs_48_0[1] = { - {110, 1}, +static arc arcs_48_0[2] = { + {107, 1}, + {110, 2}, }; static arc arcs_48_1[2] = { - {111, 0}, + {92, 3}, {0, 1}, }; -static state states_48[2] = { - {1, arcs_48_0}, +static arc arcs_48_2[1] = { + {0, 2}, +}; +static arc arcs_48_3[1] = { + {107, 4}, +}; +static arc arcs_48_4[1] = { + {94, 5}, +}; +static arc arcs_48_5[1] = { + {28, 2}, +}; +static state states_48[6] = { + {2, arcs_48_0}, {2, arcs_48_1}, + {1, arcs_48_2}, + {1, arcs_48_3}, + {1, arcs_48_4}, + {1, arcs_48_5}, }; static arc arcs_49_0[1] = { - {112, 1}, + {111, 1}, }; static arc arcs_49_1[2] = { - {113, 0}, + {112, 0}, {0, 1}, }; static state states_49[2] = { {1, arcs_49_0}, {2, arcs_49_1}, }; -static arc arcs_50_0[2] = { - {114, 1}, - {115, 2}, +static arc arcs_50_0[1] = { + {113, 1}, +}; +static arc arcs_50_1[2] = { + {114, 0}, + {0, 1}, }; -static arc arcs_50_1[1] = { - {112, 2}, +static state states_50[2] = { + {1, arcs_50_0}, + {2, arcs_50_1}, }; -static arc arcs_50_2[1] = { +static arc arcs_51_0[2] = { + {115, 1}, + {116, 2}, +}; +static arc arcs_51_1[1] = { + {113, 2}, +}; +static arc arcs_51_2[1] = { {0, 2}, }; -static state states_50[3] = { - {2, arcs_50_0}, - {1, arcs_50_1}, - {1, arcs_50_2}, +static state states_51[3] = { + {2, arcs_51_0}, + {1, arcs_51_1}, + {1, arcs_51_2}, }; -static arc arcs_51_0[1] = { - {82, 1}, +static arc arcs_52_0[1] = { + {84, 1}, }; -static arc arcs_51_1[2] = { - {116, 0}, +static arc arcs_52_1[2] = { + {117, 0}, {0, 1}, }; -static state states_51[2] = { - {1, arcs_51_0}, - {2, arcs_51_1}, +static state states_52[2] = { + {1, arcs_52_0}, + {2, arcs_52_1}, }; -static arc arcs_52_0[10] = { - {117, 1}, +static arc arcs_53_0[10] = { {118, 1}, {119, 1}, {120, 1}, {121, 1}, {122, 1}, {123, 1}, - {83, 1}, - {114, 2}, - {124, 3}, + {124, 1}, + {85, 1}, + {115, 2}, + {125, 3}, }; -static arc arcs_52_1[1] = { +static arc arcs_53_1[1] = { {0, 1}, }; -static arc arcs_52_2[1] = { - {83, 1}, +static arc arcs_53_2[1] = { + {85, 1}, }; -static arc arcs_52_3[2] = { - {114, 1}, +static arc arcs_53_3[2] = { + {115, 1}, {0, 3}, }; -static state states_52[4] = { - {10, arcs_52_0}, - {1, arcs_52_1}, - {1, arcs_52_2}, - {2, arcs_52_3}, -}; -static arc arcs_53_0[1] = { - {125, 1}, -}; -static arc arcs_53_1[2] = { - {126, 0}, - {0, 1}, -}; -static state states_53[2] = { - {1, arcs_53_0}, - {2, arcs_53_1}, +static state states_53[4] = { + {10, arcs_53_0}, + {1, arcs_53_1}, + {1, arcs_53_2}, + {2, arcs_53_3}, }; static arc arcs_54_0[1] = { - {127, 1}, + {126, 1}, }; static arc arcs_54_1[2] = { - {128, 0}, + {127, 0}, {0, 1}, }; static state states_54[2] = { @@ -1159,10 +1159,10 @@ {2, arcs_54_1}, }; static arc arcs_55_0[1] = { - {129, 1}, + {128, 1}, }; static arc arcs_55_1[2] = { - {130, 0}, + {129, 0}, {0, 1}, }; static state states_55[2] = { @@ -1170,23 +1170,22 @@ 
{2, arcs_55_1}, }; static arc arcs_56_0[1] = { - {131, 1}, + {130, 1}, }; -static arc arcs_56_1[3] = { - {132, 0}, - {57, 0}, +static arc arcs_56_1[2] = { + {131, 0}, {0, 1}, }; static state states_56[2] = { {1, arcs_56_0}, - {3, arcs_56_1}, + {2, arcs_56_1}, }; static arc arcs_57_0[1] = { - {133, 1}, + {132, 1}, }; static arc arcs_57_1[3] = { - {134, 0}, - {135, 0}, + {133, 0}, + {59, 0}, {0, 1}, }; static state states_57[2] = { @@ -1194,156 +1193,142 @@ {3, arcs_57_1}, }; static arc arcs_58_0[1] = { - {136, 1}, + {134, 1}, }; -static arc arcs_58_1[5] = { - {28, 0}, - {137, 0}, - {138, 0}, - {139, 0}, +static arc arcs_58_1[3] = { + {135, 0}, + {136, 0}, {0, 1}, }; static state states_58[2] = { {1, arcs_58_0}, - {5, arcs_58_1}, + {3, arcs_58_1}, }; -static arc arcs_59_0[4] = { - {134, 1}, +static arc arcs_59_0[1] = { + {137, 1}, +}; +static arc arcs_59_1[5] = { + {30, 0}, + {138, 0}, + {139, 0}, + {140, 0}, + {0, 1}, +}; +static state states_59[2] = { + {1, arcs_59_0}, + {5, arcs_59_1}, +}; +static arc arcs_60_0[4] = { {135, 1}, - {140, 1}, - {141, 2}, + {136, 1}, + {141, 1}, + {142, 2}, }; -static arc arcs_59_1[1] = { - {136, 2}, +static arc arcs_60_1[1] = { + {137, 2}, }; -static arc arcs_59_2[1] = { +static arc arcs_60_2[1] = { {0, 2}, }; -static state states_59[3] = { - {4, arcs_59_0}, - {1, arcs_59_1}, - {1, arcs_59_2}, -}; -static arc arcs_60_0[1] = { - {142, 1}, +static state states_60[3] = { + {4, arcs_60_0}, + {1, arcs_60_1}, + {1, arcs_60_2}, }; -static arc arcs_60_1[3] = { +static arc arcs_61_0[1] = { {143, 1}, - {29, 2}, +}; +static arc arcs_61_1[3] = { + {144, 1}, + {31, 2}, {0, 1}, }; -static arc arcs_60_2[1] = { - {136, 3}, +static arc arcs_61_2[1] = { + {137, 3}, }; -static arc arcs_60_3[1] = { +static arc arcs_61_3[1] = { {0, 3}, }; -static state states_60[4] = { - {1, arcs_60_0}, - {3, arcs_60_1}, - {1, arcs_60_2}, - {1, arcs_60_3}, +static state states_61[4] = { + {1, arcs_61_0}, + {3, arcs_61_1}, + {1, arcs_61_2}, + {1, arcs_61_3}, }; -static arc arcs_61_0[7] = { +static arc arcs_62_0[7] = { {13, 1}, - {145, 2}, - {148, 3}, - {151, 4}, - {19, 5}, - {153, 5}, - {154, 6}, + {146, 2}, + {149, 3}, + {152, 4}, + {21, 5}, + {154, 5}, + {155, 6}, }; -static arc arcs_61_1[3] = { - {43, 7}, - {144, 7}, +static arc arcs_62_1[3] = { + {45, 7}, + {145, 7}, {15, 5}, }; -static arc arcs_61_2[2] = { - {146, 8}, - {147, 5}, -}; -static arc arcs_61_3[2] = { - {149, 9}, - {150, 5}, +static arc arcs_62_2[2] = { + {147, 8}, + {148, 5}, }; -static arc arcs_61_4[1] = { - {152, 10}, +static arc arcs_62_3[2] = { + {150, 9}, + {151, 5}, +}; +static arc arcs_62_4[1] = { + {153, 10}, }; -static arc arcs_61_5[1] = { +static arc arcs_62_5[1] = { {0, 5}, }; -static arc arcs_61_6[2] = { - {154, 6}, +static arc arcs_62_6[2] = { + {155, 6}, {0, 6}, }; -static arc arcs_61_7[1] = { +static arc arcs_62_7[1] = { {15, 5}, }; -static arc arcs_61_8[1] = { - {147, 5}, +static arc arcs_62_8[1] = { + {148, 5}, }; -static arc arcs_61_9[1] = { - {150, 5}, -}; -static arc arcs_61_10[1] = { +static arc arcs_62_9[1] = { {151, 5}, }; -static state states_61[11] = { - {7, arcs_61_0}, - {3, arcs_61_1}, - {2, arcs_61_2}, - {2, arcs_61_3}, - {1, arcs_61_4}, - {1, arcs_61_5}, - {2, arcs_61_6}, - {1, arcs_61_7}, - {1, arcs_61_8}, - {1, arcs_61_9}, - {1, arcs_61_10}, -}; -static arc arcs_62_0[1] = { - {26, 1}, -}; -static arc arcs_62_1[3] = { - {155, 2}, - {27, 3}, - {0, 1}, -}; -static arc arcs_62_2[1] = { - {0, 2}, -}; -static arc arcs_62_3[2] = { - {26, 4}, - {0, 3}, -}; -static arc arcs_62_4[2] = { - {27, 3}, - {0, 4}, 
+static arc arcs_62_10[1] = { + {152, 5}, }; -static state states_62[5] = { - {1, arcs_62_0}, +static state states_62[11] = { + {7, arcs_62_0}, {3, arcs_62_1}, - {1, arcs_62_2}, + {2, arcs_62_2}, {2, arcs_62_3}, - {2, arcs_62_4}, + {1, arcs_62_4}, + {1, arcs_62_5}, + {2, arcs_62_6}, + {1, arcs_62_7}, + {1, arcs_62_8}, + {1, arcs_62_9}, + {1, arcs_62_10}, }; static arc arcs_63_0[1] = { - {26, 1}, + {28, 1}, }; static arc arcs_63_1[3] = { {156, 2}, - {27, 3}, + {29, 3}, {0, 1}, }; static arc arcs_63_2[1] = { {0, 2}, }; static arc arcs_63_3[2] = { - {26, 4}, + {28, 4}, {0, 3}, }; static arc arcs_63_4[2] = { - {27, 3}, + {29, 3}, {0, 4}, }; static state states_63[5] = { @@ -1354,153 +1339,163 @@ {2, arcs_63_4}, }; static arc arcs_64_0[1] = { - {108, 1}, + {28, 1}, }; -static arc arcs_64_1[2] = { - {23, 2}, - {21, 3}, +static arc arcs_64_1[3] = { + {157, 2}, + {29, 3}, + {0, 1}, }; static arc arcs_64_2[1] = { - {21, 3}, + {0, 2}, }; -static arc arcs_64_3[1] = { - {26, 4}, +static arc arcs_64_3[2] = { + {28, 4}, + {0, 3}, }; -static arc arcs_64_4[1] = { +static arc arcs_64_4[2] = { + {29, 3}, {0, 4}, }; static state states_64[5] = { {1, arcs_64_0}, - {2, arcs_64_1}, + {3, arcs_64_1}, {1, arcs_64_2}, - {1, arcs_64_3}, - {1, arcs_64_4}, + {2, arcs_64_3}, + {2, arcs_64_4}, }; -static arc arcs_65_0[3] = { - {13, 1}, - {145, 2}, - {75, 3}, +static arc arcs_65_0[1] = { + {109, 1}, }; static arc arcs_65_1[2] = { - {14, 4}, - {15, 5}, + {25, 2}, + {23, 3}, }; static arc arcs_65_2[1] = { - {157, 6}, + {23, 3}, }; static arc arcs_65_3[1] = { - {19, 5}, + {28, 4}, }; static arc arcs_65_4[1] = { - {15, 5}, -}; -static arc arcs_65_5[1] = { - {0, 5}, -}; -static arc arcs_65_6[1] = { - {147, 5}, + {0, 4}, }; -static state states_65[7] = { - {3, arcs_65_0}, +static state states_65[5] = { + {1, arcs_65_0}, {2, arcs_65_1}, {1, arcs_65_2}, {1, arcs_65_3}, {1, arcs_65_4}, - {1, arcs_65_5}, - {1, arcs_65_6}, }; -static arc arcs_66_0[1] = { - {158, 1}, +static arc arcs_66_0[3] = { + {13, 1}, + {146, 2}, + {77, 3}, }; static arc arcs_66_1[2] = { - {27, 2}, - {0, 1}, + {14, 4}, + {15, 5}, }; -static arc arcs_66_2[2] = { - {158, 1}, - {0, 2}, +static arc arcs_66_2[1] = { + {158, 6}, +}; +static arc arcs_66_3[1] = { + {21, 5}, +}; +static arc arcs_66_4[1] = { + {15, 5}, +}; +static arc arcs_66_5[1] = { + {0, 5}, }; -static state states_66[3] = { - {1, arcs_66_0}, +static arc arcs_66_6[1] = { + {148, 5}, +}; +static state states_66[7] = { + {3, arcs_66_0}, {2, arcs_66_1}, - {2, arcs_66_2}, + {1, arcs_66_2}, + {1, arcs_66_3}, + {1, arcs_66_4}, + {1, arcs_66_5}, + {1, arcs_66_6}, }; -static arc arcs_67_0[3] = { - {75, 1}, - {26, 2}, - {21, 3}, +static arc arcs_67_0[1] = { + {159, 1}, }; -static arc arcs_67_1[1] = { - {75, 4}, +static arc arcs_67_1[2] = { + {29, 2}, + {0, 1}, }; static arc arcs_67_2[2] = { - {21, 3}, + {159, 1}, {0, 2}, }; -static arc arcs_67_3[3] = { - {26, 5}, - {159, 6}, - {0, 3}, +static state states_67[3] = { + {1, arcs_67_0}, + {2, arcs_67_1}, + {2, arcs_67_2}, }; -static arc arcs_67_4[1] = { - {75, 6}, +static arc arcs_68_0[3] = { + {77, 1}, + {28, 2}, + {23, 3}, }; -static arc arcs_67_5[2] = { - {159, 6}, - {0, 5}, +static arc arcs_68_1[1] = { + {77, 4}, }; -static arc arcs_67_6[1] = { - {0, 6}, +static arc arcs_68_2[2] = { + {23, 3}, + {0, 2}, }; -static state states_67[7] = { - {3, arcs_67_0}, - {1, arcs_67_1}, - {2, arcs_67_2}, - {3, arcs_67_3}, - {1, arcs_67_4}, - {2, arcs_67_5}, - {1, arcs_67_6}, +static arc arcs_68_3[3] = { + {28, 5}, + {160, 6}, + {0, 3}, }; -static arc arcs_68_0[1] = { - {21, 
1}, +static arc arcs_68_4[1] = { + {77, 6}, }; -static arc arcs_68_1[2] = { - {26, 2}, - {0, 1}, +static arc arcs_68_5[2] = { + {160, 6}, + {0, 5}, }; -static arc arcs_68_2[1] = { - {0, 2}, +static arc arcs_68_6[1] = { + {0, 6}, }; -static state states_68[3] = { - {1, arcs_68_0}, - {2, arcs_68_1}, - {1, arcs_68_2}, +static state states_68[7] = { + {3, arcs_68_0}, + {1, arcs_68_1}, + {2, arcs_68_2}, + {3, arcs_68_3}, + {1, arcs_68_4}, + {2, arcs_68_5}, + {1, arcs_68_6}, }; static arc arcs_69_0[1] = { - {82, 1}, + {23, 1}, }; static arc arcs_69_1[2] = { - {27, 2}, + {28, 2}, {0, 1}, }; -static arc arcs_69_2[2] = { - {82, 1}, +static arc arcs_69_2[1] = { {0, 2}, }; static state states_69[3] = { {1, arcs_69_0}, {2, arcs_69_1}, - {2, arcs_69_2}, + {1, arcs_69_2}, }; static arc arcs_70_0[1] = { - {26, 1}, + {84, 1}, }; static arc arcs_70_1[2] = { - {27, 2}, + {29, 2}, {0, 1}, }; static arc arcs_70_2[2] = { - {26, 1}, + {84, 1}, {0, 2}, }; static state states_70[3] = { @@ -1509,491 +1504,511 @@ {2, arcs_70_2}, }; static arc arcs_71_0[1] = { - {26, 1}, -}; -static arc arcs_71_1[1] = { - {21, 2}, + {28, 1}, }; -static arc arcs_71_2[1] = { - {26, 3}, -}; -static arc arcs_71_3[2] = { - {27, 4}, - {0, 3}, +static arc arcs_71_1[2] = { + {29, 2}, + {0, 1}, }; -static arc arcs_71_4[2] = { - {26, 1}, - {0, 4}, +static arc arcs_71_2[2] = { + {28, 1}, + {0, 2}, }; -static state states_71[5] = { +static state states_71[3] = { {1, arcs_71_0}, - {1, arcs_71_1}, - {1, arcs_71_2}, - {2, arcs_71_3}, - {2, arcs_71_4}, + {2, arcs_71_1}, + {2, arcs_71_2}, }; static arc arcs_72_0[1] = { - {160, 1}, + {28, 1}, }; static arc arcs_72_1[1] = { - {19, 2}, + {23, 2}, }; -static arc arcs_72_2[2] = { - {13, 3}, - {21, 4}, +static arc arcs_72_2[1] = { + {28, 3}, }; static arc arcs_72_3[2] = { - {9, 5}, - {15, 6}, -}; -static arc arcs_72_4[1] = { - {22, 7}, -}; -static arc arcs_72_5[1] = { - {15, 6}, -}; -static arc arcs_72_6[1] = { - {21, 4}, + {29, 4}, + {0, 3}, }; -static arc arcs_72_7[1] = { - {0, 7}, +static arc arcs_72_4[2] = { + {28, 1}, + {0, 4}, }; -static state states_72[8] = { +static state states_72[5] = { {1, arcs_72_0}, {1, arcs_72_1}, - {2, arcs_72_2}, + {1, arcs_72_2}, {2, arcs_72_3}, - {1, arcs_72_4}, - {1, arcs_72_5}, - {1, arcs_72_6}, - {1, arcs_72_7}, + {2, arcs_72_4}, }; -static arc arcs_73_0[3] = { +static arc arcs_73_0[1] = { {161, 1}, - {28, 2}, - {29, 3}, }; -static arc arcs_73_1[2] = { - {27, 4}, - {0, 1}, +static arc arcs_73_1[1] = { + {21, 2}, }; -static arc arcs_73_2[1] = { - {26, 5}, +static arc arcs_73_2[2] = { + {13, 3}, + {23, 4}, }; -static arc arcs_73_3[1] = { - {26, 6}, +static arc arcs_73_3[2] = { + {9, 5}, + {15, 6}, }; -static arc arcs_73_4[4] = { - {161, 1}, - {28, 2}, - {29, 3}, - {0, 4}, +static arc arcs_73_4[1] = { + {24, 7}, }; -static arc arcs_73_5[2] = { - {27, 7}, - {0, 5}, +static arc arcs_73_5[1] = { + {15, 6}, }; static arc arcs_73_6[1] = { - {0, 6}, + {23, 4}, }; static arc arcs_73_7[1] = { - {29, 3}, + {0, 7}, }; static state states_73[8] = { - {3, arcs_73_0}, - {2, arcs_73_1}, - {1, arcs_73_2}, - {1, arcs_73_3}, - {4, arcs_73_4}, - {2, arcs_73_5}, + {1, arcs_73_0}, + {1, arcs_73_1}, + {2, arcs_73_2}, + {2, arcs_73_3}, + {1, arcs_73_4}, + {1, arcs_73_5}, {1, arcs_73_6}, {1, arcs_73_7}, }; -static arc arcs_74_0[1] = { - {26, 1}, +static arc arcs_74_0[3] = { + {162, 1}, + {30, 2}, + {31, 3}, }; -static arc arcs_74_1[3] = { - {156, 2}, - {25, 3}, +static arc arcs_74_1[2] = { + {29, 4}, {0, 1}, }; static arc arcs_74_2[1] = { - {0, 2}, + {28, 5}, }; static arc arcs_74_3[1] = { - 
{26, 2}, + {28, 6}, +}; +static arc arcs_74_4[4] = { + {162, 1}, + {30, 2}, + {31, 3}, + {0, 4}, +}; +static arc arcs_74_5[2] = { + {29, 7}, + {0, 5}, +}; +static arc arcs_74_6[1] = { + {0, 6}, +}; +static arc arcs_74_7[1] = { + {31, 3}, }; -static state states_74[4] = { - {1, arcs_74_0}, - {3, arcs_74_1}, +static state states_74[8] = { + {3, arcs_74_0}, + {2, arcs_74_1}, {1, arcs_74_2}, {1, arcs_74_3}, + {4, arcs_74_4}, + {2, arcs_74_5}, + {1, arcs_74_6}, + {1, arcs_74_7}, }; -static arc arcs_75_0[2] = { - {155, 1}, - {163, 1}, +static arc arcs_75_0[1] = { + {28, 1}, }; -static arc arcs_75_1[1] = { +static arc arcs_75_1[3] = { + {157, 2}, + {27, 3}, {0, 1}, }; -static state states_75[2] = { - {2, arcs_75_0}, - {1, arcs_75_1}, -}; -static arc arcs_76_0[1] = { - {95, 1}, -}; -static arc arcs_76_1[1] = { - {59, 2}, +static arc arcs_75_2[1] = { + {0, 2}, }; -static arc arcs_76_2[1] = { - {83, 3}, +static arc arcs_75_3[1] = { + {28, 2}, }; -static arc arcs_76_3[1] = { - {104, 4}, +static state states_75[4] = { + {1, arcs_75_0}, + {3, arcs_75_1}, + {1, arcs_75_2}, + {1, arcs_75_3}, }; -static arc arcs_76_4[2] = { - {162, 5}, - {0, 4}, +static arc arcs_76_0[2] = { + {156, 1}, + {164, 1}, }; -static arc arcs_76_5[1] = { - {0, 5}, +static arc arcs_76_1[1] = { + {0, 1}, }; -static state states_76[6] = { - {1, arcs_76_0}, +static state states_76[2] = { + {2, arcs_76_0}, {1, arcs_76_1}, - {1, arcs_76_2}, - {1, arcs_76_3}, - {2, arcs_76_4}, - {1, arcs_76_5}, }; static arc arcs_77_0[1] = { - {91, 1}, + {96, 1}, }; static arc arcs_77_1[1] = { - {105, 2}, + {61, 2}, }; -static arc arcs_77_2[2] = { - {162, 3}, - {0, 2}, +static arc arcs_77_2[1] = { + {85, 3}, }; static arc arcs_77_3[1] = { - {0, 3}, + {105, 4}, +}; +static arc arcs_77_4[2] = { + {163, 5}, + {0, 4}, }; -static state states_77[4] = { +static arc arcs_77_5[1] = { + {0, 5}, +}; +static state states_77[6] = { {1, arcs_77_0}, {1, arcs_77_1}, - {2, arcs_77_2}, + {1, arcs_77_2}, {1, arcs_77_3}, + {2, arcs_77_4}, + {1, arcs_77_5}, }; -static arc arcs_78_0[2] = { - {156, 1}, - {165, 1}, +static arc arcs_78_0[1] = { + {92, 1}, }; static arc arcs_78_1[1] = { - {0, 1}, -}; -static state states_78[2] = { - {2, arcs_78_0}, - {1, arcs_78_1}, -}; -static arc arcs_79_0[1] = { - {95, 1}, + {106, 2}, }; -static arc arcs_79_1[1] = { - {59, 2}, +static arc arcs_78_2[2] = { + {163, 3}, + {0, 2}, }; -static arc arcs_79_2[1] = { - {83, 3}, +static arc arcs_78_3[1] = { + {0, 3}, }; -static arc arcs_79_3[1] = { - {106, 4}, +static state states_78[4] = { + {1, arcs_78_0}, + {1, arcs_78_1}, + {2, arcs_78_2}, + {1, arcs_78_3}, }; -static arc arcs_79_4[2] = { - {164, 5}, - {0, 4}, +static arc arcs_79_0[2] = { + {157, 1}, + {166, 1}, }; -static arc arcs_79_5[1] = { - {0, 5}, +static arc arcs_79_1[1] = { + {0, 1}, }; -static state states_79[6] = { - {1, arcs_79_0}, +static state states_79[2] = { + {2, arcs_79_0}, {1, arcs_79_1}, - {1, arcs_79_2}, - {1, arcs_79_3}, - {2, arcs_79_4}, - {1, arcs_79_5}, }; static arc arcs_80_0[1] = { - {91, 1}, + {96, 1}, }; static arc arcs_80_1[1] = { - {105, 2}, + {61, 2}, }; -static arc arcs_80_2[2] = { - {164, 3}, - {0, 2}, +static arc arcs_80_2[1] = { + {85, 3}, }; static arc arcs_80_3[1] = { - {0, 3}, + {107, 4}, }; -static state states_80[4] = { +static arc arcs_80_4[2] = { + {165, 5}, + {0, 4}, +}; +static arc arcs_80_5[1] = { + {0, 5}, +}; +static state states_80[6] = { {1, arcs_80_0}, {1, arcs_80_1}, - {2, arcs_80_2}, + {1, arcs_80_2}, {1, arcs_80_3}, + {2, arcs_80_4}, + {1, arcs_80_5}, }; static arc arcs_81_0[1] = { - {26, 1}, + 
{92, 1}, }; -static arc arcs_81_1[2] = { - {27, 0}, - {0, 1}, +static arc arcs_81_1[1] = { + {106, 2}, +}; +static arc arcs_81_2[2] = { + {165, 3}, + {0, 2}, +}; +static arc arcs_81_3[1] = { + {0, 3}, }; -static state states_81[2] = { +static state states_81[4] = { {1, arcs_81_0}, - {2, arcs_81_1}, + {1, arcs_81_1}, + {2, arcs_81_2}, + {1, arcs_81_3}, }; static arc arcs_82_0[1] = { - {19, 1}, + {28, 1}, }; -static arc arcs_82_1[1] = { +static arc arcs_82_1[2] = { + {29, 0}, {0, 1}, }; static state states_82[2] = { {1, arcs_82_0}, - {1, arcs_82_1}, + {2, arcs_82_1}, }; static arc arcs_83_0[1] = { - {167, 1}, + {21, 1}, +}; +static arc arcs_83_1[1] = { + {0, 1}, +}; +static state states_83[2] = { + {1, arcs_83_0}, + {1, arcs_83_1}, +}; +static arc arcs_84_0[1] = { + {168, 1}, }; -static arc arcs_83_1[2] = { +static arc arcs_84_1[2] = { {9, 2}, {0, 1}, }; -static arc arcs_83_2[1] = { +static arc arcs_84_2[1] = { {0, 2}, }; -static state states_83[3] = { - {1, arcs_83_0}, - {2, arcs_83_1}, - {1, arcs_83_2}, +static state states_84[3] = { + {1, arcs_84_0}, + {2, arcs_84_1}, + {1, arcs_84_2}, }; -static dfa dfas[84] = { +static dfa dfas[85] = { {256, "single_input", 0, 3, states_0, - "\004\050\014\000\000\000\000\025\074\005\023\310\011\020\004\000\300\020\222\006\201"}, + "\004\050\060\000\000\000\000\124\360\024\114\220\023\040\010\000\200\041\044\015\002\001"}, {257, "file_input", 0, 2, states_1, - "\204\050\014\000\000\000\000\025\074\005\023\310\011\020\004\000\300\020\222\006\201"}, + "\204\050\060\000\000\000\000\124\360\024\114\220\023\040\010\000\200\041\044\015\002\001"}, {258, "eval_input", 0, 3, states_2, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, {259, "decorator", 0, 7, states_3, - "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {260, "decorators", 0, 2, states_4, - "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {261, "funcdef", 0, 7, states_5, - "\000\010\004\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {262, "parameters", 0, 4, states_6, - "\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {263, "varargslist", 0, 10, states_7, - "\000\040\010\060\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {264, "fpdef", 0, 4, states_8, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {265, "fplist", 0, 3, states_9, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {266, "stmt", 0, 2, states_10, - "\000\050\014\000\000\000\000\025\074\005\023\310\011\020\004\000\300\020\222\006\201"}, - {267, "simple_stmt", 0, 4, states_11, - "\000\040\010\000\000\000\000\025\074\005\023\000\000\020\004\000\300\020\222\006\200"}, - {268, "small_stmt", 0, 2, states_12, - "\000\040\010\000\000\000\000\025\074\005\023\000\000\020\004\000\300\020\222\006\200"}, - {269, "expr_stmt", 0, 6, states_13, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {270, "augassign", 0, 2, states_14, - "\000\000\000\000\000\360\377\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {271, "print_stmt", 0, 9, states_15, - 
"\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {272, "del_stmt", 0, 3, states_16, - "\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {273, "pass_stmt", 0, 2, states_17, - "\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {274, "flow_stmt", 0, 2, states_18, - "\000\000\000\000\000\000\000\000\074\000\000\000\000\000\000\000\000\000\000\000\200"}, - {275, "break_stmt", 0, 2, states_19, - "\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000"}, - {276, "continue_stmt", 0, 2, states_20, - "\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000"}, - {277, "return_stmt", 0, 3, states_21, - "\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000"}, - {278, "yield_stmt", 0, 2, states_22, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\200"}, - {279, "raise_stmt", 0, 7, states_23, - "\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000"}, - {280, "import_stmt", 0, 2, states_24, - "\000\000\000\000\000\000\000\000\000\005\000\000\000\000\000\000\000\000\000\000\000"}, - {281, "import_name", 0, 3, states_25, - "\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000"}, - {282, "import_from", 0, 8, states_26, - "\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000"}, - {283, "import_as_name", 0, 4, states_27, - "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {284, "dotted_as_name", 0, 4, states_28, - "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {285, "import_as_names", 0, 3, states_29, - "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {286, "dotted_as_names", 0, 2, states_30, - "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {287, "dotted_name", 0, 2, states_31, - "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {288, "global_stmt", 0, 3, states_32, - "\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000"}, - {289, "exec_stmt", 0, 7, states_33, - "\000\000\000\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\000\000\000"}, - {290, "assert_stmt", 0, 5, states_34, - "\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"}, - {291, "compound_stmt", 0, 2, states_35, - "\000\010\004\000\000\000\000\000\000\000\000\310\011\000\000\000\000\000\000\000\001"}, - {292, "if_stmt", 0, 8, states_36, - "\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"}, - {293, "while_stmt", 0, 8, states_37, - "\000\000\000\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000"}, - {294, "for_stmt", 0, 10, states_38, - "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000"}, - {295, "try_stmt", 0, 13, states_39, - "\000\000\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000"}, - {296, "with_stmt", 0, 6, states_40, - "\000\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000"}, - {297, "with_var", 0, 3, states_41, - "\000\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000"}, - {298, "except_clause", 0, 5, states_42, - 
"\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000"}, - {299, "suite", 0, 5, states_43, - "\004\040\010\000\000\000\000\025\074\005\023\000\000\020\004\000\300\020\222\006\200"}, - {300, "testlist_safe", 0, 5, states_44, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {301, "old_test", 0, 2, states_45, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {302, "old_lambdef", 0, 5, states_46, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000"}, - {303, "test", 0, 6, states_47, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {304, "or_test", 0, 2, states_48, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\004\000\300\020\222\006\000"}, - {305, "and_test", 0, 2, states_49, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\004\000\300\020\222\006\000"}, - {306, "not_test", 0, 3, states_50, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\004\000\300\020\222\006\000"}, - {307, "comparison", 0, 2, states_51, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {308, "comp_op", 0, 4, states_52, - "\000\000\000\000\000\000\000\000\000\000\010\000\000\000\344\037\000\000\000\000\000"}, - {309, "expr", 0, 2, states_53, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {310, "xor_expr", 0, 2, states_54, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {311, "and_expr", 0, 2, states_55, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {312, "shift_expr", 0, 2, states_56, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {313, "arith_expr", 0, 2, states_57, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {314, "term", 0, 2, states_58, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {315, "factor", 0, 3, states_59, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {316, "power", 0, 4, states_60, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\222\006\000"}, - {317, "atom", 0, 11, states_61, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\222\006\000"}, - {318, "listmaker", 0, 5, states_62, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {319, "testlist_gexp", 0, 5, states_63, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {320, "lambdef", 0, 5, states_64, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000"}, - {321, "trailer", 0, 7, states_65, - "\000\040\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\002\000\000"}, - {322, "subscriptlist", 0, 3, states_66, - "\000\040\050\000\000\000\000\000\000\010\000\000\000\020\004\000\300\020\222\006\000"}, - {323, "subscript", 0, 7, states_67, - "\000\040\050\000\000\000\000\000\000\010\000\000\000\020\004\000\300\020\222\006\000"}, - {324, "sliceop", 0, 3, states_68, - "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {325, "exprlist", 0, 3, states_69, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"}, - {326, 
"testlist", 0, 3, states_70, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {327, "dictmaker", 0, 5, states_71, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {328, "classdef", 0, 8, states_72, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001"}, - {329, "arglist", 0, 8, states_73, - "\000\040\010\060\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {330, "argument", 0, 4, states_74, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {331, "list_iter", 0, 2, states_75, - "\000\000\000\000\000\000\000\000\000\000\000\210\000\000\000\000\000\000\000\000\000"}, - {332, "list_for", 0, 6, states_76, - "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000"}, - {333, "list_if", 0, 4, states_77, - "\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"}, - {334, "gen_iter", 0, 2, states_78, - "\000\000\000\000\000\000\000\000\000\000\000\210\000\000\000\000\000\000\000\000\000"}, - {335, "gen_for", 0, 6, states_79, - "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000"}, - {336, "gen_if", 0, 4, states_80, - "\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"}, - {337, "testlist1", 0, 2, states_81, - "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"}, - {338, "encoding_decl", 0, 2, states_82, - "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, - {339, "yield_expr", 0, 3, states_83, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\200"}, + "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {261, "decorated", 0, 3, states_5, + "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {262, "funcdef", 0, 6, states_6, + "\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {263, "parameters", 0, 4, states_7, + "\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {264, "varargslist", 0, 10, states_8, + "\000\040\040\300\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {265, "fpdef", 0, 4, states_9, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {266, "fplist", 0, 3, states_10, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {267, "stmt", 0, 2, states_11, + "\000\050\060\000\000\000\000\124\360\024\114\220\023\040\010\000\200\041\044\015\002\001"}, + {268, "simple_stmt", 0, 4, states_12, + "\000\040\040\000\000\000\000\124\360\024\114\000\000\040\010\000\200\041\044\015\000\001"}, + {269, "small_stmt", 0, 2, states_13, + "\000\040\040\000\000\000\000\124\360\024\114\000\000\040\010\000\200\041\044\015\000\001"}, + {270, "expr_stmt", 0, 6, states_14, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {271, "augassign", 0, 2, states_15, + "\000\000\000\000\000\300\377\003\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {272, "print_stmt", 0, 9, states_16, + "\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {273, "del_stmt", 0, 3, states_17, + 
"\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {274, "pass_stmt", 0, 2, states_18, + "\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {275, "flow_stmt", 0, 2, states_19, + "\000\000\000\000\000\000\000\000\360\000\000\000\000\000\000\000\000\000\000\000\000\001"}, + {276, "break_stmt", 0, 2, states_20, + "\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {277, "continue_stmt", 0, 2, states_21, + "\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {278, "return_stmt", 0, 3, states_22, + "\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {279, "yield_stmt", 0, 2, states_23, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001"}, + {280, "raise_stmt", 0, 7, states_24, + "\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {281, "import_stmt", 0, 2, states_25, + "\000\000\000\000\000\000\000\000\000\024\000\000\000\000\000\000\000\000\000\000\000\000"}, + {282, "import_name", 0, 3, states_26, + "\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000"}, + {283, "import_from", 0, 8, states_27, + "\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000"}, + {284, "import_as_name", 0, 4, states_28, + "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {285, "dotted_as_name", 0, 4, states_29, + "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {286, "import_as_names", 0, 3, states_30, + "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {287, "dotted_as_names", 0, 2, states_31, + "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {288, "dotted_name", 0, 2, states_32, + "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {289, "global_stmt", 0, 3, states_33, + "\000\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000"}, + {290, "exec_stmt", 0, 7, states_34, + "\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000\000\000"}, + {291, "assert_stmt", 0, 5, states_35, + "\000\000\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000"}, + {292, "compound_stmt", 0, 2, states_36, + "\000\010\020\000\000\000\000\000\000\000\000\220\023\000\000\000\000\000\000\000\002\000"}, + {293, "if_stmt", 0, 8, states_37, + "\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"}, + {294, "while_stmt", 0, 8, states_38, + "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000\000"}, + {295, "for_stmt", 0, 10, states_39, + "\000\000\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000"}, + {296, "try_stmt", 0, 13, states_40, + "\000\000\000\000\000\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\000\000"}, + {297, "with_stmt", 0, 6, states_41, + "\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000"}, + {298, "with_var", 0, 3, states_42, + "\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000"}, + {299, "except_clause", 0, 5, states_43, + 
"\000\000\000\000\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000"}, + {300, "suite", 0, 5, states_44, + "\004\040\040\000\000\000\000\124\360\024\114\000\000\040\010\000\200\041\044\015\000\001"}, + {301, "testlist_safe", 0, 5, states_45, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {302, "old_test", 0, 2, states_46, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {303, "old_lambdef", 0, 5, states_47, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000"}, + {304, "test", 0, 6, states_48, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {305, "or_test", 0, 2, states_49, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\010\000\200\041\044\015\000\000"}, + {306, "and_test", 0, 2, states_50, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\010\000\200\041\044\015\000\000"}, + {307, "not_test", 0, 3, states_51, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\010\000\200\041\044\015\000\000"}, + {308, "comparison", 0, 2, states_52, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {309, "comp_op", 0, 4, states_53, + "\000\000\000\000\000\000\000\000\000\000\040\000\000\000\310\077\000\000\000\000\000\000"}, + {310, "expr", 0, 2, states_54, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {311, "xor_expr", 0, 2, states_55, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {312, "and_expr", 0, 2, states_56, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {313, "shift_expr", 0, 2, states_57, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {314, "arith_expr", 0, 2, states_58, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {315, "term", 0, 2, states_59, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {316, "factor", 0, 3, states_60, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {317, "power", 0, 4, states_61, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\044\015\000\000"}, + {318, "atom", 0, 11, states_62, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\044\015\000\000"}, + {319, "listmaker", 0, 5, states_63, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {320, "testlist_gexp", 0, 5, states_64, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {321, "lambdef", 0, 5, states_65, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000"}, + {322, "trailer", 0, 7, states_66, + "\000\040\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\004\000\000\000"}, + {323, "subscriptlist", 0, 3, states_67, + "\000\040\240\000\000\000\000\000\000\040\000\000\000\040\010\000\200\041\044\015\000\000"}, + {324, "subscript", 0, 7, states_68, + "\000\040\240\000\000\000\000\000\000\040\000\000\000\040\010\000\200\041\044\015\000\000"}, + {325, "sliceop", 0, 3, states_69, + "\000\000\200\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {326, "exprlist", 0, 3, 
states_70, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\000\000\000\200\041\044\015\000\000"}, + {327, "testlist", 0, 3, states_71, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {328, "dictmaker", 0, 5, states_72, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {329, "classdef", 0, 8, states_73, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\002\000"}, + {330, "arglist", 0, 8, states_74, + "\000\040\040\300\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {331, "argument", 0, 4, states_75, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {332, "list_iter", 0, 2, states_76, + "\000\000\000\000\000\000\000\000\000\000\000\020\001\000\000\000\000\000\000\000\000\000"}, + {333, "list_for", 0, 6, states_77, + "\000\000\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000"}, + {334, "list_if", 0, 4, states_78, + "\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"}, + {335, "gen_iter", 0, 2, states_79, + "\000\000\000\000\000\000\000\000\000\000\000\020\001\000\000\000\000\000\000\000\000\000"}, + {336, "gen_for", 0, 6, states_80, + "\000\000\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000"}, + {337, "gen_if", 0, 4, states_81, + "\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"}, + {338, "testlist1", 0, 2, states_82, + "\000\040\040\000\000\000\000\000\000\000\000\000\000\040\010\000\200\041\044\015\000\000"}, + {339, "encoding_decl", 0, 2, states_83, + "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, + {340, "yield_expr", 0, 3, states_84, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001"}, }; -static label labels[168] = { +static label labels[169] = { {0, "EMPTY"}, {256, 0}, {4, 0}, - {267, 0}, - {291, 0}, + {268, 0}, + {292, 0}, {257, 0}, - {266, 0}, + {267, 0}, {0, 0}, {258, 0}, - {326, 0}, + {327, 0}, {259, 0}, {50, 0}, - {287, 0}, + {288, 0}, {7, 0}, - {329, 0}, + {330, 0}, {8, 0}, {260, 0}, {261, 0}, + {329, 0}, + {262, 0}, {1, "def"}, {1, 0}, - {262, 0}, - {11, 0}, - {299, 0}, {263, 0}, + {11, 0}, + {300, 0}, {264, 0}, + {265, 0}, {22, 0}, - {303, 0}, + {304, 0}, {12, 0}, {16, 0}, {36, 0}, - {265, 0}, - {268, 0}, - {13, 0}, + {266, 0}, {269, 0}, - {271, 0}, + {13, 0}, + {270, 0}, {272, 0}, {273, 0}, {274, 0}, - {280, 0}, - {288, 0}, + {275, 0}, + {281, 0}, {289, 0}, {290, 0}, - {270, 0}, - {339, 0}, + {291, 0}, + {271, 0}, + {340, 0}, {37, 0}, {38, 0}, {39, 0}, @@ -2009,64 +2024,63 @@ {1, "print"}, {35, 0}, {1, "del"}, - {325, 0}, + {326, 0}, {1, "pass"}, - {275, 0}, {276, 0}, {277, 0}, - {279, 0}, {278, 0}, + {280, 0}, + {279, 0}, {1, "break"}, {1, "continue"}, {1, "return"}, {1, "raise"}, - {281, 0}, {282, 0}, + {283, 0}, {1, "import"}, - {286, 0}, + {287, 0}, {1, "from"}, {23, 0}, - {285, 0}, - {283, 0}, - {1, "as"}, + {286, 0}, {284, 0}, + {1, "as"}, + {285, 0}, {1, "global"}, {1, "exec"}, - {309, 0}, + {310, 0}, {1, "in"}, {1, "assert"}, - {292, 0}, {293, 0}, {294, 0}, {295, 0}, {296, 0}, - {328, 0}, + {297, 0}, {1, "if"}, {1, "elif"}, {1, "else"}, {1, "while"}, {1, "for"}, {1, "try"}, - {298, 0}, + {299, 0}, {1, "finally"}, {1, "with"}, - {297, 0}, + {298, 0}, {1, "except"}, {5, 0}, {6, 0}, - {300, 0}, {301, 0}, - {304, 0}, {302, 0}, - {1, 
"lambda"}, - {320, 0}, {305, 0}, - {1, "or"}, + {303, 0}, + {1, "lambda"}, + {321, 0}, {306, 0}, + {1, "or"}, + {307, 0}, {1, "and"}, {1, "not"}, - {307, 0}, {308, 0}, + {309, 0}, {20, 0}, {21, 0}, {28, 0}, @@ -2075,53 +2089,53 @@ {29, 0}, {29, 0}, {1, "is"}, - {310, 0}, - {18, 0}, {311, 0}, - {33, 0}, + {18, 0}, {312, 0}, - {19, 0}, + {33, 0}, {313, 0}, - {34, 0}, + {19, 0}, {314, 0}, + {34, 0}, + {315, 0}, {14, 0}, {15, 0}, - {315, 0}, + {316, 0}, {17, 0}, {24, 0}, {48, 0}, {32, 0}, - {316, 0}, {317, 0}, - {321, 0}, - {319, 0}, - {9, 0}, {318, 0}, + {322, 0}, + {320, 0}, + {9, 0}, + {319, 0}, {10, 0}, {26, 0}, - {327, 0}, + {328, 0}, {27, 0}, {25, 0}, - {337, 0}, + {338, 0}, {2, 0}, {3, 0}, - {332, 0}, - {335, 0}, - {322, 0}, + {333, 0}, + {336, 0}, {323, 0}, {324, 0}, + {325, 0}, {1, "class"}, - {330, 0}, {331, 0}, - {333, 0}, + {332, 0}, {334, 0}, - {336, 0}, - {338, 0}, + {335, 0}, + {337, 0}, + {339, 0}, {1, "yield"}, }; grammar _PyParser_Grammar = { - 84, + 85, dfas, - {168, labels}, + {169, labels}, 256 }; Modified: python/branches/libffi3-branch/Python/import.c ============================================================================== --- python/branches/libffi3-branch/Python/import.c (original) +++ python/branches/libffi3-branch/Python/import.c Tue Mar 4 15:50:53 2008 @@ -22,6 +22,11 @@ extern "C" { #endif +#ifdef MS_WINDOWS +/* for stat.st_mode */ +typedef unsigned short mode_t; +#endif + extern time_t PyOS_GetLastModificationTime(char *, FILE *); /* In getmtime.c */ @@ -829,7 +834,7 @@ /* Helper to open a bytecode file for writing in exclusive mode */ static FILE * -open_exclusive(char *filename) +open_exclusive(char *filename, mode_t mode) { #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC) /* Use O_EXCL to avoid a race condition when another process tries to @@ -845,9 +850,9 @@ |O_BINARY /* necessary for Windows */ #endif #ifdef __VMS - , 0666, "ctxt=bin", "shr=nil" + , mode, "ctxt=bin", "shr=nil" #else - , 0666 + , mode #endif ); if (fd < 0) @@ -866,11 +871,13 @@ remove the file. */ static void -write_compiled_module(PyCodeObject *co, char *cpathname, time_t mtime) +write_compiled_module(PyCodeObject *co, char *cpathname, struct stat *srcstat) { FILE *fp; + time_t mtime = srcstat->st_mtime; + mode_t mode = srcstat->st_mode; - fp = open_exclusive(cpathname); + fp = open_exclusive(cpathname, mode); if (fp == NULL) { if (Py_VerboseFlag) PySys_WriteStderr( @@ -907,17 +914,16 @@ static PyObject * load_source_module(char *name, char *pathname, FILE *fp) { - time_t mtime; + struct stat st; FILE *fpc; char buf[MAXPATHLEN+1]; char *cpathname; PyCodeObject *co; PyObject *m; - - mtime = PyOS_GetLastModificationTime(pathname, fp); - if (mtime == (time_t)(-1)) { + + if (fstat(fileno(fp), &st) != 0) { PyErr_Format(PyExc_RuntimeError, - "unable to get modification time from '%s'", + "unable to get file status from '%s'", pathname); return NULL; } @@ -926,7 +932,7 @@ in 4 bytes. This will be fine until sometime in the year 2038, when a 4-byte signed time_t will overflow. 
*/ - if (mtime >> 32) { + if (st.st_mtime >> 32) { PyErr_SetString(PyExc_OverflowError, "modification time overflows a 4 byte field"); return NULL; @@ -935,7 +941,7 @@ cpathname = make_compiled_pathname(pathname, buf, (size_t)MAXPATHLEN + 1); if (cpathname != NULL && - (fpc = check_compiled_module(pathname, mtime, cpathname))) { + (fpc = check_compiled_module(pathname, st.st_mtime, cpathname))) { co = read_compiled_module(cpathname, fpc); fclose(fpc); if (co == NULL) @@ -955,7 +961,7 @@ if (cpathname) { PyObject *ro = PySys_GetObject("dont_write_bytecode"); if (ro == NULL || !PyObject_IsTrue(ro)) - write_compiled_module(co, cpathname, mtime); + write_compiled_module(co, cpathname, &st); } } m = PyImport_ExecCodeModuleEx(name, (PyObject *)co, pathname); Modified: python/branches/libffi3-branch/Python/mystrtoul.c ============================================================================== --- python/branches/libffi3-branch/Python/mystrtoul.c (original) +++ python/branches/libffi3-branch/Python/mystrtoul.c Tue Mar 4 15:50:53 2008 @@ -5,14 +5,6 @@ #define _SGI_MP_SOURCE #endif -/* Convert a possibly signed character to a nonnegative int */ -/* XXX This assumes characters are 8 bits wide */ -#ifdef __CHAR_UNSIGNED__ -#define Py_CHARMASK(c) (c) -#else -#define Py_CHARMASK(c) ((c) & 0xff) -#endif - /* strtol and strtoul, renamed to avoid conflicts */ Modified: python/branches/libffi3-branch/Python/peephole.c ============================================================================== --- python/branches/libffi3-branch/Python/peephole.c (original) +++ python/branches/libffi3-branch/Python/peephole.c Tue Mar 4 15:50:53 2008 @@ -317,7 +317,7 @@ if (codestr == NULL) goto exitUnchanged; codestr = (unsigned char *)memcpy(codestr, - PyString_AS_STRING(code), codelen); + PyString_AS_STRING(code), codelen); /* Verify that RETURN_VALUE terminates the codestring. This allows the various transformation patterns to look ahead several Modified: python/branches/libffi3-branch/Python/symtable.c ============================================================================== --- python/branches/libffi3-branch/Python/symtable.c (original) +++ python/branches/libffi3-branch/Python/symtable.c Tue Mar 4 15:50:53 2008 @@ -931,8 +931,8 @@ return 0; if (s->v.FunctionDef.args->defaults) VISIT_SEQ(st, expr, s->v.FunctionDef.args->defaults); - if (s->v.FunctionDef.decorators) - VISIT_SEQ(st, expr, s->v.FunctionDef.decorators); + if (s->v.FunctionDef.decorator_list) + VISIT_SEQ(st, expr, s->v.FunctionDef.decorator_list); if (!symtable_enter_block(st, s->v.FunctionDef.name, FunctionBlock, (void *)s, s->lineno)) return 0; @@ -946,6 +946,8 @@ if (!symtable_add_def(st, s->v.ClassDef.name, DEF_LOCAL)) return 0; VISIT_SEQ(st, expr, s->v.ClassDef.bases); + if (s->v.ClassDef.decorator_list) + VISIT_SEQ(st, expr, s->v.ClassDef.decorator_list); if (!symtable_enter_block(st, s->v.ClassDef.name, ClassBlock, (void *)s, s->lineno)) return 0; Modified: python/branches/libffi3-branch/README ============================================================================== --- python/branches/libffi3-branch/README (original) +++ python/branches/libffi3-branch/README Tue Mar 4 15:50:53 2008 @@ -1,7 +1,7 @@ -This is Python version 2.6 alpha 0 +This is Python version 2.6 alpha 1 ================================== -Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007 +Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation. All rights reserved. 
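[Annotation] For readers following the Python/import.c hunk earlier in this diff: the point of threading the source file's struct stat through write_compiled_module() and into open_exclusive() is that the byte-compiled file now picks up the permission bits of its .py source instead of a hard-coded 0666. The following is only a minimal, POSIX-flavoured Python 2 sketch of the observable effect; the module name permdemo is made up for the illustration, and the resulting mode is still subject to the process umask.

    import os
    import stat
    import sys
    import tempfile

    # Hypothetical demonstration: create a module with restrictive
    # permissions, import it so the interpreter byte-compiles it,
    # then compare the modes of the .py and the generated .pyc.
    tmpdir = tempfile.mkdtemp()
    src = os.path.join(tmpdir, "permdemo.py")
    f = open(src, "w")
    f.write("X = 1\n")
    f.close()
    os.chmod(src, 0640)                 # rw-r----- on the .py source

    sys.path.insert(0, tmpdir)
    import permdemo                     # import triggers byte-compilation

    pyc = src + "c"                     # the interpreter writes .pyo under -O
    print oct(stat.S_IMODE(os.stat(src).st_mode))   # 0640
    print oct(stat.S_IMODE(os.stat(pyc).st_mode))   # now matches the source
                                                    # (previously 0666 & ~umask)
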
Modified: python/branches/libffi3-branch/Tools/buildbot/external.bat ============================================================================== --- python/branches/libffi3-branch/Tools/buildbot/external.bat (original) +++ python/branches/libffi3-branch/Tools/buildbot/external.bat Tue Mar 4 15:50:53 2008 @@ -8,9 +8,14 @@ if not exist bzip2-1.0.3 svn export http://svn.python.org/projects/external/bzip2-1.0.3 @rem Sleepycat db -if not exist db-4.4.20 svn export http://svn.python.org/projects/external/db-4.4.20 + at rem Remove VS 2003 builds +if exist db-4.4.20 if not exist db-4.4.20\build_win32\this_is_for_vs9 ( + echo Removing old build + rd /s/q db-4.4.20 +) +if not exist db-4.4.20 svn export http://svn.python.org/projects/external/db-4.4.20-vs9 db-4.4.20 if not exist db-4.4.20\build_win32\debug\libdb44sd.lib ( - vcbuild db-4.4.20\build_win32\Berkeley_DB.sln /build Debug /project db_static + vcbuild db-4.4.20\build_win32\db_static.vcproj "Debug|Win32" ) @rem OpenSSL Modified: python/branches/libffi3-branch/Tools/compiler/ast.txt ============================================================================== --- python/branches/libffi3-branch/Tools/compiler/ast.txt (original) +++ python/branches/libffi3-branch/Tools/compiler/ast.txt Tue Mar 4 15:50:53 2008 @@ -14,7 +14,7 @@ Decorators: nodes! Function: decorators&, name*, argnames*, defaults!, flags*, doc*, code Lambda: argnames*, defaults!, flags*, code -Class: name*, bases!, doc*, code +Class: name*, bases!, doc*, code, decorators& = None Pass: Break: Continue: @@ -97,7 +97,7 @@ self.kwargs = 1 init(GenExpr): - self.argnames = ['[outmost-iterable]'] + self.argnames = ['.0'] self.varargs = self.kwargs = None init(GenExprFor): Modified: python/branches/libffi3-branch/Tools/compiler/astgen.py ============================================================================== --- python/branches/libffi3-branch/Tools/compiler/astgen.py (original) +++ python/branches/libffi3-branch/Tools/compiler/astgen.py Tue Mar 4 15:50:53 2008 @@ -8,7 +8,6 @@ """ import fileinput -import getopt import re import sys from StringIO import StringIO Modified: python/branches/libffi3-branch/Tools/compiler/dumppyc.py ============================================================================== --- python/branches/libffi3-branch/Tools/compiler/dumppyc.py (original) +++ python/branches/libffi3-branch/Tools/compiler/dumppyc.py Tue Mar 4 15:50:53 2008 @@ -1,7 +1,6 @@ #! 
/usr/bin/env python import marshal -import os import dis import types Modified: python/branches/libffi3-branch/Tools/faqwiz/faqw.py ============================================================================== --- python/branches/libffi3-branch/Tools/faqwiz/faqw.py (original) +++ python/branches/libffi3-branch/Tools/faqwiz/faqw.py Tue Mar 4 15:50:53 2008 @@ -20,7 +20,7 @@ try: FAQDIR = "/usr/people/guido/python/FAQ" SRCDIR = "/usr/people/guido/python/src/Tools/faqwiz" - import os, sys, time, operator + import os, sys os.chdir(FAQDIR) sys.path.insert(0, SRCDIR) import faqwiz Modified: python/branches/libffi3-branch/Tools/modulator/Tkextra.py ============================================================================== --- python/branches/libffi3-branch/Tools/modulator/Tkextra.py (original) +++ python/branches/libffi3-branch/Tools/modulator/Tkextra.py Tue Mar 4 15:50:53 2008 @@ -218,7 +218,6 @@ 0, 'Save', 'Save as text') def _test(): - import sys global mainWidget mainWidget = Frame() Pack.config(mainWidget) Modified: python/branches/libffi3-branch/Tools/msi/msi.py ============================================================================== --- python/branches/libffi3-branch/Tools/msi/msi.py (original) +++ python/branches/libffi3-branch/Tools/msi/msi.py Tue Mar 4 15:50:53 2008 @@ -836,29 +836,21 @@ installer.FileVersion("msvcr71.dll", 1) def extract_msvcr90(): - import _winreg - # Find the location of the merge modules - k = _winreg.OpenKey( - _winreg.HKEY_LOCAL_MACHINE, - r"Software\Microsoft\VisualStudio\9.0\Setup\VS") - prod_dir = _winreg.QueryValueEx(k, "ProductDir")[0] - _winreg.CloseKey(k) - - # Copy msvcr90* - dir = os.path.join(prod_dir, r'VC\redist\x86\Microsoft.VC90.CRT') - files = glob.glob1(dir, "*CRT*.dll") + glob.glob1(dir, "*VCR*.dll") - for file in files: - shutil.copy(os.path.join(dir, file), '.') - - dir = os.path.join(prod_dir, r'VC\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugCRT') - files = glob.glob1(dir, "*CRT*.dll") + glob.glob1(dir, "*VCR*.dll") - for file in files: - shutil.copy(os.path.join(dir, file), '.') + # Find the redistributable files + dir = os.path.join(os.environ['VS90COMNTOOLS'], r"..\..\VC\redist\x86\Microsoft.VC90.CRT") - # Find the version/language of msvcr90.dll + result = [] installer = msilib.MakeInstaller() - return installer.FileVersion("msvcr90.dll", 0), \ - installer.FileVersion("msvcr90.dll", 1) + # omit msvcm90 and msvcp90, as they aren't really needed + files = ["Microsoft.VC90.CRT.manifest", "msvcr90.dll"] + for f in files: + path = os.path.join(dir, f) + kw = {'src':path} + if f.endswith('.dll'): + kw['version'] = installer.FileVersion(path, 0) + kw['language'] = installer.FileVersion(path, 1) + result.append((f, kw)) + return result class PyDirectory(Directory): """By default, all components in the Python installer @@ -887,7 +879,10 @@ root.add_file("%s/pythonw.exe" % PCBUILD) # msidbComponentAttributesSharedDllRefCount = 8, see "Component Table" - dlldir = PyDirectory(db, cab, root, srcdir, "DLLDIR", ".") + #dlldir = PyDirectory(db, cab, root, srcdir, "DLLDIR", ".") + #install python30.dll into root dir for now + dlldir = root + pydll = "python%s%s.dll" % (major, minor) pydllsrc = os.path.join(srcdir, PCBUILD, pydll) dlldir.start_component("DLLDIR", flags = 8, keyfile = pydll, uuid = pythondll_uuid) @@ -900,17 +895,14 @@ dlldir.add_file("%s/python%s%s.dll" % (PCBUILD, major, minor), version=pyversion, language=installer.FileVersion(pydllsrc, 1)) + DLLs = PyDirectory(db, cab, root, srcdir + "/" + PCBUILD, "DLLs", "DLLS|DLLs") # XXX 
determine dependencies if MSVCR == "90": - # XXX don't package the CRT for the moment; - # this should probably use the merge module in the long run. - pass - #version, lang = extract_msvcr90() - #dlldir.start_component("msvcr90", flags=8, keyfile="msvcr90.dll", - # uuid=msvcr90_uuid) - #dlldir.add_file("msvcr90.dll", src=os.path.abspath("msvcr90.dll"), - # version=version, language=lang) - #tmpfiles.append("msvcr90.dll") + root.start_component("msvcr90") + for file, kw in extract_msvcr90(): + root.add_file(file, **kw) + if file.endswith("manifest"): + DLLs.add_file(file, **kw) else: version, lang = extract_msvcr71() dlldir.start_component("msvcr71", flags=8, keyfile="msvcr71.dll", @@ -1011,7 +1003,7 @@ pydirs.append((lib, f)) # Add DLLs default_feature.set_current() - lib = PyDirectory(db, cab, root, srcdir + "/" + PCBUILD, "DLLs", "DLLS|DLLs") + lib = DLLs lib.add_file("py.ico", src=srcdir+"/PC/py.ico") lib.add_file("pyc.ico", src=srcdir+"/PC/pyc.ico") dlls = [] @@ -1029,8 +1021,10 @@ sqlite_arch = "/ia64" elif msilib.msi_type=="x64;1033": sqlite_arch = "/amd64" + tclsuffix = "64" else: sqlite_arch = "" + tclsuffix = "" lib.add_file(srcdir+"/"+sqlite_dir+sqlite_arch+"/sqlite3.dll") if have_tcl: if not os.path.exists("%s/%s/_tkinter.pyd" % (srcdir, PCBUILD)): @@ -1039,7 +1033,7 @@ lib.start_component("TkDLLs", tcltk) lib.add_file("_tkinter.pyd") dlls.append("_tkinter.pyd") - tcldir = os.path.normpath(srcdir+"/../tcltk/bin") + tcldir = os.path.normpath(srcdir+("/../tcltk%s/bin" % tclsuffix)) for f in glob.glob1(tcldir, "*.dll"): lib.add_file(f, src=os.path.join(tcldir, f)) # check whether there are any unknown extensions @@ -1063,7 +1057,7 @@ lib.add_file('libpython%s%s.a' % (major, minor)) if have_tcl: # Add Tcl/Tk - tcldirs = [(root, '../tcltk/lib', 'tcl')] + tcldirs = [(root, '../tcltk%s/lib' % tclsuffix, 'tcl')] tcltk.set_current() while tcldirs: parent, phys, dir = tcldirs.pop() Modified: python/branches/libffi3-branch/Tools/msi/uuids.py ============================================================================== --- python/branches/libffi3-branch/Tools/msi/uuids.py (original) +++ python/branches/libffi3-branch/Tools/msi/uuids.py Tue Mar 4 15:50:53 2008 @@ -37,4 +37,8 @@ '2.5.1150':'{31800004-6386-4999-a519-518f2d78d8f0}', # 2.5.1 '2.5.2150':'{6304a7da-1132-4e91-a343-a296269eab8a}', # 2.5.2c1 '2.5.2150':'{6b976adf-8ae8-434e-b282-a06c7f624d2f}', # 2.5.2 + '2.6.101': '{0ba82e1b-52fd-4e03-8610-a6c76238e8a8}', # 2.6a1 + '2.6.102': '{3b27e16c-56db-4570-a2d3-e9a26180c60b}', # 2.6a2 + '2.6.103': '{cd06a9c5-bde5-4bd7-9874-48933997122a}', # 2.6a3 + '2.6.104': '{dc6ed634-474a-4a50-a547-8de4b7491e53}', # 2.6a4 } Modified: python/branches/libffi3-branch/Tools/pybench/systimes.py ============================================================================== --- python/branches/libffi3-branch/Tools/pybench/systimes.py (original) +++ python/branches/libffi3-branch/Tools/pybench/systimes.py Tue Mar 4 15:50:53 2008 @@ -31,7 +31,7 @@ the author. All Rights Reserved. """ -import time, sys, struct +import time, sys # # Note: Please keep this module compatible to Python 1.5.2. Modified: python/branches/libffi3-branch/Tools/pynche/ChipViewer.py ============================================================================== --- python/branches/libffi3-branch/Tools/pynche/ChipViewer.py (original) +++ python/branches/libffi3-branch/Tools/pynche/ChipViewer.py Tue Mar 4 15:50:53 2008 @@ -13,7 +13,6 @@ selected and nearest ChipWidgets. 
""" -from types import StringType from Tkinter import * import ColorDB Modified: python/branches/libffi3-branch/Tools/pynche/TypeinViewer.py ============================================================================== --- python/branches/libffi3-branch/Tools/pynche/TypeinViewer.py (original) +++ python/branches/libffi3-branch/Tools/pynche/TypeinViewer.py Tue Mar 4 15:50:53 2008 @@ -12,8 +12,6 @@ you must hit Return or Tab to select the color. """ -import sys -import re from Tkinter import * Modified: python/branches/libffi3-branch/Tools/scripts/logmerge.py ============================================================================== --- python/branches/libffi3-branch/Tools/scripts/logmerge.py (original) +++ python/branches/libffi3-branch/Tools/scripts/logmerge.py Tue Mar 4 15:50:53 2008 @@ -34,7 +34,7 @@ from their output. """ -import os, sys, errno, getopt, re +import sys, errno, getopt, re sep1 = '='*77 + '\n' # file separator sep2 = '-'*28 + '\n' # revision separator Modified: python/branches/libffi3-branch/Tools/scripts/nm2def.py ============================================================================== --- python/branches/libffi3-branch/Tools/scripts/nm2def.py (original) +++ python/branches/libffi3-branch/Tools/scripts/nm2def.py Tue Mar 4 15:50:53 2008 @@ -34,7 +34,7 @@ option to produce this format (since it is the original v7 Unix format). """ -import os,re,sys +import os, sys PYTHONLIB = 'libpython'+sys.version[:3]+'.a' PC_PYTHONLIB = 'Python'+sys.version[0]+sys.version[2]+'.dll' Modified: python/branches/libffi3-branch/Tools/scripts/pindent.py ============================================================================== --- python/branches/libffi3-branch/Tools/scripts/pindent.py (original) +++ python/branches/libffi3-branch/Tools/scripts/pindent.py Tue Mar 4 15:50:53 2008 @@ -81,7 +81,6 @@ TABSIZE = 8 EXPANDTABS = 0 -import os import re import sys Modified: python/branches/libffi3-branch/Tools/scripts/pysource.py ============================================================================== --- python/branches/libffi3-branch/Tools/scripts/pysource.py (original) +++ python/branches/libffi3-branch/Tools/scripts/pysource.py Tue Mar 4 15:50:53 2008 @@ -20,7 +20,7 @@ __all__ = ["has_python_ext", "looks_like_python", "can_be_compiled", "walk_python_files"] -import sys, os, re +import os, re binary_re = re.compile('[\x00-\x08\x0E-\x1F\x7F]') Modified: python/branches/libffi3-branch/Tools/scripts/xxci.py ============================================================================== --- python/branches/libffi3-branch/Tools/scripts/xxci.py (original) +++ python/branches/libffi3-branch/Tools/scripts/xxci.py Tue Mar 4 15:50:53 2008 @@ -7,7 +7,6 @@ import sys import os from stat import * -import commands import fnmatch EXECMAGIC = '\001\140\000\010' Modified: python/branches/libffi3-branch/Tools/ssl/get-remote-certificate.py ============================================================================== --- python/branches/libffi3-branch/Tools/ssl/get-remote-certificate.py (original) +++ python/branches/libffi3-branch/Tools/ssl/get-remote-certificate.py Tue Mar 4 15:50:53 2008 @@ -6,7 +6,7 @@ # # By Bill Janssen. 
-import sys, os +import sys def fetch_server_certificate (host, port): Modified: python/branches/libffi3-branch/Tools/unicode/gencodec.py ============================================================================== --- python/branches/libffi3-branch/Tools/unicode/gencodec.py (original) +++ python/branches/libffi3-branch/Tools/unicode/gencodec.py Tue Mar 4 15:50:53 2008 @@ -26,7 +26,7 @@ """#" -import re, os, time, marshal, codecs +import re, os, marshal, codecs # Maximum allowed size of charmap tables MAX_TABLE_SIZE = 8192 Modified: python/branches/libffi3-branch/Tools/webchecker/wcgui.py ============================================================================== --- python/branches/libffi3-branch/Tools/webchecker/wcgui.py (original) +++ python/branches/libffi3-branch/Tools/webchecker/wcgui.py Tue Mar 4 15:50:53 2008 @@ -63,7 +63,6 @@ from Tkinter import * import tktools import webchecker -import random # Override some for a weaker platform if sys.platform == 'mac': Modified: python/branches/libffi3-branch/Tools/webchecker/wsgui.py ============================================================================== --- python/branches/libffi3-branch/Tools/webchecker/wsgui.py (original) +++ python/branches/libffi3-branch/Tools/webchecker/wsgui.py Tue Mar 4 15:50:53 2008 @@ -7,9 +7,7 @@ """ from Tkinter import * -import Tkinter import websucker -import sys import os import threading import Queue Modified: python/branches/libffi3-branch/setup.py ============================================================================== --- python/branches/libffi3-branch/setup.py (original) +++ python/branches/libffi3-branch/setup.py Tue Mar 4 15:50:53 2008 @@ -417,6 +417,9 @@ libraries=math_libs) ) exts.append( Extension('datetime', ['datetimemodule.c', 'timemodule.c'], libraries=math_libs) ) + # code that will be builtins in the future, but conflict with the + # current builtins + exts.append( Extension('future_builtins', ['future_builtins.c']) ) # random number generator implemented in C exts.append( Extension("_random", ["_randommodule.c"]) ) # fast iterator tools implemented in C From python-checkins at python.org Tue Mar 4 17:22:47 2008 From: python-checkins at python.org (neal.norwitz) Date: Tue, 4 Mar 2008 17:22:47 +0100 (CET) Subject: [Python-checkins] r61233 - python/trunk/Lib/bsddb/test/test_basics.py Message-ID: <20080304162247.3588D1E4007@bag.python.org> Author: neal.norwitz Date: Tue Mar 4 17:22:46 2008 New Revision: 61233 Modified: python/trunk/Lib/bsddb/test/test_basics.py Log: Close the file before trying to remove the directory so it works on Windows. As reported by Trent Nelson on python-dev. Modified: python/trunk/Lib/bsddb/test/test_basics.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_basics.py (original) +++ python/trunk/Lib/bsddb/test/test_basics.py Tue Mar 4 17:22:46 2008 @@ -97,8 +97,9 @@ def tearDown(self): self.d.close() if self.env is not None: - test_support.rmtree(self.homeDir) self.env.close() + test_support.rmtree(self.homeDir) + ## XXX(nnorwitz): is this comment stil valid? ## Make a new DBEnv to remove the env files from the home dir. ## (It can't be done while the env is open, nor after it has been ## closed, so we make a new one to do it.) 
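[Annotation] A minimal sketch of the teardown ordering this checkin enforces (the scratch file name below is made up): on Windows, a file that is still open keeps shutil.rmtree() from removing its directory and the removal fails with "Access is denied", so every handle that lives inside the directory (and, for the bsddb tests, the DB environment) has to be closed before the tree is deleted.

    import os
    import shutil
    import tempfile

    workdir = tempfile.mkdtemp()
    path = os.path.join(workdir, "scratch.db")
    handle = open(path, "wb")
    handle.write("payload")

    # Calling shutil.rmtree(workdir) here, while `handle` is still open,
    # raises WindowsError (Error 5, access denied) on Windows.

    handle.close()            # close everything that lives inside the directory
    shutil.rmtree(workdir)    # only then remove the directory

The buildbot report that follows shows the same WindowsError pattern still being hit from the tearDown() methods of other bsddb test files.
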
From buildbot at python.org Tue Mar 4 18:58:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 17:58:20 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080304175820.7C8651E4007@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/775 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_bsddb3 test_socketserver ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\__db.001' ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmpbjrasa' ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvHashShelveTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvThreadBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test02_cursors 
(bsddb.test.test_dbshelve.EnvThreadBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvThreadBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvThreadHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvThreadHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' 
====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvThreadHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_dbshelve.py", line 270, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test01_close_dbenv_before_db (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_env_close.py", line 53, in test01_close_dbenv_before_db 0666) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_close_dbenv_before_db (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_env_close.py", line 47, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test02_close_dbenv_delete_db_success (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_env_close.py", line 78, in test02_close_dbenv_delete_db_success 0666) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test02_close_dbenv_delete_db_success (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_env_close.py", line 47, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, 
sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test01_join (bsddb.test.test_join.JoinTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_join.py", line 59, in setUp self.env.open(homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK ) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_badpointer (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_misc.py", line 34, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test02_db_home (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_misc.py", line 34, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test03_repr_closed_db (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_misc.py", line 34, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test04_double_free_make_key_dbt (bsddb.test.test_misc.MiscTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_misc.py", line 34, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test05_key_with_null_bytes (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_misc.py", line 34, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_DB_set_flags_persists (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_misc.py", line 34, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test02_WithSource (bsddb.test.test_recno.SimpleRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_recno.py", line 37, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test01_1WriterMultiReaders (bsddb.test.test_thread.BTreeConcurrentDataStore) 
---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_1WriterMultiReaders (bsddb.test.test_thread.HashConcurrentDataStore) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test02_SimpleLocks (bsddb.test.test_thread.BTreeSimpleThreaded) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test02_SimpleLocks (bsddb.test.test_thread.HashSimpleThreaded) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.BTreeThreadedTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.HashThreadedTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.BTreeThreadedNoWaitTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- 
configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.HashThreadedNoWaitTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_thread.py", line 67, in setUp self.env.open(homeDir, self.envflags | db.DB_CREATE) DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_cachesize (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_flags (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_get (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_get_dbp (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_get_key (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_range (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_remove (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_stat (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_sequence.py", line 45, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_pget (bsddb.test.test_cursor_pget_bug.pget_bugTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\test\test_cursor_pget_bug.py", line 50, in tearDown test_support.rmtree(self.homeDir) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\db3l\\locals~1\\temp\\db_home556\\tmp9drpcf' ====================================================================== ERROR: test_TCPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_ThreadingTCPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_ThreadingUDPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_UDPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. 
AttributeError: 'module' object has no attribute 'alarm' sincerely, -The Buildbot From buildbot at python.org Tue Mar 4 19:07:29 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 18:07:29 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080304180738.2AE351E4007@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/41 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: The web-page 'force build' button was pressed by 'Trent': Build Source Stamp: [branch trunk] HEAD Blamelist: BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From nnorwitz at gmail.com Tue Mar 4 20:59:46 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Mar 2008 14:59:46 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (7) Message-ID: <20080304195946.GA12225@python.psfb.org> More important issues: ---------------------- test_deque leaked [100, 100, 100] references, sum=300 test_heapq leaked [136, 125, 106] references, sum=367 test_itertools leaked [7380, 7380, 7380] references, sum=22140 test_list leaked [50, 50, 50] references, sum=150 test_set leaked [680, 680, 680] references, sum=2040 test_userlist leaked [50, 50, 50] references, sum=150 test_userlist leaked [50, 50, 50] references, sum=150 Less important issues: ---------------------- test_socketserver leaked [-131, 0, 0] references, sum=-131 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From python-checkins at python.org Tue Mar 4 21:09:17 2008 From: python-checkins at python.org (thomas.heller) Date: Tue, 4 Mar 2008 21:09:17 +0100 (CET) Subject: [Python-checkins] r61234 - in python/trunk: Modules/_ctypes/libffi/LICENSE Modules/_ctypes/libffi/Makefile.am Modules/_ctypes/libffi/Makefile.in Modules/_ctypes/libffi/README Modules/_ctypes/libffi/acinclude.m4 Modules/_ctypes/libffi/aclocal.m4 Modules/_ctypes/libffi/config.guess Modules/_ctypes/libffi/config.sub Modules/_ctypes/libffi/configure Modules/_ctypes/libffi/configure.ac Modules/_ctypes/libffi/configure.host Modules/_ctypes/libffi/fficonfig.h.in Modules/_ctypes/libffi/fficonfig.py.in Modules/_ctypes/libffi/include/Makefile.am Modules/_ctypes/libffi/include/Makefile.in Modules/_ctypes/libffi/include/ffi.h.in Modules/_ctypes/libffi/include/ffi_common.h Modules/_ctypes/libffi/install-sh Modules/_ctypes/libffi/libffi.pc.in Modules/_ctypes/libffi/missing Modules/_ctypes/libffi/src/alpha/ffi.c Modules/_ctypes/libffi/src/alpha/ffitarget.h Modules/_ctypes/libffi/src/alpha/osf.S Modules/_ctypes/libffi/src/arm/ffi.c Modules/_ctypes/libffi/src/arm/ffitarget.h Modules/_ctypes/libffi/src/arm/sysv.S Modules/_ctypes/libffi/src/cris/ffi.c Modules/_ctypes/libffi/src/cris/ffitarget.h Modules/_ctypes/libffi/src/frv/eabi.S Modules/_ctypes/libffi/src/frv/ffi.c Modules/_ctypes/libffi/src/frv/ffitarget.h Modules/_ctypes/libffi/src/ia64/ffi.c Modules/_ctypes/libffi/src/ia64/ffitarget.h Modules/_ctypes/libffi/src/ia64/ia64_flags.h Modules/_ctypes/libffi/src/ia64/unix.S Modules/_ctypes/libffi/src/m32r/ffi.c Modules/_ctypes/libffi/src/m68k/ffi.c Modules/_ctypes/libffi/src/m68k/ffitarget.h Modules/_ctypes/libffi/src/m68k/sysv.S Modules/_ctypes/libffi/src/mips/ffi.c 
Modules/_ctypes/libffi/src/mips/ffitarget.h Modules/_ctypes/libffi/src/mips/n32.S Modules/_ctypes/libffi/src/mips/o32.S Modules/_ctypes/libffi/src/pa/ffi.c Modules/_ctypes/libffi/src/pa/ffitarget.h Modules/_ctypes/libffi/src/pa/linux.S Modules/_ctypes/libffi/src/powerpc/darwin.S Modules/_ctypes/libffi/src/powerpc/darwin_closure.S Modules/_ctypes/libffi/src/powerpc/ffi.c Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c Modules/_ctypes/libffi/src/powerpc/ffitarget.h Modules/_ctypes/libffi/src/powerpc/linux64.S Modules/_ctypes/libffi/src/powerpc/linux64_closure.S Modules/_ctypes/libffi/src/powerpc/ppc_closure.S Modules/_ctypes/libffi/src/powerpc/sysv.S Modules/_ctypes/libffi/src/prep_cif.c Modules/_ctypes/libffi/src/s390/ffi.c Modules/_ctypes/libffi/src/s390/ffitarget.h Modules/_ctypes/libffi/src/s390/sysv.S Modules/_ctypes/libffi/src/sh/ffi.c Modules/_ctypes/libffi/src/sh/ffitarget.h Modules/_ctypes/libffi/src/sh/sysv.S Modules/_ctypes/libffi/src/sh64/ffi.c Modules/_ctypes/libffi/src/sh64/ffitarget.h Modules/_ctypes/libffi/src/sh64/sysv.S Modules/_ctypes/libffi/src/sparc/ffi.c Modules/_ctypes/libffi/src/sparc/ffitarget.h Modules/_ctypes/libffi/src/sparc/v8.S Modules/_ctypes/libffi/src/sparc/v9.S Modules/_ctypes/libffi/src/x86/darwin.S Modules/_ctypes/libffi/src/x86/ffi.c Modules/_ctypes/libffi/src/x86/ffi64.c Modules/_ctypes/libffi/src/x86/ffi_darwin.c Modules/_ctypes/libffi/src/x86/ffitarget.h Modules/_ctypes/libffi/src/x86/sysv.S Modules/_ctypes/libffi/src/x86/unix64.S Modules/_ctypes/libffi/src/x86/win32.S Modules/_ctypes/libffi_osx configure configure.in setup.py Message-ID: <20080304200917.2F7261E4007@bag.python.org> Author: thomas.heller Date: Tue Mar 4 21:09:11 2008 New Revision: 61234 Added: python/trunk/Modules/_ctypes/libffi/Makefile.am - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/Makefile.am python/trunk/Modules/_ctypes/libffi/Makefile.in - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/Makefile.in python/trunk/Modules/_ctypes/libffi/acinclude.m4 - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/acinclude.m4 python/trunk/Modules/_ctypes/libffi/configure.host - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/configure.host python/trunk/Modules/_ctypes/libffi/include/Makefile.am - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/include/Makefile.am python/trunk/Modules/_ctypes/libffi/include/Makefile.in - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/include/Makefile.in python/trunk/Modules/_ctypes/libffi/libffi.pc.in - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/libffi.pc.in python/trunk/Modules/_ctypes/libffi/missing - copied unchanged from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi/missing python/trunk/Modules/_ctypes/libffi_osx/ - copied from r61231, python/branches/libffi3-branch/Modules/_ctypes/libffi_osx/ Removed: python/trunk/Modules/_ctypes/libffi/src/x86/ffi_darwin.c Modified: python/trunk/ (props changed) python/trunk/Modules/_ctypes/libffi/LICENSE python/trunk/Modules/_ctypes/libffi/README python/trunk/Modules/_ctypes/libffi/aclocal.m4 python/trunk/Modules/_ctypes/libffi/config.guess python/trunk/Modules/_ctypes/libffi/config.sub python/trunk/Modules/_ctypes/libffi/configure python/trunk/Modules/_ctypes/libffi/configure.ac python/trunk/Modules/_ctypes/libffi/fficonfig.h.in 
python/trunk/Modules/_ctypes/libffi/fficonfig.py.in python/trunk/Modules/_ctypes/libffi/include/ffi.h.in python/trunk/Modules/_ctypes/libffi/include/ffi_common.h python/trunk/Modules/_ctypes/libffi/install-sh python/trunk/Modules/_ctypes/libffi/src/alpha/ffi.c python/trunk/Modules/_ctypes/libffi/src/alpha/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/alpha/osf.S python/trunk/Modules/_ctypes/libffi/src/arm/ffi.c python/trunk/Modules/_ctypes/libffi/src/arm/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/arm/sysv.S python/trunk/Modules/_ctypes/libffi/src/cris/ffi.c python/trunk/Modules/_ctypes/libffi/src/cris/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/frv/eabi.S python/trunk/Modules/_ctypes/libffi/src/frv/ffi.c python/trunk/Modules/_ctypes/libffi/src/frv/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/ia64/ffi.c python/trunk/Modules/_ctypes/libffi/src/ia64/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/ia64/ia64_flags.h python/trunk/Modules/_ctypes/libffi/src/ia64/unix.S python/trunk/Modules/_ctypes/libffi/src/m32r/ffi.c python/trunk/Modules/_ctypes/libffi/src/m68k/ffi.c python/trunk/Modules/_ctypes/libffi/src/m68k/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/m68k/sysv.S python/trunk/Modules/_ctypes/libffi/src/mips/ffi.c python/trunk/Modules/_ctypes/libffi/src/mips/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/mips/n32.S python/trunk/Modules/_ctypes/libffi/src/mips/o32.S python/trunk/Modules/_ctypes/libffi/src/pa/ffi.c python/trunk/Modules/_ctypes/libffi/src/pa/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/pa/linux.S python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin.S python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi.c python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c python/trunk/Modules/_ctypes/libffi/src/powerpc/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64.S python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S python/trunk/Modules/_ctypes/libffi/src/powerpc/ppc_closure.S python/trunk/Modules/_ctypes/libffi/src/powerpc/sysv.S python/trunk/Modules/_ctypes/libffi/src/prep_cif.c python/trunk/Modules/_ctypes/libffi/src/s390/ffi.c python/trunk/Modules/_ctypes/libffi/src/s390/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/s390/sysv.S python/trunk/Modules/_ctypes/libffi/src/sh/ffi.c python/trunk/Modules/_ctypes/libffi/src/sh/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/sh/sysv.S python/trunk/Modules/_ctypes/libffi/src/sh64/ffi.c python/trunk/Modules/_ctypes/libffi/src/sh64/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/sh64/sysv.S python/trunk/Modules/_ctypes/libffi/src/sparc/ffi.c python/trunk/Modules/_ctypes/libffi/src/sparc/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/sparc/v8.S python/trunk/Modules/_ctypes/libffi/src/sparc/v9.S python/trunk/Modules/_ctypes/libffi/src/x86/darwin.S python/trunk/Modules/_ctypes/libffi/src/x86/ffi.c python/trunk/Modules/_ctypes/libffi/src/x86/ffi64.c python/trunk/Modules/_ctypes/libffi/src/x86/ffitarget.h python/trunk/Modules/_ctypes/libffi/src/x86/sysv.S python/trunk/Modules/_ctypes/libffi/src/x86/unix64.S python/trunk/Modules/_ctypes/libffi/src/x86/win32.S python/trunk/configure python/trunk/configure.in python/trunk/setup.py Log: Merged changes from libffi3-branch. The bundled libffi copy is now in sync with the recently released libffi3.0.4 version, apart from some small changes to Modules/_ctypes/libffi/configure.ac. I gave up on using libffi3 files on os x. 
Instead, static configuration with files from pyobjc is used. Modified: python/trunk/Modules/_ctypes/libffi/LICENSE ============================================================================== --- python/trunk/Modules/_ctypes/libffi/LICENSE (original) +++ python/trunk/Modules/_ctypes/libffi/LICENSE Tue Mar 4 21:09:11 2008 @@ -1,4 +1,5 @@ -libffi - Copyright (c) 1996-2003 Red Hat, Inc. +libffi - Copyright (c) 1996-2008 Red Hat, Inc and others. +See source files for details. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -11,10 +12,10 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS -OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR -OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, -ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR -OTHER DEALINGS IN THE SOFTWARE. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Modified: python/trunk/Modules/_ctypes/libffi/README ============================================================================== --- python/trunk/Modules/_ctypes/libffi/README (original) +++ python/trunk/Modules/_ctypes/libffi/README Tue Mar 4 21:09:11 2008 @@ -1,78 +1,67 @@ -This directory contains the libffi package, which is not part of GCC but -shipped with GCC as convenience. - Status ====== -libffi-2.00 has not been released yet! This is a development snapshot! - -libffi-1.20 was released on October 5, 1998. Check the libffi web -page for updates: . +libffi-3.0.4 was released on February 24, 2008. Check the libffi web +page for updates: . What is libffi? =============== Compilers for high level languages generate code that follow certain -conventions. These conventions are necessary, in part, for separate -compilation to work. One such convention is the "calling -convention". The "calling convention" is essentially a set of -assumptions made by the compiler about where function arguments will -be found on entry to a function. A "calling convention" also specifies -where the return value for a function is found. +conventions. These conventions are necessary, in part, for separate +compilation to work. One such convention is the "calling convention". +The "calling convention" is a set of assumptions made by the compiler +about where function arguments will be found on entry to a function. +A "calling convention" also specifies where the return value for a +function is found. Some programs may not know at the time of compilation what arguments -are to be passed to a function. For instance, an interpreter may be +are to be passed to a function. For instance, an interpreter may be told at run-time about the number and types of arguments used to call -a given function. Libffi can be used in such programs to provide a +a given function. 
Libffi can be used in such programs to provide a bridge from the interpreter program to compiled code. The libffi library provides a portable, high level programming -interface to various calling conventions. This allows a programmer to +interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run -time. +time. -Ffi stands for Foreign Function Interface. A foreign function +FFI stands for Foreign Function Interface. A foreign function interface is the popular name for the interface that allows code -written in one language to call code written in another language. The +written in one language to call code written in another language. The libffi library really only provides the lowest, machine dependent layer of a fully featured foreign function interface. A layer must exist above libffi that handles type conversions for values passed between the two languages. -Supported Platforms and Prerequisites -===================================== - -Libffi has been ported to: - - SunOS 4.1.3 & Solaris 2.x (SPARC-V8, SPARC-V9) - - Irix 5.3 & 6.2 (System V/o32 & n32) - - Intel x86 - Linux (System V ABI) - - Alpha - Linux and OSF/1 - - m68k - Linux (System V ABI) - - PowerPC - Linux (System V ABI, Darwin, AIX) +Supported Platforms +=================== - ARM - Linux (System V ABI) - -Libffi has been tested with the egcs 1.0.2 gcc compiler. Chances are -that other versions will work. Libffi has also been built and tested -with the SGI compiler tools. - -On PowerPC, the tests failed (see the note below). - -You must use GNU make to build libffi. SGI's make will not work. -Sun's probably won't either. - -If you port libffi to another platform, please let me know! I assume -that some will be easy (x86 NetBSD), and others will be more difficult -(HP). +Libffi has been ported to many different platforms, although this +release was only tested on: + arm oabi linux + arm eabi linux + hppa linux + mips o32 linux (little endian) + powerpc darwin + powerpc64 linux + sparc solaris + sparc64 solaris + x86 cygwin + x86 darwin + x86 freebsd + x86 linux + x86 openbsd + x86-64 darwin + x86-64 linux + x86-64 OS X + x86-64 freebsd + +Please send additional platform test results to +libffi-discuss at sourceware.org. Installing libffi ================= @@ -101,216 +90,17 @@ Configure has many other options. Use "configure --help" to see them all. Once configure has finished, type "make". Note that you must be using -GNU make. SGI's make will not work. Sun's probably won't either. -You can ftp GNU make from prep.ai.mit.edu:/pub/gnu. +GNU make. You can ftp GNU make from prep.ai.mit.edu:/pub/gnu. -To ensure that libffi is working as advertised, type "make test". +To ensure that libffi is working as advertised, type "make check". +This will require that you have DejaGNU installed. To install the library and header files, type "make install". -Using libffi -============ - - The Basics - ---------- - -Libffi assumes that you have a pointer to the function you wish to -call and that you know the number and types of arguments to pass it, -as well as the return type of the function. - -The first thing you must do is create an ffi_cif object that matches -the signature of the function you wish to call. The cif in ffi_cif -stands for Call InterFace. 
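Not part of the quoted README, which continues below with the ffi_prep_cif/ffi_call walk-through: from the Python side, this bundled libffi is the layer that ctypes drives when a script makes a foreign call, so the README's puts() example has a rough ctypes counterpart. The sketch below is illustrative only; looking up libc with ctypes.util.find_library("c") is a platform assumption, and setting argtypes/restype plays approximately the role of preparing an ffi_cif.

    # Rough ctypes counterpart of the README's puts() walk-through
    # (a sketch, not part of the checked-in sources).  ctypes sits on
    # top of this bundled libffi; finding libc by name is a platform
    # assumption and may need adjusting on other systems.
    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Declaring the argument and result types up front plays roughly
    # the role that preparing an ffi_cif plays in the C interface.
    libc.puts.argtypes = [ctypes.c_char_p]
    libc.puts.restype = ctypes.c_int

    rc = libc.puts(b"Hello World!")   # the foreign call goes through libffi
    rc = libc.puts(b"This is cool!")

The point of the sketch is only that the number and types of the arguments are supplied at run time from the interpreter side, which is exactly the bridge the README describes.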
To prepare a call interface object, use the -following function: - -ffi_status ffi_prep_cif(ffi_cif *cif, ffi_abi abi, - unsigned int nargs, - ffi_type *rtype, ffi_type **atypes); - - CIF is a pointer to the call interface object you wish - to initialize. - - ABI is an enum that specifies the calling convention - to use for the call. FFI_DEFAULT_ABI defaults - to the system's native calling convention. Other - ABI's may be used with care. They are system - specific. - - NARGS is the number of arguments this function accepts. - libffi does not yet support vararg functions. - - RTYPE is a pointer to an ffi_type structure that represents - the return type of the function. Ffi_type objects - describe the types of values. libffi provides - ffi_type objects for many of the native C types: - signed int, unsigned int, signed char, unsigned char, - etc. There is also a pointer ffi_type object and - a void ffi_type. Use &ffi_type_void for functions that - don't return values. - - ATYPES is a vector of ffi_type pointers. ARGS must be NARGS long. - If NARGS is 0, this is ignored. - - -ffi_prep_cif will return a status code that you are responsible -for checking. It will be one of the following: - - FFI_OK - All is good. - - FFI_BAD_TYPEDEF - One of the ffi_type objects that ffi_prep_cif - came across is bad. - - -Before making the call, the VALUES vector should be initialized -with pointers to the appropriate argument values. - -To call the the function using the initialized ffi_cif, use the -ffi_call function: - -void ffi_call(ffi_cif *cif, void *fn, void *rvalue, void **avalues); - - CIF is a pointer to the ffi_cif initialized specifically - for this function. - - FN is a pointer to the function you want to call. - - RVALUE is a pointer to a chunk of memory that is to hold the - result of the function call. Currently, it must be - at least one word in size (except for the n32 version - under Irix 6.x, which must be a pointer to an 8 byte - aligned value (a long long). It must also be at least - word aligned (depending on the return type, and the - system's alignment requirements). If RTYPE is - &ffi_type_void, this is ignored. If RVALUE is NULL, - the return value is discarded. - - AVALUES is a vector of void* that point to the memory locations - holding the argument values for a call. - If NARGS is 0, this is ignored. - - -If you are expecting a return value from FN it will have been stored -at RVALUE. - - - - An Example - ---------- - -Here is a trivial example that calls puts() a few times. - - #include - #include - - int main() - { - ffi_cif cif; - ffi_type *args[1]; - void *values[1]; - char *s; - int rc; - - /* Initialize the argument info vectors */ - args[0] = &ffi_type_uint; - values[0] = &s; - - /* Initialize the cif */ - if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, - &ffi_type_uint, args) == FFI_OK) - { - s = "Hello World!"; - ffi_call(&cif, puts, &rc, values); - /* rc now holds the result of the call to puts */ - - /* values holds a pointer to the function's arg, so to - call puts() again all we need to do is change the - value of s */ - s = "This is cool!"; - ffi_call(&cif, puts, &rc, values); - } - - return 0; - } - - - - Aggregate Types - --------------- - -Although libffi has no special support for unions or bit-fields, it is -perfectly happy passing structures back and forth. You must first -describe the structure to libffi by creating a new ffi_type object -for it. 
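Likewise an illustrative aside rather than README text: the ffi_type definition and struct tm example that follow have a Python-level analogue in ctypes.Structure, where the _fields_ list stands in for the NULL-terminated elements array and size/alignment are computed automatically. The class name and the simplified tm_gmtoff/tm_zone field names below are assumptions of this sketch, not code from the commit.

    # Sketch only: the ctypes analogue of describing struct tm to libffi.
    # Field names follow time.h; the trailing long/char* pair mirrors the
    # README's __tm_gmtoff__/__tm_zone__ slots (names simplified here).
    import ctypes

    class Tm(ctypes.Structure):
        _fields_ = [
            ("tm_sec",   ctypes.c_int),
            ("tm_min",   ctypes.c_int),
            ("tm_hour",  ctypes.c_int),
            ("tm_mday",  ctypes.c_int),
            ("tm_mon",   ctypes.c_int),
            ("tm_year",  ctypes.c_int),
            ("tm_wday",  ctypes.c_int),
            ("tm_yday",  ctypes.c_int),
            ("tm_isdst", ctypes.c_int),
            ("tm_gmtoff", ctypes.c_long),
            ("tm_zone",  ctypes.c_char_p),
        ]

    # Size and alignment are computed for us, much as ffi_prep_cif fills
    # in the zeroed size/alignment members of a user-defined ffi_type.
    size, align = ctypes.sizeof(Tm), ctypes.alignment(Tm)

When such a structure is later used as an argument or return type, ctypes builds the corresponding libffi type description behind the scenes.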
Here is the definition of ffi_type: - - typedef struct _ffi_type - { - unsigned size; - short alignment; - short type; - struct _ffi_type **elements; - } ffi_type; - -All structures must have type set to FFI_TYPE_STRUCT. You may set -size and alignment to 0. These will be calculated and reset to the -appropriate values by ffi_prep_cif(). - -elements is a NULL terminated array of pointers to ffi_type objects -that describe the type of the structure elements. These may, in turn, -be structure elements. - -The following example initializes a ffi_type object representing the -tm struct from Linux's time.h: - - struct tm { - int tm_sec; - int tm_min; - int tm_hour; - int tm_mday; - int tm_mon; - int tm_year; - int tm_wday; - int tm_yday; - int tm_isdst; - /* Those are for future use. */ - long int __tm_gmtoff__; - __const char *__tm_zone__; - }; - - { - ffi_type tm_type; - ffi_type *tm_type_elements[12]; - int i; - - tm_type.size = tm_type.alignment = 0; - tm_type.elements = &tm_type_elements; - - for (i = 0; i < 9; i++) - tm_type_elements[i] = &ffi_type_sint; - - tm_type_elements[9] = &ffi_type_slong; - tm_type_elements[10] = &ffi_type_pointer; - tm_type_elements[11] = NULL; - - /* tm_type can now be used to represent tm argument types and - return types for ffi_prep_cif() */ - } - - - Platform Specific Notes ======================= - Intel x86 - --------- - -There are no known problems with the x86 port. - - Sun SPARC - SunOS 4.1.3 & Solaris 2.x - ------------------------------------- - -You must use GNU Make to build libffi on Sun platforms. - MIPS - Irix 5.3 & 6.x --------------------- @@ -339,13 +129,6 @@ You must use GNU Make to build libffi on SGI platforms. - ARM - System V ABI - ------------------ - -The ARM port was performed on a NetWinder running ARM Linux ELF -(2.0.31) and gcc 2.8.1. - - PowerPC System V ABI -------------------- @@ -372,17 +155,30 @@ arguments' test). -What's With The Crazy Comments? -=============================== +History +======= -You might notice a number of cryptic comments in the code, delimited -by /*@ and @*/. These are annotations read by the program LCLint, a -tool for statically checking C programs. You can read all about it at -. +3.0.4 Feb-24-08 + Fix x86 OpenBSD configury. +3.0.3 Feb-22-08 + Enable x86 OpenBSD thanks to Thomas Heller, and + x86-64 FreeBSD thanks to Bj?rn K?nig and Andreas Tobler. + Clean up test instruction in README. + +3.0.2 Feb-21-08 + Improved x86 FreeBSD support. + Thanks to Bj?rn K?nig. + +3.0.1 Feb-15-08 + Fix instruction cache flushing bug on MIPS. + Thanks to David Daney. + +3.0.0 Feb-15-08 + Many changes, mostly thanks to the GCC project. + Cygnus Solutions is now Red Hat. -History -======= + [10 years go by...] 1.20 Oct-5-98 Raffaele Sena produces ARM port. @@ -467,34 +263,56 @@ Authors & Credits ================= -libffi was written by Anthony Green . +libffi was originally written by Anthony Green . + +The developers of the GNU Compiler Collection project have made +innumerable valuable contributions. See the ChangeLog file for +details. -Portions of libffi were derived from Gianni Mariani's free gencall -library for Silicon Graphics machines. +Some of the ideas behind libffi were inspired by Gianni Mariani's free +gencall library for Silicon Graphics machines. The closure mechanism was designed and implemented by Kresten Krab Thorup. -The Sparc port was derived from code contributed by the fine folks at -Visible Decisions Inc . Further enhancements were -made by Gordon Irlam at Cygnus Solutions . 
- -The Alpha port was written by Richard Henderson at Cygnus Solutions. - -Andreas Schwab ported libffi to m68k Linux and provided a number of -bug fixes. +Major processor architecture ports were contributed by the following +developers: -Geoffrey Keating ported libffi to the PowerPC. - -Raffaele Sena ported libffi to the ARM. +alpha Richard Henderson +arm Raffaele Sena +cris Simon Posnjak, Hans-Peter Nilsson +frv Anthony Green +ia64 Hans Boehm +m32r Kazuhiro Inaoka +m68k Andreas Schwab +mips Anthony Green, Casey Marshall +mips64 David Daney +pa Randolph Chung, Dave Anglin, Andreas Tobler +powerpc Geoffrey Keating, Andreas Tobler, + David Edelsohn, John Hornkvist +powerpc64 Jakub Jelinek +s390 Gerhard Tonn, Ulrich Weigand +sh Kaz Kojima +sh64 Kaz Kojima +sparc Anthony Green, Gordon Irlam +x86 Anthony Green, Jon Beniston +x86-64 Bo Thorsen Jesper Skov and Andrew Haley both did more than their fair share of stepping through the code and tracking down bugs. -Thanks also to Tom Tromey for bug fixes and configuration help. +Thanks also to Tom Tromey for bug fixes, documentation and +configuration help. Thanks to Jim Blandy, who provided some useful feedback on the libffi interface. +Andreas Tobler has done a tremendous amount of work on the testsuite. + +Alex Oliva solved the executable page problem for SElinux. + +The list above is almost certainly incomplete and inaccurate. I'm +happy to make corrections or additions upon request. + If you have a problem, or have found a bug, please send a note to -green at cygnus.com. +green at redhat.com. Modified: python/trunk/Modules/_ctypes/libffi/aclocal.m4 ============================================================================== --- python/trunk/Modules/_ctypes/libffi/aclocal.m4 (original) +++ python/trunk/Modules/_ctypes/libffi/aclocal.m4 Tue Mar 4 21:09:11 2008 @@ -1,92 +1,7516 @@ -# mmap(2) blacklisting. Some platforms provide the mmap library routine -# but don't support all of the features we need from it. -AC_DEFUN([AC_FUNC_MMAP_BLACKLIST], +# generated automatically by aclocal 1.10 -*- Autoconf -*- + +# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, +# 2005, 2006 Free Software Foundation, Inc. +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY, to the extent permitted by law; without +# even the implied warranty of MERCHANTABILITY or FITNESS FOR A +# PARTICULAR PURPOSE. + +m4_if(m4_PACKAGE_VERSION, [2.61],, +[m4_fatal([this file was generated for autoconf 2.61. +You have another version of autoconf. If you want to use that, +you should regenerate the build system entirely.], [63])]) + +# libtool.m4 - Configure libtool for the host system. -*-Autoconf-*- + +# serial 51 AC_PROG_LIBTOOL + + +# AC_PROVIDE_IFELSE(MACRO-NAME, IF-PROVIDED, IF-NOT-PROVIDED) +# ----------------------------------------------------------- +# If this macro is not defined by Autoconf, define it here. +m4_ifdef([AC_PROVIDE_IFELSE], + [], + [m4_define([AC_PROVIDE_IFELSE], + [m4_ifdef([AC_PROVIDE_$1], + [$2], [$3])])]) + + +# AC_PROG_LIBTOOL +# --------------- +AC_DEFUN([AC_PROG_LIBTOOL], +[AC_REQUIRE([_AC_PROG_LIBTOOL])dnl +dnl If AC_PROG_CXX has already been expanded, run AC_LIBTOOL_CXX +dnl immediately, otherwise, hook it in at the end of AC_PROG_CXX. 
+ AC_PROVIDE_IFELSE([AC_PROG_CXX], + [AC_LIBTOOL_CXX], + [define([AC_PROG_CXX], defn([AC_PROG_CXX])[AC_LIBTOOL_CXX + ])]) +dnl And a similar setup for Fortran 77 support + AC_PROVIDE_IFELSE([AC_PROG_F77], + [AC_LIBTOOL_F77], + [define([AC_PROG_F77], defn([AC_PROG_F77])[AC_LIBTOOL_F77 +])]) + +dnl Quote A][M_PROG_GCJ so that aclocal doesn't bring it in needlessly. +dnl If either AC_PROG_GCJ or A][M_PROG_GCJ have already been expanded, run +dnl AC_LIBTOOL_GCJ immediately, otherwise, hook it in at the end of both. + AC_PROVIDE_IFELSE([AC_PROG_GCJ], + [AC_LIBTOOL_GCJ], + [AC_PROVIDE_IFELSE([A][M_PROG_GCJ], + [AC_LIBTOOL_GCJ], + [AC_PROVIDE_IFELSE([LT_AC_PROG_GCJ], + [AC_LIBTOOL_GCJ], + [ifdef([AC_PROG_GCJ], + [define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[AC_LIBTOOL_GCJ])]) + ifdef([A][M_PROG_GCJ], + [define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[AC_LIBTOOL_GCJ])]) + ifdef([LT_AC_PROG_GCJ], + [define([LT_AC_PROG_GCJ], + defn([LT_AC_PROG_GCJ])[AC_LIBTOOL_GCJ])])])]) +])])# AC_PROG_LIBTOOL + + +# _AC_PROG_LIBTOOL +# ---------------- +AC_DEFUN([_AC_PROG_LIBTOOL], +[AC_REQUIRE([AC_LIBTOOL_SETUP])dnl +AC_BEFORE([$0],[AC_LIBTOOL_CXX])dnl +AC_BEFORE([$0],[AC_LIBTOOL_F77])dnl +AC_BEFORE([$0],[AC_LIBTOOL_GCJ])dnl + +# This can be used to rebuild libtool when needed +LIBTOOL_DEPS="$ac_aux_dir/ltmain.sh" + +# Always use our own libtool. +LIBTOOL='$(SHELL) $(top_builddir)/libtool' +AC_SUBST(LIBTOOL)dnl + +# Prevent multiple expansion +define([AC_PROG_LIBTOOL], []) +])# _AC_PROG_LIBTOOL + + +# AC_LIBTOOL_SETUP +# ---------------- +AC_DEFUN([AC_LIBTOOL_SETUP], +[AC_PREREQ(2.50)dnl +AC_REQUIRE([AC_ENABLE_SHARED])dnl +AC_REQUIRE([AC_ENABLE_STATIC])dnl +AC_REQUIRE([AC_ENABLE_FAST_INSTALL])dnl +AC_REQUIRE([AC_CANONICAL_HOST])dnl +AC_REQUIRE([AC_CANONICAL_BUILD])dnl +AC_REQUIRE([AC_PROG_CC])dnl +AC_REQUIRE([AC_PROG_LD])dnl +AC_REQUIRE([AC_PROG_LD_RELOAD_FLAG])dnl +AC_REQUIRE([AC_PROG_NM])dnl + +AC_REQUIRE([AC_PROG_LN_S])dnl +AC_REQUIRE([AC_DEPLIBS_CHECK_METHOD])dnl +# Autoconf 2.13's AC_OBJEXT and AC_EXEEXT macros only works for C compilers! +AC_REQUIRE([AC_OBJEXT])dnl +AC_REQUIRE([AC_EXEEXT])dnl +dnl + +AC_LIBTOOL_SYS_MAX_CMD_LEN +AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE +AC_LIBTOOL_OBJDIR + +AC_REQUIRE([_LT_AC_SYS_COMPILER])dnl +_LT_AC_PROG_ECHO_BACKSLASH + +case $host_os in +aix3*) + # AIX sometimes has problems with the GCC collect2 program. For some + # reason, if we set the COLLECT_NAMES environment variable, the problems + # vanish in a puff of smoke. + if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES + fi + ;; +esac + +# Sed substitution that helps us do robust quoting. It backslashifies +# metacharacters that are still active within double-quoted strings. +Xsed='sed -e 1s/^X//' +[sed_quote_subst='s/\([\\"\\`$\\\\]\)/\\\1/g'] + +# Same as above, but do not quote variable references. +[double_quote_subst='s/\([\\"\\`\\\\]\)/\\\1/g'] + +# Sed substitution to delay expansion of an escaped shell variable in a +# double_quote_subst'ed string. +delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' + +# Sed substitution to avoid accidental globbing in evaled expressions +no_glob_subst='s/\*/\\\*/g' + +# Constants: +rm="rm -f" + +# Global variables: +default_ofile=libtool +can_build_shared=yes + +# All known linkers require a `.a' archive for static linking (except MSVC, +# which needs '.lib'). 
+libext=a +ltmain="$ac_aux_dir/ltmain.sh" +ofile="$default_ofile" +with_gnu_ld="$lt_cv_prog_gnu_ld" + +AC_CHECK_TOOL(AR, ar, false) +AC_CHECK_TOOL(RANLIB, ranlib, :) +AC_CHECK_TOOL(STRIP, strip, :) + +old_CC="$CC" +old_CFLAGS="$CFLAGS" + +# Set sane defaults for various variables +test -z "$AR" && AR=ar +test -z "$AR_FLAGS" && AR_FLAGS=cru +test -z "$AS" && AS=as +test -z "$CC" && CC=cc +test -z "$LTCC" && LTCC=$CC +test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS +test -z "$DLLTOOL" && DLLTOOL=dlltool +test -z "$LD" && LD=ld +test -z "$LN_S" && LN_S="ln -s" +test -z "$MAGIC_CMD" && MAGIC_CMD=file +test -z "$NM" && NM=nm +test -z "$SED" && SED=sed +test -z "$OBJDUMP" && OBJDUMP=objdump +test -z "$RANLIB" && RANLIB=: +test -z "$STRIP" && STRIP=: +test -z "$ac_objext" && ac_objext=o + +# Determine commands to create old-style static archives. +old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' +old_postinstall_cmds='chmod 644 $oldlib' +old_postuninstall_cmds= + +if test -n "$RANLIB"; then + case $host_os in + openbsd*) + old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$oldlib" + ;; + *) + old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$oldlib" + ;; + esac + old_archive_cmds="$old_archive_cmds~\$RANLIB \$oldlib" +fi + +_LT_CC_BASENAME([$compiler]) + +# Only perform the check for file, if the check method requires it +case $deplibs_check_method in +file_magic*) + if test "$file_magic_cmd" = '$MAGIC_CMD'; then + AC_PATH_MAGIC + fi + ;; +esac + +AC_PROVIDE_IFELSE([AC_LIBTOOL_DLOPEN], enable_dlopen=yes, enable_dlopen=no) +AC_PROVIDE_IFELSE([AC_LIBTOOL_WIN32_DLL], +enable_win32_dll=yes, enable_win32_dll=no) + +AC_ARG_ENABLE([libtool-lock], + [AC_HELP_STRING([--disable-libtool-lock], + [avoid locking (might break parallel builds)])]) +test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes + +AC_ARG_WITH([pic], + [AC_HELP_STRING([--with-pic], + [try to use only PIC/non-PIC objects @<:@default=use both@:>@])], + [pic_mode="$withval"], + [pic_mode=default]) +test -z "$pic_mode" && pic_mode=default + +# Use C for the default configuration in the libtool script +tagname= +AC_LIBTOOL_LANG_C_CONFIG +_LT_AC_TAGCONFIG +])# AC_LIBTOOL_SETUP + + +# _LT_AC_SYS_COMPILER +# ------------------- +AC_DEFUN([_LT_AC_SYS_COMPILER], +[AC_REQUIRE([AC_PROG_CC])dnl + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC +])# _LT_AC_SYS_COMPILER + + +# _LT_CC_BASENAME(CC) +# ------------------- +# Calculate cc_basename. Skip known compiler wrappers and cross-prefix. +AC_DEFUN([_LT_CC_BASENAME], +[for cc_temp in $1""; do + case $cc_temp in + compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;; + distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` +]) + + +# _LT_COMPILER_BOILERPLATE +# ------------------------ +# Check for compiler boilerplate output or warnings with +# the simple compiler test code. 
+AC_DEFUN([_LT_COMPILER_BOILERPLATE], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +ac_outfile=conftest.$ac_objext +echo "$lt_simple_compile_test_code" >conftest.$ac_ext +eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_compiler_boilerplate=`cat conftest.err` +$rm conftest* +])# _LT_COMPILER_BOILERPLATE + + +# _LT_LINKER_BOILERPLATE +# ---------------------- +# Check for linker boilerplate output or warnings with +# the simple link test code. +AC_DEFUN([_LT_LINKER_BOILERPLATE], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +ac_outfile=conftest.$ac_objext +echo "$lt_simple_link_test_code" >conftest.$ac_ext +eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_linker_boilerplate=`cat conftest.err` +$rm conftest* +])# _LT_LINKER_BOILERPLATE + + +# _LT_AC_SYS_LIBPATH_AIX +# ---------------------- +# Links a minimal program and checks the executable +# for the system default hardcoded library path. In most cases, +# this is /usr/lib:/lib, but when the MPI compilers are used +# the location of the communication and MPI libs are included too. +# If we don't find anything, use the default library path according +# to the aix ld manual. +AC_DEFUN([_LT_AC_SYS_LIBPATH_AIX], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_LINK_IFELSE(AC_LANG_PROGRAM,[ +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi],[]) +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi +])# _LT_AC_SYS_LIBPATH_AIX + + +# _LT_AC_SHELL_INIT(ARG) +# ---------------------- +AC_DEFUN([_LT_AC_SHELL_INIT], +[ifdef([AC_DIVERSION_NOTICE], + [AC_DIVERT_PUSH(AC_DIVERSION_NOTICE)], + [AC_DIVERT_PUSH(NOTICE)]) +$1 +AC_DIVERT_POP +])# _LT_AC_SHELL_INIT + + +# _LT_AC_PROG_ECHO_BACKSLASH +# -------------------------- +# Add some code to the start of the generated configure script which +# will find an echo command which doesn't interpret backslashes. +AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH], +[_LT_AC_SHELL_INIT([ +# Check that we are running under the correct shell. +SHELL=${CONFIG_SHELL-/bin/sh} + +case X$ECHO in +X*--fallback-echo) + # Remove one level of quotation (which was required for Make). + ECHO=`echo "$ECHO" | sed 's,\\\\\[$]\\[$]0,'[$]0','` + ;; +esac + +echo=${ECHO-echo} +if test "X[$]1" = X--no-reexec; then + # Discard the --no-reexec flag, and continue. + shift +elif test "X[$]1" = X--fallback-echo; then + # Avoid inline document here, it may be left over + : +elif test "X`($echo '\t') 2>/dev/null`" = 'X\t' ; then + # Yippee, $echo works! + : +else + # Restart under the correct shell. + exec $SHELL "[$]0" --no-reexec ${1+"[$]@"} +fi + +if test "X[$]1" = X--fallback-echo; then + # used as fallback echo + shift + cat </dev/null 2>&1 && unset CDPATH + +if test -z "$ECHO"; then +if test "X${echo_test_string+set}" != Xset; then +# find a string as large as possible, as long as the shell can cope with it + for cmd in 'sed 50q "[$]0"' 'sed 20q "[$]0"' 'sed 10q "[$]0"' 'sed 2q "[$]0"' 'echo test'; do + # expected sizes: less than 2Kb, 1Kb, 512 bytes, 16 bytes, ... 
+ if (echo_test_string=`eval $cmd`) 2>/dev/null && + echo_test_string=`eval $cmd` && + (test "X$echo_test_string" = "X$echo_test_string") 2>/dev/null + then + break + fi + done +fi + +if test "X`($echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + : +else + # The Solaris, AIX, and Digital Unix default echo programs unquote + # backslashes. This makes it impossible to quote backslashes using + # echo "$something" | sed 's/\\/\\\\/g' + # + # So, first we look for a working echo in the user's PATH. + + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for dir in $PATH /usr/ucb; do + IFS="$lt_save_ifs" + if (test -f $dir/echo || test -f $dir/echo$ac_exeext) && + test "X`($dir/echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($dir/echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + echo="$dir/echo" + break + fi + done + IFS="$lt_save_ifs" + + if test "X$echo" = Xecho; then + # We didn't find a better echo, so look for alternatives. + if test "X`(print -r '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`(print -r "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + # This shell has a builtin print -r that does the trick. + echo='print -r' + elif (test -f /bin/ksh || test -f /bin/ksh$ac_exeext) && + test "X$CONFIG_SHELL" != X/bin/ksh; then + # If we have ksh, try running configure again with it. + ORIGINAL_CONFIG_SHELL=${CONFIG_SHELL-/bin/sh} + export ORIGINAL_CONFIG_SHELL + CONFIG_SHELL=/bin/ksh + export CONFIG_SHELL + exec $CONFIG_SHELL "[$]0" --no-reexec ${1+"[$]@"} + else + # Try using printf. + echo='printf %s\n' + if test "X`($echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + # Cool, printf works + : + elif echo_testing_string=`($ORIGINAL_CONFIG_SHELL "[$]0" --fallback-echo '\t') 2>/dev/null` && + test "X$echo_testing_string" = 'X\t' && + echo_testing_string=`($ORIGINAL_CONFIG_SHELL "[$]0" --fallback-echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + CONFIG_SHELL=$ORIGINAL_CONFIG_SHELL + export CONFIG_SHELL + SHELL="$CONFIG_SHELL" + export SHELL + echo="$CONFIG_SHELL [$]0 --fallback-echo" + elif echo_testing_string=`($CONFIG_SHELL "[$]0" --fallback-echo '\t') 2>/dev/null` && + test "X$echo_testing_string" = 'X\t' && + echo_testing_string=`($CONFIG_SHELL "[$]0" --fallback-echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + echo="$CONFIG_SHELL [$]0 --fallback-echo" + else + # maybe with a smaller string... + prev=: + + for cmd in 'echo test' 'sed 2q "[$]0"' 'sed 10q "[$]0"' 'sed 20q "[$]0"' 'sed 50q "[$]0"'; do + if (test "X$echo_test_string" = "X`eval $cmd`") 2>/dev/null + then + break + fi + prev="$cmd" + done + + if test "$prev" != 'sed 50q "[$]0"'; then + echo_test_string=`eval $prev` + export echo_test_string + exec ${ORIGINAL_CONFIG_SHELL-${CONFIG_SHELL-/bin/sh}} "[$]0" ${1+"[$]@"} + else + # Oops. We lost completely, so just stick with echo. + echo=echo + fi + fi + fi + fi +fi +fi + +# Copy echo and quote the copy suitably for passing to libtool from +# the Makefile, instead of quoting the original, which is used later. 
+ECHO=$echo +if test "X$ECHO" = "X$CONFIG_SHELL [$]0 --fallback-echo"; then + ECHO="$CONFIG_SHELL \\\$\[$]0 --fallback-echo" +fi + +AC_SUBST(ECHO) +])])# _LT_AC_PROG_ECHO_BACKSLASH + + +# _LT_AC_LOCK +# ----------- +AC_DEFUN([_LT_AC_LOCK], +[AC_ARG_ENABLE([libtool-lock], + [AC_HELP_STRING([--disable-libtool-lock], + [avoid locking (might break parallel builds)])]) +test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes + +# Some flags need to be propagated to the compiler or linker for good +# libtool support. +case $host in +ia64-*-hpux*) + # Find out which ABI we are using. + echo 'int i;' > conftest.$ac_ext + if AC_TRY_EVAL(ac_compile); then + case `/usr/bin/file conftest.$ac_objext` in + *ELF-32*) + HPUX_IA64_MODE="32" + ;; + *ELF-64*) + HPUX_IA64_MODE="64" + ;; + esac + fi + rm -rf conftest* + ;; +*-*-irix6*) + # Find out which ABI we are using. + echo '[#]line __oline__ "configure"' > conftest.$ac_ext + if AC_TRY_EVAL(ac_compile); then + if test "$lt_cv_prog_gnu_ld" = yes; then + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + LD="${LD-ld} -melf32bsmip" + ;; + *N32*) + LD="${LD-ld} -melf32bmipn32" + ;; + *64-bit*) + LD="${LD-ld} -melf64bmip" + ;; + esac + else + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + LD="${LD-ld} -32" + ;; + *N32*) + LD="${LD-ld} -n32" + ;; + *64-bit*) + LD="${LD-ld} -64" + ;; + esac + fi + fi + rm -rf conftest* + ;; + +x86_64-*kfreebsd*-gnu|x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*| \ +s390*-*linux*|sparc*-*linux*) + # Find out which ABI we are using. + echo 'int i;' > conftest.$ac_ext + if AC_TRY_EVAL(ac_compile); then + case `/usr/bin/file conftest.o` in + *32-bit*) + case $host in + x86_64-*kfreebsd*-gnu) + LD="${LD-ld} -m elf_i386_fbsd" + ;; + x86_64-*linux*) + LD="${LD-ld} -m elf_i386" + ;; + ppc64-*linux*|powerpc64-*linux*) + LD="${LD-ld} -m elf32ppclinux" + ;; + s390x-*linux*) + LD="${LD-ld} -m elf_s390" + ;; + sparc64-*linux*) + LD="${LD-ld} -m elf32_sparc" + ;; + esac + ;; + *64-bit*) + libsuff=64 + case $host in + x86_64-*kfreebsd*-gnu) + LD="${LD-ld} -m elf_x86_64_fbsd" + ;; + x86_64-*linux*) + LD="${LD-ld} -m elf_x86_64" + ;; + ppc*-*linux*|powerpc*-*linux*) + LD="${LD-ld} -m elf64ppc" + ;; + s390*-*linux*) + LD="${LD-ld} -m elf64_s390" + ;; + sparc*-*linux*) + LD="${LD-ld} -m elf64_sparc" + ;; + esac + ;; + esac + fi + rm -rf conftest* + ;; + +*-*-sco3.2v5*) + # On SCO OpenServer 5, we need -belf to get full-featured binaries. + SAVE_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -belf" + AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf, + [AC_LANG_PUSH(C) + AC_TRY_LINK([],[],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no]) + AC_LANG_POP]) + if test x"$lt_cv_cc_needs_belf" != x"yes"; then + # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf + CFLAGS="$SAVE_CFLAGS" + fi + ;; +sparc*-*solaris*) + # Find out which ABI we are using. 
+ echo 'int i;' > conftest.$ac_ext + if AC_TRY_EVAL(ac_compile); then + case `/usr/bin/file conftest.o` in + *64-bit*) + case $lt_cv_prog_gnu_ld in + yes*) LD="${LD-ld} -m elf64_sparc" ;; + *) LD="${LD-ld} -64" ;; + esac + ;; + esac + fi + rm -rf conftest* + ;; + +AC_PROVIDE_IFELSE([AC_LIBTOOL_WIN32_DLL], +[*-*-cygwin* | *-*-mingw* | *-*-pw32*) + AC_CHECK_TOOL(DLLTOOL, dlltool, false) + AC_CHECK_TOOL(AS, as, false) + AC_CHECK_TOOL(OBJDUMP, objdump, false) + ;; + ]) +esac + +need_locks="$enable_libtool_lock" + +])# _LT_AC_LOCK + + +# AC_LIBTOOL_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, +# [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE]) +# ---------------------------------------------------------------- +# Check whether the given compiler option works +AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], +[AC_REQUIRE([LT_AC_PROG_SED]) +AC_CACHE_CHECK([$1], [$2], + [$2=no + ifelse([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4]) + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="$3" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:__oline__: $lt_compile\"" >&AS_MESSAGE_LOG_FD) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&AS_MESSAGE_LOG_FD + echo "$as_me:__oline__: \$? = $ac_status" >&AS_MESSAGE_LOG_FD + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + $2=yes + fi + fi + $rm conftest* +]) + +if test x"[$]$2" = xyes; then + ifelse([$5], , :, [$5]) +else + ifelse([$6], , :, [$6]) +fi +])# AC_LIBTOOL_COMPILER_OPTION + + +# AC_LIBTOOL_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, +# [ACTION-SUCCESS], [ACTION-FAILURE]) +# ------------------------------------------------------------ +# Check whether the given compiler option works +AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_CACHE_CHECK([$1], [$2], + [$2=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $3" + echo "$lt_simple_link_test_code" > conftest.$ac_ext + if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then + # The linker can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + # Append any errors to the config.log. 
+ cat conftest.err 1>&AS_MESSAGE_LOG_FD + $echo "X$_lt_linker_boilerplate" | $Xsed -e '/^$/d' > conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if diff conftest.exp conftest.er2 >/dev/null; then + $2=yes + fi + else + $2=yes + fi + fi + $rm conftest* + LDFLAGS="$save_LDFLAGS" +]) + +if test x"[$]$2" = xyes; then + ifelse([$4], , :, [$4]) +else + ifelse([$5], , :, [$5]) +fi +])# AC_LIBTOOL_LINKER_OPTION + + +# AC_LIBTOOL_SYS_MAX_CMD_LEN +# -------------------------- +AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], +[# find the maximum length of command line arguments +AC_MSG_CHECKING([the maximum length of command line arguments]) +AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl + i=0 + teststring="ABCD" + + case $build_os in + msdosdjgpp*) + # On DJGPP, this test can blow up pretty badly due to problems in libc + # (any single argument exceeding 2000 bytes causes a buffer overrun + # during glob expansion). Even if it were fixed, the result of this + # check would be larger than it should be. + lt_cv_sys_max_cmd_len=12288; # 12K is about right + ;; + + gnu*) + # Under GNU Hurd, this test is not required because there is + # no limit to the length of command line arguments. + # Libtool will interpret -1 as no limit whatsoever + lt_cv_sys_max_cmd_len=-1; + ;; + + cygwin* | mingw*) + # On Win9x/ME, this test blows up -- it succeeds, but takes + # about 5 minutes as the teststring grows exponentially. + # Worse, since 9x/ME are not pre-emptively multitasking, + # you end up with a "frozen" computer, even though with patience + # the test eventually succeeds (with a max line length of 256k). + # Instead, let's just punt: use the minimum linelength reported by + # all of the supported platforms: 8192 (on NT/2K/XP). + lt_cv_sys_max_cmd_len=8192; + ;; + + amigaos*) + # On AmigaOS with pdksh, this test takes hours, literally. + # So we just punt and use a minimum line length of 8192. + lt_cv_sys_max_cmd_len=8192; + ;; + + netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) + # This has been around since 386BSD, at least. Likely further. + if test -x /sbin/sysctl; then + lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` + elif test -x /usr/sbin/sysctl; then + lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` + else + lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs + fi + # And add a safety zone + lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` + lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` + ;; + + interix*) + # We know the value 262144 and hardcode it with a safety zone (like BSD) + lt_cv_sys_max_cmd_len=196608 + ;; + + osf*) + # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure + # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not + # nice to cause kernel panics so lets avoid the loop below. + # First set a reasonable default. 
+  lt_cv_sys_max_cmd_len=16384
+  #
+  if test -x /sbin/sysconfig; then
+    case `/sbin/sysconfig -q proc exec_disable_arg_limit` in
+      *1*) lt_cv_sys_max_cmd_len=-1 ;;
+    esac
+  fi
+  ;;
+  sco3.2v5*)
+    lt_cv_sys_max_cmd_len=102400
+    ;;
+  sysv5* | sco5v6* | sysv4.2uw2*)
+    kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null`
+    if test -n "$kargmax"; then
+      lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[[ ]]//'`
+    else
+      lt_cv_sys_max_cmd_len=32768
+    fi
+    ;;
+  *)
+    lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null`
+    if test -n "$lt_cv_sys_max_cmd_len"; then
+      lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
+      lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
+    else
+      SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}}
+      while (test "X"`$SHELL [$]0 --fallback-echo "X$teststring" 2>/dev/null` \
+	       = "XX$teststring") >/dev/null 2>&1 &&
+	      new_result=`expr "X$teststring" : ".*" 2>&1` &&
+	      lt_cv_sys_max_cmd_len=$new_result &&
+	      test $i != 17 # 1/2 MB should be enough
+      do
+        i=`expr $i + 1`
+        teststring=$teststring$teststring
+      done
+      teststring=
+      # Add a significant safety factor because C++ compilers can tack on massive
+      # amounts of additional arguments before passing them to the linker.
+      # It appears as though 1/2 is a usable value.
+      lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
+    fi
+    ;;
+  esac
+])
+if test -n $lt_cv_sys_max_cmd_len ; then
+  AC_MSG_RESULT($lt_cv_sys_max_cmd_len)
+else
+  AC_MSG_RESULT(none)
+fi
+])# AC_LIBTOOL_SYS_MAX_CMD_LEN
+
+
+# _LT_AC_CHECK_DLFCN
+# ------------------
+AC_DEFUN([_LT_AC_CHECK_DLFCN],
+[AC_CHECK_HEADERS(dlfcn.h)dnl
+])# _LT_AC_CHECK_DLFCN
+
+
+# _LT_AC_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE,
+#                           ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING)
+# ---------------------------------------------------------------------
+AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF],
+[AC_REQUIRE([_LT_AC_CHECK_DLFCN])dnl
+if test "$cross_compiling" = yes; then :
+  [$4]
+else
+  lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+  lt_status=$lt_dlunknown
+  cat > conftest.$ac_ext <<EOF
+[#line __oline__ "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL		RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+#  define LT_DLGLOBAL		DL_GLOBAL
+# else
+#  define LT_DLGLOBAL		0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+   find out it does not work in some platform. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+#  define LT_DLLAZY_OR_NOW		RTLD_LAZY
+# else
+#  ifdef DL_LAZY
+#   define LT_DLLAZY_OR_NOW		DL_LAZY
+#  else
+#   ifdef RTLD_NOW
+#    define LT_DLLAZY_OR_NOW	RTLD_NOW
+#   else
+#    ifdef DL_NOW
+#     define LT_DLLAZY_OR_NOW	DL_NOW
+#    else
+#     define LT_DLLAZY_OR_NOW	0
+#    endif
+#   endif
+#  endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+  void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+  int status = $lt_dlunknown;
+
+  if (self)
+    {
+      if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+      else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+      /* dlclose (self); */
+    }
+  else
+    puts (dlerror ());
+
+    exit (status);
+}]
+EOF
+  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext} 2>/dev/null; then
+    (./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null
+    lt_status=$?
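# Reading guide for the case statement that follows, derived from the exit
# codes prepared above for the conftest program:
#
#   exit 1 (lt_dlno_uscore)   -> ACTION-IF-TRUE           (dlsym found "fnord")
#   exit 2 (lt_dlneed_uscore) -> ACTION-IF-TRUE-W-USCORE  (only "_fnord" resolved)
#   exit 0 (lt_dlunknown)     -> ACTION-IF-FALSE          (could not tell, or dlopen failed)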
+ case x$lt_status in + x$lt_dlno_uscore) $1 ;; + x$lt_dlneed_uscore) $2 ;; + x$lt_dlunknown|x*) $3 ;; + esac + else : + # compilation failed + $3 + fi +fi +rm -fr conftest* +])# _LT_AC_TRY_DLOPEN_SELF + + +# AC_LIBTOOL_DLOPEN_SELF +# ---------------------- +AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], +[AC_REQUIRE([_LT_AC_CHECK_DLFCN])dnl +if test "x$enable_dlopen" != xyes; then + enable_dlopen=unknown + enable_dlopen_self=unknown + enable_dlopen_self_static=unknown +else + lt_cv_dlopen=no + lt_cv_dlopen_libs= + + case $host_os in + beos*) + lt_cv_dlopen="load_add_on" + lt_cv_dlopen_libs= + lt_cv_dlopen_self=yes + ;; + + mingw* | pw32*) + lt_cv_dlopen="LoadLibrary" + lt_cv_dlopen_libs= + ;; + + cygwin*) + lt_cv_dlopen="dlopen" + lt_cv_dlopen_libs= + ;; + + darwin*) + # if libdl is installed we need to link against it + AC_CHECK_LIB([dl], [dlopen], + [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],[ + lt_cv_dlopen="dyld" + lt_cv_dlopen_libs= + lt_cv_dlopen_self=yes + ]) + ;; + + *) + AC_CHECK_FUNC([shl_load], + [lt_cv_dlopen="shl_load"], + [AC_CHECK_LIB([dld], [shl_load], + [lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld"], + [AC_CHECK_FUNC([dlopen], + [lt_cv_dlopen="dlopen"], + [AC_CHECK_LIB([dl], [dlopen], + [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"], + [AC_CHECK_LIB([svld], [dlopen], + [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"], + [AC_CHECK_LIB([dld], [dld_link], + [lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld"]) + ]) + ]) + ]) + ]) + ]) + ;; + esac + + if test "x$lt_cv_dlopen" != xno; then + enable_dlopen=yes + else + enable_dlopen=no + fi + + case $lt_cv_dlopen in + dlopen) + save_CPPFLAGS="$CPPFLAGS" + test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" + + save_LDFLAGS="$LDFLAGS" + wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" + + save_LIBS="$LIBS" + LIBS="$lt_cv_dlopen_libs $LIBS" + + AC_CACHE_CHECK([whether a program can dlopen itself], + lt_cv_dlopen_self, [dnl + _LT_AC_TRY_DLOPEN_SELF( + lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes, + lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross) + ]) + + if test "x$lt_cv_dlopen_self" = xyes; then + wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" + AC_CACHE_CHECK([whether a statically linked program can dlopen itself], + lt_cv_dlopen_self_static, [dnl + _LT_AC_TRY_DLOPEN_SELF( + lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes, + lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross) + ]) + fi + + CPPFLAGS="$save_CPPFLAGS" + LDFLAGS="$save_LDFLAGS" + LIBS="$save_LIBS" + ;; + esac + + case $lt_cv_dlopen_self in + yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; + *) enable_dlopen_self=unknown ;; + esac + + case $lt_cv_dlopen_self_static in + yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; + *) enable_dlopen_self_static=unknown ;; + esac +fi +])# AC_LIBTOOL_DLOPEN_SELF + + +# AC_LIBTOOL_PROG_CC_C_O([TAGNAME]) +# --------------------------------- +# Check to see if options -c and -o are simultaneously supported by compiler +AC_DEFUN([AC_LIBTOOL_PROG_CC_C_O], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_REQUIRE([_LT_AC_SYS_COMPILER])dnl +AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext], + [_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)], + [_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no + $rm -r conftest 2>/dev/null + mkdir conftest + cd conftest + mkdir out + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + lt_compiler_flag="-o out/conftest2.$ac_objext" + # Insert the option either (1) after the last *FLAGS variable, 
or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:__oline__: $lt_compile\"" >&AS_MESSAGE_LOG_FD) + (eval "$lt_compile" 2>out/conftest.err) + ac_status=$? + cat out/conftest.err >&AS_MESSAGE_LOG_FD + echo "$as_me:__oline__: \$? = $ac_status" >&AS_MESSAGE_LOG_FD + if (exit $ac_status) && test -s out/conftest2.$ac_objext + then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' > out/conftest.exp + $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 + if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then + _LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes + fi + fi + chmod u+w . 2>&AS_MESSAGE_LOG_FD + $rm conftest* + # SGI C++ compiler will create directory out/ii_files/ for + # template instantiation + test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files + $rm out/* && rmdir out + cd .. + rmdir conftest + $rm conftest* +]) +])# AC_LIBTOOL_PROG_CC_C_O + + +# AC_LIBTOOL_SYS_HARD_LINK_LOCKS([TAGNAME]) +# ----------------------------------------- +# Check to see if we can do hard links to lock some files if needed +AC_DEFUN([AC_LIBTOOL_SYS_HARD_LINK_LOCKS], +[AC_REQUIRE([_LT_AC_LOCK])dnl + +hard_links="nottested" +if test "$_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + AC_MSG_CHECKING([if we can lock with hard links]) + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + AC_MSG_RESULT([$hard_links]) + if test "$hard_links" = no; then + AC_MSG_WARN([`$CC' does not support `-c -o', so `make -j' may be unsafe]) + need_locks=warn + fi +else + need_locks=no +fi +])# AC_LIBTOOL_SYS_HARD_LINK_LOCKS + + +# AC_LIBTOOL_OBJDIR +# ----------------- +AC_DEFUN([AC_LIBTOOL_OBJDIR], +[AC_CACHE_CHECK([for objdir], [lt_cv_objdir], +[rm -f .libs 2>/dev/null +mkdir .libs 2>/dev/null +if test -d .libs; then + lt_cv_objdir=.libs +else + # MS-DOS does not allow filenames that begin with a dot. + lt_cv_objdir=_libs +fi +rmdir .libs 2>/dev/null]) +objdir=$lt_cv_objdir +])# AC_LIBTOOL_OBJDIR + + +# AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH([TAGNAME]) +# ---------------------------------------------- +# Check hardcoding attributes. +AC_DEFUN([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH], +[AC_MSG_CHECKING([how to hardcode library paths into programs]) +_LT_AC_TAGVAR(hardcode_action, $1)= +if test -n "$_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)" || \ + test -n "$_LT_AC_TAGVAR(runpath_var, $1)" || \ + test "X$_LT_AC_TAGVAR(hardcode_automatic, $1)" = "Xyes" ; then + + # We can hardcode non-existant directories. 
+ if test "$_LT_AC_TAGVAR(hardcode_direct, $1)" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, $1)" != no && + test "$_LT_AC_TAGVAR(hardcode_minus_L, $1)" != no; then + # Linking always hardcodes the temporary library directory. + _LT_AC_TAGVAR(hardcode_action, $1)=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. + _LT_AC_TAGVAR(hardcode_action, $1)=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. + _LT_AC_TAGVAR(hardcode_action, $1)=unsupported +fi +AC_MSG_RESULT([$_LT_AC_TAGVAR(hardcode_action, $1)]) + +if test "$_LT_AC_TAGVAR(hardcode_action, $1)" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi +])# AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH + + +# AC_LIBTOOL_SYS_LIB_STRIP +# ------------------------ +AC_DEFUN([AC_LIBTOOL_SYS_LIB_STRIP], +[striplib= +old_striplib= +AC_MSG_CHECKING([whether stripping libraries is possible]) +if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then + test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" + test -z "$striplib" && striplib="$STRIP --strip-unneeded" + AC_MSG_RESULT([yes]) +else +# FIXME - insert some real tests, host_os isn't really good enough + case $host_os in + darwin*) + if test -n "$STRIP" ; then + striplib="$STRIP -x" + old_striplib="$STRIP -S" + AC_MSG_RESULT([yes]) + else + AC_MSG_RESULT([no]) +fi + ;; + *) + AC_MSG_RESULT([no]) + ;; + esac +fi +])# AC_LIBTOOL_SYS_LIB_STRIP + + +# AC_LIBTOOL_SYS_DYNAMIC_LINKER +# ----------------------------- +# PORTME Fill in your ld.so characteristics +AC_DEFUN([AC_LIBTOOL_SYS_DYNAMIC_LINKER], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_MSG_CHECKING([dynamic linker characteristics]) +library_names_spec= +libname_spec='lib$name' +soname_spec= +shrext_cmds=".so" +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" +m4_if($1,[],[ +if test "$GCC" = yes; then + case $host_os in + darwin*) lt_awk_arg="/^libraries:/,/LR/" ;; + *) lt_awk_arg="/^libraries:/" ;; + esac + lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$lt_search_path_spec" | grep ';' >/dev/null ; then + # if the path contains ";" then we assume it to be the separator + # otherwise default to the standard path separator (i.e. ":") - it is + # assumed that no part of a normal pathname contains ";" but that should + # okay in the real world where ";" in dirpaths is itself problematic. + lt_search_path_spec=`echo "$lt_search_path_spec" | $SED -e 's/;/ /g'` + else + lt_search_path_spec=`echo "$lt_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + # Ok, now we have the path, separated by spaces, we can step through it + # and add multilib dir if necessary. 
+ lt_tmp_lt_search_path_spec= + lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` + for lt_sys_path in $lt_search_path_spec; do + if test -d "$lt_sys_path/$lt_multi_os_dir"; then + lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir" + else + test -d "$lt_sys_path" && \ + lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" + fi + done + lt_search_path_spec=`echo $lt_tmp_lt_search_path_spec | awk ' +BEGIN {RS=" "; FS="/|\n";} { + lt_foo=""; + lt_count=0; + for (lt_i = NF; lt_i > 0; lt_i--) { + if ($lt_i != "" && $lt_i != ".") { + if ($lt_i == "..") { + lt_count++; + } else { + if (lt_count == 0) { + lt_foo="/" $lt_i lt_foo; + } else { + lt_count--; + } + } + } + } + if (lt_foo != "") { lt_freq[[lt_foo]]++; } + if (lt_freq[[lt_foo]] == 1) { print lt_foo; } +}'` + sys_lib_search_path_spec=`echo $lt_search_path_spec` +else + sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" +fi]) +need_lib_prefix=unknown +hardcode_into_libs=no + +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +need_version=unknown + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX 3 has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}${shared_ext}$major' + ;; + +aix4* | aix5*) + version_type=linux + need_lib_prefix=no + need_version=no + hardcode_into_libs=yes + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. + case $host_os in + aix4 | aix4.[[01]] | aix4.[[01]].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can not hardcode correct + # soname into executable. Probably we can add versioning support to + # collect2, so additional links can be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}${shared_ext}$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. 
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}${shared_ext}' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi[[45]]*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + shrext_cmds=".dll" + need_version=no + need_lib_prefix=no + + case $GCC,$host_os in + yes,cygwin* | yes,mingw* | yes,pw32*) + library_names_spec='$libname.dll.a' + # DLL is installed to $(libdir)/../bin by postinstall_cmds + postinstall_cmds='base_file=`basename \${file}`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog $dir/$dlname \$dldir/$dlname~ + chmod a+x \$dldir/$dlname' + postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + shlibpath_overrides_runpath=yes + + case $host_os in + cygwin*) + # Cygwin DLLs use 'cyg' prefix rather than 'lib' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib" + ;; + mingw*) + # MinGW DLLs use traditional 'lib' prefix + soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$sys_lib_search_path_spec" | [grep ';[c-zC-Z]:/' >/dev/null]; then + # It is most probably a Windows format PATH printed by + # mingw gcc, but we are running on Cygwin. Gcc prints its search + # path with ; separators, and with drive letters. We can handle the + # drive letters (cygwin fileutils understands them), so leave them, + # especially as we might pass files found there to a mingw objdump, + # which wouldn't understand a cygwinified path. Ahh. + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` + else + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + ;; + pw32*) + # pw32 DLLs use 'pw' prefix rather than 'lib' + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' + ;; + esac + ;; + + *) + library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . 
and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext' + soname_spec='${libname}${release}${major}$shared_ext' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' + m4_if([$1], [],[ + sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"]) + sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd* | dragonfly*) + # DragonFly does not have aout. When/if they implement a new + # versioning mechanism, adjust this. + if test -x /usr/bin/objformat; then + objformat=`/usr/bin/objformat` + else + case $host_os in + freebsd[[123]]*) objformat=aout ;; + *) objformat=elf ;; + esac + fi + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + freebsd3.[[01]]* | freebsdelf3.[[01]]*) + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \ + freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + *) # from 4.6 on, and DragonFly + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + version_type=sunos + need_lib_prefix=no + need_version=no + case $host_cpu in + ia64*) + shrext_cmds='.so' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.so" + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + if test "X$HPUX_IA64_MODE" = X32; then + sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" + else + sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" + fi + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + hppa*64*) + shrext_cmds='.sl' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.sl" + shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + *) + shrext_cmds='.sl' + dynamic_linker="$host_os dld.sl" + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + ;; + esac + # HP-UX runs *really* slowly unless shared libraries are mode 555. + postinstall_cmds='chmod 555 $lib' + ;; + +interix[[3-9]]*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + +irix5* | irix6* | nonstopux*) + case $host_os in + nonstopux*) version_type=nonstopux ;; + *) + if test "$lt_cv_prog_gnu_ld" = yes; then + version_type=linux + else + version_type=irix + fi ;; + esac + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' + case $host_os in + irix5* | nonstopux*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") + libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") + libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") + libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + hardcode_into_libs=yes + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux*oldld* | linux*aout* | linux*coff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. 
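# Illustration for the linux* entry below (hypothetical library name): with
# libname=libfoo, an empty release, versuffix=.1.2.3 and major=.1, the two
# specs expand to
#
#   library_names_spec -> "libfoo.so.1.2.3 libfoo.so.1 libfoo.so"
#   soname_spec        -> "libfoo.so.1"
#
# i.e. the usual real-name / soname / linker-name layout on GNU/Linux.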
+linux* | k*bsd*-gnu) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. + hardcode_into_libs=yes + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + + # Append ld.so.conf contents to the search path + if test -f /etc/ld.so.conf; then + lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;/^$/d' | tr '\n' ' '` + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. + dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +nto-qnx*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + sys_lib_dlsearch_path_spec="/usr/lib" + need_lib_prefix=no + # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. 
+ case $host_os in + openbsd3.3 | openbsd3.3.*) need_version=yes ;; + *) need_version=no ;; + esac + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case $host_os in + openbsd2.[[89]] | openbsd2.[[89]].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + ;; + +os2*) + libname_spec='$name' + shrext_cmds=".dll" + need_lib_prefix=no + library_names_spec='$libname${shared_ext} $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +rdos*) + dynamic_linker=no + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.3*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + need_lib_prefix=no + export_dynamic_flag_spec='${wl}-Blargedynsym' + runpath_var=LD_RUN_PATH + ;; + siemens) + need_lib_prefix=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' + soname_spec='$libname${shared_ext}.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + version_type=freebsd-elf + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + if test "$with_gnu_ld" = yes; then + sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' + shlibpath_overrides_runpath=no + else + sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' + 
shlibpath_overrides_runpath=yes + case $host_os in + sco3.2v5*) + sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" + ;; + esac + fi + sys_lib_dlsearch_path_spec='/usr/lib' + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +*) + dynamic_linker=no + ;; +esac +AC_MSG_RESULT([$dynamic_linker]) +test "$dynamic_linker" = no && can_build_shared=no + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi +])# AC_LIBTOOL_SYS_DYNAMIC_LINKER + + +# _LT_AC_TAGCONFIG +# ---------------- +AC_DEFUN([_LT_AC_TAGCONFIG], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_ARG_WITH([tags], + [AC_HELP_STRING([--with-tags@<:@=TAGS@:>@], + [include additional configurations @<:@automatic@:>@])], + [tagnames="$withval"]) + +if test -f "$ltmain" && test -n "$tagnames"; then + if test ! -f "${ofile}"; then + AC_MSG_WARN([output file `$ofile' does not exist]) + fi + + if test -z "$LTCC"; then + eval "`$SHELL ${ofile} --config | grep '^LTCC='`" + if test -z "$LTCC"; then + AC_MSG_WARN([output file `$ofile' does not look like a libtool script]) + else + AC_MSG_WARN([using `LTCC=$LTCC', extracted from `$ofile']) + fi + fi + if test -z "$LTCFLAGS"; then + eval "`$SHELL ${ofile} --config | grep '^LTCFLAGS='`" + fi + + # Extract list of available tagged configurations in $ofile. + # Note that this assumes the entire list is on one line. + available_tags=`grep "^available_tags=" "${ofile}" | $SED -e 's/available_tags=\(.*$\)/\1/' -e 's/\"//g'` + + lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for tagname in $tagnames; do + IFS="$lt_save_ifs" + # Check whether tagname contains only valid characters + case `$echo "X$tagname" | $Xsed -e 's:[[-_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890,/]]::g'` in + "") ;; + *) AC_MSG_ERROR([invalid tag name: $tagname]) + ;; + esac + + if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $tagname$" < "${ofile}" > /dev/null + then + AC_MSG_ERROR([tag name \"$tagname\" already exists]) + fi + + # Update the list of available tags. + if test -n "$tagname"; then + echo appending configuration tag \"$tagname\" to $ofile + + case $tagname in + CXX) + if test -n "$CXX" && ( test "X$CXX" != "Xno" && + ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || + (test "X$CXX" != "Xg++"))) ; then + AC_LIBTOOL_LANG_CXX_CONFIG + else + tagname="" + fi + ;; + + F77) + if test -n "$F77" && test "X$F77" != "Xno"; then + AC_LIBTOOL_LANG_F77_CONFIG + else + tagname="" + fi + ;; + + GCJ) + if test -n "$GCJ" && test "X$GCJ" != "Xno"; then + AC_LIBTOOL_LANG_GCJ_CONFIG + else + tagname="" + fi + ;; + + RC) + AC_LIBTOOL_LANG_RC_CONFIG + ;; + + *) + AC_MSG_ERROR([Unsupported tag name: $tagname]) + ;; + esac + + # Append the new tag name to the list of available tags. + if test -n "$tagname" ; then + available_tags="$available_tags $tagname" + fi + fi + done + IFS="$lt_save_ifs" + + # Now substitute the updated list of available tags. 
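# Usage sketch (hypothetical package): a tree that also contains C++ and
# Fortran 77 sources would typically be configured with
#
#   ./configure --with-tags=CXX,F77
#
# so the loop above runs AC_LIBTOOL_LANG_CXX_CONFIG and
# AC_LIBTOOL_LANG_F77_CONFIG and appends "CXX F77" to available_tags; the sed
# command below then rewrites the available_tags= line inside the generated
# libtool script.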
+ if eval "sed -e 's/^available_tags=.*\$/available_tags=\"$available_tags\"/' \"$ofile\" > \"${ofile}T\""; then + mv "${ofile}T" "$ofile" + chmod +x "$ofile" + else + rm -f "${ofile}T" + AC_MSG_ERROR([unable to update list of available tagged configurations.]) + fi +fi +])# _LT_AC_TAGCONFIG + + +# AC_LIBTOOL_DLOPEN +# ----------------- +# enable checks for dlopen support +AC_DEFUN([AC_LIBTOOL_DLOPEN], + [AC_BEFORE([$0],[AC_LIBTOOL_SETUP]) +])# AC_LIBTOOL_DLOPEN + + +# AC_LIBTOOL_WIN32_DLL +# -------------------- +# declare package support for building win32 DLLs +AC_DEFUN([AC_LIBTOOL_WIN32_DLL], +[AC_BEFORE([$0], [AC_LIBTOOL_SETUP]) +])# AC_LIBTOOL_WIN32_DLL + + +# AC_ENABLE_SHARED([DEFAULT]) +# --------------------------- +# implement the --enable-shared flag +# DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. +AC_DEFUN([AC_ENABLE_SHARED], +[define([AC_ENABLE_SHARED_DEFAULT], ifelse($1, no, no, yes))dnl +AC_ARG_ENABLE([shared], + [AC_HELP_STRING([--enable-shared@<:@=PKGS@:>@], + [build shared libraries @<:@default=]AC_ENABLE_SHARED_DEFAULT[@:>@])], + [p=${PACKAGE-default} + case $enableval in + yes) enable_shared=yes ;; + no) enable_shared=no ;; + *) + enable_shared=no + # Look at the argument we got. We use all the common list separators. + lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for pkg in $enableval; do + IFS="$lt_save_ifs" + if test "X$pkg" = "X$p"; then + enable_shared=yes + fi + done + IFS="$lt_save_ifs" + ;; + esac], + [enable_shared=]AC_ENABLE_SHARED_DEFAULT) +])# AC_ENABLE_SHARED + + +# AC_DISABLE_SHARED +# ----------------- +# set the default shared flag to --disable-shared +AC_DEFUN([AC_DISABLE_SHARED], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +AC_ENABLE_SHARED(no) +])# AC_DISABLE_SHARED + + +# AC_ENABLE_STATIC([DEFAULT]) +# --------------------------- +# implement the --enable-static flag +# DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. +AC_DEFUN([AC_ENABLE_STATIC], +[define([AC_ENABLE_STATIC_DEFAULT], ifelse($1, no, no, yes))dnl +AC_ARG_ENABLE([static], + [AC_HELP_STRING([--enable-static@<:@=PKGS@:>@], + [build static libraries @<:@default=]AC_ENABLE_STATIC_DEFAULT[@:>@])], + [p=${PACKAGE-default} + case $enableval in + yes) enable_static=yes ;; + no) enable_static=no ;; + *) + enable_static=no + # Look at the argument we got. We use all the common list separators. + lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for pkg in $enableval; do + IFS="$lt_save_ifs" + if test "X$pkg" = "X$p"; then + enable_static=yes + fi + done + IFS="$lt_save_ifs" + ;; + esac], + [enable_static=]AC_ENABLE_STATIC_DEFAULT) +])# AC_ENABLE_STATIC + + +# AC_DISABLE_STATIC +# ----------------- +# set the default static flag to --disable-static +AC_DEFUN([AC_DISABLE_STATIC], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +AC_ENABLE_STATIC(no) +])# AC_DISABLE_STATIC + + +# AC_ENABLE_FAST_INSTALL([DEFAULT]) +# --------------------------------- +# implement the --enable-fast-install flag +# DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. +AC_DEFUN([AC_ENABLE_FAST_INSTALL], +[define([AC_ENABLE_FAST_INSTALL_DEFAULT], ifelse($1, no, no, yes))dnl +AC_ARG_ENABLE([fast-install], + [AC_HELP_STRING([--enable-fast-install@<:@=PKGS@:>@], + [optimize for fast installation @<:@default=]AC_ENABLE_FAST_INSTALL_DEFAULT[@:>@])], + [p=${PACKAGE-default} + case $enableval in + yes) enable_fast_install=yes ;; + no) enable_fast_install=no ;; + *) + enable_fast_install=no + # Look at the argument we got. We use all the common list separators. 
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for pkg in $enableval; do + IFS="$lt_save_ifs" + if test "X$pkg" = "X$p"; then + enable_fast_install=yes + fi + done + IFS="$lt_save_ifs" + ;; + esac], + [enable_fast_install=]AC_ENABLE_FAST_INSTALL_DEFAULT) +])# AC_ENABLE_FAST_INSTALL + + +# AC_DISABLE_FAST_INSTALL +# ----------------------- +# set the default to --disable-fast-install +AC_DEFUN([AC_DISABLE_FAST_INSTALL], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +AC_ENABLE_FAST_INSTALL(no) +])# AC_DISABLE_FAST_INSTALL + + +# AC_LIBTOOL_PICMODE([MODE]) +# -------------------------- +# implement the --with-pic flag +# MODE is either `yes' or `no'. If omitted, it defaults to `both'. +AC_DEFUN([AC_LIBTOOL_PICMODE], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +pic_mode=ifelse($#,1,$1,default) +])# AC_LIBTOOL_PICMODE + + +# AC_PROG_EGREP +# ------------- +# This is predefined starting with Autoconf 2.54, so this conditional +# definition can be removed once we require Autoconf 2.54 or later. +m4_ifndef([AC_PROG_EGREP], [AC_DEFUN([AC_PROG_EGREP], +[AC_CACHE_CHECK([for egrep], [ac_cv_prog_egrep], + [if echo a | (grep -E '(a|b)') >/dev/null 2>&1 + then ac_cv_prog_egrep='grep -E' + else ac_cv_prog_egrep='egrep' + fi]) + EGREP=$ac_cv_prog_egrep + AC_SUBST([EGREP]) +])]) + + +# AC_PATH_TOOL_PREFIX +# ------------------- +# find a file program which can recognize shared library +AC_DEFUN([AC_PATH_TOOL_PREFIX], +[AC_REQUIRE([AC_PROG_EGREP])dnl +AC_MSG_CHECKING([for $1]) +AC_CACHE_VAL(lt_cv_path_MAGIC_CMD, +[case $MAGIC_CMD in +[[\\/*] | ?:[\\/]*]) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + ;; +*) + lt_save_MAGIC_CMD="$MAGIC_CMD" + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR +dnl $ac_dummy forces splitting on constant user-supplied paths. +dnl POSIX.2 word splitting is done only on the output of word expansions, +dnl not every word. This closes a longstanding sh security hole. + ac_dummy="ifelse([$2], , $PATH, [$2])" + for ac_dir in $ac_dummy; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$1; then + lt_cv_path_MAGIC_CMD="$ac_dir/$1" + if test -n "$file_magic_test_file"; then + case $deplibs_check_method in + "file_magic "*) + file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` + MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | + $EGREP "$file_magic_regex" > /dev/null; then + : + else + cat <&2 + +*** Warning: the command libtool uses to detect shared libraries, +*** $file_magic_cmd, produces output that libtool cannot recognize. +*** The result is that libtool may fail to recognize shared libraries +*** as such. This will affect the creation of libtool libraries that +*** depend on shared libraries, but programs linked with such libtool +*** libraries will work regardless of this problem. 
Nevertheless, you +*** may want to report the problem to your system manager and/or to +*** bug-libtool at gnu.org + +EOF + fi ;; + esac + fi + break + fi + done + IFS="$lt_save_ifs" + MAGIC_CMD="$lt_save_MAGIC_CMD" + ;; +esac]) +MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +if test -n "$MAGIC_CMD"; then + AC_MSG_RESULT($MAGIC_CMD) +else + AC_MSG_RESULT(no) +fi +])# AC_PATH_TOOL_PREFIX + + +# AC_PATH_MAGIC +# ------------- +# find a file program which can recognize a shared library +AC_DEFUN([AC_PATH_MAGIC], +[AC_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH) +if test -z "$lt_cv_path_MAGIC_CMD"; then + if test -n "$ac_tool_prefix"; then + AC_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH) + else + MAGIC_CMD=: + fi +fi +])# AC_PATH_MAGIC + + +# AC_PROG_LD +# ---------- +# find the pathname to the GNU or non-GNU linker +AC_DEFUN([AC_PROG_LD], +[AC_ARG_WITH([gnu-ld], + [AC_HELP_STRING([--with-gnu-ld], + [assume the C compiler uses GNU ld @<:@default=no@:>@])], + [test "$withval" = no || with_gnu_ld=yes], + [with_gnu_ld=no]) +AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_REQUIRE([AC_PROG_CC])dnl +AC_REQUIRE([AC_CANONICAL_HOST])dnl +AC_REQUIRE([AC_CANONICAL_BUILD])dnl +ac_prog=ld +if test "$GCC" = yes; then + # Check if gcc -print-prog-name=ld gives a path. + AC_MSG_CHECKING([for ld used by $CC]) + case $host in + *-*-mingw*) + # gcc leaves a trailing carriage return which upsets mingw + ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; + *) + ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; + esac + case $ac_prog in + # Accept absolute paths. + [[\\/]]* | ?:[[\\/]]*) + re_direlt='/[[^/]][[^/]]*/\.\./' + # Canonicalize the pathname of ld + ac_prog=`echo $ac_prog| $SED 's%\\\\%/%g'` + while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do + ac_prog=`echo $ac_prog| $SED "s%$re_direlt%/%"` + done + test -z "$LD" && LD="$ac_prog" + ;; + "") + # If it fails, then pretend we aren't using GCC. + ac_prog=ld + ;; + *) + # If it is relative, then search for the first ld in PATH. + with_gnu_ld=unknown + ;; + esac +elif test "$with_gnu_ld" = yes; then + AC_MSG_CHECKING([for GNU ld]) +else + AC_MSG_CHECKING([for non-GNU ld]) +fi +AC_CACHE_VAL(lt_cv_path_LD, +[if test -z "$LD"; then + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then + lt_cv_path_LD="$ac_dir/$ac_prog" + # Check to see if the program is GNU ld. I'd rather use --version, + # but apparently some variants of GNU ld only accept -v. + # Break only if it was the GNU/non-GNU ld that we prefer. + case `"$lt_cv_path_LD" -v 2>&1 &1 /dev/null 2>&1; then + lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' + lt_cv_file_magic_cmd='func_win32_libid' + else + lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?' + lt_cv_file_magic_cmd='$OBJDUMP -f' + fi + ;; + +darwin* | rhapsody*) + lt_cv_deplibs_check_method=pass_all + ;; + +freebsd* | dragonfly*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + case $host_cpu in + i*86 ) + # Not sure whether the presence of OpenBSD here was a mistake. + # Let's accept both of them until this is cleared up. 
+ lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` + ;; + esac + else + lt_cv_deplibs_check_method=pass_all + fi + ;; + +gnu*) + lt_cv_deplibs_check_method=pass_all + ;; + +hpux10.20* | hpux11*) + lt_cv_file_magic_cmd=/usr/bin/file + case $host_cpu in + ia64*) + lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64' + lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so + ;; + hppa*64*) + [lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - PA-RISC [0-9].[0-9]'] + lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl + ;; + *) + lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]].[[0-9]]) shared library' + lt_cv_file_magic_test_file=/usr/lib/libc.sl + ;; + esac + ;; + +interix[[3-9]]*) + # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here + lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$' + ;; + +irix5* | irix6* | nonstopux*) + case $LD in + *-32|*"-32 ") libmagic=32-bit;; + *-n32|*"-n32 ") libmagic=N32;; + *-64|*"-64 ") libmagic=64-bit;; + *) libmagic=never-match;; + esac + lt_cv_deplibs_check_method=pass_all + ;; + +# This must be Linux ELF. +linux* | k*bsd*-gnu) + lt_cv_deplibs_check_method=pass_all + ;; + +netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' + else + lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$' + fi + ;; + +newos6*) + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=/usr/lib/libnls.so + ;; + +nto-qnx*) + lt_cv_deplibs_check_method=unknown + ;; + +openbsd*) + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$' + else + lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' + fi + ;; + +osf3* | osf4* | osf5*) + lt_cv_deplibs_check_method=pass_all + ;; + +rdos*) + lt_cv_deplibs_check_method=pass_all + ;; + +solaris*) + lt_cv_deplibs_check_method=pass_all + ;; + +sysv4 | sysv4.3*) + case $host_vendor in + motorola) + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]' + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` + ;; + ncr) + lt_cv_deplibs_check_method=pass_all + ;; + sequent) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' + ;; + sni) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib" + lt_cv_file_magic_test_file=/lib/libc.so + ;; + siemens) + lt_cv_deplibs_check_method=pass_all + ;; + pc) + lt_cv_deplibs_check_method=pass_all + ;; + esac + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + lt_cv_deplibs_check_method=pass_all + ;; +esac +]) +file_magic_cmd=$lt_cv_file_magic_cmd +deplibs_check_method=$lt_cv_deplibs_check_method +test -z "$deplibs_check_method" && deplibs_check_method=unknown +])# AC_DEPLIBS_CHECK_METHOD + + +# AC_PROG_NM +# ---------- +# 
find the pathname to a BSD-compatible name lister +AC_DEFUN([AC_PROG_NM], +[AC_CACHE_CHECK([for BSD-compatible nm], lt_cv_path_NM, +[if test -n "$NM"; then + # Let the user override the test. + lt_cv_path_NM="$NM" +else + lt_nm_to_check="${ac_tool_prefix}nm" + if test -n "$ac_tool_prefix" && test "$build" = "$host"; then + lt_nm_to_check="$lt_nm_to_check nm" + fi + for lt_tmp_nm in $lt_nm_to_check; do + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + tmp_nm="$ac_dir/$lt_tmp_nm" + if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then + # Check to see if the nm accepts a BSD-compat flag. + # Adding the `sed 1q' prevents false positives on HP-UX, which says: + # nm: unknown option "B" ignored + # Tru64's nm complains that /dev/null is an invalid object file + case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in + */dev/null* | *'Invalid file or object type'*) + lt_cv_path_NM="$tmp_nm -B" + break + ;; + *) + case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in + */dev/null*) + lt_cv_path_NM="$tmp_nm -p" + break + ;; + *) + lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but + continue # so that we can try to find one that supports BSD flags + ;; + esac + ;; + esac + fi + done + IFS="$lt_save_ifs" + done + test -z "$lt_cv_path_NM" && lt_cv_path_NM=nm +fi]) +NM="$lt_cv_path_NM" +])# AC_PROG_NM + + +# AC_CHECK_LIBM +# ------------- +# check for math library +AC_DEFUN([AC_CHECK_LIBM], +[AC_REQUIRE([AC_CANONICAL_HOST])dnl +LIBM= +case $host in +*-*-beos* | *-*-cygwin* | *-*-pw32* | *-*-darwin*) + # These system don't have libm, or don't need it + ;; +*-ncr-sysv4.3*) + AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw") + AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm") + ;; +*) + AC_CHECK_LIB(m, cos, LIBM="-lm") + ;; +esac +])# AC_CHECK_LIBM + + +# AC_LIBLTDL_CONVENIENCE([DIRECTORY]) +# ----------------------------------- +# sets LIBLTDL to the link flags for the libltdl convenience library and +# LTDLINCL to the include flags for the libltdl header and adds +# --enable-ltdl-convenience to the configure arguments. Note that +# AC_CONFIG_SUBDIRS is not called here. If DIRECTORY is not provided, +# it is assumed to be `libltdl'. LIBLTDL will be prefixed with +# '${top_builddir}/' and LTDLINCL will be prefixed with '${top_srcdir}/' +# (note the single quotes!). If your package is not flat and you're not +# using automake, define top_builddir and top_srcdir appropriately in +# the Makefiles. +AC_DEFUN([AC_LIBLTDL_CONVENIENCE], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl + case $enable_ltdl_convenience in + no) AC_MSG_ERROR([this package needs a convenience libltdl]) ;; + "") enable_ltdl_convenience=yes + ac_configure_args="$ac_configure_args --enable-ltdl-convenience" ;; + esac + LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdlc.la + LTDLINCL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl']) + # For backwards non-gettext consistent compatibility... + INCLTDL="$LTDLINCL" +])# AC_LIBLTDL_CONVENIENCE + + +# AC_LIBLTDL_INSTALLABLE([DIRECTORY]) +# ----------------------------------- +# sets LIBLTDL to the link flags for the libltdl installable library and +# LTDLINCL to the include flags for the libltdl header and adds +# --enable-ltdl-install to the configure arguments. Note that +# AC_CONFIG_SUBDIRS is not called here. If DIRECTORY is not provided, +# and an installed libltdl is not found, it is assumed to be `libltdl'. 
+# LIBLTDL will be prefixed with '${top_builddir}/'# and LTDLINCL with +# '${top_srcdir}/' (note the single quotes!). If your package is not +# flat and you're not using automake, define top_builddir and top_srcdir +# appropriately in the Makefiles. +# In the future, this macro may have to be called after AC_PROG_LIBTOOL. +AC_DEFUN([AC_LIBLTDL_INSTALLABLE], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl + AC_CHECK_LIB(ltdl, lt_dlinit, + [test x"$enable_ltdl_install" != xyes && enable_ltdl_install=no], + [if test x"$enable_ltdl_install" = xno; then + AC_MSG_WARN([libltdl not installed, but installation disabled]) + else + enable_ltdl_install=yes + fi + ]) + if test x"$enable_ltdl_install" = x"yes"; then + ac_configure_args="$ac_configure_args --enable-ltdl-install" + LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdl.la + LTDLINCL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl']) + else + ac_configure_args="$ac_configure_args --enable-ltdl-install=no" + LIBLTDL="-lltdl" + LTDLINCL= + fi + # For backwards non-gettext consistent compatibility... + INCLTDL="$LTDLINCL" +])# AC_LIBLTDL_INSTALLABLE + + +# AC_LIBTOOL_CXX +# -------------- +# enable support for C++ libraries +AC_DEFUN([AC_LIBTOOL_CXX], +[AC_REQUIRE([_LT_AC_LANG_CXX]) +])# AC_LIBTOOL_CXX + + +# _LT_AC_LANG_CXX +# --------------- +AC_DEFUN([_LT_AC_LANG_CXX], +[AC_REQUIRE([AC_PROG_CXX]) +AC_REQUIRE([_LT_AC_PROG_CXXCPP]) +_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}CXX]) +])# _LT_AC_LANG_CXX + +# _LT_AC_PROG_CXXCPP +# ------------------ +AC_DEFUN([_LT_AC_PROG_CXXCPP], +[ +AC_REQUIRE([AC_PROG_CXX]) +if test -n "$CXX" && ( test "X$CXX" != "Xno" && + ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || + (test "X$CXX" != "Xg++"))) ; then + AC_PROG_CXXCPP +fi +])# _LT_AC_PROG_CXXCPP + +# AC_LIBTOOL_F77 +# -------------- +# enable support for Fortran 77 libraries +AC_DEFUN([AC_LIBTOOL_F77], +[AC_REQUIRE([_LT_AC_LANG_F77]) +])# AC_LIBTOOL_F77 + + +# _LT_AC_LANG_F77 +# --------------- +AC_DEFUN([_LT_AC_LANG_F77], +[AC_REQUIRE([AC_PROG_F77]) +_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}F77]) +])# _LT_AC_LANG_F77 + + +# AC_LIBTOOL_GCJ +# -------------- +# enable support for GCJ libraries +AC_DEFUN([AC_LIBTOOL_GCJ], +[AC_REQUIRE([_LT_AC_LANG_GCJ]) +])# AC_LIBTOOL_GCJ + + +# _LT_AC_LANG_GCJ +# --------------- +AC_DEFUN([_LT_AC_LANG_GCJ], +[AC_PROVIDE_IFELSE([AC_PROG_GCJ],[], + [AC_PROVIDE_IFELSE([A][M_PROG_GCJ],[], + [AC_PROVIDE_IFELSE([LT_AC_PROG_GCJ],[], + [ifdef([AC_PROG_GCJ],[AC_REQUIRE([AC_PROG_GCJ])], + [ifdef([A][M_PROG_GCJ],[AC_REQUIRE([A][M_PROG_GCJ])], + [AC_REQUIRE([A][C_PROG_GCJ_OR_A][M_PROG_GCJ])])])])])]) +_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}GCJ]) +])# _LT_AC_LANG_GCJ + + +# AC_LIBTOOL_RC +# ------------- +# enable support for Windows resource files +AC_DEFUN([AC_LIBTOOL_RC], +[AC_REQUIRE([LT_AC_PROG_RC]) +_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}RC]) +])# AC_LIBTOOL_RC + + +# AC_LIBTOOL_LANG_C_CONFIG +# ------------------------ +# Ensure that the configuration vars for the C compiler are +# suitably defined. Those variables are subsequently used by +# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'. +AC_DEFUN([AC_LIBTOOL_LANG_C_CONFIG], [_LT_AC_LANG_C_CONFIG]) +AC_DEFUN([_LT_AC_LANG_C_CONFIG], +[lt_save_CC="$CC" +AC_LANG_PUSH(C) + +# Source file extension for C test sources. +ac_ext=c + +# Object file extension for compiled C test sources. 
+objext=o +_LT_AC_TAGVAR(objext, $1)=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="int some_variable = 0;" + +# Code to be used in simple link tests +lt_simple_link_test_code='int main(){return(0);}' + +_LT_AC_SYS_COMPILER + +# save warnings/boilerplate of simple test code +_LT_COMPILER_BOILERPLATE +_LT_LINKER_BOILERPLATE + +AC_LIBTOOL_PROG_COMPILER_NO_RTTI($1) +AC_LIBTOOL_PROG_COMPILER_PIC($1) +AC_LIBTOOL_PROG_CC_C_O($1) +AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1) +AC_LIBTOOL_PROG_LD_SHLIBS($1) +AC_LIBTOOL_SYS_DYNAMIC_LINKER($1) +AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1) +AC_LIBTOOL_SYS_LIB_STRIP +AC_LIBTOOL_DLOPEN_SELF + +# Report which library types will actually be built +AC_MSG_CHECKING([if libtool supports shared libraries]) +AC_MSG_RESULT([$can_build_shared]) + +AC_MSG_CHECKING([whether to build shared libraries]) +test "$can_build_shared" = "no" && enable_shared=no + +# On AIX, shared libraries and static libraries use the same namespace, and +# are all built from PIC. +case $host_os in +aix3*) + test "$enable_shared" = yes && enable_static=no + if test -n "$RANLIB"; then + archive_cmds="$archive_cmds~\$RANLIB \$lib" + postinstall_cmds='$RANLIB $lib' + fi + ;; + +aix4* | aix5*) + if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then + test "$enable_shared" = yes && enable_static=no + fi + ;; +esac +AC_MSG_RESULT([$enable_shared]) + +AC_MSG_CHECKING([whether to build static libraries]) +# Make sure either enable_shared or enable_static is yes. +test "$enable_shared" = yes || enable_static=yes +AC_MSG_RESULT([$enable_static]) + +AC_LIBTOOL_CONFIG($1) + +AC_LANG_POP +CC="$lt_save_CC" +])# AC_LIBTOOL_LANG_C_CONFIG + + +# AC_LIBTOOL_LANG_CXX_CONFIG +# -------------------------- +# Ensure that the configuration vars for the C compiler are +# suitably defined. Those variables are subsequently used by +# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'. +AC_DEFUN([AC_LIBTOOL_LANG_CXX_CONFIG], [_LT_AC_LANG_CXX_CONFIG(CXX)]) +AC_DEFUN([_LT_AC_LANG_CXX_CONFIG], +[AC_LANG_PUSH(C++) +AC_REQUIRE([AC_PROG_CXX]) +AC_REQUIRE([_LT_AC_PROG_CXXCPP]) + +_LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no +_LT_AC_TAGVAR(allow_undefined_flag, $1)= +_LT_AC_TAGVAR(always_export_symbols, $1)=no +_LT_AC_TAGVAR(archive_expsym_cmds, $1)= +_LT_AC_TAGVAR(export_dynamic_flag_spec, $1)= +_LT_AC_TAGVAR(hardcode_direct, $1)=no +_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)= +_LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)= +_LT_AC_TAGVAR(hardcode_libdir_separator, $1)= +_LT_AC_TAGVAR(hardcode_minus_L, $1)=no +_LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported +_LT_AC_TAGVAR(hardcode_automatic, $1)=no +_LT_AC_TAGVAR(module_cmds, $1)= +_LT_AC_TAGVAR(module_expsym_cmds, $1)= +_LT_AC_TAGVAR(link_all_deplibs, $1)=unknown +_LT_AC_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds +_LT_AC_TAGVAR(no_undefined_flag, $1)= +_LT_AC_TAGVAR(whole_archive_flag_spec, $1)= +_LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=no + +# Dependencies to place before and after the object being linked: +_LT_AC_TAGVAR(predep_objects, $1)= +_LT_AC_TAGVAR(postdep_objects, $1)= +_LT_AC_TAGVAR(predeps, $1)= +_LT_AC_TAGVAR(postdeps, $1)= +_LT_AC_TAGVAR(compiler_lib_search_path, $1)= + +# Source file extension for C++ test sources. +ac_ext=cpp + +# Object file extension for compiled C++ test sources. 
+objext=o +_LT_AC_TAGVAR(objext, $1)=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="int some_variable = 0;" + +# Code to be used in simple link tests +lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }' + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. +_LT_AC_SYS_COMPILER + +# save warnings/boilerplate of simple test code +_LT_COMPILER_BOILERPLATE +_LT_LINKER_BOILERPLATE + +# Allow CC to be a program name with arguments. +lt_save_CC=$CC +lt_save_LD=$LD +lt_save_GCC=$GCC +GCC=$GXX +lt_save_with_gnu_ld=$with_gnu_ld +lt_save_path_LD=$lt_cv_path_LD +if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then + lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx +else + $as_unset lt_cv_prog_gnu_ld +fi +if test -n "${lt_cv_path_LDCXX+set}"; then + lt_cv_path_LD=$lt_cv_path_LDCXX +else + $as_unset lt_cv_path_LD +fi +test -z "${LDCXX+set}" || LD=$LDCXX +CC=${CXX-"c++"} +compiler=$CC +_LT_AC_TAGVAR(compiler, $1)=$CC +_LT_CC_BASENAME([$compiler]) + +# We don't want -fno-exception wen compiling C++ code, so set the +# no_builtin_flag separately +if test "$GXX" = yes; then + _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' +else + _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= +fi + +if test "$GXX" = yes; then + # Set up default GNU C++ configuration + + AC_PROG_LD + + # Check if GNU C++ uses GNU ld as the underlying linker, since the + # archiving commands below assume that GNU ld is being used. + if test "$with_gnu_ld" = yes; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + + # If archive_cmds runs LD, not CC, wlarc should be empty + # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to + # investigate it a little bit more. (MM) + wlarc='${wl}' + + # ancient GNU ld didn't support --whole-archive et. al. + if eval "`$CC -print-prog-name=ld` --help 2>&1" | \ + grep 'no-whole-archive' > /dev/null; then + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)= + fi + else + with_gnu_ld=no + wlarc= + + # A generic and very simple default shared library creation + # command for GNU C++ for the case where it uses the native + # linker, instead of GNU ld. If possible, this setting should + # overridden to take advantage of the native linker features on + # the platform it is being used on. + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' + fi + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. 
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"' + +else + GXX=no + with_gnu_ld=no + wlarc= +fi + +# PORTME: fill in a description of your system's C++ link characteristics +AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) +_LT_AC_TAGVAR(ld_shlibs, $1)=yes +case $host_os in + aix3*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[[23]]|aix4.[[23]].*|aix5*) + for ld_flag in $LDFLAGS; do + case $ld_flag in + *-brtl*) + aix_use_runtimelinking=yes + break + ;; + esac + done + ;; + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. + + _LT_AC_TAGVAR(archive_cmds, $1)='' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=':' + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + + if test "$GXX" = yes; then + case $host_os in aix4.[[012]]|aix4.[[012]].*) + # We only want to do this on AIX 4.2 and lower, the check + # below for broken collect2 doesn't work under 4.3+ + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + : + else + # We have old collect2 + _LT_AC_TAGVAR(hardcode_direct, $1)=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)= + fi + ;; + esac + shared_flag='-shared' + if test "$aix_use_runtimelinking" = yes; then + shared_flag="$shared_flag "'${wl}-G' + fi + else + # not using gcc + if test "$host_cpu" = ia64; then + # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release + # chokes on -Wl,-G. The following line is correct: + shared_flag='-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall does not export symbols beginning with + # underscore (_), so it is better to generate a list of symbols to export. + _LT_AC_TAGVAR(always_export_symbols, $1)=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + _LT_AC_TAGVAR(allow_undefined_flag, $1)='-berok' + # Determine the default libpath from the value encoded in an empty executable. 
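+      # (For reference: _LT_AC_SYS_LIBPATH_AIX links a trivial test program and
+      #  reads its loader section to set $aix_libpath, falling back to
+      #  /usr/lib:/lib when nothing can be extracted.)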
+ _LT_AC_SYS_LIBPATH_AIX + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" + + _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib' + _LT_AC_TAGVAR(allow_undefined_flag, $1)="-z nodefs" + _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + else + # Determine the default libpath from the value encoded in an empty executable. + _LT_AC_SYS_LIBPATH_AIX + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + _LT_AC_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok' + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok' + # Exported symbols can be pulled into shared objects from archives + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='$convenience' + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes + # This is similar to how AIX traditionally builds its shared libraries. + _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + fi + fi + ;; + + beos*) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. FIXME + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + + chorus*) + case $cc_basename in + *) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + ;; + + cygwin* | mingw* | pw32*) + # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, + # as there is no search path for DLLs. + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + _LT_AC_TAGVAR(always_export_symbols, $1)=no + _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=yes + + if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is; otherwise, prepend... 
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + darwin* | rhapsody*) + case $host_os in + rhapsody* | darwin1.[[012]]) + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-undefined ${wl}suppress' + ;; + *) # Darwin 1.3 on + if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + else + case ${MACOSX_DEPLOYMENT_TARGET} in + 10.[[012]]) + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + ;; + 10.*) + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-undefined ${wl}dynamic_lookup' + ;; + esac + fi + ;; + esac + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_automatic, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='' + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + + if test "$GXX" = yes ; then + lt_int_apple_cc_single_mod=no + output_verbose_link_cmd='echo' + if $CC -dumpspecs 2>&1 | $EGREP 'single_module' >/dev/null ; then + lt_int_apple_cc_single_mod=yes + fi + if test "X$lt_int_apple_cc_single_mod" = Xyes ; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -r -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + fi + _LT_AC_TAGVAR(module_cmds, $1)='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + if test "X$lt_int_apple_cc_single_mod" = Xyes ; then + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + fi + _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + case $cc_basename in + xlc*) + output_verbose_link_cmd='echo' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj ${wl}-single_module $allow_undefined_flag -o $lib 
$libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}`echo $rpath/$soname` $xlcverstring' + _LT_AC_TAGVAR(module_cmds, $1)='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -qmkshrobj ${wl}-single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}$rpath/$soname $xlcverstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + ;; + *) + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + fi + ;; + + dgux*) + case $cc_basename in + ec++*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + ghcx*) + # Green Hills C++ Compiler + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + *) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + ;; + freebsd[[12]]*) + # C++ shared libraries reported to be fairly broken before switch to ELF + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + freebsd-elf*) + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + ;; + freebsd* | dragonfly*) + # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF + # conventions + _LT_AC_TAGVAR(ld_shlibs, $1)=yes + ;; + gnu*) + ;; + hpux9*) + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, + # but as the default + # location of the library. + + case $cc_basename in + CC*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + aCC*) + _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. 
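+	# (How the pipeline below works: from the compiler's verbose link line it
+	#  keeps conftest.$objext and the -L/library tokens, and drops the other
+	#  *.o files the compiler added on its own.)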
+ output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | grep "[[-]]L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes; then + _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + else + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + esac + ;; + hpux10*|hpux11*) + if test $with_gnu_ld = no; then + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + case $host_cpu in + hppa*64*|ia64*) ;; + *) + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + ;; + esac + fi + case $host_cpu in + hppa*64*|ia64*) + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + *) + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, + # but as the default + # location of the library. + ;; + esac + + case $cc_basename in + CC*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + aCC*) + case $host_cpu in + hppa*64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + ia64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + esac + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. 
+ output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | grep "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes; then + if test $with_gnu_ld = no; then + case $host_cpu in + hppa*64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + ia64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + esac + fi + else + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + esac + ;; + interix[[3-9]]*) + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. + # Instead, shared libraries are loaded at an image base (0x10000000 by + # default) and relocated if they conflict, which is a slow very memory + # consuming and fragmenting process. To avoid this, we pick a random, + # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link + # time. Moving up from 0x10000000 also allows more sbrk(2) space. + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + ;; + irix5* | irix6*) + case $cc_basename in + CC*) + # SGI C++ + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + + # Archives containing C++ object files must be created using + # "CC -ar", where "CC" is the IRIX C++ compiler. This is + # necessary to make sure instantiated templates are included + # in the archive. 
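+      # (Illustration with assumed values oldlib=libfoo.a and oldobjs="a.o b.o":
+      #  the setting below expands to `CC -ar -WR,-u -o libfoo.a a.o b.o'.)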
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs' + ;; + *) + if test "$GXX" = yes; then + if test "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` -o $lib' + fi + fi + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + ;; + esac + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + ;; + linux* | k*bsd*-gnu) + case $cc_basename in + KCC*) + # Kuck and Associates, Inc. (KAI) C++ Compiler + + # KCC will only create a shared library if the output file + # ends with ".so" (or ".sl" for HP-UX), so rename the library + # to its proper name (with version) after linking. + _LT_AC_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | grep "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath,$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + + # Archives containing C++ object files must be created using + # "CC -Bstatic", where "CC" is the KAI C++ compiler. + _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' + ;; + icpc*) + # Intel C++ + with_gnu_ld=yes + # version 8.0 and above of icpc choke on multiply defined symbols + # if we add $predep_objects and $postdep_objects, however 7.1 and + # earlier do not add the objects themselves. 
+ case `$CC -V 2>&1` in + *"Version 7."*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + ;; + *) # Version 8.0 or newer + tmp_idyn= + case $host_cpu in + ia64*) tmp_idyn=' -i_dynamic';; + esac + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + ;; + esac + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive' + ;; + pgCC*) + # Portland Group C++ compiler + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + ;; + cxx*) + # Compaq C++ + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols' + + runpath_var=LD_RUN_PATH + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. 
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C++ 5.9 + _LT_AC_TAGVAR(no_undefined_flag, $1)=' -zdefs' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + + # Not sure whether something based on + # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 + # would be better. + output_verbose_link_cmd='echo' + + # Archives containing C++ object files must be created using + # "CC -xar", where "CC" is the Sun C++ compiler. This is + # necessary to make sure instantiated templates are included + # in the archive. + _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' + ;; + esac + ;; + esac + ;; + lynxos*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + m88k*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + mvs*) + case $cc_basename in + cxx*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + *) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + ;; + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' + wlarc= + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + fi + # Workaround some broken pre-1.5 toolchains + output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' + ;; + openbsd2*) + # C++ shared libraries are fairly broken + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + openbsd*) + if test -f /usr/libexec/ld.so; then + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + fi + output_verbose_link_cmd='echo' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + osf3*) + case $cc_basename in + KCC*) + # Kuck and Associates, Inc. 
(KAI) C++ Compiler + + # KCC will only create a shared library if the output file + # ends with ".so" (or ".sl" for HP-UX), so rename the library + # to its proper name (with version) after linking. + _LT_AC_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Archives containing C++ object files must be created using + # "CC -Bstatic", where "CC" is the KAI C++ compiler. + _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' + + ;; + RCC*) + # Rational C++ 2.4.1 + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + cxx*) + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && echo ${wl}-set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes && test "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"' + + else + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + esac + ;; + osf4* | osf5*) + case $cc_basename in + KCC*) + # Kuck and Associates, Inc. (KAI) C++ Compiler + + # KCC will only create a shared library if the output file + # ends with ".so" (or ".sl" for HP-UX), so rename the library + # to its proper name (with version) after linking. 
+ _LT_AC_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Archives containing C++ object files must be created using + # the KAI C++ compiler. + _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' + ;; + RCC*) + # Rational C++ 2.4.1 + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + cxx*) + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ + echo "-hidden">> $lib.exp~ + $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname -Wl,-input -Wl,$lib.exp `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib~ + $rm $lib.exp' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes && test "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. 
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"' + + else + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + esac + ;; + psos*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + sunos4*) + case $cc_basename in + CC*) + # Sun C++ 4.x + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + lcc*) + # Lucid + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + *) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + ;; + solaris*) + case $cc_basename in + CC*) + # Sun C++ 4.2, 5.x and Centerline C++ + _LT_AC_TAGVAR(archive_cmds_need_lc,$1)=yes + _LT_AC_TAGVAR(no_undefined_flag, $1)=' -zdefs' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp' + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + case $host_os in + solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; + *) + # The compiler driver will combine and reorder linker options, + # but understands `-z linker_flag'. + # Supported since Solaris 2.6 (maybe 2.5.1?) + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' + ;; + esac + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + + output_verbose_link_cmd='echo' + + # Archives containing C++ object files must be created using + # "CC -xar", where "CC" is the Sun C++ compiler. This is + # necessary to make sure instantiated templates are included + # in the archive. + _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' + ;; + gcx*) + # Green Hills C++ Compiler + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' + + # The C++ compiler must be used to create the archive. + _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs' + ;; + *) + # GNU C++ compiler with Solaris linker + if test "$GXX" = yes && test "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs' + if $CC --version | grep -v '^2\.7' > /dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp' + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd="$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\"" + else + # g++ 2.7 appears to require `-G' NOT `-shared' on this + # platform. 
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp' + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd="$CC -G $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\"" + fi + + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $wl$libdir' + case $host_os in + solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; + *) + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + ;; + esac + fi + ;; + esac + ;; + sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) + _LT_AC_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + runpath_var='LD_RUN_PATH' + + case $cc_basename in + CC*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + ;; + sysv5* | sco3.2v5* | sco5v6*) + # Note: We can NOT use -z defs as we might desire, because we do not + # link with -lc, and that would cause any symbols used from libc to + # always be unresolved, which means just about no library would + # ever link correctly. If we're not using GNU ld we use -z text + # though, which does catch some bad symbols but isn't as heavy-handed + # as -z defs. + # For security reasons, it is highly recommended that you always + # use absolute paths for naming shared libraries, and exclude the + # DT_RUNPATH tag from executables and libraries. But doing so + # requires that you compile everything twice, which is a pain. + # So that behaviour is only enabled if SCOABSPATH is set to a + # non-empty value in the environment. Most likely only useful for + # creating official distributions of packages. + # This is a hack until libtool officially supports absolute path + # names for shared libraries. 
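+  # (Illustration, assumed invocation: running configure with SCOABSPATH=yes in
+  #  the environment makes the ${SCOABSPATH:+...} expansions below prepend
+  #  $install_libdir to the -h soname, e.g. -h /usr/local/lib/libfoo.so.1.)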
+ _LT_AC_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs' + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=':' + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport' + runpath_var='LD_RUN_PATH' + + case $cc_basename in + CC*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + ;; + tandem*) + case $cc_basename in + NCC*) + # NonStop-UX NCC 3.20 + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + *) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + ;; + vxworks*) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + *) + # FIXME: insert proper C++ library support + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; +esac +AC_MSG_RESULT([$_LT_AC_TAGVAR(ld_shlibs, $1)]) +test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no + +_LT_AC_TAGVAR(GCC, $1)="$GXX" +_LT_AC_TAGVAR(LD, $1)="$LD" + +AC_LIBTOOL_POSTDEP_PREDEP($1) +AC_LIBTOOL_PROG_COMPILER_PIC($1) +AC_LIBTOOL_PROG_CC_C_O($1) +AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1) +AC_LIBTOOL_PROG_LD_SHLIBS($1) +AC_LIBTOOL_SYS_DYNAMIC_LINKER($1) +AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1) + +AC_LIBTOOL_CONFIG($1) + +AC_LANG_POP +CC=$lt_save_CC +LDCXX=$LD +LD=$lt_save_LD +GCC=$lt_save_GCC +with_gnu_ldcxx=$with_gnu_ld +with_gnu_ld=$lt_save_with_gnu_ld +lt_cv_path_LDCXX=$lt_cv_path_LD +lt_cv_path_LD=$lt_save_path_LD +lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld +lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld +])# AC_LIBTOOL_LANG_CXX_CONFIG + +# AC_LIBTOOL_POSTDEP_PREDEP([TAGNAME]) +# ------------------------------------ +# Figure out "hidden" library dependencies from verbose +# compiler output when linking a shared library. +# Parse the compiler output and extract the necessary +# objects, libraries and library flags. +AC_DEFUN([AC_LIBTOOL_POSTDEP_PREDEP],[ +dnl we can't use the lt_simple_compile_test_code here, +dnl because it contains code intended for an executable, +dnl not a library. It's possible we should let each +dnl tag define a new lt_????_link_test_code variable, +dnl but it's only used here... +ifelse([$1],[],[cat > conftest.$ac_ext < conftest.$ac_ext < conftest.$ac_ext < conftest.$ac_ext <&1 | sed 5q` in + *Sun\ C*) + # Sun C++ 5.9 + # + # The more standards-conforming stlport4 library is + # incompatible with the Cstd library. Avoid specifying + # it if it's in CXXFLAGS. Ignore libCrun as + # -library=stlport4 depends on it. 
+ case " $CXX $CXXFLAGS " in + *" -library=stlport4 "*) + solaris_use_stlport4=yes + ;; + esac + if test "$solaris_use_stlport4" != yes; then + _LT_AC_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' + fi + ;; + esac + ;; + +solaris*) + case $cc_basename in + CC*) + # The more standards-conforming stlport4 library is + # incompatible with the Cstd library. Avoid specifying + # it if it's in CXXFLAGS. Ignore libCrun as + # -library=stlport4 depends on it. + case " $CXX $CXXFLAGS " in + *" -library=stlport4 "*) + solaris_use_stlport4=yes + ;; + esac + + # Adding this requires a known-good setup of shared libraries for + # Sun compiler versions before 5.6, else PIC objects from an old + # archive will be linked into the output, leading to subtle bugs. + if test "$solaris_use_stlport4" != yes; then + _LT_AC_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' + fi + ;; + esac + ;; +esac +]) + +case " $_LT_AC_TAGVAR(postdeps, $1) " in +*" -lc "*) _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no ;; +esac +])# AC_LIBTOOL_POSTDEP_PREDEP + +# AC_LIBTOOL_LANG_F77_CONFIG +# -------------------------- +# Ensure that the configuration vars for the C compiler are +# suitably defined. Those variables are subsequently used by +# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'. +AC_DEFUN([AC_LIBTOOL_LANG_F77_CONFIG], [_LT_AC_LANG_F77_CONFIG(F77)]) +AC_DEFUN([_LT_AC_LANG_F77_CONFIG], +[AC_REQUIRE([AC_PROG_F77]) +AC_LANG_PUSH(Fortran 77) + +_LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no +_LT_AC_TAGVAR(allow_undefined_flag, $1)= +_LT_AC_TAGVAR(always_export_symbols, $1)=no +_LT_AC_TAGVAR(archive_expsym_cmds, $1)= +_LT_AC_TAGVAR(export_dynamic_flag_spec, $1)= +_LT_AC_TAGVAR(hardcode_direct, $1)=no +_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)= +_LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)= +_LT_AC_TAGVAR(hardcode_libdir_separator, $1)= +_LT_AC_TAGVAR(hardcode_minus_L, $1)=no +_LT_AC_TAGVAR(hardcode_automatic, $1)=no +_LT_AC_TAGVAR(module_cmds, $1)= +_LT_AC_TAGVAR(module_expsym_cmds, $1)= +_LT_AC_TAGVAR(link_all_deplibs, $1)=unknown +_LT_AC_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds +_LT_AC_TAGVAR(no_undefined_flag, $1)= +_LT_AC_TAGVAR(whole_archive_flag_spec, $1)= +_LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=no + +# Source file extension for f77 test sources. +ac_ext=f + +# Object file extension for compiled f77 test sources. +objext=o +_LT_AC_TAGVAR(objext, $1)=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="\ + subroutine t + return + end +" + +# Code to be used in simple link tests +lt_simple_link_test_code="\ + program t + end +" + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. +_LT_AC_SYS_COMPILER + +# save warnings/boilerplate of simple test code +_LT_COMPILER_BOILERPLATE +_LT_LINKER_BOILERPLATE + +# Allow CC to be a program name with arguments. +lt_save_CC="$CC" +CC=${F77-"f77"} +compiler=$CC +_LT_AC_TAGVAR(compiler, $1)=$CC +_LT_CC_BASENAME([$compiler]) + +AC_MSG_CHECKING([if libtool supports shared libraries]) +AC_MSG_RESULT([$can_build_shared]) + +AC_MSG_CHECKING([whether to build shared libraries]) +test "$can_build_shared" = "no" && enable_shared=no + +# On AIX, shared libraries and static libraries use the same namespace, and +# are all built from PIC. 
+case $host_os in +aix3*) + test "$enable_shared" = yes && enable_static=no + if test -n "$RANLIB"; then + archive_cmds="$archive_cmds~\$RANLIB \$lib" + postinstall_cmds='$RANLIB $lib' + fi + ;; +aix4* | aix5*) + if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then + test "$enable_shared" = yes && enable_static=no + fi + ;; +esac +AC_MSG_RESULT([$enable_shared]) + +AC_MSG_CHECKING([whether to build static libraries]) +# Make sure either enable_shared or enable_static is yes. +test "$enable_shared" = yes || enable_static=yes +AC_MSG_RESULT([$enable_static]) + +_LT_AC_TAGVAR(GCC, $1)="$G77" +_LT_AC_TAGVAR(LD, $1)="$LD" + +AC_LIBTOOL_PROG_COMPILER_PIC($1) +AC_LIBTOOL_PROG_CC_C_O($1) +AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1) +AC_LIBTOOL_PROG_LD_SHLIBS($1) +AC_LIBTOOL_SYS_DYNAMIC_LINKER($1) +AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1) + +AC_LIBTOOL_CONFIG($1) + +AC_LANG_POP +CC="$lt_save_CC" +])# AC_LIBTOOL_LANG_F77_CONFIG + + +# AC_LIBTOOL_LANG_GCJ_CONFIG +# -------------------------- +# Ensure that the configuration vars for the C compiler are +# suitably defined. Those variables are subsequently used by +# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'. +AC_DEFUN([AC_LIBTOOL_LANG_GCJ_CONFIG], [_LT_AC_LANG_GCJ_CONFIG(GCJ)]) +AC_DEFUN([_LT_AC_LANG_GCJ_CONFIG], +[AC_LANG_SAVE + +# Source file extension for Java test sources. +ac_ext=java + +# Object file extension for compiled Java test sources. +objext=o +_LT_AC_TAGVAR(objext, $1)=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="class foo {}" + +# Code to be used in simple link tests +lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }' + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. +_LT_AC_SYS_COMPILER + +# save warnings/boilerplate of simple test code +_LT_COMPILER_BOILERPLATE +_LT_LINKER_BOILERPLATE + +# Allow CC to be a program name with arguments. +lt_save_CC="$CC" +CC=${GCJ-"gcj"} +compiler=$CC +_LT_AC_TAGVAR(compiler, $1)=$CC +_LT_CC_BASENAME([$compiler]) + +# GCJ did not exist at the time GCC didn't implicitly link libc in. +_LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + +_LT_AC_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds + +AC_LIBTOOL_PROG_COMPILER_NO_RTTI($1) +AC_LIBTOOL_PROG_COMPILER_PIC($1) +AC_LIBTOOL_PROG_CC_C_O($1) +AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1) +AC_LIBTOOL_PROG_LD_SHLIBS($1) +AC_LIBTOOL_SYS_DYNAMIC_LINKER($1) +AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1) + +AC_LIBTOOL_CONFIG($1) + +AC_LANG_RESTORE +CC="$lt_save_CC" +])# AC_LIBTOOL_LANG_GCJ_CONFIG + + +# AC_LIBTOOL_LANG_RC_CONFIG +# ------------------------- +# Ensure that the configuration vars for the Windows resource compiler are +# suitably defined. Those variables are subsequently used by +# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'. +AC_DEFUN([AC_LIBTOOL_LANG_RC_CONFIG], [_LT_AC_LANG_RC_CONFIG(RC)]) +AC_DEFUN([_LT_AC_LANG_RC_CONFIG], +[AC_LANG_SAVE + +# Source file extension for RC test sources. +ac_ext=rc + +# Object file extension for compiled RC test sources. +objext=o +_LT_AC_TAGVAR(objext, $1)=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }' + +# Code to be used in simple link tests +lt_simple_link_test_code="$lt_simple_compile_test_code" + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. 
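# Command specs such as the archive_cmds/postinstall_cmds settings above chain
# several shell commands with `~' as the separator; ltmain.sh later splits on
# `~' and evals each piece in turn.  A minimal sketch of that consumption
# loop, with assumed tools and object files (foo.o, bar.o):
AR=ar RANLIB=ranlib lib=libfoo.a libobjs="foo.o bar.o"
cmds='$AR cru $lib $libobjs~$RANLIB $lib'
save_ifs=$IFS; IFS='~'
for cmd in $cmds; do
  IFS=$save_ifs
  eval "$cmd"
done
IFS=$save_ifs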
+_LT_AC_SYS_COMPILER + +# save warnings/boilerplate of simple test code +_LT_COMPILER_BOILERPLATE +_LT_LINKER_BOILERPLATE + +# Allow CC to be a program name with arguments. +lt_save_CC="$CC" +CC=${RC-"windres"} +compiler=$CC +_LT_AC_TAGVAR(compiler, $1)=$CC +_LT_CC_BASENAME([$compiler]) +_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes + +AC_LIBTOOL_CONFIG($1) + +AC_LANG_RESTORE +CC="$lt_save_CC" +])# AC_LIBTOOL_LANG_RC_CONFIG + + +# AC_LIBTOOL_CONFIG([TAGNAME]) +# ---------------------------- +# If TAGNAME is not passed, then create an initial libtool script +# with a default configuration from the untagged config vars. Otherwise +# add code to config.status for appending the configuration named by +# TAGNAME from the matching tagged config vars. +AC_DEFUN([AC_LIBTOOL_CONFIG], +[# The else clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. +if test -f "$ltmain"; then + # See if we are running on zsh, and set the options which allow our commands through + # without removal of \ escapes. + if test -n "${ZSH_VERSION+set}" ; then + setopt NO_GLOB_SUBST + fi + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC LTCFLAGS NM \ + SED SHELL STRIP \ + libname_spec library_names_spec soname_spec extract_expsyms_cmds \ + old_striplib striplib file_magic_cmd finish_cmds finish_eval \ + deplibs_check_method reload_flag reload_cmds need_locks \ + lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \ + lt_cv_sys_global_symbol_to_c_name_address \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + old_postinstall_cmds old_postuninstall_cmds \ + _LT_AC_TAGVAR(compiler, $1) \ + _LT_AC_TAGVAR(CC, $1) \ + _LT_AC_TAGVAR(LD, $1) \ + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1) \ + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1) \ + _LT_AC_TAGVAR(lt_prog_compiler_static, $1) \ + _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) \ + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1) \ + _LT_AC_TAGVAR(thread_safe_flag_spec, $1) \ + _LT_AC_TAGVAR(whole_archive_flag_spec, $1) \ + _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1) \ + _LT_AC_TAGVAR(old_archive_cmds, $1) \ + _LT_AC_TAGVAR(old_archive_from_new_cmds, $1) \ + _LT_AC_TAGVAR(predep_objects, $1) \ + _LT_AC_TAGVAR(postdep_objects, $1) \ + _LT_AC_TAGVAR(predeps, $1) \ + _LT_AC_TAGVAR(postdeps, $1) \ + _LT_AC_TAGVAR(compiler_lib_search_path, $1) \ + _LT_AC_TAGVAR(archive_cmds, $1) \ + _LT_AC_TAGVAR(archive_expsym_cmds, $1) \ + _LT_AC_TAGVAR(postinstall_cmds, $1) \ + _LT_AC_TAGVAR(postuninstall_cmds, $1) \ + _LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1) \ + _LT_AC_TAGVAR(allow_undefined_flag, $1) \ + _LT_AC_TAGVAR(no_undefined_flag, $1) \ + _LT_AC_TAGVAR(export_symbols_cmds, $1) \ + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) \ + _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1) \ + _LT_AC_TAGVAR(hardcode_libdir_separator, $1) \ + _LT_AC_TAGVAR(hardcode_automatic, $1) \ + _LT_AC_TAGVAR(module_cmds, $1) \ + _LT_AC_TAGVAR(module_expsym_cmds, $1) \ + _LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1) \ + _LT_AC_TAGVAR(fix_srcfile_path, $1) \ + _LT_AC_TAGVAR(exclude_expsyms, $1) \ + _LT_AC_TAGVAR(include_expsyms, $1); do + + case $var in + _LT_AC_TAGVAR(old_archive_cmds, $1) | \ + 
_LT_AC_TAGVAR(old_archive_from_new_cmds, $1) | \ + _LT_AC_TAGVAR(archive_cmds, $1) | \ + _LT_AC_TAGVAR(archive_expsym_cmds, $1) | \ + _LT_AC_TAGVAR(module_cmds, $1) | \ + _LT_AC_TAGVAR(module_expsym_cmds, $1) | \ + _LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1) | \ + _LT_AC_TAGVAR(export_symbols_cmds, $1) | \ + extract_expsyms_cmds | reload_cmds | finish_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + case $lt_echo in + *'\[$]0 --fallback-echo"') + lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\[$]0 --fallback-echo"[$]/[$]0 --fallback-echo"/'` + ;; + esac + +ifelse([$1], [], + [cfgfile="${ofile}T" + trap "$rm \"$cfgfile\"; exit 1" 1 2 15 + $rm -f "$cfgfile" + AC_MSG_NOTICE([creating $ofile])], + [cfgfile="$ofile"]) + + cat <<__EOF__ >> "$cfgfile" +ifelse([$1], [], +[#! $SHELL + +# `$echo "$cfgfile" | sed 's%^.*/%%'` - Provide generalized library-building support services. +# Generated automatically by $PROGRAM (GNU $PACKAGE $VERSION$TIMESTAMP) +# NOTE: Changes made to this file will be lost: look at ltmain.sh. +# +# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007 +# Free Software Foundation, Inc. +# +# This file is part of GNU Libtool: +# Originally by Gordon Matzigkeit , 1996 +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. +# +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that program. + +# A sed program that does not truncate output. +SED=$lt_SED + +# Sed that helps us avoid accidentally triggering echo(1) options like -n. +Xsed="$SED -e 1s/^X//" + +# The HP-UX ksh and POSIX shell print the target directory to stdout +# if CDPATH is set. +(unset CDPATH) >/dev/null 2>&1 && unset CDPATH + +# The names of the tagged configurations supported by this script. +available_tags= + +# ### BEGIN LIBTOOL CONFIG], +[# ### BEGIN LIBTOOL TAG CONFIG: $tagname]) + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. 
+build_libtool_need_lc=$_LT_AC_TAGVAR(archive_cmds_need_lc, $1) + +# Whether or not to disallow shared libs when runtime libs are static +allow_libtool_libs_with_static_runtimes=$_LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1) + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host +host_os=$host_os + +# The build system. +build_alias=$build_alias +build=$build +build_os=$build_os + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# A C compiler. +LTCC=$lt_LTCC + +# LTCC compiler flags. +LTCFLAGS=$lt_LTCFLAGS + +# A language-specific compiler. +CC=$lt_[]_LT_AC_TAGVAR(compiler, $1) + +# Is the compiler the GNU C compiler? +with_gcc=$_LT_AC_TAGVAR(GCC, $1) + +# An ERE matcher. +EGREP=$lt_EGREP + +# The linker used to build libraries. +LD=$lt_[]_LT_AC_TAGVAR(LD, $1) + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$lt_STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_wl, $1) + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Shared library suffix (normally ".so"). +shrext_cmds='$shrext_cmds' + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_pic, $1) +pic_mode=$pic_mode + +# What is the maximum length of a command? +max_cmd_len=$lt_cv_sys_max_cmd_len + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_[]_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1) + +# Must we lock files when doing compilation? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. +link_static_flag=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_static, $1) + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_[]_LT_AC_TAGVAR(export_dynamic_flag_spec, $1) + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_[]_LT_AC_TAGVAR(whole_archive_flag_spec, $1) + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_[]_LT_AC_TAGVAR(thread_safe_flag_spec, $1) + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. 
+# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_[]_LT_AC_TAGVAR(old_archive_cmds, $1) +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_[]_LT_AC_TAGVAR(old_archive_from_new_cmds, $1) + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_[]_LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1) + +# Commands used to build and install a shared archive. +archive_cmds=$lt_[]_LT_AC_TAGVAR(archive_cmds, $1) +archive_expsym_cmds=$lt_[]_LT_AC_TAGVAR(archive_expsym_cmds, $1) +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands used to build a loadable module (assumed same as above if empty) +module_cmds=$lt_[]_LT_AC_TAGVAR(module_cmds, $1) +module_expsym_cmds=$lt_[]_LT_AC_TAGVAR(module_expsym_cmds, $1) + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Dependencies to place before the objects being linked to create a +# shared library. +predep_objects=$lt_[]_LT_AC_TAGVAR(predep_objects, $1) + +# Dependencies to place after the objects being linked to create a +# shared library. +postdep_objects=$lt_[]_LT_AC_TAGVAR(postdep_objects, $1) + +# Dependencies to place before the objects being linked to create a +# shared library. +predeps=$lt_[]_LT_AC_TAGVAR(predeps, $1) + +# Dependencies to place after the objects being linked to create a +# shared library. +postdeps=$lt_[]_LT_AC_TAGVAR(postdeps, $1) + +# The library search path used internally by the compiler when linking +# a shared library. +compiler_lib_search_path=$lt_[]_LT_AC_TAGVAR(compiler_lib_search_path, $1) + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_[]_LT_AC_TAGVAR(allow_undefined_flag, $1) + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_[]_LT_AC_TAGVAR(no_undefined_flag, $1) + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. +global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$_LT_AC_TAGVAR(hardcode_action, $1) + +# Whether we should hardcode library paths into libraries. 
+hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_[]_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) + +# If ld is used when linking, flag to hardcode \$libdir into +# a binary during linking. This must work even if \$libdir does +# not exist. +hardcode_libdir_flag_spec_ld=$lt_[]_LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1) + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_[]_LT_AC_TAGVAR(hardcode_libdir_separator, $1) + +# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$_LT_AC_TAGVAR(hardcode_direct, $1) + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$_LT_AC_TAGVAR(hardcode_minus_L, $1) + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$_LT_AC_TAGVAR(hardcode_shlibpath_var, $1) + +# Set to yes if building a shared library automatically hardcodes DIR into the library +# and all subsequent libraries and executables linked against it. +hardcode_automatic=$_LT_AC_TAGVAR(hardcode_automatic, $1) + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$_LT_AC_TAGVAR(link_all_deplibs, $1) + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path=$lt_fix_srcfile_path + +# Set to yes if exported symbols are required. +always_export_symbols=$_LT_AC_TAGVAR(always_export_symbols, $1) + +# The commands to list exported symbols. +export_symbols_cmds=$lt_[]_LT_AC_TAGVAR(export_symbols_cmds, $1) + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_[]_LT_AC_TAGVAR(exclude_expsyms, $1) + +# Symbols that must always be exported. +include_expsyms=$lt_[]_LT_AC_TAGVAR(include_expsyms, $1) + +ifelse([$1],[], +[# ### END LIBTOOL CONFIG], +[# ### END LIBTOOL TAG CONFIG: $tagname]) + +__EOF__ + +ifelse([$1],[], [ + case $host_os in + aix3*) + cat <<\EOF >> "$cfgfile" + +# AIX sometimes has problems with the GCC collect2 program. For some +# reason, if we set the COLLECT_NAMES environment variable, the problems +# vanish in a puff of smoke. +if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES +fi +EOF + ;; + esac + + # We use sed instead of cat because bash on DJGPP gets confused if + # if finds mixed CR/LF and LF-only lines. Since sed operates in + # text mode, it properly converts lines to CR/LF. This bash problem + # is reportedly fixed, but why not run on old versions too? 
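# The hardcode_* settings written out above distinguish the ways a library
# directory can be supplied to, or baked into, a binary.  Illustrative gcc
# invocations (not taken from this file, paths assumed):
gcc -o prog prog.o -L/opt/foo/lib -lfoo -Wl,-rpath,/opt/foo/lib  # hardcode_libdir_flag_spec style (-rpath/-R)
gcc -o prog prog.o -L/opt/foo/lib -lfoo                          # hardcode_minus_L style
LD_LIBRARY_PATH=/opt/foo/lib ./prog                              # shlibpath_var style, resolved at run time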
+ sed '$q' "$ltmain" >> "$cfgfile" || (rm -f "$cfgfile"; exit 1) + + mv -f "$cfgfile" "$ofile" || \ + (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") + chmod +x "$ofile" +]) +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. + ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'` + if test -f "$ltmain_in"; then + test -f Makefile && make "$ltmain" + fi +fi +])# AC_LIBTOOL_CONFIG + + +# AC_LIBTOOL_PROG_COMPILER_NO_RTTI([TAGNAME]) +# ------------------------------------------- +AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_NO_RTTI], +[AC_REQUIRE([_LT_AC_SYS_COMPILER])dnl + +_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= + +if test "$GCC" = yes; then + _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' + + AC_LIBTOOL_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions], + lt_cv_prog_compiler_rtti_exceptions, + [-fno-rtti -fno-exceptions], [], + [_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"]) +fi +])# AC_LIBTOOL_PROG_COMPILER_NO_RTTI + + +# AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE +# --------------------------------- +AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE], +[AC_REQUIRE([AC_CANONICAL_HOST]) +AC_REQUIRE([LT_AC_PROG_SED]) +AC_REQUIRE([AC_PROG_NM]) +AC_REQUIRE([AC_OBJEXT]) +# Check for command to grab the raw symbol name followed by C symbol from nm. +AC_MSG_CHECKING([command to parse $NM output from $compiler object]) +AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe], [ -AC_CHECK_HEADER([sys/mman.h], - [libffi_header_sys_mman_h=yes], [libffi_header_sys_mman_h=no]) -AC_CHECK_FUNC([mmap], [libffi_func_mmap=yes], [libffi_func_mmap=no]) -if test "$libffi_header_sys_mman_h" != yes \ - || test "$libffi_func_mmap" != yes; then - ac_cv_func_mmap_file=no - ac_cv_func_mmap_dev_zero=no - ac_cv_func_mmap_anon=no -else - AC_CACHE_CHECK([whether read-only mmap of a plain file works], - ac_cv_func_mmap_file, - [# Add a system to this blacklist if - # mmap(0, stat_size, PROT_READ, MAP_PRIVATE, fd, 0) doesn't return a - # memory area containing the same data that you'd get if you applied - # read() to the same fd. The only system known to have a problem here - # is VMS, where text files have record structure. - case "$host_os" in - vms* | ultrix*) - ac_cv_func_mmap_file=no ;; - *) - ac_cv_func_mmap_file=yes;; - esac]) - AC_CACHE_CHECK([whether mmap from /dev/zero works], - ac_cv_func_mmap_dev_zero, - [# Add a system to this blacklist if it has mmap() but /dev/zero - # does not exist, or if mmapping /dev/zero does not give anonymous - # zeroed pages with both the following properties: - # 1. If you map N consecutive pages in with one call, and then - # unmap any subset of those pages, the pages that were not - # explicitly unmapped remain accessible. - # 2. If you map two adjacent blocks of memory and then unmap them - # both at once, they must both go away. - # Systems known to be in this category are Windows (all variants), - # VMS, and Darwin. - case "$host_os" in - vms* | cygwin* | pe | mingw* | darwin* | ultrix* | hpux10* | hpux11.00) - ac_cv_func_mmap_dev_zero=no ;; - *) - ac_cv_func_mmap_dev_zero=yes;; - esac]) - - # Unlike /dev/zero, the MAP_ANON(YMOUS) defines can be probed for. - AC_CACHE_CHECK([for MAP_ANON(YMOUS)], ac_cv_decl_map_anon, - [AC_TRY_COMPILE( -[#include -#include -#include +# These are sane defaults that work on at least a few old systems. 
+# [They come from Ultrix. What could be older than Ultrix?!! ;)] + +# Character class describing NM global symbol codes. +symcode='[[BCDEGRST]]' + +# Regexp to match symbols that can be accessed directly from C. +sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)' + +# Transform an extracted symbol line into a proper C declaration +lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^. .* \(.*\)$/extern int \1;/p'" + +# Transform an extracted symbol line into symbol name and symbol address +lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + +# Define system-specific variables. +case $host_os in +aix*) + symcode='[[BCDT]]' + ;; +cygwin* | mingw* | pw32*) + symcode='[[ABCDGISTW]]' + ;; +hpux*) # Its linker distinguishes data from code symbols + if test "$host_cpu" = ia64; then + symcode='[[ABCDEGRST]]' + fi + lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" + lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + ;; +linux* | k*bsd*-gnu) + if test "$host_cpu" = ia64; then + symcode='[[ABCDGIRSTW]]' + lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" + lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + fi + ;; +irix* | nonstopux*) + symcode='[[BCDEGRST]]' + ;; +osf*) + symcode='[[BCDEGQRST]]' + ;; +solaris*) + symcode='[[BDRT]]' + ;; +sco3.2v5*) + symcode='[[DT]]' + ;; +sysv4.2uw2*) + symcode='[[DT]]' + ;; +sysv5* | sco5v6* | unixware* | OpenUNIX*) + symcode='[[ABDT]]' + ;; +sysv4) + symcode='[[DFNSTU]]' + ;; +esac + +# Handle CRLF in mingw tool chain +opt_cr= +case $build_os in +mingw*) + opt_cr=`echo 'x\{0,1\}' | tr x '\015'` # option cr in regexp + ;; +esac + +# If we're using GNU nm, then use its standard symbol codes. +case `$NM -V 2>&1` in +*GNU* | *'with BFD'*) + symcode='[[ABCDGIRSTW]]' ;; +esac + +# Try without a prefix undercore, then with it. +for ac_symprfx in "" "_"; do + + # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. + symxfrm="\\1 $ac_symprfx\\2 \\2" + + # Write the raw and C identifiers. + lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" + + # Check to see that the pipe works correctly. + pipe_works=no + + rm -f conftest* + cat > conftest.$ac_ext < $nlist) && test -s "$nlist"; then + # Try sorting and uniquifying the output. + if sort "$nlist" | uniq > "$nlist"T; then + mv -f "$nlist"T "$nlist" + else + rm -f "$nlist"T + fi + + # Make sure that we snagged all the symbols we need. + if grep ' nm_test_var$' "$nlist" >/dev/null; then + if grep ' nm_test_func$' "$nlist" >/dev/null; then + cat < conftest.$ac_ext +#ifdef __cplusplus +extern "C" { +#endif + +EOF + # Now generate the symbol file. + eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | grep -v main >> conftest.$ac_ext' + + cat <> conftest.$ac_ext +#if defined (__STDC__) && __STDC__ +# define lt_ptr_t void * +#else +# define lt_ptr_t char * +# define const +#endif + +/* The mapping between symbol names and symbols. 
*/ +const struct { + const char *name; + lt_ptr_t address; +} +lt_preloaded_symbols[[]] = +{ +EOF + $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (lt_ptr_t) \&\2},/" < "$nlist" | grep -v main >> conftest.$ac_ext + cat <<\EOF >> conftest.$ac_ext + {0, (lt_ptr_t) 0} +}; + +#ifdef __cplusplus +} +#endif +EOF + # Now try linking the two files. + mv conftest.$ac_objext conftstm.$ac_objext + lt_save_LIBS="$LIBS" + lt_save_CFLAGS="$CFLAGS" + LIBS="conftstm.$ac_objext" + CFLAGS="$CFLAGS$_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)" + if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then + pipe_works=yes + fi + LIBS="$lt_save_LIBS" + CFLAGS="$lt_save_CFLAGS" + else + echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD + fi + else + echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD + fi + else + echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD + fi + else + echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD + cat conftest.$ac_ext >&5 + fi + rm -f conftest* conftst* + + # Do not use the global_symbol_pipe unless it works. + if test "$pipe_works" = yes; then + break + else + lt_cv_sys_global_symbol_pipe= + fi +done +]) +if test -z "$lt_cv_sys_global_symbol_pipe"; then + lt_cv_sys_global_symbol_to_cdecl= +fi +if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then + AC_MSG_RESULT(failed) +else + AC_MSG_RESULT(ok) +fi +]) # AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE -#ifndef MAP_ANONYMOUS -#define MAP_ANONYMOUS MAP_ANON -#endif + +# AC_LIBTOOL_PROG_COMPILER_PIC([TAGNAME]) +# --------------------------------------- +AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_PIC], +[_LT_AC_TAGVAR(lt_prog_compiler_wl, $1)= +_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)= +_LT_AC_TAGVAR(lt_prog_compiler_static, $1)= + +AC_MSG_CHECKING([for $compiler option to produce PIC]) + ifelse([$1],[CXX],[ + # C++ specific cases for pic, static, wl, etc. + if test "$GXX" = yes; then + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static' + + case $host_os in + aix*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + fi + ;; + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' + ;; + beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + mingw* | cygwin* | os2* | pw32*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + # Although the cygwin gcc ignores -fPIC, still need this for old-style + # (--disable-auto-import) libraries + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT' + ;; + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' + ;; + *djgpp*) + # DJGPP does not support shared libraries at all + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)= + ;; + interix[[3-9]]*) + # Interix 3.x gcc -fpic/-fPIC options generate broken code. + # Instead, we relocate shared libraries at runtime. 
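# With the defaults chosen by AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE above
# (symcode='[BCDEGRST]', no leading underscore), the generated pipe boils
# down to one sed script over nm output.  A stand-alone sketch on an assumed
# scratch object:
cat > conftest.c <<'EOF'
char nm_test_var;
void nm_test_func (void) {}
int main (void) { return 0; }
EOF
${CC-cc} -c conftest.c
nm conftest.o \
  | sed -n -e 's/^.*[ ]\([BCDEGRST][BCDEGRST]*\)[ ][ ]*\([_A-Za-z][_A-Za-z0-9]*\)$/\1 \2 \2/p'
# prints lines such as "T nm_test_func nm_test_func", i.e. "code name C-name"
rm -f conftest.c conftest.o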
+ ;; + sysv4*MP*) + if test -d /usr/nec; then + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic + fi + ;; + hpux*) + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' + ;; + esac + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' + ;; + esac + else + case $host_os in + aix4* | aix5*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + else + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' + fi + ;; + chorus*) + case $cc_basename in + cxch68*) + # Green Hills C++ Compiler + # _LT_AC_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" + ;; + esac + ;; + darwin*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + case $cc_basename in + xlc*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-qnocommon' + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + ;; + esac + ;; + dgux*) + case $cc_basename in + ec++*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + ;; + ghcx*) + # Green Hills C++ Compiler + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic' + ;; + *) + ;; + esac + ;; + freebsd* | dragonfly*) + # FreeBSD uses GNU C++ + ;; + hpux9* | hpux10* | hpux11*) + case $cc_basename in + CC*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' + if test "$host_cpu" != ia64; then + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='+Z' + fi + ;; + aCC*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='+Z' + ;; + esac + ;; + *) + ;; + esac + ;; + interix*) + # This is c89, which is MS Visual C++ (no shared libs) + # Anyone wants to do a port? + ;; + irix5* | irix6* | nonstopux*) + case $cc_basename in + CC*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + # CC pic flag -KPIC is the default. + ;; + *) + ;; + esac + ;; + linux* | k*bsd*-gnu) + case $cc_basename in + KCC*) + # KAI C++ Compiler + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' + ;; + icpc* | ecpc*) + # Intel C++ + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static' + ;; + pgCC*) + # Portland Group C++ compiler. + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + cxx*) + # Compaq C++ + # Make sure the PIC flag is empty. It appears that all Alpha + # Linux and Compaq Tru64 Unix objects are PIC. 
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)= + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C++ 5.9 + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' + ;; + esac + ;; + esac + ;; + lynxos*) + ;; + m88k*) + ;; + mvs*) + case $cc_basename in + cxx*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall' + ;; + *) + ;; + esac + ;; + netbsd*) + ;; + osf3* | osf4* | osf5*) + case $cc_basename in + KCC*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' + ;; + RCC*) + # Rational C++ 2.4.1 + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic' + ;; + cxx*) + # Digital/Compaq C++ + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + # Make sure the PIC flag is empty. It appears that all Alpha + # Linux and Compaq Tru64 Unix objects are PIC. + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)= + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + ;; + *) + ;; + esac + ;; + psos*) + ;; + solaris*) + case $cc_basename in + CC*) + # Sun C++ 4.2, 5.x and Centerline C++ + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' + ;; + gcx*) + # Green Hills C++ Compiler + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' + ;; + *) + ;; + esac + ;; + sunos4*) + case $cc_basename in + CC*) + # Sun C++ 4.x + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + lcc*) + # Lucid + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic' + ;; + *) + ;; + esac + ;; + tandem*) + case $cc_basename in + NCC*) + # NonStop-UX NCC 3.20 + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + ;; + *) + ;; + esac + ;; + sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) + case $cc_basename in + CC*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + esac + ;; + vxworks*) + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no + ;; + esac + fi ], -[int n = MAP_ANONYMOUS;], - ac_cv_decl_map_anon=yes, - ac_cv_decl_map_anon=no)]) +[ + if test "$GCC" = yes; then + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static' - if test $ac_cv_decl_map_anon = no; then - ac_cv_func_mmap_anon=no - else - AC_CACHE_CHECK([whether mmap with MAP_ANON(YMOUS) works], - ac_cv_func_mmap_anon, - [# Add a system to this blacklist if it has mmap() and MAP_ANON or - # MAP_ANONYMOUS, but using mmap(..., MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) - # doesn't give anonymous zeroed pages with the same properties listed - # above for use of /dev/zero. - # Systems known to be in this category are Windows, VMS, and SCO Unix. - case "$host_os" in - vms* | cygwin* | pe | mingw* | sco* | udk* ) - ac_cv_func_mmap_anon=no ;; - *) - ac_cv_func_mmap_anon=yes;; - esac]) + case $host_os in + aix*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + fi + ;; + + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. 
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' + ;; + + beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + # Although the cygwin gcc ignores -fPIC, still need this for old-style + # (--disable-auto-import) libraries + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT' + ;; + + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' + ;; + + interix[[3-9]]*) + # Interix 3.x gcc -fpic/-fPIC options generate broken code. + # Instead, we relocate shared libraries at runtime. + ;; + + msdosdjgpp*) + # Just because we use GCC doesn't mean we suddenly get shared libraries + # on systems that don't support them. + _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no + enable_shared=no + ;; + + sysv4*MP*) + if test -d /usr/nec; then + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic + fi + ;; + + hpux*) + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' + ;; + esac + ;; + + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' + ;; + esac + else + # PORTME Check for flag to pass linker flags through the system compiler. + case $host_os in + aix*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + else + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' + fi + ;; + darwin*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + case $cc_basename in + xlc*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-qnocommon' + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + ;; + esac + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT' + ;; + + hpux9* | hpux10* | hpux11*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='+Z' + ;; + esac + # Is there a better lt_prog_compiler_static that works with the bundled CC? + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' + ;; + + irix5* | irix6* | nonstopux*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + # PIC (with -KPIC) is the default. 
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + ;; + + newsos6) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + + linux* | k*bsd*-gnu) + case $cc_basename in + icc* | ecc*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static' + ;; + pgcc* | pgf77* | pgf90* | pgf95*) + # Portland Group compilers (*not* the Pentium gcc compiler, + # which looks to be a dead project) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + ccc*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + # All Alpha code is PIC. + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C 5.9 + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + ;; + *Sun\ F*) + # Sun Fortran 8.3 passes all unrecognized flags to the linker + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='' + ;; + esac + ;; + esac + ;; + + osf3* | osf4* | osf5*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + # All OSF/1 code is PIC. + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + ;; + + rdos*) + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' + ;; + + solaris*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + case $cc_basename in + f77* | f90* | f95*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';; + *) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';; + esac + ;; + + sunos4*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + + sysv4 | sysv4.2uw2* | sysv4.3*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + + sysv4*MP*) + if test -d /usr/nec ;then + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + fi + ;; + + sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + + unicos*) + _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' + _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no + ;; + + uts4*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic' + _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' + ;; + + *) + _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no + ;; + esac + fi +]) +AC_MSG_RESULT([$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)]) + +# +# Check to make sure the PIC flag actually works. 
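# The probe just below (AC_LIBTOOL_COMPILER_OPTION) verifies the flag chosen
# above by compiling a throwaway source with "<pic flag> -DPIC" and treating
# unexpected compiler chatter as failure; it also subtracts the boilerplate
# warnings collected earlier.  A simplified stand-alone sketch, assuming gcc
# and the scratch file conftest.c:
echo 'int foo (void) { return 0; }' > conftest.c
if ${CC-cc} -c -fPIC -DPIC conftest.c 2> conftest.err && test ! -s conftest.err
then
  echo "PIC flag -fPIC accepted"
else
  echo "PIC flag -fPIC rejected or produced warnings"
fi
rm -f conftest.c conftest.o conftest.err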
+# +if test -n "$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)"; then + AC_LIBTOOL_COMPILER_OPTION([if $compiler PIC flag $_LT_AC_TAGVAR(lt_prog_compiler_pic, $1) works], + _LT_AC_TAGVAR(lt_prog_compiler_pic_works, $1), + [$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)ifelse([$1],[],[ -DPIC],[ifelse([$1],[CXX],[ -DPIC],[])])], [], + [case $_LT_AC_TAGVAR(lt_prog_compiler_pic, $1) in + "" | " "*) ;; + *) _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)" ;; + esac], + [_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)= + _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no]) +fi +case $host_os in + # For platforms which do not support PIC, -DPIC is meaningless: + *djgpp*) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)= + ;; + *) + _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)ifelse([$1],[],[ -DPIC],[ifelse([$1],[CXX],[ -DPIC],[])])" + ;; +esac + +# +# Check to make sure the static flag actually works. +# +wl=$_LT_AC_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_AC_TAGVAR(lt_prog_compiler_static, $1)\" +AC_LIBTOOL_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works], + _LT_AC_TAGVAR(lt_prog_compiler_static_works, $1), + $lt_tmp_static_flag, + [], + [_LT_AC_TAGVAR(lt_prog_compiler_static, $1)=]) +]) + + +# AC_LIBTOOL_PROG_LD_SHLIBS([TAGNAME]) +# ------------------------------------ +# See if the linker supports building shared libraries. +AC_DEFUN([AC_LIBTOOL_PROG_LD_SHLIBS], +[AC_REQUIRE([LT_AC_PROG_SED])dnl +AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) +ifelse([$1],[CXX],[ + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + case $host_os in + aix4* | aix5*) + # If we're using GNU nm, then we don't want the "-C" option. 
+ # -C means demangle to AIX nm, but means don't demangle with GNU nm + if $NM -V 2>&1 | grep 'GNU' > /dev/null; then + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols' + else + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols' + fi + ;; + pw32*) + _LT_AC_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds" + ;; + cygwin* | mingw*) + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;/^.*[[ ]]__nm__/s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols' + ;; + *) + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + ;; + esac +],[ + runpath_var= + _LT_AC_TAGVAR(allow_undefined_flag, $1)= + _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=no + _LT_AC_TAGVAR(archive_cmds, $1)= + _LT_AC_TAGVAR(archive_expsym_cmds, $1)= + _LT_AC_TAGVAR(old_archive_From_new_cmds, $1)= + _LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1)= + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)= + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)= + _LT_AC_TAGVAR(thread_safe_flag_spec, $1)= + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)= + _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)= + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)= + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_minus_L, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported + _LT_AC_TAGVAR(link_all_deplibs, $1)=unknown + _LT_AC_TAGVAR(hardcode_automatic, $1)=no + _LT_AC_TAGVAR(module_cmds, $1)= + _LT_AC_TAGVAR(module_expsym_cmds, $1)= + _LT_AC_TAGVAR(always_export_symbols, $1)=no + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + # include_expsyms should be a list of space-separated symbols to be *always* + # included in the symbol list + _LT_AC_TAGVAR(include_expsyms, $1)= + # exclude_expsyms can be an extended regexp of symbols to exclude + # it will be wrapped by ` (' and `)$', so one must not match beginning or + # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', + # as well as any symbol that contains `d'. + _LT_AC_TAGVAR(exclude_expsyms, $1)="_GLOBAL_OFFSET_TABLE_" + # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out + # platforms (ab)use it in PIC code, but their linkers get confused if + # the symbol is explicitly referenced. Since portable code cannot + # rely on this symbol name, it's probably fine to never include it in + # preloaded symbol tables. + extract_expsyms_cmds= + # Just being paranoid about ensuring that cc_basename is set. + _LT_CC_BASENAME([$compiler]) + case $host_os in + cygwin* | mingw* | pw32*) + # FIXME: the MSVC++ port hasn't been tested in a loooong time + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. 
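# De-quoting the m4 above, the AIX export_symbols_cmds is simply nm piped
# through awk to keep text/data/bss symbols whose names are not dot-names.
# Run by hand on assumed objects it looks like:
nm -BCpg foo.o bar.o \
  | awk '{ if ((($2 == "T") || ($2 == "D") || ($2 == "B")) && (substr($3,1,1) != ".")) { print $3 } }' \
  | sort -u > libfoo.exp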
+ if test "$GCC" != yes; then + with_gnu_ld=no + fi + ;; + interix*) + # we just hope/assume this is gcc and not c89 (= MSVC++) + with_gnu_ld=yes + ;; + openbsd*) + with_gnu_ld=no + ;; + esac + + _LT_AC_TAGVAR(ld_shlibs, $1)=yes + if test "$with_gnu_ld" = yes; then + # If archive_cmds runs LD, not CC, wlarc should be empty + wlarc='${wl}' + + # Set some defaults for GNU ld with shared library support. These + # are reset later if shared libraries are not supported. Putting them + # here allows them to be overridden if necessary. + runpath_var=LD_RUN_PATH + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + # ancient GNU ld didn't support --whole-archive et. al. + if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)= + fi + supports_anon_versioning=no + case `$LD -v 2>/dev/null` in + *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11 + *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... + *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... + *\ 2.11.*) ;; # other 2.11 versions + *) supports_anon_versioning=yes ;; + esac + + # See if GNU ld supports shared libraries. + case $host_os in + aix3* | aix4* | aix5*) + # On AIX/PPC, the GNU linker is very broken + if test "$host_cpu" != ia64; then + _LT_AC_TAGVAR(ld_shlibs, $1)=no + cat <&2 + +*** Warning: the GNU linker, at least up to release 2.9.1, is reported +*** to be unable to reliably create shared libraries on AIX. +*** Therefore, libtool is disabling shared libraries support. If you +*** really care for shared libraries, you may want to modify your PATH +*** so that a non-GNU linker is found, and then restart. + +EOF + fi + ;; + + amigaos*) + _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + + # Samuel A. Falvo II reports + # that the semantics of dynamic libraries on AmigaOS, at least up + # to version 4, is to share data among multiple programs linked + # with the same dynamic library. Since this doesn't match the + # behavior of shared libraries on other platforms, we can't use + # them. + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + + beos*) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. FIXME + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + + cygwin* | mingw* | pw32*) + # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, + # as there is no search path for DLLs. 
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + _LT_AC_TAGVAR(always_export_symbols, $1)=no + _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=yes + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/'\'' -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols' + + if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is; otherwise, prepend... + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + + interix[[3-9]]*) + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. + # Instead, shared libraries are loaded at an image base (0x10000000 by + # default) and relocated if they conflict, which is a slow very memory + # consuming and fragmenting process. To avoid this, we pick a random, + # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link + # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
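# The cygwin/mingw archive_expsym_cmds above accepts either a ready-made .def
# file (first line "EXPORTS") or a bare symbol list that needs the EXPORTS
# header prepended before the DLL is linked.  A sketch with assumed file
# names (libfoo.sym, foo.o):
if test "x`sed 1q libfoo.sym`" = xEXPORTS; then
  cp libfoo.sym libfoo.def
else
  echo EXPORTS > libfoo.def
  cat libfoo.sym >> libfoo.def
fi
gcc -shared libfoo.def foo.o -o cygfoo-1.dll \
    -Wl,--enable-auto-image-base -Xlinker --out-implib -Xlinker libfoo.dll.a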
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + ;; + + gnu* | linux* | k*bsd*-gnu) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + tmp_addflag= + case $cc_basename,$host_cpu in + pgcc*) # Portland Group C compiler + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_addflag=' $pic_flag' + ;; + pgf77* | pgf90* | pgf95*) # Portland Group f77 and f90 compilers + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_addflag=' $pic_flag -Mnomain' ;; + ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 + tmp_addflag=' -i_dynamic' ;; + efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 + tmp_addflag=' -i_dynamic -nofor_main' ;; + ifc* | ifort*) # Intel Fortran compiler + tmp_addflag=' -nofor_main' ;; + esac + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) # Sun C 5.9 + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_sharedflag='-G' ;; + *Sun\ F*) # Sun Fortran 8.3 + tmp_sharedflag='-G' ;; + *) + tmp_sharedflag='-shared' ;; + esac + _LT_AC_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + + if test $supports_anon_versioning = yes; then + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $output_objdir/$libname.ver~ + cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ + $echo "local: *; };" >> $output_objdir/$libname.ver~ + $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' + fi + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' + wlarc= + else + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + fi + ;; + + solaris*) + if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then + _LT_AC_TAGVAR(ld_shlibs, $1)=no + cat <&2 + +*** Warning: The releases 2.8.* of the GNU linker cannot reliably +*** create shared libraries on Solaris systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.9.1 or newer. 
Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. + +EOF + elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + + sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) + case `$LD -v 2>&1` in + *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*) + _LT_AC_TAGVAR(ld_shlibs, $1)=no + cat <<_LT_EOF 1>&2 + +*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not +*** reliably create shared libraries on SCO systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.16.91.0.3 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. + +_LT_EOF + ;; + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='`test -z "$SCOABSPATH" && echo ${wl}-rpath,$libdir`' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname,-retain-symbols-file,$export_symbols -o $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + esac + ;; + + sunos4*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' + wlarc= + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + esac + + if test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = no; then + runpath_var= + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)= + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)= + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)= + fi + else + # PORTME fill in a description of your system's linker (not GNU ld) + case $host_os in + aix3*) + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + _LT_AC_TAGVAR(always_export_symbols, $1)=yes + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' + # Note: this linker hardcodes the directories in LIBPATH if there + # are no directories specified by -L. + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then + # Neither direct hardcoding nor static linking is supported with a + # broken collect2. + _LT_AC_TAGVAR(hardcode_direct, $1)=unsupported + fi + ;; + + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. 
+ aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + # If we're using GNU nm, then we don't want the "-C" option. + # -C means demangle to AIX nm, but means don't demangle with GNU nm + if $NM -V 2>&1 | grep 'GNU' > /dev/null; then + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols' + else + _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols' + fi + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[[23]]|aix4.[[23]].*|aix5*) + for ld_flag in $LDFLAGS; do + if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + aix_use_runtimelinking=yes + break + fi + done + ;; + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. + + _LT_AC_TAGVAR(archive_cmds, $1)='' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=':' + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + + if test "$GCC" = yes; then + case $host_os in aix4.[[012]]|aix4.[[012]].*) + # We only want to do this on AIX 4.2 and lower, the check + # below for broken collect2 doesn't work under 4.3+ + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + : + else + # We have old collect2 + _LT_AC_TAGVAR(hardcode_direct, $1)=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)= + fi + ;; + esac + shared_flag='-shared' + if test "$aix_use_runtimelinking" = yes; then + shared_flag="$shared_flag "'${wl}-G' + fi + else + # not using gcc + if test "$host_cpu" = ia64; then + # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release + # chokes on -Wl,-G. The following line is correct: + shared_flag='-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall does not export symbols beginning with + # underscore (_), so it is better to generate a list of symbols to export. + _LT_AC_TAGVAR(always_export_symbols, $1)=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + _LT_AC_TAGVAR(allow_undefined_flag, $1)='-berok' + # Determine the default libpath from the value encoded in an empty executable. 
+ _LT_AC_SYS_LIBPATH_AIX + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" + _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib' + _LT_AC_TAGVAR(allow_undefined_flag, $1)="-z nodefs" + _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + else + # Determine the default libpath from the value encoded in an empty executable. + _LT_AC_SYS_LIBPATH_AIX + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + _LT_AC_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok' + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok' + # Exported symbols can be pulled into shared objects from archives + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='$convenience' + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes + # This is similar to how AIX traditionally builds its shared libraries. + _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + fi + fi + ;; + + amigaos*) + _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + # see comment about different semantics on the GNU ld section + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + + bsdi[[45]]*) + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic + ;; + + cygwin* | mingw* | pw32*) + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + # Tell ltmain to make .lib files, not .a files. + libext=lib + # Tell ltmain to make .dll files, not .so files. + shrext_cmds=".dll" + # FIXME: Setting linknames here is a bad hack. + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames=' + # The linker will automatically build a .lib file if we build a DLL. + _LT_AC_TAGVAR(old_archive_From_new_cmds, $1)='true' + # FIXME: Should let the user specify the lib program. 
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs' + _LT_AC_TAGVAR(fix_srcfile_path, $1)='`cygpath -w "$srcfile"`' + _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=yes + ;; + + darwin* | rhapsody*) + case $host_os in + rhapsody* | darwin1.[[012]]) + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-undefined ${wl}suppress' + ;; + *) # Darwin 1.3 on + if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + else + case ${MACOSX_DEPLOYMENT_TARGET} in + 10.[[012]]) + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + ;; + 10.*) + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-undefined ${wl}dynamic_lookup' + ;; + esac + fi + ;; + esac + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_automatic, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='' + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + if test "$GCC" = yes ; then + output_verbose_link_cmd='echo' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + _LT_AC_TAGVAR(module_cmds, $1)='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + case $cc_basename in + xlc*) + output_verbose_link_cmd='echo' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}`echo $rpath/$soname` $xlcverstring' + _LT_AC_TAGVAR(module_cmds, $1)='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}$rpath/$soname $xlcverstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + ;; + *) + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + fi + ;; + + dgux*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + freebsd1*) + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; 
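The archive_cmds and archive_expsym_cmds values being set throughout this macro are '~'-separated command templates: libtool's ltmain.sh later splits them on '~' and evals each piece with $CC, $lib, $soname and the rest already defined. A rough sketch of that expansion, for illustration only -- the variable values below are made up and this snippet is not part of the patch:

  CC=gcc; lib=libfoo.so.1; libobjs='a.o b.o'; compiler_flags=''
  archive_cmds='$CC -shared $libobjs $compiler_flags -o $lib~echo built $lib'
  save_ifs=$IFS; IFS='~'
  for cmd in $archive_cmds; do
    IFS=$save_ifs
    eval "echo \"would run: $cmd\""    # the real ltmain.sh evals "$cmd" itself
  done
  IFS=$save_ifs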
+ + # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor + # support. Future versions do this automatically, but an explicit c++rt0.o + # does not break anything, and helps significantly (at the cost of a little + # extra space). + freebsd2.2*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + # Unfortunately, older versions of FreeBSD 2 do not have this feature. + freebsd2*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + # FreeBSD 3 and greater uses gcc -shared to do shared libraries. + freebsd* | dragonfly*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + hpux9*) + if test "$GCC" = yes; then + _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + fi + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + ;; + + hpux10*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' + fi + if test "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. 
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + fi + ;; + + hpux11*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + case $host_cpu in + hppa*64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + else + case $host_cpu in + hppa*64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + fi + if test "$with_gnu_ld" = no; then + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + + case $host_cpu in + hppa*64*|ia64*) + _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='+b $libdir' + _LT_AC_TAGVAR(hardcode_direct, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + *) + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + ;; + esac + fi + ;; + + irix5* | irix6* | nonstopux*) + if test "$GCC" = yes; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='-rpath $libdir' + fi + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out + else + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF + fi + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + newsos6) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + openbsd*) + if test -f /usr/libexec/ld.so; then + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC 
-shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + else + case $host_os in + openbsd[[01]].* | openbsd2.[[0-7]] | openbsd2.[[0-7]].*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + ;; + *) + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' + ;; + esac + fi + else + _LT_AC_TAGVAR(ld_shlibs, $1)=no + fi + ;; + + os2*) + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported + _LT_AC_TAGVAR(archive_cmds, $1)='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' + _LT_AC_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + ;; + + osf3*) + if test "$GCC" = yes; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + ;; + + osf4* | osf5*) # as osf3* with the addition of -msym flag + if test "$GCC" = yes; then + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + else + _LT_AC_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~ + $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib~$rm $lib.exp' + + # Both c and cxx compiler support -rpath directly + 
_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' + fi + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=: + ;; + + solaris*) + _LT_AC_TAGVAR(no_undefined_flag, $1)=' -z text' + if test "$GCC" = yes; then + wlarc='${wl}' + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp' + else + wlarc='' + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + fi + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + case $host_os in + solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; + *) + # The compiler driver will combine and reorder linker options, + # but understands `-z linker_flag'. GCC discards it without `$wl', + # but is careful enough not to reorder. + # Supported since Solaris 2.6 (maybe 2.5.1?) + if test "$GCC" = yes; then + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + else + _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' + fi + ;; + esac + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + ;; + + sunos4*) + if test "x$host_vendor" = xsequent; then + # Use $CC to link under sequent, because it throws in some extra .o + # files that make .init and .fini sections work. + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' + fi + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes + _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + sysv4) + case $host_vendor in + sni) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_direct, $1)=yes # is this really true??? + ;; + siemens) + ## LD is ld it makes a PLAMLIB + ## CC just makes a GrossModule. 
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs' + _LT_AC_TAGVAR(hardcode_direct, $1)=no + ;; + motorola) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie + ;; + esac + runpath_var='LD_RUN_PATH' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + sysv4.3*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport' + ;; + + sysv4*MP*) + if test -d /usr/nec; then + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + _LT_AC_TAGVAR(ld_shlibs, $1)=yes + fi + ;; + + sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) + _LT_AC_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + sysv5* | sco3.2v5* | sco5v6*) + # Note: We can NOT use -z defs as we might desire, because we do not + # link with -lc, and that would cause any symbols used from libc to + # always be unresolved, which means just about no library would + # ever link correctly. If we're not using GNU ld we use -z text + # though, which does catch some bad symbols but isn't as heavy-handed + # as -z defs. 
+ _LT_AC_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' + _LT_AC_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs' + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' + _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=':' + _LT_AC_TAGVAR(link_all_deplibs, $1)=yes + _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport' + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + else + _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + uts4*) + _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no + ;; + + *) + _LT_AC_TAGVAR(ld_shlibs, $1)=no + ;; + esac + fi +]) +AC_MSG_RESULT([$_LT_AC_TAGVAR(ld_shlibs, $1)]) +test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no + +# +# Do we need to explicitly link libc? +# +case "x$_LT_AC_TAGVAR(archive_cmds_need_lc, $1)" in +x|xyes) + # Assume -lc should be added + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes + + if test "$enable_shared" = yes && test "$GCC" = yes; then + case $_LT_AC_TAGVAR(archive_cmds, $1) in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + AC_MSG_CHECKING([whether -lc should be explicitly linked in]) + $rm conftest* + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + if AC_TRY_EVAL(ac_compile) 2>conftest.err; then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$_LT_AC_TAGVAR(lt_prog_compiler_wl, $1) + pic_flag=$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1) + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. + libname=conftest + lt_save_allow_undefined_flag=$_LT_AC_TAGVAR(allow_undefined_flag, $1) + _LT_AC_TAGVAR(allow_undefined_flag, $1)= + if AC_TRY_EVAL(_LT_AC_TAGVAR(archive_cmds, $1) 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) + then + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no + else + _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes + fi + _LT_AC_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi + $rm conftest* + AC_MSG_RESULT([$_LT_AC_TAGVAR(archive_cmds_need_lc, $1)]) + ;; + esac + fi + ;; +esac +])# AC_LIBTOOL_PROG_LD_SHLIBS + + +# _LT_AC_FILE_LTDLL_C +# ------------------- +# Be careful that the start marker always follows a newline. 
+AC_DEFUN([_LT_AC_FILE_LTDLL_C], [ +# /* ltdll.c starts here */ +# #define WIN32_LEAN_AND_MEAN +# #include <windows.h> +# #undef WIN32_LEAN_AND_MEAN +# #include <stdio.h> +# +# #ifndef __CYGWIN__ +# # ifdef __CYGWIN32__ +# # define __CYGWIN__ __CYGWIN32__ +# # endif +# #endif +# +# #ifdef __cplusplus +# extern "C" { +# #endif +# BOOL APIENTRY DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved); +# #ifdef __cplusplus +# } +# #endif +# +# #ifdef __CYGWIN__ +# #include <cygwin/cygwin_dll.h> +# DECLARE_CYGWIN_DLL( DllMain ); +# #endif +# HINSTANCE __hDllInstance_base; +# +# BOOL APIENTRY +# DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved) +# { +# __hDllInstance_base = hInst; +# return TRUE; +# } +# /* ltdll.c ends here */ +])# _LT_AC_FILE_LTDLL_C + + +# _LT_AC_TAGVAR(VARNAME, [TAGNAME]) +# --------------------------------- +AC_DEFUN([_LT_AC_TAGVAR], [ifelse([$2], [], [$1], [$1_$2])]) + + +# old names +AC_DEFUN([AM_PROG_LIBTOOL], [AC_PROG_LIBTOOL]) +AC_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)]) +AC_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)]) +AC_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)]) +AC_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)]) +AC_DEFUN([AM_PROG_LD], [AC_PROG_LD]) +AC_DEFUN([AM_PROG_NM], [AC_PROG_NM]) + +# This is just to silence aclocal about the macro not being used +ifelse([AC_DISABLE_FAST_INSTALL]) + +AC_DEFUN([LT_AC_PROG_GCJ], +[AC_CHECK_TOOL(GCJ, gcj, no) + test "x${GCJFLAGS+set}" = xset || GCJFLAGS="-g -O2" + AC_SUBST(GCJFLAGS) +]) + +AC_DEFUN([LT_AC_PROG_RC], +[AC_CHECK_TOOL(RC, windres, no) +]) + + +# Cheap backport of AS_EXECUTABLE_P and required macros +# from Autoconf 2.59; we should not use $as_executable_p directly. + +# _AS_TEST_PREPARE +# ---------------- +m4_ifndef([_AS_TEST_PREPARE], +[m4_defun([_AS_TEST_PREPARE], +[if test -x / >/dev/null 2>&1; then + as_executable_p='test -x' +else + as_executable_p='test -f' +fi +])])# _AS_TEST_PREPARE + +# AS_EXECUTABLE_P +# --------------- +# Check whether a file is executable. +m4_ifndef([AS_EXECUTABLE_P], +[m4_defun([AS_EXECUTABLE_P], +[AS_REQUIRE([_AS_TEST_PREPARE])dnl +$as_executable_p $1[]dnl +])])# AS_EXECUTABLE_P + +# NOTE: This macro has been submitted for inclusion into # +# GNU Autoconf as AC_PROG_SED. When it is available in # +# a released version of Autoconf we should remove this # +# macro and use it instead. # +# LT_AC_PROG_SED +# -------------- +# Check for a fully-functional sed program, that truncates +# as few characters as possible. Prefer GNU sed if found. +AC_DEFUN([LT_AC_PROG_SED], +[AC_MSG_CHECKING([for a sed that does not truncate output]) +AC_CACHE_VAL(lt_cv_path_SED, +[# Loop through the user's path and test for sed and gsed. +# Then use that list of sed's as ones to test for truncation. +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for lt_ac_prog in sed gsed; do + for ac_exec_ext in '' $ac_executable_extensions; do + if AS_EXECUTABLE_P(["$as_dir/$lt_ac_prog$ac_exec_ext"]); then + lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext" + fi + done + done +done +IFS=$as_save_IFS +lt_ac_max=0 +lt_ac_count=0 +# Add /usr/xpg4/bin/sed as it is typically found on Solaris +# along with /bin/sed that truncates output. +for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do + test ! -f $lt_ac_sed && continue + cat /dev/null > conftest.in + lt_ac_count=0 + echo $ECHO_N "0123456789$ECHO_C" >conftest.in + # Check for GNU sed and select it if it is found.
+ if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then + lt_cv_path_SED=$lt_ac_sed + break + fi + while true; do + cat conftest.in conftest.in >conftest.tmp + mv conftest.tmp conftest.in + cp conftest.in conftest.nl + echo >>conftest.nl + $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break + cmp -s conftest.out conftest.nl || break + # 10000 chars as input seems more than enough + test $lt_ac_count -gt 10 && break + lt_ac_count=`expr $lt_ac_count + 1` + if test $lt_ac_count -gt $lt_ac_max; then + lt_ac_max=$lt_ac_count + lt_cv_path_SED=$lt_ac_sed + fi + done +done +]) +SED=$lt_cv_path_SED +AC_SUBST([SED]) +AC_MSG_RESULT([$SED]) +]) + +# Copyright (C) 2002, 2003, 2005, 2006 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# AM_AUTOMAKE_VERSION(VERSION) +# ---------------------------- +# Automake X.Y traces this macro to ensure aclocal.m4 has been +# generated from the m4 files accompanying Automake X.Y. +# (This private macro should not be called outside this file.) +AC_DEFUN([AM_AUTOMAKE_VERSION], +[am__api_version='1.10' +dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to +dnl require some minimum version. Point them to the right macro. +m4_if([$1], [1.10], [], + [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl +]) + +# _AM_AUTOCONF_VERSION(VERSION) +# ----------------------------- +# aclocal traces this macro to find the Autoconf version. +# This is a private macro too. Using m4_define simplifies +# the logic in aclocal, which can simply ignore this definition. +m4_define([_AM_AUTOCONF_VERSION], []) + +# AM_SET_CURRENT_AUTOMAKE_VERSION +# ------------------------------- +# Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced. +# This function is AC_REQUIREd by AC_INIT_AUTOMAKE. +AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION], +[AM_AUTOMAKE_VERSION([1.10])dnl +_AM_AUTOCONF_VERSION(m4_PACKAGE_VERSION)]) + +# Figure out how to run the assembler. -*- Autoconf -*- + +# Copyright (C) 2001, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 5 + +# AM_PROG_AS +# ---------- +AC_DEFUN([AM_PROG_AS], +[# By default we simply use the C compiler to build assembly code. +AC_REQUIRE([AC_PROG_CC]) +test "${CCAS+set}" = set || CCAS=$CC +test "${CCASFLAGS+set}" = set || CCASFLAGS=$CFLAGS +AC_ARG_VAR([CCAS], [assembler compiler command (defaults to CC)]) +AC_ARG_VAR([CCASFLAGS], [assembler compiler flags (defaults to CFLAGS)]) +_AM_IF_OPTION([no-dependencies],, [_AM_DEPENDENCIES([CCAS])])dnl +]) + +# AM_AUX_DIR_EXPAND -*- Autoconf -*- + +# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets +# $ac_aux_dir to `$srcdir/foo'. In other projects, it is set to +# `$srcdir', `$srcdir/..', or `$srcdir/../..'. +# +# Of course, Automake must honor this variable whenever it calls a +# tool from the auxiliary directory. 
The problem is that $srcdir (and +# therefore $ac_aux_dir as well) can be either absolute or relative, +# depending on how configure is run. This is pretty annoying, since +# it makes $ac_aux_dir quite unusable in subdirectories: in the top +# source directory, any form will work fine, but in subdirectories a +# relative path needs to be adjusted first. +# +# $ac_aux_dir/missing +# fails when called from a subdirectory if $ac_aux_dir is relative +# $top_srcdir/$ac_aux_dir/missing +# fails if $ac_aux_dir is absolute, +# fails when called from a subdirectory in a VPATH build with +# a relative $ac_aux_dir +# +# The reason of the latter failure is that $top_srcdir and $ac_aux_dir +# are both prefixed by $srcdir. In an in-source build this is usually +# harmless because $srcdir is `.', but things will broke when you +# start a VPATH build or use an absolute $srcdir. +# +# So we could use something similar to $top_srcdir/$ac_aux_dir/missing, +# iff we strip the leading $srcdir from $ac_aux_dir. That would be: +# am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"` +# and then we would define $MISSING as +# MISSING="\${SHELL} $am_aux_dir/missing" +# This will work as long as MISSING is not called from configure, because +# unfortunately $(top_srcdir) has no meaning in configure. +# However there are other variables, like CC, which are often used in +# configure, and could therefore not use this "fixed" $ac_aux_dir. +# +# Another solution, used here, is to always expand $ac_aux_dir to an +# absolute PATH. The drawback is that using absolute paths prevent a +# configured tree to be moved without reconfiguration. + +AC_DEFUN([AM_AUX_DIR_EXPAND], +[dnl Rely on autoconf to set up CDPATH properly. +AC_PREREQ([2.50])dnl +# expand $ac_aux_dir to an absolute path +am_aux_dir=`cd $ac_aux_dir && pwd` +]) + +# AM_CONDITIONAL -*- Autoconf -*- + +# Copyright (C) 1997, 2000, 2001, 2003, 2004, 2005, 2006 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 8 + +# AM_CONDITIONAL(NAME, SHELL-CONDITION) +# ------------------------------------- +# Define a conditional. +AC_DEFUN([AM_CONDITIONAL], +[AC_PREREQ(2.52)dnl + ifelse([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], + [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl +AC_SUBST([$1_TRUE])dnl +AC_SUBST([$1_FALSE])dnl +_AM_SUBST_NOTMAKE([$1_TRUE])dnl +_AM_SUBST_NOTMAKE([$1_FALSE])dnl +if $2; then + $1_TRUE= + $1_FALSE='#' +else + $1_TRUE='#' + $1_FALSE= +fi +AC_CONFIG_COMMANDS_PRE( +[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then + AC_MSG_ERROR([[conditional "$1" was never defined. +Usually this means the macro was only invoked conditionally.]]) +fi])]) + +# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 9 + +# There are a few dirty hacks below to avoid letting `AC_PROG_CC' be +# written in clear, in which case automake, when reading aclocal.m4, +# will think it sees a *use*, and therefore will trigger all it's +# C support machinery. Also note that it means that autoscan, seeing +# CC etc. in the Makefile, will ask for an AC_PROG_CC use... 
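As a quick aside before the dependency-tracking machinery, the AM_CONDITIONAL macro defined above reduces, at configure time, to shell along these lines: one of the two AC_SUBSTed variables becomes '#', which comments out the corresponding lines in the generated Makefile. WANT_DEBUG and enable_debug are illustrative names, not taken from this patch:

  if test "x$enable_debug" = xyes; then
    WANT_DEBUG_TRUE=
    WANT_DEBUG_FALSE='#'
  else
    WANT_DEBUG_TRUE='#'
    WANT_DEBUG_FALSE=
  fi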
+ + +# _AM_DEPENDENCIES(NAME) +# ---------------------- +# See how the compiler implements dependency checking. +# NAME is "CC", "CXX", "GCJ", or "OBJC". +# We try a few techniques and use that to set a single cache variable. +# +# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was +# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular +# dependency, and given that the user is not expected to run this macro, +# just rely on AC_PROG_CC. +AC_DEFUN([_AM_DEPENDENCIES], +[AC_REQUIRE([AM_SET_DEPDIR])dnl +AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl +AC_REQUIRE([AM_MAKE_INCLUDE])dnl +AC_REQUIRE([AM_DEP_TRACK])dnl + +ifelse([$1], CC, [depcc="$CC" am_compiler_list=], + [$1], CXX, [depcc="$CXX" am_compiler_list=], + [$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'], + [$1], UPC, [depcc="$UPC" am_compiler_list=], + [$1], GCJ, [depcc="$GCJ" am_compiler_list='gcc3 gcc'], + [depcc="$$1" am_compiler_list=]) + +AC_CACHE_CHECK([dependency style of $depcc], + [am_cv_$1_dependencies_compiler_type], +[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then + # We make a subdir and do the tests there. Otherwise we can end up + # making bogus files that we don't know about and never remove. For + # instance it was reported that on HP-UX the gcc test will end up + # making a dummy file named `D' -- because `-MD' means `put the output + # in D'. + mkdir conftest.dir + # Copy depcomp to subdir because otherwise we won't find it if we're + # using a relative directory. + cp "$am_depcomp" conftest.dir + cd conftest.dir + # We will build objects and dependencies in a subdirectory because + # it helps to detect inapplicable dependency modes. For instance + # both Tru64's cc and ICC support -MD to output dependencies as a + # side effect of compilation, but ICC will put the dependencies in + # the current directory while Tru64 will put them in the object + # directory. + mkdir sub + + am_cv_$1_dependencies_compiler_type=none + if test "$am_compiler_list" = ""; then + am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp` + fi + for depmode in $am_compiler_list; do + # Setup a source with many dependencies, because some compilers + # like to wrap large dependency lists on column 80 (with \), and + # we should not choose a depcomp mode which is confused by this. + # + # We need to recreate these files for each test, as the compiler may + # overwrite some of them when testing with obscure command lines. + # This happens at least with the AIX C compiler. + : > sub/conftest.c + for i in 1 2 3 4 5 6; do + echo '#include "conftst'$i'.h"' >> sub/conftest.c + # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with + # Solaris 8's {/usr,}/bin/sh. + touch sub/conftst$i.h + done + echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf + + case $depmode in + nosideeffect) + # after this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested + if test "x$enable_dependency_tracking" = xyes; then + continue + else + break + fi + ;; + none) break ;; + esac + # We check with `-c' and `-o' for the sake of the "dashmstdout" + # mode. It turns out that the SunPro C++ compiler does not properly + # handle `-M -o', and we need to detect this. 
+ if depmode=$depmode \ + source=sub/conftest.c object=sub/conftest.${OBJEXT-o} \ + depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ + $SHELL ./depcomp $depcc -c -o sub/conftest.${OBJEXT-o} sub/conftest.c \ + >/dev/null 2>conftest.err && + grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftest.${OBJEXT-o} sub/conftest.Po > /dev/null 2>&1 && + ${MAKE-make} -s -f confmf > /dev/null 2>&1; then + # icc doesn't choke on unknown options, it will just issue warnings + # or remarks (even with -Werror). So we grep stderr for any message + # that says an option was ignored or not supported. + # When given -MP, icc 7.0 and 7.1 complain thusly: + # icc: Command line warning: ignoring option '-M'; no argument required + # The diagnosis changed in icc 8.0: + # icc: Command line remark: option '-MP' not supported + if (grep 'ignoring option' conftest.err || + grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else + am_cv_$1_dependencies_compiler_type=$depmode + break + fi + fi + done + + cd .. + rm -rf conftest.dir +else + am_cv_$1_dependencies_compiler_type=none +fi +]) +AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type]) +AM_CONDITIONAL([am__fastdep$1], [ + test "x$enable_dependency_tracking" != xno \ + && test "$am_cv_$1_dependencies_compiler_type" = gcc3]) +]) + + +# AM_SET_DEPDIR +# ------------- +# Choose a directory name for dependency files. +# This macro is AC_REQUIREd in _AM_DEPENDENCIES +AC_DEFUN([AM_SET_DEPDIR], +[AC_REQUIRE([AM_SET_LEADING_DOT])dnl +AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl +]) + + +# AM_DEP_TRACK +# ------------ +AC_DEFUN([AM_DEP_TRACK], +[AC_ARG_ENABLE(dependency-tracking, +[ --disable-dependency-tracking speeds up one-time build + --enable-dependency-tracking do not reject slow dependency extractors]) +if test "x$enable_dependency_tracking" != xno; then + am_depcomp="$ac_aux_dir/depcomp" + AMDEPBACKSLASH='\' +fi +AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno]) +AC_SUBST([AMDEPBACKSLASH])dnl +_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl +]) + +# Generate code to set up dependency tracking. -*- Autoconf -*- + +# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +#serial 3 + +# _AM_OUTPUT_DEPENDENCY_COMMANDS +# ------------------------------ +AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS], +[for mf in $CONFIG_FILES; do + # Strip MF so we end up with the name of the file. + mf=`echo "$mf" | sed -e 's/:.*$//'` + # Check whether this is an Automake generated Makefile or not. + # We used to match only the files named `Makefile.in', but + # some people rename them; so instead we look at the file content. + # Grep'ing the first line is not enough: some people post-process + # each Makefile.in and add a new line on top of each file to say so. + # Grep'ing the whole file is not good either: AIX grep has a line + # limit of 2048, but all sed's we know have understand at least 4000. + if sed 10q "$mf" | grep '^#.*generated by automake' > /dev/null 2>&1; then + dirpart=`AS_DIRNAME("$mf")` + else + continue + fi + # Extract the definition of DEPDIR, am__include, and am__quote + # from the Makefile without running `make'. 
+ DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` + test -z "$DEPDIR" && continue + am__include=`sed -n 's/^am__include = //p' < "$mf"` + test -z "am__include" && continue + am__quote=`sed -n 's/^am__quote = //p' < "$mf"` + # When using ansi2knr, U may be empty or an underscore; expand it + U=`sed -n 's/^U = //p' < "$mf"` + # Find all dependency output files, they are included files with + # $(DEPDIR) in their names. We invoke sed twice because it is the + # simplest approach to changing $(DEPDIR) to its actual value in the + # expansion. + for file in `sed -n " + s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ + sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do + # Make sure the directory exists. + test -f "$dirpart/$file" && continue + fdir=`AS_DIRNAME(["$file"])` + AS_MKDIR_P([$dirpart/$fdir]) + # echo "creating $dirpart/$file" + echo '# dummy' > "$dirpart/$file" + done +done +])# _AM_OUTPUT_DEPENDENCY_COMMANDS + + +# AM_OUTPUT_DEPENDENCY_COMMANDS +# ----------------------------- +# This macro should only be invoked once -- use via AC_REQUIRE. +# +# This code is only required when automatic dependency tracking +# is enabled. FIXME. This creates each `.P' file that we will +# need in order to bootstrap the dependency handling code. +AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], +[AC_CONFIG_COMMANDS([depfiles], + [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS], + [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"]) +]) + +# Do all the work for Automake. -*- Autoconf -*- + +# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, +# 2005, 2006 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 12 + +# This macro actually does too much. Some checks are only needed if +# your package does certain things. But this isn't really a big deal. + +# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE]) +# AM_INIT_AUTOMAKE([OPTIONS]) +# ----------------------------------------------- +# The call with PACKAGE and VERSION arguments is the old style +# call (pre autoconf-2.50), which is being phased out. PACKAGE +# and VERSION should now be passed to AC_INIT and removed from +# the call to AM_INIT_AUTOMAKE. +# We support both call styles for the transition. After +# the next Automake release, Autoconf can make the AC_INIT +# arguments mandatory, and then we can depend on a new Autoconf +# release and drop the old call support. +AC_DEFUN([AM_INIT_AUTOMAKE], +[AC_PREREQ([2.60])dnl +dnl Autoconf wants to disallow AM_ names. We explicitly allow +dnl the ones we care about. +m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl +AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl +AC_REQUIRE([AC_PROG_INSTALL])dnl +if test "`cd $srcdir && pwd`" != "`pwd`"; then + # Use -I$(srcdir) only when $(srcdir) != ., so that make's output + # is not polluted with repeated "-I." + AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl + # test to see if srcdir already configured + if test -f $srcdir/config.status; then + AC_MSG_ERROR([source directory already configured; run "make distclean" there first]) + fi +fi + +# test whether we have cygpath +if test -z "$CYGPATH_W"; then + if (cygpath --version) >/dev/null 2>/dev/null; then + CYGPATH_W='cygpath -w' + else + CYGPATH_W=echo + fi +fi +AC_SUBST([CYGPATH_W]) + +# Define the identity of the package. 
+dnl Distinguish between old-style and new-style calls. +m4_ifval([$2], +[m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl + AC_SUBST([PACKAGE], [$1])dnl + AC_SUBST([VERSION], [$2])], +[_AM_SET_OPTIONS([$1])dnl +dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT. +m4_if(m4_ifdef([AC_PACKAGE_NAME], 1)m4_ifdef([AC_PACKAGE_VERSION], 1), 11,, + [m4_fatal([AC_INIT should be called with package and version arguments])])dnl + AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl + AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl + +_AM_IF_OPTION([no-define],, +[AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package]) + AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl + +# Some tools Automake needs. +AC_REQUIRE([AM_SANITY_CHECK])dnl +AC_REQUIRE([AC_ARG_PROGRAM])dnl +AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version}) +AM_MISSING_PROG(AUTOCONF, autoconf) +AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version}) +AM_MISSING_PROG(AUTOHEADER, autoheader) +AM_MISSING_PROG(MAKEINFO, makeinfo) +AM_PROG_INSTALL_SH +AM_PROG_INSTALL_STRIP +AC_REQUIRE([AM_PROG_MKDIR_P])dnl +# We need awk for the "check" target. The system "awk" is bad on +# some platforms. +AC_REQUIRE([AC_PROG_AWK])dnl +AC_REQUIRE([AC_PROG_MAKE_SET])dnl +AC_REQUIRE([AM_SET_LEADING_DOT])dnl +_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])], + [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])], + [_AM_PROG_TAR([v7])])]) +_AM_IF_OPTION([no-dependencies],, +[AC_PROVIDE_IFELSE([AC_PROG_CC], + [_AM_DEPENDENCIES(CC)], + [define([AC_PROG_CC], + defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl +AC_PROVIDE_IFELSE([AC_PROG_CXX], + [_AM_DEPENDENCIES(CXX)], + [define([AC_PROG_CXX], + defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl +AC_PROVIDE_IFELSE([AC_PROG_OBJC], + [_AM_DEPENDENCIES(OBJC)], + [define([AC_PROG_OBJC], + defn([AC_PROG_OBJC])[_AM_DEPENDENCIES(OBJC)])])dnl +]) +]) + + +# When config.status generates a header, we must update the stamp-h file. +# This file resides in the same directory as the config header +# that is generated. The stamp files are numbered to have different names. + +# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the +# loop where config.status creates the headers, so we can generate +# our stamp files there. +AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK], +[# Compute $1's index in $config_headers. +_am_stamp_count=1 +for _am_header in $config_headers :; do + case $_am_header in + $1 | $1:* ) + break ;; + * ) + _am_stamp_count=`expr $_am_stamp_count + 1` ;; + esac +done +echo "timestamp for $1" >`AS_DIRNAME([$1])`/stamp-h[]$_am_stamp_count]) + +# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# AM_PROG_INSTALL_SH +# ------------------ +# Define $install_sh. +AC_DEFUN([AM_PROG_INSTALL_SH], +[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl +install_sh=${install_sh-"\$(SHELL) $am_aux_dir/install-sh"} +AC_SUBST(install_sh)]) + +# Copyright (C) 2003, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 2 + +# Check whether the underlying file-system supports filenames +# with a leading dot. For instance MS-DOS doesn't. 
+AC_DEFUN([AM_SET_LEADING_DOT], +[rm -rf .tst 2>/dev/null +mkdir .tst 2>/dev/null +if test -d .tst; then + am__leading_dot=. +else + am__leading_dot=_ +fi +rmdir .tst 2>/dev/null +AC_SUBST([am__leading_dot])]) + +# Add --enable-maintainer-mode option to configure. -*- Autoconf -*- +# From Jim Meyering + +# Copyright (C) 1996, 1998, 2000, 2001, 2002, 2003, 2004, 2005 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 4 + +AC_DEFUN([AM_MAINTAINER_MODE], +[AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles]) + dnl maintainer-mode is disabled by default + AC_ARG_ENABLE(maintainer-mode, +[ --enable-maintainer-mode enable make rules and dependencies not useful + (and sometimes confusing) to the casual installer], + USE_MAINTAINER_MODE=$enableval, + USE_MAINTAINER_MODE=no) + AC_MSG_RESULT([$USE_MAINTAINER_MODE]) + AM_CONDITIONAL(MAINTAINER_MODE, [test $USE_MAINTAINER_MODE = yes]) + MAINT=$MAINTAINER_MODE_TRUE + AC_SUBST(MAINT)dnl +] +) + +AU_DEFUN([jm_MAINTAINER_MODE], [AM_MAINTAINER_MODE]) + +# Check to see how 'make' treats includes. -*- Autoconf -*- + +# Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 3 + +# AM_MAKE_INCLUDE() +# ----------------- +# Check to see how make treats includes. +AC_DEFUN([AM_MAKE_INCLUDE], +[am_make=${MAKE-make} +cat > confinc << 'END' +am__doit: + @echo done +.PHONY: am__doit +END +# If we don't find an include directive, just comment out the code. +AC_MSG_CHECKING([for style of include used by $am_make]) +am__include="#" +am__quote= +_am_result=none +# First try GNU make style include. +echo "include confinc" > confmf +# We grep out `Entering directory' and `Leaving directory' +# messages which can occur if `w' ends up in MAKEFLAGS. +# In particular we don't look at `^make:' because GNU make might +# be invoked under some other name (usually "gmake"), in which +# case it prints its new name instead of `make'. +if test "`$am_make -s -f confmf 2> /dev/null | grep -v 'ing directory'`" = "done"; then + am__include=include + am__quote= + _am_result=GNU +fi +# Now try BSD make style include. +if test "$am__include" = "#"; then + echo '.include "confinc"' > confmf + if test "`$am_make -s -f confmf 2> /dev/null`" = "done"; then + am__include=.include + am__quote="\"" + _am_result=BSD fi fi +AC_SUBST([am__include]) +AC_SUBST([am__quote]) +AC_MSG_RESULT([$_am_result]) +rm -f confinc confmf +]) + +# Copyright (C) 1999, 2000, 2001, 2003, 2004, 2005 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 5 + +# AM_PROG_CC_C_O +# -------------- +# Like AC_PROG_CC_C_O, but changed for automake. +AC_DEFUN([AM_PROG_CC_C_O], +[AC_REQUIRE([AC_PROG_CC_C_O])dnl +AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl +AC_REQUIRE_AUX_FILE([compile])dnl +# FIXME: we rely on the cache variable name because +# there is no other way. 
+set dummy $CC +ac_cc=`echo $[2] | sed ['s/[^a-zA-Z0-9_]/_/g;s/^[0-9]/_/']` +if eval "test \"`echo '$ac_cv_prog_cc_'${ac_cc}_c_o`\" != yes"; then + # Losing compiler, so override with the script. + # FIXME: It is wrong to rewrite CC. + # But if we don't then we get into trouble of one sort or another. + # A longer-term fix would be to have automake use am__CC in this case, + # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" + CC="$am_aux_dir/compile $CC" +fi +dnl Make sure AC_PROG_CC is never called again, or it will override our +dnl setting of CC. +m4_define([AC_PROG_CC], + [m4_fatal([AC_PROG_CC cannot be called after AM_PROG_CC_C_O])]) +]) + +# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*- + +# Copyright (C) 1997, 1999, 2000, 2001, 2003, 2004, 2005 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 5 -if test $ac_cv_func_mmap_file = yes; then - AC_DEFINE(HAVE_MMAP_FILE, 1, - [Define if read-only mmap of a plain file works.]) -fi -if test $ac_cv_func_mmap_dev_zero = yes; then - AC_DEFINE(HAVE_MMAP_DEV_ZERO, 1, - [Define if mmap of /dev/zero works.]) -fi -if test $ac_cv_func_mmap_anon = yes; then - AC_DEFINE(HAVE_MMAP_ANON, 1, - [Define if mmap with MAP_ANON(YMOUS) works.]) +# AM_MISSING_PROG(NAME, PROGRAM) +# ------------------------------ +AC_DEFUN([AM_MISSING_PROG], +[AC_REQUIRE([AM_MISSING_HAS_RUN]) +$1=${$1-"${am_missing_run}$2"} +AC_SUBST($1)]) + + +# AM_MISSING_HAS_RUN +# ------------------ +# Define MISSING if not defined so far and test if it supports --run. +# If it does, set am_missing_run to use it, otherwise, to nothing. +AC_DEFUN([AM_MISSING_HAS_RUN], +[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl +AC_REQUIRE_AUX_FILE([missing])dnl +test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing" +# Use eval to expand $SHELL +if eval "$MISSING --run true"; then + am_missing_run="$MISSING --run " +else + am_missing_run= + AC_MSG_WARN([`missing' script is too old or missing]) fi ]) + +# Copyright (C) 2003, 2004, 2005, 2006 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# AM_PROG_MKDIR_P +# --------------- +# Check for `mkdir -p'. +AC_DEFUN([AM_PROG_MKDIR_P], +[AC_PREREQ([2.60])dnl +AC_REQUIRE([AC_PROG_MKDIR_P])dnl +dnl Automake 1.8 to 1.9.6 used to define mkdir_p. We now use MKDIR_P, +dnl while keeping a definition of mkdir_p for backward compatibility. +dnl @MKDIR_P@ is magic: AC_OUTPUT adjusts its value for each Makefile. +dnl However we cannot define mkdir_p as $(MKDIR_P) for the sake of +dnl Makefile.ins that do not define MKDIR_P, so we do our own +dnl adjustment using top_builddir (which is defined more often than +dnl MKDIR_P). +AC_SUBST([mkdir_p], ["$MKDIR_P"])dnl +case $mkdir_p in + [[\\/$]]* | ?:[[\\/]]*) ;; + */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; +esac +]) + +# Helper functions for option handling. -*- Autoconf -*- + +# Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. 
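As background (illustrative, not in the patch): the AM_MISSING_PROG machinery defined above turns the tool checks in AM_INIT_AUTOMAKE into Makefile variables that fall back to the `missing' wrapper, so a generated Makefile typically carries assignments along the lines of

    ACLOCAL = ${SHELL} /path/to/srcdir/missing --run aclocal-1.10
    AUTOMAKE = ${SHELL} /path/to/srcdir/missing --run automake-1.10

where the source path and the 1.10 API version are placeholders; when the real tool is absent, `missing' warns and emulates it just well enough for the build to continue.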
+ +# serial 3 + +# _AM_MANGLE_OPTION(NAME) +# ----------------------- +AC_DEFUN([_AM_MANGLE_OPTION], +[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])]) + +# _AM_SET_OPTION(NAME) +# ------------------------------ +# Set option NAME. Presently that only means defining a flag for this option. +AC_DEFUN([_AM_SET_OPTION], +[m4_define(_AM_MANGLE_OPTION([$1]), 1)]) + +# _AM_SET_OPTIONS(OPTIONS) +# ---------------------------------- +# OPTIONS is a space-separated list of Automake options. +AC_DEFUN([_AM_SET_OPTIONS], +[AC_FOREACH([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])]) + +# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET]) +# ------------------------------------------- +# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. +AC_DEFUN([_AM_IF_OPTION], +[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])]) + +# Check to make sure that the build environment is sane. -*- Autoconf -*- + +# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005 +# Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 4 + +# AM_SANITY_CHECK +# --------------- +AC_DEFUN([AM_SANITY_CHECK], +[AC_MSG_CHECKING([whether build environment is sane]) +# Just in case +sleep 1 +echo timestamp > conftest.file +# Do `set' in a subshell so we don't clobber the current shell's +# arguments. Must try -L first in case configure is actually a +# symlink; some systems play weird games with the mod time of symlinks +# (eg FreeBSD returns the mod time of the symlink's containing +# directory). +if ( + set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` + if test "$[*]" = "X"; then + # -L didn't work. + set X `ls -t $srcdir/configure conftest.file` + fi + rm -f conftest.file + if test "$[*]" != "X $srcdir/configure conftest.file" \ + && test "$[*]" != "X conftest.file $srcdir/configure"; then + + # If neither matched, then we have a broken ls. This can happen + # if, for instance, CONFIG_SHELL is bash and it inherits a + # broken ls alias from the environment. This has actually + # happened. Such a system could not be considered "sane". + AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken +alias in your environment]) + fi + + test "$[2]" = conftest.file + ) +then + # Ok. + : +else + AC_MSG_ERROR([newly created file is older than distributed files! +Check your system clock]) +fi +AC_MSG_RESULT(yes)]) + +# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# AM_PROG_INSTALL_STRIP +# --------------------- +# One issue with vendor `install' (even GNU) is that you can't +# specify the program used to strip binaries. This is especially +# annoying in cross-compiling environments, where the build's strip +# is unlikely to handle the host's binaries. +# Fortunately install-sh will honor a STRIPPROG variable, so we +# always use install-sh in `make install-strip', and initialize +# STRIPPROG with the value of the STRIP variable (set by the user). +AC_DEFUN([AM_PROG_INSTALL_STRIP], +[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl +# Installed binaries are usually stripped using `strip' when the user +# run `make install-strip'. 
However `strip' might not be the right +# tool to use in cross-compilation environments, therefore Automake +# will honor the `STRIP' environment variable to overrule this program. +dnl Don't test for $cross_compiling = yes, because it might be `maybe'. +if test "$cross_compiling" != no; then + AC_CHECK_TOOL([STRIP], [strip], :) +fi +INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" +AC_SUBST([INSTALL_STRIP_PROGRAM])]) + +# Copyright (C) 2006 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# _AM_SUBST_NOTMAKE(VARIABLE) +# --------------------------- +# Prevent Automake from outputing VARIABLE = @VARIABLE@ in Makefile.in. +# This macro is traced by Automake. +AC_DEFUN([_AM_SUBST_NOTMAKE]) + +# Check how to create a tarball. -*- Autoconf -*- + +# Copyright (C) 2004, 2005 Free Software Foundation, Inc. +# +# This file is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# serial 2 + +# _AM_PROG_TAR(FORMAT) +# -------------------- +# Check how to create a tarball in format FORMAT. +# FORMAT should be one of `v7', `ustar', or `pax'. +# +# Substitute a variable $(am__tar) that is a command +# writing to stdout a FORMAT-tarball containing the directory +# $tardir. +# tardir=directory && $(am__tar) > result.tar +# +# Substitute a variable $(am__untar) that extract such +# a tarball read from stdin. +# $(am__untar) < result.tar +AC_DEFUN([_AM_PROG_TAR], +[# Always define AMTAR for backward compatibility. +AM_MISSING_PROG([AMTAR], [tar]) +m4_if([$1], [v7], + [am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -'], + [m4_case([$1], [ustar],, [pax],, + [m4_fatal([Unknown tar format])]) +AC_MSG_CHECKING([how to create a $1 tar archive]) +# Loop over all known methods to create a tar archive until one works. +_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none' +_am_tools=${am_cv_prog_tar_$1-$_am_tools} +# Do not fold the above two line into one, because Tru64 sh and +# Solaris sh will not grok spaces in the rhs of `-'. +for _am_tool in $_am_tools +do + case $_am_tool in + gnutar) + for _am_tar in tar gnutar gtar; + do + AM_RUN_LOG([$_am_tar --version]) && break + done + am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"' + am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"' + am__untar="$_am_tar -xf -" + ;; + plaintar) + # Must skip GNU tar: if it does not support --format= it doesn't create + # ustar tarball either. + (tar --version) >/dev/null 2>&1 && continue + am__tar='tar chf - "$$tardir"' + am__tar_='tar chf - "$tardir"' + am__untar='tar xf -' + ;; + pax) + am__tar='pax -L -x $1 -w "$$tardir"' + am__tar_='pax -L -x $1 -w "$tardir"' + am__untar='pax -r' + ;; + cpio) + am__tar='find "$$tardir" -print | cpio -o -H $1 -L' + am__tar_='find "$tardir" -print | cpio -o -H $1 -L' + am__untar='cpio -i -H $1 -d' + ;; + none) + am__tar=false + am__tar_=false + am__untar=false + ;; + esac + + # If the value was cached, stop now. We just wanted to have am__tar + # and am__untar set. 
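For context (illustrative, not part of this change): the am__tar and am__untar commands this loop settles on are consumed by Automake-generated dist rules, roughly as

    tardir=$(distdir) && $(am__tar) | gzip -c > $(distdir).tar.gz
    gzip -dc $(distdir).tar.gz | $(am__untar)

where $(distdir) and the gzip step stand in for whatever the real Makefile uses; the only contract is the one stated in the macro's header, namely that am__tar writes a FORMAT tarball of $tardir to stdout and am__untar extracts one from stdin.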
+ test -n "${am_cv_prog_tar_$1}" && break + + # tar/untar a dummy directory, and stop if the command works + rm -rf conftest.dir + mkdir conftest.dir + echo GrepMe > conftest.dir/file + AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar]) + rm -rf conftest.dir + if test -s conftest.tar; then + AM_RUN_LOG([$am__untar <conftest.tar]) + grep GrepMe conftest.dir/file >/dev/null 2>&1 && break + fi +done +rm -rf conftest.dir + +AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool]) +AC_MSG_RESULT([$am_cv_prog_tar_$1])]) +AC_SUBST([am__tar]) +AC_SUBST([am__untar]) +]) # _AM_PROG_TAR + +m4_include([acinclude.m4]) Modified: python/trunk/Modules/_ctypes/libffi/config.guess ============================================================================== --- python/trunk/Modules/_ctypes/libffi/config.guess (original) +++ python/trunk/Modules/_ctypes/libffi/config.guess Tue Mar 4 21:09:11 2008 @@ -1,9 +1,10 @@ #! /bin/sh # Attempt to guess a canonical system name. # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, -# 2000, 2001, 2002, 2003, 2004 Free Software Foundation, Inc. +# 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, +# Inc. -timestamp='2004-11-12' +timestamp='2007-05-17' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by @@ -17,13 +18,15 @@ # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software -# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +# Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA +# 02110-1301, USA. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. + # Originally written by Per Bothner . # Please send patches to . Submit a context # diff and a properly formatted ChangeLog entry. @@ -53,7 +56,7 @@ GNU config.guess ($timestamp) Originally written by Per Bothner. -Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004 +Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO @@ -66,11 +69,11 @@ while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) - echo "$timestamp" ; exit 0 ;; + echo "$timestamp" ; exit ;; --version | -v ) - echo "$version" ; exit 0 ;; + echo "$version" ; exit ;; --help | --h* | -h ) - echo "$usage"; exit 0 ;; + echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input.
@@ -104,7 +107,7 @@ trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ; trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ; : ${TMPDIR=/tmp} ; - { tmp=`(umask 077 && mktemp -d -q "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || + { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ; @@ -123,7 +126,7 @@ ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; -esac ;' +esac ; set_cc_for_build= ;' # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi at noc.rutgers.edu 1994-08-24) @@ -158,6 +161,7 @@ arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; + sh5el) machine=sh5le-unknown ;; *) machine=${UNAME_MACHINE_ARCH}-unknown ;; esac # The Operating System including object format, if it has switched @@ -196,55 +200,23 @@ # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. echo "${machine}-${os}${release}" - exit 0 ;; - amd64:OpenBSD:*:*) - echo x86_64-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - amiga:OpenBSD:*:*) - echo m68k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - cats:OpenBSD:*:*) - echo arm-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - hp300:OpenBSD:*:*) - echo m68k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - luna88k:OpenBSD:*:*) - echo m88k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - mac68k:OpenBSD:*:*) - echo m68k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - macppc:OpenBSD:*:*) - echo powerpc-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - mvme68k:OpenBSD:*:*) - echo m68k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - mvme88k:OpenBSD:*:*) - echo m88k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - mvmeppc:OpenBSD:*:*) - echo powerpc-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - sgi:OpenBSD:*:*) - echo mips64-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; - sun3:OpenBSD:*:*) - echo m68k-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; + exit ;; *:OpenBSD:*:*) - echo ${UNAME_MACHINE}-unknown-openbsd${UNAME_RELEASE} - exit 0 ;; + UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` + echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE} + exit ;; *:ekkoBSD:*:*) echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE} - exit 0 ;; + exit ;; + *:SolidBSD:*:*) + echo ${UNAME_MACHINE}-unknown-solidbsd${UNAME_RELEASE} + exit ;; macppc:MirBSD:*:*) - echo powerppc-unknown-mirbsd${UNAME_RELEASE} - exit 0 ;; + echo powerpc-unknown-mirbsd${UNAME_RELEASE} + exit ;; *:MirBSD:*:*) echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE} - exit 0 ;; + exit ;; alpha:OSF1:*:*) case $UNAME_RELEASE in *4.0) @@ -297,40 +269,43 @@ # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` - exit 0 ;; + exit ;; Alpha\ *:Windows_NT*:*) # How do we know it's Interix rather than the generic POSIX subsystem? # Should we change UNAME_MACHINE based on the output of uname instead # of the specific Alpha model? 
echo alpha-pc-interix - exit 0 ;; + exit ;; 21064:Windows_NT:50:3) echo alpha-dec-winnt3.5 - exit 0 ;; + exit ;; Amiga*:UNIX_System_V:4.0:*) echo m68k-unknown-sysv4 - exit 0;; + exit ;; *:[Aa]miga[Oo][Ss]:*:*) echo ${UNAME_MACHINE}-unknown-amigaos - exit 0 ;; + exit ;; *:[Mm]orph[Oo][Ss]:*:*) echo ${UNAME_MACHINE}-unknown-morphos - exit 0 ;; + exit ;; *:OS/390:*:*) echo i370-ibm-openedition - exit 0 ;; + exit ;; *:z/VM:*:*) echo s390-ibm-zvmoe - exit 0 ;; + exit ;; *:OS400:*:*) echo powerpc-ibm-os400 - exit 0 ;; + exit ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) echo arm-acorn-riscix${UNAME_RELEASE} - exit 0;; + exit ;; + arm:riscos:*:*|arm:RISCOS:*:*) + echo arm-unknown-riscos + exit ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) echo hppa1.1-hitachi-hiuxmpp - exit 0;; + exit ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee at wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. if test "`(/bin/universe) 2>/dev/null`" = att ; then @@ -338,32 +313,32 @@ else echo pyramid-pyramid-bsd fi - exit 0 ;; + exit ;; NILE*:*:*:dcosx) echo pyramid-pyramid-svr4 - exit 0 ;; + exit ;; DRS?6000:unix:4.0:6*) echo sparc-icl-nx6 - exit 0 ;; + exit ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in - sparc) echo sparc-icl-nx7 && exit 0 ;; + sparc) echo sparc-icl-nx7; exit ;; esac ;; sun4H:SunOS:5.*:*) echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` - exit 0 ;; + exit ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` - exit 0 ;; - i86pc:SunOS:5.*:*) + exit ;; + i86pc:SunOS:5.*:* | ix86xen:SunOS:5.*:*) echo i386-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` - exit 0 ;; + exit ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` - exit 0 ;; + exit ;; sun4*:SunOS:*:*) case "`/usr/bin/arch -k`" in Series*|S4*) @@ -372,10 +347,10 @@ esac # Japanese Language versions have a version number like `4.1.3-JL'. echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'` - exit 0 ;; + exit ;; sun3*:SunOS:*:*) echo m68k-sun-sunos${UNAME_RELEASE} - exit 0 ;; + exit ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3 @@ -387,10 +362,10 @@ echo sparc-sun-sunos${UNAME_RELEASE} ;; esac - exit 0 ;; + exit ;; aushp:SunOS:*:*) echo sparc-auspex-sunos${UNAME_RELEASE} - exit 0 ;; + exit ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor @@ -401,40 +376,40 @@ # be no problem. 
atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} - exit 0 ;; + exit ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} - exit 0 ;; + exit ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} - exit 0 ;; + exit ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) echo m68k-milan-mint${UNAME_RELEASE} - exit 0 ;; + exit ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) echo m68k-hades-mint${UNAME_RELEASE} - exit 0 ;; + exit ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) echo m68k-unknown-mint${UNAME_RELEASE} - exit 0 ;; + exit ;; m68k:machten:*:*) echo m68k-apple-machten${UNAME_RELEASE} - exit 0 ;; + exit ;; powerpc:machten:*:*) echo powerpc-apple-machten${UNAME_RELEASE} - exit 0 ;; + exit ;; RISC*:Mach:*:*) echo mips-dec-mach_bsd4.3 - exit 0 ;; + exit ;; RISC*:ULTRIX:*:*) echo mips-dec-ultrix${UNAME_RELEASE} - exit 0 ;; + exit ;; VAX*:ULTRIX*:*:*) echo vax-dec-ultrix${UNAME_RELEASE} - exit 0 ;; + exit ;; 2020:CLIX:*:* | 2430:CLIX:*:*) echo clipper-intergraph-clix${UNAME_RELEASE} - exit 0 ;; + exit ;; mips:*:*:UMIPS | mips:*:*:RISCos) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c @@ -458,32 +433,33 @@ exit (-1); } EOF - $CC_FOR_BUILD -o $dummy $dummy.c \ - && $dummy `echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` \ - && exit 0 + $CC_FOR_BUILD -o $dummy $dummy.c && + dummyarg=`echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` && + SYSTEM_NAME=`$dummy $dummyarg` && + { echo "$SYSTEM_NAME"; exit; } echo mips-mips-riscos${UNAME_RELEASE} - exit 0 ;; + exit ;; Motorola:PowerMAX_OS:*:*) echo powerpc-motorola-powermax - exit 0 ;; + exit ;; Motorola:*:4.3:PL8-*) echo powerpc-harris-powermax - exit 0 ;; + exit ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) echo powerpc-harris-powermax - exit 0 ;; + exit ;; Night_Hawk:Power_UNIX:*:*) echo powerpc-harris-powerunix - exit 0 ;; + exit ;; m88k:CX/UX:7*:*) echo m88k-harris-cxux7 - exit 0 ;; + exit ;; m88k:*:4*:R4*) echo m88k-motorola-sysv4 - exit 0 ;; + exit ;; m88k:*:3*:R3*) echo m88k-motorola-sysv3 - exit 0 ;; + exit ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` @@ -499,29 +475,29 @@ else echo i586-dg-dgux${UNAME_RELEASE} fi - exit 0 ;; + exit ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) echo m88k-dolphin-sysv3 - exit 0 ;; + exit ;; M88*:*:R3*:*) # Delta 88k system running SVR3 echo m88k-motorola-sysv3 - exit 0 ;; + exit ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) echo m88k-tektronix-sysv3 - exit 0 ;; + exit ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) echo m68k-tektronix-bsd - exit 0 ;; + exit ;; *:IRIX*:*:*) echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'` - exit 0 ;; + exit ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. 
- echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id - exit 0 ;; # Note that: echo "'`uname -s`'" gives 'AIX ' + echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id + exit ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) echo i386-ibm-aix - exit 0 ;; + exit ;; ia64:AIX:*:*) if [ -x /usr/bin/oslevel ] ; then IBM_REV=`/usr/bin/oslevel` @@ -529,7 +505,7 @@ IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${UNAME_MACHINE}-ibm-aix${IBM_REV} - exit 0 ;; + exit ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then eval $set_cc_for_build @@ -544,14 +520,18 @@ exit(0); } EOF - $CC_FOR_BUILD -o $dummy $dummy.c && $dummy && exit 0 - echo rs6000-ibm-aix3.2.5 + if $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` + then + echo "$SYSTEM_NAME" + else + echo rs6000-ibm-aix3.2.5 + fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then echo rs6000-ibm-aix3.2.4 else echo rs6000-ibm-aix3.2 fi - exit 0 ;; + exit ;; *:AIX:*:[45]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then @@ -565,28 +545,28 @@ IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${IBM_ARCH}-ibm-aix${IBM_REV} - exit 0 ;; + exit ;; *:AIX:*:*) echo rs6000-ibm-aix - exit 0 ;; + exit ;; ibmrt:4.4BSD:*|romp-ibm:BSD:*) echo romp-ibm-bsd4.4 - exit 0 ;; + exit ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to - exit 0 ;; # report: romp-ibm BSD 4.3 + exit ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) echo rs6000-bull-bosx - exit 0 ;; + exit ;; DPX/2?00:B.O.S.:*:*) echo m68k-bull-sysv3 - exit 0 ;; + exit ;; 9000/[34]??:4.3bsd:1.*:*) echo m68k-hp-bsd - exit 0 ;; + exit ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) echo m68k-hp-bsd4.4 - exit 0 ;; + exit ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` case "${UNAME_MACHINE}" in @@ -648,9 +628,19 @@ esac if [ ${HP_ARCH} = "hppa2.0w" ] then - # avoid double evaluation of $set_cc_for_build - test -n "$CC_FOR_BUILD" || eval $set_cc_for_build - if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E -) | grep __LP64__ >/dev/null + eval $set_cc_for_build + + # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating + # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler + # generating 64-bit code. 
GNU and HP use different nomenclature: + # + # $ CC_FOR_BUILD=cc ./config.guess + # => hppa2.0w-hp-hpux11.23 + # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess + # => hppa64-hp-hpux11.23 + + if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | + grep __LP64__ >/dev/null then HP_ARCH="hppa2.0w" else @@ -658,11 +648,11 @@ fi fi echo ${HP_ARCH}-hp-hpux${HPUX_REV} - exit 0 ;; + exit ;; ia64:HP-UX:*:*) HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` echo ia64-hp-hpux${HPUX_REV} - exit 0 ;; + exit ;; 3050*:HI-UX:*:*) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c @@ -690,158 +680,182 @@ exit (0); } EOF - $CC_FOR_BUILD -o $dummy $dummy.c && $dummy && exit 0 + $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` && + { echo "$SYSTEM_NAME"; exit; } echo unknown-hitachi-hiuxwe2 - exit 0 ;; + exit ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* ) echo hppa1.1-hp-bsd - exit 0 ;; + exit ;; 9000/8??:4.3bsd:*:*) echo hppa1.0-hp-bsd - exit 0 ;; + exit ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) echo hppa1.0-hp-mpeix - exit 0 ;; + exit ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* ) echo hppa1.1-hp-osf - exit 0 ;; + exit ;; hp8??:OSF1:*:*) echo hppa1.0-hp-osf - exit 0 ;; + exit ;; i*86:OSF1:*:*) if [ -x /usr/sbin/sysversion ] ; then echo ${UNAME_MACHINE}-unknown-osf1mk else echo ${UNAME_MACHINE}-unknown-osf1 fi - exit 0 ;; + exit ;; parisc*:Lites*:*:*) echo hppa1.1-hp-lites - exit 0 ;; + exit ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) echo c1-convex-bsd - exit 0 ;; + exit ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi - exit 0 ;; + exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) echo c34-convex-bsd - exit 0 ;; + exit ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) echo c38-convex-bsd - exit 0 ;; + exit ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) echo c4-convex-bsd - exit 0 ;; + exit ;; CRAY*Y-MP:*:*:*) echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' - exit 0 ;; + exit ;; CRAY*[A-Z]90:*:*:*) echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' - exit 0 ;; + exit ;; CRAY*TS:*:*:*) echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' - exit 0 ;; + exit ;; CRAY*T3E:*:*:*) echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' - exit 0 ;; + exit ;; CRAY*SV1:*:*:*) echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' - exit 0 ;; + exit ;; *:UNICOS/mp:*:*) echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' - exit 0 ;; + exit ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'` echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" - exit 0 ;; + exit ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'` echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" - exit 0 ;; + exit ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*) echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE} - exit 0 ;; + exit ;; sparc*:BSD/OS:*:*) echo sparc-unknown-bsdi${UNAME_RELEASE} - exit 0 ;; + exit ;; *:BSD/OS:*:*) 
echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE} - exit 0 ;; + exit ;; *:FreeBSD:*:*) - echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` - exit 0 ;; + case ${UNAME_MACHINE} in + pc98) + echo i386-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + amd64) + echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + *) + echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + esac + exit ;; i*:CYGWIN*:*) echo ${UNAME_MACHINE}-pc-cygwin - exit 0 ;; - i*:MINGW*:*) + exit ;; + *:MINGW*:*) echo ${UNAME_MACHINE}-pc-mingw32 - exit 0 ;; + exit ;; + i*:windows32*:*) + # uname -m includes "-pc" on this system. + echo ${UNAME_MACHINE}-mingw32 + exit ;; i*:PW*:*) echo ${UNAME_MACHINE}-pc-pw32 - exit 0 ;; - x86:Interix*:[34]*) - echo i586-pc-interix${UNAME_RELEASE}|sed -e 's/\..*//' - exit 0 ;; + exit ;; + *:Interix*:[3456]*) + case ${UNAME_MACHINE} in + x86) + echo i586-pc-interix${UNAME_RELEASE} + exit ;; + EM64T | authenticamd) + echo x86_64-unknown-interix${UNAME_RELEASE} + exit ;; + esac ;; [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*) echo i${UNAME_MACHINE}-pc-mks - exit 0 ;; + exit ;; i*:Windows_NT*:* | Pentium*:Windows_NT*:*) # How do we know it's Interix rather than the generic POSIX subsystem? # It also conflicts with pre-2.0 versions of AT&T UWIN. Should we # UNAME_MACHINE based on the output of uname instead of i386? echo i586-pc-interix - exit 0 ;; + exit ;; i*:UWIN*:*) echo ${UNAME_MACHINE}-pc-uwin - exit 0 ;; + exit ;; + amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) + echo x86_64-unknown-cygwin + exit ;; p*:CYGWIN*:*) echo powerpcle-unknown-cygwin - exit 0 ;; + exit ;; prep*:SunOS:5.*:*) echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` - exit 0 ;; + exit ;; *:GNU:*:*) # the GNU system echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-gnu`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'` - exit 0 ;; + exit ;; *:GNU/*:*:*) # other systems with GNU libc and userland echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-gnu - exit 0 ;; + exit ;; i*86:Minix:*:*) echo ${UNAME_MACHINE}-pc-minix - exit 0 ;; + exit ;; arm*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; + avr32*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-gnu + exit ;; cris:Linux:*:*) echo cris-axis-linux-gnu - exit 0 ;; + exit ;; crisv32:Linux:*:*) echo crisv32-axis-linux-gnu - exit 0 ;; + exit ;; frv:Linux:*:*) echo frv-unknown-linux-gnu - exit 0 ;; + exit ;; ia64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; m32r*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; m68*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; mips:Linux:*:*) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c @@ -858,8 +872,12 @@ #endif #endif EOF - eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^CPU=` - test x"${CPU}" != x && echo "${CPU}-unknown-linux-gnu" && exit 0 + eval "`$CC_FOR_BUILD -E $dummy.c 2>/dev/null | sed -n ' + /^CPU/{ + s: ::g + p + }'`" + test x"${CPU}" != x && { echo "${CPU}-unknown-linux-gnu"; exit; } ;; mips64:Linux:*:*) eval $set_cc_for_build @@ -877,15 +895,22 @@ #endif #endif EOF - eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^CPU=` - test x"${CPU}" != x && echo "${CPU}-unknown-linux-gnu" && exit 0 + eval "`$CC_FOR_BUILD -E $dummy.c 2>/dev/null | sed -n ' + /^CPU/{ + s: ::g + p + }'`" + test x"${CPU}" != x 
&& { echo "${CPU}-unknown-linux-gnu"; exit; } ;; + or32:Linux:*:*) + echo or32-unknown-linux-gnu + exit ;; ppc:Linux:*:*) echo powerpc-unknown-linux-gnu - exit 0 ;; + exit ;; ppc64:Linux:*:*) echo powerpc64-unknown-linux-gnu - exit 0 ;; + exit ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in EV5) UNAME_MACHINE=alphaev5 ;; @@ -899,7 +924,7 @@ objdump --private-headers /bin/sh | grep ld.so.1 >/dev/null if test "$?" = 0 ; then LIBC="libc1" ; else LIBC="" ; fi echo ${UNAME_MACHINE}-unknown-linux-gnu${LIBC} - exit 0 ;; + exit ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in @@ -907,25 +932,31 @@ PA8*) echo hppa2.0-unknown-linux-gnu ;; *) echo hppa-unknown-linux-gnu ;; esac - exit 0 ;; + exit ;; parisc64:Linux:*:* | hppa64:Linux:*:*) echo hppa64-unknown-linux-gnu - exit 0 ;; + exit ;; s390:Linux:*:* | s390x:Linux:*:*) echo ${UNAME_MACHINE}-ibm-linux - exit 0 ;; + exit ;; sh64*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; sh*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; sparc:Linux:*:* | sparc64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu - exit 0 ;; + exit ;; + vax:Linux:*:*) + echo ${UNAME_MACHINE}-dec-linux-gnu + exit ;; x86_64:Linux:*:*) echo x86_64-unknown-linux-gnu - exit 0 ;; + exit ;; + xtensa:Linux:*:*) + echo xtensa-unknown-linux-gnu + exit ;; i*86:Linux:*:*) # The BFD linker knows what the default object file format is, so # first see if it will tell us. cd to the root directory to prevent @@ -943,15 +974,15 @@ ;; a.out-i386-linux) echo "${UNAME_MACHINE}-pc-linux-gnuaout" - exit 0 ;; + exit ;; coff-i386) echo "${UNAME_MACHINE}-pc-linux-gnucoff" - exit 0 ;; + exit ;; "") # Either a pre-BFD a.out linker (linux-gnuoldld) or # one that does not give us useful --help. echo "${UNAME_MACHINE}-pc-linux-gnuoldld" - exit 0 ;; + exit ;; esac # Determine whether the default compiler is a.out or elf eval $set_cc_for_build @@ -968,7 +999,7 @@ LIBC=gnulibc1 # endif #else - #ifdef __INTEL_COMPILER + #if defined(__INTEL_COMPILER) || defined(__PGI) || defined(__SUNPRO_C) || defined(__SUNPRO_CC) LIBC=gnu #else LIBC=gnuaout @@ -978,16 +1009,23 @@ LIBC=dietlibc #endif EOF - eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^LIBC=` - test x"${LIBC}" != x && echo "${UNAME_MACHINE}-pc-linux-${LIBC}" && exit 0 - test x"${TENTATIVE}" != x && echo "${TENTATIVE}" && exit 0 + eval "`$CC_FOR_BUILD -E $dummy.c 2>/dev/null | sed -n ' + /^LIBC/{ + s: ::g + p + }'`" + test x"${LIBC}" != x && { + echo "${UNAME_MACHINE}-pc-linux-${LIBC}" + exit + } + test x"${TENTATIVE}" != x && { echo "${TENTATIVE}"; exit; } ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. echo i386-sequent-sysv4 - exit 0 ;; + exit ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... @@ -995,27 +1033,27 @@ # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION} - exit 0 ;; + exit ;; i*86:OS/2:*:*) # If we were able to find `uname', then EMX Unix compatibility # is probably installed. 
echo ${UNAME_MACHINE}-pc-os2-emx - exit 0 ;; + exit ;; i*86:XTS-300:*:STOP) echo ${UNAME_MACHINE}-unknown-stop - exit 0 ;; + exit ;; i*86:atheos:*:*) echo ${UNAME_MACHINE}-unknown-atheos - exit 0 ;; - i*86:syllable:*:*) + exit ;; + i*86:syllable:*:*) echo ${UNAME_MACHINE}-pc-syllable - exit 0 ;; + exit ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.0*:*) echo i386-unknown-lynxos${UNAME_RELEASE} - exit 0 ;; + exit ;; i*86:*DOS:*:*) echo ${UNAME_MACHINE}-pc-msdosdjgpp - exit 0 ;; + exit ;; i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*) UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then @@ -1023,15 +1061,16 @@ else echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL} fi - exit 0 ;; - i*86:*:5:[78]*) + exit ;; + i*86:*:5:[678]*) + # UnixWare 7.x, OpenUNIX and OpenServer 6. case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} - exit 0 ;; + exit ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' /dev/null 2>&1 ; then echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered. echo i860-unknown-sysv${UNAME_RELEASE} # Unknown i860-SVR4 fi - exit 0 ;; + exit ;; mini*:CTIX:SYS*5:*) # "miniframe" echo m68010-convergent-sysv - exit 0 ;; + exit ;; mc68k:UNIX:SYSTEM5:3.51m) echo m68k-convergent-sysv - exit 0 ;; + exit ;; M680?0:D-NIX:5.3:*) echo m68k-diab-dnix - exit 0 ;; + exit ;; M68*:*:R3V[5678]*:*) - test -r /sysV68 && echo 'm68k-motorola-sysv' && exit 0 ;; + test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ - && echo i486-ncr-sysv4.3${OS_REL} && exit 0 + && { echo i486-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ - && echo i586-ncr-sysv4.3${OS_REL} && exit 0 ;; + && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ - && echo i486-ncr-sysv4 && exit 0 ;; + && { echo i486-ncr-sysv4; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) echo m68k-unknown-lynxos${UNAME_RELEASE} - exit 0 ;; + exit ;; mc68030:UNIX_System_V:4.*:*) echo m68k-atari-sysv4 - exit 0 ;; + exit ;; TSUNAMI:LynxOS:2.*:*) echo sparc-unknown-lynxos${UNAME_RELEASE} - exit 0 ;; + exit ;; rs6000:LynxOS:2.*:*) echo rs6000-unknown-lynxos${UNAME_RELEASE} - exit 0 ;; + exit ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.0*:*) echo powerpc-unknown-lynxos${UNAME_RELEASE} - exit 0 ;; + exit ;; SM[BE]S:UNIX_SV:*:*) echo mips-dde-sysv${UNAME_RELEASE} - exit 0 ;; + exit ;; RM*:ReliantUNIX-*:*:*) echo mips-sni-sysv4 - exit 0 ;; + exit ;; RM*:SINIX-*:*:*) echo mips-sni-sysv4 - exit 0 ;; + exit ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 2>/dev/null` @@ -1123,69 +1162,81 @@ else echo ns32k-sni-sysv fi - exit 0 ;; + exit ;; PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort # says echo i586-unisys-sysv4 - exit 0 ;; + exit ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes 
. # How about differentiating between stratus architectures? -djm echo hppa1.1-stratus-sysv4 - exit 0 ;; + exit ;; *:*:*:FTX*) # From seanf at swdc.stratus.com. echo i860-stratus-sysv4 - exit 0 ;; + exit ;; + i*86:VOS:*:*) + # From Paul.Green at stratus.com. + echo ${UNAME_MACHINE}-stratus-vos + exit ;; *:VOS:*:*) # From Paul.Green at stratus.com. echo hppa1.1-stratus-vos - exit 0 ;; + exit ;; mc68*:A/UX:*:*) echo m68k-apple-aux${UNAME_RELEASE} - exit 0 ;; + exit ;; news*:NEWS-OS:6*:*) echo mips-sony-newsos6 - exit 0 ;; + exit ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if [ -d /usr/nec ]; then echo mips-nec-sysv${UNAME_RELEASE} else echo mips-unknown-sysv${UNAME_RELEASE} fi - exit 0 ;; + exit ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. echo powerpc-be-beos - exit 0 ;; + exit ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. echo powerpc-apple-beos - exit 0 ;; + exit ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. echo i586-pc-beos - exit 0 ;; + exit ;; SX-4:SUPER-UX:*:*) echo sx4-nec-superux${UNAME_RELEASE} - exit 0 ;; + exit ;; SX-5:SUPER-UX:*:*) echo sx5-nec-superux${UNAME_RELEASE} - exit 0 ;; + exit ;; SX-6:SUPER-UX:*:*) echo sx6-nec-superux${UNAME_RELEASE} - exit 0 ;; + exit ;; + SX-7:SUPER-UX:*:*) + echo sx7-nec-superux${UNAME_RELEASE} + exit ;; + SX-8:SUPER-UX:*:*) + echo sx8-nec-superux${UNAME_RELEASE} + exit ;; + SX-8R:SUPER-UX:*:*) + echo sx8r-nec-superux${UNAME_RELEASE} + exit ;; Power*:Rhapsody:*:*) echo powerpc-apple-rhapsody${UNAME_RELEASE} - exit 0 ;; + exit ;; *:Rhapsody:*:*) echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE} - exit 0 ;; + exit ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown case $UNAME_PROCESSOR in - *86) UNAME_PROCESSOR=i686 ;; unknown) UNAME_PROCESSOR=powerpc ;; esac echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE} - exit 0 ;; + exit ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = "x86"; then @@ -1193,22 +1244,25 @@ UNAME_MACHINE=pc fi echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE} - exit 0 ;; + exit ;; *:QNX:*:4*) echo i386-pc-qnx - exit 0 ;; + exit ;; + NSE-?:NONSTOP_KERNEL:*:*) + echo nse-tandem-nsk${UNAME_RELEASE} + exit ;; NSR-?:NONSTOP_KERNEL:*:*) echo nsr-tandem-nsk${UNAME_RELEASE} - exit 0 ;; + exit ;; *:NonStop-UX:*:*) echo mips-compaq-nonstopux - exit 0 ;; + exit ;; BS2000:POSIX*:*:*) echo bs2000-siemens-sysv - exit 0 ;; + exit ;; DS/*:UNIX_System_V:*:*) echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE} - exit 0 ;; + exit ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 
386 # is converted to i386 for consistency with other x86 @@ -1219,41 +1273,47 @@ UNAME_MACHINE="$cputype" fi echo ${UNAME_MACHINE}-unknown-plan9 - exit 0 ;; + exit ;; *:TOPS-10:*:*) echo pdp10-unknown-tops10 - exit 0 ;; + exit ;; *:TENEX:*:*) echo pdp10-unknown-tenex - exit 0 ;; + exit ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) echo pdp10-dec-tops20 - exit 0 ;; + exit ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) echo pdp10-xkl-tops20 - exit 0 ;; + exit ;; *:TOPS-20:*:*) echo pdp10-unknown-tops20 - exit 0 ;; + exit ;; *:ITS:*:*) echo pdp10-unknown-its - exit 0 ;; + exit ;; SEI:*:*:SEIUX) echo mips-sei-seiux${UNAME_RELEASE} - exit 0 ;; + exit ;; *:DragonFly:*:*) echo ${UNAME_MACHINE}-unknown-dragonfly`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` - exit 0 ;; + exit ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case "${UNAME_MACHINE}" in - A*) echo alpha-dec-vms && exit 0 ;; - I*) echo ia64-dec-vms && exit 0 ;; - V*) echo vax-dec-vms && exit 0 ;; + A*) echo alpha-dec-vms ; exit ;; + I*) echo ia64-dec-vms ; exit ;; + V*) echo vax-dec-vms ; exit ;; esac ;; *:XENIX:*:SysV) echo i386-pc-xenix - exit 0 ;; + exit ;; + i*86:skyos:*:*) + echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//' + exit ;; + i*86:rdos:*:*) + echo ${UNAME_MACHINE}-pc-rdos + exit ;; esac #echo '(No uname command or uname output not recognized.)' 1>&2 @@ -1285,7 +1345,7 @@ #endif #if defined (__arm) && defined (__acorn) && defined (__unix) - printf ("arm-acorn-riscix"); exit (0); + printf ("arm-acorn-riscix\n"); exit (0); #endif #if defined (hp300) && !defined (hpux) @@ -1374,11 +1434,12 @@ } EOF -$CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && $dummy && exit 0 +$CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && SYSTEM_NAME=`$dummy` && + { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. -test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit 0; } +test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit; } # Convex versions that predate uname can use getsysinfo(1) @@ -1387,22 +1448,22 @@ case `getsysinfo -f cpu_type` in c1*) echo c1-convex-bsd - exit 0 ;; + exit ;; c2*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi - exit 0 ;; + exit ;; c34*) echo c34-convex-bsd - exit 0 ;; + exit ;; c38*) echo c38-convex-bsd - exit 0 ;; + exit ;; c4*) echo c4-convex-bsd - exit 0 ;; + exit ;; esac fi @@ -1413,7 +1474,9 @@ the operating system you are using. It is advised that you download the most up to date version of the config scripts from - ftp://ftp.gnu.org/pub/gnu/config/ + http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.guess +and + http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.sub If the version you run ($0) is already up to date, please send the following data and any information you think might be Modified: python/trunk/Modules/_ctypes/libffi/config.sub ============================================================================== --- python/trunk/Modules/_ctypes/libffi/config.sub (original) +++ python/trunk/Modules/_ctypes/libffi/config.sub Tue Mar 4 21:09:11 2008 @@ -1,9 +1,10 @@ #! /bin/sh # Configuration validation subroutine script. # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, -# 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc. +# 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, +# Inc. -timestamp='2005-04-22' +timestamp='2007-04-29' # This file is (in principle) common to ALL GNU software. 
# The presence of a machine in this file suggests that SOME GNU software @@ -21,14 +22,15 @@ # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software -# Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. - +# Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA +# 02110-1301, USA. +# # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. + # Please send patches to . Submit a context # diff and a properly formatted ChangeLog entry. # @@ -83,11 +85,11 @@ while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) - echo "$timestamp" ; exit 0 ;; + echo "$timestamp" ; exit ;; --version | -v ) - echo "$version" ; exit 0 ;; + echo "$version" ; exit ;; --help | --h* | -h ) - echo "$usage"; exit 0 ;; + echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. @@ -99,7 +101,7 @@ *local*) # First pass through any local machine types. echo $1 - exit 0;; + exit ;; * ) break ;; @@ -118,8 +120,9 @@ # Here we must recognize all the valid KERNEL-OS combinations. maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` case $maybe_os in - nto-qnx* | linux-gnu* | linux-dietlibc | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | \ - kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* | storm-chaos* | os2-emx* | rtmk-nova*) + nto-qnx* | linux-gnu* | linux-dietlibc | linux-newlib* | linux-uclibc* | \ + uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* | \ + storm-chaos* | os2-emx* | rtmk-nova*) os=-$maybe_os basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` ;; @@ -170,6 +173,10 @@ -hiux*) os=-hiuxwe2 ;; + -sco6) + os=-sco5v6 + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; -sco5) os=-sco3.2v5 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` @@ -186,6 +193,10 @@ # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; + -sco5v6*) + # Don't forget version if it is 3.2v4 or newer. 
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; -sco*) os=-sco3.2v2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` @@ -230,15 +241,16 @@ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ | am33_2.0 \ - | arc | arm | arm[bl]e | arme[lb] | armv[2345] | armv[345][lb] | avr \ + | arc | arm | arm[bl]e | arme[lb] | armv[2345] | armv[345][lb] | avr | avr32 \ | bfin \ | c4x | clipper \ | d10v | d30v | dlx | dsp16xx \ - | fr30 | frv \ + | fido | fr30 | frv \ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ | i370 | i860 | i960 | ia64 \ | ip2k | iq2000 \ - | m32r | m32rle | m68000 | m68k | m88k | maxq | mcore \ + | m32c | m32r | m32rle | m68000 | m68k | m88k \ + | maxq | mb | microblaze | mcore | mep \ | mips | mipsbe | mipseb | mipsel | mipsle \ | mips16 \ | mips64 | mips64el \ @@ -247,6 +259,7 @@ | mips64vr4100 | mips64vr4100el \ | mips64vr4300 | mips64vr4300el \ | mips64vr5000 | mips64vr5000el \ + | mips64vr5900 | mips64vr5900el \ | mipsisa32 | mipsisa32el \ | mipsisa32r2 | mipsisa32r2el \ | mipsisa64 | mipsisa64el \ @@ -255,21 +268,24 @@ | mipsisa64sr71k | mipsisa64sr71kel \ | mipstx39 | mipstx39el \ | mn10200 | mn10300 \ + | mt \ | msp430 \ + | nios | nios2 \ | ns16k | ns32k \ - | openrisc | or32 \ + | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle | ppcbe \ | pyramid \ - | sh | sh[1234] | sh[23]e | sh[34]eb | shbe | shle | sh[1234]le | sh3ele \ + | score \ + | sh | sh[1234] | sh[24]a | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ - | sparc | sparc64 | sparc64b | sparc86x | sparclet | sparclite \ - | sparcv8 | sparcv9 | sparcv9b \ - | strongarm \ + | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ + | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ + | spu | strongarm \ | tahoe | thumb | tic4x | tic80 | tron \ | v850 | v850e \ | we32k \ - | x86 | xscale | xscalee[bl] | xstormy16 | xtensa \ + | x86 | xc16x | xscale | xscalee[bl] | xstormy16 | xtensa \ | z8k) basic_machine=$basic_machine-unknown ;; @@ -280,6 +296,9 @@ ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; + ms1) + basic_machine=mt-unknown + ;; # We use `pc' rather than `unknown' # because (1) that's what they normally are, and @@ -299,18 +318,18 @@ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* \ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \ - | avr-* \ + | avr-* | avr32-* \ | bfin-* | bs2000-* \ | c[123]* | c30-* | [cjt]90-* | c4x-* | c54x-* | c55x-* | c6x-* \ | clipper-* | craynv-* | cydra-* \ | d10v-* | d30v-* | dlx-* \ | elxsi-* \ - | f30[01]-* | f700-* | fr30-* | frv-* | fx80-* \ + | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ | h8300-* | h8500-* \ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ | i*86-* | i860-* | i960-* | ia64-* \ | ip2k-* | iq2000-* \ - | m32r-* | m32rle-* \ + | m32c-* | m32r-* | m32rle-* \ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ | m88110-* | m88k-* | maxq-* | mcore-* \ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ @@ -321,6 +340,7 @@ | mips64vr4100-* | mips64vr4100el-* \ | mips64vr4300-* | mips64vr4300el-* \ | mips64vr5000-* | mips64vr5000el-* \ + | mips64vr5900-* | mips64vr5900el-* \ | mipsisa32-* | mipsisa32el-* \ | mipsisa32r2-* | mipsisa32r2el-* \ | mipsisa64-* | mipsisa64el-* \ @@ -329,24 +349,26 @@ | mipsisa64sr71k-* | 
mipsisa64sr71kel-* \ | mipstx39-* | mipstx39el-* \ | mmix-* \ + | mt-* \ | msp430-* \ + | nios-* | nios2-* \ | none-* | np1-* | ns16k-* | ns32k-* \ | orion-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* | ppcbe-* \ | pyramid-* \ | romp-* | rs6000-* \ - | sh-* | sh[1234]-* | sh[23]e-* | sh[34]eb-* | shbe-* \ + | sh-* | sh[1234]-* | sh[24]a-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ - | sparc-* | sparc64-* | sparc64b-* | sparc86x-* | sparclet-* \ + | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ | sparclite-* \ - | sparcv8-* | sparcv9-* | sparcv9b-* | strongarm-* | sv1-* | sx?-* \ + | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | strongarm-* | sv1-* | sx?-* \ | tahoe-* | thumb-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tron-* \ | v850-* | v850e-* | vax-* \ | we32k-* \ - | x86-* | x86_64-* | xps100-* | xscale-* | xscalee[bl]-* \ + | x86-* | x86_64-* | xc16x-* | xps100-* | xscale-* | xscalee[bl]-* \ | xstormy16-* | xtensa-* \ | ymp-* \ | z8k-*) @@ -661,6 +683,10 @@ basic_machine=i386-pc os=-mingw32 ;; + mingw32ce) + basic_machine=arm-unknown + os=-mingw32ce + ;; miniframe) basic_machine=m68000-convergent ;; @@ -686,6 +712,9 @@ basic_machine=i386-pc os=-msdos ;; + ms1-*) + basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` + ;; mvs) basic_machine=i370-ibm os=-mvs @@ -761,9 +790,8 @@ basic_machine=hppa1.1-oki os=-proelf ;; - or32 | or32-*) + openrisc | openrisc-*) basic_machine=or32-unknown - os=-coff ;; os400) basic_machine=powerpc-ibm @@ -794,6 +822,12 @@ pc532 | pc532-*) basic_machine=ns32k-pc532 ;; + pc98) + basic_machine=i386-pc + ;; + pc98-*) + basic_machine=i386-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; pentium | p5 | k5 | k6 | nexgen | viac3) basic_machine=i586-pc ;; @@ -850,6 +884,10 @@ basic_machine=i586-unknown os=-pw32 ;; + rdos) + basic_machine=i386-pc + os=-rdos + ;; rom68k) basic_machine=m68k-rom68k os=-coff @@ -876,6 +914,10 @@ sb1el) basic_machine=mipsisa64sb1el-unknown ;; + sde) + basic_machine=mipsisa32-sde + os=-elf + ;; sei) basic_machine=mips-sei os=-seiux @@ -887,6 +929,9 @@ basic_machine=sh-hitachi os=-hms ;; + sh5el) + basic_machine=sh5le-unknown + ;; sh64) basic_machine=sh64-unknown ;; @@ -1089,13 +1134,10 @@ we32k) basic_machine=we32k-att ;; - sh3 | sh4 | sh[34]eb | sh[1234]le | sh[23]ele) + sh[1234] | sh[24]a | sh[34]eb | sh[1234]le | sh[23]ele) basic_machine=sh-unknown ;; - sh64) - basic_machine=sh64-unknown - ;; - sparc | sparcv8 | sparcv9 | sparcv9b) + sparc | sparcv8 | sparcv9 | sparcv9b | sparcv9v) basic_machine=sparc-sun ;; cydra) @@ -1168,20 +1210,23 @@ | -aos* \ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ - | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* | -openbsd* \ + | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ + | -openbsd* | -solidbsd* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* \ | -cygwin* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ - | -mingw32* | -linux-gnu* | -linux-uclibc* | -uxpv* | -beos* | -mpeix* | -udk* \ + | -mingw32* | -linux-gnu* | -linux-newlib* | -linux-uclibc* \ + | -uxpv* | -beos* | -mpeix* | -udk* \ | -interix* | -uwin* | -mks* | 
-rhapsody* | -darwin* | -opened* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ - | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly*) + | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ + | -skyos* | -haiku* | -rdos* | -toppers* | -drops*) # Remember, each alternative MUST END IN *, to match a version number. ;; -qnx*) @@ -1199,7 +1244,7 @@ os=`echo $os | sed -e 's|nto|nto-qnx|'` ;; -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ - | -windows* | -osx | -abug | -netware* | -os9* | -beos* \ + | -windows* | -osx | -abug | -netware* | -os9* | -beos* | -haiku* \ | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*) ;; -mac*) @@ -1333,6 +1378,12 @@ # system, and we'll never get to this point. case $basic_machine in + score-*) + os=-elf + ;; + spu-*) + os=-elf + ;; *-acorn) os=-riscix1.2 ;; @@ -1342,9 +1393,9 @@ arm*-semi) os=-aout ;; - c4x-* | tic4x-*) - os=-coff - ;; + c4x-* | tic4x-*) + os=-coff + ;; # This must come before the *-dec entry. pdp10-*) os=-tops20 @@ -1370,6 +1421,9 @@ m68*-cisco) os=-aout ;; + mep-*) + os=-elf + ;; mips*-cisco) os=-elf ;; @@ -1388,6 +1442,9 @@ *-be) os=-beos ;; + *-haiku) + os=-haiku + ;; *-ibm) os=-aix ;; @@ -1559,7 +1616,7 @@ esac echo $basic_machine$os -exit 0 +exit # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) Modified: python/trunk/Modules/_ctypes/libffi/configure ============================================================================== --- python/trunk/Modules/_ctypes/libffi/configure (original) +++ python/trunk/Modules/_ctypes/libffi/configure Tue Mar 4 21:09:11 2008 @@ -1,6 +1,6 @@ #! /bin/sh # Guess values for system-dependent variables and create Makefiles. -# Generated by GNU Autoconf 2.61 for libffi 2.1. +# Generated by GNU Autoconf 2.61 for libffi 3.0.4. # # Report bugs to . # @@ -551,6 +551,160 @@ + +# Check that we are running under the correct shell. +SHELL=${CONFIG_SHELL-/bin/sh} + +case X$ECHO in +X*--fallback-echo) + # Remove one level of quotation (which was required for Make). + ECHO=`echo "$ECHO" | sed 's,\\\\\$\\$0,'$0','` + ;; +esac + +echo=${ECHO-echo} +if test "X$1" = X--no-reexec; then + # Discard the --no-reexec flag, and continue. + shift +elif test "X$1" = X--fallback-echo; then + # Avoid inline document here, it may be left over + : +elif test "X`($echo '\t') 2>/dev/null`" = 'X\t' ; then + # Yippee, $echo works! + : +else + # Restart under the correct shell. + exec $SHELL "$0" --no-reexec ${1+"$@"} +fi + +if test "X$1" = X--fallback-echo; then + # used as fallback echo + shift + cat </dev/null 2>&1 && unset CDPATH + +if test -z "$ECHO"; then +if test "X${echo_test_string+set}" != Xset; then +# find a string as large as possible, as long as the shell can cope with it + for cmd in 'sed 50q "$0"' 'sed 20q "$0"' 'sed 10q "$0"' 'sed 2q "$0"' 'echo test'; do + # expected sizes: less than 2Kb, 1Kb, 512 bytes, 16 bytes, ... + if (echo_test_string=`eval $cmd`) 2>/dev/null && + echo_test_string=`eval $cmd` && + (test "X$echo_test_string" = "X$echo_test_string") 2>/dev/null + then + break + fi + done +fi + +if test "X`($echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + : +else + # The Solaris, AIX, and Digital Unix default echo programs unquote + # backslashes. 
This makes it impossible to quote backslashes using + # echo "$something" | sed 's/\\/\\\\/g' + # + # So, first we look for a working echo in the user's PATH. + + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for dir in $PATH /usr/ucb; do + IFS="$lt_save_ifs" + if (test -f $dir/echo || test -f $dir/echo$ac_exeext) && + test "X`($dir/echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($dir/echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + echo="$dir/echo" + break + fi + done + IFS="$lt_save_ifs" + + if test "X$echo" = Xecho; then + # We didn't find a better echo, so look for alternatives. + if test "X`(print -r '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`(print -r "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + # This shell has a builtin print -r that does the trick. + echo='print -r' + elif (test -f /bin/ksh || test -f /bin/ksh$ac_exeext) && + test "X$CONFIG_SHELL" != X/bin/ksh; then + # If we have ksh, try running configure again with it. + ORIGINAL_CONFIG_SHELL=${CONFIG_SHELL-/bin/sh} + export ORIGINAL_CONFIG_SHELL + CONFIG_SHELL=/bin/ksh + export CONFIG_SHELL + exec $CONFIG_SHELL "$0" --no-reexec ${1+"$@"} + else + # Try using printf. + echo='printf %s\n' + if test "X`($echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + # Cool, printf works + : + elif echo_testing_string=`($ORIGINAL_CONFIG_SHELL "$0" --fallback-echo '\t') 2>/dev/null` && + test "X$echo_testing_string" = 'X\t' && + echo_testing_string=`($ORIGINAL_CONFIG_SHELL "$0" --fallback-echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + CONFIG_SHELL=$ORIGINAL_CONFIG_SHELL + export CONFIG_SHELL + SHELL="$CONFIG_SHELL" + export SHELL + echo="$CONFIG_SHELL $0 --fallback-echo" + elif echo_testing_string=`($CONFIG_SHELL "$0" --fallback-echo '\t') 2>/dev/null` && + test "X$echo_testing_string" = 'X\t' && + echo_testing_string=`($CONFIG_SHELL "$0" --fallback-echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + echo="$CONFIG_SHELL $0 --fallback-echo" + else + # maybe with a smaller string... + prev=: + + for cmd in 'echo test' 'sed 2q "$0"' 'sed 10q "$0"' 'sed 20q "$0"' 'sed 50q "$0"'; do + if (test "X$echo_test_string" = "X`eval $cmd`") 2>/dev/null + then + break + fi + prev="$cmd" + done + + if test "$prev" != 'sed 50q "$0"'; then + echo_test_string=`eval $prev` + export echo_test_string + exec ${ORIGINAL_CONFIG_SHELL-${CONFIG_SHELL-/bin/sh}} "$0" ${1+"$@"} + else + # Oops. We lost completely, so just stick with echo. + echo=echo + fi + fi + fi + fi +fi +fi + +# Copy echo and quote the copy suitably for passing to libtool from +# the Makefile, instead of quoting the original, which is used later. +ECHO=$echo +if test "X$ECHO" = "X$CONFIG_SHELL $0 --fallback-echo"; then + ECHO="$CONFIG_SHELL \\\$\$0 --fallback-echo" +fi + + + + +tagnames=${tagnames+${tagnames},}CXX + +tagnames=${tagnames+${tagnames},}F77 + exec 7<&0 &1 # Name of the host. @@ -574,8 +728,8 @@ # Identity of this package. PACKAGE_NAME='libffi' PACKAGE_TARNAME='libffi' -PACKAGE_VERSION='2.1' -PACKAGE_STRING='libffi 2.1' +PACKAGE_VERSION='3.0.4' +PACKAGE_STRING='libffi 3.0.4' PACKAGE_BUGREPORT='http://gcc.gnu.org/bugs.html' # Factoring default headers for most tests. 
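The regenerated configure above pulls in libtool's probe for an echo command that leaves backslashes alone; as the comments in that block note, the default echo on some systems (Solaris, AIX, Digital Unix) unquotes them. A minimal standalone sketch of the core test, trimmed to the two common outcomes; "safe_echo" is an illustrative name, not a variable from the generated script:

    # Probe: does echo pass '\t' through literally?
    if test "X`(echo '\t') 2>/dev/null`" = 'X\t'; then
      safe_echo=echo           # this echo already leaves backslashes untouched
    elif test "X`(printf '%s\n' '\t') 2>/dev/null`" = 'X\t'; then
      safe_echo='printf %s\n'  # portable fallback when echo expands escapes
    else
      safe_echo=echo           # last resort: accept whatever echo does
    fi
    $safe_echo 'C:\temp\new'   # prints the string with backslashes intact

The full block goes further, re-execing under ksh and defining a --fallback-echo mode, but the comparison above is the heart of it.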
@@ -663,6 +817,28 @@ target_cpu target_vendor target_os +INSTALL_PROGRAM +INSTALL_SCRIPT +INSTALL_DATA +am__isrc +CYGPATH_W +PACKAGE +VERSION +ACLOCAL +AUTOCONF +AUTOMAKE +AUTOHEADER +MAKEINFO +install_sh +STRIP +INSTALL_STRIP_PROGRAM +mkdir_p +AWK +SET_MAKE +am__leading_dot +AMTAR +am__tar +am__untar CC CFLAGS LDFLAGS @@ -670,22 +846,117 @@ ac_ct_CC EXEEXT OBJEXT -CPP +DEPDIR +am__include +am__quote +AMDEP_TRUE +AMDEP_FALSE +AMDEPBACKSLASH +CCDEPMODE +am__fastdepCC_TRUE +am__fastdepCC_FALSE +CCAS +CCASFLAGS +CCASDEPMODE +am__fastdepCCAS_TRUE +am__fastdepCCAS_FALSE +SED GREP EGREP +LN_S +ECHO +AR +RANLIB +CPP +CXX +CXXFLAGS +ac_ct_CXX +CXXDEPMODE +am__fastdepCXX_TRUE +am__fastdepCXX_FALSE +CXXCPP +F77 +FFLAGS +ac_ct_F77 +LIBTOOL +MAINTAINER_MODE_TRUE +MAINTAINER_MODE_FALSE +MAINT +TESTSUBDIR_TRUE +TESTSUBDIR_FALSE +AM_RUNTESTFLAGS +MIPS_TRUE +MIPS_FALSE +SPARC_TRUE +SPARC_FALSE +X86_TRUE +X86_FALSE +X86_FREEBSD_TRUE +X86_FREEBSD_FALSE +X86_WIN32_TRUE +X86_WIN32_FALSE +X86_DARWIN_TRUE +X86_DARWIN_FALSE +ALPHA_TRUE +ALPHA_FALSE +IA64_TRUE +IA64_FALSE +M32R_TRUE +M32R_FALSE +M68K_TRUE +M68K_FALSE +POWERPC_TRUE +POWERPC_FALSE +POWERPC_AIX_TRUE +POWERPC_AIX_FALSE +POWERPC_DARWIN_TRUE +POWERPC_DARWIN_FALSE +POWERPC_FREEBSD_TRUE +POWERPC_FREEBSD_FALSE +ARM_TRUE +ARM_FALSE +LIBFFI_CRIS_TRUE +LIBFFI_CRIS_FALSE +FRV_TRUE +FRV_FALSE +S390_TRUE +S390_FALSE +X86_64_TRUE +X86_64_FALSE +SH_TRUE +SH_FALSE +SH64_TRUE +SH64_FALSE +PA_LINUX_TRUE +PA_LINUX_FALSE +PA_HPUX_TRUE +PA_HPUX_FALSE +PA64_HPUX_TRUE +PA64_HPUX_FALSE ALLOCA HAVE_LONG_DOUBLE TARGET TARGETDIR -MKTARGET +toolexecdir +toolexeclibdir LIBOBJS LTLIBOBJS' ac_subst_files='' ac_precious_vars='build_alias host_alias target_alias +CCAS +CCASFLAGS CPP -CPPFLAGS' +CPPFLAGS +CXX +CXXFLAGS +LDFLAGS +LIBS +CCC +CXXCPP +F77 +FFLAGS' # Initialize some variables set by options. @@ -1188,7 +1459,7 @@ # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF -\`configure' configures libffi 2.1 to adapt to many kinds of systems. +\`configure' configures libffi 3.0.4 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... 
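Many of the new substitution variables listed above come in NAME_TRUE/NAME_FALSE pairs (MIPS_TRUE/MIPS_FALSE, X86_TRUE/X86_FALSE, and so on). These are automake conditionals: configure sets one member of each pair to '#', and config.status substitutes both into Makefile.in, commenting out the lines for targets that are not being built. A rough sketch of the pattern; TARGET, arch_SOURCES and the file paths are illustrative, not copied from libffi's build files:

    # What AM_CONDITIONAL([X86], [test x$TARGET = xX86]) expands to in configure, roughly:
    if test x$TARGET = xX86; then
      X86_TRUE=
      X86_FALSE='#'
    else
      X86_TRUE='#'
      X86_FALSE=
    fi
    # In Makefile.in the conditional lines carry @X86_TRUE@ / @X86_FALSE@ markers;
    # after substitution exactly one of each pair is commented out, e.g.:
    #   @X86_TRUE@arch_SOURCES = src/x86/ffi.c src/x86/sysv.S
    #   @X86_FALSE@arch_SOURCES =

Using '#' as the substituted value is what lets a plain Makefile express conditionals without relying on GNU make extensions.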
@@ -1245,6 +1516,11 @@ cat <<\_ACEOF +Program names: + --program-prefix=PREFIX prepend PREFIX to installed program names + --program-suffix=SUFFIX append SUFFIX to installed program names + --program-transform-name=PROGRAM run sed PROGRAM on installed program names + System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] @@ -1254,10 +1530,35 @@ if test -n "$ac_init_help"; then case $ac_init_help in - short | recursive ) echo "Configuration of libffi 2.1:";; + short | recursive ) echo "Configuration of libffi 3.0.4:";; esac cat <<\_ACEOF +Optional Features: + --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) + --enable-FEATURE[=ARG] include FEATURE [ARG=yes] + --disable-dependency-tracking speeds up one-time build + --enable-dependency-tracking do not reject slow dependency extractors + --enable-shared[=PKGS] build shared libraries [default=yes] + --enable-static[=PKGS] build static libraries [default=yes] + --enable-fast-install[=PKGS] + optimize for fast installation [default=yes] + --disable-libtool-lock avoid locking (might break parallel builds) + --enable-maintainer-mode enable make rules and dependencies not useful + (and sometimes confusing) to the casual installer + --enable-debug debugging mode + --disable-structs omit code for struct support + --disable-raw-api make the raw api unavailable + --enable-purify-safety purify-safe mode + +Optional Packages: + --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] + --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) + --with-gnu-ld assume the C compiler uses GNU ld [default=no] + --with-pic try to use only PIC/non-PIC objects [default=use + both] + --with-tags[=TAGS] include additional configurations [automatic] + Some influential environment variables: CC C compiler command CFLAGS C compiler flags @@ -1266,7 +1567,14 @@ LIBS libraries to pass to the linker, e.g. -l CPPFLAGS C/C++/Objective C preprocessor flags, e.g. -I if you have headers in a nonstandard directory + CCAS assembler compiler command (defaults to CC) + CCASFLAGS assembler compiler flags (defaults to CFLAGS) CPP C preprocessor + CXX C++ compiler command + CXXFLAGS C++ compiler flags + CXXCPP C++ preprocessor + F77 Fortran 77 compiler command + FFLAGS Fortran 77 compiler flags Use these variables to override the choices made by `configure' or to help it to find libraries and programs with nonstandard names/locations. @@ -1332,7 +1640,7 @@ test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF -libffi configure 2.1 +libffi configure 3.0.4 generated by GNU Autoconf 2.61 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, @@ -1346,7 +1654,7 @@ This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. -It was created by libffi $as_me 2.1, which was +It was created by libffi $as_me 3.0.4, which was generated by GNU Autoconf 2.61. Invocation command line was $ $0 $@ @@ -1861,6 +2169,466 @@ program_prefix=${target_alias}- target_alias=${target_alias-$host_alias} +. ${srcdir}/configure.host + +am__api_version='1.10' + +# Find a good install program. We prefer a C program (faster), +# so one script is as good as another. 
But avoid the broken or +# incompatible versions: +# SysV /etc/install, /usr/sbin/install +# SunOS /usr/etc/install +# IRIX /sbin/install +# AIX /bin/install +# AmigaOS /C/install, which installs bootblocks on floppy discs +# AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag +# AFS /usr/afsws/bin/install, which mishandles nonexistent args +# SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" +# OS/2's system install, which has a completely different semantic +# ./install, which can be erroneously created by make from ./install.sh. +{ echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5 +echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6; } +if test -z "$INSTALL"; then +if test "${ac_cv_path_install+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + # Account for people who put trailing slashes in PATH elements. +case $as_dir/ in + ./ | .// | /cC/* | \ + /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ + ?:\\/os2\\/install\\/* | ?:\\/OS2\\/INSTALL\\/* | \ + /usr/ucb/* ) ;; + *) + # OSF1 and SCO ODT 3.0 have their own names for install. + # Don't use installbsd from OSF since it installs stuff as root + # by default. + for ac_prog in ginstall scoinst install; do + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; }; then + if test $ac_prog = install && + grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then + # AIX install. It has an incompatible calling convention. + : + elif test $ac_prog = install && + grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then + # program-specific install script used by HP pwplus--don't use. + : + else + ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" + break 3 + fi + fi + done + done + ;; +esac +done +IFS=$as_save_IFS + + +fi + if test "${ac_cv_path_install+set}" = set; then + INSTALL=$ac_cv_path_install + else + # As a last resort, use the slow shell script. Don't cache a + # value for INSTALL within a source directory, because that will + # break other packages using the cache if that directory is + # removed, or if the value is a relative name. + INSTALL=$ac_install_sh + fi +fi +{ echo "$as_me:$LINENO: result: $INSTALL" >&5 +echo "${ECHO_T}$INSTALL" >&6; } + +# Use test -z because SunOS4 sh mishandles braces in ${var-val}. +# It thinks the first close brace ends the variable substitution. +test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' + +test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' + +test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' + +{ echo "$as_me:$LINENO: checking whether build environment is sane" >&5 +echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; } +# Just in case +sleep 1 +echo timestamp > conftest.file +# Do `set' in a subshell so we don't clobber the current shell's +# arguments. Must try -L first in case configure is actually a +# symlink; some systems play weird games with the mod time of symlinks +# (eg FreeBSD returns the mod time of the symlink's containing +# directory). +if ( + set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` + if test "$*" = "X"; then + # -L didn't work. 
+ set X `ls -t $srcdir/configure conftest.file` + fi + rm -f conftest.file + if test "$*" != "X $srcdir/configure conftest.file" \ + && test "$*" != "X conftest.file $srcdir/configure"; then + + # If neither matched, then we have a broken ls. This can happen + # if, for instance, CONFIG_SHELL is bash and it inherits a + # broken ls alias from the environment. This has actually + # happened. Such a system could not be considered "sane". + { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken +alias in your environment" >&5 +echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken +alias in your environment" >&2;} + { (exit 1); exit 1; }; } + fi + + test "$2" = conftest.file + ) +then + # Ok. + : +else + { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! +Check your system clock" >&5 +echo "$as_me: error: newly created file is older than distributed files! +Check your system clock" >&2;} + { (exit 1); exit 1; }; } +fi +{ echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6; } +test "$program_prefix" != NONE && + program_transform_name="s&^&$program_prefix&;$program_transform_name" +# Use a double $ so make ignores it. +test "$program_suffix" != NONE && + program_transform_name="s&\$&$program_suffix&;$program_transform_name" +# Double any \ or $. echo might interpret backslashes. +# By default was `s,x,x', remove it if useless. +cat <<\_ACEOF >conftest.sed +s/[\\$]/&&/g;s/;s,x,x,$// +_ACEOF +program_transform_name=`echo $program_transform_name | sed -f conftest.sed` +rm -f conftest.sed + +# expand $ac_aux_dir to an absolute path +am_aux_dir=`cd $ac_aux_dir && pwd` + +test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing" +# Use eval to expand $SHELL +if eval "$MISSING --run true"; then + am_missing_run="$MISSING --run " +else + am_missing_run= + { echo "$as_me:$LINENO: WARNING: \`missing' script is too old or missing" >&5 +echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} +fi + +{ echo "$as_me:$LINENO: checking for a thread-safe mkdir -p" >&5 +echo $ECHO_N "checking for a thread-safe mkdir -p... $ECHO_C" >&6; } +if test -z "$MKDIR_P"; then + if test "${ac_cv_path_mkdir+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_prog in mkdir gmkdir; do + for ac_exec_ext in '' $ac_executable_extensions; do + { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; } || continue + case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( + 'mkdir (GNU coreutils) '* | \ + 'mkdir (coreutils) '* | \ + 'mkdir (fileutils) '4.1*) + ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext + break 3;; + esac + done + done +done +IFS=$as_save_IFS + +fi + + if test "${ac_cv_path_mkdir+set}" = set; then + MKDIR_P="$ac_cv_path_mkdir -p" + else + # As a last resort, use the slow shell script. Don't cache a + # value for MKDIR_P within a source directory, because that will + # break other packages using the cache if that directory is + # removed, or if the value is a relative name. 
+ test -d ./--version && rmdir ./--version + MKDIR_P="$ac_install_sh -d" + fi +fi +{ echo "$as_me:$LINENO: result: $MKDIR_P" >&5 +echo "${ECHO_T}$MKDIR_P" >&6; } + +mkdir_p="$MKDIR_P" +case $mkdir_p in + [\\/$]* | ?:[\\/]*) ;; + */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; +esac + +for ac_prog in gawk mawk nawk awk +do + # Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_AWK+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$AWK"; then + ac_cv_prog_AWK="$AWK" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_AWK="$ac_prog" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +AWK=$ac_cv_prog_AWK +if test -n "$AWK"; then + { echo "$as_me:$LINENO: result: $AWK" >&5 +echo "${ECHO_T}$AWK" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + + test -n "$AWK" && break +done + +{ echo "$as_me:$LINENO: checking whether ${MAKE-make} sets \$(MAKE)" >&5 +echo $ECHO_N "checking whether ${MAKE-make} sets \$(MAKE)... $ECHO_C" >&6; } +set x ${MAKE-make}; ac_make=`echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` +if { as_var=ac_cv_prog_make_${ac_make}_set; eval "test \"\${$as_var+set}\" = set"; }; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.make <<\_ACEOF +SHELL = /bin/sh +all: + @echo '@@@%%%=$(MAKE)=@@@%%%' +_ACEOF +# GNU make sometimes prints "make[1]: Entering...", which would confuse us. +case `${MAKE-make} -f conftest.make 2>/dev/null` in + *@@@%%%=?*=@@@%%%*) + eval ac_cv_prog_make_${ac_make}_set=yes;; + *) + eval ac_cv_prog_make_${ac_make}_set=no;; +esac +rm -f conftest.make +fi +if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then + { echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6; } + SET_MAKE= +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } + SET_MAKE="MAKE=${MAKE-make}" +fi + +rm -rf .tst 2>/dev/null +mkdir .tst 2>/dev/null +if test -d .tst; then + am__leading_dot=. +else + am__leading_dot=_ +fi +rmdir .tst 2>/dev/null + +if test "`cd $srcdir && pwd`" != "`pwd`"; then + # Use -I$(srcdir) only when $(srcdir) != ., so that make's output + # is not polluted with repeated "-I." + am__isrc=' -I$(srcdir)' + # test to see if srcdir already configured + if test -f $srcdir/config.status; then + { { echo "$as_me:$LINENO: error: source directory already configured; run \"make distclean\" there first" >&5 +echo "$as_me: error: source directory already configured; run \"make distclean\" there first" >&2;} + { (exit 1); exit 1; }; } + fi +fi + +# test whether we have cygpath +if test -z "$CYGPATH_W"; then + if (cygpath --version) >/dev/null 2>/dev/null; then + CYGPATH_W='cygpath -w' + else + CYGPATH_W=echo + fi +fi + + +# Define the identity of the package. + PACKAGE='libffi' + VERSION='3.0.4' + + +cat >>confdefs.h <<_ACEOF +#define PACKAGE "$PACKAGE" +_ACEOF + + +cat >>confdefs.h <<_ACEOF +#define VERSION "$VERSION" +_ACEOF + +# Some tools Automake needs. 
+ +ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} + + +AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} + + +AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} + + +AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} + + +MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} + +install_sh=${install_sh-"\$(SHELL) $am_aux_dir/install-sh"} + +# Installed binaries are usually stripped using `strip' when the user +# run `make install-strip'. However `strip' might not be the right +# tool to use in cross-compilation environments, therefore Automake +# will honor the `STRIP' environment variable to overrule this program. +if test "$cross_compiling" != no; then + if test -n "$ac_tool_prefix"; then + # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. +set dummy ${ac_tool_prefix}strip; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_STRIP+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$STRIP"; then + ac_cv_prog_STRIP="$STRIP" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_STRIP="${ac_tool_prefix}strip" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +STRIP=$ac_cv_prog_STRIP +if test -n "$STRIP"; then + { echo "$as_me:$LINENO: result: $STRIP" >&5 +echo "${ECHO_T}$STRIP" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + +fi +if test -z "$ac_cv_prog_STRIP"; then + ac_ct_STRIP=$STRIP + # Extract the first word of "strip", so it can be a program name with args. +set dummy strip; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_ac_ct_STRIP+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$ac_ct_STRIP"; then + ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_ac_ct_STRIP="strip" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP +if test -n "$ac_ct_STRIP"; then + { echo "$as_me:$LINENO: result: $ac_ct_STRIP" >&5 +echo "${ECHO_T}$ac_ct_STRIP" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + if test "x$ac_ct_STRIP" = x; then + STRIP=":" + else + case $cross_compiling:$ac_tool_warned in +yes:) +{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&5 +echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." 
>&2;} +ac_tool_warned=yes ;; +esac + STRIP=$ac_ct_STRIP + fi +else + STRIP="$ac_cv_prog_STRIP" +fi + +fi +INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" + +# We need awk for the "check" target. The system "awk" is bad on +# some platforms. +# Always define AMTAR for backward compatibility. + +AMTAR=${AMTAR-"${am_missing_run}tar"} + +am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -' + + + + + + +# The same as in boehm-gc and libstdc++. Have to borrow it from there. +# We must force CC to /not/ be precious variables; otherwise +# the wrong, non-multilib-adjusted value will be used in multilibs. +# As a side effect, we have to subst CFLAGS ourselves. + ac_ext=c @@ -2781,242 +3549,552 @@ ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu +DEPDIR="${am__leading_dot}deps" +ac_config_commands="$ac_config_commands depfiles" +am_make=${MAKE-make} +cat > confinc << 'END' +am__doit: + @echo done +.PHONY: am__doit +END +# If we don't find an include directive, just comment out the code. +{ echo "$as_me:$LINENO: checking for style of include used by $am_make" >&5 +echo $ECHO_N "checking for style of include used by $am_make... $ECHO_C" >&6; } +am__include="#" +am__quote= +_am_result=none +# First try GNU make style include. +echo "include confinc" > confmf +# We grep out `Entering directory' and `Leaving directory' +# messages which can occur if `w' ends up in MAKEFLAGS. +# In particular we don't look at `^make:' because GNU make might +# be invoked under some other name (usually "gmake"), in which +# case it prints its new name instead of `make'. +if test "`$am_make -s -f confmf 2> /dev/null | grep -v 'ing directory'`" = "done"; then + am__include=include + am__quote= + _am_result=GNU +fi +# Now try BSD make style include. +if test "$am__include" = "#"; then + echo '.include "confinc"' > confmf + if test "`$am_make -s -f confmf 2> /dev/null`" = "done"; then + am__include=.include + am__quote="\"" + _am_result=BSD + fi +fi +{ echo "$as_me:$LINENO: result: $_am_result" >&5 +echo "${ECHO_T}$_am_result" >&6; } +rm -f confinc confmf -ac_ext=c -ac_cpp='$CPP $CPPFLAGS' -ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' -ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' -ac_compiler_gnu=$ac_cv_c_compiler_gnu -{ echo "$as_me:$LINENO: checking how to run the C preprocessor" >&5 -echo $ECHO_N "checking how to run the C preprocessor... $ECHO_C" >&6; } -# On Suns, sometimes $CPP names a directory. -if test -n "$CPP" && test -d "$CPP"; then - CPP= +# Check whether --enable-dependency-tracking was given. +if test "${enable_dependency_tracking+set}" = set; then + enableval=$enable_dependency_tracking; fi -if test -z "$CPP"; then - if test "${ac_cv_prog_CPP+set}" = set; then + +if test "x$enable_dependency_tracking" != xno; then + am_depcomp="$ac_aux_dir/depcomp" + AMDEPBACKSLASH='\' +fi + if test "x$enable_dependency_tracking" != xno; then + AMDEP_TRUE= + AMDEP_FALSE='#' +else + AMDEP_TRUE='#' + AMDEP_FALSE= +fi + + + +depcc="$CC" am_compiler_list= + +{ echo "$as_me:$LINENO: checking dependency style of $depcc" >&5 +echo $ECHO_N "checking dependency style of $depcc... 
$ECHO_C" >&6; } +if test "${am_cv_CC_dependencies_compiler_type+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - # Double quotes because CPP needs to be expanded - for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" - do - ac_preproc_ok=false -for ac_c_preproc_warn_flag in '' yes -do - # Use a header file that comes with gcc, so configuring glibc - # with a fresh cross-compiler works. - # Prefer to if __STDC__ is defined, since - # exists even on freestanding compilers. - # On the NeXT, cc -E runs the code through the compiler's parser, - # not just through cpp. "Syntax error" is here to catch this case. - cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#ifdef __STDC__ -# include -#else -# include -#endif - Syntax error -_ACEOF -if { (ac_try="$ac_cpp conftest.$ac_ext" -case "(($ac_try" in - *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; - *) ac_try_echo=$ac_try;; -esac -eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 - ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } >/dev/null && { - test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || - test ! -s conftest.err - }; then - : + if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then + # We make a subdir and do the tests there. Otherwise we can end up + # making bogus files that we don't know about and never remove. For + # instance it was reported that on HP-UX the gcc test will end up + # making a dummy file named `D' -- because `-MD' means `put the output + # in D'. + mkdir conftest.dir + # Copy depcomp to subdir because otherwise we won't find it if we're + # using a relative directory. + cp "$am_depcomp" conftest.dir + cd conftest.dir + # We will build objects and dependencies in a subdirectory because + # it helps to detect inapplicable dependency modes. For instance + # both Tru64's cc and ICC support -MD to output dependencies as a + # side effect of compilation, but ICC will put the dependencies in + # the current directory while Tru64 will put them in the object + # directory. + mkdir sub + + am_cv_CC_dependencies_compiler_type=none + if test "$am_compiler_list" = ""; then + am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` + fi + for depmode in $am_compiler_list; do + # Setup a source with many dependencies, because some compilers + # like to wrap large dependency lists on column 80 (with \), and + # we should not choose a depcomp mode which is confused by this. + # + # We need to recreate these files for each test, as the compiler may + # overwrite some of them when testing with obscure command lines. + # This happens at least with the AIX C compiler. + : > sub/conftest.c + for i in 1 2 3 4 5 6; do + echo '#include "conftst'$i'.h"' >> sub/conftest.c + # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with + # Solaris 8's {/usr,}/bin/sh. + touch sub/conftst$i.h + done + echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf + + case $depmode in + nosideeffect) + # after this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested + if test "x$enable_dependency_tracking" = xyes; then + continue + else + break + fi + ;; + none) break ;; + esac + # We check with `-c' and `-o' for the sake of the "dashmstdout" + # mode. 
It turns out that the SunPro C++ compiler does not properly + # handle `-M -o', and we need to detect this. + if depmode=$depmode \ + source=sub/conftest.c object=sub/conftest.${OBJEXT-o} \ + depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ + $SHELL ./depcomp $depcc -c -o sub/conftest.${OBJEXT-o} sub/conftest.c \ + >/dev/null 2>conftest.err && + grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftest.${OBJEXT-o} sub/conftest.Po > /dev/null 2>&1 && + ${MAKE-make} -s -f confmf > /dev/null 2>&1; then + # icc doesn't choke on unknown options, it will just issue warnings + # or remarks (even with -Werror). So we grep stderr for any message + # that says an option was ignored or not supported. + # When given -MP, icc 7.0 and 7.1 complain thusly: + # icc: Command line warning: ignoring option '-M'; no argument required + # The diagnosis changed in icc 8.0: + # icc: Command line remark: option '-MP' not supported + if (grep 'ignoring option' conftest.err || + grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else + am_cv_CC_dependencies_compiler_type=$depmode + break + fi + fi + done + + cd .. + rm -rf conftest.dir else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 + am_cv_CC_dependencies_compiler_type=none +fi - # Broken: fails on valid input. -continue fi +{ echo "$as_me:$LINENO: result: $am_cv_CC_dependencies_compiler_type" >&5 +echo "${ECHO_T}$am_cv_CC_dependencies_compiler_type" >&6; } +CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type -rm -f conftest.err conftest.$ac_ext + if + test "x$enable_dependency_tracking" != xno \ + && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then + am__fastdepCC_TRUE= + am__fastdepCC_FALSE='#' +else + am__fastdepCC_TRUE='#' + am__fastdepCC_FALSE= +fi - # OK, works on sane cases. Now check whether nonexistent headers - # can be detected and how. - cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#include -_ACEOF -if { (ac_try="$ac_cpp conftest.$ac_ext" -case "(($ac_try" in - *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; - *) ac_try_echo=$ac_try;; -esac -eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 - ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } >/dev/null && { - test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || - test ! -s conftest.err - }; then - # Broken: success on invalid input. -continue + + + + + +# By default we simply use the C compiler to build assembly code. + +test "${CCAS+set}" = set || CCAS=$CC +test "${CCASFLAGS+set}" = set || CCASFLAGS=$CFLAGS + + + +depcc="$CCAS" am_compiler_list= + +{ echo "$as_me:$LINENO: checking dependency style of $depcc" >&5 +echo $ECHO_N "checking dependency style of $depcc... $ECHO_C" >&6; } +if test "${am_cv_CCAS_dependencies_compiler_type+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 + if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then + # We make a subdir and do the tests there. Otherwise we can end up + # making bogus files that we don't know about and never remove. 
For + # instance it was reported that on HP-UX the gcc test will end up + # making a dummy file named `D' -- because `-MD' means `put the output + # in D'. + mkdir conftest.dir + # Copy depcomp to subdir because otherwise we won't find it if we're + # using a relative directory. + cp "$am_depcomp" conftest.dir + cd conftest.dir + # We will build objects and dependencies in a subdirectory because + # it helps to detect inapplicable dependency modes. For instance + # both Tru64's cc and ICC support -MD to output dependencies as a + # side effect of compilation, but ICC will put the dependencies in + # the current directory while Tru64 will put them in the object + # directory. + mkdir sub + + am_cv_CCAS_dependencies_compiler_type=none + if test "$am_compiler_list" = ""; then + am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` + fi + for depmode in $am_compiler_list; do + # Setup a source with many dependencies, because some compilers + # like to wrap large dependency lists on column 80 (with \), and + # we should not choose a depcomp mode which is confused by this. + # + # We need to recreate these files for each test, as the compiler may + # overwrite some of them when testing with obscure command lines. + # This happens at least with the AIX C compiler. + : > sub/conftest.c + for i in 1 2 3 4 5 6; do + echo '#include "conftst'$i'.h"' >> sub/conftest.c + # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with + # Solaris 8's {/usr,}/bin/sh. + touch sub/conftst$i.h + done + echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf - # Passes both tests. -ac_preproc_ok=: -break + case $depmode in + nosideeffect) + # after this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested + if test "x$enable_dependency_tracking" = xyes; then + continue + else + break + fi + ;; + none) break ;; + esac + # We check with `-c' and `-o' for the sake of the "dashmstdout" + # mode. It turns out that the SunPro C++ compiler does not properly + # handle `-M -o', and we need to detect this. + if depmode=$depmode \ + source=sub/conftest.c object=sub/conftest.${OBJEXT-o} \ + depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ + $SHELL ./depcomp $depcc -c -o sub/conftest.${OBJEXT-o} sub/conftest.c \ + >/dev/null 2>conftest.err && + grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftest.${OBJEXT-o} sub/conftest.Po > /dev/null 2>&1 && + ${MAKE-make} -s -f confmf > /dev/null 2>&1; then + # icc doesn't choke on unknown options, it will just issue warnings + # or remarks (even with -Werror). So we grep stderr for any message + # that says an option was ignored or not supported. + # When given -MP, icc 7.0 and 7.1 complain thusly: + # icc: Command line warning: ignoring option '-M'; no argument required + # The diagnosis changed in icc 8.0: + # icc: Command line remark: option '-MP' not supported + if (grep 'ignoring option' conftest.err || + grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else + am_cv_CCAS_dependencies_compiler_type=$depmode + break + fi + fi + done + + cd .. 
+ rm -rf conftest.dir +else + am_cv_CCAS_dependencies_compiler_type=none fi -rm -f conftest.err conftest.$ac_ext +fi +{ echo "$as_me:$LINENO: result: $am_cv_CCAS_dependencies_compiler_type" >&5 +echo "${ECHO_T}$am_cv_CCAS_dependencies_compiler_type" >&6; } +CCASDEPMODE=depmode=$am_cv_CCAS_dependencies_compiler_type -done -# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. -rm -f conftest.err conftest.$ac_ext -if $ac_preproc_ok; then - break + if + test "x$enable_dependency_tracking" != xno \ + && test "$am_cv_CCAS_dependencies_compiler_type" = gcc3; then + am__fastdepCCAS_TRUE= + am__fastdepCCAS_FALSE='#' +else + am__fastdepCCAS_TRUE='#' + am__fastdepCCAS_FALSE= fi - done - ac_cv_prog_CPP=$CPP -fi - CPP=$ac_cv_prog_CPP +if test "x$CC" != xcc; then + { echo "$as_me:$LINENO: checking whether $CC and cc understand -c and -o together" >&5 +echo $ECHO_N "checking whether $CC and cc understand -c and -o together... $ECHO_C" >&6; } else - ac_cv_prog_CPP=$CPP + { echo "$as_me:$LINENO: checking whether cc understands -c and -o together" >&5 +echo $ECHO_N "checking whether cc understands -c and -o together... $ECHO_C" >&6; } fi -{ echo "$as_me:$LINENO: result: $CPP" >&5 -echo "${ECHO_T}$CPP" >&6; } -ac_preproc_ok=false -for ac_c_preproc_warn_flag in '' yes -do - # Use a header file that comes with gcc, so configuring glibc - # with a fresh cross-compiler works. - # Prefer to if __STDC__ is defined, since - # exists even on freestanding compilers. - # On the NeXT, cc -E runs the code through the compiler's parser, - # not just through cpp. "Syntax error" is here to catch this case. +set dummy $CC; ac_cc=`echo $2 | + sed 's/[^a-zA-Z0-9_]/_/g;s/^[0-9]/_/'` +if { as_var=ac_cv_prog_cc_${ac_cc}_c_o; eval "test \"\${$as_var+set}\" = set"; }; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else cat >conftest.$ac_ext <<_ACEOF /* confdefs.h. */ _ACEOF cat confdefs.h >>conftest.$ac_ext cat >>conftest.$ac_ext <<_ACEOF /* end confdefs.h. */ -#ifdef __STDC__ -# include -#else -# include -#endif - Syntax error + +int +main () +{ + + ; + return 0; +} _ACEOF -if { (ac_try="$ac_cpp conftest.$ac_ext" -case "(($ac_try" in +# Make sure it works both with $CC and with simple cc. +# We do the test twice because some compilers refuse to overwrite an +# existing .o file with -o, though they will create one. +ac_try='$CC -c conftest.$ac_ext -o conftest2.$ac_objext >&5' +rm -f conftest2.* +if { (case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + (eval "$ac_try") 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } >/dev/null && { - test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || - test ! -s conftest.err - }; then - : -else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - - # Broken: fails on valid input. -continue -fi - -rm -f conftest.err conftest.$ac_ext - - # OK, works on sane cases. Now check whether nonexistent headers - # can be detected and how. - cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ -#include -_ACEOF -if { (ac_try="$ac_cpp conftest.$ac_ext" -case "(($ac_try" in + (exit $ac_status); } && + test -f conftest2.$ac_objext && { (case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + (eval "$ac_try") 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } >/dev/null && { - test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || - test ! -s conftest.err - }; then - # Broken: success on invalid input. -continue + (exit $ac_status); }; +then + eval ac_cv_prog_cc_${ac_cc}_c_o=yes + if test "x$CC" != xcc; then + # Test first that cc exists at all. + if { ac_try='cc -c conftest.$ac_ext >&5' + { (case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_try") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; }; then + ac_try='cc -c conftest.$ac_ext -o conftest2.$ac_objext >&5' + rm -f conftest2.* + if { (case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_try") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && + test -f conftest2.$ac_objext && { (case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_try") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; + then + # cc works too. + : + else + # cc exists but doesn't like -o. + eval ac_cv_prog_cc_${ac_cc}_c_o=no + fi + fi + fi else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 + eval ac_cv_prog_cc_${ac_cc}_c_o=no +fi +rm -f core conftest* - # Passes both tests. -ac_preproc_ok=: -break fi +if eval test \$ac_cv_prog_cc_${ac_cc}_c_o = yes; then + { echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } -rm -f conftest.err conftest.$ac_ext +cat >>confdefs.h <<\_ACEOF +#define NO_MINUS_C_MINUS_O 1 +_ACEOF -done -# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. -rm -f conftest.err conftest.$ac_ext -if $ac_preproc_ok; then - : +fi + +# FIXME: we rely on the cache variable name because +# there is no other way. +set dummy $CC +ac_cc=`echo $2 | sed 's/[^a-zA-Z0-9_]/_/g;s/^[0-9]/_/'` +if eval "test \"`echo '$ac_cv_prog_cc_'${ac_cc}_c_o`\" != yes"; then + # Losing compiler, so override with the script. + # FIXME: It is wrong to rewrite CC. + # But if we don't then we get into trouble of one sort or another. + # A longer-term fix would be to have automake use am__CC in this case, + # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" + CC="$am_aux_dir/compile $CC" +fi + + +# Check whether --enable-shared was given. +if test "${enable_shared+set}" = set; then + enableval=$enable_shared; p=${PACKAGE-default} + case $enableval in + yes) enable_shared=yes ;; + no) enable_shared=no ;; + *) + enable_shared=no + # Look at the argument we got. We use all the common list separators. 
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for pkg in $enableval; do + IFS="$lt_save_ifs" + if test "X$pkg" = "X$p"; then + enable_shared=yes + fi + done + IFS="$lt_save_ifs" + ;; + esac else - { { echo "$as_me:$LINENO: error: C preprocessor \"$CPP\" fails sanity check -See \`config.log' for more details." >&5 -echo "$as_me: error: C preprocessor \"$CPP\" fails sanity check -See \`config.log' for more details." >&2;} - { (exit 1); exit 1; }; } + enable_shared=yes +fi + + +# Check whether --enable-static was given. +if test "${enable_static+set}" = set; then + enableval=$enable_static; p=${PACKAGE-default} + case $enableval in + yes) enable_static=yes ;; + no) enable_static=no ;; + *) + enable_static=no + # Look at the argument we got. We use all the common list separators. + lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for pkg in $enableval; do + IFS="$lt_save_ifs" + if test "X$pkg" = "X$p"; then + enable_static=yes + fi + done + IFS="$lt_save_ifs" + ;; + esac +else + enable_static=yes fi -ac_ext=c -ac_cpp='$CPP $CPPFLAGS' -ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' -ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' -ac_compiler_gnu=$ac_cv_c_compiler_gnu +# Check whether --enable-fast-install was given. +if test "${enable_fast_install+set}" = set; then + enableval=$enable_fast_install; p=${PACKAGE-default} + case $enableval in + yes) enable_fast_install=yes ;; + no) enable_fast_install=no ;; + *) + enable_fast_install=no + # Look at the argument we got. We use all the common list separators. + lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for pkg in $enableval; do + IFS="$lt_save_ifs" + if test "X$pkg" = "X$p"; then + enable_fast_install=yes + fi + done + IFS="$lt_save_ifs" + ;; + esac +else + enable_fast_install=yes +fi + + +{ echo "$as_me:$LINENO: checking for a sed that does not truncate output" >&5 +echo $ECHO_N "checking for a sed that does not truncate output... $ECHO_C" >&6; } +if test "${lt_cv_path_SED+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + # Loop through the user's path and test for sed and gsed. +# Then use that list of sed's as ones to test for truncation. +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for lt_ac_prog in sed gsed; do + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$lt_ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$lt_ac_prog$ac_exec_ext"; }; then + lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext" + fi + done + done +done +IFS=$as_save_IFS +lt_ac_max=0 +lt_ac_count=0 +# Add /usr/xpg4/bin/sed as it is typically found on Solaris +# along with /bin/sed that truncates output. +for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do + test ! -f $lt_ac_sed && continue + cat /dev/null > conftest.in + lt_ac_count=0 + echo $ECHO_N "0123456789$ECHO_C" >conftest.in + # Check for GNU sed and select it if it is found. 
+ if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then + lt_cv_path_SED=$lt_ac_sed + break + fi + while true; do + cat conftest.in conftest.in >conftest.tmp + mv conftest.tmp conftest.in + cp conftest.in conftest.nl + echo >>conftest.nl + $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break + cmp -s conftest.out conftest.nl || break + # 10000 chars as input seems more than enough + test $lt_ac_count -gt 10 && break + lt_ac_count=`expr $lt_ac_count + 1` + if test $lt_ac_count -gt $lt_ac_max; then + lt_ac_max=$lt_ac_count + lt_cv_path_SED=$lt_ac_sed + fi + done +done + +fi + +SED=$lt_cv_path_SED + +{ echo "$as_me:$LINENO: result: $SED" >&5 +echo "${ECHO_T}$SED" >&6; } { echo "$as_me:$LINENO: checking for grep that handles long lines and -e" >&5 echo $ECHO_N "checking for grep that handles long lines and -e... $ECHO_C" >&6; } @@ -3180,21 +4258,548 @@ EGREP="$ac_cv_path_EGREP" -{ echo "$as_me:$LINENO: checking for ANSI C header files" >&5 -echo $ECHO_N "checking for ANSI C header files... $ECHO_C" >&6; } -if test "${ac_cv_header_stdc+set}" = set; then + +# Check whether --with-gnu-ld was given. +if test "${with_gnu_ld+set}" = set; then + withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes +else + with_gnu_ld=no +fi + +ac_prog=ld +if test "$GCC" = yes; then + # Check if gcc -print-prog-name=ld gives a path. + { echo "$as_me:$LINENO: checking for ld used by $CC" >&5 +echo $ECHO_N "checking for ld used by $CC... $ECHO_C" >&6; } + case $host in + *-*-mingw*) + # gcc leaves a trailing carriage return which upsets mingw + ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; + *) + ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; + esac + case $ac_prog in + # Accept absolute paths. + [\\/]* | ?:[\\/]*) + re_direlt='/[^/][^/]*/\.\./' + # Canonicalize the pathname of ld + ac_prog=`echo $ac_prog| $SED 's%\\\\%/%g'` + while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do + ac_prog=`echo $ac_prog| $SED "s%$re_direlt%/%"` + done + test -z "$LD" && LD="$ac_prog" + ;; + "") + # If it fails, then pretend we aren't using GCC. + ac_prog=ld + ;; + *) + # If it is relative, then search for the first ld in PATH. + with_gnu_ld=unknown + ;; + esac +elif test "$with_gnu_ld" = yes; then + { echo "$as_me:$LINENO: checking for GNU ld" >&5 +echo $ECHO_N "checking for GNU ld... $ECHO_C" >&6; } +else + { echo "$as_me:$LINENO: checking for non-GNU ld" >&5 +echo $ECHO_N "checking for non-GNU ld... $ECHO_C" >&6; } +fi +if test "${lt_cv_path_LD+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - cat >conftest.$ac_ext <<_ACEOF + if test -z "$LD"; then + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then + lt_cv_path_LD="$ac_dir/$ac_prog" + # Check to see if the program is GNU ld. I'd rather use --version, + # but apparently some variants of GNU ld only accept -v. + # Break only if it was the GNU/non-GNU ld that we prefer. + case `"$lt_cv_path_LD" -v 2>&1 &5 +echo "${ECHO_T}$LD" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi +test -z "$LD" && { { echo "$as_me:$LINENO: error: no acceptable ld found in \$PATH" >&5 +echo "$as_me: error: no acceptable ld found in \$PATH" >&2;} + { (exit 1); exit 1; }; } +{ echo "$as_me:$LINENO: checking if the linker ($LD) is GNU ld" >&5 +echo $ECHO_N "checking if the linker ($LD) is GNU ld... 
$ECHO_C" >&6; } +if test "${lt_cv_prog_gnu_ld+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + # I'd rather use --version here, but apparently some GNU lds only accept -v. +case `$LD -v 2>&1 &5 +echo "${ECHO_T}$lt_cv_prog_gnu_ld" >&6; } +with_gnu_ld=$lt_cv_prog_gnu_ld + + +{ echo "$as_me:$LINENO: checking for $LD option to reload object files" >&5 +echo $ECHO_N "checking for $LD option to reload object files... $ECHO_C" >&6; } +if test "${lt_cv_ld_reload_flag+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_ld_reload_flag='-r' +fi +{ echo "$as_me:$LINENO: result: $lt_cv_ld_reload_flag" >&5 +echo "${ECHO_T}$lt_cv_ld_reload_flag" >&6; } +reload_flag=$lt_cv_ld_reload_flag +case $reload_flag in +"" | " "*) ;; +*) reload_flag=" $reload_flag" ;; +esac +reload_cmds='$LD$reload_flag -o $output$reload_objs' +case $host_os in + darwin*) + if test "$GCC" = yes; then + reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs' + else + reload_cmds='$LD$reload_flag -o $output$reload_objs' + fi + ;; +esac + +{ echo "$as_me:$LINENO: checking for BSD-compatible nm" >&5 +echo $ECHO_N "checking for BSD-compatible nm... $ECHO_C" >&6; } +if test "${lt_cv_path_NM+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$NM"; then + # Let the user override the test. + lt_cv_path_NM="$NM" +else + lt_nm_to_check="${ac_tool_prefix}nm" + if test -n "$ac_tool_prefix" && test "$build" = "$host"; then + lt_nm_to_check="$lt_nm_to_check nm" + fi + for lt_tmp_nm in $lt_nm_to_check; do + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + tmp_nm="$ac_dir/$lt_tmp_nm" + if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then + # Check to see if the nm accepts a BSD-compat flag. + # Adding the `sed 1q' prevents false positives on HP-UX, which says: + # nm: unknown option "B" ignored + # Tru64's nm complains that /dev/null is an invalid object file + case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in + */dev/null* | *'Invalid file or object type'*) + lt_cv_path_NM="$tmp_nm -B" + break + ;; + *) + case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in + */dev/null*) + lt_cv_path_NM="$tmp_nm -p" + break + ;; + *) + lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but + continue # so that we can try to find one that supports BSD flags + ;; + esac + ;; + esac + fi + done + IFS="$lt_save_ifs" + done + test -z "$lt_cv_path_NM" && lt_cv_path_NM=nm +fi +fi +{ echo "$as_me:$LINENO: result: $lt_cv_path_NM" >&5 +echo "${ECHO_T}$lt_cv_path_NM" >&6; } +NM="$lt_cv_path_NM" + +{ echo "$as_me:$LINENO: checking whether ln -s works" >&5 +echo $ECHO_N "checking whether ln -s works... $ECHO_C" >&6; } +LN_S=$as_ln_s +if test "$LN_S" = "ln -s"; then + { echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6; } +else + { echo "$as_me:$LINENO: result: no, using $LN_S" >&5 +echo "${ECHO_T}no, using $LN_S" >&6; } +fi + +{ echo "$as_me:$LINENO: checking how to recognize dependent libraries" >&5 +echo $ECHO_N "checking how to recognize dependent libraries... $ECHO_C" >&6; } +if test "${lt_cv_deplibs_check_method+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_file_magic_cmd='$MAGIC_CMD' +lt_cv_file_magic_test_file= +lt_cv_deplibs_check_method='unknown' +# Need to set the preceding variable on all platforms that support +# interlibrary dependencies. +# 'none' -- dependencies not supported. 
+# `unknown' -- same as none, but documents that we really don't know. +# 'pass_all' -- all dependencies passed with no checks. +# 'test_compile' -- check by making test program. +# 'file_magic [[regex]]' -- check by looking for files in library path +# which responds to the $file_magic_cmd with a given extended regex. +# If you have `file' or equivalent on your system and you're not sure +# whether `pass_all' will *always* work, you probably want this one. + +case $host_os in +aix4* | aix5*) + lt_cv_deplibs_check_method=pass_all + ;; + +beos*) + lt_cv_deplibs_check_method=pass_all + ;; + +bsdi[45]*) + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' + lt_cv_file_magic_cmd='/usr/bin/file -L' + lt_cv_file_magic_test_file=/shlib/libc.so + ;; + +cygwin*) + # func_win32_libid is a shell function defined in ltmain.sh + lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' + lt_cv_file_magic_cmd='func_win32_libid' + ;; + +mingw* | pw32*) + # Base MSYS/MinGW do not provide the 'file' command needed by + # func_win32_libid shell function, so use a weaker test based on 'objdump', + # unless we find 'file', for example because we are cross-compiling. + if ( file / ) >/dev/null 2>&1; then + lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' + lt_cv_file_magic_cmd='func_win32_libid' + else + lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?' + lt_cv_file_magic_cmd='$OBJDUMP -f' + fi + ;; + +darwin* | rhapsody*) + lt_cv_deplibs_check_method=pass_all + ;; + +freebsd* | dragonfly*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + case $host_cpu in + i*86 ) + # Not sure whether the presence of OpenBSD here was a mistake. + # Let's accept both of them until this is cleared up. + lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` + ;; + esac + else + lt_cv_deplibs_check_method=pass_all + fi + ;; + +gnu*) + lt_cv_deplibs_check_method=pass_all + ;; + +hpux10.20* | hpux11*) + lt_cv_file_magic_cmd=/usr/bin/file + case $host_cpu in + ia64*) + lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64' + lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so + ;; + hppa*64*) + lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - PA-RISC [0-9].[0-9]' + lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl + ;; + *) + lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9].[0-9]) shared library' + lt_cv_file_magic_test_file=/usr/lib/libc.sl + ;; + esac + ;; + +interix[3-9]*) + # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here + lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$' + ;; + +irix5* | irix6* | nonstopux*) + case $LD in + *-32|*"-32 ") libmagic=32-bit;; + *-n32|*"-n32 ") libmagic=N32;; + *-64|*"-64 ") libmagic=64-bit;; + *) libmagic=never-match;; + esac + lt_cv_deplibs_check_method=pass_all + ;; + +# This must be Linux ELF. 
+linux* | k*bsd*-gnu) + lt_cv_deplibs_check_method=pass_all + ;; + +netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' + else + lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$' + fi + ;; + +newos6*) + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=/usr/lib/libnls.so + ;; + +nto-qnx*) + lt_cv_deplibs_check_method=unknown + ;; + +openbsd*) + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$' + else + lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' + fi + ;; + +osf3* | osf4* | osf5*) + lt_cv_deplibs_check_method=pass_all + ;; + +rdos*) + lt_cv_deplibs_check_method=pass_all + ;; + +solaris*) + lt_cv_deplibs_check_method=pass_all + ;; + +sysv4 | sysv4.3*) + case $host_vendor in + motorola) + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` + ;; + ncr) + lt_cv_deplibs_check_method=pass_all + ;; + sequent) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' + ;; + sni) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" + lt_cv_file_magic_test_file=/lib/libc.so + ;; + siemens) + lt_cv_deplibs_check_method=pass_all + ;; + pc) + lt_cv_deplibs_check_method=pass_all + ;; + esac + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + lt_cv_deplibs_check_method=pass_all + ;; +esac + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_deplibs_check_method" >&5 +echo "${ECHO_T}$lt_cv_deplibs_check_method" >&6; } +file_magic_cmd=$lt_cv_file_magic_cmd +deplibs_check_method=$lt_cv_deplibs_check_method +test -z "$deplibs_check_method" && deplibs_check_method=unknown + + + + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC + + +# Check whether --enable-libtool-lock was given. +if test "${enable_libtool_lock+set}" = set; then + enableval=$enable_libtool_lock; +fi + +test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes + +# Some flags need to be propagated to the compiler or linker for good +# libtool support. +case $host in +ia64-*-hpux*) + # Find out which ABI we are using. + echo 'int i;' > conftest.$ac_ext + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; then + case `/usr/bin/file conftest.$ac_objext` in + *ELF-32*) + HPUX_IA64_MODE="32" + ;; + *ELF-64*) + HPUX_IA64_MODE="64" + ;; + esac + fi + rm -rf conftest* + ;; +*-*-irix6*) + # Find out which ABI we are using. + echo '#line 4693 "configure"' > conftest.$ac_ext + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 + (exit $ac_status); }; then + if test "$lt_cv_prog_gnu_ld" = yes; then + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + LD="${LD-ld} -melf32bsmip" + ;; + *N32*) + LD="${LD-ld} -melf32bmipn32" + ;; + *64-bit*) + LD="${LD-ld} -melf64bmip" + ;; + esac + else + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + LD="${LD-ld} -32" + ;; + *N32*) + LD="${LD-ld} -n32" + ;; + *64-bit*) + LD="${LD-ld} -64" + ;; + esac + fi + fi + rm -rf conftest* + ;; + +x86_64-*kfreebsd*-gnu|x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*| \ +s390*-*linux*|sparc*-*linux*) + # Find out which ABI we are using. + echo 'int i;' > conftest.$ac_ext + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; then + case `/usr/bin/file conftest.o` in + *32-bit*) + case $host in + x86_64-*kfreebsd*-gnu) + LD="${LD-ld} -m elf_i386_fbsd" + ;; + x86_64-*linux*) + LD="${LD-ld} -m elf_i386" + ;; + ppc64-*linux*|powerpc64-*linux*) + LD="${LD-ld} -m elf32ppclinux" + ;; + s390x-*linux*) + LD="${LD-ld} -m elf_s390" + ;; + sparc64-*linux*) + LD="${LD-ld} -m elf32_sparc" + ;; + esac + ;; + *64-bit*) + libsuff=64 + case $host in + x86_64-*kfreebsd*-gnu) + LD="${LD-ld} -m elf_x86_64_fbsd" + ;; + x86_64-*linux*) + LD="${LD-ld} -m elf_x86_64" + ;; + ppc*-*linux*|powerpc*-*linux*) + LD="${LD-ld} -m elf64ppc" + ;; + s390*-*linux*) + LD="${LD-ld} -m elf64_s390" + ;; + sparc*-*linux*) + LD="${LD-ld} -m elf64_sparc" + ;; + esac + ;; + esac + fi + rm -rf conftest* + ;; + +*-*-sco3.2v5*) + # On SCO OpenServer 5, we need -belf to get full-featured binaries. + SAVE_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -belf" + { echo "$as_me:$LINENO: checking whether the C compiler needs -belf" >&5 +echo $ECHO_N "checking whether the C compiler needs -belf... $ECHO_C" >&6; } +if test "${lt_cv_cc_needs_belf+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + + cat >conftest.$ac_ext <<_ACEOF /* confdefs.h. */ _ACEOF cat confdefs.h >>conftest.$ac_ext cat >>conftest.$ac_ext <<_ACEOF /* end confdefs.h. */ -#include -#include -#include -#include int main () @@ -3204,14 +4809,14 @@ return 0; } _ACEOF -rm -f conftest.$ac_objext -if { (ac_try="$ac_compile" +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_compile") 2>conftest.er1 + (eval "$ac_link") 2>conftest.er1 ac_status=$? grep -v '^ *+' conftest.er1 >conftest.err rm -f conftest.er1 @@ -3220,207 +4825,14910 @@ (exit $ac_status); } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err - } && test -s conftest.$ac_objext; then - ac_cv_header_stdc=yes + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + lt_cv_cc_needs_belf=yes else echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 - ac_cv_header_stdc=no + lt_cv_cc_needs_belf=no fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext - -if test $ac_cv_header_stdc = yes; then - # SunOS 4.x string.h does not declare mem*, contrary to ANSI. - cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. 
*/ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#include +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext + ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu -_ACEOF -if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | - $EGREP "memchr" >/dev/null 2>&1; then - : -else - ac_cv_header_stdc=no fi -rm -f conftest* +{ echo "$as_me:$LINENO: result: $lt_cv_cc_needs_belf" >&5 +echo "${ECHO_T}$lt_cv_cc_needs_belf" >&6; } + if test x"$lt_cv_cc_needs_belf" != x"yes"; then + # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf + CFLAGS="$SAVE_CFLAGS" + fi + ;; +sparc*-*solaris*) + # Find out which ABI we are using. + echo 'int i;' > conftest.$ac_ext + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; then + case `/usr/bin/file conftest.o` in + *64-bit*) + case $lt_cv_prog_gnu_ld in + yes*) LD="${LD-ld} -m elf64_sparc" ;; + *) LD="${LD-ld} -64" ;; + esac + ;; + esac + fi + rm -rf conftest* + ;; -fi -if test $ac_cv_header_stdc = yes; then - # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. +esac + +need_locks="$enable_libtool_lock" + + +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu +{ echo "$as_me:$LINENO: checking how to run the C preprocessor" >&5 +echo $ECHO_N "checking how to run the C preprocessor... $ECHO_C" >&6; } +# On Suns, sometimes $CPP names a directory. +if test -n "$CPP" && test -d "$CPP"; then + CPP= +fi +if test -z "$CPP"; then + if test "${ac_cv_prog_CPP+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + # Double quotes because CPP needs to be expanded + for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" + do + ac_preproc_ok=false +for ac_c_preproc_warn_flag in '' yes +do + # Use a header file that comes with gcc, so configuring glibc + # with a fresh cross-compiler works. + # Prefer to if __STDC__ is defined, since + # exists even on freestanding compilers. + # On the NeXT, cc -E runs the code through the compiler's parser, + # not just through cpp. "Syntax error" is here to catch this case. cat >conftest.$ac_ext <<_ACEOF /* confdefs.h. */ _ACEOF cat confdefs.h >>conftest.$ac_ext cat >>conftest.$ac_ext <<_ACEOF /* end confdefs.h. */ -#include - +#ifdef __STDC__ +# include +#else +# include +#endif + Syntax error _ACEOF -if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | - $EGREP "free" >/dev/null 2>&1; then +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || + test ! -s conftest.err + }; then : else - ac_cv_header_stdc=no + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Broken: fails on valid input. 
+continue fi -rm -f conftest* +rm -f conftest.err conftest.$ac_ext + + # OK, works on sane cases. Now check whether nonexistent headers + # can be detected and how. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || + test ! -s conftest.err + }; then + # Broken: success on invalid input. +continue +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Passes both tests. +ac_preproc_ok=: +break fi -if test $ac_cv_header_stdc = yes; then - # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. - if test "$cross_compiling" = yes; then - : +rm -f conftest.err conftest.$ac_ext + +done +# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. +rm -f conftest.err conftest.$ac_ext +if $ac_preproc_ok; then + break +fi + + done + ac_cv_prog_CPP=$CPP + +fi + CPP=$ac_cv_prog_CPP else + ac_cv_prog_CPP=$CPP +fi +{ echo "$as_me:$LINENO: result: $CPP" >&5 +echo "${ECHO_T}$CPP" >&6; } +ac_preproc_ok=false +for ac_c_preproc_warn_flag in '' yes +do + # Use a header file that comes with gcc, so configuring glibc + # with a fresh cross-compiler works. + # Prefer to if __STDC__ is defined, since + # exists even on freestanding compilers. + # On the NeXT, cc -E runs the code through the compiler's parser, + # not just through cpp. "Syntax error" is here to catch this case. cat >conftest.$ac_ext <<_ACEOF /* confdefs.h. */ _ACEOF cat confdefs.h >>conftest.$ac_ext cat >>conftest.$ac_ext <<_ACEOF /* end confdefs.h. */ -#include -#include -#if ((' ' & 0x0FF) == 0x020) -# define ISLOWER(c) ('a' <= (c) && (c) <= 'z') -# define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) +#ifdef __STDC__ +# include #else -# define ISLOWER(c) \ - (('a' <= (c) && (c) <= 'i') \ - || ('j' <= (c) && (c) <= 'r') \ - || ('s' <= (c) && (c) <= 'z')) -# define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) +# include #endif - -#define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) -int -main () -{ - int i; - for (i = 0; i < 256; i++) - if (XOR (islower (i), ISLOWER (i)) - || toupper (i) != TOUPPER (i)) - return 2; - return 0; -} + Syntax error _ACEOF -rm -f conftest$ac_exeext -if { (ac_try="$ac_link" +if { (ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_link") 2>&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } && { ac_try='./conftest$ac_exeext' - { (case "(($ac_try" in + (exit $ac_status); } >/dev/null && { + test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || + test ! -s conftest.err + }; then + : +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Broken: fails on valid input. 
+continue +fi + +rm -f conftest.err conftest.$ac_ext + + # OK, works on sane cases. Now check whether nonexistent headers + # can be detected and how. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_try") 2>&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; }; then - : + (exit $ac_status); } >/dev/null && { + test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || + test ! -s conftest.err + }; then + # Broken: success on invalid input. +continue else - echo "$as_me: program exited with status $ac_status" >&5 -echo "$as_me: failed program was:" >&5 + echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 -( exit $ac_status ) -ac_cv_header_stdc=no + # Passes both tests. +ac_preproc_ok=: +break fi -rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext + +rm -f conftest.err conftest.$ac_ext + +done +# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. +rm -f conftest.err conftest.$ac_ext +if $ac_preproc_ok; then + : +else + { { echo "$as_me:$LINENO: error: C preprocessor \"$CPP\" fails sanity check +See \`config.log' for more details." >&5 +echo "$as_me: error: C preprocessor \"$CPP\" fails sanity check +See \`config.log' for more details." >&2;} + { (exit 1); exit 1; }; } fi +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + + +{ echo "$as_me:$LINENO: checking for ANSI C header files" >&5 +echo $ECHO_N "checking for ANSI C header files... $ECHO_C" >&6; } +if test "${ac_cv_header_stdc+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include +#include +#include +#include + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_cv_header_stdc=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_header_stdc=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + +if test $ac_cv_header_stdc = yes; then + # SunOS 4.x string.h does not declare mem*, contrary to ANSI. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. 
*/ +#include + +_ACEOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + $EGREP "memchr" >/dev/null 2>&1; then + : +else + ac_cv_header_stdc=no +fi +rm -f conftest* + +fi + +if test $ac_cv_header_stdc = yes; then + # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include + +_ACEOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + $EGREP "free" >/dev/null 2>&1; then + : +else + ac_cv_header_stdc=no +fi +rm -f conftest* + +fi + +if test $ac_cv_header_stdc = yes; then + # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. + if test "$cross_compiling" = yes; then + : +else + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include +#include +#if ((' ' & 0x0FF) == 0x020) +# define ISLOWER(c) ('a' <= (c) && (c) <= 'z') +# define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) +#else +# define ISLOWER(c) \ + (('a' <= (c) && (c) <= 'i') \ + || ('j' <= (c) && (c) <= 'r') \ + || ('s' <= (c) && (c) <= 'z')) +# define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) +#endif + +#define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) +int +main () +{ + int i; + for (i = 0; i < 256; i++) + if (XOR (islower (i), ISLOWER (i)) + || toupper (i) != TOUPPER (i)) + return 2; + return 0; +} +_ACEOF +rm -f conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { ac_try='./conftest$ac_exeext' + { (case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_try") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; }; then + : +else + echo "$as_me: program exited with status $ac_status" >&5 +echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + +( exit $ac_status ) +ac_cv_header_stdc=no +fi +rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext +fi + + +fi +fi +{ echo "$as_me:$LINENO: result: $ac_cv_header_stdc" >&5 +echo "${ECHO_T}$ac_cv_header_stdc" >&6; } +if test $ac_cv_header_stdc = yes; then + +cat >>confdefs.h <<\_ACEOF +#define STDC_HEADERS 1 +_ACEOF + +fi + +# On IRIX 5.3, sys/types and inttypes.h are conflicting. + + + + + + + + + +for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ + inttypes.h stdint.h unistd.h +do +as_ac_Header=`echo "ac_cv_header_$ac_header" | $as_tr_sh` +{ echo "$as_me:$LINENO: checking for $ac_header" >&5 +echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6; } +if { as_var=$as_ac_Header; eval "test \"\${$as_var+set}\" = set"; }; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. 
*/ +$ac_includes_default + +#include <$ac_header> +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + eval "$as_ac_Header=yes" +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + eval "$as_ac_Header=no" +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +fi +ac_res=`eval echo '${'$as_ac_Header'}'` + { echo "$as_me:$LINENO: result: $ac_res" >&5 +echo "${ECHO_T}$ac_res" >&6; } +if test `eval echo '${'$as_ac_Header'}'` = yes; then + cat >>confdefs.h <<_ACEOF +#define `echo "HAVE_$ac_header" | $as_tr_cpp` 1 +_ACEOF + +fi + +done + + + +for ac_header in dlfcn.h +do +as_ac_Header=`echo "ac_cv_header_$ac_header" | $as_tr_sh` +if { as_var=$as_ac_Header; eval "test \"\${$as_var+set}\" = set"; }; then + { echo "$as_me:$LINENO: checking for $ac_header" >&5 +echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6; } +if { as_var=$as_ac_Header; eval "test \"\${$as_var+set}\" = set"; }; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +fi +ac_res=`eval echo '${'$as_ac_Header'}'` + { echo "$as_me:$LINENO: result: $ac_res" >&5 +echo "${ECHO_T}$ac_res" >&6; } +else + # Is the header compilable? +{ echo "$as_me:$LINENO: checking $ac_header usability" >&5 +echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6; } +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +$ac_includes_default +#include <$ac_header> +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_header_compiler=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_header_compiler=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +{ echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 +echo "${ECHO_T}$ac_header_compiler" >&6; } + +# Is the header present? +{ echo "$as_me:$LINENO: checking $ac_header presence" >&5 +echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6; } +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include <$ac_header> +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || + test ! -s conftest.err + }; then + ac_header_preproc=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_header_preproc=no +fi + +rm -f conftest.err conftest.$ac_ext +{ echo "$as_me:$LINENO: result: $ac_header_preproc" >&5 +echo "${ECHO_T}$ac_header_preproc" >&6; } + +# So? What about this header? +case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in + yes:no: ) + { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 +echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} + ac_header_preproc=yes + ;; + no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 +echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} + { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 +echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;} + { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 +echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} + { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 +echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} + { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 +echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} + ( cat <<\_ASBOX +## ------------------------------------------- ## +## Report this to http://gcc.gnu.org/bugs.html ## +## ------------------------------------------- ## +_ASBOX + ) | sed "s/^/$as_me: WARNING: /" >&2 + ;; +esac +{ echo "$as_me:$LINENO: checking for $ac_header" >&5 +echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6; } +if { as_var=$as_ac_Header; eval "test \"\${$as_var+set}\" = set"; }; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + eval "$as_ac_Header=\$ac_header_preproc" +fi +ac_res=`eval echo '${'$as_ac_Header'}'` + { echo "$as_me:$LINENO: result: $ac_res" >&5 +echo "${ECHO_T}$ac_res" >&6; } + +fi +if test `eval echo '${'$as_ac_Header'}'` = yes; then + cat >>confdefs.h <<_ACEOF +#define `echo "HAVE_$ac_header" | $as_tr_cpp` 1 +_ACEOF + +fi + +done + +ac_ext=cpp +ac_cpp='$CXXCPP $CPPFLAGS' +ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_cxx_compiler_gnu +if test -z "$CXX"; then + if test -n "$CCC"; then + CXX=$CCC + else + if test -n "$ac_tool_prefix"; then + for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC + do + # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. +set dummy $ac_tool_prefix$ac_prog; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... 
$ECHO_C" >&6; } +if test "${ac_cv_prog_CXX+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$CXX"; then + ac_cv_prog_CXX="$CXX" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +CXX=$ac_cv_prog_CXX +if test -n "$CXX"; then + { echo "$as_me:$LINENO: result: $CXX" >&5 +echo "${ECHO_T}$CXX" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + + test -n "$CXX" && break + done +fi +if test -z "$CXX"; then + ac_ct_CXX=$CXX + for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC +do + # Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_ac_ct_CXX+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$ac_ct_CXX"; then + ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_ac_ct_CXX="$ac_prog" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +ac_ct_CXX=$ac_cv_prog_ac_ct_CXX +if test -n "$ac_ct_CXX"; then + { echo "$as_me:$LINENO: result: $ac_ct_CXX" >&5 +echo "${ECHO_T}$ac_ct_CXX" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + + test -n "$ac_ct_CXX" && break +done + + if test "x$ac_ct_CXX" = x; then + CXX="g++" + else + case $cross_compiling:$ac_tool_warned in +yes:) +{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&5 +echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&2;} +ac_tool_warned=yes ;; +esac + CXX=$ac_ct_CXX + fi +fi + + fi +fi +# Provide some information about the compiler. +echo "$as_me:$LINENO: checking for C++ compiler version" >&5 +ac_compiler=`set X $ac_compile; echo $2` +{ (ac_try="$ac_compiler --version >&5" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compiler --version >&5") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } +{ (ac_try="$ac_compiler -v >&5" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compiler -v >&5") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 + (exit $ac_status); } +{ (ac_try="$ac_compiler -V >&5" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compiler -V >&5") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } + +{ echo "$as_me:$LINENO: checking whether we are using the GNU C++ compiler" >&5 +echo $ECHO_N "checking whether we are using the GNU C++ compiler... $ECHO_C" >&6; } +if test "${ac_cv_cxx_compiler_gnu+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ +#ifndef __GNUC__ + choke me +#endif + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_cxx_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_compiler_gnu=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_compiler_gnu=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +ac_cv_cxx_compiler_gnu=$ac_compiler_gnu + +fi +{ echo "$as_me:$LINENO: result: $ac_cv_cxx_compiler_gnu" >&5 +echo "${ECHO_T}$ac_cv_cxx_compiler_gnu" >&6; } +GXX=`test $ac_compiler_gnu = yes && echo yes` +ac_test_CXXFLAGS=${CXXFLAGS+set} +ac_save_CXXFLAGS=$CXXFLAGS +{ echo "$as_me:$LINENO: checking whether $CXX accepts -g" >&5 +echo $ECHO_N "checking whether $CXX accepts -g... $ECHO_C" >&6; } +if test "${ac_cv_prog_cxx_g+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_save_cxx_werror_flag=$ac_cxx_werror_flag + ac_cxx_werror_flag=yes + ac_cv_prog_cxx_g=no + CXXFLAGS="-g" + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_cxx_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_cv_prog_cxx_g=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + CXXFLAGS="" + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? 
+ grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_cxx_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + : +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cxx_werror_flag=$ac_save_cxx_werror_flag + CXXFLAGS="-g" + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_cxx_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_cv_prog_cxx_g=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + ac_cxx_werror_flag=$ac_save_cxx_werror_flag +fi +{ echo "$as_me:$LINENO: result: $ac_cv_prog_cxx_g" >&5 +echo "${ECHO_T}$ac_cv_prog_cxx_g" >&6; } +if test "$ac_test_CXXFLAGS" = set; then + CXXFLAGS=$ac_save_CXXFLAGS +elif test $ac_cv_prog_cxx_g = yes; then + if test "$GXX" = yes; then + CXXFLAGS="-g -O2" + else + CXXFLAGS="-g" + fi +else + if test "$GXX" = yes; then + CXXFLAGS="-O2" + else + CXXFLAGS= + fi +fi +ac_ext=cpp +ac_cpp='$CXXCPP $CPPFLAGS' +ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_cxx_compiler_gnu + +depcc="$CXX" am_compiler_list= + +{ echo "$as_me:$LINENO: checking dependency style of $depcc" >&5 +echo $ECHO_N "checking dependency style of $depcc... $ECHO_C" >&6; } +if test "${am_cv_CXX_dependencies_compiler_type+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then + # We make a subdir and do the tests there. Otherwise we can end up + # making bogus files that we don't know about and never remove. For + # instance it was reported that on HP-UX the gcc test will end up + # making a dummy file named `D' -- because `-MD' means `put the output + # in D'. + mkdir conftest.dir + # Copy depcomp to subdir because otherwise we won't find it if we're + # using a relative directory. + cp "$am_depcomp" conftest.dir + cd conftest.dir + # We will build objects and dependencies in a subdirectory because + # it helps to detect inapplicable dependency modes. For instance + # both Tru64's cc and ICC support -MD to output dependencies as a + # side effect of compilation, but ICC will put the dependencies in + # the current directory while Tru64 will put them in the object + # directory. 
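
In short, the cached check running here compiles a generated source file that includes several generated headers once per candidate dependency mode, and keeps the first mode whose .Po output really names those headers. A stripped-down sketch of the same side-effect idea, assuming plain gcc and its -MD/-MF flags; the real probe drives the ./depcomp wrapper with $depcc rather than calling the compiler directly:

  mkdir conftest.dir && cd conftest.dir
  for i in 1 2 3; do
    echo "#include \"conftst$i.h\"" >> conftest.c
    touch conftst$i.h
  done
  echo 'int conftest_dummy;' >> conftest.c
  # -MD/-MF ask gcc to write conftest.d as a side effect of the normal compile
  if gcc -MD -MF conftest.d -c -o conftest.o conftest.c 2>/dev/null &&
     grep conftst1.h conftest.d >/dev/null; then
    echo "dependencies were emitted as a side effect of compiling"
  fi
  cd .. && rm -rf conftest.dir
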
+ mkdir sub + + am_cv_CXX_dependencies_compiler_type=none + if test "$am_compiler_list" = ""; then + am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` + fi + for depmode in $am_compiler_list; do + # Setup a source with many dependencies, because some compilers + # like to wrap large dependency lists on column 80 (with \), and + # we should not choose a depcomp mode which is confused by this. + # + # We need to recreate these files for each test, as the compiler may + # overwrite some of them when testing with obscure command lines. + # This happens at least with the AIX C compiler. + : > sub/conftest.c + for i in 1 2 3 4 5 6; do + echo '#include "conftst'$i'.h"' >> sub/conftest.c + # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with + # Solaris 8's {/usr,}/bin/sh. + touch sub/conftst$i.h + done + echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf + + case $depmode in + nosideeffect) + # after this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested + if test "x$enable_dependency_tracking" = xyes; then + continue + else + break + fi + ;; + none) break ;; + esac + # We check with `-c' and `-o' for the sake of the "dashmstdout" + # mode. It turns out that the SunPro C++ compiler does not properly + # handle `-M -o', and we need to detect this. + if depmode=$depmode \ + source=sub/conftest.c object=sub/conftest.${OBJEXT-o} \ + depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ + $SHELL ./depcomp $depcc -c -o sub/conftest.${OBJEXT-o} sub/conftest.c \ + >/dev/null 2>conftest.err && + grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && + grep sub/conftest.${OBJEXT-o} sub/conftest.Po > /dev/null 2>&1 && + ${MAKE-make} -s -f confmf > /dev/null 2>&1; then + # icc doesn't choke on unknown options, it will just issue warnings + # or remarks (even with -Werror). So we grep stderr for any message + # that says an option was ignored or not supported. + # When given -MP, icc 7.0 and 7.1 complain thusly: + # icc: Command line warning: ignoring option '-M'; no argument required + # The diagnosis changed in icc 8.0: + # icc: Command line remark: option '-MP' not supported + if (grep 'ignoring option' conftest.err || + grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else + am_cv_CXX_dependencies_compiler_type=$depmode + break + fi + fi + done + + cd .. + rm -rf conftest.dir +else + am_cv_CXX_dependencies_compiler_type=none +fi + +fi +{ echo "$as_me:$LINENO: result: $am_cv_CXX_dependencies_compiler_type" >&5 +echo "${ECHO_T}$am_cv_CXX_dependencies_compiler_type" >&6; } +CXXDEPMODE=depmode=$am_cv_CXX_dependencies_compiler_type + + if + test "x$enable_dependency_tracking" != xno \ + && test "$am_cv_CXX_dependencies_compiler_type" = gcc3; then + am__fastdepCXX_TRUE= + am__fastdepCXX_FALSE='#' +else + am__fastdepCXX_TRUE='#' + am__fastdepCXX_FALSE= +fi + + + + +if test -n "$CXX" && ( test "X$CXX" != "Xno" && + ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || + (test "X$CXX" != "Xg++"))) ; then + ac_ext=cpp +ac_cpp='$CXXCPP $CPPFLAGS' +ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_cxx_compiler_gnu +{ echo "$as_me:$LINENO: checking how to run the C++ preprocessor" >&5 +echo $ECHO_N "checking how to run the C++ preprocessor... 
$ECHO_C" >&6; } +if test -z "$CXXCPP"; then + if test "${ac_cv_prog_CXXCPP+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + # Double quotes because CXXCPP needs to be expanded + for CXXCPP in "$CXX -E" "/lib/cpp" + do + ac_preproc_ok=false +for ac_cxx_preproc_warn_flag in '' yes +do + # Use a header file that comes with gcc, so configuring glibc + # with a fresh cross-compiler works. + # Prefer to if __STDC__ is defined, since + # exists even on freestanding compilers. + # On the NeXT, cc -E runs the code through the compiler's parser, + # not just through cpp. "Syntax error" is here to catch this case. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#ifdef __STDC__ +# include +#else +# include +#endif + Syntax error +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || + test ! -s conftest.err + }; then + : +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Broken: fails on valid input. +continue +fi + +rm -f conftest.err conftest.$ac_ext + + # OK, works on sane cases. Now check whether nonexistent headers + # can be detected and how. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || + test ! -s conftest.err + }; then + # Broken: success on invalid input. +continue +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Passes both tests. +ac_preproc_ok=: +break +fi + +rm -f conftest.err conftest.$ac_ext + +done +# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. +rm -f conftest.err conftest.$ac_ext +if $ac_preproc_ok; then + break +fi + + done + ac_cv_prog_CXXCPP=$CXXCPP + +fi + CXXCPP=$ac_cv_prog_CXXCPP +else + ac_cv_prog_CXXCPP=$CXXCPP +fi +{ echo "$as_me:$LINENO: result: $CXXCPP" >&5 +echo "${ECHO_T}$CXXCPP" >&6; } +ac_preproc_ok=false +for ac_cxx_preproc_warn_flag in '' yes +do + # Use a header file that comes with gcc, so configuring glibc + # with a fresh cross-compiler works. + # Prefer to if __STDC__ is defined, since + # exists even on freestanding compilers. + # On the NeXT, cc -E runs the code through the compiler's parser, + # not just through cpp. "Syntax error" is here to catch this case. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. 
*/ +#ifdef __STDC__ +# include +#else +# include +#endif + Syntax error +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || + test ! -s conftest.err + }; then + : +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Broken: fails on valid input. +continue +fi + +rm -f conftest.err conftest.$ac_ext + + # OK, works on sane cases. Now check whether nonexistent headers + # can be detected and how. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +#include +_ACEOF +if { (ac_try="$ac_cpp conftest.$ac_ext" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } >/dev/null && { + test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || + test ! -s conftest.err + }; then + # Broken: success on invalid input. +continue +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + # Passes both tests. +ac_preproc_ok=: +break +fi + +rm -f conftest.err conftest.$ac_ext + +done +# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. +rm -f conftest.err conftest.$ac_ext +if $ac_preproc_ok; then + : +else + { { echo "$as_me:$LINENO: error: C++ preprocessor \"$CXXCPP\" fails sanity check +See \`config.log' for more details." >&5 +echo "$as_me: error: C++ preprocessor \"$CXXCPP\" fails sanity check +See \`config.log' for more details." >&2;} + { (exit 1); exit 1; }; } +fi + +ac_ext=cpp +ac_cpp='$CXXCPP $CPPFLAGS' +ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_cxx_compiler_gnu + +fi + + +ac_ext=f +ac_compile='$F77 -c $FFLAGS conftest.$ac_ext >&5' +ac_link='$F77 -o conftest$ac_exeext $FFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_f77_compiler_gnu +if test -n "$ac_tool_prefix"; then + for ac_prog in g77 xlf f77 frt pgf77 cf77 fort77 fl32 af77 xlf90 f90 pgf90 pghpf epcf90 gfortran g95 xlf95 f95 fort ifort ifc efc pgf95 lf95 ftn + do + # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. +set dummy $ac_tool_prefix$ac_prog; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_F77+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$F77"; then + ac_cv_prog_F77="$F77" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. 
+ for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_F77="$ac_tool_prefix$ac_prog" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +F77=$ac_cv_prog_F77 +if test -n "$F77"; then + { echo "$as_me:$LINENO: result: $F77" >&5 +echo "${ECHO_T}$F77" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + + test -n "$F77" && break + done +fi +if test -z "$F77"; then + ac_ct_F77=$F77 + for ac_prog in g77 xlf f77 frt pgf77 cf77 fort77 fl32 af77 xlf90 f90 pgf90 pghpf epcf90 gfortran g95 xlf95 f95 fort ifort ifc efc pgf95 lf95 ftn +do + # Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_ac_ct_F77+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$ac_ct_F77"; then + ac_cv_prog_ac_ct_F77="$ac_ct_F77" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_ac_ct_F77="$ac_prog" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +ac_ct_F77=$ac_cv_prog_ac_ct_F77 +if test -n "$ac_ct_F77"; then + { echo "$as_me:$LINENO: result: $ac_ct_F77" >&5 +echo "${ECHO_T}$ac_ct_F77" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + + test -n "$ac_ct_F77" && break +done + + if test "x$ac_ct_F77" = x; then + F77="" + else + case $cross_compiling:$ac_tool_warned in +yes:) +{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&5 +echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&2;} +ac_tool_warned=yes ;; +esac + F77=$ac_ct_F77 + fi +fi + + +# Provide some information about the compiler. +echo "$as_me:$LINENO: checking for Fortran 77 compiler version" >&5 +ac_compiler=`set X $ac_compile; echo $2` +{ (ac_try="$ac_compiler --version >&5" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compiler --version >&5") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } +{ (ac_try="$ac_compiler -v >&5" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compiler -v >&5") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } +{ (ac_try="$ac_compiler -V >&5" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compiler -V >&5") 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 + (exit $ac_status); } +rm -f a.out + +# If we don't use `.F' as extension, the preprocessor is not run on the +# input file. (Note that this only needs to work for GNU compilers.) +ac_save_ext=$ac_ext +ac_ext=F +{ echo "$as_me:$LINENO: checking whether we are using the GNU Fortran 77 compiler" >&5 +echo $ECHO_N "checking whether we are using the GNU Fortran 77 compiler... $ECHO_C" >&6; } +if test "${ac_cv_f77_compiler_gnu+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.$ac_ext <<_ACEOF + program main +#ifndef __GNUC__ + choke me +#endif + + end +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_f77_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_compiler_gnu=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_compiler_gnu=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +ac_cv_f77_compiler_gnu=$ac_compiler_gnu + +fi +{ echo "$as_me:$LINENO: result: $ac_cv_f77_compiler_gnu" >&5 +echo "${ECHO_T}$ac_cv_f77_compiler_gnu" >&6; } +ac_ext=$ac_save_ext +ac_test_FFLAGS=${FFLAGS+set} +ac_save_FFLAGS=$FFLAGS +FFLAGS= +{ echo "$as_me:$LINENO: checking whether $F77 accepts -g" >&5 +echo $ECHO_N "checking whether $F77 accepts -g... $ECHO_C" >&6; } +if test "${ac_cv_prog_f77_g+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + FFLAGS=-g +cat >conftest.$ac_ext <<_ACEOF + program main + + end +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_f77_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + ac_cv_prog_f77_g=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_prog_f77_g=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + +fi +{ echo "$as_me:$LINENO: result: $ac_cv_prog_f77_g" >&5 +echo "${ECHO_T}$ac_cv_prog_f77_g" >&6; } +if test "$ac_test_FFLAGS" = set; then + FFLAGS=$ac_save_FFLAGS +elif test $ac_cv_prog_f77_g = yes; then + if test "x$ac_cv_f77_compiler_gnu" = xyes; then + FFLAGS="-g -O2" + else + FFLAGS="-g" + fi +else + if test "x$ac_cv_f77_compiler_gnu" = xyes; then + FFLAGS="-O2" + else + FFLAGS= + fi +fi + +G77=`test $ac_compiler_gnu = yes && echo yes` +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + + + +# Autoconf 2.13's AC_OBJEXT and AC_EXEEXT macros only works for C compilers! + +# find the maximum length of command line arguments +{ echo "$as_me:$LINENO: checking the maximum length of command line arguments" >&5 +echo $ECHO_N "checking the maximum length of command line arguments... 
$ECHO_C" >&6; } +if test "${lt_cv_sys_max_cmd_len+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + i=0 + teststring="ABCD" + + case $build_os in + msdosdjgpp*) + # On DJGPP, this test can blow up pretty badly due to problems in libc + # (any single argument exceeding 2000 bytes causes a buffer overrun + # during glob expansion). Even if it were fixed, the result of this + # check would be larger than it should be. + lt_cv_sys_max_cmd_len=12288; # 12K is about right + ;; + + gnu*) + # Under GNU Hurd, this test is not required because there is + # no limit to the length of command line arguments. + # Libtool will interpret -1 as no limit whatsoever + lt_cv_sys_max_cmd_len=-1; + ;; + + cygwin* | mingw*) + # On Win9x/ME, this test blows up -- it succeeds, but takes + # about 5 minutes as the teststring grows exponentially. + # Worse, since 9x/ME are not pre-emptively multitasking, + # you end up with a "frozen" computer, even though with patience + # the test eventually succeeds (with a max line length of 256k). + # Instead, let's just punt: use the minimum linelength reported by + # all of the supported platforms: 8192 (on NT/2K/XP). + lt_cv_sys_max_cmd_len=8192; + ;; + + amigaos*) + # On AmigaOS with pdksh, this test takes hours, literally. + # So we just punt and use a minimum line length of 8192. + lt_cv_sys_max_cmd_len=8192; + ;; + + netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) + # This has been around since 386BSD, at least. Likely further. + if test -x /sbin/sysctl; then + lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` + elif test -x /usr/sbin/sysctl; then + lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` + else + lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs + fi + # And add a safety zone + lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` + lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` + ;; + + interix*) + # We know the value 262144 and hardcode it with a safety zone (like BSD) + lt_cv_sys_max_cmd_len=196608 + ;; + + osf*) + # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure + # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not + # nice to cause kernel panics so lets avoid the loop below. + # First set a reasonable default. + lt_cv_sys_max_cmd_len=16384 + # + if test -x /sbin/sysconfig; then + case `/sbin/sysconfig -q proc exec_disable_arg_limit` in + *1*) lt_cv_sys_max_cmd_len=-1 ;; + esac + fi + ;; + sco3.2v5*) + lt_cv_sys_max_cmd_len=102400 + ;; + sysv5* | sco5v6* | sysv4.2uw2*) + kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` + if test -n "$kargmax"; then + lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[ ]//'` + else + lt_cv_sys_max_cmd_len=32768 + fi + ;; + *) + lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` + if test -n "$lt_cv_sys_max_cmd_len"; then + lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` + lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` + else + SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} + while (test "X"`$SHELL $0 --fallback-echo "X$teststring" 2>/dev/null` \ + = "XX$teststring") >/dev/null 2>&1 && + new_result=`expr "X$teststring" : ".*" 2>&1` && + lt_cv_sys_max_cmd_len=$new_result && + test $i != 17 # 1/2 MB should be enough + do + i=`expr $i + 1` + teststring=$teststring$teststring + done + teststring= + # Add a significant safety factor because C++ compilers can tack on massive + # amounts of additional arguments before passing them to the linker. + # It appears as though 1/2 is a usable value. 
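
Summarising the branch above: either trust getconf ARG_MAX (keeping three quarters of it), or keep doubling a test string until a freshly exec'd command can no longer echo it back, then halve the last working length. A condensed stand-alone version of that probe; /bin/echo and the 8192 fallback are substitutions of mine for libtool's `$SHELL $0 --fallback-echo' helper and are not taken from the patch:

  max=`(getconf ARG_MAX) 2>/dev/null`
  if test -z "$max"; then
    i=0
    teststring=ABCD
    while (test "X"`/bin/echo "X$teststring" 2>/dev/null` = "XX$teststring") \
            >/dev/null 2>&1 && test $i -lt 17
    do
      max=`expr "X$teststring" : ".*"`    # length that still round-trips
      teststring=$teststring$teststring   # double it and try again
      i=`expr $i + 1`
    done
    test -n "$max" && max=`expr $max / 2` # generous safety margin, as above
    test -n "$max" || max=8192            # conservative fallback of mine
  else
    max=`expr $max / 4 \* 3`              # keep 3/4 of ARG_MAX, as above
  fi
  echo "usable command line length: $max"
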
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` + fi + ;; + esac + +fi + +if test -n $lt_cv_sys_max_cmd_len ; then + { echo "$as_me:$LINENO: result: $lt_cv_sys_max_cmd_len" >&5 +echo "${ECHO_T}$lt_cv_sys_max_cmd_len" >&6; } +else + { echo "$as_me:$LINENO: result: none" >&5 +echo "${ECHO_T}none" >&6; } +fi + + + + + +# Check for command to grab the raw symbol name followed by C symbol from nm. +{ echo "$as_me:$LINENO: checking command to parse $NM output from $compiler object" >&5 +echo $ECHO_N "checking command to parse $NM output from $compiler object... $ECHO_C" >&6; } +if test "${lt_cv_sys_global_symbol_pipe+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + +# These are sane defaults that work on at least a few old systems. +# [They come from Ultrix. What could be older than Ultrix?!! ;)] + +# Character class describing NM global symbol codes. +symcode='[BCDEGRST]' + +# Regexp to match symbols that can be accessed directly from C. +sympat='\([_A-Za-z][_A-Za-z0-9]*\)' + +# Transform an extracted symbol line into a proper C declaration +lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^. .* \(.*\)$/extern int \1;/p'" + +# Transform an extracted symbol line into symbol name and symbol address +lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + +# Define system-specific variables. +case $host_os in +aix*) + symcode='[BCDT]' + ;; +cygwin* | mingw* | pw32*) + symcode='[ABCDGISTW]' + ;; +hpux*) # Its linker distinguishes data from code symbols + if test "$host_cpu" = ia64; then + symcode='[ABCDEGRST]' + fi + lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" + lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + ;; +linux* | k*bsd*-gnu) + if test "$host_cpu" = ia64; then + symcode='[ABCDGIRSTW]' + lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" + lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + fi + ;; +irix* | nonstopux*) + symcode='[BCDEGRST]' + ;; +osf*) + symcode='[BCDEGQRST]' + ;; +solaris*) + symcode='[BDRT]' + ;; +sco3.2v5*) + symcode='[DT]' + ;; +sysv4.2uw2*) + symcode='[DT]' + ;; +sysv5* | sco5v6* | unixware* | OpenUNIX*) + symcode='[ABDT]' + ;; +sysv4) + symcode='[DFNSTU]' + ;; +esac + +# Handle CRLF in mingw tool chain +opt_cr= +case $build_os in +mingw*) + opt_cr=`echo 'x\{0,1\}' | tr x '\015'` # option cr in regexp + ;; +esac + +# If we're using GNU nm, then use its standard symbol codes. +case `$NM -V 2>&1` in +*GNU* | *'with BFD'*) + symcode='[ABCDGIRSTW]' ;; +esac + +# Try without a prefix undercore, then with it. +for ac_symprfx in "" "_"; do + + # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. + symxfrm="\\1 $ac_symprfx\\2 \\2" + + # Write the raw and C identifiers. + lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" + + # Check to see that the pipe works correctly. + pipe_works=no + + rm -f conftest* + cat > conftest.$ac_ext <&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 + (exit $ac_status); }; then + # Now try to grab the symbols. + nlist=conftest.nm + if { (eval echo "$as_me:$LINENO: \"$NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist\"") >&5 + (eval $NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && test -s "$nlist"; then + # Try sorting and uniquifying the output. + if sort "$nlist" | uniq > "$nlist"T; then + mv -f "$nlist"T "$nlist" + else + rm -f "$nlist"T + fi + + # Make sure that we snagged all the symbols we need. + if grep ' nm_test_var$' "$nlist" >/dev/null; then + if grep ' nm_test_func$' "$nlist" >/dev/null; then + cat < conftest.$ac_ext +#ifdef __cplusplus +extern "C" { +#endif + +EOF + # Now generate the symbol file. + eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | grep -v main >> conftest.$ac_ext' + + cat <> conftest.$ac_ext +#if defined (__STDC__) && __STDC__ +# define lt_ptr_t void * +#else +# define lt_ptr_t char * +# define const +#endif + +/* The mapping between symbol names and symbols. */ +const struct { + const char *name; + lt_ptr_t address; +} +lt_preloaded_symbols[] = +{ +EOF + $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (lt_ptr_t) \&\2},/" < "$nlist" | grep -v main >> conftest.$ac_ext + cat <<\EOF >> conftest.$ac_ext + {0, (lt_ptr_t) 0} +}; + +#ifdef __cplusplus +} +#endif +EOF + # Now try linking the two files. + mv conftest.$ac_objext conftstm.$ac_objext + lt_save_LIBS="$LIBS" + lt_save_CFLAGS="$CFLAGS" + LIBS="conftstm.$ac_objext" + CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag" + if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 + (eval $ac_link) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && test -s conftest${ac_exeext}; then + pipe_works=yes + fi + LIBS="$lt_save_LIBS" + CFLAGS="$lt_save_CFLAGS" + else + echo "cannot find nm_test_func in $nlist" >&5 + fi + else + echo "cannot find nm_test_var in $nlist" >&5 + fi + else + echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 + fi + else + echo "$progname: failed program was:" >&5 + cat conftest.$ac_ext >&5 + fi + rm -f conftest* conftst* + + # Do not use the global_symbol_pipe unless it works. + if test "$pipe_works" = yes; then + break + else + lt_cv_sys_global_symbol_pipe= + fi +done + +fi + +if test -z "$lt_cv_sys_global_symbol_pipe"; then + lt_cv_sys_global_symbol_to_cdecl= +fi +if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then + { echo "$as_me:$LINENO: result: failed" >&5 +echo "${ECHO_T}failed" >&6; } +else + { echo "$as_me:$LINENO: result: ok" >&5 +echo "${ECHO_T}ok" >&6; } +fi + +{ echo "$as_me:$LINENO: checking for objdir" >&5 +echo $ECHO_N "checking for objdir... $ECHO_C" >&6; } +if test "${lt_cv_objdir+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + rm -f .libs 2>/dev/null +mkdir .libs 2>/dev/null +if test -d .libs; then + lt_cv_objdir=.libs +else + # MS-DOS does not allow filenames that begin with a dot. + lt_cv_objdir=_libs +fi +rmdir .libs 2>/dev/null +fi +{ echo "$as_me:$LINENO: result: $lt_cv_objdir" >&5 +echo "${ECHO_T}$lt_cv_objdir" >&6; } +objdir=$lt_cv_objdir + + + + + +case $host_os in +aix3*) + # AIX sometimes has problems with the GCC collect2 program. For some + # reason, if we set the COLLECT_NAMES environment variable, the problems + # vanish in a puff of smoke. 
+ if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES + fi + ;; +esac + +# Sed substitution that helps us do robust quoting. It backslashifies +# metacharacters that are still active within double-quoted strings. +Xsed='sed -e 1s/^X//' +sed_quote_subst='s/\([\\"\\`$\\\\]\)/\\\1/g' + +# Same as above, but do not quote variable references. +double_quote_subst='s/\([\\"\\`\\\\]\)/\\\1/g' + +# Sed substitution to delay expansion of an escaped shell variable in a +# double_quote_subst'ed string. +delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' + +# Sed substitution to avoid accidental globbing in evaled expressions +no_glob_subst='s/\*/\\\*/g' + +# Constants: +rm="rm -f" + +# Global variables: +default_ofile=libtool +can_build_shared=yes + +# All known linkers require a `.a' archive for static linking (except MSVC, +# which needs '.lib'). +libext=a +ltmain="$ac_aux_dir/ltmain.sh" +ofile="$default_ofile" +with_gnu_ld="$lt_cv_prog_gnu_ld" + +if test -n "$ac_tool_prefix"; then + # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args. +set dummy ${ac_tool_prefix}ar; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_AR+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$AR"; then + ac_cv_prog_AR="$AR" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_AR="${ac_tool_prefix}ar" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +AR=$ac_cv_prog_AR +if test -n "$AR"; then + { echo "$as_me:$LINENO: result: $AR" >&5 +echo "${ECHO_T}$AR" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + +fi +if test -z "$ac_cv_prog_AR"; then + ac_ct_AR=$AR + # Extract the first word of "ar", so it can be a program name with args. +set dummy ar; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_ac_ct_AR+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$ac_ct_AR"; then + ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_ac_ct_AR="ar" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +ac_ct_AR=$ac_cv_prog_ac_ct_AR +if test -n "$ac_ct_AR"; then + { echo "$as_me:$LINENO: result: $ac_ct_AR" >&5 +echo "${ECHO_T}$ac_ct_AR" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + if test "x$ac_ct_AR" = x; then + AR="false" + else + case $cross_compiling:$ac_tool_warned in +yes:) +{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." 
>&5 +echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&2;} +ac_tool_warned=yes ;; +esac + AR=$ac_ct_AR + fi +else + AR="$ac_cv_prog_AR" +fi + +if test -n "$ac_tool_prefix"; then + # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. +set dummy ${ac_tool_prefix}ranlib; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_RANLIB+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$RANLIB"; then + ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +RANLIB=$ac_cv_prog_RANLIB +if test -n "$RANLIB"; then + { echo "$as_me:$LINENO: result: $RANLIB" >&5 +echo "${ECHO_T}$RANLIB" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + +fi +if test -z "$ac_cv_prog_RANLIB"; then + ac_ct_RANLIB=$RANLIB + # Extract the first word of "ranlib", so it can be a program name with args. +set dummy ranlib; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_ac_ct_RANLIB+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$ac_ct_RANLIB"; then + ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_ac_ct_RANLIB="ranlib" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB +if test -n "$ac_ct_RANLIB"; then + { echo "$as_me:$LINENO: result: $ac_ct_RANLIB" >&5 +echo "${ECHO_T}$ac_ct_RANLIB" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + if test "x$ac_ct_RANLIB" = x; then + RANLIB=":" + else + case $cross_compiling:$ac_tool_warned in +yes:) +{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&5 +echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&2;} +ac_tool_warned=yes ;; +esac + RANLIB=$ac_ct_RANLIB + fi +else + RANLIB="$ac_cv_prog_RANLIB" +fi + +if test -n "$ac_tool_prefix"; then + # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. +set dummy ${ac_tool_prefix}strip; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... 
$ECHO_C" >&6; } +if test "${ac_cv_prog_STRIP+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$STRIP"; then + ac_cv_prog_STRIP="$STRIP" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_STRIP="${ac_tool_prefix}strip" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +STRIP=$ac_cv_prog_STRIP +if test -n "$STRIP"; then + { echo "$as_me:$LINENO: result: $STRIP" >&5 +echo "${ECHO_T}$STRIP" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + +fi +if test -z "$ac_cv_prog_STRIP"; then + ac_ct_STRIP=$STRIP + # Extract the first word of "strip", so it can be a program name with args. +set dummy strip; ac_word=$2 +{ echo "$as_me:$LINENO: checking for $ac_word" >&5 +echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; } +if test "${ac_cv_prog_ac_ct_STRIP+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -n "$ac_ct_STRIP"; then + ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. +else +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR +for as_dir in $PATH +do + IFS=$as_save_IFS + test -z "$as_dir" && as_dir=. + for ac_exec_ext in '' $ac_executable_extensions; do + if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + ac_cv_prog_ac_ct_STRIP="strip" + echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 + break 2 + fi +done +done +IFS=$as_save_IFS + +fi +fi +ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP +if test -n "$ac_ct_STRIP"; then + { echo "$as_me:$LINENO: result: $ac_ct_STRIP" >&5 +echo "${ECHO_T}$ac_ct_STRIP" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + if test "x$ac_ct_STRIP" = x; then + STRIP=":" + else + case $cross_compiling:$ac_tool_warned in +yes:) +{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&5 +echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools +whose name does not start with the host triplet. If you think this +configuration is useful to you, please write to autoconf at gnu.org." >&2;} +ac_tool_warned=yes ;; +esac + STRIP=$ac_ct_STRIP + fi +else + STRIP="$ac_cv_prog_STRIP" +fi + + +old_CC="$CC" +old_CFLAGS="$CFLAGS" + +# Set sane defaults for various variables +test -z "$AR" && AR=ar +test -z "$AR_FLAGS" && AR_FLAGS=cru +test -z "$AS" && AS=as +test -z "$CC" && CC=cc +test -z "$LTCC" && LTCC=$CC +test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS +test -z "$DLLTOOL" && DLLTOOL=dlltool +test -z "$LD" && LD=ld +test -z "$LN_S" && LN_S="ln -s" +test -z "$MAGIC_CMD" && MAGIC_CMD=file +test -z "$NM" && NM=nm +test -z "$SED" && SED=sed +test -z "$OBJDUMP" && OBJDUMP=objdump +test -z "$RANLIB" && RANLIB=: +test -z "$STRIP" && STRIP=: +test -z "$ac_objext" && ac_objext=o + +# Determine commands to create old-style static archives. 
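&6;">
# [Editor's aside: illustrative sketch only, not part of the committed diff.]
# With the defaults picked above ($AR=ar, $AR_FLAGS=cru, $RANLIB=ranlib or :)
# the archive commands assembled just below amount to roughly the following
# (library and object names are made-up placeholders):
#   ar cru libexample.a foo.o bar.o
#   ranlib libexample.a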
+old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' +old_postinstall_cmds='chmod 644 $oldlib' +old_postuninstall_cmds= + +if test -n "$RANLIB"; then + case $host_os in + openbsd*) + old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$oldlib" + ;; + *) + old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$oldlib" + ;; + esac + old_archive_cmds="$old_archive_cmds~\$RANLIB \$oldlib" +fi + +for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + +# Only perform the check for file, if the check method requires it +case $deplibs_check_method in +file_magic*) + if test "$file_magic_cmd" = '$MAGIC_CMD'; then + { echo "$as_me:$LINENO: checking for ${ac_tool_prefix}file" >&5 +echo $ECHO_N "checking for ${ac_tool_prefix}file... $ECHO_C" >&6; } +if test "${lt_cv_path_MAGIC_CMD+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + case $MAGIC_CMD in +[\\/*] | ?:[\\/]*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + ;; +*) + lt_save_MAGIC_CMD="$MAGIC_CMD" + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" + for ac_dir in $ac_dummy; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/${ac_tool_prefix}file; then + lt_cv_path_MAGIC_CMD="$ac_dir/${ac_tool_prefix}file" + if test -n "$file_magic_test_file"; then + case $deplibs_check_method in + "file_magic "*) + file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` + MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | + $EGREP "$file_magic_regex" > /dev/null; then + : + else + cat <&2 + +*** Warning: the command libtool uses to detect shared libraries, +*** $file_magic_cmd, produces output that libtool cannot recognize. +*** The result is that libtool may fail to recognize shared libraries +*** as such. This will affect the creation of libtool libraries that +*** depend on shared libraries, but programs linked with such libtool +*** libraries will work regardless of this problem. Nevertheless, you +*** may want to report the problem to your system manager and/or to +*** bug-libtool at gnu.org + +EOF + fi ;; + esac + fi + break + fi + done + IFS="$lt_save_ifs" + MAGIC_CMD="$lt_save_MAGIC_CMD" + ;; +esac +fi + +MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +if test -n "$MAGIC_CMD"; then + { echo "$as_me:$LINENO: result: $MAGIC_CMD" >&5 +echo "${ECHO_T}$MAGIC_CMD" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + +if test -z "$lt_cv_path_MAGIC_CMD"; then + if test -n "$ac_tool_prefix"; then + { echo "$as_me:$LINENO: checking for file" >&5 +echo $ECHO_N "checking for file... $ECHO_C" >&6; } +if test "${lt_cv_path_MAGIC_CMD+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + case $MAGIC_CMD in +[\\/*] | ?:[\\/]*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + ;; +*) + lt_save_MAGIC_CMD="$MAGIC_CMD" + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" + for ac_dir in $ac_dummy; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. 
+ if test -f $ac_dir/file; then + lt_cv_path_MAGIC_CMD="$ac_dir/file" + if test -n "$file_magic_test_file"; then + case $deplibs_check_method in + "file_magic "*) + file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` + MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | + $EGREP "$file_magic_regex" > /dev/null; then + : + else + cat <&2 + +*** Warning: the command libtool uses to detect shared libraries, +*** $file_magic_cmd, produces output that libtool cannot recognize. +*** The result is that libtool may fail to recognize shared libraries +*** as such. This will affect the creation of libtool libraries that +*** depend on shared libraries, but programs linked with such libtool +*** libraries will work regardless of this problem. Nevertheless, you +*** may want to report the problem to your system manager and/or to +*** bug-libtool at gnu.org + +EOF + fi ;; + esac + fi + break + fi + done + IFS="$lt_save_ifs" + MAGIC_CMD="$lt_save_MAGIC_CMD" + ;; +esac +fi + +MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +if test -n "$MAGIC_CMD"; then + { echo "$as_me:$LINENO: result: $MAGIC_CMD" >&5 +echo "${ECHO_T}$MAGIC_CMD" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + + else + MAGIC_CMD=: + fi +fi + + fi + ;; +esac + +enable_dlopen=no +enable_win32_dll=no + +# Check whether --enable-libtool-lock was given. +if test "${enable_libtool_lock+set}" = set; then + enableval=$enable_libtool_lock; +fi + +test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes + + +# Check whether --with-pic was given. +if test "${with_pic+set}" = set; then + withval=$with_pic; pic_mode="$withval" +else + pic_mode=default +fi + +test -z "$pic_mode" && pic_mode=default + +# Use C for the default configuration in the libtool script +tagname= +lt_save_CC="$CC" +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + + +# Source file extension for C test sources. +ac_ext=c + +# Object file extension for compiled C test sources. +objext=o +objext=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="int some_variable = 0;" + +# Code to be used in simple link tests +lt_simple_link_test_code='int main(){return(0);}' + + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC + + +# save warnings/boilerplate of simple test code +ac_outfile=conftest.$ac_objext +echo "$lt_simple_compile_test_code" >conftest.$ac_ext +eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_compiler_boilerplate=`cat conftest.err` +$rm conftest* + +ac_outfile=conftest.$ac_objext +echo "$lt_simple_link_test_code" >conftest.$ac_ext +eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_linker_boilerplate=`cat conftest.err` +$rm conftest* + + + +lt_prog_compiler_no_builtin_flag= + +if test "$GCC" = yes; then + lt_prog_compiler_no_builtin_flag=' -fno-builtin' + + +{ echo "$as_me:$LINENO: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 +echo $ECHO_N "checking if $compiler supports -fno-rtti -fno-exceptions... 
$ECHO_C" >&6; } +if test "${lt_cv_prog_compiler_rtti_exceptions+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_prog_compiler_rtti_exceptions=no + ac_outfile=conftest.$ac_objext + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="-fno-rtti -fno-exceptions" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:7436: $lt_compile\"" >&5) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&5 + echo "$as_me:7440: \$? = $ac_status" >&5 + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + lt_cv_prog_compiler_rtti_exceptions=yes + fi + fi + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 +echo "${ECHO_T}$lt_cv_prog_compiler_rtti_exceptions" >&6; } + +if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then + lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions" +else + : +fi + +fi + +lt_prog_compiler_wl= +lt_prog_compiler_pic= +lt_prog_compiler_static= + +{ echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5 +echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6; } + + if test "$GCC" = yes; then + lt_prog_compiler_wl='-Wl,' + lt_prog_compiler_static='-static' + + case $host_os in + aix*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static='-Bstatic' + fi + ;; + + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4' + ;; + + beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + # Although the cygwin gcc ignores -fPIC, still need this for old-style + # (--disable-auto-import) libraries + lt_prog_compiler_pic='-DDLL_EXPORT' + ;; + + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + lt_prog_compiler_pic='-fno-common' + ;; + + interix[3-9]*) + # Interix 3.x gcc -fpic/-fPIC options generate broken code. + # Instead, we relocate shared libraries at runtime. + ;; + + msdosdjgpp*) + # Just because we use GCC doesn't mean we suddenly get shared libraries + # on systems that don't support them. 
+ lt_prog_compiler_can_build_shared=no + enable_shared=no + ;; + + sysv4*MP*) + if test -d /usr/nec; then + lt_prog_compiler_pic=-Kconform_pic + fi + ;; + + hpux*) + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic='-fPIC' + ;; + esac + ;; + + *) + lt_prog_compiler_pic='-fPIC' + ;; + esac + else + # PORTME Check for flag to pass linker flags through the system compiler. + case $host_os in + aix*) + lt_prog_compiler_wl='-Wl,' + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static='-Bstatic' + else + lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp' + fi + ;; + darwin*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + case $cc_basename in + xlc*) + lt_prog_compiler_pic='-qnocommon' + lt_prog_compiler_wl='-Wl,' + ;; + esac + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_prog_compiler_pic='-DDLL_EXPORT' + ;; + + hpux9* | hpux10* | hpux11*) + lt_prog_compiler_wl='-Wl,' + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic='+Z' + ;; + esac + # Is there a better lt_prog_compiler_static that works with the bundled CC? + lt_prog_compiler_static='${wl}-a ${wl}archive' + ;; + + irix5* | irix6* | nonstopux*) + lt_prog_compiler_wl='-Wl,' + # PIC (with -KPIC) is the default. + lt_prog_compiler_static='-non_shared' + ;; + + newsos6) + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-Bstatic' + ;; + + linux* | k*bsd*-gnu) + case $cc_basename in + icc* | ecc*) + lt_prog_compiler_wl='-Wl,' + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-static' + ;; + pgcc* | pgf77* | pgf90* | pgf95*) + # Portland Group compilers (*not* the Pentium gcc compiler, + # which looks to be a dead project) + lt_prog_compiler_wl='-Wl,' + lt_prog_compiler_pic='-fpic' + lt_prog_compiler_static='-Bstatic' + ;; + ccc*) + lt_prog_compiler_wl='-Wl,' + # All Alpha code is PIC. + lt_prog_compiler_static='-non_shared' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C 5.9 + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-Bstatic' + lt_prog_compiler_wl='-Wl,' + ;; + *Sun\ F*) + # Sun Fortran 8.3 passes all unrecognized flags to the linker + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-Bstatic' + lt_prog_compiler_wl='' + ;; + esac + ;; + esac + ;; + + osf3* | osf4* | osf5*) + lt_prog_compiler_wl='-Wl,' + # All OSF/1 code is PIC. 
+ lt_prog_compiler_static='-non_shared' + ;; + + rdos*) + lt_prog_compiler_static='-non_shared' + ;; + + solaris*) + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-Bstatic' + case $cc_basename in + f77* | f90* | f95*) + lt_prog_compiler_wl='-Qoption ld ';; + *) + lt_prog_compiler_wl='-Wl,';; + esac + ;; + + sunos4*) + lt_prog_compiler_wl='-Qoption ld ' + lt_prog_compiler_pic='-PIC' + lt_prog_compiler_static='-Bstatic' + ;; + + sysv4 | sysv4.2uw2* | sysv4.3*) + lt_prog_compiler_wl='-Wl,' + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-Bstatic' + ;; + + sysv4*MP*) + if test -d /usr/nec ;then + lt_prog_compiler_pic='-Kconform_pic' + lt_prog_compiler_static='-Bstatic' + fi + ;; + + sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) + lt_prog_compiler_wl='-Wl,' + lt_prog_compiler_pic='-KPIC' + lt_prog_compiler_static='-Bstatic' + ;; + + unicos*) + lt_prog_compiler_wl='-Wl,' + lt_prog_compiler_can_build_shared=no + ;; + + uts4*) + lt_prog_compiler_pic='-pic' + lt_prog_compiler_static='-Bstatic' + ;; + + *) + lt_prog_compiler_can_build_shared=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic" >&6; } + +# +# Check to make sure the PIC flag actually works. +# +if test -n "$lt_prog_compiler_pic"; then + +{ echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5 +echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_pic_works+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_pic_works=no + ac_outfile=conftest.$ac_objext + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="$lt_prog_compiler_pic -DPIC" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:7726: $lt_compile\"" >&5) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&5 + echo "$as_me:7730: \$? = $ac_status" >&5 + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_pic_works=yes + fi + fi + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_works" >&6; } + +if test x"$lt_prog_compiler_pic_works" = xyes; then + case $lt_prog_compiler_pic in + "" | " "*) ;; + *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;; + esac +else + lt_prog_compiler_pic= + lt_prog_compiler_can_build_shared=no +fi + +fi +case $host_os in + # For platforms which do not support PIC, -DPIC is meaningless: + *djgpp*) + lt_prog_compiler_pic= + ;; + *) + lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC" + ;; +esac + +# +# Check to make sure the static flag actually works. 
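# [Editor's aside: illustrative sketch only, not part of the committed diff.]
# The PIC probe above and the static-flag probe below share one pattern:
# compile or link a trivial source with the candidate flag and treat any
# diagnostics beyond the compiler's usual boilerplate as "not supported".
# A stripped-down rendition of that idea, assuming a cc driver in PATH
# (file names are made up, and the real test also filters out the
# compiler's normal chatter before comparing):
echo 'int some_variable = 0;' > flagprobe.c
if cc -fPIC -c flagprobe.c -o flagprobe.o 2>flagprobe.err && test ! -s flagprobe.err; then
  echo "flag accepted cleanly"
else
  echo "flag rejected or noisy"
fi
rm -f flagprobe.c flagprobe.o flagprobe.err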
+# +wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\" +{ echo "$as_me:$LINENO: checking if $compiler static flag $lt_tmp_static_flag works" >&5 +echo $ECHO_N "checking if $compiler static flag $lt_tmp_static_flag works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_static_works+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_static_works=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $lt_tmp_static_flag" + echo "$lt_simple_link_test_code" > conftest.$ac_ext + if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then + # The linker can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + # Append any errors to the config.log. + cat conftest.err 1>&5 + $echo "X$_lt_linker_boilerplate" | $Xsed -e '/^$/d' > conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_static_works=yes + fi + else + lt_prog_compiler_static_works=yes + fi + fi + $rm conftest* + LDFLAGS="$save_LDFLAGS" + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_static_works" >&5 +echo "${ECHO_T}$lt_prog_compiler_static_works" >&6; } + +if test x"$lt_prog_compiler_static_works" = xyes; then + : +else + lt_prog_compiler_static= +fi + + +{ echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5 +echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6; } +if test "${lt_cv_prog_compiler_c_o+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_prog_compiler_c_o=no + $rm -r conftest 2>/dev/null + mkdir conftest + cd conftest + mkdir out + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + lt_compiler_flag="-o out/conftest2.$ac_objext" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:7830: $lt_compile\"" >&5) + (eval "$lt_compile" 2>out/conftest.err) + ac_status=$? + cat out/conftest.err >&5 + echo "$as_me:7834: \$? = $ac_status" >&5 + if (exit $ac_status) && test -s out/conftest2.$ac_objext + then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' > out/conftest.exp + $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 + if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then + lt_cv_prog_compiler_c_o=yes + fi + fi + chmod u+w . 2>&5 + $rm conftest* + # SGI C++ compiler will create directory out/ii_files/ for + # template instantiation + test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files + $rm out/* && rmdir out + cd .. 
+ rmdir conftest + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o" >&5 +echo "${ECHO_T}$lt_cv_prog_compiler_c_o" >&6; } + + +hard_links="nottested" +if test "$lt_cv_prog_compiler_c_o" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + { echo "$as_me:$LINENO: checking if we can lock with hard links" >&5 +echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6; } + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + { echo "$as_me:$LINENO: result: $hard_links" >&5 +echo "${ECHO_T}$hard_links" >&6; } + if test "$hard_links" = no; then + { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 +echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} + need_locks=warn + fi +else + need_locks=no +fi + +{ echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5 +echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6; } + + runpath_var= + allow_undefined_flag= + enable_shared_with_static_runtimes=no + archive_cmds= + archive_expsym_cmds= + old_archive_From_new_cmds= + old_archive_from_expsyms_cmds= + export_dynamic_flag_spec= + whole_archive_flag_spec= + thread_safe_flag_spec= + hardcode_libdir_flag_spec= + hardcode_libdir_flag_spec_ld= + hardcode_libdir_separator= + hardcode_direct=no + hardcode_minus_L=no + hardcode_shlibpath_var=unsupported + link_all_deplibs=unknown + hardcode_automatic=no + module_cmds= + module_expsym_cmds= + always_export_symbols=no + export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + # include_expsyms should be a list of space-separated symbols to be *always* + # included in the symbol list + include_expsyms= + # exclude_expsyms can be an extended regexp of symbols to exclude + # it will be wrapped by ` (' and `)$', so one must not match beginning or + # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', + # as well as any symbol that contains `d'. + exclude_expsyms="_GLOBAL_OFFSET_TABLE_" + # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out + # platforms (ab)use it in PIC code, but their linkers get confused if + # the symbol is explicitly referenced. Since portable code cannot + # rely on this symbol name, it's probably fine to never include it in + # preloaded symbol tables. + extract_expsyms_cmds= + # Just being paranoid about ensuring that cc_basename is set. + for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + case $host_os in + cygwin* | mingw* | pw32*) + # FIXME: the MSVC++ port hasn't been tested in a loooong time + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. 
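# [Editor's aside: illustrative sketch only, not part of the committed diff.]
# The big case statement below fills in per-OS templates: archive_cmds (how
# to produce a shared library), hardcode_libdir_flag_spec (how to embed a
# run-time search path) and export_symbols_cmds (how to list exported
# symbols).  On a GNU/Linux host the GNU-ld template chosen below expands to
# roughly the following (object and soname values are made-up placeholders):
#   cc -shared foo.o bar.o -Wl,-soname -Wl,libexample.so.1 -o libexample.so.1.0.0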
+ if test "$GCC" != yes; then + with_gnu_ld=no + fi + ;; + interix*) + # we just hope/assume this is gcc and not c89 (= MSVC++) + with_gnu_ld=yes + ;; + openbsd*) + with_gnu_ld=no + ;; + esac + + ld_shlibs=yes + if test "$with_gnu_ld" = yes; then + # If archive_cmds runs LD, not CC, wlarc should be empty + wlarc='${wl}' + + # Set some defaults for GNU ld with shared library support. These + # are reset later if shared libraries are not supported. Putting them + # here allows them to be overridden if necessary. + runpath_var=LD_RUN_PATH + hardcode_libdir_flag_spec='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec='${wl}--export-dynamic' + # ancient GNU ld didn't support --whole-archive et. al. + if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then + whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + whole_archive_flag_spec= + fi + supports_anon_versioning=no + case `$LD -v 2>/dev/null` in + *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 + *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... + *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... + *\ 2.11.*) ;; # other 2.11 versions + *) supports_anon_versioning=yes ;; + esac + + # See if GNU ld supports shared libraries. + case $host_os in + aix3* | aix4* | aix5*) + # On AIX/PPC, the GNU linker is very broken + if test "$host_cpu" != ia64; then + ld_shlibs=no + cat <&2 + +*** Warning: the GNU linker, at least up to release 2.9.1, is reported +*** to be unable to reliably create shared libraries on AIX. +*** Therefore, libtool is disabling shared libraries support. If you +*** really care for shared libraries, you may want to modify your PATH +*** so that a non-GNU linker is found, and then restart. + +EOF + fi + ;; + + amigaos*) + archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + + # Samuel A. Falvo II reports + # that the semantics of dynamic libraries on AmigaOS, at least up + # to version 4, is to share data among multiple programs linked + # with the same dynamic library. Since this doesn't match the + # behavior of shared libraries on other platforms, we can't use + # them. + ld_shlibs=no + ;; + + beos*) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + allow_undefined_flag=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. FIXME + archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + ld_shlibs=no + fi + ;; + + cygwin* | mingw* | pw32*) + # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless, + # as there is no search path for DLLs. 
+ hardcode_libdir_flag_spec='-L$libdir' + allow_undefined_flag=unsupported + always_export_symbols=no + enable_shared_with_static_runtimes=yes + export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' + + if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is; otherwise, prepend... + archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + else + ld_shlibs=no + fi + ;; + + interix[3-9]*) + hardcode_direct=no + hardcode_shlibpath_var=no + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + export_dynamic_flag_spec='${wl}-E' + # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. + # Instead, shared libraries are loaded at an image base (0x10000000 by + # default) and relocated if they conflict, which is a slow very memory + # consuming and fragmenting process. To avoid this, we pick a random, + # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link + # time. Moving up from 0x10000000 also allows more sbrk(2) space. + archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + archive_expsym_cmds='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + ;; + + gnu* | linux* | k*bsd*-gnu) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + tmp_addflag= + case $cc_basename,$host_cpu in + pgcc*) # Portland Group C compiler + whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_addflag=' $pic_flag' + ;; + pgf77* | pgf90* | pgf95*) # Portland Group f77 and f90 compilers + whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_addflag=' $pic_flag -Mnomain' ;; + ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 + tmp_addflag=' -i_dynamic' ;; + efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 + tmp_addflag=' -i_dynamic -nofor_main' ;; + ifc* | ifort*) # Intel Fortran compiler + tmp_addflag=' -nofor_main' ;; + esac + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) # Sun C 5.9 + whole_archive_flag_spec='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_sharedflag='-G' ;; + *Sun\ F*) # Sun Fortran 8.3 + tmp_sharedflag='-G' ;; + *) + 
tmp_sharedflag='-shared' ;; + esac + archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + + if test $supports_anon_versioning = yes; then + archive_expsym_cmds='$echo "{ global:" > $output_objdir/$libname.ver~ + cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ + $echo "local: *; };" >> $output_objdir/$libname.ver~ + $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' + fi + else + ld_shlibs=no + fi + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' + wlarc= + else + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + fi + ;; + + solaris*) + if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then + ld_shlibs=no + cat <&2 + +*** Warning: The releases 2.8.* of the GNU linker cannot reliably +*** create shared libraries on Solaris systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.9.1 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. + +EOF + elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + + sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) + case `$LD -v 2>&1` in + *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) + ld_shlibs=no + cat <<_LT_EOF 1>&2 + +*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not +*** reliably create shared libraries on SCO systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.16.91.0.3 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. 
+ +_LT_EOF + ;; + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + hardcode_libdir_flag_spec='`test -z "$SCOABSPATH" && echo ${wl}-rpath,$libdir`' + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname,-retain-symbols-file,$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + esac + ;; + + sunos4*) + archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' + wlarc= + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + esac + + if test "$ld_shlibs" = no; then + runpath_var= + hardcode_libdir_flag_spec= + export_dynamic_flag_spec= + whole_archive_flag_spec= + fi + else + # PORTME fill in a description of your system's linker (not GNU ld) + case $host_os in + aix3*) + allow_undefined_flag=unsupported + always_export_symbols=yes + archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' + # Note: this linker hardcodes the directories in LIBPATH if there + # are no directories specified by -L. + hardcode_minus_L=yes + if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then + # Neither direct hardcoding nor static linking is supported with a + # broken collect2. + hardcode_direct=unsupported + fi + ;; + + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + # If we're using GNU nm, then we don't want the "-C" option. + # -C means demangle to AIX nm, but means don't demangle with GNU nm + if $NM -V 2>&1 | grep 'GNU' > /dev/null; then + export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + else + export_symbols_cmds='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + fi + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[23]|aix4.[23].*|aix5*) + for ld_flag in $LDFLAGS; do + if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + aix_use_runtimelinking=yes + break + fi + done + ;; + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
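# [Editor's aside: illustrative sketch only, not part of the committed diff.]
# In practice the "TOC overflow" remedy described in the comment above means
# rebuilding with the flags it names, for example (the make invocations are
# shown only as an illustration):
#   make CFLAGS="-mminimal-toc"       # gcc/g++: reduce TOC pressure
#   make LDFLAGS="-Wl,-bbigtoc"       # if that alone is not enough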
+ + archive_cmds='' + hardcode_direct=yes + hardcode_libdir_separator=':' + link_all_deplibs=yes + + if test "$GCC" = yes; then + case $host_os in aix4.[012]|aix4.[012].*) + # We only want to do this on AIX 4.2 and lower, the check + # below for broken collect2 doesn't work under 4.3+ + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + : + else + # We have old collect2 + hardcode_direct=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + hardcode_minus_L=yes + hardcode_libdir_flag_spec='-L$libdir' + hardcode_libdir_separator= + fi + ;; + esac + shared_flag='-shared' + if test "$aix_use_runtimelinking" = yes; then + shared_flag="$shared_flag "'${wl}-G' + fi + else + # not using gcc + if test "$host_cpu" = ia64; then + # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release + # chokes on -Wl,-G. The following line is correct: + shared_flag='-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall does not export symbols beginning with + # underscore (_), so it is better to generate a list of symbols to export. + always_export_symbols=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + allow_undefined_flag='-berok' + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. 
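# [Editor's aside: illustrative sketch only, not part of the committed diff.]
# The default AIX libpath is recovered above by linking an empty program and
# reading the loader section back with dump(1); done by hand that is roughly
# (probe/probe.c are made-up names, reusing the sed script defined above):
#   cc -o probe probe.c
#   dump -H probe 2>/dev/null | sed -n -e "$lt_aix_libpath_sed"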
+if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" + archive_expsym_cmds="\$CC"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib' + allow_undefined_flag="-z nodefs" + archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + else + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + no_undefined_flag=' ${wl}-bernotok' + allow_undefined_flag=' ${wl}-berok' + # Exported symbols can be pulled into shared objects from archives + whole_archive_flag_spec='$convenience' + archive_cmds_need_lc=yes + # This is similar to how AIX traditionally builds its shared libraries. 
+ archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + fi + fi + ;; + + amigaos*) + archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + # see comment about different semantics on the GNU ld section + ld_shlibs=no + ;; + + bsdi[45]*) + export_dynamic_flag_spec=-rdynamic + ;; + + cygwin* | mingw* | pw32*) + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec=' ' + allow_undefined_flag=unsupported + # Tell ltmain to make .lib files, not .a files. + libext=lib + # Tell ltmain to make .dll files, not .so files. + shrext_cmds=".dll" + # FIXME: Setting linknames here is a bad hack. + archive_cmds='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames=' + # The linker will automatically build a .lib file if we build a DLL. + old_archive_From_new_cmds='true' + # FIXME: Should let the user specify the lib program. + old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs' + fix_srcfile_path='`cygpath -w "$srcfile"`' + enable_shared_with_static_runtimes=yes + ;; + + darwin* | rhapsody*) + case $host_os in + rhapsody* | darwin1.[012]) + allow_undefined_flag='${wl}-undefined ${wl}suppress' + ;; + *) # Darwin 1.3 on + if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then + allow_undefined_flag='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + else + case ${MACOSX_DEPLOYMENT_TARGET} in + 10.[012]) + allow_undefined_flag='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + ;; + 10.*) + allow_undefined_flag='${wl}-undefined ${wl}dynamic_lookup' + ;; + esac + fi + ;; + esac + archive_cmds_need_lc=no + hardcode_direct=no + hardcode_automatic=yes + hardcode_shlibpath_var=unsupported + whole_archive_flag_spec='' + link_all_deplibs=yes + if test "$GCC" = yes ; then + output_verbose_link_cmd='echo' + archive_cmds='$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + module_cmds='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + archive_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + case $cc_basename in + xlc*) + output_verbose_link_cmd='echo' + archive_cmds='$CC -qmkshrobj $allow_undefined_flag -o $lib 
$libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}`echo $rpath/$soname` $xlcverstring' + module_cmds='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + archive_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}$rpath/$soname $xlcverstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + ;; + *) + ld_shlibs=no + ;; + esac + fi + ;; + + dgux*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_shlibpath_var=no + ;; + + freebsd1*) + ld_shlibs=no + ;; + + # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor + # support. Future versions do this automatically, but an explicit c++rt0.o + # does not break anything, and helps significantly (at the cost of a little + # extra space). + freebsd2.2*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + # Unfortunately, older versions of FreeBSD 2 do not have this feature. + freebsd2*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_minus_L=yes + hardcode_shlibpath_var=no + ;; + + # FreeBSD 3 and greater uses gcc -shared to do shared libraries. + freebsd* | dragonfly*) + archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + hpux9*) + if test "$GCC" = yes; then + archive_cmds='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + else + archive_cmds='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + fi + hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' + hardcode_libdir_separator=: + hardcode_direct=yes + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + hardcode_minus_L=yes + export_dynamic_flag_spec='${wl}-E' + ;; + + hpux10*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' + fi + if test "$with_gnu_ld" = no; then + hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' + hardcode_libdir_separator=: + + hardcode_direct=yes + export_dynamic_flag_spec='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. 
+ hardcode_minus_L=yes + fi + ;; + + hpux11*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + case $host_cpu in + hppa*64*) + archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + archive_cmds='$CC -shared ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + else + case $host_cpu in + hppa*64*) + archive_cmds='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + fi + if test "$with_gnu_ld" = no; then + hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' + hardcode_libdir_separator=: + + case $host_cpu in + hppa*64*|ia64*) + hardcode_libdir_flag_spec_ld='+b $libdir' + hardcode_direct=no + hardcode_shlibpath_var=no + ;; + *) + hardcode_direct=yes + export_dynamic_flag_spec='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + hardcode_minus_L=yes + ;; + esac + fi + ;; + + irix5* | irix6* | nonstopux*) + if test "$GCC" = yes; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec_ld='-rpath $libdir' + fi + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + link_all_deplibs=yes + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out + else + archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF + fi + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + newsos6) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + hardcode_shlibpath_var=no + ;; + + openbsd*) + if test -f /usr/libexec/ld.so; then + hardcode_direct=yes + hardcode_shlibpath_var=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + export_dynamic_flag_spec='${wl}-E' + else + case $host_os in + openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-R$libdir' + ;; + *) + archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + ;; + esac + fi + else + ld_shlibs=no + fi + ;; + + os2*) + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + 
allow_undefined_flag=unsupported + archive_cmds='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' + old_archive_From_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + ;; + + osf3*) + if test "$GCC" = yes; then + allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + allow_undefined_flag=' -expect_unresolved \*' + archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + ;; + + osf4* | osf5*) # as osf3* with the addition of -msym flag + if test "$GCC" = yes; then + allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + else + allow_undefined_flag=' -expect_unresolved \*' + archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~ + $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib~$rm $lib.exp' + + # Both c and cxx compiler support -rpath directly + hardcode_libdir_flag_spec='-rpath $libdir' + fi + hardcode_libdir_separator=: + ;; + + solaris*) + no_undefined_flag=' -z text' + if test "$GCC" = yes; then + wlarc='${wl}' + archive_cmds='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp' + else + wlarc='' + archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + fi + hardcode_libdir_flag_spec='-R$libdir' + hardcode_shlibpath_var=no + case $host_os in + solaris2.[0-5] | solaris2.[0-5].*) ;; + *) + # The compiler driver will combine and reorder linker options, + # but understands `-z linker_flag'. 
GCC discards it without `$wl', + # but is careful enough not to reorder. + # Supported since Solaris 2.6 (maybe 2.5.1?) + if test "$GCC" = yes; then + whole_archive_flag_spec='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + else + whole_archive_flag_spec='-z allextract$convenience -z defaultextract' + fi + ;; + esac + link_all_deplibs=yes + ;; + + sunos4*) + if test "x$host_vendor" = xsequent; then + # Use $CC to link under sequent, because it throws in some extra .o + # files that make .init and .fini sections work. + archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' + fi + hardcode_libdir_flag_spec='-L$libdir' + hardcode_direct=yes + hardcode_minus_L=yes + hardcode_shlibpath_var=no + ;; + + sysv4) + case $host_vendor in + sni) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes # is this really true??? + ;; + siemens) + ## LD is ld it makes a PLAMLIB + ## CC just makes a GrossModule. + archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' + reload_cmds='$CC -r -o $output$reload_objs' + hardcode_direct=no + ;; + motorola) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=no #Motorola manual says yes, but my tests say they lie + ;; + esac + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var=no + ;; + + sysv4.3*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + export_dynamic_flag_spec='-Bexport' + ;; + + sysv4*MP*) + if test -d /usr/nec; then + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + ld_shlibs=yes + fi + ;; + + sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) + no_undefined_flag='${wl}-z,text' + archive_cmds_need_lc=no + hardcode_shlibpath_var=no + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + sysv5* | sco3.2v5* | sco5v6*) + # Note: We can NOT use -z defs as we might desire, because we do not + # link with -lc, and that would cause any symbols used from libc to + # always be unresolved, which means just about no library would + # ever link correctly. If we're not using GNU ld we use -z text + # though, which does catch some bad symbols but isn't as heavy-handed + # as -z defs. 
+ no_undefined_flag='${wl}-z,text' + allow_undefined_flag='${wl}-z,nodefs' + archive_cmds_need_lc=no + hardcode_shlibpath_var=no + hardcode_libdir_flag_spec='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' + hardcode_libdir_separator=':' + link_all_deplibs=yes + export_dynamic_flag_spec='${wl}-Bexport' + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + archive_cmds='$CC -shared ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$CC -G ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + uts4*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_shlibpath_var=no + ;; + + *) + ld_shlibs=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $ld_shlibs" >&5 +echo "${ECHO_T}$ld_shlibs" >&6; } +test "$ld_shlibs" = no && can_build_shared=no + +# +# Do we need to explicitly link libc? +# +case "x$archive_cmds_need_lc" in +x|xyes) + # Assume -lc should be added + archive_cmds_need_lc=yes + + if test "$enable_shared" = yes && test "$GCC" = yes; then + case $archive_cmds in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + { echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5 +echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6; } + $rm conftest* + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } 2>conftest.err; then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$lt_prog_compiler_wl + pic_flag=$lt_prog_compiler_pic + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. + libname=conftest + lt_save_allow_undefined_flag=$allow_undefined_flag + allow_undefined_flag= + if { (eval echo "$as_me:$LINENO: \"$archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5 + (eval $archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } + then + archive_cmds_need_lc=no + else + archive_cmds_need_lc=yes + fi + allow_undefined_flag=$lt_save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi + $rm conftest* + { echo "$as_me:$LINENO: result: $archive_cmds_need_lc" >&5 +echo "${ECHO_T}$archive_cmds_need_lc" >&6; } + ;; + esac + fi + ;; +esac + +{ echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5 +echo $ECHO_N "checking dynamic linker characteristics... 
$ECHO_C" >&6; } +library_names_spec= +libname_spec='lib$name' +soname_spec= +shrext_cmds=".so" +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" + +if test "$GCC" = yes; then + case $host_os in + darwin*) lt_awk_arg="/^libraries:/,/LR/" ;; + *) lt_awk_arg="/^libraries:/" ;; + esac + lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$lt_search_path_spec" | grep ';' >/dev/null ; then + # if the path contains ";" then we assume it to be the separator + # otherwise default to the standard path separator (i.e. ":") - it is + # assumed that no part of a normal pathname contains ";" but that should + # okay in the real world where ";" in dirpaths is itself problematic. + lt_search_path_spec=`echo "$lt_search_path_spec" | $SED -e 's/;/ /g'` + else + lt_search_path_spec=`echo "$lt_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + # Ok, now we have the path, separated by spaces, we can step through it + # and add multilib dir if necessary. + lt_tmp_lt_search_path_spec= + lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` + for lt_sys_path in $lt_search_path_spec; do + if test -d "$lt_sys_path/$lt_multi_os_dir"; then + lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir" + else + test -d "$lt_sys_path" && \ + lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" + fi + done + lt_search_path_spec=`echo $lt_tmp_lt_search_path_spec | awk ' +BEGIN {RS=" "; FS="/|\n";} { + lt_foo=""; + lt_count=0; + for (lt_i = NF; lt_i > 0; lt_i--) { + if ($lt_i != "" && $lt_i != ".") { + if ($lt_i == "..") { + lt_count++; + } else { + if (lt_count == 0) { + lt_foo="/" $lt_i lt_foo; + } else { + lt_count--; + } + } + } + } + if (lt_foo != "") { lt_freq[lt_foo]++; } + if (lt_freq[lt_foo] == 1) { print lt_foo; } +}'` + sys_lib_search_path_spec=`echo $lt_search_path_spec` +else + sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" +fi +need_lib_prefix=unknown +hardcode_into_libs=no + +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +need_version=unknown + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX 3 has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}${shared_ext}$major' + ;; + +aix4* | aix5*) + version_type=linux + need_lib_prefix=no + need_version=no + hardcode_into_libs=yes + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. 
+ case $host_os in + aix4 | aix4.[01] | aix4.[01].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can not hardcode correct + # soname into executable. Probably we can add versioning support to + # collect2, so additional links can be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}${shared_ext}$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. + finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}${shared_ext}' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi[45]*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + shrext_cmds=".dll" + need_version=no + need_lib_prefix=no + + case $GCC,$host_os in + yes,cygwin* | yes,mingw* | yes,pw32*) + library_names_spec='$libname.dll.a' + # DLL is installed to $(libdir)/../bin by postinstall_cmds + postinstall_cmds='base_file=`basename \${file}`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog $dir/$dlname \$dldir/$dlname~ + chmod a+x \$dldir/$dlname' + postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + shlibpath_overrides_runpath=yes + + case $host_os in + cygwin*) + # Cygwin DLLs use 'cyg' prefix rather than 'lib' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib" + ;; + mingw*) + # MinGW DLLs use traditional 'lib' prefix + soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then + # It is most probably a Windows format PATH printed by + # mingw gcc, but we are running on Cygwin. Gcc prints its search + # path with ; separators, and with drive letters. We can handle the + # drive letters (cygwin fileutils understands them), so leave them, + # especially as we might pass files found there to a mingw objdump, + # which wouldn't understand a cygwinified path. Ahh. + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` + else + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + ;; + pw32*) + # pw32 DLLs use 'pw' prefix rather than 'lib' + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + ;; + esac + ;; + + *) + library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext' + soname_spec='${libname}${release}${major}$shared_ext' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' + + sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib" + sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd* | dragonfly*) + # DragonFly does not have aout. When/if they implement a new + # versioning mechanism, adjust this. 
+ if test -x /usr/bin/objformat; then + objformat=`/usr/bin/objformat` + else + case $host_os in + freebsd[123]*) objformat=aout ;; + *) objformat=elf ;; + esac + fi + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + freebsd3.[01]* | freebsdelf3.[01]*) + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ + freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + *) # from 4.6 on, and DragonFly + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + version_type=sunos + need_lib_prefix=no + need_version=no + case $host_cpu in + ia64*) + shrext_cmds='.so' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.so" + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + if test "X$HPUX_IA64_MODE" = X32; then + sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" + else + sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" + fi + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + hppa*64*) + shrext_cmds='.sl' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.sl" + shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + *) + shrext_cmds='.sl' + dynamic_linker="$host_os dld.sl" + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + ;; + esac + # HP-UX runs *really* slowly unless shared libraries are mode 555. 
+ postinstall_cmds='chmod 555 $lib' + ;; + +interix[3-9]*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + +irix5* | irix6* | nonstopux*) + case $host_os in + nonstopux*) version_type=nonstopux ;; + *) + if test "$lt_cv_prog_gnu_ld" = yes; then + version_type=linux + else + version_type=irix + fi ;; + esac + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' + case $host_os in + irix5* | nonstopux*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") + libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") + libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") + libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + hardcode_into_libs=yes + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux*oldld* | linux*aout* | linux*coff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. +linux* | k*bsd*-gnu) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. + hardcode_into_libs=yes + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + + # Append ld.so.conf contents to the search path + if test -f /etc/ld.so.conf; then + lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;/^$/d' | tr '\n' ' '` + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. 
+ dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +nto-qnx*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + sys_lib_dlsearch_path_spec="/usr/lib" + need_lib_prefix=no + # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. + case $host_os in + openbsd3.3 | openbsd3.3.*) need_version=yes ;; + *) need_version=no ;; + esac + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case $host_os in + openbsd2.[89] | openbsd2.[89].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + ;; + +os2*) + libname_spec='$name' + shrext_cmds=".dll" + need_lib_prefix=no + library_names_spec='$libname${shared_ext} $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +rdos*) + dynamic_linker=no + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.3*) + version_type=linux + 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + need_lib_prefix=no + export_dynamic_flag_spec='${wl}-Blargedynsym' + runpath_var=LD_RUN_PATH + ;; + siemens) + need_lib_prefix=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' + soname_spec='$libname${shared_ext}.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + version_type=freebsd-elf + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + if test "$with_gnu_ld" = yes; then + sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' + shlibpath_overrides_runpath=no + else + sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' + shlibpath_overrides_runpath=yes + case $host_os in + sco3.2v5*) + sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" + ;; + esac + fi + sys_lib_dlsearch_path_spec='/usr/lib' + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +*) + dynamic_linker=no + ;; +esac +{ echo "$as_me:$LINENO: result: $dynamic_linker" >&5 +echo "${ECHO_T}$dynamic_linker" >&6; } +test "$dynamic_linker" = no && can_build_shared=no + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi + +{ echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5 +echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6; } +hardcode_action= +if test -n "$hardcode_libdir_flag_spec" || \ + test -n "$runpath_var" || \ + test "X$hardcode_automatic" = "Xyes" ; then + + # We can hardcode non-existant directories. + if test "$hardcode_direct" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, )" != no && + test "$hardcode_minus_L" != no; then + # Linking always hardcodes the temporary library directory. + hardcode_action=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. + hardcode_action=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. 
+ hardcode_action=unsupported +fi +{ echo "$as_me:$LINENO: result: $hardcode_action" >&5 +echo "${ECHO_T}$hardcode_action" >&6; } + +if test "$hardcode_action" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi + +striplib= +old_striplib= +{ echo "$as_me:$LINENO: checking whether stripping libraries is possible" >&5 +echo $ECHO_N "checking whether stripping libraries is possible... $ECHO_C" >&6; } +if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then + test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" + test -z "$striplib" && striplib="$STRIP --strip-unneeded" + { echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6; } +else +# FIXME - insert some real tests, host_os isn't really good enough + case $host_os in + darwin*) + if test -n "$STRIP" ; then + striplib="$STRIP -x" + old_striplib="$STRIP -S" + { echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6; } + else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi + ;; + *) + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } + ;; + esac +fi + +if test "x$enable_dlopen" != xyes; then + enable_dlopen=unknown + enable_dlopen_self=unknown + enable_dlopen_self_static=unknown +else + lt_cv_dlopen=no + lt_cv_dlopen_libs= + + case $host_os in + beos*) + lt_cv_dlopen="load_add_on" + lt_cv_dlopen_libs= + lt_cv_dlopen_self=yes + ;; + + mingw* | pw32*) + lt_cv_dlopen="LoadLibrary" + lt_cv_dlopen_libs= + ;; + + cygwin*) + lt_cv_dlopen="dlopen" + lt_cv_dlopen_libs= + ;; + + darwin*) + # if libdl is installed we need to link against it + { echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5 +echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6; } +if test "${ac_cv_lib_dl_dlopen+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_check_lib_save_LIBS=$LIBS +LIBS="-ldl $LIBS" +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char dlopen (); +int +main () +{ +return dlopen (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! 
-s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_lib_dl_dlopen=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_lib_dl_dlopen=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +LIBS=$ac_check_lib_save_LIBS +fi +{ echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5 +echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6; } +if test $ac_cv_lib_dl_dlopen = yes; then + lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" +else + + lt_cv_dlopen="dyld" + lt_cv_dlopen_libs= + lt_cv_dlopen_self=yes + +fi + + ;; + + *) + { echo "$as_me:$LINENO: checking for shl_load" >&5 +echo $ECHO_N "checking for shl_load... $ECHO_C" >&6; } +if test "${ac_cv_func_shl_load+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +/* Define shl_load to an innocuous variant, in case declares shl_load. + For example, HP-UX 11i declares gettimeofday. */ +#define shl_load innocuous_shl_load + +/* System header to define __stub macros and hopefully few prototypes, + which can conflict with char shl_load (); below. + Prefer to if __STDC__ is defined, since + exists even on freestanding compilers. */ + +#ifdef __STDC__ +# include +#else +# include +#endif + +#undef shl_load + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char shl_load (); +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined __stub_shl_load || defined __stub___shl_load +choke me +#endif + +int +main () +{ +return shl_load (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_func_shl_load=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_func_shl_load=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +fi +{ echo "$as_me:$LINENO: result: $ac_cv_func_shl_load" >&5 +echo "${ECHO_T}$ac_cv_func_shl_load" >&6; } +if test $ac_cv_func_shl_load = yes; then + lt_cv_dlopen="shl_load" +else + { echo "$as_me:$LINENO: checking for shl_load in -ldld" >&5 +echo $ECHO_N "checking for shl_load in -ldld... $ECHO_C" >&6; } +if test "${ac_cv_lib_dld_shl_load+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_check_lib_save_LIBS=$LIBS +LIBS="-ldld $LIBS" +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +/* Override any GCC internal prototype to avoid an error. 
+ Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char shl_load (); +int +main () +{ +return shl_load (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_lib_dld_shl_load=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_lib_dld_shl_load=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +LIBS=$ac_check_lib_save_LIBS +fi +{ echo "$as_me:$LINENO: result: $ac_cv_lib_dld_shl_load" >&5 +echo "${ECHO_T}$ac_cv_lib_dld_shl_load" >&6; } +if test $ac_cv_lib_dld_shl_load = yes; then + lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld" +else + { echo "$as_me:$LINENO: checking for dlopen" >&5 +echo $ECHO_N "checking for dlopen... $ECHO_C" >&6; } +if test "${ac_cv_func_dlopen+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +/* Define dlopen to an innocuous variant, in case declares dlopen. + For example, HP-UX 11i declares gettimeofday. */ +#define dlopen innocuous_dlopen + +/* System header to define __stub macros and hopefully few prototypes, + which can conflict with char dlopen (); below. + Prefer to if __STDC__ is defined, since + exists even on freestanding compilers. */ + +#ifdef __STDC__ +# include +#else +# include +#endif + +#undef dlopen + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char dlopen (); +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined __stub_dlopen || defined __stub___dlopen +choke me +#endif + +int +main () +{ +return dlopen (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! 
-s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_func_dlopen=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_func_dlopen=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +fi +{ echo "$as_me:$LINENO: result: $ac_cv_func_dlopen" >&5 +echo "${ECHO_T}$ac_cv_func_dlopen" >&6; } +if test $ac_cv_func_dlopen = yes; then + lt_cv_dlopen="dlopen" +else + { echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5 +echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6; } +if test "${ac_cv_lib_dl_dlopen+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_check_lib_save_LIBS=$LIBS +LIBS="-ldl $LIBS" +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char dlopen (); +int +main () +{ +return dlopen (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_lib_dl_dlopen=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_lib_dl_dlopen=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +LIBS=$ac_check_lib_save_LIBS +fi +{ echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5 +echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6; } +if test $ac_cv_lib_dl_dlopen = yes; then + lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" +else + { echo "$as_me:$LINENO: checking for dlopen in -lsvld" >&5 +echo $ECHO_N "checking for dlopen in -lsvld... $ECHO_C" >&6; } +if test "${ac_cv_lib_svld_dlopen+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_check_lib_save_LIBS=$LIBS +LIBS="-lsvld $LIBS" +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char dlopen (); +int +main () +{ +return dlopen (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! 
-s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_lib_svld_dlopen=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_lib_svld_dlopen=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +LIBS=$ac_check_lib_save_LIBS +fi +{ echo "$as_me:$LINENO: result: $ac_cv_lib_svld_dlopen" >&5 +echo "${ECHO_T}$ac_cv_lib_svld_dlopen" >&6; } +if test $ac_cv_lib_svld_dlopen = yes; then + lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld" +else + { echo "$as_me:$LINENO: checking for dld_link in -ldld" >&5 +echo $ECHO_N "checking for dld_link in -ldld... $ECHO_C" >&6; } +if test "${ac_cv_lib_dld_dld_link+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + ac_check_lib_save_LIBS=$LIBS +LIBS="-ldld $LIBS" +cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. */ +#ifdef __cplusplus +extern "C" +#endif +char dld_link (); +int +main () +{ +return dld_link (); + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + ac_cv_lib_dld_dld_link=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + ac_cv_lib_dld_dld_link=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +LIBS=$ac_check_lib_save_LIBS +fi +{ echo "$as_me:$LINENO: result: $ac_cv_lib_dld_dld_link" >&5 +echo "${ECHO_T}$ac_cv_lib_dld_dld_link" >&6; } +if test $ac_cv_lib_dld_dld_link = yes; then + lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld" +fi + + +fi + + +fi + + +fi + + +fi + + +fi + + ;; + esac + + if test "x$lt_cv_dlopen" != xno; then + enable_dlopen=yes + else + enable_dlopen=no + fi + + case $lt_cv_dlopen in + dlopen) + save_CPPFLAGS="$CPPFLAGS" + test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" + + save_LDFLAGS="$LDFLAGS" + wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" + + save_LIBS="$LIBS" + LIBS="$lt_cv_dlopen_libs $LIBS" + + { echo "$as_me:$LINENO: checking whether a program can dlopen itself" >&5 +echo $ECHO_N "checking whether a program can dlopen itself... 
$ECHO_C" >&6; } +if test "${lt_cv_dlopen_self+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test "$cross_compiling" = yes; then : + lt_cv_dlopen_self=cross +else + lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 + lt_status=$lt_dlunknown + cat > conftest.$ac_ext < +#endif + +#include + +#ifdef RTLD_GLOBAL +# define LT_DLGLOBAL RTLD_GLOBAL +#else +# ifdef DL_GLOBAL +# define LT_DLGLOBAL DL_GLOBAL +# else +# define LT_DLGLOBAL 0 +# endif +#endif + +/* We may have to define LT_DLLAZY_OR_NOW in the command line if we + find out it does not work in some platform. */ +#ifndef LT_DLLAZY_OR_NOW +# ifdef RTLD_LAZY +# define LT_DLLAZY_OR_NOW RTLD_LAZY +# else +# ifdef DL_LAZY +# define LT_DLLAZY_OR_NOW DL_LAZY +# else +# ifdef RTLD_NOW +# define LT_DLLAZY_OR_NOW RTLD_NOW +# else +# ifdef DL_NOW +# define LT_DLLAZY_OR_NOW DL_NOW +# else +# define LT_DLLAZY_OR_NOW 0 +# endif +# endif +# endif +# endif +#endif + +#ifdef __cplusplus +extern "C" void exit (int); +#endif + +void fnord() { int i=42;} +int main () +{ + void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); + int status = $lt_dlunknown; + + if (self) + { + if (dlsym (self,"fnord")) status = $lt_dlno_uscore; + else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; + /* dlclose (self); */ + } + else + puts (dlerror ()); + + exit (status); +} +EOF + if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 + (eval $ac_link) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then + (./conftest; exit; ) >&5 2>/dev/null + lt_status=$? + case x$lt_status in + x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; + x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; + x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;; + esac + else : + # compilation failed + lt_cv_dlopen_self=no + fi +fi +rm -fr conftest* + + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_dlopen_self" >&5 +echo "${ECHO_T}$lt_cv_dlopen_self" >&6; } + + if test "x$lt_cv_dlopen_self" = xyes; then + wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" + { echo "$as_me:$LINENO: checking whether a statically linked program can dlopen itself" >&5 +echo $ECHO_N "checking whether a statically linked program can dlopen itself... $ECHO_C" >&6; } +if test "${lt_cv_dlopen_self_static+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test "$cross_compiling" = yes; then : + lt_cv_dlopen_self_static=cross +else + lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 + lt_status=$lt_dlunknown + cat > conftest.$ac_ext < +#endif + +#include + +#ifdef RTLD_GLOBAL +# define LT_DLGLOBAL RTLD_GLOBAL +#else +# ifdef DL_GLOBAL +# define LT_DLGLOBAL DL_GLOBAL +# else +# define LT_DLGLOBAL 0 +# endif +#endif + +/* We may have to define LT_DLLAZY_OR_NOW in the command line if we + find out it does not work in some platform. 
*/ +#ifndef LT_DLLAZY_OR_NOW +# ifdef RTLD_LAZY +# define LT_DLLAZY_OR_NOW RTLD_LAZY +# else +# ifdef DL_LAZY +# define LT_DLLAZY_OR_NOW DL_LAZY +# else +# ifdef RTLD_NOW +# define LT_DLLAZY_OR_NOW RTLD_NOW +# else +# ifdef DL_NOW +# define LT_DLLAZY_OR_NOW DL_NOW +# else +# define LT_DLLAZY_OR_NOW 0 +# endif +# endif +# endif +# endif +#endif + +#ifdef __cplusplus +extern "C" void exit (int); +#endif + +void fnord() { int i=42;} +int main () +{ + void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); + int status = $lt_dlunknown; + + if (self) + { + if (dlsym (self,"fnord")) status = $lt_dlno_uscore; + else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; + /* dlclose (self); */ + } + else + puts (dlerror ()); + + exit (status); +} +EOF + if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 + (eval $ac_link) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then + (./conftest; exit; ) >&5 2>/dev/null + lt_status=$? + case x$lt_status in + x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; + x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; + x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;; + esac + else : + # compilation failed + lt_cv_dlopen_self_static=no + fi +fi +rm -fr conftest* + + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_dlopen_self_static" >&5 +echo "${ECHO_T}$lt_cv_dlopen_self_static" >&6; } + fi + + CPPFLAGS="$save_CPPFLAGS" + LDFLAGS="$save_LDFLAGS" + LIBS="$save_LIBS" + ;; + esac + + case $lt_cv_dlopen_self in + yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; + *) enable_dlopen_self=unknown ;; + esac + + case $lt_cv_dlopen_self_static in + yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; + *) enable_dlopen_self_static=unknown ;; + esac +fi + + +# Report which library types will actually be built +{ echo "$as_me:$LINENO: checking if libtool supports shared libraries" >&5 +echo $ECHO_N "checking if libtool supports shared libraries... $ECHO_C" >&6; } +{ echo "$as_me:$LINENO: result: $can_build_shared" >&5 +echo "${ECHO_T}$can_build_shared" >&6; } + +{ echo "$as_me:$LINENO: checking whether to build shared libraries" >&5 +echo $ECHO_N "checking whether to build shared libraries... $ECHO_C" >&6; } +test "$can_build_shared" = "no" && enable_shared=no + +# On AIX, shared libraries and static libraries use the same namespace, and +# are all built from PIC. +case $host_os in +aix3*) + test "$enable_shared" = yes && enable_static=no + if test -n "$RANLIB"; then + archive_cmds="$archive_cmds~\$RANLIB \$lib" + postinstall_cmds='$RANLIB $lib' + fi + ;; + +aix4* | aix5*) + if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then + test "$enable_shared" = yes && enable_static=no + fi + ;; +esac +{ echo "$as_me:$LINENO: result: $enable_shared" >&5 +echo "${ECHO_T}$enable_shared" >&6; } + +{ echo "$as_me:$LINENO: checking whether to build static libraries" >&5 +echo $ECHO_N "checking whether to build static libraries... $ECHO_C" >&6; } +# Make sure either enable_shared or enable_static is yes. +test "$enable_shared" = yes || enable_static=yes +{ echo "$as_me:$LINENO: result: $enable_static" >&5 +echo "${ECHO_T}$enable_static" >&6; } + +# The else clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. 
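# ---------------------------------------------------------------------
# A minimal stand-alone sketch of the self-dlopen probe run by the two
# cached checks above (lt_cv_dlopen_self / lt_cv_dlopen_self_static).
# Not taken from the configure being committed; assumes a GNU toolchain
# where -rdynamic plays the role of $export_dynamic_flag_spec and where
# dlopen lives in -ldl.  Exit codes mirror the probe's convention:
# 0 unknown, 1 symbol found as-is, 2 symbol found only with underscore.
cat > dlself.c <<'EOF'
#include <stdio.h>
#include <dlfcn.h>

void fnord (void) {}                /* the symbol the program looks up in itself */

int main (void)
{
  void *self = dlopen (0, RTLD_GLOBAL | RTLD_NOW);  /* handle to the main program */
  int status = 0;                                   /* 0 = unknown */
  if (self)
    {
      if (dlsym (self, "fnord"))       status = 1;  /* found, no underscore prefix */
      else if (dlsym (self, "_fnord")) status = 2;  /* found only with underscore   */
      dlclose (self);
    }
  else
    puts (dlerror ());
  return status;
}
EOF
cc -o dlself dlself.c -rdynamic -ldl   # export main-program symbols, link libdl
./dlself
echo "self-dlopen status: $?"          # 1 or 2 means dlopen_self would be 'yes'
rm -f dlself dlself.c
# ---------------------------------------------------------------------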
+if test -f "$ltmain"; then + # See if we are running on zsh, and set the options which allow our commands through + # without removal of \ escapes. + if test -n "${ZSH_VERSION+set}" ; then + setopt NO_GLOB_SUBST + fi + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC LTCFLAGS NM \ + SED SHELL STRIP \ + libname_spec library_names_spec soname_spec extract_expsyms_cmds \ + old_striplib striplib file_magic_cmd finish_cmds finish_eval \ + deplibs_check_method reload_flag reload_cmds need_locks \ + lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \ + lt_cv_sys_global_symbol_to_c_name_address \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + old_postinstall_cmds old_postuninstall_cmds \ + compiler \ + CC \ + LD \ + lt_prog_compiler_wl \ + lt_prog_compiler_pic \ + lt_prog_compiler_static \ + lt_prog_compiler_no_builtin_flag \ + export_dynamic_flag_spec \ + thread_safe_flag_spec \ + whole_archive_flag_spec \ + enable_shared_with_static_runtimes \ + old_archive_cmds \ + old_archive_from_new_cmds \ + predep_objects \ + postdep_objects \ + predeps \ + postdeps \ + compiler_lib_search_path \ + archive_cmds \ + archive_expsym_cmds \ + postinstall_cmds \ + postuninstall_cmds \ + old_archive_from_expsyms_cmds \ + allow_undefined_flag \ + no_undefined_flag \ + export_symbols_cmds \ + hardcode_libdir_flag_spec \ + hardcode_libdir_flag_spec_ld \ + hardcode_libdir_separator \ + hardcode_automatic \ + module_cmds \ + module_expsym_cmds \ + lt_cv_prog_compiler_c_o \ + fix_srcfile_path \ + exclude_expsyms \ + include_expsyms; do + + case $var in + old_archive_cmds | \ + old_archive_from_new_cmds | \ + archive_cmds | \ + archive_expsym_cmds | \ + module_cmds | \ + module_expsym_cmds | \ + old_archive_from_expsyms_cmds | \ + export_symbols_cmds | \ + extract_expsyms_cmds | reload_cmds | finish_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + case $lt_echo in + *'\$0 --fallback-echo"') + lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'` + ;; + esac + +cfgfile="${ofile}T" + trap "$rm \"$cfgfile\"; exit 1" 1 2 15 + $rm -f "$cfgfile" + { echo "$as_me:$LINENO: creating $ofile" >&5 +echo "$as_me: creating $ofile" >&6;} + + cat <<__EOF__ >> "$cfgfile" +#! $SHELL + +# `$echo "$cfgfile" | sed 's%^.*/%%'` - Provide generalized library-building support services. +# Generated automatically by $PROGRAM (GNU $PACKAGE $VERSION$TIMESTAMP) +# NOTE: Changes made to this file will be lost: look at ltmain.sh. +# +# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007 +# Free Software Foundation, Inc. 
+# +# This file is part of GNU Libtool: +# Originally by Gordon Matzigkeit , 1996 +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. +# +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that program. + +# A sed program that does not truncate output. +SED=$lt_SED + +# Sed that helps us avoid accidentally triggering echo(1) options like -n. +Xsed="$SED -e 1s/^X//" + +# The HP-UX ksh and POSIX shell print the target directory to stdout +# if CDPATH is set. +(unset CDPATH) >/dev/null 2>&1 && unset CDPATH + +# The names of the tagged configurations supported by this script. +available_tags= + +# ### BEGIN LIBTOOL CONFIG + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$archive_cmds_need_lc + +# Whether or not to disallow shared libs when runtime libs are static +allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host +host_os=$host_os + +# The build system. +build_alias=$build_alias +build=$build +build_os=$build_os + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# A C compiler. +LTCC=$lt_LTCC + +# LTCC compiler flags. +LTCFLAGS=$lt_LTCFLAGS + +# A language-specific compiler. +CC=$lt_compiler + +# Is the compiler the GNU C compiler? +with_gcc=$GCC + +# An ERE matcher. +EGREP=$lt_EGREP + +# The linker used to build libraries. +LD=$lt_LD + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$lt_STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_lt_prog_compiler_wl + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Shared library suffix (normally ".so"). 
+shrext_cmds='$shrext_cmds' + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_lt_prog_compiler_pic +pic_mode=$pic_mode + +# What is the maximum length of a command? +max_cmd_len=$lt_cv_sys_max_cmd_len + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_lt_cv_prog_compiler_c_o + +# Must we lock files when doing compilation? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. +link_static_flag=$lt_lt_prog_compiler_static + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds + +# Commands used to build and install a shared archive. +archive_cmds=$lt_archive_cmds +archive_expsym_cmds=$lt_archive_expsym_cmds +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands used to build a loadable module (assumed same as above if empty) +module_cmds=$lt_module_cmds +module_expsym_cmds=$lt_module_expsym_cmds + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Dependencies to place before the objects being linked to create a +# shared library. +predep_objects=$lt_predep_objects + +# Dependencies to place after the objects being linked to create a +# shared library. +postdep_objects=$lt_postdep_objects + +# Dependencies to place before the objects being linked to create a +# shared library. +predeps=$lt_predeps + +# Dependencies to place after the objects being linked to create a +# shared library. +postdeps=$lt_postdeps + +# The library search path used internally by the compiler when linking +# a shared library. +compiler_lib_search_path=$lt_compiler_lib_search_path + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. 
+file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. +global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec + +# If ld is used when linking, flag to hardcode \$libdir into +# a binary during linking. This must work even if \$libdir does +# not exist. +hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator + +# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$hardcode_direct + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$hardcode_shlibpath_var + +# Set to yes if building a shared library automatically hardcodes DIR into the library +# and all subsequent libraries and executables linked against it. +hardcode_automatic=$hardcode_automatic + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$link_all_deplibs + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path=$lt_fix_srcfile_path + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms + +# Symbols that must always be exported. 
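# ---------------------------------------------------------------------
# What "hardcoding \$libdir into a binary" means in practice for the
# hardcode_* settings above, shown on a typical ELF/GNU system.  A
# sketch with hypothetical paths, not part of the generated script;
# assumes gcc and readelf are available.
(
  demo_dir=/tmp/rpath-demo
  mkdir -p "$demo_dir/lib" && cd "$demo_dir" || exit 1
  echo 'int answer (void) { return 42; }' > foo.c
  gcc -shared -fPIC -o lib/libfoo.so foo.c
  echo 'int answer (void); int main (void) { return answer () == 42 ? 0 : 1; }' > main.c
  gcc -o demo main.c -Llib -lfoo -Wl,-rpath,"$demo_dir/lib"   # the hardcoded run path
  readelf -d demo | grep -Ei 'rpath|runpath'                  # the path is now inside the binary
  ./demo && echo "runs without LD_LIBRARY_PATH"
)
# ---------------------------------------------------------------------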
+include_expsyms=$lt_include_expsyms + +# ### END LIBTOOL CONFIG + +__EOF__ + + + case $host_os in + aix3*) + cat <<\EOF >> "$cfgfile" + +# AIX sometimes has problems with the GCC collect2 program. For some +# reason, if we set the COLLECT_NAMES environment variable, the problems +# vanish in a puff of smoke. +if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES +fi +EOF + ;; + esac + + # We use sed instead of cat because bash on DJGPP gets confused if + # if finds mixed CR/LF and LF-only lines. Since sed operates in + # text mode, it properly converts lines to CR/LF. This bash problem + # is reportedly fixed, but why not run on old versions too? + sed '$q' "$ltmain" >> "$cfgfile" || (rm -f "$cfgfile"; exit 1) + + mv -f "$cfgfile" "$ofile" || \ + (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") + chmod +x "$ofile" + +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. + ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'` + if test -f "$ltmain_in"; then + test -f Makefile && make "$ltmain" + fi +fi + + +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + +CC="$lt_save_CC" + + +# Check whether --with-tags was given. +if test "${with_tags+set}" = set; then + withval=$with_tags; tagnames="$withval" +fi + + +if test -f "$ltmain" && test -n "$tagnames"; then + if test ! -f "${ofile}"; then + { echo "$as_me:$LINENO: WARNING: output file \`$ofile' does not exist" >&5 +echo "$as_me: WARNING: output file \`$ofile' does not exist" >&2;} + fi + + if test -z "$LTCC"; then + eval "`$SHELL ${ofile} --config | grep '^LTCC='`" + if test -z "$LTCC"; then + { echo "$as_me:$LINENO: WARNING: output file \`$ofile' does not look like a libtool script" >&5 +echo "$as_me: WARNING: output file \`$ofile' does not look like a libtool script" >&2;} + else + { echo "$as_me:$LINENO: WARNING: using \`LTCC=$LTCC', extracted from \`$ofile'" >&5 +echo "$as_me: WARNING: using \`LTCC=$LTCC', extracted from \`$ofile'" >&2;} + fi + fi + if test -z "$LTCFLAGS"; then + eval "`$SHELL ${ofile} --config | grep '^LTCFLAGS='`" + fi + + # Extract list of available tagged configurations in $ofile. + # Note that this assumes the entire list is on one line. + available_tags=`grep "^available_tags=" "${ofile}" | $SED -e 's/available_tags=\(.*$\)/\1/' -e 's/\"//g'` + + lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," + for tagname in $tagnames; do + IFS="$lt_save_ifs" + # Check whether tagname contains only valid characters + case `$echo "X$tagname" | $Xsed -e 's:[-_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890,/]::g'` in + "") ;; + *) { { echo "$as_me:$LINENO: error: invalid tag name: $tagname" >&5 +echo "$as_me: error: invalid tag name: $tagname" >&2;} + { (exit 1); exit 1; }; } + ;; + esac + + if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $tagname$" < "${ofile}" > /dev/null + then + { { echo "$as_me:$LINENO: error: tag name \"$tagname\" already exists" >&5 +echo "$as_me: error: tag name \"$tagname\" already exists" >&2;} + { (exit 1); exit 1; }; } + fi + + # Update the list of available tags. 
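# ---------------------------------------------------------------------
# The generated ${ofile} (normally ./libtool) is a flat list of
# var=value settings, which is why the tag handling above can query it
# with grep and `--config'.  A stand-alone sketch of the same idea,
# assuming a libtool script has already been written to the build dir.
eval "`./libtool --config | grep '^build_libtool_libs='`"
echo "shared libraries enabled: $build_libtool_libs"
available_tags=`grep '^available_tags=' ./libtool | sed -e 's/^available_tags=\(.*\)$/\1/' -e 's/"//g'`
echo "tagged configurations: ${available_tags:-(none)}"
# ---------------------------------------------------------------------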
+ if test -n "$tagname"; then + echo appending configuration tag \"$tagname\" to $ofile + + case $tagname in + CXX) + if test -n "$CXX" && ( test "X$CXX" != "Xno" && + ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || + (test "X$CXX" != "Xg++"))) ; then + ac_ext=cpp +ac_cpp='$CXXCPP $CPPFLAGS' +ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_cxx_compiler_gnu + + + + +archive_cmds_need_lc_CXX=no +allow_undefined_flag_CXX= +always_export_symbols_CXX=no +archive_expsym_cmds_CXX= +export_dynamic_flag_spec_CXX= +hardcode_direct_CXX=no +hardcode_libdir_flag_spec_CXX= +hardcode_libdir_flag_spec_ld_CXX= +hardcode_libdir_separator_CXX= +hardcode_minus_L_CXX=no +hardcode_shlibpath_var_CXX=unsupported +hardcode_automatic_CXX=no +module_cmds_CXX= +module_expsym_cmds_CXX= +link_all_deplibs_CXX=unknown +old_archive_cmds_CXX=$old_archive_cmds +no_undefined_flag_CXX= +whole_archive_flag_spec_CXX= +enable_shared_with_static_runtimes_CXX=no + +# Dependencies to place before and after the object being linked: +predep_objects_CXX= +postdep_objects_CXX= +predeps_CXX= +postdeps_CXX= +compiler_lib_search_path_CXX= + +# Source file extension for C++ test sources. +ac_ext=cpp + +# Object file extension for compiled C++ test sources. +objext=o +objext_CXX=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="int some_variable = 0;" + +# Code to be used in simple link tests +lt_simple_link_test_code='int main(int, char *[]) { return(0); }' + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC + + +# save warnings/boilerplate of simple test code +ac_outfile=conftest.$ac_objext +echo "$lt_simple_compile_test_code" >conftest.$ac_ext +eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_compiler_boilerplate=`cat conftest.err` +$rm conftest* + +ac_outfile=conftest.$ac_objext +echo "$lt_simple_link_test_code" >conftest.$ac_ext +eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_linker_boilerplate=`cat conftest.err` +$rm conftest* + + +# Allow CC to be a program name with arguments. +lt_save_CC=$CC +lt_save_LD=$LD +lt_save_GCC=$GCC +GCC=$GXX +lt_save_with_gnu_ld=$with_gnu_ld +lt_save_path_LD=$lt_cv_path_LD +if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then + lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx +else + $as_unset lt_cv_prog_gnu_ld +fi +if test -n "${lt_cv_path_LDCXX+set}"; then + lt_cv_path_LD=$lt_cv_path_LDCXX +else + $as_unset lt_cv_path_LD +fi +test -z "${LDCXX+set}" || LD=$LDCXX +CC=${CXX-"c++"} +compiler=$CC +compiler_CXX=$CC +for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + +# We don't want -fno-exception wen compiling C++ code, so set the +# no_builtin_flag separately +if test "$GXX" = yes; then + lt_prog_compiler_no_builtin_flag_CXX=' -fno-builtin' +else + lt_prog_compiler_no_builtin_flag_CXX= +fi + +if test "$GXX" = yes; then + # Set up default GNU C++ configuration + + +# Check whether --with-gnu-ld was given. 
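# ---------------------------------------------------------------------
# Options such as --with-gnu-ld surface inside configure as plain
# with_* shell variables, which is what the "${with_gnu_ld+set}" test
# just below is inspecting.  A simplified, stand-alone sketch of that
# pattern (hypothetical argument parsing, not autoconf's own):
for arg in "$@"; do
  case $arg in
    --with-gnu-ld | --with-gnu-ld=yes)   with_gnu_ld=yes ;;
    --without-gnu-ld | --with-gnu-ld=no) with_gnu_ld=no ;;
  esac
done
if test "${with_gnu_ld+set}" = set; then
  echo "caller chose with_gnu_ld=$with_gnu_ld"
else
  with_gnu_ld=no
  echo "no --with-gnu-ld option given; defaulting to with_gnu_ld=no"
fi
# ---------------------------------------------------------------------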
+if test "${with_gnu_ld+set}" = set; then + withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes +else + with_gnu_ld=no +fi + +ac_prog=ld +if test "$GCC" = yes; then + # Check if gcc -print-prog-name=ld gives a path. + { echo "$as_me:$LINENO: checking for ld used by $CC" >&5 +echo $ECHO_N "checking for ld used by $CC... $ECHO_C" >&6; } + case $host in + *-*-mingw*) + # gcc leaves a trailing carriage return which upsets mingw + ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; + *) + ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; + esac + case $ac_prog in + # Accept absolute paths. + [\\/]* | ?:[\\/]*) + re_direlt='/[^/][^/]*/\.\./' + # Canonicalize the pathname of ld + ac_prog=`echo $ac_prog| $SED 's%\\\\%/%g'` + while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do + ac_prog=`echo $ac_prog| $SED "s%$re_direlt%/%"` + done + test -z "$LD" && LD="$ac_prog" + ;; + "") + # If it fails, then pretend we aren't using GCC. + ac_prog=ld + ;; + *) + # If it is relative, then search for the first ld in PATH. + with_gnu_ld=unknown + ;; + esac +elif test "$with_gnu_ld" = yes; then + { echo "$as_me:$LINENO: checking for GNU ld" >&5 +echo $ECHO_N "checking for GNU ld... $ECHO_C" >&6; } +else + { echo "$as_me:$LINENO: checking for non-GNU ld" >&5 +echo $ECHO_N "checking for non-GNU ld... $ECHO_C" >&6; } +fi +if test "${lt_cv_path_LD+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + if test -z "$LD"; then + lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH; do + IFS="$lt_save_ifs" + test -z "$ac_dir" && ac_dir=. + if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then + lt_cv_path_LD="$ac_dir/$ac_prog" + # Check to see if the program is GNU ld. I'd rather use --version, + # but apparently some variants of GNU ld only accept -v. + # Break only if it was the GNU/non-GNU ld that we prefer. + case `"$lt_cv_path_LD" -v 2>&1 &5 +echo "${ECHO_T}$LD" >&6; } +else + { echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6; } +fi +test -z "$LD" && { { echo "$as_me:$LINENO: error: no acceptable ld found in \$PATH" >&5 +echo "$as_me: error: no acceptable ld found in \$PATH" >&2;} + { (exit 1); exit 1; }; } +{ echo "$as_me:$LINENO: checking if the linker ($LD) is GNU ld" >&5 +echo $ECHO_N "checking if the linker ($LD) is GNU ld... $ECHO_C" >&6; } +if test "${lt_cv_prog_gnu_ld+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + # I'd rather use --version here, but apparently some GNU lds only accept -v. +case `$LD -v 2>&1 &5 +echo "${ECHO_T}$lt_cv_prog_gnu_ld" >&6; } +with_gnu_ld=$lt_cv_prog_gnu_ld + + + + # Check if GNU C++ uses GNU ld as the underlying linker, since the + # archiving commands below assume that GNU ld is being used. + if test "$with_gnu_ld" = yes; then + archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + + hardcode_libdir_flag_spec_CXX='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec_CXX='${wl}--export-dynamic' + + # If archive_cmds runs LD, not CC, wlarc should be empty + # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to + # investigate it a little bit more. (MM) + wlarc='${wl}' + + # ancient GNU ld didn't support --whole-archive et. al. 
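# ---------------------------------------------------------------------
# The gist of the linker probes above: ask the compiler driver which ld
# it will run, then decide from the version banner whether it is GNU ld.
# A stand-alone sketch, not the committed configure code; assumes a
# gcc-compatible driver (other compilers may not accept -print-prog-name).
cc_driver=${CC-cc}
ld_prog=`$cc_driver -print-prog-name=ld 2>/dev/null`
case `"$ld_prog" -v 2>&1 </dev/null` in
  *GNU* | *'with BFD'*) echo "GNU ld: $ld_prog" ;;
  *)                    echo "non-GNU ld: $ld_prog" ;;
esac
# ---------------------------------------------------------------------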
+ if eval "`$CC -print-prog-name=ld` --help 2>&1" | \ + grep 'no-whole-archive' > /dev/null; then + whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + whole_archive_flag_spec_CXX= + fi + else + with_gnu_ld=no + wlarc= + + # A generic and very simple default shared library creation + # command for GNU C++ for the case where it uses the native + # linker, instead of GNU ld. If possible, this setting should + # overridden to take advantage of the native linker features on + # the platform it is being used on. + archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' + fi + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"' + +else + GXX=no + with_gnu_ld=no + wlarc= +fi + +# PORTME: fill in a description of your system's C++ link characteristics +{ echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5 +echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6; } +ld_shlibs_CXX=yes +case $host_os in + aix3*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[23]|aix4.[23].*|aix5*) + for ld_flag in $LDFLAGS; do + case $ld_flag in + *-brtl*) + aix_use_runtimelinking=yes + break + ;; + esac + done + ;; + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. + + archive_cmds_CXX='' + hardcode_direct_CXX=yes + hardcode_libdir_separator_CXX=':' + link_all_deplibs_CXX=yes + + if test "$GXX" = yes; then + case $host_os in aix4.[012]|aix4.[012].*) + # We only want to do this on AIX 4.2 and lower, the check + # below for broken collect2 doesn't work under 4.3+ + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + : + else + # We have old collect2 + hardcode_direct_CXX=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + hardcode_minus_L_CXX=yes + hardcode_libdir_flag_spec_CXX='-L$libdir' + hardcode_libdir_separator_CXX= + fi + ;; + esac + shared_flag='-shared' + if test "$aix_use_runtimelinking" = yes; then + shared_flag="$shared_flag "'${wl}-G' + fi + else + # not using gcc + if test "$host_cpu" = ia64; then + # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release + # chokes on -Wl,-G. 
The following line is correct: + shared_flag='-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall does not export symbols beginning with + # underscore (_), so it is better to generate a list of symbols to export. + always_export_symbols_CXX=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + allow_undefined_flag_CXX='-berok' + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_cxx_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath" + + archive_expsym_cmds_CXX="\$CC"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + hardcode_libdir_flag_spec_CXX='${wl}-R $libdir:/usr/lib:/lib' + allow_undefined_flag_CXX="-z nodefs" + archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + else + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_cxx_werror_flag" || + test ! 
-s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath" + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + no_undefined_flag_CXX=' ${wl}-bernotok' + allow_undefined_flag_CXX=' ${wl}-berok' + # Exported symbols can be pulled into shared objects from archives + whole_archive_flag_spec_CXX='$convenience' + archive_cmds_need_lc_CXX=yes + # This is similar to how AIX traditionally builds its shared libraries. + archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + fi + fi + ;; + + beos*) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + allow_undefined_flag_CXX=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. FIXME + archive_cmds_CXX='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + ld_shlibs_CXX=no + fi + ;; + + chorus*) + case $cc_basename in + *) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + esac + ;; + + cygwin* | mingw* | pw32*) + # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless, + # as there is no search path for DLLs. + hardcode_libdir_flag_spec_CXX='-L$libdir' + allow_undefined_flag_CXX=unsupported + always_export_symbols_CXX=no + enable_shared_with_static_runtimes_CXX=yes + + if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then + archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is; otherwise, prepend... 
+ archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + else + ld_shlibs_CXX=no + fi + ;; + darwin* | rhapsody*) + case $host_os in + rhapsody* | darwin1.[012]) + allow_undefined_flag_CXX='${wl}-undefined ${wl}suppress' + ;; + *) # Darwin 1.3 on + if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then + allow_undefined_flag_CXX='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + else + case ${MACOSX_DEPLOYMENT_TARGET} in + 10.[012]) + allow_undefined_flag_CXX='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + ;; + 10.*) + allow_undefined_flag_CXX='${wl}-undefined ${wl}dynamic_lookup' + ;; + esac + fi + ;; + esac + archive_cmds_need_lc_CXX=no + hardcode_direct_CXX=no + hardcode_automatic_CXX=yes + hardcode_shlibpath_var_CXX=unsupported + whole_archive_flag_spec_CXX='' + link_all_deplibs_CXX=yes + + if test "$GXX" = yes ; then + lt_int_apple_cc_single_mod=no + output_verbose_link_cmd='echo' + if $CC -dumpspecs 2>&1 | $EGREP 'single_module' >/dev/null ; then + lt_int_apple_cc_single_mod=yes + fi + if test "X$lt_int_apple_cc_single_mod" = Xyes ; then + archive_cmds_CXX='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + else + archive_cmds_CXX='$CC -r -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + fi + module_cmds_CXX='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + if test "X$lt_int_apple_cc_single_mod" = Xyes ; then + archive_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + archive_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + fi + module_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + case $cc_basename in + xlc*) + output_verbose_link_cmd='echo' + archive_cmds_CXX='$CC -qmkshrobj ${wl}-single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}`echo $rpath/$soname` $xlcverstring' + module_cmds_CXX='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + 
archive_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -qmkshrobj ${wl}-single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}$rpath/$soname $xlcverstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + ;; + *) + ld_shlibs_CXX=no + ;; + esac + fi + ;; + + dgux*) + case $cc_basename in + ec++*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + ghcx*) + # Green Hills C++ Compiler + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + *) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + esac + ;; + freebsd[12]*) + # C++ shared libraries reported to be fairly broken before switch to ELF + ld_shlibs_CXX=no + ;; + freebsd-elf*) + archive_cmds_need_lc_CXX=no + ;; + freebsd* | dragonfly*) + # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF + # conventions + ld_shlibs_CXX=yes + ;; + gnu*) + ;; + hpux9*) + hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_CXX=: + export_dynamic_flag_spec_CXX='${wl}-E' + hardcode_direct_CXX=yes + hardcode_minus_L_CXX=yes # Not in the search PATH, + # but as the default + # location of the library. + + case $cc_basename in + CC*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + aCC*) + archive_cmds_CXX='$rm $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | grep "[-]L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes; then + archive_cmds_CXX='$rm $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + else + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + fi + ;; + esac + ;; + hpux10*|hpux11*) + if test $with_gnu_ld = no; then + hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_CXX=: + + case $host_cpu in + hppa*64*|ia64*) ;; + *) + export_dynamic_flag_spec_CXX='${wl}-E' + ;; + esac + fi + case $host_cpu in + hppa*64*|ia64*) + hardcode_direct_CXX=no + hardcode_shlibpath_var_CXX=no + ;; + *) + hardcode_direct_CXX=yes + hardcode_minus_L_CXX=yes # Not in the search PATH, + # but as the default + # location of the library. 
+ ;; + esac + + case $cc_basename in + CC*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + aCC*) + case $host_cpu in + hppa*64*) + archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + ia64*) + archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + *) + archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + esac + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | grep "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes; then + if test $with_gnu_ld = no; then + case $host_cpu in + hppa*64*) + archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + ia64*) + archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + *) + archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + ;; + esac + fi + else + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + fi + ;; + esac + ;; + interix[3-9]*) + hardcode_direct_CXX=no + hardcode_shlibpath_var_CXX=no + hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' + export_dynamic_flag_spec_CXX='${wl}-E' + # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. + # Instead, shared libraries are loaded at an image base (0x10000000 by + # default) and relocated if they conflict, which is a slow very memory + # consuming and fragmenting process. To avoid this, we pick a random, + # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link + # time. Moving up from 0x10000000 also allows more sbrk(2) space. + archive_cmds_CXX='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + archive_expsym_cmds_CXX='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + ;; + irix5* | irix6*) + case $cc_basename in + CC*) + # SGI C++ + archive_cmds_CXX='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + + # Archives containing C++ object files must be created using + # "CC -ar", where "CC" is the IRIX C++ compiler. This is + # necessary to make sure instantiated templates are included + # in the archive. 
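# ---------------------------------------------------------------------
# The *_cmds settings in this script (for instance old_archive_cmds_CXX
# just below) are command templates: ltmain.sh splits them on `~' and
# evals each piece with $CC, $oldlib, $oldobjs and friends bound.  A
# stand-alone sketch of that expansion with made-up values:
old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs~$RANLIB $oldlib'
AR=ar AR_FLAGS=cru RANLIB=ranlib
oldlib=libdemo.a oldobjs=' a.o b.o'
save_ifs=$IFS; IFS='~'
for cmd in $old_archive_cmds; do
  IFS=$save_ifs
  eval "echo \"would run: $cmd\""     # prints the fully substituted command
done
IFS=$save_ifs
# ---------------------------------------------------------------------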
+ old_archive_cmds_CXX='$CC -ar -WR,-u -o $oldlib $oldobjs' + ;; + *) + if test "$GXX" = yes; then + if test "$with_gnu_ld" = no; then + archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` -o $lib' + fi + fi + link_all_deplibs_CXX=yes + ;; + esac + hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_CXX=: + ;; + linux* | k*bsd*-gnu) + case $cc_basename in + KCC*) + # Kuck and Associates, Inc. (KAI) C++ Compiler + + # KCC will only create a shared library if the output file + # ends with ".so" (or ".sl" for HP-UX), so rename the library + # to its proper name (with version) after linking. + archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + archive_expsym_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | grep "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + + hardcode_libdir_flag_spec_CXX='${wl}--rpath,$libdir' + export_dynamic_flag_spec_CXX='${wl}--export-dynamic' + + # Archives containing C++ object files must be created using + # "CC -Bstatic", where "CC" is the KAI C++ compiler. + old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' + ;; + icpc*) + # Intel C++ + with_gnu_ld=yes + # version 8.0 and above of icpc choke on multiply defined symbols + # if we add $predep_objects and $postdep_objects, however 7.1 and + # earlier do not add the objects themselves. 
+ case `$CC -V 2>&1` in + *"Version 7."*) + archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + ;; + *) # Version 8.0 or newer + tmp_idyn= + case $host_cpu in + ia64*) tmp_idyn=' -i_dynamic';; + esac + archive_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + ;; + esac + archive_cmds_need_lc_CXX=no + hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' + export_dynamic_flag_spec_CXX='${wl}--export-dynamic' + whole_archive_flag_spec_CXX='${wl}--whole-archive$convenience ${wl}--no-whole-archive' + ;; + pgCC*) + # Portland Group C++ compiler + archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' + archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' + + hardcode_libdir_flag_spec_CXX='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec_CXX='${wl}--export-dynamic' + whole_archive_flag_spec_CXX='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + ;; + cxx*) + # Compaq C++ + archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols' + + runpath_var=LD_RUN_PATH + hardcode_libdir_flag_spec_CXX='-rpath $libdir' + hardcode_libdir_separator_CXX=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. 
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C++ 5.9 + no_undefined_flag_CXX=' -zdefs' + archive_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + archive_expsym_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols' + hardcode_libdir_flag_spec_CXX='-R$libdir' + whole_archive_flag_spec_CXX='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + + # Not sure whether something based on + # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 + # would be better. + output_verbose_link_cmd='echo' + + # Archives containing C++ object files must be created using + # "CC -xar", where "CC" is the Sun C++ compiler. This is + # necessary to make sure instantiated templates are included + # in the archive. + old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' + ;; + esac + ;; + esac + ;; + lynxos*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + m88k*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + mvs*) + case $cc_basename in + cxx*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + *) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + esac + ;; + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds_CXX='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' + wlarc= + hardcode_libdir_flag_spec_CXX='-R$libdir' + hardcode_direct_CXX=yes + hardcode_shlibpath_var_CXX=no + fi + # Workaround some broken pre-1.5 toolchains + output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' + ;; + openbsd2*) + # C++ shared libraries are fairly broken + ld_shlibs_CXX=no + ;; + openbsd*) + if test -f /usr/libexec/ld.so; then + hardcode_direct_CXX=yes + hardcode_shlibpath_var_CXX=no + archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' + hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib' + export_dynamic_flag_spec_CXX='${wl}-E' + whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + fi + output_verbose_link_cmd='echo' + else + ld_shlibs_CXX=no + fi + ;; + osf3*) + case $cc_basename in + KCC*) + # Kuck and Associates, Inc. (KAI) C++ Compiler + + # KCC will only create a shared library if the output file + # ends with ".so" (or ".sl" for HP-UX), so rename the library + # to its proper name (with version) after linking. 
+ archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + + hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' + hardcode_libdir_separator_CXX=: + + # Archives containing C++ object files must be created using + # "CC -Bstatic", where "CC" is the KAI C++ compiler. + old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' + + ;; + RCC*) + # Rational C++ 2.4.1 + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + cxx*) + allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && echo ${wl}-set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + + hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_CXX=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes && test "$with_gnu_ld" = no; then + allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + + hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_CXX=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"' + + else + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + fi + ;; + esac + ;; + osf4* | osf5*) + case $cc_basename in + KCC*) + # Kuck and Associates, Inc. (KAI) C++ Compiler + + # KCC will only create a shared library if the output file + # ends with ".so" (or ".sl" for HP-UX), so rename the library + # to its proper name (with version) after linking. + archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + + hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' + hardcode_libdir_separator_CXX=: + + # Archives containing C++ object files must be created using + # the KAI C++ compiler. 
+ old_archive_cmds_CXX='$CC -o $oldlib $oldobjs' + ;; + RCC*) + # Rational C++ 2.4.1 + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + cxx*) + allow_undefined_flag_CXX=' -expect_unresolved \*' + archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + archive_expsym_cmds_CXX='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ + echo "-hidden">> $lib.exp~ + $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname -Wl,-input -Wl,$lib.exp `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib~ + $rm $lib.exp' + + hardcode_libdir_flag_spec_CXX='-rpath $libdir' + hardcode_libdir_separator_CXX=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + # + # There doesn't appear to be a way to prevent this compiler from + # explicitly linking system object files so we need to strip them + # from the output so that they don't get included in the library + # dependencies. + output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list' + ;; + *) + if test "$GXX" = yes && test "$with_gnu_ld" = no; then + allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + + hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_CXX=: + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. 
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"' + + else + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + fi + ;; + esac + ;; + psos*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + sunos4*) + case $cc_basename in + CC*) + # Sun C++ 4.x + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + lcc*) + # Lucid + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + *) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + esac + ;; + solaris*) + case $cc_basename in + CC*) + # Sun C++ 4.2, 5.x and Centerline C++ + archive_cmds_need_lc_CXX=yes + no_undefined_flag_CXX=' -zdefs' + archive_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + archive_expsym_cmds_CXX='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp' + + hardcode_libdir_flag_spec_CXX='-R$libdir' + hardcode_shlibpath_var_CXX=no + case $host_os in + solaris2.[0-5] | solaris2.[0-5].*) ;; + *) + # The compiler driver will combine and reorder linker options, + # but understands `-z linker_flag'. + # Supported since Solaris 2.6 (maybe 2.5.1?) + whole_archive_flag_spec_CXX='-z allextract$convenience -z defaultextract' + ;; + esac + link_all_deplibs_CXX=yes + + output_verbose_link_cmd='echo' + + # Archives containing C++ object files must be created using + # "CC -xar", where "CC" is the Sun C++ compiler. This is + # necessary to make sure instantiated templates are included + # in the archive. + old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' + ;; + gcx*) + # Green Hills C++ Compiler + archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' + + # The C++ compiler must be used to create the archive. + old_archive_cmds_CXX='$CC $LDFLAGS -archive -o $oldlib $oldobjs' + ;; + *) + # GNU C++ compiler with Solaris linker + if test "$GXX" = yes && test "$with_gnu_ld" = no; then + no_undefined_flag_CXX=' ${wl}-z ${wl}defs' + if $CC --version | grep -v '^2\.7' > /dev/null; then + archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' + archive_expsym_cmds_CXX='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp' + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd="$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\"" + else + # g++ 2.7 appears to require `-G' NOT `-shared' on this + # platform. 
+ archive_cmds_CXX='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' + archive_expsym_cmds_CXX='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp' + + # Commands to make compiler produce verbose output that lists + # what "hidden" libraries, object files and flags are used when + # linking a shared library. + output_verbose_link_cmd="$CC -G $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\"" + fi + + hardcode_libdir_flag_spec_CXX='${wl}-R $wl$libdir' + case $host_os in + solaris2.[0-5] | solaris2.[0-5].*) ;; + *) + whole_archive_flag_spec_CXX='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + ;; + esac + fi + ;; + esac + ;; + sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) + no_undefined_flag_CXX='${wl}-z,text' + archive_cmds_need_lc_CXX=no + hardcode_shlibpath_var_CXX=no + runpath_var='LD_RUN_PATH' + + case $cc_basename in + CC*) + archive_cmds_CXX='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_CXX='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds_CXX='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_CXX='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + ;; + sysv5* | sco3.2v5* | sco5v6*) + # Note: We can NOT use -z defs as we might desire, because we do not + # link with -lc, and that would cause any symbols used from libc to + # always be unresolved, which means just about no library would + # ever link correctly. If we're not using GNU ld we use -z text + # though, which does catch some bad symbols but isn't as heavy-handed + # as -z defs. + # For security reasons, it is highly recommended that you always + # use absolute paths for naming shared libraries, and exclude the + # DT_RUNPATH tag from executables and libraries. But doing so + # requires that you compile everything twice, which is a pain. + # So that behaviour is only enabled if SCOABSPATH is set to a + # non-empty value in the environment. Most likely only useful for + # creating official distributions of packages. + # This is a hack until libtool officially supports absolute path + # names for shared libraries. 
+ no_undefined_flag_CXX='${wl}-z,text' + allow_undefined_flag_CXX='${wl}-z,nodefs' + archive_cmds_need_lc_CXX=no + hardcode_shlibpath_var_CXX=no + hardcode_libdir_flag_spec_CXX='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' + hardcode_libdir_separator_CXX=':' + link_all_deplibs_CXX=yes + export_dynamic_flag_spec_CXX='${wl}-Bexport' + runpath_var='LD_RUN_PATH' + + case $cc_basename in + CC*) + archive_cmds_CXX='$CC -G ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_CXX='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds_CXX='$CC -shared ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_CXX='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + ;; + tandem*) + case $cc_basename in + NCC*) + # NonStop-UX NCC 3.20 + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + *) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + esac + ;; + vxworks*) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; + *) + # FIXME: insert proper C++ library support + ld_shlibs_CXX=no + ;; +esac +{ echo "$as_me:$LINENO: result: $ld_shlibs_CXX" >&5 +echo "${ECHO_T}$ld_shlibs_CXX" >&6; } +test "$ld_shlibs_CXX" = no && can_build_shared=no + +GCC_CXX="$GXX" +LD_CXX="$LD" + + +cat > conftest.$ac_ext <<EOF +class Foo +{ +public: + Foo (void) { a = 0; } +private: + int a; +}; +EOF + + +if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); }; then + # Parse the compiler output and extract the necessary + # objects, libraries and library flags. + + # Sentinel used to keep track of whether or not we are before + # the conftest object file. + pre_test_object_deps_done=no + + # The `*' in the case matches for architectures that use `case' in + # $output_verbose_cmd can trigger glob expansion during the loop + # eval without this substitution. + output_verbose_link_cmd=`$echo "X$output_verbose_link_cmd" | $Xsed -e "$no_glob_subst"` + + for p in `eval $output_verbose_link_cmd`; do + case $p in + + -L* | -R* | -l*) + # Some compilers place space between "-{L,R}" and the path. + # Remove the space. + if test $p = "-L" \ + || test $p = "-R"; then + prev=$p + continue + else + prev= + fi + + if test "$pre_test_object_deps_done" = no; then + case $p in + -L* | -R*) + # Internal compiler library paths should come after those + # provided the user. The postdeps already come after the + # user supplied libs so there is no need to process them. + if test -z "$compiler_lib_search_path_CXX"; then + compiler_lib_search_path_CXX="${prev}${p}" + else + compiler_lib_search_path_CXX="${compiler_lib_search_path_CXX} ${prev}${p}" + fi + ;; + # The "-l" case would never come before the object being + # linked, so don't bother handling this case. + esac + else + if test -z "$postdeps_CXX"; then + postdeps_CXX="${prev}${p}" + else + postdeps_CXX="${postdeps_CXX} ${prev}${p}" + fi + fi + ;; + + *.$objext) + # This assumes that the test object file only shows up + # once in the compiler output. 
+ if test "$p" = "conftest.$objext"; then + pre_test_object_deps_done=yes + continue + fi + + if test "$pre_test_object_deps_done" = no; then + if test -z "$predep_objects_CXX"; then + predep_objects_CXX="$p" + else + predep_objects_CXX="$predep_objects_CXX $p" + fi + else + if test -z "$postdep_objects_CXX"; then + postdep_objects_CXX="$p" + else + postdep_objects_CXX="$postdep_objects_CXX $p" + fi + fi + ;; + + *) ;; # Ignore the rest. + + esac + done + + # Clean up. + rm -f a.out a.exe +else + echo "libtool.m4: error: problem compiling CXX test program" +fi + +$rm -f confest.$objext + +# PORTME: override above test on systems where it is broken +case $host_os in +interix[3-9]*) + # Interix 3.5 installs completely hosed .la files for C++, so rather than + # hack all around it, let's just trust "g++" to DTRT. + predep_objects_CXX= + postdep_objects_CXX= + postdeps_CXX= + ;; + +linux*) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C++ 5.9 + # + # The more standards-conforming stlport4 library is + # incompatible with the Cstd library. Avoid specifying + # it if it's in CXXFLAGS. Ignore libCrun as + # -library=stlport4 depends on it. + case " $CXX $CXXFLAGS " in + *" -library=stlport4 "*) + solaris_use_stlport4=yes + ;; + esac + if test "$solaris_use_stlport4" != yes; then + postdeps_CXX='-library=Cstd -library=Crun' + fi + ;; + esac + ;; + +solaris*) + case $cc_basename in + CC*) + # The more standards-conforming stlport4 library is + # incompatible with the Cstd library. Avoid specifying + # it if it's in CXXFLAGS. Ignore libCrun as + # -library=stlport4 depends on it. + case " $CXX $CXXFLAGS " in + *" -library=stlport4 "*) + solaris_use_stlport4=yes + ;; + esac + + # Adding this requires a known-good setup of shared libraries for + # Sun compiler versions before 5.6, else PIC objects from an old + # archive will be linked into the output, leading to subtle bugs. + if test "$solaris_use_stlport4" != yes; then + postdeps_CXX='-library=Cstd -library=Crun' + fi + ;; + esac + ;; +esac + + +case " $postdeps_CXX " in +*" -lc "*) archive_cmds_need_lc_CXX=no ;; +esac + +lt_prog_compiler_wl_CXX= +lt_prog_compiler_pic_CXX= +lt_prog_compiler_static_CXX= + +{ echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5 +echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6; } + + # C++ specific cases for pic, static, wl, etc. + if test "$GXX" = yes; then + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_static_CXX='-static' + + case $host_os in + aix*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static_CXX='-Bstatic' + fi + ;; + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + lt_prog_compiler_pic_CXX='-m68020 -resident32 -malways-restore-a4' + ;; + beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + mingw* | cygwin* | os2* | pw32*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). 
+ # Although the cygwin gcc ignores -fPIC, still need this for old-style + # (--disable-auto-import) libraries + lt_prog_compiler_pic_CXX='-DDLL_EXPORT' + ;; + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + lt_prog_compiler_pic_CXX='-fno-common' + ;; + *djgpp*) + # DJGPP does not support shared libraries at all + lt_prog_compiler_pic_CXX= + ;; + interix[3-9]*) + # Interix 3.x gcc -fpic/-fPIC options generate broken code. + # Instead, we relocate shared libraries at runtime. + ;; + sysv4*MP*) + if test -d /usr/nec; then + lt_prog_compiler_pic_CXX=-Kconform_pic + fi + ;; + hpux*) + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + ;; + *) + lt_prog_compiler_pic_CXX='-fPIC' + ;; + esac + ;; + *) + lt_prog_compiler_pic_CXX='-fPIC' + ;; + esac + else + case $host_os in + aix4* | aix5*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static_CXX='-Bstatic' + else + lt_prog_compiler_static_CXX='-bnso -bI:/lib/syscalls.exp' + fi + ;; + chorus*) + case $cc_basename in + cxch68*) + # Green Hills C++ Compiler + # _LT_AC_TAGVAR(lt_prog_compiler_static, CXX)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" + ;; + esac + ;; + darwin*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + case $cc_basename in + xlc*) + lt_prog_compiler_pic_CXX='-qnocommon' + lt_prog_compiler_wl_CXX='-Wl,' + ;; + esac + ;; + dgux*) + case $cc_basename in + ec++*) + lt_prog_compiler_pic_CXX='-KPIC' + ;; + ghcx*) + # Green Hills C++ Compiler + lt_prog_compiler_pic_CXX='-pic' + ;; + *) + ;; + esac + ;; + freebsd* | dragonfly*) + # FreeBSD uses GNU C++ + ;; + hpux9* | hpux10* | hpux11*) + case $cc_basename in + CC*) + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_static_CXX='${wl}-a ${wl}archive' + if test "$host_cpu" != ia64; then + lt_prog_compiler_pic_CXX='+Z' + fi + ;; + aCC*) + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_static_CXX='${wl}-a ${wl}archive' + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic_CXX='+Z' + ;; + esac + ;; + *) + ;; + esac + ;; + interix*) + # This is c89, which is MS Visual C++ (no shared libs) + # Anyone wants to do a port? + ;; + irix5* | irix6* | nonstopux*) + case $cc_basename in + CC*) + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_static_CXX='-non_shared' + # CC pic flag -KPIC is the default. + ;; + *) + ;; + esac + ;; + linux* | k*bsd*-gnu) + case $cc_basename in + KCC*) + # KAI C++ Compiler + lt_prog_compiler_wl_CXX='--backend -Wl,' + lt_prog_compiler_pic_CXX='-fPIC' + ;; + icpc* | ecpc*) + # Intel C++ + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_pic_CXX='-KPIC' + lt_prog_compiler_static_CXX='-static' + ;; + pgCC*) + # Portland Group C++ compiler. + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_pic_CXX='-fpic' + lt_prog_compiler_static_CXX='-Bstatic' + ;; + cxx*) + # Compaq C++ + # Make sure the PIC flag is empty. It appears that all Alpha + # Linux and Compaq Tru64 Unix objects are PIC. 
+ lt_prog_compiler_pic_CXX= + lt_prog_compiler_static_CXX='-non_shared' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C++ 5.9 + lt_prog_compiler_pic_CXX='-KPIC' + lt_prog_compiler_static_CXX='-Bstatic' + lt_prog_compiler_wl_CXX='-Qoption ld ' + ;; + esac + ;; + esac + ;; + lynxos*) + ;; + m88k*) + ;; + mvs*) + case $cc_basename in + cxx*) + lt_prog_compiler_pic_CXX='-W c,exportall' + ;; + *) + ;; + esac + ;; + netbsd*) + ;; + osf3* | osf4* | osf5*) + case $cc_basename in + KCC*) + lt_prog_compiler_wl_CXX='--backend -Wl,' + ;; + RCC*) + # Rational C++ 2.4.1 + lt_prog_compiler_pic_CXX='-pic' + ;; + cxx*) + # Digital/Compaq C++ + lt_prog_compiler_wl_CXX='-Wl,' + # Make sure the PIC flag is empty. It appears that all Alpha + # Linux and Compaq Tru64 Unix objects are PIC. + lt_prog_compiler_pic_CXX= + lt_prog_compiler_static_CXX='-non_shared' + ;; + *) + ;; + esac + ;; + psos*) + ;; + solaris*) + case $cc_basename in + CC*) + # Sun C++ 4.2, 5.x and Centerline C++ + lt_prog_compiler_pic_CXX='-KPIC' + lt_prog_compiler_static_CXX='-Bstatic' + lt_prog_compiler_wl_CXX='-Qoption ld ' + ;; + gcx*) + # Green Hills C++ Compiler + lt_prog_compiler_pic_CXX='-PIC' + ;; + *) + ;; + esac + ;; + sunos4*) + case $cc_basename in + CC*) + # Sun C++ 4.x + lt_prog_compiler_pic_CXX='-pic' + lt_prog_compiler_static_CXX='-Bstatic' + ;; + lcc*) + # Lucid + lt_prog_compiler_pic_CXX='-pic' + ;; + *) + ;; + esac + ;; + tandem*) + case $cc_basename in + NCC*) + # NonStop-UX NCC 3.20 + lt_prog_compiler_pic_CXX='-KPIC' + ;; + *) + ;; + esac + ;; + sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) + case $cc_basename in + CC*) + lt_prog_compiler_wl_CXX='-Wl,' + lt_prog_compiler_pic_CXX='-KPIC' + lt_prog_compiler_static_CXX='-Bstatic' + ;; + esac + ;; + vxworks*) + ;; + *) + lt_prog_compiler_can_build_shared_CXX=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_CXX" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_CXX" >&6; } + +# +# Check to make sure the PIC flag actually works. +# +if test -n "$lt_prog_compiler_pic_CXX"; then + +{ echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works" >&5 +echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_pic_works_CXX+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_pic_works_CXX=no + ac_outfile=conftest.$ac_objext + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="$lt_prog_compiler_pic_CXX -DPIC" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:12701: $lt_compile\"" >&5) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&5 + echo "$as_me:12705: \$? = $ac_status" >&5 + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. 
+ $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_pic_works_CXX=yes + fi + fi + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works_CXX" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_works_CXX" >&6; } + +if test x"$lt_prog_compiler_pic_works_CXX" = xyes; then + case $lt_prog_compiler_pic_CXX in + "" | " "*) ;; + *) lt_prog_compiler_pic_CXX=" $lt_prog_compiler_pic_CXX" ;; + esac +else + lt_prog_compiler_pic_CXX= + lt_prog_compiler_can_build_shared_CXX=no +fi + +fi +case $host_os in + # For platforms which do not support PIC, -DPIC is meaningless: + *djgpp*) + lt_prog_compiler_pic_CXX= + ;; + *) + lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC" + ;; +esac + +# +# Check to make sure the static flag actually works. +# +wl=$lt_prog_compiler_wl_CXX eval lt_tmp_static_flag=\"$lt_prog_compiler_static_CXX\" +{ echo "$as_me:$LINENO: checking if $compiler static flag $lt_tmp_static_flag works" >&5 +echo $ECHO_N "checking if $compiler static flag $lt_tmp_static_flag works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_static_works_CXX+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_static_works_CXX=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $lt_tmp_static_flag" + echo "$lt_simple_link_test_code" > conftest.$ac_ext + if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then + # The linker can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + # Append any errors to the config.log. + cat conftest.err 1>&5 + $echo "X$_lt_linker_boilerplate" | $Xsed -e '/^$/d' > conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_static_works_CXX=yes + fi + else + lt_prog_compiler_static_works_CXX=yes + fi + fi + $rm conftest* + LDFLAGS="$save_LDFLAGS" + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_static_works_CXX" >&5 +echo "${ECHO_T}$lt_prog_compiler_static_works_CXX" >&6; } + +if test x"$lt_prog_compiler_static_works_CXX" = xyes; then + : +else + lt_prog_compiler_static_CXX= +fi + + +{ echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5 +echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6; } +if test "${lt_cv_prog_compiler_c_o_CXX+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_prog_compiler_c_o_CXX=no + $rm -r conftest 2>/dev/null + mkdir conftest + cd conftest + mkdir out + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + lt_compiler_flag="-o out/conftest2.$ac_objext" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:12805: $lt_compile\"" >&5) + (eval "$lt_compile" 2>out/conftest.err) + ac_status=$? + cat out/conftest.err >&5 + echo "$as_me:12809: \$? 
= $ac_status" >&5 + if (exit $ac_status) && test -s out/conftest2.$ac_objext + then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' > out/conftest.exp + $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 + if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then + lt_cv_prog_compiler_c_o_CXX=yes + fi + fi + chmod u+w . 2>&5 + $rm conftest* + # SGI C++ compiler will create directory out/ii_files/ for + # template instantiation + test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files + $rm out/* && rmdir out + cd .. + rmdir conftest + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o_CXX" >&5 +echo "${ECHO_T}$lt_cv_prog_compiler_c_o_CXX" >&6; } + + +hard_links="nottested" +if test "$lt_cv_prog_compiler_c_o_CXX" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + { echo "$as_me:$LINENO: checking if we can lock with hard links" >&5 +echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6; } + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + { echo "$as_me:$LINENO: result: $hard_links" >&5 +echo "${ECHO_T}$hard_links" >&6; } + if test "$hard_links" = no; then + { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 +echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} + need_locks=warn + fi +else + need_locks=no +fi + +{ echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5 +echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6; } + + export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + case $host_os in + aix4* | aix5*) + # If we're using GNU nm, then we don't want the "-C" option. + # -C means demangle to AIX nm, but means don't demangle with GNU nm + if $NM -V 2>&1 | grep 'GNU' > /dev/null; then + export_symbols_cmds_CXX='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + else + export_symbols_cmds_CXX='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + fi + ;; + pw32*) + export_symbols_cmds_CXX="$ltdll_cmds" + ;; + cygwin* | mingw*) + export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' + ;; + *) + export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + ;; + esac + +{ echo "$as_me:$LINENO: result: $ld_shlibs_CXX" >&5 +echo "${ECHO_T}$ld_shlibs_CXX" >&6; } +test "$ld_shlibs_CXX" = no && can_build_shared=no + +# +# Do we need to explicitly link libc? 
+# +case "x$archive_cmds_need_lc_CXX" in +x|xyes) + # Assume -lc should be added + archive_cmds_need_lc_CXX=yes + + if test "$enable_shared" = yes && test "$GCC" = yes; then + case $archive_cmds_CXX in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + { echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5 +echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6; } + $rm conftest* + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } 2>conftest.err; then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$lt_prog_compiler_wl_CXX + pic_flag=$lt_prog_compiler_pic_CXX + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. + libname=conftest + lt_save_allow_undefined_flag=$allow_undefined_flag_CXX + allow_undefined_flag_CXX= + if { (eval echo "$as_me:$LINENO: \"$archive_cmds_CXX 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5 + (eval $archive_cmds_CXX 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } + then + archive_cmds_need_lc_CXX=no + else + archive_cmds_need_lc_CXX=yes + fi + allow_undefined_flag_CXX=$lt_save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi + $rm conftest* + { echo "$as_me:$LINENO: result: $archive_cmds_need_lc_CXX" >&5 +echo "${ECHO_T}$archive_cmds_need_lc_CXX" >&6; } + ;; + esac + fi + ;; +esac + +{ echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5 +echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6; } +library_names_spec= +libname_spec='lib$name' +soname_spec= +shrext_cmds=".so" +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" + +need_lib_prefix=unknown +hardcode_into_libs=no + +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +need_version=unknown + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX 3 has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}${shared_ext}$major' + ;; + +aix4* | aix5*) + version_type=linux + need_lib_prefix=no + need_version=no + hardcode_into_libs=yes + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. 
+ case $host_os in + aix4 | aix4.[01] | aix4.[01].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can not hardcode correct + # soname into executable. Probably we can add versioning support to + # collect2, so additional links can be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}${shared_ext}$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. + finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}${shared_ext}' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi[45]*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + shrext_cmds=".dll" + need_version=no + need_lib_prefix=no + + case $GCC,$host_os in + yes,cygwin* | yes,mingw* | yes,pw32*) + library_names_spec='$libname.dll.a' + # DLL is installed to $(libdir)/../bin by postinstall_cmds + postinstall_cmds='base_file=`basename \${file}`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog $dir/$dlname \$dldir/$dlname~ + chmod a+x \$dldir/$dlname' + postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + shlibpath_overrides_runpath=yes + + case $host_os in + cygwin*) + # Cygwin DLLs use 'cyg' prefix rather than 'lib' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib" + ;; + mingw*) + # MinGW DLLs use traditional 'lib' prefix + soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then + # It is most probably a Windows format PATH printed by + # mingw gcc, but we are running on Cygwin. Gcc prints its search + # path with ; separators, and with drive letters. We can handle the + # drive letters (cygwin fileutils understands them), so leave them, + # especially as we might pass files found there to a mingw objdump, + # which wouldn't understand a cygwinified path. Ahh. + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` + else + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + ;; + pw32*) + # pw32 DLLs use 'pw' prefix rather than 'lib' + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + ;; + esac + ;; + + *) + library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext' + soname_spec='${libname}${release}${major}$shared_ext' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' + + sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd* | dragonfly*) + # DragonFly does not have aout. When/if they implement a new + # versioning mechanism, adjust this. 
+ if test -x /usr/bin/objformat; then + objformat=`/usr/bin/objformat` + else + case $host_os in + freebsd[123]*) objformat=aout ;; + *) objformat=elf ;; + esac + fi + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + freebsd3.[01]* | freebsdelf3.[01]*) + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ + freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + *) # from 4.6 on, and DragonFly + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + version_type=sunos + need_lib_prefix=no + need_version=no + case $host_cpu in + ia64*) + shrext_cmds='.so' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.so" + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + if test "X$HPUX_IA64_MODE" = X32; then + sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" + else + sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" + fi + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + hppa*64*) + shrext_cmds='.sl' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.sl" + shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + *) + shrext_cmds='.sl' + dynamic_linker="$host_os dld.sl" + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + ;; + esac + # HP-UX runs *really* slowly unless shared libraries are mode 555. 
+ postinstall_cmds='chmod 555 $lib' + ;; + +interix[3-9]*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + +irix5* | irix6* | nonstopux*) + case $host_os in + nonstopux*) version_type=nonstopux ;; + *) + if test "$lt_cv_prog_gnu_ld" = yes; then + version_type=linux + else + version_type=irix + fi ;; + esac + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' + case $host_os in + irix5* | nonstopux*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") + libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") + libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") + libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + hardcode_into_libs=yes + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux*oldld* | linux*aout* | linux*coff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. +linux* | k*bsd*-gnu) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. + hardcode_into_libs=yes + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + + # Append ld.so.conf contents to the search path + if test -f /etc/ld.so.conf; then + lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;/^$/d' | tr '\n' ' '` + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. 
+ dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +nto-qnx*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + sys_lib_dlsearch_path_spec="/usr/lib" + need_lib_prefix=no + # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. + case $host_os in + openbsd3.3 | openbsd3.3.*) need_version=yes ;; + *) need_version=no ;; + esac + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case $host_os in + openbsd2.[89] | openbsd2.[89].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + ;; + +os2*) + libname_spec='$name' + shrext_cmds=".dll" + need_lib_prefix=no + library_names_spec='$libname${shared_ext} $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +rdos*) + dynamic_linker=no + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.3*) + version_type=linux + 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + need_lib_prefix=no + export_dynamic_flag_spec='${wl}-Blargedynsym' + runpath_var=LD_RUN_PATH + ;; + siemens) + need_lib_prefix=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' + soname_spec='$libname${shared_ext}.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + version_type=freebsd-elf + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + if test "$with_gnu_ld" = yes; then + sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' + shlibpath_overrides_runpath=no + else + sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' + shlibpath_overrides_runpath=yes + case $host_os in + sco3.2v5*) + sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" + ;; + esac + fi + sys_lib_dlsearch_path_spec='/usr/lib' + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +*) + dynamic_linker=no + ;; +esac +{ echo "$as_me:$LINENO: result: $dynamic_linker" >&5 +echo "${ECHO_T}$dynamic_linker" >&6; } +test "$dynamic_linker" = no && can_build_shared=no + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi + +{ echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5 +echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6; } +hardcode_action_CXX= +if test -n "$hardcode_libdir_flag_spec_CXX" || \ + test -n "$runpath_var_CXX" || \ + test "X$hardcode_automatic_CXX" = "Xyes" ; then + + # We can hardcode non-existant directories. + if test "$hardcode_direct_CXX" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, CXX)" != no && + test "$hardcode_minus_L_CXX" != no; then + # Linking always hardcodes the temporary library directory. + hardcode_action_CXX=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. + hardcode_action_CXX=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. 
+ hardcode_action_CXX=unsupported +fi +{ echo "$as_me:$LINENO: result: $hardcode_action_CXX" >&5 +echo "${ECHO_T}$hardcode_action_CXX" >&6; } + +if test "$hardcode_action_CXX" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi + + +# The else clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. +if test -f "$ltmain"; then + # See if we are running on zsh, and set the options which allow our commands through + # without removal of \ escapes. + if test -n "${ZSH_VERSION+set}" ; then + setopt NO_GLOB_SUBST + fi + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC LTCFLAGS NM \ + SED SHELL STRIP \ + libname_spec library_names_spec soname_spec extract_expsyms_cmds \ + old_striplib striplib file_magic_cmd finish_cmds finish_eval \ + deplibs_check_method reload_flag reload_cmds need_locks \ + lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \ + lt_cv_sys_global_symbol_to_c_name_address \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + old_postinstall_cmds old_postuninstall_cmds \ + compiler_CXX \ + CC_CXX \ + LD_CXX \ + lt_prog_compiler_wl_CXX \ + lt_prog_compiler_pic_CXX \ + lt_prog_compiler_static_CXX \ + lt_prog_compiler_no_builtin_flag_CXX \ + export_dynamic_flag_spec_CXX \ + thread_safe_flag_spec_CXX \ + whole_archive_flag_spec_CXX \ + enable_shared_with_static_runtimes_CXX \ + old_archive_cmds_CXX \ + old_archive_from_new_cmds_CXX \ + predep_objects_CXX \ + postdep_objects_CXX \ + predeps_CXX \ + postdeps_CXX \ + compiler_lib_search_path_CXX \ + archive_cmds_CXX \ + archive_expsym_cmds_CXX \ + postinstall_cmds_CXX \ + postuninstall_cmds_CXX \ + old_archive_from_expsyms_cmds_CXX \ + allow_undefined_flag_CXX \ + no_undefined_flag_CXX \ + export_symbols_cmds_CXX \ + hardcode_libdir_flag_spec_CXX \ + hardcode_libdir_flag_spec_ld_CXX \ + hardcode_libdir_separator_CXX \ + hardcode_automatic_CXX \ + module_cmds_CXX \ + module_expsym_cmds_CXX \ + lt_cv_prog_compiler_c_o_CXX \ + fix_srcfile_path_CXX \ + exclude_expsyms_CXX \ + include_expsyms_CXX; do + + case $var in + old_archive_cmds_CXX | \ + old_archive_from_new_cmds_CXX | \ + archive_cmds_CXX | \ + archive_expsym_cmds_CXX | \ + module_cmds_CXX | \ + module_expsym_cmds_CXX | \ + old_archive_from_expsyms_cmds_CXX | \ + export_symbols_cmds_CXX | \ + extract_expsyms_cmds | reload_cmds | finish_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. 
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + case $lt_echo in + *'\$0 --fallback-echo"') + lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'` + ;; + esac + +cfgfile="$ofile" + + cat <<__EOF__ >> "$cfgfile" +# ### BEGIN LIBTOOL TAG CONFIG: $tagname + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$archive_cmds_need_lc_CXX + +# Whether or not to disallow shared libs when runtime libs are static +allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_CXX + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host +host_os=$host_os + +# The build system. +build_alias=$build_alias +build=$build +build_os=$build_os + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# A C compiler. +LTCC=$lt_LTCC + +# LTCC compiler flags. +LTCFLAGS=$lt_LTCFLAGS + +# A language-specific compiler. +CC=$lt_compiler_CXX + +# Is the compiler the GNU C compiler? +with_gcc=$GCC_CXX + +# An ERE matcher. +EGREP=$lt_EGREP + +# The linker used to build libraries. +LD=$lt_LD_CXX + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$lt_STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_lt_prog_compiler_wl_CXX + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Shared library suffix (normally ".so"). +shrext_cmds='$shrext_cmds' + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_lt_prog_compiler_pic_CXX +pic_mode=$pic_mode + +# What is the maximum length of a command? +max_cmd_len=$lt_cv_sys_max_cmd_len + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_lt_cv_prog_compiler_c_o_CXX + +# Must we lock files when doing compilation? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. 
+link_static_flag=$lt_lt_prog_compiler_static_CXX + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_CXX + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec_CXX + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec_CXX + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds_CXX +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_CXX + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_CXX + +# Commands used to build and install a shared archive. +archive_cmds=$lt_archive_cmds_CXX +archive_expsym_cmds=$lt_archive_expsym_cmds_CXX +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands used to build a loadable module (assumed same as above if empty) +module_cmds=$lt_module_cmds_CXX +module_expsym_cmds=$lt_module_expsym_cmds_CXX + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Dependencies to place before the objects being linked to create a +# shared library. +predep_objects=$lt_predep_objects_CXX + +# Dependencies to place after the objects being linked to create a +# shared library. +postdep_objects=$lt_postdep_objects_CXX + +# Dependencies to place before the objects being linked to create a +# shared library. +predeps=$lt_predeps_CXX + +# Dependencies to place after the objects being linked to create a +# shared library. +postdeps=$lt_postdeps_CXX + +# The library search path used internally by the compiler when linking +# a shared library. +compiler_lib_search_path=$lt_compiler_lib_search_path_CXX + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag_CXX + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag_CXX + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. 
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action_CXX + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_CXX + +# If ld is used when linking, flag to hardcode \$libdir into +# a binary during linking. This must work even if \$libdir does +# not exist. +hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_CXX + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator_CXX + +# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$hardcode_direct_CXX + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L_CXX + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$hardcode_shlibpath_var_CXX + +# Set to yes if building a shared library automatically hardcodes DIR into the library +# and all subsequent libraries and executables linked against it. +hardcode_automatic=$hardcode_automatic_CXX + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$link_all_deplibs_CXX + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path=$lt_fix_srcfile_path + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols_CXX + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds_CXX + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms_CXX + +# Symbols that must always be exported. +include_expsyms=$lt_include_expsyms_CXX + +# ### END LIBTOOL TAG CONFIG: $tagname + +__EOF__ + + +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. 
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'` + if test -f "$ltmain_in"; then + test -f Makefile && make "$ltmain" + fi +fi + + +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + +CC=$lt_save_CC +LDCXX=$LD +LD=$lt_save_LD +GCC=$lt_save_GCC +with_gnu_ldcxx=$with_gnu_ld +with_gnu_ld=$lt_save_with_gnu_ld +lt_cv_path_LDCXX=$lt_cv_path_LD +lt_cv_path_LD=$lt_save_path_LD +lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld +lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld + + else + tagname="" + fi + ;; + + F77) + if test -n "$F77" && test "X$F77" != "Xno"; then + +ac_ext=f +ac_compile='$F77 -c $FFLAGS conftest.$ac_ext >&5' +ac_link='$F77 -o conftest$ac_exeext $FFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_f77_compiler_gnu + + +archive_cmds_need_lc_F77=no +allow_undefined_flag_F77= +always_export_symbols_F77=no +archive_expsym_cmds_F77= +export_dynamic_flag_spec_F77= +hardcode_direct_F77=no +hardcode_libdir_flag_spec_F77= +hardcode_libdir_flag_spec_ld_F77= +hardcode_libdir_separator_F77= +hardcode_minus_L_F77=no +hardcode_automatic_F77=no +module_cmds_F77= +module_expsym_cmds_F77= +link_all_deplibs_F77=unknown +old_archive_cmds_F77=$old_archive_cmds +no_undefined_flag_F77= +whole_archive_flag_spec_F77= +enable_shared_with_static_runtimes_F77=no + +# Source file extension for f77 test sources. +ac_ext=f + +# Object file extension for compiled f77 test sources. +objext=o +objext_F77=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="\ + subroutine t + return + end +" + +# Code to be used in simple link tests +lt_simple_link_test_code="\ + program t + end +" + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC + + +# save warnings/boilerplate of simple test code +ac_outfile=conftest.$ac_objext +echo "$lt_simple_compile_test_code" >conftest.$ac_ext +eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_compiler_boilerplate=`cat conftest.err` +$rm conftest* + +ac_outfile=conftest.$ac_objext +echo "$lt_simple_link_test_code" >conftest.$ac_ext +eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_linker_boilerplate=`cat conftest.err` +$rm conftest* + + +# Allow CC to be a program name with arguments. +lt_save_CC="$CC" +CC=${F77-"f77"} +compiler=$CC +compiler_F77=$CC +for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + +{ echo "$as_me:$LINENO: checking if libtool supports shared libraries" >&5 +echo $ECHO_N "checking if libtool supports shared libraries... $ECHO_C" >&6; } +{ echo "$as_me:$LINENO: result: $can_build_shared" >&5 +echo "${ECHO_T}$can_build_shared" >&6; } + +{ echo "$as_me:$LINENO: checking whether to build shared libraries" >&5 +echo $ECHO_N "checking whether to build shared libraries... $ECHO_C" >&6; } +test "$can_build_shared" = "no" && enable_shared=no + +# On AIX, shared libraries and static libraries use the same namespace, and +# are all built from PIC. 
+case $host_os in +aix3*) + test "$enable_shared" = yes && enable_static=no + if test -n "$RANLIB"; then + archive_cmds="$archive_cmds~\$RANLIB \$lib" + postinstall_cmds='$RANLIB $lib' + fi + ;; +aix4* | aix5*) + if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then + test "$enable_shared" = yes && enable_static=no + fi + ;; +esac +{ echo "$as_me:$LINENO: result: $enable_shared" >&5 +echo "${ECHO_T}$enable_shared" >&6; } + +{ echo "$as_me:$LINENO: checking whether to build static libraries" >&5 +echo $ECHO_N "checking whether to build static libraries... $ECHO_C" >&6; } +# Make sure either enable_shared or enable_static is yes. +test "$enable_shared" = yes || enable_static=yes +{ echo "$as_me:$LINENO: result: $enable_static" >&5 +echo "${ECHO_T}$enable_static" >&6; } + +GCC_F77="$G77" +LD_F77="$LD" + +lt_prog_compiler_wl_F77= +lt_prog_compiler_pic_F77= +lt_prog_compiler_static_F77= + +{ echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5 +echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6; } + + if test "$GCC" = yes; then + lt_prog_compiler_wl_F77='-Wl,' + lt_prog_compiler_static_F77='-static' + + case $host_os in + aix*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static_F77='-Bstatic' + fi + ;; + + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + lt_prog_compiler_pic_F77='-m68020 -resident32 -malways-restore-a4' + ;; + + beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + # Although the cygwin gcc ignores -fPIC, still need this for old-style + # (--disable-auto-import) libraries + lt_prog_compiler_pic_F77='-DDLL_EXPORT' + ;; + + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + lt_prog_compiler_pic_F77='-fno-common' + ;; + + interix[3-9]*) + # Interix 3.x gcc -fpic/-fPIC options generate broken code. + # Instead, we relocate shared libraries at runtime. + ;; + + msdosdjgpp*) + # Just because we use GCC doesn't mean we suddenly get shared libraries + # on systems that don't support them. + lt_prog_compiler_can_build_shared_F77=no + enable_shared=no + ;; + + sysv4*MP*) + if test -d /usr/nec; then + lt_prog_compiler_pic_F77=-Kconform_pic + fi + ;; + + hpux*) + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic_F77='-fPIC' + ;; + esac + ;; + + *) + lt_prog_compiler_pic_F77='-fPIC' + ;; + esac + else + # PORTME Check for flag to pass linker flags through the system compiler. 
+ case $host_os in + aix*) + lt_prog_compiler_wl_F77='-Wl,' + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static_F77='-Bstatic' + else + lt_prog_compiler_static_F77='-bnso -bI:/lib/syscalls.exp' + fi + ;; + darwin*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + case $cc_basename in + xlc*) + lt_prog_compiler_pic_F77='-qnocommon' + lt_prog_compiler_wl_F77='-Wl,' + ;; + esac + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_prog_compiler_pic_F77='-DDLL_EXPORT' + ;; + + hpux9* | hpux10* | hpux11*) + lt_prog_compiler_wl_F77='-Wl,' + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic_F77='+Z' + ;; + esac + # Is there a better lt_prog_compiler_static that works with the bundled CC? + lt_prog_compiler_static_F77='${wl}-a ${wl}archive' + ;; + + irix5* | irix6* | nonstopux*) + lt_prog_compiler_wl_F77='-Wl,' + # PIC (with -KPIC) is the default. + lt_prog_compiler_static_F77='-non_shared' + ;; + + newsos6) + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-Bstatic' + ;; + + linux* | k*bsd*-gnu) + case $cc_basename in + icc* | ecc*) + lt_prog_compiler_wl_F77='-Wl,' + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-static' + ;; + pgcc* | pgf77* | pgf90* | pgf95*) + # Portland Group compilers (*not* the Pentium gcc compiler, + # which looks to be a dead project) + lt_prog_compiler_wl_F77='-Wl,' + lt_prog_compiler_pic_F77='-fpic' + lt_prog_compiler_static_F77='-Bstatic' + ;; + ccc*) + lt_prog_compiler_wl_F77='-Wl,' + # All Alpha code is PIC. + lt_prog_compiler_static_F77='-non_shared' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C 5.9 + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-Bstatic' + lt_prog_compiler_wl_F77='-Wl,' + ;; + *Sun\ F*) + # Sun Fortran 8.3 passes all unrecognized flags to the linker + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-Bstatic' + lt_prog_compiler_wl_F77='' + ;; + esac + ;; + esac + ;; + + osf3* | osf4* | osf5*) + lt_prog_compiler_wl_F77='-Wl,' + # All OSF/1 code is PIC. 
+ lt_prog_compiler_static_F77='-non_shared' + ;; + + rdos*) + lt_prog_compiler_static_F77='-non_shared' + ;; + + solaris*) + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-Bstatic' + case $cc_basename in + f77* | f90* | f95*) + lt_prog_compiler_wl_F77='-Qoption ld ';; + *) + lt_prog_compiler_wl_F77='-Wl,';; + esac + ;; + + sunos4*) + lt_prog_compiler_wl_F77='-Qoption ld ' + lt_prog_compiler_pic_F77='-PIC' + lt_prog_compiler_static_F77='-Bstatic' + ;; + + sysv4 | sysv4.2uw2* | sysv4.3*) + lt_prog_compiler_wl_F77='-Wl,' + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-Bstatic' + ;; + + sysv4*MP*) + if test -d /usr/nec ;then + lt_prog_compiler_pic_F77='-Kconform_pic' + lt_prog_compiler_static_F77='-Bstatic' + fi + ;; + + sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) + lt_prog_compiler_wl_F77='-Wl,' + lt_prog_compiler_pic_F77='-KPIC' + lt_prog_compiler_static_F77='-Bstatic' + ;; + + unicos*) + lt_prog_compiler_wl_F77='-Wl,' + lt_prog_compiler_can_build_shared_F77=no + ;; + + uts4*) + lt_prog_compiler_pic_F77='-pic' + lt_prog_compiler_static_F77='-Bstatic' + ;; + + *) + lt_prog_compiler_can_build_shared_F77=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_F77" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_F77" >&6; } + +# +# Check to make sure the PIC flag actually works. +# +if test -n "$lt_prog_compiler_pic_F77"; then + +{ echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic_F77 works" >&5 +echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic_F77 works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_pic_works_F77+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_pic_works_F77=no + ac_outfile=conftest.$ac_objext + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="$lt_prog_compiler_pic_F77" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:14369: $lt_compile\"" >&5) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&5 + echo "$as_me:14373: \$? = $ac_status" >&5 + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! 
-s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_pic_works_F77=yes + fi + fi + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works_F77" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_works_F77" >&6; } + +if test x"$lt_prog_compiler_pic_works_F77" = xyes; then + case $lt_prog_compiler_pic_F77 in + "" | " "*) ;; + *) lt_prog_compiler_pic_F77=" $lt_prog_compiler_pic_F77" ;; + esac +else + lt_prog_compiler_pic_F77= + lt_prog_compiler_can_build_shared_F77=no +fi + +fi +case $host_os in + # For platforms which do not support PIC, -DPIC is meaningless: + *djgpp*) + lt_prog_compiler_pic_F77= + ;; + *) + lt_prog_compiler_pic_F77="$lt_prog_compiler_pic_F77" + ;; +esac + +# +# Check to make sure the static flag actually works. +# +wl=$lt_prog_compiler_wl_F77 eval lt_tmp_static_flag=\"$lt_prog_compiler_static_F77\" +{ echo "$as_me:$LINENO: checking if $compiler static flag $lt_tmp_static_flag works" >&5 +echo $ECHO_N "checking if $compiler static flag $lt_tmp_static_flag works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_static_works_F77+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_static_works_F77=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $lt_tmp_static_flag" + echo "$lt_simple_link_test_code" > conftest.$ac_ext + if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then + # The linker can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + # Append any errors to the config.log. + cat conftest.err 1>&5 + $echo "X$_lt_linker_boilerplate" | $Xsed -e '/^$/d' > conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_static_works_F77=yes + fi + else + lt_prog_compiler_static_works_F77=yes + fi + fi + $rm conftest* + LDFLAGS="$save_LDFLAGS" + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_static_works_F77" >&5 +echo "${ECHO_T}$lt_prog_compiler_static_works_F77" >&6; } + +if test x"$lt_prog_compiler_static_works_F77" = xyes; then + : +else + lt_prog_compiler_static_F77= +fi + + +{ echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5 +echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6; } +if test "${lt_cv_prog_compiler_c_o_F77+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_prog_compiler_c_o_F77=no + $rm -r conftest 2>/dev/null + mkdir conftest + cd conftest + mkdir out + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + lt_compiler_flag="-o out/conftest2.$ac_objext" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:14473: $lt_compile\"" >&5) + (eval "$lt_compile" 2>out/conftest.err) + ac_status=$? + cat out/conftest.err >&5 + echo "$as_me:14477: \$? 
= $ac_status" >&5 + if (exit $ac_status) && test -s out/conftest2.$ac_objext + then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' > out/conftest.exp + $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 + if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then + lt_cv_prog_compiler_c_o_F77=yes + fi + fi + chmod u+w . 2>&5 + $rm conftest* + # SGI C++ compiler will create directory out/ii_files/ for + # template instantiation + test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files + $rm out/* && rmdir out + cd .. + rmdir conftest + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o_F77" >&5 +echo "${ECHO_T}$lt_cv_prog_compiler_c_o_F77" >&6; } + + +hard_links="nottested" +if test "$lt_cv_prog_compiler_c_o_F77" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + { echo "$as_me:$LINENO: checking if we can lock with hard links" >&5 +echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6; } + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + { echo "$as_me:$LINENO: result: $hard_links" >&5 +echo "${ECHO_T}$hard_links" >&6; } + if test "$hard_links" = no; then + { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 +echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} + need_locks=warn + fi +else + need_locks=no +fi + +{ echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5 +echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6; } + + runpath_var= + allow_undefined_flag_F77= + enable_shared_with_static_runtimes_F77=no + archive_cmds_F77= + archive_expsym_cmds_F77= + old_archive_From_new_cmds_F77= + old_archive_from_expsyms_cmds_F77= + export_dynamic_flag_spec_F77= + whole_archive_flag_spec_F77= + thread_safe_flag_spec_F77= + hardcode_libdir_flag_spec_F77= + hardcode_libdir_flag_spec_ld_F77= + hardcode_libdir_separator_F77= + hardcode_direct_F77=no + hardcode_minus_L_F77=no + hardcode_shlibpath_var_F77=unsupported + link_all_deplibs_F77=unknown + hardcode_automatic_F77=no + module_cmds_F77= + module_expsym_cmds_F77= + always_export_symbols_F77=no + export_symbols_cmds_F77='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + # include_expsyms should be a list of space-separated symbols to be *always* + # included in the symbol list + include_expsyms_F77= + # exclude_expsyms can be an extended regexp of symbols to exclude + # it will be wrapped by ` (' and `)$', so one must not match beginning or + # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', + # as well as any symbol that contains `d'. + exclude_expsyms_F77="_GLOBAL_OFFSET_TABLE_" + # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out + # platforms (ab)use it in PIC code, but their linkers get confused if + # the symbol is explicitly referenced. Since portable code cannot + # rely on this symbol name, it's probably fine to never include it in + # preloaded symbol tables. + extract_expsyms_cmds= + # Just being paranoid about ensuring that cc_basename is set. 
+ for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + case $host_os in + cygwin* | mingw* | pw32*) + # FIXME: the MSVC++ port hasn't been tested in a loooong time + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + if test "$GCC" != yes; then + with_gnu_ld=no + fi + ;; + interix*) + # we just hope/assume this is gcc and not c89 (= MSVC++) + with_gnu_ld=yes + ;; + openbsd*) + with_gnu_ld=no + ;; + esac + + ld_shlibs_F77=yes + if test "$with_gnu_ld" = yes; then + # If archive_cmds runs LD, not CC, wlarc should be empty + wlarc='${wl}' + + # Set some defaults for GNU ld with shared library support. These + # are reset later if shared libraries are not supported. Putting them + # here allows them to be overridden if necessary. + runpath_var=LD_RUN_PATH + hardcode_libdir_flag_spec_F77='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec_F77='${wl}--export-dynamic' + # ancient GNU ld didn't support --whole-archive et. al. + if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then + whole_archive_flag_spec_F77="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + whole_archive_flag_spec_F77= + fi + supports_anon_versioning=no + case `$LD -v 2>/dev/null` in + *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 + *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... + *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... + *\ 2.11.*) ;; # other 2.11 versions + *) supports_anon_versioning=yes ;; + esac + + # See if GNU ld supports shared libraries. + case $host_os in + aix3* | aix4* | aix5*) + # On AIX/PPC, the GNU linker is very broken + if test "$host_cpu" != ia64; then + ld_shlibs_F77=no + cat <&2 + +*** Warning: the GNU linker, at least up to release 2.9.1, is reported +*** to be unable to reliably create shared libraries on AIX. +*** Therefore, libtool is disabling shared libraries support. If you +*** really care for shared libraries, you may want to modify your PATH +*** so that a non-GNU linker is found, and then restart. + +EOF + fi + ;; + + amigaos*) + archive_cmds_F77='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_minus_L_F77=yes + + # Samuel A. Falvo II reports + # that the semantics of dynamic libraries on AmigaOS, at least up + # to version 4, is to share data among multiple programs linked + # with the same dynamic library. Since this doesn't match the + # behavior of shared libraries on other platforms, we can't use + # them. + ld_shlibs_F77=no + ;; + + beos*) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + allow_undefined_flag_F77=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. 
FIXME + archive_cmds_F77='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + ld_shlibs_F77=no + fi + ;; + + cygwin* | mingw* | pw32*) + # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, F77) is actually meaningless, + # as there is no search path for DLLs. + hardcode_libdir_flag_spec_F77='-L$libdir' + allow_undefined_flag_F77=unsupported + always_export_symbols_F77=no + enable_shared_with_static_runtimes_F77=yes + export_symbols_cmds_F77='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' + + if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then + archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is; otherwise, prepend... + archive_expsym_cmds_F77='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + else + ld_shlibs_F77=no + fi + ;; + + interix[3-9]*) + hardcode_direct_F77=no + hardcode_shlibpath_var_F77=no + hardcode_libdir_flag_spec_F77='${wl}-rpath,$libdir' + export_dynamic_flag_spec_F77='${wl}-E' + # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. + # Instead, shared libraries are loaded at an image base (0x10000000 by + # default) and relocated if they conflict, which is a slow very memory + # consuming and fragmenting process. To avoid this, we pick a random, + # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link + # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
+ archive_cmds_F77='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + archive_expsym_cmds_F77='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + ;; + + gnu* | linux* | k*bsd*-gnu) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + tmp_addflag= + case $cc_basename,$host_cpu in + pgcc*) # Portland Group C compiler + whole_archive_flag_spec_F77='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_addflag=' $pic_flag' + ;; + pgf77* | pgf90* | pgf95*) # Portland Group f77 and f90 compilers + whole_archive_flag_spec_F77='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_addflag=' $pic_flag -Mnomain' ;; + ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 + tmp_addflag=' -i_dynamic' ;; + efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 + tmp_addflag=' -i_dynamic -nofor_main' ;; + ifc* | ifort*) # Intel Fortran compiler + tmp_addflag=' -nofor_main' ;; + esac + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) # Sun C 5.9 + whole_archive_flag_spec_F77='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive' + tmp_sharedflag='-G' ;; + *Sun\ F*) # Sun Fortran 8.3 + tmp_sharedflag='-G' ;; + *) + tmp_sharedflag='-shared' ;; + esac + archive_cmds_F77='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + + if test $supports_anon_versioning = yes; then + archive_expsym_cmds_F77='$echo "{ global:" > $output_objdir/$libname.ver~ + cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ + $echo "local: *; };" >> $output_objdir/$libname.ver~ + $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' + fi + else + ld_shlibs_F77=no + fi + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds_F77='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' + wlarc= + else + archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + fi + ;; + + solaris*) + if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then + ld_shlibs_F77=no + cat <&2 + +*** Warning: The releases 2.8.* of the GNU linker cannot reliably +*** create shared libraries on Solaris systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.9.1 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. 
+ +EOF + elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs_F77=no + fi + ;; + + sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) + case `$LD -v 2>&1` in + *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) + ld_shlibs_F77=no + cat <<_LT_EOF 1>&2 + +*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not +*** reliably create shared libraries on SCO systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.16.91.0.3 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. + +_LT_EOF + ;; + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + hardcode_libdir_flag_spec_F77='`test -z "$SCOABSPATH" && echo ${wl}-rpath,$libdir`' + archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib' + archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname,-retain-symbols-file,$export_symbols -o $lib' + else + ld_shlibs_F77=no + fi + ;; + esac + ;; + + sunos4*) + archive_cmds_F77='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' + wlarc= + hardcode_direct_F77=yes + hardcode_shlibpath_var_F77=no + ;; + + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs_F77=no + fi + ;; + esac + + if test "$ld_shlibs_F77" = no; then + runpath_var= + hardcode_libdir_flag_spec_F77= + export_dynamic_flag_spec_F77= + whole_archive_flag_spec_F77= + fi + else + # PORTME fill in a description of your system's linker (not GNU ld) + case $host_os in + aix3*) + allow_undefined_flag_F77=unsupported + always_export_symbols_F77=yes + archive_expsym_cmds_F77='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' + # Note: this linker hardcodes the directories in LIBPATH if there + # are no directories specified by -L. + hardcode_minus_L_F77=yes + if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then + # Neither direct hardcoding nor static linking is supported with a + # broken collect2. + hardcode_direct_F77=unsupported + fi + ;; + + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + # If we're using GNU nm, then we don't want the "-C" option. 
+ # -C means demangle to AIX nm, but means don't demangle with GNU nm + if $NM -V 2>&1 | grep 'GNU' > /dev/null; then + export_symbols_cmds_F77='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + else + export_symbols_cmds_F77='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + fi + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[23]|aix4.[23].*|aix5*) + for ld_flag in $LDFLAGS; do + if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + aix_use_runtimelinking=yes + break + fi + done + ;; + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. + + archive_cmds_F77='' + hardcode_direct_F77=yes + hardcode_libdir_separator_F77=':' + link_all_deplibs_F77=yes + + if test "$GCC" = yes; then + case $host_os in aix4.[012]|aix4.[012].*) + # We only want to do this on AIX 4.2 and lower, the check + # below for broken collect2 doesn't work under 4.3+ + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + : + else + # We have old collect2 + hardcode_direct_F77=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + hardcode_minus_L_F77=yes + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_libdir_separator_F77= + fi + ;; + esac + shared_flag='-shared' + if test "$aix_use_runtimelinking" = yes; then + shared_flag="$shared_flag "'${wl}-G' + fi + else + # not using gcc + if test "$host_cpu" = ia64; then + # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release + # chokes on -Wl,-G. The following line is correct: + shared_flag='-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall does not export symbols beginning with + # underscore (_), so it is better to generate a list of symbols to export. + always_export_symbols_F77=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + allow_undefined_flag_F77='-berok' + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF + program main + + end +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_f77_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec_F77='${wl}-blibpath:$libdir:'"$aix_libpath" + archive_expsym_cmds_F77="\$CC"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + hardcode_libdir_flag_spec_F77='${wl}-R $libdir:/usr/lib:/lib' + allow_undefined_flag_F77="-z nodefs" + archive_expsym_cmds_F77="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + else + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF + program main + + end +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_f77_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec_F77='${wl}-blibpath:$libdir:'"$aix_libpath" + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + no_undefined_flag_F77=' ${wl}-bernotok' + allow_undefined_flag_F77=' ${wl}-berok' + # Exported symbols can be pulled into shared objects from archives + whole_archive_flag_spec_F77='$convenience' + archive_cmds_need_lc_F77=yes + # This is similar to how AIX traditionally builds its shared libraries. 
+ archive_expsym_cmds_F77="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + fi + fi + ;; + + amigaos*) + archive_cmds_F77='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_minus_L_F77=yes + # see comment about different semantics on the GNU ld section + ld_shlibs_F77=no + ;; + + bsdi[45]*) + export_dynamic_flag_spec_F77=-rdynamic + ;; + + cygwin* | mingw* | pw32*) + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec_F77=' ' + allow_undefined_flag_F77=unsupported + # Tell ltmain to make .lib files, not .a files. + libext=lib + # Tell ltmain to make .dll files, not .so files. + shrext_cmds=".dll" + # FIXME: Setting linknames here is a bad hack. + archive_cmds_F77='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames=' + # The linker will automatically build a .lib file if we build a DLL. + old_archive_From_new_cmds_F77='true' + # FIXME: Should let the user specify the lib program. + old_archive_cmds_F77='lib -OUT:$oldlib$oldobjs$old_deplibs' + fix_srcfile_path_F77='`cygpath -w "$srcfile"`' + enable_shared_with_static_runtimes_F77=yes + ;; + + darwin* | rhapsody*) + case $host_os in + rhapsody* | darwin1.[012]) + allow_undefined_flag_F77='${wl}-undefined ${wl}suppress' + ;; + *) # Darwin 1.3 on + if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then + allow_undefined_flag_F77='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + else + case ${MACOSX_DEPLOYMENT_TARGET} in + 10.[012]) + allow_undefined_flag_F77='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + ;; + 10.*) + allow_undefined_flag_F77='${wl}-undefined ${wl}dynamic_lookup' + ;; + esac + fi + ;; + esac + archive_cmds_need_lc_F77=no + hardcode_direct_F77=no + hardcode_automatic_F77=yes + hardcode_shlibpath_var_F77=unsupported + whole_archive_flag_spec_F77='' + link_all_deplibs_F77=yes + if test "$GCC" = yes ; then + output_verbose_link_cmd='echo' + archive_cmds_F77='$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + module_cmds_F77='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + archive_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + case $cc_basename in 
+ xlc*) + output_verbose_link_cmd='echo' + archive_cmds_F77='$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}`echo $rpath/$soname` $xlcverstring' + module_cmds_F77='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + archive_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}$rpath/$soname $xlcverstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + ;; + *) + ld_shlibs_F77=no + ;; + esac + fi + ;; + + dgux*) + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_shlibpath_var_F77=no + ;; + + freebsd1*) + ld_shlibs_F77=no + ;; + + # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor + # support. Future versions do this automatically, but an explicit c++rt0.o + # does not break anything, and helps significantly (at the cost of a little + # extra space). + freebsd2.2*) + archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' + hardcode_libdir_flag_spec_F77='-R$libdir' + hardcode_direct_F77=yes + hardcode_shlibpath_var_F77=no + ;; + + # Unfortunately, older versions of FreeBSD 2 do not have this feature. + freebsd2*) + archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_F77=yes + hardcode_minus_L_F77=yes + hardcode_shlibpath_var_F77=no + ;; + + # FreeBSD 3 and greater uses gcc -shared to do shared libraries. + freebsd* | dragonfly*) + archive_cmds_F77='$CC -shared -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec_F77='-R$libdir' + hardcode_direct_F77=yes + hardcode_shlibpath_var_F77=no + ;; + + hpux9*) + if test "$GCC" = yes; then + archive_cmds_F77='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + else + archive_cmds_F77='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + fi + hardcode_libdir_flag_spec_F77='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_F77=: + hardcode_direct_F77=yes + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. 
+ hardcode_minus_L_F77=yes + export_dynamic_flag_spec_F77='${wl}-E' + ;; + + hpux10*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + archive_cmds_F77='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_F77='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' + fi + if test "$with_gnu_ld" = no; then + hardcode_libdir_flag_spec_F77='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_F77=: + + hardcode_direct_F77=yes + export_dynamic_flag_spec_F77='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + hardcode_minus_L_F77=yes + fi + ;; + + hpux11*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + case $host_cpu in + hppa*64*) + archive_cmds_F77='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + archive_cmds_F77='$CC -shared ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds_F77='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + else + case $host_cpu in + hppa*64*) + archive_cmds_F77='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + archive_cmds_F77='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds_F77='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + fi + if test "$with_gnu_ld" = no; then + hardcode_libdir_flag_spec_F77='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_F77=: + + case $host_cpu in + hppa*64*|ia64*) + hardcode_libdir_flag_spec_ld_F77='+b $libdir' + hardcode_direct_F77=no + hardcode_shlibpath_var_F77=no + ;; + *) + hardcode_direct_F77=yes + export_dynamic_flag_spec_F77='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. 
+ hardcode_minus_L_F77=yes + ;; + esac + fi + ;; + + irix5* | irix6* | nonstopux*) + if test "$GCC" = yes; then + archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + archive_cmds_F77='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec_ld_F77='-rpath $libdir' + fi + hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_F77=: + link_all_deplibs_F77=yes + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out + else + archive_cmds_F77='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF + fi + hardcode_libdir_flag_spec_F77='-R$libdir' + hardcode_direct_F77=yes + hardcode_shlibpath_var_F77=no + ;; + + newsos6) + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_F77=yes + hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_F77=: + hardcode_shlibpath_var_F77=no + ;; + + openbsd*) + if test -f /usr/libexec/ld.so; then + hardcode_direct_F77=yes + hardcode_shlibpath_var_F77=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + archive_cmds_F77='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_F77='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' + hardcode_libdir_flag_spec_F77='${wl}-rpath,$libdir' + export_dynamic_flag_spec_F77='${wl}-E' + else + case $host_os in + openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) + archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec_F77='-R$libdir' + ;; + *) + archive_cmds_F77='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec_F77='${wl}-rpath,$libdir' + ;; + esac + fi + else + ld_shlibs_F77=no + fi + ;; + + os2*) + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_minus_L_F77=yes + allow_undefined_flag_F77=unsupported + archive_cmds_F77='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' + old_archive_From_new_cmds_F77='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + ;; + + osf3*) + if test "$GCC" = yes; then + allow_undefined_flag_F77=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_F77='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + allow_undefined_flag_F77=' -expect_unresolved \*' + archive_cmds_F77='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + 
hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_F77=: + ;; + + osf4* | osf5*) # as osf3* with the addition of -msym flag + if test "$GCC" = yes; then + allow_undefined_flag_F77=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_F77='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir' + else + allow_undefined_flag_F77=' -expect_unresolved \*' + archive_cmds_F77='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + archive_expsym_cmds_F77='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~ + $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib~$rm $lib.exp' + + # Both c and cxx compiler support -rpath directly + hardcode_libdir_flag_spec_F77='-rpath $libdir' + fi + hardcode_libdir_separator_F77=: + ;; + + solaris*) + no_undefined_flag_F77=' -z text' + if test "$GCC" = yes; then + wlarc='${wl}' + archive_cmds_F77='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_F77='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp' + else + wlarc='' + archive_cmds_F77='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds_F77='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + fi + hardcode_libdir_flag_spec_F77='-R$libdir' + hardcode_shlibpath_var_F77=no + case $host_os in + solaris2.[0-5] | solaris2.[0-5].*) ;; + *) + # The compiler driver will combine and reorder linker options, + # but understands `-z linker_flag'. GCC discards it without `$wl', + # but is careful enough not to reorder. + # Supported since Solaris 2.6 (maybe 2.5.1?) + if test "$GCC" = yes; then + whole_archive_flag_spec_F77='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + else + whole_archive_flag_spec_F77='-z allextract$convenience -z defaultextract' + fi + ;; + esac + link_all_deplibs_F77=yes + ;; + + sunos4*) + if test "x$host_vendor" = xsequent; then + # Use $CC to link under sequent, because it throws in some extra .o + # files that make .init and .fini sections work. + archive_cmds_F77='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_F77='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' + fi + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_direct_F77=yes + hardcode_minus_L_F77=yes + hardcode_shlibpath_var_F77=no + ;; + + sysv4) + case $host_vendor in + sni) + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_F77=yes # is this really true??? 
+ ;; + siemens) + ## LD is ld it makes a PLAMLIB + ## CC just makes a GrossModule. + archive_cmds_F77='$LD -G -o $lib $libobjs $deplibs $linker_flags' + reload_cmds_F77='$CC -r -o $output$reload_objs' + hardcode_direct_F77=no + ;; + motorola) + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_F77=no #Motorola manual says yes, but my tests say they lie + ;; + esac + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var_F77=no + ;; + + sysv4.3*) + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var_F77=no + export_dynamic_flag_spec_F77='-Bexport' + ;; + + sysv4*MP*) + if test -d /usr/nec; then + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var_F77=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + ld_shlibs_F77=yes + fi + ;; + + sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) + no_undefined_flag_F77='${wl}-z,text' + archive_cmds_need_lc_F77=no + hardcode_shlibpath_var_F77=no + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + archive_cmds_F77='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_F77='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_F77='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_F77='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + sysv5* | sco3.2v5* | sco5v6*) + # Note: We can NOT use -z defs as we might desire, because we do not + # link with -lc, and that would cause any symbols used from libc to + # always be unresolved, which means just about no library would + # ever link correctly. If we're not using GNU ld we use -z text + # though, which does catch some bad symbols but isn't as heavy-handed + # as -z defs. + no_undefined_flag_F77='${wl}-z,text' + allow_undefined_flag_F77='${wl}-z,nodefs' + archive_cmds_need_lc_F77=no + hardcode_shlibpath_var_F77=no + hardcode_libdir_flag_spec_F77='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' + hardcode_libdir_separator_F77=':' + link_all_deplibs_F77=yes + export_dynamic_flag_spec_F77='${wl}-Bexport' + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + archive_cmds_F77='$CC -shared ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_F77='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_F77='$CC -G ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_F77='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + uts4*) + archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec_F77='-L$libdir' + hardcode_shlibpath_var_F77=no + ;; + + *) + ld_shlibs_F77=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $ld_shlibs_F77" >&5 +echo "${ECHO_T}$ld_shlibs_F77" >&6; } +test "$ld_shlibs_F77" = no && can_build_shared=no + +# +# Do we need to explicitly link libc? 
+# +case "x$archive_cmds_need_lc_F77" in +x|xyes) + # Assume -lc should be added + archive_cmds_need_lc_F77=yes + + if test "$enable_shared" = yes && test "$GCC" = yes; then + case $archive_cmds_F77 in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + { echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5 +echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6; } + $rm conftest* + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } 2>conftest.err; then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$lt_prog_compiler_wl_F77 + pic_flag=$lt_prog_compiler_pic_F77 + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. + libname=conftest + lt_save_allow_undefined_flag=$allow_undefined_flag_F77 + allow_undefined_flag_F77= + if { (eval echo "$as_me:$LINENO: \"$archive_cmds_F77 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5 + (eval $archive_cmds_F77 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } + then + archive_cmds_need_lc_F77=no + else + archive_cmds_need_lc_F77=yes + fi + allow_undefined_flag_F77=$lt_save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi + $rm conftest* + { echo "$as_me:$LINENO: result: $archive_cmds_need_lc_F77" >&5 +echo "${ECHO_T}$archive_cmds_need_lc_F77" >&6; } + ;; + esac + fi + ;; +esac + +{ echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5 +echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6; } +library_names_spec= +libname_spec='lib$name' +soname_spec= +shrext_cmds=".so" +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" + +need_lib_prefix=unknown +hardcode_into_libs=no + +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +need_version=unknown + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX 3 has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}${shared_ext}$major' + ;; + +aix4* | aix5*) + version_type=linux + need_lib_prefix=no + need_version=no + hardcode_into_libs=yes + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. 
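# A hand-run equivalent of the probe below (sketch only; "gcc" stands in for
# whatever ${CC} is actually configured as):
#
#   printf '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)\n yes \n#endif\n' \
#     | gcc -E - | grep yes
#
# Any output means the compiler postdates the collect2 "#! ." import-file bug
# described above and shared libraries stay enabled; no output makes the
# fragment below clear can_build_shared.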
+ case $host_os in + aix4 | aix4.[01] | aix4.[01].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can not hardcode correct + # soname into executable. Probably we can add versioning support to + # collect2, so additional links can be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}${shared_ext}$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. + finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}${shared_ext}' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi[45]*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + shrext_cmds=".dll" + need_version=no + need_lib_prefix=no + + case $GCC,$host_os in + yes,cygwin* | yes,mingw* | yes,pw32*) + library_names_spec='$libname.dll.a' + # DLL is installed to $(libdir)/../bin by postinstall_cmds + postinstall_cmds='base_file=`basename \${file}`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog $dir/$dlname \$dldir/$dlname~ + chmod a+x \$dldir/$dlname' + postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + shlibpath_overrides_runpath=yes + + case $host_os in + cygwin*) + # Cygwin DLLs use 'cyg' prefix rather than 'lib' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib" + ;; + mingw*) + # MinGW DLLs use traditional 'lib' prefix + soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then + # It is most probably a Windows format PATH printed by + # mingw gcc, but we are running on Cygwin. Gcc prints its search + # path with ; separators, and with drive letters. We can handle the + # drive letters (cygwin fileutils understands them), so leave them, + # especially as we might pass files found there to a mingw objdump, + # which wouldn't understand a cygwinified path. Ahh. + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` + else + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + ;; + pw32*) + # pw32 DLLs use 'pw' prefix rather than 'lib' + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + ;; + esac + ;; + + *) + library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext' + soname_spec='${libname}${release}${major}$shared_ext' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' + + sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd* | dragonfly*) + # DragonFly does not have aout. When/if they implement a new + # versioning mechanism, adjust this. 
+ if test -x /usr/bin/objformat; then + objformat=`/usr/bin/objformat` + else + case $host_os in + freebsd[123]*) objformat=aout ;; + *) objformat=elf ;; + esac + fi + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + freebsd3.[01]* | freebsdelf3.[01]*) + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ + freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + *) # from 4.6 on, and DragonFly + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + version_type=sunos + need_lib_prefix=no + need_version=no + case $host_cpu in + ia64*) + shrext_cmds='.so' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.so" + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + if test "X$HPUX_IA64_MODE" = X32; then + sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" + else + sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" + fi + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + hppa*64*) + shrext_cmds='.sl' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.sl" + shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + *) + shrext_cmds='.sl' + dynamic_linker="$host_os dld.sl" + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + ;; + esac + # HP-UX runs *really* slowly unless shared libraries are mode 555. 
+ postinstall_cmds='chmod 555 $lib' + ;; + +interix[3-9]*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + +irix5* | irix6* | nonstopux*) + case $host_os in + nonstopux*) version_type=nonstopux ;; + *) + if test "$lt_cv_prog_gnu_ld" = yes; then + version_type=linux + else + version_type=irix + fi ;; + esac + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' + case $host_os in + irix5* | nonstopux*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") + libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") + libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") + libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + hardcode_into_libs=yes + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux*oldld* | linux*aout* | linux*coff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. +linux* | k*bsd*-gnu) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. + hardcode_into_libs=yes + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + + # Append ld.so.conf contents to the search path + if test -f /etc/ld.so.conf; then + lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;/^$/d' | tr '\n' ' '` + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. 
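# A sketch of what the ld.so.conf handling above yields.  With an
# /etc/ld.so.conf along the lines of (contents illustrative only):
#
#   include /etc/ld.so.conf.d/*.conf
#   /usr/local/lib
#
# the awk fragment cats every file named by the include line in place of that
# line, the sed call strips comments, hwcap entries and separators, and tr
# joins the surviving directories with spaces, so lt_ld_extra ends up as e.g.
# "/opt/foo/lib /usr/local/lib" and is appended to the "/usr/lib /lib"
# default (empty $libsuff assumed) in sys_lib_dlsearch_path_spec.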
+ dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +nto-qnx*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + sys_lib_dlsearch_path_spec="/usr/lib" + need_lib_prefix=no + # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. + case $host_os in + openbsd3.3 | openbsd3.3.*) need_version=yes ;; + *) need_version=no ;; + esac + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case $host_os in + openbsd2.[89] | openbsd2.[89].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + ;; + +os2*) + libname_spec='$name' + shrext_cmds=".dll" + need_lib_prefix=no + library_names_spec='$libname${shared_ext} $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +rdos*) + dynamic_linker=no + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.3*) + version_type=linux + 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + need_lib_prefix=no + export_dynamic_flag_spec='${wl}-Blargedynsym' + runpath_var=LD_RUN_PATH + ;; + siemens) + need_lib_prefix=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' + soname_spec='$libname${shared_ext}.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + version_type=freebsd-elf + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + if test "$with_gnu_ld" = yes; then + sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' + shlibpath_overrides_runpath=no + else + sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' + shlibpath_overrides_runpath=yes + case $host_os in + sco3.2v5*) + sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" + ;; + esac + fi + sys_lib_dlsearch_path_spec='/usr/lib' + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +*) + dynamic_linker=no + ;; +esac +{ echo "$as_me:$LINENO: result: $dynamic_linker" >&5 +echo "${ECHO_T}$dynamic_linker" >&6; } +test "$dynamic_linker" = no && can_build_shared=no + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi + +{ echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5 +echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6; } +hardcode_action_F77= +if test -n "$hardcode_libdir_flag_spec_F77" || \ + test -n "$runpath_var_F77" || \ + test "X$hardcode_automatic_F77" = "Xyes" ; then + + # We can hardcode non-existant directories. + if test "$hardcode_direct_F77" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, F77)" != no && + test "$hardcode_minus_L_F77" != no; then + # Linking always hardcodes the temporary library directory. + hardcode_action_F77=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. + hardcode_action_F77=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. 
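# For reference, the three values this check can settle on:
#   relink      - only the build directory can be hardcoded, so programs must
#                 be relinked at install time against the installed library
#   immediate   - run paths (even for directories that do not exist yet) can
#                 be hardcoded when the program is first linked
#   unsupported - no run path can be hardcoded at all; only directories found
#                 via the default search path will work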
+ hardcode_action_F77=unsupported +fi +{ echo "$as_me:$LINENO: result: $hardcode_action_F77" >&5 +echo "${ECHO_T}$hardcode_action_F77" >&6; } + +if test "$hardcode_action_F77" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi + + +# The else clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. +if test -f "$ltmain"; then + # See if we are running on zsh, and set the options which allow our commands through + # without removal of \ escapes. + if test -n "${ZSH_VERSION+set}" ; then + setopt NO_GLOB_SUBST + fi + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC LTCFLAGS NM \ + SED SHELL STRIP \ + libname_spec library_names_spec soname_spec extract_expsyms_cmds \ + old_striplib striplib file_magic_cmd finish_cmds finish_eval \ + deplibs_check_method reload_flag reload_cmds need_locks \ + lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \ + lt_cv_sys_global_symbol_to_c_name_address \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + old_postinstall_cmds old_postuninstall_cmds \ + compiler_F77 \ + CC_F77 \ + LD_F77 \ + lt_prog_compiler_wl_F77 \ + lt_prog_compiler_pic_F77 \ + lt_prog_compiler_static_F77 \ + lt_prog_compiler_no_builtin_flag_F77 \ + export_dynamic_flag_spec_F77 \ + thread_safe_flag_spec_F77 \ + whole_archive_flag_spec_F77 \ + enable_shared_with_static_runtimes_F77 \ + old_archive_cmds_F77 \ + old_archive_from_new_cmds_F77 \ + predep_objects_F77 \ + postdep_objects_F77 \ + predeps_F77 \ + postdeps_F77 \ + compiler_lib_search_path_F77 \ + archive_cmds_F77 \ + archive_expsym_cmds_F77 \ + postinstall_cmds_F77 \ + postuninstall_cmds_F77 \ + old_archive_from_expsyms_cmds_F77 \ + allow_undefined_flag_F77 \ + no_undefined_flag_F77 \ + export_symbols_cmds_F77 \ + hardcode_libdir_flag_spec_F77 \ + hardcode_libdir_flag_spec_ld_F77 \ + hardcode_libdir_separator_F77 \ + hardcode_automatic_F77 \ + module_cmds_F77 \ + module_expsym_cmds_F77 \ + lt_cv_prog_compiler_c_o_F77 \ + fix_srcfile_path_F77 \ + exclude_expsyms_F77 \ + include_expsyms_F77; do + + case $var in + old_archive_cmds_F77 | \ + old_archive_from_new_cmds_F77 | \ + archive_cmds_F77 | \ + archive_expsym_cmds_F77 | \ + module_cmds_F77 | \ + module_expsym_cmds_F77 | \ + old_archive_from_expsyms_cmds_F77 | \ + export_symbols_cmds_F77 | \ + extract_expsyms_cmds | reload_cmds | finish_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. 
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + case $lt_echo in + *'\$0 --fallback-echo"') + lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'` + ;; + esac + +cfgfile="$ofile" + + cat <<__EOF__ >> "$cfgfile" +# ### BEGIN LIBTOOL TAG CONFIG: $tagname + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$archive_cmds_need_lc_F77 + +# Whether or not to disallow shared libs when runtime libs are static +allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_F77 + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host +host_os=$host_os + +# The build system. +build_alias=$build_alias +build=$build +build_os=$build_os + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# A C compiler. +LTCC=$lt_LTCC + +# LTCC compiler flags. +LTCFLAGS=$lt_LTCFLAGS + +# A language-specific compiler. +CC=$lt_compiler_F77 + +# Is the compiler the GNU C compiler? +with_gcc=$GCC_F77 + +# An ERE matcher. +EGREP=$lt_EGREP + +# The linker used to build libraries. +LD=$lt_LD_F77 + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$lt_STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_lt_prog_compiler_wl_F77 + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Shared library suffix (normally ".so"). +shrext_cmds='$shrext_cmds' + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_lt_prog_compiler_pic_F77 +pic_mode=$pic_mode + +# What is the maximum length of a command? +max_cmd_len=$lt_cv_sys_max_cmd_len + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_lt_cv_prog_compiler_c_o_F77 + +# Must we lock files when doing compilation? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. 
+link_static_flag=$lt_lt_prog_compiler_static_F77 + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_F77 + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_F77 + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec_F77 + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec_F77 + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds_F77 +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_F77 + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_F77 + +# Commands used to build and install a shared archive. +archive_cmds=$lt_archive_cmds_F77 +archive_expsym_cmds=$lt_archive_expsym_cmds_F77 +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands used to build a loadable module (assumed same as above if empty) +module_cmds=$lt_module_cmds_F77 +module_expsym_cmds=$lt_module_expsym_cmds_F77 + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Dependencies to place before the objects being linked to create a +# shared library. +predep_objects=$lt_predep_objects_F77 + +# Dependencies to place after the objects being linked to create a +# shared library. +postdep_objects=$lt_postdep_objects_F77 + +# Dependencies to place before the objects being linked to create a +# shared library. +predeps=$lt_predeps_F77 + +# Dependencies to place after the objects being linked to create a +# shared library. +postdeps=$lt_postdeps_F77 + +# The library search path used internally by the compiler when linking +# a shared library. +compiler_lib_search_path=$lt_compiler_lib_search_path_F77 + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag_F77 + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag_F77 + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. 
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action_F77 + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_F77 + +# If ld is used when linking, flag to hardcode \$libdir into +# a binary during linking. This must work even if \$libdir does +# not exist. +hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_F77 + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator_F77 + +# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$hardcode_direct_F77 + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L_F77 + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$hardcode_shlibpath_var_F77 + +# Set to yes if building a shared library automatically hardcodes DIR into the library +# and all subsequent libraries and executables linked against it. +hardcode_automatic=$hardcode_automatic_F77 + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$link_all_deplibs_F77 + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path=$lt_fix_srcfile_path + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols_F77 + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds_F77 + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms_F77 + +# Symbols that must always be exported. +include_expsyms=$lt_include_expsyms_F77 + +# ### END LIBTOOL TAG CONFIG: $tagname + +__EOF__ + + +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. 
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'` + if test -f "$ltmain_in"; then + test -f Makefile && make "$ltmain" + fi +fi + + +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + +CC="$lt_save_CC" + + else + tagname="" + fi + ;; + + GCJ) + if test -n "$GCJ" && test "X$GCJ" != "Xno"; then + + +# Source file extension for Java test sources. +ac_ext=java + +# Object file extension for compiled Java test sources. +objext=o +objext_GCJ=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code="class foo {}" + +# Code to be used in simple link tests +lt_simple_link_test_code='public class conftest { public static void main(String[] argv) {}; }' + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC + + +# save warnings/boilerplate of simple test code +ac_outfile=conftest.$ac_objext +echo "$lt_simple_compile_test_code" >conftest.$ac_ext +eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_compiler_boilerplate=`cat conftest.err` +$rm conftest* + +ac_outfile=conftest.$ac_objext +echo "$lt_simple_link_test_code" >conftest.$ac_ext +eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_linker_boilerplate=`cat conftest.err` +$rm conftest* + + +# Allow CC to be a program name with arguments. +lt_save_CC="$CC" +CC=${GCJ-"gcj"} +compiler=$CC +compiler_GCJ=$CC +for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + +# GCJ did not exist at the time GCC didn't implicitly link libc in. +archive_cmds_need_lc_GCJ=no + +old_archive_cmds_GCJ=$old_archive_cmds + + +lt_prog_compiler_no_builtin_flag_GCJ= + +if test "$GCC" = yes; then + lt_prog_compiler_no_builtin_flag_GCJ=' -fno-builtin' + + +{ echo "$as_me:$LINENO: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 +echo $ECHO_N "checking if $compiler supports -fno-rtti -fno-exceptions... $ECHO_C" >&6; } +if test "${lt_cv_prog_compiler_rtti_exceptions+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_prog_compiler_rtti_exceptions=no + ac_outfile=conftest.$ac_objext + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="-fno-rtti -fno-exceptions" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:16662: $lt_compile\"" >&5) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&5 + echo "$as_me:16666: \$? 
= $ac_status" >&5 + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + lt_cv_prog_compiler_rtti_exceptions=yes + fi + fi + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 +echo "${ECHO_T}$lt_cv_prog_compiler_rtti_exceptions" >&6; } + +if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then + lt_prog_compiler_no_builtin_flag_GCJ="$lt_prog_compiler_no_builtin_flag_GCJ -fno-rtti -fno-exceptions" +else + : +fi + +fi + +lt_prog_compiler_wl_GCJ= +lt_prog_compiler_pic_GCJ= +lt_prog_compiler_static_GCJ= + +{ echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5 +echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6; } + + if test "$GCC" = yes; then + lt_prog_compiler_wl_GCJ='-Wl,' + lt_prog_compiler_static_GCJ='-static' + + case $host_os in + aix*) + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static_GCJ='-Bstatic' + fi + ;; + + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + lt_prog_compiler_pic_GCJ='-m68020 -resident32 -malways-restore-a4' + ;; + + beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + # Although the cygwin gcc ignores -fPIC, still need this for old-style + # (--disable-auto-import) libraries + lt_prog_compiler_pic_GCJ='-DDLL_EXPORT' + ;; + + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + lt_prog_compiler_pic_GCJ='-fno-common' + ;; + + interix[3-9]*) + # Interix 3.x gcc -fpic/-fPIC options generate broken code. + # Instead, we relocate shared libraries at runtime. + ;; + + msdosdjgpp*) + # Just because we use GCC doesn't mean we suddenly get shared libraries + # on systems that don't support them. + lt_prog_compiler_can_build_shared_GCJ=no + enable_shared=no + ;; + + sysv4*MP*) + if test -d /usr/nec; then + lt_prog_compiler_pic_GCJ=-Kconform_pic + fi + ;; + + hpux*) + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic_GCJ='-fPIC' + ;; + esac + ;; + + *) + lt_prog_compiler_pic_GCJ='-fPIC' + ;; + esac + else + # PORTME Check for flag to pass linker flags through the system compiler. 
+ case $host_os in + aix*) + lt_prog_compiler_wl_GCJ='-Wl,' + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_prog_compiler_static_GCJ='-Bstatic' + else + lt_prog_compiler_static_GCJ='-bnso -bI:/lib/syscalls.exp' + fi + ;; + darwin*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + case $cc_basename in + xlc*) + lt_prog_compiler_pic_GCJ='-qnocommon' + lt_prog_compiler_wl_GCJ='-Wl,' + ;; + esac + ;; + + mingw* | cygwin* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_prog_compiler_pic_GCJ='-DDLL_EXPORT' + ;; + + hpux9* | hpux10* | hpux11*) + lt_prog_compiler_wl_GCJ='-Wl,' + # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but + # not for PA HP-UX. + case $host_cpu in + hppa*64*|ia64*) + # +Z the default + ;; + *) + lt_prog_compiler_pic_GCJ='+Z' + ;; + esac + # Is there a better lt_prog_compiler_static that works with the bundled CC? + lt_prog_compiler_static_GCJ='${wl}-a ${wl}archive' + ;; + + irix5* | irix6* | nonstopux*) + lt_prog_compiler_wl_GCJ='-Wl,' + # PIC (with -KPIC) is the default. + lt_prog_compiler_static_GCJ='-non_shared' + ;; + + newsos6) + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-Bstatic' + ;; + + linux* | k*bsd*-gnu) + case $cc_basename in + icc* | ecc*) + lt_prog_compiler_wl_GCJ='-Wl,' + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-static' + ;; + pgcc* | pgf77* | pgf90* | pgf95*) + # Portland Group compilers (*not* the Pentium gcc compiler, + # which looks to be a dead project) + lt_prog_compiler_wl_GCJ='-Wl,' + lt_prog_compiler_pic_GCJ='-fpic' + lt_prog_compiler_static_GCJ='-Bstatic' + ;; + ccc*) + lt_prog_compiler_wl_GCJ='-Wl,' + # All Alpha code is PIC. + lt_prog_compiler_static_GCJ='-non_shared' + ;; + *) + case `$CC -V 2>&1 | sed 5q` in + *Sun\ C*) + # Sun C 5.9 + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-Bstatic' + lt_prog_compiler_wl_GCJ='-Wl,' + ;; + *Sun\ F*) + # Sun Fortran 8.3 passes all unrecognized flags to the linker + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-Bstatic' + lt_prog_compiler_wl_GCJ='' + ;; + esac + ;; + esac + ;; + + osf3* | osf4* | osf5*) + lt_prog_compiler_wl_GCJ='-Wl,' + # All OSF/1 code is PIC. 
+ lt_prog_compiler_static_GCJ='-non_shared' + ;; + + rdos*) + lt_prog_compiler_static_GCJ='-non_shared' + ;; + + solaris*) + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-Bstatic' + case $cc_basename in + f77* | f90* | f95*) + lt_prog_compiler_wl_GCJ='-Qoption ld ';; + *) + lt_prog_compiler_wl_GCJ='-Wl,';; + esac + ;; + + sunos4*) + lt_prog_compiler_wl_GCJ='-Qoption ld ' + lt_prog_compiler_pic_GCJ='-PIC' + lt_prog_compiler_static_GCJ='-Bstatic' + ;; + + sysv4 | sysv4.2uw2* | sysv4.3*) + lt_prog_compiler_wl_GCJ='-Wl,' + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-Bstatic' + ;; + + sysv4*MP*) + if test -d /usr/nec ;then + lt_prog_compiler_pic_GCJ='-Kconform_pic' + lt_prog_compiler_static_GCJ='-Bstatic' + fi + ;; + + sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) + lt_prog_compiler_wl_GCJ='-Wl,' + lt_prog_compiler_pic_GCJ='-KPIC' + lt_prog_compiler_static_GCJ='-Bstatic' + ;; + + unicos*) + lt_prog_compiler_wl_GCJ='-Wl,' + lt_prog_compiler_can_build_shared_GCJ=no + ;; + + uts4*) + lt_prog_compiler_pic_GCJ='-pic' + lt_prog_compiler_static_GCJ='-Bstatic' + ;; + + *) + lt_prog_compiler_can_build_shared_GCJ=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_GCJ" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_GCJ" >&6; } + +# +# Check to make sure the PIC flag actually works. +# +if test -n "$lt_prog_compiler_pic_GCJ"; then + +{ echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic_GCJ works" >&5 +echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic_GCJ works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_pic_works_GCJ+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_pic_works_GCJ=no + ac_outfile=conftest.$ac_objext + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + lt_compiler_flag="$lt_prog_compiler_pic_GCJ" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + # The option is referenced via a variable to avoid confusing sed. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:16952: $lt_compile\"" >&5) + (eval "$lt_compile" 2>conftest.err) + ac_status=$? + cat conftest.err >&5 + echo "$as_me:16956: \$? = $ac_status" >&5 + if (exit $ac_status) && test -s "$ac_outfile"; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings other than the usual output. + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' >conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if test ! 
-s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_pic_works_GCJ=yes + fi + fi + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works_GCJ" >&5 +echo "${ECHO_T}$lt_prog_compiler_pic_works_GCJ" >&6; } + +if test x"$lt_prog_compiler_pic_works_GCJ" = xyes; then + case $lt_prog_compiler_pic_GCJ in + "" | " "*) ;; + *) lt_prog_compiler_pic_GCJ=" $lt_prog_compiler_pic_GCJ" ;; + esac +else + lt_prog_compiler_pic_GCJ= + lt_prog_compiler_can_build_shared_GCJ=no +fi + +fi +case $host_os in + # For platforms which do not support PIC, -DPIC is meaningless: + *djgpp*) + lt_prog_compiler_pic_GCJ= + ;; + *) + lt_prog_compiler_pic_GCJ="$lt_prog_compiler_pic_GCJ" + ;; +esac + +# +# Check to make sure the static flag actually works. +# +wl=$lt_prog_compiler_wl_GCJ eval lt_tmp_static_flag=\"$lt_prog_compiler_static_GCJ\" +{ echo "$as_me:$LINENO: checking if $compiler static flag $lt_tmp_static_flag works" >&5 +echo $ECHO_N "checking if $compiler static flag $lt_tmp_static_flag works... $ECHO_C" >&6; } +if test "${lt_prog_compiler_static_works_GCJ+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_prog_compiler_static_works_GCJ=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $lt_tmp_static_flag" + echo "$lt_simple_link_test_code" > conftest.$ac_ext + if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then + # The linker can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + # Append any errors to the config.log. + cat conftest.err 1>&5 + $echo "X$_lt_linker_boilerplate" | $Xsed -e '/^$/d' > conftest.exp + $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 + if diff conftest.exp conftest.er2 >/dev/null; then + lt_prog_compiler_static_works_GCJ=yes + fi + else + lt_prog_compiler_static_works_GCJ=yes + fi + fi + $rm conftest* + LDFLAGS="$save_LDFLAGS" + +fi +{ echo "$as_me:$LINENO: result: $lt_prog_compiler_static_works_GCJ" >&5 +echo "${ECHO_T}$lt_prog_compiler_static_works_GCJ" >&6; } + +if test x"$lt_prog_compiler_static_works_GCJ" = xyes; then + : +else + lt_prog_compiler_static_GCJ= +fi + + +{ echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5 +echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6; } +if test "${lt_cv_prog_compiler_c_o_GCJ+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + lt_cv_prog_compiler_c_o_GCJ=no + $rm -r conftest 2>/dev/null + mkdir conftest + cd conftest + mkdir out + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + lt_compiler_flag="-o out/conftest2.$ac_objext" + # Insert the option either (1) after the last *FLAGS variable, or + # (2) before a word containing "conftest.", or (3) at the end. + # Note that $ac_compile itself does not contain backslashes and begins + # with a dollar sign (not a hyphen), so the echo should work correctly. + lt_compile=`echo "$ac_compile" | $SED \ + -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ + -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ + -e 's:$: $lt_compiler_flag:'` + (eval echo "\"\$as_me:17056: $lt_compile\"" >&5) + (eval "$lt_compile" 2>out/conftest.err) + ac_status=$? + cat out/conftest.err >&5 + echo "$as_me:17060: \$? 
= $ac_status" >&5 + if (exit $ac_status) && test -s out/conftest2.$ac_objext + then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + $echo "X$_lt_compiler_boilerplate" | $Xsed -e '/^$/d' > out/conftest.exp + $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 + if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then + lt_cv_prog_compiler_c_o_GCJ=yes + fi + fi + chmod u+w . 2>&5 + $rm conftest* + # SGI C++ compiler will create directory out/ii_files/ for + # template instantiation + test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files + $rm out/* && rmdir out + cd .. + rmdir conftest + $rm conftest* + +fi +{ echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o_GCJ" >&5 +echo "${ECHO_T}$lt_cv_prog_compiler_c_o_GCJ" >&6; } + + +hard_links="nottested" +if test "$lt_cv_prog_compiler_c_o_GCJ" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + { echo "$as_me:$LINENO: checking if we can lock with hard links" >&5 +echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6; } + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + { echo "$as_me:$LINENO: result: $hard_links" >&5 +echo "${ECHO_T}$hard_links" >&6; } + if test "$hard_links" = no; then + { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 +echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} + need_locks=warn + fi +else + need_locks=no +fi + +{ echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5 +echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6; } + + runpath_var= + allow_undefined_flag_GCJ= + enable_shared_with_static_runtimes_GCJ=no + archive_cmds_GCJ= + archive_expsym_cmds_GCJ= + old_archive_From_new_cmds_GCJ= + old_archive_from_expsyms_cmds_GCJ= + export_dynamic_flag_spec_GCJ= + whole_archive_flag_spec_GCJ= + thread_safe_flag_spec_GCJ= + hardcode_libdir_flag_spec_GCJ= + hardcode_libdir_flag_spec_ld_GCJ= + hardcode_libdir_separator_GCJ= + hardcode_direct_GCJ=no + hardcode_minus_L_GCJ=no + hardcode_shlibpath_var_GCJ=unsupported + link_all_deplibs_GCJ=unknown + hardcode_automatic_GCJ=no + module_cmds_GCJ= + module_expsym_cmds_GCJ= + always_export_symbols_GCJ=no + export_symbols_cmds_GCJ='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' + # include_expsyms should be a list of space-separated symbols to be *always* + # included in the symbol list + include_expsyms_GCJ= + # exclude_expsyms can be an extended regexp of symbols to exclude + # it will be wrapped by ` (' and `)$', so one must not match beginning or + # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', + # as well as any symbol that contains `d'. + exclude_expsyms_GCJ="_GLOBAL_OFFSET_TABLE_" + # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out + # platforms (ab)use it in PIC code, but their linkers get confused if + # the symbol is explicitly referenced. Since portable code cannot + # rely on this symbol name, it's probably fine to never include it in + # preloaded symbol tables. + extract_expsyms_cmds= + # Just being paranoid about ensuring that cc_basename is set. 
+ for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + + case $host_os in + cygwin* | mingw* | pw32*) + # FIXME: the MSVC++ port hasn't been tested in a loooong time + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + if test "$GCC" != yes; then + with_gnu_ld=no + fi + ;; + interix*) + # we just hope/assume this is gcc and not c89 (= MSVC++) + with_gnu_ld=yes + ;; + openbsd*) + with_gnu_ld=no + ;; + esac + + ld_shlibs_GCJ=yes + if test "$with_gnu_ld" = yes; then + # If archive_cmds runs LD, not CC, wlarc should be empty + wlarc='${wl}' + + # Set some defaults for GNU ld with shared library support. These + # are reset later if shared libraries are not supported. Putting them + # here allows them to be overridden if necessary. + runpath_var=LD_RUN_PATH + hardcode_libdir_flag_spec_GCJ='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec_GCJ='${wl}--export-dynamic' + # ancient GNU ld didn't support --whole-archive et. al. + if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then + whole_archive_flag_spec_GCJ="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + whole_archive_flag_spec_GCJ= + fi + supports_anon_versioning=no + case `$LD -v 2>/dev/null` in + *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 + *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... + *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... + *\ 2.11.*) ;; # other 2.11 versions + *) supports_anon_versioning=yes ;; + esac + + # See if GNU ld supports shared libraries. + case $host_os in + aix3* | aix4* | aix5*) + # On AIX/PPC, the GNU linker is very broken + if test "$host_cpu" != ia64; then + ld_shlibs_GCJ=no + cat <&2 + +*** Warning: the GNU linker, at least up to release 2.9.1, is reported +*** to be unable to reliably create shared libraries on AIX. +*** Therefore, libtool is disabling shared libraries support. If you +*** really care for shared libraries, you may want to modify your PATH +*** so that a non-GNU linker is found, and then restart. + +EOF + fi + ;; + + amigaos*) + archive_cmds_GCJ='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_minus_L_GCJ=yes + + # Samuel A. Falvo II reports + # that the semantics of dynamic libraries on AmigaOS, at least up + # to version 4, is to share data among multiple programs linked + # with the same dynamic library. Since this doesn't match the + # behavior of shared libraries on other platforms, we can't use + # them. + ld_shlibs_GCJ=no + ;; + + beos*) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + allow_undefined_flag_GCJ=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. 
FIXME + archive_cmds_GCJ='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + ld_shlibs_GCJ=no + fi + ;; + + cygwin* | mingw* | pw32*) + # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, GCJ) is actually meaningless, + # as there is no search path for DLLs. + hardcode_libdir_flag_spec_GCJ='-L$libdir' + allow_undefined_flag_GCJ=unsupported + always_export_symbols_GCJ=no + enable_shared_with_static_runtimes_GCJ=yes + export_symbols_cmds_GCJ='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' + + if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then + archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is; otherwise, prepend... + archive_expsym_cmds_GCJ='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + else + ld_shlibs_GCJ=no + fi + ;; + + interix[3-9]*) + hardcode_direct_GCJ=no + hardcode_shlibpath_var_GCJ=no + hardcode_libdir_flag_spec_GCJ='${wl}-rpath,$libdir' + export_dynamic_flag_spec_GCJ='${wl}-E' + # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. + # Instead, shared libraries are loaded at an image base (0x10000000 by + # default) and relocated if they conflict, which is a slow very memory + # consuming and fragmenting process. To avoid this, we pick a random, + # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link + # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
+ archive_cmds_GCJ='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
+ archive_expsym_cmds_GCJ='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
+ ;;
+
+ gnu* | linux* | k*bsd*-gnu)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ tmp_addflag=
+ case $cc_basename,$host_cpu in
+ pgcc*) # Portland Group C compiler
+ whole_archive_flag_spec_GCJ='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive'
+ tmp_addflag=' $pic_flag'
+ ;;
+ pgf77* | pgf90* | pgf95*) # Portland Group f77 and f90 compilers
+ whole_archive_flag_spec_GCJ='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive'
+ tmp_addflag=' $pic_flag -Mnomain' ;;
+ ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64
+ tmp_addflag=' -i_dynamic' ;;
+ efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64
+ tmp_addflag=' -i_dynamic -nofor_main' ;;
+ ifc* | ifort*) # Intel Fortran compiler
+ tmp_addflag=' -nofor_main' ;;
+ esac
+ case `$CC -V 2>&1 | sed 5q` in
+ *Sun\ C*) # Sun C 5.9
+ whole_archive_flag_spec_GCJ='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; $echo \"$new_convenience\"` ${wl}--no-whole-archive'
+ tmp_sharedflag='-G' ;;
+ *Sun\ F*) # Sun Fortran 8.3
+ tmp_sharedflag='-G' ;;
+ *)
+ tmp_sharedflag='-shared' ;;
+ esac
+ archive_cmds_GCJ='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+
+ if test $supports_anon_versioning = yes; then
+ archive_expsym_cmds_GCJ='$echo "{ global:" > $output_objdir/$libname.ver~
+ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ $echo "local: *; };" >> $output_objdir/$libname.ver~
+ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+ fi
+ else
+ ld_shlibs_GCJ=no
+ fi
+ ;;
+
+ netbsd*)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds_GCJ='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ wlarc=
+ else
+ archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ fi
+ ;;
+
+ solaris*)
+ if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then
+ ld_shlibs_GCJ=no
+ cat <<EOF 1>&2
+
+*** Warning: The releases 2.8.* of the GNU linker cannot reliably
+*** create shared libraries on Solaris systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.9.1 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+ +EOF + elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs_GCJ=no + fi + ;; + + sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) + case `$LD -v 2>&1` in + *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) + ld_shlibs_GCJ=no + cat <<_LT_EOF 1>&2 + +*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not +*** reliably create shared libraries on SCO systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.16.91.0.3 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. + +_LT_EOF + ;; + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + hardcode_libdir_flag_spec_GCJ='`test -z "$SCOABSPATH" && echo ${wl}-rpath,$libdir`' + archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib' + archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname,\${SCOABSPATH:+${install_libdir}/}$soname,-retain-symbols-file,$export_symbols -o $lib' + else + ld_shlibs_GCJ=no + fi + ;; + esac + ;; + + sunos4*) + archive_cmds_GCJ='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' + wlarc= + hardcode_direct_GCJ=yes + hardcode_shlibpath_var_GCJ=no + ;; + + *) + if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then + archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs_GCJ=no + fi + ;; + esac + + if test "$ld_shlibs_GCJ" = no; then + runpath_var= + hardcode_libdir_flag_spec_GCJ= + export_dynamic_flag_spec_GCJ= + whole_archive_flag_spec_GCJ= + fi + else + # PORTME fill in a description of your system's linker (not GNU ld) + case $host_os in + aix3*) + allow_undefined_flag_GCJ=unsupported + always_export_symbols_GCJ=yes + archive_expsym_cmds_GCJ='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' + # Note: this linker hardcodes the directories in LIBPATH if there + # are no directories specified by -L. + hardcode_minus_L_GCJ=yes + if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then + # Neither direct hardcoding nor static linking is supported with a + # broken collect2. + hardcode_direct_GCJ=unsupported + fi + ;; + + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + # If we're using GNU nm, then we don't want the "-C" option. 
+ # -C means demangle to AIX nm, but means don't demangle with GNU nm + if $NM -V 2>&1 | grep 'GNU' > /dev/null; then + export_symbols_cmds_GCJ='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + else + export_symbols_cmds_GCJ='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols' + fi + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[23]|aix4.[23].*|aix5*) + for ld_flag in $LDFLAGS; do + if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + aix_use_runtimelinking=yes + break + fi + done + ;; + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. + + archive_cmds_GCJ='' + hardcode_direct_GCJ=yes + hardcode_libdir_separator_GCJ=':' + link_all_deplibs_GCJ=yes + + if test "$GCC" = yes; then + case $host_os in aix4.[012]|aix4.[012].*) + # We only want to do this on AIX 4.2 and lower, the check + # below for broken collect2 doesn't work under 4.3+ + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + : + else + # We have old collect2 + hardcode_direct_GCJ=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + hardcode_minus_L_GCJ=yes + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_libdir_separator_GCJ= + fi + ;; + esac + shared_flag='-shared' + if test "$aix_use_runtimelinking" = yes; then + shared_flag="$shared_flag "'${wl}-G' + fi + else + # not using gcc + if test "$host_cpu" = ia64; then + # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release + # chokes on -Wl,-G. The following line is correct: + shared_flag='-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall does not export symbols beginning with + # underscore (_), so it is better to generate a list of symbols to export. + always_export_symbols_GCJ=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + allow_undefined_flag_GCJ='-berok' + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? 
+ grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec_GCJ='${wl}-blibpath:$libdir:'"$aix_libpath" + archive_expsym_cmds_GCJ="\$CC"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + hardcode_libdir_flag_spec_GCJ='${wl}-R $libdir:/usr/lib:/lib' + allow_undefined_flag_GCJ="-z nodefs" + archive_expsym_cmds_GCJ="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + else + # Determine the default libpath from the value encoded in an empty executable. + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ + +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext conftest$ac_exeext +if { (ac_try="$ac_link" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_link") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest$ac_exeext && + $as_test_x conftest$ac_exeext; then + +lt_aix_libpath_sed=' + /Import File Strings/,/^$/ { + /^0/ { + s/^0 *\(.*\)$/\1/ + p + } + }' +aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +# Check for a 64-bit object if we didn't find anything. +if test -z "$aix_libpath"; then + aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` +fi +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + +fi + +rm -f core conftest.err conftest.$ac_objext conftest_ipa8_conftest.oo \ + conftest$ac_exeext conftest.$ac_ext +if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi + + hardcode_libdir_flag_spec_GCJ='${wl}-blibpath:$libdir:'"$aix_libpath" + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. 
+ no_undefined_flag_GCJ=' ${wl}-bernotok' + allow_undefined_flag_GCJ=' ${wl}-berok' + # Exported symbols can be pulled into shared objects from archives + whole_archive_flag_spec_GCJ='$convenience' + archive_cmds_need_lc_GCJ=yes + # This is similar to how AIX traditionally builds its shared libraries. + archive_expsym_cmds_GCJ="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + fi + fi + ;; + + amigaos*) + archive_cmds_GCJ='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_minus_L_GCJ=yes + # see comment about different semantics on the GNU ld section + ld_shlibs_GCJ=no + ;; + + bsdi[45]*) + export_dynamic_flag_spec_GCJ=-rdynamic + ;; + + cygwin* | mingw* | pw32*) + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec_GCJ=' ' + allow_undefined_flag_GCJ=unsupported + # Tell ltmain to make .lib files, not .a files. + libext=lib + # Tell ltmain to make .dll files, not .so files. + shrext_cmds=".dll" + # FIXME: Setting linknames here is a bad hack. + archive_cmds_GCJ='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames=' + # The linker will automatically build a .lib file if we build a DLL. + old_archive_From_new_cmds_GCJ='true' + # FIXME: Should let the user specify the lib program. 
+ old_archive_cmds_GCJ='lib -OUT:$oldlib$oldobjs$old_deplibs' + fix_srcfile_path_GCJ='`cygpath -w "$srcfile"`' + enable_shared_with_static_runtimes_GCJ=yes + ;; + + darwin* | rhapsody*) + case $host_os in + rhapsody* | darwin1.[012]) + allow_undefined_flag_GCJ='${wl}-undefined ${wl}suppress' + ;; + *) # Darwin 1.3 on + if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then + allow_undefined_flag_GCJ='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + else + case ${MACOSX_DEPLOYMENT_TARGET} in + 10.[012]) + allow_undefined_flag_GCJ='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' + ;; + 10.*) + allow_undefined_flag_GCJ='${wl}-undefined ${wl}dynamic_lookup' + ;; + esac + fi + ;; + esac + archive_cmds_need_lc_GCJ=no + hardcode_direct_GCJ=no + hardcode_automatic_GCJ=yes + hardcode_shlibpath_var_GCJ=unsupported + whole_archive_flag_spec_GCJ='' + link_all_deplibs_GCJ=yes + if test "$GCC" = yes ; then + output_verbose_link_cmd='echo' + archive_cmds_GCJ='$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring' + module_cmds_GCJ='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + archive_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + else + case $cc_basename in + xlc*) + output_verbose_link_cmd='echo' + archive_cmds_GCJ='$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}`echo $rpath/$soname` $xlcverstring' + module_cmds_GCJ='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags' + # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin lds + archive_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -qmkshrobj $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-install_name ${wl}$rpath/$soname $xlcverstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + module_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}' + ;; + *) + ld_shlibs_GCJ=no + ;; + esac + fi + ;; + + dgux*) + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_shlibpath_var_GCJ=no + ;; + + freebsd1*) + ld_shlibs_GCJ=no + ;; + + # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor + # support. Future versions do this automatically, but an explicit c++rt0.o + # does not break anything, and helps significantly (at the cost of a little + # extra space). 
+ freebsd2.2*) + archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' + hardcode_libdir_flag_spec_GCJ='-R$libdir' + hardcode_direct_GCJ=yes + hardcode_shlibpath_var_GCJ=no + ;; + + # Unfortunately, older versions of FreeBSD 2 do not have this feature. + freebsd2*) + archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_GCJ=yes + hardcode_minus_L_GCJ=yes + hardcode_shlibpath_var_GCJ=no + ;; + + # FreeBSD 3 and greater uses gcc -shared to do shared libraries. + freebsd* | dragonfly*) + archive_cmds_GCJ='$CC -shared -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec_GCJ='-R$libdir' + hardcode_direct_GCJ=yes + hardcode_shlibpath_var_GCJ=no + ;; + + hpux9*) + if test "$GCC" = yes; then + archive_cmds_GCJ='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + else + archive_cmds_GCJ='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + fi + hardcode_libdir_flag_spec_GCJ='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_GCJ=: + hardcode_direct_GCJ=yes + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + hardcode_minus_L_GCJ=yes + export_dynamic_flag_spec_GCJ='${wl}-E' + ;; + + hpux10*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + archive_cmds_GCJ='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_GCJ='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' + fi + if test "$with_gnu_ld" = no; then + hardcode_libdir_flag_spec_GCJ='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_GCJ=: + + hardcode_direct_GCJ=yes + export_dynamic_flag_spec_GCJ='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. + hardcode_minus_L_GCJ=yes + fi + ;; + + hpux11*) + if test "$GCC" = yes -a "$with_gnu_ld" = no; then + case $host_cpu in + hppa*64*) + archive_cmds_GCJ='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + archive_cmds_GCJ='$CC -shared ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds_GCJ='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + else + case $host_cpu in + hppa*64*) + archive_cmds_GCJ='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + ;; + ia64*) + archive_cmds_GCJ='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + ;; + *) + archive_cmds_GCJ='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + fi + if test "$with_gnu_ld" = no; then + hardcode_libdir_flag_spec_GCJ='${wl}+b ${wl}$libdir' + hardcode_libdir_separator_GCJ=: + + case $host_cpu in + hppa*64*|ia64*) + hardcode_libdir_flag_spec_ld_GCJ='+b $libdir' + hardcode_direct_GCJ=no + hardcode_shlibpath_var_GCJ=no + ;; + *) + hardcode_direct_GCJ=yes + export_dynamic_flag_spec_GCJ='${wl}-E' + + # hardcode_minus_L: Not really in the search PATH, + # but as the default location of the library. 
+ hardcode_minus_L_GCJ=yes + ;; + esac + fi + ;; + + irix5* | irix6* | nonstopux*) + if test "$GCC" = yes; then + archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + archive_cmds_GCJ='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec_ld_GCJ='-rpath $libdir' + fi + hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_GCJ=: + link_all_deplibs_GCJ=yes + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out + else + archive_cmds_GCJ='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF + fi + hardcode_libdir_flag_spec_GCJ='-R$libdir' + hardcode_direct_GCJ=yes + hardcode_shlibpath_var_GCJ=no + ;; + + newsos6) + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_GCJ=yes + hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_GCJ=: + hardcode_shlibpath_var_GCJ=no + ;; + + openbsd*) + if test -f /usr/libexec/ld.so; then + hardcode_direct_GCJ=yes + hardcode_shlibpath_var_GCJ=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + archive_cmds_GCJ='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_GCJ='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' + hardcode_libdir_flag_spec_GCJ='${wl}-rpath,$libdir' + export_dynamic_flag_spec_GCJ='${wl}-E' + else + case $host_os in + openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) + archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec_GCJ='-R$libdir' + ;; + *) + archive_cmds_GCJ='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec_GCJ='${wl}-rpath,$libdir' + ;; + esac + fi + else + ld_shlibs_GCJ=no + fi + ;; + + os2*) + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_minus_L_GCJ=yes + allow_undefined_flag_GCJ=unsupported + archive_cmds_GCJ='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' + old_archive_From_new_cmds_GCJ='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + ;; + + osf3*) + if test "$GCC" = yes; then + allow_undefined_flag_GCJ=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_GCJ='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + allow_undefined_flag_GCJ=' -expect_unresolved \*' + archive_cmds_GCJ='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + 
hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator_GCJ=: + ;; + + osf4* | osf5*) # as osf3* with the addition of -msym flag + if test "$GCC" = yes; then + allow_undefined_flag_GCJ=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds_GCJ='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir' + else + allow_undefined_flag_GCJ=' -expect_unresolved \*' + archive_cmds_GCJ='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + archive_expsym_cmds_GCJ='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~ + $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib~$rm $lib.exp' + + # Both c and cxx compiler support -rpath directly + hardcode_libdir_flag_spec_GCJ='-rpath $libdir' + fi + hardcode_libdir_separator_GCJ=: + ;; + + solaris*) + no_undefined_flag_GCJ=' -z text' + if test "$GCC" = yes; then + wlarc='${wl}' + archive_cmds_GCJ='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_GCJ='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp' + else + wlarc='' + archive_cmds_GCJ='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds_GCJ='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + fi + hardcode_libdir_flag_spec_GCJ='-R$libdir' + hardcode_shlibpath_var_GCJ=no + case $host_os in + solaris2.[0-5] | solaris2.[0-5].*) ;; + *) + # The compiler driver will combine and reorder linker options, + # but understands `-z linker_flag'. GCC discards it without `$wl', + # but is careful enough not to reorder. + # Supported since Solaris 2.6 (maybe 2.5.1?) + if test "$GCC" = yes; then + whole_archive_flag_spec_GCJ='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + else + whole_archive_flag_spec_GCJ='-z allextract$convenience -z defaultextract' + fi + ;; + esac + link_all_deplibs_GCJ=yes + ;; + + sunos4*) + if test "x$host_vendor" = xsequent; then + # Use $CC to link under sequent, because it throws in some extra .o + # files that make .init and .fini sections work. + archive_cmds_GCJ='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_GCJ='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' + fi + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_direct_GCJ=yes + hardcode_minus_L_GCJ=yes + hardcode_shlibpath_var_GCJ=no + ;; + + sysv4) + case $host_vendor in + sni) + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_GCJ=yes # is this really true??? 
+ ;; + siemens) + ## LD is ld it makes a PLAMLIB + ## CC just makes a GrossModule. + archive_cmds_GCJ='$LD -G -o $lib $libobjs $deplibs $linker_flags' + reload_cmds_GCJ='$CC -r -o $output$reload_objs' + hardcode_direct_GCJ=no + ;; + motorola) + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct_GCJ=no #Motorola manual says yes, but my tests say they lie + ;; + esac + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var_GCJ=no + ;; + + sysv4.3*) + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var_GCJ=no + export_dynamic_flag_spec_GCJ='-Bexport' + ;; + + sysv4*MP*) + if test -d /usr/nec; then + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var_GCJ=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + ld_shlibs_GCJ=yes + fi + ;; + + sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) + no_undefined_flag_GCJ='${wl}-z,text' + archive_cmds_need_lc_GCJ=no + hardcode_shlibpath_var_GCJ=no + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + archive_cmds_GCJ='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_GCJ='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_GCJ='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_GCJ='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + sysv5* | sco3.2v5* | sco5v6*) + # Note: We can NOT use -z defs as we might desire, because we do not + # link with -lc, and that would cause any symbols used from libc to + # always be unresolved, which means just about no library would + # ever link correctly. If we're not using GNU ld we use -z text + # though, which does catch some bad symbols but isn't as heavy-handed + # as -z defs. + no_undefined_flag_GCJ='${wl}-z,text' + allow_undefined_flag_GCJ='${wl}-z,nodefs' + archive_cmds_need_lc_GCJ=no + hardcode_shlibpath_var_GCJ=no + hardcode_libdir_flag_spec_GCJ='`test -z "$SCOABSPATH" && echo ${wl}-R,$libdir`' + hardcode_libdir_separator_GCJ=':' + link_all_deplibs_GCJ=yes + export_dynamic_flag_spec_GCJ='${wl}-Bexport' + runpath_var='LD_RUN_PATH' + + if test "$GCC" = yes; then + archive_cmds_GCJ='$CC -shared ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_GCJ='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds_GCJ='$CC -G ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + archive_expsym_cmds_GCJ='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,\${SCOABSPATH:+${install_libdir}/}$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + ;; + + uts4*) + archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec_GCJ='-L$libdir' + hardcode_shlibpath_var_GCJ=no + ;; + + *) + ld_shlibs_GCJ=no + ;; + esac + fi + +{ echo "$as_me:$LINENO: result: $ld_shlibs_GCJ" >&5 +echo "${ECHO_T}$ld_shlibs_GCJ" >&6; } +test "$ld_shlibs_GCJ" = no && can_build_shared=no + +# +# Do we need to explicitly link libc? 
+# +case "x$archive_cmds_need_lc_GCJ" in +x|xyes) + # Assume -lc should be added + archive_cmds_need_lc_GCJ=yes + + if test "$enable_shared" = yes && test "$GCC" = yes; then + case $archive_cmds_GCJ in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + { echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5 +echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6; } + $rm conftest* + echo "$lt_simple_compile_test_code" > conftest.$ac_ext + + if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 + (eval $ac_compile) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } 2>conftest.err; then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$lt_prog_compiler_wl_GCJ + pic_flag=$lt_prog_compiler_pic_GCJ + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. + libname=conftest + lt_save_allow_undefined_flag=$allow_undefined_flag_GCJ + allow_undefined_flag_GCJ= + if { (eval echo "$as_me:$LINENO: \"$archive_cmds_GCJ 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5 + (eval $archive_cmds_GCJ 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5 + ac_status=$? + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } + then + archive_cmds_need_lc_GCJ=no + else + archive_cmds_need_lc_GCJ=yes + fi + allow_undefined_flag_GCJ=$lt_save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi + $rm conftest* + { echo "$as_me:$LINENO: result: $archive_cmds_need_lc_GCJ" >&5 +echo "${ECHO_T}$archive_cmds_need_lc_GCJ" >&6; } + ;; + esac + fi + ;; +esac + +{ echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5 +echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6; } +library_names_spec= +libname_spec='lib$name' +soname_spec= +shrext_cmds=".so" +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" + +need_lib_prefix=unknown +hardcode_into_libs=no + +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +need_version=unknown + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX 3 has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}${shared_ext}$major' + ;; + +aix4* | aix5*) + version_type=linux + need_lib_prefix=no + need_version=no + hardcode_into_libs=yes + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. 
+ case $host_os in + aix4 | aix4.[01] | aix4.[01].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can not hardcode correct + # soname into executable. Probably we can add versioning support to + # collect2, so additional links can be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}${shared_ext}$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. + finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}${shared_ext}' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi[45]*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + shrext_cmds=".dll" + need_version=no + need_lib_prefix=no + + case $GCC,$host_os in + yes,cygwin* | yes,mingw* | yes,pw32*) + library_names_spec='$libname.dll.a' + # DLL is installed to $(libdir)/../bin by postinstall_cmds + postinstall_cmds='base_file=`basename \${file}`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog $dir/$dlname \$dldir/$dlname~ + chmod a+x \$dldir/$dlname' + postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + shlibpath_overrides_runpath=yes + + case $host_os in + cygwin*) + # Cygwin DLLs use 'cyg' prefix rather than 'lib' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib" + ;; + mingw*) + # MinGW DLLs use traditional 'lib' prefix + soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"` + if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then + # It is most probably a Windows format PATH printed by + # mingw gcc, but we are running on Cygwin. Gcc prints its search + # path with ; separators, and with drive letters. We can handle the + # drive letters (cygwin fileutils understands them), so leave them, + # especially as we might pass files found there to a mingw objdump, + # which wouldn't understand a cygwinified path. Ahh. + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` + else + sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` + fi + ;; + pw32*) + # pw32 DLLs use 'pw' prefix rather than 'lib' + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' + ;; + esac + ;; + + *) + library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext' + soname_spec='${libname}${release}${major}$shared_ext' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' + + sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd* | dragonfly*) + # DragonFly does not have aout. When/if they implement a new + # versioning mechanism, adjust this. 
+ if test -x /usr/bin/objformat; then + objformat=`/usr/bin/objformat` + else + case $host_os in + freebsd[123]*) objformat=aout ;; + *) objformat=elf ;; + esac + fi + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + freebsd3.[01]* | freebsdelf3.[01]*) + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ + freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + *) # from 4.6 on, and DragonFly + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + version_type=sunos + need_lib_prefix=no + need_version=no + case $host_cpu in + ia64*) + shrext_cmds='.so' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.so" + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + if test "X$HPUX_IA64_MODE" = X32; then + sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" + else + sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" + fi + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + hppa*64*) + shrext_cmds='.sl' + hardcode_into_libs=yes + dynamic_linker="$host_os dld.sl" + shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH + shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + ;; + *) + shrext_cmds='.sl' + dynamic_linker="$host_os dld.sl" + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + ;; + esac + # HP-UX runs *really* slowly unless shared libraries are mode 555. 
+ postinstall_cmds='chmod 555 $lib' + ;; + +interix[3-9]*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + +irix5* | irix6* | nonstopux*) + case $host_os in + nonstopux*) version_type=nonstopux ;; + *) + if test "$lt_cv_prog_gnu_ld" = yes; then + version_type=linux + else + version_type=irix + fi ;; + esac + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' + case $host_os in + irix5* | nonstopux*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") + libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") + libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") + libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + hardcode_into_libs=yes + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux*oldld* | linux*aout* | linux*coff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. +linux* | k*bsd*-gnu) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. + hardcode_into_libs=yes + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + + # Append ld.so.conf contents to the search path + if test -f /etc/ld.so.conf; then + lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;/^$/d' | tr '\n' ' '` + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. 
+ dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +nto-qnx*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + sys_lib_dlsearch_path_spec="/usr/lib" + need_lib_prefix=no + # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. + case $host_os in + openbsd3.3 | openbsd3.3.*) need_version=yes ;; + *) need_version=no ;; + esac + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case $host_os in + openbsd2.[89] | openbsd2.[89].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + ;; + +os2*) + libname_spec='$name' + shrext_cmds=".dll" + need_lib_prefix=no + library_names_spec='$libname${shared_ext} $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +rdos*) + dynamic_linker=no + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.3*) + version_type=linux + 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + need_lib_prefix=no + export_dynamic_flag_spec='${wl}-Blargedynsym' + runpath_var=LD_RUN_PATH + ;; + siemens) + need_lib_prefix=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' + soname_spec='$libname${shared_ext}.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) + version_type=freebsd-elf + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + if test "$with_gnu_ld" = yes; then + sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' + shlibpath_overrides_runpath=no + else + sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' + shlibpath_overrides_runpath=yes + case $host_os in + sco3.2v5*) + sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" + ;; + esac + fi + sys_lib_dlsearch_path_spec='/usr/lib' + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +*) + dynamic_linker=no + ;; +esac +{ echo "$as_me:$LINENO: result: $dynamic_linker" >&5 +echo "${ECHO_T}$dynamic_linker" >&6; } +test "$dynamic_linker" = no && can_build_shared=no + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi + +{ echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5 +echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6; } +hardcode_action_GCJ= +if test -n "$hardcode_libdir_flag_spec_GCJ" || \ + test -n "$runpath_var_GCJ" || \ + test "X$hardcode_automatic_GCJ" = "Xyes" ; then + + # We can hardcode non-existant directories. + if test "$hardcode_direct_GCJ" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, GCJ)" != no && + test "$hardcode_minus_L_GCJ" != no; then + # Linking always hardcodes the temporary library directory. + hardcode_action_GCJ=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. + hardcode_action_GCJ=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. 
+ hardcode_action_GCJ=unsupported +fi +{ echo "$as_me:$LINENO: result: $hardcode_action_GCJ" >&5 +echo "${ECHO_T}$hardcode_action_GCJ" >&6; } + +if test "$hardcode_action_GCJ" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi + + +# The else clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. +if test -f "$ltmain"; then + # See if we are running on zsh, and set the options which allow our commands through + # without removal of \ escapes. + if test -n "${ZSH_VERSION+set}" ; then + setopt NO_GLOB_SUBST + fi + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC LTCFLAGS NM \ + SED SHELL STRIP \ + libname_spec library_names_spec soname_spec extract_expsyms_cmds \ + old_striplib striplib file_magic_cmd finish_cmds finish_eval \ + deplibs_check_method reload_flag reload_cmds need_locks \ + lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \ + lt_cv_sys_global_symbol_to_c_name_address \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + old_postinstall_cmds old_postuninstall_cmds \ + compiler_GCJ \ + CC_GCJ \ + LD_GCJ \ + lt_prog_compiler_wl_GCJ \ + lt_prog_compiler_pic_GCJ \ + lt_prog_compiler_static_GCJ \ + lt_prog_compiler_no_builtin_flag_GCJ \ + export_dynamic_flag_spec_GCJ \ + thread_safe_flag_spec_GCJ \ + whole_archive_flag_spec_GCJ \ + enable_shared_with_static_runtimes_GCJ \ + old_archive_cmds_GCJ \ + old_archive_from_new_cmds_GCJ \ + predep_objects_GCJ \ + postdep_objects_GCJ \ + predeps_GCJ \ + postdeps_GCJ \ + compiler_lib_search_path_GCJ \ + archive_cmds_GCJ \ + archive_expsym_cmds_GCJ \ + postinstall_cmds_GCJ \ + postuninstall_cmds_GCJ \ + old_archive_from_expsyms_cmds_GCJ \ + allow_undefined_flag_GCJ \ + no_undefined_flag_GCJ \ + export_symbols_cmds_GCJ \ + hardcode_libdir_flag_spec_GCJ \ + hardcode_libdir_flag_spec_ld_GCJ \ + hardcode_libdir_separator_GCJ \ + hardcode_automatic_GCJ \ + module_cmds_GCJ \ + module_expsym_cmds_GCJ \ + lt_cv_prog_compiler_c_o_GCJ \ + fix_srcfile_path_GCJ \ + exclude_expsyms_GCJ \ + include_expsyms_GCJ; do + + case $var in + old_archive_cmds_GCJ | \ + old_archive_from_new_cmds_GCJ | \ + archive_cmds_GCJ | \ + archive_expsym_cmds_GCJ | \ + module_cmds_GCJ | \ + module_expsym_cmds_GCJ | \ + old_archive_from_expsyms_cmds_GCJ | \ + export_symbols_cmds_GCJ | \ + extract_expsyms_cmds | reload_cmds | finish_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. 
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + case $lt_echo in + *'\$0 --fallback-echo"') + lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'` + ;; + esac + +cfgfile="$ofile" + + cat <<__EOF__ >> "$cfgfile" +# ### BEGIN LIBTOOL TAG CONFIG: $tagname + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$archive_cmds_need_lc_GCJ + +# Whether or not to disallow shared libs when runtime libs are static +allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_GCJ + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host +host_os=$host_os + +# The build system. +build_alias=$build_alias +build=$build +build_os=$build_os + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# A C compiler. +LTCC=$lt_LTCC + +# LTCC compiler flags. +LTCFLAGS=$lt_LTCFLAGS + +# A language-specific compiler. +CC=$lt_compiler_GCJ + +# Is the compiler the GNU C compiler? +with_gcc=$GCC_GCJ + +# An ERE matcher. +EGREP=$lt_EGREP + +# The linker used to build libraries. +LD=$lt_LD_GCJ + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$lt_STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_lt_prog_compiler_wl_GCJ + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Shared library suffix (normally ".so"). +shrext_cmds='$shrext_cmds' + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_lt_prog_compiler_pic_GCJ +pic_mode=$pic_mode + +# What is the maximum length of a command? +max_cmd_len=$lt_cv_sys_max_cmd_len + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_lt_cv_prog_compiler_c_o_GCJ + +# Must we lock files when doing compilation? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. 
+link_static_flag=$lt_lt_prog_compiler_static_GCJ + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_GCJ + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_GCJ + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec_GCJ + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec_GCJ + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds_GCJ +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_GCJ + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_GCJ + +# Commands used to build and install a shared archive. +archive_cmds=$lt_archive_cmds_GCJ +archive_expsym_cmds=$lt_archive_expsym_cmds_GCJ +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands used to build a loadable module (assumed same as above if empty) +module_cmds=$lt_module_cmds_GCJ +module_expsym_cmds=$lt_module_expsym_cmds_GCJ + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Dependencies to place before the objects being linked to create a +# shared library. +predep_objects=$lt_predep_objects_GCJ + +# Dependencies to place after the objects being linked to create a +# shared library. +postdep_objects=$lt_postdep_objects_GCJ + +# Dependencies to place before the objects being linked to create a +# shared library. +predeps=$lt_predeps_GCJ + +# Dependencies to place after the objects being linked to create a +# shared library. +postdeps=$lt_postdeps_GCJ + +# The library search path used internally by the compiler when linking +# a shared library. +compiler_lib_search_path=$lt_compiler_lib_search_path_GCJ + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag_GCJ + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag_GCJ + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. 
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action_GCJ + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_GCJ + +# If ld is used when linking, flag to hardcode \$libdir into +# a binary during linking. This must work even if \$libdir does +# not exist. +hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_GCJ + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator_GCJ + +# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$hardcode_direct_GCJ + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L_GCJ + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$hardcode_shlibpath_var_GCJ + +# Set to yes if building a shared library automatically hardcodes DIR into the library +# and all subsequent libraries and executables linked against it. +hardcode_automatic=$hardcode_automatic_GCJ + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$link_all_deplibs_GCJ + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path=$lt_fix_srcfile_path + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols_GCJ + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds_GCJ + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms_GCJ + +# Symbols that must always be exported. +include_expsyms=$lt_include_expsyms_GCJ + +# ### END LIBTOOL TAG CONFIG: $tagname + +__EOF__ + + +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. 
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'` + if test -f "$ltmain_in"; then + test -f Makefile && make "$ltmain" + fi +fi + + +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + +CC="$lt_save_CC" + + else + tagname="" + fi + ;; + + RC) + + +# Source file extension for RC test sources. +ac_ext=rc + +# Object file extension for compiled RC test sources. +objext=o +objext_RC=$objext + +# Code to be used in simple compile tests +lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }' + +# Code to be used in simple link tests +lt_simple_link_test_code="$lt_simple_compile_test_code" + +# ltmain only uses $CC for tagged configurations so make sure $CC is set. + +# If no C compiler was specified, use CC. +LTCC=${LTCC-"$CC"} + +# If no C compiler flags were specified, use CFLAGS. +LTCFLAGS=${LTCFLAGS-"$CFLAGS"} + +# Allow CC to be a program name with arguments. +compiler=$CC + + +# save warnings/boilerplate of simple test code +ac_outfile=conftest.$ac_objext +echo "$lt_simple_compile_test_code" >conftest.$ac_ext +eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_compiler_boilerplate=`cat conftest.err` +$rm conftest* + +ac_outfile=conftest.$ac_objext +echo "$lt_simple_link_test_code" >conftest.$ac_ext +eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err +_lt_linker_boilerplate=`cat conftest.err` +$rm conftest* + + +# Allow CC to be a program name with arguments. +lt_save_CC="$CC" +CC=${RC-"windres"} +compiler=$CC +compiler_RC=$CC +for cc_temp in $compiler""; do + case $cc_temp in + compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; + distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; + \-*) ;; + *) break;; + esac +done +cc_basename=`$echo "X$cc_temp" | $Xsed -e 's%.*/%%' -e "s%^$host_alias-%%"` + +lt_cv_prog_compiler_c_o_RC=yes + +# The else clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. +if test -f "$ltmain"; then + # See if we are running on zsh, and set the options which allow our commands through + # without removal of \ escapes. + if test -n "${ZSH_VERSION+set}" ; then + setopt NO_GLOB_SUBST + fi + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. 
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC LTCFLAGS NM \ + SED SHELL STRIP \ + libname_spec library_names_spec soname_spec extract_expsyms_cmds \ + old_striplib striplib file_magic_cmd finish_cmds finish_eval \ + deplibs_check_method reload_flag reload_cmds need_locks \ + lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \ + lt_cv_sys_global_symbol_to_c_name_address \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + old_postinstall_cmds old_postuninstall_cmds \ + compiler_RC \ + CC_RC \ + LD_RC \ + lt_prog_compiler_wl_RC \ + lt_prog_compiler_pic_RC \ + lt_prog_compiler_static_RC \ + lt_prog_compiler_no_builtin_flag_RC \ + export_dynamic_flag_spec_RC \ + thread_safe_flag_spec_RC \ + whole_archive_flag_spec_RC \ + enable_shared_with_static_runtimes_RC \ + old_archive_cmds_RC \ + old_archive_from_new_cmds_RC \ + predep_objects_RC \ + postdep_objects_RC \ + predeps_RC \ + postdeps_RC \ + compiler_lib_search_path_RC \ + archive_cmds_RC \ + archive_expsym_cmds_RC \ + postinstall_cmds_RC \ + postuninstall_cmds_RC \ + old_archive_from_expsyms_cmds_RC \ + allow_undefined_flag_RC \ + no_undefined_flag_RC \ + export_symbols_cmds_RC \ + hardcode_libdir_flag_spec_RC \ + hardcode_libdir_flag_spec_ld_RC \ + hardcode_libdir_separator_RC \ + hardcode_automatic_RC \ + module_cmds_RC \ + module_expsym_cmds_RC \ + lt_cv_prog_compiler_c_o_RC \ + fix_srcfile_path_RC \ + exclude_expsyms_RC \ + include_expsyms_RC; do + + case $var in + old_archive_cmds_RC | \ + old_archive_from_new_cmds_RC | \ + archive_cmds_RC | \ + archive_expsym_cmds_RC | \ + module_cmds_RC | \ + module_expsym_cmds_RC | \ + old_archive_from_expsyms_cmds_RC | \ + export_symbols_cmds_RC | \ + extract_expsyms_cmds | reload_cmds | finish_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + case $lt_echo in + *'\$0 --fallback-echo"') + lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'` + ;; + esac + +cfgfile="$ofile" + + cat <<__EOF__ >> "$cfgfile" +# ### BEGIN LIBTOOL TAG CONFIG: $tagname + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$archive_cmds_need_lc_RC + +# Whether or not to disallow shared libs when runtime libs are static +allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_RC + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host +host_os=$host_os + +# The build system. +build_alias=$build_alias +build=$build +build_os=$build_os + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# A C compiler. +LTCC=$lt_LTCC + +# LTCC compiler flags. +LTCFLAGS=$lt_LTCFLAGS + +# A language-specific compiler. 
+CC=$lt_compiler_RC + +# Is the compiler the GNU C compiler? +with_gcc=$GCC_RC + +# An ERE matcher. +EGREP=$lt_EGREP + +# The linker used to build libraries. +LD=$lt_LD_RC + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$lt_STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_lt_prog_compiler_wl_RC + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Shared library suffix (normally ".so"). +shrext_cmds='$shrext_cmds' + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_lt_prog_compiler_pic_RC +pic_mode=$pic_mode + +# What is the maximum length of a command? +max_cmd_len=$lt_cv_sys_max_cmd_len + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_lt_cv_prog_compiler_c_o_RC + +# Must we lock files when doing compilation? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. +link_static_flag=$lt_lt_prog_compiler_static_RC + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_RC + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_RC + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec_RC + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec_RC + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds_RC +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_RC + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_RC + +# Commands used to build and install a shared archive. 
+archive_cmds=$lt_archive_cmds_RC +archive_expsym_cmds=$lt_archive_expsym_cmds_RC +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands used to build a loadable module (assumed same as above if empty) +module_cmds=$lt_module_cmds_RC +module_expsym_cmds=$lt_module_expsym_cmds_RC + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Dependencies to place before the objects being linked to create a +# shared library. +predep_objects=$lt_predep_objects_RC + +# Dependencies to place after the objects being linked to create a +# shared library. +postdep_objects=$lt_postdep_objects_RC + +# Dependencies to place before the objects being linked to create a +# shared library. +predeps=$lt_predeps_RC + +# Dependencies to place after the objects being linked to create a +# shared library. +postdeps=$lt_postdeps_RC + +# The library search path used internally by the compiler when linking +# a shared library. +compiler_lib_search_path=$lt_compiler_lib_search_path_RC + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag_RC + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag_RC + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. +global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action_RC + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_RC + +# If ld is used when linking, flag to hardcode \$libdir into +# a binary during linking. This must work even if \$libdir does +# not exist. +hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_RC + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator_RC + +# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$hardcode_direct_RC + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L_RC + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. 
+hardcode_shlibpath_var=$hardcode_shlibpath_var_RC + +# Set to yes if building a shared library automatically hardcodes DIR into the library +# and all subsequent libraries and executables linked against it. +hardcode_automatic=$hardcode_automatic_RC + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$link_all_deplibs_RC + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path=$lt_fix_srcfile_path + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols_RC + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds_RC + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms_RC + +# Symbols that must always be exported. +include_expsyms=$lt_include_expsyms_RC + +# ### END LIBTOOL TAG CONFIG: $tagname + +__EOF__ + +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. + ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'` + if test -f "$ltmain_in"; then + test -f Makefile && make "$ltmain" + fi fi -fi -{ echo "$as_me:$LINENO: result: $ac_cv_header_stdc" >&5 -echo "${ECHO_T}$ac_cv_header_stdc" >&6; } -if test $ac_cv_header_stdc = yes; then -cat >>confdefs.h <<\_ACEOF -#define STDC_HEADERS 1 -_ACEOF +ac_ext=c +ac_cpp='$CPP $CPPFLAGS' +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' +ac_compiler_gnu=$ac_cv_c_compiler_gnu + +CC="$lt_save_CC" + + ;; + + *) + { { echo "$as_me:$LINENO: error: Unsupported tag name: $tagname" >&5 +echo "$as_me: error: Unsupported tag name: $tagname" >&2;} + { (exit 1); exit 1; }; } + ;; + esac + + # Append the new tag name to the list of available tags. + if test -n "$tagname" ; then + available_tags="$available_tags $tagname" + fi + fi + done + IFS="$lt_save_ifs" + + # Now substitute the updated list of available tags. + if eval "sed -e 's/^available_tags=.*\$/available_tags=\"$available_tags\"/' \"$ofile\" > \"${ofile}T\""; then + mv "${ofile}T" "$ofile" + chmod +x "$ofile" + else + rm -f "${ofile}T" + { { echo "$as_me:$LINENO: error: unable to update list of available tagged configurations." >&5 +echo "$as_me: error: unable to update list of available tagged configurations." >&2;} + { (exit 1); exit 1; }; } + fi fi -# On IRIX 5.3, sys/types and inttypes.h are conflicting. +# This can be used to rebuild libtool when needed +LIBTOOL_DEPS="$ac_aux_dir/ltmain.sh" +# Always use our own libtool. +LIBTOOL='$(SHELL) $(top_builddir)/libtool' +# Prevent multiple expansion -for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ - inttypes.h stdint.h unistd.h -do -as_ac_Header=`echo "ac_cv_header_$ac_header" | $as_tr_sh` -{ echo "$as_me:$LINENO: checking for $ac_header" >&5 -echo $ECHO_N "checking for $ac_header... 
$ECHO_C" >&6; } -if { as_var=$as_ac_Header; eval "test \"\${$as_var+set}\" = set"; }; then - echo $ECHO_N "(cached) $ECHO_C" >&6 -else - cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -$ac_includes_default -#include <$ac_header> -_ACEOF -rm -f conftest.$ac_objext -if { (ac_try="$ac_compile" -case "(($ac_try" in - *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; - *) ac_try_echo=$ac_try;; -esac -eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_compile") 2>conftest.er1 - ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } && { - test -z "$ac_c_werror_flag" || - test ! -s conftest.err - } && test -s conftest.$ac_objext; then - eval "$as_ac_Header=yes" -else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - eval "$as_ac_Header=no" -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + + + + + + + + + + + + + +{ echo "$as_me:$LINENO: checking whether to enable maintainer-specific portions of Makefiles" >&5 +echo $ECHO_N "checking whether to enable maintainer-specific portions of Makefiles... $ECHO_C" >&6; } + # Check whether --enable-maintainer-mode was given. +if test "${enable_maintainer_mode+set}" = set; then + enableval=$enable_maintainer_mode; USE_MAINTAINER_MODE=$enableval +else + USE_MAINTAINER_MODE=no fi -ac_res=`eval echo '${'$as_ac_Header'}'` - { echo "$as_me:$LINENO: result: $ac_res" >&5 -echo "${ECHO_T}$ac_res" >&6; } -if test `eval echo '${'$as_ac_Header'}'` = yes; then - cat >>confdefs.h <<_ACEOF -#define `echo "HAVE_$ac_header" | $as_tr_cpp` 1 -_ACEOF + { echo "$as_me:$LINENO: result: $USE_MAINTAINER_MODE" >&5 +echo "${ECHO_T}$USE_MAINTAINER_MODE" >&6; } + if test $USE_MAINTAINER_MODE = yes; then + MAINTAINER_MODE_TRUE= + MAINTAINER_MODE_FALSE='#' +else + MAINTAINER_MODE_TRUE='#' + MAINTAINER_MODE_FALSE= fi -done + MAINT=$MAINTAINER_MODE_TRUE + @@ -4043,73 +20351,328 @@ fi + if test -d $srcdir/testsuite; then + TESTSUBDIR_TRUE= + TESTSUBDIR_FALSE='#' +else + TESTSUBDIR_TRUE='#' + TESTSUBDIR_FALSE= +fi + + TARGETDIR="unknown" case "$host" in -x86_64-*-openbsd*) TARGET=X86_64; TARGETDIR=x86;; -mips*-*-openbsd*) TARGET=MIPS; TARGETDIR=mips;; -sparc-*-openbsd*) TARGET=SPARC; TARGETDIR=sparc;; -sparc64-*-openbsd*) TARGET=SPARC; TARGETDIR=sparc;; -alpha*-*-openbsd*) TARGET=ALPHA; TARGETDIR=alpha;; -m68k-*-openbsd*) TARGET=M68K; TARGETDIR=m68k;; -powerpc-*-openbsd*) TARGET=POWERPC; TARGETDIR=powerpc;; -i*86-*-darwin*) TARGET=X86_DARWIN; TARGETDIR=x86;; -i*86-*-linux*) TARGET=X86; TARGETDIR=x86;; -i*86-*-gnu*) TARGET=X86; TARGETDIR=x86;; -i*86-*-solaris2.1[0-9]*) TARGET=X86_64; TARGETDIR=x86;; -i*86-*-solaris*) TARGET=X86; TARGETDIR=x86;; -i*86-*-beos*) TARGET=X86; TARGETDIR=x86;; -i*86-*-freebsd* | i*86-*-kfreebsd*-gnu) TARGET=X86; TARGETDIR=x86;; -i*86-*-netbsdelf* | i*86-*-knetbsd*-gnu) TARGET=X86; TARGETDIR=x86;; -i*86-*-openbsd*) TARGET=X86; TARGETDIR=x86;; -i*86-*-rtems*) TARGET=X86; TARGETDIR=x86;; -i*86-*-win32*) TARGET=X86_WIN32; TARGETDIR=x86;; -i*86-*-cygwin*) TARGET=X86_WIN32; TARGETDIR=x86;; -i*86-*-mingw*) TARGET=X86_WIN32; TARGETDIR=x86;; -frv-*-*) TARGET=FRV; TARGETDIR=frv;; -sparc-sun-4*) TARGET=SPARC; TARGETDIR=sparc;; -sparc*-sun-*) TARGET=SPARC; TARGETDIR=sparc;; -sparc-*-linux* | sparc-*-netbsdelf* | sparc-*-knetbsd*-gnu) TARGET=SPARC; TARGETDIR=sparc;; -sparc*-*-rtems*) TARGET=SPARC; 
TARGETDIR=sparc;; -sparc64-*-linux* | sparc64-*-freebsd* | sparc64-*-netbsd* | sparc64-*-knetbsd*-gnu) TARGET=SPARC; TARGETDIR=sparc;; -alpha*-*-linux* | alpha*-*-osf* | alpha*-*-freebsd* | alpha*-*-kfreebsd*-gnu | alpha*-*-netbsd* | alpha*-*-knetbsd*-gnu) TARGET=ALPHA; TARGETDIR=alpha;; -ia64*-*-*) TARGET=IA64; TARGETDIR=ia64;; -m32r*-*-linux* ) TARGET=M32R; TARGETDIR=m32r;; -m68k-*-linux*) TARGET=M68K; TARGETDIR=m68k;; -mips64*-*);; -mips-sgi-irix5.* | mips-sgi-irix6.*) TARGET=MIPS_IRIX; TARGETDIR=mips;; -mips*-*-linux*) TARGET=MIPS_LINUX; TARGETDIR=mips;; -powerpc*-*-linux* | powerpc-*-sysv*) TARGET=POWERPC; TARGETDIR=powerpc;; -powerpc-*-beos*) TARGET=POWERPC; TARGETDIR=powerpc;; -powerpc-*-darwin*) TARGET=POWERPC_DARWIN; TARGETDIR=powerpc;; -powerpc-*-aix*) TARGET=POWERPC_AIX; TARGETDIR=powerpc;; -powerpc-*-freebsd*) TARGET=POWERPC_FREEBSD; TARGETDIR=powerpc;; -powerpc*-*-rtems*) TARGET=POWERPC; TARGETDIR=powerpc;; -rs6000-*-aix*) TARGET=POWERPC_AIX; TARGETDIR=powerpc;; -arm*-*-linux-*) TARGET=ARM; TARGETDIR=arm;; -arm*-*-netbsdelf* | arm*-*-knetbsd*-gnu) TARGET=ARM; TARGETDIR=arm;; -arm*-*-rtems*) TARGET=ARM; TARGETDIR=arm;; -cris-*-*) TARGET=LIBFFI_CRIS; TARGETDIR=cris;; -s390-*-linux-*) TARGET=S390; TARGETDIR=s390;; -s390x-*-linux-*) TARGET=S390; TARGETDIR=s390;; -amd64-*-freebsd* | x86_64-*-linux* | x86_64-*-freebsd* | x86_64-*-kfreebsd*-gnu) TARGET=X86_64; TARGETDIR=x86;; -sh-*-linux* | sh[34]*-*-linux*) TARGET=SH; TARGETDIR=sh;; -sh-*-rtems*) TARGET=SH; TARGETDIR=sh;; -sh64-*-linux* | sh5*-*-linux*) TARGET=SH64; TARGETDIR=sh64;; -hppa*-*-linux* | parisc*-*-linux*) TARGET=PA; TARGETDIR=pa;; + alpha*-*-*) + TARGET=ALPHA; TARGETDIR=alpha; + # Support 128-bit long double, changable via command-line switch. + HAVE_LONG_DOUBLE='defined(__LONG_DOUBLE_128__)' + ;; + + arm*-*-*) + TARGET=ARM; TARGETDIR=arm + ;; + + amd64-*-freebsd*) + TARGET=X86_64; TARGETDIR=x86 + ;; + + cris-*-*) + TARGET=LIBFFI_CRIS; TARGETDIR=cris + ;; + + frv-*-*) + TARGET=FRV; TARGETDIR=frv + ;; + + hppa*-*-linux* | parisc*-*-linux*) + TARGET=PA_LINUX; TARGETDIR=pa + ;; + hppa*64-*-hpux*) + TARGET=PA64_HPUX; TARGETDIR=pa + ;; + hppa*-*-hpux*) + TARGET=PA_HPUX; TARGETDIR=pa + ;; + + i386-*-freebsd* | i386-*-openbsd*) + TARGET=X86_FREEBSD; TARGETDIR=x86 + ;; + i?86-win32* | i?86-*-cygwin* | i?86-*-mingw*) + TARGET=X86_WIN32; TARGETDIR=x86 + ;; + i?86-*-darwin*) + TARGET=X86_DARWIN; TARGETDIR=x86 + ;; + i?86-*-solaris2.1[0-9]*) + TARGET=X86_64; TARGETDIR=x86 + ;; + i?86-*-*) + TARGET=X86; TARGETDIR=x86 + ;; + + ia64*-*-*) + TARGET=IA64; TARGETDIR=ia64 + ;; + + m32r*-*-*) + TARGET=M32R; TARGETDIR=m32r + ;; + + m68k-*-*) + TARGET=M68K; TARGETDIR=m68k + ;; + + mips-sgi-irix5.* | mips-sgi-irix6.*) + TARGET=MIPS; TARGETDIR=mips + ;; + mips*-*-linux*) + TARGET=MIPS; TARGETDIR=mips + ;; + + powerpc*-*-linux* | powerpc-*-sysv*) + TARGET=POWERPC; TARGETDIR=powerpc + ;; + powerpc-*-beos*) + TARGET=POWERPC; TARGETDIR=powerpc + ;; + powerpc-*-darwin*) + TARGET=POWERPC_DARWIN; TARGETDIR=powerpc + ;; + powerpc-*-aix* | rs6000-*-aix*) + TARGET=POWERPC_AIX; TARGETDIR=powerpc + ;; + powerpc-*-freebsd*) + TARGET=POWERPC_FREEBSD; TARGETDIR=powerpc + ;; + powerpc*-*-rtems*) + TARGET=POWERPC; TARGETDIR=powerpc + ;; + + s390-*-* | s390x-*-*) + TARGET=S390; TARGETDIR=s390 + ;; + + sh-*-* | sh[34]*-*-*) + TARGET=SH; TARGETDIR=sh + ;; + sh64-*-* | sh5*-*-*) + TARGET=SH64; TARGETDIR=sh64 + ;; + + sparc*-*-*) + TARGET=SPARC; TARGETDIR=sparc + ;; + + x86_64-*-darwin*) + TARGET=X86_DARWIN; TARGETDIR=x86 + ;; + x86_64-*-cygwin* | x86_64-*-mingw*) 
+ ;; + x86_64-*-*) + TARGET=X86_64; TARGETDIR=x86 + ;; esac + + if test $TARGETDIR = unknown; then { { echo "$as_me:$LINENO: error: \"libffi has not been ported to $host.\"" >&5 echo "$as_me: error: \"libffi has not been ported to $host.\"" >&2;} { (exit 1); exit 1; }; } fi -MKTARGET=$TARGET + if test x$TARGET = xMIPS; then + MIPS_TRUE= + MIPS_FALSE='#' +else + MIPS_TRUE='#' + MIPS_FALSE= +fi + + if test x$TARGET = xSPARC; then + SPARC_TRUE= + SPARC_FALSE='#' +else + SPARC_TRUE='#' + SPARC_FALSE= +fi + + if test x$TARGET = xX86; then + X86_TRUE= + X86_FALSE='#' +else + X86_TRUE='#' + X86_FALSE= +fi + + if test x$TARGET = xX86_FREEBSD; then + X86_FREEBSD_TRUE= + X86_FREEBSD_FALSE='#' +else + X86_FREEBSD_TRUE='#' + X86_FREEBSD_FALSE= +fi + + if test x$TARGET = xX86_WIN32; then + X86_WIN32_TRUE= + X86_WIN32_FALSE='#' +else + X86_WIN32_TRUE='#' + X86_WIN32_FALSE= +fi + + if test x$TARGET = xX86_DARWIN; then + X86_DARWIN_TRUE= + X86_DARWIN_FALSE='#' +else + X86_DARWIN_TRUE='#' + X86_DARWIN_FALSE= +fi + + if test x$TARGET = xALPHA; then + ALPHA_TRUE= + ALPHA_FALSE='#' +else + ALPHA_TRUE='#' + ALPHA_FALSE= +fi + + if test x$TARGET = xIA64; then + IA64_TRUE= + IA64_FALSE='#' +else + IA64_TRUE='#' + IA64_FALSE= +fi + + if test x$TARGET = xM32R; then + M32R_TRUE= + M32R_FALSE='#' +else + M32R_TRUE='#' + M32R_FALSE= +fi + + if test x$TARGET = xM68K; then + M68K_TRUE= + M68K_FALSE='#' +else + M68K_TRUE='#' + M68K_FALSE= +fi + + if test x$TARGET = xPOWERPC; then + POWERPC_TRUE= + POWERPC_FALSE='#' +else + POWERPC_TRUE='#' + POWERPC_FALSE= +fi + + if test x$TARGET = xPOWERPC_AIX; then + POWERPC_AIX_TRUE= + POWERPC_AIX_FALSE='#' +else + POWERPC_AIX_TRUE='#' + POWERPC_AIX_FALSE= +fi + + if test x$TARGET = xPOWERPC_DARWIN; then + POWERPC_DARWIN_TRUE= + POWERPC_DARWIN_FALSE='#' +else + POWERPC_DARWIN_TRUE='#' + POWERPC_DARWIN_FALSE= +fi + + if test x$TARGET = xPOWERPC_FREEBSD; then + POWERPC_FREEBSD_TRUE= + POWERPC_FREEBSD_FALSE='#' +else + POWERPC_FREEBSD_TRUE='#' + POWERPC_FREEBSD_FALSE= +fi + + if test x$TARGET = xARM; then + ARM_TRUE= + ARM_FALSE='#' +else + ARM_TRUE='#' + ARM_FALSE= +fi + + if test x$TARGET = xLIBFFI_CRIS; then + LIBFFI_CRIS_TRUE= + LIBFFI_CRIS_FALSE='#' +else + LIBFFI_CRIS_TRUE='#' + LIBFFI_CRIS_FALSE= +fi + + if test x$TARGET = xFRV; then + FRV_TRUE= + FRV_FALSE='#' +else + FRV_TRUE='#' + FRV_FALSE= +fi + + if test x$TARGET = xS390; then + S390_TRUE= + S390_FALSE='#' +else + S390_TRUE='#' + S390_FALSE= +fi + + if test x$TARGET = xX86_64; then + X86_64_TRUE= + X86_64_FALSE='#' +else + X86_64_TRUE='#' + X86_64_FALSE= +fi + + if test x$TARGET = xSH; then + SH_TRUE= + SH_FALSE='#' +else + SH_TRUE='#' + SH_FALSE= +fi + + if test x$TARGET = xSH64; then + SH64_TRUE= + SH64_FALSE='#' +else + SH64_TRUE='#' + SH64_FALSE= +fi + + if test x$TARGET = xPA_LINUX; then + PA_LINUX_TRUE= + PA_LINUX_FALSE='#' +else + PA_LINUX_TRUE='#' + PA_LINUX_FALSE= +fi + + if test x$TARGET = xPA_HPUX; then + PA_HPUX_TRUE= + PA_HPUX_FALSE='#' +else + PA_HPUX_TRUE='#' + PA_HPUX_FALSE= +fi + + if test x$TARGET = xPA64_HPUX; then + PA64_HPUX_TRUE= + PA64_HPUX_FALSE='#' +else + PA64_HPUX_TRUE='#' + PA64_HPUX_FALSE= +fi -case x$TARGET in - xMIPS*) TARGET=MIPS ;; - *) ;; -esac { echo "$as_me:$LINENO: checking for ANSI C header files" >&5 echo $ECHO_N "checking for ANSI C header files... $ECHO_C" >&6; } @@ -5551,15 +22114,17 @@ # Also AC_SUBST this variable for ffi.h. 
-HAVE_LONG_DOUBLE=0 -if test $ac_cv_sizeof_double != $ac_cv_sizeof_long_double; then - if test $ac_cv_sizeof_long_double != 0; then - HAVE_LONG_DOUBLE=1 +if test -z "$HAVE_LONG_DOUBLE"; then + HAVE_LONG_DOUBLE=0 + if test $ac_cv_sizeof_double != $ac_cv_sizeof_long_double; then + if test $ac_cv_sizeof_long_double != 0; then + HAVE_LONG_DOUBLE=1 cat >>confdefs.h <<\_ACEOF #define HAVE_LONG_DOUBLE 1 _ACEOF + fi fi fi @@ -5801,8 +22366,65 @@ esac +{ echo "$as_me:$LINENO: checking assembler .cfi pseudo-op support" >&5 +echo $ECHO_N "checking assembler .cfi pseudo-op support... $ECHO_C" >&6; } +if test "${libffi_cv_as_cfi_pseudo_op+set}" = set; then + echo $ECHO_N "(cached) $ECHO_C" >&6 +else + + libffi_cv_as_cfi_pseudo_op=unknown + cat >conftest.$ac_ext <<_ACEOF +/* confdefs.h. */ +_ACEOF +cat confdefs.h >>conftest.$ac_ext +cat >>conftest.$ac_ext <<_ACEOF +/* end confdefs.h. */ +asm (".cfi_startproc\n\t.cfi_endproc"); +int +main () +{ + + ; + return 0; +} +_ACEOF +rm -f conftest.$ac_objext +if { (ac_try="$ac_compile" +case "(($ac_try" in + *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; + *) ac_try_echo=$ac_try;; +esac +eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 + (eval "$ac_compile") 2>conftest.er1 + ac_status=$? + grep -v '^ *+' conftest.er1 >conftest.err + rm -f conftest.er1 + cat conftest.err >&5 + echo "$as_me:$LINENO: \$? = $ac_status" >&5 + (exit $ac_status); } && { + test -z "$ac_c_werror_flag" || + test ! -s conftest.err + } && test -s conftest.$ac_objext; then + libffi_cv_as_cfi_pseudo_op=yes +else + echo "$as_me: failed program was:" >&5 +sed 's/^/| /' conftest.$ac_ext >&5 + + libffi_cv_as_cfi_pseudo_op=no +fi + +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + +fi +{ echo "$as_me:$LINENO: result: $libffi_cv_as_cfi_pseudo_op" >&5 +echo "${ECHO_T}$libffi_cv_as_cfi_pseudo_op" >&6; } +if test "x$libffi_cv_as_cfi_pseudo_op" = xyes; then +cat >>confdefs.h <<\_ACEOF +#define HAVE_AS_CFI_PSEUDO_OP 1 +_ACEOF +fi if test x$TARGET = xSPARC; then { echo "$as_me:$LINENO: checking assembler and linker support unaligned pc related relocs" >&5 @@ -6012,32 +22634,91 @@ +# Check whether --enable-debug was given. +if test "${enable_debug+set}" = set; then + enableval=$enable_debug; if test "$enable_debug" = "yes"; then + +cat >>confdefs.h <<\_ACEOF +#define FFI_DEBUG 1 +_ACEOF + + fi +fi + + +# Check whether --enable-structs was given. +if test "${enable_structs+set}" = set; then + enableval=$enable_structs; if test "$enable_structs" = "no"; then + +cat >>confdefs.h <<\_ACEOF +#define FFI_NO_STRUCTS 1 +_ACEOF + + fi +fi + +# Check whether --enable-raw-api was given. +if test "${enable_raw_api+set}" = set; then + enableval=$enable_raw_api; if test "$enable_raw_api" = "no"; then cat >>confdefs.h <<\_ACEOF #define FFI_NO_RAW_API 1 _ACEOF + fi +fi + + +# Check whether --enable-purify-safety was given. +if test "${enable_purify_safety+set}" = set; then + enableval=$enable_purify_safety; if test "$enable_purify_safety" = "yes"; then + +cat >>confdefs.h <<\_ACEOF +#define USING_PURIFY 1 +_ACEOF + + fi +fi + + +if test -n "$with_cross_host" && + test x"$with_cross_host" != x"no"; then + toolexecdir='$(exec_prefix)/$(target_alias)' + toolexeclibdir='$(toolexecdir)/lib' +else + toolexecdir='$(libdir)/gcc-lib/$(target_alias)' + toolexeclibdir='$(libdir)' +fi +multi_os_directory=`$CC -print-multi-os-directory` +case $multi_os_directory in + .) ;; # Avoid trailing /. 
+ *) toolexeclibdir=$toolexeclibdir/$multi_os_directory ;; +esac + + + +if test "${multilib}" = "yes"; then + multilib_arg="--enable-multilib" +else + multilib_arg= +fi ac_config_commands="$ac_config_commands include" ac_config_commands="$ac_config_commands src" -TARGETINCDIR=$TARGETDIR -case $host in -*-*-darwin*) - TARGETINCDIR="darwin" - ;; -esac +ac_config_links="$ac_config_links include/ffitarget.h:src/$TARGETDIR/ffitarget.h" + +ac_config_files="$ac_config_files include/ffi.h" -ac_config_links="$ac_config_links include/ffitarget.h:src/$TARGETINCDIR/ffitarget.h" ac_config_links="$ac_config_links include/ffi_common.h:include/ffi_common.h" -ac_config_files="$ac_config_files include/ffi.h fficonfig.py" +ac_config_files="$ac_config_files fficonfig.py" cat >confcache <<\_ACEOF @@ -6136,6 +22817,216 @@ LTLIBOBJS=$ac_ltlibobjs +if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"AMDEP\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"AMDEP\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"am__fastdepCC\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"am__fastdepCC\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${am__fastdepCCAS_TRUE}" && test -z "${am__fastdepCCAS_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"am__fastdepCCAS\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"am__fastdepCCAS\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${am__fastdepCXX_TRUE}" && test -z "${am__fastdepCXX_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"am__fastdepCXX\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"am__fastdepCXX\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${MAINTAINER_MODE_TRUE}" && test -z "${MAINTAINER_MODE_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"MAINTAINER_MODE\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"MAINTAINER_MODE\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${TESTSUBDIR_TRUE}" && test -z "${TESTSUBDIR_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"TESTSUBDIR\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"TESTSUBDIR\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${MIPS_TRUE}" && test -z "${MIPS_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"MIPS\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"MIPS\" was never defined. +Usually this means the macro was only invoked conditionally." 
>&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${SPARC_TRUE}" && test -z "${SPARC_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"SPARC\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"SPARC\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${X86_TRUE}" && test -z "${X86_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"X86\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"X86\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${X86_FREEBSD_TRUE}" && test -z "${X86_FREEBSD_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"X86_FREEBSD\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"X86_FREEBSD\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${X86_WIN32_TRUE}" && test -z "${X86_WIN32_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"X86_WIN32\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"X86_WIN32\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${X86_DARWIN_TRUE}" && test -z "${X86_DARWIN_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"X86_DARWIN\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"X86_DARWIN\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${ALPHA_TRUE}" && test -z "${ALPHA_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"ALPHA\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"ALPHA\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${IA64_TRUE}" && test -z "${IA64_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"IA64\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"IA64\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${M32R_TRUE}" && test -z "${M32R_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"M32R\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"M32R\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${M68K_TRUE}" && test -z "${M68K_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"M68K\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"M68K\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${POWERPC_TRUE}" && test -z "${POWERPC_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"POWERPC\" was never defined. 
+Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"POWERPC\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${POWERPC_AIX_TRUE}" && test -z "${POWERPC_AIX_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"POWERPC_AIX\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"POWERPC_AIX\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${POWERPC_DARWIN_TRUE}" && test -z "${POWERPC_DARWIN_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"POWERPC_DARWIN\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"POWERPC_DARWIN\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${POWERPC_FREEBSD_TRUE}" && test -z "${POWERPC_FREEBSD_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"POWERPC_FREEBSD\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"POWERPC_FREEBSD\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${ARM_TRUE}" && test -z "${ARM_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"ARM\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"ARM\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${LIBFFI_CRIS_TRUE}" && test -z "${LIBFFI_CRIS_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"LIBFFI_CRIS\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"LIBFFI_CRIS\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${FRV_TRUE}" && test -z "${FRV_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"FRV\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"FRV\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${S390_TRUE}" && test -z "${S390_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"S390\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"S390\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${X86_64_TRUE}" && test -z "${X86_64_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"X86_64\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"X86_64\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${SH_TRUE}" && test -z "${SH_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"SH\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"SH\" was never defined. 
+Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${SH64_TRUE}" && test -z "${SH64_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"SH64\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"SH64\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${PA_LINUX_TRUE}" && test -z "${PA_LINUX_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"PA_LINUX\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"PA_LINUX\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${PA_HPUX_TRUE}" && test -z "${PA_HPUX_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"PA_HPUX\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"PA_HPUX\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi +if test -z "${PA64_HPUX_TRUE}" && test -z "${PA64_HPUX_FALSE}"; then + { { echo "$as_me:$LINENO: error: conditional \"PA64_HPUX\" was never defined. +Usually this means the macro was only invoked conditionally." >&5 +echo "$as_me: error: conditional \"PA64_HPUX\" was never defined. +Usually this means the macro was only invoked conditionally." >&2;} + { (exit 1); exit 1; }; } +fi : ${CONFIG_STATUS=./config.status} ac_clean_files_save=$ac_clean_files @@ -6436,7 +23327,7 @@ # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" -This file was extended by libffi $as_me 2.1, which was +This file was extended by libffi $as_me 3.0.4, which was generated by GNU Autoconf 2.61. 
Invocation command line was CONFIG_FILES = $CONFIG_FILES @@ -6493,7 +23384,7 @@ _ACEOF cat >>$CONFIG_STATUS <<_ACEOF ac_cs_version="\\ -libffi config.status 2.1 +libffi config.status 3.0.4 configured by $0, generated by GNU Autoconf 2.61, with options \\"`echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`\\" @@ -6503,6 +23394,8 @@ ac_pwd='$ac_pwd' srcdir='$srcdir' +INSTALL='$INSTALL' +MKDIR_P='$MKDIR_P' _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF @@ -6595,6 +23488,7 @@ # # INIT-COMMANDS # +AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir" TARGETDIR="$TARGETDIR" _ACEOF @@ -6606,11 +23500,12 @@ do case $ac_config_target in "fficonfig.h") CONFIG_HEADERS="$CONFIG_HEADERS fficonfig.h" ;; + "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "include") CONFIG_COMMANDS="$CONFIG_COMMANDS include" ;; "src") CONFIG_COMMANDS="$CONFIG_COMMANDS src" ;; - "include/ffitarget.h") CONFIG_LINKS="$CONFIG_LINKS include/ffitarget.h:src/$TARGETINCDIR/ffitarget.h" ;; - "include/ffi_common.h") CONFIG_LINKS="$CONFIG_LINKS include/ffi_common.h:include/ffi_common.h" ;; + "include/ffitarget.h") CONFIG_LINKS="$CONFIG_LINKS include/ffitarget.h:src/$TARGETDIR/ffitarget.h" ;; "include/ffi.h") CONFIG_FILES="$CONFIG_FILES include/ffi.h" ;; + "include/ffi_common.h") CONFIG_LINKS="$CONFIG_LINKS include/ffi_common.h:include/ffi_common.h" ;; "fficonfig.py") CONFIG_FILES="$CONFIG_FILES fficonfig.py" ;; *) { { echo "$as_me:$LINENO: error: invalid argument: $ac_config_target" >&5 @@ -6724,6 +23619,28 @@ target_cpu!$target_cpu$ac_delim target_vendor!$target_vendor$ac_delim target_os!$target_os$ac_delim +INSTALL_PROGRAM!$INSTALL_PROGRAM$ac_delim +INSTALL_SCRIPT!$INSTALL_SCRIPT$ac_delim +INSTALL_DATA!$INSTALL_DATA$ac_delim +am__isrc!$am__isrc$ac_delim +CYGPATH_W!$CYGPATH_W$ac_delim +PACKAGE!$PACKAGE$ac_delim +VERSION!$VERSION$ac_delim +ACLOCAL!$ACLOCAL$ac_delim +AUTOCONF!$AUTOCONF$ac_delim +AUTOMAKE!$AUTOMAKE$ac_delim +AUTOHEADER!$AUTOHEADER$ac_delim +MAKEINFO!$MAKEINFO$ac_delim +install_sh!$install_sh$ac_delim +STRIP!$STRIP$ac_delim +INSTALL_STRIP_PROGRAM!$INSTALL_STRIP_PROGRAM$ac_delim +mkdir_p!$mkdir_p$ac_delim +AWK!$AWK$ac_delim +SET_MAKE!$SET_MAKE$ac_delim +am__leading_dot!$am__leading_dot$ac_delim +AMTAR!$AMTAR$ac_delim +am__tar!$am__tar$ac_delim +am__untar!$am__untar$ac_delim CC!$CC$ac_delim CFLAGS!$CFLAGS$ac_delim LDFLAGS!$LDFLAGS$ac_delim @@ -6731,19 +23648,145 @@ ac_ct_CC!$ac_ct_CC$ac_delim EXEEXT!$EXEEXT$ac_delim OBJEXT!$OBJEXT$ac_delim -CPP!$CPP$ac_delim +DEPDIR!$DEPDIR$ac_delim +am__include!$am__include$ac_delim +am__quote!$am__quote$ac_delim +AMDEP_TRUE!$AMDEP_TRUE$ac_delim +AMDEP_FALSE!$AMDEP_FALSE$ac_delim +AMDEPBACKSLASH!$AMDEPBACKSLASH$ac_delim +CCDEPMODE!$CCDEPMODE$ac_delim +am__fastdepCC_TRUE!$am__fastdepCC_TRUE$ac_delim +am__fastdepCC_FALSE!$am__fastdepCC_FALSE$ac_delim +CCAS!$CCAS$ac_delim +CCASFLAGS!$CCASFLAGS$ac_delim +CCASDEPMODE!$CCASDEPMODE$ac_delim +am__fastdepCCAS_TRUE!$am__fastdepCCAS_TRUE$ac_delim +am__fastdepCCAS_FALSE!$am__fastdepCCAS_FALSE$ac_delim +SED!$SED$ac_delim GREP!$GREP$ac_delim EGREP!$EGREP$ac_delim +LN_S!$LN_S$ac_delim +ECHO!$ECHO$ac_delim +_ACEOF + + if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 97; then + break + elif $ac_last_try; then + { { echo "$as_me:$LINENO: error: could not make $CONFIG_STATUS" >&5 +echo "$as_me: error: could not make $CONFIG_STATUS" >&2;} + { (exit 1); exit 1; }; } + else + ac_delim="$ac_delim!$ac_delim _$ac_delim!! 
" + fi +done + +ac_eof=`sed -n '/^CEOF[0-9]*$/s/CEOF/0/p' conf$$subs.sed` +if test -n "$ac_eof"; then + ac_eof=`echo "$ac_eof" | sort -nru | sed 1q` + ac_eof=`expr $ac_eof + 1` +fi + +cat >>$CONFIG_STATUS <<_ACEOF +cat >"\$tmp/subs-1.sed" <<\CEOF$ac_eof +/@[a-zA-Z_][a-zA-Z_0-9]*@/!b +_ACEOF +sed ' +s/[,\\&]/\\&/g; s/@/@|#_!!_#|/g +s/^/s,@/; s/!/@,|#_!!_#|/ +:n +t n +s/'"$ac_delim"'$/,g/; t +s/$/\\/; p +N; s/^.*\n//; s/[,\\&]/\\&/g; s/@/@|#_!!_#|/g; b n +' >>$CONFIG_STATUS >$CONFIG_STATUS <<_ACEOF +CEOF$ac_eof +_ACEOF + + +ac_delim='%!_!# ' +for ac_last_try in false false false false false :; do + cat >conf$$subs.sed <<_ACEOF +AR!$AR$ac_delim +RANLIB!$RANLIB$ac_delim +CPP!$CPP$ac_delim +CXX!$CXX$ac_delim +CXXFLAGS!$CXXFLAGS$ac_delim +ac_ct_CXX!$ac_ct_CXX$ac_delim +CXXDEPMODE!$CXXDEPMODE$ac_delim +am__fastdepCXX_TRUE!$am__fastdepCXX_TRUE$ac_delim +am__fastdepCXX_FALSE!$am__fastdepCXX_FALSE$ac_delim +CXXCPP!$CXXCPP$ac_delim +F77!$F77$ac_delim +FFLAGS!$FFLAGS$ac_delim +ac_ct_F77!$ac_ct_F77$ac_delim +LIBTOOL!$LIBTOOL$ac_delim +MAINTAINER_MODE_TRUE!$MAINTAINER_MODE_TRUE$ac_delim +MAINTAINER_MODE_FALSE!$MAINTAINER_MODE_FALSE$ac_delim +MAINT!$MAINT$ac_delim +TESTSUBDIR_TRUE!$TESTSUBDIR_TRUE$ac_delim +TESTSUBDIR_FALSE!$TESTSUBDIR_FALSE$ac_delim +AM_RUNTESTFLAGS!$AM_RUNTESTFLAGS$ac_delim +MIPS_TRUE!$MIPS_TRUE$ac_delim +MIPS_FALSE!$MIPS_FALSE$ac_delim +SPARC_TRUE!$SPARC_TRUE$ac_delim +SPARC_FALSE!$SPARC_FALSE$ac_delim +X86_TRUE!$X86_TRUE$ac_delim +X86_FALSE!$X86_FALSE$ac_delim +X86_FREEBSD_TRUE!$X86_FREEBSD_TRUE$ac_delim +X86_FREEBSD_FALSE!$X86_FREEBSD_FALSE$ac_delim +X86_WIN32_TRUE!$X86_WIN32_TRUE$ac_delim +X86_WIN32_FALSE!$X86_WIN32_FALSE$ac_delim +X86_DARWIN_TRUE!$X86_DARWIN_TRUE$ac_delim +X86_DARWIN_FALSE!$X86_DARWIN_FALSE$ac_delim +ALPHA_TRUE!$ALPHA_TRUE$ac_delim +ALPHA_FALSE!$ALPHA_FALSE$ac_delim +IA64_TRUE!$IA64_TRUE$ac_delim +IA64_FALSE!$IA64_FALSE$ac_delim +M32R_TRUE!$M32R_TRUE$ac_delim +M32R_FALSE!$M32R_FALSE$ac_delim +M68K_TRUE!$M68K_TRUE$ac_delim +M68K_FALSE!$M68K_FALSE$ac_delim +POWERPC_TRUE!$POWERPC_TRUE$ac_delim +POWERPC_FALSE!$POWERPC_FALSE$ac_delim +POWERPC_AIX_TRUE!$POWERPC_AIX_TRUE$ac_delim +POWERPC_AIX_FALSE!$POWERPC_AIX_FALSE$ac_delim +POWERPC_DARWIN_TRUE!$POWERPC_DARWIN_TRUE$ac_delim +POWERPC_DARWIN_FALSE!$POWERPC_DARWIN_FALSE$ac_delim +POWERPC_FREEBSD_TRUE!$POWERPC_FREEBSD_TRUE$ac_delim +POWERPC_FREEBSD_FALSE!$POWERPC_FREEBSD_FALSE$ac_delim +ARM_TRUE!$ARM_TRUE$ac_delim +ARM_FALSE!$ARM_FALSE$ac_delim +LIBFFI_CRIS_TRUE!$LIBFFI_CRIS_TRUE$ac_delim +LIBFFI_CRIS_FALSE!$LIBFFI_CRIS_FALSE$ac_delim +FRV_TRUE!$FRV_TRUE$ac_delim +FRV_FALSE!$FRV_FALSE$ac_delim +S390_TRUE!$S390_TRUE$ac_delim +S390_FALSE!$S390_FALSE$ac_delim +X86_64_TRUE!$X86_64_TRUE$ac_delim +X86_64_FALSE!$X86_64_FALSE$ac_delim +SH_TRUE!$SH_TRUE$ac_delim +SH_FALSE!$SH_FALSE$ac_delim +SH64_TRUE!$SH64_TRUE$ac_delim +SH64_FALSE!$SH64_FALSE$ac_delim +PA_LINUX_TRUE!$PA_LINUX_TRUE$ac_delim +PA_LINUX_FALSE!$PA_LINUX_FALSE$ac_delim +PA_HPUX_TRUE!$PA_HPUX_TRUE$ac_delim +PA_HPUX_FALSE!$PA_HPUX_FALSE$ac_delim +PA64_HPUX_TRUE!$PA64_HPUX_TRUE$ac_delim +PA64_HPUX_FALSE!$PA64_HPUX_FALSE$ac_delim ALLOCA!$ALLOCA$ac_delim HAVE_LONG_DOUBLE!$HAVE_LONG_DOUBLE$ac_delim TARGET!$TARGET$ac_delim TARGETDIR!$TARGETDIR$ac_delim -MKTARGET!$MKTARGET$ac_delim +toolexecdir!$toolexecdir$ac_delim +toolexeclibdir!$toolexeclibdir$ac_delim LIBOBJS!$LIBOBJS$ac_delim LTLIBOBJS!$LTLIBOBJS$ac_delim _ACEOF - if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 66; then + if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 76; then 
break elif $ac_last_try; then { { echo "$as_me:$LINENO: error: could not make $CONFIG_STATUS" >&5 @@ -6761,7 +23804,7 @@ fi cat >>$CONFIG_STATUS <<_ACEOF -cat >"\$tmp/subs-1.sed" <<\CEOF$ac_eof +cat >"\$tmp/subs-2.sed" <<\CEOF$ac_eof /@[a-zA-Z_][a-zA-Z_0-9]*@/!b end _ACEOF sed ' @@ -6966,6 +24009,15 @@ # CONFIG_FILE # + case $INSTALL in + [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; + *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; + esac + ac_MKDIR_P=$MKDIR_P + case $MKDIR_P in + [\\/$]* | ?:[\\/]* ) ;; + */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; + esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF @@ -7018,8 +24070,10 @@ s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t +s&@INSTALL@&$ac_INSTALL&;t t +s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack -" $ac_file_inputs | sed -f "$tmp/subs-1.sed" >$tmp/out +" $ac_file_inputs | sed -f "$tmp/subs-1.sed" | sed -f "$tmp/subs-2.sed" >$tmp/out test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$tmp/out"`; test -n "$ac_out"; } && @@ -7132,6 +24186,39 @@ cat "$ac_result" fi rm -f "$tmp/out12" +# Compute $ac_file's index in $config_headers. +_am_stamp_count=1 +for _am_header in $config_headers :; do + case $_am_header in + $ac_file | $ac_file:* ) + break ;; + * ) + _am_stamp_count=`expr $_am_stamp_count + 1` ;; + esac +done +echo "timestamp for $ac_file" >`$as_dirname -- $ac_file || +$as_expr X$ac_file : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ + X$ac_file : 'X\(//\)[^/]' \| \ + X$ac_file : 'X\(//\)$' \| \ + X$ac_file : 'X\(/\)' \| . 2>/dev/null || +echo X$ac_file | + sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ + s//\1/ + q + } + /^X\(\/\/\)[^/].*/{ + s//\1/ + q + } + /^X\(\/\/\)$/{ + s//\1/ + q + } + /^X\(\/\).*/{ + s//\1/ + q + } + s/.*/./; q'`/stamp-h$_am_stamp_count ;; :L) # @@ -7167,6 +24254,130 @@ case $ac_file$ac_mode in + "depfiles":C) test x"$AMDEP_TRUE" != x"" || for mf in $CONFIG_FILES; do + # Strip MF so we end up with the name of the file. + mf=`echo "$mf" | sed -e 's/:.*$//'` + # Check whether this is an Automake generated Makefile or not. + # We used to match only the files named `Makefile.in', but + # some people rename them; so instead we look at the file content. + # Grep'ing the first line is not enough: some people post-process + # each Makefile.in and add a new line on top of each file to say so. + # Grep'ing the whole file is not good either: AIX grep has a line + # limit of 2048, but all sed's we know have understand at least 4000. + if sed 10q "$mf" | grep '^#.*generated by automake' > /dev/null 2>&1; then + dirpart=`$as_dirname -- "$mf" || +$as_expr X"$mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ + X"$mf" : 'X\(//\)[^/]' \| \ + X"$mf" : 'X\(//\)$' \| \ + X"$mf" : 'X\(/\)' \| . 2>/dev/null || +echo X"$mf" | + sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ + s//\1/ + q + } + /^X\(\/\/\)[^/].*/{ + s//\1/ + q + } + /^X\(\/\/\)$/{ + s//\1/ + q + } + /^X\(\/\).*/{ + s//\1/ + q + } + s/.*/./; q'` + else + continue + fi + # Extract the definition of DEPDIR, am__include, and am__quote + # from the Makefile without running `make'. + DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` + test -z "$DEPDIR" && continue + am__include=`sed -n 's/^am__include = //p' < "$mf"` + test -z "am__include" && continue + am__quote=`sed -n 's/^am__quote = //p' < "$mf"` + # When using ansi2knr, U may be empty or an underscore; expand it + U=`sed -n 's/^U = //p' < "$mf"` + # Find all dependency output files, they are included files with + # $(DEPDIR) in their names. 
We invoke sed twice because it is the + # simplest approach to changing $(DEPDIR) to its actual value in the + # expansion. + for file in `sed -n " + s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ + sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do + # Make sure the directory exists. + test -f "$dirpart/$file" && continue + fdir=`$as_dirname -- "$file" || +$as_expr X"$file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ + X"$file" : 'X\(//\)[^/]' \| \ + X"$file" : 'X\(//\)$' \| \ + X"$file" : 'X\(/\)' \| . 2>/dev/null || +echo X"$file" | + sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ + s//\1/ + q + } + /^X\(\/\/\)[^/].*/{ + s//\1/ + q + } + /^X\(\/\/\)$/{ + s//\1/ + q + } + /^X\(\/\).*/{ + s//\1/ + q + } + s/.*/./; q'` + { as_dir=$dirpart/$fdir + case $as_dir in #( + -*) as_dir=./$as_dir;; + esac + test -d "$as_dir" || { $as_mkdir_p && mkdir -p "$as_dir"; } || { + as_dirs= + while :; do + case $as_dir in #( + *\'*) as_qdir=`echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #( + *) as_qdir=$as_dir;; + esac + as_dirs="'$as_qdir' $as_dirs" + as_dir=`$as_dirname -- "$as_dir" || +$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ + X"$as_dir" : 'X\(//\)[^/]' \| \ + X"$as_dir" : 'X\(//\)$' \| \ + X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || +echo X"$as_dir" | + sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ + s//\1/ + q + } + /^X\(\/\/\)[^/].*/{ + s//\1/ + q + } + /^X\(\/\/\)$/{ + s//\1/ + q + } + /^X\(\/\).*/{ + s//\1/ + q + } + s/.*/./; q'` + test -d "$as_dir" && break + done + test -z "$as_dirs" || eval "mkdir $as_dirs" + } || test -d "$as_dir" || { { echo "$as_me:$LINENO: error: cannot create directory $as_dir" >&5 +echo "$as_me: error: cannot create directory $as_dir" >&2;} + { (exit 1); exit 1; }; }; } + # echo "creating $dirpart/$file" + echo '# dummy' > "$dirpart/$file" + done +done + ;; "include":C) test -d include || mkdir include ;; "src":C) test -d src || mkdir src Modified: python/trunk/Modules/_ctypes/libffi/configure.ac ============================================================================== --- python/trunk/Modules/_ctypes/libffi/configure.ac (original) +++ python/trunk/Modules/_ctypes/libffi/configure.ac Tue Mar 4 21:09:11 2008 @@ -2,12 +2,21 @@ AC_PREREQ(2.59) -AC_INIT([libffi], [2.1], [http://gcc.gnu.org/bugs.html]) +AC_INIT([libffi], [3.0.4], [http://gcc.gnu.org/bugs.html]) AC_CONFIG_HEADERS([fficonfig.h]) AC_CANONICAL_SYSTEM target_alias=${target_alias-$host_alias} +. ${srcdir}/configure.host + +AM_INIT_AUTOMAKE + +# The same as in boehm-gc and libstdc++. Have to borrow it from there. +# We must force CC to /not/ be precious variables; otherwise +# the wrong, non-multilib-adjusted value will be used in multilibs. +# As a side effect, we have to subst CFLAGS ourselves. + m4_rename([_AC_ARG_VAR_PRECIOUS],[real_PRECIOUS]) m4_define([_AC_ARG_VAR_PRECIOUS],[]) AC_PROG_CC @@ -15,79 +24,162 @@ AC_SUBST(CFLAGS) +AM_PROG_AS +AM_PROG_CC_C_O +AC_PROG_LIBTOOL + +AM_MAINTAINER_MODE + AC_CHECK_HEADERS(sys/mman.h) AC_CHECK_FUNCS(mmap) AC_FUNC_MMAP_BLACKLIST +dnl The -no-testsuite modules omit the test subdir. 
+AM_CONDITIONAL(TESTSUBDIR, test -d $srcdir/testsuite) + TARGETDIR="unknown" case "$host" in -x86_64-*-openbsd*) TARGET=X86_64; TARGETDIR=x86;; -mips*-*-openbsd*) TARGET=MIPS; TARGETDIR=mips;; -sparc-*-openbsd*) TARGET=SPARC; TARGETDIR=sparc;; -sparc64-*-openbsd*) TARGET=SPARC; TARGETDIR=sparc;; -alpha*-*-openbsd*) TARGET=ALPHA; TARGETDIR=alpha;; -m68k-*-openbsd*) TARGET=M68K; TARGETDIR=m68k;; -powerpc-*-openbsd*) TARGET=POWERPC; TARGETDIR=powerpc;; -i*86-*-darwin*) TARGET=X86_DARWIN; TARGETDIR=x86;; -i*86-*-linux*) TARGET=X86; TARGETDIR=x86;; -i*86-*-gnu*) TARGET=X86; TARGETDIR=x86;; -i*86-*-solaris2.1[[0-9]]*) TARGET=X86_64; TARGETDIR=x86;; -i*86-*-solaris*) TARGET=X86; TARGETDIR=x86;; -i*86-*-beos*) TARGET=X86; TARGETDIR=x86;; -i*86-*-freebsd* | i*86-*-kfreebsd*-gnu) TARGET=X86; TARGETDIR=x86;; -i*86-*-netbsdelf* | i*86-*-knetbsd*-gnu) TARGET=X86; TARGETDIR=x86;; -i*86-*-openbsd*) TARGET=X86; TARGETDIR=x86;; -i*86-*-rtems*) TARGET=X86; TARGETDIR=x86;; -i*86-*-win32*) TARGET=X86_WIN32; TARGETDIR=x86;; -i*86-*-cygwin*) TARGET=X86_WIN32; TARGETDIR=x86;; -i*86-*-mingw*) TARGET=X86_WIN32; TARGETDIR=x86;; -frv-*-*) TARGET=FRV; TARGETDIR=frv;; -sparc-sun-4*) TARGET=SPARC; TARGETDIR=sparc;; -sparc*-sun-*) TARGET=SPARC; TARGETDIR=sparc;; -sparc-*-linux* | sparc-*-netbsdelf* | sparc-*-knetbsd*-gnu) TARGET=SPARC; TARGETDIR=sparc;; -sparc*-*-rtems*) TARGET=SPARC; TARGETDIR=sparc;; -sparc64-*-linux* | sparc64-*-freebsd* | sparc64-*-netbsd* | sparc64-*-knetbsd*-gnu) TARGET=SPARC; TARGETDIR=sparc;; -alpha*-*-linux* | alpha*-*-osf* | alpha*-*-freebsd* | alpha*-*-kfreebsd*-gnu | alpha*-*-netbsd* | alpha*-*-knetbsd*-gnu) TARGET=ALPHA; TARGETDIR=alpha;; -ia64*-*-*) TARGET=IA64; TARGETDIR=ia64;; -m32r*-*-linux* ) TARGET=M32R; TARGETDIR=m32r;; -m68k-*-linux*) TARGET=M68K; TARGETDIR=m68k;; -mips64*-*);; -mips-sgi-irix5.* | mips-sgi-irix6.*) TARGET=MIPS_IRIX; TARGETDIR=mips;; -mips*-*-linux*) TARGET=MIPS_LINUX; TARGETDIR=mips;; -powerpc*-*-linux* | powerpc-*-sysv*) TARGET=POWERPC; TARGETDIR=powerpc;; -powerpc-*-beos*) TARGET=POWERPC; TARGETDIR=powerpc;; -powerpc-*-darwin*) TARGET=POWERPC_DARWIN; TARGETDIR=powerpc;; -powerpc-*-aix*) TARGET=POWERPC_AIX; TARGETDIR=powerpc;; -powerpc-*-freebsd*) TARGET=POWERPC_FREEBSD; TARGETDIR=powerpc;; -powerpc*-*-rtems*) TARGET=POWERPC; TARGETDIR=powerpc;; -rs6000-*-aix*) TARGET=POWERPC_AIX; TARGETDIR=powerpc;; -arm*-*-linux-*) TARGET=ARM; TARGETDIR=arm;; -arm*-*-netbsdelf* | arm*-*-knetbsd*-gnu) TARGET=ARM; TARGETDIR=arm;; -arm*-*-rtems*) TARGET=ARM; TARGETDIR=arm;; -cris-*-*) TARGET=LIBFFI_CRIS; TARGETDIR=cris;; -s390-*-linux-*) TARGET=S390; TARGETDIR=s390;; -s390x-*-linux-*) TARGET=S390; TARGETDIR=s390;; -amd64-*-freebsd* | x86_64-*-linux* | x86_64-*-freebsd* | x86_64-*-kfreebsd*-gnu) TARGET=X86_64; TARGETDIR=x86;; -sh-*-linux* | sh[[34]]*-*-linux*) TARGET=SH; TARGETDIR=sh;; -sh-*-rtems*) TARGET=SH; TARGETDIR=sh;; -sh64-*-linux* | sh5*-*-linux*) TARGET=SH64; TARGETDIR=sh64;; -hppa*-*-linux* | parisc*-*-linux*) TARGET=PA; TARGETDIR=pa;; + alpha*-*-*) + TARGET=ALPHA; TARGETDIR=alpha; + # Support 128-bit long double, changable via command-line switch. 
+ HAVE_LONG_DOUBLE='defined(__LONG_DOUBLE_128__)' + ;; + + arm*-*-*) + TARGET=ARM; TARGETDIR=arm + ;; + + amd64-*-freebsd*) + TARGET=X86_64; TARGETDIR=x86 + ;; + + cris-*-*) + TARGET=LIBFFI_CRIS; TARGETDIR=cris + ;; + + frv-*-*) + TARGET=FRV; TARGETDIR=frv + ;; + + hppa*-*-linux* | parisc*-*-linux*) + TARGET=PA_LINUX; TARGETDIR=pa + ;; + hppa*64-*-hpux*) + TARGET=PA64_HPUX; TARGETDIR=pa + ;; + hppa*-*-hpux*) + TARGET=PA_HPUX; TARGETDIR=pa + ;; + + i386-*-freebsd* | i386-*-openbsd*) + TARGET=X86_FREEBSD; TARGETDIR=x86 + ;; + i?86-win32* | i?86-*-cygwin* | i?86-*-mingw*) + TARGET=X86_WIN32; TARGETDIR=x86 + ;; + i?86-*-darwin*) + TARGET=X86_DARWIN; TARGETDIR=x86 + ;; + i?86-*-solaris2.1[[0-9]]*) + TARGET=X86_64; TARGETDIR=x86 + ;; + i?86-*-*) + TARGET=X86; TARGETDIR=x86 + ;; + + ia64*-*-*) + TARGET=IA64; TARGETDIR=ia64 + ;; + + m32r*-*-*) + TARGET=M32R; TARGETDIR=m32r + ;; + + m68k-*-*) + TARGET=M68K; TARGETDIR=m68k + ;; + + mips-sgi-irix5.* | mips-sgi-irix6.*) + TARGET=MIPS; TARGETDIR=mips + ;; + mips*-*-linux*) + TARGET=MIPS; TARGETDIR=mips + ;; + + powerpc*-*-linux* | powerpc-*-sysv*) + TARGET=POWERPC; TARGETDIR=powerpc + ;; + powerpc-*-beos*) + TARGET=POWERPC; TARGETDIR=powerpc + ;; + powerpc-*-darwin*) + TARGET=POWERPC_DARWIN; TARGETDIR=powerpc + ;; + powerpc-*-aix* | rs6000-*-aix*) + TARGET=POWERPC_AIX; TARGETDIR=powerpc + ;; + powerpc-*-freebsd*) + TARGET=POWERPC_FREEBSD; TARGETDIR=powerpc + ;; + powerpc*-*-rtems*) + TARGET=POWERPC; TARGETDIR=powerpc + ;; + + s390-*-* | s390x-*-*) + TARGET=S390; TARGETDIR=s390 + ;; + + sh-*-* | sh[[34]]*-*-*) + TARGET=SH; TARGETDIR=sh + ;; + sh64-*-* | sh5*-*-*) + TARGET=SH64; TARGETDIR=sh64 + ;; + + sparc*-*-*) + TARGET=SPARC; TARGETDIR=sparc + ;; + + x86_64-*-darwin*) + TARGET=X86_DARWIN; TARGETDIR=x86 + ;; + x86_64-*-cygwin* | x86_64-*-mingw*) + ;; + x86_64-*-*) + TARGET=X86_64; TARGETDIR=x86 + ;; esac +AC_SUBST(AM_RUNTESTFLAGS) + if test $TARGETDIR = unknown; then AC_MSG_ERROR(["libffi has not been ported to $host."]) fi -dnl libffi changes TARGET for MIPS to define a such macro in the header -dnl while MIPS_IRIX or MIPS_LINUX is separatedly used to decide which -dnl files will be compiled. So, we need to keep the original decision -dnl of TARGET to use in fficonfig.py.in. 
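As a rough, illustrative sketch only (not code from this patch; the callee scale_by and all variable names are invented), the HAVE_LONG_DOUBLE value preset for Alpha here, or probed later in configure.ac, is AC_SUBSTed into include/ffi.h further down in the patch, where ffi_type_longdouble either stays a distinct type or falls back to ffi_type_double. A caller that wants the extended type would look roughly like this:

    #include <ffi.h>
    #include <stdio.h>

    /* invented callee, used only for this sketch */
    static long double scale_by(long double x) { return x * 3.0L; }

    int main(void)
    {
        ffi_cif cif;
        ffi_type *argtypes[1] = { &ffi_type_longdouble };
        long double arg = 2.0L, result = 0.0L;
        void *argvalues[1] = { &arg };

        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_longdouble,
                         argtypes) == FFI_OK)
          {
            ffi_call(&cif, FFI_FN(scale_by), &result, argvalues);
            printf("%Lf\n", result);
          }
        return 0;
    }

On targets where configure finds sizeof(long double) equal to sizeof(double), the fallback #define in the ffi.h hunk keeps this code compiling unchanged.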
-MKTARGET=$TARGET - -case x$TARGET in - xMIPS*) TARGET=MIPS ;; - *) ;; -esac +AM_CONDITIONAL(MIPS, test x$TARGET = xMIPS) +AM_CONDITIONAL(SPARC, test x$TARGET = xSPARC) +AM_CONDITIONAL(X86, test x$TARGET = xX86) +AM_CONDITIONAL(X86_FREEBSD, test x$TARGET = xX86_FREEBSD) +AM_CONDITIONAL(X86_WIN32, test x$TARGET = xX86_WIN32) +AM_CONDITIONAL(X86_DARWIN, test x$TARGET = xX86_DARWIN) +AM_CONDITIONAL(ALPHA, test x$TARGET = xALPHA) +AM_CONDITIONAL(IA64, test x$TARGET = xIA64) +AM_CONDITIONAL(M32R, test x$TARGET = xM32R) +AM_CONDITIONAL(M68K, test x$TARGET = xM68K) +AM_CONDITIONAL(POWERPC, test x$TARGET = xPOWERPC) +AM_CONDITIONAL(POWERPC_AIX, test x$TARGET = xPOWERPC_AIX) +AM_CONDITIONAL(POWERPC_DARWIN, test x$TARGET = xPOWERPC_DARWIN) +AM_CONDITIONAL(POWERPC_FREEBSD, test x$TARGET = xPOWERPC_FREEBSD) +AM_CONDITIONAL(ARM, test x$TARGET = xARM) +AM_CONDITIONAL(LIBFFI_CRIS, test x$TARGET = xLIBFFI_CRIS) +AM_CONDITIONAL(FRV, test x$TARGET = xFRV) +AM_CONDITIONAL(S390, test x$TARGET = xS390) +AM_CONDITIONAL(X86_64, test x$TARGET = xX86_64) +AM_CONDITIONAL(SH, test x$TARGET = xSH) +AM_CONDITIONAL(SH64, test x$TARGET = xSH64) +AM_CONDITIONAL(PA_LINUX, test x$TARGET = xPA_LINUX) +AM_CONDITIONAL(PA_HPUX, test x$TARGET = xPA_HPUX) +AM_CONDITIONAL(PA64_HPUX, test x$TARGET = xPA64_HPUX) AC_HEADER_STDC AC_CHECK_FUNCS(memcpy) @@ -97,34 +189,30 @@ AC_CHECK_SIZEOF(long double) # Also AC_SUBST this variable for ffi.h. -HAVE_LONG_DOUBLE=0 -if test $ac_cv_sizeof_double != $ac_cv_sizeof_long_double; then - if test $ac_cv_sizeof_long_double != 0; then - HAVE_LONG_DOUBLE=1 - AC_DEFINE(HAVE_LONG_DOUBLE, 1, [Define if you have the long double type and it is bigger than a double]) +if test -z "$HAVE_LONG_DOUBLE"; then + HAVE_LONG_DOUBLE=0 + if test $ac_cv_sizeof_double != $ac_cv_sizeof_long_double; then + if test $ac_cv_sizeof_long_double != 0; then + HAVE_LONG_DOUBLE=1 + AC_DEFINE(HAVE_LONG_DOUBLE, 1, [Define if you have the long double type and it is bigger than a double]) + fi fi fi AC_SUBST(HAVE_LONG_DOUBLE) AC_C_BIGENDIAN -AH_VERBATIM([WORDS_BIGENDIAN], -[ -/* Define to 1 if your processor stores words with the most significant byte - first (like Motorola and SPARC, unlike Intel and VAX). - - The block below does compile-time checking for endianness on platforms - that use GCC and therefore allows compiling fat binaries on OSX by using - '-arch ppc -arch i386' as the compile flags. The phrasing was choosen - such that the configure-result is used on systems that don't use GCC. 
-*/ -#ifdef __BIG_ENDIAN__ -#define WORDS_BIGENDIAN 1 -#else -#ifndef __LITTLE_ENDIAN__ -#undef WORDS_BIGENDIAN -#endif -#endif]) +AC_CACHE_CHECK([assembler .cfi pseudo-op support], + libffi_cv_as_cfi_pseudo_op, [ + libffi_cv_as_cfi_pseudo_op=unknown + AC_TRY_COMPILE([asm (".cfi_startproc\n\t.cfi_endproc");],, + [libffi_cv_as_cfi_pseudo_op=yes], + [libffi_cv_as_cfi_pseudo_op=no]) +]) +if test "x$libffi_cv_as_cfi_pseudo_op" = xyes; then + AC_DEFINE(HAVE_AS_CFI_PSEUDO_OP, 1, + [Define if your assembler supports .cfi_* directives.]) +fi if test x$TARGET = xSPARC; then AC_CACHE_CHECK([assembler and linker support unaligned pc related relocs], @@ -215,11 +303,54 @@ AC_SUBST(TARGET) AC_SUBST(TARGETDIR) -AC_SUBST(MKTARGET) AC_SUBST(SHELL) -AC_DEFINE(FFI_NO_RAW_API, 1, [Define this is you do not want support for the raw API.]) +AC_ARG_ENABLE(debug, +[ --enable-debug debugging mode], + if test "$enable_debug" = "yes"; then + AC_DEFINE(FFI_DEBUG, 1, [Define this if you want extra debugging.]) + fi) + +AC_ARG_ENABLE(structs, +[ --disable-structs omit code for struct support], + if test "$enable_structs" = "no"; then + AC_DEFINE(FFI_NO_STRUCTS, 1, [Define this is you do not want support for aggregate types.]) + fi) + +AC_ARG_ENABLE(raw-api, +[ --disable-raw-api make the raw api unavailable], + if test "$enable_raw_api" = "no"; then + AC_DEFINE(FFI_NO_RAW_API, 1, [Define this is you do not want support for the raw API.]) + fi) + +AC_ARG_ENABLE(purify-safety, +[ --enable-purify-safety purify-safe mode], + if test "$enable_purify_safety" = "yes"; then + AC_DEFINE(USING_PURIFY, 1, [Define this if you are using Purify and want to suppress spurious messages.]) + fi) + +if test -n "$with_cross_host" && + test x"$with_cross_host" != x"no"; then + toolexecdir='$(exec_prefix)/$(target_alias)' + toolexeclibdir='$(toolexecdir)/lib' +else + toolexecdir='$(libdir)/gcc-lib/$(target_alias)' + toolexeclibdir='$(libdir)' +fi +multi_os_directory=`$CC -print-multi-os-directory` +case $multi_os_directory in + .) ;; # Avoid trailing /. + *) toolexeclibdir=$toolexeclibdir/$multi_os_directory ;; +esac +AC_SUBST(toolexecdir) +AC_SUBST(toolexeclibdir) + +if test "${multilib}" = "yes"; then + multilib_arg="--enable-multilib" +else + multilib_arg= +fi AC_CONFIG_COMMANDS(include, [test -d include || mkdir include]) AC_CONFIG_COMMANDS(src, [ @@ -227,17 +358,12 @@ test -d src/$TARGETDIR || mkdir src/$TARGETDIR ], [TARGETDIR="$TARGETDIR"]) -TARGETINCDIR=$TARGETDIR -case $host in -*-*-darwin*) - TARGETINCDIR="darwin" - ;; -esac +AC_CONFIG_LINKS(include/ffitarget.h:src/$TARGETDIR/ffitarget.h) +AC_CONFIG_FILES(include/ffi.h) -AC_CONFIG_LINKS(include/ffitarget.h:src/$TARGETINCDIR/ffitarget.h) AC_CONFIG_LINKS(include/ffi_common.h:include/ffi_common.h) -AC_CONFIG_FILES(include/ffi.h fficonfig.py) +AC_CONFIG_FILES(fficonfig.py) AC_OUTPUT Modified: python/trunk/Modules/_ctypes/libffi/fficonfig.h.in ============================================================================== --- python/trunk/Modules/_ctypes/libffi/fficonfig.h.in (original) +++ python/trunk/Modules/_ctypes/libffi/fficonfig.h.in Tue Mar 4 21:09:11 2008 @@ -11,9 +11,15 @@ /* Define to the flags needed for the .section .eh_frame directive. */ #undef EH_FRAME_FLAGS +/* Define this if you want extra debugging. */ +#undef FFI_DEBUG + /* Define this is you do not want support for the raw API. */ #undef FFI_NO_RAW_API +/* Define this is you do not want support for aggregate types. */ +#undef FFI_NO_STRUCTS + /* Define to 1 if you have `alloca', as a function or macro. 
*/ #undef HAVE_ALLOCA @@ -21,6 +27,9 @@ */ #undef HAVE_ALLOCA_H +/* Define if your assembler supports .cfi_* directives. */ +#undef HAVE_AS_CFI_PSEUDO_OP + /* Define if your assembler supports .register. */ #undef HAVE_AS_REGISTER_PSEUDO_OP @@ -28,6 +37,9 @@ */ #undef HAVE_AS_SPARC_UA_PCREL +/* Define to 1 if you have the header file. */ +#undef HAVE_DLFCN_H + /* Define if __attribute__((visibility("hidden"))) is supported. */ #undef HAVE_HIDDEN_VISIBILITY_ATTRIBUTE @@ -82,6 +94,12 @@ /* Define to 1 if you have the header file. */ #undef HAVE_UNISTD_H +/* Define to 1 if your C compiler doesn't accept -c and -o together. */ +#undef NO_MINUS_C_MINUS_O + +/* Name of package */ +#undef PACKAGE + /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT @@ -114,22 +132,16 @@ /* Define to 1 if you have the ANSI C header files. */ #undef STDC_HEADERS +/* Define this if you are using Purify and want to suppress spurious messages. + */ +#undef USING_PURIFY -/* Define to 1 if your processor stores words with the most significant byte - first (like Motorola and SPARC, unlike Intel and VAX). +/* Version number of package */ +#undef VERSION - The block below does compile-time checking for endianness on platforms - that use GCC and therefore allows compiling fat binaries on OSX by using - '-arch ppc -arch i386' as the compile flags. The phrasing was choosen - such that the configure-result is used on systems that don't use GCC. -*/ -#ifdef __BIG_ENDIAN__ -#define WORDS_BIGENDIAN 1 -#else -#ifndef __LITTLE_ENDIAN__ +/* Define to 1 if your processor stores words with the most significant byte + first (like Motorola and SPARC, unlike Intel and VAX). */ #undef WORDS_BIGENDIAN -#endif -#endif #ifdef HAVE_HIDDEN_VISIBILITY_ATTRIBUTE Modified: python/trunk/Modules/_ctypes/libffi/fficonfig.py.in ============================================================================== --- python/trunk/Modules/_ctypes/libffi/fficonfig.py.in (original) +++ python/trunk/Modules/_ctypes/libffi/fficonfig.py.in Tue Mar 4 21:09:11 2008 @@ -6,7 +6,7 @@ 'MIPS_IRIX': ['src/mips/ffi.c', 'src/mips/o32.S', 'src/mips/n32.S'], 'MIPS_LINUX': ['src/mips/ffi.c', 'src/mips/o32.S'], 'X86': ['src/x86/ffi.c', 'src/x86/sysv.S'], - 'X86_DARWIN': ['src/x86/ffi_darwin.c', 'src/x86/darwin.S'], + 'X86_FREEBSD': ['src/x86/ffi.c', 'src/x86/sysv.S'], 'X86_WIN32': ['src/x86/ffi.c', 'src/x86/win32.S'], 'SPARC': ['src/sparc/ffi.c', 'src/sparc/v8.S', 'src/sparc/v9.S'], 'ALPHA': ['src/alpha/ffi.c', 'src/alpha/osf.S'], @@ -14,8 +14,7 @@ 'M32R': ['src/m32r/sysv.S', 'src/m32r/ffi.c'], 'M68K': ['src/m68k/ffi.c', 'src/m68k/sysv.S'], 'POWERPC': ['src/powerpc/ffi.c', 'src/powerpc/sysv.S', 'src/powerpc/ppc_closure.S', 'src/powerpc/linux64.S', 'src/powerpc/linux64_closure.S'], - 'POWERPC_AIX': ['src/powerpc/ffi_darwin.c', 'src/powerpc/aix.S', 'src/powerpc/aix_closure.S'], - 'POWERPC_DARWIN': ['src/powerpc/ffi_darwin.c', 'src/powerpc/darwin.S', 'src/powerpc/darwin_closure.S'], + 'POWERPC_AIX': ['src/powerpc/ffi.c', 'src/powerpc/aix.S', 'src/powerpc/aix_closure.S'], 'POWERPC_FREEBSD': ['src/powerpc/ffi.c', 'src/powerpc/sysv.S', 'src/powerpc/ppc_closure.S'], 'ARM': ['src/arm/sysv.S', 'src/arm/ffi.c'], 'LIBFFI_CRIS': ['src/cris/sysv.S', 'src/cris/ffi.c'], @@ -27,19 +26,8 @@ 'PA': ['src/pa/linux.S', 'src/pa/ffi.c'], } -# Build all darwin related files on all supported darwin architectures, this -# makes it easier to build universal binaries. 
-if 1: - all_darwin = ('X86_DARWIN', 'POWERPC_DARWIN') - all_darwin_files = [] - for pn in all_darwin: - all_darwin_files.extend(ffi_platforms[pn]) - for pn in all_darwin: - ffi_platforms[pn] = all_darwin_files - del all_darwin, all_darwin_files, pn - ffi_srcdir = '@srcdir@' -ffi_sources += ffi_platforms['@MKTARGET@'] +ffi_sources += ffi_platforms['@TARGET@'] ffi_sources = [os.path.join('@srcdir@', f) for f in ffi_sources] ffi_cflags = '@CFLAGS@' Modified: python/trunk/Modules/_ctypes/libffi/include/ffi.h.in ============================================================================== --- python/trunk/Modules/_ctypes/libffi/include/ffi.h.in (original) +++ python/trunk/Modules/_ctypes/libffi/include/ffi.h.in Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* -----------------------------------------------------------------*-C-*- - libffi @VERSION@ - Copyright (c) 1996-2003 Red Hat, Inc. + libffi @VERSION@ - Copyright (c) 1996-2003, 2007, 2008 Red Hat, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -12,13 +12,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ @@ -82,6 +83,18 @@ # endif #endif +/* The closure code assumes that this works on pointers, i.e. a size_t */ +/* can hold a pointer. */ + +typedef struct _ffi_type +{ + size_t size; + unsigned short alignment; + unsigned short type; + struct _ffi_type **elements; +} ffi_type; + +#ifndef LIBFFI_HIDE_BASIC_TYPES #if SCHAR_MAX == 127 # define ffi_type_uchar ffi_type_uint8 # define ffi_type_schar ffi_type_sint8 @@ -112,26 +125,23 @@ #error "int size not supported" #endif -#define ffi_type_ulong ffi_type_uint64 -#define ffi_type_slong ffi_type_sint64 #if LONG_MAX == 2147483647 # if FFI_LONG_LONG_MAX != 9223372036854775807 - #error "no 64-bit data type supported" + #error "no 64-bit data type supported" # endif #elif LONG_MAX != 9223372036854775807 #error "long size not supported" #endif -/* The closure code assumes that this works on pointers, i.e. a size_t */ -/* can hold a pointer. 
*/ - -typedef struct _ffi_type -{ - size_t size; - unsigned short alignment; - unsigned short type; - /*@null@*/ struct _ffi_type **elements; -} ffi_type; +#if LONG_MAX == 2147483647 +# define ffi_type_ulong ffi_type_uint32 +# define ffi_type_slong ffi_type_sint32 +#elif LONG_MAX == 9223372036854775807 +# define ffi_type_ulong ffi_type_uint64 +# define ffi_type_slong ffi_type_sint64 +#else + #error "long size not supported" +#endif /* These are defined in types.c */ extern ffi_type ffi_type_void; @@ -145,14 +155,19 @@ extern ffi_type ffi_type_sint64; extern ffi_type ffi_type_float; extern ffi_type ffi_type_double; -extern ffi_type ffi_type_longdouble; extern ffi_type ffi_type_pointer; +#if @HAVE_LONG_DOUBLE@ +extern ffi_type ffi_type_longdouble; +#else +#define ffi_type_longdouble ffi_type_double +#endif +#endif /* LIBFFI_HIDE_BASIC_TYPES */ typedef enum { FFI_OK = 0, FFI_BAD_TYPEDEF, - FFI_BAD_ABI + FFI_BAD_ABI } ffi_status; typedef unsigned FFI_TYPE; @@ -160,8 +175,8 @@ typedef struct { ffi_abi abi; unsigned nargs; - /*@dependent@*/ ffi_type **arg_types; - /*@dependent@*/ ffi_type *rtype; + ffi_type **arg_types; + ffi_type *rtype; unsigned bytes; unsigned flags; #ifdef FFI_EXTRA_CIF_FIELDS @@ -179,6 +194,10 @@ # endif #endif +#ifndef FFI_SIZEOF_JAVA_RAW +# define FFI_SIZEOF_JAVA_RAW FFI_SIZEOF_ARG +#endif + typedef union { ffi_sarg sint; ffi_arg uint; @@ -187,10 +206,25 @@ void* ptr; } ffi_raw; -void ffi_raw_call (/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ ffi_raw *avalue); +#if FFI_SIZEOF_JAVA_RAW == 4 && FFI_SIZEOF_ARG == 8 +/* This is a special case for mips64/n32 ABI (and perhaps others) where + sizeof(void *) is 4 and FFI_SIZEOF_ARG is 8. */ +typedef union { + signed int sint; + unsigned int uint; + float flt; + char data[FFI_SIZEOF_JAVA_RAW]; + void* ptr; +} ffi_java_raw; +#else +typedef ffi_raw ffi_java_raw; +#endif + + +void ffi_raw_call (ffi_cif *cif, + void (*fn)(void), + void *rvalue, + ffi_raw *avalue); void ffi_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_raw *raw); void ffi_raw_to_ptrarray (ffi_cif *cif, ffi_raw *raw, void **args); @@ -200,13 +234,13 @@ /* packing, even on 64-bit machines. I.e. on 64-bit machines */ /* longs and doubles are followed by an empty 64-bit word. 
*/ -void ffi_java_raw_call (/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ ffi_raw *avalue); +void ffi_java_raw_call (ffi_cif *cif, + void (*fn)(void), + void *rvalue, + ffi_java_raw *avalue); -void ffi_java_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_raw *raw); -void ffi_java_raw_to_ptrarray (ffi_cif *cif, ffi_raw *raw, void **args); +void ffi_java_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_java_raw *raw); +void ffi_java_raw_to_ptrarray (ffi_cif *cif, ffi_java_raw *raw, void **args); size_t ffi_java_raw_size (ffi_cif *cif); /* ---- Definitions for closures ----------------------------------------- */ @@ -220,12 +254,22 @@ void *user_data; } ffi_closure __attribute__((aligned (8))); +void *ffi_closure_alloc (size_t size, void **code); +void ffi_closure_free (void *); + ffi_status ffi_prep_closure (ffi_closure*, ffi_cif *, void (*fun)(ffi_cif*,void*,void**,void*), void *user_data); +ffi_status +ffi_prep_closure_loc (ffi_closure*, + ffi_cif *, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void*codeloc); + typedef struct { char tramp[FFI_TRAMPOLINE_SIZE]; @@ -247,6 +291,27 @@ } ffi_raw_closure; +typedef struct { + char tramp[FFI_TRAMPOLINE_SIZE]; + + ffi_cif *cif; + +#if !FFI_NATIVE_RAW_API + + /* if this is enabled, then a raw closure has the same layout + as a regular closure. We use this to install an intermediate + handler to do the transaltion, void** -> ffi_raw*. */ + + void (*translate_args)(ffi_cif*,void*,void**,void*); + void *this_closure; + +#endif + + void (*fun)(ffi_cif*,void*,ffi_java_raw*,void*); + void *user_data; + +} ffi_java_raw_closure; + ffi_status ffi_prep_raw_closure (ffi_raw_closure*, ffi_cif *cif, @@ -254,28 +319,42 @@ void *user_data); ffi_status -ffi_prep_java_raw_closure (ffi_raw_closure*, +ffi_prep_raw_closure_loc (ffi_raw_closure*, + ffi_cif *cif, + void (*fun)(ffi_cif*,void*,ffi_raw*,void*), + void *user_data, + void *codeloc); + +ffi_status +ffi_prep_java_raw_closure (ffi_java_raw_closure*, ffi_cif *cif, - void (*fun)(ffi_cif*,void*,ffi_raw*,void*), + void (*fun)(ffi_cif*,void*,ffi_java_raw*,void*), void *user_data); +ffi_status +ffi_prep_java_raw_closure_loc (ffi_java_raw_closure*, + ffi_cif *cif, + void (*fun)(ffi_cif*,void*,ffi_java_raw*,void*), + void *user_data, + void *codeloc); + #endif /* FFI_CLOSURES */ /* ---- Public interface definition -------------------------------------- */ -ffi_status ffi_prep_cif(/*@out@*/ /*@partial@*/ ffi_cif *cif, +ffi_status ffi_prep_cif(ffi_cif *cif, ffi_abi abi, - unsigned int nargs, - /*@dependent@*/ /*@out@*/ /*@partial@*/ ffi_type *rtype, - /*@dependent@*/ ffi_type **atypes); - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue); + unsigned int nargs, + ffi_type *rtype, + ffi_type **atypes); + +void ffi_call(ffi_cif *cif, + void (*fn)(void), + void *rvalue, + void **avalue); /* Useful for eliminating compiler warnings */ -#define FFI_FN(f) ((void (*)())f) +#define FFI_FN(f) ((void (*)(void))f) /* ---- Definitions shared with assembly code ---------------------------- */ @@ -310,4 +389,3 @@ #endif #endif - Modified: python/trunk/Modules/_ctypes/libffi/include/ffi_common.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/include/ffi_common.h (original) +++ python/trunk/Modules/_ctypes/libffi/include/ffi_common.h Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* 
----------------------------------------------------------------------- ffi_common.h - Copyright (c) 1996 Red Hat, Inc. + Copyright (C) 2007 Free Software Foundation, Inc Common internal definitions and macros. Only necessary for building libffi. @@ -18,7 +19,9 @@ this is positioned. */ #ifdef __GNUC__ # define alloca __builtin_alloca +# define MAYBE_UNUSED __attribute__((__unused__)) #else +# define MAYBE_UNUSED # if HAVE_ALLOCA_H # include # else @@ -41,20 +44,20 @@ # endif #endif -#if defined(FFI_DEBUG) +#if defined(FFI_DEBUG) #include #endif #ifdef FFI_DEBUG -/*@exits@*/ void ffi_assert(/*@temp@*/ char *expr, /*@temp@*/ char *file, int line); +void ffi_assert(char *expr, char *file, int line); void ffi_stop_here(void); -void ffi_type_test(/*@temp@*/ /*@out@*/ ffi_type *a, /*@temp@*/ char *file, int line); +void ffi_type_test(ffi_type *a, char *file, int line); #define FFI_ASSERT(x) ((x) ? (void)0 : ffi_assert(#x, __FILE__,__LINE__)) #define FFI_ASSERT_AT(x, f, l) ((x) ? 0 : ffi_assert(#x, (f), (l))) #define FFI_ASSERT_VALID_TYPE(x) ffi_type_test (x, __FILE__, __LINE__) #else -#define FFI_ASSERT(x) +#define FFI_ASSERT(x) #define FFI_ASSERT_AT(x, f, l) #define FFI_ASSERT_VALID_TYPE(x) #endif @@ -68,9 +71,9 @@ /* Extended cif, used in callback from assembly routine */ typedef struct { - /*@dependent@*/ ffi_cif *cif; - /*@dependent@*/ void *rvalue; - /*@dependent@*/ void **avalue; + ffi_cif *cif; + void *rvalue; + void **avalue; } extended_cif; /* Terse sized type definitions. */ Modified: python/trunk/Modules/_ctypes/libffi/install-sh ============================================================================== --- python/trunk/Modules/_ctypes/libffi/install-sh (original) +++ python/trunk/Modules/_ctypes/libffi/install-sh Tue Mar 4 21:09:11 2008 @@ -1,7 +1,8 @@ #!/bin/sh -# # install - install a program, script, or datafile -# + +scriptversion=2004-12-17.09 + # This originates from X11R5 (mit/util/scripts/install.sh), which was # later released in X11R6 (xc/config/util/install.sh) with the # following copyright and license. @@ -41,13 +42,11 @@ # from scratch. It can only install one file at a time, a restriction # shared with many OS's install programs. - # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit="${DOITPROG-}" - # put in absolute paths if you don't have them in your path; or use env. vars. 
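For orientation, a minimal use of the public entry points whose prototypes are de-splinted earlier in this patch (ffi_prep_cif and ffi_call in include/ffi.h.in) looks roughly like the sketch below. It is illustrative only, not code from the patch; the callee add and the variable names are invented.

    #include <ffi.h>
    #include <stdio.h>

    /* invented callee, used only for this sketch */
    static int add(int a, int b) { return a + b; }

    int main(void)
    {
        ffi_cif cif;
        ffi_type *argtypes[2] = { &ffi_type_sint, &ffi_type_sint };
        int a = 2, b = 3;
        void *argvalues[2] = { &a, &b };
        ffi_arg result;             /* integral returns widen to ffi_arg */

        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint,
                         argtypes) == FFI_OK)
          {
            /* FFI_FN is the cast helper defined in ffi.h.in */
            ffi_call(&cif, FFI_FN(add), &result, argvalues);
            printf("add(2, 3) = %ld\n", (long) result);
          }
        return 0;
    }

Integral return values are widened to ffi_arg by libffi, which is why the result is read back through that type rather than a plain int.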
mvprog="${MVPROG-mv}" @@ -59,236 +58,266 @@ rmprog="${RMPROG-rm}" mkdirprog="${MKDIRPROG-mkdir}" -transformbasename="" -transform_arg="" -instcmd="$mvprog" chmodcmd="$chmodprog 0755" -chowncmd="" -chgrpcmd="" -stripcmd="" +chowncmd= +chgrpcmd= +stripcmd= rmcmd="$rmprog -f" mvcmd="$mvprog" -src="" -dst="" -dir_arg="" - -while [ x"$1" != x ]; do - case $1 in - -c) instcmd=$cpprog - shift - continue;; - - -d) dir_arg=true - shift - continue;; - - -m) chmodcmd="$chmodprog $2" - shift - shift - continue;; - - -o) chowncmd="$chownprog $2" - shift - shift - continue;; - - -g) chgrpcmd="$chgrpprog $2" - shift - shift - continue;; - - -s) stripcmd=$stripprog - shift - continue;; - - -t=*) transformarg=`echo $1 | sed 's/-t=//'` - shift - continue;; - - -b=*) transformbasename=`echo $1 | sed 's/-b=//'` - shift - continue;; - - *) if [ x"$src" = x ] - then - src=$1 - else - # this colon is to work around a 386BSD /bin/sh bug - : - dst=$1 - fi - shift - continue;; - esac -done - -if [ x"$src" = x ] -then - echo "$0: no input file specified" >&2 - exit 1 -else - : -fi +src= +dst= +dir_arg= +dstarg= +no_target_directory= + +usage="Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE + or: $0 [OPTION]... SRCFILES... DIRECTORY + or: $0 [OPTION]... -t DIRECTORY SRCFILES... + or: $0 [OPTION]... -d DIRECTORIES... + +In the 1st form, copy SRCFILE to DSTFILE. +In the 2nd and 3rd, copy all SRCFILES to DIRECTORY. +In the 4th, create DIRECTORIES. + +Options: +-c (ignored) +-d create directories instead of installing files. +-g GROUP $chgrpprog installed files to GROUP. +-m MODE $chmodprog installed files to MODE. +-o USER $chownprog installed files to USER. +-s $stripprog installed files. +-t DIRECTORY install into DIRECTORY. +-T report an error if DSTFILE is a directory. +--help display this help and exit. +--version display version info and exit. + +Environment variables override the default commands: + CHGRPPROG CHMODPROG CHOWNPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG +" + +while test -n "$1"; do + case $1 in + -c) shift + continue;; + + -d) dir_arg=true + shift + continue;; + + -g) chgrpcmd="$chgrpprog $2" + shift + shift + continue;; + + --help) echo "$usage"; exit 0;; + + -m) chmodcmd="$chmodprog $2" + shift + shift + continue;; + + -o) chowncmd="$chownprog $2" + shift + shift + continue;; + + -s) stripcmd=$stripprog + shift + continue;; -if [ x"$dir_arg" != x ]; then - dst=$src - src="" - - if [ -d "$dst" ]; then - instcmd=: - chmodcmd="" - else - instcmd=$mkdirprog - fi -else - -# Waiting for this to be detected by the "$instcmd $src $dsttmp" command -# might cause directories to be created, which would be especially bad -# if $src (and thus $dsttmp) contains '*'. - - if [ -f "$src" ] || [ -d "$src" ] - then - : - else - echo "$0: $src does not exist" >&2 - exit 1 - fi - - if [ x"$dst" = x ] - then - echo "$0: no destination specified" >&2 - exit 1 - else - : - fi - -# If destination is a directory, append the input filename; if your system -# does not like double slashes in filenames, you may need to add some logic - - if [ -d "$dst" ] - then - dst=$dst/`basename "$src"` - else - : - fi -fi - -## this sed command emulates the dirname command -dstdir=`echo "$dst" | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` - -# Make sure that the destination directory exists. -# this part is taken from Noah Friedman's mkinstalldirs script - -# Skip lots of stat calls in the usual case. -if [ ! -d "$dstdir" ]; then -defaultIFS=' - ' -IFS="${IFS-$defaultIFS}" - -oIFS=$IFS -# Some sh's can't handle IFS=/ for some reason. 
-IFS='%' -set - `echo "$dstdir" | sed -e 's@/@%@g' -e 's@^%@/@'` -IFS=$oIFS - -pathcomp='' + -t) dstarg=$2 + shift + shift + continue;; -while [ $# -ne 0 ] ; do - pathcomp=$pathcomp$1 + -T) no_target_directory=true shift + continue;; - if [ ! -d "$pathcomp" ] ; - then - $mkdirprog "$pathcomp" - else - : - fi + --version) echo "$0 $scriptversion"; exit 0;; - pathcomp=$pathcomp/ + *) # When -d is used, all remaining arguments are directories to create. + # When -t is used, the destination is already specified. + test -n "$dir_arg$dstarg" && break + # Otherwise, the last argument is the destination. Remove it from $@. + for arg + do + if test -n "$dstarg"; then + # $@ is not empty: it contains at least $arg. + set fnord "$@" "$dstarg" + shift # fnord + fi + shift # arg + dstarg=$arg + done + break;; + esac done -fi - -if [ x"$dir_arg" != x ] -then - $doit $instcmd "$dst" && - - if [ x"$chowncmd" != x ]; then $doit $chowncmd "$dst"; else : ; fi && - if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd "$dst"; else : ; fi && - if [ x"$stripcmd" != x ]; then $doit $stripcmd "$dst"; else : ; fi && - if [ x"$chmodcmd" != x ]; then $doit $chmodcmd "$dst"; else : ; fi -else - -# If we're going to rename the final executable, determine the name now. - if [ x"$transformarg" = x ] - then - dstfile=`basename "$dst"` - else - dstfile=`basename "$dst" $transformbasename | - sed $transformarg`$transformbasename - fi - -# don't allow the sed command to completely eliminate the filename - - if [ x"$dstfile" = x ] - then - dstfile=`basename "$dst"` - else - : - fi - -# Make a couple of temp file names in the proper directory. - - dsttmp=$dstdir/#inst.$$# - rmtmp=$dstdir/#rm.$$# - -# Trap to clean up temp files at exit. - - trap 'status=$?; rm -f "$dsttmp" "$rmtmp" && exit $status' 0 - trap '(exit $?); exit' 1 2 13 15 - -# Move or copy the file name to the temp name - - $doit $instcmd "$src" "$dsttmp" && - -# and set any options; do chmod last to preserve setuid bits - -# If any of these fail, we abort the whole thing. If we want to -# ignore errors from any of these, just make sure not to ignore -# errors from the above "$doit $instcmd $src $dsttmp" command. - - if [ x"$chowncmd" != x ]; then $doit $chowncmd "$dsttmp"; else :;fi && - if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd "$dsttmp"; else :;fi && - if [ x"$stripcmd" != x ]; then $doit $stripcmd "$dsttmp"; else :;fi && - if [ x"$chmodcmd" != x ]; then $doit $chmodcmd "$dsttmp"; else :;fi && - -# Now remove or move aside any old file at destination location. We try this -# two ways since rm can't unlink itself on some systems and the destination -# file might be busy for other reasons. In this case, the final cleanup -# might fail but the new file should still install successfully. - -{ - if [ -f "$dstdir/$dstfile" ] - then - $doit $rmcmd -f "$dstdir/$dstfile" 2>/dev/null || - $doit $mvcmd -f "$dstdir/$dstfile" "$rmtmp" 2>/dev/null || - { - echo "$0: cannot unlink or rename $dstdir/$dstfile" >&2 - (exit 1); exit - } - else - : - fi -} && - -# Now rename the file to the real destination. +if test -z "$1"; then + if test -z "$dir_arg"; then + echo "$0: no input file specified." >&2 + exit 1 + fi + # It's OK to call `install-sh -d' without argument. + # This can happen when creating conditional directories. + exit 0 +fi - $doit $mvcmd "$dsttmp" "$dstdir/$dstfile" +for src +do + # Protect names starting with `-'. 
+ case $src in + -*) src=./$src ;; + esac + + if test -n "$dir_arg"; then + dst=$src + src= + + if test -d "$dst"; then + mkdircmd=: + chmodcmd= + else + mkdircmd=$mkdirprog + fi + else + # Waiting for this to be detected by the "$cpprog $src $dsttmp" command + # might cause directories to be created, which would be especially bad + # if $src (and thus $dsttmp) contains '*'. + if test ! -f "$src" && test ! -d "$src"; then + echo "$0: $src does not exist." >&2 + exit 1 + fi + + if test -z "$dstarg"; then + echo "$0: no destination specified." >&2 + exit 1 + fi + + dst=$dstarg + # Protect names starting with `-'. + case $dst in + -*) dst=./$dst ;; + esac -fi && + # If destination is a directory, append the input filename; won't work + # if double slashes aren't ignored. + if test -d "$dst"; then + if test -n "$no_target_directory"; then + echo "$0: $dstarg: Is a directory" >&2 + exit 1 + fi + dst=$dst/`basename "$src"` + fi + fi + + # This sed command emulates the dirname command. + dstdir=`echo "$dst" | sed -e 's,/*$,,;s,[^/]*$,,;s,/*$,,;s,^$,.,'` + + # Make sure that the destination directory exists. + + # Skip lots of stat calls in the usual case. + if test ! -d "$dstdir"; then + defaultIFS=' + ' + IFS="${IFS-$defaultIFS}" + + oIFS=$IFS + # Some sh's can't handle IFS=/ for some reason. + IFS='%' + set x `echo "$dstdir" | sed -e 's@/@%@g' -e 's@^%@/@'` + shift + IFS=$oIFS + + pathcomp= + + while test $# -ne 0 ; do + pathcomp=$pathcomp$1 + shift + if test ! -d "$pathcomp"; then + $mkdirprog "$pathcomp" + # mkdir can fail with a `File exist' error in case several + # install-sh are creating the directory concurrently. This + # is OK. + test -d "$pathcomp" || exit + fi + pathcomp=$pathcomp/ + done + fi + + if test -n "$dir_arg"; then + $doit $mkdircmd "$dst" \ + && { test -z "$chowncmd" || $doit $chowncmd "$dst"; } \ + && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } \ + && { test -z "$stripcmd" || $doit $stripcmd "$dst"; } \ + && { test -z "$chmodcmd" || $doit $chmodcmd "$dst"; } + + else + dstfile=`basename "$dst"` + + # Make a couple of temp file names in the proper directory. + dsttmp=$dstdir/_inst.$$_ + rmtmp=$dstdir/_rm.$$_ + + # Trap to clean up those temp files at exit. + trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0 + trap '(exit $?); exit' 1 2 13 15 + + # Copy the file name to the temp name. + $doit $cpprog "$src" "$dsttmp" && + + # and set any options; do chmod last to preserve setuid bits. + # + # If any of these fail, we abort the whole thing. If we want to + # ignore errors from any of these, just make sure not to ignore + # errors from the above "$doit $cpprog $src $dsttmp" command. + # + { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } \ + && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } \ + && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } \ + && { test -z "$chmodcmd" || $doit $chmodcmd "$dsttmp"; } && + + # Now rename the file to the real destination. + { $doit $mvcmd -f "$dsttmp" "$dstdir/$dstfile" 2>/dev/null \ + || { + # The rename failed, perhaps because mv can't rename something else + # to itself, or perhaps because mv is so ancient that it does not + # support -f. + + # Now remove or move aside any old file at destination location. + # We try this two ways since rm can't unlink itself on some + # systems and the destination file might be busy for other + # reasons. In this case, the final cleanup might fail but the new + # file should still install successfully. 
+ { + if test -f "$dstdir/$dstfile"; then + $doit $rmcmd -f "$dstdir/$dstfile" 2>/dev/null \ + || $doit $mvcmd -f "$dstdir/$dstfile" "$rmtmp" 2>/dev/null \ + || { + echo "$0: cannot unlink or rename $dstdir/$dstfile" >&2 + (exit 1); exit 1 + } + else + : + fi + } && + + # Now rename the file to the real destination. + $doit $mvcmd "$dsttmp" "$dstdir/$dstfile" + } + } + fi || { (exit 1); exit 1; } +done # The final little trick to "correctly" pass the exit status to the exit trap. - { - (exit 0); exit + (exit 0); exit 0 } + +# Local variables: +# eval: (add-hook 'write-file-hooks 'time-stamp) +# time-stamp-start: "scriptversion=" +# time-stamp-format: "%:y-%02m-%02d.%02H" +# time-stamp-end: "$" +# End: Modified: python/trunk/Modules/_ctypes/libffi/src/alpha/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/alpha/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/alpha/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1998, 2001 Red Hat, Inc. + ffi.c - Copyright (c) 1998, 2001, 2007, 2008 Red Hat, Inc. Alpha Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -169,10 +170,11 @@ ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*, void*, void**, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) { unsigned int *tramp; Modified: python/trunk/Modules/_ctypes/libffi/src/alpha/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/alpha/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/alpha/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ @@ -33,8 +34,8 @@ typedef enum ffi_abi { FFI_FIRST_ABI = 0, FFI_OSF, - FFI_DEFAULT_ABI = FFI_OSF, - FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 + FFI_LAST_ABI, + FFI_DEFAULT_ABI = FFI_OSF } ffi_abi; #endif @@ -45,4 +46,3 @@ #define FFI_NATIVE_RAW_API 0 #endif - Modified: python/trunk/Modules/_ctypes/libffi/src/alpha/osf.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/alpha/osf.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/alpha/osf.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - osf.S - Copyright (c) 1998, 2001, 2007 Red Hat + osf.S - Copyright (c) 1998, 2001, 2007, 2008 Red Hat Alpha/OSF Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -31,7 +32,7 @@ .text /* ffi_call_osf (void *args, unsigned long bytes, unsigned flags, - void *raddr, void (*fnaddr)()); + void *raddr, void (*fnaddr)(void)); Bit o trickiness here -- ARGS+BYTES is the base of the stack frame for this function. This has been allocated by ffi_call. 
We also @@ -358,4 +359,8 @@ .byte 16 # uleb128 offset 16*-8 .align 3 $LEFDE3: + +#ifdef __linux__ + .section .note.GNU-stack,"", at progbits +#endif #endif Modified: python/trunk/Modules/_ctypes/libffi/src/arm/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/arm/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/arm/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1998 Red Hat, Inc. + ffi.c - Copyright (c) 1998, 2008 Red Hat, Inc. ARM Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -31,9 +32,7 @@ /* ffi_prep_args is called by the assembly routine once stack space has been allocated for the function's arguments */ -/*@-exportheader@*/ void ffi_prep_args(char *stack, extended_cif *ecif) -/*@=exportheader@*/ { register unsigned int i; register void **p_argv; @@ -42,7 +41,7 @@ argp = stack; - if ( ecif->cif->rtype->type == FFI_TYPE_STRUCT ) { + if ( ecif->cif->flags == FFI_TYPE_STRUCT ) { *(void **) argp = ecif->rvalue; argp += 4; } @@ -60,6 +59,9 @@ argp = (char *) ALIGN(argp, (*p_arg)->alignment); } + if ((*p_arg)->type == FFI_TYPE_STRUCT) + argp = (char *) ALIGN(argp, 4); + z = (*p_arg)->size; if (z < sizeof(int)) { @@ -83,7 +85,7 @@ break; case FFI_TYPE_STRUCT: - *(unsigned int *) argp = (unsigned int)*(UINT32 *)(* p_argv); + memcpy(argp, *p_argv, (*p_arg)->size); break; default: @@ -117,7 +119,6 @@ switch (cif->rtype->type) { case FFI_TYPE_VOID: - case FFI_TYPE_STRUCT: case FFI_TYPE_FLOAT: case FFI_TYPE_DOUBLE: cif->flags = (unsigned) cif->rtype->type; @@ -128,6 +129,17 @@ cif->flags = (unsigned) FFI_TYPE_SINT64; break; + case FFI_TYPE_STRUCT: + if (cif->rtype->size <= 4) + /* A Composite Type not larger than 4 bytes is returned in r0. */ + cif->flags = (unsigned)FFI_TYPE_INT; + else + /* A Composite Type larger than 4 bytes, or whose size cannot + be determined statically ... is stored in memory at an + address passed [in r0]. 
*/ + cif->flags = (unsigned)FFI_TYPE_STRUCT; + break; + default: cif->flags = FFI_TYPE_INT; break; @@ -136,50 +148,162 @@ return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); -/*@=declundef@*/ -/*@=exportheader@*/ +extern void ffi_call_SYSV(void (*)(char *, extended_cif *), extended_cif *, + unsigned, unsigned, unsigned *, void (*fn)(void)); -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; + int small_struct = (cif->flags == FFI_TYPE_INT + && cif->rtype->type == FFI_TYPE_STRUCT); + ecif.cif = cif; ecif.avalue = avalue; + + unsigned int temp; /* If the return value is a struct and we don't have a return */ /* value address then we need to make one */ if ((rvalue == NULL) && - (cif->rtype->type == FFI_TYPE_STRUCT)) + (cif->flags == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } + else if (small_struct) + ecif.rvalue = &temp; else ecif.rvalue = rvalue; - - + switch (cif->abi) { case FFI_SYSV: - /*@-usedef@*/ - ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ + ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, cif->flags, ecif.rvalue, + fn); + break; default: FFI_ASSERT(0); break; } + if (small_struct) + memcpy (rvalue, &temp, cif->rtype->size); +} + +/** private members **/ + +static void ffi_prep_incoming_args_SYSV (char *stack, void **ret, + void** args, ffi_cif* cif); + +void ffi_closure_SYSV (ffi_closure *); + +/* This function is jumped to by the trampoline */ + +unsigned int +ffi_closure_SYSV_inner (closure, respp, args) + ffi_closure *closure; + void **respp; + void *args; +{ + // our various things... + ffi_cif *cif; + void **arg_area; + + cif = closure->cif; + arg_area = (void**) alloca (cif->nargs * sizeof (void*)); + + /* this call will initialize ARG_AREA, such that each + * element in that array points to the corresponding + * value on the stack; and if the function returns + * a structure, it will re-set RESP to point to the + * structure return address. */ + + ffi_prep_incoming_args_SYSV(args, respp, arg_area, cif); + + (closure->fun) (cif, *respp, arg_area, closure->user_data); + + return cif->flags; +} + +/*@-exportheader@*/ +static void +ffi_prep_incoming_args_SYSV(char *stack, void **rvalue, + void **avalue, ffi_cif *cif) +/*@=exportheader@*/ +{ + register unsigned int i; + register void **p_argv; + register char *argp; + register ffi_type **p_arg; + + argp = stack; + + if ( cif->flags == FFI_TYPE_STRUCT ) { + *rvalue = *(void **) argp; + argp += 4; + } + + p_argv = avalue; + + for (i = cif->nargs, p_arg = cif->arg_types; (i != 0); i--, p_arg++) + { + size_t z; + + size_t alignment = (*p_arg)->alignment; + if (alignment < 4) + alignment = 4; + /* Align if necessary */ + if ((alignment - 1) & (unsigned) argp) { + argp = (char *) ALIGN(argp, alignment); + } + + z = (*p_arg)->size; + + /* because we're little endian, this is what it turns into. */ + + *p_argv = (void*) argp; + + p_argv++; + argp += z; + } + + return; +} + +/* How to make a trampoline. 
*/ + +#define FFI_INIT_TRAMPOLINE(TRAMP,FUN,CTX) \ +({ unsigned char *__tramp = (unsigned char*)(TRAMP); \ + unsigned int __fun = (unsigned int)(FUN); \ + unsigned int __ctx = (unsigned int)(CTX); \ + *(unsigned int*) &__tramp[0] = 0xe92d000f; /* stmfd sp!, {r0-r3} */ \ + *(unsigned int*) &__tramp[4] = 0xe59f0000; /* ldr r0, [pc] */ \ + *(unsigned int*) &__tramp[8] = 0xe59ff000; /* ldr pc, [pc] */ \ + *(unsigned int*) &__tramp[12] = __ctx; \ + *(unsigned int*) &__tramp[16] = __fun; \ + __clear_cache((&__tramp[0]), (&__tramp[19])); \ + }) + + +/* the cif must already be prep'ed */ + +ffi_status +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) +{ + FFI_ASSERT (cif->abi == FFI_SYSV); + + FFI_INIT_TRAMPOLINE (&closure->tramp[0], \ + &ffi_closure_SYSV, \ + codeloc); + + closure->cif = cif; + closure->user_data = user_data; + closure->fun = fun; + + return FFI_OK; } Modified: python/trunk/Modules/_ctypes/libffi/src/arm/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/arm/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/arm/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ @@ -40,7 +41,8 @@ /* ---- Definitions for closures ----------------------------------------- */ -#define FFI_CLOSURES 0 +#define FFI_CLOSURES 1 +#define FFI_TRAMPOLINE_SIZE 20 #define FFI_NATIVE_RAW_API 0 #endif Modified: python/trunk/Modules/_ctypes/libffi/src/arm/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/arm/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/arm/sysv.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - sysv.S - Copyright (c) 1998 Red Hat, Inc. + sysv.S - Copyright (c) 1998, 2008 Red Hat, Inc. ARM Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
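
With FFI_CLOSURES set to 1 and FFI_INIT_TRAMPOLINE defined, the ARM port can now synthesize a callable C function pointer at runtime. The following is a minimal sketch of how client code drives ffi_prep_closure_loc, assuming the libffi 3.0 allocator (ffi_closure_alloc/ffi_closure_free) that accompanies it; puts_binding is an invented handler, not part of the patch.

    #include <ffi.h>
    #include <stdio.h>

    /* Handler reached through the generated trampoline. */
    static void puts_binding(ffi_cif *cif, void *ret, void **args, void *user_data)
    {
        (void)cif; (void)user_data;
        *(ffi_arg *)ret = puts(*(char **)args[0]);
    }

    int main(void)
    {
        ffi_cif cif;
        ffi_type *arg_types[1] = { &ffi_type_pointer };
        void *code;                 /* executable address of the trampoline */
        ffi_closure *closure = ffi_closure_alloc(sizeof(ffi_closure), &code);
        int (*fn)(char *);

        if (closure == NULL)
            return 1;
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_sint, arg_types) != FFI_OK)
            return 1;
        /* The last argument is the codeloc: the executable address, which may
           differ from the writable closure object when the heap is non-executable. */
        if (ffi_prep_closure_loc(closure, &cif, puts_binding, NULL, code) != FFI_OK)
            return 1;

        fn = (int (*)(char *)) code;
        fn("hello from a libffi closure");

        ffi_closure_free(closure);
        return 0;
    }
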
- IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -82,6 +83,14 @@ # define call_reg(x) mov lr, pc ; mov pc, x #endif +/* Conditionally compile unwinder directives. */ +#ifdef __ARM_EABI__ +#define UNWIND +#else +#define UNWIND @ +#endif + + #if defined(__thumb__) && !defined(__THUMB_INTERWORK__) .macro ARM_FUNC_START name .text @@ -92,6 +101,7 @@ bx pc nop .arm + UNWIND .fnstart /* A hook to tell gdb that we've switched to ARM mode. Also used to call directly from other local arm routines. */ _L__\name: @@ -102,6 +112,7 @@ .align 0 .arm ENTRY(\name) + UNWIND .fnstart .endm #endif @@ -134,8 +145,11 @@ ARM_FUNC_START ffi_call_SYSV @ Save registers stmfd sp!, {r0-r3, fp, lr} + UNWIND .save {r0-r3, fp, lr} mov fp, sp + UNWIND .setfp fp, sp + @ Make room for all of the new args. sub sp, fp, r2 @@ -205,5 +219,81 @@ RETLDM "r0-r3,fp" .ffi_call_SYSV_end: + UNWIND .fnend .size CNAME(ffi_call_SYSV),.ffi_call_SYSV_end-CNAME(ffi_call_SYSV) +/* + unsigned int FFI_HIDDEN + ffi_closure_SYSV_inner (closure, respp, args) + ffi_closure *closure; + void **respp; + void *args; +*/ + +ARM_FUNC_START ffi_closure_SYSV + UNWIND .pad #16 + add ip, sp, #16 + stmfd sp!, {ip, lr} + UNWIND .save {r0, lr} + add r2, sp, #8 + .pad #16 + sub sp, sp, #16 + str sp, [sp, #8] + add r1, sp, #8 + bl ffi_closure_SYSV_inner + cmp r0, #FFI_TYPE_INT + beq .Lretint + + cmp r0, #FFI_TYPE_FLOAT +#ifdef __SOFTFP__ + beq .Lretint +#else + beq .Lretfloat +#endif + + cmp r0, #FFI_TYPE_DOUBLE +#ifdef __SOFTFP__ + beq .Lretlonglong +#else + beq .Lretdouble +#endif + + cmp r0, #FFI_TYPE_LONGDOUBLE +#ifdef __SOFTFP__ + beq .Lretlonglong +#else + beq .Lretlongdouble +#endif + + cmp r0, #FFI_TYPE_SINT64 + beq .Lretlonglong +.Lclosure_epilogue: + add sp, sp, #16 + ldmfd sp, {sp, pc} +.Lretint: + ldr r0, [sp] + b .Lclosure_epilogue +.Lretlonglong: + ldr r0, [sp] + ldr r1, [sp, #4] + b .Lclosure_epilogue + +#ifndef __SOFTFP__ +.Lretfloat: + ldfs f0, [sp] + b .Lclosure_epilogue +.Lretdouble: + ldfd f0, [sp] + b .Lclosure_epilogue +.Lretlongdouble: + ldfd f0, [sp] + b .Lclosure_epilogue +#endif + +.ffi_closure_SYSV_end: + UNWIND .fnend + .size CNAME(ffi_closure_SYSV),.ffi_closure_SYSV_end-CNAME(ffi_closure_SYSV) + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"",%progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/cris/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/cris/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/cris/ffi.c Tue Mar 4 21:09:11 2008 @@ -2,6 +2,7 @@ ffi.c - Copyright (c) 1998 Cygnus Solutions Copyright (c) 2004 Simon Posnjak Copyright (c) 2005 Axis Communications AB + Copyright (C) 2007 Free Software Foundation, Inc. 
CRIS Foreign Function Interface @@ -360,10 +361,11 @@ /* API function: Prepare the trampoline. */ ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif *, void *, void **, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif *, void *, void **, void*), + void *user_data, + void *codeloc) { void *innerfn = ffi_prep_closure_inner; FFI_ASSERT (cif->abi == FFI_SYSV); @@ -375,7 +377,7 @@ memcpy (closure->tramp + ffi_cris_trampoline_fn_offset, &innerfn, sizeof (void *)); memcpy (closure->tramp + ffi_cris_trampoline_closure_offset, - &closure, sizeof (void *)); + &codeloc, sizeof (void *)); return FFI_OK; } Modified: python/trunk/Modules/_ctypes/libffi/src/cris/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/cris/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/cris/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/frv/eabi.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/frv/eabi.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/frv/eabi.S Tue Mar 4 21:09:11 2008 @@ -3,8 +3,6 @@ FR-V Assembly glue. - $Id: eabi.S,v 1.2 2006/03/03 20:24:46 theller Exp $ - Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including Modified: python/trunk/Modules/_ctypes/libffi/src/frv/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/frv/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/frv/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,7 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 2004 Anthony Green + ffi.c - Copyright (C) 2004 Anthony Green + Copyright (C) 2007 Free Software Foundation, Inc. + Copyright (C) 2008 Red Hat, Inc. FR-V Foreign Function Interface @@ -14,13 +16,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
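
The same rename, ffi_prep_closure to ffi_prep_closure_loc with an extra codeloc argument, repeats in every port touched by this patch. The old entry point can be read as the special case where the writable closure and its executable address coincide; the shim below is only a sketch of that relationship, not necessarily the exact compatibility code libffi ships.

    /* Sketch only: old-style API expressed through the new one, assuming the
       closure memory itself is executable (codeloc == closure). */
    ffi_status ffi_prep_closure(ffi_closure *closure, ffi_cif *cif,
                                void (*fun)(ffi_cif *, void *, void **, void *),
                                void *user_data)
    {
        return ffi_prep_closure_loc(closure, cif, fun, user_data, closure);
    }
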
- THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -124,10 +127,10 @@ extended_cif *, unsigned, unsigned, unsigned *, - void (*fn)()); + void (*fn)(void)); void ffi_call(ffi_cif *cif, - void (*fn)(), + void (*fn)(void), void *rvalue, void **avalue) { @@ -243,14 +246,15 @@ } ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*, void*, void**, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) { unsigned int *tramp = (unsigned int *) &closure->tramp[0]; unsigned long fn = (long) ffi_closure_eabi; - unsigned long cls = (long) closure; + unsigned long cls = (long) codeloc; #ifdef __FRV_FDPIC__ register void *got __asm__("gr15"); #endif @@ -259,7 +263,7 @@ fn = (unsigned long) ffi_closure_eabi; #ifdef __FRV_FDPIC__ - tramp[0] = &tramp[2]; + tramp[0] = &((unsigned int *)codeloc)[2]; tramp[1] = got; tramp[2] = 0x8cfc0000 + (fn & 0xffff); /* setlos lo(fn), gr6 */ tramp[3] = 0x8efc0000 + (cls & 0xffff); /* setlos lo(cls), gr7 */ @@ -281,7 +285,8 @@ /* Cache flushing. */ for (i = 0; i < FFI_TRAMPOLINE_SIZE; i++) - __asm__ volatile ("dcf @(%0,%1)\n\tici @(%0,%1)" :: "r" (tramp), "r" (i)); + __asm__ volatile ("dcf @(%0,%1)\n\tici @(%2,%1)" :: "r" (tramp), "r" (i), + "r" (codeloc)); return FFI_OK; } Modified: python/trunk/Modules/_ctypes/libffi/src/frv/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/frv/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/frv/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. 
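
The FRV hunk above rewrites a freshly assembled trampoline and then walks it with dcf/ici to push the data cache out and invalidate the instruction cache; the MIPS changes later in this patch do the same job with __builtin___clear_cache when GCC is new enough. A hedged sketch of that portability pattern follows; flush_icache is an invented name.

    #include <stddef.h>

    /* Minimal sketch, assuming GCC: 4.3+ provides __builtin___clear_cache;
       older toolchains fall back to whatever the port uses (cacheflush() on
       Linux/MIPS, inline dcf/ici on FRV, __clear_cache on ARM). */
    static void flush_icache(char *start, size_t len)
    {
    #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
        __builtin___clear_cache(start, start + len);
    #else
        (void)start; (void)len;   /* port-specific flush goes here */
    #endif
    }
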
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/ia64/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/ia64/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/ia64/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1998 Red Hat, Inc. + ffi.c - Copyright (c) 1998, 2007, 2008 Red Hat, Inc. Copyright (c) 2000 Hewlett Packard Company IA64 Foreign Function Interface @@ -15,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -69,24 +70,19 @@ #endif } -/* Store VALUE to ADDR in the current cpu implementation's fp spill format. */ +/* Store VALUE to ADDR in the current cpu implementation's fp spill format. + This is a macro instead of a function, so that it works for all 3 floating + point types without type conversions. Type conversion to long double breaks + the denorm support. */ -static inline void -stf_spill(fpreg *addr, __float80 value) -{ +#define stf_spill(addr, value) \ asm ("stf.spill %0 = %1%P0" : "=m" (*addr) : "f"(value)); -} /* Load a value from ADDR, which is in the current cpu implementation's - fp spill format. */ + fp spill format. As above, this must also be a macro. */ -static inline __float80 -ldf_fill(fpreg *addr) -{ - __float80 ret; - asm ("ldf.fill %0 = %1%P1" : "=f"(ret) : "m"(*addr)); - return ret; -} +#define ldf_fill(result, addr) \ + asm ("ldf.fill %0 = %1%P1" : "=f"(result) : "m"(*addr)); /* Return the size of the C type associated with with TYPE. Which will be one of the FFI_IA64_TYPE_HFA_* values. */ @@ -110,17 +106,20 @@ /* Load from ADDR a value indicated by TYPE. Which will be one of the FFI_IA64_TYPE_HFA_* values. 
*/ -static __float80 -hfa_type_load (int type, void *addr) +static void +hfa_type_load (fpreg *fpaddr, int type, void *addr) { switch (type) { case FFI_IA64_TYPE_HFA_FLOAT: - return *(float *) addr; + stf_spill (fpaddr, *(float *) addr); + return; case FFI_IA64_TYPE_HFA_DOUBLE: - return *(double *) addr; + stf_spill (fpaddr, *(double *) addr); + return; case FFI_IA64_TYPE_HFA_LDOUBLE: - return *(__float80 *) addr; + stf_spill (fpaddr, *(__float80 *) addr); + return; default: abort (); } @@ -130,19 +129,31 @@ the FFI_IA64_TYPE_HFA_* values. */ static void -hfa_type_store (int type, void *addr, __float80 value) +hfa_type_store (int type, void *addr, fpreg *fpaddr) { switch (type) { case FFI_IA64_TYPE_HFA_FLOAT: - *(float *) addr = value; - break; + { + float result; + ldf_fill (result, fpaddr); + *(float *) addr = result; + break; + } case FFI_IA64_TYPE_HFA_DOUBLE: - *(double *) addr = value; - break; + { + double result; + ldf_fill (result, fpaddr); + *(double *) addr = result; + break; + } case FFI_IA64_TYPE_HFA_LDOUBLE: - *(__float80 *) addr = value; - break; + { + __float80 result; + ldf_fill (result, fpaddr); + *(__float80 *) addr = result; + break; + } default: abort (); } @@ -351,8 +362,8 @@ && offset < size && gp_offset < 8 * 8) { - stf_spill (&stack->fp_regs[fpcount], - hfa_type_load (hfa_type, avalue[i] + offset)); + hfa_type_load (&stack->fp_regs[fpcount], hfa_type, + avalue[i] + offset); offset += hfa_size; gp_offset += hfa_size; fpcount += 1; @@ -387,13 +398,14 @@ gp pointer to the closure. This allows the function entry code to both retrieve the user data, and to restire the correct gp pointer. */ -extern void ffi_closure_unix (void); +extern void ffi_closure_unix (); ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,void**,void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) { /* The layout of a function descriptor. A C function pointer really points to one of these. 
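
The hfa_* helpers above exist because the IA-64 ABI returns a homogeneous floating-point aggregate, a struct whose leaf members all share one floating type, in floating-point registers rather than through memory. The sketch below shows the classification idea in plain C against the ffi_type description; it is illustrative only, and the real port also accepts long double and caps the element count.

    #include <ffi.h>
    #include <stddef.h>

    /* Return the common floating-point leaf type of T, or NULL when T is not
       a homogeneous floating-point aggregate.  Illustrative sketch. */
    static ffi_type *hfa_leaf_type(ffi_type *t)
    {
        if (t->type == FFI_TYPE_FLOAT || t->type == FFI_TYPE_DOUBLE)
            return t;
        if (t->type == FFI_TYPE_STRUCT && t->elements && t->elements[0])
        {
            ffi_type *first = hfa_leaf_type(t->elements[0]);
            size_t i;
            if (first == NULL)
                return NULL;
            for (i = 0; t->elements[i] != NULL; i++)
            {
                ffi_type *leaf = hfa_leaf_type(t->elements[i]);
                if (leaf == NULL || leaf->type != first->type)
                    return NULL;
            }
            return first;
        }
        return NULL;   /* integers, pointers, mixed aggregates */
    }
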
*/ @@ -420,7 +432,7 @@ tramp->code_pointer = fd->code_pointer; tramp->real_gp = fd->gp; - tramp->fake_gp = (UINT64)(PTR64)closure; + tramp->fake_gp = (UINT64)(PTR64)codeloc; closure->cif = cif; closure->user_data = user_data; closure->fun = fun; @@ -475,9 +487,11 @@ case FFI_TYPE_FLOAT: if (gpcount < 8 && fpcount < 8) { - void *addr = &stack->fp_regs[fpcount++]; + fpreg *addr = &stack->fp_regs[fpcount++]; + float result; avalue[i] = addr; - *(float *)addr = ldf_fill (addr); + ldf_fill (result, addr); + *(float *)addr = result; } else avalue[i] = endian_adjust(&stack->gp_regs[gpcount], 4); @@ -487,9 +501,11 @@ case FFI_TYPE_DOUBLE: if (gpcount < 8 && fpcount < 8) { - void *addr = &stack->fp_regs[fpcount++]; + fpreg *addr = &stack->fp_regs[fpcount++]; + double result; avalue[i] = addr; - *(double *)addr = ldf_fill (addr); + ldf_fill (result, addr); + *(double *)addr = result; } else avalue[i] = &stack->gp_regs[gpcount]; @@ -501,9 +517,11 @@ gpcount++; if (LDBL_MANT_DIG == 64 && gpcount < 8 && fpcount < 8) { - void *addr = &stack->fp_regs[fpcount++]; + fpreg *addr = &stack->fp_regs[fpcount++]; + __float80 result; avalue[i] = addr; - *(__float80 *)addr = ldf_fill (addr); + ldf_fill (result, addr); + *(__float80 *)addr = result; } else avalue[i] = &stack->gp_regs[gpcount]; @@ -533,8 +551,8 @@ && offset < size && gp_offset < 8 * 8) { - hfa_type_store (hfa_type, addr + offset, - ldf_fill (&stack->fp_regs[fpcount])); + hfa_type_store (hfa_type, addr + offset, + &stack->fp_regs[fpcount]); offset += hfa_size; gp_offset += hfa_size; fpcount += 1; Modified: python/trunk/Modules/_ctypes/libffi/src/ia64/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/ia64/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/ia64/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/ia64/ia64_flags.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/ia64/ia64_flags.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/ia64/ia64_flags.h Tue Mar 4 21:09:11 2008 @@ -16,13 +16,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
- THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ /* "Type" codes used between assembly and C. When used as a part of Modified: python/trunk/Modules/_ctypes/libffi/src/ia64/unix.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/ia64/unix.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/ia64/unix.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - unix.S - Copyright (c) 1998 Red Hat, Inc. + unix.S - Copyright (c) 1998, 2008 Red Hat, Inc. Copyright (c) 2000 Hewlett Packard Company IA64/unix Foreign Function Interface @@ -19,13 +19,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -37,7 +38,7 @@ .text /* int ffi_call_unix (struct ia64_args *stack, PTR64 rvalue, - void (*fn)(), int flags); + void (*fn)(void), int flags); */ .align 16 @@ -553,3 +554,7 @@ data8 @pcrel(.Lld_hfa_float) // FFI_IA64_TYPE_HFA_FLOAT data8 @pcrel(.Lld_hfa_double) // FFI_IA64_TYPE_HFA_DOUBLE data8 @pcrel(.Lld_hfa_ldouble) // FFI_IA64_TYPE_HFA_LDOUBLE + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/m32r/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/m32r/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/m32r/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- ffi.c - Copyright (c) 2004 Renesas Technology + Copyright (c) 2008 Red Hat, Inc. M32R Foreign Function Interface @@ -31,9 +32,7 @@ /* ffi_prep_args is called by the assembly routine once stack space has been allocated for the function's arguments. */ -/*@-exportheader@*/ void ffi_prep_args(char *stack, extended_cif *ecif) -/*@=exportheader@*/ { unsigned int i; int tmp; @@ -173,20 +172,10 @@ return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); -/*@=declundef@*/ -/*@=exportheader@*/ - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +extern void ffi_call_SYSV(void (*)(char *, extended_cif *), extended_cif *, + unsigned, unsigned, unsigned *, void (*fn)(void)); + +void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; @@ -198,9 +187,7 @@ if ((rvalue == NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca (cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -208,7 +195,6 @@ switch (cif->abi) { case FFI_SYSV: - /*@-usedef@*/ ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, cif->flags, ecif.rvalue, fn); if (cif->rtype->type == FFI_TYPE_STRUCT) @@ -237,7 +223,6 @@ } } } - /*@=usedef@*/ break; default: Modified: python/trunk/Modules/_ctypes/libffi/src/m68k/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/m68k/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/m68k/ffi.c Tue Mar 4 21:09:11 2008 @@ -8,11 +8,23 @@ #include #include +#include +#include +#include + +void ffi_call_SYSV (extended_cif *, + unsigned, unsigned, + void *, void (*fn) ()); +void *ffi_prep_args (void *stack, extended_cif *ecif); +void ffi_closure_SYSV (ffi_closure *); +void ffi_closure_struct_SYSV (ffi_closure *); +unsigned int ffi_closure_SYSV_inner (ffi_closure *closure, + void *resp, void *args); /* ffi_prep_args is called by the assembly routine once stack space has been allocated for the function's arguments. */ -static void * +void * ffi_prep_args (void *stack, extended_cif *ecif) { unsigned int i; @@ -24,7 +36,7 @@ argp = stack; if (ecif->cif->rtype->type == FFI_TYPE_STRUCT - && ecif->cif->rtype->size > 8) + && !ecif->cif->flags) struct_value_ptr = ecif->rvalue; else struct_value_ptr = NULL; @@ -37,44 +49,47 @@ { size_t z; - /* Align if necessary. 
*/ - if (((*p_arg)->alignment - 1) & (unsigned) argp) - argp = (char *) ALIGN (argp, (*p_arg)->alignment); - - z = (*p_arg)->size; - if (z < sizeof (int)) + z = (*p_arg)->size; + if (z < sizeof (int)) + { + switch ((*p_arg)->type) { - switch ((*p_arg)->type) - { - case FFI_TYPE_SINT8: - *(signed int *) argp = (signed int) *(SINT8 *) *p_argv; - break; - - case FFI_TYPE_UINT8: - *(unsigned int *) argp = (unsigned int) *(UINT8 *) *p_argv; - break; - - case FFI_TYPE_SINT16: - *(signed int *) argp = (signed int) *(SINT16 *) *p_argv; - break; - - case FFI_TYPE_UINT16: - *(unsigned int *) argp = (unsigned int) *(UINT16 *) *p_argv; - break; - - case FFI_TYPE_STRUCT: - memcpy (argp + sizeof (int) - z, *p_argv, z); - break; - - default: - FFI_ASSERT (0); - } - z = sizeof (int); + case FFI_TYPE_SINT8: + *(signed int *) argp = (signed int) *(SINT8 *) *p_argv; + break; + + case FFI_TYPE_UINT8: + *(unsigned int *) argp = (unsigned int) *(UINT8 *) *p_argv; + break; + + case FFI_TYPE_SINT16: + *(signed int *) argp = (signed int) *(SINT16 *) *p_argv; + break; + + case FFI_TYPE_UINT16: + *(unsigned int *) argp = (unsigned int) *(UINT16 *) *p_argv; + break; + + case FFI_TYPE_STRUCT: + memcpy (argp + sizeof (int) - z, *p_argv, z); + break; + + default: + FFI_ASSERT (0); } - else - memcpy (argp, *p_argv, z); - p_argv++; - argp += z; + z = sizeof (int); + } + else + { + memcpy (argp, *p_argv, z); + + /* Align if necessary. */ + if ((sizeof(int) - 1) & z) + z = ALIGN(z, sizeof(int)); + } + + p_argv++; + argp += z; } return struct_value_ptr; @@ -86,7 +101,8 @@ #define CIF_FLAGS_DOUBLE 8 #define CIF_FLAGS_LDOUBLE 16 #define CIF_FLAGS_POINTER 32 -#define CIF_FLAGS_STRUCT 64 +#define CIF_FLAGS_STRUCT1 64 +#define CIF_FLAGS_STRUCT2 128 /* Perform machine dependent cif processing */ ffi_status @@ -100,12 +116,24 @@ break; case FFI_TYPE_STRUCT: - if (cif->rtype->size > 4 && cif->rtype->size <= 8) - cif->flags = CIF_FLAGS_DINT; - else if (cif->rtype->size <= 4) - cif->flags = CIF_FLAGS_STRUCT; - else - cif->flags = 0; + switch (cif->rtype->size) + { + case 1: + cif->flags = CIF_FLAGS_STRUCT1; + break; + case 2: + cif->flags = CIF_FLAGS_STRUCT2; + break; + case 4: + cif->flags = CIF_FLAGS_INT; + break; + case 8: + cif->flags = CIF_FLAGS_DINT; + break; + default: + cif->flags = 0; + break; + } break; case FFI_TYPE_FLOAT: @@ -137,11 +165,6 @@ return FFI_OK; } -extern void ffi_call_SYSV (void *(*) (void *, extended_cif *), - extended_cif *, - unsigned, unsigned, unsigned, - void *, void (*fn) ()); - void ffi_call (ffi_cif *cif, void (*fn) (), void *rvalue, void **avalue) { @@ -149,7 +172,7 @@ ecif.cif = cif; ecif.avalue = avalue; - + /* If the return value is a struct and we don't have a return value address then we need to make one. 
*/ @@ -159,13 +182,11 @@ ecif.rvalue = alloca (cif->rtype->size); else ecif.rvalue = rvalue; - - - switch (cif->abi) + + switch (cif->abi) { case FFI_SYSV: - ffi_call_SYSV (ffi_prep_args, &ecif, cif->bytes, - cif->flags, cif->rtype->size * 8, + ffi_call_SYSV (&ecif, cif->bytes, cif->flags, ecif.rvalue, fn); break; @@ -174,3 +195,84 @@ break; } } + +static void +ffi_prep_incoming_args_SYSV (char *stack, void **avalue, ffi_cif *cif) +{ + unsigned int i; + void **p_argv; + char *argp; + ffi_type **p_arg; + + argp = stack; + p_argv = avalue; + + for (i = cif->nargs, p_arg = cif->arg_types; (i != 0); i--, p_arg++) + { + size_t z; + + z = (*p_arg)->size; + if (z <= 4) + { + *p_argv = (void *) (argp + 4 - z); + + z = 4; + } + else + { + *p_argv = (void *) argp; + + /* Align if necessary */ + if ((sizeof(int) - 1) & z) + z = ALIGN(z, sizeof(int)); + } + + p_argv++; + argp += z; + } +} + +unsigned int +ffi_closure_SYSV_inner (ffi_closure *closure, void *resp, void *args) +{ + ffi_cif *cif; + void **arg_area; + + cif = closure->cif; + arg_area = (void**) alloca (cif->nargs * sizeof (void *)); + + ffi_prep_incoming_args_SYSV(args, arg_area, cif); + + (closure->fun) (cif, resp, arg_area, closure->user_data); + + return cif->flags; +} + +ffi_status +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) +{ + FFI_ASSERT (cif->abi == FFI_SYSV); + + *(unsigned short *)closure->tramp = 0x207c; + *(void **)(closure->tramp + 2) = codeloc; + *(unsigned short *)(closure->tramp + 6) = 0x4ef9; + if (cif->rtype->type == FFI_TYPE_STRUCT + && !cif->flags) + *(void **)(closure->tramp + 8) = ffi_closure_struct_SYSV; + else + *(void **)(closure->tramp + 8) = ffi_closure_SYSV; + + syscall(SYS_cacheflush, codeloc, FLUSH_SCOPE_LINE, + FLUSH_CACHE_BOTH, FFI_TRAMPOLINE_SIZE); + + closure->cif = cif; + closure->user_data = user_data; + closure->fun = fun; + + return FFI_OK; +} + Modified: python/trunk/Modules/_ctypes/libffi/src/m68k/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/m68k/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/m68k/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
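
The m68k ffi_prep_closure_loc above patches a 16-byte trampoline by hand. For readers decoding the magic numbers, this is roughly how the bytes lay out, assuming the usual m68k encodings (0x207c = movea.l #imm32,%a0; 0x4ef9 = jmp (abs32).l); the array is only an annotated picture, not code from the patch.

    /* Byte layout of the 16-byte m68k trampoline (FFI_TRAMPOLINE_SIZE 16). */
    static const unsigned char m68k_tramp_sketch[16] = {
        0x20, 0x7c,                 /* movea.l #codeloc,%a0   (closure handle) */
        0x00, 0x00, 0x00, 0x00,     /*   <codeloc patched in at offset 2>      */
        0x4e, 0xf9,                 /* jmp     (handler).l                     */
        0x00, 0x00, 0x00, 0x00,     /*   <ffi_closure_SYSV, or the struct      */
                                    /*    variant, patched in at offset 8>     */
        0x00, 0x00, 0x00, 0x00      /* unused padding                          */
    };
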
----------------------------------------------------------------------- */ @@ -40,7 +41,8 @@ /* ---- Definitions for closures ----------------------------------------- */ -#define FFI_CLOSURES 0 +#define FFI_CLOSURES 1 +#define FFI_TRAMPOLINE_SIZE 16 #define FFI_NATIVE_RAW_API 0 #endif Modified: python/trunk/Modules/_ctypes/libffi/src/m68k/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/m68k/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/m68k/sysv.S Tue Mar 4 21:09:11 2008 @@ -1,47 +1,88 @@ /* ----------------------------------------------------------------------- - sysv.S + sysv.S - Copyright (c) 1998 Andreas Schwab + Copyright (c) 2008 Red Hat, Inc. m68k Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM #include #include +#ifdef HAVE_AS_CFI_PSEUDO_OP +#define CFI_STARTPROC() .cfi_startproc +#define CFI_OFFSET(reg,off) .cfi_offset reg,off +#define CFI_DEF_CFA(reg,off) .cfi_def_cfa reg,off +#define CFI_ENDPROC() .cfi_endproc +#else +#define CFI_STARTPROC() +#define CFI_OFFSET(reg,off) +#define CFI_DEF_CFA(reg,off) +#define CFI_ENDPROC() +#endif + .text .globl ffi_call_SYSV .type ffi_call_SYSV, at function + .align 4 ffi_call_SYSV: + CFI_STARTPROC() link %fp,#0 + CFI_OFFSET(14,-8) + CFI_DEF_CFA(14,8) move.l %d2,-(%sp) + CFI_OFFSET(2,-12) | Make room for all of the new args. - sub.l 16(%fp),%sp + sub.l 12(%fp),%sp | Call ffi_prep_args - move.l 12(%fp),-(%sp) + move.l 8(%fp),-(%sp) pea 4(%sp) - move.l 8(%fp),%a0 - jsr (%a0) +#if !defined __PIC__ + jsr ffi_prep_args +#else + bsr.l ffi_prep_args at PLTPC +#endif addq.l #8,%sp | Pass pointer to struct value, if any move.l %a0,%a1 | Call the function - move.l 32(%fp),%a0 + move.l 24(%fp),%a0 jsr (%a0) | Remove the space we pushed for the args - add.l 16(%fp),%sp + add.l 12(%fp),%sp | Load the pointer to storage for the return value - move.l 28(%fp),%a1 + move.l 20(%fp),%a1 | Load the return type code - move.l 20(%fp),%d2 + move.l 16(%fp),%d2 | If the return value pointer is NULL, assume no return value. 
tst.l %a1 @@ -79,19 +120,115 @@ retpointer: btst #5,%d2 - jbeq retstruct + jbeq retstruct1 move.l %a0,(%a1) jbra epilogue -retstruct: +retstruct1: btst #6,%d2 + jbeq retstruct2 + move.b %d0,(%a1) + jbra epilogue + +retstruct2: + btst #7,%d2 jbeq noretval - move.l 24(%fp),%d2 - bfins %d0,(%a1){#0,%d2} + move.w %d0,(%a1) noretval: epilogue: move.l (%sp)+,%d2 - unlk %a6 + unlk %fp rts + CFI_ENDPROC() .size ffi_call_SYSV,.-ffi_call_SYSV + + .globl ffi_closure_SYSV + .type ffi_closure_SYSV, @function + .align 4 + +ffi_closure_SYSV: + CFI_STARTPROC() + link %fp,#-12 + CFI_OFFSET(14,-8) + CFI_DEF_CFA(14,8) + move.l %sp,-12(%fp) + pea 8(%fp) + pea -12(%fp) + move.l %a0,-(%sp) +#if !defined __PIC__ + jsr ffi_closure_SYSV_inner +#else + bsr.l ffi_closure_SYSV_inner at PLTPC +#endif + + lsr.l #1,%d0 + jne 1f + jcc .Lcls_epilogue + move.l -12(%fp),%d0 +.Lcls_epilogue: + unlk %fp + rts +1: + lea -12(%fp),%a0 + lsr.l #2,%d0 + jne 1f + jcs .Lcls_ret_float + move.l (%a0)+,%d0 + move.l (%a0),%d1 + jra .Lcls_epilogue +.Lcls_ret_float: + fmove.s (%a0),%fp0 + jra .Lcls_epilogue +1: + lsr.l #2,%d0 + jne 1f + jcs .Lcls_ret_ldouble + fmove.d (%a0),%fp0 + jra .Lcls_epilogue +.Lcls_ret_ldouble: + fmove.x (%a0),%fp0 + jra .Lcls_epilogue +1: + lsr.l #2,%d0 + jne .Lcls_ret_struct2 + jcs .Lcls_ret_struct1 + move.l (%a0),%a0 + move.l %a0,%d0 + jra .Lcls_epilogue +.Lcls_ret_struct1: + move.b (%a0),%d0 + jra .Lcls_epilogue +.Lcls_ret_struct2: + move.w (%a0),%d0 + jra .Lcls_epilogue + CFI_ENDPROC() + + .size ffi_closure_SYSV,.-ffi_closure_SYSV + + .globl ffi_closure_struct_SYSV + .type ffi_closure_struct_SYSV, @function + .align 4 + +ffi_closure_struct_SYSV: + CFI_STARTPROC() + link %fp,#0 + CFI_OFFSET(14,-8) + CFI_DEF_CFA(14,8) + move.l %sp,-12(%fp) + pea 8(%fp) + move.l %a1,-(%sp) + move.l %a0,-(%sp) +#if !defined __PIC__ + jsr ffi_closure_SYSV_inner +#else + bsr.l ffi_closure_SYSV_inner at PLTPC +#endif + unlk %fp + rts + CFI_ENDPROC() + .size ffi_closure_struct_SYSV,.-ffi_closure_struct_SYSV + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/mips/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/mips/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/mips/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1996 Red Hat, Inc. + ffi.c - Copyright (c) 1996, 2007, 2008 Red Hat, Inc. + Copyright (c) 2008 David Daney MIPS Foreign Function Interface @@ -14,28 +15,44 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include #include #include + +#ifdef __GNUC__ +# if (__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ >= 3)) +# define USE__BUILTIN___CLEAR_CACHE 1 +# endif +#endif + +#ifndef USE__BUILTIN___CLEAR_CACHE #include +#endif -#if _MIPS_SIM == _ABIN32 +#ifdef FFI_DEBUG +# define FFI_MIPS_STOP_HERE() ffi_stop_here() +#else +# define FFI_MIPS_STOP_HERE() do {} while(0) +#endif + +#ifdef FFI_MIPS_N32 #define FIX_ARGP \ FFI_ASSERT(argp <= &stack[bytes]); \ if (argp == &stack[bytes]) \ { \ argp = stack; \ - ffi_stop_here(); \ + FFI_MIPS_STOP_HERE(); \ } #else #define FIX_ARGP @@ -55,7 +72,7 @@ char *argp; ffi_type **p_arg; -#if _MIPS_SIM == _ABIN32 +#ifdef FFI_MIPS_N32 /* If more than 8 double words are used, the remainder go on the stack. We reorder stuff on the stack here to support this easily. */ @@ -69,7 +86,7 @@ memset(stack, 0, bytes); -#if _MIPS_SIM == _ABIN32 +#ifdef FFI_MIPS_N32 if ( ecif->cif->rstruct_flag != 0 ) #else if ( ecif->cif->rtype->type == FFI_TYPE_STRUCT ) @@ -92,7 +109,7 @@ if (a < sizeof(ffi_arg)) a = sizeof(ffi_arg); - if ((a - 1) & (unsigned int) argp) + if ((a - 1) & (unsigned long) argp) { argp = (char *) ALIGN(argp, a); FIX_ARGP; @@ -101,9 +118,15 @@ z = (*p_arg)->size; if (z <= sizeof(ffi_arg)) { + int type = (*p_arg)->type; z = sizeof(ffi_arg); - switch ((*p_arg)->type) + /* The size of a pointer depends on the ABI */ + if (type == FFI_TYPE_POINTER) + type = + (ecif->cif->abi == FFI_N64) ? FFI_TYPE_SINT64 : FFI_TYPE_SINT32; + + switch (type) { case FFI_TYPE_SINT8: *(ffi_arg *)argp = *(SINT8 *)(* p_argv); @@ -126,7 +149,6 @@ break; case FFI_TYPE_UINT32: - case FFI_TYPE_POINTER: *(ffi_arg *)argp = *(UINT32 *)(* p_argv); break; @@ -135,8 +157,7 @@ *(float *) argp = *(float *)(* p_argv); break; - /* Handle small structures. */ - case FFI_TYPE_STRUCT: + /* Handle structures. */ default: memcpy(argp, *p_argv, (*p_arg)->size); break; @@ -144,12 +165,12 @@ } else { -#if _MIPS_SIM == _ABIO32 +#ifdef FFI_MIPS_O32 memcpy(argp, *p_argv, z); #else { - unsigned end = (unsigned) argp+z; - unsigned cap = (unsigned) stack+bytes; + unsigned long end = (unsigned long) argp + z; + unsigned long cap = (unsigned long) stack + bytes; /* Check if the data will fit within the register space. Handle it if it doesn't. */ @@ -158,12 +179,13 @@ memcpy(argp, *p_argv, z); else { - unsigned portion = end - cap; + unsigned long portion = cap - (unsigned long)argp; memcpy(argp, *p_argv, portion); argp = stack; - memcpy(argp, - (void*)((unsigned)(*p_argv)+portion), z - portion); + z -= portion; + memcpy(argp, (void*)((unsigned long)(*p_argv) + portion), + z); } } #endif @@ -174,7 +196,7 @@ } } -#if _MIPS_SIM == _ABIN32 +#ifdef FFI_MIPS_N32 /* The n32 spec says that if "a chunk consists solely of a double float field (but not a double, which is part of a union), it @@ -182,35 +204,41 @@ passed in an integer register". This code traverses structure definitions and generates the appropriate flags. 
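
As the n32 rule quoted above says, a struct chunk that consists solely of a double travels in a floating-point register while everything else stays in integer registers; calc_n32_struct_flags walks the ffi_type element list and records a two-bit marker (FFI_FLAG_BITS) per qualifying argument register. A worked illustration, using an invented struct:

    /* Hypothetical argument type, for illustration only. */
    struct n32_example {
        double d;   /* a chunk that is solely a double -> FP register, so the
                       flags word gets an FFI_TYPE_DOUBLE marker for this slot */
        int    i;   /* occupies the next 8-byte slot and is passed in an
                       integer register -> no marker recorded for this slot    */
    };
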
*/ -unsigned calc_n32_struct_flags(ffi_type *arg, unsigned *shift) +static unsigned +calc_n32_struct_flags(ffi_type *arg, unsigned *loc, unsigned *arg_reg) { unsigned flags = 0; unsigned index = 0; ffi_type *e; - while (e = arg->elements[index]) + while ((e = arg->elements[index])) { + /* Align this object. */ + *loc = ALIGN(*loc, e->alignment); if (e->type == FFI_TYPE_DOUBLE) { - flags += (FFI_TYPE_DOUBLE << *shift); - *shift += FFI_FLAG_BITS; + /* Already aligned to FFI_SIZEOF_ARG. */ + *arg_reg = *loc / FFI_SIZEOF_ARG; + if (*arg_reg > 7) + break; + flags += (FFI_TYPE_DOUBLE << (*arg_reg * FFI_FLAG_BITS)); + *loc += e->size; } - else if (e->type == FFI_TYPE_STRUCT) - flags += calc_n32_struct_flags(e, shift); else - *shift += FFI_FLAG_BITS; - + *loc += e->size; index++; } + /* Next Argument register at alignment of FFI_SIZEOF_ARG. */ + *arg_reg = ALIGN(*loc, FFI_SIZEOF_ARG) / FFI_SIZEOF_ARG; return flags; } -unsigned calc_n32_return_struct_flags(ffi_type *arg) +static unsigned +calc_n32_return_struct_flags(ffi_type *arg) { unsigned flags = 0; - unsigned index = 0; unsigned small = FFI_TYPE_SMALLSTRUCT; ffi_type *e; @@ -229,16 +257,16 @@ e = arg->elements[0]; if (e->type == FFI_TYPE_DOUBLE) - flags = FFI_TYPE_DOUBLE << FFI_FLAG_BITS; + flags = FFI_TYPE_DOUBLE; else if (e->type == FFI_TYPE_FLOAT) - flags = FFI_TYPE_FLOAT << FFI_FLAG_BITS; + flags = FFI_TYPE_FLOAT; if (flags && (e = arg->elements[1])) { if (e->type == FFI_TYPE_DOUBLE) - flags += FFI_TYPE_DOUBLE; + flags += FFI_TYPE_DOUBLE << FFI_FLAG_BITS; else if (e->type == FFI_TYPE_FLOAT) - flags += FFI_TYPE_FLOAT; + flags += FFI_TYPE_FLOAT << FFI_FLAG_BITS; else return small; @@ -263,7 +291,7 @@ { cif->flags = 0; -#if _MIPS_SIM == _ABIO32 +#ifdef FFI_MIPS_O32 /* Set the flags necessary for O32 processing. FFI_O32_SOFT_FLOAT * does not have special handling for floating point args. */ @@ -351,10 +379,11 @@ } #endif -#if _MIPS_SIM == _ABIN32 +#ifdef FFI_MIPS_N32 /* Set the flags necessary for N32 processing */ { - unsigned shift = 0; + unsigned arg_reg = 0; + unsigned loc = 0; unsigned count = (cif->nargs < 8) ? cif->nargs : 8; unsigned index = 0; @@ -369,7 +398,7 @@ /* This means that the structure is being passed as a hidden argument */ - shift = FFI_FLAG_BITS; + arg_reg = 1; count = (cif->nargs < 7) ? cif->nargs : 7; cif->rstruct_flag = !0; @@ -380,23 +409,37 @@ else cif->rstruct_flag = 0; - while (count-- > 0) + while (count-- > 0 && arg_reg < 8) { switch ((cif->arg_types)[index]->type) { case FFI_TYPE_FLOAT: case FFI_TYPE_DOUBLE: - cif->flags += ((cif->arg_types)[index]->type << shift); - shift += FFI_FLAG_BITS; + cif->flags += + ((cif->arg_types)[index]->type << (arg_reg * FFI_FLAG_BITS)); + arg_reg++; break; + case FFI_TYPE_LONGDOUBLE: + /* Align it. */ + arg_reg = ALIGN(arg_reg, 2); + /* Treat it as two adjacent doubles. */ + cif->flags += + (FFI_TYPE_DOUBLE << (arg_reg * FFI_FLAG_BITS)); + arg_reg++; + cif->flags += + (FFI_TYPE_DOUBLE << (arg_reg * FFI_FLAG_BITS)); + arg_reg++; + break; case FFI_TYPE_STRUCT: + loc = arg_reg * FFI_SIZEOF_ARG; cif->flags += calc_n32_struct_flags((cif->arg_types)[index], - &shift); + &loc, &arg_reg); break; default: - shift += FFI_FLAG_BITS; + arg_reg++; + break; } index++; @@ -431,7 +474,13 @@ case FFI_TYPE_DOUBLE: cif->flags += cif->rtype->type << (FFI_FLAG_BITS * 8); break; - + case FFI_TYPE_LONGDOUBLE: + /* Long double is returned as if it were a struct containing + two doubles. 
*/ + cif->flags += FFI_TYPE_STRUCT << (FFI_FLAG_BITS * 8); + cif->flags += (FFI_TYPE_DOUBLE + (FFI_TYPE_DOUBLE << FFI_FLAG_BITS)) + << (4 + (FFI_FLAG_BITS * 8)); + break; default: cif->flags += FFI_TYPE_INT << (FFI_FLAG_BITS * 8); break; @@ -470,7 +519,7 @@ switch (cif->abi) { -#if _MIPS_SIM == _ABIO32 +#ifdef FFI_MIPS_O32 case FFI_O32: case FFI_O32_SOFT_FLOAT: ffi_call_O32(ffi_prep_args, &ecif, cif->bytes, @@ -478,10 +527,25 @@ break; #endif -#if _MIPS_SIM == _ABIN32 +#ifdef FFI_MIPS_N32 case FFI_N32: - ffi_call_N32(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); + case FFI_N64: + { + int copy_rvalue = 0; + void *rvalue_copy = ecif.rvalue; + if (cif->rtype->type == FFI_TYPE_STRUCT && cif->rtype->size < 16) + { + /* For structures smaller than 16 bytes we clobber memory + in 8 byte increments. Make a copy so we don't clobber + the callers memory outside of the struct bounds. */ + rvalue_copy = alloca(16); + copy_rvalue = 1; + } + ffi_call_N32(ffi_prep_args, &ecif, cif->bytes, + cif->flags, rvalue_copy, fn); + if (copy_rvalue) + memcpy(ecif.rvalue, rvalue_copy, cif->rtype->size); + } break; #endif @@ -491,42 +555,83 @@ } } -#if FFI_CLOSURES /* N32 not implemented yet, FFI_CLOSURES not defined */ +#if FFI_CLOSURES #if defined(FFI_MIPS_O32) extern void ffi_closure_O32(void); +#else +extern void ffi_closure_N32(void); #endif /* FFI_MIPS_O32 */ ffi_status -ffi_prep_closure (ffi_closure *closure, - ffi_cif *cif, - void (*fun)(ffi_cif*,void*,void**,void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure *closure, + ffi_cif *cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) { unsigned int *tramp = (unsigned int *) &closure->tramp[0]; - unsigned int fn; - unsigned int ctx = (unsigned int) closure; + void * fn; + char *clear_location = (char *) codeloc; #if defined(FFI_MIPS_O32) FFI_ASSERT(cif->abi == FFI_O32 || cif->abi == FFI_O32_SOFT_FLOAT); - fn = (unsigned int) ffi_closure_O32; + fn = ffi_closure_O32; #else /* FFI_MIPS_N32 */ - FFI_ASSERT(cif->abi == FFI_N32); - FFI_ASSERT(!"not implemented"); + FFI_ASSERT(cif->abi == FFI_N32 || cif->abi == FFI_N64); + fn = ffi_closure_N32; #endif /* FFI_MIPS_O32 */ - tramp[0] = 0x3c190000 | (fn >> 16); /* lui $25,high(fn) */ - tramp[1] = 0x37390000 | (fn & 0xffff); /* ori $25,low(fn) */ - tramp[2] = 0x3c080000 | (ctx >> 16); /* lui $8,high(ctx) */ - tramp[3] = 0x03200008; /* jr $25 */ - tramp[4] = 0x35080000 | (ctx & 0xffff); /* ori $8,low(ctx) */ +#if defined(FFI_MIPS_O32) || (_MIPS_SIM ==_ABIN32) + /* lui $25,high(fn) */ + tramp[0] = 0x3c190000 | ((unsigned)fn >> 16); + /* ori $25,low(fn) */ + tramp[1] = 0x37390000 | ((unsigned)fn & 0xffff); + /* lui $12,high(codeloc) */ + tramp[2] = 0x3c0c0000 | ((unsigned)codeloc >> 16); + /* jr $25 */ + tramp[3] = 0x03200008; + /* ori $12,low(codeloc) */ + tramp[4] = 0x358c0000 | ((unsigned)codeloc & 0xffff); +#else + /* N64 has a somewhat larger trampoline. 
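
Because N64 pointers are 64-bit, the trampoline that follows has to materialize two full 64-bit constants (the handler address and codeloc) sixteen bits at a time with lui/ori/dsll pairs. The helper below is an editorial sketch of the equivalent arithmetic in C, showing why four 16-bit chunks and two shifts per register reproduce the original value.

    /* Sketch: rebuild a 64-bit address the way the N64 trampoline does,
       16 bits at a time (lui, ori, dsll #16, ori, dsll #16, ori). */
    static unsigned long build_addr64(unsigned long addr)
    {
        unsigned long r;
        r = (addr >> 48) & 0xffff;               /* bits 63..48 (lui immediate)  */
        r = (r << 16) | ((addr >> 32) & 0xffff); /* bits 47..32 (first ori)      */
        r = (r << 16) | ((addr >> 16) & 0xffff); /* bits 31..16 (dsll + ori)     */
        r = (r << 16) | (addr & 0xffff);         /* bits 15..0  (dsll + ori)     */
        return r;                                /* equals addr                  */
    }
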
*/ + /* lui $25,high(fn) */ + tramp[0] = 0x3c190000 | ((unsigned long)fn >> 48); + /* lui $12,high(codeloc) */ + tramp[1] = 0x3c0c0000 | ((unsigned long)codeloc >> 48); + /* ori $25,mid-high(fn) */ + tramp[2] = 0x37390000 | (((unsigned long)fn >> 32 ) & 0xffff); + /* ori $12,mid-high(codeloc) */ + tramp[3] = 0x358c0000 | (((unsigned long)codeloc >> 32) & 0xffff); + /* dsll $25,$25,16 */ + tramp[4] = 0x0019cc38; + /* dsll $12,$12,16 */ + tramp[5] = 0x000c6438; + /* ori $25,mid-low(fn) */ + tramp[6] = 0x37390000 | (((unsigned long)fn >> 16 ) & 0xffff); + /* ori $12,mid-low(codeloc) */ + tramp[7] = 0x358c0000 | (((unsigned long)codeloc >> 16) & 0xffff); + /* dsll $25,$25,16 */ + tramp[8] = 0x0019cc38; + /* dsll $12,$12,16 */ + tramp[9] = 0x000c6438; + /* ori $25,low(fn) */ + tramp[10] = 0x37390000 | ((unsigned long)fn & 0xffff); + /* jr $25 */ + tramp[11] = 0x03200008; + /* ori $12,low(codeloc) */ + tramp[12] = 0x358c0000 | ((unsigned long)codeloc & 0xffff); + +#endif closure->cif = cif; closure->fun = fun; closure->user_data = user_data; - /* XXX this is available on Linux, but anything else? */ - cacheflush (tramp, FFI_TRAMPOLINE_SIZE, ICACHE); - +#ifdef USE__BUILTIN___CLEAR_CACHE + __builtin___clear_cache(clear_location, clear_location + FFI_TRAMPOLINE_SIZE); +#else + cacheflush (clear_location, FFI_TRAMPOLINE_SIZE, ICACHE); +#endif return FFI_OK; } @@ -567,7 +672,7 @@ if ((cif->flags >> (FFI_FLAG_BITS * 2)) == FFI_TYPE_STRUCT) { - rvalue = (void *) ar[0]; + rvalue = (void *)(UINT32)ar[0]; argn = 1; } @@ -645,4 +750,177 @@ } } +#if defined(FFI_MIPS_N32) + +static void +copy_struct_N32(char *target, unsigned offset, ffi_abi abi, ffi_type *type, + int argn, unsigned arg_offset, ffi_arg *ar, + ffi_arg *fpr) +{ + ffi_type **elt_typep = type->elements; + while(*elt_typep) + { + ffi_type *elt_type = *elt_typep; + unsigned o; + char *tp; + char *argp; + char *fpp; + + o = ALIGN(offset, elt_type->alignment); + arg_offset += o - offset; + offset = o; + argn += arg_offset / sizeof(ffi_arg); + arg_offset = arg_offset % sizeof(ffi_arg); + + argp = (char *)(ar + argn); + fpp = (char *)(argn >= 8 ? ar + argn : fpr + argn); + + tp = target + offset; + + if (elt_type->type == FFI_TYPE_DOUBLE) + *(double *)tp = *(double *)fpp; + else + memcpy(tp, argp + arg_offset, elt_type->size); + + offset += elt_type->size; + arg_offset += elt_type->size; + elt_typep++; + argn += arg_offset / sizeof(ffi_arg); + arg_offset = arg_offset % sizeof(ffi_arg); + } +} + +/* + * Decodes the arguments to a function, which will be stored on the + * stack. AR is the pointer to the beginning of the integer + * arguments. FPR is a pointer to the area where floating point + * registers have been saved. + * + * RVALUE is the location where the function return value will be + * stored. CLOSURE is the prepared closure to invoke. + * + * This function should only be called from assembly, which is in + * turn called from a trampoline. + * + * Returns the function return flags. 
+ * + */ +int +ffi_closure_mips_inner_N32 (ffi_closure *closure, + void *rvalue, ffi_arg *ar, + ffi_arg *fpr) +{ + ffi_cif *cif; + void **avaluep; + ffi_arg *avalue; + ffi_type **arg_types; + int i, avn, argn; + + cif = closure->cif; + avalue = alloca (cif->nargs * sizeof (ffi_arg)); + avaluep = alloca (cif->nargs * sizeof (ffi_arg)); + + argn = 0; + + if (cif->rstruct_flag) + { +#if _MIPS_SIM==_ABIN32 + rvalue = (void *)(UINT32)ar[0]; +#else /* N64 */ + rvalue = (void *)ar[0]; +#endif + argn = 1; + } + + i = 0; + avn = cif->nargs; + arg_types = cif->arg_types; + + while (i < avn) + { + if (arg_types[i]->type == FFI_TYPE_FLOAT + || arg_types[i]->type == FFI_TYPE_DOUBLE) + { + ffi_arg *argp = argn >= 8 ? ar + argn : fpr + argn; +#ifdef __MIPSEB__ + if (arg_types[i]->type == FFI_TYPE_FLOAT && argn < 8) + avaluep[i] = ((char *) argp) + sizeof (float); + else +#endif + avaluep[i] = (char *) argp; + } + else + { + unsigned type = arg_types[i]->type; + + if (arg_types[i]->alignment > sizeof(ffi_arg)) + argn = ALIGN(argn, arg_types[i]->alignment / sizeof(ffi_arg)); + + ffi_arg *argp = ar + argn; + + /* The size of a pointer depends on the ABI */ + if (type == FFI_TYPE_POINTER) + type = (cif->abi == FFI_N64) ? FFI_TYPE_SINT64 : FFI_TYPE_SINT32; + + switch (type) + { + case FFI_TYPE_SINT8: + avaluep[i] = &avalue[i]; + *(SINT8 *) &avalue[i] = (SINT8) *argp; + break; + + case FFI_TYPE_UINT8: + avaluep[i] = &avalue[i]; + *(UINT8 *) &avalue[i] = (UINT8) *argp; + break; + + case FFI_TYPE_SINT16: + avaluep[i] = &avalue[i]; + *(SINT16 *) &avalue[i] = (SINT16) *argp; + break; + + case FFI_TYPE_UINT16: + avaluep[i] = &avalue[i]; + *(UINT16 *) &avalue[i] = (UINT16) *argp; + break; + + case FFI_TYPE_SINT32: + avaluep[i] = &avalue[i]; + *(SINT32 *) &avalue[i] = (SINT32) *argp; + break; + + case FFI_TYPE_UINT32: + avaluep[i] = &avalue[i]; + *(UINT32 *) &avalue[i] = (UINT32) *argp; + break; + + case FFI_TYPE_STRUCT: + if (argn < 8) + { + /* Allocate space for the struct as at least part of + it was passed in registers. */ + avaluep[i] = alloca(arg_types[i]->size); + copy_struct_N32(avaluep[i], 0, cif->abi, arg_types[i], + argn, 0, ar, fpr); + + break; + } + /* Else fall through. */ + default: + avaluep[i] = (char *) argp; + break; + } + } + argn += ALIGN(arg_types[i]->size, sizeof(ffi_arg)) / sizeof(ffi_arg); + i++; + } + + /* Invoke the closure. */ + (closure->fun) (cif, rvalue, avaluep, closure->user_data); + + return cif->flags >> (FFI_FLAG_BITS * 8); +} + +#endif /* FFI_MIPS_N32 */ + #endif /* FFI_CLOSURES */ Modified: python/trunk/Modules/_ctypes/libffi/src/mips/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/mips/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/mips/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,19 +13,33 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #ifndef LIBFFI_TARGET_H #define LIBFFI_TARGET_H +#ifdef linux +#include +# ifndef _ABIN32 +# define _ABIN32 _MIPS_SIM_NABI32 +# endif +# ifndef _ABI64 +# define _ABI64 _MIPS_SIM_ABI64 +# endif +# ifndef _ABIO32 +# define _ABIO32 _MIPS_SIM_ABI32 +# endif +#endif + #if !defined(_MIPS_SIM) -- something is very wrong -- #else @@ -42,10 +56,13 @@ #ifdef FFI_MIPS_O32 /* O32 stack frames have 32bit integer args */ -#define FFI_SIZEOF_ARG 4 +# define FFI_SIZEOF_ARG 4 #else /* N32 and N64 frames have 64bit integer args */ -#define FFI_SIZEOF_ARG 8 +# define FFI_SIZEOF_ARG 8 +# if _MIPS_SIM == _ABIN32 +# define FFI_SIZEOF_JAVA_RAW 4 +# endif #endif #define FFI_FLAG_BITS 2 @@ -104,19 +121,28 @@ #define ra $31 #ifdef FFI_MIPS_O32 -#define REG_L lw -#define REG_S sw -#define SUBU subu -#define ADDU addu -#define SRL srl -#define LI li +# define REG_L lw +# define REG_S sw +# define SUBU subu +# define ADDU addu +# define SRL srl +# define LI li #else /* !FFI_MIPS_O32 */ -#define REG_L ld -#define REG_S sd -#define SUBU dsubu -#define ADDU daddu -#define SRL dsrl -#define LI dli +# define REG_L ld +# define REG_S sd +# define SUBU dsubu +# define ADDU daddu +# define SRL dsrl +# define LI dli +# if (_MIPS_SIM==_ABI64) +# define LA dla +# define EH_FRAME_ALIGN 3 +# define FDE_ADDR_BYTES .8byte +# else +# define LA la +# define EH_FRAME_ALIGN 2 +# define FDE_ADDR_BYTES .4byte +# endif /* _MIPS_SIM==_ABI64 */ #endif /* !FFI_MIPS_O32 */ #else /* !LIBFFI_ASM */ #ifdef FFI_MIPS_O32 @@ -143,7 +169,11 @@ FFI_DEFAULT_ABI = FFI_O32, #endif #else +# if _MIPS_SIM==_ABI64 + FFI_DEFAULT_ABI = FFI_N64, +# else FFI_DEFAULT_ABI = FFI_N32, +# endif #endif FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 @@ -158,8 +188,13 @@ #define FFI_CLOSURES 1 #define FFI_TRAMPOLINE_SIZE 20 #else -/* N32/N64 not implemented yet. */ -#define FFI_CLOSURES 0 +/* N32/N64. */ +# define FFI_CLOSURES 1 +#if _MIPS_SIM==_ABI64 +#define FFI_TRAMPOLINE_SIZE 52 +#else +#define FFI_TRAMPOLINE_SIZE 20 +#endif #endif /* FFI_MIPS_O32 */ #define FFI_NATIVE_RAW_API 0 Modified: python/trunk/Modules/_ctypes/libffi/src/mips/n32.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/mips/n32.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/mips/n32.S Tue Mar 4 21:09:11 2008 @@ -17,7 +17,8 @@ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR + ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
@@ -45,13 +46,19 @@ .globl ffi_call_N32 .ent ffi_call_N32 ffi_call_N32: +.LFB3: + .frame $fp, SIZEOF_FRAME, ra + .mask 0xc0000000,-FFI_SIZEOF_ARG + .fmask 0x00000000,0 # Prologue SUBU $sp, SIZEOF_FRAME # Frame size +.LCFI0: REG_S $fp, SIZEOF_FRAME - 2*FFI_SIZEOF_ARG($sp) # Save frame pointer REG_S ra, SIZEOF_FRAME - 1*FFI_SIZEOF_ARG($sp) # Save return address +.LCFI1: move $fp, $sp - +.LCFI3: move t9, callback # callback function pointer REG_S bytes, 2*FFI_SIZEOF_ARG($fp) # bytes REG_S flags, 3*FFI_SIZEOF_ARG($fp) # flags @@ -72,14 +79,12 @@ SUBU $sp, $sp, v0 # move the stack pointer to reflect the # arg space - ADDU a0, $sp, 0 # 4 * FFI_SIZEOF_ARG + move a0, $sp # 4 * FFI_SIZEOF_ARG ADDU a3, $fp, 3 * FFI_SIZEOF_ARG # Call ffi_prep_args jal t9 - # ADDU $sp, $sp, 4 * FFI_SIZEOF_ARG # adjust $sp to new args - # Copy the stack pointer to t9 move t9, $sp @@ -90,18 +95,16 @@ REG_L t6, 2*FFI_SIZEOF_ARG($fp) # Is it bigger than 8 * FFI_SIZEOF_ARG? - dadd t7, $0, 8 * FFI_SIZEOF_ARG - dsub t8, t6, t7 + daddiu t8, t6, -(8 * FFI_SIZEOF_ARG) bltz t8, loadregs - add t9, t9, t8 + ADDU t9, t9, t8 loadregs: - REG_L t4, 3*FFI_SIZEOF_ARG($fp) # load the flags word - add t6, t4, 0 # and copy it into t6 + REG_L t6, 3*FFI_SIZEOF_ARG($fp) # load the flags word into t6. - and t4, ((1< + (c) 2008 Red Hat, Inc. HPPA Foreign Function Interface + HP-UX PA ABI support (c) 2006 Free Software Foundation, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -14,13 +16,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
----------------------------------------------------------------------- */ #include @@ -30,15 +33,19 @@ #include #define ROUND_UP(v, a) (((size_t)(v) + (a) - 1) & ~((a) - 1)) -#define ROUND_DOWN(v, a) (((size_t)(v) - (a) + 1) & ~((a) - 1)) + #define MIN_STACK_SIZE 64 #define FIRST_ARG_SLOT 9 #define DEBUG_LEVEL 0 -#define fldw(addr, fpreg) asm volatile ("fldw 0(%0), %%" #fpreg "L" : : "r"(addr) : #fpreg) -#define fstw(fpreg, addr) asm volatile ("fstw %%" #fpreg "L, 0(%0)" : : "r"(addr)) -#define fldd(addr, fpreg) asm volatile ("fldd 0(%0), %%" #fpreg : : "r"(addr) : #fpreg) -#define fstd(fpreg, addr) asm volatile ("fstd %%" #fpreg "L, 0(%0)" : : "r"(addr)) +#define fldw(addr, fpreg) \ + __asm__ volatile ("fldw 0(%0), %%" #fpreg "L" : : "r"(addr) : #fpreg) +#define fstw(fpreg, addr) \ + __asm__ volatile ("fstw %%" #fpreg "L, 0(%0)" : : "r"(addr)) +#define fldd(addr, fpreg) \ + __asm__ volatile ("fldd 0(%0), %%" #fpreg : : "r"(addr) : #fpreg) +#define fstd(fpreg, addr) \ + __asm__ volatile ("fstd %%" #fpreg "L, 0(%0)" : : "r"(addr)) #define debug(lvl, x...) do { if (lvl <= DEBUG_LEVEL) { printf(x); } } while (0) @@ -47,16 +54,19 @@ size_t sz = t->size; /* Small structure results are passed in registers, - larger ones are passed by pointer. */ + larger ones are passed by pointer. Note that + small structures of size 2, 4 and 8 differ from + the corresponding integer types in that they have + different alignment requirements. */ if (sz <= 1) return FFI_TYPE_UINT8; else if (sz == 2) - return FFI_TYPE_UINT16; + return FFI_TYPE_SMALL_STRUCT2; else if (sz == 3) return FFI_TYPE_SMALL_STRUCT3; else if (sz == 4) - return FFI_TYPE_UINT32; + return FFI_TYPE_SMALL_STRUCT4; else if (sz == 5) return FFI_TYPE_SMALL_STRUCT5; else if (sz == 6) @@ -64,61 +74,80 @@ else if (sz == 7) return FFI_TYPE_SMALL_STRUCT7; else if (sz <= 8) - return FFI_TYPE_UINT64; + return FFI_TYPE_SMALL_STRUCT8; else return FFI_TYPE_STRUCT; /* else, we pass it by pointer. */ } /* PA has a downward growing stack, which looks like this: - + Offset - [ Variable args ] + [ Variable args ] SP = (4*(n+9)) arg word N ... SP-52 arg word 4 - [ Fixed args ] + [ Fixed args ] SP-48 arg word 3 SP-44 arg word 2 SP-40 arg word 1 SP-36 arg word 0 - [ Frame marker ] + [ Frame marker ] ... SP-20 RP SP-4 previous SP - - First 4 non-FP 32-bit args are passed in gr26, gr25, gr24 and gr23 - First 2 non-FP 64-bit args are passed in register pairs, starting - on an even numbered register (i.e. r26/r25 and r24+r23) - First 4 FP 32-bit arguments are passed in fr4L, fr5L, fr6L and fr7L - First 2 FP 64-bit arguments are passed in fr5 and fr7 - The rest are passed on the stack starting at SP-52, but 64-bit - arguments need to be aligned to an 8-byte boundary - + + The first four argument words on the stack are reserved for use by + the callee. Instead, the general and floating registers replace + the first four argument slots. Non FP arguments are passed solely + in the general registers. FP arguments are passed in both general + and floating registers when using libffi. + + Non-FP 32-bit args are passed in gr26, gr25, gr24 and gr23. + Non-FP 64-bit args are passed in register pairs, starting + on an odd numbered register (i.e. r25+r26 and r23+r24). + FP 32-bit arguments are passed in fr4L, fr5L, fr6L and fr7L. + FP 64-bit arguments are passed in fr5 and fr7. + + The registers are allocated in the same manner as stack slots. 
+ This allows the callee to save its arguments on the stack if + necessary: + + arg word 3 -> gr23 or fr7L + arg word 2 -> gr24 or fr6L or fr7R + arg word 1 -> gr25 or fr5L + arg word 0 -> gr26 or fr4L or fr5R + + Note that fr4R and fr6R are never used for arguments (i.e., + doubles are not passed in fr4 or fr6). + + The rest of the arguments are passed on the stack starting at SP-52, + but 64-bit arguments need to be aligned to an 8-byte boundary + This means we can have holes either in the register allocation, or in the stack. */ /* ffi_prep_args is called by the assembly routine once stack space has been allocated for the function's arguments - + The following code will put everything into the stack frame (which was allocated by the asm routine), and on return the asm routine will load the arguments that should be passed by register into the appropriate registers - + NOTE: We load floating point args in this function... that means we assume gcc will not mess with fp regs in here. */ -/*@-exportheader@*/ -void ffi_prep_args_LINUX(UINT32 *stack, extended_cif *ecif, unsigned bytes) -/*@=exportheader@*/ +void ffi_prep_args_pa32(UINT32 *stack, extended_cif *ecif, unsigned bytes) { register unsigned int i; register ffi_type **p_arg; register void **p_argv; - unsigned int slot = FIRST_ARG_SLOT - 1; + unsigned int slot = FIRST_ARG_SLOT; char *dest_cpy; + size_t len; - debug(1, "%s: stack = %p, ecif = %p, bytes = %u\n", __FUNCTION__, stack, ecif, bytes); + debug(1, "%s: stack = %p, ecif = %p, bytes = %u\n", __FUNCTION__, stack, + ecif, bytes); p_arg = ecif->cif->arg_types; p_argv = ecif->avalue; @@ -130,116 +159,105 @@ switch (type) { case FFI_TYPE_SINT8: - slot++; *(SINT32 *)(stack - slot) = *(SINT8 *)(*p_argv); break; case FFI_TYPE_UINT8: - slot++; *(UINT32 *)(stack - slot) = *(UINT8 *)(*p_argv); break; case FFI_TYPE_SINT16: - slot++; *(SINT32 *)(stack - slot) = *(SINT16 *)(*p_argv); break; case FFI_TYPE_UINT16: - slot++; *(UINT32 *)(stack - slot) = *(UINT16 *)(*p_argv); break; case FFI_TYPE_UINT32: case FFI_TYPE_SINT32: case FFI_TYPE_POINTER: - slot++; - debug(3, "Storing UINT32 %u in slot %u\n", *(UINT32 *)(*p_argv), slot); + debug(3, "Storing UINT32 %u in slot %u\n", *(UINT32 *)(*p_argv), + slot); *(UINT32 *)(stack - slot) = *(UINT32 *)(*p_argv); break; case FFI_TYPE_UINT64: case FFI_TYPE_SINT64: - slot += 2; - if (slot & 1) - slot++; - - *(UINT32 *)(stack - slot) = (*(UINT64 *)(*p_argv)) >> 32; - *(UINT32 *)(stack - slot + 1) = (*(UINT64 *)(*p_argv)) & 0xffffffffUL; + /* Align slot for 64-bit type. */ + slot += (slot & 1) ? 1 : 2; + *(UINT64 *)(stack - slot) = *(UINT64 *)(*p_argv); break; case FFI_TYPE_FLOAT: - /* First 4 args go in fr4L - fr7L */ - slot++; + /* First 4 args go in fr4L - fr7L. */ + debug(3, "Storing UINT32(float) in slot %u\n", slot); + *(UINT32 *)(stack - slot) = *(UINT32 *)(*p_argv); switch (slot - FIRST_ARG_SLOT) { - case 0: fldw(*p_argv, fr4); break; - case 1: fldw(*p_argv, fr5); break; - case 2: fldw(*p_argv, fr6); break; - case 3: fldw(*p_argv, fr7); break; - default: - /* Other ones are just passed on the stack. */ - debug(3, "Storing UINT32(float) in slot %u\n", slot); - *(UINT32 *)(stack - slot) = *(UINT32 *)(*p_argv); - break; + /* First 4 args go in fr4L - fr7L. 
*/ + case 0: fldw(stack - slot, fr4); break; + case 1: fldw(stack - slot, fr5); break; + case 2: fldw(stack - slot, fr6); break; + case 3: fldw(stack - slot, fr7); break; } - break; + break; case FFI_TYPE_DOUBLE: - slot += 2; - if (slot & 1) - slot++; - switch (slot - FIRST_ARG_SLOT + 1) + /* Align slot for 64-bit type. */ + slot += (slot & 1) ? 1 : 2; + debug(3, "Storing UINT64(double) at slot %u\n", slot); + *(UINT64 *)(stack - slot) = *(UINT64 *)(*p_argv); + switch (slot - FIRST_ARG_SLOT) { - /* First 2 args go in fr5, fr7 */ - case 2: fldd(*p_argv, fr5); break; - case 4: fldd(*p_argv, fr7); break; - default: - debug(3, "Storing UINT64(double) at slot %u\n", slot); - *(UINT64 *)(stack - slot) = *(UINT64 *)(*p_argv); - break; + /* First 2 args go in fr5, fr7. */ + case 1: fldd(stack - slot, fr5); break; + case 3: fldd(stack - slot, fr7); break; } break; +#ifdef PA_HPUX + case FFI_TYPE_LONGDOUBLE: + /* Long doubles are passed in the same manner as structures + larger than 8 bytes. */ + *(UINT32 *)(stack - slot) = (UINT32)(*p_argv); + break; +#endif + case FFI_TYPE_STRUCT: /* Structs smaller or equal than 4 bytes are passed in one register. Structs smaller or equal 8 bytes are passed in two registers. Larger structures are passed by pointer. */ - if((*p_arg)->size <= 4) + len = (*p_arg)->size; + if (len <= 4) { - slot++; - dest_cpy = (char *)(stack - slot); - dest_cpy += 4 - (*p_arg)->size; - memcpy((char *)dest_cpy, (char *)*p_argv, (*p_arg)->size); + dest_cpy = (char *)(stack - slot) + 4 - len; + memcpy(dest_cpy, (char *)*p_argv, len); } - else if ((*p_arg)->size <= 8) + else if (len <= 8) { - slot += 2; - if (slot & 1) - slot++; - dest_cpy = (char *)(stack - slot); - dest_cpy += 8 - (*p_arg)->size; - memcpy((char *)dest_cpy, (char *)*p_argv, (*p_arg)->size); - } - else - { - slot++; - *(UINT32 *)(stack - slot) = (UINT32)(*p_argv); + slot += (slot & 1) ? 1 : 2; + dest_cpy = (char *)(stack - slot) + 8 - len; + memcpy(dest_cpy, (char *)*p_argv, len); } + else + *(UINT32 *)(stack - slot) = (UINT32)(*p_argv); break; default: FFI_ASSERT(0); } + slot++; p_arg++; p_argv++; } /* Make sure we didn't mess up and scribble on the stack. */ { - int n; + unsigned int n; debug(5, "Stack setup:\n"); for (n = 0; n < (bytes + 3) / 4; n++) @@ -255,7 +273,7 @@ return; } -static void ffi_size_stack_LINUX(ffi_cif *cif) +static void ffi_size_stack_pa32(ffi_cif *cif) { ffi_type **ptr; int i; @@ -273,6 +291,9 @@ z += 2 + (z & 1); /* must start on even regs, so we may waste one */ break; +#ifdef PA_HPUX + case FFI_TYPE_LONGDOUBLE: +#endif case FFI_TYPE_STRUCT: z += 1; /* pass by ptr, callee will copy */ break; @@ -304,6 +325,13 @@ cif->flags = (unsigned) cif->rtype->type; break; +#ifdef PA_HPUX + case FFI_TYPE_LONGDOUBLE: + /* Long doubles are treated like a structure. */ + cif->flags = FFI_TYPE_STRUCT; + break; +#endif + case FFI_TYPE_STRUCT: /* For the return type we have to check the size of the structures. If the size is smaller or equal 4 bytes, the result is given back @@ -327,8 +355,8 @@ own stack sizing. 
*/ switch (cif->abi) { - case FFI_LINUX: - ffi_size_stack_LINUX(cif); + case FFI_PA32: + ffi_size_stack_pa32(cif); break; default: @@ -339,20 +367,11 @@ return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_LINUX(void (*)(UINT32 *, extended_cif *, unsigned), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +extern void ffi_call_pa32(void (*)(UINT32 *, extended_cif *, unsigned), + extended_cif *, unsigned, unsigned, unsigned *, + void (*fn)(void)); + +void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; @@ -362,12 +381,15 @@ /* If the return value is a struct and we don't have a return value address then we need to make one. */ - if ((rvalue == NULL) && - (cif->rtype->type == FFI_TYPE_STRUCT)) + if (rvalue == NULL +#ifdef PA_HPUX + && (cif->rtype->type == FFI_TYPE_STRUCT + || cif->rtype->type == FFI_TYPE_LONGDOUBLE)) +#else + && cif->rtype->type == FFI_TYPE_STRUCT) +#endif { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -375,12 +397,10 @@ switch (cif->abi) { - case FFI_LINUX: - /*@-usedef@*/ - debug(2, "Calling ffi_call_LINUX: ecif=%p, bytes=%u, flags=%u, rvalue=%p, fn=%p\n", &ecif, cif->bytes, cif->flags, ecif.rvalue, (void *)fn); - ffi_call_LINUX(ffi_prep_args_LINUX, &ecif, cif->bytes, + case FFI_PA32: + debug(3, "Calling ffi_call_pa32: ecif=%p, bytes=%u, flags=%u, rvalue=%p, fn=%p\n", &ecif, cif->bytes, cif->flags, ecif.rvalue, (void *)fn); + ffi_call_pa32(ffi_prep_args_pa32, &ecif, cif->bytes, cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ break; default: @@ -394,7 +414,7 @@ the stack, and we need to fill them into a cif structure and invoke the user function. This really ought to be in asm to make sure the compiler doesn't do things we don't expect. */ -UINT32 ffi_closure_inner_LINUX(ffi_closure *closure, UINT32 *stack) +ffi_status ffi_closure_inner_pa32(ffi_closure *closure, UINT32 *stack) { ffi_cif *cif; void **avalue; @@ -402,7 +422,8 @@ UINT32 ret[2]; /* function can return up to 64-bits in registers */ ffi_type **p_arg; char *tmp; - int i, avn, slot = FIRST_ARG_SLOT - 1; + int i, avn; + unsigned int slot = FIRST_ARG_SLOT; register UINT32 r28 asm("r28"); cif = closure->cif; @@ -430,20 +451,23 @@ case FFI_TYPE_SINT32: case FFI_TYPE_UINT32: case FFI_TYPE_POINTER: - slot++; avalue[i] = (char *)(stack - slot) + sizeof(UINT32) - (*p_arg)->size; break; case FFI_TYPE_SINT64: case FFI_TYPE_UINT64: - slot += 2; - if (slot & 1) - slot++; + slot += (slot & 1) ? 1 : 2; avalue[i] = (void *)(stack - slot); break; case FFI_TYPE_FLOAT: - slot++; +#ifdef PA_LINUX + /* The closure call is indirect. In Linux, floating point + arguments in indirect calls with a prototype are passed + in the floating point registers instead of the general + registers. So, we need to replace what was previously + stored in the current slot with the value in the + corresponding floating point register. */ switch (slot - FIRST_ARG_SLOT) { case 0: fstw(fr4, (void *)(stack - slot)); break; @@ -451,18 +475,20 @@ case 2: fstw(fr6, (void *)(stack - slot)); break; case 3: fstw(fr7, (void *)(stack - slot)); break; } +#endif avalue[i] = (void *)(stack - slot); break; case FFI_TYPE_DOUBLE: - slot += 2; - if (slot & 1) - slot++; - switch (slot - FIRST_ARG_SLOT + 1) + slot += (slot & 1) ? 
1 : 2; +#ifdef PA_LINUX + /* See previous comment for FFI_TYPE_FLOAT. */ + switch (slot - FIRST_ARG_SLOT) { - case 2: fstd(fr5, (void *)(stack - slot)); break; - case 4: fstd(fr7, (void *)(stack - slot)); break; + case 1: fstd(fr5, (void *)(stack - slot)); break; + case 3: fstd(fr7, (void *)(stack - slot)); break; } +#endif avalue[i] = (void *)(stack - slot); break; @@ -470,35 +496,36 @@ /* Structs smaller or equal than 4 bytes are passed in one register. Structs smaller or equal 8 bytes are passed in two registers. Larger structures are passed by pointer. */ - if((*p_arg)->size <= 4) { - slot++; - avalue[i] = (void *)(stack - slot) + sizeof(UINT32) - - (*p_arg)->size; - } else if ((*p_arg)->size <= 8) { - slot += 2; - if (slot & 1) - slot++; - avalue[i] = (void *)(stack - slot) + sizeof(UINT64) - - (*p_arg)->size; - } else { - slot++; + if((*p_arg)->size <= 4) + { + avalue[i] = (void *)(stack - slot) + sizeof(UINT32) - + (*p_arg)->size; + } + else if ((*p_arg)->size <= 8) + { + slot += (slot & 1) ? 1 : 2; + avalue[i] = (void *)(stack - slot) + sizeof(UINT64) - + (*p_arg)->size; + } + else avalue[i] = (void *) *(stack - slot); - } break; default: FFI_ASSERT(0); } + slot++; p_arg++; } /* Invoke the closure. */ (closure->fun) (cif, rvalue, avalue, closure->user_data); - debug(3, "after calling function, ret[0] = %08x, ret[1] = %08x\n", ret[0], ret[1]); + debug(3, "after calling function, ret[0] = %08x, ret[1] = %08x\n", ret[0], + ret[1]); - /* Store the result */ + /* Store the result using the lower 2 bytes of the flags. */ switch (cif->flags) { case FFI_TYPE_UINT8: @@ -536,7 +563,9 @@ /* Don't need a return value, done by caller. */ break; + case FFI_TYPE_SMALL_STRUCT2: case FFI_TYPE_SMALL_STRUCT3: + case FFI_TYPE_SMALL_STRUCT4: tmp = (void*)(stack - FIRST_ARG_SLOT); tmp += 4 - cif->rtype->size; memcpy((void*)tmp, &ret[0], cif->rtype->size); @@ -545,6 +574,7 @@ case FFI_TYPE_SMALL_STRUCT5: case FFI_TYPE_SMALL_STRUCT6: case FFI_TYPE_SMALL_STRUCT7: + case FFI_TYPE_SMALL_STRUCT8: { unsigned int ret2[2]; int off; @@ -582,39 +612,93 @@ cif specifies the argument and result types for fun. The cif must already be prep'ed. */ -void ffi_closure_LINUX(void); +extern void ffi_closure_pa32(void); ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,void**,void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) { UINT32 *tramp = (UINT32 *)(closure->tramp); +#ifdef PA_HPUX + UINT32 *tmp; +#endif - FFI_ASSERT (cif->abi == FFI_LINUX); + FFI_ASSERT (cif->abi == FFI_PA32); /* Make a small trampoline that will branch to our handler function. Use PC-relative addressing. 
*/ - tramp[0] = 0xeaa00000; /* b,l .+8, %r21 ; %r21 <- pc+8 */ - tramp[1] = 0xd6a01c1e; /* depi 0,31,2, %r21 ; mask priv bits */ - tramp[2] = 0x4aa10028; /* ldw 20(%r21), %r1 ; load plabel */ - tramp[3] = 0x36b53ff1; /* ldo -8(%r21), %r21 ; get closure addr */ - tramp[4] = 0x0c201096; /* ldw 0(%r1), %r22 ; address of handler */ - tramp[5] = 0xeac0c000; /* bv %r0(%r22) ; branch to handler */ - tramp[6] = 0x0c281093; /* ldw 4(%r1), %r19 ; GP of handler */ - tramp[7] = ((UINT32)(ffi_closure_LINUX) & ~2); +#ifdef PA_LINUX + tramp[0] = 0xeaa00000; /* b,l .+8,%r21 ; %r21 <- pc+8 */ + tramp[1] = 0xd6a01c1e; /* depi 0,31,2,%r21 ; mask priv bits */ + tramp[2] = 0x4aa10028; /* ldw 20(%r21),%r1 ; load plabel */ + tramp[3] = 0x36b53ff1; /* ldo -8(%r21),%r21 ; get closure addr */ + tramp[4] = 0x0c201096; /* ldw 0(%r1),%r22 ; address of handler */ + tramp[5] = 0xeac0c000; /* bv%r0(%r22) ; branch to handler */ + tramp[6] = 0x0c281093; /* ldw 4(%r1),%r19 ; GP of handler */ + tramp[7] = ((UINT32)(ffi_closure_pa32) & ~2); /* Flush d/icache -- have to flush up 2 two lines because of alignment. */ - asm volatile ( - "fdc 0(%0)\n" - "fdc %1(%0)\n" - "fic 0(%%sr4, %0)\n" - "fic %1(%%sr4, %0)\n" - "sync\n" - : : "r"((unsigned long)tramp & ~31), "r"(32 /* stride */)); + __asm__ volatile( + "fdc 0(%0)\n\t" + "fdc %1(%0)\n\t" + "fic 0(%%sr4, %0)\n\t" + "fic %1(%%sr4, %0)\n\t" + "sync\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n" + : + : "r"((unsigned long)tramp & ~31), + "r"(32 /* stride */) + : "memory"); +#endif + +#ifdef PA_HPUX + tramp[0] = 0xeaa00000; /* b,l .+8,%r21 ; %r21 <- pc+8 */ + tramp[1] = 0xd6a01c1e; /* depi 0,31,2,%r21 ; mask priv bits */ + tramp[2] = 0x4aa10038; /* ldw 28(%r21),%r1 ; load plabel */ + tramp[3] = 0x36b53ff1; /* ldo -8(%r21),%r21 ; get closure addr */ + tramp[4] = 0x0c201096; /* ldw 0(%r1),%r22 ; address of handler */ + tramp[5] = 0x02c010b4; /* ldsid (%r22),%r20 ; load space id */ + tramp[6] = 0x00141820; /* mtsp %r20,%sr0 ; into %sr0 */ + tramp[7] = 0xe2c00000; /* be 0(%sr0,%r22) ; branch to handler */ + tramp[8] = 0x0c281093; /* ldw 4(%r1),%r19 ; GP of handler */ + tramp[9] = ((UINT32)(ffi_closure_pa32) & ~2); + + /* Flush d/icache -- have to flush three lines because of alignment. */ + __asm__ volatile( + "copy %1,%0\n\t" + "fdc,m %2(%0)\n\t" + "fdc,m %2(%0)\n\t" + "fdc,m %2(%0)\n\t" + "ldsid (%1),%0\n\t" + "mtsp %0,%%sr0\n\t" + "copy %1,%0\n\t" + "fic,m %2(%%sr0,%0)\n\t" + "fic,m %2(%%sr0,%0)\n\t" + "fic,m %2(%%sr0,%0)\n\t" + "sync\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n\t" + "nop\n" + : "=&r" ((unsigned long)tmp) + : "r" ((unsigned long)tramp & ~31), + "r" (32/* stride */) + : "memory"); +#endif closure->cif = cif; closure->user_data = user_data; Modified: python/trunk/Modules/_ctypes/libffi/src/pa/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/pa/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/pa/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ @@ -35,9 +36,20 @@ typedef enum ffi_abi { FFI_FIRST_ABI = 0, -#ifdef PA - FFI_LINUX, - FFI_DEFAULT_ABI = FFI_LINUX, +#ifdef PA_LINUX + FFI_PA32, + FFI_DEFAULT_ABI = FFI_PA32, +#endif + +#ifdef PA_HPUX + FFI_PA32, + FFI_DEFAULT_ABI = FFI_PA32, +#endif + +#ifdef PA64_HPUX +#error "PA64_HPUX FFI is not yet implemented" + FFI_PA64, + FFI_DEFAULT_ABI = FFI_PA64, #endif FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 @@ -49,11 +61,17 @@ #define FFI_CLOSURES 1 #define FFI_NATIVE_RAW_API 0 +#ifdef PA_LINUX #define FFI_TRAMPOLINE_SIZE 32 - -#define FFI_TYPE_SMALL_STRUCT3 -1 -#define FFI_TYPE_SMALL_STRUCT5 -2 -#define FFI_TYPE_SMALL_STRUCT6 -3 -#define FFI_TYPE_SMALL_STRUCT7 -4 +#else +#define FFI_TRAMPOLINE_SIZE 40 #endif +#define FFI_TYPE_SMALL_STRUCT2 -1 +#define FFI_TYPE_SMALL_STRUCT3 -2 +#define FFI_TYPE_SMALL_STRUCT4 -3 +#define FFI_TYPE_SMALL_STRUCT5 -4 +#define FFI_TYPE_SMALL_STRUCT6 -5 +#define FFI_TYPE_SMALL_STRUCT7 -6 +#define FFI_TYPE_SMALL_STRUCT8 -7 +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/pa/linux.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/pa/linux.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/pa/linux.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- linux.S - (c) 2003-2004 Randolph Chung + (c) 2008 Red Hat, Inc. HPPA Foreign Function Interface @@ -17,7 +18,7 @@ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + IN NO EVENT SHALL RENESAS TECHNOLOGY BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
@@ -31,20 +32,20 @@ .level 1.1 .align 4 - /* void ffi_call_LINUX(void (*)(char *, extended_cif *), + /* void ffi_call_pa32(void (*)(char *, extended_cif *), extended_cif *ecif, unsigned bytes, unsigned flags, unsigned *rvalue, - void (*fn)()); + void (*fn)(void)); */ - .export ffi_call_LINUX,code - .import ffi_prep_args_LINUX,code + .export ffi_call_pa32,code + .import ffi_prep_args_pa32,code - .type ffi_call_LINUX, @function + .type ffi_call_pa32, @function .LFB1: -ffi_call_LINUX: +ffi_call_pa32: .proc .callinfo FRAME=64,CALLS,SAVE_RP,SAVE_SP,ENTRY_GR=4 .entry @@ -63,7 +64,7 @@ [ 64-bytes register save area ] <- %r4 [ Stack space for actual call, passed as ] <- %arg0 - [ arg0 to ffi_prep_args_LINUX ] + [ arg0 to ffi_prep_args_pa32 ] [ Stack for calling prep_args ] <- %sp */ @@ -73,14 +74,14 @@ .LCFI13: copy %sp, %r4 - addl %arg2, %r4, %arg0 /* arg stack */ - stw %arg3, -48(%r3) /* save flags; we need it later */ + addl %arg2, %r4, %arg0 /* arg stack */ + stw %arg3, -48(%r3) /* save flags; we need it later */ /* Call prep_args: %arg0(stack) -- set up above %arg1(ecif) -- same as incoming param %arg2(bytes) -- same as incoming param */ - bl ffi_prep_args_LINUX,%r2 + bl ffi_prep_args_pa32,%r2 ldo 64(%arg0), %sp ldo -64(%sp), %sp @@ -106,90 +107,139 @@ /* Store the result according to the return type. */ -checksmst3: - comib,<>,n FFI_TYPE_SMALL_STRUCT3, %r21, checksmst567 - /* 3-byte structs are returned in ret0 as ??xxyyzz. Shift - left 8 bits to write to the result structure. */ - zdep %ret0, 23, 24, %r22 - b done - stw %r22, 0(%r20) - -checksmst567: - /* 5-7 byte values are returned right justified: +.Lcheckint: + comib,<>,n FFI_TYPE_INT, %r21, .Lcheckint8 + b .Ldone + stw %ret0, 0(%r20) + +.Lcheckint8: + comib,<>,n FFI_TYPE_UINT8, %r21, .Lcheckint16 + b .Ldone + stb %ret0, 0(%r20) + +.Lcheckint16: + comib,<>,n FFI_TYPE_UINT16, %r21, .Lcheckdbl + b .Ldone + sth %ret0, 0(%r20) + +.Lcheckdbl: + comib,<>,n FFI_TYPE_DOUBLE, %r21, .Lcheckfloat + b .Ldone + fstd %fr4,0(%r20) + +.Lcheckfloat: + comib,<>,n FFI_TYPE_FLOAT, %r21, .Lcheckll + b .Ldone + fstw %fr4L,0(%r20) + +.Lcheckll: + comib,<>,n FFI_TYPE_UINT64, %r21, .Lchecksmst2 + stw %ret0, 0(%r20) + b .Ldone + stw %ret1, 4(%r20) + +.Lchecksmst2: + comib,<>,n FFI_TYPE_SMALL_STRUCT2, %r21, .Lchecksmst3 + /* 2-byte structs are returned in ret0 as ????xxyy. */ + extru %ret0, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + b .Ldone + stb %ret0, 0(%r20) + +.Lchecksmst3: + comib,<>,n FFI_TYPE_SMALL_STRUCT3, %r21, .Lchecksmst4 + /* 3-byte structs are returned in ret0 as ??xxyyzz. */ + extru %ret0, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret0, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + b .Ldone + stb %ret0, 0(%r20) + +.Lchecksmst4: + comib,<>,n FFI_TYPE_SMALL_STRUCT4, %r21, .Lchecksmst5 + /* 4-byte structs are returned in ret0 as wwxxyyzz. 
*/ + extru %ret0, 7, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret0, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret0, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + b .Ldone + stb %ret0, 0(%r20) + +.Lchecksmst5: + comib,<>,n FFI_TYPE_SMALL_STRUCT5, %r21, .Lchecksmst6 + /* 5 byte values are returned right justified: + ret0 ret1 + 5: ??????aa bbccddee */ + stbs,ma %ret0, 1(%r20) + extru %ret1, 7, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + b .Ldone + stb %ret1, 0(%r20) + +.Lchecksmst6: + comib,<>,n FFI_TYPE_SMALL_STRUCT6, %r21, .Lchecksmst7 + /* 6 byte values are returned right justified: + ret0 ret1 + 6: ????aabb ccddeeff */ + extru %ret0, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + stbs,ma %ret0, 1(%r20) + extru %ret1, 7, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + b .Ldone + stb %ret1, 0(%r20) + +.Lchecksmst7: + comib,<>,n FFI_TYPE_SMALL_STRUCT7, %r21, .Lchecksmst8 + /* 7 byte values are returned right justified: ret0 ret1 - 5: ??????aa bbccddee - 6: ????aabb ccddeeff - 7: ??aabbcc ddeeffgg - - To store this in the result, write the first 4 bytes into a temp - register using shrpw (t1 = aabbccdd), followed by a rotation of - ret1: - - ret0 ret1 ret1 - 5: ??????aa bbccddee -> eebbccdd (rotate 8) - 6: ????aabb ccddeeff -> eeffccdd (rotate 16) - 7: ??aabbcc ddeeffgg -> eeffggdd (rotate 24) - - then we write (t1, ret1) into the result. */ - - addi,<> -FFI_TYPE_SMALL_STRUCT5,%r21,%r0 - ldi 8, %r22 - addi,<> -FFI_TYPE_SMALL_STRUCT6,%r21,%r0 - ldi 16, %r22 - addi,<> -FFI_TYPE_SMALL_STRUCT7,%r21,%r0 - ldi 24, %r22 - - /* This relies on all the FFI_TYPE_*_STRUCT* defines being <0 */ - cmpib,<=,n 0, %r21, checkint8 - mtsar %r22 - - shrpw %ret0, %ret1, %sar, %ret0 /* ret0 = aabbccdd */ - shrpw %ret1, %ret1, %sar, %ret1 /* rotate ret1 */ - - stw %ret0, 0(%r20) - b done - stw %ret1, 4(%r20) - -checkint8: - comib,<>,n FFI_TYPE_UINT8, %r21, checkint16 - b done - stb %ret0, 0(%r20) - -checkint16: - comib,<>,n FFI_TYPE_UINT16, %r21, checkint32 - b done - sth %ret0, 0(%r20) - -checkint32: - comib,<>,n FFI_TYPE_UINT32, %r21, checkint - b done - stw %ret0, 0(%r20) - -checkint: - comib,<>,n FFI_TYPE_INT, %r21, checkll - b done - stw %ret0, 0(%r20) - -checkll: - comib,<>,n FFI_TYPE_UINT64, %r21, checkdbl - stw %ret0, 0(%r20) - b done - stw %ret1, 4(%r20) - -checkdbl: - comib,<>,n FFI_TYPE_DOUBLE, %r21, checkfloat - b done - fstd %fr4,0(%r20) - -checkfloat: - comib,<>,n FFI_TYPE_FLOAT, %r21, done - fstw %fr4L,0(%r20) - - /* structure returns are either handled by one of the - INT/UINT64 cases above, or, if passed by pointer, - is handled by the callee. 
*/ + 7: ??aabbcc ddeeffgg */ + extru %ret0, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret0, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + stbs,ma %ret0, 1(%r20) + extru %ret1, 7, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + b .Ldone + stb %ret1, 0(%r20) + +.Lchecksmst8: + comib,<>,n FFI_TYPE_SMALL_STRUCT8, %r21, .Ldone + /* 8 byte values are returned right justified: + ret0 ret1 + 8: aabbccdd eeffgghh */ + extru %ret0, 7, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret0, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret0, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + stbs,ma %ret0, 1(%r20) + extru %ret1, 7, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 15, 8, %r22 + stbs,ma %r22, 1(%r20) + extru %ret1, 23, 8, %r22 + stbs,ma %r22, 1(%r20) + stb %ret1, 0(%r20) -done: +.Ldone: /* all done, return */ copy %r4, %sp /* pop arg stack */ ldw 12(%r3), %r4 @@ -201,14 +251,14 @@ .procend .LFE1: - /* void ffi_closure_LINUX(void); + /* void ffi_closure_pa32(void); Called with closure argument in %r21 */ - .export ffi_closure_LINUX,code - .import ffi_closure_inner_LINUX,code + .export ffi_closure_pa32,code + .import ffi_closure_inner_pa32,code - .type ffi_closure_LINUX, @function + .type ffi_closure_pa32, @function .LFB2: -ffi_closure_LINUX: +ffi_closure_pa32: .proc .callinfo FRAME=64,CALLS,SAVE_RP,SAVE_SP,ENTRY_GR=3 .entry @@ -228,7 +278,7 @@ stw %arg3, -48(%r3) copy %r21, %arg0 - bl ffi_closure_inner_LINUX, %r2 + bl ffi_closure_inner_pa32, %r2 copy %r3, %arg1 ldwm -64(%sp), %r3 @@ -299,7 +349,7 @@ .sleb128 -5 .byte 0x4 ;# DW_CFA_advance_loc4 - .word .LCFI12-.LCFI11 + .word .LCFI22-.LCFI21 .byte 0xd ;# DW_CFA_def_cfa_register = r3 .uleb128 0x3 Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin.S Tue Mar 4 21:09:11 2008 @@ -1,4 +1,3 @@ -#ifdef __ppc__ /* ----------------------------------------------------------------------- darwin.S - Copyright (c) 2000 John Hornkvist Copyright (c) 2004 Free Software Foundation, Inc. @@ -244,4 +243,3 @@ .align LOG2_GPR_BYTES LLFB0$non_lazy_ptr: .g_long LFB0 -#endif Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/darwin_closure.S Tue Mar 4 21:09:11 2008 @@ -1,4 +1,3 @@ -#ifdef __ppc__ /* ----------------------------------------------------------------------- darwin_closure.S - Copyright (c) 2002, 2003, 2004, Free Software Foundation, Inc. 
based on ppc_closure.S @@ -247,7 +246,7 @@ /* END(ffi_closure_ASM) */ .data -.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support EH_frame1: .set L$set$0,LECIE1-LSCIE1 .long L$set$0 ; Length of Common Information Entry @@ -316,4 +315,3 @@ .align LOG2_GPR_BYTES LLFB1$non_lazy_ptr: .g_long LFB1 -#endif Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,7 @@ /* ----------------------------------------------------------------------- ffi.c - Copyright (c) 1998 Geoffrey Keating + Copyright (C) 2007 Free Software Foundation, Inc + Copyright (C) 2008 Red Hat, Inc PowerPC Foreign Function Interface @@ -39,7 +41,8 @@ FLAG_RETURNS_NOTHING = 1 << (31-30), /* These go in cr7 */ FLAG_RETURNS_FP = 1 << (31-29), FLAG_RETURNS_64BITS = 1 << (31-28), - FLAG_RETURNS_128BITS = 1 << (31-27), + + FLAG_RETURNS_128BITS = 1 << (31-27), /* cr6 */ FLAG_ARG_NEEDS_COPY = 1 << (31- 7), FLAG_FP_ARGUMENTS = 1 << (31- 6), /* cr1.eq; specified by ABI */ @@ -48,10 +51,13 @@ }; /* About the SYSV ABI. */ -enum { - NUM_GPR_ARG_REGISTERS = 8, - NUM_FPR_ARG_REGISTERS = 8 -}; +unsigned int NUM_GPR_ARG_REGISTERS = 8; +#ifndef __NO_FPRS__ +unsigned int NUM_FPR_ARG_REGISTERS = 8; +#else +unsigned int NUM_FPR_ARG_REGISTERS = 0; +#endif + enum { ASM_NEEDS_REGISTERS = 4 }; /* ffi_prep_args_SYSV is called by the assembly routine once stack space @@ -80,10 +86,8 @@ */ -/*@-exportheader@*/ void ffi_prep_args_SYSV (extended_cif *ecif, unsigned *const stack) -/*@=exportheader@*/ { const unsigned bytes = ecif->cif->bytes; const unsigned flags = ecif->cif->flags; @@ -116,7 +120,7 @@ /* 'next_arg' grows up as we put parameters in it. */ valp next_arg; - int i; + int i, ii MAYBE_UNUSED; ffi_type **ptr; double double_tmp; union { @@ -134,6 +138,9 @@ size_t struct_copy_size; unsigned gprvalue; + if (ecif->cif->abi == FFI_LINUX_SOFT_FLOAT) + NUM_FPR_ARG_REGISTERS = 0; + stacktop.c = (char *) stack + bytes; gpr_base.u = stacktop.u - ASM_NEEDS_REGISTERS - NUM_GPR_ARG_REGISTERS; intarg_count = 0; @@ -165,6 +172,9 @@ switch ((*ptr)->type) { case FFI_TYPE_FLOAT: + /* With FFI_LINUX_SOFT_FLOAT floats are handled like UINT32. */ + if (ecif->cif->abi == FFI_LINUX_SOFT_FLOAT) + goto soft_float_prep; double_tmp = **p_argv.f; if (fparg_count >= NUM_FPR_ARG_REGISTERS) { @@ -178,6 +188,9 @@ break; case FFI_TYPE_DOUBLE: + /* With FFI_LINUX_SOFT_FLOAT doubles are handled like UINT64. */ + if (ecif->cif->abi == FFI_LINUX_SOFT_FLOAT) + goto soft_double_prep; double_tmp = **p_argv.d; if (fparg_count >= NUM_FPR_ARG_REGISTERS) @@ -197,8 +210,77 @@ FFI_ASSERT (flags & FLAG_FP_ARGUMENTS); break; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + if ((ecif->cif->abi != FFI_LINUX) + && (ecif->cif->abi != FFI_LINUX_SOFT_FLOAT)) + goto do_struct; + /* The soft float ABI for long doubles works like this, + a long double is passed in four consecutive gprs if available. + A maximum of 2 long doubles can be passed in gprs. + If we do not have 4 gprs left, the long double is passed on the + stack, 4-byte aligned. 
*/ + if (ecif->cif->abi == FFI_LINUX_SOFT_FLOAT) + { + unsigned int int_tmp = (*p_argv.ui)[0]; + if (intarg_count >= NUM_GPR_ARG_REGISTERS - 3) + { + if (intarg_count < NUM_GPR_ARG_REGISTERS) + intarg_count += NUM_GPR_ARG_REGISTERS - intarg_count; + *next_arg.u = int_tmp; + next_arg.u++; + for (ii = 1; ii < 4; ii++) + { + int_tmp = (*p_argv.ui)[ii]; + *next_arg.u = int_tmp; + next_arg.u++; + } + } + else + { + *gpr_base.u++ = int_tmp; + for (ii = 1; ii < 4; ii++) + { + int_tmp = (*p_argv.ui)[ii]; + *gpr_base.u++ = int_tmp; + } + } + intarg_count +=4; + } + else + { + double_tmp = (*p_argv.d)[0]; + + if (fparg_count >= NUM_FPR_ARG_REGISTERS - 1) + { + if (intarg_count >= NUM_GPR_ARG_REGISTERS + && intarg_count % 2 != 0) + { + intarg_count++; + next_arg.u++; + } + *next_arg.d = double_tmp; + next_arg.u += 2; + double_tmp = (*p_argv.d)[1]; + *next_arg.d = double_tmp; + next_arg.u += 2; + } + else + { + *fpr_base.d++ = double_tmp; + double_tmp = (*p_argv.d)[1]; + *fpr_base.d++ = double_tmp; + } + + fparg_count += 2; + FFI_ASSERT (flags & FLAG_FP_ARGUMENTS); + } + break; +#endif + case FFI_TYPE_UINT64: case FFI_TYPE_SINT64: + soft_double_prep: if (intarg_count == NUM_GPR_ARG_REGISTERS-1) intarg_count++; if (intarg_count >= NUM_GPR_ARG_REGISTERS) @@ -232,7 +314,7 @@ case FFI_TYPE_STRUCT: #if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE - case FFI_TYPE_LONGDOUBLE: + do_struct: #endif struct_copy_size = ((*ptr)->size + 15) & ~0xF; copy_space.c -= struct_copy_size; @@ -261,6 +343,8 @@ case FFI_TYPE_UINT32: case FFI_TYPE_SINT32: case FFI_TYPE_POINTER: + soft_float_prep: + gprvalue = **p_argv.ui; putgpr: @@ -322,10 +406,8 @@ */ -/*@-exportheader@*/ void FFI_HIDDEN ffi_prep_args64 (extended_cif *ecif, unsigned long *const stack) -/*@=exportheader@*/ { const unsigned long bytes = ecif->cif->bytes; const unsigned long flags = ecif->cif->flags; @@ -433,6 +515,7 @@ if (fparg_count < NUM_FPR_ARG_REGISTERS64) *fpr_base.d++ = double_tmp; fparg_count++; + FFI_ASSERT (__LDBL_MANT_DIG__ == 106); FFI_ASSERT (flags & FLAG_FP_ARGUMENTS); break; #endif @@ -515,6 +598,9 @@ unsigned type = cif->rtype->type; unsigned size = cif->rtype->size; + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + NUM_FPR_ARG_REGISTERS = 0; + if (cif->abi != FFI_LINUX64) { /* All the machine-independent calculation of cif->bytes will be wrong. @@ -536,11 +622,6 @@ /* Space for the mandatory parm save area and general registers. */ bytes += 2 * NUM_GPR_ARG_REGISTERS64 * sizeof (long); - -#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE - if (type == FFI_TYPE_LONGDOUBLE) - type = FFI_TYPE_DOUBLE; -#endif } /* Return value handling. The rules for SYSV are as follows: @@ -549,19 +630,33 @@ - 64-bit integer values and structures between 5 and 8 bytes are returned in gpr3 and gpr4; - Single/double FP values are returned in fpr1; - - Larger structures and long double (if not equivalent to double) values - are allocated space and a pointer is passed as the first argument. + - Larger structures are allocated space and a pointer is passed as + the first argument. + - long doubles (if not equivalent to double) are returned in + fpr1,fpr2 for Linux and as for large structs for SysV. For LINUX64: - integer values in gpr3; - Structures/Unions by reference; - - Single/double FP values in fpr1, long double in fpr1,fpr2. */ + - Single/double FP values in fpr1, long double in fpr1,fpr2. + - soft-float float/doubles are treated as UINT32/UINT64 respectivley. + - soft-float long doubles are returned in gpr3-gpr6. 
*/ switch (type) { +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + if (cif->abi != FFI_LINUX && cif->abi != FFI_LINUX64 + && cif->abi != FFI_LINUX_SOFT_FLOAT) + goto byref; + flags |= FLAG_RETURNS_128BITS; + /* Fall through. */ +#endif case FFI_TYPE_DOUBLE: flags |= FLAG_RETURNS_64BITS; /* Fall through. */ case FFI_TYPE_FLOAT: - flags |= FLAG_RETURNS_FP; + /* With FFI_LINUX_SOFT_FLOAT no fp registers are used. */ + if (cif->abi != FFI_LINUX_SOFT_FLOAT) + flags |= FLAG_RETURNS_FP; break; case FFI_TYPE_UINT64: @@ -598,15 +693,8 @@ } } } - /* else fall through. */ #if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE - case FFI_TYPE_LONGDOUBLE: - if (type == FFI_TYPE_LONGDOUBLE && cif->abi == FFI_LINUX64) - { - flags |= FLAG_RETURNS_128BITS; - flags |= FLAG_RETURNS_FP; - break; - } + byref: #endif intarg_count++; flags |= FLAG_RETVAL_REFERENCE; @@ -631,11 +719,36 @@ switch ((*ptr)->type) { case FFI_TYPE_FLOAT: + /* With FFI_LINUX_SOFT_FLOAT floats are handled like UINT32. */ + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + goto soft_float_cif; fparg_count++; /* floating singles are not 8-aligned on stack */ break; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + if (cif->abi != FFI_LINUX && cif->abi != FFI_LINUX_SOFT_FLOAT) + goto do_struct; + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + { + if (intarg_count >= NUM_GPR_ARG_REGISTERS - 3 + || intarg_count < NUM_GPR_ARG_REGISTERS) + /* A long double in FFI_LINUX_SOFT_FLOAT can use only + a set of four consecutive gprs. If we have not enough, + we have to adjust the intarg_count value. */ + intarg_count += NUM_GPR_ARG_REGISTERS - intarg_count; + intarg_count += 4; + break; + } + else + fparg_count++; + /* Fall thru */ +#endif case FFI_TYPE_DOUBLE: + /* With FFI_LINUX_SOFT_FLOAT doubles are handled like UINT64. */ + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + goto soft_double_cif; fparg_count++; /* If this FP arg is going on the stack, it must be 8-byte-aligned. */ @@ -647,6 +760,7 @@ case FFI_TYPE_UINT64: case FFI_TYPE_SINT64: + soft_double_cif: /* 'long long' arguments are passed as two words, but either both words must fit in registers or both go on the stack. If they go on the stack, they must @@ -664,7 +778,7 @@ case FFI_TYPE_STRUCT: #if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE - case FFI_TYPE_LONGDOUBLE: + do_struct: #endif /* We must allocate space for a copy of these to enforce pass-by-value. Pad the space up to a multiple of 16 @@ -674,6 +788,7 @@ /* Fall through (allocate space for the pointer). */ default: + soft_float_cif: /* Everything else is passed as a 4-byte word in a GPR, either the object itself or a pointer to it. 
*/ intarg_count++; @@ -687,8 +802,13 @@ { #if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE case FFI_TYPE_LONGDOUBLE: - fparg_count += 2; - intarg_count += 2; + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + intarg_count += 4; + else + { + fparg_count += 2; + intarg_count += 2; + } break; #endif case FFI_TYPE_FLOAT: @@ -751,24 +871,14 @@ return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(/*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, +extern void ffi_call_SYSV(extended_cif *, unsigned, unsigned, unsigned *, void (*fn)(void)); -extern void FFI_HIDDEN ffi_call_LINUX64(/*@out@*/ extended_cif *, - unsigned long, unsigned long, - /*@out@*/ unsigned long *, +extern void FFI_HIDDEN ffi_call_LINUX64(extended_cif *, unsigned long, + unsigned long, unsigned long *, void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ void -ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; @@ -780,9 +890,7 @@ if ((rvalue == NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -793,15 +901,13 @@ #ifndef POWERPC64 case FFI_SYSV: case FFI_GCC_SYSV: - /*@-usedef@*/ + case FFI_LINUX: + case FFI_LINUX_SOFT_FLOAT: ffi_call_SYSV (&ecif, -cif->bytes, cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ break; #else case FFI_LINUX64: - /*@-usedef@*/ ffi_call_LINUX64 (&ecif, -(long) cif->bytes, cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ break; #endif default: @@ -815,27 +921,24 @@ #define MIN_CACHE_LINE_SIZE 8 static void -flush_icache (char *addr1, int size) +flush_icache (char *wraddr, char *xaddr, int size) { int i; - char * addr; for (i = 0; i < size; i += MIN_CACHE_LINE_SIZE) - { - addr = addr1 + i; - __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;" - : : "r" (addr) : "memory"); - } - addr = addr1 + size - 1; - __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;" "sync;" "isync;" - : : "r"(addr) : "memory"); + __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;" + : : "r" (xaddr + i), "r" (wraddr + i) : "memory"); + __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;" "sync;" "isync;" + : : "r"(xaddr + size - 1), "r"(wraddr + size - 1) + : "memory"); } #endif ffi_status -ffi_prep_closure (ffi_closure *closure, - ffi_cif *cif, - void (*fun) (ffi_cif *, void *, void **, void *), - void *user_data) +ffi_prep_closure_loc (ffi_closure *closure, + ffi_cif *cif, + void (*fun) (ffi_cif *, void *, void **, void *), + void *user_data, + void *codeloc) { #ifdef POWERPC64 void **tramp = (void **) &closure->tramp[0]; @@ -843,7 +946,7 @@ FFI_ASSERT (cif->abi == FFI_LINUX64); /* Copy function address and TOC from ffi_closure_LINUX64. */ memcpy (tramp, (char *) ffi_closure_LINUX64, 16); - tramp[2] = (void *) closure; + tramp[2] = codeloc; #else unsigned int *tramp; @@ -859,10 +962,10 @@ tramp[8] = 0x7c0903a6; /* mtctr r0 */ tramp[9] = 0x4e800420; /* bctr */ *(void **) &tramp[2] = (void *) ffi_closure_SYSV; /* function */ - *(void **) &tramp[3] = (void *) closure; /* context */ + *(void **) &tramp[3] = codeloc; /* context */ /* Flush the icache. */ - flush_icache (&closure->tramp[0],FFI_TRAMPOLINE_SIZE); + flush_icache ((char *)tramp, (char *)codeloc, FFI_TRAMPOLINE_SIZE); #endif closure->cif = cif; @@ -920,14 +1023,17 @@ For FFI_SYSV the result is passed in r3/r4 if the struct size is less or equal 8 bytes. 
*/ - if (cif->rtype->type == FFI_TYPE_STRUCT) + if ((cif->rtype->type == FFI_TYPE_STRUCT + && !((cif->abi == FFI_SYSV) && (size <= 8))) +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + || (cif->rtype->type == FFI_TYPE_LONGDOUBLE + && cif->abi != FFI_LINUX && cif->abi != FFI_LINUX_SOFT_FLOAT) +#endif + ) { - if (!((cif->abi == FFI_SYSV) && (size <= 8))) - { - rvalue = (void *) *pgr; - ng++; - pgr++; - } + rvalue = (void *) *pgr; + ng++; + pgr++; } i = 0; @@ -974,6 +1080,7 @@ case FFI_TYPE_SINT32: case FFI_TYPE_UINT32: case FFI_TYPE_POINTER: + soft_float_closure: /* there are 8 gpr registers used to pass values */ if (ng < 8) { @@ -989,6 +1096,9 @@ break; case FFI_TYPE_STRUCT: +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + do_struct: +#endif /* Structs are passed by reference. The address will appear in a gpr if it is one of the first 8 arguments. */ if (ng < 8) @@ -1006,6 +1116,7 @@ case FFI_TYPE_SINT64: case FFI_TYPE_UINT64: + soft_double_closure: /* passing long long ints are complex, they must * be passed in suitable register pairs such as * (r3,r4) or (r5,r6) or (r6,r7), or (r7,r8) or (r9,r10) @@ -1037,6 +1148,9 @@ break; case FFI_TYPE_FLOAT: + /* With FFI_LINUX_SOFT_FLOAT floats are handled like UINT32. */ + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + goto soft_float_closure; /* unfortunately float values are stored as doubles * in the ffi_closure_SYSV code (since we don't check * the type in that routine). @@ -1060,12 +1174,14 @@ * naughty thing to do but... */ avalue[i] = pst; - nf++; pst += 1; } break; case FFI_TYPE_DOUBLE: + /* With FFI_LINUX_SOFT_FLOAT doubles are handled like UINT64. */ + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + goto soft_double_closure; /* On the outgoing stack all values are aligned to 8 */ /* there are 8 64bit floating point registers */ @@ -1080,11 +1196,47 @@ if (((long) pst) & 4) pst++; avalue[i] = pst; - nf++; pst += 2; } break; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + if (cif->abi != FFI_LINUX && cif->abi != FFI_LINUX_SOFT_FLOAT) + goto do_struct; + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + { /* Test if for the whole long double, 4 gprs are available. + otherwise the stuff ends up on the stack. */ + if (ng < 5) + { + avalue[i] = pgr; + pgr += 4; + ng += 4; + } + else + { + avalue[i] = pst; + pst += 4; + } + break; + } + if (nf < 7) + { + avalue[i] = pfr; + pfr += 2; + nf += 2; + } + else + { + if (((long) pst) & 4) + pst++; + avalue[i] = pst; + pst += 4; + nf = 8; + } + break; +#endif + default: FFI_ASSERT (0); } @@ -1101,8 +1253,36 @@ if (cif->abi == FFI_SYSV && cif->rtype->type == FFI_TYPE_STRUCT && size <= 8) return FFI_SYSV_TYPE_SMALL_STRUCT + size; - return cif->rtype->type; - +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + else if (cif->rtype->type == FFI_TYPE_LONGDOUBLE + && cif->abi != FFI_LINUX && cif->abi != FFI_LINUX_SOFT_FLOAT) + return FFI_TYPE_STRUCT; +#endif + /* With FFI_LINUX_SOFT_FLOAT floats and doubles are handled like UINT32 + respectivley UINT64. 
*/ + if (cif->abi == FFI_LINUX_SOFT_FLOAT) + { + switch (cif->rtype->type) + { + case FFI_TYPE_FLOAT: + return FFI_TYPE_UINT32; + break; + case FFI_TYPE_DOUBLE: + return FFI_TYPE_UINT64; + break; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + return FFI_TYPE_UINT128; + break; +#endif + default: + return cif->rtype->type; + } + } + else + { + return cif->rtype->type; + } } int FFI_HIDDEN ffi_closure_helper_LINUX64 (ffi_closure *, void *, Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c Tue Mar 4 21:09:11 2008 @@ -1,12 +1,12 @@ -#if !(defined(__APPLE__) && !defined(__ppc__)) /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1998 Geoffrey Keating + ffi_darwin.c - PowerPC Foreign Function Interface - - Darwin ABI support (c) 2001 John Hornkvist - AIX ABI support (c) 2002 Free Software Foundation, Inc. + Copyright (C) 1998 Geoffrey Keating + Copyright (C) 2001 John Hornkvist + Copyright (C) 2002, 2006, 2007 Free Software Foundation, Inc. + FFI support for Darwin and AIX. + Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including @@ -80,9 +80,7 @@ */ -/*@-exportheader@*/ void ffi_prep_args(extended_cif *ecif, unsigned *const stack) -/*@=exportheader@*/ { const unsigned bytes = ecif->cif->bytes; const unsigned flags = ecif->cif->flags; @@ -228,6 +226,48 @@ //FFI_ASSERT(flags & FLAG_4_GPR_ARGUMENTS || intarg_count <= 4); } +/* Adjust the size of S to be correct for Darwin. + On Darwin, the first field of a structure has natural alignment. */ + +static void +darwin_adjust_aggregate_sizes (ffi_type *s) +{ + int i; + + if (s->type != FFI_TYPE_STRUCT) + return; + + s->size = 0; + for (i = 0; s->elements[i] != NULL; i++) + { + ffi_type *p; + int align; + + p = s->elements[i]; + darwin_adjust_aggregate_sizes (p); + if (i == 0 + && (p->type == FFI_TYPE_UINT64 + || p->type == FFI_TYPE_SINT64 + || p->type == FFI_TYPE_DOUBLE + || p->alignment == 8)) + align = 8; + else if (p->alignment == 16 || p->alignment < 4) + align = p->alignment; + else + align = 4; + s->size = ALIGN(s->size, align) + p->size; + } + + s->size = ALIGN(s->size, s->alignment); + + if (s->elements[0]->type == FFI_TYPE_UINT64 + || s->elements[0]->type == FFI_TYPE_SINT64 + || s->elements[0]->type == FFI_TYPE_DOUBLE + || s->elements[0]->alignment == 8) + s->alignment = s->alignment > 8 ? s->alignment : 8; + /* Do not add additional tail padding. */ +} + /* Perform machine dependent cif processing. */ ffi_status ffi_prep_cif_machdep(ffi_cif *cif) { @@ -240,8 +280,16 @@ unsigned size_al = 0; /* All the machine-independent calculation of cif->bytes will be wrong. + All the calculation of structure sizes will also be wrong. Redo the calculation for DARWIN. */ + if (cif->abi == FFI_DARWIN) + { + darwin_adjust_aggregate_sizes (cif->rtype); + for (i = 0; i < cif->nargs; i++) + darwin_adjust_aggregate_sizes (cif->arg_types[i]); + } + /* Space for the frame pointer, callee's LR, CR, etc, and for the asm's temp regs. 
*/ @@ -376,25 +424,12 @@ return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_AIX(/*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void), - void (*fn2)(extended_cif *, unsigned *const)); -extern void ffi_call_DARWIN(/*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void), - void (*fn2)(extended_cif *, unsigned *const)); -/*@=declundef@*/ -/*@=exportheader@*/ - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +extern void ffi_call_AIX(extended_cif *, unsigned, unsigned, unsigned *, + void (*fn)(void), void (*fn2)(void)); +extern void ffi_call_DARWIN(extended_cif *, unsigned, unsigned, unsigned *, + void (*fn)(void), void (*fn2)(void)); + +void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; @@ -407,9 +442,7 @@ if ((rvalue == NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -417,16 +450,12 @@ switch (cif->abi) { case FFI_AIX: - /*@-usedef@*/ - ffi_call_AIX(&ecif, -cif->bytes, - cif->flags, ecif.rvalue, fn, ffi_prep_args); - /*@=usedef@*/ + ffi_call_AIX(&ecif, -cif->bytes, cif->flags, ecif.rvalue, fn, + ffi_prep_args); break; case FFI_DARWIN: - /*@-usedef@*/ - ffi_call_DARWIN(&ecif, -cif->bytes, - cif->flags, ecif.rvalue, fn, ffi_prep_args); - /*@=usedef@*/ + ffi_call_DARWIN(&ecif, -cif->bytes, cif->flags, ecif.rvalue, fn, + ffi_prep_args); break; default: FFI_ASSERT(0); @@ -499,10 +528,11 @@ */ ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*, void*, void**, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) { unsigned int *tramp; struct ffi_aix_trampoline_struct *tramp_aix; @@ -524,14 +554,14 @@ tramp[8] = 0x816b0004; /* lwz r11,4(r11) static chain */ tramp[9] = 0x4e800420; /* bctr */ tramp[2] = (unsigned long) ffi_closure_ASM; /* function */ - tramp[3] = (unsigned long) closure; /* context */ + tramp[3] = (unsigned long) codeloc; /* context */ closure->cif = cif; closure->fun = fun; closure->user_data = user_data; /* Flush the icache. Only necessary on Darwin. */ - flush_range(&closure->tramp[0],FFI_TRAMPOLINE_SIZE); + flush_range(codeloc, FFI_TRAMPOLINE_SIZE); break; @@ -544,7 +574,7 @@ tramp_aix->code_pointer = fd->code_pointer; tramp_aix->toc = fd->toc; - tramp_aix->static_chain = closure; + tramp_aix->static_chain = codeloc; closure->cif = cif; closure->fun = fun; closure->user_data = user_data; @@ -768,4 +798,3 @@ /* Tell ffi_closure_ASM to perform return type promotions. */ return cif->rtype->type; } -#endif Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* -----------------------------------------------------------------*-C-*- ffitarget.h - Copyright (c) 1996-2003 Red Hat, Inc. + Copyright (C) 2007 Free Software Foundation, Inc Target configuration macros for PowerPC. 
Permission is hereby granted, free of charge, to any person obtaining @@ -13,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ @@ -43,10 +45,20 @@ FFI_SYSV, FFI_GCC_SYSV, FFI_LINUX64, + FFI_LINUX, + FFI_LINUX_SOFT_FLOAT, # ifdef POWERPC64 FFI_DEFAULT_ABI = FFI_LINUX64, # else +# if (!defined(__NO_FPRS__) && (__LDBL_MANT_DIG__ == 106)) + FFI_DEFAULT_ABI = FFI_LINUX, +# else +# ifdef __NO_FPRS__ + FFI_DEFAULT_ABI = FFI_LINUX_SOFT_FLOAT, +# else FFI_DEFAULT_ABI = FFI_GCC_SYSV, +# endif +# endif # endif #endif @@ -69,7 +81,7 @@ FFI_DEFAULT_ABI = FFI_SYSV, #endif - FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 + FFI_LAST_ABI } ffi_abi; #endif @@ -78,8 +90,14 @@ #define FFI_CLOSURES 1 #define FFI_NATIVE_RAW_API 0 +/* For additional types like the below, take care about the order in + ppc_closures.S. They must follow after the FFI_TYPE_LAST. */ + +/* Needed for soft-float long-double-128 support. */ +#define FFI_TYPE_UINT128 (FFI_TYPE_LAST + 1) + /* Needed for FFI_SYSV small structure returns. */ -#define FFI_SYSV_TYPE_SMALL_STRUCT (FFI_TYPE_LAST) +#define FFI_SYSV_TYPE_SMALL_STRUCT (FFI_TYPE_LAST + 2) #if defined(POWERPC64) || defined(POWERPC_AIX) #define FFI_TRAMPOLINE_SIZE 24 Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- sysv.h - Copyright (c) 2003 Jakub Jelinek + Copyright (c) 2008 Red Hat, Inc. PowerPC64 Assembly glue. @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -47,8 +49,8 @@ std %r0, 16(%r1) mr %r28, %r1 /* our AP. */ - stdux %r1, %r1, %r4 .LCFI0: + stdux %r1, %r1, %r4 mr %r31, %r5 /* flags, */ mr %r30, %r6 /* rvalue, */ mr %r29, %r7 /* function address. */ @@ -100,6 +102,10 @@ /* Make the call. */ bctrl + /* This must follow the call immediately, the unwinder + uses this to find out if r2 has been saved or not. */ + ld %r2, 40(%r1) + /* Now, deal with the return value. */ mtcrf 0x01, %r31 bt- 30, .Ldone_return_value @@ -109,7 +115,6 @@ .Ldone_return_value: /* Restore the registers we used and return. */ - ld %r2, 40(%r1) mr %r1, %r28 ld %r0, 16(%r28) ld %r28, -32(%r1) @@ -120,12 +125,10 @@ blr .Lfp_return_value: - bt 27, .Lfd_return_value bf 28, .Lfloat_return_value stfd %f1, 0(%r30) - b .Ldone_return_value -.Lfd_return_value: - stfd %f1, 0(%r30) + mtcrf 0x02, %r31 /* cr6 */ + bf 27, .Ldone_return_value stfd %f2, 8(%r30) b .Ldone_return_value .Lfloat_return_value: @@ -178,3 +181,7 @@ .align 3 .LEFDE1: #endif + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S Tue Mar 4 21:09:11 2008 @@ -1,3 +1,29 @@ +/* ----------------------------------------------------------------------- + sysv.h - Copyright (c) 2003 Jakub Jelinek + Copyright (c) 2008 Red Hat, Inc. + + PowerPC64 Assembly glue. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ #define LIBFFI_ASM #include #include @@ -204,3 +230,7 @@ .align 3 .LEFDE1: #endif + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/ppc_closure.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/ppc_closure.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/ppc_closure.S Tue Mar 4 21:09:11 2008 @@ -1,3 +1,29 @@ +/* ----------------------------------------------------------------------- + sysv.h - Copyright (c) 2003 Jakub Jelinek + Copyright (c) 2008 Red Hat, Inc. + + PowerPC Assembly glue. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ #define LIBFFI_ASM #include #include @@ -28,6 +54,7 @@ stw %r9, 40(%r1) stw %r10,44(%r1) +#ifndef __NO_FPRS__ # next save fpr 1 to fpr 8 (aligned to 8) stfd %f1, 48(%r1) stfd %f2, 56(%r1) @@ -37,6 +64,7 @@ stfd %f6, 88(%r1) stfd %f7, 96(%r1) stfd %f8, 104(%r1) +#endif # set up registers for the routine that actually does the work # get the context pointer from the trampoline @@ -58,218 +86,190 @@ # make the call bl ffi_closure_helper_SYSV at local - +.Lret: # now r3 contains the return type # so use it to look up in a table # so we know how to deal with each type # look up the proper starting point in table # by using return type as offset - addi %r6,%r1,112 # get pointer to results area - bl .Lget_ret_type0_addr # get pointer to .Lret_type0 into LR - mflr %r4 # move to r4 - slwi %r3,%r3,4 # now multiply return type by 16 - add %r3,%r3,%r4 # add contents of table to table address + + mflr %r4 # move address of .Lret to r4 + slwi %r3,%r3,4 # now multiply return type by 16 + addi %r4, %r4, .Lret_type0 - .Lret + lwz %r0,148(%r1) + add %r3,%r3,%r4 # add contents of table to table address mtctr %r3 - bctr # jump to it + bctr # jump to it .LFE1: # Each of the ret_typeX code fragments has to be exactly 16 bytes long # (4 instructions). For cache effectiveness we align to a 16 byte boundary # first. 
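The rewritten dispatch above drops the old blrl trick and the per-case labels: ffi_closure_helper_SYSV leaves the return-type code in r3, the code is shifted left by 4 (multiplied by 16), and the result is added to the address of .Lret_type0, so each return path must be exactly 16 bytes (4 instructions) for the arithmetic to land on the right fragment. A C analogue of the same idea, illustrative only and not part of the commit, is an indexed table of equally shaped handlers in place of a compare-and-branch chain:

    #include <stdio.h>

    typedef void (*ret_handler)(const void *results);

    static void ret_void (const void *r) { (void) r; }
    static void ret_int  (const void *r) { printf("int   %d\n", *(const int *) r); }
    static void ret_float(const void *r) { printf("float %g\n", *(const float *) r); }

    int main(void)
    {
      /* Indices 0, 1, 2 stand in for FFI_TYPE_VOID, FFI_TYPE_INT, FFI_TYPE_FLOAT. */
      static const ret_handler table[] = { ret_void, ret_int, ret_float };
      float result = 2.5f;
      int return_type = 2;          /* what the helper would have left in r3 */

      table[return_type](&result);  /* one indexed jump instead of a compare chain */
      return 0;
    }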
.align 4 - - nop - nop - nop -.Lget_ret_type0_addr: - blrl - # case FFI_TYPE_VOID .Lret_type0: - b .Lfinish - nop - nop + mtlr %r0 + addi %r1,%r1,144 + blr nop # case FFI_TYPE_INT -.Lret_type1: - lwz %r3,0(%r6) - b .Lfinish - nop - nop + lwz %r3,112+0(%r1) + mtlr %r0 +.Lfinish: + addi %r1,%r1,144 + blr # case FFI_TYPE_FLOAT -.Lret_type2: - lfs %f1,0(%r6) - b .Lfinish - nop - nop + lfs %f1,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_DOUBLE -.Lret_type3: - lfd %f1,0(%r6) - b .Lfinish - nop - nop + lfd %f1,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_LONGDOUBLE -.Lret_type4: - lfd %f1,0(%r6) + lfd %f1,112+0(%r1) + lfd %f2,112+8(%r1) + mtlr %r0 b .Lfinish - nop - nop # case FFI_TYPE_UINT8 -.Lret_type5: - lbz %r3,3(%r6) - b .Lfinish - nop - nop + lbz %r3,112+3(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_SINT8 -.Lret_type6: - lbz %r3,3(%r6) + lbz %r3,112+3(%r1) extsb %r3,%r3 + mtlr %r0 b .Lfinish - nop # case FFI_TYPE_UINT16 -.Lret_type7: - lhz %r3,2(%r6) - b .Lfinish - nop - nop + lhz %r3,112+2(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_SINT16 -.Lret_type8: - lha %r3,2(%r6) - b .Lfinish - nop - nop + lha %r3,112+2(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_UINT32 -.Lret_type9: - lwz %r3,0(%r6) - b .Lfinish - nop - nop + lwz %r3,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_SINT32 -.Lret_type10: - lwz %r3,0(%r6) - b .Lfinish - nop - nop + lwz %r3,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_TYPE_UINT64 -.Lret_type11: - lwz %r3,0(%r6) - lwz %r4,4(%r6) + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) + mtlr %r0 b .Lfinish - nop # case FFI_TYPE_SINT64 -.Lret_type12: - lwz %r3,0(%r6) - lwz %r4,4(%r6) + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) + mtlr %r0 b .Lfinish - nop # case FFI_TYPE_STRUCT -.Lret_type13: - b .Lfinish - nop - nop + mtlr %r0 + addi %r1,%r1,144 + blr nop # case FFI_TYPE_POINTER -.Lret_type14: - lwz %r3,0(%r6) - b .Lfinish - nop - nop + lwz %r3,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr + +# case FFI_TYPE_UINT128 + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) + lwz %r5,112+8(%r1) + bl .Luint128 # The return types below are only used when the ABI type is FFI_SYSV. # case FFI_SYSV_TYPE_SMALL_STRUCT + 1. One byte struct. -.Lret_type15: -# fall through. - lbz %r3,0(%r6) - b .Lfinish - nop - nop + lbz %r3,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_SYSV_TYPE_SMALL_STRUCT + 2. Two byte struct. -.Lret_type16: -# fall through. - lhz %r3,0(%r6) - b .Lfinish - nop - nop + lhz %r3,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_SYSV_TYPE_SMALL_STRUCT + 3. Three byte struct. -.Lret_type17: -# fall through. - lwz %r3,0(%r6) + lwz %r3,112+0(%r1) srwi %r3,%r3,8 + mtlr %r0 b .Lfinish - nop # case FFI_SYSV_TYPE_SMALL_STRUCT + 4. Four byte struct. -.Lret_type18: -# this one handles the structs from above too. - lwz %r3,0(%r6) - b .Lfinish - nop - nop + lwz %r3,112+0(%r1) + mtlr %r0 + addi %r1,%r1,144 + blr # case FFI_SYSV_TYPE_SMALL_STRUCT + 5. Five byte struct. -.Lret_type19: -# fall through. - lwz %r3,0(%r6) - lwz %r4,4(%r6) + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) li %r5,24 b .Lstruct567 # case FFI_SYSV_TYPE_SMALL_STRUCT + 6. Six byte struct. -.Lret_type20: -# fall through. - lwz %r3,0(%r6) - lwz %r4,4(%r6) + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) li %r5,16 b .Lstruct567 # case FFI_SYSV_TYPE_SMALL_STRUCT + 7. Seven byte struct. -.Lret_type21: -# fall through. 
- lwz %r3,0(%r6) - lwz %r4,4(%r6) + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) li %r5,8 b .Lstruct567 # case FFI_SYSV_TYPE_SMALL_STRUCT + 8. Eight byte struct. -.Lret_type22: -# this one handles the above unhandled structs. - lwz %r3,0(%r6) - lwz %r4,4(%r6) + lwz %r3,112+0(%r1) + lwz %r4,112+4(%r1) + mtlr %r0 b .Lfinish - nop -# case done -.Lfinish: +.Lstruct567: + subfic %r6,%r5,32 + srw %r4,%r4,%r5 + slw %r6,%r3,%r6 + srw %r3,%r3,%r5 + or %r4,%r6,%r4 + mtlr %r0 + addi %r1,%r1,144 + blr - lwz %r0,148(%r1) +.Luint128: + lwz %r6,112+12(%r1) mtlr %r0 addi %r1,%r1,144 blr -.Lstruct567: - subfic %r0,%r5,32 - srw %r4,%r4,%r5 - slw %r0,%r3,%r0 - srw %r3,%r3,%r5 - or %r4,%r0,%r4 - b .Lfinish END(ffi_closure_SYSV) .section ".eh_frame",EH_FRAME_FLAGS, at progbits @@ -321,3 +321,7 @@ .LEFDE1: #endif + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/powerpc/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/powerpc/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/powerpc/sysv.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - sysv.h - Copyright (c) 1998 Geoffrey Keating + sysv.S - Copyright (c) 1998 Geoffrey Keating + Copyright (C) 2007 Free Software Foundation, Inc PowerPC Assembly glue. @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -98,13 +100,17 @@ bctrl /* Now, deal with the return value. */ - mtcrf 0x01,%r31 + mtcrf 0x01,%r31 /* cr7 */ bt- 31,L(small_struct_return_value) bt- 30,L(done_return_value) bt- 29,L(fp_return_value) stw %r3,0(%r30) bf+ 28,L(done_return_value) stw %r4,4(%r30) + mtcrf 0x02,%r31 /* cr6 */ + bf 27,L(done_return_value) + stw %r5,8(%r30) + stw %r6,12(%r30) /* Fall through... 
*/ L(done_return_value): @@ -121,6 +127,9 @@ L(fp_return_value): bf 28,L(float_return_value) stfd %f1,0(%r30) + mtcrf 0x02,%r31 /* cr6 */ + bf 27,L(done_return_value) + stfd %f2,8(%r30) b L(done_return_value) L(float_return_value): stfs %f1,0(%r30) @@ -215,3 +224,7 @@ .align 2 .LEFDE1: #endif + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/prep_cif.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/prep_cif.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/prep_cif.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - prep_cif.c - Copyright (c) 1996, 1998 Red Hat, Inc. + prep_cif.c - Copyright (c) 1996, 1998, 2007 Red Hat, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -12,20 +12,20 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include #include #include - /* Round up to FFI_SIZEOF_ARG. */ #define STACK_ARG_SIZE(x) ALIGN(x, FFI_SIZEOF_ARG) @@ -33,14 +33,12 @@ /* Perform machine independent initialization of aggregate type specifications. */ -static ffi_status initialize_aggregate(/*@out@*/ ffi_type *arg) +static ffi_status initialize_aggregate(ffi_type *arg) { - ffi_type **ptr; + ffi_type **ptr; FFI_ASSERT(arg != NULL); - /*@-usedef@*/ - FFI_ASSERT(arg->elements != NULL); FFI_ASSERT(arg->size == 0); FFI_ASSERT(arg->alignment == 0); @@ -51,33 +49,15 @@ { if (((*ptr)->size == 0) && (initialize_aggregate((*ptr)) != FFI_OK)) return FFI_BAD_TYPEDEF; - + /* Perform a sanity check on the argument type */ FFI_ASSERT_VALID_TYPE(*ptr); -#ifdef POWERPC_DARWIN - { - int curalign; - - curalign = (*ptr)->alignment; - if (ptr != &(arg->elements[0])) { - if (curalign > 4 && curalign != 16) { - curalign = 4; - } - } - arg->size = ALIGN(arg->size, curalign); - arg->size += (*ptr)->size; - - arg->alignment = (arg->alignment > curalign) ? - arg->alignment : curalign; - } -#else arg->size = ALIGN(arg->size, (*ptr)->alignment); arg->size += (*ptr)->size; - arg->alignment = (arg->alignment > (*ptr)->alignment) ? + arg->alignment = (arg->alignment > (*ptr)->alignment) ? 
arg->alignment : (*ptr)->alignment; -#endif ptr++; } @@ -95,8 +75,6 @@ return FFI_BAD_TYPEDEF; else return FFI_OK; - - /*@=usedef@*/ } #ifndef __CRIS__ @@ -107,23 +85,8 @@ /* Perform machine independent ffi_cif preparation, then call machine dependent routine. */ -#ifdef X86_DARWIN -static inline int struct_on_stack(int size) -{ - if (size > 8) return 1; - /* This is not what the ABI says, but is what is really implemented */ - switch (size) { - case 1: case 2: case 4: case 8: return 0; - } - return 1; -} -#endif - - -ffi_status ffi_prep_cif(/*@out@*/ /*@partial@*/ ffi_cif *cif, - ffi_abi abi, unsigned int nargs, - /*@dependent@*/ /*@out@*/ /*@partial@*/ ffi_type *rtype, - /*@dependent@*/ ffi_type **atypes) +ffi_status ffi_prep_cif(ffi_cif *cif, ffi_abi abi, unsigned int nargs, + ffi_type *rtype, ffi_type **atypes) { unsigned bytes = 0; unsigned int i; @@ -140,10 +103,8 @@ cif->flags = 0; /* Initialize the return type if necessary */ - /*@-usedef@*/ if ((cif->rtype->size == 0) && (initialize_aggregate(cif->rtype) != FFI_OK)) return FFI_BAD_TYPEDEF; - /*@=usedef@*/ /* Perform a sanity check on the return type */ FFI_ASSERT_VALID_TYPE(cif->rtype); @@ -156,10 +117,9 @@ && (cif->abi != FFI_V9 || cif->rtype->size > 32) #endif #ifdef X86_DARWIN - - && (struct_on_stack(cif->rtype->size)) + && (cif->rtype->size > 8) #endif - ) + ) bytes = STACK_ARG_SIZE(sizeof(void*)); #endif @@ -170,20 +130,11 @@ if (((*ptr)->size == 0) && (initialize_aggregate((*ptr)) != FFI_OK)) return FFI_BAD_TYPEDEF; - /* Perform a sanity check on the argument type, do this + /* Perform a sanity check on the argument type, do this check after the initialization. */ FFI_ASSERT_VALID_TYPE(*ptr); -#if defined(X86_DARWIN) - { - int align = (*ptr)->alignment; - if (align > 4) align = 4; - if ((align - 1) & bytes) - bytes = ALIGN(bytes, align); - bytes += STACK_ARG_SIZE((*ptr)->size); - } - -#elif !defined __x86_64__ && !defined S390 && !defined PA +#if !defined __x86_64__ && !defined S390 && !defined PA #ifdef SPARC if (((*ptr)->type == FFI_TYPE_STRUCT && ((*ptr)->size > 16 || cif->abi != FFI_V9)) @@ -196,7 +147,7 @@ /* Add any padding if necessary */ if (((*ptr)->alignment - 1) & bytes) bytes = ALIGN(bytes, (*ptr)->alignment); - + bytes += STACK_ARG_SIZE((*ptr)->size); } #endif @@ -208,3 +159,16 @@ return ffi_prep_cif_machdep(cif); } #endif /* not __CRIS__ */ + +#if FFI_CLOSURES + +ffi_status +ffi_prep_closure (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data) +{ + return ffi_prep_closure_loc (closure, cif, fun, user_data, closure); +} + +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/s390/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/s390/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/s390/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 2000 Software AG + ffi.c - Copyright (c) 2000, 2007 Software AG + Copyright (c) 2008 Red Hat, Inc S390 Foreign Function Interface @@ -207,6 +208,12 @@ void *arg = *p_argv; int type = (*ptr)->type; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + /* 16-byte long double is passed like a struct. */ + if (type == FFI_TYPE_LONGDOUBLE) + type = FFI_TYPE_STRUCT; +#endif + /* Check how a structure type is passed. 
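With the splint annotations and the Darwin special case gone, initialize_aggregate() above is reduced to the generic rule: each member is placed at the next multiple of its own alignment, the sizes accumulate, and the aggregate's alignment is the largest member alignment (the Darwin first-field rule now lives in darwin_adjust_aggregate_sizes() in ffi_darwin.c). Worked by hand for struct { char c; double d; }, assuming 1-byte chars and 8-byte, 8-aligned doubles, and using a round-up macro assumed equivalent to libffi's ALIGN for power-of-two alignments:

    #include <stdio.h>
    #include <stddef.h>

    #define ROUND_UP(v, a) ((((size_t) (v)) + ((size_t) (a) - 1)) & ~((size_t) (a) - 1))

    int main(void)
    {
      size_t size = 0, alignment = 1;

      /* member 0: char, size 1, alignment 1 */
      size = ROUND_UP(size, 1) + 1;        /* 0 -> 1 */
      if (alignment < 1) alignment = 1;

      /* member 1: double, size 8, alignment 8 */
      size = ROUND_UP(size, 8) + 8;        /* 1 rounds to 8, then 16 */
      if (alignment < 8) alignment = 8;

      /* round the total up to the aggregate alignment (tail padding) */
      size = ROUND_UP(size, alignment);    /* stays 16 */

      printf("size=%zu alignment=%zu\n", size, alignment);   /* size=16 alignment=8 */
      return 0;
    }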
*/ if (type == FFI_TYPE_STRUCT) { @@ -364,6 +371,12 @@ cif->flags = FFI390_RET_DOUBLE; break; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: + cif->flags = FFI390_RET_STRUCT; + n_gpr++; + break; +#endif /* Integer values are returned in gpr 2 (and gpr 3 for 64-bit values on 31-bit machines). */ case FFI_TYPE_UINT64: @@ -400,6 +413,12 @@ { int type = (*ptr)->type; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + /* 16-byte long double is passed like a struct. */ + if (type == FFI_TYPE_LONGDOUBLE) + type = FFI_TYPE_STRUCT; +#endif + /* Check how a structure type is passed. */ if (type == FFI_TYPE_STRUCT) { @@ -562,6 +581,12 @@ int deref_struct_pointer = 0; int type = (*ptr)->type; +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + /* 16-byte long double is passed like a struct. */ + if (type == FFI_TYPE_LONGDOUBLE) + type = FFI_TYPE_STRUCT; +#endif + /* Check how a structure type is passed. */ if (type == FFI_TYPE_STRUCT) { @@ -662,6 +687,9 @@ /* Void is easy, and so is struct. */ case FFI_TYPE_VOID: case FFI_TYPE_STRUCT: +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE + case FFI_TYPE_LONGDOUBLE: +#endif break; /* Floating point values are returned in fpr 0. */ @@ -709,17 +737,18 @@ /*====================================================================*/ /* */ -/* Name - ffi_prep_closure. */ +/* Name - ffi_prep_closure_loc. */ /* */ /* Function - Prepare a FFI closure. */ /* */ /*====================================================================*/ ffi_status -ffi_prep_closure (ffi_closure *closure, - ffi_cif *cif, - void (*fun) (ffi_cif *, void *, void **, void *), - void *user_data) +ffi_prep_closure_loc (ffi_closure *closure, + ffi_cif *cif, + void (*fun) (ffi_cif *, void *, void **, void *), + void *user_data, + void *codeloc) { FFI_ASSERT (cif->abi == FFI_SYSV); @@ -728,7 +757,7 @@ *(short *)&closure->tramp [2] = 0x9801; /* lm %r0,%r1,6(%r1) */ *(short *)&closure->tramp [4] = 0x1006; *(short *)&closure->tramp [6] = 0x07f1; /* br %r1 */ - *(long *)&closure->tramp [8] = (long)closure; + *(long *)&closure->tramp [8] = (long)codeloc; *(long *)&closure->tramp[12] = (long)&ffi_closure_SYSV; #else *(short *)&closure->tramp [0] = 0x0d10; /* basr %r1,0 */ @@ -736,7 +765,7 @@ *(short *)&closure->tramp [4] = 0x100e; *(short *)&closure->tramp [6] = 0x0004; *(short *)&closure->tramp [8] = 0x07f1; /* br %r1 */ - *(long *)&closure->tramp[16] = (long)closure; + *(long *)&closure->tramp[16] = (long)codeloc; *(long *)&closure->tramp[24] = (long)&ffi_closure_SYSV; #endif Modified: python/trunk/Modules/_ctypes/libffi/src/s390/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/s390/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/s390/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/s390/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/s390/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/s390/sysv.S Tue Mar 4 21:09:11 2008 @@ -1,6 +1,7 @@ /* ----------------------------------------------------------------------- sysv.S - Copyright (c) 2000 Software AG - + Copyright (c) 2008 Red Hat, Inc. + S390 Foreign Function Interface Permission is hereby granted, free of charge, to any person obtaining @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -427,3 +429,6 @@ #endif +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/sh/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sh/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/sh/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 2002, 2003, 2004, 2005 Kaz Kojima + ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006, 2007 Kaz Kojima + Copyright (c) 2008 Red Hat, Inc. SuperH Foreign Function Interface @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -106,9 +108,7 @@ /* ffi_prep_args is called by the assembly routine once stack space has been allocated for the function's arguments */ -/*@-exportheader@*/ void ffi_prep_args(char *stack, extended_cif *ecif) -/*@=exportheader@*/ { register unsigned int i; register int tmp; @@ -406,20 +406,10 @@ return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); -/*@=declundef@*/ -/*@=exportheader@*/ - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +extern void ffi_call_SYSV(void (*)(char *, extended_cif *), extended_cif *, + unsigned, unsigned, unsigned *, void (*fn)(void)); + +void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; UINT64 trvalue; @@ -436,9 +426,7 @@ else if ((rvalue == NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -446,10 +434,8 @@ switch (cif->abi) { case FFI_SYSV: - /*@-usedef@*/ - ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ + ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, cif->flags, ecif.rvalue, + fn); break; default: FFI_ASSERT(0); @@ -468,10 +454,11 @@ #endif ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*, void*, void**, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) { unsigned int *tramp; unsigned short insn; @@ -491,7 +478,7 @@ tramp[0] = 0xd102d301; tramp[1] = 0x412b0000 | insn; #endif - *(void **) &tramp[2] = (void *)closure; /* ctx */ + *(void **) &tramp[2] = (void *)codeloc; /* ctx */ *(void **) &tramp[3] = (void *)ffi_closure_SYSV; /* funaddr */ closure->cif = cif; @@ -500,7 +487,7 @@ #if defined(__SH4__) /* Flush the icache. */ - __ic_invalidate(&closure->tramp[0]); + __ic_invalidate(codeloc); #endif return FFI_OK; @@ -535,7 +522,6 @@ int freg = 0; #endif ffi_cif *cif; - double temp; cif = closure->cif; avalue = alloca(cif->nargs * sizeof(void *)); @@ -544,7 +530,7 @@ returns the data directly to the caller. 
*/ if (cif->rtype->type == FFI_TYPE_STRUCT && STRUCT_VALUE_ADDRESS_WITH_ARG) { - rvalue = *pgr++; + rvalue = (void *) *pgr++; ireg = 1; } else @@ -611,6 +597,8 @@ { if (freg + 1 >= NFREGARG) continue; + if (freg & 1) + pfr++; freg = (freg + 1) & ~1; freg += 2; avalue[i] = pfr; Modified: python/trunk/Modules/_ctypes/libffi/src/sh/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sh/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/sh/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/sh/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sh/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/sh/sysv.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - sysv.S - Copyright (c) 2002, 2003, 2004 Kaz Kojima + sysv.S - Copyright (c) 2002, 2003, 2004, 2006 Kaz Kojima SuperH Foreign Function Interface @@ -17,7 +17,8 @@ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR + ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
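The pattern repeated across the powerpc, s390, sh and sparc ports in this commit is the same: ffi_prep_closure() becomes ffi_prep_closure_loc(), the trampoline stores codeloc (the address the closure will actually be executed from) instead of the writable closure pointer, and the instruction cache is flushed at codeloc where that matters (flush_range() on Darwin, __ic_invalidate() on SH-4). The old entry point survives as the small wrapper added to prep_cif.c, which forwards with codeloc equal to the closure itself. A minimal client-side sketch of the new entry point, illustrative only; it assumes the ffi_closure_alloc()/ffi_closure_free() allocator that normally accompanies ffi_prep_closure_loc(), and add_one is a hypothetical handler:

    #include <ffi.h>
    #include <stdio.h>

    /* Closure handler: returns its single int argument plus one. */
    static void add_one(ffi_cif *cif, void *ret, void **args, void *user_data)
    {
      (void) cif; (void) user_data;
      *(ffi_arg *) ret = *(int *) args[0] + 1;
    }

    int main(void)
    {
      ffi_cif cif;
      ffi_type *arg_types[1] = { &ffi_type_sint };
      void *code = NULL;
      ffi_closure *closure = ffi_closure_alloc(sizeof(ffi_closure), &code);

      if (closure != NULL
          && ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_sint, arg_types) == FFI_OK
          && ffi_prep_closure_loc(closure, &cif, add_one, NULL, code) == FFI_OK)
        {
          /* Call through the executable address, not the writable closure. */
          int (*fn)(int) = (int (*)(int)) code;
          printf("%d\n", fn(41));            /* prints 42 */
        }

      if (closure != NULL)
        ffi_closure_free(closure);
      return 0;
    }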
@@ -829,13 +830,13 @@ .byte 0x6 /* uleb128 0x6 */ .byte 0x8e /* DW_CFA_offset, column 0xe */ .byte 0x5 /* uleb128 0x5 */ - .byte 0x8b /* DW_CFA_offset, column 0xb */ + .byte 0x84 /* DW_CFA_offset, column 0x4 */ .byte 0x4 /* uleb128 0x4 */ - .byte 0x8a /* DW_CFA_offset, column 0xa */ + .byte 0x85 /* DW_CFA_offset, column 0x5 */ .byte 0x3 /* uleb128 0x3 */ - .byte 0x89 /* DW_CFA_offset, column 0x9 */ + .byte 0x86 /* DW_CFA_offset, column 0x6 */ .byte 0x2 /* uleb128 0x2 */ - .byte 0x88 /* DW_CFA_offset, column 0x8 */ + .byte 0x87 /* DW_CFA_offset, column 0x7 */ .byte 0x1 /* uleb128 0x1 */ .byte 0x4 /* DW_CFA_advance_loc4 */ .4byte .LCFIE-.LCFID Modified: python/trunk/Modules/_ctypes/libffi/src/sh64/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sh64/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/sh64/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- ffi.c - Copyright (c) 2003, 2004 Kaz Kojima + Copyright (c) 2008 Anthony Green SuperH SHmedia Foreign Function Interface @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -238,12 +240,12 @@ /*@out@*/ extended_cif *, unsigned, unsigned, long long, /*@out@*/ unsigned *, - void (*fn)()); + void (*fn)(void)); /*@=declundef@*/ /*@=exportheader@*/ void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(), + void (*fn)(void), /*@out@*/ void *rvalue, /*@dependent@*/ void **avalue) { Modified: python/trunk/Modules/_ctypes/libffi/src/sh64/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sh64/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/sh64/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/sh64/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sh64/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/sh64/sysv.S Tue Mar 4 21:09:11 2008 @@ -17,7 +17,8 @@ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR + ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Modified: python/trunk/Modules/_ctypes/libffi/src/sparc/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sparc/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/sparc/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1996, 2003, 2004 Red Hat, Inc. + ffi.c - Copyright (c) 1996, 2003, 2004, 2007, 2008 Red Hat, Inc. SPARC Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
----------------------------------------------------------------------- */ #include @@ -425,10 +426,11 @@ #endif ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*, void*, void**, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) { unsigned int *tramp = (unsigned int *) &closure->tramp[0]; unsigned long fn; @@ -443,7 +445,7 @@ tramp[3] = 0x01000000; /* nop */ *((unsigned long *) &tramp[4]) = fn; #else - unsigned long ctx = (unsigned long) closure; + unsigned long ctx = (unsigned long) codeloc; FFI_ASSERT (cif->abi == FFI_V8); fn = (unsigned long) ffi_closure_v8; tramp[0] = 0x03000000 | fn >> 10; /* sethi %hi(fn), %g1 */ Modified: python/trunk/Modules/_ctypes/libffi/src/sparc/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sparc/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/sparc/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -13,13 +13,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ Modified: python/trunk/Modules/_ctypes/libffi/src/sparc/v8.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sparc/v8.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/sparc/v8.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - v8.S - Copyright (c) 1996, 1997, 2003, 2004 Red Hat, Inc. + v8.S - Copyright (c) 1996, 1997, 2003, 2004, 2008 Red Hat, Inc. SPARC Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -265,3 +266,7 @@ .byte 0x1f ! uleb128 0x1f .align WS .LLEFDE2: + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/sparc/v9.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/sparc/v9.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/sparc/v9.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - v9.S - Copyright (c) 2000, 2003, 2004 Red Hat, Inc. + v9.S - Copyright (c) 2000, 2003, 2004, 2008 Red Hat, Inc. SPARC 64-bit Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #define LIBFFI_ASM @@ -300,3 +301,7 @@ .align 8 .LLEFDE2: #endif + +#ifdef __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/x86/darwin.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/darwin.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/darwin.S Tue Mar 4 21:09:11 2008 @@ -1,8 +1,8 @@ -#ifdef __i386__ /* ----------------------------------------------------------------------- - darwin.S - Copyright (c) 1996, 1998, 2001, 2002, 2003 Red Hat, Inc. - - X86 Foreign Function Interface + darwin.S - Copyright (c) 1996, 1998, 2001, 2002, 2003, 2005 Red Hat, Inc. + Copyright (C) 2008 Free Software Foundation, Inc. 
+ + X86 Foreign Function Interface Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -18,16 +18,12 @@ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR + ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ - -/* - * This file is based on sysv.S and then hacked up by Ronald who hasn't done - * assembly programming in 8 years. - */ #ifndef __x86_64__ @@ -35,18 +31,11 @@ #include #include -#ifdef PyObjC_STRICT_DEBUGGING - /* XXX: Debugging of stack alignment, to be removed */ -#define ASSERT_STACK_ALIGNED movdqa -16(%esp), %xmm0 -#else -#define ASSERT_STACK_ALIGNED -#endif - .text .globl _ffi_prep_args -.align 4 + .align 4 .globl _ffi_call_SYSV _ffi_call_SYSV: @@ -54,15 +43,12 @@ pushl %ebp .LCFI0: movl %esp,%ebp - subl $8,%esp - ASSERT_STACK_ALIGNED .LCFI1: + subl $8,%esp /* Make room for all of the new args. */ movl 16(%ebp),%ecx subl %ecx,%esp - ASSERT_STACK_ALIGNED - movl %esp,%eax /* Place all of the ffi_prep_args in position */ @@ -71,27 +57,20 @@ pushl %eax call *8(%ebp) - ASSERT_STACK_ALIGNED - /* Return stack to previous state and call the function */ - addl $16,%esp - - ASSERT_STACK_ALIGNED + addl $16,%esp call *28(%ebp) - - /* XXX: return returns return with 'ret $4', that upsets the stack! */ - movl 16(%ebp),%ecx - addl %ecx,%esp - /* Load %ecx with the return type code */ movl 20(%ebp),%ecx + /* Protect %esi. We're going to pop it in the epilogue. */ + pushl %esi /* If the return value pointer is NULL, assume no return value. */ cmpl $0,24(%ebp) - jne retint + jne 0f /* Even if there is no space for the return value, we are obliged to handle floating-point values. */ @@ -99,145 +78,366 @@ jne noretval fstp %st(0) - jmp epilogue - -retint: - cmpl $FFI_TYPE_INT,%ecx - jne retfloat - /* Load %ecx with the pointer to storage for the return value */ - movl 24(%ebp),%ecx - movl %eax,0(%ecx) jmp epilogue +0: + .align 4 + call 1f +.Lstore_table: + .long noretval-.Lstore_table /* FFI_TYPE_VOID */ + .long retint-.Lstore_table /* FFI_TYPE_INT */ + .long retfloat-.Lstore_table /* FFI_TYPE_FLOAT */ + .long retdouble-.Lstore_table /* FFI_TYPE_DOUBLE */ + .long retlongdouble-.Lstore_table /* FFI_TYPE_LONGDOUBLE */ + .long retuint8-.Lstore_table /* FFI_TYPE_UINT8 */ + .long retsint8-.Lstore_table /* FFI_TYPE_SINT8 */ + .long retuint16-.Lstore_table /* FFI_TYPE_UINT16 */ + .long retsint16-.Lstore_table /* FFI_TYPE_SINT16 */ + .long retint-.Lstore_table /* FFI_TYPE_UINT32 */ + .long retint-.Lstore_table /* FFI_TYPE_SINT32 */ + .long retint64-.Lstore_table /* FFI_TYPE_UINT64 */ + .long retint64-.Lstore_table /* FFI_TYPE_SINT64 */ + .long retstruct-.Lstore_table /* FFI_TYPE_STRUCT */ + .long retint-.Lstore_table /* FFI_TYPE_POINTER */ + .long retstruct1b-.Lstore_table /* FFI_TYPE_SMALL_STRUCT_1B */ + .long retstruct2b-.Lstore_table /* FFI_TYPE_SMALL_STRUCT_2B */ +1: + pop %esi + add (%esi, %ecx, 4), %esi + jmp *%esi + + /* Sign/zero extend as appropriate. 
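The rewritten _ffi_call_SYSV above makes the same move as the PowerPC closure stub: the compare-and-branch chain is replaced by .Lstore_table, a table of pc-relative offsets indexed by the return-type code, and sub-word integer returns get their own stubs so the low bits left in %eax are widened correctly (movsbl/movzbl for bytes, movswl/movzwl for halfwords). The difference those stubs encode is ordinary signed versus unsigned widening; a small illustration, not part of the commit:

    #include <stdio.h>

    int main(void)
    {
      unsigned char low_byte = 0xF0;                   /* the low 8 bits left in %al */

      int widened_signed   = (signed char) low_byte;   /* movsbl: -16 on these targets */
      int widened_unsigned = low_byte;                 /* movzbl: 240 */

      printf("%d %d\n", widened_signed, widened_unsigned);
      return 0;
    }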
*/ +retsint8: + movsbl %al, %eax + jmp retint + +retsint16: + movswl %ax, %eax + jmp retint + +retuint8: + movzbl %al, %eax + jmp retint + +retuint16: + movzwl %ax, %eax + jmp retint retfloat: - cmpl $FFI_TYPE_FLOAT,%ecx - jne retdouble /* Load %ecx with the pointer to storage for the return value */ - movl 24(%ebp),%ecx + movl 24(%ebp),%ecx fstps (%ecx) jmp epilogue retdouble: - cmpl $FFI_TYPE_DOUBLE,%ecx - jne retlongdouble /* Load %ecx with the pointer to storage for the return value */ - movl 24(%ebp),%ecx + movl 24(%ebp),%ecx fstpl (%ecx) jmp epilogue retlongdouble: - cmpl $FFI_TYPE_LONGDOUBLE,%ecx - jne retint64 /* Load %ecx with the pointer to storage for the return value */ - movl 24(%ebp),%ecx + movl 24(%ebp),%ecx fstpt (%ecx) jmp epilogue - -retint64: - cmpl $FFI_TYPE_SINT64,%ecx - jne retstruct1b + +retint64: /* Load %ecx with the pointer to storage for the return value */ - movl 24(%ebp),%ecx + movl 24(%ebp),%ecx movl %eax,0(%ecx) movl %edx,4(%ecx) jmp epilogue retstruct1b: - cmpl $FFI_TYPE_SINT8,%ecx - jne retstruct2b - movl 24(%ebp),%ecx - movb %al,0(%ecx) - jmp epilogue + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + movb %al,0(%ecx) + jmp epilogue retstruct2b: - cmpl $FFI_TYPE_SINT16,%ecx - jne retstruct - movl 24(%ebp),%ecx - movw %ax,0(%ecx) - jmp epilogue - -retstruct: - cmpl $FFI_TYPE_STRUCT,%ecx - jne noretval - /* Nothing to do! */ - - subl $4,%esp + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + movw %ax,0(%ecx) + jmp epilogue - ASSERT_STACK_ALIGNED +retint: + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + movl %eax,0(%ecx) - addl $8,%esp - movl %ebp, %esp - popl %ebp - ret +retstruct: + /* Nothing to do! */ noretval: epilogue: - ASSERT_STACK_ALIGNED - addl $8, %esp - + popl %esi + movl %ebp,%esp + popl %ebp + ret - movl %ebp,%esp - popl %ebp - ret .LFE1: .ffi_call_SYSV_end: -#if 0 - .size ffi_call_SYSV,.ffi_call_SYSV_end-ffi_call_SYSV -#endif -#if 0 - .section .eh_frame,EH_FRAME_FLAGS, at progbits -.Lframe1: - .long .LECIE1-.LSCIE1 /* Length of Common Information Entry */ -.LSCIE1: - .long 0x0 /* CIE Identifier Tag */ - .byte 0x1 /* CIE Version */ -#ifdef __PIC__ - .ascii "zR\0" /* CIE Augmentation */ -#else - .ascii "\0" /* CIE Augmentation */ -#endif - .byte 0x1 /* .uleb128 0x1; CIE Code Alignment Factor */ - .byte 0x7c /* .sleb128 -4; CIE Data Alignment Factor */ - .byte 0x8 /* CIE RA Column */ -#ifdef __PIC__ - .byte 0x1 /* .uleb128 0x1; Augmentation size */ - .byte 0x1b /* FDE Encoding (pcrel sdata4) */ -#endif - .byte 0xc /* DW_CFA_def_cfa */ - .byte 0x4 /* .uleb128 0x4 */ - .byte 0x4 /* .uleb128 0x4 */ - .byte 0x88 /* DW_CFA_offset, column 0x8 */ - .byte 0x1 /* .uleb128 0x1 */ - .align 4 -.LECIE1: -.LSFDE1: - .long .LEFDE1-.LASFDE1 /* FDE Length */ -.LASFDE1: - .long .LASFDE1-.Lframe1 /* FDE CIE offset */ -#ifdef __PIC__ - .long .LFB1-. 
/* FDE initial location */ -#else - .long .LFB1 /* FDE initial location */ -#endif - .long .LFE1-.LFB1 /* FDE address range */ -#ifdef __PIC__ - .byte 0x0 /* .uleb128 0x0; Augmentation size */ + .align 4 +FFI_HIDDEN (ffi_closure_SYSV) +.globl _ffi_closure_SYSV + +_ffi_closure_SYSV: +.LFB2: + pushl %ebp +.LCFI2: + movl %esp, %ebp +.LCFI3: + subl $40, %esp + leal -24(%ebp), %edx + movl %edx, -12(%ebp) /* resp */ + leal 8(%ebp), %edx + movl %edx, 4(%esp) /* args = __builtin_dwarf_cfa () */ + leal -12(%ebp), %edx + movl %edx, (%esp) /* &resp */ + movl %ebx, 8(%esp) +.LCFI7: + call L_ffi_closure_SYSV_inner$stub + movl 8(%esp), %ebx + movl -12(%ebp), %ecx + cmpl $FFI_TYPE_INT, %eax + je .Lcls_retint + + /* Handle FFI_TYPE_UINT8, FFI_TYPE_SINT8, FFI_TYPE_UINT16, + FFI_TYPE_SINT16, FFI_TYPE_UINT32, FFI_TYPE_SINT32. */ + cmpl $FFI_TYPE_UINT64, %eax + jge 0f + cmpl $FFI_TYPE_UINT8, %eax + jge .Lcls_retint + +0: cmpl $FFI_TYPE_FLOAT, %eax + je .Lcls_retfloat + cmpl $FFI_TYPE_DOUBLE, %eax + je .Lcls_retdouble + cmpl $FFI_TYPE_LONGDOUBLE, %eax + je .Lcls_retldouble + cmpl $FFI_TYPE_SINT64, %eax + je .Lcls_retllong + cmpl $FFI_TYPE_SMALL_STRUCT_1B, %eax + je .Lcls_retstruct1b + cmpl $FFI_TYPE_SMALL_STRUCT_2B, %eax + je .Lcls_retstruct2b + cmpl $FFI_TYPE_STRUCT, %eax + je .Lcls_retstruct +.Lcls_epilogue: + movl %ebp, %esp + popl %ebp + ret +.Lcls_retint: + movl (%ecx), %eax + jmp .Lcls_epilogue +.Lcls_retfloat: + flds (%ecx) + jmp .Lcls_epilogue +.Lcls_retdouble: + fldl (%ecx) + jmp .Lcls_epilogue +.Lcls_retldouble: + fldt (%ecx) + jmp .Lcls_epilogue +.Lcls_retllong: + movl (%ecx), %eax + movl 4(%ecx), %edx + jmp .Lcls_epilogue +.Lcls_retstruct1b: + movsbl (%ecx), %eax + jmp .Lcls_epilogue +.Lcls_retstruct2b: + movswl (%ecx), %eax + jmp .Lcls_epilogue +.Lcls_retstruct: + lea -8(%ebp),%esp + movl %ebp, %esp + popl %ebp + ret $4 +.LFE2: + +#if !FFI_NO_RAW_API + +#define RAW_CLOSURE_CIF_OFFSET ((FFI_TRAMPOLINE_SIZE + 3) & ~3) +#define RAW_CLOSURE_FUN_OFFSET (RAW_CLOSURE_CIF_OFFSET + 4) +#define RAW_CLOSURE_USER_DATA_OFFSET (RAW_CLOSURE_FUN_OFFSET + 4) +#define CIF_FLAGS_OFFSET 20 + + .align 4 +FFI_HIDDEN (ffi_closure_raw_SYSV) +.globl _ffi_closure_raw_SYSV + +_ffi_closure_raw_SYSV: +.LFB3: + pushl %ebp +.LCFI4: + movl %esp, %ebp +.LCFI5: + pushl %esi +.LCFI6: + subl $36, %esp + movl RAW_CLOSURE_CIF_OFFSET(%eax), %esi /* closure->cif */ + movl RAW_CLOSURE_USER_DATA_OFFSET(%eax), %edx /* closure->user_data */ + movl %edx, 12(%esp) /* user_data */ + leal 8(%ebp), %edx /* __builtin_dwarf_cfa () */ + movl %edx, 8(%esp) /* raw_args */ + leal -24(%ebp), %edx + movl %edx, 4(%esp) /* &res */ + movl %esi, (%esp) /* cif */ + call *RAW_CLOSURE_FUN_OFFSET(%eax) /* closure->fun */ + movl CIF_FLAGS_OFFSET(%esi), %eax /* rtype */ + cmpl $FFI_TYPE_INT, %eax + je .Lrcls_retint + + /* Handle FFI_TYPE_UINT8, FFI_TYPE_SINT8, FFI_TYPE_UINT16, + FFI_TYPE_SINT16, FFI_TYPE_UINT32, FFI_TYPE_SINT32. 
*/ + cmpl $FFI_TYPE_UINT64, %eax + jge 0f + cmpl $FFI_TYPE_UINT8, %eax + jge .Lrcls_retint +0: + cmpl $FFI_TYPE_FLOAT, %eax + je .Lrcls_retfloat + cmpl $FFI_TYPE_DOUBLE, %eax + je .Lrcls_retdouble + cmpl $FFI_TYPE_LONGDOUBLE, %eax + je .Lrcls_retldouble + cmpl $FFI_TYPE_SINT64, %eax + je .Lrcls_retllong +.Lrcls_epilogue: + addl $36, %esp + popl %esi + popl %ebp + ret +.Lrcls_retint: + movl -24(%ebp), %eax + jmp .Lrcls_epilogue +.Lrcls_retfloat: + flds -24(%ebp) + jmp .Lrcls_epilogue +.Lrcls_retdouble: + fldl -24(%ebp) + jmp .Lrcls_epilogue +.Lrcls_retldouble: + fldt -24(%ebp) + jmp .Lrcls_epilogue +.Lrcls_retllong: + movl -24(%ebp), %eax + movl -20(%ebp), %edx + jmp .Lrcls_epilogue +.LFE3: #endif - .byte 0x4 /* DW_CFA_advance_loc4 */ - .long .LCFI0-.LFB1 - .byte 0xe /* DW_CFA_def_cfa_offset */ - .byte 0x8 /* .uleb128 0x8 */ - .byte 0x85 /* DW_CFA_offset, column 0x5 */ - .byte 0x2 /* .uleb128 0x2 */ - .byte 0x4 /* DW_CFA_advance_loc4 */ - .long .LCFI1-.LCFI0 - .byte 0xd /* DW_CFA_def_cfa_register */ - .byte 0x5 /* .uleb128 0x5 */ - .align 4 -.LEFDE1: + +.section __IMPORT,__jump_table,symbol_stubs,self_modifying_code+pure_instructions,5 +L_ffi_closure_SYSV_inner$stub: + .indirect_symbol _ffi_closure_SYSV_inner + hlt ; hlt ; hlt ; hlt ; hlt + + +.section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support +EH_frame1: + .set L$set$0,LECIE1-LSCIE1 + .long L$set$0 +LSCIE1: + .long 0x0 + .byte 0x1 + .ascii "zR\0" + .byte 0x1 + .byte 0x7c + .byte 0x8 + .byte 0x1 + .byte 0x10 + .byte 0xc + .byte 0x5 + .byte 0x4 + .byte 0x88 + .byte 0x1 + .align 2 +LECIE1: +.globl _ffi_call_SYSV.eh +_ffi_call_SYSV.eh: +LSFDE1: + .set L$set$1,LEFDE1-LASFDE1 + .long L$set$1 +LASFDE1: + .long LASFDE1-EH_frame1 + .long .LFB1-. + .set L$set$2,.LFE1-.LFB1 + .long L$set$2 + .byte 0x0 + .byte 0x4 + .set L$set$3,.LCFI0-.LFB1 + .long L$set$3 + .byte 0xe + .byte 0x8 + .byte 0x84 + .byte 0x2 + .byte 0x4 + .set L$set$4,.LCFI1-.LCFI0 + .long L$set$4 + .byte 0xd + .byte 0x4 + .align 2 +LEFDE1: +.globl _ffi_closure_SYSV.eh +_ffi_closure_SYSV.eh: +LSFDE2: + .set L$set$5,LEFDE2-LASFDE2 + .long L$set$5 +LASFDE2: + .long LASFDE2-EH_frame1 + .long .LFB2-. + .set L$set$6,.LFE2-.LFB2 + .long L$set$6 + .byte 0x0 + .byte 0x4 + .set L$set$7,.LCFI2-.LFB2 + .long L$set$7 + .byte 0xe + .byte 0x8 + .byte 0x84 + .byte 0x2 + .byte 0x4 + .set L$set$8,.LCFI3-.LCFI2 + .long L$set$8 + .byte 0xd + .byte 0x4 + .align 2 +LEFDE2: + +#if !FFI_NO_RAW_API + +.globl _ffi_closure_raw_SYSV.eh +_ffi_closure_raw_SYSV.eh: +LSFDE3: + .set L$set$10,LEFDE3-LASFDE3 + .long L$set$10 +LASFDE3: + .long LASFDE3-EH_frame1 + .long .LFB3-. + .set L$set$11,.LFE3-.LFB3 + .long L$set$11 + .byte 0x0 + .byte 0x4 + .set L$set$12,.LCFI4-.LFB3 + .long L$set$12 + .byte 0xe + .byte 0x8 + .byte 0x84 + .byte 0x2 + .byte 0x4 + .set L$set$13,.LCFI5-.LCFI4 + .long L$set$13 + .byte 0xd + .byte 0x4 + .byte 0x4 + .set L$set$14,.LCFI6-.LCFI5 + .long L$set$14 + .byte 0x85 + .byte 0x3 + .align 2 +LEFDE3: + #endif #endif /* ifndef __x86_64__ */ - -#endif /* defined __i386__ */ Modified: python/trunk/Modules/_ctypes/libffi/src/x86/ffi.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/ffi.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/ffi.c Tue Mar 4 21:09:11 2008 @@ -1,10 +1,11 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1996, 1998, 1999, 2001 Red Hat, Inc. + ffi.c - Copyright (c) 1996, 1998, 1999, 2001, 2007, 2008 Red Hat, Inc. 
Copyright (c) 2002 Ranjit Mathew Copyright (c) 2002 Bo Thorsen Copyright (c) 2002 Roger Sayle - - x86 Foreign Function Interface + Copyright (C) 2008 Free Software Foundation, Inc. + + x86 Foreign Function Interface Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -17,13 +18,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #ifndef __x86_64__ @@ -36,9 +38,7 @@ /* ffi_prep_args is called by the assembly routine once stack space has been allocated for the function's arguments */ -/*@-exportheader@*/ void ffi_prep_args(char *stack, extended_cif *ecif) -/*@=exportheader@*/ { register unsigned int i; register void **p_argv; @@ -121,9 +121,16 @@ switch (cif->rtype->type) { case FFI_TYPE_VOID: -#if !defined(X86_WIN32) && !defined(__OpenBSD__) && !defined(__FreeBSD__) +#ifdef X86 case FFI_TYPE_STRUCT: #endif +#if defined(X86) || defined(X86_DARWIN) + case FFI_TYPE_UINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT8: + case FFI_TYPE_SINT16: +#endif + case FFI_TYPE_SINT64: case FFI_TYPE_FLOAT: case FFI_TYPE_DOUBLE: @@ -135,15 +142,15 @@ cif->flags = FFI_TYPE_SINT64; break; -#if defined(X86_WIN32) || defined(__OpenBSD__) || defined(__FreeBSD__) +#ifndef X86 case FFI_TYPE_STRUCT: if (cif->rtype->size == 1) { - cif->flags = FFI_TYPE_SINT8; /* same as char size */ + cif->flags = FFI_TYPE_SMALL_STRUCT_1B; /* same as char size */ } else if (cif->rtype->size == 2) { - cif->flags = FFI_TYPE_SINT16; /* same as short size */ + cif->flags = FFI_TYPE_SMALL_STRUCT_2B; /* same as short size */ } else if (cif->rtype->size == 4) { @@ -165,35 +172,23 @@ break; } +#ifdef X86_DARWIN + cif->bytes = (cif->bytes + 15) & ~0xF; +#endif + return FFI_OK; } -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ +extern void ffi_call_SYSV(void (*)(char *, extended_cif *), extended_cif *, + unsigned, unsigned, unsigned *, void (*fn)(void)); #ifdef X86_WIN32 -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_STDCALL(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ +extern void ffi_call_STDCALL(void (*)(char *, extended_cif *), extended_cif *, + unsigned, 
unsigned, unsigned *, void (*fn)(void)); + #endif /* X86_WIN32 */ -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) +void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; @@ -206,9 +201,7 @@ if ((rvalue == NULL) && (cif->flags == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -217,17 +210,13 @@ switch (cif->abi) { case FFI_SYSV: - /*@-usedef@*/ - ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ + ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, cif->flags, ecif.rvalue, + fn); break; #ifdef X86_WIN32 case FFI_STDCALL: - /*@-usedef@*/ - ffi_call_STDCALL(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ + ffi_call_STDCALL(ffi_prep_args, &ecif, cif->bytes, cif->flags, + ecif.rvalue, fn); break; #endif /* X86_WIN32 */ default: @@ -247,6 +236,10 @@ __attribute__ ((regparm(1))); void FFI_HIDDEN ffi_closure_raw_SYSV (ffi_raw_closure *) __attribute__ ((regparm(1))); +#ifdef X86_WIN32 +void FFI_HIDDEN ffi_closure_STDCALL (ffi_closure *) + __attribute__ ((regparm(1))); +#endif /* This function is jumped to by the trampoline */ @@ -256,7 +249,7 @@ void **respp; void *args; { - /* our various things... */ + /* our various things... */ ffi_cif *cif; void **arg_area; @@ -276,11 +269,9 @@ return cif->flags; } -/*@-exportheader@*/ -static void -ffi_prep_incoming_args_SYSV(char *stack, void **rvalue, - void **avalue, ffi_cif *cif) -/*@=exportheader@*/ +static void +ffi_prep_incoming_args_SYSV(char *stack, void **rvalue, void **avalue, + ffi_cif *cif) { register unsigned int i; register void **p_argv; @@ -324,27 +315,54 @@ ({ unsigned char *__tramp = (unsigned char*)(TRAMP); \ unsigned int __fun = (unsigned int)(FUN); \ unsigned int __ctx = (unsigned int)(CTX); \ - unsigned int __dis = __fun - ((unsigned int) __tramp + FFI_TRAMPOLINE_SIZE); \ + unsigned int __dis = __fun - (__ctx + 10); \ *(unsigned char*) &__tramp[0] = 0xb8; \ *(unsigned int*) &__tramp[1] = __ctx; /* movl __ctx, %eax */ \ *(unsigned char *) &__tramp[5] = 0xe9; \ *(unsigned int*) &__tramp[6] = __dis; /* jmp __fun */ \ }) +#define FFI_INIT_TRAMPOLINE_STDCALL(TRAMP,FUN,CTX,SIZE) \ +({ unsigned char *__tramp = (unsigned char*)(TRAMP); \ + unsigned int __fun = (unsigned int)(FUN); \ + unsigned int __ctx = (unsigned int)(CTX); \ + unsigned int __dis = __fun - (__ctx + 10); \ + unsigned short __size = (unsigned short)(SIZE); \ + *(unsigned char*) &__tramp[0] = 0xb8; \ + *(unsigned int*) &__tramp[1] = __ctx; /* movl __ctx, %eax */ \ + *(unsigned char *) &__tramp[5] = 0xe8; \ + *(unsigned int*) &__tramp[6] = __dis; /* call __fun */ \ + *(unsigned char *) &__tramp[10] = 0xc2; \ + *(unsigned short*) &__tramp[11] = __size; /* ret __size */ \ + }) /* the cif must already be prep'ed */ ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,void**,void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) { - FFI_ASSERT (cif->abi == FFI_SYSV); - - FFI_INIT_TRAMPOLINE (&closure->tramp[0], \ - &ffi_closure_SYSV, \ - (void*)closure); + if (cif->abi == FFI_SYSV) + { + FFI_INIT_TRAMPOLINE (&closure->tramp[0], + &ffi_closure_SYSV, + (void*)closure); + } +#ifdef X86_WIN32 + else if (cif->abi == FFI_STDCALL) + { + FFI_INIT_TRAMPOLINE_STDCALL 
(&closure->tramp[0], + &ffi_closure_STDCALL, + (void*)closure, cif->bytes); + } +#endif + else + { + return FFI_BAD_ABI; + } closure->cif = cif; closure->user_data = user_data; @@ -358,14 +376,17 @@ #if !FFI_NO_RAW_API ffi_status -ffi_prep_raw_closure (ffi_raw_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,ffi_raw*,void*), - void *user_data) +ffi_prep_raw_closure_loc (ffi_raw_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,ffi_raw*,void*), + void *user_data, + void *codeloc) { int i; - FFI_ASSERT (cif->abi == FFI_SYSV); + if (cif->abi != FFI_SYSV) { + return FFI_BAD_ABI; + } // we currently don't support certain kinds of arguments for raw // closures. This should be implemented by a separate assembly language @@ -380,7 +401,7 @@ FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_raw_SYSV, - (void*)closure); + codeloc); closure->cif = cif; closure->user_data = user_data; @@ -400,27 +421,18 @@ * libffi-1.20, this is not the case.) */ -extern void -ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); +extern void +ffi_call_SYSV(void (*)(char *, extended_cif *), extended_cif *, unsigned, + unsigned, unsigned *, void (*fn)(void)); #ifdef X86_WIN32 extern void -ffi_call_STDCALL(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); +ffi_call_STDCALL(void (*)(char *, extended_cif *), extended_cif *, unsigned, + unsigned, unsigned *, void (*fn)(void)); #endif /* X86_WIN32 */ void -ffi_raw_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ ffi_raw *fake_avalue) +ffi_raw_call(ffi_cif *cif, void (*fn)(void), void *rvalue, ffi_raw *fake_avalue) { extended_cif ecif; void **avalue = (void **)fake_avalue; @@ -434,9 +446,7 @@ if ((rvalue == NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { - /*@-sysunrecog@*/ ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ } else ecif.rvalue = rvalue; @@ -445,17 +455,13 @@ switch (cif->abi) { case FFI_SYSV: - /*@-usedef@*/ - ffi_call_SYSV(ffi_prep_args_raw, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ + ffi_call_SYSV(ffi_prep_args_raw, &ecif, cif->bytes, cif->flags, + ecif.rvalue, fn); break; #ifdef X86_WIN32 case FFI_STDCALL: - /*@-usedef@*/ - ffi_call_STDCALL(ffi_prep_args_raw, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ + ffi_call_STDCALL(ffi_prep_args_raw, &ecif, cif->bytes, cif->flags, + ecif.rvalue, fn); break; #endif /* X86_WIN32 */ default: Modified: python/trunk/Modules/_ctypes/libffi/src/x86/ffi64.c ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/ffi64.c (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/ffi64.c Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 2002 Bo Thorsen + ffi.c - Copyright (c) 2002, 2007 Bo Thorsen + Copyright (c) 2008 Red Hat, Inc. x86-64 Foreign Function Interface @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #include @@ -433,10 +435,11 @@ extern void ffi_closure_unix64(void); ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*, void*, void**, void*), - void *user_data) +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) { volatile unsigned short *tramp; @@ -445,7 +448,7 @@ tramp[0] = 0xbb49; /* mov , %r11 */ *(void * volatile *) &tramp[1] = ffi_closure_unix64; tramp[5] = 0xba49; /* mov , %r10 */ - *(void * volatile *) &tramp[6] = closure; + *(void * volatile *) &tramp[6] = codeloc; /* Set the carry bit iff the function uses any sse registers. This is clc or stc, together with the first byte of the jmp. */ Deleted: /python/trunk/Modules/_ctypes/libffi/src/x86/ffi_darwin.c ============================================================================== --- /python/trunk/Modules/_ctypes/libffi/src/x86/ffi_darwin.c Tue Mar 4 21:09:11 2008 +++ (empty file) @@ -1,596 +0,0 @@ -# ifdef __i386__ -/* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 1996, 1998, 1999, 2001 Red Hat, Inc. - Copyright (c) 2002 Ranjit Mathew - Copyright (c) 2002 Bo Thorsen - Copyright (c) 2002 Roger Sayle - - x86 Foreign Function Interface - - Permission is hereby granted, free of charge, to any person obtaining - a copy of this software and associated documentation files (the - ``Software''), to deal in the Software without restriction, including - without limitation the rights to use, copy, modify, merge, publish, - distribute, sublicense, and/or sell copies of the Software, and to - permit persons to whom the Software is furnished to do so, subject to - the following conditions: - - The above copyright notice and this permission notice shall be included - in all copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. 
- ----------------------------------------------------------------------- */ - -#ifndef __x86_64__ - -#include -#include - -#include - -/* ffi_prep_args is called by the assembly routine once stack space - has been allocated for the function's arguments */ - -/*@-exportheader@*/ -void ffi_prep_args(char *stack, extended_cif *ecif); - -static inline int retval_on_stack(ffi_type* tp) -{ - if (tp->type == FFI_TYPE_STRUCT) { - int sz = tp->size; - if (sz > 8) { - return 1; - } - switch (sz) { - case 1: case 2: case 4: case 8: return 0; - default: return 1; - } - } - return 0; -} - - -void ffi_prep_args(char *stack, extended_cif *ecif) -/*@=exportheader@*/ -{ - register unsigned int i; - register void **p_argv; - register char *argp; - register ffi_type **p_arg; - - argp = stack; - - if (retval_on_stack(ecif->cif->rtype)) { - *(void **) argp = ecif->rvalue; - argp += 4; - } - - - p_argv = ecif->avalue; - - for (i = ecif->cif->nargs, p_arg = ecif->cif->arg_types; - i != 0; - i--, p_arg++) - { - size_t z; - - /* Align if necessary */ - if ((sizeof(int) - 1) & (unsigned) argp) - argp = (char *) ALIGN(argp, sizeof(int)); - - z = (*p_arg)->size; - if (z < sizeof(int)) - { - z = sizeof(int); - switch ((*p_arg)->type) - { - case FFI_TYPE_SINT8: - *(signed int *) argp = (signed int)*(SINT8 *)(* p_argv); - break; - - case FFI_TYPE_UINT8: - *(unsigned int *) argp = (unsigned int)*(UINT8 *)(* p_argv); - break; - - case FFI_TYPE_SINT16: - *(signed int *) argp = (signed int)*(SINT16 *)(* p_argv); - break; - - case FFI_TYPE_UINT16: - *(unsigned int *) argp = (unsigned int)*(UINT16 *)(* p_argv); - break; - - case FFI_TYPE_SINT32: - *(signed int *) argp = (signed int)*(SINT32 *)(* p_argv); - break; - - case FFI_TYPE_UINT32: - *(unsigned int *) argp = (unsigned int)*(UINT32 *)(* p_argv); - break; - - case FFI_TYPE_STRUCT: - *(unsigned int *) argp = (unsigned int)*(UINT32 *)(* p_argv); - break; - - default: - FFI_ASSERT(0); - } - } - else - { - memcpy(argp, *p_argv, z); - } - p_argv++; - argp += z; - } - - return; -} - -/* Perform machine dependent cif processing */ -ffi_status ffi_prep_cif_machdep(ffi_cif *cif) -{ - /* Set the return type flag */ - switch (cif->rtype->type) - { - case FFI_TYPE_VOID: -#if !defined(X86_WIN32) && !defined(X86_DARWIN) - case FFI_TYPE_STRUCT: -#endif - case FFI_TYPE_SINT64: - case FFI_TYPE_FLOAT: - case FFI_TYPE_DOUBLE: -#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE - case FFI_TYPE_LONGDOUBLE: -#endif - cif->flags = (unsigned) cif->rtype->type; - break; - - case FFI_TYPE_UINT64: - cif->flags = FFI_TYPE_SINT64; - break; - -#if defined(X86_WIN32) || defined(X86_DARWIN) - - case FFI_TYPE_STRUCT: - if (cif->rtype->size == 1) - { - cif->flags = FFI_TYPE_SINT8; /* same as char size */ - } - else if (cif->rtype->size == 2) - { - cif->flags = FFI_TYPE_SINT16; /* same as short size */ - } - else if (cif->rtype->size == 4) - { - cif->flags = FFI_TYPE_INT; /* same as int type */ - } - else if (cif->rtype->size == 8) - { - cif->flags = FFI_TYPE_SINT64; /* same as int64 type */ - } - else - { - cif->flags = FFI_TYPE_STRUCT; - } - break; -#endif - - default: - cif->flags = FFI_TYPE_INT; - break; - } - - /* Darwin: The stack needs to be aligned to a multiple of 16 bytes */ -#if 1 - cif->bytes = (cif->bytes + 15) & ~0xF; -#endif - - - return FFI_OK; -} - -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ - -#ifdef 
X86_WIN32 -/*@-declundef@*/ -/*@-exportheader@*/ -extern void ffi_call_STDCALL(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)(void)); -/*@=declundef@*/ -/*@=exportheader@*/ -#endif /* X86_WIN32 */ - -void ffi_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(void), - /*@out@*/ void *rvalue, - /*@dependent@*/ void **avalue) -{ - extended_cif ecif; - - ecif.cif = cif; - ecif.avalue = avalue; - - /* If the return value is a struct and we don't have a return */ - /* value address then we need to make one */ - - if ((rvalue == NULL) && retval_on_stack(cif->rtype)) - { - /*@-sysunrecog@*/ - ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ - } - else - ecif.rvalue = rvalue; - - switch (cif->abi) - { - case FFI_SYSV: - /*@-usedef@*/ - /* To avoid changing the assembly code make sure the size of the argument - * block is a multiple of 16. Then add 8 to compensate for local variables - * in ffi_call_SYSV. - */ - ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#ifdef X86_WIN32 - case FFI_STDCALL: - /*@-usedef@*/ - ffi_call_STDCALL(ffi_prep_args, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#endif /* X86_WIN32 */ - default: - FFI_ASSERT(0); - break; - } -} - - -/** private members **/ - -static void ffi_closure_SYSV (ffi_closure *) - __attribute__ ((regparm(1))); -#if !FFI_NO_RAW_API -static void ffi_closure_raw_SYSV (ffi_raw_closure *) - __attribute__ ((regparm(1))); -#endif - -/*@-exportheader@*/ -static inline void -ffi_prep_incoming_args_SYSV(char *stack, void **rvalue, - void **avalue, ffi_cif *cif) -/*@=exportheader@*/ -{ - register unsigned int i; - register void **p_argv; - register char *argp; - register ffi_type **p_arg; - - argp = stack; - - if (retval_on_stack(cif->rtype)) { - *rvalue = *(void **) argp; - argp += 4; - } - - p_argv = avalue; - - for (i = cif->nargs, p_arg = cif->arg_types; (i != 0); i--, p_arg++) - { - size_t z; - - /* Align if necessary */ - if ((sizeof(int) - 1) & (unsigned) argp) { - argp = (char *) ALIGN(argp, sizeof(int)); - } - - z = (*p_arg)->size; - - /* because we're little endian, this is what it turns into. */ - - *p_argv = (void*) argp; - - p_argv++; - argp += z; - } - - return; -} - -/* This function is jumped to by the trampoline */ - -static void -ffi_closure_SYSV (closure) - ffi_closure *closure; -{ - // this is our return value storage - long double res; - - // our various things... - ffi_cif *cif; - void **arg_area; - void *resp = (void*)&res; - void *args = __builtin_dwarf_cfa (); - - - cif = closure->cif; - arg_area = (void**) alloca (cif->nargs * sizeof (void*)); - - /* this call will initialize ARG_AREA, such that each - * element in that array points to the corresponding - * value on the stack; and if the function returns - * a structure, it will re-set RESP to point to the - * structure return address. 
*/ - - ffi_prep_incoming_args_SYSV(args, (void**)&resp, arg_area, cif); - - (closure->fun) (cif, resp, arg_area, closure->user_data); - - /* now, do a generic return based on the value of rtype */ - if (cif->flags == FFI_TYPE_INT) - { - asm ("movl (%0),%%eax" : : "r" (resp) : "eax"); - } - else if (cif->flags == FFI_TYPE_FLOAT) - { - asm ("flds (%0)" : : "r" (resp) : "st" ); - } - else if (cif->flags == FFI_TYPE_DOUBLE) - { - asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (cif->flags == FFI_TYPE_LONGDOUBLE) - { - asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (cif->flags == FFI_TYPE_SINT64) - { - asm ("movl 0(%0),%%eax;" - "movl 4(%0),%%edx" - : : "r"(resp) - : "eax", "edx"); - } -#if defined(X86_WIN32) || defined(X86_DARWIN) - else if (cif->flags == FFI_TYPE_SINT8) /* 1-byte struct */ - { - asm ("movsbl (%0),%%eax" : : "r" (resp) : "eax"); - } - else if (cif->flags == FFI_TYPE_SINT16) /* 2-bytes struct */ - { - asm ("movswl (%0),%%eax" : : "r" (resp) : "eax"); - } -#endif - - else if (cif->flags == FFI_TYPE_STRUCT) - { - asm ("lea -8(%ebp),%esp;" - "pop %esi;" - "pop %edi;" - "pop %ebp;" - "ret $4"); - } -} - - -/* How to make a trampoline. Derived from gcc/config/i386/i386.c. */ - -#define FFI_INIT_TRAMPOLINE(TRAMP,FUN,CTX) \ -({ unsigned char *__tramp = (unsigned char*)(TRAMP); \ - unsigned int __fun = (unsigned int)(FUN); \ - unsigned int __ctx = (unsigned int)(CTX); \ - unsigned int __dis = __fun - ((unsigned int) __tramp + FFI_TRAMPOLINE_SIZE); \ - *(unsigned char*) &__tramp[0] = 0xb8; \ - *(unsigned int*) &__tramp[1] = __ctx; /* movl __ctx, %eax */ \ - *(unsigned char *) &__tramp[5] = 0xe9; \ - *(unsigned int*) &__tramp[6] = __dis; /* jmp __fun */ \ - }) - - -/* the cif must already be prep'ed */ - -ffi_status -ffi_prep_closure (ffi_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,void**,void*), - void *user_data) -{ - FFI_ASSERT (cif->abi == FFI_SYSV); - - FFI_INIT_TRAMPOLINE (&closure->tramp[0], \ - &ffi_closure_SYSV, \ - (void*)closure); - - closure->cif = cif; - closure->user_data = user_data; - closure->fun = fun; - - return FFI_OK; -} - -/* ------- Native raw API support -------------------------------- */ - -#if !FFI_NO_RAW_API - -static void -ffi_closure_raw_SYSV (closure) - ffi_raw_closure *closure; -{ - // this is our return value storage - long double res; - - // our various things... - ffi_raw *raw_args; - ffi_cif *cif; - unsigned short rtype; - void *resp = (void*)&res; - - /* get the cif */ - cif = closure->cif; - - /* the SYSV/X86 abi matches the RAW API exactly, well.. 
almost */ - raw_args = (ffi_raw*) __builtin_dwarf_cfa (); - - (closure->fun) (cif, resp, raw_args, closure->user_data); - - rtype = cif->flags; - - /* now, do a generic return based on the value of rtype */ - if (rtype == FFI_TYPE_INT) - { - asm ("movl (%0),%%eax" : : "r" (resp) : "eax"); - } - else if (rtype == FFI_TYPE_FLOAT) - { - asm ("flds (%0)" : : "r" (resp) : "st" ); - } - else if (rtype == FFI_TYPE_DOUBLE) - { - asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (rtype == FFI_TYPE_LONGDOUBLE) - { - asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" ); - } - else if (rtype == FFI_TYPE_SINT64) - { - asm ("movl 0(%0),%%eax; movl 4(%0),%%edx" - : : "r"(resp) - : "eax", "edx"); - } -} - - - - -ffi_status -ffi_prep_raw_closure (ffi_raw_closure* closure, - ffi_cif* cif, - void (*fun)(ffi_cif*,void*,ffi_raw*,void*), - void *user_data) -{ - int i; - - FFI_ASSERT (cif->abi == FFI_SYSV); - - // we currently don't support certain kinds of arguments for raw - // closures. This should be implemented by a separate assembly language - // routine, since it would require argument processing, something we - // don't do now for performance. - - for (i = cif->nargs-1; i >= 0; i--) - { - FFI_ASSERT (cif->arg_types[i]->type != FFI_TYPE_STRUCT); - FFI_ASSERT (cif->arg_types[i]->type != FFI_TYPE_LONGDOUBLE); - } - - - FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_raw_SYSV, - (void*)closure); - - closure->cif = cif; - closure->user_data = user_data; - closure->fun = fun; - - return FFI_OK; -} - -static void -ffi_prep_args_raw(char *stack, extended_cif *ecif) -{ - memcpy (stack, ecif->avalue, ecif->cif->bytes); -} - -/* we borrow this routine from libffi (it must be changed, though, to - * actually call the function passed in the first argument. as of - * libffi-1.20, this is not the case.) 
- */ - -extern void -ffi_call_SYSV(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); - -#ifdef X86_WIN32 -extern void -ffi_call_STDCALL(void (*)(char *, extended_cif *), - /*@out@*/ extended_cif *, - unsigned, unsigned, - /*@out@*/ unsigned *, - void (*fn)()); -#endif /* X86_WIN32 */ - -void -ffi_raw_call(/*@dependent@*/ ffi_cif *cif, - void (*fn)(), - /*@out@*/ void *rvalue, - /*@dependent@*/ ffi_raw *fake_avalue) -{ - extended_cif ecif; - void **avalue = (void **)fake_avalue; - - ecif.cif = cif; - ecif.avalue = avalue; - - /* If the return value is a struct and we don't have a return */ - /* value address then we need to make one */ - - if ((rvalue == NULL) && retval_on_stack(cif->rtype)) - { - /*@-sysunrecog@*/ - ecif.rvalue = alloca(cif->rtype->size); - /*@=sysunrecog@*/ - } - else - ecif.rvalue = rvalue; - - - switch (cif->abi) - { - case FFI_SYSV: - /*@-usedef@*/ - ffi_call_SYSV(ffi_prep_args_raw, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#ifdef X86_WIN32 - case FFI_STDCALL: - /*@-usedef@*/ - ffi_call_STDCALL(ffi_prep_args_raw, &ecif, cif->bytes, - cif->flags, ecif.rvalue, fn); - /*@=usedef@*/ - break; -#endif /* X86_WIN32 */ - default: - FFI_ASSERT(0); - break; - } -} - -#endif - -#endif /* __x86_64__ */ - -#endif /* __i386__ */ Modified: python/trunk/Modules/_ctypes/libffi/src/x86/ffitarget.h ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/ffitarget.h (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/ffitarget.h Tue Mar 4 21:09:11 2008 @@ -1,5 +1,7 @@ /* -----------------------------------------------------------------*-C-*- ffitarget.h - Copyright (c) 1996-2003 Red Hat, Inc. + Copyright (C) 2008 Free Software Foundation, Inc. + Target configuration macros for x86 and x86-64. Permission is hereby granted, free of charge, to any person obtaining @@ -13,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
----------------------------------------------------------------------- */ @@ -51,7 +54,7 @@ #endif /* ---- Intel x86 and AMD x86-64 - */ -#if !defined(X86_WIN32) && (defined(__i386__) || defined(__x86_64__)) +#if !defined(X86_WIN32) && (defined(__i386__) || defined(__x86_64__)) FFI_SYSV, FFI_UNIX64, /* Unix variants all use the same ABI for x86-64 */ #ifdef __i386__ @@ -68,12 +71,18 @@ /* ---- Definitions for closures ----------------------------------------- */ #define FFI_CLOSURES 1 +#define FFI_TYPE_SMALL_STRUCT_1B (FFI_TYPE_LAST + 1) +#define FFI_TYPE_SMALL_STRUCT_2B (FFI_TYPE_LAST + 2) -#ifdef X86_64 +#if defined (X86_64) || (defined (__x86_64__) && defined (X86_DARWIN)) #define FFI_TRAMPOLINE_SIZE 24 #define FFI_NATIVE_RAW_API 0 #else +#ifdef X86_WIN32 +#define FFI_TRAMPOLINE_SIZE 13 +#else #define FFI_TRAMPOLINE_SIZE 10 +#endif #define FFI_NATIVE_RAW_API 1 /* x86 has native raw api support */ #endif Modified: python/trunk/Modules/_ctypes/libffi/src/x86/sysv.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/sysv.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/sysv.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - sysv.S - Copyright (c) 1996, 1998, 2001, 2002, 2003, 2005 Red Hat, Inc. + sysv.S - Copyright (c) 1996, 1998, 2001-2003, 2005, 2008 Red Hat, Inc. X86 Foreign Function Interface @@ -14,13 +14,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #ifndef __x86_64__ @@ -59,16 +60,15 @@ call *28(%ebp) - /* Remove the space we pushed for the args */ - movl 16(%ebp),%ecx - addl %ecx,%esp - /* Load %ecx with the return type code */ movl 20(%ebp),%ecx + /* Protect %esi. We're going to pop it in the epilogue. */ + pushl %esi + /* If the return value pointer is NULL, assume no return value. */ cmpl $0,24(%ebp) - jne retint + jne 0f /* Even if there is no space for the return value, we are obliged to handle floating-point values. 
*/ @@ -78,51 +78,84 @@ jmp epilogue -retint: - cmpl $FFI_TYPE_INT,%ecx - jne retfloat - /* Load %ecx with the pointer to storage for the return value */ - movl 24(%ebp),%ecx - movl %eax,0(%ecx) - jmp epilogue +0: + call 1f + +.Lstore_table: + .long noretval-.Lstore_table /* FFI_TYPE_VOID */ + .long retint-.Lstore_table /* FFI_TYPE_INT */ + .long retfloat-.Lstore_table /* FFI_TYPE_FLOAT */ + .long retdouble-.Lstore_table /* FFI_TYPE_DOUBLE */ + .long retlongdouble-.Lstore_table /* FFI_TYPE_LONGDOUBLE */ + .long retuint8-.Lstore_table /* FFI_TYPE_UINT8 */ + .long retsint8-.Lstore_table /* FFI_TYPE_SINT8 */ + .long retuint16-.Lstore_table /* FFI_TYPE_UINT16 */ + .long retsint16-.Lstore_table /* FFI_TYPE_SINT16 */ + .long retint-.Lstore_table /* FFI_TYPE_UINT32 */ + .long retint-.Lstore_table /* FFI_TYPE_SINT32 */ + .long retint64-.Lstore_table /* FFI_TYPE_UINT64 */ + .long retint64-.Lstore_table /* FFI_TYPE_SINT64 */ + .long retstruct-.Lstore_table /* FFI_TYPE_STRUCT */ + .long retint-.Lstore_table /* FFI_TYPE_POINTER */ + +1: + pop %esi + add (%esi, %ecx, 4), %esi + jmp *%esi + + /* Sign/zero extend as appropriate. */ +retsint8: + movsbl %al, %eax + jmp retint + +retsint16: + movswl %ax, %eax + jmp retint + +retuint8: + movzbl %al, %eax + jmp retint + +retuint16: + movzwl %ax, %eax + jmp retint retfloat: - cmpl $FFI_TYPE_FLOAT,%ecx - jne retdouble /* Load %ecx with the pointer to storage for the return value */ movl 24(%ebp),%ecx fstps (%ecx) jmp epilogue retdouble: - cmpl $FFI_TYPE_DOUBLE,%ecx - jne retlongdouble /* Load %ecx with the pointer to storage for the return value */ movl 24(%ebp),%ecx fstpl (%ecx) jmp epilogue retlongdouble: - cmpl $FFI_TYPE_LONGDOUBLE,%ecx - jne retint64 /* Load %ecx with the pointer to storage for the return value */ movl 24(%ebp),%ecx fstpt (%ecx) jmp epilogue retint64: - cmpl $FFI_TYPE_SINT64,%ecx - jne retstruct /* Load %ecx with the pointer to storage for the return value */ movl 24(%ebp),%ecx movl %eax,0(%ecx) movl %edx,4(%ecx) + jmp epilogue +retint: + /* Load %ecx with the pointer to storage for the return value */ + movl 24(%ebp),%ecx + movl %eax,0(%ecx) + retstruct: /* Nothing to do! */ noretval: epilogue: + popl %esi movl %ebp,%esp popl %ebp ret @@ -162,7 +195,15 @@ movl -12(%ebp), %ecx cmpl $FFI_TYPE_INT, %eax je .Lcls_retint - cmpl $FFI_TYPE_FLOAT, %eax + + /* Handle FFI_TYPE_UINT8, FFI_TYPE_SINT8, FFI_TYPE_UINT16, + FFI_TYPE_SINT16, FFI_TYPE_UINT32, FFI_TYPE_SINT32. */ + cmpl $FFI_TYPE_UINT64, %eax + jge 0f + cmpl $FFI_TYPE_UINT8, %eax + jge .Lcls_retint + +0: cmpl $FFI_TYPE_FLOAT, %eax je .Lcls_retfloat cmpl $FFI_TYPE_DOUBLE, %eax je .Lcls_retdouble @@ -170,6 +211,8 @@ je .Lcls_retldouble cmpl $FFI_TYPE_SINT64, %eax je .Lcls_retllong + cmpl $FFI_TYPE_STRUCT, %eax + je .Lcls_retstruct .Lcls_epilogue: movl %ebp, %esp popl %ebp @@ -190,6 +233,10 @@ movl (%ecx), %eax movl 4(%ecx), %edx jmp .Lcls_epilogue +.Lcls_retstruct: + movl %ebp, %esp + popl %ebp + ret $4 .LFE2: .size ffi_closure_SYSV, .-ffi_closure_SYSV @@ -226,6 +273,14 @@ movl CIF_FLAGS_OFFSET(%esi), %eax /* rtype */ cmpl $FFI_TYPE_INT, %eax je .Lrcls_retint + + /* Handle FFI_TYPE_UINT8, FFI_TYPE_SINT8, FFI_TYPE_UINT16, + FFI_TYPE_SINT16, FFI_TYPE_UINT32, FFI_TYPE_SINT32. 
*/ + cmpl $FFI_TYPE_UINT64, %eax + jge 0f + cmpl $FFI_TYPE_UINT8, %eax + jge .Lrcls_retint +0: cmpl $FFI_TYPE_FLOAT, %eax je .Lrcls_retfloat cmpl $FFI_TYPE_DOUBLE, %eax @@ -377,6 +432,6 @@ #endif /* ifndef __x86_64__ */ -#ifdef __ELF__ -.section .note.GNU-stack,"",%progbits +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits #endif Modified: python/trunk/Modules/_ctypes/libffi/src/x86/unix64.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/unix64.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/unix64.S Tue Mar 4 21:09:11 2008 @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- unix64.S - Copyright (c) 2002 Bo Thorsen + Copyright (c) 2008 Red Hat, Inc x86-64 Foreign Function Interface @@ -14,13 +15,14 @@ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR - OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - OTHER DEALINGS IN THE SOFTWARE. + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. ----------------------------------------------------------------------- */ #ifdef __x86_64__ @@ -31,7 +33,7 @@ .text /* ffi_call_unix64 (void *args, unsigned long bytes, unsigned flags, - void *raddr, void (*fnaddr)()); + void *raddr, void (*fnaddr)(void)); Bit o trickiness here -- ARGS+BYTES is the base of the stack frame for this function. This has been allocated by ffi_call. We also @@ -410,3 +412,7 @@ .LEFDE3: #endif /* __x86_64__ */ + +#if defined __ELF__ && defined __linux__ + .section .note.GNU-stack,"", at progbits +#endif Modified: python/trunk/Modules/_ctypes/libffi/src/x86/win32.S ============================================================================== --- python/trunk/Modules/_ctypes/libffi/src/x86/win32.S (original) +++ python/trunk/Modules/_ctypes/libffi/src/x86/win32.S Tue Mar 4 21:09:11 2008 @@ -20,7 +20,8 @@ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR + IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR + ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
@@ -258,6 +259,22 @@ .ffi_call_STDCALL_end: + .globl _ffi_closure_STDCALL +_ffi_closure_STDCALL: + pushl %ebp + movl %esp, %ebp + subl $40, %esp + leal -24(%ebp), %edx + movl %edx, -12(%ebp) /* resp */ + leal 12(%ebp), %edx /* account for stub return address on stack */ + movl %edx, 4(%esp) /* args */ + leal -12(%ebp), %edx + movl %edx, (%esp) /* &resp */ + call _ffi_closure_SYSV_inner + movl -12(%ebp), %ecx + jmp .Lcls_return_result +.ffi_closure_STDCALL_end: + .globl _ffi_closure_SYSV _ffi_closure_SYSV: pushl %ebp @@ -271,6 +288,7 @@ movl %edx, (%esp) /* &resp */ call _ffi_closure_SYSV_inner movl -12(%ebp), %ecx +.Lcls_return_result: cmpl $FFI_TYPE_INT, %eax je .Lcls_retint cmpl $FFI_TYPE_FLOAT, %eax Modified: python/trunk/configure ============================================================================== --- python/trunk/configure (original) +++ python/trunk/configure Tue Mar 4 21:09:11 2008 @@ -1,5 +1,5 @@ #! /bin/sh -# From configure.in Revision: 60536 . +# From configure.in Revision: 60765 . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.61 for python 2.6. # @@ -13223,138 +13223,6 @@ # Check for use of the system libffi library -if test "${ac_cv_header_ffi_h+set}" = set; then - { echo "$as_me:$LINENO: checking for ffi.h" >&5 -echo $ECHO_N "checking for ffi.h... $ECHO_C" >&6; } -if test "${ac_cv_header_ffi_h+set}" = set; then - echo $ECHO_N "(cached) $ECHO_C" >&6 -fi -{ echo "$as_me:$LINENO: result: $ac_cv_header_ffi_h" >&5 -echo "${ECHO_T}$ac_cv_header_ffi_h" >&6; } -else - # Is the header compilable? -{ echo "$as_me:$LINENO: checking ffi.h usability" >&5 -echo $ECHO_N "checking ffi.h usability... $ECHO_C" >&6; } -cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -$ac_includes_default -#include -_ACEOF -rm -f conftest.$ac_objext -if { (ac_try="$ac_compile" -case "(($ac_try" in - *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; - *) ac_try_echo=$ac_try;; -esac -eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_compile") 2>conftest.er1 - ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } && { - test -z "$ac_c_werror_flag" || - test ! -s conftest.err - } && test -s conftest.$ac_objext; then - ac_header_compiler=yes -else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - - ac_header_compiler=no -fi - -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext -{ echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 -echo "${ECHO_T}$ac_header_compiler" >&6; } - -# Is the header present? -{ echo "$as_me:$LINENO: checking ffi.h presence" >&5 -echo $ECHO_N "checking ffi.h presence... $ECHO_C" >&6; } -cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#include -_ACEOF -if { (ac_try="$ac_cpp conftest.$ac_ext" -case "(($ac_try" in - *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; - *) ac_try_echo=$ac_try;; -esac -eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5 - (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1 - ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } >/dev/null && { - test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || - test ! 
-s conftest.err - }; then - ac_header_preproc=yes -else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - - ac_header_preproc=no -fi - -rm -f conftest.err conftest.$ac_ext -{ echo "$as_me:$LINENO: result: $ac_header_preproc" >&5 -echo "${ECHO_T}$ac_header_preproc" >&6; } - -# So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) - { echo "$as_me:$LINENO: WARNING: ffi.h: accepted by the compiler, rejected by the preprocessor!" >&5 -echo "$as_me: WARNING: ffi.h: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: ffi.h: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: ffi.h: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) - { echo "$as_me:$LINENO: WARNING: ffi.h: present but cannot be compiled" >&5 -echo "$as_me: WARNING: ffi.h: present but cannot be compiled" >&2;} - { echo "$as_me:$LINENO: WARNING: ffi.h: check for missing prerequisite headers?" >&5 -echo "$as_me: WARNING: ffi.h: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: ffi.h: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: ffi.h: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: ffi.h: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: ffi.h: section \"Present But Cannot Be Compiled\"" >&2;} - { echo "$as_me:$LINENO: WARNING: ffi.h: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: ffi.h: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: ffi.h: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: ffi.h: in the future, the compiler will take precedence" >&2;} - ( cat <<\_ASBOX -## ------------------------------------------------ ## -## Report this to http://www.python.org/python-bugs ## -## ------------------------------------------------ ## -_ASBOX - ) | sed "s/^/$as_me: WARNING: /" >&2 - ;; -esac -{ echo "$as_me:$LINENO: checking for ffi.h" >&5 -echo $ECHO_N "checking for ffi.h... $ECHO_C" >&6; } -if test "${ac_cv_header_ffi_h+set}" = set; then - echo $ECHO_N "(cached) $ECHO_C" >&6 -else - ac_cv_header_ffi_h=$ac_header_preproc -fi -{ echo "$as_me:$LINENO: result: $ac_cv_header_ffi_h" >&5 -echo "${ECHO_T}$ac_cv_header_ffi_h" >&6; } - -fi - - { echo "$as_me:$LINENO: checking for --with-system-ffi" >&5 echo $ECHO_N "checking for --with-system-ffi... 
$ECHO_C" >&6; } @@ -13364,15 +13232,6 @@ fi -if test -z "$with_system_ffi" && test "$ac_cv_header_ffi_h" = yes; then - case "$ac_sys_system/`uname -m`" in - Linux/alpha*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - Linux/arm*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - Linux/ppc*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - Linux/s390*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - *) with_system_ffi="no" - esac -fi { echo "$as_me:$LINENO: result: $with_system_ffi" >&5 echo "${ECHO_T}$with_system_ffi" >&6; } Modified: python/trunk/configure.in ============================================================================== --- python/trunk/configure.in (original) +++ python/trunk/configure.in Tue Mar 4 21:09:11 2008 @@ -1754,20 +1754,10 @@ [AC_MSG_RESULT(no)]) # Check for use of the system libffi library -AC_CHECK_HEADER(ffi.h) AC_MSG_CHECKING(for --with-system-ffi) AC_ARG_WITH(system_ffi, AC_HELP_STRING(--with-system-ffi, build _ctypes module using an installed ffi library)) -if test -z "$with_system_ffi" && test "$ac_cv_header_ffi_h" = yes; then - case "$ac_sys_system/`uname -m`" in - Linux/alpha*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - Linux/arm*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - Linux/ppc*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - Linux/s390*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";; - *) with_system_ffi="no" - esac -fi AC_MSG_RESULT($with_system_ffi) # Determine if signalmodule should be used. Modified: python/trunk/setup.py ============================================================================== --- python/trunk/setup.py (original) +++ python/trunk/setup.py Tue Mar 4 21:09:11 2008 @@ -1455,8 +1455,37 @@ # *** Uncomment these for TOGL extension only: # -lGL -lGLU -lXext -lXmu \ + def configure_ctypes_darwin(self, ext): + # Darwin (OS X) uses preconfigured files, in + # the Modules/_ctypes/libffi_osx directory. + (srcdir,) = sysconfig.get_config_vars('srcdir') + ffi_srcdir = os.path.abspath(os.path.join(srcdir, 'Modules', + '_ctypes', 'libffi_osx')) + sources = [os.path.join(ffi_srcdir, p) + for p in ['ffi.c', + 'x86/x86-darwin.S', + 'x86/x86-ffi_darwin.c', + 'x86/x86-ffi64.c', + 'powerpc/ppc-darwin.S', + 'powerpc/ppc-darwin_closure.S', + 'powerpc/ppc-ffi_darwin.c', + 'powerpc/ppc64-darwin_closure.S', + ]] + + # Add .S (preprocessed assembly) to C compiler source extensions. + self.compiler.src_extensions.append('.S') + + include_dirs = [os.path.join(ffi_srcdir, 'include'), + os.path.join(ffi_srcdir, 'powerpc')] + ext.include_dirs.extend(include_dirs) + ext.sources.extend(sources) + return True + def configure_ctypes(self, ext): if not self.use_system_libffi: + if sys.platform == 'darwin': + return self.configure_ctypes_darwin(ext) + (srcdir,) = sysconfig.get_config_vars('srcdir') ffi_builddir = os.path.join(self.build_temp, 'libffi') ffi_srcdir = os.path.abspath(os.path.join(srcdir, 'Modules', @@ -1515,6 +1544,7 @@ if sys.platform == 'darwin': sources.append('_ctypes/darwin/dlfcn_simple.c') + extra_compile_args.append('-DMACOSX') include_dirs.append('_ctypes/darwin') # XXX Is this still needed? 
## extra_link_args.extend(['-read_only_relocs', 'warning']) @@ -1544,6 +1574,11 @@ if not '--with-system-ffi' in sysconfig.get_config_var("CONFIG_ARGS"): return + if sys.platform == 'darwin': + # OS X 10.5 comes with libffi.dylib; the include files are + # in /usr/include/ffi + inc_dirs.append('/usr/include/ffi') + ffi_inc = find_file('ffi.h', [], inc_dirs) if ffi_inc is not None: ffi_h = ffi_inc[0] + '/ffi.h' From python-checkins at python.org Tue Mar 4 21:21:42 2008 From: python-checkins at python.org (thomas.heller) Date: Tue, 4 Mar 2008 21:21:42 +0100 (CET) Subject: [Python-checkins] r61235 - python/trunk/Modules/_ctypes/libffi/fficonfig.py.in Message-ID: <20080304202142.F27951E4007@bag.python.org> Author: thomas.heller Date: Tue Mar 4 21:21:42 2008 New Revision: 61235 Modified: python/trunk/Modules/_ctypes/libffi/fficonfig.py.in Log: Try to fix the build for PY_LINUX. Modified: python/trunk/Modules/_ctypes/libffi/fficonfig.py.in ============================================================================== --- python/trunk/Modules/_ctypes/libffi/fficonfig.py.in (original) +++ python/trunk/Modules/_ctypes/libffi/fficonfig.py.in Tue Mar 4 21:21:42 2008 @@ -24,6 +24,7 @@ 'SH': ['src/sh/sysv.S', 'src/sh/ffi.c'], 'SH64': ['src/sh64/sysv.S', 'src/sh64/ffi.c'], 'PA': ['src/pa/linux.S', 'src/pa/ffi.c'], + 'PA_LINUX': ['src/pa/linux.S', 'src/pa/ffi.c'], } ffi_srcdir = '@srcdir@' From python-checkins at python.org Tue Mar 4 22:14:05 2008 From: python-checkins at python.org (fred.drake) Date: Tue, 4 Mar 2008 22:14:05 +0100 (CET) Subject: [Python-checkins] r61236 - python/trunk/Lib/compileall.py Message-ID: <20080304211405.58BF21E400B@bag.python.org> Author: fred.drake Date: Tue Mar 4 22:14:04 2008 New Revision: 61236 Modified: python/trunk/Lib/compileall.py Log: fix typo Modified: python/trunk/Lib/compileall.py ============================================================================== --- python/trunk/Lib/compileall.py (original) +++ python/trunk/Lib/compileall.py Tue Mar 4 22:14:04 2008 @@ -119,7 +119,7 @@ print "-d destdir: purported directory name for error messages" print " if no directory arguments, -l sys.path is assumed" print "-x regexp: skip files matching the regular expression regexp" - print " the regexp is search for in the full path of the file" + print " the regexp is searched for in the full path of the file" sys.exit(2) maxlevels = 10 ddir = None From buildbot at python.org Tue Mar 4 22:45:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 04 Mar 2008 21:45:22 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080304214522.827DC1E4007@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2974 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: fred.drake BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From nnorwitz at gmail.com Tue Mar 4 23:40:29 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Mar 2008 17:40:29 -0500 Subject: [Python-checkins] Python Regression Test Failures all () Message-ID: <20080304224029.GA7397@python.psfb.org> From python-checkins at python.org Tue Mar 4 23:29:44 2008 From: python-checkins at python.org (raymond.hettinger) Date: Tue, 4 Mar 2008 23:29:44 +0100 (CET) Subject: [Python-checkins] r61237 - python/trunk/Modules/itertoolsmodule.c Message-ID: <20080304222944.E488C1E4007@bag.python.org> Author: raymond.hettinger Date: Tue Mar 4 23:29:44 2008 New Revision: 61237 Modified: python/trunk/Modules/itertoolsmodule.c Log: Fix refleak in chain(). Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Tue Mar 4 23:29:44 2008 @@ -1682,8 +1682,8 @@ return NULL; /* no more input sources */ } lz->active = PyObject_GetIter(iterable); + Py_DECREF(iterable); if (lz->active == NULL) { - Py_DECREF(iterable); Py_CLEAR(lz->source); return NULL; /* input not iterable */ } From nnorwitz at gmail.com Wed Mar 5 00:31:56 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Mar 2008 18:31:56 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (1) Message-ID: <20080304233156.GA21056@python.psfb.org> 313 tests OK. 1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop 
test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. 
ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test test_sqlite failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/sqlite3/test/transactions.py", line 51, in tearDown os.unlink(get_db_path()) OSError: [Errno 2] No such file or directory: 'sqlite_testdb' test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11149 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 
1 test failed: test_sqlite 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [572574 refs] From python-checkins at python.org Wed Mar 5 01:44:42 2008 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 5 Mar 2008 01:44:42 +0100 (CET) Subject: [Python-checkins] r61239 - python/trunk/Doc/whatsnew/2.6.rst Message-ID: <20080305004442.2A2411E4007@bag.python.org> Author: andrew.kuchling Date: Wed Mar 5 01:44:41 2008 New Revision: 61239 Modified: python/trunk/Doc/whatsnew/2.6.rst Log: Add more items; add fragmentary notes Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Wed Mar 5 01:44:41 2008 @@ -117,8 +117,12 @@ New Issue Tracker: Roundup -------------------------------------------------- -XXX write this. +XXX write this -- this section is currently just brief notes. +The developers were growing increasingly annoyed by SourceForge's +bug tracker. (Discuss problems in a sentence or two.) + +Hosting provided by XXX. New Documentation Format: ReStructured Text -------------------------------------------------- @@ -455,7 +459,46 @@ PEP 3101: Advanced String Formatting ===================================================== -XXX write this +XXX write this -- this section is currently just brief notes. + +8-bit and Unicode strings have a .format() method that takes the arguments +to be formatted. + +.format() uses curly brackets ({, }) as special characters: + + format("User ID: {0}", "root") -> "User ID: root" + format("Empty dict: {{}}") -> "Empty dict: {}" + 0.name + 0[name] + +Format specifiers: + + 0:8 -> left-align, pad + 0:>8 -> right-align, pad + +Format data types:: + + ... take table from PEP 3101 + +Classes and types can define a __format__ method to control how it's +formatted. It receives a single argument, the format specifier:: + + def __format__(self, format_spec): + if isinstance(format_spec, unicode): + return unicode(str(self)) + else: + return str(self) + +There's also a format() built-in that will format a single value. It calls +the type's :meth:`__format__` method with the provided specifier:: + + >>> format(75.6564, '.2f') + '75.66' + +.. seealso:: + + :pep:`3101` - Advanced String Formatting + PEP written by Talin. .. ====================================================================== @@ -509,12 +552,30 @@ .. ====================================================================== +.. _pep-3112: + +PEP 3112: Byte Literals +===================================================== + +Python 3.0 adopts Unicode as the language's fundamental string type, and +denotes 8-bit literals differently, either as ``b'string'`` +or using a :class:`bytes` constructor. For future compatibility, +Python 2.6 adds :class:`bytes` as a synonym for the :class:`str` type, +and it also supports the ``b''`` notation. + +.. seealso:: + + :pep:`3112` - Bytes literals in Python 3000 + PEP written by Jason Orendorff; backported to 2.6 by Christian Heimes. + +.. ====================================================================== + .. 
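The draft notes above sketch the new string formatting and byte-literal support only in outline. For readers of this archive, the behaviour they describe corresponds to the following minimal interactive session (assuming a Python 2.6 interpreter; the draft's ``format("User ID: {0}", "root")`` shorthand is really the method-call form shown here)::

    >>> "User ID: {0}".format("root")
    'User ID: root'
    >>> "Empty dict: {{}}".format()
    'Empty dict: {}'
    >>> format(75.6564, '.2f')      # the format() built-in described above
    '75.66'
    >>> b'abc' == 'abc'             # in 2.6, b'...' literals are plain str objects
    True
    >>> bytes is str
    True
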
_pep-3119: PEP 3119: Abstract Base Classes ===================================================== -XXX +XXX write this -- this section is currently just brief notes. How to identify a file object? @@ -558,16 +619,23 @@ PEP 3127: Integer Literal Support and Syntax ===================================================== -XXX write this +XXX write this -- this section is currently just brief notes. Python 3.0 changes the syntax for octal integer literals, and adds supports for binary integers: 0o instad of 0, and 0b for binary. Python 2.6 doesn't support this, but a bin() -builtin was added, and +builtin was added. + +XXX changes to the hex/oct builtins New bin() built-in returns the binary form of a number. +.. seealso:: + + :pep:`3127` - Integer Literal Support and Syntax + PEP written by Patrick Maupin. + .. ====================================================================== .. _pep-3129: @@ -575,7 +643,30 @@ PEP 3129: Class Decorators ===================================================== -XXX write this. +XXX write this -- this section is currently just brief notes. + +Class decorators are analogous to function decorators. After defining a class, +it's passed through the specified series of decorator functions +and the ultimate return value is recorded as the class. + +:: + + class A: + pass + A = foo(bar(A)) + + + @foo + @bar + class A: + pass + +XXX need to find a good motivating example. + +.. seealso:: + + :pep:`3129` - Class Decorators + PEP written by Collin Winter. .. ====================================================================== @@ -631,11 +722,14 @@ .. seealso:: + :pep:`3141` - A Type Hierarchy for Numbers + PEP written by Jeffrey Yasskin. + XXX link: Discusses Scheme's numeric tower. -The Fraction Module +The :mod:`fractions` Module -------------------------------------------------- To fill out the hierarchy of numeric types, a rational-number class @@ -657,11 +751,27 @@ >>> a/b Fraction(5, 3) +To help in converting floating-point numbers to rationals, +the float type now has a :meth:`as_integer_ratio()` method that returns +the numerator and denominator for a fraction that evaluates to the same +floating-point value:: + + >>> (2.5) .as_integer_ratio() + (5, 2) + >>> (3.1415) .as_integer_ratio() + (7074029114692207L, 2251799813685248L) + >>> (1./3) .as_integer_ratio() + (6004799503160661L, 18014398509481984L) + +Note that values that can only be approximated by floating-point +numbers, such as 1./3, are not simplified to the number being +approximated; the fraction attempts to match the floating-point value +**exactly**. + The :mod:`fractions` module is based upon an implementation by Sjoerd Mullender that was in Python's :file:`Demo/classes/` directory for a long time. This implementation was significantly updated by Jeffrey -Yaskin. - +Yasskin. Other Language Changes ====================== @@ -767,6 +877,12 @@ .. Patch #1537 +* Generator objects now have a :attr:`gi_code` attribute that refers to + the original code object backing the generator. + (Contributed by Collin Winter.) + + .. Patch #1473257 + * The :func:`compile` built-in function now accepts keyword arguments as well as positional parameters. (Contributed by Thomas Wouters.) @@ -1054,6 +1170,12 @@ [('1', '2', '3'), ('1', '2', '4'), ('1', '3', '4'), ('2', '3', '4')] + ``permutations(iter[, r])`` returns all the permutations of length *r* from + the iterable's elements. If *r* is not specified, it will default to the + number of elements produced by the iterable. 
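The fractions and ``float.as_integer_ratio()`` notes above translate directly into an interactive session; a small sketch, assuming the Python 2.6 modules described there::

    >>> from fractions import Fraction
    >>> a = Fraction(2, 3)
    >>> b = Fraction(2, 5)
    >>> a + b
    Fraction(16, 15)
    >>> a / b
    Fraction(5, 3)
    >>> Fraction(*(2.5).as_integer_ratio())
    Fraction(5, 2)
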
+ + XXX enter example once Raymond commits the code. + ``itertools.chain(*iterables)` is an existing function in :mod:`itertools` that gained a new constructor. ``itertools.chain.from_iterable(iterable)`` takes a single @@ -1066,6 +1188,13 @@ (All contributed by Raymond Hettinger.) +* The :mod:`logging` module's :class:`FileHandler` class + and its subclasses :class:`WatchedFileHandler`, :class:`RotatingFileHandler`, + and :class:`TimedRotatingFileHandler` now + have an optional *delay* parameter to its constructor. If *delay* + is true, opening of the log file is deferred until the first + :meth:`emit` call is made. (Contributed by Vinay Sajip.) + * The :mod:`macfs` module has been removed. This in turn required the :func:`macostools.touched` function to be removed because it depended on the :mod:`macfs` module. @@ -1171,6 +1300,13 @@ changed and :const:`UF_APPEND` to indicate that data can only be appended to the file. (Contributed by M. Levinson.) + ``os.closerange(*low*, *high*)`` efficiently closes all file descriptors + from *low* to *high*, ignoring any errors and not including *high* itself. + This function is now used by the :mod:`subprocess` module to make starting + processes faster. (Contributed by Georg Brandl.) + + .. Patch #1663329 + * The :mod:`pyexpat` module's :class:`Parser` objects now allow setting their :attr:`buffer_size` attribute to change the size of the buffer used to hold character data. @@ -1203,6 +1339,14 @@ * The :mod:`rgbimg` module has been removed. +* The :mod:`sched` module's :class:`scheduler` instances now + have a read-only :attr:`queue` attribute that returns the + contents of the scheduler's queue, represented as a list of + named tuples with the fields + ``(*time*, *priority*, *action*, *argument*)``. + (Contributed by Raymond Hettinger XXX check.) + .. % Patch 1861 + * The :mod:`sets` module has been deprecated; it's better to use the built-in :class:`set` and :class:`frozenset` types. @@ -1223,7 +1367,7 @@ On receiving a signal, a byte will be written and the main event loop will be woken up, without the need to poll. - Contributed by Adam Olsen. + (Contributed by Adam Olsen.) .. % Patch 1583 @@ -1250,7 +1394,7 @@ * In the :mod:`smtplib` module, SMTP.starttls() now complies with :rfc:`3207` and forgets any knowledge obtained from the server not obtained from - the TLS negotiation itself. Patch contributed by Bill Fenner. + the TLS negotiation itself. (Patch contributed by Bill Fenner.) .. Issue 829951 @@ -1297,6 +1441,12 @@ These attributes are all read-only. (Contributed by Christian Heimes.) + It's now possible to determine the current profiler and tracer functions + by calling :func:`sys.getprofile` and :func:`sys.gettrace`. + (Contributed by Georg Brandl.) + + .. Patch #1648 + * The :mod:`tarfile` module now supports POSIX.1-2001 (pax) and POSIX.1-1988 (ustar) format tarfiles, in addition to the GNU tar format that was already supported. The default format @@ -1547,11 +1697,13 @@ .. Issue 1635 -* Some macros were renamed to make it clearer that they are macros, +* Some macros were renamed in both 3.0 and 2.6 to make it clearer that + they are macros, not functions. :cmacro:`Py_Size()` became :cmacro:`Py_SIZE()`, :cmacro:`Py_Type()` became :cmacro:`Py_TYPE()`, and - :cmacro:`Py_Refcnt()` became :cmacro:`Py_REFCNT()`. Macros for backward - compatibility are still available for Python 2.6. + :cmacro:`Py_Refcnt()` became :cmacro:`Py_REFCNT()`. + The mixed-case macros are still available + in Python 2.6 for backward compatibility. .. 
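Two of the smaller library additions noted above are easy to try out directly: the ``sys.getprofile()``/``sys.gettrace()`` accessors and the *delay* parameter to the logging file handlers. A minimal sketch, assuming Python 2.6 or later and using ``example.log`` purely as a placeholder filename::

    import logging
    import sys

    # In a plain interpreter session no profiler or tracer is installed,
    # so both accessors return None.
    current_profiler = sys.getprofile()
    current_tracer = sys.gettrace()

    # With delay=True the handler does not create or open example.log until
    # the first record is actually emitted through it.
    handler = logging.FileHandler('example.log', delay=True)
    logging.getLogger('demo').addHandler(handler)
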
Issue 1629 From python-checkins at python.org Wed Mar 5 02:50:34 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Wed, 5 Mar 2008 02:50:34 +0100 (CET) Subject: [Python-checkins] r61240 - in python/trunk: Lib/test/test_grammar.py Misc/NEWS Python/ast.c Message-ID: <20080305015034.5AAB11E4007@bag.python.org> Author: amaury.forgeotdarc Date: Wed Mar 5 02:50:33 2008 New Revision: 61240 Modified: python/trunk/Lib/test/test_grammar.py python/trunk/Misc/NEWS python/trunk/Python/ast.c Log: Issue#2238: some syntax errors from *args or **kwargs expressions would give bogus error messages, because of untested exceptions:: >>> f(**g(1=2)) XXX undetected error Traceback (most recent call last): File "", line 1, in TypeError: 'int' object is not iterable instead of the expected SyntaxError: keyword can't be an expression Will backport. Modified: python/trunk/Lib/test/test_grammar.py ============================================================================== --- python/trunk/Lib/test/test_grammar.py (original) +++ python/trunk/Lib/test/test_grammar.py Wed Mar 5 02:50:33 2008 @@ -282,6 +282,10 @@ def d32v((x,)): pass d32v((1,)) + # Check ast errors in *args and *kwargs + check_syntax_error(self, "f(*g(1=2))") + check_syntax_error(self, "f(**g(1=2))") + def testLambdef(self): ### lambdef: 'lambda' [varargslist] ':' test l1 = lambda : 0 Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 5 02:50:33 2008 @@ -9,6 +9,12 @@ *Release date: XX-XXX-2008* +Core and builtins +----------------- + +- Issue #2238: Some syntax errors in *args and **kwargs expressions could give + bogus error messages. + What's New in Python 2.6 alpha 1? ================================= Modified: python/trunk/Python/ast.c ============================================================================== --- python/trunk/Python/ast.c (original) +++ python/trunk/Python/ast.c Wed Mar 5 02:50:33 2008 @@ -1934,10 +1934,14 @@ } else if (TYPE(ch) == STAR) { vararg = ast_for_expr(c, CHILD(n, i+1)); + if (!vararg) + return NULL; i++; } else if (TYPE(ch) == DOUBLESTAR) { kwarg = ast_for_expr(c, CHILD(n, i+1)); + if (!kwarg) + return NULL; i++; } } From buildbot at python.org Wed Mar 5 03:48:44 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 02:48:44 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu trunk Message-ID: <20080305024845.18BA91E4007@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%20trunk/builds/1561 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,andrew.kuchling BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Wed Mar 5 04:00:49 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 03:00:49 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080305030049.682271E4007@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. 
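The r61240 change above restores the intended SyntaxError for malformed keyword arguments inside ``*args``/``**kwargs`` calls. On an interpreter that includes the fix, the behaviour described in the log message looks roughly like this (a doctest-style sketch, not output captured from any particular buildbot)::

    >>> compile("f(**g(1=2))", "<example>", "exec")
    Traceback (most recent call last):
      ...
    SyntaxError: keyword can't be an expression
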
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/143 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,andrew.kuchling BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 5 04:16:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 03:16:33 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080305031633.E35A81E4007@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/691 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,andrew.kuchling BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File 
"/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_smtplib sincerely, -The Buildbot From python-checkins at python.org Wed Mar 5 06:10:49 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:10:49 +0100 (CET) Subject: [Python-checkins] r61241 - python/trunk/Lib/bsddb/test/test_dbshelve.py python/trunk/Lib/bsddb/test/test_thread.py Message-ID: <20080305051049.2B8541E4007@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:10:48 2008 New Revision: 61241 Modified: python/trunk/Lib/bsddb/test/test_dbshelve.py python/trunk/Lib/bsddb/test/test_thread.py Log: Remove the files/dirs after closing the DB so the tests work on Windows. Patch from Trent Nelson. Also simplified removing a file by using test_support. Modified: python/trunk/Lib/bsddb/test/test_dbshelve.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_dbshelve.py (original) +++ python/trunk/Lib/bsddb/test/test_dbshelve.py Wed Mar 5 06:10:48 2008 @@ -40,10 +40,7 @@ def tearDown(self): self.do_close() - try: - os.remove(self.filename) - except os.error: - pass + test_support.unlink(self.filename) def mk(self, key): """Turn key into an appropriate key type for this db""" @@ -267,8 +264,8 @@ def tearDown(self): - test_support.rmtree(self.homeDir) self.do_close() + test_support.rmtree(self.homeDir) class EnvBTreeShelveTestCase(BasicEnvShelveTestCase): Modified: python/trunk/Lib/bsddb/test/test_thread.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_thread.py (original) +++ python/trunk/Lib/bsddb/test/test_thread.py Wed Mar 5 06:10:48 2008 @@ -73,9 +73,9 @@ self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE) def tearDown(self): - test_support.rmtree(self.homeDir) self.d.close() self.env.close() + test_support.rmtree(self.homeDir) def setEnvOpts(self): pass From python-checkins at python.org Wed Mar 5 06:14:18 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:14:18 +0100 (CET) Subject: [Python-checkins] r61242 - python/trunk/Lib/test/test_winsound.py Message-ID: <20080305051418.D252E1E400B@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:14:18 2008 New Revision: 61242 Modified: python/trunk/Lib/test/test_winsound.py Log: Get this test to pass even when there is no sound card in the system. Patch from Trent Nelson. (I can't test this.) 
Modified: python/trunk/Lib/test/test_winsound.py ============================================================================== --- python/trunk/Lib/test/test_winsound.py (original) +++ python/trunk/Lib/test/test_winsound.py Wed Mar 5 06:14:18 2008 @@ -8,6 +8,13 @@ class BeepTest(unittest.TestCase): + # As with PlaySoundTest, incorporate the _have_soundcard() check + # into our test methods. If there's no audio device present, + # winsound.Beep returns 0 and GetLastError() returns 127, which + # is: ERROR_PROC_NOT_FOUND ("The specified procedure could not + # be found"). (FWIW, virtual/Hyper-V systems fall under this + # scenario as they have no sound devices whatsoever (not even + # a legacy Beep device).) def test_errors(self): self.assertRaises(TypeError, winsound.Beep) @@ -15,12 +22,17 @@ self.assertRaises(ValueError, winsound.Beep, 32768, 75) def test_extremes(self): - winsound.Beep(37, 75) - winsound.Beep(32767, 75) + if _have_soundcard(): + winsound.Beep(37, 75) + winsound.Beep(32767, 75) + else: + self.assertRaises(RuntimeError, winsound.Beep, 37, 75) + self.assertRaises(RuntimeError, winsound.Beep, 32767, 75) def test_increasingfrequency(self): - for i in xrange(100, 2000, 100): - winsound.Beep(i, 75) + if _have_soundcard(): + for i in xrange(100, 2000, 100): + winsound.Beep(i, 75) class MessageBeepTest(unittest.TestCase): From python-checkins at python.org Wed Mar 5 06:20:44 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:20:44 +0100 (CET) Subject: [Python-checkins] r61243 - python/trunk/Lib/sqlite3/test/transactions.py Message-ID: <20080305052044.AAC871E4014@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:20:44 2008 New Revision: 61243 Modified: python/trunk/Lib/sqlite3/test/transactions.py Log: Catch OSError when trying to remove a file in case removal fails. This should prevent a failure in tearDown masking any real test failure. Modified: python/trunk/Lib/sqlite3/test/transactions.py ============================================================================== --- python/trunk/Lib/sqlite3/test/transactions.py (original) +++ python/trunk/Lib/sqlite3/test/transactions.py Wed Mar 5 06:20:44 2008 @@ -32,7 +32,7 @@ def setUp(self): try: os.remove(get_db_path()) - except: + except OSError: pass self.con1 = sqlite.connect(get_db_path(), timeout=0.1) @@ -48,7 +48,10 @@ self.cur2.close() self.con2.close() - os.unlink(get_db_path()) + try: + os.unlink(get_db_path()) + except OSError: + pass def CheckDMLdoesAutoCommitBefore(self): self.cur1.execute("create table test(i)") From python-checkins at python.org Wed Mar 5 06:38:06 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:38:06 +0100 (CET) Subject: [Python-checkins] r61244 - python/trunk/Lib/test/test_smtplib.py Message-ID: <20080305053806.8773F1E4007@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:38:06 2008 New Revision: 61244 Modified: python/trunk/Lib/test/test_smtplib.py Log: Make the timeout longer to give slow machines a chance to pass the test before timing out. This doesn't change the duration of the test under normal circumstances. This is targetted at fixing the spurious failures on the FreeBSD buildbot primarily. 
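The timeout bump in r61244 above only changes how long the helper server waits for a connection before giving up; it costs nothing when the client connects promptly. For reference, the same ephemeral-port setup as a standalone sketch (nothing here is specific to the smtplib test itself)::

    import socket

    serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serv.settimeout(15)              # generous: give slow buildbots 15 seconds to connect
    serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    serv.bind(("", 0))               # port 0 asks the kernel for any free port
    serv.listen(5)
    host, port = serv.getsockname()  # the port a client should connect to
    serv.close()
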
Modified: python/trunk/Lib/test/test_smtplib.py ============================================================================== --- python/trunk/Lib/test/test_smtplib.py (original) +++ python/trunk/Lib/test/test_smtplib.py Wed Mar 5 06:38:06 2008 @@ -19,7 +19,7 @@ def server(evt, buf): serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - serv.settimeout(1) + serv.settimeout(15) serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) serv.bind(("", 0)) global PORT From python-checkins at python.org Wed Mar 5 06:49:03 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:49:03 +0100 (CET) Subject: [Python-checkins] r61245 - python/trunk/Misc/build.sh Message-ID: <20080305054903.EAB691E4007@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:49:03 2008 New Revision: 61245 Modified: python/trunk/Misc/build.sh Log: Tabs -> spaces Modified: python/trunk/Misc/build.sh ============================================================================== --- python/trunk/Misc/build.sh (original) +++ python/trunk/Misc/build.sh Wed Mar 5 06:49:03 2008 @@ -93,7 +93,7 @@ place_summary_first() { testf=$1 sed -n '/^[0-9][0-9]* tests OK\./,$p' < $testf \ - | egrep -v '\[[0-9]+ refs\]' > $testf.tmp + | egrep -v '\[[0-9]+ refs\]' > $testf.tmp echo "" >> $testf.tmp cat $testf >> $testf.tmp mv $testf.tmp $testf @@ -103,7 +103,7 @@ testf=$1 n=`grep -ic " failed:" $testf` if [ $n -eq 1 ] ; then - n=`grep " failed:" $testf | sed -e 's/ .*//'` + n=`grep " failed:" $testf | sed -e 's/ .*//'` fi echo $n } @@ -115,17 +115,17 @@ if [ "$FAILURE_CC" != "" ]; then dest="$dest -c $FAILURE_CC" fi - if [ "x$3" != "x" ] ; then - (echo "More important issues:" - echo "----------------------" - egrep -v "$3" < $2 - echo "" - echo "Less important issues:" - echo "----------------------" - egrep "$3" < $2) + if [ "x$3" != "x" ] ; then + (echo "More important issues:" + echo "----------------------" + egrep -v "$3" < $2 + echo "" + echo "Less important issues:" + echo "----------------------" + egrep "$3" < $2) else - cat $2 - fi | mutt -s "$FAILURE_SUBJECT $1 ($NUM_FAILURES)" $dest + cat $2 + fi | mutt -s "$FAILURE_SUBJECT $1 ($NUM_FAILURES)" $dest fi } @@ -222,7 +222,7 @@ ## ensure that the reflog exists so the grep doesn't fail touch $REFLOG $PYTHON $REGRTEST_ARGS -R 4:3:$REFLOG -u network $LEAKY_SKIPS >& build/$F - LEAK_PAT="($LEAKY_TESTS|sum=0)" + LEAK_PAT="($LEAKY_TESTS|sum=0)" NUM_FAILURES=`egrep -vc "$LEAK_PAT" $REFLOG` place_summary_first build/$F update_status "Testing refleaks ($NUM_FAILURES failures)" "$F" $start From python-checkins at python.org Wed Mar 5 06:50:21 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:50:21 +0100 (CET) Subject: [Python-checkins] r61246 - python/trunk/Misc/build.sh Message-ID: <20080305055021.297E11E4007@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:50:20 2008 New Revision: 61246 Modified: python/trunk/Misc/build.sh Log: Use -u urlfetch to run more tests Modified: python/trunk/Misc/build.sh ============================================================================== --- python/trunk/Misc/build.sh (original) +++ python/trunk/Misc/build.sh Wed Mar 5 06:50:20 2008 @@ -202,7 +202,7 @@ ## make and run basic tests F=make-test.out start=`current_time` - $PYTHON $REGRTEST_ARGS >& build/$F + $PYTHON $REGRTEST_ARGS -u urlfetch >& build/$F NUM_FAILURES=`count_failures build/$F` place_summary_first build/$F update_status "Testing basics ($NUM_FAILURES failures)" "$F" $start @@ -210,7 +210,7 @@ F=make-test-opt.out 
start=`current_time` - $PYTHON -O $REGRTEST_ARGS >& build/$F + $PYTHON -O $REGRTEST_ARGS -u urlfetch >& build/$F NUM_FAILURES=`count_failures build/$F` place_summary_first build/$F update_status "Testing opt ($NUM_FAILURES failures)" "$F" $start @@ -221,7 +221,7 @@ start=`current_time` ## ensure that the reflog exists so the grep doesn't fail touch $REFLOG - $PYTHON $REGRTEST_ARGS -R 4:3:$REFLOG -u network $LEAKY_SKIPS >& build/$F + $PYTHON $REGRTEST_ARGS -R 4:3:$REFLOG -u network,urlfetch $LEAKY_SKIPS >& build/$F LEAK_PAT="($LEAKY_TESTS|sum=0)" NUM_FAILURES=`egrep -vc "$LEAK_PAT" $REFLOG` place_summary_first build/$F From python-checkins at python.org Wed Mar 5 06:51:20 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 5 Mar 2008 06:51:20 +0100 (CET) Subject: [Python-checkins] r61247 - python/trunk/Misc/build.sh Message-ID: <20080305055120.9AA461E4007@bag.python.org> Author: neal.norwitz Date: Wed Mar 5 06:51:20 2008 New Revision: 61247 Modified: python/trunk/Misc/build.sh Log: test_smtplib sometimes reports leaks too, suppress it Modified: python/trunk/Misc/build.sh ============================================================================== --- python/trunk/Misc/build.sh (original) +++ python/trunk/Misc/build.sh Wed Mar 5 06:51:20 2008 @@ -67,7 +67,7 @@ # Note: test_XXX (none currently) really leak, but are disabled # so we don't send spam. Any test which really leaks should only # be listed here if there are also test cases under Lib/test/leakers. -LEAKY_TESTS="test_(asynchat|cmd_line|popen2|socket|sys|threadsignals|urllib2_localnet)" +LEAKY_TESTS="test_(asynchat|cmd_line|popen2|socket|smtplib|sys|threadsignals|urllib2_localnet)" # Skip these tests altogether when looking for leaks. These tests # do not need to be stored above in LEAKY_TESTS too. From python-checkins at python.org Wed Mar 5 07:19:57 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Wed, 5 Mar 2008 07:19:57 +0100 (CET) Subject: [Python-checkins] r61248 - python/trunk/Lib/test/test_socketserver.py Message-ID: <20080305061957.56F021E400F@bag.python.org> Author: jeffrey.yasskin Date: Wed Mar 5 07:19:56 2008 New Revision: 61248 Modified: python/trunk/Lib/test/test_socketserver.py Log: Fix test_socketserver on Windows after r61099 added several signal.alarm() calls (which don't exist on non-Unix platforms). Thanks to Trent Nelson for the report and patch. Modified: python/trunk/Lib/test/test_socketserver.py ============================================================================== --- python/trunk/Lib/test/test_socketserver.py (original) +++ python/trunk/Lib/test/test_socketserver.py Wed Mar 5 07:19:56 2008 @@ -28,6 +28,10 @@ HAVE_UNIX_SOCKETS = hasattr(socket, "AF_UNIX") HAVE_FORKING = hasattr(os, "fork") and os.name != "os2" +def signal_alarm(n): + """Call signal.alarm when it exists (i.e. not on Windows).""" + if hasattr(signal, 'alarm'): + signal.alarm(n) def receive(sock, n, timeout=20): r, w, x = select.select([sock], [], [], timeout) @@ -99,7 +103,7 @@ """Test all socket servers.""" def setUp(self): - signal.alarm(20) # Kill deadlocks after 20 seconds. + signal_alarm(20) # Kill deadlocks after 20 seconds. self.port_seed = 0 self.test_files = [] @@ -112,7 +116,7 @@ except os.error: pass self.test_files[:] = [] - signal.alarm(0) # Didn't deadlock. + signal_alarm(0) # Didn't deadlock. def pickaddr(self, proto): if proto == socket.AF_INET: @@ -267,4 +271,4 @@ if __name__ == "__main__": test_main() - signal.alarm(3) # Shutdown shouldn't take more than 3 seconds. 
+ signal_alarm(3) # Shutdown shouldn't take more than 3 seconds. From buildbot at python.org Wed Mar 5 07:23:43 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 06:23:43 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080305062343.49E001E400B@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/780 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_socketserver test_ssl test_winsound ====================================================================== ERROR: test_TCPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_ThreadingTCPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_ThreadingUDPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_UDPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_socketserver.py", line 102, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== FAIL: test_extremes (test.test_winsound.BeepTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_winsound.py", line 29, in test_extremes self.assertRaises(RuntimeError, winsound.Beep, 37, 75) AssertionError: RuntimeError not raised sincerely, -The Buildbot From buildbot at python.org Wed Mar 5 07:28:34 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 06:28:34 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080305062834.F33951E4007@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. 
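The AttributeError tracebacks above are exactly what the signal_alarm() wrapper from r61248 (earlier in this batch) guards against: signal.alarm() does not exist on Windows. The guard pattern in isolation, as a standalone sketch rather than the test module itself::

    import signal

    def signal_alarm(n):
        """Call signal.alarm(n) only where it exists (i.e. not on Windows)."""
        if hasattr(signal, 'alarm'):
            signal.alarm(n)

    signal_alarm(20)   # arm a 20 second watchdog where SIGALRM is available
    signal_alarm(0)    # cancel it again; a harmless no-op on Windows
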
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/951 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tarfile make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 5 07:39:16 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 06:39:16 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080305063916.DA5C81E400B@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/459 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tarfile make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Mar 5 08:10:37 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 5 Mar 2008 08:10:37 +0100 (CET) Subject: [Python-checkins] r61249 - python/trunk/Doc/whatsnew/2.6.rst Message-ID: <20080305071037.877F21E400B@bag.python.org> Author: georg.brandl Date: Wed Mar 5 08:10:35 2008 New Revision: 61249 Modified: python/trunk/Doc/whatsnew/2.6.rst Log: Fix some rst. Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Wed Mar 5 08:10:35 2008 @@ -850,7 +850,7 @@ positive or negative infinity. This works on any platform with IEEE 754 semantics. (Contributed by Christian Heimes.) - .. Patch 1635. + .. Patch 1635 Other functions in the :mod:`math` module, :func:`isinf` and :func:`isnan`, return true if their floating-point argument is @@ -932,7 +932,7 @@ (Original optimization implemented by Armin Rigo, updated for Python 2.6 by Kevin Jacobs.) - .. % Patch 1700288 + .. Patch 1700288 * All of the functions in the :mod:`struct` module have been rewritten in C, thanks to work at the Need For Speed sprint. @@ -1335,17 +1335,17 @@ long searches can now be interrupted. (Contributed by Josh Hoyt and Ralf Schmitt.) - .. % Patch 846388 + .. Patch 846388 * The :mod:`rgbimg` module has been removed. * The :mod:`sched` module's :class:`scheduler` instances now have a read-only :attr:`queue` attribute that returns the contents of the scheduler's queue, represented as a list of - named tuples with the fields - ``(*time*, *priority*, *action*, *argument*)``. + named tuples with the fields ``(time, priority, action, argument)``. (Contributed by Raymond Hettinger XXX check.) - .. % Patch 1861 + + .. Patch 1861 * The :mod:`sets` module has been deprecated; it's better to use the built-in :class:`set` and :class:`frozenset` types. @@ -1369,7 +1369,7 @@ (Contributed by Adam Olsen.) - .. % Patch 1583 + .. Patch 1583 The :func:`siginterrupt` function is now available from Python code, and allows changing whether signals can interrupt system calls or not. 
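The siginterrupt() note at the end of the r61249 hunk above is easiest to see with a concrete call; a minimal sketch, assuming Python 2.6 on a Unix platform (the function is not available on Windows)::

    import signal

    def handler(signum, frame):
        pass

    signal.signal(signal.SIGALRM, handler)
    # flag=False asks the kernel to restart system calls interrupted by SIGALRM;
    # flag=True lets blocking calls fail with EINTR instead.
    signal.siginterrupt(signal.SIGALRM, False)
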
From buildbot at python.org Wed Mar 5 08:19:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 07:19:25 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080305071925.70E691E400B@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/159 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 5 10:22:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 09:22:36 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080305092236.DD01A1E400B@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/694 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_smtplib sincerely, -The Buildbot From python-checkins at python.org Wed Mar 5 14:58:37 2008 From: python-checkins at python.org (nick.coghlan) Date: Wed, 5 Mar 2008 14:58:37 +0100 (CET) Subject: [Python-checkins] r61250 - in sandbox/trunk/userref: ODF ODF/Chapter01_EssentialConcepts.odt ODF/Chapter02_StatementsAndExpressions.odt ODF/Chapter03_ReferencesAndNamespaces.odt ODF/Chapter04_ControlFlowStatements.odt ODF/Chapter05_FunctionsAndGenerators.odt ODF/Chapter06_ClassesAndMetaclasses.odt ODF/Chapter07_ModulesAndApplications.odt ODF/Chapter08_TheBuiltinNamespace.odt ODF/Chapter09_SummaryOfClassProtocols.odt README.txt ngc_doctest.py test_examples.py Message-ID: <20080305135837.E483A1E4014@bag.python.org> Author: nick.coghlan Date: Wed Mar 5 14:58:36 2008 New Revision: 61250 Added: sandbox/trunk/userref/ sandbox/trunk/userref/ODF/ sandbox/trunk/userref/ODF/Chapter01_EssentialConcepts.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter02_StatementsAndExpressions.odt (contents, props changed) 
sandbox/trunk/userref/ODF/Chapter03_ReferencesAndNamespaces.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter04_ControlFlowStatements.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter05_FunctionsAndGenerators.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter06_ClassesAndMetaclasses.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter07_ModulesAndApplications.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter08_TheBuiltinNamespace.odt (contents, props changed) sandbox/trunk/userref/ODF/Chapter09_SummaryOfClassProtocols.odt (contents, props changed) sandbox/trunk/userref/README.txt (contents, props changed) sandbox/trunk/userref/ngc_doctest.py (contents, props changed) sandbox/trunk/userref/test_examples.py (contents, props changed) Log: Add Python User's Reference manuscript to the sandbox - hopefully it will be of some use to the doc-sig crew Added: sandbox/trunk/userref/ODF/Chapter01_EssentialConcepts.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter02_StatementsAndExpressions.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter03_ReferencesAndNamespaces.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter04_ControlFlowStatements.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter05_FunctionsAndGenerators.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter06_ClassesAndMetaclasses.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter07_ModulesAndApplications.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter08_TheBuiltinNamespace.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/ODF/Chapter09_SummaryOfClassProtocols.odt ============================================================================== Binary file. No diff available. Added: sandbox/trunk/userref/README.txt ============================================================================== --- (empty file) +++ sandbox/trunk/userref/README.txt Wed Mar 5 14:58:36 2008 @@ -0,0 +1,23 @@ +# +# Python User's Reference +# +# Copyright 2008 Nick Coghlan +# Licensed to the Python Software Foundation under a Python Contributor's Agreement +# + +The text in the ODF directory is based on a pre-release manuscript for a book project which is no longer going ahead. After discussion with Guido and the publisher, I am cleaning the files up for donation to the PSF under my existing contributor agreement. + +Any thoughts on how to efficiently convert these files to ReST markup would be appreciated ? it seems to me that it should be possible to use the style information in the ODT XML to generate markup that is appropriate for input to the Python doc building system. 
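The README above asks how the ODT style information could drive a conversion to ReST. ODF text documents are ordinary zip archives whose body lives in content.xml, so a first pass can be sketched with nothing but the standard library. This is a hypothetical starting point, not part of the checkin; it assumes a Python where Element.iter() and itertext() are available, and uses one of the chapter files listed above::

    import zipfile
    import xml.etree.ElementTree as ET

    TEXT_NS = 'urn:oasis:names:tc:opendocument:xmlns:text:1.0'

    def odt_paragraphs(path):
        """Yield (style-name, text) pairs for every paragraph in an ODT file."""
        archive = zipfile.ZipFile(path)
        try:
            root = ET.fromstring(archive.read('content.xml'))
        finally:
            archive.close()
        for para in root.iter('{%s}p' % TEXT_NS):
            style = para.get('{%s}style-name' % TEXT_NS)
            yield style, ''.join(para.itertext())

    for style, text in odt_paragraphs('Chapter01_EssentialConcepts.odt'):
        # A real converter would map paragraph styles (headings, code samples,
        # body text) onto the corresponding ReST constructs here.
        print('%s: %s' % (style, text))
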
+ +If I can figure out how to do the ReST conversion, can get the files up to date for Python 2.6/3.0 in a timely fashion and get enough positive feedback from doc-sig and python-dev, they may be worth considering as the backbone of a new section in the standard Python docs. Alternatively, they may prove to be a useful resource for folks updating other parts of the documentation or working on their own documentation projects, even if they don't end up being used directly + +The sandbox directory also includes a hacked version of Python 2.4's doctest.py to support the script that runs the examples directly from the ODT files (for an audience of one, modifying a copy of doctest was easier at the time than figuring out how to do it properly). + +========= +TODO List +========= + +- any items flagged with XXX in the ODT files +- check that sys.exc_info() is covered in the section on exception (Chapter 4) +- cover contextlib.nested and contexlib.closing (Chapter 4) +- cover both sys.stderr and sys.stdout in section on printing to output streams (Chapter 2) Added: sandbox/trunk/userref/ngc_doctest.py ============================================================================== --- (empty file) +++ sandbox/trunk/userref/ngc_doctest.py Wed Mar 5 14:58:36 2008 @@ -0,0 +1,2721 @@ +# Module doctest. +# Released to the public domain 16-Jan-2001, by Tim Peters (tim at python.org). +# Major enhancements and refactoring by: +# Jim Fulton +# Edward Loper + +# Tweaked by Nick Coghlan to support altering sys.displayhook/excepthook + +# Provided as-is; use at your own risk; no warranty; no promises; enjoy! + +r"""Module doctest -- a framework for running examples in docstrings. + +In simplest use, end each module M to be tested with: + +def _test(): + import doctest + doctest.testmod() + +if __name__ == "__main__": + _test() + +Then running the module as a script will cause the examples in the +docstrings to get executed and verified: + +python M.py + +This won't display anything unless an example fails, in which case the +failing example(s) and the cause(s) of the failure(s) are printed to stdout +(why not stderr? because stderr is a lame hack <0.2 wink>), and the final +line of output is "Test failed.". + +Run it with the -v switch instead: + +python M.py -v + +and a detailed report of all examples tried is printed to stdout, along +with assorted summaries at the end. + +You can force verbose mode by passing "verbose=True" to testmod, or prohibit +it by passing "verbose=False". In either of those cases, sys.argv is not +examined by testmod. + +There are a variety of other ways to run doctests, including integration +with the unittest framework, and support for running non-Python text +files containing doctests. There are also many ways to override parts +of doctest's default behaviors. See the Library Reference Manual for +details. +""" +__docformat__ = 'reStructuredText en' + +__all__ = [ + # 0, Option Flags + 'register_optionflag', + 'DONT_ACCEPT_TRUE_FOR_1', + 'DONT_ACCEPT_BLANKLINE', + 'NORMALIZE_WHITESPACE', + 'ELLIPSIS', + 'IGNORE_EXCEPTION_DETAIL', + 'COMPARISON_FLAGS', + 'REPORT_UDIFF', + 'REPORT_CDIFF', + 'REPORT_NDIFF', + 'REPORT_ONLY_FIRST_FAILURE', + 'REPORTING_FLAGS', + # 1. Utility Functions + 'is_private', + # 2. Example & DocTest + 'Example', + 'DocTest', + # 3. Doctest Parser + 'DocTestParser', + # 4. Doctest Finder + 'DocTestFinder', + # 5. Doctest Runner + 'DocTestRunner', + 'OutputChecker', + 'DocTestFailure', + 'UnexpectedException', + 'DebugRunner', + # 6. 
Test Functions + 'testmod', + 'testfile', + 'run_docstring_examples', + # 7. Tester + 'Tester', + # 8. Unittest Support + 'DocTestSuite', + 'DocFileSuite', + 'set_unittest_reportflags', + # 9. Debugging Support + 'script_from_examples', + 'testsource', + 'debug_src', + 'debug', +] + +import __future__ + +import sys, traceback, inspect, linecache, os, re, types +import unittest, difflib, pdb, tempfile +import warnings +from StringIO import StringIO + +# Don't whine about the deprecated is_private function in this +# module's tests. +warnings.filterwarnings("ignore", "is_private", DeprecationWarning, + __name__, 0) + +# There are 4 basic classes: +# - Example: a pair, plus an intra-docstring line number. +# - DocTest: a collection of examples, parsed from a docstring, plus +# info about where the docstring came from (name, filename, lineno). +# - DocTestFinder: extracts DocTests from a given object's docstring and +# its contained objects' docstrings. +# - DocTestRunner: runs DocTest cases, and accumulates statistics. +# +# So the basic picture is: +# +# list of: +# +------+ +---------+ +-------+ +# |object| --DocTestFinder-> | DocTest | --DocTestRunner-> |results| +# +------+ +---------+ +-------+ +# | Example | +# | ... | +# | Example | +# +---------+ + +# Option constants. + +OPTIONFLAGS_BY_NAME = {} +def register_optionflag(name): + flag = 1 << len(OPTIONFLAGS_BY_NAME) + OPTIONFLAGS_BY_NAME[name] = flag + return flag + +DONT_ACCEPT_TRUE_FOR_1 = register_optionflag('DONT_ACCEPT_TRUE_FOR_1') +DONT_ACCEPT_BLANKLINE = register_optionflag('DONT_ACCEPT_BLANKLINE') +NORMALIZE_WHITESPACE = register_optionflag('NORMALIZE_WHITESPACE') +ELLIPSIS = register_optionflag('ELLIPSIS') +IGNORE_EXCEPTION_DETAIL = register_optionflag('IGNORE_EXCEPTION_DETAIL') + +COMPARISON_FLAGS = (DONT_ACCEPT_TRUE_FOR_1 | + DONT_ACCEPT_BLANKLINE | + NORMALIZE_WHITESPACE | + ELLIPSIS | + IGNORE_EXCEPTION_DETAIL) + +REPORT_UDIFF = register_optionflag('REPORT_UDIFF') +REPORT_CDIFF = register_optionflag('REPORT_CDIFF') +REPORT_NDIFF = register_optionflag('REPORT_NDIFF') +REPORT_ONLY_FIRST_FAILURE = register_optionflag('REPORT_ONLY_FIRST_FAILURE') + +REPORTING_FLAGS = (REPORT_UDIFF | + REPORT_CDIFF | + REPORT_NDIFF | + REPORT_ONLY_FIRST_FAILURE) + +# Special string markers for use in `want` strings: +BLANKLINE_MARKER = '' +ELLIPSIS_MARKER = '...' + +###################################################################### +## Table of Contents +###################################################################### +# 1. Utility Functions +# 2. Example & DocTest -- store test cases +# 3. DocTest Parser -- extracts examples from strings +# 4. DocTest Finder -- extracts test cases from objects +# 5. DocTest Runner -- runs test cases +# 6. Test Functions -- convenient wrappers for testing +# 7. Tester Class -- for backwards compatibility +# 8. Unittest Support +# 9. Debugging Support +# 10. Example Usage + +###################################################################### +## 1. Utility Functions +###################################################################### + +def is_private(prefix, base): + """prefix, base -> true iff name prefix + "." + base is "private". + + Prefix may be an empty string, and base does not contain a period. + Prefix is ignored (although functions you write conforming to this + protocol may make use of it). + Return true iff base begins with an (at least one) underscore, but + does not both begin and end with (at least) two underscores. 
+ + >>> is_private("a.b", "my_func") + False + >>> is_private("____", "_my_func") + True + >>> is_private("someclass", "__init__") + False + >>> is_private("sometypo", "__init_") + True + >>> is_private("x.y.z", "_") + True + >>> is_private("_x.y.z", "__") + False + >>> is_private("", "") # senseless but consistent + False + """ + warnings.warn("is_private is deprecated; it wasn't useful; " + "examine DocTestFinder.find() lists instead", + DeprecationWarning, stacklevel=2) + return base[:1] == "_" and not base[:2] == "__" == base[-2:] + +def _extract_future_flags(globs): + """ + Return the compiler-flags associated with the future features that + have been imported into the given namespace (globs). + """ + flags = 0 + for fname in __future__.all_feature_names: + feature = globs.get(fname, None) + if feature is getattr(__future__, fname): + flags |= feature.compiler_flag + return flags + +def _normalize_module(module, depth=2): + """ + Return the module specified by `module`. In particular: + - If `module` is a module, then return module. + - If `module` is a string, then import and return the + module with that name. + - If `module` is None, then return the calling module. + The calling module is assumed to be the module of + the stack frame at the given depth in the call stack. + """ + if inspect.ismodule(module): + return module + elif isinstance(module, (str, unicode)): + return __import__(module, globals(), locals(), ["*"]) + elif module is None: + return sys.modules[sys._getframe(depth).f_globals['__name__']] + else: + raise TypeError("Expected a module, string, or None") + +def _indent(s, indent=4): + """ + Add the given number of space characters to the beginning every + non-blank line in `s`, and return the result. + """ + # This regexp matches the start of non-blank lines: + return re.sub('(?m)^(?!$)', indent*' ', s) + +def _exception_traceback(exc_info): + """ + Return a string containing a traceback message for the given + exc_info tuple (as returned by sys.exc_info()). + """ + # Get a traceback message. + excout = StringIO() + exc_type, exc_val, exc_tb = exc_info + traceback.print_exception(exc_type, exc_val, exc_tb, file=excout) + return excout.getvalue() + +# Override some StringIO methods. +class _SpoofOut(StringIO): + def getvalue(self): + result = StringIO.getvalue(self) + # If anything at all was written, make sure there's a trailing + # newline. There's no way for the expected output to indicate + # that a trailing newline is missing. + if result and not result.endswith("\n"): + result += "\n" + # Prevent softspace from screwing up the next test case, in + # case they used print with a trailing comma in an example. + if hasattr(self, "softspace"): + del self.softspace + return result + + def truncate(self, size=None): + StringIO.truncate(self, size) + if hasattr(self, "softspace"): + del self.softspace + +# Worst-case linear-time ellipsis matching. +def _ellipsis_match(want, got): + """ + Essentially the only subtle case: + >>> _ellipsis_match('aa...aa', 'aaa') + False + """ + if ELLIPSIS_MARKER not in want: + return want == got + + # Find "the real" strings. + ws = want.split(ELLIPSIS_MARKER) + assert len(ws) >= 2 + + # Deal with exact matches possibly needed at one or both ends. 
+ startpos, endpos = 0, len(got) + w = ws[0] + if w: # starts with exact match + if got.startswith(w): + startpos = len(w) + del ws[0] + else: + return False + w = ws[-1] + if w: # ends with exact match + if got.endswith(w): + endpos -= len(w) + del ws[-1] + else: + return False + + if startpos > endpos: + # Exact end matches required more characters than we have, as in + # _ellipsis_match('aa...aa', 'aaa') + return False + + # For the rest, we only need to find the leftmost non-overlapping + # match for each piece. If there's no overall match that way alone, + # there's no overall match period. + for w in ws: + # w may be '' at times, if there are consecutive ellipses, or + # due to an ellipsis at the start or end of `want`. That's OK. + # Search for an empty string succeeds, and doesn't change startpos. + startpos = got.find(w, startpos, endpos) + if startpos < 0: + return False + startpos += len(w) + + return True + +def _comment_line(line): + "Return a commented form of the given line" + line = line.rstrip() + if line: + return '# '+line + else: + return '#' + +class _OutputRedirectingPdb(pdb.Pdb): + """ + A specialized version of the python debugger that redirects stdout + to a given stream when interacting with the user. Stdout is *not* + redirected when traced code is executed. + """ + def __init__(self, out): + self.__out = out + pdb.Pdb.__init__(self) + + def trace_dispatch(self, *args): + # Redirect stdout to the given stream. + save_stdout = sys.stdout + sys.stdout = self.__out + # Call Pdb's trace dispatch method. + try: + return pdb.Pdb.trace_dispatch(self, *args) + finally: + sys.stdout = save_stdout + +# [XX] Normalize with respect to os.path.pardir? +def _module_relative_path(module, path): + if not inspect.ismodule(module): + raise TypeError, 'Expected a module: %r' % module + if path.startswith('/'): + raise ValueError, 'Module-relative files may not have absolute paths' + + # Find the base directory for the path. + if hasattr(module, '__file__'): + # A normal module/package + basedir = os.path.split(module.__file__)[0] + elif module.__name__ == '__main__': + # An interactive session. + if len(sys.argv)>0 and sys.argv[0] != '': + basedir = os.path.split(sys.argv[0])[0] + else: + basedir = os.curdir + else: + # A module w/o __file__ (this includes builtins) + raise ValueError("Can't resolve paths relative to the module " + + module + " (it has no __file__)") + + # Combine the base directory and the path. + return os.path.join(basedir, *(path.split('/'))) + +###################################################################### +## 2. Example & DocTest +###################################################################### +## - An "example" is a pair, where "source" is a +## fragment of source code, and "want" is the expected output for +## "source." The Example class also includes information about +## where the example was extracted from. +## +## - A "doctest" is a collection of examples, typically extracted from +## a string (such as an object's docstring). The DocTest class also +## includes information about where the string was extracted from. + +class Example: + """ + A single doctest example, consisting of source code and expected + output. `Example` defines the following attributes: + + - source: A single Python statement, always ending with a newline. + The constructor adds a newline if needed. + + - want: The expected output from running the source code (either + from stdout, or a traceback in case of exception). 
`want` ends + with a newline unless it's empty, in which case it's an empty + string. The constructor adds a newline if needed. + + - exc_msg: The exception message generated by the example, if + the example is expected to generate an exception; or `None` if + it is not expected to generate an exception. This exception + message is compared against the return value of + `traceback.format_exception_only()`. `exc_msg` ends with a + newline unless it's `None`. The constructor adds a newline + if needed. + + - lineno: The line number within the DocTest string containing + this Example where the Example begins. This line number is + zero-based, with respect to the beginning of the DocTest. + + - indent: The example's indentation in the DocTest string. + I.e., the number of space characters that preceed the + example's first prompt. + + - options: A dictionary mapping from option flags to True or + False, which is used to override default options for this + example. Any option flags not contained in this dictionary + are left at their default value (as specified by the + DocTestRunner's optionflags). By default, no options are set. + """ + def __init__(self, source, want, exc_msg=None, lineno=0, indent=0, + options=None): + # Normalize inputs. + if not source.endswith('\n'): + source += '\n' + if want and not want.endswith('\n'): + want += '\n' + if exc_msg is not None and not exc_msg.endswith('\n'): + exc_msg += '\n' + # Store properties. + self.source = source + self.want = want + self.lineno = lineno + self.indent = indent + if options is None: options = {} + self.options = options + self.exc_msg = exc_msg + +class DocTest: + """ + A collection of doctest examples that should be run in a single + namespace. Each `DocTest` defines the following attributes: + + - examples: the list of examples. + + - globs: The namespace (aka globals) that the examples should + be run in. + + - name: A name identifying the DocTest (typically, the name of + the object whose docstring this DocTest was extracted from). + + - filename: The name of the file that this DocTest was extracted + from, or `None` if the filename is unknown. + + - lineno: The line number within filename where this DocTest + begins, or `None` if the line number is unavailable. This + line number is zero-based, with respect to the beginning of + the file. + + - docstring: The string that the examples were extracted from, + or `None` if the string is unavailable. + """ + def __init__(self, examples, globs, name, filename, lineno, docstring): + """ + Create a new DocTest containing the given examples. The + DocTest's globals are initialized with a copy of `globs`. + """ + assert not isinstance(examples, basestring), \ + "DocTest no longer accepts str; use DocTestParser instead" + self.examples = examples + self.docstring = docstring + self.globs = globs.copy() + self.name = name + self.filename = filename + self.lineno = lineno + + def __repr__(self): + if len(self.examples) == 0: + examples = 'no examples' + elif len(self.examples) == 1: + examples = '1 example' + else: + examples = '%d examples' % len(self.examples) + return ('' % + (self.name, self.filename, self.lineno, examples)) + + + # This lets us sort tests by name: + def __cmp__(self, other): + if not isinstance(other, DocTest): + return -1 + return cmp((self.name, self.filename, self.lineno, id(self)), + (other.name, other.filename, other.lineno, id(other))) + +###################################################################### +## 3. 
DocTestParser +###################################################################### + +class DocTestParser: + """ + A class used to parse strings containing doctest examples. + """ + # This regular expression is used to find doctest examples in a + # string. It defines three groups: `source` is the source code + # (including leading indentation and prompts); `indent` is the + # indentation of the first (PS1) line of the source code; and + # `want` is the expected output (including leading indentation). + _EXAMPLE_RE = re.compile(r''' + # Source consists of a PS1 line followed by zero or more PS2 lines. + (?P + (?:^(?P [ ]*) >>> .*) # PS1 line + (?:\n [ ]* \.\.\. .*)*) # PS2 lines + \n? + # Want consists of any non-blank lines that do not start with PS1. + (?P (?:(?![ ]*$) # Not a blank line + (?![ ]*>>>) # Not a line starting with PS1 + .*$\n? # But any other line + )*) + ''', re.MULTILINE | re.VERBOSE) + + # A regular expression for handling `want` strings that contain + # expected exceptions. It divides `want` into three pieces: + # - the traceback header line (`hdr`) + # - the traceback stack (`stack`) + # - the exception message (`msg`), as generated by + # traceback.format_exception_only() + # `msg` may have multiple lines. We assume/require that the + # exception message is the first non-indented line starting with a word + # character following the traceback header line. + _EXCEPTION_RE = re.compile(r""" + # Grab the traceback header. Different versions of Python have + # said different things on the first traceback line. + ^(?P Traceback\ \( + (?: most\ recent\ call\ last + | innermost\ last + ) \) : + ) + \s* $ # toss trailing whitespace on the header. + (?P .*?) # don't blink: absorb stuff until... + ^ (?P \w+ .*) # a line *starts* with alphanum. + """, re.VERBOSE | re.MULTILINE | re.DOTALL) + + # A callable returning a true value iff its argument is a blank line + # or contains a single comment. + _IS_BLANK_OR_COMMENT = re.compile(r'^[ ]*(#.*)?$').match + + def parse(self, string, name=''): + """ + Divide the given string into examples and intervening text, + and return them as a list of alternating Examples and strings. + Line numbers for the Examples are 0-based. The optional + argument `name` is a name identifying this string, and is only + used for error messages. + """ + string = string.expandtabs() + # If all lines begin with the same indentation, then strip it. + min_indent = self._min_indent(string) + if min_indent > 0: + string = '\n'.join([l[min_indent:] for l in string.split('\n')]) + + output = [] + charno, lineno = 0, 0 + # Find all doctest examples in the string: + for m in self._EXAMPLE_RE.finditer(string): + # Add the pre-example text to `output`. + output.append(string[charno:m.start()]) + # Update lineno (lines before this example) + lineno += string.count('\n', charno, m.start()) + # Extract info from the regexp match. + (source, options, want, exc_msg) = \ + self._parse_example(m, name, lineno) + # Create an Example, and add it to the list. + if not self._IS_BLANK_OR_COMMENT(source): + output.append( Example(source, want, exc_msg, + lineno=lineno, + indent=min_indent+len(m.group('indent')), + options=options) ) + # Update lineno (lines inside this example) + lineno += string.count('\n', m.start(), m.end()) + # Update charno. + charno = m.end() + # Add any remaining post-example text to `output`. 
+ output.append(string[charno:]) + return output + + def get_doctest(self, string, globs, name, filename, lineno): + """ + Extract all doctest examples from the given string, and + collect them into a `DocTest` object. + + `globs`, `name`, `filename`, and `lineno` are attributes for + the new `DocTest` object. See the documentation for `DocTest` + for more information. + """ + return DocTest(self.get_examples(string, name), globs, + name, filename, lineno, string) + + def get_examples(self, string, name=''): + """ + Extract all doctest examples from the given string, and return + them as a list of `Example` objects. Line numbers are + 0-based, because it's most common in doctests that nothing + interesting appears on the same line as opening triple-quote, + and so the first interesting line is called \"line 1\" then. + + The optional argument `name` is a name identifying this + string, and is only used for error messages. + """ + return [x for x in self.parse(string, name) + if isinstance(x, Example)] + + def _parse_example(self, m, name, lineno): + """ + Given a regular expression match from `_EXAMPLE_RE` (`m`), + return a pair `(source, want)`, where `source` is the matched + example's source code (with prompts and indentation stripped); + and `want` is the example's expected output (with indentation + stripped). + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. + """ + # Get the example's indentation level. + indent = len(m.group('indent')) + + # Divide source into lines; check that they're properly + # indented; and then strip their indentation & prompts. + source_lines = m.group('source').split('\n') + self._check_prompt_blank(source_lines, indent, name, lineno) + self._check_prefix(source_lines[1:], ' '*indent + '.', name, lineno) + source = '\n'.join([sl[indent+4:] for sl in source_lines]) + + # Divide want into lines; check that it's properly indented; and + # then strip the indentation. Spaces before the last newline should + # be preserved, so plain rstrip() isn't good enough. + want = m.group('want') + want_lines = want.split('\n') + if len(want_lines) > 1 and re.match(r' *$', want_lines[-1]): + del want_lines[-1] # forget final newline & spaces after it + self._check_prefix(want_lines, ' '*indent, name, + lineno + len(source_lines)) + want = '\n'.join([wl[indent:] for wl in want_lines]) + + # If `want` contains a traceback message, then extract it. + m = self._EXCEPTION_RE.match(want) + if m: + exc_msg = m.group('msg') + else: + exc_msg = None + + # Extract options from the source. + options = self._find_options(source, name, lineno) + + return source, options, want, exc_msg + + # This regular expression looks for option directives in the + # source code of an example. Option directives are comments + # starting with "doctest:". Warning: this may give false + # positives for string-literals that contain the string + # "#doctest:". Eliminating these false positives would require + # actually parsing the string; but we limit them by ignoring any + # line containing "#doctest:" that is *followed* by a quote mark. + _OPTION_DIRECTIVE_RE = re.compile(r'#\s*doctest:\s*([^\n\'"]*)$', + re.MULTILINE) + + def _find_options(self, source, name, lineno): + """ + Return a dictionary containing option overrides extracted from + option directives in the given source string. + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. 
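# A small sketch of the parser interface introduced above, assuming this
# module is importable as `doctest`; the test string and the name 'sketch'
# are made up for illustration.  get_examples() yields Example objects whose
# source, want, lineno and options come from _parse_example() and
# _find_options(), including any "# doctest:" directive.
import doctest

text = '''
>>> print range(5)    # doctest: +ELLIPSIS
[0, 1, ...]
>>> 2 + 2
4
'''
parser = doctest.DocTestParser()
for ex in parser.get_examples(text, name='sketch'):
    print repr(ex.source), repr(ex.want), ex.lineno, ex.options
# The first example carries {ELLIPSIS: True} in ex.options because of the
# option directive; the second example's options dict is empty.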
+ """ + options = {} + # (note: with the current regexp, this will match at most once:) + for m in self._OPTION_DIRECTIVE_RE.finditer(source): + option_strings = m.group(1).replace(',', ' ').split() + for option in option_strings: + if (option[0] not in '+-' or + option[1:] not in OPTIONFLAGS_BY_NAME): + raise ValueError('line %r of the doctest for %s ' + 'has an invalid option: %r' % + (lineno+1, name, option)) + flag = OPTIONFLAGS_BY_NAME[option[1:]] + options[flag] = (option[0] == '+') + if options and self._IS_BLANK_OR_COMMENT(source): + raise ValueError('line %r of the doctest for %s has an option ' + 'directive on a line with no example: %r' % + (lineno, name, source)) + return options + + # This regular expression finds the indentation of every non-blank + # line in a string. + _INDENT_RE = re.compile('^([ ]*)(?=\S)', re.MULTILINE) + + def _min_indent(self, s): + "Return the minimum indentation of any non-blank line in `s`" + indents = [len(indent) for indent in self._INDENT_RE.findall(s)] + if len(indents) > 0: + return min(indents) + else: + return 0 + + def _check_prompt_blank(self, lines, indent, name, lineno): + """ + Given the lines of a source string (including prompts and + leading indentation), check to make sure that every prompt is + followed by a space character. If any line is not followed by + a space character, then raise ValueError. + """ + for i, line in enumerate(lines): + if len(line) >= indent+4 and line[indent+3] != ' ': + raise ValueError('line %r of the docstring for %s ' + 'lacks blank after %s: %r' % + (lineno+i+1, name, + line[indent:indent+3], line)) + + def _check_prefix(self, lines, prefix, name, lineno): + """ + Check that every line in the given list starts with the given + prefix; if any line does not, then raise a ValueError. + """ + for i, line in enumerate(lines): + if line and not line.startswith(prefix): + raise ValueError('line %r of the docstring for %s has ' + 'inconsistent leading whitespace: %r' % + (lineno+i+1, name, line)) + + +###################################################################### +## 4. DocTest Finder +###################################################################### + +class DocTestFinder: + """ + A class used to extract the DocTests that are relevant to a given + object, from its docstring and the docstrings of its contained + objects. Doctests can currently be extracted from the following + object types: modules, functions, classes, methods, staticmethods, + classmethods, and properties. + """ + + def __init__(self, verbose=False, parser=DocTestParser(), + recurse=True, _namefilter=None, exclude_empty=True): + """ + Create a new doctest finder. + + The optional argument `parser` specifies a class or + function that should be used to create new DocTest objects (or + objects that implement the same interface as DocTest). The + signature for this factory function should match the signature + of the DocTest constructor. + + If the optional argument `recurse` is false, then `find` will + only examine the given object, and not any contained objects. + + If the optional argument `exclude_empty` is false, then `find` + will include tests for objects with empty docstrings. + """ + self._parser = parser + self._verbose = verbose + self._recurse = recurse + self._exclude_empty = exclude_empty + # _namefilter is undocumented, and exists only for temporary backward- + # compatibility support of testmod's deprecated isprivate mess. 
+ self._namefilter = _namefilter + + def find(self, obj, name=None, module=None, globs=None, + extraglobs=None): + """ + Return a list of the DocTests that are defined by the given + object's docstring, or by any of its contained objects' + docstrings. + + The optional parameter `module` is the module that contains + the given object. If the module is not specified or is None, then + the test finder will attempt to automatically determine the + correct module. The object's module is used: + + - As a default namespace, if `globs` is not specified. + - To prevent the DocTestFinder from extracting DocTests + from objects that are imported from other modules. + - To find the name of the file containing the object. + - To help find the line number of the object within its + file. + + Contained objects whose module does not match `module` are ignored. + + If `module` is False, no attempt to find the module will be made. + This is obscure, of use mostly in tests: if `module` is False, or + is None but cannot be found automatically, then all objects are + considered to belong to the (non-existent) module, so all contained + objects will (recursively) be searched for doctests. + + The globals for each DocTest is formed by combining `globs` + and `extraglobs` (bindings in `extraglobs` override bindings + in `globs`). A new copy of the globals dictionary is created + for each DocTest. If `globs` is not specified, then it + defaults to the module's `__dict__`, if specified, or {} + otherwise. If `extraglobs` is not specified, then it defaults + to {}. + + """ + # If name was not specified, then extract it from the object. + if name is None: + name = getattr(obj, '__name__', None) + if name is None: + raise ValueError("DocTestFinder.find: name must be given " + "when obj.__name__ doesn't exist: %r" % + (type(obj),)) + + # Find the module that contains the given object (if obj is + # a module, then module=obj.). Note: this may fail, in which + # case module will be None. + if module is False: + module = None + elif module is None: + module = inspect.getmodule(obj) + + # Read the module's source code. This is used by + # DocTestFinder._find_lineno to find the line number for a + # given object's docstring. + try: + file = inspect.getsourcefile(obj) or inspect.getfile(obj) + source_lines = linecache.getlines(file) + if not source_lines: + source_lines = None + except TypeError: + source_lines = None + + # Initialize globals, and merge in extraglobs. + if globs is None: + if module is None: + globs = {} + else: + globs = module.__dict__.copy() + else: + globs = globs.copy() + if extraglobs is not None: + globs.update(extraglobs) + + # Recursively expore `obj`, extracting DocTests. + tests = [] + self._find(tests, obj, name, module, source_lines, globs, {}) + return tests + + def _filter(self, obj, prefix, base): + """ + Return true if the given object should not be examined. + """ + return (self._namefilter is not None and + self._namefilter(prefix, base)) + + def _from_module(self, module, object): + """ + Return true if the given object is defined in the given + module. 
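# A minimal sketch of DocTestFinder.find(), assuming this module is
# importable as `doctest`; `square` is a throwaway function defined only for
# the illustration.  find() returns one DocTest per docstring that contains
# examples, recursing into contained objects as described above.
import doctest

def square(x):
    """
    >>> square(3)
    9
    """
    return x * x

finder = doctest.DocTestFinder()
for test in finder.find(square, name='square', globs={'square': square}):
    print test.name, test.lineno, len(test.examples)
# Passing a module instead of a single function makes find() recurse into the
# module's __dict__ and its __test__ mapping via _find().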
+ """ + if module is None: + return True + elif inspect.isfunction(object): + return module.__dict__ is object.func_globals + elif inspect.isclass(object): + return module.__name__ == object.__module__ + elif inspect.getmodule(object) is not None: + return module is inspect.getmodule(object) + elif hasattr(object, '__module__'): + return module.__name__ == object.__module__ + elif isinstance(object, property): + return True # [XX] no way not be sure. + else: + raise ValueError("object must be a class or function") + + def _find(self, tests, obj, name, module, source_lines, globs, seen): + """ + Find tests for the given object and any contained objects, and + add them to `tests`. + """ + if self._verbose: + print 'Finding tests in %s' % name + + # If we've already processed this object, then ignore it. + if id(obj) in seen: + return + seen[id(obj)] = 1 + + # Find a test for this object, and add it to the list of tests. + test = self._get_test(obj, name, module, globs, source_lines) + if test is not None: + tests.append(test) + + # Look for tests in a module's contained objects. + if inspect.ismodule(obj) and self._recurse: + for valname, val in obj.__dict__.items(): + # Check if this contained object should be ignored. + if self._filter(val, name, valname): + continue + valname = '%s.%s' % (name, valname) + # Recurse to functions & classes. + if ((inspect.isfunction(val) or inspect.isclass(val)) and + self._from_module(module, val)): + self._find(tests, val, valname, module, source_lines, + globs, seen) + + # Look for tests in a module's __test__ dictionary. + if inspect.ismodule(obj) and self._recurse: + for valname, val in getattr(obj, '__test__', {}).items(): + if not isinstance(valname, basestring): + raise ValueError("DocTestFinder.find: __test__ keys " + "must be strings: %r" % + (type(valname),)) + if not (inspect.isfunction(val) or inspect.isclass(val) or + inspect.ismethod(val) or inspect.ismodule(val) or + isinstance(val, basestring)): + raise ValueError("DocTestFinder.find: __test__ values " + "must be strings, functions, methods, " + "classes, or modules: %r" % + (type(val),)) + valname = '%s.__test__.%s' % (name, valname) + self._find(tests, val, valname, module, source_lines, + globs, seen) + + # Look for tests in a class's contained objects. + if inspect.isclass(obj) and self._recurse: + for valname, val in obj.__dict__.items(): + # Check if this contained object should be ignored. + if self._filter(val, name, valname): + continue + # Special handling for staticmethod/classmethod. + if isinstance(val, staticmethod): + val = getattr(obj, valname) + if isinstance(val, classmethod): + val = getattr(obj, valname).im_func + + # Recurse to methods, properties, and nested classes. + if ((inspect.isfunction(val) or inspect.isclass(val) or + isinstance(val, property)) and + self._from_module(module, val)): + valname = '%s.%s' % (name, valname) + self._find(tests, val, valname, module, source_lines, + globs, seen) + + def _get_test(self, obj, name, module, globs, source_lines): + """ + Return a DocTest for the given object, if it defines a docstring; + otherwise, return None. + """ + # Extract the object's docstring. If it doesn't have one, + # then return None (no test for this object). 
+ if isinstance(obj, basestring): + docstring = obj + else: + try: + if obj.__doc__ is None: + docstring = '' + else: + docstring = obj.__doc__ + if not isinstance(docstring, basestring): + docstring = str(docstring) + except (TypeError, AttributeError): + docstring = '' + + # Find the docstring's location in the file. + lineno = self._find_lineno(obj, source_lines) + + # Don't bother if the docstring is empty. + if self._exclude_empty and not docstring: + return None + + # Return a DocTest for this object. + if module is None: + filename = None + else: + filename = getattr(module, '__file__', module.__name__) + if filename[-4:] in (".pyc", ".pyo"): + filename = filename[:-1] + return self._parser.get_doctest(docstring, globs, name, + filename, lineno) + + def _find_lineno(self, obj, source_lines): + """ + Return a line number of the given object's docstring. Note: + this method assumes that the object has a docstring. + """ + lineno = None + + # Find the line number for modules. + if inspect.ismodule(obj): + lineno = 0 + + # Find the line number for classes. + # Note: this could be fooled if a class is defined multiple + # times in a single file. + if inspect.isclass(obj): + if source_lines is None: + return None + pat = re.compile(r'^\s*class\s*%s\b' % + getattr(obj, '__name__', '-')) + for i, line in enumerate(source_lines): + if pat.match(line): + lineno = i + break + + # Find the line number for functions & methods. + if inspect.ismethod(obj): obj = obj.im_func + if inspect.isfunction(obj): obj = obj.func_code + if inspect.istraceback(obj): obj = obj.tb_frame + if inspect.isframe(obj): obj = obj.f_code + if inspect.iscode(obj): + lineno = getattr(obj, 'co_firstlineno', None)-1 + + # Find the line number where the docstring starts. Assume + # that it's the first line that begins with a quote mark. + # Note: this could be fooled by a multiline function + # signature, where a continuation line begins with a quote + # mark. + if lineno is not None: + if source_lines is None: + return lineno+1 + pat = re.compile('(^|.*:)\s*\w*("|\')') + for lineno in range(lineno, len(source_lines)): + if pat.match(source_lines[lineno]): + return lineno + + # We couldn't find the line number. + return None + +###################################################################### +## 5. DocTest Runner +###################################################################### + +class DocTestRunner: + """ + A class used to run DocTest test cases, and accumulate statistics. + The `run` method is used to process a single DocTest case. It + returns a tuple `(f, t)`, where `t` is the number of test cases + tried, and `f` is the number of test cases that failed. + + >>> tests = DocTestFinder().find(_TestClass) + >>> runner = DocTestRunner(verbose=False) + >>> for test in tests: + ... print runner.run(test) + (0, 2) + (0, 1) + (0, 2) + (0, 2) + + The `summarize` method prints a summary of all the test cases that + have been run by the runner, and returns an aggregated `(f, t)` + tuple: + + >>> runner.summarize(verbose=1) + 4 items passed all tests: + 2 tests in _TestClass + 2 tests in _TestClass.__init__ + 2 tests in _TestClass.get + 1 tests in _TestClass.square + 7 tests in 4 items. + 7 passed and 0 failed. + Test passed. + (0, 7) + + The aggregated number of tried examples and failed examples is + also available via the `tries` and `failures` attributes: + + >>> runner.tries + 7 + >>> runner.failures + 0 + + The comparison between expected outputs and actual outputs is done + by an `OutputChecker`. 
This comparison may be customized with a + number of option flags; see the documentation for `testmod` for + more information. If the option flags are insufficient, then the + comparison may also be customized by passing a subclass of + `OutputChecker` to the constructor. + + The test runner's display output can be controlled in two ways. + First, an output function (`out) can be passed to + `TestRunner.run`; this function will be called with strings that + should be displayed. It defaults to `sys.stdout.write`. If + capturing the output is not sufficient, then the display output + can be also customized by subclassing DocTestRunner, and + overriding the methods `report_start`, `report_success`, + `report_unexpected_exception`, and `report_failure`. + """ + # This divider string is used to separate failure messages, and to + # separate sections of the summary. + DIVIDER = "*" * 70 + + def __init__(self, checker=None, verbose=None, optionflags=0): + """ + Create a new test runner. + + Optional keyword arg `checker` is the `OutputChecker` that + should be used to compare the expected outputs and actual + outputs of doctest examples. + + Optional keyword arg 'verbose' prints lots of stuff if true, + only failures if false; by default, it's true iff '-v' is in + sys.argv. + + Optional argument `optionflags` can be used to control how the + test runner compares expected output to actual output, and how + it displays failures. See the documentation for `testmod` for + more information. + """ + self._checker = checker or OutputChecker() + if verbose is None: + verbose = '-v' in sys.argv + self._verbose = verbose + self.optionflags = optionflags + self.original_optionflags = optionflags + + # Keep track of the examples we've run. + self.tries = 0 + self.failures = 0 + self._name2ft = {} + + # Create a fake output target for capturing doctest output. + self._fakeout = _SpoofOut() + + #///////////////////////////////////////////////////////////////// + # Reporting methods + #///////////////////////////////////////////////////////////////// + + def report_start(self, out, test, example): + """ + Report that the test runner is about to process the given + example. (Only displays a message if verbose=True) + """ + if self._verbose: + if example.want: + out('Trying:\n' + _indent(example.source) + + 'Expecting:\n' + _indent(example.want)) + else: + out('Trying:\n' + _indent(example.source) + + 'Expecting nothing\n') + + def report_success(self, out, test, example, got): + """ + Report that the given example ran successfully. (Only + displays a message if verbose=True) + """ + if self._verbose: + out("ok\n") + + def report_failure(self, out, test, example, got): + """ + Report that the given example failed. + """ + out(self._failure_header(test, example) + + self._checker.output_difference(example, got, self.optionflags)) + + def report_unexpected_exception(self, out, test, example, exc_info): + """ + Report that the given example raised an unexpected exception. + """ + if example.want: + expected = 'Expected:\n' + _indent(example.want) + else: + expected = 'Expected nothing\n' + out(self._failure_header(test, example) + expected + + 'Exception raised:\n' + _indent(_exception_traceback(exc_info))) + + def _failure_header(self, test, example): + out = [self.DIVIDER] + if test.filename: + if test.lineno is not None and example.lineno is not None: + lineno = test.lineno + example.lineno + 1 + else: + lineno = '?' 
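# A sketch of the subclassing hook described in the DocTestRunner docstring
# above: override the report_* methods to change how results are presented.
# Assumes this module is importable as `doctest`; QuietRunner is a
# hypothetical class name used only for illustration.
import doctest

class QuietRunner(doctest.DocTestRunner):
    """Collect failures in a list instead of writing them to `out`."""
    def __init__(self, *args, **kw):
        doctest.DocTestRunner.__init__(self, *args, **kw)
        self.problems = []
    def report_failure(self, out, test, example, got):
        self.problems.append((test.name, example.source, example.want, got))
    def report_unexpected_exception(self, out, test, example, exc_info):
        self.problems.append((test.name, example.source, example.want,
                              doctest._exception_traceback(exc_info)))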
+ out.append('File "%s", line %s, in %s' % + (test.filename, lineno, test.name)) + else: + out.append('Line %s, in %s' % (example.lineno+1, test.name)) + out.append('Failed example:') + source = example.source + out.append(_indent(source)) + return '\n'.join(out) + + #///////////////////////////////////////////////////////////////// + # DocTest Running + #///////////////////////////////////////////////////////////////// + + def __run(self, test, compileflags, out): + """ + Run the examples in `test`. Write the outcome of each example + with one of the `DocTestRunner.report_*` methods, using the + writer function `out`. `compileflags` is the set of compiler + flags that should be used to execute examples. Return a tuple + `(f, t)`, where `t` is the number of examples tried, and `f` + is the number of examples that failed. The examples are run + in the namespace `test.globs`. + """ + # Keep track of the number of failures and tries. + failures = tries = 0 + + # Save the option flags (since option directives can be used + # to modify them). + original_optionflags = self.optionflags + + SUCCESS, FAILURE, BOOM = range(3) # `outcome` state + + check = self._checker.check_output + + # Process each example. + for examplenum, example in enumerate(test.examples): + + # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # reporting after the first failure. + quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and + failures > 0) + + # Merge in the example's options. + self.optionflags = original_optionflags + if example.options: + for (optionflag, val) in example.options.items(): + if val: + self.optionflags |= optionflag + else: + self.optionflags &= ~optionflag + + # Record that we started this example. + tries += 1 + if not quiet: + self.report_start(out, test, example) + + # Use a special filename for compile(), so we can retrieve + # the source code during interactive debugging (see + # __patched_linecache_getlines). + filename = '' % (test.name, examplenum) + + # Run the example in the given context (globs), and record + # any exception that gets raised. (But don't intercept + # keyboard interrupts.) + exception = None + self._fakeout.truncate(0) + try: + # Don't blink! This is where the user's code gets run. + exec compile(example.source, filename, "single", + compileflags, 1) in test.globs + self.debugger.set_continue() # ==== Example Finished ==== + except KeyboardInterrupt: + raise + except: + exception = sys.exc_info() + self.debugger.set_continue() # ==== Example Finished ==== + + got = self._fakeout.getvalue() # the actual output + outcome = FAILURE # guilty until proved innocent or insane + + # If the example executed without raising any exceptions, + # verify its output. + if exception is None: + if check(example.want, got, self.optionflags): + outcome = SUCCESS + + # The example raised an exception: check if it was expected. + else: + exc_info = exception + exc_msg = traceback.format_exception_only(*exc_info[:2])[-1] + outcome = BOOM + # If `example.exc_msg` is None, then we weren't expecting + # an exception. 
+ if example.exc_msg is not None: + wanted_exc_msg = example.exc_msg + elif got and example.want.startswith(got): + # There was output before the exception + wanted_exc_msg = example.want[len(got):] + got = '' + else: + wanted_exc_msg = example.want + if 'SyntaxError' in wanted_exc_msg: + # Syntax errors don't print a reliable traceback + wanted_exc_msg = "\n".join([ + "Traceback (most recent call last):", + " ...", + exc_msg]) + + if not quiet: + got += _exception_traceback(exc_info) + # We expected an exception: see whether it matches. + if check(wanted_exc_msg, exc_msg, self.optionflags): + outcome = SUCCESS + + # Another chance if they didn't care about the detail. + elif self.optionflags & IGNORE_EXCEPTION_DETAIL: + m1 = re.match(r'[^:]*:', wanted_exc_msg) + m2 = re.match(r'[^:]*:', exc_msg) + if m1 and m2 and check(m1.group(0), m2.group(0), + self.optionflags): + outcome = SUCCESS + if outcome is BOOM: + # See if the excepthook makes a difference + hook_stdio = StringIO() + orig_stderr = sys.stderr + orig_stdout = sys.stdout + sys.stdout = sys.stderr = hook_stdio + try: + sys.excepthook(*exc_info) + finally: + sys.stderr = orig_stderr + sys.stdout = orig_stdout + hook_stdio.seek(0) + got = hook_stdio.read() + if check(wanted_exc_msg, got, self.optionflags): + outcome = SUCCESS + else: + outcome = BOOM + + + # Report the outcome. + if outcome is SUCCESS: + if not quiet: + self.report_success(out, test, example, got) + elif outcome is FAILURE: + if not quiet: + self.report_failure(out, test, example, got) + failures += 1 + elif outcome is BOOM: + if not quiet: + print >> sys.stderr, "===========================================" + print >> sys.stderr, wanted_exc_msg + print >> sys.stderr, "*** Comparing to ***" + print >> sys.stderr, got + print >> sys.stderr, "*** Comparing to exception***" + print >> sys.stderr, exc_msg + self.report_unexpected_exception(out, test, example, + exc_info) + failures += 1 + else: + assert False, ("unknown outcome", outcome) + + # Restore the option flags (in case they were modified) + self.optionflags = original_optionflags + + # Record and return the number of failures and tries. + self.__record_outcome(test, failures, tries) + return failures, tries + + def __record_outcome(self, test, f, t): + """ + Record the fact that the given DocTest (`test`) generated `f` + failures out of `t` tried examples. + """ + f2, t2 = self._name2ft.get(test.name, (0,0)) + self._name2ft[test.name] = (f+f2, t+t2) + self.failures += f + self.tries += t + + __LINECACHE_FILENAME_RE = re.compile(r'[\w\.]+)' + r'\[(?P\d+)\]>$') + def __patched_linecache_getlines(self, filename, module_globals=None): + m = self.__LINECACHE_FILENAME_RE.match(filename) + if m and m.group('name') == self.test.name: + example = self.test.examples[int(m.group('examplenum'))] + return example.source.splitlines(True) + else: + return self.save_linecache_getlines(filename, module_globals) + + def run(self, test, compileflags=None, out=None, clear_globs=True): + """ + Run the examples in `test`, and display the results using the + writer function `out`. + + The examples are run in the namespace `test.globs`. If + `clear_globs` is true (the default), then this namespace will + be cleared after the test runs, to help with garbage + collection. If you would like to examine the namespace after + the test completes, then use `clear_globs=False`. + + `compileflags` gives the set of flags that should be used by + the Python compiler when running the examples. 
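# A sketch of the exception comparison implemented above, assuming this
# module is importable as `doctest`; the test string and the name 'sketch'
# are made up.  With IGNORE_EXCEPTION_DETAIL only the "ExceptionClass:"
# prefix of the expected and actual messages has to match.
import doctest

text = '''
>>> int('not a number')
Traceback (most recent call last):
    ...
ValueError: this detail is allowed to differ
'''
test = doctest.DocTestParser().get_doctest(text, {}, 'sketch', None, 0)
runner = doctest.DocTestRunner(verbose=False,
                               optionflags=doctest.IGNORE_EXCEPTION_DETAIL)
print runner.run(test)    # (0, 1): the differing detail is ignored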
If not + specified, then it will default to the set of future-import + flags that apply to `globs`. + + The output of each example is checked using + `DocTestRunner.check_output`, and the results are formatted by + the `DocTestRunner.report_*` methods. + """ + self.test = test + + if compileflags is None: + compileflags = _extract_future_flags(test.globs) + + save_stdout = sys.stdout + if out is None: + out = save_stdout.write + sys.stdout = self._fakeout + + # Patch pdb.set_trace to restore sys.stdout during interactive + # debugging (so it's not still redirected to self._fakeout). + # Note that the interactive output will go to *our* + # save_stdout, even if that's not the real sys.stdout; this + # allows us to write test cases for the set_trace behavior. + save_set_trace = pdb.set_trace + self.debugger = _OutputRedirectingPdb(save_stdout) + self.debugger.reset() + pdb.set_trace = self.debugger.set_trace + + # Patch linecache.getlines, so we can see the example's source + # when we're inside the debugger. + self.save_linecache_getlines = linecache.getlines + linecache.getlines = self.__patched_linecache_getlines + + try: + return self.__run(test, compileflags, out) + finally: + sys.stdout = save_stdout + pdb.set_trace = save_set_trace + linecache.getlines = self.save_linecache_getlines + if clear_globs: + test.globs.clear() + + #///////////////////////////////////////////////////////////////// + # Summarization + #///////////////////////////////////////////////////////////////// + def summarize(self, verbose=None): + """ + Print a summary of all the test cases that have been run by + this DocTestRunner, and return a tuple `(f, t)`, where `f` is + the total number of failed examples, and `t` is the total + number of tried examples. + + The optional `verbose` argument controls how detailed the + summary is. If the verbosity is not specified, then the + DocTestRunner's verbosity is used. + """ + if verbose is None: + verbose = self._verbose + notests = [] + passed = [] + failed = [] + totalt = totalf = 0 + for x in self._name2ft.items(): + name, (f, t) = x + assert f <= t + totalt += t + totalf += f + if t == 0: + notests.append(name) + elif f == 0: + passed.append( (name, t) ) + else: + failed.append(x) + if verbose: + if notests: + print len(notests), "items had no tests:" + notests.sort() + for thing in notests: + print " ", thing + if passed: + print len(passed), "items passed all tests:" + passed.sort() + for thing, count in passed: + print " %3d tests in %s" % (count, thing) + if failed: + print self.DIVIDER + print len(failed), "items had failures:" + failed.sort() + for thing, (f, t) in failed: + print " %3d of %3d in %s" % (f, t, thing) + if verbose: + print totalt, "tests in", len(self._name2ft), "items." + print totalt - totalf, "passed and", totalf, "failed." + if totalf: + print "***Test Failed***", totalf, "failures." + elif verbose: + print "Test passed." + return totalf, totalt + + #///////////////////////////////////////////////////////////////// + # Backward compatibility cruft to maintain doctest.master. + #///////////////////////////////////////////////////////////////// + def merge(self, other): + d = self._name2ft + for name, (f, t) in other._name2ft.items(): + if name in d: + print "*** DocTestRunner.merge: '" + name + "' in both" \ + " testers; summing outcomes." + f2, t2 = d[name] + f = f + f2 + t = t + t2 + d[name] = f, t + +class OutputChecker: + """ + A class used to check the whether the actual output from a doctest + example matches the expected output. 
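# A sketch of the run()/summarize() flow defined above, assuming this module
# is importable as `doctest`; the failing example and the name 'sketch' are
# made up.  The `out` argument captures the runner's report text, and
# summarize() aggregates the per-test (failures, tries) counts kept in
# _name2ft.
import doctest
from StringIO import StringIO

text = '''
>>> 1 + 1
3
'''
test = doctest.DocTestParser().get_doctest(text, {}, 'sketch', None, 0)
runner = doctest.DocTestRunner(verbose=False)
report = StringIO()
print runner.run(test, out=report.write)    # (1, 1): one failure, one try
print runner.summarize(verbose=False)       # (1, 1), after a printed summary
# report.getvalue() now holds the failure report built by report_failure()
# and OutputChecker.output_difference().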
`OutputChecker` defines two + methods: `check_output`, which compares a given pair of outputs, + and returns true if they match; and `output_difference`, which + returns a string describing the differences between two outputs. + """ + def check_output(self, want, got, optionflags): + """ + Return True iff the actual output from an example (`got`) + matches the expected output (`want`). These strings are + always considered to match if they are identical; but + depending on what option flags the test runner is using, + several non-exact match types are also possible. See the + documentation for `TestRunner` for more information about + option flags. + """ + # Handle the common case first, for efficiency: + # if they're string-identical, always return true. + if got == want: + return True + + # The values True and False replaced 1 and 0 as the return + # value for boolean comparisons in Python 2.3. + if not (optionflags & DONT_ACCEPT_TRUE_FOR_1): + if (got,want) == ("True\n", "1\n"): + return True + if (got,want) == ("False\n", "0\n"): + return True + + # can be used as a special sequence to signify a + # blank line, unless the DONT_ACCEPT_BLANKLINE flag is used. + if not (optionflags & DONT_ACCEPT_BLANKLINE): + # Replace in want with a blank line. + want = re.sub('(?m)^%s\s*?$' % re.escape(BLANKLINE_MARKER), + '', want) + # If a line in got contains only spaces, then remove the + # spaces. + got = re.sub('(?m)^\s*?$', '', got) + if got == want: + return True + + # This flag causes doctest to ignore any differences in the + # contents of whitespace strings. Note that this can be used + # in conjunction with the ELLIPSIS flag. + if optionflags & NORMALIZE_WHITESPACE: + got = ' '.join(got.split()) + want = ' '.join(want.split()) + if got == want: + return True + + # The ELLIPSIS flag says to let the sequence "..." in `want` + # match any substring in `got`. + if optionflags & ELLIPSIS: + if _ellipsis_match(want, got): + return True + + # Try sys.displayhook + if sys.displayhook is not sys.__displayhook__: + _hook = sys.displayhook + sys.displayhook = sys.__displayhook__ # Restore displayhook + if got == '': + got = None + fixit = StringIO() + _saved_stdout = sys.stdout + sys.stdout = fixit + _hook(got) + sys.stdout = _saved_stdout + got = fixit.getvalue() + if got == want: + return True + + # We didn't find any match; return false. + return False + + # Should we do a fancy diff? + def _do_a_fancy_diff(self, want, got, optionflags): + # Not unless they asked for a fancy diff. + if not optionflags & (REPORT_UDIFF | + REPORT_CDIFF | + REPORT_NDIFF): + return False + + # If expected output uses ellipsis, a meaningful fancy diff is + # too hard ... or maybe not. In two real-life failures Tim saw, + # a diff was a major help anyway, so this is commented out. + # [todo] _ellipsis_match() knows which pieces do and don't match, + # and could be the basis for a kick-ass diff in this case. + ##if optionflags & ELLIPSIS and ELLIPSIS_MARKER in want: + ## return False + + # ndiff does intraline difference marking, so can be useful even + # for 1-line differences. + if optionflags & REPORT_NDIFF: + return True + + # The other diff types need at least a few lines to be helpful. + return want.count('\n') > 2 and got.count('\n') > 2 + + def output_difference(self, example, got, optionflags): + """ + Return a string describing the differences between the + expected output for a given example (`example`) and the actual + output (`got`). `optionflags` is the set of option flags used + to compare `want` and `got`. 
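# A sketch of OutputChecker.check_output() with the comparison flags handled
# above, assuming this module is importable as `doctest`.
import doctest

checker = doctest.OutputChecker()
print checker.check_output('1 2 3\n', '1  2   3\n', 0)
# False: the whitespace differs and no flags are set.
print checker.check_output('1 2 3\n', '1  2   3\n',
                           doctest.NORMALIZE_WHITESPACE)
# True: runs of whitespace are collapsed before comparing.
print checker.check_output('3.14...\n', '3.14159265\n', doctest.ELLIPSIS)
# True: "..." in the expected output matches any substring of the actual.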
+ """ + want = example.want + # If s are being used, then replace blank lines + # with in the actual output string. + if not (optionflags & DONT_ACCEPT_BLANKLINE): + got = re.sub('(?m)^[ ]*(?=\n)', BLANKLINE_MARKER, got) + + # Check if we should use diff. + if self._do_a_fancy_diff(want, got, optionflags): + # Split want & got into lines. + want_lines = want.splitlines(True) # True == keep line ends + got_lines = got.splitlines(True) + # Use difflib to find their differences. + if optionflags & REPORT_UDIFF: + diff = difflib.unified_diff(want_lines, got_lines, n=2) + diff = list(diff)[2:] # strip the diff header + kind = 'unified diff with -expected +actual' + elif optionflags & REPORT_CDIFF: + diff = difflib.context_diff(want_lines, got_lines, n=2) + diff = list(diff)[2:] # strip the diff header + kind = 'context diff with expected followed by actual' + elif optionflags & REPORT_NDIFF: + engine = difflib.Differ(charjunk=difflib.IS_CHARACTER_JUNK) + diff = list(engine.compare(want_lines, got_lines)) + kind = 'ndiff with -expected +actual' + else: + assert 0, 'Bad diff option' + # Remove trailing whitespace on diff output. + diff = [line.rstrip() + '\n' for line in diff] + return 'Differences (%s):\n' % kind + _indent(''.join(diff)) + + # If we're not using diff, then simply list the expected + # output followed by the actual output. + if want and got: + return 'Expected:\n%sGot:\n%s' % (_indent(want), _indent(got)) + elif want: + return 'Expected:\n%sGot nothing\n' % _indent(want) + elif got: + return 'Expected nothing\nGot:\n%s' % _indent(got) + else: + return 'Expected nothing\nGot nothing\n' + +class DocTestFailure(Exception): + """A DocTest example has failed in debugging mode. + + The exception instance has variables: + + - test: the DocTest object being run + + - excample: the Example object that failed + + - got: the actual output + """ + def __init__(self, test, example, got): + self.test = test + self.example = example + self.got = got + + def __str__(self): + return str(self.test) + +class UnexpectedException(Exception): + """A DocTest example has encountered an unexpected exception + + The exception instance has variables: + + - test: the DocTest object being run + + - excample: the Example object that failed + + - exc_info: the exception info + """ + def __init__(self, test, example, exc_info): + self.test = test + self.example = example + self.exc_info = exc_info + + def __str__(self): + return str(self.test) + +class DebugRunner(DocTestRunner): + r"""Run doc tests but raise an exception as soon as there is a failure. + + If an unexpected exception occurs, an UnexpectedException is raised. + It contains the test, the example, and the original exception: + + >>> runner = DebugRunner(verbose=False) + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> try: + ... runner.run(test) + ... except UnexpectedException, failure: + ... pass + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[0], exc_info[1], exc_info[2] + Traceback (most recent call last): + ... + KeyError + + We wrap the original exception to give the calling application + access to the test and example information. + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> try: + ... runner.run(test) + ... except DocTestFailure, failure: + ... 
pass + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + If a failure or error occurs, the globals are left intact: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 1} + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... >>> raise KeyError + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + Traceback (most recent call last): + ... + UnexpectedException: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 2} + + But the globals are cleared if there is no error: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + (0, 1) + + >>> test.globs + {} + + """ + + def run(self, test, compileflags=None, out=None, clear_globs=True): + r = DocTestRunner.run(self, test, compileflags, out, False) + if clear_globs: + test.globs.clear() + return r + + def report_unexpected_exception(self, out, test, example, exc_info): + raise UnexpectedException(test, example, exc_info) + + def report_failure(self, out, test, example, got): + raise DocTestFailure(test, example, got) + +###################################################################### +## 6. Test Functions +###################################################################### +# These should be backwards compatible. + +# For backward compatibility, a global instance of a DocTestRunner +# class, updated by testmod. +master = None + +def testmod(m=None, name=None, globs=None, verbose=None, isprivate=None, + report=True, optionflags=0, extraglobs=None, + raise_on_error=False, exclude_empty=False): + """m=None, name=None, globs=None, verbose=None, isprivate=None, + report=True, optionflags=0, extraglobs=None, raise_on_error=False, + exclude_empty=False + + Test examples in docstrings in functions and classes reachable + from module m (or the current module if m is not supplied), starting + with m.__doc__. Unless isprivate is specified, private names + are not skipped. + + Also test examples reachable from dict m.__test__ if it exists and is + not None. m.__test__ maps names to functions, classes and strings; + function and class docstrings are tested even if the name is private; + strings are tested directly, as if they were docstrings. + + Return (#failures, #tests). + + See doctest.__doc__ for an overview. + + Optional keyword arg "name" gives the name of the module; by default + use m.__name__. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use m.__dict__. A copy of this + dict is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. This is new in 2.4. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. This is new in 2.3. 
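# The conventional way to invoke testmod() described above; a minimal sketch,
# not a definitive recipe, assuming the module under test runs this block
# when executed as a script.
if __name__ == '__main__':
    import doctest
    failures, tries = doctest.testmod(verbose=False,
                                      optionflags=doctest.ELLIPSIS)
    print failures, 'of', tries, 'examples failed'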
Possible values (see the + docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Deprecated in Python 2.4: + Optional keyword arg "isprivate" specifies a function used to + determine whether a name is private. The default function is + treat all functions as public. Optionally, "isprivate" can be + set to doctest.is_private to skip over functions marked as private + using the underscore naming convention; see its docs for details. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + """ + global master + + if isprivate is not None: + warnings.warn("the isprivate argument is deprecated; " + "examine DocTestFinder.find() lists instead", + DeprecationWarning) + + # If no module was given, then use __main__. + if m is None: + # DWA - m will still be None if this wasn't invoked from the command + # line, in which case the following TypeError is about as good an error + # as we should expect + m = sys.modules.get('__main__') + + # Check that we were actually given a module. + if not inspect.ismodule(m): + raise TypeError("testmod: module required; %r" % (m,)) + + # If no name was given, then use the module's name. + if name is None: + name = m.__name__ + + # Find, parse, and run all tests in the given module. + finder = DocTestFinder(_namefilter=isprivate, exclude_empty=exclude_empty) + + if raise_on_error: + runner = DebugRunner(verbose=verbose, optionflags=optionflags) + else: + runner = DocTestRunner(verbose=verbose, optionflags=optionflags) + + for test in finder.find(m, name, globs=globs, extraglobs=extraglobs): + runner.run(test) + + if report: + runner.summarize() + + if master is None: + master = runner + else: + master.merge(runner) + + return runner.failures, runner.tries + +def testfile(filename, module_relative=True, name=None, package=None, + globs=None, verbose=None, report=True, optionflags=0, + extraglobs=None, raise_on_error=False, parser=DocTestParser()): + """ + Test examples in the given file. Return (#failures, #tests). + + Optional keyword arg "module_relative" specifies how filenames + should be interpreted: + + - If "module_relative" is True (the default), then "filename" + specifies a module-relative path. By default, this path is + relative to the calling module's directory; but if the + "package" argument is specified, then it is relative to that + package. To ensure os-independence, "filename" should use + "/" characters to separate path segments, and should not + be an absolute path (i.e., it may not begin with "/"). + + - If "module_relative" is False, then "filename" specifies an + os-specific path. The path may be absolute or relative (to + the current working directory). + + Optional keyword arg "name" gives the name of the test; by default + use the file's basename. 
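# A sketch of the raise_on_error behaviour documented above, assuming this
# module is importable as `doctest`.  With raise_on_error=True, testmod()
# uses DebugRunner, so the first problem surfaces as DocTestFailure or
# UnexpectedException and can be post-mortem debugged.
import doctest, pdb

if __name__ == '__main__':
    try:
        doctest.testmod(raise_on_error=True)
    except doctest.UnexpectedException, err:
        # err.exc_info is the original (type, value, traceback) triple.
        pdb.post_mortem(err.exc_info[2])
    except doctest.DocTestFailure, err:
        print 'Failed example:', err.example.source.strip()
        print 'Expected %r, got %r' % (err.example.want, err.got)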
+ + Optional keyword argument "package" is a Python package or the + name of a Python package whose directory should be used as the + base directory for a module relative filename. If no package is + specified, then the calling module's directory is used as the base + directory for module relative filenames. It is an error to + specify "package" if "module_relative" is False. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use {}. A copy of this dict + is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. Possible values (see the docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Optional keyword arg "parser" specifies a DocTestParser (or + subclass) that should be used to extract tests from the files. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + """ + global master + + if package and not module_relative: + raise ValueError("Package may only be specified for module-" + "relative paths.") + + # Relativize the path + if module_relative: + package = _normalize_module(package) + filename = _module_relative_path(package, filename) + + # If no name was given, then use the file's name. + if name is None: + name = os.path.basename(filename) + + # Assemble the globals. + if globs is None: + globs = {} + else: + globs = globs.copy() + if extraglobs is not None: + globs.update(extraglobs) + + if raise_on_error: + runner = DebugRunner(verbose=verbose, optionflags=optionflags) + else: + runner = DocTestRunner(verbose=verbose, optionflags=optionflags) + + # Read the file, convert it to a test, and run it. + s = open(filename).read() + test = parser.get_doctest(s, globs, name, filename, 0) + runner.run(test) + + if report: + runner.summarize() + + if master is None: + master = runner + else: + master.merge(runner) + + return runner.failures, runner.tries + +def run_docstring_examples(f, globs, verbose=False, name="NoName", + compileflags=None, optionflags=0): + """ + Test examples in the given object's docstring (`f`), using `globs` + as globals. Optional argument `name` is used in failure messages. + If the optional argument `verbose` is true, then generate output + even if there are no failures. 
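# A sketch of testfile() as documented above, assuming this module is
# importable as `doctest`; 'example.txt' and 'mypackage' are hypothetical
# names used only for illustration.
import doctest

# The path is interpreted relative to the calling module's directory (see
# _module_relative_path above); pass module_relative=False to use an
# OS-specific path instead.
failures, tries = doctest.testfile('example.txt',
                                   optionflags=doctest.NORMALIZE_WHITESPACE)
print failures, 'of', tries, 'examples failed'

# An explicit package can anchor the relative path instead of the caller:
#     doctest.testfile('docs/example.txt', package='mypackage')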
+ + `compileflags` gives the set of flags that should be used by the + Python compiler when running the examples. If not specified, then + it will default to the set of future-import flags that apply to + `globs`. + + Optional keyword arg `optionflags` specifies options for the + testing and output. See the documentation for `testmod` for more + information. + """ + # Find, parse, and run all tests in the given module. + finder = DocTestFinder(verbose=verbose, recurse=False) + runner = DocTestRunner(verbose=verbose, optionflags=optionflags) + for test in finder.find(f, name, globs=globs): + runner.run(test, compileflags=compileflags) + +###################################################################### +## 7. Tester +###################################################################### +# This is provided only for backwards compatibility. It's not +# actually used in any way. + +class Tester: + def __init__(self, mod=None, globs=None, verbose=None, + isprivate=None, optionflags=0): + + warnings.warn("class Tester is deprecated; " + "use class doctest.DocTestRunner instead", + DeprecationWarning, stacklevel=2) + if mod is None and globs is None: + raise TypeError("Tester.__init__: must specify mod or globs") + if mod is not None and not inspect.ismodule(mod): + raise TypeError("Tester.__init__: mod must be a module; %r" % + (mod,)) + if globs is None: + globs = mod.__dict__ + self.globs = globs + + self.verbose = verbose + self.isprivate = isprivate + self.optionflags = optionflags + self.testfinder = DocTestFinder(_namefilter=isprivate) + self.testrunner = DocTestRunner(verbose=verbose, + optionflags=optionflags) + + def runstring(self, s, name): + test = DocTestParser().get_doctest(s, self.globs, name, None, None) + if self.verbose: + print "Running string", name + (f,t) = self.testrunner.run(test) + if self.verbose: + print f, "of", t, "examples failed in string", name + return (f,t) + + def rundoc(self, object, name=None, module=None): + f = t = 0 + tests = self.testfinder.find(object, name, module=module, + globs=self.globs) + for test in tests: + (f2, t2) = self.testrunner.run(test) + (f,t) = (f+f2, t+t2) + return (f,t) + + def rundict(self, d, name, module=None): + import new + m = new.module(name) + m.__dict__.update(d) + if module is None: + module = False + return self.rundoc(m, name, module) + + def run__test__(self, d, name): + import new + m = new.module(name) + m.__test__ = d + return self.rundoc(m, name) + + def summarize(self, verbose=None): + return self.testrunner.summarize(verbose) + + def merge(self, other): + self.testrunner.merge(other.testrunner) + +###################################################################### +## 8. Unittest Support +###################################################################### + +_unittest_reportflags = 0 + +def set_unittest_reportflags(flags): + """Sets the unittest option flags. + + The old flag is returned so that a runner could restore the old + value if it wished to: + + >>> import doctest + >>> old = doctest._unittest_reportflags + >>> doctest.set_unittest_reportflags(REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) == old + True + + >>> doctest._unittest_reportflags == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + + Only reporting flags can be set: + + >>> doctest.set_unittest_reportflags(ELLIPSIS) + Traceback (most recent call last): + ... + ValueError: ('Only reporting flags allowed', 8) + + >>> doctest.set_unittest_reportflags(old) == (REPORT_NDIFF | + ... 
REPORT_ONLY_FIRST_FAILURE) + True + """ + global _unittest_reportflags + + if (flags & REPORTING_FLAGS) != flags: + raise ValueError("Only reporting flags allowed", flags) + old = _unittest_reportflags + _unittest_reportflags = flags + return old + + +class DocTestCase(unittest.TestCase): + + def __init__(self, test, optionflags=0, setUp=None, tearDown=None, + checker=None): + + unittest.TestCase.__init__(self) + self._dt_optionflags = optionflags + self._dt_checker = checker + self._dt_test = test + self._dt_setUp = setUp + self._dt_tearDown = tearDown + + def setUp(self): + test = self._dt_test + + if self._dt_setUp is not None: + self._dt_setUp(test) + + def tearDown(self): + test = self._dt_test + + if self._dt_tearDown is not None: + self._dt_tearDown(test) + + test.globs.clear() + + def runTest(self): + test = self._dt_test + old = sys.stdout + new = StringIO() + optionflags = self._dt_optionflags + + if not (optionflags & REPORTING_FLAGS): + # The option flags don't include any reporting flags, + # so add the default reporting flags + optionflags |= _unittest_reportflags + + runner = DocTestRunner(optionflags=optionflags, + checker=self._dt_checker, verbose=False) + + try: + runner.DIVIDER = "-"*70 + failures, tries = runner.run( + test, out=new.write, clear_globs=False) + finally: + sys.stdout = old + + if failures: + raise self.failureException(self.format_failure(new.getvalue())) + + def format_failure(self, err): + test = self._dt_test + if test.lineno is None: + lineno = 'unknown line number' + else: + lineno = '%s' % test.lineno + lname = '.'.join(test.name.split('.')[-1:]) + return ('Failed doctest test for %s\n' + ' File "%s", line %s, in %s\n\n%s' + % (test.name, test.filename, lineno, lname, err) + ) + + def debug(self): + r"""Run the test case without results and without catching exceptions + + The unit test framework includes a debug method on test cases + and test suites to support post-mortem debugging. The test code + is run in such a way that errors are not caught. This way a + caller can catch the errors and initiate post-mortem debugging. + + The DocTestCase provides a debug method that raises + UnexpectedException errors if there is an unexepcted + exception: + + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + >>> try: + ... case.debug() + ... except UnexpectedException, failure: + ... pass + + The UnexpectedException contains the test, the example, and + the original exception: + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[0], exc_info[1], exc_info[2] + Traceback (most recent call last): + ... + KeyError + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + + >>> try: + ... case.debug() + ... except DocTestFailure, failure: + ... 
pass + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + """ + + self.setUp() + runner = DebugRunner(optionflags=self._dt_optionflags, + checker=self._dt_checker, verbose=False) + runner.run(self._dt_test) + self.tearDown() + + def id(self): + return self._dt_test.name + + def __repr__(self): + name = self._dt_test.name.split('.') + return "%s (%s)" % (name[-1], '.'.join(name[:-1])) + + __str__ = __repr__ + + def shortDescription(self): + return "Doctest: " + self._dt_test.name + +def DocTestSuite(module=None, globs=None, extraglobs=None, test_finder=None, + **options): + """ + Convert doctest tests for a module to a unittest test suite. + + This converts each documentation string in a module that + contains doctest tests to a unittest test case. If any of the + tests in a doc string fail, then the test case fails. An exception + is raised showing the name of the file containing the test and a + (sometimes approximate) line number. + + The `module` argument provides the module to be tested. The argument + can be either a module or a module name. + + If no argument is given, the calling module is used. + + A number of options may be provided as keyword arguments: + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + """ + + if test_finder is None: + test_finder = DocTestFinder() + + module = _normalize_module(module) + tests = test_finder.find(module, globs=globs, extraglobs=extraglobs) + if globs is None: + globs = module.__dict__ + if not tests: + # Why do we want to do this? Because it reveals a bug that might + # otherwise be hidden. + raise ValueError(module, "has no tests") + + tests.sort() + suite = unittest.TestSuite() + for test in tests: + if len(test.examples) == 0: + continue + if not test.filename: + filename = module.__file__ + if filename[-4:] in (".pyc", ".pyo"): + filename = filename[:-1] + test.filename = filename + suite.addTest(DocTestCase(test, **options)) + + return suite + +class DocFileCase(DocTestCase): + + def id(self): + return '_'.join(self._dt_test.name.split('.')) + + def __repr__(self): + return self._dt_test.filename + __str__ = __repr__ + + def format_failure(self, err): + return ('Failed doctest test for %s\n File "%s", line 0\n\n%s' + % (self._dt_test.name, self._dt_test.filename, err) + ) + +def DocFileTest(path, module_relative=True, package=None, + globs=None, parser=DocTestParser(), **options): + if globs is None: + globs = {} + + if package and not module_relative: + raise ValueError("Package may only be specified for module-" + "relative paths.") + + # Relativize the path. + if module_relative: + package = _normalize_module(package) + path = _module_relative_path(package, path) + + # Find the file and read it. + name = os.path.basename(path) + doc = open(path).read() + + # Convert it to a test, and wrap it in a DocFileCase. 
+ test = parser.get_doctest(doc, globs, name, path, 0) + return DocFileCase(test, **options) + +def DocFileSuite(*paths, **kw): + """A unittest suite for one or more doctest files. + + The path to each doctest file is given as a string; the + interpretation of that string depends on the keyword argument + "module_relative". + + A number of options may be provided as keyword arguments: + + module_relative + If "module_relative" is True, then the given file paths are + interpreted as os-independent module-relative paths. By + default, these paths are relative to the calling module's + directory; but if the "package" argument is specified, then + they are relative to that package. To ensure os-independence, + "filename" should use "/" characters to separate path + segments, and may not be an absolute path (i.e., it may not + begin with "/"). + + If "module_relative" is False, then the given file paths are + interpreted as os-specific paths. These paths may be absolute + or relative (to the current working directory). + + package + A Python package or the name of a Python package whose directory + should be used as the base directory for module relative paths. + If "package" is not specified, then the calling module's + directory is used as the base directory for module relative + filenames. It is an error to specify "package" if + "module_relative" is False. + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + + parser + A DocTestParser (or subclass) that should be used to extract + tests from the files. + """ + suite = unittest.TestSuite() + + # We do this here so that _normalize_module is called at the right + # level. If it were called in DocFileTest, then this function + # would be the caller and we might guess the package incorrectly. + if kw.get('module_relative', True): + kw['package'] = _normalize_module(kw.get('package')) + + for path in paths: + suite.addTest(DocFileTest(path, **kw)) + + return suite + +###################################################################### +## 9. Debugging Support +###################################################################### + +def script_from_examples(s): + r"""Extract script from text with examples. + + Converts text with examples to a Python script. Example input is + converted to regular code. Example output and all other words + are converted to comments: + + >>> text = ''' + ... Here are examples of simple math. + ... + ... Python has super accurate integer addition + ... + ... >>> 2 + 2 + ... 5 + ... + ... And very friendly error messages: + ... + ... >>> 1/0 + ... To Infinity + ... And + ... Beyond + ... + ... You can use logic if you want: + ... + ... >>> if 0: + ... ... blah + ... ... blah + ... ... + ... + ... Ho hum + ... ''' + + >>> print script_from_examples(text) + # Here are examples of simple math. 
+ # + # Python has super accurate integer addition + # + 2 + 2 + # Expected: + ## 5 + # + # And very friendly error messages: + # + 1/0 + # Expected: + ## To Infinity + ## And + ## Beyond + # + # You can use logic if you want: + # + if 0: + blah + blah + # + # Ho hum + """ + output = [] + for piece in DocTestParser().parse(s): + if isinstance(piece, Example): + # Add the example's source code (strip trailing NL) + output.append(piece.source[:-1]) + # Add the expected output: + want = piece.want + if want: + output.append('# Expected:') + output += ['## '+l for l in want.split('\n')[:-1]] + else: + # Add non-example text. + output += [_comment_line(l) + for l in piece.split('\n')[:-1]] + + # Trim junk on both ends. + while output and output[-1] == '#': + output.pop() + while output and output[0] == '#': + output.pop(0) + # Combine the output, and return it. + return '\n'.join(output) + +def testsource(module, name): + """Extract the test sources from a doctest docstring as a script. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the doc string with tests to be debugged. + """ + module = _normalize_module(module) + tests = DocTestFinder().find(module) + test = [t for t in tests if t.name == name] + if not test: + raise ValueError(name, "not found in tests") + test = test[0] + testsrc = script_from_examples(test.docstring) + return testsrc + +def debug_src(src, pm=False, globs=None): + """Debug a single doctest docstring, in argument `src`'""" + testsrc = script_from_examples(src) + debug_script(testsrc, pm, globs) + +def debug_script(src, pm=False, globs=None): + "Debug a test script. `src` is the script, as a string." + import pdb + + # Note that tempfile.NameTemporaryFile() cannot be used. As the + # docs say, a file so created cannot be opened by name a second time + # on modern Windows boxes, and execfile() needs to open it. + srcfilename = tempfile.mktemp(".py", "doctestdebug") + f = open(srcfilename, 'w') + f.write(src) + f.close() + + try: + if globs: + globs = globs.copy() + else: + globs = {} + + if pm: + try: + execfile(srcfilename, globs, globs) + except: + print sys.exc_info()[1] + pdb.post_mortem(sys.exc_info()[2]) + else: + # Note that %r is vital here. '%s' instead can, e.g., cause + # backslashes to get treated as metacharacters on Windows. + pdb.run("execfile(%r)" % srcfilename, globs, globs) + + finally: + os.remove(srcfilename) + +def debug(module, name, pm=False): + """Debug a single doctest docstring. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the docstring with tests to be debugged. + """ + module = _normalize_module(module) + testsrc = testsource(module, name) + debug_script(testsrc, pm, module.__dict__) + +###################################################################### +## 10. Example Usage +###################################################################### +class _TestClass: + """ + A pointless class, for sanity-checking of docstring testing. + + Methods: + square() + get() + + >>> _TestClass(13).get() + _TestClass(-12).get() + 1 + >>> hex(_TestClass(13).square().get()) + '0xa9' + """ + + def __init__(self, val): + """val -> _TestClass object with associated value val. 
+ + >>> t = _TestClass(123) + >>> print t.get() + 123 + """ + + self.val = val + + def square(self): + """square() -> square TestClass's associated value + + >>> _TestClass(13).square().get() + 169 + """ + + self.val = self.val ** 2 + return self + + def get(self): + """get() -> return TestClass's associated value. + + >>> x = _TestClass(-42) + >>> print x.get() + -42 + """ + + return self.val + +__test__ = {"_TestClass": _TestClass, + "string": r""" + Example of a string object, searched as-is. + >>> x = 1; y = 2 + >>> x + y, x * y + (3, 2) + """, + + "bool-int equivalence": r""" + In 2.2, boolean expressions displayed + 0 or 1. By default, we still accept + them. This can be disabled by passing + DONT_ACCEPT_TRUE_FOR_1 to the new + optionflags argument. + >>> 4 == 4 + 1 + >>> 4 == 4 + True + >>> 4 > 4 + 0 + >>> 4 > 4 + False + """, + + "blank lines": r""" + Blank lines can be marked with : + >>> print 'foo\n\nbar\n' + foo + + bar + + """, + + "ellipsis": r""" + If the ellipsis flag is used, then '...' can be used to + elide substrings in the desired output: + >>> print range(1000) #doctest: +ELLIPSIS + [0, 1, 2, ..., 999] + """, + + "whitespace normalization": r""" + If the whitespace normalization flag is used, then + differences in whitespace are ignored. + >>> print range(30) #doctest: +NORMALIZE_WHITESPACE + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, + 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, + 27, 28, 29] + """, + } + +def _test(): + r = unittest.TextTestRunner() + r.run(DocTestSuite()) + +if __name__ == "__main__": + _test() Added: sandbox/trunk/userref/test_examples.py ============================================================================== --- (empty file) +++ sandbox/trunk/userref/test_examples.py Wed Mar 5 14:58:36 2008 @@ -0,0 +1,202 @@ +#!/usr/bin/env python +"""Test embedded examples for the Python Programmer's Reference""" +import os, os.path +import ngc_doctest as doctest # Custom doctest that checks sys.displayhook +import re +import zipfile +try: + from xml.etree.cElementTree import iterparse +except ImportError: + from cElementTree import iterparse +from cStringIO import StringIO + +def _teststring(data, name="", globs=None, verbose=None, + report=True, optionflags=0, extraglobs=None, + raise_on_error=False, parser=doctest.DocTestParser()): + """ + Test examples in the given data string. Return (#failures, #tests). + + Optional keyword arg "name" gives the name of the test; by default + uses "". + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use {}. A copy of this dict + is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. 
Possible values (see the docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Optional keyword arg "parser" specifies a DocTestParser (or + subclass) that should be used to extract tests from the files. + + Advanced tomfoolery: teststring runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + """ + # Assemble the globals. + if globs is None: + globs = {} + else: + globs = globs.copy() + if extraglobs is not None: + globs.update(extraglobs) + # Make the test runner + if raise_on_error: + runner = doctest.DebugRunner(verbose=verbose, + optionflags=optionflags) + else: + runner = doctest.DocTestRunner(verbose=verbose, + optionflags=optionflags) + # Convert the string to a test, and run it. + test = parser.get_doctest(data, globs, name, None, 0) + runner.run(test) + if report: + runner.summarize() + return runner.failures, runner.tries + +test_options = doctest.ELLIPSIS | doctest.IGNORE_EXCEPTION_DETAIL +import __future__ +test_globals = dict(absolute_import=__future__.absolute_import, + division=__future__.division, + with_statement=__future__.with_statement, + __name__="__main__") + +def _is_text_example(name): + return re.search("examples\d\d\.txt", name) is not None + +def _test_text_file(name): + return doctest.testfile(name, optionflags=test_options) + +def _is_odt_example(name): + return re.search("Chapter\d\d.*\.odt", name) is not None + + +_style_attr = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}style-name" +def _in_style(elem, styles): + if styles is None: + return True + try: + return elem.attrib[_style_attr] in styles + except KeyError: + return False + +_space_tag = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}s" +_num_spaces_attr = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}c" +_para_tag = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}p" +_span_tag = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}span" +def _para_text(elem, styles=None): + if elem.tag != _para_tag: + return '' + if not _in_style(elem, styles): + return '\n' + para = [] + # print elem + if elem.text is not None: + para.append(elem.text) + for subelem in elem: + if subelem.tag == _space_tag: + try: + num_spaces = int(subelem.attrib[_num_spaces_attr]) + except KeyError: + pass + else: + para.append(" " * num_spaces) + elif subelem.tag == _span_tag and subelem.text is not None: + para.append(subelem.text) + if subelem.tail is not None: + para.append(subelem.tail) + if elem.tail is not None: + para.append(elem.tail) + if para and not para[-1].endswith('\n'): + para.append('\n') + result = ''.join(para) + if result.rstrip(): + # print result + return result + return '' + +def _odt_data_to_text(data, styles=None): + result = [] + for event, elem in iterparse(data): + result.append(_para_text(elem, styles)) + return ''.join(result) + +def _get_odt_text(name, styles=None): + zf = zipfile.ZipFile(name, 'r') + data = zf.read("content.xml") + zf.close() + 
return _odt_data_to_text(StringIO(data), styles) + +_code_styles = "CDT CDT1 CDTX C1".split() + +def _test_odt_file(name): + txt = _get_odt_text(name, _code_styles) + # print txt + return _teststring(txt, name=os.path.basename(name), + globs=test_globals, + optionflags=test_options) + +extension_checks = { + ".txt" : _is_text_example, + ".odt" : _is_odt_example, +} + +extension_tests = { + ".txt" : _test_text_file, + ".odt" : _test_odt_file, +} + +def abs_walk(walk_dir): + walk_path = os.path.abspath(walk_dir) + for path, subdirs, fnames in os.walk(walk_path): + for fname in fnames: + abs_name = os.path.join(path, fname) + __, ext = os.path.splitext(fname) + yield abs_name, ext + +def _test_all(test_dir='.'): + for fname, fext in abs_walk(test_dir): + try: + is_example = extension_checks[fext] + test_example = extension_tests[fext] + except KeyError: + continue + if is_example(fname): + failed, tests = test_example(fname) + print ("Passed %d of %d tests in %s" % + (tests - failed, tests, fname)) + +if __name__ == "__main__": + _test_all() + + From python-checkins at python.org Wed Mar 5 15:06:42 2008 From: python-checkins at python.org (nick.coghlan) Date: Wed, 5 Mar 2008 15:06:42 +0100 (CET) Subject: [Python-checkins] r61251 - sandbox/trunk/userref/ODF/Chapter07_ModulesAndApplications.odt Message-ID: <20080305140642.CD9D11E4014@bag.python.org> Author: nick.coghlan Date: Wed Mar 5 15:06:36 2008 New Revision: 61251 Modified: sandbox/trunk/userref/ODF/Chapter07_ModulesAndApplications.odt Log: Minor edits to one of the chapter files Modified: sandbox/trunk/userref/ODF/Chapter07_ModulesAndApplications.odt ============================================================================== Binary files. No diff available. From nnorwitz at gmail.com Wed Mar 5 16:09:04 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 5 Mar 2008 10:09:04 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (2) Message-ID: <20080305150904.GA24806@python.psfb.org> More important issues: ---------------------- test_threadedtempfile leaked [0, 0, 103] references, sum=103 test_userlist leaked [50, 50, 50] references, sum=150 Less important issues: ---------------------- test_smtplib leaked [0, 0, 86] references, sum=86 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From python-checkins at python.org Wed Mar 5 15:53:40 2008 From: python-checkins at python.org (thomas.heller) Date: Wed, 5 Mar 2008 15:53:40 +0100 (CET) Subject: [Python-checkins] r61252 - python/trunk/Misc/NEWS Message-ID: <20080305145340.2AA171E4015@bag.python.org> Author: thomas.heller Date: Wed Mar 5 15:53:39 2008 New Revision: 61252 Modified: python/trunk/Misc/NEWS Log: News entry for yesterdays commit. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 5 15:53:39 2008 @@ -15,6 +15,16 @@ - Issue #2238: Some syntax errors in *args and **kwargs expressions could give bogus error messages. +Library +------- + +- The bundled libffi copy is now in sync with the recently released + libffi3.0.4 version, apart from some small changes to + Modules/_ctypes/libffi/configure.ac. + On OS X, preconfigured libffi files are used. + On all linux systems the --with-system-ffi configure option defaults + to "yes". + What's New in Python 2.6 alpha 1? 
================================= From python-checkins at python.org Wed Mar 5 16:34:30 2008 From: python-checkins at python.org (thomas.heller) Date: Wed, 5 Mar 2008 16:34:30 +0100 (CET) Subject: [Python-checkins] r61253 - in python/trunk: Doc/library/struct.rst Lib/ctypes/__init__.py Lib/test/test_struct.py Misc/NEWS Modules/_ctypes/_ctypes.c Modules/_ctypes/_ctypes_test.c Modules/_ctypes/cfield.c Modules/_struct.c Message-ID: <20080305153430.12B211E4015@bag.python.org> Author: thomas.heller Date: Wed Mar 5 16:34:29 2008 New Revision: 61253 Modified: python/trunk/Doc/library/struct.rst python/trunk/Lib/ctypes/__init__.py python/trunk/Lib/test/test_struct.py python/trunk/Misc/NEWS python/trunk/Modules/_ctypes/_ctypes.c python/trunk/Modules/_ctypes/_ctypes_test.c python/trunk/Modules/_ctypes/cfield.c python/trunk/Modules/_struct.c Log: Issue 1872: Changed the struct module typecode from 't' to '?', for compatibility with PEP3118. Modified: python/trunk/Doc/library/struct.rst ============================================================================== --- python/trunk/Doc/library/struct.rst (original) +++ python/trunk/Doc/library/struct.rst Wed Mar 5 16:34:29 2008 @@ -77,7 +77,7 @@ +--------+-------------------------+--------------------+-------+ | ``B`` | :ctype:`unsigned char` | integer | | +--------+-------------------------+--------------------+-------+ -| ``t`` | :ctype:`_Bool` | bool | \(1) | +| ``?`` | :ctype:`_Bool` | bool | \(1) | +--------+-------------------------+--------------------+-------+ | ``h`` | :ctype:`short` | integer | | +--------+-------------------------+--------------------+-------+ @@ -110,7 +110,7 @@ Notes: (1) - The ``'t'`` conversion code corresponds to the :ctype:`_Bool` type defined by + The ``'?'`` conversion code corresponds to the :ctype:`_Bool` type defined by C99. If this type is not available, it is simulated using a :ctype:`char`. In standard mode, it is always represented by one byte. @@ -158,7 +158,7 @@ values, meaning a Python long integer will be used to hold the pointer; other platforms use 32-bit pointers and will use a Python integer. -For the ``'t'`` format character, the return value is either :const:`True` or +For the ``'?'`` format character, the return value is either :const:`True` or :const:`False`. When packing, the truth value of the argument object is used. Either 0 or 1 in the native or standard bool representation will be packed, and any non-zero value will be True when unpacking. Modified: python/trunk/Lib/ctypes/__init__.py ============================================================================== --- python/trunk/Lib/ctypes/__init__.py (original) +++ python/trunk/Lib/ctypes/__init__.py Wed Mar 5 16:34:29 2008 @@ -240,7 +240,7 @@ _check_size(c_void_p) class c_bool(_SimpleCData): - _type_ = "t" + _type_ = "?" # This cache maps types to pointers to them. _pointer_type_cache = {} Modified: python/trunk/Lib/test/test_struct.py ============================================================================== --- python/trunk/Lib/test/test_struct.py (original) +++ python/trunk/Lib/test/test_struct.py Wed Mar 5 16:34:29 2008 @@ -84,8 +84,8 @@ if sz * 3 != struct.calcsize('iii'): raise TestFailed, 'inconsistent sizes' -fmt = 'cbxxxxxxhhhhiillffdt' -fmt3 = '3c3b18x12h6i6l6f3d3t' +fmt = 'cbxxxxxxhhhhiillffd?' +fmt3 = '3c3b18x12h6i6l6f3d3?' 
sz = struct.calcsize(fmt) sz3 = struct.calcsize(fmt3) if sz * 3 != sz3: @@ -111,7 +111,7 @@ t = True for prefix in ('', '@', '<', '>', '=', '!'): - for format in ('xcbhilfdt', 'xcBHILfdt'): + for format in ('xcbhilfd?', 'xcBHILfd?'): format = prefix + format if verbose: print "trying:", format @@ -160,11 +160,11 @@ ('f', -2.0, '\300\000\000\000', '\000\000\000\300', 0), ('d', -2.0, '\300\000\000\000\000\000\000\000', '\000\000\000\000\000\000\000\300', 0), - ('t', 0, '\0', '\0', 0), - ('t', 3, '\1', '\1', 1), - ('t', True, '\1', '\1', 0), - ('t', [], '\0', '\0', 1), - ('t', (1,), '\1', '\1', 1), + ('?', 0, '\0', '\0', 0), + ('?', 3, '\1', '\1', 1), + ('?', True, '\1', '\1', 0), + ('?', [], '\0', '\0', 1), + ('?', (1,), '\1', '\1', 1), ] for fmt, arg, big, lil, asy in tests: @@ -633,13 +633,13 @@ false = (), [], [], '', 0 true = [1], 'test', 5, -1, 0xffffffffL+1, 0xffffffff/2 - falseFormat = prefix + 't' * len(false) + falseFormat = prefix + '?' * len(false) if verbose: print 'trying bool pack/unpack on', false, 'using format', falseFormat packedFalse = struct.pack(falseFormat, *false) unpackedFalse = struct.unpack(falseFormat, packedFalse) - trueFormat = prefix + 't' * len(true) + trueFormat = prefix + '?' * len(true) if verbose: print 'trying bool pack/unpack on', true, 'using format', trueFormat packedTrue = struct.pack(trueFormat, *true) @@ -658,10 +658,10 @@ raise TestFailed('%r did not unpack as false' % t) if prefix and verbose: - print 'trying size of bool with format %r' % (prefix+'t') - packed = struct.pack(prefix+'t', 1) + print 'trying size of bool with format %r' % (prefix+'?') + packed = struct.pack(prefix+'?', 1) - if len(packed) != struct.calcsize(prefix+'t'): + if len(packed) != struct.calcsize(prefix+'?'): raise TestFailed('packed length is not equal to calculated size') if len(packed) != 1 and prefix: @@ -670,7 +670,7 @@ print 'size of bool in native format is %i' % (len(packed)) for c in '\x01\x7f\xff\x0f\xf0': - if struct.unpack('>t', c)[0] is not True: + if struct.unpack('>?', c)[0] is not True: raise TestFailed('%c did not unpack as True' % c) test_bool() Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 5 16:34:29 2008 @@ -18,6 +18,9 @@ Library ------- +- Issue #1872: The struct module typecode for _Bool has been changed + from 't' to '?'. + - The bundled libffi copy is now in sync with the recently released libffi3.0.4 version, apart from some small changes to Modules/_ctypes/libffi/configure.ac. 
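
A minimal sketch of what the typecode change above means in practice, assuming an
interpreter that already contains this commit (2.6a1 or later); the native size of
_Bool still varies by platform, so only the standard-mode layout is checked byte
for byte, and the names below are purely illustrative:

    import struct
    from ctypes import c_bool, sizeof

    # Standard (portable) modes always pack _Bool into exactly one byte.
    assert struct.pack('>?', True) == '\x01'
    assert struct.pack('<?', 0) == '\x00'
    assert struct.unpack('>?', '\x7f')[0] is True   # any non-zero byte unpacks as True

    # Packing uses the truth value of the argument, so any object works.
    assert struct.unpack('=?', struct.pack('=?', []))[0] is False

    # Native mode uses the platform's _Bool size, so just report it.
    print struct.calcsize('?'), sizeof(c_bool)

The switch from 't' to '?' keeps the struct and ctypes codes aligned with the
buffer-format characters proposed in PEP 3118, which is the stated motivation for
the change.
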
Modified: python/trunk/Modules/_ctypes/_ctypes.c ============================================================================== --- python/trunk/Modules/_ctypes/_ctypes.c (original) +++ python/trunk/Modules/_ctypes/_ctypes.c Wed Mar 5 16:34:29 2008 @@ -1242,7 +1242,7 @@ */ -static char *SIMPLE_TYPE_CHARS = "cbBhHiIlLdfuzZqQPXOvtg"; +static char *SIMPLE_TYPE_CHARS = "cbBhHiIlLdfuzZqQPXOv?g"; static PyObject * c_wchar_p_from_param(PyObject *type, PyObject *value) Modified: python/trunk/Modules/_ctypes/_ctypes_test.c ============================================================================== --- python/trunk/Modules/_ctypes/_ctypes_test.c (original) +++ python/trunk/Modules/_ctypes/_ctypes_test.c Wed Mar 5 16:34:29 2008 @@ -25,6 +25,15 @@ /* some functions handy for testing */ +EXPORT(void)testfunc_array(int values[4]) +{ + printf("testfunc_array %d %d %d %d\n", + values[0], + values[1], + values[2], + values[3]); +} + EXPORT(long double)testfunc_Ddd(double a, double b) { long double result = (long double)(a * b); Modified: python/trunk/Modules/_ctypes/cfield.c ============================================================================== --- python/trunk/Modules/_ctypes/cfield.c (original) +++ python/trunk/Modules/_ctypes/cfield.c Wed Mar 5 16:34:29 2008 @@ -726,7 +726,7 @@ #endif static PyObject * -t_set(void *ptr, PyObject *value, Py_ssize_t size) +bool_set(void *ptr, PyObject *value, Py_ssize_t size) { switch (PyObject_IsTrue(value)) { case -1: @@ -741,7 +741,7 @@ } static PyObject * -t_get(void *ptr, Py_ssize_t size) +bool_get(void *ptr, Py_ssize_t size) { return PyBool_FromLong((long)*(BOOL_TYPE *)ptr); } @@ -1645,15 +1645,15 @@ { 'v', vBOOL_set, vBOOL_get, &ffi_type_sshort}, #endif #if SIZEOF__BOOL == 1 - { 't', t_set, t_get, &ffi_type_uchar}, /* Also fallback for no native _Bool support */ + { '?', bool_set, bool_get, &ffi_type_uchar}, /* Also fallback for no native _Bool support */ #elif SIZEOF__BOOL == SIZEOF_SHORT - { 't', t_set, t_get, &ffi_type_ushort}, + { '?', bool_set, bool_get, &ffi_type_ushort}, #elif SIZEOF__BOOL == SIZEOF_INT - { 't', t_set, t_get, &ffi_type_uint, I_set_sw, I_get_sw}, + { '?', bool_set, bool_get, &ffi_type_uint, I_set_sw, I_get_sw}, #elif SIZEOF__BOOL == SIZEOF_LONG - { 't', t_set, t_get, &ffi_type_ulong, L_set_sw, L_get_sw}, + { '?', bool_set, bool_get, &ffi_type_ulong, L_set_sw, L_get_sw}, #elif SIZEOF__BOOL == SIZEOF_LONG_LONG - { 't', t_set, t_get, &ffi_type_ulong, Q_set_sw, Q_get_sw}, + { '?', bool_set, bool_get, &ffi_type_ulong, Q_set_sw, Q_get_sw}, #endif /* SIZEOF__BOOL */ { 'O', O_set, O_get, &ffi_type_pointer}, { 0, NULL, NULL, NULL}, Modified: python/trunk/Modules/_struct.c ============================================================================== --- python/trunk/Modules/_struct.c (original) +++ python/trunk/Modules/_struct.c Wed Mar 5 16:34:29 2008 @@ -799,7 +799,7 @@ {'q', sizeof(PY_LONG_LONG), LONG_LONG_ALIGN, nu_longlong, np_longlong}, {'Q', sizeof(PY_LONG_LONG), LONG_LONG_ALIGN, nu_ulonglong,np_ulonglong}, #endif - {'t', sizeof(BOOL_TYPE), BOOL_ALIGN, nu_bool, np_bool}, + {'?', sizeof(BOOL_TYPE), BOOL_ALIGN, nu_bool, np_bool}, {'f', sizeof(float), FLOAT_ALIGN, nu_float, np_float}, {'d', sizeof(double), DOUBLE_ALIGN, nu_double, np_double}, {'P', sizeof(void *), VOID_P_ALIGN, nu_void_p, np_void_p}, @@ -1036,7 +1036,7 @@ {'L', 4, 0, bu_uint, bp_uint}, {'q', 8, 0, bu_longlong, bp_longlong}, {'Q', 8, 0, bu_ulonglong, bp_ulonglong}, - {'t', 1, 0, bu_bool, bp_bool}, + {'?', 1, 0, bu_bool, bp_bool}, {'f', 4, 0, bu_float, bp_float}, {'d', 8, 
0, bu_double, bp_double}, {0} @@ -1255,7 +1255,7 @@ {'L', 4, 0, lu_uint, lp_uint}, {'q', 8, 0, lu_longlong, lp_longlong}, {'Q', 8, 0, lu_ulonglong, lp_ulonglong}, - {'t', 1, 0, bu_bool, bp_bool}, /* Std rep not endian dep, + {'?', 1, 0, bu_bool, bp_bool}, /* Std rep not endian dep, but potentially different from native rep -- reuse bx_bool funcs. */ {'f', 4, 0, lu_float, lp_float}, {'d', 8, 0, lu_double, lp_double}, From python-checkins at python.org Wed Mar 5 17:41:10 2008 From: python-checkins at python.org (skip.montanaro) Date: Wed, 5 Mar 2008 17:41:10 +0100 (CET) Subject: [Python-checkins] r61254 - python/trunk/README Message-ID: <20080305164110.42C7C1E4016@bag.python.org> Author: skip.montanaro Date: Wed Mar 5 17:41:09 2008 New Revision: 61254 Modified: python/trunk/README Log: Elaborate on the role of the altinstall target when installing multiple versions. Modified: python/trunk/README ============================================================================== --- python/trunk/README (original) +++ python/trunk/README Wed Mar 5 17:41:09 2008 @@ -968,14 +968,8 @@ name is the manual page, installed as "/usr/local/man/man1/python.1" by default. -If you have a previous installation of Python that you don't -want to replace yet, use - - make altinstall - -This installs the same set of files as "make install" except it -doesn't create the hard link to "python" named "python" and -it doesn't install the manual page at all. +If you want to install multiple versions of Python see the section below +entitled "Installing multiple versions". The only thing you may have to install manually is the Python mode for Emacs found in Misc/python-mode.el. (But then again, more recent @@ -988,6 +982,25 @@ PATH, you may want to set up a symlink in /usr/local/bin. +Installing multiple versions +---------------------------- + +On Unix and Mac systems if you intend to install multiple versions of Python +using the same installation prefix (--prefix argument to the configure +script) you must take care that your primary python executable is not +overwritten by the installation of a different versio. All files and +directories installed using "make altinstall" contain the major and minor +version and can thus live side-by-side. "make install" also creates +${prefix}/bin/python which refers to ${prefix}/bin/pythonX.Y. If you intend +to install multiple versions using the same prefix you must decide which +version (if any) is your "primary" version. Install that version using +"make install". Install all other versions using "make altinstall". + +For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6 being +the primary version, you would execute "make install" in your 2.6 build +directory and "make altinstall" in the others. + + Configuration options and variables ----------------------------------- From python-checkins at python.org Wed Mar 5 20:31:46 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 5 Mar 2008 20:31:46 +0100 (CET) Subject: [Python-checkins] r61255 - python/trunk/Doc/using/cmdline.rst Message-ID: <20080305193146.C10511E4022@bag.python.org> Author: georg.brandl Date: Wed Mar 5 20:31:44 2008 New Revision: 61255 Modified: python/trunk/Doc/using/cmdline.rst Log: #2239: PYTHONPATH delimiter is os.pathsep. 
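
A small sketch of the behaviour this doc fix describes, assuming nothing beyond
the standard library; the environment lookup and variable names are only for
illustration:

    import os

    # PYTHONPATH entries are separated by os.pathsep: ':' on Unix, ';' on Windows.
    raw = os.environ.get('PYTHONPATH', '')
    entries = [p for p in raw.split(os.pathsep) if p]

    # The interpreter silently ignores directories that do not exist;
    # this simply reports which entries would actually be used.
    for entry in entries:
        print entry, os.path.isdir(entry)

Splitting and joining with os.pathsep rather than a hard-coded colon keeps the
same script correct on both Unix and Windows, which is the point of the wording
change below.
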
Modified: python/trunk/Doc/using/cmdline.rst ============================================================================== --- python/trunk/Doc/using/cmdline.rst (original) +++ python/trunk/Doc/using/cmdline.rst Wed Mar 5 20:31:44 2008 @@ -356,7 +356,8 @@ Augment the default search path for module files. The format is the same as the shell's :envvar:`PATH`: one or more directory pathnames separated by - colons. Non-existent directories are silently ignored. + :data:`os.pathsep` (e.g. colons on Unix or semicolons on Windows). + Non-existent directories are silently ignored. The default search path is installation dependent, but generally begins with :file:`{prefix}/lib/python{version}`` (see :envvar:`PYTHONHOME` above). It From python-checkins at python.org Wed Mar 5 21:59:58 2008 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 5 Mar 2008 21:59:58 +0100 (CET) Subject: [Python-checkins] r61256 - in python/trunk: Lib/test/test_itertools.py Misc/NEWS Modules/itertoolsmodule.c Message-ID: <20080305205958.A63291E4003@bag.python.org> Author: raymond.hettinger Date: Wed Mar 5 21:59:58 2008 New Revision: 61256 Modified: python/trunk/Lib/test/test_itertools.py python/trunk/Misc/NEWS python/trunk/Modules/itertoolsmodule.c Log: C implementation of itertools.permutations(). Modified: python/trunk/Lib/test/test_itertools.py ============================================================================== --- python/trunk/Lib/test/test_itertools.py (original) +++ python/trunk/Lib/test/test_itertools.py Wed Mar 5 21:59:58 2008 @@ -47,15 +47,6 @@ 'Factorial' return prod(range(1, n+1)) -def permutations(iterable, r=None): - # XXX use this until real permutations code is added - pool = tuple(iterable) - n = len(pool) - r = n if r is None else r - for indices in product(range(n), repeat=r): - if len(set(indices)) == r: - yield tuple(pool[i] for i in indices) - class TestBasicOps(unittest.TestCase): def test_chain(self): self.assertEqual(list(chain('abc', 'def')), list('abcdef')) @@ -117,6 +108,8 @@ self.assertEqual(len(set(c)), r) # no duplicate elements self.assertEqual(list(c), sorted(c)) # keep original ordering self.assert_(all(e in values for e in c)) # elements taken from input iterable + self.assertEqual(list(c), + [e for e in values if e in c]) # comb is a subsequence of the input iterable self.assertEqual(result, list(combinations1(values, r))) # matches first pure python version self.assertEqual(result, list(combinations2(values, r))) # matches first pure python version @@ -127,9 +120,10 @@ def test_permutations(self): self.assertRaises(TypeError, permutations) # too few arguments self.assertRaises(TypeError, permutations, 'abc', 2, 1) # too many arguments -## self.assertRaises(TypeError, permutations, None) # pool is not iterable -## self.assertRaises(ValueError, permutations, 'abc', -2) # r is negative -## self.assertRaises(ValueError, permutations, 'abc', 32) # r is too big + self.assertRaises(TypeError, permutations, None) # pool is not iterable + self.assertRaises(ValueError, permutations, 'abc', -2) # r is negative + self.assertRaises(ValueError, permutations, 'abc', 32) # r is too big + self.assertRaises(TypeError, permutations, 'abc', 's') # r is not an int or None self.assertEqual(list(permutations(range(3), 2)), [(0,1), (0,2), (1,0), (1,2), (2,0), (2,1)]) @@ -182,7 +176,7 @@ self.assertEqual(result, list(permutations(values))) # test default r # Test implementation detail: tuple re-use -## self.assertEqual(len(set(map(id, permutations('abcde', 3)))), 1) + 
self.assertEqual(len(set(map(id, permutations('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(permutations('abcde', 3))))), 1) def test_count(self): @@ -407,12 +401,23 @@ list(product(*args, **dict(repeat=r)))) self.assertEqual(len(list(product(*[range(7)]*6))), 7**6) self.assertRaises(TypeError, product, range(6), None) + + def product2(*args, **kwds): + 'Pure python version used in docs' + pools = map(tuple, args) * kwds.get('repeat', 1) + result = [[]] + for pool in pools: + result = [x+[y] for x in result for y in pool] + for prod in result: + yield tuple(prod) + argtypes = ['', 'abc', '', xrange(0), xrange(4), dict(a=1, b=2, c=3), set('abcdefg'), range(11), tuple(range(13))] for i in range(100): args = [random.choice(argtypes) for j in range(random.randrange(5))] expected_len = prod(map(len, args)) self.assertEqual(len(list(product(*args))), expected_len) + self.assertEqual(list(product(*args)), list(product2(*args))) args = map(iter, args) self.assertEqual(len(list(product(*args))), expected_len) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 5 21:59:58 2008 @@ -699,7 +699,7 @@ - Added itertools.product() which forms the Cartesian product of the input iterables. -- Added itertools.combinations(). +- Added itertools.combinations() and itertools.permutations(). - Patch #1541463: optimize performance of cgi.FieldStorage operations. Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Wed Mar 5 21:59:58 2008 @@ -2238,6 +2238,279 @@ }; +/* permutations object ************************************************************ + +def permutations(iterable, r=None): + 'permutations(range(3), 2) --> (0,1) (0,2) (1,0) (1,2) (2,0) (2,1)' + pool = tuple(iterable) + n = len(pool) + r = n if r is None else r + indices = range(n) + cycles = range(n-r+1, n+1)[::-1] + yield tuple(pool[i] for i in indices[:r]) + while n: + for i in reversed(range(r)): + cycles[i] -= 1 + if cycles[i] == 0: + indices[i:] = indices[i+1:] + indices[i:i+1] + cycles[i] = n - i + else: + j = cycles[i] + indices[i], indices[-j] = indices[-j], indices[i] + yield tuple(pool[i] for i in indices[:r]) + break + else: + return +*/ + +typedef struct { + PyObject_HEAD + PyObject *pool; /* input converted to a tuple */ + Py_ssize_t *indices; /* one index per element in the pool */ + Py_ssize_t *cycles; /* one rollover counter per element in the result */ + PyObject *result; /* most recently returned result tuple */ + Py_ssize_t r; /* size of result tuple */ + int stopped; /* set to 1 when the permutations iterator is exhausted */ +} permutationsobject; + +static PyTypeObject permutations_type; + +static PyObject * +permutations_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +{ + permutationsobject *po; + Py_ssize_t n; + Py_ssize_t r; + PyObject *robj = Py_None; + PyObject *pool = NULL; + PyObject *iterable = NULL; + Py_ssize_t *indices = NULL; + Py_ssize_t *cycles = NULL; + Py_ssize_t i; + static char *kwargs[] = {"iterable", "r", NULL}; + + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O:permutations", kwargs, + &iterable, &robj)) + return NULL; + + pool = PySequence_Tuple(iterable); + if (pool == NULL) + goto error; + n = PyTuple_GET_SIZE(pool); + + r = n; + if (robj != Py_None) { + r = PyInt_AsSsize_t(robj); + 
if (r == -1 && PyErr_Occurred()) + goto error; + } + if (r < 0) { + PyErr_SetString(PyExc_ValueError, "r must be non-negative"); + goto error; + } + if (r > n) { + PyErr_SetString(PyExc_ValueError, "r cannot be bigger than the iterable"); + goto error; + } + + indices = PyMem_Malloc(n * sizeof(Py_ssize_t)); + cycles = PyMem_Malloc(r * sizeof(Py_ssize_t)); + if (indices == NULL || cycles == NULL) { + PyErr_NoMemory(); + goto error; + } + + for (i=0 ; itp_alloc(type, 0); + if (po == NULL) + goto error; + + po->pool = pool; + po->indices = indices; + po->cycles = cycles; + po->result = NULL; + po->r = r; + po->stopped = 0; + + return (PyObject *)po; + +error: + if (indices != NULL) + PyMem_Free(indices); + if (cycles != NULL) + PyMem_Free(cycles); + Py_XDECREF(pool); + return NULL; +} + +static void +permutations_dealloc(permutationsobject *po) +{ + PyObject_GC_UnTrack(po); + Py_XDECREF(po->pool); + Py_XDECREF(po->result); + PyMem_Free(po->indices); + PyMem_Free(po->cycles); + Py_TYPE(po)->tp_free(po); +} + +static int +permutations_traverse(permutationsobject *po, visitproc visit, void *arg) +{ + if (po->pool != NULL) + Py_VISIT(po->pool); + if (po->result != NULL) + Py_VISIT(po->result); + return 0; +} + +static PyObject * +permutations_next(permutationsobject *po) +{ + PyObject *elem; + PyObject *oldelem; + PyObject *pool = po->pool; + Py_ssize_t *indices = po->indices; + Py_ssize_t *cycles = po->cycles; + PyObject *result = po->result; + Py_ssize_t n = PyTuple_GET_SIZE(pool); + Py_ssize_t r = po->r; + Py_ssize_t i, j, k, index; + + if (po->stopped) + return NULL; + + if (result == NULL) { + /* On the first pass, initialize result tuple using the indices */ + result = PyTuple_New(r); + if (result == NULL) + goto empty; + po->result = result; + for (i=0; i 1) { + PyObject *old_result = result; + result = PyTuple_New(r); + if (result == NULL) + goto empty; + po->result = result; + for (i=0; i=0 ; i--) { + cycles[i] -= 1; + if (cycles[i] == 0) { + /* rotatation: indices[i:] = indices[i+1:] + indices[i:i+1] */ + index = indices[i]; + for (j=i ; jstopped = 1; + return NULL; +} + +PyDoc_STRVAR(permutations_doc, +"permutations(iterables[, r]) --> permutations object\n\ +\n\ +Return successive r-length permutations of elements in the iterable.\n\n\ +permutations(range(4), 3) --> (0,1,2), (0,1,3), (0,2,3), (1,2,3)"); + +static PyTypeObject permutations_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "itertools.permutations", /* tp_name */ + sizeof(permutationsobject), /* tp_basicsize */ + 0, /* tp_itemsize */ + /* methods */ + (destructor)permutations_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_BASETYPE, /* tp_flags */ + permutations_doc, /* tp_doc */ + (traverseproc)permutations_traverse, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + PyObject_SelfIter, /* tp_iter */ + (iternextfunc)permutations_next, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + permutations_new, /* tp_new */ + PyObject_GC_Del, /* tp_free */ 
+}; + + /* ifilter object ************************************************************/ typedef struct { @@ -3295,6 +3568,7 @@ &count_type, &izip_type, &iziplongest_type, + &permutations_type, &product_type, &repeat_type, &groupby_type, From python-checkins at python.org Wed Mar 5 22:04:32 2008 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 5 Mar 2008 22:04:32 +0100 (CET) Subject: [Python-checkins] r61257 - python/trunk/Modules/itertoolsmodule.c Message-ID: <20080305210432.9403A1E4003@bag.python.org> Author: raymond.hettinger Date: Wed Mar 5 22:04:32 2008 New Revision: 61257 Modified: python/trunk/Modules/itertoolsmodule.c Log: Small code cleanup. Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Wed Mar 5 22:04:32 2008 @@ -2362,10 +2362,8 @@ static int permutations_traverse(permutationsobject *po, visitproc visit, void *arg) { - if (po->pool != NULL) - Py_VISIT(po->pool); - if (po->result != NULL) - Py_VISIT(po->result); + Py_VISIT(po->pool); + Py_VISIT(po->result); return 0; } From buildbot at python.org Wed Mar 5 22:29:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 21:29:33 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080305212933.EDC471E4003@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1016 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed clean sincerely, -The Buildbot From python-checkins at python.org Wed Mar 5 22:59:50 2008 From: python-checkins at python.org (brett.cannon) Date: Wed, 5 Mar 2008 22:59:50 +0100 (CET) Subject: [Python-checkins] r61258 - peps/trunk/pep-3108.txt Message-ID: <20080305215950.C24641E4003@bag.python.org> Author: brett.cannon Date: Wed Mar 5 22:59:50 2008 New Revision: 61258 Modified: peps/trunk/pep-3108.txt Log: Add the 'url' package. Do note that the functionality from urllib will only be merged into the pacakge if the documentation is properly updated. Otherwise the module can be made available as an external module for those who wish to continue to use it. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Wed Mar 5 22:59:50 2008 @@ -371,6 +371,11 @@ + Guido has previously supported the deprecation [#thread-deprecation]_. + +* urllib + + + Superceded by urllib2. + + Functionality unique to urllib will be kept in the `url package`_. * UserDict [done] @@ -576,11 +581,28 @@ turtle tk.turtle ================== =============================== -.. [4] ``tk.filedialog`` can safely combine ``FileDialog`` and ``tkFileDialog`` - as there are no naming conflicts. +.. [4] ``tk.filedialog`` can safely combine ``FileDialog`` and + ``tkFileDialog`` as there are no naming conflicts. .. [5] ``tk.simpledialog`` can safely combine ``SimpleDialog`` and ``tkSimpleDialog`` have no naming conflicts. 
+ + +url package +/////////// + +================== =============================== +Current Name Replacement Name +================== =============================== +urllib2 url.request +urlparse url.parse +urllib url.parse, url.request [6]_ +================== =============================== + +.. [6] The quoting-related functions from ``urllib`` will be added + to ``url.parse``. ``urllib.URLOpener`` and ``FancyUrlOpener`` + will be added to ``url.request`` as long as the documentation + for both modules is updated. xmlrpc package From python-checkins at python.org Wed Mar 5 23:13:12 2008 From: python-checkins at python.org (barry.warsaw) Date: Wed, 5 Mar 2008 23:13:12 +0100 (CET) Subject: [Python-checkins] r61259 - sandbox/trunk/release sandbox/trunk/release/release.py Message-ID: <20080305221312.5B99E1E401D@bag.python.org> Author: barry.warsaw Date: Wed Mar 5 23:13:11 2008 New Revision: 61259 Added: sandbox/trunk/release/ sandbox/trunk/release/release.py Log: Benjamin Peterson's release script, contributed to the cause. Added: sandbox/trunk/release/release.py ============================================================================== --- (empty file) +++ sandbox/trunk/release/release.py Wed Mar 5 23:13:11 2008 @@ -0,0 +1,237 @@ +"An assistant for making Python releases by Benjamin Peterson" +#!/usr/bin/env python +from __future__ import with_statement + +import sys +import os +import optparse +import re +import subprocess +import shutil +import tempfile + +# Ideas stolen from Mailman's release script, Lib/tokens.py and welease + +def error(*msgs): + print >> sys.stderr, "**ERROR**" + for msg in msgs: + print >> sys.stderr, msg + sys.exit(1) + +def run_cmd(args, silent=False): + cmd = " ".join(args) + if not silent: + print "Executing %s" % cmd + try: + if silent: + code = subprocess.call(cmd, shell=True, stdout=PIPE) + else: + code = subprocess.call(cmd, shell=True) + except OSError: + error("%s failed" % cmd) + +def check_env(): + if "EDITOR" not in os.environ: + error("editor not detected.", + "Please set your EDITOR enviroment variable") + if not os.path.exists(".svn"): + error("CWD is not a Subversion checkout") + +def get_arg_parser(): + usage = "%prog [options] tagname" + p = optparse.OptionParser(usage=usage) + p.add_option("-b", "--bump", + default=False, action="store_true", + help="bump the revision number in important files") + p.add_option("-e", "--export", + default=False, action="store_true", + help="Export the SVN tag to a tarball") + p.add_option("-m", "--branch", + default=False, action="store_true", + help="create a maintance branch to go along with the release") + p.add_option("-t", "--tag", + default=False, action="store_true", + help="Tag the release in Subversion") + return p + +def constant_replace(fn, updated_constants, comment_start="/*", comment_end="*/"): + "Inserts in between --start constant-- and --end constant-- in a file" + start_tag = comment_start + "--start constants--" + comment_end + end_tag = comment_start + "--end constants--" + comment_end + with open(fn) as fp: + lines = fp.read().splitlines() + try: + start = lines.index(start_tag) + 1 + end = lines.index(end_tag) + except ValueError: + error("%s doesn't have constant tags" % fn) + lines[start:end] = [updated_constants] + with open(fn, "w") as fp: + fp.write("\n".join(lines)) + +def bump(tag): + print "Bumping version to %s" % tag + + wanted_file = "Misc/RPM/python-%s.spec" % tag.basic_version + print "Updating %s" % wanted_file, + if not os.path.exists(wanted_file): + specs = 
os.listdir("Misc/RPM/") + for file in specs: + if file.startswith("python-"): + break + full_path = os.path.join("Misc/RPM/", file) + print "\nrenaming %s to %s" % (full_path, wanted_file) + run_cmd(["svn", "rename", "--force", full_path, wanted_file]) + print "File was renamed; please commit" + run_cmd(["svn", "commit"]) + new = "%define version " + tag.text + \ + "\n%define libver " + tag.basic_version + constant_replace(wanted_file, new, "#", "") + print "done" + + print "Updating Include/patchlevel.h...", + template = """#define PY_MAJOR_VERSION [major] +#define PY_MINOR_VERSION [minor] +#define PY_MICRO_VERSION [patch] +#define PY_RELEASE_LEVEL [level] +#define PY_RELEASE_SERIAL [serial] +#define PY_VERSION \"[text]\"""" + for what in ("major", "minor", "patch", "serial", "text"): + template = template.replace("[" + what + "]", str(getattr(tag, what))) + level_defines = {"a" : "PY_RELEASE_LEVEL_ALPHA", + "b" : "PY_RELEASE_LEVEL_BETA", + "c" : "PY_RELEASE_LEVEL_GAMMA", + "f" : "PY_RELEASE_LEVEL_FINAL"} + template = template.replace("[level]", level_defines[tag.level]) + constant_replace("Include/patchlevel.h", template) + print "done" + + print "Updating Lib/idlelib/idlever.py...", + with open("Lib/idlelib/idlever.py", "w") as fp: + new = "IDLE_VERSION = \"%s\"\n" % tag.next_text + fp.write(new) + print "done" + + print "Updating Lib/distutils/__init__.py...", + new = "__version__ = \"%s\"" % tag.text + constant_replace("Lib/distutils/__init__.py", new, "#", "") + print "done" + + other_files = ["README"] + if tag.patch == 0 and tag.level == "a" and tag.serial == 0: + other_files += ["Doc/tutorial/interpreter.rst", + "Doc/tutorial/stdlib.rst", "Doc/tutorial/stdlib2.rst"] + print "\nManual editing time..." + for fn in other_files: + print "Edit %s" % fn + manual_edit(fn) + + print "Bumped revision" + print "Please commit and use --tag" + +def manual_edit(fn): + run_cmd([os.environ["EDITOR"], fn]) + +def export(tag): + temp_dir = tempfile.mkdtemp("pyrelease") + if not os.path.exists("dist") and not os.path.isdir("dist"): + print "creating dist directory" + os.mkdir("dist") + tgz = "dist/Python-%s.tgz" % tag.text + bz = "dist/Python-%s.tar.bz2" % tag.text + old_cur = os.getcwd() + os.chdir(temp_dir) + try: + try: + print "Exporting tag" + run_cmd(["svn", "export", + "http://svn.python.org/projects/python/tags/r%s" + % tag.text.replace(".", ""), "release"]) + print "Making .tgz" + run_cmd(["tar cf - release | gzip -9 > release.tgz"]) + print "Making .tar.bz2" + run_cmd(["tar cf - release " + "| bzip2 -9 > release.tar.bz2"]) + finally: + os.chdir(old_cur) + print "Moving files to dist" + os.rename(os.path.join(temp_dir, "release.tgz"), tgz) + os.rename(os.path.join(temp_dir, "release.tar.bz2"), bz) + finally: + print "Cleaning up" + shutil.rmtree(temp_dir) + print "Calculating md5sums" + run_cmd(["md5sum", tgz, ">", tgz + ".md5"]) + run_cmd(["md5sum", bz, ">", bz + ".md5"]) + print "**Now extract the archives and run the tests**" + +class Tag: + def __init__(self, text, major, minor, patch, level, serial): + self.text = text + self.next_text = self.text + self.major = major + self.minor = minor + self.patch = patch + self.level = level + self.serial = serial + self.basic_version = major + "." 
+ minor + + def __str__(self): + return self.text + +def break_up_tag(tag): + exp = re.compile(r"(\d+)(?:\.(\d+)(?:\.(\d+))?)?(?:([abc])(\d+))?") + result = exp.search(tag) + if result is None: + error("tag %s is not valid" % tag) + data = list(result.groups()) + # fix None level + if data[3] is None: + data[3] = "f" + # None Everythign else should be 0 + for i, thing in enumerate(data): + if thing is None: + data[i] = 0 + return Tag(tag, *data) + +def branch(tag): + if tag.minor > 0 or tag.patch > 0 or tag.level != "f": + print "It doesn't look like your making a final release." + if raw_input("Are you sure you want to branch?") != "y": + return + run_cmd(["svn", "copy", get_current_location(), + "svn+ssh://svn.python.org/projects/python/branches/" + "release%s-maint" % (tag.major + tag.minor)]) + +def get_current_location(): + data = subprocess.Popen("svn info", shell=True, + stdout=subprocess.PIPE).stdout.read().splitlines() + for line in data: + if line.startswith("URL: "): + return line.lstrip("URL: ") + +def make_tag(tag): + run_cmd(["svn", "copy", get_current_location(), + "svn+ssh://svn.python.org/projects/python/tags/r" + + tag.text.replace(".", "")]) + +def main(argv): + parser = get_arg_parser() + options, args = parser.parse_args(argv) + if len(args) != 2: + parser.print_usage() + sys.exit(1) + tag = break_up_tag(args[1]) + if not options.export: + check_env() + if options.bump: + bump(tag) + elif options.tag: + make_tag(tag) + elif options.branch: + branch(tag) + elif options.export: + export(tag) + +if __name__ == "__main__": + main(sys.argv) From python-checkins at python.org Wed Mar 5 23:24:31 2008 From: python-checkins at python.org (martin.v.loewis) Date: Wed, 5 Mar 2008 23:24:31 +0100 (CET) Subject: [Python-checkins] r61260 - python/trunk/Tools/buildbot/clean.bat Message-ID: <20080305222431.A3ACE1E402B@bag.python.org> Author: martin.v.loewis Date: Wed Mar 5 23:24:31 2008 New Revision: 61260 Modified: python/trunk/Tools/buildbot/clean.bat Log: cd PCbuild only after deleting all pyc files. Modified: python/trunk/Tools/buildbot/clean.bat ============================================================================== --- python/trunk/Tools/buildbot/clean.bat (original) +++ python/trunk/Tools/buildbot/clean.bat Wed Mar 5 23:24:31 2008 @@ -1,7 +1,7 @@ @rem Used by the buildbot "clean" step. call "%VS90COMNTOOLS%vsvars32.bat" -cd PCbuild @echo Deleting .pyc/.pyo files ... del /s Lib\*.pyc Lib\*.pyo +cd PCbuild vcbuild /clean pcbuild.sln "Release|Win32" vcbuild /clean pcbuild.sln "Debug|Win32" From buildbot at python.org Thu Mar 6 00:33:29 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 23:33:29 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080305233329.6F9F01E4003@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2984 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 00:34:11 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 23:34:11 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080305233411.A68151E4003@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/464 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 00:56:43 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 05 Mar 2008 23:56:43 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080305235643.B39511E4003@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/698 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_xmlrpc Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request 
self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable sincerely, -The Buildbot From python-checkins at python.org Thu Mar 6 02:15:52 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 6 Mar 2008 02:15:52 +0100 (CET) Subject: [Python-checkins] r61261 - python/trunk/Doc/library/itertools.rst Message-ID: <20080306011552.9EE011E4003@bag.python.org> Author: raymond.hettinger Date: Thu Mar 6 02:15:52 2008 New Revision: 61261 Modified: python/trunk/Doc/library/itertools.rst Log: Add examples. Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Thu Mar 6 02:15:52 2008 @@ -71,6 +71,7 @@ Equivalent to:: def chain(*iterables): + # chain('ABC', 'DEF') --> A B C D E F for it in iterables: for element in it: yield element @@ -83,6 +84,7 @@ @classmethod def from_iterable(iterables): + # chain.from_iterable(['ABC', 'DEF']) --> A B C D E F for it in iterables: for element in it: yield element @@ -108,7 +110,8 @@ Equivalent to:: def combinations(iterable, r): - 'combinations(range(4), 3) --> (0,1,2) (0,1,3) (0,2,3) (1,2,3)' + # combinations('ABCD', 2) --> AB AC AD BC BD CD + # combinations(range(4), 3) --> 012 013 023 123 pool = tuple(iterable) n = len(pool) indices = range(r) @@ -145,6 +148,7 @@ numbers. Equivalent to:: def count(n=0): + # count(10) --> 10 11 12 13 14 ... while True: yield n n += 1 @@ -157,6 +161,7 @@ indefinitely. Equivalent to:: def cycle(iterable): + # cycle('ABCD') --> A B C D A B C D A B C D ... saved = [] for element in iterable: yield element @@ -177,6 +182,7 @@ start-up time. Equivalent to:: def dropwhile(predicate, iterable): + # dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1 iterable = iter(iterable) for x in iterable: if not predicate(x): @@ -215,6 +221,8 @@ :func:`groupby` is equivalent to:: class groupby(object): + # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B + # [(list(g)) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D def __init__(self, iterable, key=None): if key is None: key = lambda x: x @@ -245,6 +253,7 @@ that are true. Equivalent to:: def ifilter(predicate, iterable): + # ifilter(lambda x: x%2, range(10)) --> 1 3 5 7 9 if predicate is None: predicate = bool for x in iterable: @@ -259,6 +268,7 @@ that are false. Equivalent to:: def ifilterfalse(predicate, iterable): + # ifilterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8 if predicate is None: predicate = bool for x in iterable: @@ -277,6 +287,7 @@ useful way of supplying arguments to :func:`imap`. 
Equivalent to:: def imap(function, *iterables): + # imap(pow, (2,3,10), (5,2,3)) --> 32 9 1000 iterables = map(iter, iterables) while True: args = [it.next() for it in iterables] @@ -299,6 +310,10 @@ multi-line report may list a name field on every third line). Equivalent to:: def islice(iterable, *args): + # islice('ABCDEFG', 2) --> A B + # islice('ABCDEFG', 2, 4) --> C D + # islice('ABCDEFG', 2, None) --> C D E F G + # islice('ABCDEFG', 0, None, 2) --> A C E G s = slice(*args) it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1)) nexti = it.next() @@ -321,6 +336,7 @@ lock-step iteration over several iterables at a time. Equivalent to:: def izip(*iterables): + # izip('ABCD', 'xy') --> Ax By iterables = map(iter, iterables) while iterables: result = [it.next() for it in iterables] @@ -346,6 +362,7 @@ Iteration continues until the longest iterable is exhausted. Equivalent to:: def izip_longest(*args, **kwds): + # izip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D- fillvalue = kwds.get('fillvalue') def sentinel(counter = ([fillvalue]*(len(args)-1)).pop): yield counter() # yields the fillvalue, or raises IndexError @@ -382,7 +399,8 @@ Equivalent to:: def permutations(iterable, r=None): - 'permutations(range(3), 2) --> (0,1) (0,2) (1,0) (1,2) (2,0) (2,1)' + # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC + # permutations(range(3)) --> 012 021 102 120 201 210 pool = tuple(iterable) n = len(pool) r = n if r is None else r @@ -424,8 +442,8 @@ Equivalent to nested for-loops in a generator expression. For example, ``product(A, B)`` returns the same as ``((x,y) for x in A for y in B)``. - The leftmost iterators are in the outermost for-loop, so the output tuples - cycle like an odometer (with the rightmost element changing on every + The leftmost iterators correspond to the outermost for-loop, so the output + tuples cycle like an odometer (with the rightmost element changing on every iteration). This results in a lexicographic ordering so that if the inputs iterables are sorted, the product tuples are emitted in sorted order. @@ -438,6 +456,8 @@ actual implementation does not build up intermediate results in memory:: def product(*args, **kwds): + # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy + # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111 pools = map(tuple, args) * kwds.get('repeat', 1) result = [[]] for pool in pools: @@ -451,10 +471,11 @@ Make an iterator that returns *object* over and over again. Runs indefinitely unless the *times* argument is specified. Used as argument to :func:`imap` for - invariant parameters to the called function. Also used with :func:`izip` to - create an invariant part of a tuple record. Equivalent to:: + invariant function parameters. Also used with :func:`izip` to create constant + fields in a tuple record. Equivalent to:: def repeat(object, times=None): + # repeat(10, 3) --> 10 10 10 if times is None: while True: yield object @@ -472,6 +493,7 @@ between ``function(a,b)`` and ``function(*c)``. Equivalent to:: def starmap(function, iterable): + # starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000 for args in iterable: yield function(*args) @@ -485,6 +507,7 @@ predicate is true. Equivalent to:: def takewhile(predicate, iterable): + # takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4 for x in iterable: if predicate(x): yield x @@ -528,23 +551,6 @@ The following examples show common uses for each tool and demonstrate ways they can be combined. 
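Since the hunks above document 2.6's new combinatoric iterators side by side, a small interactive sketch of how their orderings relate may be useful; it assumes a 2.6 interpreter and keeps the pool to two elements so the output stays short::

    >>> from itertools import product, permutations, combinations
    >>> list(product('AB', repeat=2))     # odometer order, repeats allowed
    [('A', 'A'), ('A', 'B'), ('B', 'A'), ('B', 'B')]
    >>> list(permutations('AB', 2))       # same ordering, no repeated elements
    [('A', 'B'), ('B', 'A')]
    >>> list(combinations('AB', 2))       # ordering within a tuple ignored
    [('A', 'B')]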
:: - >>> amounts = [120.15, 764.05, 823.14] - >>> for checknum, amount in izip(count(1200), amounts): - ... print 'Check %d is for $%.2f' % (checknum, amount) - ... - Check 1200 is for $120.15 - Check 1201 is for $764.05 - Check 1202 is for $823.14 - - >>> import operator - >>> for cube in imap(operator.pow, xrange(1,5), repeat(3)): - ... print cube - ... - 1 - 8 - 27 - 64 - # Show a dictionary sorted and grouped by value >>> from operator import itemgetter >>> d = dict(a=1, b=2, c=1, d=2, e=1, f=2, g=3) From python-checkins at python.org Thu Mar 6 02:36:27 2008 From: python-checkins at python.org (andrew.kuchling) Date: Thu, 6 Mar 2008 02:36:27 +0100 (CET) Subject: [Python-checkins] r61262 - python/trunk/Doc/whatsnew/2.6.rst Message-ID: <20080306013627.D33EC1E4003@bag.python.org> Author: andrew.kuchling Date: Thu Mar 6 02:36:27 2008 New Revision: 61262 Modified: python/trunk/Doc/whatsnew/2.6.rst Log: Add two items Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Thu Mar 6 02:36:27 2008 @@ -1157,7 +1157,7 @@ (2, 3, 1, 3), (2, 3, 1, 4), (2, 3, 2, 3), (2, 3, 2, 4), (2, 4, 1, 3), (2, 4, 1, 4), (2, 4, 2, 3), (2, 4, 2, 4)] - ``combinations(iter, r)`` returns combinations of length *r* from + ``combinations(iter, r)`` returns sub-sequences of length *r* from the elements of *iterable*. :: itertools.combinations('123', 2) -> @@ -1170,14 +1170,18 @@ [('1', '2', '3'), ('1', '2', '4'), ('1', '3', '4'), ('2', '3', '4')] - ``permutations(iter[, r])`` returns all the permutations of length *r* from + ``permutations(iter[, r])`` returns all the permutations of length *r* of the iterable's elements. If *r* is not specified, it will default to the number of elements produced by the iterable. - XXX enter example once Raymond commits the code. + itertools.permutations([1,2,3,4], 2) -> + [(1, 2), (1, 3), (1, 4), + (2, 1), (2, 3), (2, 4), + (3, 1), (3, 2), (3, 4), + (4, 1), (4, 2), (4, 3)] ``itertools.chain(*iterables)` is an existing function in - :mod:`itertools` that gained a new constructor. + :mod:`itertools` that gained a new constructor in Python 2.6. ``itertools.chain.from_iterable(iterable)`` takes a single iterable that should return other iterables. :func:`chain` will then return all the elements of the first iterable, then @@ -1411,6 +1415,10 @@ by Michael Pomraning.) .. Patch #742598 + +* The :mod:`struct` module now supports the C99 :ctype:`_Bool` type, + using the format character ``'?'``. + (Contributed by David Remahl.) * A new variable in the :mod:`sys` module, :attr:`float_info`, is an object From python-checkins at python.org Thu Mar 6 07:47:18 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 07:47:18 +0100 (CET) Subject: [Python-checkins] r61263 - in python/trunk: Doc/distutils/sourcedist.rst Lib/distutils/command/sdist.py Message-ID: <20080306064718.9C0B51E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 6 07:47:18 2008 New Revision: 61263 Modified: python/trunk/Doc/distutils/sourcedist.rst python/trunk/Lib/distutils/command/sdist.py Log: #1725737: ignore other VC directories other than CVS and SVN's too. 
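One way to see why the replacement needed both the extra alternatives and the ``(^|/)`` anchor: the old expression could only match a version-control directory somewhere below the top level, never in the root of the tree being packaged. A throwaway check, assuming the pattern is applied as a plain ``re`` search against each manifest path (the sample paths are made up)::

    import re

    old = re.compile(r'/(RCS|CVS|\.svn)/.*')
    new = re.compile(r'(^|/)(RCS|CVS|\.svn|\.hg|\.git|\.bzr|_darcs)/.*')

    for path in ('CVS/Entries', 'pkg/.svn/entries', '.hg/store/data'):
        print path, bool(old.search(path)), bool(new.search(path))

    # CVS/Entries       False True   <- root-level directory the old pattern missed
    # pkg/.svn/entries  True  True
    # .hg/store/data    False True   <- one of the newly excluded VCS flavours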
Modified: python/trunk/Doc/distutils/sourcedist.rst ============================================================================== --- python/trunk/Doc/distutils/sourcedist.rst (original) +++ python/trunk/Doc/distutils/sourcedist.rst Thu Mar 6 07:47:18 2008 @@ -122,7 +122,8 @@ * all files in the Distutils "build" tree (default :file:`build/`) -* all files in directories named :file:`RCS`, :file:`CVS` or :file:`.svn` +* all files in directories named :file:`RCS`, :file:`CVS`, :file:`.svn`, + :file:`.hg`, :file:`.git`, :file:`.bzr` or :file:`_darcs` Now we have our complete list of files, which is written to the manifest for future reference, and then used to build the source distribution archive(s). @@ -156,8 +157,9 @@ previous two steps, so it's important that the ``prune`` command in the manifest template comes after the ``recursive-include`` command -#. exclude the entire :file:`build` tree, and any :file:`RCS`, :file:`CVS` and - :file:`.svn` directories +#. exclude the entire :file:`build` tree, and any :file:`RCS`, :file:`CVS`, + :file:`.svn`, :file:`.hg`, :file:`.git`, :file:`.bzr` and :file:`_darcs` + directories Just like in the setup script, file and directory names in the manifest template should always be slash-separated; the Distutils will take care of converting Modified: python/trunk/Lib/distutils/command/sdist.py ============================================================================== --- python/trunk/Lib/distutils/command/sdist.py (original) +++ python/trunk/Lib/distutils/command/sdist.py Thu Mar 6 07:47:18 2008 @@ -347,14 +347,14 @@ * the build tree (typically "build") * the release tree itself (only an issue if we ran "sdist" previously with --keep-temp, or it aborted) - * any RCS, CVS and .svn directories + * any RCS, CVS, .svn, .hg, .git, .bzr, _darcs directories """ build = self.get_finalized_command('build') base_dir = self.distribution.get_fullname() self.filelist.exclude_pattern(None, prefix=build.build_base) self.filelist.exclude_pattern(None, prefix=base_dir) - self.filelist.exclude_pattern(r'/(RCS|CVS|\.svn)/.*', is_regex=1) + self.filelist.exclude_pattern(r'(^|/)(RCS|CVS|\.svn|\.hg|\.git|\.bzr|_darcs)/.*', is_regex=1) def write_manifest (self): From g.brandl at gmx.net Thu Mar 6 07:54:05 2008 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 06 Mar 2008 07:54:05 +0100 Subject: [Python-checkins] r61259 - sandbox/trunk/release sandbox/trunk/release/release.py In-Reply-To: <20080305221312.5B99E1E401D@bag.python.org> References: <20080305221312.5B99E1E401D@bag.python.org> Message-ID: barry.warsaw schrieb: > Author: barry.warsaw > Date: Wed Mar 5 23:13:11 2008 > New Revision: 61259 > > Added: > sandbox/trunk/release/ > sandbox/trunk/release/release.py > Log: > Benjamin Peterson's release script, contributed to the cause. > > > Added: sandbox/trunk/release/release.py > ============================================================================== > --- (empty file) > +++ sandbox/trunk/release/release.py Wed Mar 5 23:13:11 2008 > @@ -0,0 +1,237 @@ > +"An assistant for making Python releases by Benjamin Peterson" > +#!/usr/bin/env python The shebang line should be at the top of the file to be effective... 
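For anyone reading along: the operating system only honours ``#!`` when it is the very first bytes of the file, so the docstring has to come second and the ``__future__`` import after that. Roughly what the corrected header would look like, as a sketch rather than the committed file:

    #!/usr/bin/env python
    "An assistant for making Python releases by Benjamin Peterson"
    from __future__ import with_statement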
Georg From python-checkins at python.org Thu Mar 6 07:55:23 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 07:55:23 +0100 (CET) Subject: [Python-checkins] r61264 - in python/trunk: Lib/test/test_os.py Misc/NEWS Message-ID: <20080306065523.429771E4003@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 07:55:22 2008 New Revision: 61264 Modified: python/trunk/Lib/test/test_os.py python/trunk/Misc/NEWS Log: Patch #2232: os.tmpfile might fail on Windows if the user has no permission to create files in the root directory. Will backport to 2.5. Modified: python/trunk/Lib/test/test_os.py ============================================================================== --- python/trunk/Lib/test/test_os.py (original) +++ python/trunk/Lib/test/test_os.py Thu Mar 6 07:55:22 2008 @@ -65,6 +65,44 @@ def test_tmpfile(self): if not hasattr(os, "tmpfile"): return + # As with test_tmpnam() below, the Windows implementation of tmpfile() + # attempts to create a file in the root directory of the current drive. + # On Vista and Server 2008, this test will always fail for normal users + # as writing to the root directory requires elevated privileges. With + # XP and below, the semantics of tmpfile() are the same, but the user + # running the test is more likely to have administrative privileges on + # their account already. If that's the case, then os.tmpfile() should + # work. In order to make this test as useful as possible, rather than + # trying to detect Windows versions or whether or not the user has the + # right permissions, just try and create a file in the root directory + # and see if it raises a 'Permission denied' OSError. If it does, then + # test that a subsequent call to os.tmpfile() raises the same error. If + # it doesn't, assume we're on XP or below and the user running the test + # has administrative privileges, and proceed with the test as normal. + if sys.platform == 'win32': + name = '\\python_test_os_test_tmpfile.txt' + if os.path.exists(name): + os.remove(name) + try: + fp = open(name, 'w') + except IOError, first: + # open() failed, assert tmpfile() fails in the same way. + # Although open() raises an IOError and os.tmpfile() raises an + # OSError(), 'args' will be (13, 'Permission denied') in both + # cases. + try: + fp = os.tmpfile() + except OSError, second: + self.assertEqual(first.args, second.args) + else: + self.fail("expected os.tmpfile() to raise OSError") + return + else: + # open() worked, therefore, tmpfile() should work. Close our + # dummy file and proceed with the test as normal. + fp.close() + os.remove(name) + fp = os.tmpfile() fp.write("foobar") fp.seek(0,0) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 6 07:55:22 2008 @@ -28,6 +28,12 @@ On all linux systems the --with-system-ffi configure option defaults to "yes". +Tests +----- + +- Patch #2232: os.tmpfile might fail on Windows if the user has no + permission to create files in the root directory. + What's New in Python 2.6 alpha 1? 
================================= From python-checkins at python.org Thu Mar 6 07:56:38 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 07:56:38 +0100 (CET) Subject: [Python-checkins] r61265 - python/branches/release25-maint/Lib/idlelib/NEWS.txt Message-ID: <20080306065638.22EF81E4003@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 07:56:35 2008 New Revision: 61265 Modified: python/branches/release25-maint/Lib/idlelib/NEWS.txt Log: Add section for 2.5.3. Modified: python/branches/release25-maint/Lib/idlelib/NEWS.txt ============================================================================== --- python/branches/release25-maint/Lib/idlelib/NEWS.txt (original) +++ python/branches/release25-maint/Lib/idlelib/NEWS.txt Thu Mar 6 07:56:35 2008 @@ -1,3 +1,9 @@ +What's New in IDLE 1.2.3c1? +========================= + +*Release date: XX-XXX-2008* + + What's New in IDLE 1.2.2? ========================= From python-checkins at python.org Thu Mar 6 07:57:08 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 07:57:08 +0100 (CET) Subject: [Python-checkins] r61266 - in python/branches/release25-maint: Lib/test/test_os.py Misc/NEWS Message-ID: <20080306065708.DC65A1E4003@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 07:57:02 2008 New Revision: 61266 Modified: python/branches/release25-maint/Lib/test/test_os.py python/branches/release25-maint/Misc/NEWS Log: Patch #2232: os.tmpfile might fail on Windows if the user has no permission to create files in the root directory. Modified: python/branches/release25-maint/Lib/test/test_os.py ============================================================================== --- python/branches/release25-maint/Lib/test/test_os.py (original) +++ python/branches/release25-maint/Lib/test/test_os.py Thu Mar 6 07:57:02 2008 @@ -59,6 +59,44 @@ def test_tmpfile(self): if not hasattr(os, "tmpfile"): return + # As with test_tmpnam() below, the Windows implementation of tmpfile() + # attempts to create a file in the root directory of the current drive. + # On Vista and Server 2008, this test will always fail for normal users + # as writing to the root directory requires elevated privileges. With + # XP and below, the semantics of tmpfile() are the same, but the user + # running the test is more likely to have administrative privileges on + # their account already. If that's the case, then os.tmpfile() should + # work. In order to make this test as useful as possible, rather than + # trying to detect Windows versions or whether or not the user has the + # right permissions, just try and create a file in the root directory + # and see if it raises a 'Permission denied' OSError. If it does, then + # test that a subsequent call to os.tmpfile() raises the same error. If + # it doesn't, assume we're on XP or below and the user running the test + # has administrative privileges, and proceed with the test as normal. + if sys.platform == 'win32': + name = '\\python_test_os_test_tmpfile.txt' + if os.path.exists(name): + os.remove(name) + try: + fp = open(name, 'w') + except IOError, first: + # open() failed, assert tmpfile() fails in the same way. + # Although open() raises an IOError and os.tmpfile() raises an + # OSError(), 'args' will be (13, 'Permission denied') in both + # cases. + try: + fp = os.tmpfile() + except OSError, second: + self.assertEqual(first.args, second.args) + else: + self.fail("expected os.tmpfile() to raise OSError") + return + else: + # open() worked, therefore, tmpfile() should work. 
Close our + # dummy file and proceed with the test as normal. + fp.close() + os.remove(name) + fp = os.tmpfile() fp.write("foobar") fp.seek(0,0) Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Thu Mar 6 07:57:02 2008 @@ -28,6 +28,13 @@ Extension Modules ----------------- +Tests +----- + +- Patch #2232: os.tmpfile might fail on Windows if the user has no + permission to create files in the root directory. + + Documentation ------------- From python-checkins at python.org Thu Mar 6 08:14:27 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 08:14:27 +0100 (CET) Subject: [Python-checkins] r61268 - in python/branches/release25-maint: Doc/dist/dist.tex Lib/distutils/command/sdist.py Misc/NEWS Message-ID: <20080306071427.419221E4003@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 08:14:26 2008 New Revision: 61268 Modified: python/branches/release25-maint/Doc/dist/dist.tex python/branches/release25-maint/Lib/distutils/command/sdist.py python/branches/release25-maint/Misc/NEWS Log: Backport of r61263: #1725737: ignore other VC directories other than CVS and SVN's too. Modified: python/branches/release25-maint/Doc/dist/dist.tex ============================================================================== --- python/branches/release25-maint/Doc/dist/dist.tex (original) +++ python/branches/release25-maint/Doc/dist/dist.tex Thu Mar 6 08:14:26 2008 @@ -1213,7 +1213,8 @@ be included in the source distribution: \begin{itemize} \item all files in the Distutils ``build'' tree (default \file{build/}) -\item all files in directories named \file{RCS}, \file{CVS} or \file{.svn} +\item all files in directories named \file{RCS}, \file{CVS}, \file{.svn}, + \file{.hg}, \file{.git}, \file{.bzr}, or \file{\_darcs} \end{itemize} Now we have our complete list of files, which is written to the manifest for future reference, and then used to build the source distribution @@ -1246,7 +1247,8 @@ \code{prune} command in the manifest template comes after the \code{recursive-include} command \item exclude the entire \file{build} tree, and any \file{RCS}, - \file{CVS} and \file{.svn} directories + \file{CVS}, \file{.svn}, \file{.hg}, \file{.git}, \file{.bzr}, or + \file{\_darcs} directories \end{enumerate} Just like in the setup script, file and directory names in the manifest template should always be slash-separated; the Distutils will take care Modified: python/branches/release25-maint/Lib/distutils/command/sdist.py ============================================================================== --- python/branches/release25-maint/Lib/distutils/command/sdist.py (original) +++ python/branches/release25-maint/Lib/distutils/command/sdist.py Thu Mar 6 08:14:26 2008 @@ -347,14 +347,14 @@ * the build tree (typically "build") * the release tree itself (only an issue if we ran "sdist" previously with --keep-temp, or it aborted) - * any RCS, CVS and .svn directories + * any RCS, CVS, .svn, .hg, .git, .bzr, _darcs directories """ build = self.get_finalized_command('build') base_dir = self.distribution.get_fullname() self.filelist.exclude_pattern(None, prefix=build.build_base) self.filelist.exclude_pattern(None, prefix=base_dir) - self.filelist.exclude_pattern(r'/(RCS|CVS|\.svn)/.*', is_regex=1) + self.filelist.exclude_pattern(r'(^|/)(RCS|CVS|\.svn|\.hg|\.git|\.bzr|_darcs)/.*', is_regex=1) def write_manifest (self): 
Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Thu Mar 6 08:14:26 2008 @@ -15,6 +15,9 @@ Library ------- +- Bug #1725737: In distutil's sdist, exclude RCS, CVS etc. also in the + root directory, and also exclude .hg, .git, .bzr, and _darcs. + - Bug #1389051: imaplib causes excessive memory fragmentation when reading large messages. From python-checkins at python.org Thu Mar 6 08:19:16 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:19:16 +0100 (CET) Subject: [Python-checkins] r61269 - python/trunk/Doc/library/re.rst Message-ID: <20080306071916.8C23B1E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:19:15 2008 New Revision: 61269 Modified: python/trunk/Doc/library/re.rst Log: Expand on re.split behavior with captured expressions. Modified: python/trunk/Doc/library/re.rst ============================================================================== --- python/trunk/Doc/library/re.rst (original) +++ python/trunk/Doc/library/re.rst Thu Mar 6 08:19:15 2008 @@ -543,14 +543,26 @@ >>> re.split('\W+', 'Words, words, words.', 1) ['Words', 'words, words.'] + If there are capturing groups in the separator and it matches at the start of + the string, the result will start with an empty string. The same holds for + the end of the string:: + + >>> re.split('(\W+)', '...words, words...') + ['', '...', 'words', ', ', 'words', '...', ''] + + That way, separator components are always found at the same relative + indices within the result list (e.g., if there's one capturing group + in the separator, the 0th, the 2nd and so forth). + Note that *split* will never split a string on an empty pattern match. - For example :: + For example:: >>> re.split('x*', 'foo') ['foo'] >>> re.split("(?m)^$", "foo\n\nbar\n") ['foo\n\nbar\n'] + .. function:: findall(pattern, string[, flags]) Return all non-overlapping matches of *pattern* in *string*, as a list of From python-checkins at python.org Thu Mar 6 08:22:10 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:22:10 +0100 (CET) Subject: [Python-checkins] r61270 - python/trunk/Doc/tutorial/classes.rst Message-ID: <20080306072210.4251F1E4017@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:22:09 2008 New Revision: 61270 Modified: python/trunk/Doc/tutorial/classes.rst Log: Little clarification of assignments. Modified: python/trunk/Doc/tutorial/classes.rst ============================================================================== --- python/trunk/Doc/tutorial/classes.rst (original) +++ python/trunk/Doc/tutorial/classes.rst Thu Mar 6 08:22:09 2008 @@ -123,6 +123,8 @@ a variable will simply create a *new* local variable in the innermost scope, leaving the identically named outer variable unchanged). +.. XXX mention nonlocal + Usually, the local scope references the local names of the (textually) current function. Outside functions, the local scope references the same namespace as the global scope: the module's namespace. Class definitions place yet another @@ -136,14 +138,15 @@ time, so don't rely on dynamic name resolution! (In fact, local variables are already determined statically.) -A special quirk of Python is that assignments always go into the innermost -scope. Assignments do not copy data --- they just bind names to objects. 
The -same is true for deletions: the statement ``del x`` removes the binding of ``x`` -from the namespace referenced by the local scope. In fact, all operations that -introduce new names use the local scope: in particular, import statements and -function definitions bind the module or function name in the local scope. (The -:keyword:`global` statement can be used to indicate that particular variables -live in the global scope.) +A special quirk of Python is that -- if no :keyword:`global` or +:keyword:`nonlocal` statement is in effect -- assignments to names always go +into the innermost scope. Assignments do not copy data --- they just bind names +to objects. The same is true for deletions: the statement ``del x`` removes the +binding of ``x`` from the namespace referenced by the local scope. In fact, all +operations that introduce new names use the local scope: in particular, import +statements and function definitions bind the module or function name in the +local scope. (The :keyword:`global` statement can be used to indicate that +particular variables live in the global scope.) .. _tut-firstclasses: From python-checkins at python.org Thu Mar 6 08:31:34 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:31:34 +0100 (CET) Subject: [Python-checkins] r61271 - python/trunk/Doc/tutorial/classes.rst Message-ID: <20080306073134.C6C961E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:31:34 2008 New Revision: 61271 Modified: python/trunk/Doc/tutorial/classes.rst Log: Add isinstance/issubclass to tutorial. Modified: python/trunk/Doc/tutorial/classes.rst ============================================================================== --- python/trunk/Doc/tutorial/classes.rst (original) +++ python/trunk/Doc/tutorial/classes.rst Thu Mar 6 08:31:34 2008 @@ -420,6 +420,9 @@ defined in this global scope, and in the next section we'll find some good reasons why a method would want to reference its own class! +Each value is an object, and therefore has a *class* (also called its *type*). +It is stored as ``object.__class__``. + .. _tut-inheritance: @@ -469,6 +472,19 @@ only works if the base class is defined or imported directly in the global scope.) +Python has two builtin functions that work with inheritance: + +* Use :func:`isinstance` to check an object's type: ``isinstance(obj, int)`` + will be ``True`` only if ``obj.__class__`` is :class:`int` or some class + derived from :class:`int`. + +* Use :func:`issubclass` to check class inheritance: ``issubclass(bool, int)`` + is ``True`` since :class:`bool` is a subclass of :class:`int`. However, + ``issubclass(unicode, str)`` is ``False`` since :class:`unicode` is not a + subclass of :class:`str` (they only share a common ancestor, + :class:`basestring`). + + .. _tut-multiple: From python-checkins at python.org Thu Mar 6 08:34:53 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:34:53 +0100 (CET) Subject: [Python-checkins] r61272 - python/trunk/Misc/NEWS Message-ID: <20080306073453.230B91E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:34:52 2008 New Revision: 61272 Modified: python/trunk/Misc/NEWS Log: Add missing NEWS entry for r61263. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 6 08:34:52 2008 @@ -18,6 +18,9 @@ Library ------- +- Bug #1725737: In distutil's sdist, exclude RCS, CVS etc. 
also in the + root directory, and also exclude .hg, .git, .bzr, and _darcs. + - Issue #1872: The struct module typecode for _Bool has been changed from 't' to '?'. From python-checkins at python.org Thu Mar 6 08:41:17 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:41:17 +0100 (CET) Subject: [Python-checkins] r61273 - in python/trunk: Doc/library/py_compile.rst Lib/py_compile.py Misc/NEWS Message-ID: <20080306074117.274E81E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:41:16 2008 New Revision: 61273 Modified: python/trunk/Doc/library/py_compile.rst python/trunk/Lib/py_compile.py python/trunk/Misc/NEWS Log: #2225: return nonzero status code from py_compile if not all files could be compiled. Modified: python/trunk/Doc/library/py_compile.rst ============================================================================== --- python/trunk/Doc/library/py_compile.rst (original) +++ python/trunk/Doc/library/py_compile.rst Thu Mar 6 08:41:16 2008 @@ -42,7 +42,12 @@ structure to locate source files; it only compiles files named explicitly. When this module is run as a script, the :func:`main` is used to compile all the -files named on the command line. +files named on the command line. The exit status is nonzero if one of the files +could not be compiled. + +.. versionchanged:: 2.6 + + Added the nonzero exit status. .. seealso:: Modified: python/trunk/Lib/py_compile.py ============================================================================== --- python/trunk/Lib/py_compile.py (original) +++ python/trunk/Lib/py_compile.py Thu Mar 6 08:41:16 2008 @@ -154,11 +154,15 @@ """ if args is None: args = sys.argv[1:] + rv = 0 for filename in args: try: compile(filename, doraise=True) - except PyCompileError,err: + except PyCompileError, err: + # return value to indicate at least one failure + rv = 1 sys.stderr.write(err.msg) + return rv if __name__ == "__main__": - main() + sys.exit(main()) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 6 08:41:16 2008 @@ -18,6 +18,9 @@ Library ------- +- Issue #2225: py_compile, when executed as a script, now returns a non- + zero status code if not all files could be compiled successfully. + - Bug #1725737: In distutil's sdist, exclude RCS, CVS etc. also in the root directory, and also exclude .hg, .git, .bzr, and _darcs. From python-checkins at python.org Thu Mar 6 08:43:02 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:43:02 +0100 (CET) Subject: [Python-checkins] r61274 - python/trunk/Lib/rlcompleter.py Message-ID: <20080306074302.9E71C1E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:43:02 2008 New Revision: 61274 Modified: python/trunk/Lib/rlcompleter.py Log: #2220: handle matching failure more gracefully. 
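The one-line change that follows matters because ``Completer.complete()`` ends up indexing whatever ``attr_matches()`` hands back; a bare ``return`` gave it ``None``, so the later ``self.matches[state]`` lookup raised a TypeError instead of simply offering no completions. A quick sketch of the kind of input that takes this path (illustrative, not taken from the test suite)::

    import rlcompleter

    completer = rlcompleter.Completer(namespace={'x': ['spam']})
    # 'x[0].ap' does not fit the \w+(\.\w+)*\.\w* shape the method parses,
    # so the old code fell off the end and returned None; with this patch
    # it returns an empty list that complete() can index safely.
    print completer.attr_matches('x[0].ap')    # []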
Modified: python/trunk/Lib/rlcompleter.py ============================================================================== --- python/trunk/Lib/rlcompleter.py (original) +++ python/trunk/Lib/rlcompleter.py Thu Mar 6 08:43:02 2008 @@ -125,7 +125,7 @@ import re m = re.match(r"(\w+(\.\w+)*)\.(\w*)", text) if not m: - return + return [] expr, attr = m.group(1, 3) object = eval(expr, self.namespace) words = dir(object) From python-checkins at python.org Thu Mar 6 08:45:52 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:45:52 +0100 (CET) Subject: [Python-checkins] r61275 - python/trunk/Misc/NEWS Message-ID: <20080306074552.9F6D01E4017@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:45:52 2008 New Revision: 61275 Modified: python/trunk/Misc/NEWS Log: Bug #2220: handle rlcompleter attribute match failure more gracefully. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 6 08:45:52 2008 @@ -18,6 +18,8 @@ Library ------- +- Bug #2220: handle rlcompleter attribute match failure more gracefully. + - Issue #2225: py_compile, when executed as a script, now returns a non- zero status code if not all files could be compiled successfully. From python-checkins at python.org Thu Mar 6 08:46:26 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 6 Mar 2008 08:46:26 +0100 (CET) Subject: [Python-checkins] r61276 - in python/branches/release25-maint: Lib/rlcompleter.py Misc/NEWS Message-ID: <20080306074626.DCC221E4015@bag.python.org> Author: georg.brandl Date: Thu Mar 6 08:46:26 2008 New Revision: 61276 Modified: python/branches/release25-maint/Lib/rlcompleter.py python/branches/release25-maint/Misc/NEWS Log: Bug #2220: handle rlcompleter attribute match failure more gracefully. (backport from r61275) Modified: python/branches/release25-maint/Lib/rlcompleter.py ============================================================================== --- python/branches/release25-maint/Lib/rlcompleter.py (original) +++ python/branches/release25-maint/Lib/rlcompleter.py Thu Mar 6 08:46:26 2008 @@ -125,7 +125,7 @@ import re m = re.match(r"(\w+(\.\w+)*)\.(\w*)", text) if not m: - return + return [] expr, attr = m.group(1, 3) object = eval(expr, self.namespace) words = dir(object) Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Thu Mar 6 08:46:26 2008 @@ -15,6 +15,8 @@ Library ------- +- Bug #2220: handle rlcompleter attribute match failure more gracefully. + - Bug #1725737: In distutil's sdist, exclude RCS, CVS etc. also in the root directory, and also exclude .hg, .git, .bzr, and _darcs. From buildbot at python.org Thu Mar 6 08:56:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 07:56:37 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 2.5 Message-ID: <20080306075637.67BFE1E4003@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 2.5. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%202.5/builds/3 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_winsound ====================================================================== ERROR: test_extremes (test.test_winsound.BeepTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\2.5.nelson-windows\build\lib\test\test_winsound.py", line 18, in test_extremes winsound.Beep(37, 75) RuntimeError: Failed to beep ====================================================================== ERROR: test_increasingfrequency (test.test_winsound.BeepTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\2.5.nelson-windows\build\lib\test\test_winsound.py", line 23, in test_increasingfrequency winsound.Beep(i, 75) RuntimeError: Failed to beep sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 09:01:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 08:01:56 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080306080156.42A861E4003@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2655 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_asynchat sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 09:10:11 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 08:10:11 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 2.5 Message-ID: <20080306081011.DECF61E4003@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%202.5/builds/548 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 09:19:51 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 08:19:51 +0000 Subject: [Python-checkins] buildbot failure in hppa Ubuntu 2.5 Message-ID: <20080306081951.F0B2E1E4003@bag.python.org> The Buildbot has detected a new failure of hppa Ubuntu 2.5. 
Full details are available at: http://www.python.org/dev/buildbot/all/hppa%20Ubuntu%202.5/builds/167 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-hppa Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 13:05:57 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 12:05:57 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 2.5 Message-ID: <20080306120557.2EB211E4009@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%202.5/builds/158 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: georg.brandl,martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From barry at python.org Thu Mar 6 13:24:50 2008 From: barry at python.org (Barry Warsaw) Date: Thu, 6 Mar 2008 07:24:50 -0500 Subject: [Python-checkins] r61259 - sandbox/trunk/release sandbox/trunk/release/release.py In-Reply-To: References: <20080305221312.5B99E1E401D@bag.python.org> Message-ID: <3FFF5F22-BCA4-4940-97B5-2EB3A2F58465@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Mar 6, 2008, at 1:54 AM, Georg Brandl wrote: > barry.warsaw schrieb: >> Author: barry.warsaw >> Date: Wed Mar 5 23:13:11 2008 >> New Revision: 61259 >> >> Added: >> sandbox/trunk/release/ >> sandbox/trunk/release/release.py >> Log: >> Benjamin Peterson's release script, contributed to the cause. >> >> >> Added: sandbox/trunk/release/release.py >> = >> = >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> --- (empty file) >> +++ sandbox/trunk/release/release.py Wed Mar 5 23:13:11 2008 >> @@ -0,0 +1,237 @@ >> +"An assistant for making Python releases by Benjamin Peterson" >> +#!/usr/bin/env python > > The shebang line should be at the top of the file to be effective... Thanks. I haven't started hacking on the file yet. I wanted to commit exactly what Benjamin sent me first. 
- -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.8 (Darwin) iQCVAwUBR8/iknEjvBPtnXfVAQIh9wP8DkY574CbkSsmtdIvNpXZldKpmn0uox/N RnIVOyfzOOsaegu9ldstIIkQmYxyQGnoCNbGj8z7atPx78O/wAlCYLAhPmJ5TkKn NPMM3SbzALrLqHNcgqbd1udIZADvUoC3eRO8SLJ/I1zXrBNNk5wODYenpxS9t9h8 eq1uzqL0heg= =MXv6 -----END PGP SIGNATURE----- From python-checkins at python.org Thu Mar 6 14:43:52 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 14:43:52 +0100 (CET) Subject: [Python-checkins] r61277 - external/db-4.4.20-vs9/build_win32/Berkeley_DB.sln external/db-4.4.20-vs9/build_win32/build_all.vcproj external/db-4.4.20-vs9/build_win32/db_archive.vcproj external/db-4.4.20-vs9/build_win32/db_checkpoint.vcproj external/db-4.4.20-vs9/build_win32/db_deadlock.vcproj external/db-4.4.20-vs9/build_win32/db_dll.vcproj external/db-4.4.20-vs9/build_win32/db_dump.vcproj external/db-4.4.20-vs9/build_win32/db_hotbackup.vcproj external/db-4.4.20-vs9/build_win32/db_java.vcproj external/db-4.4.20-vs9/build_win32/db_load.vcproj external/db-4.4.20-vs9/build_win32/db_printlog.vcproj external/db-4.4.20-vs9/build_win32/db_recover.vcproj external/db-4.4.20-vs9/build_win32/db_small.vcproj external/db-4.4.20-vs9/build_win32/db_stat.vcproj external/db-4.4.20-vs9/build_win32/db_static.vcproj external/db-4.4.20-vs9/build_win32/db_tcl.vcproj external/db-4.4.20-vs9/build_win32/db_test.vcproj external/db-4.4.20-vs9/build_win32/db_upgrade.vcproj external/db-4.4.20-vs9/build_win32/db_verify.vcproj external/db-4.4.20-vs9/build_win32/ex_access.vcproj external/db-4.4.20-vs9/build_win32/ex_btrec.vcproj external/db-4.4.20-vs9/build_win32/ex_csvcode.vcproj external/db-4.4.20-vs9/build_win32/ex_csvload.vcproj external/db-4.4.20-vs9/build_win32/ex_csvquery.vcproj external/db-4.4.20-vs9/build_win32/ex_env.vcproj external/db-4.4.20-vs9/build_win32/ex_lock.vcproj external/db-4.4.20-vs9/build_win32/ex_mpool.vcproj external/db-4.4.20-vs9/build_win32/ex_repquote.vcproj external/db-4.4.20-vs9/build_win32/ex_sequence.vcproj external/db-4.4.20-vs9/build_win32/ex_tpcb.vcproj external/db-4.4.20-vs9/build_win32/ex_txnguide.vcproj external/db-4.4.20-vs9/build_win32/ex_txnguide_inmem.vcproj external/db-4.4.20-vs9/build_win32/example_database_load.vcproj external/db-4.4.20-vs9/build_win32/example_database_read.vcproj external/db-4.4.20-vs9/build_win32/excxx_access.vcproj external/db-4.4.20-vs9/build_win32/excxx_btrec.vcproj external/db-4.4.20-vs9/build_win32/excxx_env.vcproj external/db-4.4.20-vs9/build_win32/excxx_example_database_load.vcproj external/db-4.4.20-vs9/build_win32/excxx_example_database_read.vcproj external/db-4.4.20-vs9/build_win32/excxx_lock.vcproj external/db-4.4.20-vs9/build_win32/excxx_mpool.vcproj external/db-4.4.20-vs9/build_win32/excxx_sequence.vcproj external/db-4.4.20-vs9/build_win32/excxx_tpcb.vcproj external/db-4.4.20-vs9/build_win32/excxx_txnguide.vcproj external/db-4.4.20-vs9/build_win32/excxx_txnguide_inmem.vcproj Message-ID: <20080306134352.966FE1E4009@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 14:43:47 2008 New Revision: 61277 Modified: external/db-4.4.20-vs9/build_win32/Berkeley_DB.sln external/db-4.4.20-vs9/build_win32/build_all.vcproj external/db-4.4.20-vs9/build_win32/db_archive.vcproj external/db-4.4.20-vs9/build_win32/db_checkpoint.vcproj external/db-4.4.20-vs9/build_win32/db_deadlock.vcproj external/db-4.4.20-vs9/build_win32/db_dll.vcproj external/db-4.4.20-vs9/build_win32/db_dump.vcproj external/db-4.4.20-vs9/build_win32/db_hotbackup.vcproj external/db-4.4.20-vs9/build_win32/db_java.vcproj 
external/db-4.4.20-vs9/build_win32/db_load.vcproj external/db-4.4.20-vs9/build_win32/db_printlog.vcproj external/db-4.4.20-vs9/build_win32/db_recover.vcproj external/db-4.4.20-vs9/build_win32/db_small.vcproj external/db-4.4.20-vs9/build_win32/db_stat.vcproj external/db-4.4.20-vs9/build_win32/db_static.vcproj external/db-4.4.20-vs9/build_win32/db_tcl.vcproj external/db-4.4.20-vs9/build_win32/db_test.vcproj external/db-4.4.20-vs9/build_win32/db_upgrade.vcproj external/db-4.4.20-vs9/build_win32/db_verify.vcproj external/db-4.4.20-vs9/build_win32/ex_access.vcproj external/db-4.4.20-vs9/build_win32/ex_btrec.vcproj external/db-4.4.20-vs9/build_win32/ex_csvcode.vcproj external/db-4.4.20-vs9/build_win32/ex_csvload.vcproj external/db-4.4.20-vs9/build_win32/ex_csvquery.vcproj external/db-4.4.20-vs9/build_win32/ex_env.vcproj external/db-4.4.20-vs9/build_win32/ex_lock.vcproj external/db-4.4.20-vs9/build_win32/ex_mpool.vcproj external/db-4.4.20-vs9/build_win32/ex_repquote.vcproj external/db-4.4.20-vs9/build_win32/ex_sequence.vcproj external/db-4.4.20-vs9/build_win32/ex_tpcb.vcproj external/db-4.4.20-vs9/build_win32/ex_txnguide.vcproj external/db-4.4.20-vs9/build_win32/ex_txnguide_inmem.vcproj external/db-4.4.20-vs9/build_win32/example_database_load.vcproj external/db-4.4.20-vs9/build_win32/example_database_read.vcproj external/db-4.4.20-vs9/build_win32/excxx_access.vcproj external/db-4.4.20-vs9/build_win32/excxx_btrec.vcproj external/db-4.4.20-vs9/build_win32/excxx_env.vcproj external/db-4.4.20-vs9/build_win32/excxx_example_database_load.vcproj external/db-4.4.20-vs9/build_win32/excxx_example_database_read.vcproj external/db-4.4.20-vs9/build_win32/excxx_lock.vcproj external/db-4.4.20-vs9/build_win32/excxx_mpool.vcproj external/db-4.4.20-vs9/build_win32/excxx_sequence.vcproj external/db-4.4.20-vs9/build_win32/excxx_tpcb.vcproj external/db-4.4.20-vs9/build_win32/excxx_txnguide.vcproj external/db-4.4.20-vs9/build_win32/excxx_txnguide_inmem.vcproj Log: Generate x64 platform configuration. 
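The diff that follows is mechanical: every solution-configuration line naming ``|Win32`` gains an ``|x64`` twin. The checkin was presumably generated by Visual Studio itself, but a rough sketch of scripting the same transformation (function name and approach are mine; the real file also groups the ActiveCfg/Build.0 pairs slightly differently)::

    def add_x64_configs(sln_text):
        # Emit an x64 copy after every Win32 configuration mapping line.
        out = []
        for line in sln_text.splitlines():
            out.append(line)
            if '|Win32' in line and '=' in line:
                out.append(line.replace('|Win32', '|x64'))
        return '\n'.join(out) + '\n'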
Modified: external/db-4.4.20-vs9/build_win32/Berkeley_DB.sln ============================================================================== --- external/db-4.4.20-vs9/build_win32/Berkeley_DB.sln (original) +++ external/db-4.4.20-vs9/build_win32/Berkeley_DB.sln Thu Mar 6 14:43:47 2008 @@ -252,719 +252,1431 @@ Global GlobalSection(SolutionConfigurationPlatforms) = preSolution ASCII Debug|Win32 = ASCII Debug|Win32 + ASCII Debug|x64 = ASCII Debug|x64 ASCII Release|Win32 = ASCII Release|Win32 + ASCII Release|x64 = ASCII Release|x64 Debug AMD64|Win32 = Debug AMD64|Win32 + Debug AMD64|x64 = Debug AMD64|x64 Debug IA64|Win32 = Debug IA64|Win32 + Debug IA64|x64 = Debug IA64|x64 Debug|Win32 = Debug|Win32 + Debug|x64 = Debug|x64 Release AMD64|Win32 = Release AMD64|Win32 + Release AMD64|x64 = Release AMD64|x64 Release IA64|Win32 = Release IA64|Win32 + Release IA64|x64 = Release IA64|x64 Release|Win32 = Release|Win32 + Release|x64 = Release|x64 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.ASCII Release|x64.Build.0 = ASCII Release|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug IA64|x64.Build.0 = Debug IA64|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug|Win32.ActiveCfg = Debug|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug|Win32.Build.0 = Debug|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug|x64.ActiveCfg = Debug|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Debug|x64.Build.0 = Debug|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release AMD64|x64.Build.0 = Release AMD64|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release IA64|x64.Build.0 = Release IA64|x64 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release|Win32.ActiveCfg = Release|Win32 {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release|Win32.Build.0 = Release|Win32 + 
{5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release|x64.ActiveCfg = Release|x64 + {5BE6A7BC-7CE3-48C6-B855-D90E545FD625}.Release|x64.Build.0 = Release|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.ASCII Release|x64.Build.0 = ASCII Release|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug IA64|x64.Build.0 = Debug IA64|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug|Win32.ActiveCfg = Debug|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug|Win32.Build.0 = Debug|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug|x64.ActiveCfg = Debug|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Debug|x64.Build.0 = Debug|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release AMD64|x64.Build.0 = Release AMD64|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release IA64|x64.Build.0 = Release IA64|x64 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release|Win32.ActiveCfg = Release|Win32 {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release|Win32.Build.0 = Release|Win32 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release|x64.ActiveCfg = Release|x64 + {B42CC051-97A0-4C0A-BDD7-45791B92C61C}.Release|x64.Build.0 = Release|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.ASCII Release|x64.Build.0 = ASCII Release|x64 
{939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug IA64|x64.Build.0 = Debug IA64|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug|Win32.ActiveCfg = Debug|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug|Win32.Build.0 = Debug|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug|x64.ActiveCfg = Debug|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Debug|x64.Build.0 = Debug|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release AMD64|x64.Build.0 = Release AMD64|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release IA64|x64.Build.0 = Release IA64|x64 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release|Win32.ActiveCfg = Release|Win32 {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release|Win32.Build.0 = Release|Win32 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release|x64.ActiveCfg = Release|x64 + {939E539C-7144-4FD2-B318-6B70791E4FB0}.Release|x64.Build.0 = Release|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.ASCII Release|x64.Build.0 = ASCII Release|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug IA64|x64.Build.0 = Debug IA64|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug|Win32.ActiveCfg = Debug|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug|Win32.Build.0 = Debug|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug|x64.ActiveCfg = 
Debug|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Debug|x64.Build.0 = Debug|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release AMD64|x64.Build.0 = Release AMD64|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release IA64|x64.Build.0 = Release IA64|x64 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release|Win32.ActiveCfg = Release|Win32 {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release|Win32.Build.0 = Release|Win32 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release|x64.ActiveCfg = Release|x64 + {97E39181-72A5-4BA5-B962-BCDACCA2FE83}.Release|x64.Build.0 = Release|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.ASCII Release|x64.Build.0 = ASCII Release|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug IA64|x64.Build.0 = Debug IA64|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug|Win32.ActiveCfg = Debug|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug|Win32.Build.0 = Debug|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug|x64.ActiveCfg = Debug|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Debug|x64.Build.0 = Debug|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release AMD64|x64.Build.0 = Release AMD64|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release IA64|x64.Build.0 = Release IA64|x64 {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release|Win32.ActiveCfg = Release|Win32 
{BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release|Win32.Build.0 = Release|Win32 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release|x64.ActiveCfg = Release|x64 + {BDFBE3FD-385D-4816-90E5-A0FEC0925E5F}.Release|x64.Build.0 = Release|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.ASCII Release|x64.Build.0 = ASCII Release|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug IA64|x64.Build.0 = Debug IA64|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug|Win32.ActiveCfg = Debug|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug|Win32.Build.0 = Debug|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug|x64.ActiveCfg = Debug|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Debug|x64.Build.0 = Debug|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release AMD64|x64.Build.0 = Release AMD64|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release IA64|x64.Build.0 = Release IA64|x64 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release|Win32.ActiveCfg = Release|Win32 {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release|Win32.Build.0 = Release|Win32 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release|x64.ActiveCfg = Release|x64 + {5AFD87D8-688B-4CC1-9DC9-AD8943EF596D}.Release|x64.Build.0 = Release|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + 
{E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.ASCII Release|x64.Build.0 = ASCII Release|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug IA64|x64.Build.0 = Debug IA64|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug|Win32.ActiveCfg = Debug|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug|Win32.Build.0 = Debug|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug|x64.ActiveCfg = Debug|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Debug|x64.Build.0 = Debug|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release AMD64|x64.Build.0 = Release AMD64|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release IA64|x64.Build.0 = Release IA64|x64 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release|Win32.ActiveCfg = Release|Win32 {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release|Win32.Build.0 = Release|Win32 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release|x64.ActiveCfg = Release|x64 + {E685D09D-0E46-482B-8546-4FA0BEC4EB8A}.Release|x64.Build.0 = Release|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.ASCII Release|x64.Build.0 = ASCII Release|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug IA64|x64.Build.0 = Debug IA64|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug|Win32.ActiveCfg = Debug|Win32 
{B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug|Win32.Build.0 = Debug|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug|x64.ActiveCfg = Debug|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Debug|x64.Build.0 = Debug|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release AMD64|x64.Build.0 = Release AMD64|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release IA64|x64.Build.0 = Release IA64|x64 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release|Win32.ActiveCfg = Release|Win32 {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release|Win32.Build.0 = Release|Win32 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release|x64.ActiveCfg = Release|x64 + {B440697C-3D0A-44F4-90CF-A37EBA37B228}.Release|x64.Build.0 = Release|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.ASCII Release|x64.Build.0 = ASCII Release|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug IA64|x64.Build.0 = Debug IA64|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug|Win32.ActiveCfg = Debug|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug|Win32.Build.0 = Debug|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug|x64.ActiveCfg = Debug|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Debug|x64.Build.0 = Debug|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Release AMD64|x64.Build.0 = Release AMD64|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Release IA64|x64.ActiveCfg = Release IA64|x64 + 
{23394132-218F-4F7A-BA9F-37C7F7007765}.Release IA64|x64.Build.0 = Release IA64|x64 {23394132-218F-4F7A-BA9F-37C7F7007765}.Release|Win32.ActiveCfg = Release|Win32 {23394132-218F-4F7A-BA9F-37C7F7007765}.Release|Win32.Build.0 = Release|Win32 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Release|x64.ActiveCfg = Release|x64 + {23394132-218F-4F7A-BA9F-37C7F7007765}.Release|x64.Build.0 = Release|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.ASCII Release|x64.Build.0 = ASCII Release|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug IA64|x64.Build.0 = Debug IA64|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug|Win32.ActiveCfg = Debug|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug|Win32.Build.0 = Debug|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug|x64.ActiveCfg = Debug|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Debug|x64.Build.0 = Debug|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release AMD64|x64.Build.0 = Release AMD64|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release IA64|x64.Build.0 = Release IA64|x64 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release|Win32.ActiveCfg = Release|Win32 {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release|Win32.Build.0 = Release|Win32 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release|x64.ActiveCfg = Release|x64 + {4F1C44D8-2893-4FD6-A832-21F372F19596}.Release|x64.Build.0 = Release|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII 
Release|Win32.Build.0 = ASCII Release|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.ASCII Release|x64.Build.0 = ASCII Release|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug IA64|x64.Build.0 = Debug IA64|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug|Win32.ActiveCfg = Debug|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug|Win32.Build.0 = Debug|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug|x64.ActiveCfg = Debug|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Debug|x64.Build.0 = Debug|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release AMD64|x64.Build.0 = Release AMD64|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release IA64|x64.Build.0 = Release IA64|x64 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release|Win32.ActiveCfg = Release|Win32 {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release|Win32.Build.0 = Release|Win32 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release|x64.ActiveCfg = Release|x64 + {F1A92B38-F912-45B0-93A9-72770E70BEF4}.Release|x64.Build.0 = Release|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.ASCII Release|x64.Build.0 = ASCII Release|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug IA64|x64.Build.0 = Debug 
IA64|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug|Win32.ActiveCfg = Debug|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug|Win32.Build.0 = Debug|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug|x64.ActiveCfg = Debug|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Debug|x64.Build.0 = Debug|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release AMD64|x64.Build.0 = Release AMD64|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release IA64|x64.Build.0 = Release IA64|x64 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release|Win32.ActiveCfg = Release|Win32 {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release|Win32.Build.0 = Release|Win32 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release|x64.ActiveCfg = Release|x64 + {171A24B8-2BDB-4511-B0E7-0E5E6447B1FB}.Release|x64.Build.0 = Release|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.ASCII Release|x64.Build.0 = ASCII Release|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug IA64|x64.Build.0 = Debug IA64|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug|Win32.ActiveCfg = Debug|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug|Win32.Build.0 = Debug|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug|x64.ActiveCfg = Debug|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Debug|x64.Build.0 = Debug|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release AMD64|x64.Build.0 = Release AMD64|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release 
IA64|x64.ActiveCfg = Release IA64|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release IA64|x64.Build.0 = Release IA64|x64 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release|Win32.ActiveCfg = Release|Win32 {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release|Win32.Build.0 = Release|Win32 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release|x64.ActiveCfg = Release|x64 + {B44968C4-3243-4D6D-9A59-4C0F9F875C4E}.Release|x64.Build.0 = Release|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.ASCII Release|x64.Build.0 = ASCII Release|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug IA64|x64.Build.0 = Debug IA64|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug|Win32.ActiveCfg = Debug|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug|Win32.Build.0 = Debug|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug|x64.ActiveCfg = Debug|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Debug|x64.Build.0 = Debug|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release AMD64|x64.Build.0 = Release AMD64|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release IA64|x64.Build.0 = Release IA64|x64 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release|Win32.ActiveCfg = Release|Win32 {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release|Win32.Build.0 = Release|Win32 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release|x64.ActiveCfg = Release|x64 + {C6112BAF-38D0-415D-B5B4-AC7F1ECD00F4}.Release|x64.Build.0 = Release|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 
{F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.ASCII Release|x64.Build.0 = ASCII Release|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug IA64|x64.Build.0 = Debug IA64|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug|Win32.ActiveCfg = Debug|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug|Win32.Build.0 = Debug|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug|x64.ActiveCfg = Debug|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Debug|x64.Build.0 = Debug|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release AMD64|x64.Build.0 = Release AMD64|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release IA64|x64.Build.0 = Release IA64|x64 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release|Win32.ActiveCfg = Release|Win32 {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release|Win32.Build.0 = Release|Win32 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release|x64.ActiveCfg = Release|x64 + {F7C5312E-EB20-411F-AC5B-A463D3DE028F}.Release|x64.Build.0 = Release|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.ASCII Release|x64.Build.0 = ASCII Release|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + 
{070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug IA64|x64.Build.0 = Debug IA64|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug|Win32.ActiveCfg = Debug|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug|Win32.Build.0 = Debug|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug|x64.ActiveCfg = Debug|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Debug|x64.Build.0 = Debug|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release AMD64|x64.Build.0 = Release AMD64|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release IA64|x64.Build.0 = Release IA64|x64 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release|Win32.ActiveCfg = Release|Win32 {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release|Win32.Build.0 = Release|Win32 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release|x64.ActiveCfg = Release|x64 + {070DC608-CBBF-4AFB-BA2C-BFE0785038F4}.Release|x64.Build.0 = Release|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.ASCII Release|x64.Build.0 = ASCII Release|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug IA64|x64.Build.0 = Debug IA64|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug|Win32.ActiveCfg = Debug|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug|Win32.Build.0 = Debug|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug|x64.ActiveCfg = Debug|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Debug|x64.Build.0 = Debug|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release AMD64|x64.Build.0 = Release AMD64|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release IA64|Win32.Build.0 
= Release IA64|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release IA64|x64.Build.0 = Release IA64|x64 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release|Win32.ActiveCfg = Release|Win32 {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release|Win32.Build.0 = Release|Win32 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release|x64.ActiveCfg = Release|x64 + {A2F71ABC-96B2-450D-973A-5BE661034CCE}.Release|x64.Build.0 = Release|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.ASCII Release|x64.Build.0 = ASCII Release|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug IA64|x64.Build.0 = Debug IA64|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug|Win32.ActiveCfg = Debug|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug|Win32.Build.0 = Debug|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug|x64.ActiveCfg = Debug|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Debug|x64.Build.0 = Debug|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release AMD64|x64.Build.0 = Release AMD64|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release IA64|x64.Build.0 = Release IA64|x64 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release|Win32.ActiveCfg = Release|Win32 {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release|Win32.Build.0 = Release|Win32 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release|x64.ActiveCfg = Release|x64 + {861A5FAD-25FB-4884-8F28-2CE407B54AAD}.Release|x64.Build.0 = Release|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 
{93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.ASCII Release|x64.Build.0 = ASCII Release|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug IA64|x64.ActiveCfg = Debug IA64|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug IA64|x64.Build.0 = Debug IA64|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug|Win32.ActiveCfg = Debug|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug|Win32.Build.0 = Debug|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug|x64.ActiveCfg = Debug|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Debug|x64.Build.0 = Debug|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release AMD64|Win32.Build.0 = Release AMD64|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release AMD64|x64.ActiveCfg = Release AMD64|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release AMD64|x64.Build.0 = Release AMD64|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release IA64|Win32.ActiveCfg = Release IA64|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release IA64|Win32.Build.0 = Release IA64|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release IA64|x64.ActiveCfg = Release IA64|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release IA64|x64.Build.0 = Release IA64|x64 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release|Win32.ActiveCfg = Release|Win32 {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release|Win32.Build.0 = Release|Win32 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release|x64.ActiveCfg = Release|x64 + {93D80BE1-BE99-4D39-B02D-31294C9E9DFA}.Release|x64.Build.0 = Release|x64 {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Debug|Win32.ActiveCfg = ASCII Debug|Win32 {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Debug|Win32.Build.0 = ASCII Debug|Win32 + {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Debug|x64.ActiveCfg = ASCII Debug|x64 + {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Debug|x64.Build.0 = ASCII Debug|x64 {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Release|Win32.ActiveCfg = ASCII Release|Win32 {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Release|Win32.Build.0 = ASCII Release|Win32 + {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Release|x64.ActiveCfg = ASCII Release|x64 + {3E497868-A70B-45BE-B113-1F04D368DA93}.ASCII Release|x64.Build.0 = ASCII Release|x64 {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug AMD64|Win32.ActiveCfg = Debug AMD64|Win32 {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug AMD64|Win32.Build.0 = Debug AMD64|Win32 + {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug AMD64|x64.ActiveCfg = Debug AMD64|x64 + {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug AMD64|x64.Build.0 = Debug AMD64|x64 {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug IA64|Win32.ActiveCfg = Debug IA64|Win32 {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug IA64|Win32.Build.0 = Debug IA64|Win32 + 
{3E497868-A70B-45BE-B113-1F04D368DA93}.Debug IA64|x64.ActiveCfg = Debug IA64|x64
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug IA64|x64.Build.0 = Debug IA64|x64
{3E497868-A70B-45BE-B113-1F04D368DA93}.Debug|Win32.ActiveCfg = Debug|Win32
{3E497868-A70B-45BE-B113-1F04D368DA93}.Debug|Win32.Build.0 = Debug|Win32
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug|x64.ActiveCfg = Debug|x64
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Debug|x64.Build.0 = Debug|x64
{3E497868-A70B-45BE-B113-1F04D368DA93}.Release AMD64|Win32.ActiveCfg = Release AMD64|Win32
{3E497868-A70B-45BE-B113-1F04D368DA93}.Release AMD64|Win32.Build.0 = Release AMD64|Win32
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Release AMD64|x64.ActiveCfg = Release AMD64|x64
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Release AMD64|x64.Build.0 = Release AMD64|x64
{3E497868-A70B-45BE-B113-1F04D368DA93}.Release IA64|Win32.ActiveCfg = Release IA64|Win32
{3E497868-A70B-45BE-B113-1F04D368DA93}.Release IA64|Win32.Build.0 = Release IA64|Win32
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Release IA64|x64.ActiveCfg = Release IA64|x64
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Release IA64|x64.Build.0 = Release IA64|x64
{3E497868-A70B-45BE-B113-1F04D368DA93}.Release|Win32.ActiveCfg = Release|Win32
{3E497868-A70B-45BE-B113-1F04D368DA93}.Release|Win32.Build.0 = Release|Win32
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Release|x64.ActiveCfg = Release|x64
+ {3E497868-A70B-45BE-B113-1F04D368DA93}.Release|x64.Build.0 = Release|x64

[Identical pairs of "+ <GUID>.<config>|x64.ActiveCfg" and "+ <GUID>.<config>|x64.Build.0"
entries are added for the ASCII Debug, ASCII Release, Debug AMD64, Debug IA64, Debug,
Release AMD64, Release IA64 and Release configurations of each remaining project in the
solution:
{894E4674-B620-44CF-915C-EA171C279B0B}, {EA1286CA-295F-4666-8101-EAE5A254B78A},
{207C3746-0D9C-4ADB-AF4D-876039022870}, {14301406-3348-4D7A-A47B-EC6180017F50},
{3505C707-1445-4E79-894A-D30CFE5E90DA}, {8F1A8687-4CA8-4F49-AD58-D0ECA0E0E1CA},
{927F260A-19CF-4E55-A154-109A0BF80AE9}, {389384D4-75A5-4B53-A7C2-AC61375C430B},
{B0C9B69A-89CA-4378-90E2-180F8C6DA0EB}, {5A50D272-4CE6-4DBA-A20E-B4649BDAB28C},
{3C064356-0ECF-4BA2-A9F8-EA0E073F1A04}, {40373079-658E-451A-8249-5986B2190133},
{6E39EDB9-D55A-43C7-B740-9221B03FA458}, {EE86A96A-D00A-44C9-8914-850E57B8E677},
{22CBF051-19CD-4E61-B5C6-B5F8072F1DD3}, {E53D7716-2173-47DC-AA6A-E06867A9D5D0},
{07BA8001-E8DA-4CD2-A800-A4239741C4ED}, {5F2C44DE-5466-4CF1-9962-415FFC4F3246},
{10429E06-7C29-477D-8993-46773DC3FFE9}, {D5938761-AD96-4C0C-9C46-BC375358DFE9},
{A27263BC-8BED-412C-944D-7EC7D9F194EC}, {47E23751-01E8-43C9-83C2-06EE68C121EF},
{3A8EA457-87A3-4F20-A993-452D104FFD16} and {F7F26B95-3624-49CF-9A79-6C6F69B72078}.]

	EndGlobalSection
	GlobalSection(SolutionProperties) = preSolution
		HideSolutionNode = FALSE

Modified: external/db-4.4.20-vs9/build_win32/build_all.vcproj
==============================================================================
--- external/db-4.4.20-vs9/build_win32/build_all.vcproj (original)
+++ external/db-4.4.20-vs9/build_win32/build_all.vcproj Thu Mar 6 14:43:47 2008
@@ -11,6 +11,9 @@
@@ -343,6 +346,350 @@
[The XML added by these two hunks -- a new x64 platform entry and the matching
x64 configuration blocks -- was stripped from the archived message; only the
hunk headers survive.]

Modified: external/db-4.4.20-vs9/build_win32/db_archive.vcproj
==============================================================================
--- external/db-4.4.20-vs9/build_win32/db_archive.vcproj (original)
+++ external/db-4.4.20-vs9/build_win32/db_archive.vcproj Thu Mar 6 14:43:47 2008
@@ -10,12 +10,491 @@
@@ -74,9 +553,131 @@
[New x64 configuration blocks are added by the two hunks above; their XML was
stripped from the archived message.]
@@ -138,7 +740,7 @@
 SubSystem="1"
 RandomizedBaseAddress="1"
 DataExecutionPrevention="0"
- TargetMachine="1"
+ TargetMachine="17"
 />
[The same TargetMachine="1" -> TargetMachine="17" change appears in the hunks
@@ -287,7 +894,7 @@, @@ -376,7 +984,7 @@ and @@ -466,7 +1075,7 @@; the smaller
hunks @@ -181,6 +785,7 @@, @@ -509,6 +1120,7 @@ and @@ -568,6 +1183,7 @@ add
lines whose content was stripped.]

Modified: external/db-4.4.20-vs9/build_win32/db_checkpoint.vcproj
==============================================================================
--- external/db-4.4.20-vs9/build_win32/db_checkpoint.vcproj (original)
+++ external/db-4.4.20-vs9/build_win32/db_checkpoint.vcproj Thu Mar 6 14:43:47 2008
[Same pattern as db_archive.vcproj: new x64 configuration blocks (hunks
@@ -10,12 +10,491 @@ and @@ -74,9 +553,131 @@, XML stripped), TargetMachine="1"
-> TargetMachine="17" in hunks @@ -138,7 +740,7 @@, @@ -228,7 +831,7 @@,
@@ -317,7 +921,7 @@ and @@ -407,7 +1012,7 @@, plus single-line additions in
hunks @@ -450,6 +1057,7 @@, @@ -509,6 +1120,7 @@ and @@ -568,6 +1183,7 @@.]

Modified: external/db-4.4.20-vs9/build_win32/db_deadlock.vcproj
==============================================================================
--- external/db-4.4.20-vs9/build_win32/db_deadlock.vcproj (original)
+++ external/db-4.4.20-vs9/build_win32/db_deadlock.vcproj Thu Mar 6 14:43:47 2008
[Same pattern again: new x64 configuration blocks (hunks @@ -10,12 +10,491 @@
and @@ -74,9 +553,131 @@, XML stripped), TargetMachine="1" -> TargetMachine="17"
in hunks @@ -138,7 +740,7 @@, @@ -227,7 +830,7 @@, @@ -317,7 +921,7 @@ and
@@ -407,7 +1012,7 @@, plus single-line additions in hunks @@ -450,6 +1057,7 @@,
@@ -509,6 +1120,7 @@ and @@ -568,6 +1183,7 @@.]

Modified: external/db-4.4.20-vs9/build_win32/db_dll.vcproj
==============================================================================
--- external/db-4.4.20-vs9/build_win32/db_dll.vcproj (original)
+++ external/db-4.4.20-vs9/build_win32/db_dll.vcproj Thu Mar 6 14:43:47 2008
@@ -10,6 +10,9 @@
@@ -632,114 +635,9228 @@
[The second hunk rewrites the configuration list, removing 114 lines and adding
a very large block of new configuration XML; that XML was stripped from the
archived message and its remnants continue below.]
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - + /> + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + /> + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -228,7 +831,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -317,7 +921,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -360,6 +966,7 @@ /> @@ -419,6 +1029,7 @@ /> @@ -525,7 +1138,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_hotbackup.vcproj 
============================================================================== --- external/db-4.4.20-vs9/build_win32/db_hotbackup.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_hotbackup.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,460 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,9 +522,162 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -181,6 +785,7 @@ /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -330,6 +939,7 @@ /> @@ -435,7 +1047,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -478,6 +1092,7 @@ /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_java.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_java.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_java.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,487 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,7 +549,108 @@ /> + + + + + + + + + + + + + + + + + + + @@ -92,6 +668,7 @@ /> + + + + + + + + + + + + + + + + + + + @@ -206,7 +847,7 @@ RandomizedBaseAddress="1" DataExecutionPrevention="0" ImportLibrary=".\Debug/libdb_java44d.lib" - TargetMachine="1" + TargetMachine="17" /> @@ -305,7 +946,7 @@ RandomizedBaseAddress="1" DataExecutionPrevention="0" ImportLibrary=".\Release/libdb_java44.lib" - TargetMachine="1" + TargetMachine="17" /> @@ -348,6 +991,7 @@ /> @@ -462,7 +1107,7 @@ RandomizedBaseAddress="1" DataExecutionPrevention="0" ImportLibrary=".\Debug_ASCII/libdb_java44d.lib" - TargetMachine="1" + TargetMachine="17" /> @@ -505,6 +1152,7 @@ /> @@ -620,7 +1269,7 @@ RandomizedBaseAddress="1" DataExecutionPrevention="0" ImportLibrary=".\Release_ASCII/libdb_java44.lib" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_load.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_load.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_load.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,467 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -139,7 +741,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -228,7 +831,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -271,6 +876,7 @@ /> @@ -330,6 +939,7 @@ /> @@ -436,7 +1048,7 @@ 
SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -525,7 +1138,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_printlog.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_printlog.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_printlog.vcproj Thu Mar 6 14:43:47 2008 @@ -10,6 +10,9 @@ + @@ -608,123 +611,941 @@ Name="VCPostBuildEventTool" /> - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_recover.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_recover.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_recover.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,491 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,9 +553,131 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -139,7 +741,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -182,6 +786,7 @@ /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -377,7 +985,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -466,7 +1075,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -509,6 +1120,7 @@ /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_small.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_small.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_small.vcproj Thu Mar 6 14:43:47 2008 @@ -10,6 +10,9 @@ + @@ -538,12 +541,7528 @@ Name="VCPostBuildEventTool" /> - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + /> + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -133,9 +612,135 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -198,7 +804,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -330,6 +939,7 @@ /> @@ -435,7 +1047,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -525,7 +1138,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_static.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_static.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_static.vcproj Thu Mar 6 14:43:47 2008 @@ -10,6 +10,9 @@ + @@ -530,133 +533,9345 @@ Name="VCPostBuildEventTool" /> - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + /> + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + @@ -636,117 +639,1043 @@ Name="VCPostBuildEventTool" /> - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_test.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_test.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_test.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,473 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -139,7 +749,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -231,7 +842,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -276,6 +889,7 @@ /> @@ -335,6 +952,7 @@ /> @@ -440,7 +1060,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -531,7 +1152,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -576,6 +1199,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + 
+ Modified: external/db-4.4.20-vs9/build_win32/db_upgrade.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_upgrade.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_upgrade.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,525 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -92,6 +632,7 @@ /> + + + + + + + + + + + + + + + + + + + @@ -197,7 +803,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -330,6 +939,7 @@ /> @@ -389,6 +1002,7 @@ /> @@ -494,7 +1110,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/db_verify.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/db_verify.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/db_verify.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,526 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -92,6 +632,7 @@ /> @@ -151,6 +695,7 @@ /> + + + + + + + + + + + + + + + + + + + @@ -256,7 +866,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -346,7 +957,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -389,6 +1002,7 @@ /> @@ -495,7 +1111,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_access.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_access.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_access.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,526 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -92,6 +632,7 @@ /> @@ -151,6 +695,7 @@ /> @@ -210,6 +758,7 @@ /> + + + + + + + + + + + + + + + + + + + @@ -316,7 +930,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -406,7 +1021,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -495,7 +1111,7 @@ SubSystem="1" RandomizedBaseAddress="1" 
DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_btrec.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_btrec.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_btrec.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,611 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -79,7 +677,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -122,6 +722,7 @@ /> @@ -181,6 +785,7 @@ /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -330,6 +939,7 @@ /> @@ -435,7 +1047,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -478,6 +1092,7 @@ /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_csvcode.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_csvcode.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_csvcode.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,619 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -80,7 +686,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -171,7 +778,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -262,7 +870,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -307,6 +917,7 @@ /> @@ -366,6 +980,7 @@ /> @@ -425,6 +1043,7 @@ /> @@ -484,6 +1106,7 @@ /> @@ -590,7 +1215,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_csvload.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_csvload.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_csvload.vcproj Thu Mar 6 14:43:47 2008 @@ -10,6 +10,9 @@ + @@ -608,119 +611,813 @@ Name="VCPostBuildEventTool" /> - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_csvquery.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_csvquery.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_csvquery.vcproj Thu Mar 6 14:43:47 2008 @@ -10,6 +10,9 @@ + @@ -608,144 +611,878 @@ Name="VCPostBuildEventTool" /> - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_env.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_env.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_env.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,611 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -79,7 +677,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -169,7 +768,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -212,6 +813,7 @@ /> @@ -271,6 +876,7 @@ /> @@ -377,7 +985,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -420,6 +1030,7 @@ /> @@ -525,7 +1138,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_lock.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_lock.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_lock.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,611 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -79,7 +677,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -122,6 +722,7 @@ /> @@ -181,6 +785,7 @@ /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -376,7 +984,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - 
TargetMachine="1" + TargetMachine="17" /> @@ -419,6 +1029,7 @@ /> @@ -478,6 +1092,7 @@ /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_mpool.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_mpool.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_mpool.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,461 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,9 +523,161 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -181,6 +785,7 @@ /> @@ -240,6 +848,7 @@ /> @@ -346,7 +957,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -436,7 +1048,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -479,6 +1093,7 @@ /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_repquote.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_repquote.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_repquote.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,611 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -33,6 +632,7 @@ /> @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -227,7 +830,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -270,6 +875,7 @@ /> @@ -376,7 +984,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -466,7 +1075,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -509,6 +1120,7 @@ /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_sequence.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_sequence.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_sequence.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,525 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -92,6 +632,7 @@ /> @@ 
-151,6 +695,7 @@ /> + + + + + + + + + + + + + + + + + + + @@ -256,7 +866,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -346,7 +957,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -389,6 +1002,7 @@ /> @@ -494,7 +1110,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_tpcb.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_tpcb.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_tpcb.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,525 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -139,7 +741,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -182,6 +786,7 @@ /> @@ -241,6 +849,7 @@ /> @@ -300,6 +912,7 @@ /> @@ -405,7 +1020,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -494,7 +1110,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_txnguide.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_txnguide.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_txnguide.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,491 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,9 +553,131 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -228,7 +831,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -318,7 +922,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -407,7 +1012,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -450,6 +1057,7 @@ /> @@ -509,6 +1120,7 @@ /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/ex_txnguide_inmem.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/ex_txnguide_inmem.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/ex_txnguide_inmem.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,461 @@ + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,9 +523,161 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -181,6 +785,7 @@ /> @@ -240,6 +848,7 @@ /> @@ -346,7 +957,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -436,7 +1048,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -479,6 +1093,7 @@ /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/example_database_load.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/example_database_load.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/example_database_load.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,525 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -139,7 +741,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -228,7 +831,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -271,6 +876,7 @@ /> @@ -376,7 +984,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -419,6 +1029,7 @@ /> @@ -478,6 +1092,7 @@ /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/example_database_read.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/example_database_read.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/example_database_read.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,526 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -92,6 +632,7 @@ /> @@ -151,6 +695,7 @@ /> + + + + + + + + + + + + + + + + + + + @@ -256,7 +866,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -299,6 +911,7 @@ /> @@ -405,7 +1020,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -495,7 +1111,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/excxx_access.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/excxx_access.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/excxx_access.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,466 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -138,7 +740,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -181,6 +785,7 @@ /> @@ -286,7 +893,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -329,6 +938,7 @@ /> @@ -435,7 +1047,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -525,7 +1138,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/excxx_btrec.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/excxx_btrec.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/excxx_btrec.vcproj Thu Mar 6 14:43:47 2008 @@ -10,14 +10,611 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -80,7 +678,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -123,6 +723,7 @@ /> @@ -228,7 +831,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -271,6 +876,7 @@ /> @@ -330,6 +939,7 @@ /> @@ -435,7 +1047,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -525,7 +1138,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -568,6 +1183,7 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + Modified: external/db-4.4.20-vs9/build_win32/excxx_env.vcproj ============================================================================== --- external/db-4.4.20-vs9/build_win32/excxx_env.vcproj (original) +++ external/db-4.4.20-vs9/build_win32/excxx_env.vcproj Thu Mar 6 14:43:47 2008 @@ -10,12 +10,491 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -74,7 +553,7 @@ /> @@ -133,9 +612,135 @@ /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -198,7 +804,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ -287,7 +894,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> @@ 
-376,7 +984,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" />

[The remaining vcproj diffs in this checkin survive in the archive only as fragments: in each project file the new configuration blocks are added (their XML bodies were not preserved) and every linker section changes TargetMachine="1" to TargetMachine="17".]

Modified: external/db-4.4.20-vs9/build_win32/excxx_example_database_load.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_example_database_read.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_lock.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_mpool.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_sequence.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_tpcb.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_txnguide.vcproj
Modified: external/db-4.4.20-vs9/build_win32/excxx_txnguide_inmem.vcproj

(surviving hunk fragments of excxx_txnguide_inmem.vcproj follow:)
/> @@ -584,7 +1201,7 @@ SubSystem="1" RandomizedBaseAddress="1" DataExecutionPrevention="0" - TargetMachine="1" + TargetMachine="17" /> + + + + + + + + + + + + + + + + + + + + + + + + From python-checkins at python.org Thu Mar 6 14:49:47 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 14:49:47 +0100 (CET) Subject: [Python-checkins] r61278 - python/trunk/PCbuild/_bsddb.vcproj python/trunk/PCbuild/readme.txt Message-ID: <20080306134947.75E021E4009@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 14:49:47 2008 New Revision: 61278 Modified: python/trunk/PCbuild/_bsddb.vcproj python/trunk/PCbuild/readme.txt Log: Rely on x64 platform configuration when building _bsddb on AMD64. Modified: python/trunk/PCbuild/_bsddb.vcproj ============================================================================== --- python/trunk/PCbuild/_bsddb.vcproj (original) +++ python/trunk/PCbuild/_bsddb.vcproj Thu Mar 6 14:49:47 2008 @@ -115,11 +115,11 @@ /> @@ -497,11 +497,11 @@ /> Modified: python/trunk/PCbuild/readme.txt ============================================================================== --- python/trunk/PCbuild/readme.txt (original) +++ python/trunk/PCbuild/readme.txt Thu Mar 6 14:49:47 2008 @@ -202,7 +202,9 @@ The _bsddb subprojects depends only on the db_static project of Berkeley DB. You have to choose either "Release", "Release AMD64", "Debug" - or "Debug AMD64" as configuration. + or "Debug AMD64" as configuration. For the AND64 builds, you need to + create the "x64" platform first (in Solution Platforms\Configuration + Manager...) Alternatively, if you want to start with the original sources, go to Sleepycat's download page: From python-checkins at python.org Thu Mar 6 14:50:29 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 14:50:29 +0100 (CET) Subject: [Python-checkins] r61279 - python/trunk/Tools/buildbot/external-amd64.bat Message-ID: <20080306135029.439671E4009@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 14:50:28 2008 New Revision: 61279 Modified: python/trunk/Tools/buildbot/external-amd64.bat Log: Update db-4.4.20 build procedure. Modified: python/trunk/Tools/buildbot/external-amd64.bat ============================================================================== --- python/trunk/Tools/buildbot/external-amd64.bat (original) +++ python/trunk/Tools/buildbot/external-amd64.bat Thu Mar 6 14:50:28 2008 @@ -10,10 +10,15 @@ if not exist bzip2-1.0.3 svn export http://svn.python.org/projects/external/bzip2-1.0.3 @rem Sleepycat db -if not exist db-4.4.20 svn export http://svn.python.org/projects/external/db-4.4.20 - at REM if not exist db-4.4.20\build_win32\debug\libdb44sd.lib ( - at REM vcbuild db-4.4.20\build_win32\Berkeley_DB.sln /build Debug /project db_static - at REM ) + at rem Remove VS 2003 builds +if exist db-4.4.20 if not exist db-4.4.20\build_win32\this_is_for_vs9 ( + echo Removing old build + rd /s/q db-4.4.20 +) +if not exist db-4.4.20 svn export http://svn.python.org/projects/external/db-4.4.20-vs9 db-4.4.20 +if not exist db-4.4.20\build_win32\debug\libdb44sd.lib ( + vcbuild db-4.4.20\build_win32\db_static.vcproj "Debug AMD64|x64" +) @rem OpenSSL if not exist openssl-0.9.8g ( From buildbot at python.org Thu Mar 6 15:33:18 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 14:33:18 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080306143318.CFD9D1E4039@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2988 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Thu Mar 6 20:00:07 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 20:00:07 +0100 (CET) Subject: [Python-checkins] r61282 - tracker/instances/python-dev/scripts/set_text_plain Message-ID: <20080306190007.386751E4004@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 20:00:06 2008 New Revision: 61282 Added: tracker/instances/python-dev/scripts/set_text_plain - copied, changed from r61007, tracker/instances/python-dev/scripts/remove_py3k Log: Add script to set all patches to text/plain. Copied: tracker/instances/python-dev/scripts/set_text_plain (from r61007, tracker/instances/python-dev/scripts/remove_py3k) ============================================================================== --- tracker/instances/python-dev/scripts/remove_py3k (original) +++ tracker/instances/python-dev/scripts/set_text_plain Thu Mar 6 20:00:06 2008 @@ -1,24 +1,17 @@ -# This sample script changes all issues with the -# py3k keyword to using the "Python 3.0" version instead. -import sys +# This sample script changes all files with the +# extensions .diff/.patch/.py to the text/plain type +import sys, posixpath sys.path.insert(1,'/home/roundup/roundup/lib/python2.4/site-packages') import roundup.instance tracker = roundup.instance.open('.') db = tracker.open('admin') -py3k = db.keyword.lookup('py3k') -py30 = db.version.lookup('Python 3.0') - -using_py3k = db.issue.find(keywords={py3k:1}) - -for issue in using_py3k: - keywords = db.issue.get(issue, 'keywords') - keywords.remove(py3k) - versions = db.issue.get(issue, 'versions') - versions.append(py30) - - # Use set_inner, so that auditors and reactors don't fire - db.issue.set_inner(issue, keywords=keywords, versions=versions) +tochange = [] +for fileid in db.file.getnodeids(): + name = db.file.get(fileid, 'name') + if posixpath.splitext(name)[1] in ('.diff','.patch','.py'): + if db.file.get(fileid, 'type') != 'text/plain': + db.file.set_inner(fileid, type='text/plain') db.commit() From buildbot at python.org Thu Mar 6 20:19:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 19:19:02 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080306191902.95D941E4015@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/688 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Thu Mar 6 20:50:01 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 20:50:01 +0100 (CET) Subject: [Python-checkins] r61283 - in tracker/instances/python-dev: detectors/autoassign.py schema.py Message-ID: <20080306195001.8A84B1E4019@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 20:50:01 2008 New Revision: 61283 Added: tracker/instances/python-dev/detectors/autoassign.py (contents, props changed) Modified: tracker/instances/python-dev/schema.py Log: Implement auto-assignment. Added: tracker/instances/python-dev/detectors/autoassign.py ============================================================================== --- (empty file) +++ tracker/instances/python-dev/detectors/autoassign.py Thu Mar 6 20:50:01 2008 @@ -0,0 +1,28 @@ +# Auditor to automatically assign issues to a user when +# the component field gets set + +def autoassign(db, cl, nodeid, newvalues): + try: + components = newvalues['components'] + except KeyError: + # Without components, nothing needs to be auto-assigned + return + if newvalues.has_key('assignee'): + # If there is an explicit assignee in the new values + # (even if it is None, in the case unassignment): + # do nothing + return + # If the issue is already assigned, do nothing + if nodeid and db.issue.get(nodeid, 'assignee'): + return + for component in components: + user = db.component.get(component, 'assign_to') + if user: + # If there would be multiple auto-assigned users + # arbitrarily pick the first one we find + newvalues['assignee'] = user + return + +def init(db): + db.issue.audit('create', autoassign) + db.issue.audit('set', autoassign) Modified: tracker/instances/python-dev/schema.py ============================================================================== --- tracker/instances/python-dev/schema.py (original) +++ tracker/instances/python-dev/schema.py Thu Mar 6 20:50:01 2008 @@ -20,7 +20,8 @@ component = Class(db, 'component', name=String(), description=String(), - order=Number()) + order=Number(), + assign_to=Link('user')) component.setkey('name') # Version From python-checkins at python.org Thu Mar 6 20:58:04 2008 From: python-checkins at python.org (martin.v.loewis) Date: Thu, 6 Mar 2008 20:58:04 +0100 (CET) Subject: [Python-checkins] r61284 - tracker/instances/python-dev/detectors/audit2to3.py Message-ID: <20080306195804.74E391E4015@bag.python.org> Author: martin.v.loewis Date: Thu Mar 6 20:58:04 2008 New Revision: 61284 Removed: tracker/instances/python-dev/detectors/audit2to3.py Log: Remove audit2to3, replacing it with the generic autoassignment feature. Deleted: /tracker/instances/python-dev/detectors/audit2to3.py ============================================================================== --- /tracker/instances/python-dev/detectors/audit2to3.py Thu Mar 6 20:58:04 2008 +++ (empty file) @@ -1,41 +0,0 @@ -import roundup -import roundup.instance -import sets - -def update2to3(db, cl, nodeid, newvalues): - '''Component 2to3 issues to be assigned to collinwinter unless otherwise - assigned. 
- ''' - # nodeid will be None if this is a new node - componentIDS=None - if nodeid is not None: - componentIDS = cl.get(nodeid, 'components') - if newvalues.has_key('components'): - componentIDS = newvalues['components'] - if componentIDS and (theComponent in componentIDS): - if not newvalues.has_key('assignee') or \ - newvalues['assignee'] == Nobody: - newvalues['assignee'] = theMan - -def init(db): - global theMan, theComponent, Nobody - theMan = db.user.lookup('collinwinter') - Nobody = db.user.lookup('nobody') - theComponent = db.component.lookup('2to3 (2.x to 3.0 conversion tool)') - - db.issue.audit('create', update2to3) - db.issue.audit('set', update2to3) - -if __name__ == '__main__': - global theMan, theComponent, Nobody - instanceHome='/home/roundup/trackers/tracker' - instance = roundup.instance.open(instanceHome) - db = instance.open('admin') - cl = db.issue - nodeID = '1002' - theMan = db.user.lookup('collinwinter') - Nobody = db.user.lookup('nobody') - theComponent = db.component.lookup('2to3 (2.x to 3.0 conversion tool)') - newvalues = { 'components': [theComponent] , 'assignee': Nobody} - update2to3(db, cl, nodeID, newvalues) - print Nobody, theMan, theComponent, newvalues From buildbot at python.org Thu Mar 6 21:27:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 20:27:25 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080306202725.B36301E4016@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/78 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_xmlrpc_net make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Thu Mar 6 21:52:01 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 6 Mar 2008 21:52:01 +0100 (CET) Subject: [Python-checkins] r61285 - python/trunk/Lib/test/test_itertools.py Message-ID: <20080306205201.596E71E4015@bag.python.org> Author: raymond.hettinger Date: Thu Mar 6 21:52:01 2008 New Revision: 61285 Modified: python/trunk/Lib/test/test_itertools.py Log: More tests. 
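An aside on the recipe doctests added in the diff below: the powerset recipe enumerates subsets by treating each integer n in xrange(2**len(pairs)) as a bit mask over the elements. The same subsets can be produced with the itertools functions exercised by this checkin; a minimal sketch, where the name powerset_by_size is illustrative and not part of the checkin:

    from itertools import chain, combinations

    def powerset_by_size(iterable):
        # Yields the same subsets as the powerset recipe in the diff below,
        # but grouped by size (as tuples) rather than in bit-mask order (as sets).
        s = list(iterable)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    print list(powerset_by_size('ab'))   # [(), ('a',), ('b',), ('a', 'b')]
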
Modified: python/trunk/Lib/test/test_itertools.py ============================================================================== --- python/trunk/Lib/test/test_itertools.py (original) +++ python/trunk/Lib/test/test_itertools.py Thu Mar 6 21:52:01 2008 @@ -49,11 +49,19 @@ class TestBasicOps(unittest.TestCase): def test_chain(self): - self.assertEqual(list(chain('abc', 'def')), list('abcdef')) - self.assertEqual(list(chain('abc')), list('abc')) - self.assertEqual(list(chain('')), []) - self.assertEqual(take(4, chain('abc', 'def')), list('abcd')) - self.assertRaises(TypeError, list,chain(2, 3)) + + def chain2(*iterables): + 'Pure python version in the docs' + for it in iterables: + for element in it: + yield element + + for c in (chain, chain2): + self.assertEqual(list(c('abc', 'def')), list('abcdef')) + self.assertEqual(list(c('abc')), list('abc')) + self.assertEqual(list(c('')), []) + self.assertEqual(take(4, c('abc', 'def')), list('abcd')) + self.assertRaises(TypeError, list,c(2, 3)) def test_chain_from_iterable(self): self.assertEqual(list(chain.from_iterable(['abc', 'def'])), list('abcdef')) @@ -652,6 +660,81 @@ self.assertRaises(StopIteration, f(lambda x:x, []).next) self.assertRaises(StopIteration, f(lambda x:x, StopNow()).next) +class TestExamples(unittest.TestCase): + + def test_chain(self): + self.assertEqual(''.join(chain('ABC', 'DEF')), 'ABCDEF') + + def test_chain_from_iterable(self): + self.assertEqual(''.join(chain.from_iterable(['ABC', 'DEF'])), 'ABCDEF') + + def test_combinations(self): + self.assertEqual(list(combinations('ABCD', 2)), + [('A','B'), ('A','C'), ('A','D'), ('B','C'), ('B','D'), ('C','D')]) + self.assertEqual(list(combinations(range(4), 3)), + [(0,1,2), (0,1,3), (0,2,3), (1,2,3)]) + + def test_count(self): + self.assertEqual(list(islice(count(10), 5)), [10, 11, 12, 13, 14]) + + def test_cycle(self): + self.assertEqual(list(islice(cycle('ABCD'), 12)), list('ABCDABCDABCD')) + + def test_dropwhile(self): + self.assertEqual(list(dropwhile(lambda x: x<5, [1,4,6,4,1])), [6,4,1]) + + def test_groupby(self): + self.assertEqual([k for k, g in groupby('AAAABBBCCDAABBB')], + list('ABCDAB')) + self.assertEqual([(list(g)) for k, g in groupby('AAAABBBCCD')], + [list('AAAA'), list('BBB'), list('CC'), list('D')]) + + def test_ifilter(self): + self.assertEqual(list(ifilter(lambda x: x%2, range(10))), [1,3,5,7,9]) + + def test_ifilterfalse(self): + self.assertEqual(list(ifilterfalse(lambda x: x%2, range(10))), [0,2,4,6,8]) + + def test_imap(self): + self.assertEqual(list(imap(pow, (2,3,10), (5,2,3))), [32, 9, 1000]) + + def test_islice(self): + self.assertEqual(list(islice('ABCDEFG', 2)), list('AB')) + self.assertEqual(list(islice('ABCDEFG', 2, 4)), list('CD')) + self.assertEqual(list(islice('ABCDEFG', 2, None)), list('CDEFG')) + self.assertEqual(list(islice('ABCDEFG', 0, None, 2)), list('ACEG')) + + def test_izip(self): + self.assertEqual(list(izip('ABCD', 'xy')), [('A', 'x'), ('B', 'y')]) + + def test_izip_longest(self): + self.assertEqual(list(izip_longest('ABCD', 'xy', fillvalue='-')), + [('A', 'x'), ('B', 'y'), ('C', '-'), ('D', '-')]) + + def test_permutations(self): + self.assertEqual(list(permutations('ABCD', 2)), + map(tuple, 'AB AC AD BA BC BD CA CB CD DA DB DC'.split())) + self.assertEqual(list(permutations(range(3))), + [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]) + + def test_product(self): + self.assertEqual(list(product('ABCD', 'xy')), + map(tuple, 'Ax Ay Bx By Cx Cy Dx Dy'.split())) + self.assertEqual(list(product(range(2), repeat=3)), + [(0,0,0), 
(0,0,1), (0,1,0), (0,1,1), + (1,0,0), (1,0,1), (1,1,0), (1,1,1)]) + + def test_repeat(self): + self.assertEqual(list(repeat(10, 3)), [10, 10, 10]) + + def test_stapmap(self): + self.assertEqual(list(starmap(pow, [(2,5), (3,2), (10,3)])), + [32, 9, 1000]) + + def test_takewhile(self): + self.assertEqual(list(takewhile(lambda x: x<5, [1,4,6,4,1])), [1,4]) + + class TestGC(unittest.TestCase): def makecycle(self, iterator, container): @@ -663,6 +746,14 @@ a = [] self.makecycle(chain(a), a) + def test_chain_from_iterable(self): + a = [] + self.makecycle(chain.from_iterable([a]), a) + + def test_combinations(self): + a = [] + self.makecycle(combinations([1,2,a,3], 3), a) + def test_cycle(self): a = [] self.makecycle(cycle([a]*2), a) @@ -687,6 +778,12 @@ a = [] self.makecycle(izip([a]*2, [a]*3), a) + def test_izip_longest(self): + a = [] + self.makecycle(izip_longest([a]*2, [a]*3), a) + b = [a, None] + self.makecycle(izip_longest([a]*2, [a]*3, fillvalue=b), a) + def test_imap(self): a = [] self.makecycle(imap(lambda x:x, [a]*2), a) @@ -695,6 +792,14 @@ a = [] self.makecycle(islice([a]*2, None), a) + def test_permutations(self): + a = [] + self.makecycle(permutations([1,2,a,3], 3), a) + + def test_product(self): + a = [] + self.makecycle(product([1,2,a,3], repeat=3), a) + def test_repeat(self): a = [] self.makecycle(repeat(a), a) @@ -1120,6 +1225,30 @@ ... pass ... return izip(a, b) +>>> def grouper(n, iterable, padvalue=None): +... "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')" +... return izip(*[chain(iterable, repeat(padvalue, n-1))]*n) + +>>> def roundrobin(*iterables): +... "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'" +... # Recipe credited to George Sakkis +... pending = len(iterables) +... nexts = cycle(iter(it).next for it in iterables) +... while pending: +... try: +... for next in nexts: +... yield next() +... except StopIteration: +... pending -= 1 +... nexts = cycle(islice(nexts, pending)) + +>>> def powerset(iterable): +... "powerset('ab') --> set([]), set(['a']), set(['b']), set(['a', 'b'])" +... # Recipe credited to Eric Raymond +... pairs = [(2**i, x) for i, x in enumerate(iterable)] +... for n in xrange(2**len(pairs)): +... yield set(x for m, x in pairs if m&n) + This is not part of the examples but it tests to make sure the definitions perform as purported. @@ -1185,6 +1314,15 @@ >>> dotproduct([1,2,3], [4,5,6]) 32 +>>> list(grouper(3, 'abcdefg', 'x')) +[('a', 'b', 'c'), ('d', 'e', 'f'), ('g', 'x', 'x')] + +>>> list(roundrobin('abc', 'd', 'ef')) +['a', 'd', 'e', 'b', 'f', 'c'] + +>>> map(sorted, powerset('ab')) +[[], ['a'], ['b'], ['a', 'b']] + """ __test__ = {'libreftest' : libreftest} @@ -1192,7 +1330,7 @@ def test_main(verbose=None): test_classes = (TestBasicOps, TestVariousIteratorArgs, TestGC, RegressionTests, LengthTransparency, - SubclassWithKwargsTest) + SubclassWithKwargsTest, TestExamples) test_support.run_unittest(*test_classes) # verify reference counting From buildbot at python.org Thu Mar 6 22:29:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 21:29:55 +0000 Subject: [Python-checkins] buildbot failure in AMD64 W2k8 trunk Message-ID: <20080306212955.BF9A01E4015@bag.python.org> The Buildbot has detected a new failure of AMD64 W2k8 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/AMD64%20W2k8%20trunk/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-win64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis,raymond.hettinger BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Thu Mar 6 22:31:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 06 Mar 2008 21:31:06 +0000 Subject: [Python-checkins] buildbot failure in AMD64 W2k8 3.0 Message-ID: <20080306213108.8472C1E4015@bag.python.org> The Buildbot has detected a new failure of AMD64 W2k8 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/AMD64%20W2k8%203.0/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-win64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Thu Mar 6 23:52:30 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 6 Mar 2008 23:52:30 +0100 (CET) Subject: [Python-checkins] r61286 - in python/trunk: Lib/test/test_itertools.py Modules/itertoolsmodule.c Message-ID: <20080306225230.735B91E4016@bag.python.org> Author: raymond.hettinger Date: Thu Mar 6 23:51:36 2008 New Revision: 61286 Modified: python/trunk/Lib/test/test_itertools.py python/trunk/Modules/itertoolsmodule.c Log: Issue 2246: itertools grouper object did not participate in GC (should be backported). Modified: python/trunk/Lib/test/test_itertools.py ============================================================================== --- python/trunk/Lib/test/test_itertools.py (original) +++ python/trunk/Lib/test/test_itertools.py Thu Mar 6 23:51:36 2008 @@ -766,6 +766,13 @@ a = [] self.makecycle(groupby([a]*2, lambda x:x), a) + def test_issue2246(self): + # Issue 2246 -- the _grouper iterator was not included in GC + n = 10 + keyfunc = lambda x: x + for i, j in groupby(xrange(n), key=keyfunc): + keyfunc.__dict__.setdefault('x',[]).append(j) + def test_ifilter(self): a = [] self.makecycle(ifilter(lambda x:True, [a]*2), a) Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Thu Mar 6 23:51:36 2008 @@ -198,7 +198,7 @@ { _grouperobject *igo; - igo = PyObject_New(_grouperobject, &_grouper_type); + igo = PyObject_GC_New(_grouperobject, &_grouper_type); if (igo == NULL) return NULL; igo->parent = (PyObject *)parent; @@ -206,15 +206,25 @@ igo->tgtkey = tgtkey; Py_INCREF(tgtkey); + PyObject_GC_Track(igo); return (PyObject *)igo; } static void _grouper_dealloc(_grouperobject *igo) { + PyObject_GC_UnTrack(igo); Py_DECREF(igo->parent); Py_DECREF(igo->tgtkey); - PyObject_Del(igo); + PyObject_GC_Del(igo); +} + +static int +_grouper_traverse(_grouperobject *igo, visitproc visit, void *arg) +{ + Py_VISIT(igo->parent); + Py_VISIT(igo->tgtkey); + return 0; } static PyObject * @@ -280,9 +290,9 @@ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */ 0, /* tp_doc */ - 0, /* tp_traverse */ + (traverseproc)_grouper_traverse,/* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ @@ -299,7 +309,7 @@ 0, /* tp_init */ 0, /* tp_alloc 
*/ 0, /* tp_new */ - PyObject_Del, /* tp_free */ + PyObject_GC_Del, /* tp_free */ }; From python-checkins at python.org Thu Mar 6 23:58:43 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 6 Mar 2008 23:58:43 +0100 (CET) Subject: [Python-checkins] r61287 - in python/branches/release25-maint: Lib/test/test_itertools.py Modules/itertoolsmodule.c Message-ID: <20080306225843.2795F1E4023@bag.python.org> Author: raymond.hettinger Date: Thu Mar 6 23:58:42 2008 New Revision: 61287 Modified: python/branches/release25-maint/Lib/test/test_itertools.py python/branches/release25-maint/Modules/itertoolsmodule.c Log: Backport r61286 adding GC to the grouper for itertools.groupby() fixing Issue 2246. Modified: python/branches/release25-maint/Lib/test/test_itertools.py ============================================================================== --- python/branches/release25-maint/Lib/test/test_itertools.py (original) +++ python/branches/release25-maint/Lib/test/test_itertools.py Thu Mar 6 23:58:42 2008 @@ -449,6 +449,13 @@ a = [] self.makecycle(groupby([a]*2, lambda x:x), a) + def test_issue2246(self): + # Issue 2246 -- the _grouper iterator was not included in GC + n = 10 + keyfunc = lambda x: x + for i, j in groupby(xrange(n), key=keyfunc): + keyfunc.__dict__.setdefault('x',[]).append(j) + def test_ifilter(self): a = [] self.makecycle(ifilter(lambda x:True, [a]*2), a) Modified: python/branches/release25-maint/Modules/itertoolsmodule.c ============================================================================== --- python/branches/release25-maint/Modules/itertoolsmodule.c (original) +++ python/branches/release25-maint/Modules/itertoolsmodule.c Thu Mar 6 23:58:42 2008 @@ -199,7 +199,7 @@ { _grouperobject *igo; - igo = PyObject_New(_grouperobject, &_grouper_type); + igo = PyObject_GC_New(_grouperobject, &_grouper_type); if (igo == NULL) return NULL; igo->parent = (PyObject *)parent; @@ -207,15 +207,25 @@ igo->tgtkey = tgtkey; Py_INCREF(tgtkey); + PyObject_GC_Track(igo); return (PyObject *)igo; } static void _grouper_dealloc(_grouperobject *igo) { + PyObject_GC_UnTrack(igo); Py_DECREF(igo->parent); Py_DECREF(igo->tgtkey); - PyObject_Del(igo); + PyObject_GC_Del(igo); +} + +static int +_grouper_traverse(_grouperobject *igo, visitproc visit, void *arg) +{ + Py_VISIT(igo->parent); + Py_VISIT(igo->tgtkey); + return 0; } static PyObject * @@ -282,9 +292,9 @@ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */ 0, /* tp_doc */ - 0, /* tp_traverse */ + (traverseproc)_grouper_traverse,/* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ @@ -301,7 +311,7 @@ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ - PyObject_Del, /* tp_free */ + PyObject_GC_Del, /* tp_free */ }; From buildbot at python.org Fri Mar 7 01:04:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 00:04:33 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080307000433.39C2F1E4015@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
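A note on why r61286/r61287 above add GC support to the _grouper type rather than changing any Python-level behaviour: the new issue-2246 test builds a genuine reference cycle, which reference counting alone can never reclaim. A minimal sketch of that cycle, mirroring the test added above (no new API involved):

    from itertools import groupby

    keyfunc = lambda x: x
    for i, j in groupby(xrange(10), key=keyfunc):
        # Each group iterator j keeps its parent groupby object alive, the
        # groupby object keeps keyfunc alive, and keyfunc's __dict__ now
        # keeps the group iterators alive -- a cycle only gc can break.
        keyfunc.__dict__.setdefault('x', []).append(j)
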
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2659 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 4 tests failed: test_asynchat test_signal test_smtplib test_socket sincerely, -The Buildbot From buildbot at python.org Fri Mar 7 01:26:17 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 00:26:17 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080307002617.776831E4015@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/703 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 227, in handle_request self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 268, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 576, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_smtplib sincerely, -The Buildbot From buildbot at python.org Fri Mar 7 02:02:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 01:02:06 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 2.5 
Message-ID: <20080307010206.3D2CE1E401A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%202.5/builds/464 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_socket sincerely, -The Buildbot From python-checkins at python.org Fri Mar 7 02:33:20 2008 From: python-checkins at python.org (raymond.hettinger) Date: Fri, 7 Mar 2008 02:33:20 +0100 (CET) Subject: [Python-checkins] r61288 - in python/trunk: Doc/library/itertools.rst Lib/test/test_itertools.py Message-ID: <20080307013320.D5CE61E4015@bag.python.org> Author: raymond.hettinger Date: Fri Mar 7 02:33:20 2008 New Revision: 61288 Modified: python/trunk/Doc/library/itertools.rst python/trunk/Lib/test/test_itertools.py Log: Tweak recipes and tests Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Fri Mar 7 02:33:20 2008 @@ -662,15 +662,15 @@ def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) - try: - b.next() - except StopIteration: - pass + for elem in b: + break return izip(a, b) - def grouper(n, iterable, padvalue=None): + def grouper(n, iterable, fillvalue=None): "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')" - return izip(*[chain(iterable, repeat(padvalue, n-1))]*n) + args = [iter(iterable)] * n + kwds = dict(fillvalue=fillvalue) + return izip_longest(*args, **kwds) def roundrobin(*iterables): "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'" Modified: python/trunk/Lib/test/test_itertools.py ============================================================================== --- python/trunk/Lib/test/test_itertools.py (original) +++ python/trunk/Lib/test/test_itertools.py Fri Mar 7 02:33:20 2008 @@ -410,6 +410,28 @@ self.assertEqual(len(list(product(*[range(7)]*6))), 7**6) self.assertRaises(TypeError, product, range(6), None) + def product1(*args, **kwds): + pools = map(tuple, args) * kwds.get('repeat', 1) + n = len(pools) + if n == 0: + yield () + return + if any(len(pool) == 0 for pool in pools): + return + indices = [0] * n + yield tuple(pool[i] for pool, i in zip(pools, indices)) + while 1: + for i in reversed(range(n)): # right to left + if indices[i] == len(pools[i]) - 1: + continue + indices[i] += 1 + for j in range(i+1, n): + indices[j] = 0 + yield tuple(pool[i] for pool, i in zip(pools, indices)) + break + else: + return + def product2(*args, **kwds): 'Pure python version used in docs' pools = map(tuple, args) * kwds.get('repeat', 1) @@ -425,6 +447,7 @@ args = [random.choice(argtypes) for j in range(random.randrange(5))] expected_len = prod(map(len, args)) self.assertEqual(len(list(product(*args))), expected_len) + self.assertEqual(list(product(*args)), list(product1(*args))) self.assertEqual(list(product(*args)), list(product2(*args))) args = map(iter, args) self.assertEqual(len(list(product(*args))), expected_len) @@ -1213,7 +1236,7 @@ ... return sum(imap(operator.mul, vec1, vec2)) >>> def flatten(listOfLists): -... return list(chain(*listOfLists)) +... return list(chain.from_iterable(listOfLists)) >>> def repeatfunc(func, times=None, *args): ... 
"Repeat calls to func with specified arguments." @@ -1226,15 +1249,15 @@ >>> def pairwise(iterable): ... "s -> (s0,s1), (s1,s2), (s2, s3), ..." ... a, b = tee(iterable) -... try: -... b.next() -... except StopIteration: -... pass +... for elem in b: +... break ... return izip(a, b) ->>> def grouper(n, iterable, padvalue=None): +>>> def grouper(n, iterable, fillvalue=None): ... "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')" -... return izip(*[chain(iterable, repeat(padvalue, n-1))]*n) +... args = [iter(iterable)] * n +... kwds = dict(fillvalue=fillvalue) +... return izip_longest(*args, **kwds) >>> def roundrobin(*iterables): ... "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'" From buildbot at python.org Fri Mar 7 02:57:23 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 01:57:23 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080307015723.38C9C1E4026@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1024 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed clean sincerely, -The Buildbot From buildbot at python.org Fri Mar 7 03:26:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 02:26:03 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080307022603.421181E4015@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2991 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Fri Mar 7 07:22:16 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Fri, 7 Mar 2008 07:22:16 +0100 (CET) Subject: [Python-checkins] r61289 - in python/trunk: Doc/library/socketserver.rst Lib/SocketServer.py Lib/test/test_socketserver.py Misc/NEWS Message-ID: <20080307062216.195FC1E4016@bag.python.org> Author: jeffrey.yasskin Date: Fri Mar 7 07:22:15 2008 New Revision: 61289 Modified: python/trunk/Doc/library/socketserver.rst python/trunk/Lib/SocketServer.py python/trunk/Lib/test/test_socketserver.py python/trunk/Misc/NEWS Log: Progress on issue #1193577 by adding a polling .shutdown() method to SocketServers. The core of the patch was written by Pedro Werneck, but any bugs are mine. I've also rearranged the code for timeouts in order to avoid interfering with the shutdown poll. Modified: python/trunk/Doc/library/socketserver.rst ============================================================================== --- python/trunk/Doc/library/socketserver.rst (original) +++ python/trunk/Doc/library/socketserver.rst Fri Mar 7 07:22:15 2008 @@ -113,7 +113,8 @@ finished requests and to use :func:`select` to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for -a long time (if threads or subprocesses cannot be used). +a long time (if threads or subprocesses cannot be used). 
See :mod:`asyncore` for +another way to manage this. .. XXX should data and methods be intermingled, or separate? how should the distinction between class and instance variables be drawn? @@ -132,16 +133,24 @@ .. function:: handle_request() - Process a single request. This function calls the following methods in order: - :meth:`get_request`, :meth:`verify_request`, and :meth:`process_request`. If - the user-provided :meth:`handle` method of the handler class raises an - exception, the server's :meth:`handle_error` method will be called. + Process a single request. This function calls the following methods in + order: :meth:`get_request`, :meth:`verify_request`, and + :meth:`process_request`. If the user-provided :meth:`handle` method of the + handler class raises an exception, the server's :meth:`handle_error` method + will be called. If no request is received within :attr:`self.timeout` + seconds, :meth:`handle_timeout` will be called and :meth:`handle_request` + will return. -.. function:: serve_forever() +.. function:: serve_forever(poll_interval=0.5) - Handle an infinite number of requests. This simply calls :meth:`handle_request` - inside an infinite loop. + Handle requests until an explicit :meth:`shutdown` request. Polls for + shutdown every *poll_interval* seconds. + + +.. function:: shutdown() + + Tells the :meth:`serve_forever` loop to stop and waits until it does. .. data:: address_family @@ -195,10 +204,9 @@ .. data:: timeout - Timeout duration, measured in seconds, or :const:`None` if no timeout is desired. - If no incoming requests are received within the timeout period, - the :meth:`handle_timeout` method is called and then the server resumes waiting for - requests. + Timeout duration, measured in seconds, or :const:`None` if no timeout is + desired. If :meth:`handle_request` receives no incoming requests within the + timeout period, the :meth:`handle_timeout` method is called. There are various server methods that can be overridden by subclasses of base server classes like :class:`TCPServer`; these methods aren't useful to external Modified: python/trunk/Lib/SocketServer.py ============================================================================== --- python/trunk/Lib/SocketServer.py (original) +++ python/trunk/Lib/SocketServer.py Fri Mar 7 07:22:15 2008 @@ -130,8 +130,13 @@ import socket +import select import sys import os +try: + import threading +except ImportError: + import dummy_threading as threading __all__ = ["TCPServer","UDPServer","ForkingUDPServer","ForkingTCPServer", "ThreadingUDPServer","ThreadingTCPServer","BaseRequestHandler", @@ -149,7 +154,8 @@ Methods for the caller: - __init__(server_address, RequestHandlerClass) - - serve_forever() + - serve_forever(poll_interval=0.5) + - shutdown() - handle_request() # if you do not use serve_forever() - fileno() -> int # for select() @@ -190,6 +196,8 @@ """Constructor. May be extended, do not override.""" self.server_address = server_address self.RequestHandlerClass = RequestHandlerClass + self.__is_shut_down = threading.Event() + self.__serving = False def server_activate(self): """Called by constructor to activate the server. @@ -199,27 +207,73 @@ """ pass - def serve_forever(self): - """Handle one request at a time until doomsday.""" - while 1: - self.handle_request() + def serve_forever(self, poll_interval=0.5): + """Handle one request at a time until shutdown. + + Polls for shutdown every poll_interval seconds. Ignores + self.timeout. If you need to do periodic tasks, do them in + another thread. 
+ """ + self.__serving = True + self.__is_shut_down.clear() + while self.__serving: + # XXX: Consider using another file descriptor or + # connecting to the socket to wake this up instead of + # polling. Polling reduces our responsiveness to a + # shutdown request and wastes cpu at all other times. + r, w, e = select.select([self], [], [], poll_interval) + if r: + self._handle_request_noblock() + self.__is_shut_down.set() + + def shutdown(self): + """Stops the serve_forever loop. + + Blocks until the loop has finished. This must be called while + serve_forever() is running in another thread, or it will + deadlock. + """ + self.__serving = False + self.__is_shut_down.wait() # The distinction between handling, getting, processing and # finishing a request is fairly arbitrary. Remember: # # - handle_request() is the top-level call. It calls - # await_request(), verify_request() and process_request() - # - get_request(), called by await_request(), is different for - # stream or datagram sockets + # select, get_request(), verify_request() and process_request() + # - get_request() is different for stream or datagram sockets # - process_request() is the place that may fork a new process # or create a new thread to finish the request # - finish_request() instantiates the request handler class; # this constructor will handle the request all by itself def handle_request(self): - """Handle one request, possibly blocking.""" + """Handle one request, possibly blocking. + + Respects self.timeout. + """ + # Support people who used socket.settimeout() to escape + # handle_request before self.timeout was available. + timeout = self.socket.gettimeout() + if timeout is None: + timeout = self.timeout + elif self.timeout is not None: + timeout = min(timeout, self.timeout) + fd_sets = select.select([self], [], [], timeout) + if not fd_sets[0]: + self.handle_timeout() + return + self._handle_request_noblock() + + def _handle_request_noblock(self): + """Handle one request, without blocking. + + I assume that select.select has returned that the socket is + readable before this function was called, so there should be + no risk of blocking in get_request(). + """ try: - request, client_address = self.await_request() + request, client_address = self.get_request() except socket.error: return if self.verify_request(request, client_address): @@ -229,21 +283,6 @@ self.handle_error(request, client_address) self.close_request(request) - def await_request(self): - """Call get_request or handle_timeout, observing self.timeout. - - Returns value from get_request() or raises socket.timeout exception if - timeout was exceeded. - """ - if self.timeout is not None: - # If timeout == 0, you're responsible for your own fd magic. - import select - fd_sets = select.select([self], [], [], self.timeout) - if not fd_sets[0]: - self.handle_timeout() - raise socket.timeout("Listening timed out") - return self.get_request() - def handle_timeout(self): """Called if no new request arrives within self.timeout. 
@@ -307,7 +346,8 @@ Methods for the caller: - __init__(server_address, RequestHandlerClass, bind_and_activate=True) - - serve_forever() + - serve_forever(poll_interval=0.5) + - shutdown() - handle_request() # if you don't use serve_forever() - fileno() -> int # for select() @@ -523,7 +563,6 @@ def process_request(self, request, client_address): """Start a new thread to process the request.""" - import threading t = threading.Thread(target = self.process_request_thread, args = (request, client_address)) if self.daemon_threads: Modified: python/trunk/Lib/test/test_socketserver.py ============================================================================== --- python/trunk/Lib/test/test_socketserver.py (original) +++ python/trunk/Lib/test/test_socketserver.py Fri Mar 7 07:22:15 2008 @@ -21,7 +21,6 @@ test.test_support.requires("network") -NREQ = 3 TEST_STR = "hello world\n" HOST = "localhost" @@ -50,43 +49,6 @@ pass -class MyMixinServer: - def serve_a_few(self): - for i in range(NREQ): - self.handle_request() - - def handle_error(self, request, client_address): - self.close_request(request) - self.server_close() - raise - - -class ServerThread(threading.Thread): - def __init__(self, addr, svrcls, hdlrcls): - threading.Thread.__init__(self) - self.__addr = addr - self.__svrcls = svrcls - self.__hdlrcls = hdlrcls - self.ready = threading.Event() - - def run(self): - class svrcls(MyMixinServer, self.__svrcls): - pass - if verbose: print "thread: creating server" - svr = svrcls(self.__addr, self.__hdlrcls) - # We had the OS pick a port, so pull the real address out of - # the server. - self.addr = svr.server_address - self.port = self.addr[1] - if self.addr != svr.socket.getsockname(): - raise RuntimeError('server_address was %s, expected %s' % - (self.addr, svr.socket.getsockname())) - self.ready.set() - if verbose: print "thread: serving three times" - svr.serve_a_few() - if verbose: print "thread: done" - - @contextlib.contextmanager def simple_subprocess(testcase): pid = os.fork() @@ -143,28 +105,48 @@ self.test_files.append(fn) return fn - def run_server(self, svrcls, hdlrbase, testfunc): + def make_server(self, addr, svrcls, hdlrbase): + class MyServer(svrcls): + def handle_error(self, request, client_address): + self.close_request(request) + self.server_close() + raise + class MyHandler(hdlrbase): def handle(self): line = self.rfile.readline() self.wfile.write(line) - addr = self.pickaddr(svrcls.address_family) + if verbose: print "creating server" + server = MyServer(addr, MyHandler) + self.assertEquals(server.server_address, server.socket.getsockname()) + return server + + def run_server(self, svrcls, hdlrbase, testfunc): + server = self.make_server(self.pickaddr(svrcls.address_family), + svrcls, hdlrbase) + # We had the OS pick a port, so pull the real address out of + # the server. + addr = server.server_address if verbose: + print "server created" print "ADDR =", addr print "CLASS =", svrcls - t = ServerThread(addr, svrcls, MyHandler) - if verbose: print "server created" + t = threading.Thread( + name='%s serving' % svrcls, + target=server.serve_forever, + # Short poll interval to make the test finish quickly. + # Time between requests is short enough that we won't wake + # up spuriously too many times. + kwargs={'poll_interval':0.01}) + t.setDaemon(True) # In case this function raises. 
t.start() if verbose: print "server running" - t.ready.wait(10) - self.assert_(t.ready.isSet(), - "%s not ready within a reasonable time" % svrcls) - addr = t.addr - for i in range(NREQ): + for i in range(3): if verbose: print "test client", i testfunc(svrcls.address_family, addr) if verbose: print "waiting for server" + server.shutdown() t.join() if verbose: print "done" Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 7 07:22:15 2008 @@ -18,6 +18,9 @@ Library ------- +- Issue #1193577: A .shutdown() method has been added to SocketServers + which terminates the .serve_forever() loop. + - Bug #2220: handle rlcompleter attribute match failure more gracefully. - Issue #2225: py_compile, when executed as a script, now returns a non- From buildbot at python.org Fri Mar 7 07:51:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 06:51:22 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080307065122.DE5181E4015@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/335 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 7 08:31:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 07:31:59 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080307073200.09DAB1E4015@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/157 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 7 08:34:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 07:34:03 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080307073403.67FFA1E4015@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
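To tie the r61289 checkin above to its intended use: serve_forever() is now meant to run in a worker thread and be stopped from another thread with shutdown(), exactly as the updated test_socketserver does. A minimal sketch under those assumptions (the handler class and ephemeral port here are illustrative, not part of the checkin):

    import threading
    import SocketServer

    class EchoHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # Echo a single line back to the client, like the test's MyHandler.
            self.wfile.write(self.rfile.readline())

    server = SocketServer.TCPServer(('localhost', 0), EchoHandler)
    t = threading.Thread(target=server.serve_forever,
                         kwargs={'poll_interval': 0.5})
    t.start()
    # ... clients connect and are served here ...
    server.shutdown()   # asks the serve_forever() loop to exit and waits for it
    t.join()
    server.server_close()
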
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2661 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 4 tests failed: test_asynchat test_signal test_smtplib test_socket ====================================================================== FAIL: test_wakeup_fd_during (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 198, in test_wakeup_fd_during [self.read], [], [], self.TIMEOUT_FULL) AssertionError: error not raised ====================================================================== FAIL: test_wakeup_fd_early (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 186, in test_wakeup_fd_early self.assert_(mid_time - before_time < self.TIMEOUT_HALF) AssertionError ====================================================================== FAIL: test_siginterrupt_on (test.test_signal.SiginterruptTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 257, in test_siginterrupt_on self.assertEquals(i, True) AssertionError: False != True ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From nnorwitz at gmail.com Fri Mar 7 11:21:54 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 7 Mar 2008 05:21:54 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080307102154.GA30119@python.psfb.org> More important issues: ---------------------- test_threadedtempfile leaked [-102, 91, -91] references, sum=-102 Less important issues: ---------------------- test_threadsignals leaked [0, -8, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From nnorwitz at gmail.com Fri Mar 7 14:02:14 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 7 Mar 2008 08:02:14 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080307130214.GA3334@python.psfb.org> 318 tests OK. 
1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization 
test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:74: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_softspace test_sort test_sqlite test_ssl test test_ssl produced unexpected output: ********************************************************************** *** lines 2-7 of actual output doesn't appear in expected output after line 1: + Traceback (most recent call last): + File "/tmp/python-test/local/lib/python2.6/test/test_ssl.py", line 366, in serve_forever + self.handle_request() + File "/tmp/python-test/local/lib/python2.6/SocketServer.py", line 262, in handle_request + fd_sets = select.select([self], [], [], timeout) + error: (9, 'Bad file descriptor') ********************************************************************** test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [576499 refs] From python-checkins at python.org Fri Mar 7 15:13:29 2008 From: python-checkins at python.org (nick.coghlan) Date: Fri, 7 Mar 2008 15:13:29 +0100 (CET) Subject: [Python-checkins] r61290 - in python/trunk: Doc/library/dis.rst Lib/compiler/pycodegen.py Misc/NEWS Python/ceval.c Python/compile.c Python/import.c Message-ID: <20080307141329.6193A1E4015@bag.python.org> Author: nick.coghlan Date: Fri Mar 7 15:13:28 2008 New Revision: 61290 Modified: python/trunk/Doc/library/dis.rst python/trunk/Lib/compiler/pycodegen.py python/trunk/Misc/NEWS python/trunk/Python/ceval.c python/trunk/Python/compile.c python/trunk/Python/import.c Log: Speed up with statements by storing the __exit__ method on the stack instead of in a temp variable (bumps the magic number for pyc files) Modified: python/trunk/Doc/library/dis.rst ============================================================================== --- python/trunk/Doc/library/dis.rst (original) +++ python/trunk/Doc/library/dis.rst Fri Mar 7 15:13:28 2008 @@ -519,21 +519,24 @@ .. opcode:: WITH_CLEANUP () - Cleans up the stack when a :keyword:`with` statement block exits. TOS is the - context manager's :meth:`__exit__` bound method. Below that are 1--3 values - indicating how/why the finally clause was entered: - - * SECOND = ``None`` - * (SECOND, THIRD) = (``WHY_{RETURN,CONTINUE}``), retval - * SECOND = ``WHY_*``; no retval below it - * (SECOND, THIRD, FOURTH) = exc_info() - - In the last case, ``TOS(SECOND, THIRD, FOURTH)`` is called, otherwise - ``TOS(None, None, None)``. - - In addition, if the stack represents an exception, *and* the function call - returns a 'true' value, this information is "zapped", to prevent ``END_FINALLY`` - from re-raising the exception. (But non-local gotos should still be resumed.) 
+ Cleans up the stack when a :keyword:`with` statement block exits. On top of + the stack are 1--3 values indicating how/why the finally clause was entered: + + * TOP = ``None`` + * (TOP, SECOND) = (``WHY_{RETURN,CONTINUE}``), retval + * TOP = ``WHY_*``; no retval below it + * (TOP, SECOND, THIRD) = exc_info() + + Under them is EXIT, the context manager's :meth:`__exit__` bound method. + + In the last case, ``EXIT(TOP, SECOND, THIRD)`` is called, otherwise + ``EXIT(None, None, None)``. + + EXIT is removed from the stack, leaving the values above it in the same + order. In addition, if the stack represents an exception, *and* the function + call returns a 'true' value, this information is "zapped", to prevent + ``END_FINALLY`` from re-raising the exception. (But non-local gotos should + still be resumed.) .. XXX explain the WHY stuff! Modified: python/trunk/Lib/compiler/pycodegen.py ============================================================================== --- python/trunk/Lib/compiler/pycodegen.py (original) +++ python/trunk/Lib/compiler/pycodegen.py Fri Mar 7 15:13:28 2008 @@ -822,14 +822,13 @@ def visitWith(self, node): body = self.newBlock() final = self.newBlock() - exitvar = "$exit%d" % self.__with_count valuevar = "$value%d" % self.__with_count self.__with_count += 1 self.set_lineno(node) self.visit(node.expr) self.emit('DUP_TOP') self.emit('LOAD_ATTR', '__exit__') - self._implicitNameOp('STORE', exitvar) + self.emit('ROT_TWO') self.emit('LOAD_ATTR', '__enter__') self.emit('CALL_FUNCTION', 0) if node.vars is None: @@ -849,8 +848,6 @@ self.emit('LOAD_CONST', None) self.nextBlock(final) self.setups.push((END_FINALLY, final)) - self._implicitNameOp('LOAD', exitvar) - self._implicitNameOp('DELETE', exitvar) self.emit('WITH_CLEANUP') self.emit('END_FINALLY') self.setups.pop() Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 7 15:13:28 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Issue #2179: speed up with statement execution by storing the exit method + on the stack instead of in a temporary variable (patch by Jeffrey Yaskin) + - Issue #2238: Some syntax errors in *args and **kwargs expressions could give bogus error messages. Modified: python/trunk/Python/ceval.c ============================================================================== --- python/trunk/Python/ceval.c (original) +++ python/trunk/Python/ceval.c Fri Mar 7 15:13:28 2008 @@ -2254,17 +2254,20 @@ case WITH_CLEANUP: { - /* TOP is the context.__exit__ bound method. - Below that are 1-3 values indicating how/why - we entered the finally clause: - - SECOND = None - - (SECOND, THIRD) = (WHY_{RETURN,CONTINUE}), retval - - SECOND = WHY_*; no retval below it - - (SECOND, THIRD, FOURTH) = exc_info() + /* At the top of the stack are 1-3 values indicating + how/why we entered the finally clause: + - TOP = None + - (TOP, SECOND) = (WHY_{RETURN,CONTINUE}), retval + - TOP = WHY_*; no retval below it + - (TOP, SECOND, THIRD) = exc_info() + Below them is EXIT, the context.__exit__ bound method. In the last case, we must call - TOP(SECOND, THIRD, FOURTH) + EXIT(TOP, SECOND, THIRD) otherwise we must call - TOP(None, None, None) + EXIT(None, None, None) + + In all cases, we remove EXIT from the stack, leaving + the rest in the same order. In addition, if the stack represents an exception, *and* the function call returns a 'true' value, we @@ -2273,36 +2276,59 @@ should still be resumed.) 
*/ - x = TOP(); - u = SECOND(); - if (PyInt_Check(u) || u == Py_None) { + PyObject *exit_func; + + u = POP(); + if (u == Py_None) { + exit_func = TOP(); + SET_TOP(u); + v = w = Py_None; + } + else if (PyInt_Check(u)) { + switch(PyInt_AS_LONG(u)) { + case WHY_RETURN: + case WHY_CONTINUE: + /* Retval in TOP. */ + exit_func = SECOND(); + SET_SECOND(TOP()); + SET_TOP(u); + break; + default: + exit_func = TOP(); + SET_TOP(u); + break; + } u = v = w = Py_None; } else { - v = THIRD(); - w = FOURTH(); + v = TOP(); + w = SECOND(); + exit_func = THIRD(); + SET_TOP(u); + SET_SECOND(v); + SET_THIRD(w); } /* XXX Not the fastest way to call it... */ - x = PyObject_CallFunctionObjArgs(x, u, v, w, NULL); - if (x == NULL) + x = PyObject_CallFunctionObjArgs(exit_func, u, v, w, + NULL); + if (x == NULL) { + Py_DECREF(exit_func); break; /* Go to error exit */ + } if (u != Py_None && PyObject_IsTrue(x)) { /* There was an exception and a true return */ - Py_DECREF(x); - x = TOP(); /* Again */ - STACKADJ(-3); + STACKADJ(-2); Py_INCREF(Py_None); SET_TOP(Py_None); - Py_DECREF(x); Py_DECREF(u); Py_DECREF(v); Py_DECREF(w); } else { - /* Let END_FINALLY do its thing */ - Py_DECREF(x); - x = POP(); - Py_DECREF(x); + /* The stack was rearranged to remove EXIT + above. Let END_FINALLY do its thing */ } + Py_DECREF(x); + Py_DECREF(exit_func); PREDICT(END_FINALLY); break; } Modified: python/trunk/Python/compile.c ============================================================================== --- python/trunk/Python/compile.c (original) +++ python/trunk/Python/compile.c Fri Mar 7 15:13:28 2008 @@ -2842,7 +2842,7 @@ { static identifier enter_attr, exit_attr; basicblock *block, *finally; - identifier tmpexit, tmpvalue = NULL; + identifier tmpvalue = NULL; assert(s->kind == With_kind); @@ -2862,12 +2862,6 @@ if (!block || !finally) return 0; - /* Create a temporary variable to hold context.__exit__ */ - tmpexit = compiler_new_tmpname(c); - if (tmpexit == NULL) - return 0; - PyArena_AddPyObject(c->c_arena, tmpexit); - if (s->v.With.optional_vars) { /* Create a temporary variable to hold context.__enter__(). We need to do this rather than preserving it on the stack @@ -2887,11 +2881,10 @@ /* Evaluate EXPR */ VISIT(c, expr, s->v.With.context_expr); - /* Squirrel away context.__exit__ */ + /* Squirrel away context.__exit__ by stuffing it under context */ ADDOP(c, DUP_TOP); ADDOP_O(c, LOAD_ATTR, exit_attr, names); - if (!compiler_nameop(c, tmpexit, Store)) - return 0; + ADDOP(c, ROT_TWO); /* Call context.__enter__() */ ADDOP_O(c, LOAD_ATTR, enter_attr, names); @@ -2935,10 +2928,9 @@ if (!compiler_push_fblock(c, FINALLY_END, finally)) return 0; - /* Finally block starts; push tmpexit and issue our magic opcode. */ - if (!compiler_nameop(c, tmpexit, Load) || - !compiler_nameop(c, tmpexit, Del)) - return 0; + /* Finally block starts; context.__exit__ is on the stack under + the exception or return information. Just issue our magic + opcode. */ ADDOP(c, WITH_CLEANUP); /* Finally block ends. */ Modified: python/trunk/Python/import.c ============================================================================== --- python/trunk/Python/import.c (original) +++ python/trunk/Python/import.c Fri Mar 7 15:13:28 2008 @@ -72,9 +72,10 @@ storing constants that should have been removed) Python 2.5c2: 62131 (fix wrong code: for x, in ... in listcomp/genexp) Python 2.6a0: 62151 (peephole optimizations and STORE_MAP opcode) + Python 2.6a1: 62161 (WITH_CLEANUP optimization) . 
*/ -#define MAGIC (62151 | ((long)'\r'<<16) | ((long)'\n'<<24)) +#define MAGIC (62161 | ((long)'\r'<<16) | ((long)'\n'<<24)) /* Magic word as global; note that _PyImport_Init() can change the value of this global to accommodate for alterations of how the From buildbot at python.org Fri Mar 7 16:14:30 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 15:14:30 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu trunk Message-ID: <20080307151430.494731E4015@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%20trunk/builds/1577 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: nick.coghlan BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/test/test_ssl.py", line 366, in serve_forever self.handle_request() File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/SocketServer.py", line 257, in handle_request timeout = self.socket.gettimeout() File "", line 1, in gettimeout File "/home/pybot/buildarea/trunk.klose-debian-ia64/build/Lib/socket.py", line 160, in _dummy raise error(EBADF, 'Bad file descriptor') error: [Errno 9] Bad file descriptor make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 7 17:10:17 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 16:10:17 +0000 Subject: [Python-checkins] buildbot failure in alpha Debian trunk Message-ID: <20080307161017.9F8701E4016@bag.python.org> The Buildbot has detected a new failure of alpha Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Debian%20trunk/builds/59 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-alpha Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Fri Mar 7 18:41:39 2008 From: python-checkins at python.org (thomas.heller) Date: Fri, 7 Mar 2008 18:41:39 +0100 (CET) Subject: [Python-checkins] r61291 - python/branches/libffi3-branch Message-ID: <20080307174139.1B5191E4005@bag.python.org> Author: thomas.heller Date: Fri Mar 7 18:41:38 2008 New Revision: 61291 Removed: python/branches/libffi3-branch/ Log: Remove branch; it has been merged to trunk. From python-checkins at python.org Fri Mar 7 21:08:41 2008 From: python-checkins at python.org (raymond.hettinger) Date: Fri, 7 Mar 2008 21:08:41 +0100 (CET) Subject: [Python-checkins] r61293 - python/branches/release25-maint/Doc/lib/libitertools.tex Message-ID: <20080307200841.EFE9D1E4005@bag.python.org> Author: raymond.hettinger Date: Fri Mar 7 21:08:41 2008 New Revision: 61293 Modified: python/branches/release25-maint/Doc/lib/libitertools.tex Log: Backport documentation improvements. 
Modified: python/branches/release25-maint/Doc/lib/libitertools.tex ============================================================================== --- python/branches/release25-maint/Doc/lib/libitertools.tex (original) +++ python/branches/release25-maint/Doc/lib/libitertools.tex Fri Mar 7 21:08:41 2008 @@ -68,6 +68,7 @@ \begin{verbatim} def chain(*iterables): + # chain('ABC', 'DEF') --> A B C D E F for it in iterables: for element in it: yield element @@ -83,6 +84,7 @@ \begin{verbatim} def count(n=0): + # count(10) --> 10 11 12 13 14 ... while True: yield n n += 1 @@ -100,6 +102,7 @@ \begin{verbatim} def cycle(iterable): + # cycle('ABCD') --> A B C D A B C D A B C D ... saved = [] for element in iterable: yield element @@ -121,6 +124,7 @@ \begin{verbatim} def dropwhile(predicate, iterable): + # dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1 iterable = iter(iterable) for x in iterable: if not predicate(x): @@ -156,6 +160,8 @@ \begin{verbatim} class groupby(object): + # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B + # [(list(g)) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D def __init__(self, iterable, key=None): if key is None: key = lambda x: x @@ -187,6 +193,7 @@ \begin{verbatim} def ifilter(predicate, iterable): + # ifilter(lambda x: x%2, range(10)) --> 1 3 5 7 9 if predicate is None: predicate = bool for x in iterable: @@ -203,6 +210,7 @@ \begin{verbatim} def ifilterfalse(predicate, iterable): + # ifilterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8 if predicate is None: predicate = bool for x in iterable: @@ -225,6 +233,7 @@ \begin{verbatim} def imap(function, *iterables): + # imap(pow, (2,3,10), (5,2,3)) --> 32 9 1000 iterables = map(iter, iterables) while True: args = [i.next() for i in iterables] @@ -251,6 +260,10 @@ \begin{verbatim} def islice(iterable, *args): + # islice('ABCDEFG', 2) --> A B + # islice('ABCDEFG', 2, 4) --> C D + # islice('ABCDEFG', 2, None) --> C D E F G + # islice('ABCDEFG', 0, None, 2) --> A C E G s = slice(*args) it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1)) nexti = it.next() @@ -274,6 +287,7 @@ \begin{verbatim} def izip(*iterables): + # izip('ABCD', 'xy') --> Ax By iterables = map(iter, iterables) while iterables: result = [it.next() for it in iterables] @@ -311,6 +325,7 @@ \begin{verbatim} def repeat(object, times=None): + # repeat(10, 3) --> 10 10 10 if times is None: while True: yield object @@ -331,6 +346,7 @@ \begin{verbatim} def starmap(function, iterable): + # starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000 iterable = iter(iterable) while True: yield function(*iterable.next()) @@ -343,6 +359,7 @@ \begin{verbatim} def takewhile(predicate, iterable): + # takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4 for x in iterable: if predicate(x): yield x @@ -389,34 +406,6 @@ \begin{verbatim} ->>> amounts = [120.15, 764.05, 823.14] ->>> for checknum, amount in izip(count(1200), amounts): -... print 'Check %d is for $%.2f' % (checknum, amount) -... -Check 1200 is for $120.15 -Check 1201 is for $764.05 -Check 1202 is for $823.14 - ->>> import operator ->>> for cube in imap(operator.pow, xrange(1,5), repeat(3)): -... print cube -... -1 -8 -27 -64 - ->>> reportlines = ['EuroPython', 'Roster', '', 'alex', '', 'laura', - '', 'martin', '', 'walter', '', 'mark'] ->>> for name in islice(reportlines, 3, None, 2): -... print name.title() -... 
-Alex -Laura -Martin -Walter -Mark - # Show a dictionary sorted and grouped by value >>> from operator import itemgetter >>> d = dict(a=1, b=2, c=1, d=2, e=1, f=2, g=3) @@ -529,10 +518,8 @@ def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) - try: - b.next() - except StopIteration: - pass + for elem in b: + break return izip(a, b) def grouper(n, iterable, padvalue=None): @@ -543,4 +530,24 @@ "Return a new dict with swapped keys and values" return dict(izip(d.itervalues(), d)) +def roundrobin(*iterables): + "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'" + # Recipe credited to George Sakkis + pending = len(iterables) + nexts = cycle(iter(it).next for it in iterables) + while pending: + try: + for next in nexts: + yield next() + except StopIteration: + pending -= 1 + nexts = cycle(islice(nexts, pending)) + +def powerset(iterable): + "powerset('ab') --> set([]), set(['a']), set(['b']), set(['a', 'b'])" + # Recipe credited to Eric Raymond + pairs = [(2**i, x) for i, x in enumerate(iterable)] + for n in xrange(2**len(pairs)): + yield set(x for m, x in pairs if m&n) + \end{verbatim} From python-checkins at python.org Fri Mar 7 22:09:23 2008 From: python-checkins at python.org (andrew.kuchling) Date: Fri, 7 Mar 2008 22:09:23 +0100 (CET) Subject: [Python-checkins] r61298 - python/trunk/Doc/library/email.message.rst Message-ID: <20080307210923.991931E4025@bag.python.org> Author: andrew.kuchling Date: Fri Mar 7 22:09:23 2008 New Revision: 61298 Modified: python/trunk/Doc/library/email.message.rst Log: Grammar fix Modified: python/trunk/Doc/library/email.message.rst ============================================================================== --- python/trunk/Doc/library/email.message.rst (original) +++ python/trunk/Doc/library/email.message.rst Fri Mar 7 22:09:23 2008 @@ -38,7 +38,7 @@ .. method:: Message.as_string([unixfrom]) - Return the entire message flatten as a string. When optional *unixfrom* is + Return the entire message flattened as a string. When optional *unixfrom* is ``True``, the envelope header is included in the returned string. *unixfrom* defaults to ``False``. From python-checkins at python.org Fri Mar 7 22:10:06 2008 From: python-checkins at python.org (andrew.kuchling) Date: Fri, 7 Mar 2008 22:10:06 +0100 (CET) Subject: [Python-checkins] r61299 - python/branches/release25-maint/Doc/lib/emailmessage.tex Message-ID: <20080307211006.E26611E4020@bag.python.org> Author: andrew.kuchling Date: Fri Mar 7 22:10:06 2008 New Revision: 61299 Modified: python/branches/release25-maint/Doc/lib/emailmessage.tex Log: Grammar fix Modified: python/branches/release25-maint/Doc/lib/emailmessage.tex ============================================================================== --- python/branches/release25-maint/Doc/lib/emailmessage.tex (original) +++ python/branches/release25-maint/Doc/lib/emailmessage.tex Fri Mar 7 22:10:06 2008 @@ -34,7 +34,7 @@ \end{classdesc} \begin{methoddesc}[Message]{as_string}{\optional{unixfrom}} -Return the entire message flatten as a string. When optional +Return the entire message flattened as a string. When optional \var{unixfrom} is \code{True}, the envelope header is included in the returned string. \var{unixfrom} defaults to \code{False}. 
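The roundrobin() recipe backported to the itertools documentation in r61293 above can be tried directly. The following is a minimal sketch for a Python 2.5/2.6 interpreter; the function body is copied from the patch (comments added), and only the final print line is a hypothetical driver:

    from itertools import cycle, islice

    def roundrobin(*iterables):
        "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'"
        # Recipe credited to George Sakkis (as in the r61293 patch).
        pending = len(iterables)
        nexts = cycle(iter(it).next for it in iterables)
        while pending:
            try:
                for next in nexts:
                    yield next()
            except StopIteration:
                # One input ran dry; keep cycling over the remaining ones.
                pending -= 1
                nexts = cycle(islice(nexts, pending))

    print list(roundrobin('abc', 'd', 'ef'))   # ['a', 'd', 'e', 'b', 'f', 'c']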
From nnorwitz at gmail.com Fri Mar 7 23:23:14 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 7 Mar 2008 17:23:14 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080307222314.GA10881@python.psfb.org> More important issues: ---------------------- test_threading leaked [0, 0, 5] references, sum=5 Less important issues: ---------------------- test_threadsignals leaked [0, -8, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From python-checkins at python.org Fri Mar 7 23:21:33 2008 From: python-checkins at python.org (brett.cannon) Date: Fri, 7 Mar 2008 23:21:33 +0100 (CET) Subject: [Python-checkins] r61300 - sandbox/trunk/import_in_py/_importlib.py Message-ID: <20080307222133.D6E771E4027@bag.python.org> Author: brett.cannon Date: Fri Mar 7 23:21:33 2008 New Revision: 61300 Modified: sandbox/trunk/import_in_py/_importlib.py Log: Add a note about NullImporter being the importer set when nothing is found for a location in sys.path_hooks. Modified: sandbox/trunk/import_in_py/_importlib.py ============================================================================== --- sandbox/trunk/import_in_py/_importlib.py (original) +++ sandbox/trunk/import_in_py/_importlib.py Fri Mar 7 23:21:33 2008 @@ -723,6 +723,7 @@ else: # No importer factory on sys.path_hooks works; use the default # importer factory. + # XXX NullImporter used in import.c. try: importer = self.default_path_hook(path_entry) sys.path_importer_cache[path_entry] = importer From buildbot at python.org Sat Mar 8 00:27:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 07 Mar 2008 23:27:12 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD 2 trunk Message-ID: <20080307232712.D8C481E4003@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD 2 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%202%20trunk/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: werven-freebsd Build Reason: The web-page 'force build' button was pressed by 'Martin von Loewis': Test the installation Build Source Stamp: [branch trunk] HEAD Blamelist: BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Sat Mar 8 01:27:38 2008 From: python-checkins at python.org (brett.cannon) Date: Sat, 8 Mar 2008 01:27:38 +0100 (CET) Subject: [Python-checkins] r61301 - sandbox/trunk/import_in_py/docs/flowchart.graffle sandbox/trunk/import_in_py/docs/pseudocode.py Message-ID: <20080308002738.661DE1E4003@bag.python.org> Author: brett.cannon Date: Sat Mar 8 01:27:37 2008 New Revision: 61301 Added: sandbox/trunk/import_in_py/docs/flowchart.graffle (contents, props changed) Removed: sandbox/trunk/import_in_py/docs/pseudocode.py Log: Move documentation over to using an Omnigraffle flowchart. PDFs will be saved and checked in for the relevant flowcharts. Added: sandbox/trunk/import_in_py/docs/flowchart.graffle ============================================================================== Binary file. No diff available. 
Deleted: /sandbox/trunk/import_in_py/docs/pseudocode.py ============================================================================== --- /sandbox/trunk/import_in_py/docs/pseudocode.py Sat Mar 8 01:27:37 2008 +++ (empty file) @@ -1,177 +0,0 @@ -raise ImportError("module is just pseudocode") - -import sys -def __import__(name, globals, locals, fromlist, level): - """Pseudocode to explain how importing works. - - Caveats: - + Classic relative import semantics are not covered. - + Assume all code runs with the import lock held. - + Some structure (e.g., using importers for built-in and frozen - modules) is purely conceptual and not used in the C - implementation of import. - - """ - path = globals.get('__path__') - # If a relative import, figure out absolute name of requested module. - if level != 0: - # Adjust relative import based on whether caller is a package and - # the specified level in the call. - # Also make sure import does not go beyond top-level. - name = resolve_name(name, globals['__name__'], path, level) - # Import each parent in the name, starting at the top. - # Assume each_parent iterates through each parent of the module request, - # starting at the top-level parent. - # Since loaders are required to set the module in sys.modules, a successful - # import should be followed by 'continue' to let the next module be - # imported. - for name in each_parent(name): - # If the module is already cached in sys.modules then move along. - if name in sys.modules: - continue - # Try to find a __path__ attribute on the (possibly non-existent) - # parent. - immediate_parent = name.rsplit('.', 1)[0] - try: - path = sys.modules[immediate_parent].__path__ - except (KeyError, AttributeError): - path = None - # Search sys.meta_path. - for meta_importer in sys.meta_path: - loader = meta_importer.find_module(name, path) - if loader: - loader.load_module(name) - continue - # Check built-in and frozen modules. - else: - for module_finder in (builtin_importer, frozen_importer): - loader = module_finder(name, path) - if loader: - loader.load_module(name) - continue - # With sys.meta_path, built-ins, and frozen modules checked, now look - # at sys.path or parent.__path__. - search_path = path if path else sys.path - for path_entry in search_path - # Look for a cached importer. - if path_entry in sys.path_importer_cache: - importer = sys.path_importer_cache[path_entry] - # Found an importer. - if importer: - loader = importer.find_module(name) - # If the import can handle the module, load it. Otherwise - # fall through to the default import. - if loader: - loader.load_module(name) - continue - # A pre-existing importer was not found; try to make one. - else: - for importer_factory in sys.path_hooks: - try: - # If an importer is found, cache it and try to use it. - # If it can't be used, then fall through to the default - # import. - importer = importer_factory(path_entry) - sys.path_importer_cache[path_entry] = importer - loader = importer.find_module(name) - if loader: - loader.load_module(name) - except ImportError: - continue - else: - # No importer could be created, so set to None in - # sys.path_import_cache to skip trying to make one in the - # future, then fall through to the default import. - sys.path_importer_cache[path_entry] = None - # As no importer was found for the sys.path entry, use the default - # importer for extension modules, Python bytecode, and Python - # source modules. 
- loader = find_extension_module(name, path_entry) - if loader: - loader.load_module(name) - continue - loader = find_py_pyc_module(name, path_entry) - if loader: - loader.load_module(name) - continue - # All available places to look for a module have been exhausted; raise - # an ImportError. - raise ImportError - # With the module now imported and store in sys.modules, figure out exactly - # what module to return based on fromlist and how the module name was - # specified. - if not fromlist: - # The fromlist is empty, so return the top-most parent module. - # Whether the import was relative or absolute must be considered. - if level: - return top_relative_name(name, level) - else: - return sys.modules[name.split('.', 1)[0]] - else: - # As fromlist is not empty, return the module specified by the import. - # Must also handle possible imports of modules if the module imported - # was a package and thus names in the fromlist are modules within the - # package and not object within a module. - module = sys.modules[name] - # If not a module, then can just return the module as the names - # specified in fromlist are supposed to be attributes on the module. - if not hasattr(module, '__path__'): - return module - # The imported module was a package, which means everything in the - # fromlist are supposed to be modules within the package. That means - # that an *attempt* must be made to try to import every name in - # fromlist. - if '*' in fromlist and hasattr(module, '__all__'): - fromlist = list(fromlist).extend(module.__all__) - for item in fromlist: - if item == '*': - continue - if not hasattr(module, item): - try: - __import__('.'.join([name, item]), module.__dict__, level=0) - except ImportError: - pass - return module - - -from imp import get_suffixes, C_EXTENSION -def find_extension_module(name, path_entry): - """Try to locate a C extension module for the requested module.""" - # Get the immediate name of the module being searched for as the extension - # module's file name will be based on it. - immediate_name = name.rsplit('.', 1)[-1] - # Check every possible C extension suffix with the immediate module name - # (typically two; '.so' and 'module.so'). - for suffix in (suffix[0] for suffix in get_suffixes() - if suffix[2] == C_EXTENSION): - file_path = os.path.join(path_entry, immediate_name + suffix) - if os.path.isfile(file_path): # I/O - return extension_loader(name, file_path) - else: - return None - - -from imp import PY_SOURCE, PY_COMPILED -def find_py_pyc_module(name, path_entry): - """Try to locate a Python source code or bytecode module for the requested - module.""" - # Get the immediate name of the module being imported as the Python file's - # name will be based on it. - immediate_name = name.rsplit('.', 1)[-1] - # Check every valid Python code suffix for possible files (typically two; - # '.py' and either '.pyc' or '.pyo'). - for suffix in (suffix[0] for suffix in get_suffixes() - if suffix[2] in (PY_SOURCE, PY_COMPILED)): - # See if the module is actually a package. - pkg_init_path = os.path.join(path_entry, immediate_name, - '__init__' + suffix) - if os.path.isfile(pkg_init_path): # I/O - return py_loader(name, pkg_init_path, is_pkg=True) - # If module is not a package, see if it is a file by itself. 
- file_path = os.path.join(path_entry, immediate_name + suffix) - if os.path.isfile(file_path): # I/O - return py_loader(name, file_path, is_pkg=False) - - -def py_loader(name, file_path, is_pkg): - pass From python-checkins at python.org Sat Mar 8 01:28:59 2008 From: python-checkins at python.org (brett.cannon) Date: Sat, 8 Mar 2008 01:28:59 +0100 (CET) Subject: [Python-checkins] r61302 - sandbox/trunk/import_in_py/docs/__import__.pdf sandbox/trunk/import_in_py/docs/flowchart.graffle Message-ID: <20080308002859.407881E4003@bag.python.org> Author: brett.cannon Date: Sat Mar 8 01:28:58 2008 New Revision: 61302 Added: sandbox/trunk/import_in_py/docs/__import__.pdf (contents, props changed) Modified: sandbox/trunk/import_in_py/docs/flowchart.graffle Log: Export the flowchart for __import__(). Added: sandbox/trunk/import_in_py/docs/__import__.pdf ============================================================================== Binary file. No diff available. Modified: sandbox/trunk/import_in_py/docs/flowchart.graffle ============================================================================== Binary files. No diff available. From nnorwitz at gmail.com Sat Mar 8 02:04:05 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 7 Mar 2008 20:04:05 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080308010405.GA16623@python.psfb.org> 318 tests OK. 1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... 
test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:74: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. 
ss = socket.ssl(s) test_socketserver test_softspace test_sort test_sqlite test_ssl test test_ssl produced unexpected output: ********************************************************************** *** lines 2-7 of actual output doesn't appear in expected output after line 1: + Traceback (most recent call last): + File "/tmp/python-test/local/lib/python2.6/test/test_ssl.py", line 366, in serve_forever + self.handle_request() + File "/tmp/python-test/local/lib/python2.6/SocketServer.py", line 262, in handle_request + fd_sets = select.select([self], [], [], timeout) + error: (9, 'Bad file descriptor') ********************************************************************** test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 
1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [576366 refs] From nnorwitz at gmail.com Sat Mar 8 05:45:04 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 7 Mar 2008 23:45:04 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080308044504.GA21816@python.psfb.org> More important issues: ---------------------- test_threading leaked [0, 0, 5] references, sum=5 Less important issues: ---------------------- test_threadsignals leaked [0, -8, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 test_cmd_line leaked [-23, 0, 23] references, sum=0 test_smtplib leaked [86, -86, 3] references, sum=3 test_socketserver leaked [-75, 0, 0] references, sum=-75 From nnorwitz at gmail.com Sat Mar 8 05:45:34 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 7 Mar 2008 23:45:34 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080308044534.GA21895@python.psfb.org> More important issues: ---------------------- test_threading leaked [0, 0, 5] references, sum=5 Less important issues: ---------------------- test_threadsignals leaked [0, -8, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 test_cmd_line leaked [-23, 0, 23] references, sum=0 test_smtplib leaked [86, -86, 3] references, sum=3 test_socketserver leaked [-75, 0, 0] references, sum=-75 From python-checkins at python.org Sat Mar 8 10:54:06 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 8 Mar 2008 10:54:06 +0100 (CET) Subject: [Python-checkins] r61303 - python/trunk/Doc/reference/simple_stmts.rst Message-ID: <20080308095406.D96EB1E4005@bag.python.org> Author: georg.brandl Date: Sat Mar 8 10:54:06 2008 New Revision: 61303 Modified: python/trunk/Doc/reference/simple_stmts.rst Log: #2253: fix continue vs. finally docs. Modified: python/trunk/Doc/reference/simple_stmts.rst ============================================================================== --- python/trunk/Doc/reference/simple_stmts.rst (original) +++ python/trunk/Doc/reference/simple_stmts.rst Sat Mar 8 10:54:06 2008 @@ -619,9 +619,13 @@ :keyword:`continue` may only occur syntactically nested in a :keyword:`for` or :keyword:`while` loop, but not nested in a function or class definition or -:keyword:`finally` statement within that loop. [#]_ It continues with the next +:keyword:`finally` clause within that loop. It continues with the next cycle of the nearest enclosing loop. +When :keyword:`continue` passes control out of a :keyword:`try` statement with a +:keyword:`finally` clause, that :keyword:`finally` clause is executed before +really starting the next loop cycle. + .. _import: .. _from: @@ -920,9 +924,4 @@ :func:`locals` return the current global and local dictionary, respectively, which may be useful to pass around for use by :keyword:`exec`. -.. rubric:: Footnotes - -.. [#] It may occur within an :keyword:`except` or :keyword:`else` clause. The - restriction on occurring in the :keyword:`try` clause is implementor's laziness - and will eventually be lifted. 
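The behaviour documented by r61303 above is easy to demonstrate. A minimal sketch, assuming a 2.6/trunk interpreter (where continue is now permitted inside the try clause of a try/finally):

    for i in range(3):
        try:
            if i == 1:
                # continue skips the rest of the body for i == 1 ...
                continue
            print "body", i
        finally:
            # ... but the finally clause still runs before the next cycle.
            print "finally", i

    # Expected output:
    #   body 0
    #   finally 0
    #   finally 1
    #   body 2
    #   finally 2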
From python-checkins at python.org Sat Mar 8 11:01:44 2008 From: python-checkins at python.org (marc-andre.lemburg) Date: Sat, 8 Mar 2008 11:01:44 +0100 (CET) Subject: [Python-checkins] r61304 - python/trunk/Lib/platform.py Message-ID: <20080308100144.40DEB1E4005@bag.python.org> Author: marc-andre.lemburg Date: Sat Mar 8 11:01:43 2008 New Revision: 61304 Modified: python/trunk/Lib/platform.py Log: Add new name for Mandrake: Mandriva. Modified: python/trunk/Lib/platform.py ============================================================================== --- python/trunk/Lib/platform.py (original) +++ python/trunk/Lib/platform.py Sat Mar 8 11:01:43 2008 @@ -240,9 +240,10 @@ # and http://data.linux-ntfs.org/rpm/whichrpm # and http://www.die.net/doc/linux/man/man1/lsb_release.1.html -_supported_dists = ('SuSE', 'debian', 'fedora', 'redhat', 'centos', - 'mandrake', 'rocks', 'slackware', 'yellowdog', - 'gentoo', 'UnitedLinux', 'turbolinux') +_supported_dists = ( + 'SuSE', 'debian', 'fedora', 'redhat', 'centos', + 'mandrake', 'mandriva', 'rocks', 'slackware', 'yellowdog', 'gentoo', + 'UnitedLinux', 'turbolinux') def _parse_release_file(firstline): From python-checkins at python.org Sat Mar 8 11:05:24 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 8 Mar 2008 11:05:24 +0100 (CET) Subject: [Python-checkins] r61305 - python/trunk/Doc/c-api/intro.rst Message-ID: <20080308100524.BBC9F1E4005@bag.python.org> Author: georg.brandl Date: Sat Mar 8 11:05:24 2008 New Revision: 61305 Modified: python/trunk/Doc/c-api/intro.rst Log: #1533486: fix types in refcount intro. Modified: python/trunk/Doc/c-api/intro.rst ============================================================================== --- python/trunk/Doc/c-api/intro.rst (original) +++ python/trunk/Doc/c-api/intro.rst Sat Mar 8 11:05:24 2008 @@ -137,7 +137,7 @@ object type, such as a list, as well as performing any additional finalization that's needed. There's no chance that the reference count can overflow; at least as many bits are used to hold the reference count as there are distinct -memory locations in virtual memory (assuming ``sizeof(long) >= sizeof(char*)``). +memory locations in virtual memory (assuming ``sizeof(Py_ssize_t) >= sizeof(void*)``). Thus, the reference count increment is a simple operation. It is not necessary to increment an object's reference count for every local From buildbot at python.org Sat Mar 8 11:30:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 10:30:31 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080308103031.8F1351E401E@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/337 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,marc-andre.lemburg BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_ssl.py", line 366, in serve_forever self.handle_request() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/SocketServer.py", line 262, in handle_request fd_sets = select.select([self], [], [], timeout) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/SocketServer.py", line 436, in fileno return self.socket.fileno() File "", line 1, in fileno File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/socket.py", line 160, in _dummy raise error(EBADF, 'Bad file descriptor') error: [Errno 9] Bad file descriptor make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 11:47:08 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 10:47:08 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080308104708.77CB71E4005@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/622 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 12:04:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 11:04:28 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu 3.0 Message-ID: <20080308110428.D4BA01E4005@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%203.0/builds/593 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 12:06:58 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 11:06:58 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080308110658.6336E1E4005@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/142 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_codecmaps_cn test_os Traceback (most recent call last): File "./Lib/test/regrtest.py", line 597, in runtest_inner indirect_test() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_codecmaps_cn.py", line 30, in test_main test_support.run_unittest(__name__) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_support.py", line 552, in run_unittest suite.addTest(unittest.findTestCases(sys.modules[cls])) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 636, in findTestCases return _makeLoader(prefix, sortUsing, suiteClass).loadTestsFromModule(module) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 540, in loadTestsFromModule tests.append(self.loadTestsFromTestCase(obj)) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 532, in loadTestsFromTestCase return self.suiteClass(map(testCaseClass, testCaseNames)) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 387, in __init__ self.addTests(tests) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 423, in addTests for test in tests: File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_multibytecodec_support.py", line 279, in __init__ self.open_mapping_file() # test it to report the error early File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_multibytecodec_support.py", line 282, in open_mapping_file return test_support.open_urlresource(self.mapfileurl) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_support.py", line 277, in open_urlresource fn, _ = urllib.urlretrieve(url, filename) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/urllib.py", line 88, in urlretrieve return _urlopener.retrieve(url, filename, reporthook, data) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/urllib.py", line 230, in retrieve fp = self.open(url, data) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/urllib.py", line 202, in open raise IOError('socket error', msg).with_traceback(sys.exc_info()[2]) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/urllib.py", line 198, in open return getattr(self, name)(url) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/urllib.py", line 372, in open_http return self._open_generic_http(httplib.HTTPConnection, url, data) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/urllib.py", line 351, in _open_generic_http http_conn.request("GET", selector, headers=headers) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/httplib.py", line 898, in request self._send_request(method, url, body, headers) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/httplib.py", line 935, in _send_request self.endheaders() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/httplib.py", line 893, in endheaders self._send_output() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/httplib.py", line 759, in _send_output self.send(msg) File 
"/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/httplib.py", line 718, in send self.connect() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/httplib.py", line 702, in connect self.timeout) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/socket.py", line 294, in create_connection raise error(msg) IOError: [Errno socket error] [Errno 111] Connection refused ====================================================================== ERROR: test_update2 (test.test_os.EnvironTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_os.py", line 221, in test_update2 File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/os.py", line 646, in popen File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/io.py", line 1169, in __init__ ValueError: invalid encoding: None make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 12:14:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 11:14:14 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD 2 3.0 Message-ID: <20080308111414.3FFC91E4005@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD 2 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%202%203.0/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: werven-freebsd Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 12:30:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 11:30:06 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20080308113006.566DC1E4005@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/648 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_codecmaps_cn test_os Traceback (most recent call last): File "./Lib/test/regrtest.py", line 597, in runtest_inner indirect_test() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_codecmaps_cn.py", line 30, in test_main test_support.run_unittest(__name__) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_support.py", line 552, in run_unittest suite.addTest(unittest.findTestCases(sys.modules[cls])) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/unittest.py", line 636, in findTestCases return _makeLoader(prefix, sortUsing, suiteClass).loadTestsFromModule(module) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/unittest.py", line 540, in loadTestsFromModule tests.append(self.loadTestsFromTestCase(obj)) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/unittest.py", line 532, in loadTestsFromTestCase return self.suiteClass(map(testCaseClass, testCaseNames)) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/unittest.py", line 387, in __init__ self.addTests(tests) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/unittest.py", line 423, in addTests for test in tests: File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_multibytecodec_support.py", line 279, in __init__ self.open_mapping_file() # test it to report the error early File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_multibytecodec_support.py", line 282, in open_mapping_file return test_support.open_urlresource(self.mapfileurl) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_support.py", line 277, in open_urlresource fn, _ = urllib.urlretrieve(url, filename) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/urllib.py", line 88, in urlretrieve return _urlopener.retrieve(url, filename, reporthook, data) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/urllib.py", line 230, in retrieve fp = self.open(url, data) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/urllib.py", line 202, in open raise IOError('socket error', msg).with_traceback(sys.exc_info()[2]) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/urllib.py", line 198, in open return getattr(self, name)(url) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/urllib.py", line 372, in open_http return self._open_generic_http(httplib.HTTPConnection, url, data) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/urllib.py", line 351, in _open_generic_http http_conn.request("GET", selector, headers=headers) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 898, in request self._send_request(method, url, body, headers) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 935, in _send_request self.endheaders() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 893, in endheaders self._send_output() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 759, in _send_output self.send(msg) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 718, in send self.connect() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 702, in 
connect self.timeout) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/socket.py", line 294, in create_connection raise error(msg) IOError: [Errno socket error] [Errno 146] Connection refused ====================================================================== ERROR: test_update2 (test.test_os.EnvironTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_os.py", line 221, in test_update2 File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/os.py", line 646, in popen File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/io.py", line 1169, in __init__ ValueError: invalid encoding: None sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 13:05:08 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 12:05:08 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080308120508.9526C1E4005@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/151 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 13:08:48 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 12:08:48 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080308120848.8255A1E4005@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/81 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 13:34:24 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 12:34:24 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080308123424.727891E4005@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/690 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Sat Mar 8 17:50:27 2008 From: python-checkins at python.org (facundo.batista) Date: Sat, 8 Mar 2008 17:50:27 +0100 (CET) Subject: [Python-checkins] r61312 - in python/trunk: Doc/library/pdb.rst Lib/pdb.py Misc/NEWS Message-ID: <20080308165027.AA0D11E4008@bag.python.org> Author: facundo.batista Date: Sat Mar 8 17:50:27 2008 New Revision: 61312 Modified: python/trunk/Doc/library/pdb.rst python/trunk/Lib/pdb.py python/trunk/Misc/NEWS Log: Issue 1106316. post_mortem()'s parameter, traceback, is now optional: it defaults to the traceback of the exception that is currently being handled. 
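A minimal usage sketch of the change described in this log message (the diff follows below): with r61312, post_mortem() falls back to the traceback of the exception currently being handled, and raises ValueError when called with no argument outside an exception handler. The failing function here is purely illustrative.

    import pdb

    def broken():
        return {}['missing']      # hypothetical failing call, for illustration

    try:
        broken()
    except KeyError:
        # No explicit traceback needed any more; sys.exc_info()[2] is used,
        # and the interactive debugger starts at the failing frame.
        pdb.post_mortem()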
Modified: python/trunk/Doc/library/pdb.rst ============================================================================== --- python/trunk/Doc/library/pdb.rst (original) +++ python/trunk/Doc/library/pdb.rst Sat Mar 8 17:50:27 2008 @@ -107,9 +107,12 @@ being debugged (e.g. when an assertion fails). -.. function:: post_mortem(traceback) +.. function:: post_mortem([traceback]) - Enter post-mortem debugging of the given *traceback* object. + Enter post-mortem debugging of the given *traceback* object. If no + *traceback* is given, it uses the one of the exception that is currently + being handled (an exception must be being handled if the default is to be + used). .. function:: pm() Modified: python/trunk/Lib/pdb.py ============================================================================== --- python/trunk/Lib/pdb.py (original) +++ python/trunk/Lib/pdb.py Sat Mar 8 17:50:27 2008 @@ -1198,7 +1198,16 @@ # Post-Mortem interface -def post_mortem(t): +def post_mortem(t=None): + # handling the default + if t is None: + # sys.exc_info() returns (type, value, traceback) if an exception is + # being handled, otherwise it returns None + t = sys.exc_info()[2] + if t is None: + raise ValueError("A valid traceback must be passed if no " + "exception is being handled") + p = Pdb() p.reset() while t.tb_next is not None: Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 8 17:50:27 2008 @@ -21,6 +21,11 @@ Library ------- +- Issue #1106316: pdb.post_mortem()'s parameter, "traceback", is now + optional: it defaults to the traceback of the exception that is currently + being handled (is mandatory to be in the middle of an exception, otherwise + it raises ValueError). + - Issue #1193577: A .shutdown() method has been added to SocketServers which terminates the .serve_forever() loop. From buildbot at python.org Sat Mar 8 18:58:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 17:58:37 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080308175837.898551E4019@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/2995 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: facundo.batista BUILD FAILED: failed svn sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 19:01:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 18:01:53 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080308180153.6D62E1E4005@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2664 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: facundo.batista BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From python-checkins at python.org Sat Mar 8 19:26:55 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Sat, 8 Mar 2008 19:26:55 +0100 (CET) Subject: [Python-checkins] r61313 - python/trunk/Tools/pybench/Setup.py python/trunk/Tools/pybench/With.py Message-ID: <20080308182655.3DD221E4016@bag.python.org> Author: jeffrey.yasskin Date: Sat Mar 8 19:26:54 2008 New Revision: 61313 Added: python/trunk/Tools/pybench/With.py Modified: python/trunk/Tools/pybench/Setup.py Log: Add tests for with and finally performance to pybench. Modified: python/trunk/Tools/pybench/Setup.py ============================================================================== --- python/trunk/Tools/pybench/Setup.py (original) +++ python/trunk/Tools/pybench/Setup.py Sat Mar 8 19:26:54 2008 @@ -30,6 +30,10 @@ from Tuples import * from Dict import * from Exceptions import * +try: + from With import * +except SyntaxError: + pass from Imports import * from Strings import * from Numbers import * Added: python/trunk/Tools/pybench/With.py ============================================================================== --- (empty file) +++ python/trunk/Tools/pybench/With.py Sat Mar 8 19:26:54 2008 @@ -0,0 +1,190 @@ +from __future__ import with_statement +from pybench import Test + +class WithFinally(Test): + + version = 2.0 + operations = 20 + rounds = 80000 + + class ContextManager(object): + def __enter__(self): + pass + def __exit__(self, exc, val, tb): + pass + + def test(self): + + cm = self.ContextManager() + + for i in xrange(self.rounds): + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + with cm: pass + + def calibrate(self): + + cm = self.ContextManager() + + for i in xrange(self.rounds): + pass + + +class TryFinally(Test): + + version = 2.0 + operations = 20 + rounds = 80000 + + class ContextManager(object): + def __enter__(self): + pass + def __exit__(self): + # "Context manager" objects used just for their cleanup + # actions in finally blocks usually don't have parameters. 
+ pass + + def test(self): + + cm = self.ContextManager() + + for i in xrange(self.rounds): + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + cm.__enter__() + try: pass + finally: cm.__exit__() + + def calibrate(self): + + cm = self.ContextManager() + + for i in xrange(self.rounds): + pass + + +class WithRaiseExcept(Test): + + version = 2.0 + operations = 2 + 3 + 3 + rounds = 100000 + + class BlockExceptions(object): + def __enter__(self): + pass + def __exit__(self, exc, val, tb): + return True + + def test(self): + + error = ValueError + be = self.BlockExceptions() + + for i in xrange(self.rounds): + with be: raise error + with be: raise error + with be: raise error,"something" + with be: raise error,"something" + with be: raise error,"something" + with be: raise error("something") + with be: raise error("something") + with be: raise error("something") + + def calibrate(self): + + error = ValueError + be = self.BlockExceptions() + + for i in xrange(self.rounds): + pass From buildbot at python.org Sat Mar 8 19:51:34 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 18:51:34 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080308185141.A033D1E4005@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/67 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl sincerely, -The Buildbot From buildbot at python.org Sat Mar 8 19:55:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 18:55:26 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080308185554.F24A61E4005@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
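For context on the pybench With.py tests added in r61313 above: WithFinally and TryFinally time the two roughly equivalent spellings sketched below (illustration only, not part of the commit; the benchmark's own TryFinally context manager takes no __exit__ arguments).

    from __future__ import with_statement   # required on Python 2.5

    class ContextManager(object):
        def __enter__(self):
            pass
        def __exit__(self, exc, val, tb):
            pass

    cm = ContextManager()

    # What WithFinally measures:
    with cm:
        pass

    # What TryFinally measures: the manual enter/exit protocol that the
    # with statement wraps.
    cm.__enter__()
    try:
        pass
    finally:
        cm.__exit__(None, None, None)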
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/339 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From nnorwitz at gmail.com Sat Mar 8 20:23:43 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 8 Mar 2008 11:23:43 -0800 Subject: [Python-checkins] Python Regression Test Failures basics (1) In-Reply-To: References: <20080301113313.GA25006@python.psfb.org> Message-ID: Gerhard, There is another problem with test_sqlite on the trunk. In Lib/sqlite3/test/transactions.py two functions cause hangs: CheckLocking CheckRaiseTimeout which is causing the builtbots to fail. Could you take a look? Thanks, n On Sat, Mar 1, 2008 at 9:14 AM, Neal Norwitz wrote: > Gerhard, > > I'm guessing this failure is due to your recent change. The exception is: > > > File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", > line 118, in CheckWorkaroundForBuggySqliteTransferBindings > self.con.execute("create table if not exists foo(bar)") > OperationalError: near "not": syntax error > > The sqlite version is: sqlite-3.2.1-r3 on an old gentoo x86 box. > > Thanks, > n > > > > On Sat, Mar 1, 2008 at 3:33 AM, Neal Norwitz wrote: > > 313 tests OK. > > 1 test failed: > > test_sqlite > > 28 tests skipped: > > test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 > > test_cd test_cl test_curses test_gl test_imageop test_imgfile > > test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev > > test_pep277 test_scriptpackages test_socketserver test_startfile > > test_sunaudiodev test_tcl test_timeout test_unicode_file > > test_urllib2net test_urllibnet test_winreg test_winsound > > test_zipfile64 > > 1 skip unexpected on linux2: > > test_ioctl > > > > test_grammar > > test_opcodes > > test_dict > > test_builtin > > test_exceptions > > test_types > > test_unittest > > test_doctest > > test_doctest2 > > test_MimeWriter > > test_SimpleHTTPServer > > test_StringIO > > test___all__ > > test___future__ > > test__locale > > test_abc > > test_abstract_numbers > > test_aepack > > test_aepack skipped -- No module named aepack > > test_al > > test_al skipped -- No module named al > > test_anydbm > > test_applesingle > > test_applesingle skipped -- No module named macostools > > test_array > > test_ast > > test_asynchat > > test_asyncore > > test_atexit > > test_audioop > > test_augassign > > test_base64 > > test_bastion > > test_bigaddrspace > > test_bigmem > > test_binascii > > test_binhex > > test_binop > > test_bisect > > test_bool > > test_bsddb > > test_bsddb185 > > test_bsddb185 skipped -- No module named bsddb185 > > test_bsddb3 > > test_bsddb3 skipped -- Use of the `bsddb' resource not enabled > > test_buffer > > test_bufio > > test_bz2 > > test_calendar > > test_call > > test_capi > > test_cd > > test_cd skipped -- No module named cd > > test_cfgparser > > test_cgi > > test_charmapcodec > > test_cl > > test_cl skipped -- No module named cl > > test_class > > test_cmath > > test_cmd > > test_cmd_line > > test_cmd_line_script > > test_code > > test_codeccallbacks > > test_codecencodings_cn > > test_codecencodings_hk > > test_codecencodings_jp > > test_codecencodings_kr > > test_codecencodings_tw > > test_codecmaps_cn > > test_codecmaps_hk > > test_codecmaps_jp > > 
test_codecmaps_kr > > test_codecmaps_tw > > test_codecs > > test_codeop > > test_coding > > test_coercion > > test_collections > > test_colorsys > > test_commands > > test_compare > > test_compile > > test_compiler > > test_complex > > test_complex_args > > test_contains > > test_contextlib > > test_cookie > > test_cookielib > > test_copy > > test_copy_reg > > test_cpickle > > test_cprofile > > test_crypt > > test_csv > > test_ctypes > > test_curses > > test_curses skipped -- Use of the `curses' resource not enabled > > test_datetime > > test_dbm > > test_decimal > > test_decorators > > test_defaultdict > > test_deque > > test_descr > > test_descrtut > > test_difflib > > test_dircache > > test_dis > > test_distutils > > test_dl > > test_docxmlrpc > > test_dumbdbm > > test_dummy_thread > > test_dummy_threading > > test_email > > test_email_codecs > > test_email_renamed > > test_enumerate > > test_eof > > test_errno > > test_exception_variations > > test_extcall > > test_fcntl > > test_file > > test_filecmp > > test_fileinput > > test_float > > test_fnmatch > > test_fork1 > > test_format > > test_fpformat > > test_fractions > > test_frozen > > test_ftplib > > test_funcattrs > > test_functools > > test_future > > test_future_builtins > > test_gc > > test_gdbm > > test_generators > > test_genericpath > > test_genexps > > test_getargs > > test_getargs2 > > test_getopt > > test_gettext > > test_gl > > test_gl skipped -- No module named gl > > test_glob > > test_global > > test_grp > > test_gzip > > test_hash > > test_hashlib > > test_heapq > > test_hexoct > > test_hmac > > test_hotshot > > test_htmllib > > test_htmlparser > > test_httplib > > test_imageop > > test_imageop skipped -- No module named imgfile > > test_imaplib > > test_imgfile > > test_imgfile skipped -- No module named imgfile > > test_imp > > test_import > > test_importhooks > > test_index > > test_inspect > > test_ioctl > > test_ioctl skipped -- Unable to open /dev/tty > > test_isinstance > > test_iter > > test_iterlen > > test_itertools > > test_largefile > > test_linuxaudiodev > > test_linuxaudiodev skipped -- Use of the `audio' resource not enabled > > test_list > > test_locale > > test_logging > > test_long > > test_long_future > > test_longexp > > test_macostools > > test_macostools skipped -- No module named macostools > > test_macpath > > test_mailbox > > test_marshal > > test_math > > test_md5 > > test_mhlib > > test_mimetools > > test_mimetypes > > test_minidom > > test_mmap > > test_module > > test_modulefinder > > test_multibytecodec > > test_multibytecodec_support > > test_multifile > > test_mutants > > test_mutex > > test_netrc > > test_new > > test_nis > > test_normalization > > test_ntpath > > test_old_mailbox > > test_openpty > > test_operator > > test_optparse > > test_os > > test_ossaudiodev > > test_ossaudiodev skipped -- Use of the `audio' resource not enabled > > test_parser > > s_push: parser stack overflow > > test_peepholer > > test_pep247 > > test_pep263 > > test_pep277 > > test_pep277 skipped -- test works only on NT+ > > test_pep292 > > test_pep352 > > test_pickle > > test_pickletools > > test_pipes > > test_pkg > > test_pkgimport > > test_platform > > test_plistlib > > test_poll > > test_popen > > [8018 refs] > > [8018 refs] > > [8018 refs] > > test_popen2 > > test_poplib > > test_posix > > test_posixpath > > test_pow > > test_pprint > > test_profile > > test_profilehooks > > test_property > > test_pstats > > test_pty > > test_pwd > > test_pyclbr > > test_pyexpat > > test_queue > > test_quopri > > 
[8395 refs] > > [8395 refs] > > test_random > > test_re > > test_repr > > test_resource > > test_rfc822 > > test_richcmp > > test_robotparser > > test_runpy > > test_sax > > test_scope > > test_scriptpackages > > test_scriptpackages skipped -- No module named aetools > > test_select > > test_set > > test_sets > > test_sgmllib > > test_sha > > test_shelve > > test_shlex > > test_shutil > > test_signal > > test_site > > test_slice > > test_smtplib > > test_socket > > test_socket_ssl > > test_socketserver > > test_socketserver skipped -- Use of the `network' resource not enabled > > test_softspace > > test_sort > > test_sqlite > > test test_sqlite failed -- Traceback (most recent call last): > > File "/tmp/python-test/local/lib/python2.6/sqlite3/test/regression.py", line 118, in CheckWorkaroundForBuggySqliteTransferBindings > > self.con.execute("create table if not exists foo(bar)") > > OperationalError: near "not": syntax error > > > > test_ssl > > test_startfile > > test_startfile skipped -- cannot import name startfile > > test_str > > test_strftime > > test_string > > test_stringprep > > test_strop > > test_strptime > > test_struct > > test_structmembers > > test_structseq > > test_subprocess > > [8013 refs] > > [8015 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8015 refs] > > [9938 refs] > > [8231 refs] > > [8015 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > [8013 refs] > > . > > [8013 refs] > > [8013 refs] > > this bit of output is from a test of stdout in a different process ... > > [8013 refs] > > [8013 refs] > > [8231 refs] > > test_sunaudiodev > > test_sunaudiodev skipped -- No module named sunaudiodev > > test_sundry > > test_symtable > > test_syntax > > test_sys > > [8013 refs] > > [8013 refs] > > test_tarfile > > test_tcl > > test_tcl skipped -- No module named _tkinter > > test_telnetlib > > test_tempfile > > [8018 refs] > > test_textwrap > > test_thread > > test_threaded_import > > test_threadedtempfile > > test_threading > > [11149 refs] > > test_threading_local > > test_threadsignals > > test_time > > test_timeout > > test_timeout skipped -- Use of the `network' resource not enabled > > test_tokenize > > test_trace > > test_traceback > > test_transformer > > test_tuple > > test_typechecks > > test_ucn > > test_unary > > test_unicode > > test_unicode_file > > test_unicode_file skipped -- No Unicode filesystem semantics on this platform. > > test_unicodedata > > test_univnewlines > > test_unpack > > test_urllib > > test_urllib2 > > test_urllib2_localnet > > test_urllib2net > > test_urllib2net skipped -- Use of the `network' resource not enabled > > test_urllibnet > > test_urllibnet skipped -- Use of the `network' resource not enabled > > test_urlparse > > test_userdict > > test_userlist > > test_userstring > > test_uu > > test_uuid > > WARNING: uuid.getnode is unreliable on many platforms. > > It is disabled until the code and/or test can be fixed properly. > > WARNING: uuid._ifconfig_getnode is unreliable on many platforms. > > It is disabled until the code and/or test can be fixed properly. > > WARNING: uuid._unixdll_getnode is unreliable on many platforms. > > It is disabled until the code and/or test can be fixed properly. 
> > test_wait3 > > test_wait4 > > test_warnings > > test_wave > > test_weakref > > test_whichdb > > test_winreg > > test_winreg skipped -- No module named _winreg > > test_winsound > > test_winsound skipped -- No module named winsound > > test_with > > test_wsgiref > > test_xdrlib > > test_xml_etree > > test_xml_etree_c > > test_xmllib > > test_xmlrpc > > test_xpickle > > test_xrange > > test_zipfile > > /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated > > zipfp.close() > > /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated > > zipfp.close() > > test_zipfile64 > > test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run > > test_zipimport > > test_zlib > > 313 tests OK. > > 1 test failed: > > test_sqlite > > 28 tests skipped: > > test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 > > test_cd test_cl test_curses test_gl test_imageop test_imgfile > > test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev > > test_pep277 test_scriptpackages test_socketserver test_startfile > > test_sunaudiodev test_tcl test_timeout test_unicode_file > > test_urllib2net test_urllibnet test_winreg test_winsound > > test_zipfile64 > > 1 skip unexpected on linux2: > > test_ioctl > > [570330 refs] > > _______________________________________________ > > Python-checkins mailing list > > Python-checkins at python.org > > http://mail.python.org/mailman/listinfo/python-checkins > > > From python-checkins at python.org Sat Mar 8 21:08:22 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Sat, 8 Mar 2008 21:08:22 +0100 (CET) Subject: [Python-checkins] r61314 - python/trunk/Tools/pybench/pybench.py Message-ID: <20080308200822.2F89A1E402A@bag.python.org> Author: jeffrey.yasskin Date: Sat Mar 8 21:08:21 2008 New Revision: 61314 Modified: python/trunk/Tools/pybench/pybench.py Log: Fix pybench for pythons < 2.6, tested back to 2.3. Modified: python/trunk/Tools/pybench/pybench.py ============================================================================== --- python/trunk/Tools/pybench/pybench.py (original) +++ python/trunk/Tools/pybench/pybench.py Sat Mar 8 21:08:21 2008 @@ -121,7 +121,7 @@ 'platform': platform.platform(), 'processor': platform.processor(), 'executable': sys.executable, - 'implementation': platform.python_implementation(), + 'implementation': getattr(platform, 'python_implementation', 'n/a'), 'python': platform.python_version(), 'compiler': platform.python_compiler(), 'buildno': buildno, @@ -837,7 +837,7 @@ print 'PYBENCH %s' % __version__ print '-' * LINE print '* using %s %s' % ( - platform.python_implementation(), + getattr(platform, 'python_implementation', 'Python'), string.join(string.split(sys.version), ' ')) # Switch off garbage collection From buildbot at python.org Sat Mar 8 21:26:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 20:26:22 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080308202622.86D2C1E4005@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. 
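On the pybench portability fix in r61314 above (later corrected in r61317, further down in this batch): getattr() returns the attribute itself, so the fallback for Pythons that lack platform.python_implementation() has to be a callable as well, not a bare string. A sketch of the resulting pattern (illustration only):

    import platform

    # platform.python_implementation() is missing on older Pythons, so
    # fall back to a callable returning 'n/a' and call whichever we got.
    implementation = getattr(platform, 'python_implementation',
                             lambda: 'n/a')()
    print(implementation)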
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/175 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Sat Mar 8 21:33:20 2008 From: python-checkins at python.org (brett.cannon) Date: Sat, 8 Mar 2008 21:33:20 +0100 (CET) Subject: [Python-checkins] r61315 - peps/trunk/pep-3100.txt Message-ID: <20080308203320.3BC6A1E4022@bag.python.org> Author: brett.cannon Date: Sat Mar 8 21:33:19 2008 New Revision: 61315 Modified: peps/trunk/pep-3100.txt Log: Fix a spelling typo and add a little markup. Modified: peps/trunk/pep-3100.txt ============================================================================== --- peps/trunk/pep-3100.txt (original) +++ peps/trunk/pep-3100.txt Sat Mar 8 21:33:19 2008 @@ -84,8 +84,9 @@ * Imports [#pep328]_ + Imports will be absolute by default. [done] + Relative imports must be explicitly specified. [done] - + Indirection entries in sys.modules (i.e., a value of None for - A.string means to use the top-level sring module) will not be supported. + + Indirection entries in ``sys.modules`` (i.e., a value of ``None`` for + ``A.string`` means to use the top-level ``string`` module) will not be + supported. * __init__.py might become optional in sub-packages? __init__.py will still be required for top-level packages. * Cleanup the Py_InitModule() variants {,3,4} (also import and parser APIs) From nnorwitz at gmail.com Sat Mar 8 22:23:25 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 8 Mar 2008 16:23:25 -0500 Subject: [Python-checkins] Python Regression Test Failures all () Message-ID: <20080308212325.GA12951@python.psfb.org> From buildbot at python.org Sat Mar 8 22:18:57 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 21:18:57 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080308211905.5B3B31E4012@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/162 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 8 22:23:54 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 8 Mar 2008 22:23:54 +0100 (CET) Subject: [Python-checkins] r61316 - doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/quickstart.py Message-ID: <20080308212354.DE52B1E4005@bag.python.org> Author: georg.brandl Date: 
Sat Mar 8 22:23:54 2008 New Revision: 61316 Modified: doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/quickstart.py Log: Add some labels by default; create a master doc in quickstart. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Sat Mar 8 22:23:54 2008 @@ -244,6 +244,7 @@ doctree = self.env.get_and_resolve_doctree(docname, self) except Exception, err: warnings.append('%s:: doctree not found!' % docname) + continue self.write_doc(docname, doctree) for warning in warnings: if warning.strip(): Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Sat Mar 8 22:23:54 2008 @@ -243,6 +243,11 @@ self.index_num = 0 # autonumber for index targets self.gloss_entries = set() # existing definition labels + # Some magically present labels + self.labels['genindex'] = ('genindex', '', 'Index') + self.labels['modindex'] = ('modindex', '', 'Module Index') + self.labels['search'] = ('search', '', 'Search Page') + def set_warnfunc(self, func): self._warnfunc = func self.settings['warning_stream'] = RedirStream(func) @@ -614,15 +619,18 @@ entries.append(toc) if entries: return addnodes.compact_paragraph('', '', *entries) - return [] + return None for toctreenode in doctree.traverse(addnodes.toctree): maxdepth = toctreenode.get('maxdepth', -1) newnode = _entries_from_toctree(toctreenode) - # prune the tree to maxdepth - if maxdepth > 0: - walk_depth(newnode, 1, maxdepth) - toctreenode.replace_self(newnode) + if newnode is not None: + # prune the tree to maxdepth + if maxdepth > 0: + walk_depth(newnode, 1, maxdepth) + toctreenode.replace_self(newnode) + else: + toctreenode.replace_self([]) # set the target paths in the toctrees (they are not known # at TOC generation time) @@ -667,7 +675,9 @@ contnode['refdocname'] = docname contnode['refsectname'] = sectname newnode['refuri'] = builder.get_relative_uri( - fromdocname, docname) + '#' + labelid + fromdocname, docname) + if labelid: + newnode['refuri'] += '#' + labelid newnode.append(innernode) elif typ == 'keyword': # keywords are referenced by named labels Modified: doctools/trunk/sphinx/quickstart.py ============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Sat Mar 8 22:23:54 2008 @@ -140,14 +140,53 @@ #latex_appendices = [] ''' + +MASTER_FILE = '''\ +.. %(project)s documentation master file, created by sphinx-quickstart.py on %(now)s. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +Welcome to %(project)s's documentation! +===========%(underline)s================= + +Contents: + +.. toctree:: + :maxdepth: 2 + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` + +''' + +def mkdir_p(dir): + if path.isdir(dir): + return + os.makedirs(dir) + + def is_path(x): - """Please enter an existing path name.""" - return path.isdir(x) + """Please enter a valid path name.""" + return path.isdir(x) or not path.exists(x) def nonempty(x): """Please enter some text.""" return len(x) +def choice(*l): + def val(x): + return x in l + val.__doc__ = 'Please enter one of %s.' 
% ', '.join(l) + return val + +def boolean(x): + """Please enter either 'y' or 'n'.""" + return x.upper() in ('Y', 'YES', 'N', 'NO') + def suffix(x): """Please enter a file suffix, e.g. '.rst' or '.txt'.""" return x[0:1] == '.' and len(x) > 1 @@ -184,10 +223,22 @@ accept a default value, if one is given in brackets).''' print ''' -This tool will create "src" and "build" folders in this path, which -must be an existing directory.''' +Enter the root path for documentation.''' do_prompt(d, 'path', 'Root path for the documentation', '.', is_path) print ''' +You have two options for placing the build directory for Sphinx output. +Either, you use a directory ".build" within the root path, or you separate +"source" and "build" directories within the root path.''' + do_prompt(d, 'sep', 'Separate source and build directories (y/n)', 'n', + boolean) + print ''' +Inside the root directory, two more directories will be created; ".templates" +for custom HTML templates and ".static" for custom stylesheets and other +static files. Since the leading dot may be inconvenient for Windows users, +you can enter another prefix (such as "_") to replace the dot.''' + do_prompt(d, 'dot', 'Name prefix for templates and static dir', '.', ok) + + print ''' The project name will occur in several places in the built documentation.''' do_prompt(d, 'project', 'Project name') do_prompt(d, 'author', 'Author name(s)') @@ -208,36 +259,41 @@ "contents tree", that is, it is the root of the hierarchical structure of the documents. Normally, this is "index", but if your "index" document is a custom template, you can also set this to another filename.''' - do_prompt(d, 'master', 'Name of your master document (without suffix)', 'index') - print ''' -Inside the "src" directory, two directories will be created; ".templates" -for custom HTML templates and ".static" for custom stylesheets and other -static files. Since the leading dot may be inconvenient for Windows users, -you can enter another prefix (such as "_") to replace the dot.''' - do_prompt(d, 'dot', 'Name prefix for templates and static dir', '.', ok) + do_prompt(d, 'master', 'Name of your master document (without suffix)', + 'index') d['year'] = time.strftime('%Y') d['now'] = time.asctime() + d['underline'] = len(d['project']) * '=' + + if not path.isdir(d['path']): + mkdir_p(d['path']) - os.mkdir(path.join(d['path'], 'src')) - os.mkdir(path.join(d['path'], 'build')) + separate = d['sep'].upper() in ('Y', 'YES') + srcdir = separate and path.join(d['path'], 'source') or d['path'] - f = open(path.join(d['path'], 'src', 'conf.py'), 'w') + mkdir_p(srcdir) + if separate: + mkdir_p(path.join(d['path'], 'build')) + else: + mkdir_p(path.join(srcdir, d['dot'] + 'build')) + mkdir_p(path.join(srcdir, d['dot'] + 'templates')) + mkdir_p(path.join(srcdir, d['dot'] + 'static')) + + f = open(path.join(srcdir, 'conf.py'), 'w') f.write(QUICKSTART_CONF % d) f.close() - masterfile = path.join(d['path'], 'src', d['master'] + d['suffix']) - - templatedir = path.join(d['path'], 'src', d['dot'] + 'templates') - os.mkdir(templatedir) - staticdir = path.join(d['path'], 'src', d['dot'] + 'static') - os.mkdir(staticdir) + masterfile = path.join(srcdir, d['master'] + d['suffix']) + f = open(masterfile, 'w') + f.write(MASTER_FILE % d) + f.close() print print bold('Finished: An initial directory structure has been created.') print ''' -You should now create your master file %s and other documentation -sources. Use the sphinx-build.py script to build the docs. 
+You should now populate your master file %s and create other documentation +source files. Use the sphinx-build.py script to build the docs. ''' % (masterfile) From python-checkins at python.org Sat Mar 8 22:35:16 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Sat, 8 Mar 2008 22:35:16 +0100 (CET) Subject: [Python-checkins] r61317 - python/trunk/Tools/pybench/pybench.py Message-ID: <20080308213516.1A98C1E4005@bag.python.org> Author: jeffrey.yasskin Date: Sat Mar 8 22:35:15 2008 New Revision: 61317 Modified: python/trunk/Tools/pybench/pybench.py Log: Well that was dumb. platform.python_implementation returns a function, not a string. Modified: python/trunk/Tools/pybench/pybench.py ============================================================================== --- python/trunk/Tools/pybench/pybench.py (original) +++ python/trunk/Tools/pybench/pybench.py Sat Mar 8 22:35:15 2008 @@ -121,7 +121,8 @@ 'platform': platform.platform(), 'processor': platform.processor(), 'executable': sys.executable, - 'implementation': getattr(platform, 'python_implementation', 'n/a'), + 'implementation': getattr(platform, 'python_implementation', + lambda:'n/a')(), 'python': platform.python_version(), 'compiler': platform.python_compiler(), 'buildno': buildno, @@ -837,7 +838,7 @@ print 'PYBENCH %s' % __version__ print '-' * LINE print '* using %s %s' % ( - getattr(platform, 'python_implementation', 'Python'), + getattr(platform, 'python_implementation', lambda:'Python')(), string.join(string.split(sys.version), ' ')) # Switch off garbage collection From nnorwitz at gmail.com Sat Mar 8 23:26:40 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 8 Mar 2008 17:26:40 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080308222640.GA7657@python.psfb.org> More important issues: ---------------------- test_threadedtempfile leaked [0, 0, 99] references, sum=99 Less important issues: ---------------------- test_cmd_line leaked [0, 0, 23] references, sum=23 test_smtplib leaked [0, 194, -196] references, sum=-2 test_urllib2_localnet leaked [3, 182, -176] references, sum=9 From nnorwitz at gmail.com Sat Mar 8 23:44:21 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 8 Mar 2008 17:44:21 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080308224421.GA10910@python.psfb.org> 318 tests OK. 
1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization 
test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:74: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_softspace test_sort test_sqlite test_ssl test test_ssl produced unexpected output: ********************************************************************** *** lines 2-7 of actual output doesn't appear in expected output after line 1: + Traceback (most recent call last): + File "/tmp/python-test/local/lib/python2.6/test/test_ssl.py", line 366, in serve_forever + self.handle_request() + File "/tmp/python-test/local/lib/python2.6/SocketServer.py", line 262, in handle_request + fd_sets = select.select([self], [], [], timeout) + error: (9, 'Bad file descriptor') ********************************************************************** test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [576366 refs] From buildbot at python.org Sat Mar 8 23:57:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 08 Mar 2008 22:57:01 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080308225701.82E201E4006@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2667 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From python-checkins at python.org Sun Mar 9 04:33:08 2008 From: python-checkins at python.org (brett.cannon) Date: Sun, 9 Mar 2008 04:33:08 +0100 (CET) Subject: [Python-checkins] r61318 - peps/trunk/pep-3108.txt Message-ID: <20080309033308.3ECC81E4006@bag.python.org> Author: brett.cannon Date: Sun Mar 9 04:33:07 2008 New Revision: 61318 Modified: peps/trunk/pep-3108.txt Log: Add Cookie and cookielib to the proposed http package. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Sun Mar 9 04:33:07 2008 @@ -550,6 +550,8 @@ BaseHTTPServer http.server [2]_ CGIHTTPServer http.server [2]_ SimpleHTTPServer http.server [2]_ +Cookie http.cookies +cookielib http.cookiejar ================= =============================== .. 
[2] The ``http.server`` module can combine the specified modules From python-checkins at python.org Sun Mar 9 05:03:40 2008 From: python-checkins at python.org (brett.cannon) Date: Sun, 9 Mar 2008 05:03:40 +0100 (CET) Subject: [Python-checkins] r61319 - peps/trunk/pep-3108.txt Message-ID: <20080309040340.8A5EA1E4006@bag.python.org> Author: brett.cannon Date: Sun Mar 9 05:03:39 2008 New Revision: 61319 Modified: peps/trunk/pep-3108.txt Log: List audioop/aifc/sunau as modules that were listed as to be removed but kept. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Sun Mar 9 05:03:39 2008 @@ -687,6 +687,10 @@ * asynchat/asyncore + Josiah Carlson has said he will maintain the modules. + +* audioop/sunau/aifc + + + Audio modules where the formats are still used. * base64/quopri/uu From python-checkins at python.org Sun Mar 9 09:26:21 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 9 Mar 2008 09:26:21 +0100 (CET) Subject: [Python-checkins] r61320 - in tracker/roundup-src: CHANGES.txt README.txt demo.py doc/Makefile doc/admin_guide.txt doc/announcement.txt doc/customizing.txt doc/design.txt doc/features.txt doc/index.txt doc/installation.txt doc/mysql.txt doc/overview.txt doc/roundup-server.1 doc/roundup-server.ini.example doc/upgrading.txt doc/user_guide.txt frontends/ZRoundup/ZRoundup.py locale/es.po locale/hu.po locale/lt.po locale/roundup.pot locale/ru.po roundup/__init__.py roundup/admin.py roundup/backends/__init__.py roundup/backends/back_anydbm.py roundup/backends/back_metakit.py roundup/backends/back_mysql.py roundup/backends/back_postgresql.py roundup/backends/back_sqlite.py roundup/backends/blobfiles.py roundup/backends/indexer_xapian.py roundup/backends/rdbms_common.py roundup/backends/sessions_dbm.py roundup/backends/sessions_rdbms.py roundup/cgi/TranslationService.py roundup/cgi/actions.py roundup/cgi/client.py roundup/cgi/form_parser.py roundup/cgi/templating.py roundup/configuration.py roundup/date.py roundup/hyperdb.py roundup/mailer.py roundup/mailgw.py roundup/roundupdb.py roundup/scripts/roundup_server.py scripts/import_sf.py scripts/roundup-reminder setup.py templates/classic/detectors/messagesummary.py templates/classic/detectors/userauditor.py templates/classic/html/help_controls.js templates/classic/html/issue.index.html templates/classic/html/issue.item.html templates/classic/html/issue.search.html templates/classic/html/query.edit.html templates/classic/html/user.item.html templates/classic/html/user_utils.js templates/classic/schema.py templates/minimal/detectors/userauditor.py templates/minimal/html/help_controls.js templates/minimal/html/user.item.html test/db_test_base.py test/test_actions.py test/test_cgi.py test/test_dates.py test/test_mailgw.py test/test_metakit.py test/test_multipart.py Message-ID: <20080309082621.6A9FB1E4006@bag.python.org> Author: martin.v.loewis Date: Sun Mar 9 09:26:16 2008 New Revision: 61320 Modified: tracker/roundup-src/CHANGES.txt tracker/roundup-src/README.txt tracker/roundup-src/demo.py tracker/roundup-src/doc/Makefile tracker/roundup-src/doc/admin_guide.txt tracker/roundup-src/doc/announcement.txt tracker/roundup-src/doc/customizing.txt tracker/roundup-src/doc/design.txt tracker/roundup-src/doc/features.txt tracker/roundup-src/doc/index.txt tracker/roundup-src/doc/installation.txt tracker/roundup-src/doc/mysql.txt tracker/roundup-src/doc/overview.txt 
tracker/roundup-src/doc/roundup-server.1 tracker/roundup-src/doc/roundup-server.ini.example tracker/roundup-src/doc/upgrading.txt tracker/roundup-src/doc/user_guide.txt tracker/roundup-src/frontends/ZRoundup/ZRoundup.py tracker/roundup-src/locale/es.po tracker/roundup-src/locale/hu.po tracker/roundup-src/locale/lt.po tracker/roundup-src/locale/roundup.pot tracker/roundup-src/locale/ru.po tracker/roundup-src/roundup/__init__.py tracker/roundup-src/roundup/admin.py tracker/roundup-src/roundup/backends/__init__.py tracker/roundup-src/roundup/backends/back_anydbm.py tracker/roundup-src/roundup/backends/back_metakit.py tracker/roundup-src/roundup/backends/back_mysql.py tracker/roundup-src/roundup/backends/back_postgresql.py tracker/roundup-src/roundup/backends/back_sqlite.py tracker/roundup-src/roundup/backends/blobfiles.py tracker/roundup-src/roundup/backends/indexer_xapian.py tracker/roundup-src/roundup/backends/rdbms_common.py tracker/roundup-src/roundup/backends/sessions_dbm.py tracker/roundup-src/roundup/backends/sessions_rdbms.py tracker/roundup-src/roundup/cgi/TranslationService.py tracker/roundup-src/roundup/cgi/actions.py tracker/roundup-src/roundup/cgi/client.py tracker/roundup-src/roundup/cgi/form_parser.py tracker/roundup-src/roundup/cgi/templating.py tracker/roundup-src/roundup/configuration.py tracker/roundup-src/roundup/date.py tracker/roundup-src/roundup/hyperdb.py tracker/roundup-src/roundup/mailer.py tracker/roundup-src/roundup/mailgw.py tracker/roundup-src/roundup/roundupdb.py tracker/roundup-src/roundup/scripts/roundup_server.py tracker/roundup-src/scripts/import_sf.py tracker/roundup-src/scripts/roundup-reminder tracker/roundup-src/setup.py tracker/roundup-src/templates/classic/detectors/messagesummary.py tracker/roundup-src/templates/classic/detectors/userauditor.py tracker/roundup-src/templates/classic/html/help_controls.js tracker/roundup-src/templates/classic/html/issue.index.html tracker/roundup-src/templates/classic/html/issue.item.html tracker/roundup-src/templates/classic/html/issue.search.html tracker/roundup-src/templates/classic/html/query.edit.html tracker/roundup-src/templates/classic/html/user.item.html tracker/roundup-src/templates/classic/html/user_utils.js tracker/roundup-src/templates/classic/schema.py tracker/roundup-src/templates/minimal/detectors/userauditor.py tracker/roundup-src/templates/minimal/html/help_controls.js tracker/roundup-src/templates/minimal/html/user.item.html tracker/roundup-src/test/db_test_base.py tracker/roundup-src/test/test_actions.py tracker/roundup-src/test/test_cgi.py tracker/roundup-src/test/test_dates.py tracker/roundup-src/test/test_mailgw.py tracker/roundup-src/test/test_metakit.py tracker/roundup-src/test/test_multipart.py Log: Upgrade to roundup 1.4.2. Patch contributed by Philipp Gortan. Modified: tracker/roundup-src/CHANGES.txt ============================================================================== --- tracker/roundup-src/CHANGES.txt (original) +++ tracker/roundup-src/CHANGES.txt Sun Mar 9 09:26:16 2008 @@ -1,6 +1,107 @@ This file contains the changes to the Roundup system over time. The entries are given with the most recent entry first. +2008-02-07 1.4.2 +Feature: +- New config option in mail section: ignore_alternatives allows to + ignore alternatives besides the text/plain part used for the content + of a message in multipart/alternative attachments. 
+- Admin copy of error email from mailgw includes traceback (thanks Ulrik + Mikaelsson) +- Messages created through the web are now given an in-reply-to header + when email out to nosy (thanks Martin v. L?wis) +- Nosy messages now include more information about issues (all link + properties with a "name" attribute) (thanks Martin v. L?wis) + +Fixed: +- Searching date range by supplying just a date as the filter spec +- Handle no time.tzset under Windows (sf #1825643) +- Fix race condition in file storage transaction commit (sf #1883580) +- Make user utils JS work with firstname/lastname again (sf #1868323) +- Fix ZRoundup to work with Zope 2.8.5 (sf #1806125) +- Fix race condition for key properties in rdbms backends (sf #1876683) +- Handle Reject in mailgw final set/create (sf #1826425) + + +2007-11-09 1.4.1 +Fixed: +- Removed some metakit references + + +2007-11-04 1.4.0 +Feature: +- Roundup has a new xmlrpc frontend that gives access to a tracker using + XMLRPC. +- Dates can now be in the year-range 1-9999 +- The metakit backend has been removed +- Add simple anti-spam recipe to docs +- Allow customisation of regular expressions used in email parsing, thanks + Bruno Damour +- Italian translation by Marco Ghidinelli +- Multilinks take any iterable +- config option: specify port and local hostname for SMTP connections +- Tracker index templating (i.e. when roundup_server is serving multiple + trackers) (sf bug 1058020) +- config option: Limit nosy attachments based on size (Philipp Gortan) +- roundup_server supports SSL via pyopenssl +- templatable 404 not found messages (sf bug 1403287) +- Unauthorized email includes a link to the registration page for + the tracker +- config options: control whether author info/email is included in email + sent by roundup +- support for receiving OpenPGP MIME messages (signed or encrypted) + +Fixed: +- Handling of unset Link search in RDBMS backend +- Journal export of anydbm didn't correctly export previously empty values +- Fix handling of defaults for date fields +- Fix
name in user editing to allow multilink popups to work +- Fix form handling of editing existing hyperdb items from a new item page. +- Added new rdbms-indexes for full-text index which will speed up + reindexing. +- Turning off indexing for content properties of FileClass instance + (e.g., "file" and "msg") now works for SQL backends. +- Enabled over-riding of content-type in web interface (thanks + John Mitchell) +- Validate user timezones to filter bad entries (sf bug 1738470) +- Classic template allows searching for issues with no topic set + (sf bug 1610787) +- xapian_indexer uses current API for stemming (Rick Benavidez) + (sf bug 1771414) +- Ensure email addresses are unique (sf bug 1611787) +- roundup_admin tracks uncommitted changes in interactive mode + for all backends (sf bug 1297014) +- add template search path for easy_install (Marek Kubica) +- don't spam the roundup admin on client shutdowns (Ulrik Mikaelsson) +- respect umask on filestorage backends (Ulrik Mikaelsson) (sf bug 1744328) +- cope with spam robots posting multiple instances of the same form +- include the author of property-only changes in generated messages +- fuller email validation in templates (sf feature 1216291) +- cope with bad cookies from other apps on same domain (sf bug 1691708) +- updated Spanish translation from Ramiro Morales +- clean up query display of "Private to you items" (sf bug 1481394) +- use local timezone for mail date header (sf bug 1658173) +- allow CSV export of queries on selected issues (sf bug 1783492) +- remove blobfiles on destroy (sf bug 1654132) +- handle postgres exceptions during session cleanup (sf bug 1703116) +- update Xapian indexer to use current API +- handle export and import of old trackers that have data attached to + journal "create" events +- fix a couple more old instances of "type" instead of "ENGINE" for mysql + backend +- make LinkHTMLProperty handle non-existing keys (sf patch 1815895) + + +2007-02-15 1.3.3 +Fixed: +- If-Modified-Since handling was broken +- Updated documentation for customising hard-coded searches in page.html +- Updated Windows installation docs (thanks Bo Berglund) +- Handle rounding of seconds generating invalid date values +- Handle 8-bit untranslateable messages from database properties +- Fix scripts/roundup-reminder date calculation (sf bug 1649979) +- Improved due_date and timelog customisation docs (sf bug 1625124) + 2006-12-19 1.3.2 Fixed: @@ -760,9 +861,9 @@ - anonymous user can no longer edit or view itself (sf bug 828901). - corrected typo in installation.html (sf bug 822967). - clarified listTemplates docstring. -- print a nicer error message when the address is already in use +- print a nicer error message when the address is already in use (sf bug 798659). -- remove empty lines before sending strings off to the csv parser +- remove empty lines before sending strings off to the csv parser (sf bug 821364). 
- centralised conversion of user-input data to hyperdb values (sf bug 802405, sf bug 817217, sf rfe 816994) @@ -786,7 +887,7 @@ - tidied up forms in default stylesheet - force textareas to use monospace fonts, lessening surprise on the user - moved out parts of client.py to new modules: - * actions.py - the xxxAction and xxxPermission functions refactored into + * actions.py - the xxxAction and xxxPermission functions refactored into Action classes * exceptions.py - all exceptions * form_parser.py - parsePropsFromForm & extractFormList in a FormParser @@ -944,7 +1045,7 @@ - audit some user properties for valid values (roles, address) (sf bugs 742968 and 739653) - fix HTML file detection (hence history xref linking) (sf bug 741478) -- session database caches it's type, rather than calling whichdb each time +- session database caches it's type, rather than calling whichdb each time around. - changed rdbms_common to fix sql backends for new Boolean types under Py2.3 @@ -979,7 +1080,7 @@ cc addresses, different from address and different nosy list property) (thanks John Rouillard) - applied patch for nicer history display (sf feature 638280) -- cleaning old unused sessions only once per hour, not on every cgi +- cleaning old unused sessions only once per hour, not on every cgi request. It is greatly improves web interface performance, especially on trackers under high load - added mysql backend (see doc/mysql.txt for details) @@ -1037,7 +1138,7 @@ Fixed: - applied unicode patch. All data is stored in utf-8. Incoming messages - converted from any encoding to utf-8, outgoing messages are encoded + converted from any encoding to utf-8, outgoing messages are encoded according to rfc2822 (sf bug 568873) - fixed layout issues with forms in sidebar - fixed timelog example so it handles new issues (sf bug 678908) @@ -1120,7 +1221,7 @@ - handle :add: better in cgi form parsing (sf bug 663235) - handle all-whitespace multilink values in forms (sf bug 663855) - fixed searching on date / interval fields (sf bug 658157) -- fixed form elements names in search form to allow grouping and sorting +- fixed form elements names in search form to allow grouping and sorting on "creation" field - display of saved queries is now performed correctly @@ -1310,7 +1411,7 @@ - daemonify roundup-server (fork, logfile, pidfile) - modify cgitb to display PageTemplate errors better - rename to "instance" to "tracker" -- have roundup.cgi pick up tracker config from the environment +- have roundup.cgi pick up tracker config from the environment - revamped look and feel in web interface - cleaned up stylesheet usage - several bug fixes and documentation fixes @@ -1344,7 +1445,7 @@ done in the default templates. - the regeneration of the indexes (if necessary) is done once the schema is set up in the dbinit. 
- - new "reindex" command in roundup-admin used to force regeneration of the + - new "reindex" command in roundup-admin used to force regeneration of the index - added email display function - mangles email addrs so they're not so easily scraped from the web @@ -1422,7 +1523,7 @@ wants to ignore - fixed the example addresses in the templates to use correct example domains - cleaned out the template stylesheets, removing a bunch of junk that really - wasn't necessary (font specs, styles never used) and added a style for + wasn't necessary (font specs, styles never used) and added a style for message content - build htmlbase if tests are run using CVS checkout - #565979 ] code error in hyperdb.Class.find @@ -1435,7 +1536,7 @@ - #565992 ] if ISSUE_TRACKER_WEB doesn't have the trailing '/', add it - use the rfc822 module to ensure that every (oddball) email address and real-name is properly quoted -- #558867 ] ZRoundup redirect /instance requests to /instance/ +- #558867 ] ZRoundup redirect /instance requests to /instance/ - #569415 ] {version} - #569178 ] type error was fixed as part of the general cleanup of reactors @@ -1499,13 +1600,13 @@ 2002-01-24 - 0.4.0 Feature: - much nicer history display (actualy real handling of property types etc) -- journal entries for link and mutlilink properties can be switched on or +- journal entries for link and mutlilink properties can be switched on or off - properties in change note are now sorted - you can now use the roundup-admin tool pack the database Fixed: -- the mail gateway now responds with an error message when invalid values +- the mail gateway now responds with an error message when invalid values for arguments are specified for link or mutlilink properties - modified unit test to check nosy and assignedto when specified as arguments - handle attachments with no name (eg tnef) @@ -1612,7 +1713,7 @@ - added tests for mailgw -2001-11-23 - 0.3.0 +2001-11-23 - 0.3.0 Feature: - #467129 ] Lossage when username=e-mail-address - #473123 ] Change message generation for author @@ -1920,7 +2021,7 @@ - Added the "classic" template - a direct implementation of the Roundup spec. Well, as close as we're going to get, anyway. - Added an issue priority of support to "extended" -- Added command-line arg handling to roundup-server so it's more useful +- Added command-line arg handling to roundup-server so it's more useful out-of-the-box. - Added distutils-style installation of "lib" files. - Added some unit tests. Modified: tracker/roundup-src/README.txt ============================================================================== --- tracker/roundup-src/README.txt (original) +++ tracker/roundup-src/README.txt Sun Mar 9 09:26:16 2008 @@ -31,11 +31,14 @@ Upgrading ========= For upgrading instructions, please see upgrading.txt in the "doc" directory. - + Usage and Other Information =========================== See the index.txt file in the "doc" directory. +The *.txt files in the "doc" directory are written in reStructedText. If +you have rst2html installed (part of the docutils suite) you can convert +these to HTML by running "make html" in the "doc" directory. 
License Modified: tracker/roundup-src/demo.py ============================================================================== --- tracker/roundup-src/demo.py (original) +++ tracker/roundup-src/demo.py Sun Mar 9 09:26:16 2008 @@ -2,7 +2,7 @@ # # Copyright (c) 2003 Richard Jones (richard at mechanicalcat.net) # -# $Id: demo.py,v 1.25 2006/08/07 07:15:05 richard Exp $ +# $Id: demo.py,v 1.26 2007/08/28 22:37:45 jpend Exp $ import errno import os @@ -103,6 +103,13 @@ 2. Hit Control-C to stop the server. 3. Re-start the server by running "roundup-demo" again. 4. Re-initialise the server by running "roundup-demo nuke". + +Demo tracker is set up to be accessed by localhost browser. If you +run demo on a server host, please stop the demo, open file +"demo/config.ini" with your editor, change the host name in the "web" +option in section "[tracker]", save the file, then re-run the demo +program. + ''' % url # disable command line processing in roundup_server Modified: tracker/roundup-src/doc/Makefile ============================================================================== --- tracker/roundup-src/doc/Makefile (original) +++ tracker/roundup-src/doc/Makefile Sun Mar 9 09:26:16 2008 @@ -5,12 +5,14 @@ SOURCE = announcement.txt customizing.txt developers.txt FAQ.txt features.txt \ glossary.txt implementation.txt index.txt design.txt mysql.txt \ installation.txt upgrading.txt user_guide.txt admin_guide.txt \ - postgresql.txt tracker_templates.txt + postgresql.txt tracker_templates.txt xmlrpc.txt COMPILED := $(SOURCE:.txt=.html) WEBHT := $(SOURCE:.txt=.ht) -all: ${COMPILED} ${WEBHT} +all: html ht +html: ${COMPILED} +ht: ${WEBHT} website: ${WEBHT} cp *.ht ${WEBDIR} Modified: tracker/roundup-src/doc/admin_guide.txt ============================================================================== --- tracker/roundup-src/doc/admin_guide.txt (original) +++ tracker/roundup-src/doc/admin_guide.txt Sun Mar 9 09:26:16 2008 @@ -2,7 +2,7 @@ Administration Guide ==================== -:Version: $Revision: 1.23 $ +:Version: $Revision: 1.27 $ .. contents:: @@ -82,6 +82,9 @@ ;log_ip = yes ;pidfile = ;logfile = + ;template = + ;ssl = no + ;pem = [trackers] ; Add one of these per tracker being served @@ -109,6 +112,18 @@ written to this file. It must be specified if **pidfile** is specified. If per-tracker logging is specified, then very little will be written to this file. +**template** + Specifies a template used for displaying the tracker index when + multiple trackers are being used. The variable "trackers" is available + to the template and is a dict of all configured trackers. +**ssl** + Enables the use of SSL to secure the connection to the roundup-server. + If you enable this, ensure that your tracker's config.ini specifies + an *https* URL. +**pem** + If specified, the SSL PEM file containing the private key and certificate. + If not specified, roundup will generate a temporary, self-signed certificate + for use. **trackers** section Each line denotes a mapping from a URL component to a tracker home. Make sure the name part doesn't include any url-unsafe characters like @@ -168,8 +183,15 @@ Tracker Backup -------------- -Stop the web and email frontends and to copy the contents of the tracker home -directory to some other place using standard backup tools. +The roundup-admin import and export commands are **not** recommended for +performing backup. + +Optionally stop the web and email frontends and to copy the contents of the +tracker home directory to some other place using standard backup tools. 
+This means using +*pg_dump* to take a snapshot of your Postgres backend database, for example. +A simple copy of the tracker home (and files storage area if you've configured +it to be elsewhere) will then complete the backup. Software Upgrade @@ -187,6 +209,12 @@ 4. Stop the tracker web and email frontends. 5. Follow the steps in the `upgrading documentation`_ for the new version of the software in the copied. + + Usually you will be asked to run `roundup_admin migrate` on your tracker + before you allow users to start accessing the tracker. + + It's safe to run this even if it's not required, so just get into the + habit. 6. You may test each of the admin tool, web interface and mail gateway using the new version of the software. To do this, invoke the scripts directly in the source directory with:: Modified: tracker/roundup-src/doc/announcement.txt ============================================================================== --- tracker/roundup-src/doc/announcement.txt (original) +++ tracker/roundup-src/doc/announcement.txt Sun Mar 9 09:26:16 2008 @@ -1,24 +1,24 @@ -I'm proud to release version 1.3.2 of Roundup. +I'm proud to release version 1.4.2 of Roundup. -Fixed in 1.3.2: - -- relax rules for required fields in form_parser.py (sf bug 1599740) -- documentation cleanup from Luke Ross (sf patch 1594860) -- updated Spanish translation from Ramiro Morales (sf patch 1594718) -- handle 8-bit untranslateable messages in tracker templates -- handling of required for boolean False and numeric 0 (sf bug 1608200) -- removed bogus args attr of ConfigurationError (sf bug 1608056) -- implemented start_response in roundup.cgi (sf bug 1604304) -- clarified windows service documentation (sf patch 1597713) -- HTMLClass fixed to work with new item permissions check (sf bug 1602983) -- support POP over SSL (sf patch 1597703) -- clean up input field generation and quoting of values (sf bug 1615616) -- allow use of roundup-server pidfile without forking (sf bug 1614753) -- allow translation of status/priority menu options (sf bug 1613976) - -New Features in 1.3.0: - -- WSGI support via roundup.cgi.wsgi_handler +New Features in 1.4.2: +- New config option in mail section: ignore_alternatives allows to + ignore alternatives besides the text/plain part used for the content + of a message in multipart/alternative attachments. +- Admin copy of error email from mailgw includes traceback (thanks Ulrik + Mikaelsson) +- Messages created through the web are now given an in-reply-to header + when email out to nosy (thanks Martin v. L??wis) +- Nosy messages now include more information about issues (all link + properties with a "name" attribute) (thanks Martin v. L??wis) + +And things fixed: +- Searching date range by supplying just a date as the filter spec +- Handle no time.tzset under Windows (sf #1825643) +- Fix race condition in file storage transaction commit (sf #1883580) +- Make user utils JS work with firstname/lastname again (sf #1868323) +- Fix ZRoundup to work with Zope 2.8.5 (sf #1806125) +- Fix race condition for key properties in rdbms backends (sf #1876683) +- Handle Reject in mailgw final set/create (sf #1826425) If you're upgrading from an older version of Roundup you *must* follow the "Software Upgrade" guidelines given in the maintenance documentation. @@ -62,6 +62,6 @@ disutils-based install script is provided. It comes with two issue tracker templates (a classic bug/feature tracker and -a minimal skeleton) and five database back-ends (anydbm, sqlite, metakit, -mysql and postgresql). 
+a minimal skeleton) and four database back-ends (anydbm, sqlite, mysql +and postgresql). Modified: tracker/roundup-src/doc/customizing.txt ============================================================================== --- tracker/roundup-src/doc/customizing.txt (original) +++ tracker/roundup-src/doc/customizing.txt Sun Mar 9 09:26:16 2008 @@ -2,7 +2,7 @@ Customising Roundup =================== -:Version: $Revision: 1.215 $ +:Version: $Revision: 1.222 $ .. This document borrows from the ZopeBook section on ZPT. The original is at: http://www.zope.org/Documentation/Books/ZopeBook/current/ZPT.stx @@ -95,6 +95,14 @@ Path to the HTML templates directory. The path may be either absolute or relative to the directory containig this config file. + static_files -- default *blank* + Path to directory holding additional static files available via Web + UI. This directory may contain sitewide images, CSS stylesheets etc. + and is searched for these files prior to the TEMPLATES directory + specified above. If this option is not set, all static files are + taken from the TEMPLATES directory The path may be either absolute or + relative to the directory containig this config file. + admin_email -- ``roundup-admin`` Email address that roundup will complain to if it runs into trouble. If the email address doesn't contain an ``@`` part, the MAIL_DOMAIN defined @@ -150,6 +158,9 @@ your tracker. See the indexer source for the default list of stop-words (e.g. ``A,AND,ARE,AS,AT,BE,BUT,BY, ...``). + umask -- ``02`` + Defines the file creation mode mask. + Section **tracker** name -- ``Roundup issue tracker`` A descriptive name for your roundup instance. @@ -164,6 +175,11 @@ email -- ``issue_tracker`` Email address that mail to roundup should go to. + language -- default *blank* + Default locale name for this tracker. If this option is not set, the + language is determined by the environment variable LANGUAGE, LC_ALL, + LC_MESSAGES, or LANG, in that order of preference. + Section **web** http_auth -- ``yes`` Whether to use HTTP Basic Authentication, if present. @@ -204,6 +220,13 @@ password -- ``roundup`` Database user password. + read_default_file -- ``~/.my.cnf`` + Name of the MySQL defaults file. Only used in MySQL connections. + + read_default_group -- ``roundup`` + Name of the group to use in the MySQL defaults file. Only used in + MySQL connections. + Section **logging** config -- default *blank* Path to configuration file for standard Python logging module. If this @@ -240,6 +263,15 @@ SMTP login password. Set this if your mail host requires authenticated access. + port -- default *25* + SMTP port on mail host. + Set this if your mail host runs on a different port. + + local_hostname -- default *blank* + The fully qualified domain name (FQDN) to use during SMTP sessions. If left + blank, the underlying SMTP library will attempt to detect your FQDN. If your + mail host requires something specific, specify the FQDN to use. + tls -- ``no`` If your SMTP mail host provides or requires TLS (Transport Layer Security) then you may set this option to 'yes'. @@ -268,6 +300,16 @@ precedence. The path may be either absolute or relative to the directory containig this config file. + add_authorinfo -- ``yes`` + Add a line with author information at top of all messages send by + roundup. + + add_authoremail -- ``yes`` + Add the mail address of the author to the author information at the + top of all messages. 
If this is false but add_authorinfo is true, + only the name of the actor is added which protects the mail address + of the actor from being exposed at mail archives, etc. + Section **mailgw** Roundup Mail Gateway options @@ -285,6 +327,10 @@ Default class to use in the mailgw if one isn't supplied in email subjects. To disable, leave the value blank. + language -- default *blank* + Default locale name for the tracker mail gateway. If this option is + not set, mail gateway will use the language of the tracker instance. + subject_prefix_parsing -- ``strict`` Controls the parsing of the [prefix] on subject lines in incoming emails. ``strict`` will return an error to the sender if the [prefix] is not @@ -310,6 +356,42 @@ an issue for the interval after the issue's creation or last activity. The interval is a standard Roundup interval. + refwd_re -- ``(\s*\W?\s*(fw|fwd|re|aw|sv|ang)\W)+`` + Regular expression matching a single reply or forward prefix + prepended by the mailer. This is explicitly stripped from the + subject during parsing. Value is Python Regular Expression + (UTF8-encoded). + + origmsg_re -- `` ^[>|\s]*-----\s?Original Message\s?-----$`` + Regular expression matching start of an original message if quoted + the in body. Value is Python Regular Expression (UTF8-encoded). + + sign_re -- ``^[>|\s]*-- ?$`` + Regular expression matching the start of a signature in the message + body. Value is Python Regular Expression (UTF8-encoded). + + eol_re -- ``[\r\n]+`` + Regular expression matching end of line. Value is Python Regular + Expression (UTF8-encoded). + + blankline_re -- ``[\r\n]+\s*[\r\n]+`` + Regular expression matching a blank line. Value is Python Regular + Expression (UTF8-encoded). + +Section **pgp** + OpenPGP mail processing options + + enable -- ``no`` + Enable PGP processing. Requires pyme. + + roles -- default *blank* + If specified, a comma-separated list of roles to perform PGP + processing on. If not specified, it happens for all users. + + homedir -- default *blank* + Location of PGP directory. Defaults to $HOME/.gnupg if not + specified. + Section **nosy** Nosy messages sending @@ -340,6 +422,12 @@ a separate email is sent to each recipient. If ``single`` then a single email is sent with each recipient as a CC address. + max_attachment_size -- ``2147483647`` + Attachments larger than the given number of bytes won't be attached + to nosy mails. They will be replaced by a link to the tracker's + download page for the file. + + You may generate a new default config file using the ``roundup-admin genconfig`` command. @@ -436,7 +524,7 @@ file = FileClass(db, "file", name=String()) - issue = IssueClass(db, "issue", topic=Multilink("keyword"), + issue = IssueClass(db, "issue", keyword=Multilink("keyword"), status=Link("status"), assignedto=Link("user"), priority=Link("priority")) issue.setkey('title') @@ -2472,10 +2560,10 @@ been added for clarity):: /issue?status=unread,in-progress,resolved& - topic=security,ui& + keyword=security,ui& @group=priority,-status& @sort=-activity& - @filters=status,topic& + @filters=status,keyword& @columns=title,status,fixer The index view is determined by two parts of the specifier: the layout @@ -2494,11 +2582,11 @@ The example specifies an index of "issue" items. Only items with a "status" of either "unread" or "in-progress" or "resolved" are -displayed, and only items with "topic" values including both "security" +displayed, and only items with "keyword" values including both "security" and "ui" are displayed. 
The items are grouped by priority arranged in ascending order and in descending order by status; and within groups, sorted by activity, arranged in descending order. The filter -section shows filters for the "status" and "topic" properties, and the +section shows filters for the "status" and "keyword" properties, and the table includes columns for the "title", "status", and "fixer" properties. @@ -2892,28 +2980,42 @@ tracker access (note that roundup-server would need to be restarted as it caches the schema). -1. modify the ``schema.py``:: +1. Modify the ``schema.py``:: issue = IssueClass(db, "issue", - assignedto=Link("user"), topic=Multilink("keyword"), + assignedto=Link("user"), keyword=Multilink("keyword"), priority=Link("priority"), status=Link("status"), due_date=Date()) -2. add an edit field to the ``issue.item.html`` template:: +2. Add an edit field to the ``issue.item.html`` template:: Due Date - + + + If you want to show only the date part of due_date then do this instead:: + + + Due Date + + -3. add the property to the ``issue.index.html`` page:: +3. Add the property to the ``issue.index.html`` page:: (in the heading row) Due Date (in the data row) - + + + If you want format control of the display of the due date you can + enter the following in the data row to show only the actual due date:: + +   -4. add the property to the ``issue.search.html`` page:: +4. Add the property to the ``issue.search.html`` page:: Due Date: @@ -2923,11 +3025,12 @@ -5. if you wish for the due date to appear in the standard views listed - in the sidebar of the web interface then you'll need to add "due_date" - to the list of @columns in the links in the sidebar section of - ``page.html``. - +5. If you wish for the due date to appear in the standard views listed + in the sidebar of the web interface then you'll need to add "due_date" + to the columns and columns_showall lists in your ``page.html``:: + + columns string:id,activity,due_date,title,creator,status; + columns_showall string:id,activity,due_date,title,creator,assignedto,status; Adding a new constrained field to the classic schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -3374,7 +3477,7 @@ ``schema.py``):: issue = IssueClass(db, "issue", - assignedto=Link("user"), topic=Multilink("keyword"), + assignedto=Link("user"), keyword=Multilink("keyword"), priority=Link("priority"), status=Link("status"), times=Multilink("timelog")) @@ -3392,7 +3495,7 @@ Time Log -
(enter as '3y 1m 4d 2:40:02' or parts thereof) + (enter as '3y 1m 4d 2:40:02' or parts thereof) @@ -3405,6 +3508,17 @@ On submission, the "-1" timelog item will be created and assigned a real item id. The "times" property of the issue will have the new id added to it. + + The full entry will now look like this:: + + + Time Log + + (enter as '3y 1m 4d 2:40:02' or parts thereof) + + + + 4. We want to display a total of the timelog times that have been accumulated for an issue. To do this, we'll need to actually write @@ -3451,7 +3565,7 @@ displayed in the template as text like "+ 1y 2:40" (1 year, 2 hours and 40 minutes). -8. If you're using a persistent web server - ``roundup-server`` or +6. If you're using a persistent web server - ``roundup-server`` or ``mod_python`` for example - then you'll need to restart that to pick up the code changes. When that's done, you'll be able to use the new time logging interface. @@ -3459,7 +3573,7 @@ An extension of this modification attaches the timelog entries to any change message entered at the time of the timelog entry: -1. Add a link to the timelog to the msg class: +A. Add a link to the timelog to the msg class in ``schema.py``: msg = FileClass(db, "msg", author=Link("user", do_journal='no'), @@ -3468,19 +3582,51 @@ summary=String(), files=Multilink("file"), messageid=String(), - inreplyto=String() + inreplyto=String(), times=Multilink("timelog")) -2. Add a new hidden field that links that new timelog item (new +B. Add a new hidden field that links that new timelog item (new because it's marked as having id "-1") to the new message. - It looks like this:: - - + The link is placed in ``issue.item.html`` in the same section that + handles the timelog entry. + + It looks like this after this addition:: + + + Time Log + + (enter as '3y 1m 4d 2:40:02' or parts thereof) + + + + The "times" property of the message will have the new id added to it. -3. Add the timelog listing from step 5. to the ``msg.item.html`` template - so that the timelog entry appears on the message view page. +C. Add the timelog listing from step 5. to the ``msg.item.html`` template + so that the timelog entry appears on the message view page. Note that + the call to totalTimeSpent is not used here since there will only be one + single timelog entry for each message. + + I placed it after the Date entry like this:: + + + Date: + + + + + + + + + + + + +
Time Log
Date    Period    Logged By
+ + Tracking different types of issues @@ -3504,7 +3650,7 @@ # store issues related to those systems support = IssueClass(db, "support", - assignedto=Link("user"), topic=Multilink("keyword"), + assignedto=Link("user"), keyword=Multilink("keyword"), status=Link("status"), deadline=Date(), affects=Multilink("system")) @@ -3776,6 +3922,37 @@ Changes to Tracker Behaviour ---------------------------- +Preventing SPAM +~~~~~~~~~~~~~~~ + +The following detector code may be installed in your tracker's +``detectors`` directory. It will block any messages being created that +have HTML attachments (a very common vector for spam and phishing) +and any messages that have more than 2 HTTP URLs in them. Just copy +the following into ``detectors/anti_spam.py`` in your tracker:: + + from roundup.exceptions import Reject + + def reject_html(db, cl, nodeid, newvalues): + if newvalues['type'] == 'text/html': + raise Reject, 'not allowed' + + def reject_manylinks(db, cl, nodeid, newvalues): + content = newvalues['content'] + if content.count('http://') > 2: + raise Reject, 'not allowed' + + def init(db): + db.file.audit('create', reject_html) + db.msg.audit('create', reject_manylinks) + +You may also wish to block image attachments if your tracker does not +need that ability:: + + if newvalues['type'].startswith('image/'): + raise Reject, 'not allowed' + + Stop "nosy" messages going to people on vacation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -3957,14 +4134,14 @@ this class in your tracker's ``schema.py`` file. Change this:: issue = IssueClass(db, "issue", - assignedto=Link("user"), topic=Multilink("keyword"), + assignedto=Link("user"), keyword=Multilink("keyword"), priority=Link("priority"), status=Link("status")) to this, adding the blockers entry:: issue = IssueClass(db, "issue", blockers=Multilink("issue"), - assignedto=Link("user"), topic=Multilink("keyword"), + assignedto=Link("user"), keyword=Multilink("keyword"), priority=Link("priority"), status=Link("status")) 2. Add the new ``blockers`` property to the ``issue.item.html`` edit @@ -3974,12 +4151,14 @@ You'll need to fiddle with your item page layout to find an appropriate place to put it - I'll leave that fun part up to you. @@ -4067,16 +4246,33 @@ example, the existing "Show All" link in the "page" template (in the tracker's "html" directory) looks like this:: - Show All
+ Show All
modify it to add the "blockers" info to the URL (note, both the "@filter" *and* "blockers" values must be specified):: - Show All
+ Show All
The above examples are line-wrapped on the trailing & and should be unwrapped. @@ -4087,33 +4283,32 @@ history at the bottom of the issue page - look for a "link" event to another issue's "blockers" property. -Add users to the nosy list based on the topic -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Add users to the nosy list based on the keyword +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Let's say we need the ability to automatically add users to the nosy list based -on the occurance of a topic. Every user should be allowed to edit their -own list of topics for which they want to be added to the nosy list. +on the occurance of a keyword. Every user should be allowed to edit their +own list of keywords for which they want to be added to the nosy list. Below, we'll show that this change can be done with minimal understanding of the Roundup system, using only copy and paste. This requires three changes to the tracker: a change in the database to -allow per-user recording of the lists of topics for which he wants to +allow per-user recording of the lists of keywords for which he wants to be put on the nosy list, a change in the user view allowing them to edit -this list of topics, and addition of an auditor which updates the nosy -list when a topic is set. +this list of keywords, and addition of an auditor which updates the nosy +list when a keyword is set. -Adding the nosy topic list -:::::::::::::::::::::::::: +Adding the nosy keyword list +:::::::::::::::::::::::::::: -The change to make in the database, is that for any user there should be -a list of topics for which he wants to be put on the nosy list. Adding -a ``Multilink`` of ``keyword`` seems to fullfill this (note that within -the code, topics are called ``keywords``.) As such, all that has to be -done is to add a new field to the definition of ``user`` within the -file ``schema.py``. We will call this new field ``nosy_keywords``, and -the updated definition of user will be:: +The change to make in the database, is that for any user there should be a list +of keywords for which he wants to be put on the nosy list. Adding a +``Multilink`` of ``keyword`` seems to fullfill this. As such, all that has to +be done is to add a new field to the definition of ``user`` within the file +``schema.py``. We will call this new field ``nosy_keywords``, and the updated +definition of user will be:: user = Class(db, "user", username=String(), password=Password(), @@ -4124,22 +4319,22 @@ timezone=String(), nosy_keywords=Multilink('keyword')) -Changing the user view to allow changing the nosy topic list -:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: +Changing the user view to allow changing the nosy keyword list +:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: -We want any user to be able to change the list of topics for which +We want any user to be able to change the list of keywords for which he will by default be added to the nosy list. We choose to add this to the user view, as is generated by the file ``html/user.item.html``. We can easily -see that the topic field in the issue view has very similar editing -requirements as our nosy topics, both being lists of topics. As -such, we look for Topics in ``issue.item.html``, and extract the +see that the keyword field in the issue view has very similar editing +requirements as our nosy keywords, both being lists of keywords. As +such, we look for Keywords in ``issue.item.html``, and extract the associated parts from there. 
We add this to ``user.item.html`` at the bottom of the list of viewed items (i.e. just below the 'Alternate E-mail addresses' in the classic template):: - + " msgstr "" -#: ../roundup/cgi/templating.py:1050 +#: ../roundup/cgi/templating.py:1094 msgid "History" msgstr "Historia" -#: ../roundup/cgi/templating.py:1052 +#: ../roundup/cgi/templating.py:1096 msgid "" msgstr "" -#: ../roundup/cgi/templating.py:1053 +#: ../roundup/cgi/templating.py:1097 msgid "" msgstr "" -#: ../roundup/cgi/templating.py:1054 +#: ../roundup/cgi/templating.py:1098 msgid "" msgstr "" -#: ../roundup/cgi/templating.py:1055 +#: ../roundup/cgi/templating.py:1099 msgid "" -msgstr "" +msgstr "" -#: ../roundup/cgi/templating.py:1097 +#: ../roundup/cgi/templating.py:1141 #, python-format msgid "Copy of %(class)s %(id)s" msgstr "Copia de %(class)s %(id)s" -#: ../roundup/cgi/templating.py:1331 +#: ../roundup/cgi/templating.py:1434 msgid "*encrypted*" msgstr "*cifrado*" -#: ../roundup/cgi/templating.py:1514 +#: ../roundup/cgi/templating.py:1507 ../roundup/cgi/templating.py:1528 +#: ../roundup/cgi/templating.py:1534 ../roundup/cgi/templating.py:1050:1507 +#: :1528:1534 +msgid "No" +msgstr "No" + +#: ../roundup/cgi/templating.py:1507 ../roundup/cgi/templating.py:1526 +#: ../roundup/cgi/templating.py:1531 ../roundup/cgi/templating.py:1050:1507 +#: :1526:1531 +msgid "Yes" +msgstr "Si" + +#: ../roundup/cgi/templating.py:1620 msgid "" "default value for DateHTMLProperty must be either DateHTMLProperty or string " "date representation." @@ -1602,17 +1659,17 @@ "el valor por defecto para DateHTMLProperty debe ser un DateHTMLProperty o " "una cadena que represente una fecha." -#: ../roundup/cgi/templating.py:1674 +#: ../roundup/cgi/templating.py:1780 #, python-format msgid "Attempt to look up %(attr)s on a missing value" msgstr "Se intent? buscar %(attr)s en un valor faltante" -#: ../roundup/cgi/templating.py:1750 +#: ../roundup/cgi/templating.py:1853 #, python-format msgid "" msgstr "" -#: ../roundup/date.py:186 +#: ../roundup/date.py:300 msgid "" "Not a date spec: \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or " "\"yyyy-mm-dd.HH:MM:SS.SSS\"" @@ -1620,7 +1677,7 @@ "No es una especificaci?n de fecha: \"aaaa-mm-dd\", \"mm-dd\", \"HH:MM\", " "\"HH:MM:SS\" o \"aaaa-mm-dd.HH:MM:SS.SSS\"" -#: ../roundup/date.py:240 +#: ../roundup/date.py:359 #, python-format msgid "" "%r not a date / time spec \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" " @@ -1629,113 +1686,113 @@ "%r no es una especificaci?n de fecha / hora \"aaaa-mm-dd\", \"mm-dd\", \"HH:" "MM\", \"HH:MM:SS\" o \"aaaa-mm-dd.HH:MM:SS.SSS\"" -#: ../roundup/date.py:538 +#: ../roundup/date.py:666 msgid "" "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS] [date spec]" msgstr "" "No es una especificaci?n de intervalo de tiempo: [+-] [#a] [#m] [#s] [#d] " "[[[H]H:MM]:SS] [especific. 
fecha]" -#: ../roundup/date.py:557 +#: ../roundup/date.py:685 msgid "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS]" msgstr "" "No es una especificaci?n de intervalo de tiempo: [+-] [#a] [#m] [#s] [#d] " "[[[H]H:MM]:SS]" -#: ../roundup/date.py:694 +#: ../roundup/date.py:822 #, python-format msgid "%(number)s year" msgid_plural "%(number)s years" msgstr[0] "%(number)s a?o" msgstr[1] "%(number)s a?os" -#: ../roundup/date.py:698 +#: ../roundup/date.py:826 #, python-format msgid "%(number)s month" msgid_plural "%(number)s months" msgstr[0] "%(number)s mes" msgstr[1] "%(number)s meses" -#: ../roundup/date.py:702 +#: ../roundup/date.py:830 #, python-format msgid "%(number)s week" msgid_plural "%(number)s weeks" msgstr[0] "%(number)s semana" msgstr[1] "%(number)s semanas" -#: ../roundup/date.py:706 +#: ../roundup/date.py:834 #, python-format msgid "%(number)s day" msgid_plural "%(number)s days" msgstr[0] "%(number)s d?a" msgstr[1] "%(number)s d?as" -#: ../roundup/date.py:710 +#: ../roundup/date.py:838 msgid "tomorrow" msgstr "ma?ana" -#: ../roundup/date.py:712 +#: ../roundup/date.py:840 msgid "yesterday" msgstr "ayer" -#: ../roundup/date.py:715 +#: ../roundup/date.py:843 #, python-format msgid "%(number)s hour" msgid_plural "%(number)s hours" msgstr[0] "%(number)s hora" msgstr[1] "%(number)s horas" -#: ../roundup/date.py:719 +#: ../roundup/date.py:847 msgid "an hour" msgstr "una hora" -#: ../roundup/date.py:721 +#: ../roundup/date.py:849 msgid "1 1/2 hours" msgstr "1 hora y 1/2" -#: ../roundup/date.py:723 +#: ../roundup/date.py:851 #, python-format msgid "1 %(number)s/4 hours" msgid_plural "1 %(number)s/4 hours" msgstr[0] "1 %(number)s/4 de hora" msgstr[1] "1 %(number)s/4 de hora" -#: ../roundup/date.py:727 +#: ../roundup/date.py:855 msgid "in a moment" msgstr "en un momento" -#: ../roundup/date.py:729 +#: ../roundup/date.py:857 msgid "just now" msgstr "ahora" -#: ../roundup/date.py:732 +#: ../roundup/date.py:860 msgid "1 minute" msgstr "1 minuto" -#: ../roundup/date.py:735 +#: ../roundup/date.py:863 #, python-format msgid "%(number)s minute" msgid_plural "%(number)s minutes" msgstr[0] "%(number)s minuto" msgstr[1] "%(number)s minutos" -#: ../roundup/date.py:738 +#: ../roundup/date.py:866 msgid "1/2 an hour" msgstr "media hora" -#: ../roundup/date.py:740 +#: ../roundup/date.py:868 #, python-format msgid "%(number)s/4 hour" msgid_plural "%(number)s/4 hours" msgstr[0] "%(number)s/4 de hora" msgstr[1] "%(number)s/4s de hora" -#: ../roundup/date.py:744 +#: ../roundup/date.py:872 #, python-format msgid "%s ago" msgstr "hace %s" -#: ../roundup/date.py:746 +#: ../roundup/date.py:874 #, python-format msgid "in %s" msgstr "en %s" @@ -1749,7 +1806,7 @@ "ATENCI?N: El directorio '%s'\n" "\tcontiene una plantilla con el viejo formato - se ignorar?" -#: ../roundup/mailgw.py:586 +#: ../roundup/mailgw.py:584 msgid "" "\n" "Emails to Roundup trackers must include a Subject: line!\n" @@ -1757,7 +1814,7 @@ "\n" "Todos los e-mails enviados a trackers Roundup deben incluir un Asunto:!\n" -#: ../roundup/mailgw.py:674 +#: ../roundup/mailgw.py:708 #, python-format msgid "" "\n" @@ -1787,44 +1844,72 @@ "\n" "El asunto que Ud. envi? 
es: '%(subject)s'\n" -#: ../roundup/mailgw.py:705 +#: ../roundup/mailgw.py:746 #, python-format msgid "" "\n" -"The class name you identified in the subject line (\"%(classname)s\") does " -"not exist in the\n" -"database.\n" +"The class name you identified in the subject line (\"%(classname)s\") does\n" +"not exist in the database.\n" "\n" "Valid class names are: %(validname)s\n" "Subject was: \"%(subject)s\"\n" msgstr "" "\n" -"La clase que Ud. identific? en el Asunto (\"%(classname)s\") no existe en la " -"base de\n" -"datos.\n" +"La clase que Ud. identific? en el Asunto (\"%(classname)s\") \n" +"no existe en la base de datos.\n" "\n" "Nombres v?lidos de clases son: %(validname)s\n" "El asunto que Ud. envi? es: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:733 +#: ../roundup/mailgw.py:754 +#, python-format +msgid "" +"\n" +"You did not identify a class name in the subject line and there is no\n" +"default set for this tracker. The subject must contain a class name or\n" +"designator to indicate the 'topic' of the message. For example:\n" +" Subject: [issue] This is a new issue\n" +" - this will create a new issue in the tracker with the title 'This is\n" +" a new issue'.\n" +" Subject: [issue1234] This is a followup to issue 1234\n" +" - this will append the message's contents to the existing issue 1234\n" +" in the tracker.\n" +"\n" +"Subject was: '%(subject)s'\n" +msgstr "" +"\n" +"Ud. no indic? un nombre de clase en Asunto y el tracker no tiene\n" +"configurado un valor por omisi?n. El asunto debe contener un nombre\n" +"de clase o designador para indicar para indicar el 't?pico' del mensaje.\n" +"Por ejemplo:\n" +" Asunto: [issue] Este es un nuevo issue\n" +" - Esto crear? un nuevo issue en el tracker con el t?tulo 'Este es un\n" +" nuevo issue'.\n" +" Asunto: [issue1234] Esta es un agregado al issue 1234\n" +" - Esto anexar? el contenido del e-mail al issue 1234 ya existente\n" +" en el tracker.\n" +"\n" +"El asunto que Ud. envi? es: '%(subject)s'\n" + +#: ../roundup/mailgw.py:795 #, python-format msgid "" "\n" "I cannot match your message to a node in the database - you need to either\n" -"supply a full designator (with number, eg \"[issue123]\" or keep the\n" +"supply a full designator (with number, eg \"[issue123]\") or keep the\n" "previous subject title intact so I can match that.\n" "\n" "Subject was: \"%(subject)s\"\n" msgstr "" "\n" "No puedo encontrar un nodo en la base de datos que coincida con el mensaje\n" -"que Ud. ha enviado - Necesita proveer un designador v?lido (con n?mero, por\n" -"ejemplo \"[issue123]\" o mantener intacto el Asunto previo de manera que yo\n" -"pueda encontrar una coincidencia.\n" +"que Ud. ha enviado - Necesita proveer un designador completo (con n?mero,\n" +"por ejemplo \"[issue123]\" o mantener intacto el Asunto previo de manera\n" +"que yo pueda encontrar una coincidencia.\n" "\n" "El asunto que Ud. envi? es: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:766 +#: ../roundup/mailgw.py:828 #, python-format msgid "" "\n" @@ -1839,7 +1924,7 @@ "\n" "El asunto que Ud. envi? 
es: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:794 +#: ../roundup/mailgw.py:856 #, python-format msgid "" "\n" @@ -1853,7 +1938,7 @@ "incorrecta:\n" " %(current_class)s\n" -#: ../roundup/mailgw.py:817 +#: ../roundup/mailgw.py:879 #, python-format msgid "" "\n" @@ -1867,34 +1952,34 @@ "incorrectas:\n" " %(errors)s\n" -#: ../roundup/mailgw.py:847 +#: ../roundup/mailgw.py:919 #, python-format msgid "" "\n" -"You are not a registered user.\n" +"You are not a registered user.%(registration_info)s\n" "\n" "Unknown address: %(from_address)s\n" msgstr "" "\n" -"Ud. no es un usuario registrado.\n" +"Ud. no es un usuario registrado.%(registration_info)s\n" "\n" "Direcci?n desconocida: %(from_address)s\n" -#: ../roundup/mailgw.py:855 +#: ../roundup/mailgw.py:927 msgid "You are not permitted to access this tracker." msgstr "Ud. no posee los permisos necesarios para acceder a este tracker." -#: ../roundup/mailgw.py:862 +#: ../roundup/mailgw.py:934 #, python-format msgid "You are not permitted to edit %(classname)s." msgstr "Ud. no tiene permitido editar %(classname)s." -#: ../roundup/mailgw.py:866 +#: ../roundup/mailgw.py:938 #, python-format msgid "You are not permitted to create %(classname)s." msgstr "Ud. no tiene permitido crear %(classname)s." -#: ../roundup/mailgw.py:913 +#: ../roundup/mailgw.py:985 #, python-format msgid "" "\n" @@ -1910,7 +1995,7 @@ "\n" "El Asunto que Ud. envi? es: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:942 +#: ../roundup/mailgw.py:1013 msgid "" "\n" "Roundup requires the submission to be plain text. The message parser could\n" @@ -1922,20 +2007,20 @@ "podido localizar una parte MIME text/plain en su mensaje que pueda ser " "usada.\n" -#: ../roundup/mailgw.py:964 +#: ../roundup/mailgw.py:1030 msgid "You are not permitted to create files." -msgstr "Ud. no tiene permitida la creaci?n de archivos." +msgstr "Ud. no tiene permitida la creaci?n de ficheros." -#: ../roundup/mailgw.py:978 +#: ../roundup/mailgw.py:1044 #, python-format msgid "You are not permitted to add files to %(classname)s." -msgstr "Ud. no tiene permitido agregar archivos a %(classname)s." +msgstr "Ud. no tiene permitido agregar ficheros a %(classname)s." -#: ../roundup/mailgw.py:996 +#: ../roundup/mailgw.py:1062 msgid "You are not permitted to create messages." msgstr "Ud. no tiene permitido crear mensajes." -#: ../roundup/mailgw.py:1004 +#: ../roundup/mailgw.py:1070 #, python-format msgid "" "\n" @@ -1946,19 +2031,19 @@ "El mensaje de e-mail ha sido rechazado por un detector.\n" "%(error)s\n" -#: ../roundup/mailgw.py:1012 +#: ../roundup/mailgw.py:1078 #, python-format msgid "You are not permitted to add messages to %(classname)s." msgstr "Ud. no tiene permitido agregar mensajes a %(classname)s." -#: ../roundup/mailgw.py:1039 +#: ../roundup/mailgw.py:1105 #, python-format msgid "You are not permitted to edit property %(prop)s of class %(classname)s." msgstr "" "Ud. no tiene permitido editar la propiedad %(prop)s de la clase %(classname)" "s." 
-#: ../roundup/mailgw.py:1047 +#: ../roundup/mailgw.py:1113 #, python-format msgid "" "\n" @@ -1969,77 +2054,98 @@ "Ha habido un problema con el mensaje que env??:\n" " %(message)s\n" -#: ../roundup/mailgw.py:1069 +#: ../roundup/mailgw.py:1135 msgid "not of form [arg=value,value,...;arg=value,value,...]" msgstr "no es de la forma [arg=valor,valor,...;arg=valor,valor,...]" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "files" -msgstr "archivos" +msgstr "ficheros" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "messages" msgstr "mensajes" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "nosy" msgstr "interesados" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "superseder" msgstr "reemplazado por" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "title" msgstr "t?tulo" -#: ../roundup/roundupdb.py:143 +#: ../roundup/roundupdb.py:148 msgid "assignedto" msgstr "asignadoa" -#: ../roundup/roundupdb.py:143 +#: ../roundup/roundupdb.py:148 +msgid "keyword" +msgstr "Palabra clave" + +#: ../roundup/roundupdb.py:148 msgid "priority" msgstr "prioridad" -#: ../roundup/roundupdb.py:143 +#: ../roundup/roundupdb.py:148 msgid "status" msgstr "estado" -#: ../roundup/roundupdb.py:143 -msgid "topic" -msgstr "palabraclave" - -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "activity" msgstr "actividad" #. following properties are common for all hyperdb classes #. they are listed here to keep things in one place -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "actor" msgstr "?ltimoactor" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "creation" msgstr "creaci?n" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "creator" msgstr "creador" -#: ../roundup/roundupdb.py:304 +#: ../roundup/roundupdb.py:309 #, python-format msgid "New submission from %(authname)s%(authaddr)s:" msgstr "Nuevo aporte de %(authname)s%(authaddr)s:" -#: ../roundup/roundupdb.py:307 +#: ../roundup/roundupdb.py:312 #, python-format msgid "%(authname)s%(authaddr)s added the comment:" msgstr "%(authname)s%(authaddr)s agreg? el comentario:" -#: ../roundup/roundupdb.py:310 -msgid "System message:" -msgstr "Mensaje de sistema:" +#: ../roundup/roundupdb.py:315 +#, python-format +msgid "Change by %(authname)s%(authaddr)s:" +msgstr "Modificaci?n de %(authname)s%(authaddr)s:" + +#: ../roundup/roundupdb.py:342 +#, python-format +msgid "File '%(filename)s' not attached - you can download it from %(link)s." +msgstr "Fichero '%(filename)s' no anexado - puede descargarlo de %(link)s." + +#: ../roundup/roundupdb.py:615 +#, python-format +msgid "" +"\n" +"Now:\n" +"%(new)s\n" +"Was:\n" +"%(old)s" +msgstr "" +"\n" +"Ahora:\n" +"%(new)s\n" +"Antes:\n" +"%(old)s" #: ../roundup/scripts/roundup_demo.py:32 #, python-format @@ -2060,8 +2166,8 @@ #: ../roundup/scripts/roundup_mailgw.py:36 #, python-format msgid "" -"Usage: %(program)s [-v] [-c] [[-C class] -S field=value]* " -"[method]\n" +"Usage: %(program)s [-v] [-c class] [[-C class] -S field=value]* [method]\n" "\n" "Options:\n" " -v: print version and exit\n" @@ -2107,6 +2213,10 @@ " are both valid. The username and/or password will be prompted for if\n" " not supplied on the command-line.\n" "\n" +"POPS:\n" +" Connect to a POP server over ssl. 
This requires python 2.4 or later.\n" +" This supports the same notation as POP.\n" +"\n" "APOP:\n" " Same as POP, but using Authenticated POP:\n" " apop username:password at server\n" @@ -2125,8 +2235,8 @@ " imaps username:password at server [mailbox]\n" "\n" msgstr "" -"Uso: %(program)s [-v] [-c] [[-C clase] -S campo=valor]* [m?todo]\n" +"Uso: %(program)s [-v] [-c clase] [[-C clase] -S campo=valor]* [m?todo]\n" "\n" "Opciones:\n" " -v: imprime version y sale\n" @@ -2137,7 +2247,7 @@ "La pasarela de correo de roundup puede ser invocada en una de cuatro " "formas:\n" " . con un directorio base de instancia como ?nico argumento,\n" -" . con un directorio base de instancia y un archivo de spool de correo,\n" +" . con un directorio base de instancia y un fichero de spool de correo,\n" " . con un directorio base de instancia y una cuenta de un servidor POP/APOP, " "o\n" " . con un directorio base de instancia y una cuenta de un servidor IMAP/" @@ -2145,15 +2255,12 @@ "\n" "Tambi?n soporta los argumentos opcionales -C y -S que le permiten " "establecer\n" -"campos para una clase creada por la pasarela de correo de Roundup\n" -"roundup-mailgw.\n" -"La clase por omisi?n es msg, pero las otras clases: issue, file, user " -"tambien\n" -"pueden usarse. Las opciones -S y --set usan la notaci?n\n" -"propiedad=valor[;propiedad=valor] aceptada por el comando roundup de l?nea " -"de\n" -"comandos o los comandos que pueden ser pasados en el campo Asunto: de un\n" -"mensaje de correo electr?nico.\n" +"campos para una clase creada por la pasarela de correo roundup-mailgw.\n" +"La clase por omisi?n es msg, pero las otras clases: issue, file, user\n" +"tambien pueden usarse. Las opciones -S y --set usan la misma notaci?n\n" +"propiedad=valor[;propiedad=valor] aceptada por el comando roundup de\n" +"l?nea de comandos o los comandos que pueden ser pasados en el campo\n" +"Asunto: de un mensaje de correo electr?nico.\n" "\n" "Tambi?n le permite establecer el tipo de mensaje basado en la direcci?n de\n" "correo usada.\n" @@ -2163,10 +2270,10 @@ " est?ndar y lo env?a al m?dulo roundup.mailgw.\n" "\n" "UNIX mailbox:\n" -" En el segundo caso, la pasarela lee todos los mensajes desde el archivo de\n" +" En el segundo caso, la pasarela lee todos los mensajes desde el fichero de\n" " spool de correo y env?a los mismos de a uno al m?dulo roundup.mailgw. El\n" -" archivo se vac?a una vez que todos los mensajes han sido procesados\n" -" exitosamente. El archivo se especifica como:\n" +" fichero se vac?a una vez que todos los mensajes han sido procesados\n" +" exitosamente. El fichero se especifica como:\n" " mailbox /ruta/al/mailbox\n" "\n" "POP:\n" @@ -2174,7 +2281,7 @@ " POP y env?a los mismos de a uno al m?dulo roundup.mailgw. El servidor\n" " POP se especifica como:\n" " pop nombreusuario:contrase?a at servidor\n" -" El nombreusuario y la contrase?a pueden omitirse:\n" +" El nombreusuario y la contrase?a pueden omitirse por lo que:\n" " pop nombreusuario at servidor\n" " pop servidor\n" " son v?lidos. El nombre de usuario y/o la contrase?a se solicitar?n si no\n" @@ -2188,29 +2295,33 @@ " Se conecta a un servidor IMAP. 
Esta forma soporta la misma notaci?n que\n" " correo POP\n" " imap nombreusuario:contrase?a at servidor\n" -" Tambi?n le permite especificar una casilla distinta a INBOX usando el\n" +" Tambi?n le permite especificar una carpeta distinta a INBOX usando el\n" " formato:\n" -" imap nombreusuario:contrase?a at servidor casilla\n" +" imap nombreusuario:contrase?a at servidor carpeta\n" "\n" "IMAPS:\n" " Se conecta a un servidor IMAP usando ssl.\n" " Esta forma soporta la misma notaci?n que IMAP.\n" -" imaps nombreusuario:contrase?a at servidor [casilla]\n" +" imaps nombreusuario:contrase?a at servidor [carpeta]\n" "\n" -#: ../roundup/scripts/roundup_mailgw.py:147 +#: ../roundup/scripts/roundup_mailgw.py:151 msgid "Error: not enough source specification information" msgstr "Error: no hay informaci?n de especificaci?n de origen suficiente" -#: ../roundup/scripts/roundup_mailgw.py:163 +#: ../roundup/scripts/roundup_mailgw.py:167 +msgid "Error: a later version of python is required" +msgstr "Error: se require una versi?n mas reciente de python" + +#: ../roundup/scripts/roundup_mailgw.py:170 msgid "Error: pop specification not valid" msgstr "Error: especification pop no v?lida" -#: ../roundup/scripts/roundup_mailgw.py:170 +#: ../roundup/scripts/roundup_mailgw.py:177 msgid "Error: apop specification not valid" msgstr "Error: especification apop no v?lida" -#: ../roundup/scripts/roundup_mailgw.py:184 +#: ../roundup/scripts/roundup_mailgw.py:189 msgid "" "Error: The source must be either \"mailbox\", \"pop\", \"apop\", \"imap\" or " "\"imaps\"" @@ -2218,7 +2329,11 @@ "Error: EL origen debe ser \"mailbox\", \"pop\", \"apop\", \"imap\" o \"imaps" "\"" -#: ../roundup/scripts/roundup_server.py:157 +#: ../roundup/scripts/roundup_server.py:76 +msgid "WARNING: generating temporary SSL certificate" +msgstr "ATENCION: generando certificado SLL temporario" + +#: ../roundup/scripts/roundup_server.py:253 msgid "" "Roundup trackers index\n" "

Roundup trackers index

    \n" @@ -2226,53 +2341,53 @@ "?ndice de trackers Roundup\n" "

    Índice de trackers Roundup

      \n" -#: ../roundup/scripts/roundup_server.py:287 +#: ../roundup/scripts/roundup_server.py:389 #, python-format msgid "Error: %s: %s" -msgstr "" +msgstr "Error: %s: %s" -#: ../roundup/scripts/roundup_server.py:297 +#: ../roundup/scripts/roundup_server.py:399 msgid "WARNING: ignoring \"-g\" argument, not root" msgstr "ATENCI?N: ignorando argumento \"-g\" , Ud. no es root" -#: ../roundup/scripts/roundup_server.py:303 +#: ../roundup/scripts/roundup_server.py:405 msgid "Can't change groups - no grp module" msgstr "No puede cambiar grupos - el m?dulo grp no est? presente" -#: ../roundup/scripts/roundup_server.py:312 +#: ../roundup/scripts/roundup_server.py:414 #, python-format msgid "Group %(group)s doesn't exist" msgstr "El grupo %(group)s no existe" -#: ../roundup/scripts/roundup_server.py:323 +#: ../roundup/scripts/roundup_server.py:425 msgid "Can't run as root!" msgstr "No puede ejecutarse como root!" -#: ../roundup/scripts/roundup_server.py:326 +#: ../roundup/scripts/roundup_server.py:428 msgid "WARNING: ignoring \"-u\" argument, not root" msgstr "ATENCI?N: ignorando argumento \"-u\", Ud. no es root" -#: ../roundup/scripts/roundup_server.py:331 +#: ../roundup/scripts/roundup_server.py:434 msgid "Can't change users - no pwd module" msgstr "No puedo cambiar usuarios - no existe el m?dulo pwd" -#: ../roundup/scripts/roundup_server.py:340 +#: ../roundup/scripts/roundup_server.py:443 #, python-format msgid "User %(user)s doesn't exist" msgstr "El usuario %(user)s no existe" -#: ../roundup/scripts/roundup_server.py:471 +#: ../roundup/scripts/roundup_server.py:592 #, python-format msgid "Multiprocess mode \"%s\" is not available, switching to single-process" msgstr "" "El modo multiproceso \"%s\" no est? disponible, conmutado a proceso simple" -#: ../roundup/scripts/roundup_server.py:494 +#: ../roundup/scripts/roundup_server.py:620 #, python-format msgid "Unable to bind to port %s, port already in use." msgstr "Imposible asociarse al puerto %s, el mismo ya est? en uso." -#: ../roundup/scripts/roundup_server.py:562 +#: ../roundup/scripts/roundup_server.py:688 msgid "" " -c Windows Service options.\n" " If you want to run the server as a Windows Service, you\n" @@ -2284,17 +2399,17 @@ " -c Opciones de Servicio Windows.\n" " Si desdea ejecutar el servidor como un Servicio Windows, debe " "usar\n" -" un archivo de configuraci?n para especificar los directorios " +" un fichero de configuraci?n para especificar los directorios " "base\n" " de los trackers.\n" " Cuando ejecuta el Roundup Tracker como un servicio deb usar " "la\n" -" opci?n para activar un archivo de registro.\n" +" opci?n para activar un fichero de registro.\n" " Tipee \"roundup-server -c help\" para ver ayuda espec?fica " "para\n" " Servicios Web." -#: ../roundup/scripts/roundup_server.py:569 +#: ../roundup/scripts/roundup_server.py:695 msgid "" " -u runs the Roundup web server as this UID\n" " -g runs the Roundup web server as this GID\n" @@ -2306,10 +2421,10 @@ " -g ejecuta el servidor web de Roundup como este GID\n" " -d ejecuta el servidor web de Roundup en segundo plano y escribe " "el\n" -" PID del servidor en el archivo especificado por PIDfile.\n" +" PID del servidor en el fichero especificado por PIDfile.\n" " La opci?n -l *debe* ser especificada si se usa la opci?n -d." 
-#: ../roundup/scripts/roundup_server.py:576 +#: ../roundup/scripts/roundup_server.py:702 #, python-format msgid "" "%(message)sUsage: roundup-server [options] [name=tracker home]*\n" @@ -2324,6 +2439,9 @@ " -l log to the file indicated by fname instead of stderr/stdout\n" " -N log client machine names instead of IP addresses (much " "slower)\n" +" -i set tracker index template\n" +" -s enable SSL\n" +" -e PEM file containing SSL key and certificate\n" " -t multiprocess mode (default: %(mp_def)s).\n" " Allowed values: %(mp_types)s.\n" "%(os_part)s\n" @@ -2369,26 +2487,29 @@ "Opciones:\n" " -v imprime el n?mero de versi?n de Roundup y sale\n" " -h imprime este texto y sale\n" -" -S crea o actualiza el archivo de configuraci?n y sale\n" -" -C usa el archivo de configuraci?n \n" +" -S crea o actualiza el fichero de configuraci?n y sale\n" +" -C usa el fichero de configuraci?n \n" " -n especifica el nombre de host de la instancia del servidor web " "de Roundup\n" " -p especifica el puerto en el cual escuchar? el servidor (por " "omisi?n: %(port)s)\n" -" -l almacena bit?cora en el archivo indicado por fname en lugar " +" -l almacena bit?cora en el fichero indicado por fname en lugar " "de hacerlo a stderr/stdout\n" " -N almacena en bit?cora los nombres de los equipos clientes en " "lugar de direcciones IP (mucho mas lento)\n" -" -t mod multiproceso (por omisi?n: %(mp_def)s).\n" +" -i especifica la plantilla del ?ndice del tracker\n" +" -s activa SSL\n" +" -e fichero PEM que contiene la llave y el certificado SSL\n" +" -t modo multiproceso (por omisi?n: %(mp_def)s).\n" " Valores permitidos: %(mp_types)s.\n" "%(os_part)s\n" "\n" "Opciones largas:\n" " --version imprime el n?mero de versi?n de Roundup y sale\n" " --help imprime este texto y sale\n" -" --save-config crea o actualiza el archivo de configuraci?n y sale\n" -" --config usa el archivo de configuraci?n \n" -" Todos las variables de la secci?n [main] del archivo de configuraci?n\n" +" --save-config crea o actualiza el fichero de configuraci?n y sale\n" +" --config usa el fichero de configuraci?n \n" +" Todos las variables de la secci?n [main] del fichero de configuraci?n\n" " pueden tambi?n especificarse usando la forma --=\n" "\n" "Ejemplos:\n" @@ -2404,11 +2525,11 @@ " roundup-server -d /var/run/roundup.pid -l /var/log/roundup.log \\\n" " support=/var/spool/roundup-trackers/support\n" "\n" -"Formato de archivo de configuraci?n:\n" -" El archivo de configuraci?n del Servidor Roundup tiene un formato de " -"archivo.ini com?n.\n" -" El archivo de configuraci?n creado con 'roundup-server -S' contiene\n" -" explicaciones detalladas para cada opci?n. Por favor vea dicho archivo " +"Formato de fichero de configuraci?n:\n" +" El fichero de configuraci?n del Servidor Roundup tiene un formato de " +"fichero.ini com?n.\n" +" El fichero de configuraci?n creado con 'roundup-server -S' contiene\n" +" explicaciones detalladas para cada opci?n. 
Por favor vea dicho fichero " "para encontrar\n" " descripciones de las variables.\n" "\n" @@ -2427,22 +2548,22 @@ " caracteres tales como espacios, dado que los mismos confunden a Internet " "Explorer.\n" -#: ../roundup/scripts/roundup_server.py:723 +#: ../roundup/scripts/roundup_server.py:860 msgid "Instances must be name=home" msgstr "Las Instancias debe ser de la forma nombre=directorio base" -#: ../roundup/scripts/roundup_server.py:737 +#: ../roundup/scripts/roundup_server.py:874 #, python-format msgid "Configuration saved to %s" msgstr "Configuraci?n guardada en %s" -#: ../roundup/scripts/roundup_server.py:755 +#: ../roundup/scripts/roundup_server.py:892 msgid "Sorry, you can't run the server as a daemon on this Operating System" msgstr "" "Lo siento, no puede ejecutar el servidor como un demonio en este Sistema " "Operativo" -#: ../roundup/scripts/roundup_server.py:767 +#: ../roundup/scripts/roundup_server.py:907 #, python-format msgid "Roundup server started on %(HOST)s:%(PORT)s" msgstr "servidor Roundup iniciado en %(HOST)s:%(PORT)s" @@ -2470,35 +2591,74 @@ " mientras Ud. lo editaba. Por favor revisualice\n" " el nodo y revise sus modificaciones.\n" -#: ../templates/classic/html/_generic.help.html:9 -#: ../templates/minimal/html/_generic.help.html:9 -msgid "${property} help - ${tracker}" -msgstr "${property} ayuda - ${tracker}" +#: ../templates/classic/html/_generic.help-empty.html:6 +msgid "Please specify your search parameters!" +msgstr "?Por favor especifique sus par?metros de b?squeda!" +#: ../templates/classic/html/_generic.help-list.html:20 +#: ../templates/classic/html/_generic.index.html:14 +#: ../templates/classic/html/_generic.item.html:12 +#: ../templates/classic/html/file.item.html:9 +#: ../templates/classic/html/issue.index.html:16 +#: ../templates/classic/html/issue.item.html:28 +#: ../templates/classic/html/msg.item.html:26 +#: ../templates/classic/html/user.index.html:9 +#: ../templates/classic/html/user.item.html:35 +#: ../templates/minimal/html/_generic.index.html:14 +#: ../templates/minimal/html/_generic.item.html:12 +#: ../templates/minimal/html/user.index.html:9 +#: ../templates/minimal/html/user.item.html:35 +#: ../templates/minimal/html/user.register.html:14 +msgid "You are not allowed to view this page." +msgstr "Ud. no posee los permisos necesarios para ver esta p?gina." 
+ +#: ../templates/classic/html/_generic.help-list.html:34 +msgid "1..25 out of 50" +msgstr "1..25 de 50" + +#: ../templates/classic/html/_generic.help-search.html:9 +msgid "" +"Generic template ${template} or version for class ${classname} is not yet " +"implemented" +msgstr "" +"Aun no est?n implementadas una plantilla gen?rica ${template} o una " +"version para la clase ${classname}" + +#: ../templates/classic/html/_generic.help-submit.html:57 #: ../templates/classic/html/_generic.help.html:31 #: ../templates/minimal/html/_generic.help.html:31 msgid " Cancel " msgstr " Cancelar " +#: ../templates/classic/html/_generic.help-submit.html:63 #: ../templates/classic/html/_generic.help.html:34 #: ../templates/minimal/html/_generic.help.html:34 msgid " Apply " msgstr " Aplicar " +#: ../templates/classic/html/_generic.help.html:9 +#: ../templates/classic/html/user.help.html:13 +#: ../templates/minimal/html/_generic.help.html:9 +msgid "${property} help - ${tracker}" +msgstr "${property} ayuda - ${tracker}" + #: ../templates/classic/html/_generic.help.html:41 -#: ../templates/classic/html/issue.index.html:73 +#: ../templates/classic/html/help.html:21 +#: ../templates/classic/html/issue.index.html:80 #: ../templates/minimal/html/_generic.help.html:41 msgid "<< previous" msgstr "<< anterior" #: ../templates/classic/html/_generic.help.html:53 -#: ../templates/classic/html/issue.index.html:81 +#: ../templates/classic/html/help.html:28 +#: ../templates/classic/html/issue.index.html:88 #: ../templates/minimal/html/_generic.help.html:53 msgid "${start}..${end} out of ${total}" msgstr "${start}..${end} de un total de ${total}" #: ../templates/classic/html/_generic.help.html:57 -#: ../templates/classic/html/issue.index.html:84 +#: ../templates/classic/html/help.html:32 +#: ../templates/classic/html/issue.index.html:91 #: ../templates/minimal/html/_generic.help.html:57 msgid "next >>" msgstr "pr?xima >>" @@ -2517,24 +2677,24 @@ msgid "${class} editing" msgstr "Edici?n de ${class}" -#: ../templates/classic/html/_generic.index.html:14 -#: ../templates/classic/html/_generic.item.html:12 -#: ../templates/classic/html/file.item.html:9 -#: ../templates/classic/html/issue.index.html:16 -#: ../templates/classic/html/issue.item.html:28 -#: ../templates/classic/html/msg.item.html:26 -#: ../templates/classic/html/user.index.html:9 -#: ../templates/classic/html/user.item.html:28 -#: ../templates/minimal/html/_generic.index.html:14 -#: ../templates/minimal/html/_generic.item.html:12 -#: ../templates/minimal/html/user.index.html:9 -#: ../templates/minimal/html/user.item.html:28 -#: ../templates/minimal/html/user.register.html:14 -msgid "You are not allowed to view this page." -msgstr "Ud. no posee los permisos necesarios para ver esta p?gina." +#: ../templates/classic/html/_generic.index.html:19 +#: ../templates/classic/html/_generic.item.html:16 +#: ../templates/classic/html/file.item.html:13 +#: ../templates/classic/html/issue.index.html:20 +#: ../templates/classic/html/issue.item.html:32 +#: ../templates/classic/html/msg.item.html:30 +#: ../templates/classic/html/user.index.html:13 +#: ../templates/classic/html/user.item.html:39 +#: ../templates/minimal/html/_generic.index.html:19 +#: ../templates/minimal/html/_generic.item.html:17 +#: ../templates/minimal/html/user.index.html:13 +#: ../templates/minimal/html/user.item.html:39 +#: ../templates/minimal/html/user.register.html:17 +msgid "Please login with your username and password." +msgstr "Por favor identif?quese con su mombre de usuario y contrase?a." 
-#: ../templates/classic/html/_generic.index.html:22 -#: ../templates/minimal/html/_generic.index.html:22 +#: ../templates/classic/html/_generic.index.html:28 +#: ../templates/minimal/html/_generic.index.html:28 msgid "" "

      You may edit the contents of the ${classname} class " "using this form. Commas, newlines and double quotes (\") must be handled " @@ -2555,25 +2715,25 @@ "Para eliminar elementos elimine la l?nea correspondiente. Para agregar " "nuevos elementos an?xelos a la tabla y coloque una X en la columna id.

      " -#: ../templates/classic/html/_generic.index.html:44 -#: ../templates/minimal/html/_generic.index.html:44 +#: ../templates/classic/html/_generic.index.html:50 +#: ../templates/minimal/html/_generic.index.html:50 msgid "Edit Items" msgstr "Editar Items" #: ../templates/classic/html/file.index.html:4 msgid "List of files - ${tracker}" -msgstr "Lista de archivos - ${tracker}" +msgstr "Lista de ficheros - ${tracker}" #: ../templates/classic/html/file.index.html:5 msgid "List of files" -msgstr "Lista de archivos" +msgstr "Lista de ficheros" #: ../templates/classic/html/file.index.html:10 msgid "Download" msgstr "Descargar" #: ../templates/classic/html/file.index.html:11 -#: ../templates/classic/html/file.item.html:22 +#: ../templates/classic/html/file.item.html:27 msgid "Content Type" msgstr "Tipo de Contenido" @@ -2582,25 +2742,24 @@ msgstr "Subido por" #: ../templates/classic/html/file.index.html:13 -#: ../templates/classic/html/msg.item.html:43 +#: ../templates/classic/html/msg.item.html:48 msgid "Date" msgstr "Fecha" #: ../templates/classic/html/file.item.html:2 msgid "File display - ${tracker}" -msgstr "Visualizaci?n de archivos - ${tracker}" +msgstr "Visualizaci?n de ficheros - ${tracker}" #: ../templates/classic/html/file.item.html:4 msgid "File display" -msgstr "Visualizaci?n de archivos" +msgstr "Visualizaci?n de ficheros" -#: ../templates/classic/html/file.item.html:18 -#: ../templates/classic/html/user.item.html:39 +#: ../templates/classic/html/file.item.html:23 #: ../templates/classic/html/user.register.html:17 msgid "Name" msgstr "Nombre" -#: ../templates/classic/html/file.item.html:40 +#: ../templates/classic/html/file.item.html:45 msgid "download" msgstr "descargar" @@ -2614,80 +2773,78 @@ msgid "List of classes" msgstr "Lista de clases" -#: ../templates/classic/html/issue.index.html:7 -msgid "List of issues - ${tracker}" -msgstr "Lista de issues - ${tracker}" - -#: ../templates/classic/html/issue.index.html:11 +#: ../templates/classic/html/issue.index.html:4 +#: ../templates/classic/html/issue.index.html:10 msgid "List of issues" msgstr "Lista de issues" -#: ../templates/classic/html/issue.index.html:22 -#: ../templates/classic/html/issue.item.html:44 +#: ../templates/classic/html/issue.index.html:27 +#: ../templates/classic/html/issue.item.html:49 msgid "Priority" msgstr "Prioridad" -#: ../templates/classic/html/issue.index.html:23 +#: ../templates/classic/html/issue.index.html:28 msgid "ID" -msgstr "" +msgstr "ID" -#: ../templates/classic/html/issue.index.html:24 +#: ../templates/classic/html/issue.index.html:29 msgid "Creation" msgstr "Creaci?n" -#: ../templates/classic/html/issue.index.html:25 +#: ../templates/classic/html/issue.index.html:30 msgid "Activity" msgstr "Actividad" -#: ../templates/classic/html/issue.index.html:26 +#: ../templates/classic/html/issue.index.html:31 msgid "Actor" msgstr "?ltimo actor" -#: ../templates/classic/html/issue.index.html:27 -msgid "Topic" +#: ../templates/classic/html/issue.index.html:32 +#: ../templates/classic/html/keyword.item.html:37 +msgid "Keyword" msgstr "Palabra clave" -#: ../templates/classic/html/issue.index.html:28 -#: ../templates/classic/html/issue.item.html:39 +#: ../templates/classic/html/issue.index.html:33 +#: ../templates/classic/html/issue.item.html:44 msgid "Title" msgstr "T?tulo" -#: ../templates/classic/html/issue.index.html:29 -#: ../templates/classic/html/issue.item.html:46 +#: ../templates/classic/html/issue.index.html:34 +#: ../templates/classic/html/issue.item.html:51 msgid "Status" msgstr "Estado" -#: 
../templates/classic/html/issue.index.html:30 +#: ../templates/classic/html/issue.index.html:35 msgid "Creator" msgstr "Creador" -#: ../templates/classic/html/issue.index.html:31 +#: ../templates/classic/html/issue.index.html:36 msgid "Assigned To" msgstr "Asignado a" -#: ../templates/classic/html/issue.index.html:97 +#: ../templates/classic/html/issue.index.html:104 msgid "Download as CSV" msgstr "Descargar como CSV" -#: ../templates/classic/html/issue.index.html:105 +#: ../templates/classic/html/issue.index.html:114 msgid "Sort on:" msgstr "Ordenar por:" -#: ../templates/classic/html/issue.index.html:108 -#: ../templates/classic/html/issue.index.html:125 +#: ../templates/classic/html/issue.index.html:118 +#: ../templates/classic/html/issue.index.html:139 msgid "- nothing -" msgstr "- nada -" -#: ../templates/classic/html/issue.index.html:116 -#: ../templates/classic/html/issue.index.html:133 +#: ../templates/classic/html/issue.index.html:126 +#: ../templates/classic/html/issue.index.html:147 msgid "Descending:" msgstr "Descendente:" -#: ../templates/classic/html/issue.index.html:122 +#: ../templates/classic/html/issue.index.html:135 msgid "Group on:" msgstr "Agrupar por:" -#: ../templates/classic/html/issue.index.html:139 +#: ../templates/classic/html/issue.index.html:154 msgid "Redisplay" msgstr "Revisualizar" @@ -2709,48 +2866,50 @@ #: ../templates/classic/html/issue.item.html:19 msgid "Issue${id}" -msgstr "" +msgstr "Issue${id}" #: ../templates/classic/html/issue.item.html:22 msgid "Issue${id} Editing" msgstr "Edici?n de Issue${id}" -#: ../templates/classic/html/issue.item.html:51 +#: ../templates/classic/html/issue.item.html:56 msgid "Superseder" msgstr "Reemplazado por" -#: ../templates/classic/html/issue.item.html:56 -msgid "View: ${link}" -msgstr "Ver: ${link}" +#: ../templates/classic/html/issue.item.html:61 +msgid "View:" +msgstr "Ver:" -#: ../templates/classic/html/issue.item.html:60 +#: ../templates/classic/html/issue.item.html:67 msgid "Nosy List" msgstr "Lista de interesados" -#: ../templates/classic/html/issue.item.html:69 +#: ../templates/classic/html/issue.item.html:76 msgid "Assigned To" msgstr "Asignado a" -#: ../templates/classic/html/issue.item.html:71 -msgid "Topics" +#: ../templates/classic/html/issue.item.html:78 +#: ../templates/classic/html/page.html:103 +#: ../templates/minimal/html/page.html:102 +msgid "Keywords" msgstr "Palabras clave" -#: ../templates/classic/html/issue.item.html:79 +#: ../templates/classic/html/issue.item.html:86 msgid "Change Note" msgstr "Nota de modificaci?n" -#: ../templates/classic/html/issue.item.html:87 +#: ../templates/classic/html/issue.item.html:94 msgid "File" -msgstr "Archivo" +msgstr "Fichero" -#: ../templates/classic/html/issue.item.html:99 +#: ../templates/classic/html/issue.item.html:106 msgid "Make a copy" msgstr "Hacer una copia" -#: ../templates/classic/html/issue.item.html:107 -#: ../templates/classic/html/user.item.html:106 +#: ../templates/classic/html/issue.item.html:114 +#: ../templates/classic/html/user.item.html:153 #: ../templates/classic/html/user.register.html:69 -#: ../templates/minimal/html/user.item.html:86 +#: ../templates/minimal/html/user.item.html:153 msgid "" "
- +
View:
+
Nosy TopicsNosy Keywords @@ -4152,7 +4347,7 @@ The more difficult part is the logic to add the users to the nosy list when required. -We choose to perform this action whenever the topics on an +We choose to perform this action whenever the keywords on an item are set (this includes the creation of items). Here we choose to start out with a copy of the ``detectors/nosyreaction.py`` detector, which we copy to the file @@ -4173,8 +4368,8 @@ code, which handled adding the assignedto user(s) to the nosy list in ``updatenosy``, should be replaced by a block of code to add the interested users to the nosy list. We choose here to loop over all -new topics, than looping over all users, -and assign the user to the nosy list when the topic occurs in the user's +new keywords, than looping over all users, +and assign the user to the nosy list when the keyword occurs in the user's ``nosy_keywords``. The next part in ``updatenosy`` -- adding the author and/or recipients of a message to the nosy list -- is obviously not relevant here and is thus deleted from the new auditor. The last @@ -4182,7 +4377,7 @@ This results in the following function:: def update_kw_nosy(db, cl, nodeid, newvalues): - '''Update the nosy list for changes to the topics + '''Update the nosy list for changes to the keywords ''' # nodeid will be None if this is a new node current = {} @@ -4207,17 +4402,17 @@ if not current.has_key(value): current[value] = 1 - # add users with topic in nosy_keywords to the nosy list - if newvalues.has_key('topic') and newvalues['topic'] is not None: - topic_ids = newvalues['topic'] - for topic in topic_ids: + # add users with keyword in nosy_keywords to the nosy list + if newvalues.has_key('keyword') and newvalues['keyword'] is not None: + keyword_ids = newvalues['keyword'] + for keyword in keyword_ids: # loop over all users, - # and assign user to nosy when topic in nosy_keywords + # and assign user to nosy when keyword in nosy_keywords for user_id in db.user.list(): nosy_kw = db.user.get(user_id, "nosy_keywords") found = 0 for kw in nosy_kw: - if kw == topic: + if kw == keyword: found = 1 if found: current[user_id] = 1 @@ -4237,10 +4432,10 @@ Multiple additions When a user, after automatic selection, is manually removed from the nosy list, he is added to the nosy list again when the - topic list of the issue is updated. A better design might be - to only check which topics are new compared to the old list - of topics, and only add users when they have indicated - interest on a new topic. + keyword list of the issue is updated. A better design might be + to only check which keywords are new compared to the old list + of keywords, and only add users when they have indicated + interest on a new keyword. The code could also be changed to only trigger on the ``create()`` event, rather than also on the ``set()`` event, thus only setting @@ -4250,8 +4445,8 @@ In the auditor, there is a loop over all users. For a site with only few users this will pose no serious problem; however, with many users this will be a serious performance bottleneck. - A way out would be to link from the topics to the users who - selected these topics as nosy topics. This will eliminate the + A way out would be to link from the keywords to the users who + selected these keywords as nosy keywords. This will eliminate the loop over all users. 
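As an illustration only (not part of the shipped detector), a sketch of
that idea might replace the loop over all users with a direct query. It
assumes that ``Class.find()`` may be called with the ``nosy_keywords``
Multilink property, in the same way the stock ``nosyreaction.py`` queries
Multilink properties, and it keeps only the keyword-related part of the
auditor::

    def update_kw_nosy(db, cl, nodeid, newvalues):
        '''Sketch: update the nosy list from the keywords without
        looping over every user.
        '''
        if not newvalues.has_key('keyword') or newvalues['keyword'] is None:
            return

        # start from the users already on the nosy list
        current = {}
        if nodeid is not None:
            for user_id in cl.get(nodeid, 'nosy'):
                current[user_id] = 1
        if newvalues.has_key('nosy'):
            for user_id in newvalues['nosy']:
                current[user_id] = 1

        # ask the database directly for the interested users
        for keyword in newvalues['keyword']:
            for user_id in db.user.find(nosy_keywords=keyword):
                current[user_id] = 1

        newvalues['nosy'] = current.keys()

    def init(db):
        # trigger on both creation and modification, as before
        db.issue.audit('create', update_kw_nosy)
        db.issue.audit('set', update_kw_nosy)

How the hook is registered is unchanged; only the inner loop differs.
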
Changes to Security and Permissions Modified: tracker/roundup-src/doc/design.txt ============================================================================== --- tracker/roundup-src/doc/design.txt (original) +++ tracker/roundup-src/doc/design.txt Sun Mar 9 09:26:16 2008 @@ -819,7 +819,7 @@ Class(db, "keyword", name=hyperdb.String()) Class(db, "issue", fixer=hyperdb.Multilink("user"), - topic=hyperdb.Multilink("keyword"), + keyword=hyperdb.Multilink("keyword"), priority=hyperdb.Link("priority"), status=hyperdb.Link("status")) @@ -1250,10 +1250,10 @@ clarity):: /issue?status=unread,in-progress,resolved& - topic=security,ui& + keyword=security,ui& :group=priority,-status& :sort=-activity& - :filters=status,topic& + :filters=status,keyword& :columns=title,status,fixer @@ -1274,11 +1274,11 @@ The example specifies an index of "issue" items. Only issues with a "status" of either "unread" or "in-progres" or "resolved" are displayed, -and only issues with "topic" values including both "security" and "ui" +and only issues with "keyword" values including both "security" and "ui" are displayed. The items are grouped by priority arranged in ascending order and in descending order by status; and within groups, sorted by activity, arranged in descending order. The filter section shows -filters for the "status" and "topic" properties, and the table includes +filters for the "status" and "keyword" properties, and the table includes columns for the "title", "status", and "fixer" properties. Associated with each issue class is a default layout specifier. The Modified: tracker/roundup-src/doc/features.txt ============================================================================== --- tracker/roundup-src/doc/features.txt (original) +++ tracker/roundup-src/doc/features.txt Sun Mar 9 09:26:16 2008 @@ -15,7 +15,7 @@ - requires *no* additional support software - python (2.3+) is enough to get you going - easy to set up higher-performance storage backends like sqlite_, - metakit_, mysql_ and postgresql_ + mysql_ and postgresql_ *simple to use* - accessible through the web, email, command-line or Python programs @@ -40,11 +40,11 @@ customisations *fast, scalable* - - with the sqlite, metakit, mysql and postgresql backends, roundup is + - with the sqlite, mysql and postgresql backends, roundup is also fast and scalable, easily handling thousands of issues and users with decent response times - database indexes are automatically added for those backends that - support them (sqlite, metakit, mysql and postgresql) + support them (sqlite, mysql and postgresql) - indexed text searching giving fast responses to searches across all messages and indexed string properties - support for the Xapian full-text indexing engine for large trackers @@ -102,8 +102,12 @@ - a variety of sample shell scripts are provided (weekly reports, issue generation, ...) +*xmlrpc interface* + - simple remote tracker interface with basic HTTP authentication + - provides same access to tracker as roundup-admin, but based on + XMLRPC calls + .. _sqlite: http://www.hwaci.com/sw/sqlite/ -.. _metakit: http://www.equi4.com/metakit/ .. _mysql: http://sourceforge.net/projects/mysql-python .. 
_postgresql: http://initd.org/software/initd/psycopg Modified: tracker/roundup-src/doc/index.txt ============================================================================== --- tracker/roundup-src/doc/index.txt (original) +++ tracker/roundup-src/doc/index.txt Sun Mar 9 09:26:16 2008 @@ -81,6 +81,7 @@ Wil Cooley, Joe Cooper, Kelley Dagley, +Bruno Damour, Toby Dickenson, Paul F. Dubois, Eric Earnst, @@ -95,6 +96,7 @@ Frank Gibbons, Johannes Gijsbers, Gus Gollings, +Philipp Gortan, Dan Grassi, Robin Green, Jason Grout, @@ -122,11 +124,14 @@ Andrey Lebedev, Henrik Levkowetz, David Linke, +Martin v. L?wis, Fredrik Lundh, Will Maier, Georges Martin, Gordon McMillan, John F Meinel Jr, +Ulrik Mikaelsson, +John Mitchell, Ramiro Morales, Toni Mueller, Stefan Niederhauser, Modified: tracker/roundup-src/doc/installation.txt ============================================================================== --- tracker/roundup-src/doc/installation.txt (original) +++ tracker/roundup-src/doc/installation.txt Sun Mar 9 09:26:16 2008 @@ -2,7 +2,7 @@ Installing Roundup ================== -:Version: $Revision: 1.121 $ +:Version: $Revision: 1.130 $ .. contents:: :depth: 2 @@ -75,9 +75,23 @@ you to install a snapshot. Snapshot "0.9.2_svn6532" has been tried successfully. +pyopenssl + If pyopenssl_ is installed the roundup-server can be configured + to serve trackers over SSL. If you are going to serve roundup via + proxy through a server with SSL support (e.g. apache) then this is + unnecessary. + +pyme + If pyme_ is installed you can configure the mail gateway to perform + verification or decryption of incoming OpenPGP MIME messages. When + configured, you can require email to be cryptographically signed + before roundup will allow it to make modifications to issues. + .. _Xapian: http://www.xapian.org/ .. _pytz: http://www.python.org/pypi/pytz .. _Olson tz database: http://www.twinsun.com/tz/tz-link.htm +.. _pyopenssl: http://pyopenssl.sourceforge.net +.. _pyme: http://pyme.sourceforge.net Getting Roundup @@ -146,7 +160,7 @@ If you would like to place the Roundup scripts in a directory other than ``/usr/bin``, then specify the preferred location with -``--install-script``. For example, to install them in +``--install-scripts``. For example, to install them in ``/opt/roundup/bin``:: python setup.py install --install-scripts=/opt/roundup/bin @@ -272,25 +286,21 @@ ========== =========== ===== ============================== anydbm Slowest Few Always available sqlite Fastest(*) Few May need install (PySQLite_) -metakit Fastest(*) Few Needs install (metakit_) postgresql Fast Many Needs install/admin (psycopg_) mysql Fast Many Needs install/admin (MySQLdb_) ========== =========== ===== ============================== **sqlite** - These use the embedded database engines PySQLite_ and metakit_ to provide - very fast backends. They are not suitable for trackers which will have - many simultaneous users, but require much less installation and - maintenance effort than more scalable postgresql and mysql backends. + This uses the embedded database engine PySQLite_ to provide a very fast + backend. This is not suitable for trackers which will have many + simultaneous users, but requires much less installation and maintenance + effort than more scalable postgresql and mysql backends. SQLite is supported via PySQLite versions 1.1.7, 2.1.0 and sqlite3 (the last being bundled with Python 2.5+) Installed SQLite should be the latest version available (3.3.8 is known to work, 3.1.3 is known to have problems). 
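If you are unsure which SQLite library your Python installation will
actually use, a quick check such as the following (a sketch only; it
assumes either the sqlite3 module bundled with Python 2.5+ or the older
PySQLite 2.x package is installed) prints the library version::

    # report the SQLite library version Python will use
    try:
        import sqlite3 as sqlite                    # bundled with Python 2.5+
    except ImportError:
        from pysqlite2 import dbapi2 as sqlite      # PySQLite 2.x
    print "SQLite library version:", sqlite.sqlite_version

Compare the reported version against the notes above before choosing the
sqlite backend.
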
-**metakit** - Similar performance to sqlite. If you are choosing between these two, - please select sqlite. **postgresql** Backend for popular RDBMS PostgreSQL. You must read doc/postgresql.txt for additional installation steps and requirements. You must also configure @@ -343,7 +353,7 @@ adsutil.vbs set w3svc/AllowPathInfoForScriptMappings TRUE -The ``adsutil.vbs`` file can be found in either ``c:\inetpub\adminscripts`` +The ``adsutil.vbs`` file can be found in either ``c:\inetpub\adminscripts`` or ``c:\winnt\system32\inetsrv\adminsamples\`` or ``c:\winnt\system32\inetsrv\adminscripts\`` depending on your installation. @@ -373,7 +383,7 @@ ``.cgi`` extension of the cgi script. Place the ``roundup.cgi`` script wherever you want it to be, rename it to just ``roundup``, and add a couple lines to your Apache configuration:: - + SetHandler cgi-script @@ -467,7 +477,8 @@ In the following example we have two trackers set up in ``/var/db/roundup/support`` and ``/var/db/roundup/devel`` and accessed as ``https://my.host/roundup/support/`` and ``https://my.host/roundup/devel/`` -respectively. Having them share same parent directory allows us to +respectively (provided Apache has been set up for SSL of course). +Having them share same parent directory allows us to reduce the number of configuration directives. Support tracker has russian user interface. The other tracker (devel) has english user interface (default). @@ -489,7 +500,7 @@ # everything else is handled by roundup web UI AliasMatch /roundup/([^/]+)/(?!@@file/)(.*) /var/db/roundup/$1/dummy.py/$2 # roundup requires a slash after tracker name - add it if missing - RedirectMatch permanent /roundup/([^/]+)$ /roundup/$1/ + RedirectMatch permanent ^/roundup/([^/]+)$ /roundup/$1/ # common settings for all roundup trackers Order allow,deny @@ -511,6 +522,29 @@ PythonOption TrackerHome /var/db/roundup/devel +Notice that the ``/var/db/roundup`` path shown above refers to the directory +in which the tracker homes are stored. The actual value will thus depend on +your system. + +On Windows the corresponding lines will look similar to these:: + + AliasMatch /roundup/(.+)/@@file/(.*) C:/DATA/roundup/$1/html/$2 + AliasMatch /roundup/([^/]+)/(?!@@file/)(.*) C:/DATA/roundup/$1/dummy.py/$2 + + + + +In this example the directory hosting all of the tracker homes is +``C:\DATA\roundup``. (Notice that you must use forward slashes in paths +inside the httpd.conf file!) + +The URL for accessing these trackers then become: +`http:///roundup/support/`` and +``http:///roundup/devel/`` + +Note that in order to use https connections you must set up Apache for secure +serving with SSL. + WSGI Handler ~~~~~~~~~~~~ @@ -543,7 +577,7 @@ Roundup tracker. You should pick ONE of the following, all of which will continue my example setup from above: -As a mail alias pipe process +As a mail alias pipe process ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Set up a mail alias called "issue_tracker" as (include the quote marks): @@ -556,7 +590,7 @@ and change the command to:: |roundup-mailgw /opt/roundup/trackers/support - + To test the mail gateway on unix systems, try:: echo test |mail -s '[issue] test' support at YOUR_DOMAIN_HERE @@ -598,7 +632,7 @@ require_files = /usr/bin/roundup-mailgw:ROUNDUP_HOME/$local_part/schema.py The following configuration has been tested on Debian Sarge with -Exim4. +Exim4. .. 
note:: Note that the Debian Exim4 packages don't allow pipes in alias files @@ -764,6 +798,15 @@ http://cjkpython.berlios.de/ +Public Tracker Considerations +----------------------------- + +If you run a public tracker, you will eventually have to think about +dealing with spam entered through both the web and mail interfaces. + +The `customisation documentation`_ has a simple detector that will block +a lot of spam attempts. Look for the example "Preventing SPAM". + Maintenance =========== @@ -836,26 +879,87 @@ Windows Server -------------- -To have the Roundup web server start up when your machine boots up, set the -following up in Scheduled Tasks (note, the following is for a cygwin setup): +To have the Roundup web server start up when your machine boots up, there +are two different methods, the scheduler and installing the service. + -Run - ``c:\cygwin\bin\bash.exe -c "roundup-server TheProject=/opt/roundup/trackers/support"`` -Start In - ``C:\cygwin\opt\roundup\bin`` -Schedule - At System Startup +1. Using the Windows scheduler +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Set up the following in Scheduled Tasks (note, the following is for a +cygwin setup): + +**Run** + + ``c:\cygwin\bin\bash.exe -c "roundup-server TheProject=/opt/roundup/trackers/support"`` + +**Start In** + + ``C:\cygwin\opt\roundup\bin`` + +**Schedule** + + At System Startup To have the Roundup mail gateway run periodically to poll a POP email address, -set the following up in Scheduled Tasks: +set up the following in Scheduled Tasks: + +**Run** + + ``c:\cygwin\bin\bash.exe -c "roundup-mailgw /opt/roundup/trackers/support pop roundup:roundup at mail-server"`` + +**Start In** + + ``C:\cygwin\opt\roundup\bin`` + +**Schedule** + + Every 10 minutes from 5:00AM for 24 hours every day + + Stop the task if it runs for 8 minutes + + +2. Installing the roundup server as a Windows service +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This is more Windows oriented and will make the Roundup server run as +soon as the PC starts up without any need for a login or such. It will +also be available in the normal Windows Administrative Tools. + +For this you need first to create a service ini file containing the +relevant settings. + +1. It is created if you execute the following command from within the + scripts directory (notice the use of backslashes):: + + roundup-server -S -C \server.ini -n -p 8080 -l \trackerlog.log software=\Software + + where the item ```` is replaced with the physical directory + that hosts all of your trackers. The ```` item is the name + of your roundup server PC, such as w2003srv or similar. + +2. Next open the now created file ``C:\DATA\roundup\server.ini`` file + (if your ```` is ``C:\DATA\roundup``). + Check the entries for correctness, especially this one:: + + [trackers] + software = C:\DATA\Roundup\Software + + (this is an example where the tracker is named software and its home is + ``C:\DATA\Roundup\Software``) + +3. Next give the commands that actually installs and starts the service:: + + roundup-server -C C:\DATA\Roundup\server.ini -c install + roundup-server -c start + +4. Finally open the AdministrativeTools/Services applet and locate the + Roundup service entry. Open its properties and change it to start + automatically instead of manually. 
-Run - ``c:\cygwin\bin\bash.exe -c "roundup-mailgw /opt/roundup/trackers/support pop roundup:roundup at mail-server"`` -Start In - ``C:\cygwin\opt\roundup\bin`` -Schedule - Every 10 minutes from 5:00AM for 24 hours every day - Stop the task if it runs for 8 minutes +If you are using Apache as the webserver you might want to use it with +mod_python instead to serve out Roundup. In that case see the mod_python +instructions above for details. Sendmail smrsh @@ -929,7 +1033,6 @@ .. _External hyperlink targets: .. _apache: http://httpd.apache.org/ -.. _metakit: http://www.equi4.com/metakit/ .. _mod_python: http://www.modpython.org/ .. _MySQLdb: http://sourceforge.net/projects/mysql-python .. _Psycopg: http://initd.org/software/initd/psycopg Modified: tracker/roundup-src/doc/mysql.txt ============================================================================== --- tracker/roundup-src/doc/mysql.txt (original) +++ tracker/roundup-src/doc/mysql.txt Sun Mar 9 09:26:16 2008 @@ -2,7 +2,7 @@ MySQL Backend ============= -:version: $Revision: 1.12 $ +:version: $Revision: 1.13 $ This notes detail the MySQL backend for the Roundup issue tracker. @@ -13,19 +13,13 @@ To use MySQL as the backend for storing roundup data, you also need to install: -1. MySQL RDBMS 4.0.16 or higher - http://www.mysql.com. Your MySQL +1. MySQL RDBMS 4.0.18 or higher - http://www.mysql.com. Your MySQL installation MUST support InnoDB tables (or Berkeley DB (BDB) tables - if you have no other choice). If you're running < 4.0.16 (but not <4.0) + if you have no other choice). If you're running < 4.0.18 (but not <4.0) then you'll need to use BDB to pass all unit tests. Edit the ``roundup/backends/back_mysql.py`` file to enable DBD instead of InnoDB. 2. Python MySQL interface - http://sourceforge.net/projects/mysql-python -.. note:: - The InnoDB implementation has a bug__ that Roundup tickles. See - -__ http://bugs.mysql.com/bug.php?id=1810 - - Running the MySQL tests ======================= Modified: tracker/roundup-src/doc/overview.txt ============================================================================== --- tracker/roundup-src/doc/overview.txt (original) +++ tracker/roundup-src/doc/overview.txt Sun Mar 9 09:26:16 2008 @@ -147,7 +147,7 @@ only sometimes fall into one category; often, a piece of information may be related to several concepts. -For example, forcing each item into a single topic +For example, forcing each item into a single keyword category is not just suboptimal but counterproductive: seekers of that item may expect to find it in a different category @@ -245,7 +245,7 @@ The *multilink* type is for a list of links to any number of other items in the in the database. A *multilink* property, for example, can be used to refer to related items -or topic categories relevant to an item. +or keyword categories relevant to an item. For Roundup, all items have four properties that are not customizable: @@ -314,13 +314,13 @@ # superseder = Multilink("issue") # (it also gets the Class properties creation, activity and creator) issue = IssueClass(db, "issue", - assignedto=Link("user"), topic=Multilink("keyword"), + assignedto=Link("user"), keyword=Multilink("keyword"), priority=Link("priority"), status=Link("status")) The **assignedto** property assigns responsibility for an item to a person or a list of people. 
-The **topic** property places the -item in an arbitrary number of relevant topic sets (see +The **keyword** property places the +item in an arbitrary number of relevant keyword sets (see the section on `Browsing and Searching`_). The **prority** and **status** values are initially: @@ -449,11 +449,11 @@ messages they might have missed. We can take this a step further and -permit users to monitor particular topics or classifications of items +permit users to monitor particular keywords or classifications of items by allowing other kinds of items to also have their own nosy lists. For example, a manager could be on the nosy list of the priority value item for "critical", or a -developer could be on the nosy list of the topic value item for "security". +developer could be on the nosy list of the keyword value item for "security". The recipients are then determined by the union of the nosy lists on the item and all the items it links to. @@ -552,7 +552,7 @@ (the filter selects the *intersection* of the sets of items associated with the active options) -For a *multilink* property like **topic**, +For a *multilink* property like **keyword**, one possibility is to show, as hyperlinks, the keywords whose sets have non-empty intersections with the currently displayed set of items. Sorting the keywords by popularity seems Modified: tracker/roundup-src/doc/roundup-server.1 ============================================================================== --- tracker/roundup-src/doc/roundup-server.1 (original) +++ tracker/roundup-src/doc/roundup-server.1 Sun Mar 9 09:26:16 2008 @@ -21,18 +21,36 @@ Sets a filename to log to (instead of stdout). This is required if the -d option is used. .TP +\fB-i\fP \fIfile\fP +Sets a filename to use as a template for generating the tracker index page. +The variable "trackers" is available to the template and is a dict of all +configured trackers. +.TP +\fB-s\fP +Enables to use of SSL. +.TP +\fB-e\fP \fIfile\fP +Sets a filename containing the PEM file to use for SSL. If left blank, a +temporary self-signed certificate will be used. +.TP \fB-h\fP print help .TP \fBname=\fP\fItracker home\fP -Sets the tracker home(s) to use. The name is how the tracker is -identified in the URL (it's the first part of the URL path). The -tracker home is the directory that was identified when you did -"roundup-admin init". You may specify any number of these name=home -pairs on the command-line. For convenience, you may edit the -TRACKER_HOMES variable in the roundup-server file instead. -Make sure the name part doesn't include any url-unsafe characters like -spaces, as these confuse the cookie handling in browsers like IE. +Sets the tracker home(s) to use. The \fBname\fP variable is how the tracker is +identified in the URL (it's the first part of the URL path). The \fItracker +home\fP variable is the directory that was identified when you did +"roundup-admin init". You may specify any number of these name=home pairs on +the command-line. For convenience, you may edit the TRACKER_HOMES variable in +the roundup-server file instead. Make sure the name part doesn't include any +url-unsafe characters like spaces, as these confuse the cookie handling in +browsers like IE. +.SH EXAMPLES +.TP +.B roundup-server -p 9000 bugs=/var/tracker reqs=/home/roundup/group1 +Start the server on port \fB9000\fP serving two trackers; one under +\fB/bugs\fP and one under \fB/reqs\fP. + .SH CONFIGURATION FILE See the "admin_guide" in the Roundup "doc" directory. 
.SH AUTHOR Modified: tracker/roundup-src/doc/roundup-server.ini.example ============================================================================== --- tracker/roundup-src/doc/roundup-server.ini.example (original) +++ tracker/roundup-src/doc/roundup-server.ini.example Sun Mar 9 09:26:16 2008 @@ -1,6 +1,6 @@ ; This is a sample configuration file for roundup-server. See the ; admin_guide for information about its contents. -[server] +[main] port = 8080 ;hostname = ;user = @@ -8,9 +8,12 @@ ;log_ip = yes ;pidfile = ;logfile = +;template = +;ssl = no +;pem = ; Add one of these per tracker being served -[tracker_url_component] +[trackers] home = /path/to/tracker Modified: tracker/roundup-src/doc/upgrading.txt ============================================================================== --- tracker/roundup-src/doc/upgrading.txt (original) +++ tracker/roundup-src/doc/upgrading.txt Sun Mar 9 09:26:16 2008 @@ -13,6 +13,51 @@ .. contents:: +Migrating from 1.4.x to 1.4.2 +============================= + +You should run the "roundup-admin migrate" command for your tracker once +you've installed the latest codebase. + +Do this before you use the web, command-line or mail interface and before +any users access the tracker. + +This command will respond with either "Tracker updated" (if you've not +previously run it on an RDBMS backend) or "No migration action required" +(if you have run it, or have used another interface to the tracker, +or are using anydbm). + +It's safe to run this even if it's not required, so just get into the +habit. + + +Migrating from 1.3.3 to 1.4.0 +============================= + +Value of the "refwd_re" tracker configuration option (section "mailgw") +is treated as UTF-8 string. In previous versions, it was ISO8859-1. + +If you have running trackers based on the classic template, please +update the messagesummary detector as follows:: + + --- detectors/messagesummary.py 17 Apr 2003 03:26:38 -0000 1.1 + +++ detectors/messagesummary.py 3 Apr 2007 06:47:21 -0000 1.2 + @@ -8,7 +8,7 @@ + if newvalues.has_key('summary') or not newvalues.has_key('content'): + return + + - summary, content = parseContent(newvalues['content'], 1, 1) + + summary, content = parseContent(newvalues['content'], config=db.config) + newvalues['summary'] = summary + +In the latest version we have added some database indexes to the +SQL-backends (mysql, postgresql, sqlite) for speeding up building the +roundup-index for full-text search. We recommend that you create the +following database indexes on the database by hand:: + + CREATE INDEX words_by_id ON __words (_textid) + CREATE UNIQUE INDEX __textids_by_props ON __textids (_class, _itemid, _prop) + Migrating from 1.2.x to 1.3.0 ============================= Modified: tracker/roundup-src/doc/user_guide.txt ============================================================================== --- tracker/roundup-src/doc/user_guide.txt (original) +++ tracker/roundup-src/doc/user_guide.txt Sun Mar 9 09:26:16 2008 @@ -2,7 +2,7 @@ User Guide ========== -:Version: $Revision: 1.36 $ +:Version: $Revision: 1.37 $ .. contents:: @@ -112,7 +112,7 @@ Constrained (link and multilink) properties ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Fields like "Assigned To" and "Topics" hold references to items in other +Fields like "Assigned To" and "Keywords" hold references to items in other classes ("user" and "keyword" in those two cases.) Sometimes, the selection is done through a menu, like in the "Assigned @@ -130,13 +130,13 @@ match issues that are not assigned to a user. 
``assignedto=2,3,40`` match issues that are assigned to users 2, 3 or 40. -``topic=user interface`` - match issues with the keyword "user interface" in their topic list -``topic=web interface,e-mail interface`` +``keyword=user interface`` + match issues with the keyword "user interface" in their keyword list +``keyword=web interface,e-mail interface`` match issues with the keyword "web interface" or "e-mail interface" in - their topic list -``topic=-1`` - match issues with no topics set + their keyword list +``keyword=-1`` + match issues with no keywords set Date properties @@ -350,10 +350,10 @@ (whitespace has been added for clarity):: /issue?status=unread,in-progress,resolved& - topic=security,ui& + keyword=security,ui& @group=priority,-status& @sort=-activity& - @filters=status,topic& + @filters=status,keyword& @columns=title,status,fixer Modified: tracker/roundup-src/frontends/ZRoundup/ZRoundup.py ============================================================================== --- tracker/roundup-src/frontends/ZRoundup/ZRoundup.py (original) +++ tracker/roundup-src/frontends/ZRoundup/ZRoundup.py Sun Mar 9 09:26:16 2008 @@ -14,7 +14,7 @@ # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. # -# $Id: ZRoundup.py,v 1.22 2006/01/25 03:43:04 richard Exp $ +# $Id: ZRoundup.py,v 1.23 2008/02/07 01:03:39 richard Exp $ # ''' ZRoundup module - exposes the roundup web interface to Zope @@ -67,6 +67,11 @@ def end_headers(self): # not needed - the RESPONSE object handles this internally on write() pass + def start_response(self, headers, response): + self.send_response(response) + for key, value in headers: + self.send_header(key, value) + self.end_headers() class FormItem: '''Make a Zope form item look like a cgi.py one @@ -89,6 +94,8 @@ else: entry = FormItem(entry) return entry + def __iter__(self): + return iter(self.__form) def getvalue(self, key, default=None): if self.__form.has_key(key): return self.__form[key] Modified: tracker/roundup-src/locale/es.po ============================================================================== --- tracker/roundup-src/locale/es.po (original) +++ tracker/roundup-src/locale/es.po Sun Mar 9 09:26:16 2008 @@ -5,10 +5,10 @@ # msgid "" msgstr "" -"Project-Id-Version: Roundup 1.3.1\n" +"Project-Id-Version: Roundup 1.3.3\n" "Report-Msgid-Bugs-To: roundup-devel at lists.sourceforge.net\n" -"POT-Creation-Date: 2006-04-27 09:02+0300\n" -"PO-Revision-Date: 2006-11-11 01:32:00-0300\n" +"POT-Creation-Date: 2007-09-16 09:48+0300\n" +"PO-Revision-Date: 2007-09-18 01:22-0300\n" "Last-Translator: Ramiro Morales \n" "Language-Team: Spanish Translators \n" "MIME-Version: 1.0\n" @@ -17,19 +17,19 @@ "Plural-Forms: nplurals=2; plural=(n != 1);\n" # ../roundup/admin.py:85 :955 :1004 :1026 -#: ../roundup/admin.py:85 ../roundup/admin.py:981 ../roundup/admin.py:1030 -#: ../roundup/admin.py:1052 +#: ../roundup/admin.py:86 ../roundup/admin.py:989 ../roundup/admin.py:1040 +#: ../roundup/admin.py:1063 ../roundup/admin.py:86:989 :1040:1063 #, python-format msgid "no such class \"%(classname)s\"" msgstr "la clase \"%(classname)s\" no existe" # ../roundup/admin.py:95 :99 -#: ../roundup/admin.py:95 ../roundup/admin.py:99 +#: ../roundup/admin.py:96 ../roundup/admin.py:100 ../roundup/admin.py:96:100 #, python-format msgid "argument \"%(arg)s\" not propname=value" msgstr "el argumento \"%(arg)s\" no es de la forma nombrepropiedad=valor" -#: ../roundup/admin.py:112 +#: ../roundup/admin.py:113 #, python-format msgid "" 
"Problem: %(message)s\n" @@ -38,7 +38,7 @@ "Problema: %(message)s\n" "\n" -#: ../roundup/admin.py:113 +#: ../roundup/admin.py:114 #, python-format msgid "" "%(message)sUsage: roundup-admin [options] [ ]\n" @@ -93,11 +93,11 @@ " roundup-admin help -- ayuda espec?fica a un comando\n" " roundup-admin help all -- toda la ayuda disponible\n" -#: ../roundup/admin.py:140 +#: ../roundup/admin.py:141 msgid "Commands:" msgstr "Comandos:" -#: ../roundup/admin.py:147 +#: ../roundup/admin.py:148 msgid "" "Commands may be abbreviated as long as the abbreviation\n" "matches only one command, e.g. l == li == lis == list." @@ -105,7 +105,7 @@ "Los comandos pueden ser abreviados siempre y cuando la abreviaci?n\n" "coincida con s?lo un comando, ej. l == li == lis == list." -#: ../roundup/admin.py:177 +#: ../roundup/admin.py:178 msgid "" "\n" "All commands (except help) require a tracker specifier. This is just\n" @@ -175,7 +175,7 @@ "Todos los comandos (excepto ayuda) requieren un especificador de tracker.\n" "Este es simplemente la ruta al tracker roundup con el que se est? " "trabajando.\n" -"Un tracker roundup es donde roundup mantiene la base de datos y el archivo " +"Un tracker roundup es donde roundup mantiene la base de datos y el fichero " "de\n" "configuraci?n que define un issue tracker. Puede pensarse en el mismo como " "el\n" @@ -250,12 +250,12 @@ "\n" "Ayuda sobre comandos:\n" -#: ../roundup/admin.py:240 +#: ../roundup/admin.py:241 #, python-format msgid "%s:" -msgstr "" +msgstr "%s:" -#: ../roundup/admin.py:245 +#: ../roundup/admin.py:246 msgid "" "Usage: help topic\n" " Give help about topic.\n" @@ -275,33 +275,33 @@ " all -- toda la ayuda disponible\n" " " -#: ../roundup/admin.py:268 +#: ../roundup/admin.py:269 #, python-format msgid "Sorry, no help for \"%(topic)s\"" msgstr "Lo siento, no hay ayuda para \"%(topic)s\"" # ../roundup/admin.py:338 :387 -#: ../roundup/admin.py:340 ../roundup/admin.py:396 +#: ../roundup/admin.py:346 ../roundup/admin.py:402 ../roundup/admin.py:346:402 msgid "Templates:" msgstr "Plantillas:" # ../roundup/admin.py:341 :398 -#: ../roundup/admin.py:343 ../roundup/admin.py:407 +#: ../roundup/admin.py:349 ../roundup/admin.py:413 ../roundup/admin.py:349:413 msgid "Back ends:" msgstr "Motor de almacenamiento" -#: ../roundup/admin.py:346 +#: ../roundup/admin.py:352 msgid "" -"Usage: install [template [backend [admin password [key=val[,key=val]]]]]\n" +"Usage: install [template [backend [key=val[,key=val]]]]\n" " Install a new Roundup tracker.\n" "\n" " The command will prompt for the tracker home directory\n" " (if not supplied through TRACKER_HOME or the -i option).\n" -" The template, backend and admin password may be specified\n" -" on the command-line as arguments, in that order.\n" +" The template and backend may be specified on the command-line\n" +" as arguments, in that order.\n" "\n" -" The last command line argument allows to pass initial values\n" -" for config options. For example, passing\n" +" Command line arguments following the backend allows you to\n" +" pass initial values for config options. For example, passing\n" " \"web_http_auth=no,rdbms_user=dinsdale\" will override defaults\n" " for options http_auth in section [web] and user in section [rdbms].\n" " Please be careful to not use spaces in this argument! 
(Enclose\n" @@ -315,14 +315,13 @@ " See also initopts help.\n" " " msgstr "" -"Uso: install [plantilla [backend [ctrase?a adm [clave=val[,clave=val]]]]]\n" +"Uso: install [plantilla [backend [clave=val[,clave=val]]]]\n" " Instala un nuevo tracker Roundup.\n" "\n" " El comando preguntar? el directorio base del tracker\n" " (si el mismo no se provee v?a TRACKER_HOME o la opci?n -i).\n" -" La plantilla, el backend y la contrase?a de admin pueden\n" -" especificarse en la l?nea de comandos como argumentos, en ese " -"orden.\n" +" La plantilla, el backend pueden especificarse en la l?nea\n" +" de comandos como argumentos, en ese orden.\n" "\n" " El ?ltimo argumento de la l?nea de comandos permite especificar " "valores\n" @@ -349,23 +348,24 @@ # ../roundup/admin.py:360 :442 :503 :582 :632 :688 :709 :737 :808 :875 :946 # :994 :1016 :1043 :1106 :1173 -#: ../roundup/admin.py:369 ../roundup/admin.py:466 ../roundup/admin.py:527 -#: ../roundup/admin.py:606 ../roundup/admin.py:656 ../roundup/admin.py:714 -#: ../roundup/admin.py:735 ../roundup/admin.py:763 ../roundup/admin.py:834 -#: ../roundup/admin.py:901 ../roundup/admin.py:972 ../roundup/admin.py:1020 -#: ../roundup/admin.py:1042 ../roundup/admin.py:1069 ../roundup/admin.py:1136 -#: ../roundup/admin.py:1207 +#: ../roundup/admin.py:375 ../roundup/admin.py:472 ../roundup/admin.py:533 +#: ../roundup/admin.py:612 ../roundup/admin.py:663 ../roundup/admin.py:721 +#: ../roundup/admin.py:742 ../roundup/admin.py:770 ../roundup/admin.py:842 +#: ../roundup/admin.py:909 ../roundup/admin.py:980 ../roundup/admin.py:1030 +#: ../roundup/admin.py:1053 ../roundup/admin.py:1084 ../roundup/admin.py:1180 +#: ../roundup/admin.py:1253 ../roundup/admin.py:375:472 :1030:1053 :1084:1180 +#: :1253 :533:612 :663:721 :742:770 :842:909:980 msgid "Not enough arguments supplied" msgstr "No se provey? una cantidad suficiente de argumentos" -#: ../roundup/admin.py:375 +#: ../roundup/admin.py:381 #, python-format msgid "Instance home parent directory \"%(parent)s\" does not exist" msgstr "" "El directorio padre \"%(parent)s\" del directorio base de la instancia no " "existe" -#: ../roundup/admin.py:383 +#: ../roundup/admin.py:389 #, python-format msgid "" "WARNING: There appears to be a tracker in \"%(tracker_home)s\"!\n" @@ -376,20 +376,20 @@ "Si Ud. lo reinstala, perder? toda la informaci?n relacionada al mismo!\n" "Elimino la misma? Y/N: " -#: ../roundup/admin.py:398 +#: ../roundup/admin.py:404 msgid "Select template [classic]: " msgstr "Seleccione la plantilla [classic]: " -#: ../roundup/admin.py:409 +#: ../roundup/admin.py:415 msgid "Select backend [anydbm]: " msgstr "Selecccione el motor de almacenamiento [anydbm]: " -#: ../roundup/admin.py:419 +#: ../roundup/admin.py:425 #, python-format msgid "Error in configuration settings: \"%s\"" msgstr "Error en opciones de configuraci?n: \"%s\"" -#: ../roundup/admin.py:428 +#: ../roundup/admin.py:434 #, python-format msgid "" "\n" @@ -399,14 +399,14 @@ msgstr "" "\n" "---------------------------------------------------------------------------\n" -" Ud. debe ahora editar el archivo de configuraci?n del tracker:\n" +" Ud. debe ahora editar el fichero de configuraci?n del tracker:\n" " %(config_file)s" -#: ../roundup/admin.py:438 +#: ../roundup/admin.py:444 msgid " ... at a minimum, you must set following options:" msgstr " ... 
como m?nimo, debe configurar las siguientes opciones:" -#: ../roundup/admin.py:443 +#: ../roundup/admin.py:449 #, python-format msgid "" "\n" @@ -424,9 +424,9 @@ msgstr "" "\n" " Si desea modificar el esquema de la base de datos,\n" -" debe tambien editar el archivo de esquema:\n" +" debe tambien editar el fichero de esquema:\n" " %(database_config_file)s\n" -" Puede tambi?n cambiar el archivo de inicializaci?n de la base de datos:\n" +" Puede tambi?n cambiar el fichero de inicializaci?n de la base de datos:\n" " %(database_init_file)s\n" " ... vea la documentaci?n sobre personalizaci?n si desea m?s informaci?n.\n" "\n" @@ -434,21 +434,21 @@ " completado los pasos arriba descriptos.\n" "---------------------------------------------------------------------------\n" -#: ../roundup/admin.py:461 +#: ../roundup/admin.py:467 msgid "" "Usage: genconfig \n" " Generate a new tracker config file (ini style) with default values\n" " in .\n" " " msgstr "" -"Uso: genconfig \n" -" Genera un nuevo archivo de configuraci?n de tracker (en formato " +"Uso: genconfig \n" +" Genera un nuevo fichero de configuraci?n de tracker (en formato " "ini)\n" -" con valores por defecto en el archivo .\n" +" con valores por defecto en el fichero .\n" " " #. password -#: ../roundup/admin.py:471 +#: ../roundup/admin.py:477 msgid "" "Usage: initialise [adminpw]\n" " Initialise a new Roundup tracker.\n" @@ -467,23 +467,23 @@ " Ejecuta la funci?n de inicializaci?n dbinit.init() del tracker\n" " " -#: ../roundup/admin.py:485 +#: ../roundup/admin.py:491 msgid "Admin Password: " msgstr "Contrase?a de administraci?n: " -#: ../roundup/admin.py:486 +#: ../roundup/admin.py:492 msgid " Confirm: " msgstr " Confirmar: " -#: ../roundup/admin.py:490 +#: ../roundup/admin.py:496 msgid "Instance home does not exist" msgstr "El directorio base de la instancia no existe" -#: ../roundup/admin.py:494 +#: ../roundup/admin.py:500 msgid "Instance has not been installed" msgstr "La instancia no ha sido instalada" -#: ../roundup/admin.py:499 +#: ../roundup/admin.py:505 msgid "" "WARNING: The database is already initialised!\n" "If you re-initialise it, you will lose all the data!\n" @@ -493,7 +493,7 @@ "Si la reinicializa, perder? toda la informaci?n!\n" "Eliminar la misma? Y/N: " -#: ../roundup/admin.py:520 +#: ../roundup/admin.py:526 msgid "" "Usage: get property designator[,designator]*\n" " Get the given property of one or more designator(s).\n" @@ -510,7 +510,7 @@ " " # ../roundup/admin.py:536 :551 -#: ../roundup/admin.py:560 ../roundup/admin.py:575 +#: ../roundup/admin.py:566 ../roundup/admin.py:581 ../roundup/admin.py:566:581 #, python-format msgid "property %s is not of type Multilink or Link so -d flag does not apply." msgstr "" @@ -518,18 +518,18 @@ "no puede usarse." 
# ../roundup/admin.py:559 :957 :1006 :1028 -#: ../roundup/admin.py:583 ../roundup/admin.py:983 ../roundup/admin.py:1032 -#: ../roundup/admin.py:1054 +#: ../roundup/admin.py:589 ../roundup/admin.py:991 ../roundup/admin.py:1042 +#: ../roundup/admin.py:1065 ../roundup/admin.py:589:991 :1042:1065 #, python-format msgid "no such %(classname)s node \"%(nodeid)s\"" msgstr "no existe nodo de clase %(classname)s llamado \"%(nodeid)s\"" -#: ../roundup/admin.py:585 +#: ../roundup/admin.py:591 #, python-format msgid "no such %(classname)s property \"%(propname)s\"" msgstr "no existe propiedad de clase %(classname)s llamado \"%(propname)s\"" -#: ../roundup/admin.py:594 +#: ../roundup/admin.py:600 msgid "" "Usage: set items property=value property=value ...\n" " Set the given properties of one or more items(s).\n" @@ -558,7 +558,7 @@ " asociados como n?meros separados por comas (\"1,2,3\").\n" " " -#: ../roundup/admin.py:648 +#: ../roundup/admin.py:655 msgid "" "Usage: find classname propname=value ...\n" " Find the nodes of the given class with a given link property value.\n" @@ -580,13 +580,13 @@ " " # ../roundup/admin.py:675 :828 :840 :894 -#: ../roundup/admin.py:701 ../roundup/admin.py:854 ../roundup/admin.py:866 -#: ../roundup/admin.py:920 +#: ../roundup/admin.py:708 ../roundup/admin.py:862 ../roundup/admin.py:874 +#: ../roundup/admin.py:928 ../roundup/admin.py:708:862 :874:928 #, python-format msgid "%(classname)s has no property \"%(propname)s\"" msgstr "%(classname)s no posee la propiedad \"%(propname)s\"" -#: ../roundup/admin.py:708 +#: ../roundup/admin.py:715 msgid "" "Usage: specification classname\n" " Show the properties for a classname.\n" @@ -600,17 +600,17 @@ " Visualiza las propiedades para una cierta clase.\n" " " -#: ../roundup/admin.py:723 +#: ../roundup/admin.py:730 #, python-format msgid "%(key)s: %(value)s (key property)" msgstr "%(key)s: %(value)s (propiedad de clave)" -#: ../roundup/admin.py:725 +#: ../roundup/admin.py:732 #, python-format msgid "%(key)s: %(value)s" -msgstr "" +msgstr "%(key)s: %(value)s" -#: ../roundup/admin.py:728 +#: ../roundup/admin.py:735 msgid "" "Usage: display designator[,designator]*\n" " Show the property values for the given node(s).\n" @@ -626,12 +626,12 @@ "especificado.\n" " " -#: ../roundup/admin.py:752 +#: ../roundup/admin.py:759 #, python-format msgid "%(key)s: %(value)r" -msgstr "" +msgstr "%(key)s: %(value)r" -#: ../roundup/admin.py:755 +#: ../roundup/admin.py:762 msgid "" "Usage: create classname property=value ...\n" " Create a new entry of a given class.\n" @@ -650,31 +650,31 @@ " nombre=valor provistos en la l?nea de comandos luego del comando\n" " \"create\" para establecer valores de propiedad(es). " -#: ../roundup/admin.py:782 +#: ../roundup/admin.py:789 #, python-format msgid "%(propname)s (Password): " msgstr "%(propname)s (Contrase?a): " -#: ../roundup/admin.py:784 +#: ../roundup/admin.py:791 #, python-format msgid " %(propname)s (Again): " msgstr " %(propname)s (Nuevamente): " -#: ../roundup/admin.py:786 +#: ../roundup/admin.py:793 msgid "Sorry, try again..." msgstr "Lo lamento, intente nuevamente..." -#: ../roundup/admin.py:790 +#: ../roundup/admin.py:797 #, python-format msgid "%(propname)s (%(proptype)s): " -msgstr "" +msgstr "%(propname)s (%(proptype)s): " -#: ../roundup/admin.py:808 +#: ../roundup/admin.py:815 #, python-format msgid "you must provide the \"%(propname)s\" property." msgstr "debe proveer la propiedad \"%(propname)s\"." 
-#: ../roundup/admin.py:819 +#: ../roundup/admin.py:827 msgid "" "Usage: list classname [property]\n" " List the instances of a class.\n" @@ -704,16 +704,16 @@ "clase.\n" " " -#: ../roundup/admin.py:832 +#: ../roundup/admin.py:840 msgid "Too many arguments supplied" msgstr "Demasiados argumentos" -#: ../roundup/admin.py:868 +#: ../roundup/admin.py:876 #, python-format msgid "%(nodeid)4s: %(value)s" -msgstr "" +msgstr "%(nodeid)4s: %(value)s" -#: ../roundup/admin.py:872 +#: ../roundup/admin.py:880 msgid "" "Usage: table classname [property[,property]*]\n" " List the instances of a class in tabular form.\n" @@ -777,12 +777,12 @@ " caracteres.\n" " " -#: ../roundup/admin.py:916 +#: ../roundup/admin.py:924 #, python-format msgid "\"%(spec)s\" not name:width" msgstr "\"%(spec)s\" no es de la forma nombre:longitud" -#: ../roundup/admin.py:966 +#: ../roundup/admin.py:974 msgid "" "Usage: history designator\n" " Show the history entries of a designator.\n" @@ -798,7 +798,7 @@ " designador.\n" " " -#: ../roundup/admin.py:987 +#: ../roundup/admin.py:995 msgid "" "Usage: commit\n" " Commit changes made to the database during an interactive session.\n" @@ -823,7 +823,7 @@ " son autom?ticamente escritos si resultan exitosos.\n" " " -#: ../roundup/admin.py:1001 +#: ../roundup/admin.py:1010 msgid "" "Usage: rollback\n" " Undo all changes that are pending commit to the database.\n" @@ -845,7 +845,7 @@ " no introducir?a cambios en la base de datos.\n" " " -#: ../roundup/admin.py:1013 +#: ../roundup/admin.py:1023 msgid "" "Usage: retire designator[,designator]*\n" " Retire the node specified by designator.\n" @@ -862,7 +862,7 @@ " reusado.\n" " " -#: ../roundup/admin.py:1036 +#: ../roundup/admin.py:1047 msgid "" "Usage: restore designator[,designator]*\n" " Restore the retired node specified by designator.\n" @@ -878,30 +878,64 @@ " " #. grab the directory to export to -#: ../roundup/admin.py:1058 +#: ../roundup/admin.py:1070 msgid "" -"Usage: export [class[,class]] export_dir\n" +"Usage: export [[-]class[,class]] export_dir\n" " Export the database to colon-separated-value files.\n" +" To exclude the files (e.g. for the msg or file class),\n" +" use the exporttables command.\n" "\n" -" Optionally limit the export to just the names classes.\n" +" Optionally limit the export to just the named classes\n" +" or exclude the named classes, if the 1st argument starts with '-'.\n" "\n" " This action exports the current data from the database into\n" " colon-separated-value files that are placed in the nominated\n" " destination directory.\n" " " msgstr "" -"Uso: export [clase[,clase]] dir_exportaci?n\n" -" Exporta la base de datos a archivos de valores separados por comas.\n" +"Uso: export [[-]clase[,clase]] dir_exportaci?n\n" +" Exporta la base de datos a ficheros de valores separados por comas.\n" +" Para excluir los ficheros (por ej. 
en las clases msg o file),\n" +" use el comando exporttables.\n" "\n" -" Opcionalmente limita la exportaci?n s?lo a las clases\n" -" especificadas.\n" +" Opcionalmente limita la exportaci?n s?lo a las clases especifica-\n" +" das o las excluye si el primer argumento comienza con '-'.\n" "\n" " Esta acci?n exporta los datos actuales desde la base de datos a\n" -" archivos de valores separados por comas que se colocar?n en el\n" +" ficheros de valores separados por comas que se colocar?n en el\n" " directorio de destino especificado (dir_exportaci?n).\n" " " -#: ../roundup/admin.py:1116 +#: ../roundup/admin.py:1145 +msgid "" +"Usage: exporttables [[-]class[,class]] export_dir\n" +" Export the database to colon-separated-value files, excluding the\n" +" files below $TRACKER_HOME/db/files/ (which can be archived " +"separately).\n" +" To include the files, use the export command.\n" +"\n" +" Optionally limit the export to just the named classes\n" +" or exclude the named classes, if the 1st argument starts with '-'.\n" +"\n" +" This action exports the current data from the database into\n" +" colon-separated-value files that are placed in the nominated\n" +" destination directory.\n" +" " +msgstr "" +"Uso: export [clase[,clase]] dir_exportaci?n\n" +" Exporta la base de datos a ficheros de valores separados por comas,\n" +" excluyendo los ficheros en $TRACKER_HOME/db/files/ (los cuales\n" +" pueden ser archivados por separado).\n" +"\n" +" Opcionalmente limita la exportaci?n s?lo a las clases especifica-\n" +" das o las excluye si el primer argumento comienza con '-'.\n" +"\n" +" Esta acci?n exporta los datos actuales desde la base de datos a\n" +" ficheros de valores separados por comas que se colocar?n en el\n" +" directorio de destino especificado.\n" +" " + +#: ../roundup/admin.py:1160 msgid "" "Usage: import import_dir\n" " Import a database from the directory containing CSV files,\n" @@ -924,10 +958,10 @@ " " msgstr "" "Uso: import dir_importaci?n\n" -" Importa una base de datos desde el directorio conteniendo archivos\n" +" Importa una base de datos desde el directorio conteniendo ficheros\n" " CSV, dos por cada clase a importar.\n" "\n" -" Los archivos usados en la importaci?n son:\n" +" Los ficheros usados en la importaci?n son:\n" "\n" " .csv\n" " Este debe definir las mismas propiedades que la clase (esto\n" @@ -937,7 +971,7 @@ " Este define los journals para los items que se est?n importando.\n" "\n" " Los nodos importados tendr?n los mismos id?s que los nodos seg?n\n" -" se encontraban definidos en el archivo importado, por lo tanto\n" +" se encontraban definidos en el fichero importado, por lo tanto\n" " reemplazar?n todo contenido preexistente.\n" "\n" " Los nuevos nodos son agregados a la base de datos existente - si\n" @@ -946,7 +980,7 @@ " tediosamente, retirar toda los datos viejos.)\n" " " -#: ../roundup/admin.py:1189 +#: ../roundup/admin.py:1235 msgid "" "Usage: pack period | date\n" "\n" @@ -985,11 +1019,11 @@ "\n" " " -#: ../roundup/admin.py:1217 +#: ../roundup/admin.py:1263 msgid "Invalid format" msgstr "Formato inv?lido" -#: ../roundup/admin.py:1227 +#: ../roundup/admin.py:1274 msgid "" "Usage: reindex [classname|designator]*\n" " Re-generate a tracker's search indexes.\n" @@ -1005,12 +1039,12 @@ " Es un comando que por lo general se ejecuta autom?ticamente.\n" " " -#: ../roundup/admin.py:1241 +#: ../roundup/admin.py:1288 #, python-format msgid "no such item \"%(designator)s\"" msgstr "no existe un ?tem llamado \"%(designator)s\"" -#: ../roundup/admin.py:1251 +#: 
../roundup/admin.py:1298 msgid "" "Usage: security [Role name]\n" " Display the Permissions available to one or all Roles.\n" @@ -1020,81 +1054,82 @@ " Muestra los permisos disponibles para uno o todos los Roles.\n" " " -#: ../roundup/admin.py:1259 +#: ../roundup/admin.py:1306 #, python-format msgid "No such Role \"%(role)s\"" msgstr "No existe un Rol llamado \"%(role)s\"" -#: ../roundup/admin.py:1265 +#: ../roundup/admin.py:1312 #, python-format msgid "New Web users get the Roles \"%(role)s\"" msgstr "Los nuevos usuarios creados v?a Web obtiene los Roles \"%(role)s\"" -#: ../roundup/admin.py:1267 +#: ../roundup/admin.py:1314 #, python-format msgid "New Web users get the Role \"%(role)s\"" msgstr "Los nuevos usuarios creados v?a Web obtienen el Rol \"%(role)s\"" -#: ../roundup/admin.py:1270 +#: ../roundup/admin.py:1317 #, python-format msgid "New Email users get the Roles \"%(role)s\"" msgstr "" "Los nuevos usuarios creados v?a e-mail obtienen los Roles \"%(role)s\"" -#: ../roundup/admin.py:1272 +#: ../roundup/admin.py:1319 #, python-format msgid "New Email users get the Role \"%(role)s\"" msgstr "Los nuevos usuarios creados v?a e-mail obtienen el Rol \"%(role)s\"" -#: ../roundup/admin.py:1275 +#: ../roundup/admin.py:1322 #, python-format msgid "Role \"%(name)s\":" msgstr "Rol \"%(name)s\":" -#: ../roundup/admin.py:1280 +#: ../roundup/admin.py:1327 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\": %(properties)s only)" msgstr "" " %(description)s (%(name)s para \"%(klass)s\": %(properties)s solamente)" -#: ../roundup/admin.py:1283 +#: ../roundup/admin.py:1330 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\" only)" msgstr " %(description)s (%(name)s para \"%(klass)s\" solamente)" -#: ../roundup/admin.py:1286 +#: ../roundup/admin.py:1333 #, python-format msgid " %(description)s (%(name)s)" -msgstr "" +msgstr " %(description)s (%(name)s)" -#: ../roundup/admin.py:1315 +#: ../roundup/admin.py:1362 #, python-format msgid "Unknown command \"%(command)s\" (\"help commands\" for a list)" msgstr "" "Comando desconocido \"%(command)s\" (tipee \"help commands\" para obtener " "una lista)" -#: ../roundup/admin.py:1321 +#: ../roundup/admin.py:1368 #, python-format msgid "Multiple commands match \"%(command)s\": %(list)s" msgstr "Coinciden mas de un comando \"%(command)s\": %(list)s" -#: ../roundup/admin.py:1328 +#: ../roundup/admin.py:1375 msgid "Enter tracker home: " msgstr "Ingrese directorio base del tracker: " # ../roundup/admin.py:1296 :1302 :1322 -#: ../roundup/admin.py:1335 ../roundup/admin.py:1341 ../roundup/admin.py:1361 +#: ../roundup/admin.py:1382 ../roundup/admin.py:1388 ../roundup/admin.py:1408 +#: ../roundup/admin.py:1382:1388:1408 #, python-format msgid "Error: %(message)s" -msgstr "" +msgstr "Error: %(message)s" -#: ../roundup/admin.py:1349 +#: ../roundup/admin.py:1396 #, python-format msgid "Error: Couldn't open tracker: %(message)s" msgstr "Error: No se pudo abrir el tracker: %(message)s" -#: ../roundup/admin.py:1374 +#: ../roundup/admin.py:1421 #, python-format msgid "" "Roundup %s ready for input.\n" @@ -1103,48 +1138,48 @@ "Roundup %s listo para comandos.\n" "Tipee \"help\" para ayuda." -#: ../roundup/admin.py:1379 +#: ../roundup/admin.py:1426 msgid "Note: command history and editing not available" msgstr "Nota: historia y edici?n de comandos no disponible" -#: ../roundup/admin.py:1383 +#: ../roundup/admin.py:1430 msgid "roundup> " -msgstr "" +msgstr "roundup> " -#: ../roundup/admin.py:1385 +#: ../roundup/admin.py:1432 msgid "exit..." 
msgstr "salir..." -#: ../roundup/admin.py:1395 +#: ../roundup/admin.py:1442 msgid "There are unsaved changes. Commit them (y/N)? " msgstr "Hay cambios sin guardar. Debo guardar los mismos (y/N)? " -#: ../roundup/backends/back_anydbm.py:2001 +#: ../roundup/backends/back_anydbm.py:2004 #, python-format msgid "WARNING: invalid date tuple %r" msgstr "ATENCI?N: tuple de fecha inv?lido %r" -#: ../roundup/backends/rdbms_common.py:1434 +#: ../roundup/backends/rdbms_common.py:1445 msgid "create" msgstr "crea" -#: ../roundup/backends/rdbms_common.py:1600 +#: ../roundup/backends/rdbms_common.py:1611 msgid "unlink" msgstr "desenlaza" -#: ../roundup/backends/rdbms_common.py:1604 +#: ../roundup/backends/rdbms_common.py:1615 msgid "link" msgstr "enlaza" -#: ../roundup/backends/rdbms_common.py:1724 +#: ../roundup/backends/rdbms_common.py:1737 msgid "set" msgstr "asigna" -#: ../roundup/backends/rdbms_common.py:1748 +#: ../roundup/backends/rdbms_common.py:1761 msgid "retired" msgstr "retira" -#: ../roundup/backends/rdbms_common.py:1778 +#: ../roundup/backends/rdbms_common.py:1791 msgid "restored" msgstr "restaura" @@ -1177,54 +1212,56 @@ msgstr "%(classname)s %(itemid)s ha sido retirado" # ../roundup/cgi/actions.py:163 :191 -#: ../roundup/cgi/actions.py:174 ../roundup/cgi/actions.py:202 +#: ../roundup/cgi/actions.py:169 ../roundup/cgi/actions.py:197 +#: ../roundup/cgi/actions.py:169:197 msgid "You do not have permission to edit queries" msgstr "Ud. no posee los permisos necesarios para editar consultas" # ../roundup/cgi/actions.py:169 :197 -#: ../roundup/cgi/actions.py:180 ../roundup/cgi/actions.py:209 +#: ../roundup/cgi/actions.py:175 ../roundup/cgi/actions.py:204 +#: ../roundup/cgi/actions.py:175:204 msgid "You do not have permission to store queries" msgstr "Ud. no posee los permisos necesarios para grabar consultas" -#: ../roundup/cgi/actions.py:297 +#: ../roundup/cgi/actions.py:310 #, python-format msgid "Not enough values on line %(line)s" msgstr "No hay valores suficientes en la l?nea %(line)s" -#: ../roundup/cgi/actions.py:344 +#: ../roundup/cgi/actions.py:357 msgid "Items edited OK" msgstr "Items editados exitosamente" -#: ../roundup/cgi/actions.py:404 +#: ../roundup/cgi/actions.py:416 #, python-format msgid "%(class)s %(id)s %(properties)s edited ok" msgstr "Edici?n exitosa de %(properties)s de %(class)s %(id)s" -#: ../roundup/cgi/actions.py:407 +#: ../roundup/cgi/actions.py:419 #, python-format msgid "%(class)s %(id)s - nothing changed" msgstr "%(class)s %(id)s - sin modificaciones" -#: ../roundup/cgi/actions.py:419 +#: ../roundup/cgi/actions.py:431 #, python-format msgid "%(class)s %(id)s created" msgstr "%(class)s %(id)s creado" -#: ../roundup/cgi/actions.py:451 +#: ../roundup/cgi/actions.py:463 #, python-format msgid "You do not have permission to edit %(class)s" msgstr "Ud. no posee los permisos necesarios para editar %(class)s" -#: ../roundup/cgi/actions.py:463 +#: ../roundup/cgi/actions.py:475 #, python-format msgid "You do not have permission to create %(class)s" msgstr "Ud. no posee los permisos necesarios para crear %(class)s" -#: ../roundup/cgi/actions.py:487 +#: ../roundup/cgi/actions.py:499 msgid "You do not have permission to edit user roles" msgstr "Ud. no posee los permisos necesarios para editar roles de usuario" -#: ../roundup/cgi/actions.py:537 +#: ../roundup/cgi/actions.py:549 #, python-format msgid "" "Edit Error: someone else has edited this %s (%s). View cambios que dicha persona ha realizado en una " "ventana aparte." 
-#: ../roundup/cgi/actions.py:565 +#: ../roundup/cgi/actions.py:577 #, python-format msgid "Edit Error: %s" msgstr "Error de edici?n: %s" # ../roundup/cgi/actions.py:579 :590 :761 :780 -#: ../roundup/cgi/actions.py:596 ../roundup/cgi/actions.py:607 -#: ../roundup/cgi/actions.py:778 ../roundup/cgi/actions.py:797 +#: ../roundup/cgi/actions.py:608 ../roundup/cgi/actions.py:619 +#: ../roundup/cgi/actions.py:790 ../roundup/cgi/actions.py:809 +#: ../roundup/cgi/actions.py:608:619 :790:809 #, python-format msgid "Error: %s" -msgstr "" +msgstr "Error: %s" -#: ../roundup/cgi/actions.py:633 +#: ../roundup/cgi/actions.py:645 msgid "" "Invalid One Time Key!\n" "(a Mozilla bug may cause this message to show up erroneously, please check " @@ -1256,50 +1294,51 @@ "(un bug de Mozilla puede ser el causante de que se visualice este mensaje en " "forma err?nea, por favor verifique su casilla de e-mail)" -#: ../roundup/cgi/actions.py:675 +#: ../roundup/cgi/actions.py:687 #, python-format msgid "Password reset and email sent to %s" msgstr "Contrase?a reinicializada y mensaje de e-mail enviado a %s" -#: ../roundup/cgi/actions.py:684 +#: ../roundup/cgi/actions.py:696 msgid "Unknown username" msgstr "Usuario desconocido" -#: ../roundup/cgi/actions.py:692 +#: ../roundup/cgi/actions.py:704 msgid "Unknown email address" msgstr "Direcci?n de e-mail desconocida" -#: ../roundup/cgi/actions.py:697 +#: ../roundup/cgi/actions.py:709 msgid "You need to specify a username or address" msgstr "Debe especificar un nombre de usuario o direcci?n de e-mail" -#: ../roundup/cgi/actions.py:722 +#: ../roundup/cgi/actions.py:734 #, python-format msgid "Email sent to %s" msgstr "Se ha enviado un mensaje de e-mail a %s" -#: ../roundup/cgi/actions.py:741 +#: ../roundup/cgi/actions.py:753 msgid "You are now registered, welcome!" msgstr "Ud. se ha registrado exitosamente, bienvenido!" -#: ../roundup/cgi/actions.py:786 +#: ../roundup/cgi/actions.py:798 msgid "It is not permitted to supply roles at registration." msgstr "No est? permitido especificar roles en el momento del registro." -#: ../roundup/cgi/actions.py:878 +#: ../roundup/cgi/actions.py:890 msgid "You are logged out" msgstr "Ha salido del sistema exitosamente" -#: ../roundup/cgi/actions.py:895 +#: ../roundup/cgi/actions.py:907 msgid "Username required" msgstr "Se requiere el ingreso de un nombre de usuario" # ../roundup/cgi/actions.py:891 :895 -#: ../roundup/cgi/actions.py:930 ../roundup/cgi/actions.py:934 +#: ../roundup/cgi/actions.py:942 ../roundup/cgi/actions.py:946 +#: ../roundup/cgi/actions.py:942:946 msgid "Invalid login" msgstr "nombre de usuario ? contrase?a inv?lidos" -#: ../roundup/cgi/actions.py:940 +#: ../roundup/cgi/actions.py:952 msgid "You do not have permission to login" msgstr "Ud. no tiene permiso para ingresar al sistema" @@ -1317,7 +1356,7 @@ #: ../roundup/cgi/cgitb.py:64 #, python-format msgid "
  • \"%(name)s\" (%(info)s)
  • " -msgstr "" +msgstr "
  • \"%(name)s\" (%(info)s)
  • " #: ../roundup/cgi/cgitb.py:67 #, python-format @@ -1360,7 +1399,7 @@ #: ../roundup/cgi/cgitb.py:116 #, python-format msgid "%(exc_type)s: %(exc_value)s" -msgstr "" +msgstr "%(exc_type)s: %(exc_value)s" #: ../roundup/cgi/cgitb.py:120 msgid "" @@ -1384,6 +1423,7 @@ # ../roundup/cgi/cgitb.py:172 :178 #: ../roundup/cgi/cgitb.py:172 ../roundup/cgi/cgitb.py:178 +#: ../roundup/cgi/cgitb.py:172:178 msgid "undefined" msgstr "indefinido/a" @@ -1402,29 +1442,29 @@ "p>\n" "" -#: ../roundup/cgi/client.py:308 +#: ../roundup/cgi/client.py:339 msgid "Form Error: " msgstr "Error de formulario" -#: ../roundup/cgi/client.py:363 +#: ../roundup/cgi/client.py:394 #, python-format msgid "Unrecognized charset: %r" msgstr "Conjunto de caracteres desconocido: %r" -#: ../roundup/cgi/client.py:491 +#: ../roundup/cgi/client.py:522 msgid "Anonymous users are not allowed to use the web interface" msgstr "Los usuarios anonimos no tienen permitido usar esta interfaz Web" -#: ../roundup/cgi/client.py:646 +#: ../roundup/cgi/client.py:677 msgid "You are not allowed to view this file." -msgstr "Ud. no tiene permitido ver este archivo" +msgstr "Ud. no tiene permitido ver este fichero" -#: ../roundup/cgi/client.py:738 +#: ../roundup/cgi/client.py:770 #, python-format msgid "%(starttag)sTime elapsed: %(seconds)fs%(endtag)s\n" msgstr "%(starttag)sTiempo transcurrido: %(seconds)fs%(endtag)s\n" -#: ../roundup/cgi/client.py:742 +#: ../roundup/cgi/client.py:774 #, python-format msgid "" "%(starttag)sCache hits: %(cache_hits)d, misses %(cache_misses)d. Loading " @@ -1435,15 +1475,24 @@ #: ../roundup/cgi/form_parser.py:283 #, python-format -msgid "link \"%(key)s\" value \"%(value)s\" not a designator" -msgstr "el enlace \"%(key)s\" valor \"%(value)s\" no es un designador" +msgid "link \"%(key)s\" value \"%(entry)s\" not a designator" +msgstr "el enlace \"%(key)s\" valor \"%(entry)s\" no es un designador" -#: ../roundup/cgi/form_parser.py:290 +#: ../roundup/cgi/form_parser.py:301 #, python-format msgid "%(class)s %(property)s is not a link or multilink property" msgstr "%(property)s de %(class)s no es una propiedad enlace o multilink" -#: ../roundup/cgi/form_parser.py:312 +#: ../roundup/cgi/form_parser.py:313 +#, python-format +msgid "" +"The form action claims to require property \"%(property)s\" which doesn't " +"exist" +msgstr "" +"La accion de formulario especifica que requiere la propiedad " +"\"%(property)s\" la cual no existe" + +#: ../roundup/cgi/form_parser.py:335 #, python-format msgid "" "You have submitted a %(action)s action for the property \"%(property)s\" " @@ -1453,24 +1502,26 @@ "existe" # ../roundup/cgi/form_parser.py:331 :357 -#: ../roundup/cgi/form_parser.py:331 ../roundup/cgi/form_parser.py:357 +#: ../roundup/cgi/form_parser.py:354 ../roundup/cgi/form_parser.py:380 +#: ../roundup/cgi/form_parser.py:354:380 #, python-format msgid "You have submitted more than one value for the %s property" msgstr "Ha ingresado m?s de un valor para la propiedad %s" # ../roundup/cgi/form_parser.py:354 :360 -#: ../roundup/cgi/form_parser.py:354 ../roundup/cgi/form_parser.py:360 +#: ../roundup/cgi/form_parser.py:377 ../roundup/cgi/form_parser.py:383 +#: ../roundup/cgi/form_parser.py:377:383 msgid "Password and confirmation text do not match" msgstr "La contrase?a y el texto de confirmaci?n no coinciden" -#: ../roundup/cgi/form_parser.py:395 +#: ../roundup/cgi/form_parser.py:418 #, python-format msgid "property \"%(propname)s\": \"%(value)s\" not currently in list" msgstr "" "propiedad \"%(propname)s\": \"%(value)s\" no se 
encuentra en este momento en " "la lista" -#: ../roundup/cgi/form_parser.py:512 +#: ../roundup/cgi/form_parser.py:551 #, python-format msgid "Required %(class)s property %(property)s not supplied" msgid_plural "Required %(class)s properties %(property)s not supplied" @@ -1481,120 +1532,126 @@ "Las propiedades %(property)s de la clase %(class)s son obligatorias y no se " "han provisto" -#: ../roundup/cgi/form_parser.py:535 +#: ../roundup/cgi/form_parser.py:574 msgid "File is empty" -msgstr "El archivo est? vac?o" +msgstr "El fichero est? vac?o" -#: ../roundup/cgi/templating.py:72 +#: ../roundup/cgi/templating.py:77 #, python-format msgid "You are not allowed to %(action)s items of class %(class)s" msgstr "Ud. no tiene permitido %(action)s items de la clase %(class)s" -#: ../roundup/cgi/templating.py:627 +#: ../roundup/cgi/templating.py:657 msgid "(list)" msgstr "(lista)" -#: ../roundup/cgi/templating.py:696 +#: ../roundup/cgi/templating.py:726 msgid "Submit New Entry" msgstr "Crear nuevo elemento" # ../roundup/cgi/templating.py:673 :792 :1166 :1187 :1231 :1253 :1287 :1326 # :1377 :1394 :1470 :1490 :1503 :1520 :1530 :1580 :1755 -#: ../roundup/cgi/templating.py:710 ../roundup/cgi/templating.py:829 -#: ../roundup/cgi/templating.py:1236 ../roundup/cgi/templating.py:1257 -#: ../roundup/cgi/templating.py:1304 ../roundup/cgi/templating.py:1327 -#: ../roundup/cgi/templating.py:1361 ../roundup/cgi/templating.py:1400 -#: ../roundup/cgi/templating.py:1453 ../roundup/cgi/templating.py:1470 -#: ../roundup/cgi/templating.py:1549 ../roundup/cgi/templating.py:1569 -#: ../roundup/cgi/templating.py:1587 ../roundup/cgi/templating.py:1619 -#: ../roundup/cgi/templating.py:1629 ../roundup/cgi/templating.py:1683 -#: ../roundup/cgi/templating.py:1875 +#: ../roundup/cgi/templating.py:740 ../roundup/cgi/templating.py:873 +#: ../roundup/cgi/templating.py:1294 ../roundup/cgi/templating.py:1323 +#: ../roundup/cgi/templating.py:1343 ../roundup/cgi/templating.py:1356 +#: ../roundup/cgi/templating.py:1407 ../roundup/cgi/templating.py:1430 +#: ../roundup/cgi/templating.py:1466 ../roundup/cgi/templating.py:1503 +#: ../roundup/cgi/templating.py:1556 ../roundup/cgi/templating.py:1573 +#: ../roundup/cgi/templating.py:1657 ../roundup/cgi/templating.py:1677 +#: ../roundup/cgi/templating.py:1695 ../roundup/cgi/templating.py:1727 +#: ../roundup/cgi/templating.py:1737 ../roundup/cgi/templating.py:1789 +#: ../roundup/cgi/templating.py:1978 ../roundup/cgi/templating.py:740:873 +#: :1294:1323 :1343:1356 :1407:1430 :1466:1503 :1556:1573 :1657:1677 +#: :1695:1727 :1737:1789:1978 msgid "[hidden]" msgstr "[oculto]" -#: ../roundup/cgi/templating.py:711 +#: ../roundup/cgi/templating.py:741 msgid "New node - no history" msgstr "Nuevo nodo - sin historia" -#: ../roundup/cgi/templating.py:811 +#: ../roundup/cgi/templating.py:855 msgid "Submit Changes" msgstr "Enviar modificaciones" -#: ../roundup/cgi/templating.py:893 +#: ../roundup/cgi/templating.py:937 msgid "The indicated property no longer exists" msgstr "La propiedad indicada ya no existe" -#: ../roundup/cgi/templating.py:894 +#: ../roundup/cgi/templating.py:938 #, python-format msgid "%s: %s\n" -msgstr "" +msgstr "%s: %s\n" -#: ../roundup/cgi/templating.py:907 +#: ../roundup/cgi/templating.py:951 #, python-format msgid "The linked class %(classname)s no longer exists" msgstr "La clase relacionada %(classname)s ya no existe" # ../roundup/cgi/templating.py:903 :924 -#: ../roundup/cgi/templating.py:940 ../roundup/cgi/templating.py:964 +#: ../roundup/cgi/templating.py:984 
../roundup/cgi/templating.py:1008 +#: ../roundup/cgi/templating.py:984:1008 msgid "The linked node no longer exists" msgstr "El nodo relacionado ya no existe" -#: ../roundup/cgi/templating.py:1006 ../roundup/cgi/templating.py:1404 -#: ../roundup/cgi/templating.py:1425 ../roundup/cgi/templating.py:1431 -msgid "No" -msgstr "" - -#: ../roundup/cgi/templating.py:1006 ../roundup/cgi/templating.py:1404 -#: ../roundup/cgi/templating.py:1423 ../roundup/cgi/templating.py:1428 -msgid "Yes" -msgstr "Si" - -#: ../roundup/cgi/templating.py:1017 +#: ../roundup/cgi/templating.py:1061 #, python-format msgid "%s: (no value)" msgstr "%s: (sin valor)" -#: ../roundup/cgi/templating.py:1029 +#: ../roundup/cgi/templating.py:1073 msgid "" "This event is not handled by the history display!" msgstr "" "Este evento no es soportado por la visualizaci?n de historia!" -#: ../roundup/cgi/templating.py:1041 +#: ../roundup/cgi/templating.py:1085 msgid "
    Note:
    Nota:
    DateFechaUserUsuarioActionAcciónArgsArgs
    Note:  highlighted  fields are required.
    " @@ -2758,7 +2917,7 @@ "
    Nota: Los campos  resaltados  son obligatorios.
    " -#: ../templates/classic/html/issue.item.html:121 +#: ../templates/classic/html/issue.item.html:128 msgid "" "Created on ${creation} by ${creator}, last changed " "${activity} by ${actor}." @@ -2766,54 +2925,54 @@ "Creado el ${creation} por ${creator}, ?ltima modificaci?n el " "${activity} por ${actor}." -#: ../templates/classic/html/issue.item.html:125 -#: ../templates/classic/html/msg.item.html:56 +#: ../templates/classic/html/issue.item.html:132 +#: ../templates/classic/html/msg.item.html:61 msgid "Files" -msgstr "Archivos" +msgstr "Ficheros" -#: ../templates/classic/html/issue.item.html:127 -#: ../templates/classic/html/msg.item.html:58 +#: ../templates/classic/html/issue.item.html:134 +#: ../templates/classic/html/msg.item.html:63 msgid "File name" -msgstr "Nombre de archivo" +msgstr "Nombre de fichero" -#: ../templates/classic/html/issue.item.html:128 -#: ../templates/classic/html/msg.item.html:59 +#: ../templates/classic/html/issue.item.html:135 +#: ../templates/classic/html/msg.item.html:64 msgid "Uploaded" msgstr "Subido" -#: ../templates/classic/html/issue.item.html:129 +#: ../templates/classic/html/issue.item.html:136 msgid "Type" msgstr "Tipo" -#: ../templates/classic/html/issue.item.html:130 +#: ../templates/classic/html/issue.item.html:137 #: ../templates/classic/html/query.edit.html:30 msgid "Edit" msgstr "Editar" -#: ../templates/classic/html/issue.item.html:131 +#: ../templates/classic/html/issue.item.html:138 msgid "Remove" msgstr "Eliminar" -#: ../templates/classic/html/issue.item.html:151 -#: ../templates/classic/html/issue.item.html:172 +#: ../templates/classic/html/issue.item.html:158 +#: ../templates/classic/html/issue.item.html:179 #: ../templates/classic/html/query.edit.html:50 msgid "remove" msgstr "eliminar" -#: ../templates/classic/html/issue.item.html:158 +#: ../templates/classic/html/issue.item.html:165 #: ../templates/classic/html/msg.index.html:9 msgid "Messages" msgstr "Mensajes" -#: ../templates/classic/html/issue.item.html:162 +#: ../templates/classic/html/issue.item.html:169 msgid "msg${id} (view)" msgstr "mensaje${id} (ver)" -#: ../templates/classic/html/issue.item.html:163 +#: ../templates/classic/html/issue.item.html:170 msgid "Author: ${author}" msgstr "Autor: ${author}" -#: ../templates/classic/html/issue.item.html:165 +#: ../templates/classic/html/issue.item.html:172 msgid "Date: ${date}" msgstr "Fecha: ${date}" @@ -2825,129 +2984,132 @@ msgid "Issue searching" msgstr "B?squeda de Issues" -#: ../templates/classic/html/issue.search.html:25 +#: ../templates/classic/html/issue.search.html:31 msgid "Filter on" msgstr "Filtrar por" -#: ../templates/classic/html/issue.search.html:26 +#: ../templates/classic/html/issue.search.html:32 msgid "Display" msgstr "Visualizar" -#: ../templates/classic/html/issue.search.html:27 +#: ../templates/classic/html/issue.search.html:33 msgid "Sort on" msgstr "Ordenar por" -#: ../templates/classic/html/issue.search.html:28 +#: ../templates/classic/html/issue.search.html:34 msgid "Group on" msgstr "Agrupar por" -#: ../templates/classic/html/issue.search.html:32 +#: ../templates/classic/html/issue.search.html:38 msgid "All text*:" msgstr "Todo el texto*:" -#: ../templates/classic/html/issue.search.html:40 +#: ../templates/classic/html/issue.search.html:46 msgid "Title:" msgstr "T?tulo:" -#: ../templates/classic/html/issue.search.html:50 -msgid "Topic:" +#: ../templates/classic/html/issue.search.html:56 +msgid "Keyword:" msgstr "Palabra clave:" #: ../templates/classic/html/issue.search.html:58 +#: 
../templates/classic/html/issue.search.html:123 +#: ../templates/classic/html/issue.search.html:139 +msgid "not selected" +msgstr "no seleccionado" + +#: ../templates/classic/html/issue.search.html:67 msgid "ID:" -msgstr "" +msgstr "ID:" -#: ../templates/classic/html/issue.search.html:66 +#: ../templates/classic/html/issue.search.html:75 msgid "Creation Date:" msgstr "Fecha de creaci?n:" -#: ../templates/classic/html/issue.search.html:77 +#: ../templates/classic/html/issue.search.html:86 msgid "Creator:" msgstr "Creador:" -#: ../templates/classic/html/issue.search.html:79 +#: ../templates/classic/html/issue.search.html:88 msgid "created by me" msgstr "creado por m?" -#: ../templates/classic/html/issue.search.html:88 +#: ../templates/classic/html/issue.search.html:97 msgid "Activity:" msgstr "Actividad:" -#: ../templates/classic/html/issue.search.html:99 +#: ../templates/classic/html/issue.search.html:108 msgid "Actor:" msgstr "?ltimo actor:" -#: ../templates/classic/html/issue.search.html:101 +#: ../templates/classic/html/issue.search.html:110 msgid "done by me" msgstr "hecho por m?" -#: ../templates/classic/html/issue.search.html:112 +#: ../templates/classic/html/issue.search.html:121 msgid "Priority:" msgstr "Prioridad:" -#: ../templates/classic/html/issue.search.html:114 -#: ../templates/classic/html/issue.search.html:130 -msgid "not selected" -msgstr "no seleccionado" - -#: ../templates/classic/html/issue.search.html:125 +#: ../templates/classic/html/issue.search.html:134 msgid "Status:" msgstr "Estado:" -#: ../templates/classic/html/issue.search.html:128 +#: ../templates/classic/html/issue.search.html:137 msgid "not resolved" msgstr "sin resolver" -#: ../templates/classic/html/issue.search.html:143 +#: ../templates/classic/html/issue.search.html:152 msgid "Assigned to:" msgstr "Asignado a:" -#: ../templates/classic/html/issue.search.html:146 +#: ../templates/classic/html/issue.search.html:155 msgid "assigned to me" msgstr "asignado a m?" 
-#: ../templates/classic/html/issue.search.html:148 +#: ../templates/classic/html/issue.search.html:157 msgid "unassigned" msgstr "no asignado" -#: ../templates/classic/html/issue.search.html:158 +#: ../templates/classic/html/issue.search.html:167 msgid "No Sort or group:" msgstr "No ordenar o agrupar" -#: ../templates/classic/html/issue.search.html:166 +#: ../templates/classic/html/issue.search.html:175 msgid "Pagesize:" msgstr "Tama?o de p?gina" -#: ../templates/classic/html/issue.search.html:172 +#: ../templates/classic/html/issue.search.html:181 msgid "Start With:" msgstr "Comenzar con:" -#: ../templates/classic/html/issue.search.html:178 +#: ../templates/classic/html/issue.search.html:187 msgid "Sort Descending:" msgstr "Ordenar en forma descendente:" -#: ../templates/classic/html/issue.search.html:185 +#: ../templates/classic/html/issue.search.html:194 msgid "Group Descending:" msgstr "Agrupar en forma descendente:" -#: ../templates/classic/html/issue.search.html:192 +#: ../templates/classic/html/issue.search.html:201 msgid "Query name**:" msgstr "Nombre de la consulta**:" -#: ../templates/classic/html/issue.search.html:204 -#: ../templates/classic/html/page.html:31 -#: ../templates/classic/html/page.html:60 -#: ../templates/minimal/html/page.html:31 +#: ../templates/classic/html/issue.search.html:213 +#: ../templates/classic/html/page.html:43 +#: ../templates/classic/html/page.html:92 +#: ../templates/classic/html/user.help-search.html:69 +#: ../templates/minimal/html/page.html:43 +#: ../templates/minimal/html/page.html:91 msgid "Search" msgstr "Buscar" -#: ../templates/classic/html/issue.search.html:209 +#: ../templates/classic/html/issue.search.html:218 msgid "*: The \"all text\" field will look in message bodies and issue titles" msgstr "" "*: El campo \"Todo el texto\" busca en los cuerpos de los mensajes y los " "t?tulos de los issues" -#: ../templates/classic/html/issue.search.html:212 +#: ../templates/classic/html/issue.search.html:221 msgid "" "**: If you supply a name, the query will be saved off and available as a " "link in the sidebar" @@ -2981,10 +3143,6 @@ "Para crear una nueva Palabra clave, ingrese la misma abajo y haga click en " "\"Crear nuevo elemento\"." 
-#: ../templates/classic/html/keyword.item.html:37 -msgid "Keyword" -msgstr "Palabra clave" - #: ../templates/classic/html/msg.index.html:3 msgid "List of messages - ${tracker}" msgstr "Lista de mensajes - ${tracker}" @@ -3017,133 +3175,153 @@ msgid "Message${id} Editing" msgstr "Edici?n de Mensaje${id}" -#: ../templates/classic/html/msg.item.html:33 +#: ../templates/classic/html/msg.item.html:38 msgid "Author" msgstr "Autor" -#: ../templates/classic/html/msg.item.html:38 +#: ../templates/classic/html/msg.item.html:43 msgid "Recipients" msgstr "Destinatarios" -#: ../templates/classic/html/msg.item.html:49 +#: ../templates/classic/html/msg.item.html:54 msgid "Content" msgstr "Contenido" -#: ../templates/classic/html/page.html:41 +#: ../templates/classic/html/page.html:54 +#: ../templates/minimal/html/page.html:53 msgid "Your Queries (edit)" msgstr "Sus consultas (editar)" -#: ../templates/classic/html/page.html:52 +#: ../templates/classic/html/page.html:65 +#: ../templates/minimal/html/page.html:64 msgid "Issues" -msgstr "" +msgstr "Issues" -#: ../templates/classic/html/page.html:54 -#: ../templates/classic/html/page.html:74 +#: ../templates/classic/html/page.html:67 +#: ../templates/classic/html/page.html:105 +#: ../templates/minimal/html/page.html:66 +#: ../templates/minimal/html/page.html:104 msgid "Create New" msgstr "Crear" -#: ../templates/classic/html/page.html:56 +#: ../templates/classic/html/page.html:69 +#: ../templates/minimal/html/page.html:68 msgid "Show Unassigned" msgstr "Mostrar no asignados" -#: ../templates/classic/html/page.html:58 +#: ../templates/classic/html/page.html:81 +#: ../templates/minimal/html/page.html:80 msgid "Show All" msgstr "Mostrar todos" -#: ../templates/classic/html/page.html:61 +#: ../templates/classic/html/page.html:93 +#: ../templates/minimal/html/page.html:92 msgid "Show issue:" msgstr "Mostrar issue:" -#: ../templates/classic/html/page.html:72 -msgid "Keywords" -msgstr "Palabras clave" - -#: ../templates/classic/html/page.html:78 +#: ../templates/classic/html/page.html:108 +#: ../templates/minimal/html/page.html:107 msgid "Edit Existing" msgstr "Editar existentes" -#: ../templates/classic/html/page.html:84 -#: ../templates/minimal/html/page.html:65 +#: ../templates/classic/html/page.html:114 +#: ../templates/minimal/html/page.html:113 msgid "Administration" msgstr "Administraci?n" -#: ../templates/classic/html/page.html:86 -#: ../templates/minimal/html/page.html:66 +#: ../templates/classic/html/page.html:116 +#: ../templates/minimal/html/page.html:115 msgid "Class List" msgstr "Lista de clases" -#: ../templates/classic/html/page.html:90 -#: ../templates/minimal/html/page.html:68 +#: ../templates/classic/html/page.html:120 +#: ../templates/minimal/html/page.html:119 msgid "User List" msgstr "Lista de usuarios" -#: ../templates/classic/html/page.html:92 -#: ../templates/minimal/html/page.html:71 +#: ../templates/classic/html/page.html:122 +#: ../templates/minimal/html/page.html:121 msgid "Add User" msgstr "Agregar usuario" -#: ../templates/classic/html/page.html:99 -#: ../templates/classic/html/page.html:105 -#: ../templates/minimal/html/page.html:46 +#: ../templates/classic/html/page.html:129 +#: ../templates/classic/html/page.html:135 +#: ../templates/minimal/html/page.html:128 +#: ../templates/minimal/html/page.html:134 msgid "Login" msgstr "Ingresar" -#: ../templates/classic/html/page.html:104 -#: ../templates/minimal/html/page.html:45 +#: ../templates/classic/html/page.html:134 +#: ../templates/minimal/html/page.html:133 msgid "Remember me?" 
msgstr "Recordarme?" -#: ../templates/classic/html/page.html:108 +#: ../templates/classic/html/page.html:138 #: ../templates/classic/html/user.register.html:63 -#: ../templates/minimal/html/page.html:50 -#: ../templates/minimal/html/user.register.html:58 +#: ../templates/minimal/html/page.html:137 +#: ../templates/minimal/html/user.register.html:61 msgid "Register" msgstr "Registrarse" -#: ../templates/classic/html/page.html:111 +#: ../templates/classic/html/page.html:141 +#: ../templates/minimal/html/page.html:140 msgid "Lost your login?" msgstr "Olvid? su contrase?a?" -#: ../templates/classic/html/page.html:116 +#: ../templates/classic/html/page.html:146 +#: ../templates/minimal/html/page.html:145 msgid "Hello, ${user}" msgstr "Hola, ${user}" -#: ../templates/classic/html/page.html:118 +#: ../templates/classic/html/page.html:148 msgid "Your Issues" msgstr "Sus Issues" -#: ../templates/classic/html/page.html:119 -#: ../templates/minimal/html/page.html:57 +#: ../templates/classic/html/page.html:160 +#: ../templates/minimal/html/page.html:147 msgid "Your Details" msgstr "Sus datos personales" -#: ../templates/classic/html/page.html:121 -#: ../templates/minimal/html/page.html:59 +#: ../templates/classic/html/page.html:162 +#: ../templates/minimal/html/page.html:149 msgid "Logout" msgstr "Salir" -#: ../templates/classic/html/page.html:125 +#: ../templates/classic/html/page.html:166 +#: ../templates/minimal/html/page.html:153 msgid "Help" msgstr "Ayuda" -#: ../templates/classic/html/page.html:126 +#: ../templates/classic/html/page.html:167 +#: ../templates/minimal/html/page.html:154 msgid "Roundup docs" msgstr "Doc. de Roundup" -#: ../templates/classic/html/page.html:136 -#: ../templates/minimal/html/page.html:81 +#: ../templates/classic/html/page.html:177 +#: ../templates/minimal/html/page.html:164 msgid "clear this message" msgstr "quitar este mensaje" -#: ../templates/classic/html/page.html:181 +#: ../templates/classic/html/page.html:241 +#: ../templates/classic/html/page.html:256 +#: ../templates/classic/html/page.html:270 +#: ../templates/minimal/html/page.html:228 +#: ../templates/minimal/html/page.html:243 +#: ../templates/minimal/html/page.html:257 msgid "don't care" msgstr "cualquier(a)" -#: ../templates/classic/html/page.html:183 +#: ../templates/classic/html/page.html:243 +#: ../templates/classic/html/page.html:258 +#: ../templates/classic/html/page.html:271 +#: ../templates/minimal/html/page.html:230 +#: ../templates/minimal/html/page.html:245 +#: ../templates/minimal/html/page.html:258 msgid "------------" -msgstr "" +msgstr "------------" -#: ../templates/classic/html/page.html:210 +#: ../templates/classic/html/page.html:299 +#: ../templates/minimal/html/page.html:286 msgid "no value" msgstr "sin valor" @@ -3254,6 +3432,18 @@ "detalladas en el mismo para completar el proceso de generaci?n de nueva una " "contrase?a." +#: ../templates/classic/html/user.help-search.html:73 +msgid "Pagesize" +msgstr "Tama?o de p?gina" + +#: ../templates/classic/html/user.help.html:43 +msgid "" +"Your browser is not capable of using frames; you should be redirected " +"immediately, or visit ${link}." +msgstr "" +"Su navegador no tiene capacidad de manejar marcos; deber?a ser " +"redireccionado de inmediato, caso contrario vaya a ${link}." 
+ #: ../templates/classic/html/user.index.html:3 #: ../templates/minimal/html/user.index.html:3 msgid "User listing - ${tracker}" @@ -3264,129 +3454,92 @@ msgid "User listing" msgstr "Listado de usuarios" -#: ../templates/classic/html/user.index.html:14 -#: ../templates/minimal/html/user.index.html:14 +#: ../templates/classic/html/user.index.html:19 +#: ../templates/minimal/html/user.index.html:19 msgid "Username" msgstr "Nombre de usuario" -#: ../templates/classic/html/user.index.html:15 +#: ../templates/classic/html/user.index.html:20 msgid "Real name" msgstr "Nombre real" -#: ../templates/classic/html/user.index.html:16 -#: ../templates/classic/html/user.item.html:70 +#: ../templates/classic/html/user.index.html:21 #: ../templates/classic/html/user.register.html:45 msgid "Organisation" msgstr "Organizaci?n" -#: ../templates/classic/html/user.index.html:17 -#: ../templates/minimal/html/user.index.html:15 +#: ../templates/classic/html/user.index.html:22 +#: ../templates/minimal/html/user.index.html:20 msgid "Email address" msgstr "Direcci?n de e-mail" -#: ../templates/classic/html/user.index.html:18 +#: ../templates/classic/html/user.index.html:23 msgid "Phone number" msgstr "Nro. telef?nico" -#: ../templates/classic/html/user.index.html:19 +#: ../templates/classic/html/user.index.html:24 msgid "Retire" msgstr "Retirar" -#: ../templates/classic/html/user.index.html:32 +#: ../templates/classic/html/user.index.html:37 msgid "retire" msgstr "retirar" -#: ../templates/classic/html/user.item.html:7 -#: ../templates/minimal/html/user.item.html:7 +#: ../templates/classic/html/user.item.html:9 +#: ../templates/minimal/html/user.item.html:9 msgid "User ${id}: ${title} - ${tracker}" msgstr "Usuario ${id}: ${title} - ${tracker}" -#: ../templates/classic/html/user.item.html:10 -#: ../templates/minimal/html/user.item.html:10 +#: ../templates/classic/html/user.item.html:12 +#: ../templates/minimal/html/user.item.html:12 msgid "New User - ${tracker}" msgstr "Nuevo usuario - ${tracker}" -#: ../templates/classic/html/user.item.html:14 -#: ../templates/minimal/html/user.item.html:14 +#: ../templates/classic/html/user.item.html:21 +#: ../templates/minimal/html/user.item.html:21 msgid "New User" msgstr "Nuevo usuario" -#: ../templates/classic/html/user.item.html:16 -#: ../templates/minimal/html/user.item.html:16 +#: ../templates/classic/html/user.item.html:23 +#: ../templates/minimal/html/user.item.html:23 msgid "New User Editing" msgstr "Edici?n de nuevo usuario" -#: ../templates/classic/html/user.item.html:19 -#: ../templates/minimal/html/user.item.html:19 +#: ../templates/classic/html/user.item.html:26 +#: ../templates/minimal/html/user.item.html:26 msgid "User${id}" msgstr "Usuario${id}" -#: ../templates/classic/html/user.item.html:22 -#: ../templates/minimal/html/user.item.html:22 +#: ../templates/classic/html/user.item.html:29 +#: ../templates/minimal/html/user.item.html:29 msgid "User${id} Editing" msgstr "Edici?n de Usuario${id}" -#: ../templates/classic/html/user.item.html:43 -#: ../templates/classic/html/user.register.html:21 -#: ../templates/minimal/html/user.item.html:40 -#: ../templates/minimal/html/user.register.html:26 -msgid "Login Name" -msgstr "Nombre para Login" - -#: ../templates/classic/html/user.item.html:47 -#: ../templates/classic/html/user.register.html:25 -#: ../templates/minimal/html/user.item.html:44 -#: ../templates/minimal/html/user.register.html:30 -msgid "Login Password" -msgstr "Contrase?a para Login" - -#: ../templates/classic/html/user.item.html:51 -#: 
../templates/classic/html/user.register.html:29 -#: ../templates/minimal/html/user.item.html:48 -#: ../templates/minimal/html/user.register.html:34 -msgid "Confirm Password" -msgstr "Confirmar contrase?a" - -#: ../templates/classic/html/user.item.html:55 +#: ../templates/classic/html/user.item.html:80 #: ../templates/classic/html/user.register.html:33 -#: ../templates/minimal/html/user.item.html:52 -#: ../templates/minimal/html/user.register.html:38 +#: ../templates/minimal/html/user.item.html:80 +#: ../templates/minimal/html/user.register.html:41 msgid "Roles" msgstr "Roles" -#: ../templates/classic/html/user.item.html:61 -#: ../templates/minimal/html/user.item.html:58 +#: ../templates/classic/html/user.item.html:88 +#: ../templates/minimal/html/user.item.html:88 msgid "(to give the user more than one role, enter a comma,separated,list)" msgstr "" "(para asignar m?s de un rol al usuario, ingrese una lista de los mismos " "separados por comas)" -#: ../templates/classic/html/user.item.html:66 -#: ../templates/classic/html/user.register.html:41 -msgid "Phone" -msgstr "Tel?fono" - -#: ../templates/classic/html/user.item.html:74 -msgid "Timezone" -msgstr "Zona horaria" - -#: ../templates/classic/html/user.item.html:78 +#: ../templates/classic/html/user.item.html:109 +#: ../templates/minimal/html/user.item.html:109 msgid "(this is a numeric hour offset, the default is ${zone})" msgstr "" "(este es un valor num?rico de diferencia horaria, el valor por defecto es " "${zone})" -#: ../templates/classic/html/user.item.html:83 -#: ../templates/classic/html/user.register.html:49 -#: ../templates/minimal/html/user.item.html:63 -#: ../templates/minimal/html/user.register.html:46 -msgid "E-mail address" -msgstr "Direcci?n de e-mail" - -#: ../templates/classic/html/user.item.html:91 +#: ../templates/classic/html/user.item.html:130 #: ../templates/classic/html/user.register.html:53 -#: ../templates/minimal/html/user.item.html:71 -#: ../templates/minimal/html/user.register.html:50 +#: ../templates/minimal/html/user.item.html:130 +#: ../templates/minimal/html/user.register.html:53 msgid "Alternate E-mail addresses
    One address per line" msgstr "Direcciones de e-mail alternativas
    Una direcci?n por l?nea" @@ -3397,6 +3550,30 @@ msgid "Registering with ${tracker}" msgstr "Registr?ndose en ${tracker}" +#: ../templates/classic/html/user.register.html:21 +#: ../templates/minimal/html/user.register.html:29 +msgid "Login Name" +msgstr "Nombre para Login" + +#: ../templates/classic/html/user.register.html:25 +#: ../templates/minimal/html/user.register.html:33 +msgid "Login Password" +msgstr "Contrase?a para Login" + +#: ../templates/classic/html/user.register.html:29 +#: ../templates/minimal/html/user.register.html:37 +msgid "Confirm Password" +msgstr "Confirmar contrase?a" + +#: ../templates/classic/html/user.register.html:41 +msgid "Phone" +msgstr "Tel?fono" + +#: ../templates/classic/html/user.register.html:49 +#: ../templates/minimal/html/user.register.html:49 +msgid "E-mail address" +msgstr "Direcci?n de e-mail" + #: ../templates/classic/html/user.rego_progress.html:4 #: ../templates/minimal/html/user.rego_progress.html:4 msgid "Registration in progress - ${tracker}" @@ -3416,91 +3593,101 @@ "En breve recibir? un mensaje de e-mail para confirmar su registro. Para " "completar el proceso, visite el enlace indicado en dicho mensaje." -#: ../templates/minimal/html/home.html:2 -msgid "Tracker home - ${tracker}" -msgstr "Directorio base del tracker - ${tracker}" - -#: ../templates/minimal/html/home.html:4 -msgid "Tracker home" -msgstr "Directorio base del tracker" - -#: ../templates/minimal/html/home.html:16 -msgid "Please select from one of the menu options on the left." -msgstr "Por favor seleccione entre las opciones del men? a la izquierda." - -#: ../templates/minimal/html/home.html:19 -msgid "Please log in or register." -msgstr "Por favor ingrese al sistema o reg?strese en el mismo." - -#: ../templates/minimal/html/page.html:55 -msgid "Hello,
    ${user}" -msgstr "Hola,
    ${user}" - # priority translations: #: ../templates/classic/initial_data.py:5 -#: ../templates/classic/html/page.html:246 msgid "critical" -msgstr "" +msgstr "critical" #: ../templates/classic/initial_data.py:6 -#: ../templates/classic/html/page.html:246 msgid "urgent" -msgstr "" +msgstr "urgent" #: ../templates/classic/initial_data.py:7 -#: ../templates/classic/html/page.html:246 msgid "bug" -msgstr "" +msgstr "bug" #: ../templates/classic/initial_data.py:8 -#: ../templates/classic/html/page.html:246 msgid "feature" -msgstr "" +msgstr "feature" #: ../templates/classic/initial_data.py:9 -#: ../templates/classic/html/page.html:246 msgid "wish" -msgstr "" +msgstr "wish" -#: status translations: ../templates/classic/initial_data.py:12 -#: ../templates/classic/html/page.html:246 +#: ../templates/classic/initial_data.py:12 msgid "unread" -msgstr "" +msgstr "unread" #: ../templates/classic/initial_data.py:13 -#: ../templates/classic/html/page.html:246 msgid "deferred" -msgstr "" +msgstr "deferred" #: ../templates/classic/initial_data.py:14 -#: ../templates/classic/html/page.html:246 msgid "chatting" -msgstr "" +msgstr "chatting" #: ../templates/classic/initial_data.py:15 -#: ../templates/classic/html/page.html:246 -msgid "in-progress" -msgstr "" +msgid "need-eg" +msgstr "need-eg" #: ../templates/classic/initial_data.py:16 -#: ../templates/classic/html/page.html:246 -msgid "need-eg" -msgstr "" +msgid "in-progress" +msgstr "in-progress" #: ../templates/classic/initial_data.py:17 -#: ../templates/classic/html/page.html:246 msgid "testing" -msgstr "" +msgstr "testing" #: ../templates/classic/initial_data.py:18 -#: ../templates/classic/html/page.html:246 msgid "done-cbb" -msgstr "" +msgstr "done-cbb" #: ../templates/classic/initial_data.py:19 -#: ../templates/classic/html/page.html:246 msgid "resolved" msgstr "resuelto" +#: ../templates/minimal/html/home.html:2 +msgid "Tracker home - ${tracker}" +msgstr "Directorio base del tracker - ${tracker}" + +#: ../templates/minimal/html/home.html:4 +msgid "Tracker home" +msgstr "Directorio base del tracker" + +#: ../templates/minimal/html/home.html:16 +msgid "Please select from one of the menu options on the left." +msgstr "Por favor seleccione entre las opciones del men? a la izquierda." + +#: ../templates/minimal/html/home.html:19 +msgid "Please log in or register." +msgstr "Por favor ingrese al sistema o reg?strese en el mismo." + +#~ msgid "topic" +#~ msgstr "palabraclave" + +#~ msgid "System message:" +#~ msgstr "Mensaje de sistema:" + +#~ msgid "List of issues - ${tracker}" +#~ msgstr "Lista de issues - ${tracker}" + +#~ msgid "Topic" +#~ msgstr "Palabra clave" + +#~ msgid "View: ${link}" +#~ msgstr "Ver: ${link}" + +#~ msgid "Topics" +#~ msgstr "Palabras clave" + +#~ msgid "Topic:" +#~ msgstr "Palabra clave:" + +#~ msgid "Timezone" +#~ msgstr "Zona horaria" + +#~ msgid "Hello,
    ${user}" +#~ msgstr "Hola,
    ${user}" + #~ msgid "User editing - ${tracker}" #~ msgstr "Edici?n de usuario - ${tracker}" Modified: tracker/roundup-src/locale/hu.po ============================================================================== --- tracker/roundup-src/locale/hu.po (original) +++ tracker/roundup-src/locale/hu.po Sun Mar 9 09:26:16 2008 @@ -1,47 +1,51 @@ +# Translation of roundup.po to Hungarian +# Copyright ?? 2007 Free Software Foundation, Inc. +# This file is distributed under the same license as the Roundup package. +# +# Gul??csi Tam??s , 2006. +# kilo aka Gabor Kmetyko , 2007. msgid "" msgstr "" -"Project-Id-Version: v0.1\n" -"POT-Creation-Date: \n" -"PO-Revision-Date: 2006-12-02 14:40+0100\n" -"Last-Translator: Gul??csi Tam??s \n" -"Language-Team: UNO-SOFT \n" +"Project-Id-Version: Roundup 1.3.3\n" +"Report-Msgid-Bugs-To: roundup-devel at lists.sourceforge.net\n" +"POT-Creation-Date: 2007-09-27 11:18+0300\n" +"PO-Revision-Date: 2007-09-20 12:30+0200\n" +"Last-Translator: kilo aka Gabor Kmetyko \n" +"Language-Team: Hungarian\n" "MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=utf-8\n" +"Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" -"X-Poedit-Language: Hungarian\n" -"X-Poedit-Country: Hungary\n" -"X-Poedit-SourceCharset: utf-8\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"X-Generator: KBabel 1.11.4\n" # ../roundup/admin.py:85 :981 :1030 :1052 -#: ../roundup/admin.py:85 -#: ../roundup/admin.py:981 -#: ../roundup/admin.py:1030 -#: ../roundup/admin.py:1052 +#: ../roundup/admin.py:86 ../roundup/admin.py:989 ../roundup/admin.py:1040 +#: ../roundup/admin.py:1063 #, python-format msgid "no such class \"%(classname)s\"" -msgstr "nincs ilyen oszt??ly \"%(classname)s\"" +msgstr "nincs \"%(classname)s\" oszt??ly" # ../roundup/admin.py:95 :99 -#: ../roundup/admin.py:95 -#: ../roundup/admin.py:99 +#: ../roundup/admin.py:96 ../roundup/admin.py:100 #, python-format msgid "argument \"%(arg)s\" not propname=value" -msgstr "\"%(arg)s\" argumentum nem n??v=??rt??k alak??" +msgstr "A(z) \"%(arg)s\" argumentum nem n??v=??rt??k alak??" -#: ../roundup/admin.py:112 +#: ../roundup/admin.py:113 #, python-format msgid "" "Problem: %(message)s\n" "\n" msgstr "Probl??ma: %(message)s\n" -#: ../roundup/admin.py:113 +#: ../roundup/admin.py:114 #, python-format msgid "" "%(message)sUsage: roundup-admin [options] [ ]\n" "\n" "Options:\n" -" -i instance home -- specify the issue tracker \"home directory\" to administer\n" +" -i instance home -- specify the issue tracker \"home directory\" to " +"administer\n" " -u -- the user[:password] to use for commands\n" " -d -- print full designators not just class id numbers\n" " -c -- when outputting lists of data, comma-separate them.\n" @@ -63,16 +67,19 @@ "%(message)sHaszn??lat: roundup-admin [opci??k] []\n" "\n" "Opci??k:\n" -" -i p??ld??ny el??r??si ??t -- add meg az adminisztr??lni k??v??nt hibak??vet?? \"k??nyvt??r??t\"\n" +" -i p??ld??ny el??r??si ??t -- add meg az adminisztr??lni k??v??nt hibak??vet?? 
" +"\"k??nyvt??r??t\"\n" " -u -- a parancsokn??l haszn??lt felhaszn??l??n??v[:jelsz??]\n" " -d -- ??rd ki a teljes nevet, ne csak az oszt??ly azonos??t??t\n" " -c -- adatlist??kn??l vessz??vel v??laszd el az elemeket.\n" " Ugyanaz mint '-S \",\"'.\n" -" -S -- adatlist??kn??l a megadott sz??veggel v??laszd el az elemeket\n" +" -S -- adatlist??kn??l a megadott sz??veggel v??laszd el az " +"elemeket\n" " -s -- when outputting lists of data, space-separate them.\n" " Same as '-S \" \"'.\n" " -V -- import??l??sn??l legy??l b??besz??d??\n" -" -v -- ??rd ki a Roundup ??s a Python verzi??sz??mokat (??s l??pj ki)\n" +" -v -- ??rd ki a Roundup ??s a Python verzi??sz??mokat (??s l??pj " +"ki)\n" "\n" " -s, -c vagy -S k??z??l egyszerre csak egy adhat?? meg.\n" "\n" @@ -82,17 +89,19 @@ " roundup-admin help -- parancs-specifikus seg??ts??g\n" " roundup-admin help all -- minden el??rhet?? seg??ts??g\n" -#: ../roundup/admin.py:140 +#: ../roundup/admin.py:141 msgid "Commands:" msgstr "Parancsok:" -#: ../roundup/admin.py:147 +#: ../roundup/admin.py:148 msgid "" "Commands may be abbreviated as long as the abbreviation\n" "matches only one command, e.g. l == li == lis == list." -msgstr "A parancsok r??vid??thet??k mindaddig, am??g csak egy parancsra illenek, pl. l == li == lis == list." +msgstr "" +"A parancsok r??vid??thet??k mindaddig, am??g csak egy parancsra illenek, pl. l " +"== li == lis == list." -#: ../roundup/admin.py:177 +#: ../roundup/admin.py:178 msgid "" "\n" "All commands (except help) require a tracker specifier. This is just\n" @@ -102,7 +111,8 @@ "directory\". It may be specified in the environment variable TRACKER_HOME\n" "or on the command line as \"-i tracker\".\n" "\n" -"A designator is a classname and a nodeid concatenated, eg. bug1, user10, ...\n" +"A designator is a classname and a nodeid concatenated, eg. bug1, " +"user10, ...\n" "\n" "Property values are represented as strings in command arguments and in the\n" "printed results:\n" @@ -127,8 +137,8 @@ " Roch\\'e Compaan (2 tokens: Roch'e Compaan)\n" " address=\"1 2 3\" (1 token: address=1 2 3)\n" " \\\\ (1 token: \\)\n" -" \\n" -"\\r\\t (1 token: a newline, carriage-return and tab)\n" +" \\n\\r\\t (1 token: a newline, carriage-return and " +"tab)\n" "\n" "When multiple nodes are specified to the roundup get or roundup set\n" "commands, the specified properties are retrieved or set on all the listed\n" @@ -158,12 +168,12 @@ "Command help:\n" msgstr "" -#: ../roundup/admin.py:240 +#: ../roundup/admin.py:241 #, python-format msgid "%s:" msgstr "%s:" -#: ../roundup/admin.py:245 +#: ../roundup/admin.py:246 msgid "" "Usage: help topic\n" " Give help about topic.\n" @@ -178,40 +188,38 @@ " Seg??ts??get ad a t??m??r??l.\n" "\n" " commands -- parancsok list??ja\n" -" -- seg??ts??g adott programhoz\n" +" -- seg??ts??g adott parancshoz\n" " initopts -- kezd?? parancs opci??k\n" " all -- minden el??rhet?? seg??ts??g\n" " " -#: ../roundup/admin.py:268 +#: ../roundup/admin.py:269 #, python-format msgid "Sorry, no help for \"%(topic)s\"" -msgstr "Eln??z??st, \"%(topic)s\" t??m??hoz nincs help" +msgstr "Eln??z??st, \"%(topic)s\" t??m??hoz nincs s??g??" 
# ../roundup/admin.py:340 :396 -#: ../roundup/admin.py:340 -#: ../roundup/admin.py:396 +#: ../roundup/admin.py:346 ../roundup/admin.py:402 msgid "Templates:" msgstr "Sablonok:" # ../roundup/admin.py:343 :407 -#: ../roundup/admin.py:343 -#: ../roundup/admin.py:407 +#: ../roundup/admin.py:349 ../roundup/admin.py:413 msgid "Back ends:" msgstr "Adatb??zis h??tterek:" -#: ../roundup/admin.py:346 +#: ../roundup/admin.py:352 msgid "" -"Usage: install [template [backend [admin password [key=val[,key=val]]]]]\n" +"Usage: install [template [backend [key=val[,key=val]]]]\n" " Install a new Roundup tracker.\n" "\n" " The command will prompt for the tracker home directory\n" " (if not supplied through TRACKER_HOME or the -i option).\n" -" The template, backend and admin password may be specified\n" -" on the command-line as arguments, in that order.\n" +" The template and backend may be specified on the command-line\n" +" as arguments, in that order.\n" "\n" -" The last command line argument allows to pass initial values\n" -" for config options. For example, passing\n" +" Command line arguments following the backend allows you to\n" +" pass initial values for config options. For example, passing\n" " \"web_http_auth=no,rdbms_user=dinsdale\" will override defaults\n" " for options http_auth in section [web] and user in section [rdbms].\n" " Please be careful to not use spaces in this argument! (Enclose\n" @@ -228,55 +236,46 @@ # ../roundup/admin.py:369 :466 :527 :606 :656 :714 :735 :763 :834 :901 :972 # :1020 :1042 :1069 :1136 :1207 -#: ../roundup/admin.py:369 -#: ../roundup/admin.py:466 -#: ../roundup/admin.py:527 -#: ../roundup/admin.py:606 -#: ../roundup/admin.py:656 -#: ../roundup/admin.py:714 -#: ../roundup/admin.py:735 -#: ../roundup/admin.py:763 -#: ../roundup/admin.py:834 -#: ../roundup/admin.py:901 -#: ../roundup/admin.py:972 -#: ../roundup/admin.py:1020 -#: ../roundup/admin.py:1042 -#: ../roundup/admin.py:1069 -#: ../roundup/admin.py:1136 -#: ../roundup/admin.py:1207 +#: ../roundup/admin.py:375 ../roundup/admin.py:472 ../roundup/admin.py:533 +#: ../roundup/admin.py:612 ../roundup/admin.py:663 ../roundup/admin.py:721 +#: ../roundup/admin.py:742 ../roundup/admin.py:770 ../roundup/admin.py:842 +#: ../roundup/admin.py:909 ../roundup/admin.py:980 ../roundup/admin.py:1030 +#: ../roundup/admin.py:1053 ../roundup/admin.py:1084 ../roundup/admin.py:1180 +#: ../roundup/admin.py:1253 msgid "Not enough arguments supplied" msgstr "Nincs megadva el??g argumentum" -#: ../roundup/admin.py:375 +#: ../roundup/admin.py:381 #, python-format msgid "Instance home parent directory \"%(parent)s\" does not exist" msgstr "P??ld??ny k??nyvt??r sz??l??je (\"%(parent)s\") nem l??tezik" -#: ../roundup/admin.py:383 +#: ../roundup/admin.py:389 #, python-format msgid "" "WARNING: There appears to be a tracker in \"%(tracker_home)s\"!\n" "If you re-install it, you will lose all the data!\n" "Erase it? Y/N: " msgstr "" -"FIGYELEM: ??gy t??nik, m??r l??tezik egy hibak??vet?? a \"%(tracker_home)s\" k??nyvt??rban!\n" +"FIGYELEM: ??gy t??nik, m??r l??tezik egy hibak??vet?? a \"%(tracker_home)s\" " +"k??nyvt??rban!\n" "Ha ??jra install??lod, minden adat elveszik!\n" "T??r??ljem? 
Y/N: " -#: ../roundup/admin.py:398 +#: ../roundup/admin.py:404 msgid "Select template [classic]: " -msgstr "Sablon v??laszt??sa [classic]:" +msgstr "Sablon v??laszt??sa [classic]: " -#: ../roundup/admin.py:409 +#: ../roundup/admin.py:415 msgid "Select backend [anydbm]: " -msgstr "Adatb??zis h??tt??r v??laszt??sa [anydbm]:" +msgstr "Adatb??zis h??tt??r v??laszt??sa [anydbm]: " -#: ../roundup/admin.py:419 +#: ../roundup/admin.py:425 #, python-format msgid "Error in configuration settings: \"%s\"" msgstr "Hiba a konfigur??ci??s be??ll??t??sokban: \"%s\"" -#: ../roundup/admin.py:428 +#: ../roundup/admin.py:434 #, python-format msgid "" "\n" @@ -289,11 +288,11 @@ " Most kell szerkesztened a konfigur??ci??s f??jlt:\n" " %(config_file)s" -#: ../roundup/admin.py:438 +#: ../roundup/admin.py:444 msgid " ... at a minimum, you must set following options:" -msgstr " ... legkevesebb a k??vetkez?? opci??kat kell be??ll??tani:" +msgstr " ... legal??bb a k??vetkez?? opci??kat kell be??ll??tani:" -#: ../roundup/admin.py:443 +#: ../roundup/admin.py:449 #, python-format msgid "" "\n" @@ -304,12 +303,13 @@ " %(database_init_file)s\n" " ... see the documentation on customizing for more information.\n" "\n" -" You MUST run the \"roundup-admin initialise\" command once you've performed\n" +" You MUST run the \"roundup-admin initialise\" command once you've " +"performed\n" " the above steps.\n" "---------------------------------------------------------------------------\n" msgstr "" -#: ../roundup/admin.py:461 +#: ../roundup/admin.py:467 msgid "" "Usage: genconfig \n" " Generate a new tracker config file (ini style) with default values\n" @@ -317,12 +317,13 @@ " " msgstr "" "Haszn??lat: genconfig \n" -" ??j hibak??vet?? konfigur??ci??s f??jl (ini st??lus??) gener??l??sa alap??rtelmezett ??rt??kekkel\n" +" ??j hibak??vet?? konfigur??ci??s f??jl (ini st??lus??) gener??l??sa " +"alap??rtelmezett ??rt??kekkel\n" " a f??jlba.\n" " " #. password -#: ../roundup/admin.py:471 +#: ../roundup/admin.py:477 msgid "" "Usage: initialise [adminpw]\n" " Initialise a new Roundup tracker.\n" @@ -340,33 +341,33 @@ " V??grehajtja az adatb??zist inicializ??l?? dbinit.init() rutint\n" " " -#: ../roundup/admin.py:485 +#: ../roundup/admin.py:491 msgid "Admin Password: " -msgstr "Adminisztr??tori jelsz??:" +msgstr "Adminisztr??tori jelsz??: " -#: ../roundup/admin.py:486 +#: ../roundup/admin.py:492 msgid " Confirm: " -msgstr " Meger??s??t??s: " +msgstr " Meger??s??t??s " -#: ../roundup/admin.py:490 +#: ../roundup/admin.py:496 msgid "Instance home does not exist" msgstr "A p??ld??ny k??nyvt??ra nem l??tezik" -#: ../roundup/admin.py:494 +#: ../roundup/admin.py:500 msgid "Instance has not been installed" msgstr "A p??ld??ny nem lett install??lva" -#: ../roundup/admin.py:499 +#: ../roundup/admin.py:505 msgid "" "WARNING: The database is already initialised!\n" "If you re-initialise it, you will lose all the data!\n" "Erase it? Y/N: " msgstr "" "FIGYELEM: Az adatb??zis m??r inicializ??lt!\n" -"Ha ??jrainicializ??lod, minden adat elveszik!\n" -"T??rl??d? Y/N:" +"??jrainicializ??l??s eset??n minden adat elv??sz!\n" +"T??rli? Y/N: " -#: ../roundup/admin.py:520 +#: ../roundup/admin.py:526 msgid "" "Usage: get property designator[,designator]*\n" " Get the given property of one or more designator(s).\n" @@ -378,32 +379,31 @@ "Haszn??lat: get property designator[,designator]*\n" " Visszaadja egy vagy t??bb jel??l?? tulajdons??g??t.\n" "\n" -" Visszaadja az ??rt??k??t a jel??l?? ??ltal\n" -" meghat??rozott csom??pontnak.\n" +" Visszaadja a jel??l?? 
??ltal meghat??rozott\n" +" csom??pont ??rt??k??t.\n" " " # ../roundup/admin.py:560 :575 -#: ../roundup/admin.py:560 -#: ../roundup/admin.py:575 +#: ../roundup/admin.py:566 ../roundup/admin.py:581 #, python-format msgid "property %s is not of type Multilink or Link so -d flag does not apply." msgstr "" +"A(z) %s tulajdons??g nem Multilink vagy Link t??pus??, ez??rt a -d kapcsol?? nem " +"alkalmazhat??." # ../roundup/admin.py:583 :983 :1032 :1054 -#: ../roundup/admin.py:583 -#: ../roundup/admin.py:983 -#: ../roundup/admin.py:1032 -#: ../roundup/admin.py:1054 +#: ../roundup/admin.py:589 ../roundup/admin.py:991 ../roundup/admin.py:1042 +#: ../roundup/admin.py:1065 #, python-format msgid "no such %(classname)s node \"%(nodeid)s\"" -msgstr "" +msgstr "nincs \"%(nodeid)s\" %(classname)s csom??pont" -#: ../roundup/admin.py:585 +#: ../roundup/admin.py:591 #, python-format msgid "no such %(classname)s property \"%(propname)s\"" -msgstr "" +msgstr "nincs \"%(propname)s\" %(classname)s tulajdons??g" -#: ../roundup/admin.py:594 +#: ../roundup/admin.py:600 msgid "" "Usage: set items property=value property=value ...\n" " Set the given properties of one or more items(s).\n" @@ -412,13 +412,14 @@ " list of item designators (ie \"designator[,designator,...]\").\n" "\n" " This command sets the properties to the values for all designators\n" -" given. If the value is missing (ie. \"property=\") then the property\n" +" given. If the value is missing (ie. \"property=\") then the " +"property\n" " is un-set. If the property is a multilink, you specify the linked\n" " ids for the multilink as comma-separated numbers (ie \"1,2,3\").\n" " " msgstr "" -#: ../roundup/admin.py:648 +#: ../roundup/admin.py:655 msgid "" "Usage: find classname propname=value ...\n" " Find the nodes of the given class with a given link property value.\n" @@ -430,15 +431,13 @@ msgstr "" # ../roundup/admin.py:701 :854 :866 :920 -#: ../roundup/admin.py:701 -#: ../roundup/admin.py:854 -#: ../roundup/admin.py:866 -#: ../roundup/admin.py:920 +#: ../roundup/admin.py:708 ../roundup/admin.py:862 ../roundup/admin.py:874 +#: ../roundup/admin.py:928 #, python-format msgid "%(classname)s has no property \"%(propname)s\"" -msgstr "" +msgstr "%(classname)s-nek nincs \"%(propname)s\" tulajdons??ga" -#: ../roundup/admin.py:708 +#: ../roundup/admin.py:715 msgid "" "Usage: specification classname\n" " Show the properties for a classname.\n" @@ -446,18 +445,23 @@ " This lists the properties for a given class.\n" " " msgstr "" +"Haszn??lat: specification classname\n" +" Oszt??ly tulajdons??gainak megjelen??t??se.\n" +"\n" +" List??zza az adott oszt??ly tulajdons??gait.\n" +" " -#: ../roundup/admin.py:723 +#: ../roundup/admin.py:730 #, python-format msgid "%(key)s: %(value)s (key property)" -msgstr "" +msgstr "%(key)s: %(value)s (kulcs tulajdons??g)" -#: ../roundup/admin.py:725 +#: ../roundup/admin.py:732 ../roundup/admin.py:759 #, python-format msgid "%(key)s: %(value)s" -msgstr "" +msgstr "%(key)s: %(value)s" -#: ../roundup/admin.py:728 +#: ../roundup/admin.py:735 msgid "" "Usage: display designator[,designator]*\n" " Show the property values for the given node(s).\n" @@ -467,47 +471,43 @@ " " msgstr "" -#: ../roundup/admin.py:752 -#, python-format -msgid "%(key)s: %(value)r" -msgstr "" - -#: ../roundup/admin.py:755 +#: ../roundup/admin.py:762 msgid "" "Usage: create classname property=value ...\n" " Create a new entry of a given class.\n" "\n" " This creates a new entry of the given class using the property\n" -" name=value arguments provided on the 
command line after the \"create\"\n" +" name=value arguments provided on the command line after the \"create" +"\"\n" " command.\n" " " msgstr "" -#: ../roundup/admin.py:782 +#: ../roundup/admin.py:789 #, python-format msgid "%(propname)s (Password): " -msgstr "" +msgstr "%(propname)s (Jelsz??): " -#: ../roundup/admin.py:784 +#: ../roundup/admin.py:791 #, python-format msgid " %(propname)s (Again): " -msgstr "" +msgstr " %(propname)s (Ism??t): " -#: ../roundup/admin.py:786 +#: ../roundup/admin.py:793 msgid "Sorry, try again..." -msgstr "" +msgstr "Sajn??lom, pr??b??lja ??jra..." -#: ../roundup/admin.py:790 +#: ../roundup/admin.py:797 #, python-format msgid "%(propname)s (%(proptype)s): " -msgstr "" +msgstr "%(propname)s (%(proptype)s): " -#: ../roundup/admin.py:808 +#: ../roundup/admin.py:815 #, python-format msgid "you must provide the \"%(propname)s\" property." -msgstr "" +msgstr "meg kell adni a(z) \"%(propname)s\" tulajdons??got." -#: ../roundup/admin.py:819 +#: ../roundup/admin.py:827 msgid "" "Usage: list classname [property]\n" " List the instances of a class.\n" @@ -523,16 +523,16 @@ " " msgstr "" -#: ../roundup/admin.py:832 +#: ../roundup/admin.py:840 msgid "Too many arguments supplied" -msgstr "" +msgstr "T??l sok argumentum ker??lt megad??sra" -#: ../roundup/admin.py:868 +#: ../roundup/admin.py:876 #, python-format msgid "%(nodeid)4s: %(value)s" -msgstr "" +msgstr "%(nodeid)4s: %(value)s" -#: ../roundup/admin.py:872 +#: ../roundup/admin.py:880 msgid "" "Usage: table classname [property[,property]*]\n" " List the instances of a class in tabular form.\n" @@ -564,21 +564,22 @@ " " msgstr "" -#: ../roundup/admin.py:916 +#: ../roundup/admin.py:924 #, python-format msgid "\"%(spec)s\" not name:width" -msgstr "" +msgstr "\"%(spec)s\" nem n??v:hossz form??tum??" -#: ../roundup/admin.py:966 +#: ../roundup/admin.py:974 msgid "" "Usage: history designator\n" " Show the history entries of a designator.\n" "\n" -" Lists the journal entries for the node identified by the designator.\n" +" Lists the journal entries for the node identified by the " +"designator.\n" " " msgstr "" -#: ../roundup/admin.py:987 +#: ../roundup/admin.py:995 msgid "" "Usage: commit\n" " Commit changes made to the database during an interactive session.\n" @@ -592,7 +593,7 @@ " " msgstr "" -#: ../roundup/admin.py:1001 +#: ../roundup/admin.py:1010 msgid "" "Usage: rollback\n" " Undo all changes that are pending commit to the database.\n" @@ -604,7 +605,7 @@ " " msgstr "" -#: ../roundup/admin.py:1013 +#: ../roundup/admin.py:1023 msgid "" "Usage: retire designator[,designator]*\n" " Retire the node specified by designator.\n" @@ -614,7 +615,7 @@ " " msgstr "" -#: ../roundup/admin.py:1036 +#: ../roundup/admin.py:1047 msgid "" "Usage: restore designator[,designator]*\n" " Restore the retired node specified by designator.\n" @@ -624,12 +625,32 @@ msgstr "" #. grab the directory to export to -#: ../roundup/admin.py:1058 +#: ../roundup/admin.py:1070 msgid "" -"Usage: export [class[,class]] export_dir\n" +"Usage: export [[-]class[,class]] export_dir\n" " Export the database to colon-separated-value files.\n" +" To exclude the files (e.g. 
for the msg or file class),\n" +" use the exporttables command.\n" +"\n" +" Optionally limit the export to just the named classes\n" +" or exclude the named classes, if the 1st argument starts with '-'.\n" +"\n" +" This action exports the current data from the database into\n" +" colon-separated-value files that are placed in the nominated\n" +" destination directory.\n" +" " +msgstr "" + +#: ../roundup/admin.py:1145 +msgid "" +"Usage: exporttables [[-]class[,class]] export_dir\n" +" Export the database to colon-separated-value files, excluding the\n" +" files below $TRACKER_HOME/db/files/ (which can be archived " +"separately).\n" +" To include the files, use the export command.\n" "\n" -" Optionally limit the export to just the names classes.\n" +" Optionally limit the export to just the named classes\n" +" or exclude the named classes, if the 1st argument starts with '-'.\n" "\n" " This action exports the current data from the database into\n" " colon-separated-value files that are placed in the nominated\n" @@ -637,7 +658,7 @@ " " msgstr "" -#: ../roundup/admin.py:1116 +#: ../roundup/admin.py:1160 msgid "" "Usage: import import_dir\n" " Import a database from the directory containing CSV files,\n" @@ -660,14 +681,15 @@ " " msgstr "" -#: ../roundup/admin.py:1189 +#: ../roundup/admin.py:1235 msgid "" "Usage: pack period | date\n" "\n" " Remove journal entries older than a period of time specified or\n" " before a certain date.\n" "\n" -" A period is specified using the suffixes \"y\", \"m\", and \"d\". The\n" +" A period is specified using the suffixes \"y\", \"m\", and \"d\". " +"The\n" " suffix \"w\" (for \"week\") means 7 days.\n" "\n" " \"3y\" means three years\n" @@ -681,11 +703,11 @@ " " msgstr "" -#: ../roundup/admin.py:1217 +#: ../roundup/admin.py:1263 msgid "Invalid format" msgstr "Hib??s form??tum" -#: ../roundup/admin.py:1227 +#: ../roundup/admin.py:1274 msgid "" "Usage: reindex [classname|designator]*\n" " Re-generate a tracker's search indexes.\n" @@ -695,148 +717,185 @@ " " msgstr "" -#: ../roundup/admin.py:1241 +#: ../roundup/admin.py:1288 #, python-format msgid "no such item \"%(designator)s\"" -msgstr "" +msgstr "nincs ilyen elem: \"%(designator)s\"" -#: ../roundup/admin.py:1251 +#: ../roundup/admin.py:1298 msgid "" "Usage: security [Role name]\n" " Display the Permissions available to one or all Roles.\n" " " msgstr "" +"Haszn??lat: security [szerepk??r]\n" +" Megjelen??ti a megadott vagy az ??sszes szerepk??r jogosults??gait.\n" +" " -#: ../roundup/admin.py:1259 +#: ../roundup/admin.py:1306 #, python-format msgid "No such Role \"%(role)s\"" -msgstr "" +msgstr "Nincs ilyen szerepk??r: \"%(role)s\"" -#: ../roundup/admin.py:1265 +#: ../roundup/admin.py:1312 #, python-format msgid "New Web users get the Roles \"%(role)s\"" -msgstr "" +msgstr "??j web felhaszn??l??k ezeket a szerepk??r??ket kapj??k: \"%(role)s\"" -#: ../roundup/admin.py:1267 +#: ../roundup/admin.py:1314 #, python-format msgid "New Web users get the Role \"%(role)s\"" -msgstr "" +msgstr "??j web felhaszn??l??k ezt a szerepk??rt kapj??k \"%(role)s\"" -#: ../roundup/admin.py:1270 +#: ../roundup/admin.py:1317 #, python-format msgid "New Email users get the Roles \"%(role)s\"" -msgstr "" +msgstr "??j e-mail felhaszn??l??k ezeket a szerepk??r??ket kapj??k: \"%(role)s\"" -#: ../roundup/admin.py:1272 +#: ../roundup/admin.py:1319 #, python-format msgid "New Email users get the Role \"%(role)s\"" -msgstr "" +msgstr "??j e-mail felhaszn??l??k ezt a szerepk??rt kapj??k: \"%(role)s\"" -#: ../roundup/admin.py:1275 +#: 
../roundup/admin.py:1322 #, python-format msgid "Role \"%(name)s\":" -msgstr "" +msgstr "\"%(name)s\" szerepk??r:" -#: ../roundup/admin.py:1280 +#: ../roundup/admin.py:1327 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\": %(properties)s only)" msgstr "" -#: ../roundup/admin.py:1283 +#: ../roundup/admin.py:1330 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\" only)" msgstr "" -#: ../roundup/admin.py:1286 +#: ../roundup/admin.py:1333 #, python-format msgid " %(description)s (%(name)s)" -msgstr "" +msgstr " %(description)s (%(name)s)" -#: ../roundup/admin.py:1315 +#: ../roundup/admin.py:1362 #, python-format msgid "Unknown command \"%(command)s\" (\"help commands\" for a list)" msgstr "" +"\"%(command)s\": ismeretlen parancs (\"help commands\" parancsok " +"list??z??s??hoz)" -#: ../roundup/admin.py:1321 +#: ../roundup/admin.py:1368 #, python-format msgid "Multiple commands match \"%(command)s\": %(list)s" -msgstr "T??bb parancs is illeszkedik a megadottra \"%(command)s\": %(list)s" +msgstr "" +"T??bb parancs is illeszkedik a megadott \"%(command)s\" parancsra: %(list)s" -#: ../roundup/admin.py:1328 +#: ../roundup/admin.py:1375 msgid "Enter tracker home: " -msgstr "Add meg a hibak??vet?? k??nyvt??r??t:" +msgstr "Adja meg a hibak??vet?? k??nyvt??r??t: " # ../roundup/admin.py:1335 :1341 :1361 -#: ../roundup/admin.py:1335 -#: ../roundup/admin.py:1341 -#: ../roundup/admin.py:1361 +#: ../roundup/admin.py:1382 ../roundup/admin.py:1388 ../roundup/admin.py:1408 #, python-format msgid "Error: %(message)s" msgstr "Hiba: %(message)s" -#: ../roundup/admin.py:1349 +#: ../roundup/admin.py:1396 #, python-format msgid "Error: Couldn't open tracker: %(message)s" -msgstr "Hiba: Nem tudtam megnyitni a hibak??vet??t: %(message)s" +msgstr "Hiba: Hibak??vet?? megnyit??sa sikertelen: %(message)s" -#: ../roundup/admin.py:1374 +#: ../roundup/admin.py:1421 #, python-format msgid "" "Roundup %s ready for input.\n" "Type \"help\" for help." msgstr "" -"Roundup %s fogad??k??sz.\n" -"G??pelj \"help\"-et a seg??ts??ghez." +"A Roundup %s fogad??k??sz.\n" +"Seg??ts??g??rt g??peljen \"help\"-et." -#: ../roundup/admin.py:1379 +#: ../roundup/admin.py:1426 msgid "Note: command history and editing not available" msgstr "Megjegyz??s: a parancsok t??rt??nete ??s szerkeszt??se nem el??rhet??" -#: ../roundup/admin.py:1383 +#: ../roundup/admin.py:1430 msgid "roundup> " msgstr "roundup> " -#: ../roundup/admin.py:1385 +#: ../roundup/admin.py:1432 msgid "exit..." msgstr "kil??p??s..." -#: ../roundup/admin.py:1395 +#: ../roundup/admin.py:1442 msgid "There are unsaved changes. Commit them (y/N)? " msgstr "Vannak nem mentett v??ltoztat??sok. Elmenti ??ket (y/N)? 
" -#: ../roundup/backends/back_anydbm.py:2001 +#: ../roundup/backends/back_anydbm.py:219 +#: ../roundup/backends/sessions_dbm.py:50 +msgid "Couldn't identify database type" +msgstr "" + +#: ../roundup/backends/back_anydbm.py:245 +#, python-format +msgid "Couldn't open database - the required module '%s' is not available" +msgstr "" + +# ../roundup/backends/back_anydbm.py:795:1070 +# ../roundup/backends/back_metakit.py:567:834 +# ../roundup/backends/rdbms_common.py:1320:1549 :1267:1285 :1331:1901 +# :1755:1775 :1828:2436 :866:1601 +#: ../roundup/backends/back_anydbm.py:795 +#: ../roundup/backends/back_anydbm.py:1070 +#: ../roundup/backends/back_anydbm.py:1267 +#: ../roundup/backends/back_anydbm.py:1285 +#: ../roundup/backends/back_anydbm.py:1331 +#: ../roundup/backends/back_anydbm.py:1901 +#: ../roundup/backends/back_metakit.py:567 +#: ../roundup/backends/back_metakit.py:834 +#: ../roundup/backends/back_metakit.py:866 +#: ../roundup/backends/back_metakit.py:1601 +#: ../roundup/backends/rdbms_common.py:1320 +#: ../roundup/backends/rdbms_common.py:1549 +#: ../roundup/backends/rdbms_common.py:1755 +#: ../roundup/backends/rdbms_common.py:1775 +#: ../roundup/backends/rdbms_common.py:1828 +#: ../roundup/backends/rdbms_common.py:2436 +msgid "Database open read-only" +msgstr "" + +#: ../roundup/backends/back_anydbm.py:2003 #, python-format msgid "WARNING: invalid date tuple %r" msgstr "FIGYELEM: hib??s d??tum tuple %r" -#: ../roundup/backends/rdbms_common.py:1434 +#: ../roundup/backends/rdbms_common.py:1449 msgid "create" msgstr "l??trehoz??s" -#: ../roundup/backends/rdbms_common.py:1600 +#: ../roundup/backends/rdbms_common.py:1615 msgid "unlink" msgstr "t??rl??s" -#: ../roundup/backends/rdbms_common.py:1604 +#: ../roundup/backends/rdbms_common.py:1619 msgid "link" msgstr "kapcsol??s" -#: ../roundup/backends/rdbms_common.py:1724 +#: ../roundup/backends/rdbms_common.py:1741 msgid "set" msgstr "be??ll??t??s" -#: ../roundup/backends/rdbms_common.py:1748 +#: ../roundup/backends/rdbms_common.py:1765 msgid "retired" msgstr "visszavonult" -#: ../roundup/backends/rdbms_common.py:1778 +#: ../roundup/backends/rdbms_common.py:1795 msgid "restored" msgstr "vissza??ll??tott" #: ../roundup/cgi/actions.py:58 #, python-format msgid "You do not have permission to %(action)s the %(classname)s class." -msgstr "" +msgstr "Nincs jogosults??ga %(action)s m??veletre a(z) %(classname)s oszt??lyon." #: ../roundup/cgi/actions.py:89 msgid "No type specified" @@ -849,11 +908,11 @@ #: ../roundup/cgi/actions.py:97 #, python-format msgid "\"%(input)s\" is not an ID (%(classname)s ID required)" -msgstr "" +msgstr "\"%(input)s\" nem azonos??t?? (%(classname)s azonos??t?? 
sz??ks??ges)" #: ../roundup/cgi/actions.py:117 msgid "You may not retire the admin or anonymous user" -msgstr "Az admin ??s anonymous felhaszn??l??kat nem nyugd??jazhatod" +msgstr "Az admin ??s anonymous felhaszn??l??kat nem lehet visszavonultatni" #: ../roundup/cgi/actions.py:124 #, python-format @@ -861,127 +920,127 @@ msgstr "%(classname)s %(itemid)s visszavon??sra ker??lt" # ../roundup/cgi/actions.py:174 :202 -#: ../roundup/cgi/actions.py:174 -#: ../roundup/cgi/actions.py:202 +#: ../roundup/cgi/actions.py:169 ../roundup/cgi/actions.py:197 msgid "You do not have permission to edit queries" -msgstr "Nincs jogod a lek??rdez??sek szerkeszt??s??hez" +msgstr "Nincs jogosults??ga a lek??rdez??sek szerkeszt??s??hez" # ../roundup/cgi/actions.py:180 :209 -#: ../roundup/cgi/actions.py:180 -#: ../roundup/cgi/actions.py:209 +#: ../roundup/cgi/actions.py:175 ../roundup/cgi/actions.py:204 msgid "You do not have permission to store queries" -msgstr "Nincs jogod a lek??rdez??sek t??rol??s??hoz" +msgstr "Nincs jogosults??ga a lek??rdez??sek t??rol??s??hoz" -#: ../roundup/cgi/actions.py:297 +#: ../roundup/cgi/actions.py:310 #, python-format msgid "Not enough values on line %(line)s" msgstr "Nincs el??g ??rt??k a(z) %(line)s soron" -#: ../roundup/cgi/actions.py:344 +#: ../roundup/cgi/actions.py:357 msgid "Items edited OK" msgstr "Az elemek sikeresen szerkesztve" -#: ../roundup/cgi/actions.py:404 +#: ../roundup/cgi/actions.py:416 #, python-format msgid "%(class)s %(id)s %(properties)s edited ok" msgstr "%(class)s %(id)s %(properties)s sikeresen szerkesztve" -#: ../roundup/cgi/actions.py:407 +#: ../roundup/cgi/actions.py:419 #, python-format msgid "%(class)s %(id)s - nothing changed" -msgstr "%(class)s %(id)s - semmi sem v??ltzozott" +msgstr "%(class)s %(id)s - nincs v??ltoz??s" -#: ../roundup/cgi/actions.py:419 +#: ../roundup/cgi/actions.py:431 #, python-format msgid "%(class)s %(id)s created" msgstr "%(class)s %(id)s l??trehozva" -#: ../roundup/cgi/actions.py:451 +#: ../roundup/cgi/actions.py:463 #, python-format msgid "You do not have permission to edit %(class)s" -msgstr "Nincs jogod szerkeszteni %(class)s-t" +msgstr "Nincs jogosults??ga szerkeszteni %(class)s-t" -#: ../roundup/cgi/actions.py:463 +#: ../roundup/cgi/actions.py:475 #, python-format msgid "You do not have permission to create %(class)s" -msgstr "Nincs jogod l??trehozni %(class)s-t" +msgstr "Nincs jogosults??ga l??trehozni %(class)s-t" -#: ../roundup/cgi/actions.py:487 +#: ../roundup/cgi/actions.py:499 msgid "You do not have permission to edit user roles" -msgstr "Nincs jogod szerkeszteni a felhaszn??l??i szerepk??r??ket" +msgstr "Nincs jogosults??ga a felhaszn??l??i szerepk??r??k szerkeszt??s??hez" -#: ../roundup/cgi/actions.py:537 +#: ../roundup/cgi/actions.py:549 #, python-format -msgid "Edit Error: someone else has edited this %s (%s). View their changes in a new window." -msgstr "Szerkeszt??si hiba: valaki m??r szerkesztette %s (%s). N??zd meg a v??ltoztat??sait egy ??j ablakban." +msgid "" +"Edit Error: someone else has edited this %s (%s). View their changes in a new window." +msgstr "" +"Szerkeszt??si hiba: valaki m??r szerkesztette %s (%s). N??zze meg a v??ltoztat??sait egy ??j ablakban." 
-#: ../roundup/cgi/actions.py:565 +#: ../roundup/cgi/actions.py:577 #, python-format msgid "Edit Error: %s" msgstr "Szerkeszt??si hiba: %s" # ../roundup/cgi/actions.py:596 :607 :778 :797 -#: ../roundup/cgi/actions.py:596 -#: ../roundup/cgi/actions.py:607 -#: ../roundup/cgi/actions.py:778 -#: ../roundup/cgi/actions.py:797 +#: ../roundup/cgi/actions.py:608 ../roundup/cgi/actions.py:619 +#: ../roundup/cgi/actions.py:790 ../roundup/cgi/actions.py:809 #, python-format msgid "Error: %s" msgstr "Hiba: %s" -#: ../roundup/cgi/actions.py:633 +#: ../roundup/cgi/actions.py:645 msgid "" "Invalid One Time Key!\n" -"(a Mozilla bug may cause this message to show up erroneously, please check your email)" +"(a Mozilla bug may cause this message to show up erroneously, please check " +"your email)" msgstr "" -#: ../roundup/cgi/actions.py:675 +#: ../roundup/cgi/actions.py:687 #, python-format msgid "Password reset and email sent to %s" -msgstr "A jelsz?? t??r??lve lett ??s emailt k??ldt??nk %s-nek" +msgstr "A jelsz?? t??rl??sre ker??lt ??s e-mailt k??ldt??nk %s-nek" -#: ../roundup/cgi/actions.py:684 +#: ../roundup/cgi/actions.py:696 msgid "Unknown username" msgstr "Ismeretlen felhaszn??l??n??v" -#: ../roundup/cgi/actions.py:692 +#: ../roundup/cgi/actions.py:704 msgid "Unknown email address" -msgstr "Ismeretlen email c??m" +msgstr "Ismeretlen e-mail c??m" -#: ../roundup/cgi/actions.py:697 +#: ../roundup/cgi/actions.py:709 msgid "You need to specify a username or address" -msgstr "Meg kell adnond egy felhaszn??l?? nevet vagy c??met" +msgstr "Meg kell adni egy felhaszn??l??nevet vagy c??met" -#: ../roundup/cgi/actions.py:722 +#: ../roundup/cgi/actions.py:734 #, python-format msgid "Email sent to %s" -msgstr "Email elk??ldve %s-nek" +msgstr "E-mail elk??ldve %s-nek" -#: ../roundup/cgi/actions.py:741 +#: ../roundup/cgi/actions.py:753 msgid "You are now registered, welcome!" -msgstr "Regisztr??lva lett??l, isten hozott!" +msgstr "Regisztr??l??s sikeres, isten hozott!" -#: ../roundup/cgi/actions.py:786 +#: ../roundup/cgi/actions.py:798 msgid "It is not permitted to supply roles at registration." -msgstr "Szerepk??r??k nem adhat??k meg regisztr??l??skor." +msgstr "Regisztr??l??skor nem adhat??k meg szerepk??r??k." -#: ../roundup/cgi/actions.py:878 +#: ../roundup/cgi/actions.py:890 msgid "You are logged out" -msgstr "Kijelentkezt??l" +msgstr "Kijelentkezett" -#: ../roundup/cgi/actions.py:895 +#: ../roundup/cgi/actions.py:907 msgid "Username required" msgstr "A felhaszn??l??n??v sz??ks??ges" # ../roundup/cgi/actions.py:930 :934 -#: ../roundup/cgi/actions.py:930 -#: ../roundup/cgi/actions.py:934 +#: ../roundup/cgi/actions.py:942 ../roundup/cgi/actions.py:946 msgid "Invalid login" msgstr "Hib??s bejelentkez??s" -#: ../roundup/cgi/actions.py:940 +#: ../roundup/cgi/actions.py:952 msgid "You do not have permission to login" -msgstr "Nincs jogod bejelentkezni" +msgstr "Nincs jogosults??ga bejelentkezni" #: ../roundup/cgi/cgitb.py:49 #, python-format @@ -990,7 +1049,7 @@ "

%(exc_type)s: %(exc_value)s\n"
 "Debugging information follows"
 msgstr ""
-"Template Hiba\n"
+"Sablon Hiba\n"
 "%(exc_type)s: %(exc_value)s\n"
 "Debug információk alább"
@@ -1002,7 +1061,7 @@
 #: ../roundup/cgi/cgitb.py:67
 #, python-format
 msgid "
  • Looking for \"%(name)s\", current path:
      %(path)s
  • " -msgstr "
  • Keresett \"%(name)s\", aktuális elérési út:
      %(path)s
  • " +msgstr "
  • \"%(name)s\" keresése, aktuális elérési út:
      %(path)s
  • " #: ../roundup/cgi/cgitb.py:71 #, python-format @@ -1012,7 +1071,7 @@ #: ../roundup/cgi/cgitb.py:76 #, python-format msgid "A problem occurred in your template \"%s\"." -msgstr "Probl??ma mer??lt fel a(z) \"%s\" template-el." +msgstr "Probl??ma mer??lt fel a(z) \"%s\" sablonnal." #: ../roundup/cgi/cgitb.py:84 #, python-format @@ -1036,12 +1095,20 @@ msgstr "%(exc_type)s: %(exc_value)s" #: ../roundup/cgi/cgitb.py:120 -msgid "

    A problem occurred while running a Python script. Here is the sequence of function calls leading up to the error, with the most recent (innermost) call first. The exception attributes are:" -msgstr "" +msgid "" +"

    A problem occurred while running a Python script. Here is the sequence of " +"function calls leading up to the error, with the most recent (innermost) " +"call first. The exception attributes are:" +msgstr "" +"

    Probl??ma mer??lt fel egy Python parancsf??jl futtat??sa sor??n. Al??bb " +"megtekinthet?? a hib??hoz vezet?? f??ggv??nyh??v??sok sora, a legut??bbi (legbels??) " +"h??v??s l??that?? legel??sz??r. A kiv??tel tulajdons??gai:" #: ../roundup/cgi/cgitb.py:129 msgid "<file is None - probably inside eval or exec>" -msgstr "<file is None - probably inside eval or exec>" +msgstr "" +"<A f??jl None ??rt??k?? - feltehet??leg eval vagy exec " +"utas??t??son bel??l>" #: ../roundup/cgi/cgitb.py:138 #, python-format @@ -1049,12 +1116,11 @@ msgstr "%s-ban" # ../roundup/cgi/cgitb.py:172 :178 -#: ../roundup/cgi/cgitb.py:172 -#: ../roundup/cgi/cgitb.py:178 +#: ../roundup/cgi/cgitb.py:172 ../roundup/cgi/cgitb.py:178 msgid "undefined" msgstr "nem defini??lt" -#: ../roundup/cgi/client.py:49 +#: ../roundup/cgi/client.py:51 msgid "" "An error has occurred\n" "

An error has occurred\n"
 "A problem was encountered processing your request.\n"
 "The tracker maintainers have been notified of the problem.\n"
 ""
 msgstr ""
-"Hibat történt\n"
+"Hiba történt\n"
 "Hiba történt\n"
-"Probléma merült fel kérésének feldolgozása közben.\n"
 "A hibakövető karbantartóit értesítést kaptak a problémáról.\n"
+"
    \n" "" -#: ../roundup/cgi/client.py:308 +#: ../roundup/cgi/client.py:377 msgid "Form Error: " -msgstr "??rlap hiba:" +msgstr "??rlap hiba: " -#: ../roundup/cgi/client.py:363 +#: ../roundup/cgi/client.py:432 #, python-format msgid "Unrecognized charset: %r" msgstr "Ismeretlen karakterk??szlet: %r" -#: ../roundup/cgi/client.py:491 +#: ../roundup/cgi/client.py:560 msgid "Anonymous users are not allowed to use the web interface" msgstr "Anonim felhaszn??l??k nem haszn??lhatj??k a webes fel??letet" -#: ../roundup/cgi/client.py:646 +#: ../roundup/cgi/client.py:715 msgid "You are not allowed to view this file." -msgstr "Nem n??zheted meg ezt a f??jlt." +msgstr "Nem n??zheti meg ezt a f??jlt." -#: ../roundup/cgi/client.py:738 +#: ../roundup/cgi/client.py:808 #, python-format msgid "%(starttag)sTime elapsed: %(seconds)fs%(endtag)s\n" msgstr "%(starttag)sEltelt id??: %(seconds)fs%(endtag)s\n" -#: ../roundup/cgi/client.py:742 +#: ../roundup/cgi/client.py:812 #, python-format -msgid "%(starttag)sCache hits: %(cache_hits)d, misses %(cache_misses)d. Loading items: %(get_items)f secs. Filtering: %(filtering)f secs.%(endtag)s\n" -msgstr "%(starttag)sCache tal??latok: %(cache_hits)d, t??ved??s %(cache_misses)d. Elemek bet??lt??se: %(get_items)f mp. Sz??r??s: %(filtering)f mp.%(endtag)s\n" +msgid "" +"%(starttag)sCache hits: %(cache_hits)d, misses %(cache_misses)d. Loading " +"items: %(get_items)f secs. Filtering: %(filtering)f secs.%(endtag)s\n" +msgstr "" +"%(starttag)sCache tal??latok: %(cache_hits)d, t??ved??s %(cache_misses)d. " +"Elemek bet??lt??se: %(get_items)f mp. Sz??r??s: %(filtering)f mp.%(endtag)s\n" #: ../roundup/cgi/form_parser.py:283 -#, python-format -msgid "link \"%(key)s\" value \"%(value)s\" not a designator" -msgstr "" +#, fuzzy, python-format +msgid "link \"%(key)s\" value \"%(entry)s\" not a designator" +msgstr "A(z) \"%(value)s\" ??rt??k?? \"%(key)s\" csatol??s nem teljes n??v" -#: ../roundup/cgi/form_parser.py:290 +#: ../roundup/cgi/form_parser.py:301 #, python-format msgid "%(class)s %(property)s is not a link or multilink property" +msgstr "A(y) %(class)s %(property)s nem link vagy multilink t??pus?? tulajdons??g" + +#: ../roundup/cgi/form_parser.py:313 +#, fuzzy, python-format +msgid "" +"The form action claims to require property \"%(property)s\" which doesn't " +"exist" msgstr "" +"%(action)s m??veletet k??v??n a \"%(property)s\" tulajdons??gon v??gezni, de az " +"nem l??tezik" -#: ../roundup/cgi/form_parser.py:312 +#: ../roundup/cgi/form_parser.py:335 #, python-format -msgid "You have submitted a %(action)s action for the property \"%(property)s\" which doesn't exist" +msgid "" +"You have submitted a %(action)s action for the property \"%(property)s\" " +"which doesn't exist" msgstr "" +"%(action)s m??veletet k??v??n a \"%(property)s\" tulajdons??gon v??gezni, de az " +"nem l??tezik" # ../roundup/cgi/form_parser.py:331 :357 -#: ../roundup/cgi/form_parser.py:331 -#: ../roundup/cgi/form_parser.py:357 +#: ../roundup/cgi/form_parser.py:354 ../roundup/cgi/form_parser.py:380 #, python-format msgid "You have submitted more than one value for the %s property" -msgstr "" +msgstr "Egyn??l t??bb ??rt??ket adott meg a(z) %s tulajdons??ghoz" # ../roundup/cgi/form_parser.py:354 :360 -#: ../roundup/cgi/form_parser.py:354 -#: ../roundup/cgi/form_parser.py:360 +#: ../roundup/cgi/form_parser.py:377 ../roundup/cgi/form_parser.py:383 msgid "Password and confirmation text do not match" -msgstr "" +msgstr "A jelsz?? 
??s a meger??s??t??s nem egyezik" -#: ../roundup/cgi/form_parser.py:395 +#: ../roundup/cgi/form_parser.py:418 #, python-format msgid "property \"%(propname)s\": \"%(value)s\" not currently in list" -msgstr "" +msgstr "\"%(propname)s\" tulajdons??g: \"%(value)s\" jelenleg nincs a list??ban" -#: ../roundup/cgi/form_parser.py:512 +#: ../roundup/cgi/form_parser.py:551 #, python-format msgid "Required %(class)s property %(property)s not supplied" msgid_plural "Required %(class)s properties %(property)s not supplied" -msgstr[0] "" +msgstr[0] "Nincs megadva a(z) %(class)s k??telez?? %(property)s tulajdons??ga" msgstr[1] "" +"Nincsenek megadva a(z) %(class)s k??telez?? %(property)s tulajdons??gai" -#: ../roundup/cgi/form_parser.py:535 +#: ../roundup/cgi/form_parser.py:574 msgid "File is empty" -msgstr "" +msgstr "A f??jl ??res" -#: ../roundup/cgi/templating.py:72 +#: ../roundup/cgi/templating.py:77 #, python-format msgid "You are not allowed to %(action)s items of class %(class)s" msgstr "" +"Nincs jogosults??ga a(z) %(class)s oszt??ly elemein %(action)s m??veletet " +"v??grehajtani" -#: ../roundup/cgi/templating.py:627 +#: ../roundup/cgi/templating.py:657 msgid "(list)" -msgstr "" +msgstr "(lista)" -#: ../roundup/cgi/templating.py:696 +#: ../roundup/cgi/templating.py:726 msgid "Submit New Entry" -msgstr "" +msgstr "L??trehoz??s" # ../roundup/cgi/templating.py:710 :829 :1236 :1257 :1304 :1327 :1361 :1400 # :1453 :1470 :1549 :1569 :1587 :1619 :1629 :1683 :1875 -#: ../roundup/cgi/templating.py:710 -#: ../roundup/cgi/templating.py:829 -#: ../roundup/cgi/templating.py:1236 -#: ../roundup/cgi/templating.py:1257 -#: ../roundup/cgi/templating.py:1304 -#: ../roundup/cgi/templating.py:1327 -#: ../roundup/cgi/templating.py:1361 -#: ../roundup/cgi/templating.py:1400 -#: ../roundup/cgi/templating.py:1453 -#: ../roundup/cgi/templating.py:1470 -#: ../roundup/cgi/templating.py:1549 -#: ../roundup/cgi/templating.py:1569 -#: ../roundup/cgi/templating.py:1587 -#: ../roundup/cgi/templating.py:1619 -#: ../roundup/cgi/templating.py:1629 -#: ../roundup/cgi/templating.py:1683 -#: ../roundup/cgi/templating.py:1875 +#: ../roundup/cgi/templating.py:740 ../roundup/cgi/templating.py:873 +#: ../roundup/cgi/templating.py:1294 ../roundup/cgi/templating.py:1323 +#: ../roundup/cgi/templating.py:1343 ../roundup/cgi/templating.py:1356 +#: ../roundup/cgi/templating.py:1407 ../roundup/cgi/templating.py:1430 +#: ../roundup/cgi/templating.py:1466 ../roundup/cgi/templating.py:1503 +#: ../roundup/cgi/templating.py:1556 ../roundup/cgi/templating.py:1573 +#: ../roundup/cgi/templating.py:1657 ../roundup/cgi/templating.py:1677 +#: ../roundup/cgi/templating.py:1695 ../roundup/cgi/templating.py:1727 +#: ../roundup/cgi/templating.py:1737 ../roundup/cgi/templating.py:1789 +#: ../roundup/cgi/templating.py:1978 msgid "[hidden]" -msgstr "" +msgstr "[rejtett]" -#: ../roundup/cgi/templating.py:711 +#: ../roundup/cgi/templating.py:741 msgid "New node - no history" -msgstr "" +msgstr "??j bejegyz??s - nincs t??rt??net" -#: ../roundup/cgi/templating.py:811 +#: ../roundup/cgi/templating.py:855 msgid "Submit Changes" -msgstr "" +msgstr "V??ltoz??sok ment??se" -#: ../roundup/cgi/templating.py:893 +#: ../roundup/cgi/templating.py:937 msgid "The indicated property no longer exists" -msgstr "" +msgstr "A jelzett tulajdons??g m??r nem l??tezik" -#: ../roundup/cgi/templating.py:894 +#: ../roundup/cgi/templating.py:938 #, python-format msgid "%s: %s\n" -msgstr "" +msgstr "%s: %s\n" -#: ../roundup/cgi/templating.py:907 +#: ../roundup/cgi/templating.py:951 #, 
python-format msgid "The linked class %(classname)s no longer exists" -msgstr "" +msgstr "A csatolt %(classname)s oszt??ly m??r nem l??tezik" # ../roundup/cgi/templating.py:940 :964 -#: ../roundup/cgi/templating.py:940 -#: ../roundup/cgi/templating.py:964 +#: ../roundup/cgi/templating.py:984 ../roundup/cgi/templating.py:1008 msgid "The linked node no longer exists" -msgstr "" - -# ../roundup/cgi/templating.py:1006 :1404 :1425 :1431 -#: ../roundup/cgi/templating.py:1006 -#: ../roundup/cgi/templating.py:1404 -#: ../roundup/cgi/templating.py:1425 -#: ../roundup/cgi/templating.py:1431 -msgid "No" -msgstr "Nem" +msgstr "A csatolt bejegyz??s m??r nem l??tezik" -# ../roundup/cgi/templating.py:1006 :1404 :1423 :1428 -#: ../roundup/cgi/templating.py:1006 -#: ../roundup/cgi/templating.py:1404 -#: ../roundup/cgi/templating.py:1423 -#: ../roundup/cgi/templating.py:1428 -msgid "Yes" -msgstr "Igen" - -#: ../roundup/cgi/templating.py:1017 +#: ../roundup/cgi/templating.py:1061 #, python-format msgid "%s: (no value)" -msgstr "" +msgstr "%s: (nincs ??rt??k)" -#: ../roundup/cgi/templating.py:1029 -msgid "This event is not handled by the history display!" +#: ../roundup/cgi/templating.py:1073 +msgid "" +"This event is not handled by the history display!" msgstr "" +"Az el??zm??nyek k??perny?? nem kezeli ezt az esem??nyt!" -#: ../roundup/cgi/templating.py:1041 +#: ../roundup/cgi/templating.py:1085 msgid "Note:" -msgstr "" +msgstr "Megjegyz??s:" -#: ../roundup/cgi/templating.py:1050 +#: ../roundup/cgi/templating.py:1094 msgid "History" -msgstr "" +msgstr "El??zm??nyek" -#: ../roundup/cgi/templating.py:1052 +#: ../roundup/cgi/templating.py:1096 msgid "Date" -msgstr "" +msgstr "D??tum" -#: ../roundup/cgi/templating.py:1053 +#: ../roundup/cgi/templating.py:1097 msgid "User" -msgstr "" +msgstr "Szerz??" -#: ../roundup/cgi/templating.py:1054 +#: ../roundup/cgi/templating.py:1098 msgid "Action" -msgstr "" +msgstr "M??velet" -#: ../roundup/cgi/templating.py:1055 +#: ../roundup/cgi/templating.py:1099 msgid "Args" -msgstr "" +msgstr "Tulajdons??gok" -#: ../roundup/cgi/templating.py:1097 +#: ../roundup/cgi/templating.py:1141 #, python-format msgid "Copy of %(class)s %(id)s" -msgstr "" +msgstr "A(z) %(class)s %(id)s m??solata" -#: ../roundup/cgi/templating.py:1331 +#: ../roundup/cgi/templating.py:1434 msgid "*encrypted*" -msgstr "" +msgstr "*titkos??tva*" -#: ../roundup/cgi/templating.py:1514 -msgid "default value for DateHTMLProperty must be either DateHTMLProperty or string date representation." +# ../roundup/cgi/templating.py:1006 :1404 :1425 :1431 +#: ../roundup/cgi/templating.py:1507 ../roundup/cgi/templating.py:1528 +#: ../roundup/cgi/templating.py:1534 ../roundup/cgi/templating.py:1050 +msgid "No" +msgstr "Nem" + +# ../roundup/cgi/templating.py:1006 :1404 :1423 :1428 +#: ../roundup/cgi/templating.py:1507 ../roundup/cgi/templating.py:1526 +#: ../roundup/cgi/templating.py:1531 ../roundup/cgi/templating.py:1050 +msgid "Yes" +msgstr "Igen" + +#: ../roundup/cgi/templating.py:1620 +msgid "" +"default value for DateHTMLProperty must be either DateHTMLProperty or string " +"date representation." msgstr "" +"a DateHTMLProperty alap??rt??ke DateHTMLProperty vagy sz??veges d??tumle??r??s " +"t??pus?? kell legyen." -#: ../roundup/cgi/templating.py:1674 +#: ../roundup/cgi/templating.py:1780 #, python-format msgid "Attempt to look up %(attr)s on a missing value" -msgstr "" +msgstr "K??s??rlet %(attr)s keres??s??re egy hi??nyz?? 
??rt??ken" -#: ../roundup/cgi/templating.py:1750 +#: ../roundup/cgi/templating.py:1853 #, python-format msgid "" -msgstr "" +msgstr "" -#: ../roundup/date.py:186 -msgid "Not a date spec: \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or \"yyyy-mm-dd.HH:MM:SS.SSS\"" +#: ../roundup/date.py:300 +msgid "" +"Not a date spec: \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or " +"\"yyyy-mm-dd.HH:MM:SS.SSS\"" msgstr "" +"Nem d??tum specifik??ci??: \"????????-hh-nn\", \"hh-nn\", \"????:PP\", \"????:PP:SS\" " +"vagy \"????????-hh-nn.????:PP:SS.SSS\"" -#: ../roundup/date.py:240 +#: ../roundup/date.py:359 #, python-format -msgid "%r not a date / time spec \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or \"yyyy-mm-dd.HH:MM:SS.SSS\"" +msgid "" +"%r not a date / time spec \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" " +"or \"yyyy-mm-dd.HH:MM:SS.SSS\"" msgstr "" +"%r nem d??tum / id?? specifik??ci?? \"????????-hh-nn\", \"hh-nn\", \"????:PP\", \"????:" +"PP:SS\" vagy \"????????-hh-nn.????:PP:SS.SSS\"" -#: ../roundup/date.py:538 -msgid "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS] [date spec]" +#: ../roundup/date.py:666 +msgid "" +"Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS] [date spec]" msgstr "" +"Nem id??k??z specifik??ci??: [+-] [#??] [#h] [#w] [#n] [[[??]??:PP]:SS] [d??tum " +"t??pus]" -#: ../roundup/date.py:557 +#: ../roundup/date.py:685 msgid "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS]" -msgstr "" +msgstr "Nem id??k??z specifik??ci??: [+-] [#??] [#h] [#w] [#n] [[[??]??:PP]:SS]" -#: ../roundup/date.py:694 +#: ../roundup/date.py:822 #, python-format msgid "%(number)s year" msgid_plural "%(number)s years" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s ??ve" +msgstr[1] "%(number)s ??ve" -#: ../roundup/date.py:698 +#: ../roundup/date.py:826 #, python-format msgid "%(number)s month" msgid_plural "%(number)s months" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s h??napja" +msgstr[1] "%(number)s h??napja" -#: ../roundup/date.py:702 +#: ../roundup/date.py:830 #, python-format msgid "%(number)s week" msgid_plural "%(number)s weeks" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s hete" +msgstr[1] "%(number)s hete" -#: ../roundup/date.py:706 +#: ../roundup/date.py:834 #, python-format msgid "%(number)s day" msgid_plural "%(number)s days" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s napja" +msgstr[1] "%(number)s napja" -#: ../roundup/date.py:710 +#: ../roundup/date.py:838 msgid "tomorrow" -msgstr "" +msgstr "holnap" -#: ../roundup/date.py:712 +#: ../roundup/date.py:840 msgid "yesterday" -msgstr "" +msgstr "tegnap" -#: ../roundup/date.py:715 +#: ../roundup/date.py:843 #, python-format msgid "%(number)s hour" msgid_plural "%(number)s hours" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s ??r??ja" +msgstr[1] "%(number)s ??r??ja" -#: ../roundup/date.py:719 +#: ../roundup/date.py:847 msgid "an hour" -msgstr "" +msgstr "egy ??r??ja" -#: ../roundup/date.py:721 +#: ../roundup/date.py:849 msgid "1 1/2 hours" -msgstr "" +msgstr "1 1/2 ??r??ja" -#: ../roundup/date.py:723 +#: ../roundup/date.py:851 #, python-format msgid "1 %(number)s/4 hours" msgid_plural "1 %(number)s/4 hours" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "1 %(number)s/4 ??r??ja" +msgstr[1] "1 %(number)s/4 ??r??ja" -#: ../roundup/date.py:727 +#: ../roundup/date.py:855 msgid "in a moment" -msgstr "" +msgstr "egy pillanat" -#: ../roundup/date.py:729 +#: ../roundup/date.py:857 msgid "just now" -msgstr "" +msgstr "??pp most" -#: ../roundup/date.py:732 +#: 
../roundup/date.py:860 msgid "1 minute" -msgstr "" +msgstr "1 perce" -#: ../roundup/date.py:735 +#: ../roundup/date.py:863 #, python-format msgid "%(number)s minute" msgid_plural "%(number)s minutes" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s perce" +msgstr[1] "%(number)s perce" -#: ../roundup/date.py:738 +#: ../roundup/date.py:866 msgid "1/2 an hour" -msgstr "" +msgstr "1/2 ??r??ja" -#: ../roundup/date.py:740 +#: ../roundup/date.py:868 #, python-format msgid "%(number)s/4 hour" msgid_plural "%(number)s/4 hours" -msgstr[0] "" -msgstr[1] "" +msgstr[0] "%(number)s/4 ??r??ja" +msgstr[1] "%(number)s/4 ??r??ja" -#: ../roundup/date.py:744 +#: ../roundup/date.py:872 #, python-format msgid "%s ago" -msgstr "" +msgstr "%s" -#: ../roundup/date.py:746 +#: ../roundup/date.py:874 #, python-format msgid "in %s" +msgstr "%s-ban" + +#: ../roundup/hyperdb.py:87 +#, fuzzy, python-format +msgid "property %s: %s" +msgstr "Hiba: %s: %s" + +#: ../roundup/hyperdb.py:107 +#, python-format +msgid "property %s: %r is an invalid date (%s)" +msgstr "" + +#: ../roundup/hyperdb.py:124 +#, python-format +msgid "property %s: %r is an invalid date interval (%s)" +msgstr "" + +#: ../roundup/hyperdb.py:219 +#, fuzzy, python-format +msgid "property %s: %r is not currently an element" +msgstr "\"%(propname)s\" tulajdons??g: \"%(value)s\" jelenleg nincs a list??ban" + +#: ../roundup/hyperdb.py:263 +#, python-format +msgid "property %s: %r is not a number" +msgstr "" + +#: ../roundup/hyperdb.py:276 +#, python-format +msgid "\"%s\" not a node designator" +msgstr "" + +# ../roundup/hyperdb.py:949:957 +#: ../roundup/hyperdb.py:949 ../roundup/hyperdb.py:957 +#, python-format +msgid "Not a property name: %s" +msgstr "" + +#: ../roundup/hyperdb.py:1240 +#, python-format +msgid "property %s: %r is not a %s." +msgstr "" + +#: ../roundup/hyperdb.py:1243 +#, python-format +msgid "you may only enter ID values for property %s" +msgstr "" + +#: ../roundup/hyperdb.py:1273 +#, python-format +msgid "%r is not a property of %s" msgstr "" #: ../roundup/init.py:134 @@ -1395,14 +1535,51 @@ "WARNING: directory '%s'\n" "\tcontains old-style template - ignored" msgstr "" +"FIGYELEM: a(z) '%s' k??nyvt??r\n" +"\tr??gi t??pus?? sablont tartalmaz - ignor??lva" + +# ../roundup/mailgw.py:199:211 +#: ../roundup/mailgw.py:199 ../roundup/mailgw.py:211 +#, python-format +msgid "Message signed with unknown key: %s" +msgstr "" + +#: ../roundup/mailgw.py:202 +#, python-format +msgid "Message signed with an expired key: %s" +msgstr "" + +#: ../roundup/mailgw.py:205 +#, python-format +msgid "Message signed with a revoked key: %s" +msgstr "" + +#: ../roundup/mailgw.py:208 +msgid "Invalid PGP signature detected." +msgstr "" -#: ../roundup/mailgw.py:586 +#: ../roundup/mailgw.py:404 +msgid "Unknown multipart/encrypted version." +msgstr "" + +#: ../roundup/mailgw.py:413 +msgid "Unable to decrypt your message." +msgstr "" + +#: ../roundup/mailgw.py:442 +msgid "No PGP signature found in message." 
+msgstr "" + +#: ../roundup/mailgw.py:749 msgid "" "\n" "Emails to Roundup trackers must include a Subject: line!\n" msgstr "" +"\n" +"A Roundup hibak??vet??kh??z k??ld??tt e-maileknek tartalmazniuk kell egy Subject: " +"sort!\n" -#: ../roundup/mailgw.py:674 +#: ../roundup/mailgw.py:873 #, python-format msgid "" "\n" @@ -1419,29 +1596,59 @@ "Subject was: '%(subject)s'\n" msgstr "" -#: ../roundup/mailgw.py:705 -#, python-format +#: ../roundup/mailgw.py:911 +#, fuzzy, python-format msgid "" "\n" -"The class name you identified in the subject line (\"%(classname)s\") does not exist in the\n" -"database.\n" +"The class name you identified in the subject line (\"%(classname)s\") does\n" +"not exist in the database.\n" "\n" "Valid class names are: %(validname)s\n" "Subject was: \"%(subject)s\"\n" msgstr "" +"\n" +"A t??rgy sorban megadott oszt??ly neve (\"%(classname)s\") nem l??tezik\n" +"az adatb??zisban.\n" +"\n" +"Az ??rv??nyes oszt??lynevek: %(validname)s\n" +"A t??rgy ez volt: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:733 +#: ../roundup/mailgw.py:919 #, python-format msgid "" "\n" +"You did not identify a class name in the subject line and there is no\n" +"default set for this tracker. The subject must contain a class name or\n" +"designator to indicate the 'topic' of the message. For example:\n" +" Subject: [issue] This is a new issue\n" +" - this will create a new issue in the tracker with the title 'This is\n" +" a new issue'.\n" +" Subject: [issue1234] This is a followup to issue 1234\n" +" - this will append the message's contents to the existing issue 1234\n" +" in the tracker.\n" +"\n" +"Subject was: '%(subject)s'\n" +msgstr "" + +#: ../roundup/mailgw.py:960 +#, fuzzy, python-format +msgid "" +"\n" "I cannot match your message to a node in the database - you need to either\n" -"supply a full designator (with number, eg \"[issue123]\" or keep the\n" +"supply a full designator (with number, eg \"[issue123]\") or keep the\n" "previous subject title intact so I can match that.\n" "\n" "Subject was: \"%(subject)s\"\n" msgstr "" +"\n" +"Nem siker??lt az ??zenetet p??ros??tani egy adatb??zisban szerepl?? ??ggal - vagy " +"meg kell\n" +"adnia egy teljes nevet (sz??mmal egy??tt, pl. \"[issue123]\"vagy meg kell\n" +"tartania a teljes el??z?? c??met, hogy ahhoz lehessen p??ros??tani.\n" +"\n" +"A t??rgy ez volt: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:766 +#: ../roundup/mailgw.py:993 #, python-format msgid "" "\n" @@ -1450,8 +1657,13 @@ "\n" "Subject was: \"%(subject)s\"\n" msgstr "" +"\n" +"Az ??zenet t??rgysor??ban megadott ??g\n" +"(\"%(nodeid)s\") nem l??tezik.\n" +"\n" +"A t??rgy ez volt: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:794 +#: ../roundup/mailgw.py:1021 #, python-format msgid "" "\n" @@ -1459,8 +1671,12 @@ "%(mailadmin)s and have them fix the incorrect class specified as:\n" " %(current_class)s\n" msgstr "" +"\n" +"A mail ??tj??r?? nincs helyesen be??ll??tva. Vegye fel a kapcsolatot\n" +"%(mailadmin)s-nal ??s jav??ttassa ki a hib??san megadott oszt??lyt:\n" +" %(current_class)s\n" -#: ../roundup/mailgw.py:817 +#: ../roundup/mailgw.py:1044 #, python-format msgid "" "\n" @@ -1468,31 +1684,39 @@ "%(mailadmin)s and have them fix the incorrect properties:\n" " %(errors)s\n" msgstr "" +"\n" +"A mail ??tj??r?? nincs helyesen be??ll??tva. 
Vegye fel a kapcsolatot\n" +"%(mailadmin)s-nal ??s jav??ttassa ki a hib??s tulajdons??gokat:\n" +" %(errors)s\n" -#: ../roundup/mailgw.py:847 -#, python-format +#: ../roundup/mailgw.py:1084 +#, fuzzy, python-format msgid "" "\n" -"You are not a registered user.\n" +"You are not a registered user.%(registration_info)s\n" "\n" "Unknown address: %(from_address)s\n" msgstr "" +"\n" +"??n nem bejegyzett felhaszn??l??.\n" +"\n" +"Ismeretlen c??m: %(from_address)s\n" -#: ../roundup/mailgw.py:855 +#: ../roundup/mailgw.py:1092 msgid "You are not permitted to access this tracker." -msgstr "" +msgstr "Ehhez a hibak??vet??h??z hozz??f??r??se nem enged??lyezett." -#: ../roundup/mailgw.py:862 +#: ../roundup/mailgw.py:1099 #, python-format msgid "You are not permitted to edit %(classname)s." -msgstr "" +msgstr "Nincs jogosults??ga %(classname)s szerkeszt??s??hez." -#: ../roundup/mailgw.py:866 +#: ../roundup/mailgw.py:1103 #, python-format msgid "You are not permitted to create %(classname)s." -msgstr "" +msgstr "Nincs jogosults??ga %(classname)s l??trehoz??s??hoz." -#: ../roundup/mailgw.py:913 +#: ../roundup/mailgw.py:1150 #, python-format msgid "" "\n" @@ -1501,148 +1725,189 @@ "\n" "Subject was: \"%(subject)s\"\n" msgstr "" +"\n" +"Probl??ma mer??lt fel a t??rgysor argumentum list??j??nak feldolgoz??sa sor??n:\n" +"- %(errors)s\n" +"\n" +"A t??rgy ez volt: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:942 +#: ../roundup/mailgw.py:1203 +msgid "" +"\n" +"This tracker has been configured to require all email be PGP signed or\n" +"encrypted." +msgstr "" + +#: ../roundup/mailgw.py:1209 msgid "" "\n" "Roundup requires the submission to be plain text. The message parser could\n" "not find a text/plain part to use.\n" msgstr "" +"\n" +"A Roundup egyszer?? sz??vegk??nt tudja fogadni a k??relmet. Az ??zenet ??rtelmez??\n" +"nem tal??lt haszn??lhat??, egyszer?? sz??veg form??tum?? r??szt.\n" -#: ../roundup/mailgw.py:964 +#: ../roundup/mailgw.py:1226 msgid "You are not permitted to create files." -msgstr "" +msgstr "Nincs jogosults??ga f??jlok l??trehoz??s??ra." -#: ../roundup/mailgw.py:978 +#: ../roundup/mailgw.py:1240 #, python-format msgid "You are not permitted to add files to %(classname)s." -msgstr "" +msgstr "Nincs jogosults??ga f??jlok hozz??ad??s??ra %(classname)s-hez." -#: ../roundup/mailgw.py:996 +#: ../roundup/mailgw.py:1258 msgid "You are not permitted to create messages." -msgstr "" +msgstr "Nincs jogosults??ga ??zenetek l??trehoz??s??ra." -#: ../roundup/mailgw.py:1004 +#: ../roundup/mailgw.py:1266 #, python-format msgid "" "\n" "Mail message was rejected by a detector.\n" "%(error)s\n" msgstr "" +"\n" +"A mail ??zenetet a felder??t?? visszutas??totta.\n" +"%(error)s\n" -#: ../roundup/mailgw.py:1012 +#: ../roundup/mailgw.py:1274 #, python-format msgid "You are not permitted to add messages to %(classname)s." -msgstr "" +msgstr "Nincs jogosults??ga ??zenet hozz??ad??s??ra %(classname)s-hez." -#: ../roundup/mailgw.py:1039 +#: ../roundup/mailgw.py:1301 #, python-format msgid "You are not permitted to edit property %(prop)s of class %(classname)s." msgstr "" +"Nincs jogosults??ga %(classname)s oszt??ly %(prop)s tulajdons??g??t szerkeszteni." 
-#: ../roundup/mailgw.py:1047 +#: ../roundup/mailgw.py:1309 #, python-format msgid "" "\n" "There was a problem with the message you sent:\n" " %(message)s\n" msgstr "" +"\n" +"Probl??ma volt az ??n ??ltal k??ld??tt ??zenettel:\n" +" %(message)s\n" -#: ../roundup/mailgw.py:1069 +#: ../roundup/mailgw.py:1331 msgid "not of form [arg=value,value,...;arg=value,value,...]" -msgstr "" +msgstr "nem [arg=??rt??k,??rt??k,...;arg=??rt??k,??rt??k,...] form??tum??" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "files" -msgstr "" +msgstr "f??jlok" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "messages" -msgstr "" +msgstr "??zenetek" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "nosy" -msgstr "" +msgstr "k??v??ncsi" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "superseder" -msgstr "" +msgstr "helyettes" -#: ../roundup/roundupdb.py:142 +#: ../roundup/roundupdb.py:147 msgid "title" -msgstr "" +msgstr "c??m" -#: ../roundup/roundupdb.py:143 +#: ../roundup/roundupdb.py:148 msgid "assignedto" -msgstr "" +msgstr "kiosztva" -#: ../roundup/roundupdb.py:143 +#: ../roundup/roundupdb.py:148 +#, fuzzy +msgid "keyword" +msgstr "T??ma" + +#: ../roundup/roundupdb.py:148 msgid "priority" -msgstr "" +msgstr "priorit??s" -#: ../roundup/roundupdb.py:143 +#: ../roundup/roundupdb.py:148 msgid "status" -msgstr "" - -#: ../roundup/roundupdb.py:143 -msgid "topic" -msgstr "t??ma" +msgstr "??llapot" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "activity" -msgstr "" +msgstr "m??velet" #. following properties are common for all hyperdb classes #. they are listed here to keep things in one place -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "actor" -msgstr "" +msgstr "v??gezte" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "creation" -msgstr "" +msgstr "l??trehoz??s" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:151 msgid "creator" -msgstr "" +msgstr "l??trehoz??" -#: ../roundup/roundupdb.py:304 +#: ../roundup/roundupdb.py:309 #, python-format msgid "New submission from %(authname)s%(authaddr)s:" -msgstr "" +msgstr "??j beadv??ny %(authname)s%(authaddr)s r??sz??r??l:" -#: ../roundup/roundupdb.py:307 +#: ../roundup/roundupdb.py:312 #, python-format msgid "%(authname)s%(authaddr)s added the comment:" +msgstr "%(authname)s%(authaddr)s ezt a megjegyz??st ??rta:" + +#: ../roundup/roundupdb.py:315 +#, fuzzy, python-format +msgid "Change by %(authname)s%(authaddr)s:" +msgstr "??j beadv??ny %(authname)s%(authaddr)s r??sz??r??l:" + +#: ../roundup/roundupdb.py:342 +#, python-format +msgid "File '%(filename)s' not attached - you can download it from %(link)s." msgstr "" -#: ../roundup/roundupdb.py:310 -msgid "System message:" +#: ../roundup/roundupdb.py:615 +#, python-format +msgid "" +"\n" +"Now:\n" +"%(new)s\n" +"Was:\n" +"%(old)s" msgstr "" #: ../roundup/scripts/roundup_demo.py:32 #, python-format msgid "Enter directory path to create demo tracker [%s]: " -msgstr "" +msgstr "Adja meg az el??r??si utat a bemutat?? tracker [%s] l??trehoz??s??hoz: " #: ../roundup/scripts/roundup_gettext.py:22 #, python-format msgid "Usage: %(program)s " -msgstr "" +msgstr "Haszn??lat: %(program)s " #: ../roundup/scripts/roundup_gettext.py:37 #, python-format msgid "No tracker templates found in directory %s" -msgstr "" +msgstr "Nem tal??lhat?? 
tracker sablon a(z) %s k??nyvt??rban" #: ../roundup/scripts/roundup_mailgw.py:36 #, python-format msgid "" -"Usage: %(program)s [-v] [-c] [[-C class] -S field=value]* [method]\n" +"Usage: %(program)s [-v] [-c class] [[-C class] -S field=value]* [method]\n" "\n" "Options:\n" " -v: print version and exit\n" -" -c: default class of item to create (else the tracker's MAIL_DEFAULT_CLASS)\n" +" -c: default class of item to create (else the tracker's " +"MAIL_DEFAULT_CLASS)\n" " -C / -S: see below\n" "\n" "The roundup mail gateway may be called in one of four ways:\n" @@ -1683,6 +1948,10 @@ " are both valid. The username and/or password will be prompted for if\n" " not supplied on the command-line.\n" "\n" +"POPS:\n" +" Connect to a POP server over ssl. This requires python 2.4 or later.\n" +" This supports the same notation as POP.\n" +"\n" "APOP:\n" " Same as POP, but using Authenticated POP:\n" " apop username:password at server\n" @@ -1702,23 +1971,35 @@ "\n" msgstr "" -#: ../roundup/scripts/roundup_mailgw.py:147 +#: ../roundup/scripts/roundup_mailgw.py:151 msgid "Error: not enough source specification information" -msgstr "Hiba: nincs el??g forr??s specifik??ci??s inform??c??" +msgstr "Hiba: nincs el??g forr??s specifik??ci??s inform??ci??" -#: ../roundup/scripts/roundup_mailgw.py:163 -msgid "Error: pop specification not valid" -msgstr "Hiba: a pop specifik??ci?? nem val??s" +#: ../roundup/scripts/roundup_mailgw.py:167 +msgid "Error: a later version of python is required" +msgstr "" #: ../roundup/scripts/roundup_mailgw.py:170 +msgid "Error: pop specification not valid" +msgstr "Hiba: a pop specifik??ci?? nem ??rv??nyes" + +#: ../roundup/scripts/roundup_mailgw.py:177 msgid "Error: apop specification not valid" -msgstr "Hiba: az apop specifik??ci?? nem val??s" +msgstr "Hiba: az apop specifik??ci?? nem ??rv??nyes" -#: ../roundup/scripts/roundup_mailgw.py:184 -msgid "Error: The source must be either \"mailbox\", \"pop\", \"apop\", \"imap\" or \"imaps\"" -msgstr "Hiba: A forr??s a k??vetkez??k egyike kell legyen: \"mailbox\", \"pop\", \"apop\", \"imap\" vagy \"imaps\"" +#: ../roundup/scripts/roundup_mailgw.py:189 +msgid "" +"Error: The source must be either \"mailbox\", \"pop\", \"apop\", \"imap\" or " +"\"imaps\"" +msgstr "" +"Hiba: A forr??s a k??vetkez??k egyike kell legyen: \"mailbox\", \"pop\", \"apop" +"\", \"imap\" vagy \"imaps\"" -#: ../roundup/scripts/roundup_server.py:157 +#: ../roundup/scripts/roundup_server.py:76 +msgid "WARNING: generating temporary SSL certificate" +msgstr "" + +#: ../roundup/scripts/roundup_server.py:253 msgid "" "Roundup trackers index\n" "

    Roundup trackers index

      \n" @@ -1726,52 +2007,52 @@ "Roundup hibak??vet??k list??ja\n" "

      Roundup hibakövetők listája

        \n" -#: ../roundup/scripts/roundup_server.py:287 +#: ../roundup/scripts/roundup_server.py:389 #, python-format msgid "Error: %s: %s" msgstr "Hiba: %s: %s" -#: ../roundup/scripts/roundup_server.py:297 +#: ../roundup/scripts/roundup_server.py:399 msgid "WARNING: ignoring \"-g\" argument, not root" -msgstr "FIGYELEM: \"-g\" opci??t figyelmen k??v??l hagyom, nem root" +msgstr "FIGYELEM: \"-g\" opci?? figyelmen k??v??l hagy??sra ker??lt, nem root" -#: ../roundup/scripts/roundup_server.py:303 +#: ../roundup/scripts/roundup_server.py:405 msgid "Can't change groups - no grp module" msgstr "Nem lehet csoportot v??ltani - nincs meg a grp modul" -#: ../roundup/scripts/roundup_server.py:312 +#: ../roundup/scripts/roundup_server.py:414 #, python-format msgid "Group %(group)s doesn't exist" msgstr "%(group)s csoport nem l??tezik" -#: ../roundup/scripts/roundup_server.py:323 +#: ../roundup/scripts/roundup_server.py:425 msgid "Can't run as root!" msgstr "Nem futhat root-k??nt!" -#: ../roundup/scripts/roundup_server.py:326 +#: ../roundup/scripts/roundup_server.py:428 msgid "WARNING: ignoring \"-u\" argument, not root" -msgstr "FIGYELEM: \"-u\" opci??t figyelmen k??v??l hagyom, nem root" +msgstr "FIGYELEM: \"-u\" opci?? figyelmen k??v??l hagy??sra ker??lt, nem root" -#: ../roundup/scripts/roundup_server.py:331 +#: ../roundup/scripts/roundup_server.py:434 msgid "Can't change users - no pwd module" -msgstr "" +msgstr "Felhaszn??l??v??lt??s nem siker??lt - nincs pwd modul" -#: ../roundup/scripts/roundup_server.py:340 +#: ../roundup/scripts/roundup_server.py:443 #, python-format msgid "User %(user)s doesn't exist" msgstr "A(z) %(user)s felhaszn??l?? nem l??tezik" -#: ../roundup/scripts/roundup_server.py:471 +#: ../roundup/scripts/roundup_server.py:592 #, python-format msgid "Multiprocess mode \"%s\" is not available, switching to single-process" -msgstr "" +msgstr "\"%s\" t??bbsz??l?? m??d nem ??rhet?? el, ??tt??r??s egysz??l?? m??dra" -#: ../roundup/scripts/roundup_server.py:494 +#: ../roundup/scripts/roundup_server.py:620 #, python-format msgid "Unable to bind to port %s, port already in use." -msgstr "" +msgstr "Nem siker??lt a(z) %s portra csatlakozni, a port m??r haszn??latban van." -#: ../roundup/scripts/roundup_server.py:562 +#: ../roundup/scripts/roundup_server.py:688 msgid "" " -c Windows Service options.\n" " If you want to run the server as a Windows Service, you\n" @@ -1781,7 +2062,7 @@ " specifics." msgstr "" -#: ../roundup/scripts/roundup_server.py:569 +#: ../roundup/scripts/roundup_server.py:695 msgid "" " -u runs the Roundup web server as this UID\n" " -g runs the Roundup web server as this GID\n" @@ -1790,7 +2071,7 @@ " specified if -d is used." 
msgstr "" -#: ../roundup/scripts/roundup_server.py:576 +#: ../roundup/scripts/roundup_server.py:702 #, python-format msgid "" "%(message)sUsage: roundup-server [options] [name=tracker home]*\n" @@ -1803,7 +2084,11 @@ " -n set the host name of the Roundup web server instance\n" " -p set the port to listen on (default: %(port)s)\n" " -l log to the file indicated by fname instead of stderr/stdout\n" -" -N log client machine names instead of IP addresses (much slower)\n" +" -N log client machine names instead of IP addresses (much " +"slower)\n" +" -i set tracker index template\n" +" -s enable SSL\n" +" -e PEM file containing SSL key and certificate\n" " -t multiprocess mode (default: %(mp_def)s).\n" " Allowed values: %(mp_types)s.\n" "%(os_part)s\n" @@ -1844,23 +2129,24 @@ " any url-unsafe characters like spaces, as these confuse IE.\n" msgstr "" -#: ../roundup/scripts/roundup_server.py:723 +#: ../roundup/scripts/roundup_server.py:860 msgid "Instances must be name=home" -msgstr "" +msgstr "A p??ld??nyoknak n??v=home form??ban kell lenni??k" -#: ../roundup/scripts/roundup_server.py:737 +#: ../roundup/scripts/roundup_server.py:874 #, python-format msgid "Configuration saved to %s" -msgstr "A konfigur??ci?? elmentve %s" +msgstr "Be??ll??t??sok elmentve ide: %s" -#: ../roundup/scripts/roundup_server.py:755 +#: ../roundup/scripts/roundup_server.py:892 msgid "Sorry, you can't run the server as a daemon on this Operating System" -msgstr "Eln??z??st, ezen az oper??ci??s rendszeren nem ind??thatod el d??monk??nt a szervert" +msgstr "" +"Eln??z??st, ezen az oper??ci??s rendszeren a szerver nem ind??that?? d??monk??nt" -#: ../roundup/scripts/roundup_server.py:767 +#: ../roundup/scripts/roundup_server.py:907 #, python-format msgid "Roundup server started on %(HOST)s:%(PORT)s" -msgstr "Roundup server a %(HOST)s:%(PORT)s g??pen" +msgstr "Roundup server elind??tva a(z) %(HOST)s:%(PORT)s g??pen" #: ../templates/classic/html/_generic.collision.html:4 #: ../templates/minimal/html/_generic.collision.html:4 @@ -1880,36 +2166,77 @@ " while you were editing. Please reload\n" " the node and review your edits.\n" msgstr "" +"\n" +" ??tk??z??s t??rt??nt. Egy m??sik felhaszn??l?? m??dos??totta ezt a bejegyz??st\n" +" mialatt ??n is szerkesztette. Olvassa ??jra\n" +" a bejegyz??st ??s szerkessze ??jra azt.\n" -#: ../templates/classic/html/_generic.help.html:9 -#: ../templates/minimal/html/_generic.help.html:9 -msgid "${property} help - ${tracker}" -msgstr "${property} seg??ts??g - ${tracker}" +#: ../templates/classic/html/_generic.help-empty.html:6 +msgid "Please specify your search parameters!" +msgstr "" + +#: ../templates/classic/html/_generic.help-list.html:20 +#: ../templates/classic/html/_generic.index.html:14 +#: ../templates/classic/html/_generic.item.html:12 +#: ../templates/classic/html/file.item.html:9 +#: ../templates/classic/html/issue.index.html:16 +#: ../templates/classic/html/issue.item.html:28 +#: ../templates/classic/html/msg.item.html:26 +#: ../templates/classic/html/user.index.html:9 +#: ../templates/classic/html/user.item.html:35 +#: ../templates/minimal/html/_generic.index.html:14 +#: ../templates/minimal/html/_generic.item.html:12 +#: ../templates/minimal/html/user.index.html:9 +#: ../templates/minimal/html/user.item.html:35 +#: ../templates/minimal/html/user.register.html:14 +msgid "You are not allowed to view this page." +msgstr "Nincs jogosults??ga az oldal megjelen??t??s??hez." 
+#: ../templates/classic/html/_generic.help-list.html:34 +msgid "1..25 out of 50" +msgstr "" + +#: ../templates/classic/html/_generic.help-search.html:9 +msgid "" +"Generic template ${template} or version for class ${classname} is not yet " +"implemented" +msgstr "" + +#: ../templates/classic/html/_generic.help-submit.html:57 #: ../templates/classic/html/_generic.help.html:31 #: ../templates/minimal/html/_generic.help.html:31 msgid " Cancel " -msgstr "M??gse" +msgstr " M??gsem " +#: ../templates/classic/html/_generic.help-submit.html:63 #: ../templates/classic/html/_generic.help.html:34 #: ../templates/minimal/html/_generic.help.html:34 msgid " Apply " -msgstr "Alkalmaz" +msgstr " Alkalmaz " + +#: ../templates/classic/html/_generic.help.html:9 +#: ../templates/classic/html/user.help.html:13 +#: ../templates/minimal/html/_generic.help.html:9 +msgid "${property} help - ${tracker}" +msgstr "${property} seg??ts??g - ${tracker}" #: ../templates/classic/html/_generic.help.html:41 -#: ../templates/classic/html/issue.index.html:73 +#: ../templates/classic/html/help.html:21 +#: ../templates/classic/html/issue.index.html:81 #: ../templates/minimal/html/_generic.help.html:41 msgid "<< previous" msgstr "<< el??z??" #: ../templates/classic/html/_generic.help.html:53 -#: ../templates/classic/html/issue.index.html:81 +#: ../templates/classic/html/help.html:28 +#: ../templates/classic/html/issue.index.html:89 #: ../templates/minimal/html/_generic.help.html:53 msgid "${start}..${end} out of ${total}" msgstr "${start}..${end}, ??sszesen ${total}" #: ../templates/classic/html/_generic.help.html:57 -#: ../templates/classic/html/issue.index.html:84 +#: ../templates/classic/html/help.html:32 +#: ../templates/classic/html/issue.index.html:92 #: ../templates/minimal/html/_generic.help.html:57 msgid "next >>" msgstr "k??vetkez?? >>" @@ -1928,29 +2255,37 @@ msgid "${class} editing" msgstr "${class} szerkeszt??se" -#: ../templates/classic/html/_generic.index.html:14 -#: ../templates/classic/html/_generic.item.html:12 -#: ../templates/classic/html/file.item.html:9 -#: ../templates/classic/html/issue.index.html:16 -#: ../templates/classic/html/issue.item.html:28 -#: ../templates/classic/html/msg.item.html:26 -#: ../templates/classic/html/user.index.html:9 -#: ../templates/classic/html/user.item.html:28 -#: ../templates/minimal/html/_generic.index.html:14 -#: ../templates/minimal/html/_generic.item.html:12 -#: ../templates/minimal/html/user.index.html:9 -#: ../templates/minimal/html/user.item.html:28 -#: ../templates/minimal/html/user.register.html:14 -msgid "You are not allowed to view this page." -msgstr "Nem n??zheted meg ezt az oldalt." - -#: ../templates/classic/html/_generic.index.html:22 -#: ../templates/minimal/html/_generic.index.html:22 -msgid "

        You may edit the contents of the ${classname} class using this form. Commas, newlines and double quotes (\") must be handled delicately. You may include commas and newlines by enclosing the values in double-quotes (\"). Double quotes themselves must be quoted by doubling (\"\").

        Multilink properties have their multiple values colon (\":\") separated (... ,\"one:two:three\", ...)

        Remove entries by deleting their line. Add new entries by appending them to the table - put an X in the id column.

        " +#: ../templates/classic/html/_generic.index.html:19 +#: ../templates/classic/html/_generic.item.html:16 +#: ../templates/classic/html/file.item.html:13 +#: ../templates/classic/html/issue.index.html:20 +#: ../templates/classic/html/issue.item.html:32 +#: ../templates/classic/html/msg.item.html:30 +#: ../templates/classic/html/user.index.html:13 +#: ../templates/classic/html/user.item.html:39 +#: ../templates/minimal/html/_generic.index.html:19 +#: ../templates/minimal/html/_generic.item.html:17 +#: ../templates/minimal/html/user.index.html:13 +#: ../templates/minimal/html/user.item.html:39 +#: ../templates/minimal/html/user.register.html:17 +msgid "Please login with your username and password." +msgstr "" + +#: ../templates/classic/html/_generic.index.html:28 +#: ../templates/minimal/html/_generic.index.html:28 +msgid "" +"

        You may edit the contents of the ${classname} class " +"using this form. Commas, newlines and double quotes (\") must be handled " +"delicately. You may include commas and newlines by enclosing the values in " +"double-quotes (\"). Double quotes themselves must be quoted by doubling " +"(\"\").

        Multilink properties have their " +"multiple values colon (\":\") separated (... ,\"one:two:three\", ...)

        " +"

        Remove entries by deleting their line. Add new " +"entries by appending them to the table - put an X in the id column.

        " msgstr "" -#: ../templates/classic/html/_generic.index.html:44 -#: ../templates/minimal/html/_generic.index.html:44 +#: ../templates/classic/html/_generic.index.html:50 +#: ../templates/minimal/html/_generic.index.html:50 msgid "Edit Items" msgstr "Elemek szerkeszt??se" @@ -1967,16 +2302,16 @@ msgstr "Let??lt??s" #: ../templates/classic/html/file.index.html:11 -#: ../templates/classic/html/file.item.html:22 +#: ../templates/classic/html/file.item.html:27 msgid "Content Type" msgstr "Tartalom t??pus" #: ../templates/classic/html/file.index.html:12 msgid "Uploaded By" -msgstr "Felt??ltve" +msgstr "Felt??lt??tte" #: ../templates/classic/html/file.index.html:13 -#: ../templates/classic/html/msg.item.html:43 +#: ../templates/classic/html/msg.item.html:48 msgid "Date" msgstr "D??tum" @@ -1988,13 +2323,12 @@ msgid "File display" msgstr "F??jl megjelen??t??s" -#: ../templates/classic/html/file.item.html:18 -#: ../templates/classic/html/user.item.html:39 +#: ../templates/classic/html/file.item.html:23 #: ../templates/classic/html/user.register.html:17 msgid "Name" msgstr "N??v" -#: ../templates/classic/html/file.item.html:40 +#: ../templates/classic/html/file.item.html:45 msgid "download" msgstr "let??lt??s" @@ -2008,80 +2342,78 @@ msgid "List of classes" msgstr "Oszt??lyok list??ja" -#: ../templates/classic/html/issue.index.html:7 -msgid "List of issues - ${tracker}" -msgstr "??gyek list??ja - ${tracker}" - -#: ../templates/classic/html/issue.index.html:11 +#: ../templates/classic/html/issue.index.html:4 +#: ../templates/classic/html/issue.index.html:10 msgid "List of issues" msgstr "??gyek list??ja" -#: ../templates/classic/html/issue.index.html:22 -#: ../templates/classic/html/issue.item.html:44 +#: ../templates/classic/html/issue.index.html:27 +#: ../templates/classic/html/issue.item.html:49 msgid "Priority" msgstr "Priorit??s" -#: ../templates/classic/html/issue.index.html:23 +#: ../templates/classic/html/issue.index.html:28 msgid "ID" msgstr "Azonos??t??" -#: ../templates/classic/html/issue.index.html:24 +#: ../templates/classic/html/issue.index.html:29 msgid "Creation" msgstr "L??trehoz??s" -#: ../templates/classic/html/issue.index.html:25 +#: ../templates/classic/html/issue.index.html:30 msgid "Activity" msgstr "Aktivit??s" -#: ../templates/classic/html/issue.index.html:26 +#: ../templates/classic/html/issue.index.html:31 msgid "Actor" msgstr "Hozz??sz??l??" -#: ../templates/classic/html/issue.index.html:27 -msgid "Topic" +#: ../templates/classic/html/issue.index.html:32 +#: ../templates/classic/html/keyword.item.html:37 +msgid "Keyword" msgstr "T??ma" -#: ../templates/classic/html/issue.index.html:28 -#: ../templates/classic/html/issue.item.html:39 +#: ../templates/classic/html/issue.index.html:33 +#: ../templates/classic/html/issue.item.html:44 msgid "Title" msgstr "C??m" -#: ../templates/classic/html/issue.index.html:29 -#: ../templates/classic/html/issue.item.html:46 +#: ../templates/classic/html/issue.index.html:34 +#: ../templates/classic/html/issue.item.html:51 msgid "Status" -msgstr "??llapt" +msgstr "??llapot" -#: ../templates/classic/html/issue.index.html:30 +#: ../templates/classic/html/issue.index.html:35 msgid "Creator" -msgstr "K??sz??t??" +msgstr "L??trehoz??" 
-#: ../templates/classic/html/issue.index.html:31 +#: ../templates/classic/html/issue.index.html:36 msgid "Assigned To" -msgstr "Hozz??rendelve" +msgstr "Kiosztva" -#: ../templates/classic/html/issue.index.html:97 +#: ../templates/classic/html/issue.index.html:105 msgid "Download as CSV" msgstr "Let??lt??s CSV-k??nt" -#: ../templates/classic/html/issue.index.html:105 +#: ../templates/classic/html/issue.index.html:115 msgid "Sort on:" msgstr "Rendez??s:" -#: ../templates/classic/html/issue.index.html:108 -#: ../templates/classic/html/issue.index.html:125 +#: ../templates/classic/html/issue.index.html:119 +#: ../templates/classic/html/issue.index.html:140 msgid "- nothing -" msgstr "- semmi -" -#: ../templates/classic/html/issue.index.html:116 -#: ../templates/classic/html/issue.index.html:133 +#: ../templates/classic/html/issue.index.html:127 +#: ../templates/classic/html/issue.index.html:148 msgid "Descending:" -msgstr "Cs??kken??leg:" +msgstr "Cs??kken??:" -#: ../templates/classic/html/issue.index.html:122 +#: ../templates/classic/html/issue.index.html:136 msgid "Group on:" msgstr "Csoportos??t??s:" -#: ../templates/classic/html/issue.index.html:139 +#: ../templates/classic/html/issue.index.html:155 msgid "Redisplay" msgstr "Megjelen??t??s ??jra" @@ -2091,15 +2423,15 @@ #: ../templates/classic/html/issue.item.html:10 msgid "New Issue - ${tracker}" -msgstr "??j bejelent??s - ${tracker}" +msgstr "??j ??gy - ${tracker}" #: ../templates/classic/html/issue.item.html:14 msgid "New Issue" -msgstr "??j bejelent??s" +msgstr "??j ??gy" #: ../templates/classic/html/issue.item.html:16 msgid "New Issue Editing" -msgstr "??j bejelent??s szerkeszt??se" +msgstr "??j ??gy szerkeszt??se" #: ../templates/classic/html/issue.item.html:19 msgid "Issue${id}" @@ -2109,255 +2441,275 @@ msgid "Issue${id} Editing" msgstr "${id}. ??gy szerkeszt??se" -#: ../templates/classic/html/issue.item.html:51 +#: ../templates/classic/html/issue.item.html:56 msgid "Superseder" msgstr "Helyettes??t??" -#: ../templates/classic/html/issue.item.html:56 -msgid "View: ${link}" -msgstr "Mitasd: ${link}" +#: ../templates/classic/html/issue.item.html:61 +msgid "View:" +msgstr "" -#: ../templates/classic/html/issue.item.html:60 +#: ../templates/classic/html/issue.item.html:67 msgid "Nosy List" msgstr "K??v??ncsiak list??ja" -#: ../templates/classic/html/issue.item.html:69 +#: ../templates/classic/html/issue.item.html:76 msgid "Assigned To" msgstr "Kiosztva" -#: ../templates/classic/html/issue.item.html:71 -msgid "Topics" -msgstr "T??m??k:" +#: ../templates/classic/html/issue.item.html:78 +#: ../templates/classic/html/page.html:103 +#: ../templates/minimal/html/page.html:102 +msgid "Keywords" +msgstr "T??m??k" -#: ../templates/classic/html/issue.item.html:79 +#: ../templates/classic/html/issue.item.html:86 msgid "Change Note" msgstr "Megjegyz??s m??dos??t??sa" -#: ../templates/classic/html/issue.item.html:87 +#: ../templates/classic/html/issue.item.html:94 msgid "File" msgstr "F??jl" -#: ../templates/classic/html/issue.item.html:99 +#: ../templates/classic/html/issue.item.html:106 msgid "Make a copy" msgstr "M??solat k??sz??t??se" -#: ../templates/classic/html/issue.item.html:107 -#: ../templates/classic/html/user.item.html:106 +#: ../templates/classic/html/issue.item.html:114 +#: ../templates/classic/html/user.item.html:153 #: ../templates/classic/html/user.register.html:69 -#: ../templates/minimal/html/user.item.html:86 -msgid "
        Note:  highlighted  fields are required.
        " -msgstr "
        Megjegyzés:  a kiemelt  mezők szükségesek.
        " - -#: ../templates/classic/html/issue.item.html:121 -msgid "Created on ${creation} by ${creator}, last changed ${activity} by ${actor}." -msgstr "${creation} k??sz??tette ${creator}, utolj??ra ${actor} m??dos??totta ${activity}-kor." +#: ../templates/minimal/html/user.item.html:153 +msgid "" +"
        Note:  highlighted  fields are required.
        " +msgstr "" +"
        Megjegyzés:  a kiemelt  mezők szükségesek.
        " + +#: ../templates/classic/html/issue.item.html:128 +msgid "" +"Created on ${creation} by ${creator}, last changed " +"${activity} by ${actor}." +msgstr "" +"${creation} l??trehozta ${creator}, utolj??ra ${actor} " +"m??dos??totta ${activity}-kor." -#: ../templates/classic/html/issue.item.html:125 -#: ../templates/classic/html/msg.item.html:56 +#: ../templates/classic/html/issue.item.html:132 +#: ../templates/classic/html/msg.item.html:61 msgid "Files" msgstr "F??jlok" -#: ../templates/classic/html/issue.item.html:127 -#: ../templates/classic/html/msg.item.html:58 +#: ../templates/classic/html/issue.item.html:134 +#: ../templates/classic/html/msg.item.html:63 msgid "File name" -msgstr "F??jl neve" +msgstr "F??jln??v" -#: ../templates/classic/html/issue.item.html:128 -#: ../templates/classic/html/msg.item.html:59 +#: ../templates/classic/html/issue.item.html:135 +#: ../templates/classic/html/msg.item.html:64 msgid "Uploaded" msgstr "Felt??ltve" -#: ../templates/classic/html/issue.item.html:129 +#: ../templates/classic/html/issue.item.html:136 msgid "Type" msgstr "T??pus" -#: ../templates/classic/html/issue.item.html:130 +#: ../templates/classic/html/issue.item.html:137 #: ../templates/classic/html/query.edit.html:30 msgid "Edit" msgstr "Szerkeszt??s" -#: ../templates/classic/html/issue.item.html:131 +#: ../templates/classic/html/issue.item.html:138 msgid "Remove" -msgstr "Eldob??s" +msgstr "T??rl??s" -#: ../templates/classic/html/issue.item.html:151 -#: ../templates/classic/html/issue.item.html:172 +#: ../templates/classic/html/issue.item.html:158 +#: ../templates/classic/html/issue.item.html:179 #: ../templates/classic/html/query.edit.html:50 msgid "remove" -msgstr "eldob??s" +msgstr "T??rl??s" -#: ../templates/classic/html/issue.item.html:158 +#: ../templates/classic/html/issue.item.html:165 #: ../templates/classic/html/msg.index.html:9 msgid "Messages" msgstr "??zenetek" -#: ../templates/classic/html/issue.item.html:162 +#: ../templates/classic/html/issue.item.html:169 msgid "msg${id} (view)" -msgstr "${id}. ??zenet (view)" +msgstr "${id}. 
??zenet" -#: ../templates/classic/html/issue.item.html:163 +#: ../templates/classic/html/issue.item.html:170 msgid "Author: ${author}" msgstr "Szerz??: ${author}" -#: ../templates/classic/html/issue.item.html:165 +#: ../templates/classic/html/issue.item.html:172 msgid "Date: ${date}" msgstr "D??tum: ${date}" #: ../templates/classic/html/issue.search.html:2 msgid "Issue searching - ${tracker}" -msgstr "Bejelent??s keres??se - ${tracker}" +msgstr "??gy keres??se - ${tracker}" #: ../templates/classic/html/issue.search.html:4 msgid "Issue searching" -msgstr "Bejelent??s keres??se" +msgstr "??gy keres??se" -#: ../templates/classic/html/issue.search.html:25 +#: ../templates/classic/html/issue.search.html:31 msgid "Filter on" msgstr "Sz??r??s" -#: ../templates/classic/html/issue.search.html:26 +#: ../templates/classic/html/issue.search.html:32 msgid "Display" msgstr "Megjelen??t??s" -#: ../templates/classic/html/issue.search.html:27 +#: ../templates/classic/html/issue.search.html:33 msgid "Sort on" msgstr "Rendez??s" -#: ../templates/classic/html/issue.search.html:28 +#: ../templates/classic/html/issue.search.html:34 msgid "Group on" msgstr "Csoportos??t??s" -#: ../templates/classic/html/issue.search.html:32 +#: ../templates/classic/html/issue.search.html:38 msgid "All text*:" msgstr "Minden sz??veg*:" -#: ../templates/classic/html/issue.search.html:40 +#: ../templates/classic/html/issue.search.html:46 msgid "Title:" msgstr "C??m:" -#: ../templates/classic/html/issue.search.html:50 -msgid "Topic:" -msgstr "T??ma:" +#: ../templates/classic/html/issue.search.html:56 +#, fuzzy +msgid "Keyword:" +msgstr "T??ma" #: ../templates/classic/html/issue.search.html:58 +#: ../templates/classic/html/issue.search.html:123 +#: ../templates/classic/html/issue.search.html:139 +msgid "not selected" +msgstr "nem kijel??lt" + +#: ../templates/classic/html/issue.search.html:67 msgid "ID:" msgstr "Azonos??t??:" -#: ../templates/classic/html/issue.search.html:66 +#: ../templates/classic/html/issue.search.html:75 msgid "Creation Date:" -msgstr "K??sz??t??s d??tuma:" +msgstr "L??trehoz??s d??tuma:" -#: ../templates/classic/html/issue.search.html:77 +#: ../templates/classic/html/issue.search.html:86 msgid "Creator:" -msgstr "K??sz??t??:" +msgstr "L??trehoz??:" -#: ../templates/classic/html/issue.search.html:79 +#: ../templates/classic/html/issue.search.html:88 msgid "created by me" msgstr "??n k??sz??tettem" -#: ../templates/classic/html/issue.search.html:88 +#: ../templates/classic/html/issue.search.html:97 msgid "Activity:" -msgstr "Aktivit??s:" +msgstr "M??velet:" -#: ../templates/classic/html/issue.search.html:99 +#: ../templates/classic/html/issue.search.html:108 msgid "Actor:" msgstr "Hozz??sz??l??:" -#: ../templates/classic/html/issue.search.html:101 +#: ../templates/classic/html/issue.search.html:110 msgid "done by me" -msgstr "??n tettem" +msgstr "saj??t magam" -#: ../templates/classic/html/issue.search.html:112 +#: ../templates/classic/html/issue.search.html:121 msgid "Priority:" msgstr "Priorit??s:" -#: ../templates/classic/html/issue.search.html:114 -#: ../templates/classic/html/issue.search.html:130 -msgid "not selected" -msgstr "nem kijel??lt" - -#: ../templates/classic/html/issue.search.html:125 +#: ../templates/classic/html/issue.search.html:134 msgid "Status:" msgstr "??llapot:" -#: ../templates/classic/html/issue.search.html:128 +#: ../templates/classic/html/issue.search.html:137 msgid "not resolved" msgstr "nem megoldott" -#: ../templates/classic/html/issue.search.html:143 +#: 
../templates/classic/html/issue.search.html:152 msgid "Assigned to:" msgstr "Kiadva:" -#: ../templates/classic/html/issue.search.html:146 +#: ../templates/classic/html/issue.search.html:155 msgid "assigned to me" msgstr "nekem adva" -#: ../templates/classic/html/issue.search.html:148 +#: ../templates/classic/html/issue.search.html:157 msgid "unassigned" msgstr "gazd??tlan" -#: ../templates/classic/html/issue.search.html:158 +#: ../templates/classic/html/issue.search.html:167 msgid "No Sort or group:" -msgstr "Ne rendezd vagy csoportos??tsd:" +msgstr "Ne rendezze vagy csoportos??tsa:" -#: ../templates/classic/html/issue.search.html:166 +#: ../templates/classic/html/issue.search.html:175 msgid "Pagesize:" msgstr "Oldalm??ret:" -#: ../templates/classic/html/issue.search.html:172 +#: ../templates/classic/html/issue.search.html:181 msgid "Start With:" msgstr "Kezd??s:" -#: ../templates/classic/html/issue.search.html:178 +#: ../templates/classic/html/issue.search.html:187 msgid "Sort Descending:" -msgstr "Cs??kken??leg rendzve:" +msgstr "Cs??kken?? rendez??s:" -#: ../templates/classic/html/issue.search.html:185 +#: ../templates/classic/html/issue.search.html:194 msgid "Group Descending:" -msgstr "Csoport cs??kken??leg:" +msgstr "Cs??kken?? csoportos??t??s:" -#: ../templates/classic/html/issue.search.html:192 +#: ../templates/classic/html/issue.search.html:201 msgid "Query name**:" msgstr "Lek??rdez??s neve**:" -#: ../templates/classic/html/issue.search.html:204 -#: ../templates/classic/html/page.html:31 -#: ../templates/classic/html/page.html:60 -#: ../templates/minimal/html/page.html:31 +#: ../templates/classic/html/issue.search.html:213 +#: ../templates/classic/html/page.html:43 +#: ../templates/classic/html/page.html:92 +#: ../templates/classic/html/user.help-search.html:69 +#: ../templates/minimal/html/page.html:43 +#: ../templates/minimal/html/page.html:91 msgid "Search" msgstr "Keres??s" -#: ../templates/classic/html/issue.search.html:209 +#: ../templates/classic/html/issue.search.html:218 msgid "*: The \"all text\" field will look in message bodies and issue titles" -msgstr "*: The \"all text\" field will look in message bodies and issue titles" +msgstr "" +"*: A \"Minden sz??veg\" mez?? az ??zenetek c??msor??ban ??s belsej??ben is keres" -#: ../templates/classic/html/issue.search.html:212 -msgid "**: If you supply a name, the query will be saved off and available as a link in the sidebar" -msgstr "**: Ha megadsz egy nevet, a lek??rdez??st elmentj??k ??s az oldals??von el??rhet?? lesz" +#: ../templates/classic/html/issue.search.html:221 +msgid "" +"**: If you supply a name, the query will be saved off and available as a " +"link in the sidebar" +msgstr "" +"**: Ha megad egy nevet, a lek??rdez??s elment??sre ker??l ??s az oldals??von " +"el??rhet?? lesz" #: ../templates/classic/html/keyword.item.html:3 msgid "Keyword editing - ${tracker}" -msgstr "Kulcssz?? szerkeszt??se - ${tracker}" +msgstr "T??m??k szerkeszt??se - ${tracker}" #: ../templates/classic/html/keyword.item.html:5 msgid "Keyword editing" -msgstr "Kulcssz?? szerkeszt??se" +msgstr "T??ma szerkeszt??se" #: ../templates/classic/html/keyword.item.html:11 msgid "Existing Keywords" -msgstr "L??tez?? kulcsszavak" +msgstr "L??tez?? t??m??k" #: ../templates/classic/html/keyword.item.html:20 -msgid "To edit an existing keyword (for spelling or typing errors), click on its entry above." -msgstr "Megl??v?? kulcssz?? szerkeszt??s??hez (helyes??r??si hib??k) kattints a fenti elemre." 
+msgid "" +"To edit an existing keyword (for spelling or typing errors), click on its " +"entry above." +msgstr "" +"Megl??v?? t??ma szerkeszt??s??hez (helyes??r??si hib??k) kattintson a fenti elemre." #: ../templates/classic/html/keyword.item.html:27 msgid "To create a new keyword, enter it below and click \"Submit New Entry\"." -msgstr "??j kulcssz?? l??trehoz??s??hoz add meg al??bb ??s kattints az \"??j elem k??ld??se\" gombra." - -#: ../templates/classic/html/keyword.item.html:37 -msgid "Keyword" -msgstr "Kulcssz??" +msgstr "" +"??j t??ma l??trehoz??s??hoz adja meg al??bb, majd kattintson a \"L??trehoz??s\" " +"gombra." #: ../templates/classic/html/msg.index.html:3 msgid "List of messages - ${tracker}" @@ -2365,7 +2717,7 @@ #: ../templates/classic/html/msg.index.html:5 msgid "Message listing" -msgstr "??zenetek list??z??sa" +msgstr "??zenetek list??ja" #: ../templates/classic/html/msg.item.html:6 msgid "Message ${id} - ${tracker}" @@ -2391,147 +2743,167 @@ msgid "Message${id} Editing" msgstr "${id}. ??zenet szerkeszt??se" -#: ../templates/classic/html/msg.item.html:33 +#: ../templates/classic/html/msg.item.html:38 msgid "Author" msgstr "Szerz??" -#: ../templates/classic/html/msg.item.html:38 +#: ../templates/classic/html/msg.item.html:43 msgid "Recipients" msgstr "C??mzettek" -#: ../templates/classic/html/msg.item.html:49 +#: ../templates/classic/html/msg.item.html:54 msgid "Content" msgstr "Tartalom" -#: ../templates/classic/html/page.html:41 +#: ../templates/classic/html/page.html:54 +#: ../templates/minimal/html/page.html:53 msgid "Your Queries (edit)" -msgstr "Te lek??rdez??seid (szerkeszt??s)" +msgstr "Lek??rdez??sek (szerk.)" -#: ../templates/classic/html/page.html:52 +#: ../templates/classic/html/page.html:65 +#: ../templates/minimal/html/page.html:64 msgid "Issues" -msgstr "T??m??k" +msgstr "??gyek" -#: ../templates/classic/html/page.html:54 -#: ../templates/classic/html/page.html:74 +#: ../templates/classic/html/page.html:67 +#: ../templates/classic/html/page.html:105 +#: ../templates/minimal/html/page.html:66 +#: ../templates/minimal/html/page.html:104 msgid "Create New" msgstr "??j l??trehoz??sa" -#: ../templates/classic/html/page.html:56 +#: ../templates/classic/html/page.html:69 +#: ../templates/minimal/html/page.html:68 msgid "Show Unassigned" msgstr "Gazd??tlanok mutat??sa" -#: ../templates/classic/html/page.html:58 +#: ../templates/classic/html/page.html:81 +#: ../templates/minimal/html/page.html:80 msgid "Show All" msgstr "Mutasd mind" -#: ../templates/classic/html/page.html:61 +#: ../templates/classic/html/page.html:93 +#: ../templates/minimal/html/page.html:92 msgid "Show issue:" -msgstr "T??ma mutat??sa:" +msgstr "??gy mutat??sa:" -#: ../templates/classic/html/page.html:72 -msgid "Keywords" -msgstr "Kulcsszavak" - -#: ../templates/classic/html/page.html:78 +#: ../templates/classic/html/page.html:108 +#: ../templates/minimal/html/page.html:107 msgid "Edit Existing" msgstr "Megl??v??k szerkeszt??se" -#: ../templates/classic/html/page.html:84 -#: ../templates/minimal/html/page.html:65 +#: ../templates/classic/html/page.html:114 +#: ../templates/minimal/html/page.html:113 msgid "Administration" msgstr "Adminisztr??ci??" 
-#: ../templates/classic/html/page.html:86 -#: ../templates/minimal/html/page.html:66 +#: ../templates/classic/html/page.html:116 +#: ../templates/minimal/html/page.html:115 msgid "Class List" -msgstr "Oszt??lzok list??ja" +msgstr "Oszt??lyok list??ja" -#: ../templates/classic/html/page.html:90 -#: ../templates/minimal/html/page.html:68 +#: ../templates/classic/html/page.html:120 +#: ../templates/minimal/html/page.html:119 msgid "User List" msgstr "Felhaszn??l??k list??ja" -#: ../templates/classic/html/page.html:92 -#: ../templates/minimal/html/page.html:71 +#: ../templates/classic/html/page.html:122 +#: ../templates/minimal/html/page.html:121 msgid "Add User" msgstr "Felhaszn??l?? hozz??ad??sa" -#: ../templates/classic/html/page.html:99 -#: ../templates/classic/html/page.html:105 -#: ../templates/minimal/html/page.html:46 +#: ../templates/classic/html/page.html:129 +#: ../templates/classic/html/page.html:135 +#: ../templates/minimal/html/page.html:128 +#: ../templates/minimal/html/page.html:134 msgid "Login" msgstr "Bejelentkez??s" -#: ../templates/classic/html/page.html:104 -#: ../templates/minimal/html/page.html:45 +#: ../templates/classic/html/page.html:134 +#: ../templates/minimal/html/page.html:133 msgid "Remember me?" -msgstr "Eml??kezzek r??d?" +msgstr "Eml??kezzen?" -#: ../templates/classic/html/page.html:108 +#: ../templates/classic/html/page.html:138 #: ../templates/classic/html/user.register.html:63 -#: ../templates/minimal/html/page.html:50 -#: ../templates/minimal/html/user.register.html:58 +#: ../templates/minimal/html/page.html:137 +#: ../templates/minimal/html/user.register.html:61 msgid "Register" -msgstr "Regisztr??l??s" +msgstr "Regisztr??ci??" -#: ../templates/classic/html/page.html:111 +#: ../templates/classic/html/page.html:141 +#: ../templates/minimal/html/page.html:140 msgid "Lost your login?" -msgstr "Elveszett a jelszavad?" +msgstr "Elveszett a jelszava?" -#: ../templates/classic/html/page.html:116 +#: ../templates/classic/html/page.html:146 +#: ../templates/minimal/html/page.html:145 msgid "Hello, ${user}" msgstr "Hell??, ${user}" -#: ../templates/classic/html/page.html:118 +#: ../templates/classic/html/page.html:148 msgid "Your Issues" -msgstr "T??m??id" +msgstr "Saj??t ??gyek" -#: ../templates/classic/html/page.html:119 -#: ../templates/minimal/html/page.html:57 +#: ../templates/classic/html/page.html:160 +#: ../templates/minimal/html/page.html:147 msgid "Your Details" -msgstr "Adataid" +msgstr "Saj??t adatok" -#: ../templates/classic/html/page.html:121 -#: ../templates/minimal/html/page.html:59 +#: ../templates/classic/html/page.html:162 +#: ../templates/minimal/html/page.html:149 msgid "Logout" msgstr "Kijelentkez??s" -#: ../templates/classic/html/page.html:125 +#: ../templates/classic/html/page.html:166 +#: ../templates/minimal/html/page.html:153 msgid "Help" msgstr "Seg??ts??g" -#: ../templates/classic/html/page.html:126 +#: ../templates/classic/html/page.html:167 +#: ../templates/minimal/html/page.html:154 msgid "Roundup docs" msgstr "Roundup dokument??ci??" 
-#: ../templates/classic/html/page.html:136 -#: ../templates/minimal/html/page.html:81 +#: ../templates/classic/html/page.html:177 +#: ../templates/minimal/html/page.html:164 msgid "clear this message" msgstr "??zenet t??rl??se" -#: ../templates/classic/html/page.html:181 +#: ../templates/classic/html/page.html:241 +#: ../templates/classic/html/page.html:256 +#: ../templates/classic/html/page.html:270 +#: ../templates/minimal/html/page.html:228 +#: ../templates/minimal/html/page.html:243 +#: ../templates/minimal/html/page.html:257 msgid "don't care" -msgstr "ne t??r??dj vele" +msgstr "mindegy" -#: ../templates/classic/html/page.html:183 +#: ../templates/classic/html/page.html:243 +#: ../templates/classic/html/page.html:258 +#: ../templates/classic/html/page.html:271 +#: ../templates/minimal/html/page.html:230 +#: ../templates/minimal/html/page.html:245 +#: ../templates/minimal/html/page.html:258 msgid "------------" msgstr "------------" -#: ../templates/classic/html/page.html:210 +#: ../templates/classic/html/page.html:299 +#: ../templates/minimal/html/page.html:286 msgid "no value" msgstr "nincs ??rt??k" #: ../templates/classic/html/query.edit.html:4 msgid "\"Your Queries\" Editing - ${tracker}" -msgstr "\"Te lek??rdez??seid\" szerkeszt??se - ${tracker}" +msgstr "\"Saj??t lek??rdez??sek\" szerkeszt??se - ${tracker}" #: ../templates/classic/html/query.edit.html:6 msgid "\"Your Queries\" Editing" -msgstr "\"Te lek??rdez??seid\" szerkeszt??se" +msgstr "\"Saj??t lek??rdez??sek\" szerkeszt??se" #: ../templates/classic/html/query.edit.html:11 msgid "You are not allowed to edit queries." -msgstr "Nem szerkeszthetsz lek??rdez??seket." +msgstr "Nincs jogosults??ga lek??rdez??sek szerkeszt??s??hez." #: ../templates/classic/html/query.edit.html:28 msgid "Query" @@ -2539,11 +2911,11 @@ #: ../templates/classic/html/query.edit.html:29 msgid "Include in \"Your Queries\"" -msgstr "Bevesz a \"Te lek??rdez??seid\" k??z??" +msgstr "Hozz??ad??s a \"Saj??t lek??rdez??sek\"-hez" #: ../templates/classic/html/query.edit.html:31 msgid "Private to you?" -msgstr "Csak neked?" +msgstr "Csak saj??t haszn??latra?" #: ../templates/classic/html/query.edit.html:44 msgid "leave out" @@ -2562,7 +2934,7 @@ msgstr "[lek??rdez??s visszavonva]" #: ../templates/classic/html/query.edit.html:67 -#: ../templates/classic/html/query.edit.html:92 +#: ../templates/classic/html/query.edit.html:94 msgid "edit" msgstr "szerkeszt??s" @@ -2578,11 +2950,11 @@ msgid "Delete" msgstr "T??rl??s" -#: ../templates/classic/html/query.edit.html:94 +#: ../templates/classic/html/query.edit.html:96 msgid "[not yours to edit]" -msgstr "[nem a tied hogy szerkeszd]" +msgstr "[nem saj??t szerkeszt??s??]" -#: ../templates/classic/html/query.edit.html:102 +#: ../templates/classic/html/query.edit.html:104 msgid "Save Selection" msgstr "Kijel??l??s ment??se" @@ -2595,29 +2967,48 @@ msgstr "Jelsz?? t??rl??s k??r??se" #: ../templates/classic/html/user.forgotten.html:9 -msgid "You have two options if you have forgotten your password. If you know the email address you registered with, enter it below." -msgstr "K??t lehet??s??ged van, ha elfelejtetted a jelszavad. Ha tudod a regisztr??ci??s jelszavad, add meg al??bb." +msgid "" +"You have two options if you have forgotten your password. If you know the " +"email address you registered with, enter it below." +msgstr "" +"K??t lehet??s??ge van, ha elfelejtette a jelszav??t. Ha tudja a regisztr??ci??s e-" +"mail c??m??t, adja meg al??bb." 
#: ../templates/classic/html/user.forgotten.html:16 msgid "Email Address:" -msgstr "Email c??m:" +msgstr "E-mail c??m:" #: ../templates/classic/html/user.forgotten.html:24 #: ../templates/classic/html/user.forgotten.html:34 msgid "Request password reset" -msgstr "K??rj jelsz?? t??rl??st." +msgstr "K??rjen jelsz?? t??rl??st" #: ../templates/classic/html/user.forgotten.html:30 msgid "Or, if you know your username, then enter it below." -msgstr "Vagy, ha ismered a felhaszn??l?? nevet, add meg al??bb." +msgstr "Vagy, ha ismeri a felhaszn??l??nevet, adja meg al??bb." #: ../templates/classic/html/user.forgotten.html:33 msgid "Username:" -msgstr "Felhaszn??l?? n??v:" +msgstr "Felhaszn??l??n??v:" #: ../templates/classic/html/user.forgotten.html:39 -msgid "A confirmation email will be sent to you - please follow the instructions within it to complete the reset process." -msgstr "Egy meger??s??t?? emailt k??ld??nk neked - k??rlek k??vesd a benne foglaltakat hogy befejezhesd a t??rl??st." +msgid "" +"A confirmation email will be sent to you - please follow the instructions " +"within it to complete the reset process." +msgstr "" +"A rendszer egy meger??s??t?? e-mailt k??ld ??nnek - k??vesse a benne foglaltakat a " +"vissza??ll??t??si folyamat befejez??s??hez." + +#: ../templates/classic/html/user.help-search.html:73 +#, fuzzy +msgid "Pagesize" +msgstr "Oldalm??ret:" + +#: ../templates/classic/html/user.help.html:43 +msgid "" +"Your browser is not capable of using frames; you should be redirected " +"immediately, or visit ${link}." +msgstr "" #: ../templates/classic/html/user.index.html:3 #: ../templates/minimal/html/user.index.html:3 @@ -2629,135 +3020,123 @@ msgid "User listing" msgstr "Felhaszn??l??k list??ja" -#: ../templates/classic/html/user.index.html:14 -#: ../templates/minimal/html/user.index.html:14 +#: ../templates/classic/html/user.index.html:19 +#: ../templates/minimal/html/user.index.html:19 msgid "Username" msgstr "Felhaszn??l??n??v" -#: ../templates/classic/html/user.index.html:15 +#: ../templates/classic/html/user.index.html:20 msgid "Real name" msgstr "Val??di n??v" -#: ../templates/classic/html/user.index.html:16 -#: ../templates/classic/html/user.item.html:70 +#: ../templates/classic/html/user.index.html:21 #: ../templates/classic/html/user.register.html:45 msgid "Organisation" msgstr "Szervezet" -#: ../templates/classic/html/user.index.html:17 -#: ../templates/minimal/html/user.index.html:15 +#: ../templates/classic/html/user.index.html:22 +#: ../templates/minimal/html/user.index.html:20 msgid "Email address" -msgstr "Email c??m" +msgstr "E-mail c??m" -#: ../templates/classic/html/user.index.html:18 +#: ../templates/classic/html/user.index.html:23 msgid "Phone number" msgstr "Telefonsz??m" -#: ../templates/classic/html/user.index.html:19 +#: ../templates/classic/html/user.index.html:24 msgid "Retire" msgstr "Visszavonul??s" -#: ../templates/classic/html/user.index.html:32 +#: ../templates/classic/html/user.index.html:37 msgid "retire" msgstr "visszavonul??s" -#: ../templates/classic/html/user.item.html:7 -#: ../templates/minimal/html/user.item.html:7 +#: ../templates/classic/html/user.item.html:9 +#: ../templates/minimal/html/user.item.html:9 msgid "User ${id}: ${title} - ${tracker}" msgstr "${id}. felhaszn??l??: ${title} - ${tracker}" -#: ../templates/classic/html/user.item.html:10 -#: ../templates/minimal/html/user.item.html:10 +#: ../templates/classic/html/user.item.html:12 +#: ../templates/minimal/html/user.item.html:12 msgid "New User - ${tracker}" msgstr "??j felhaszn??l?? 
- ${tracker}" -#: ../templates/classic/html/user.item.html:14 -#: ../templates/minimal/html/user.item.html:14 +#: ../templates/classic/html/user.item.html:21 +#: ../templates/minimal/html/user.item.html:21 msgid "New User" msgstr "??j felhaszn??l??" -#: ../templates/classic/html/user.item.html:16 -#: ../templates/minimal/html/user.item.html:16 +#: ../templates/classic/html/user.item.html:23 +#: ../templates/minimal/html/user.item.html:23 msgid "New User Editing" msgstr "??j felhaszn??l?? szerkeszt??se" -#: ../templates/classic/html/user.item.html:19 -#: ../templates/minimal/html/user.item.html:19 +#: ../templates/classic/html/user.item.html:26 +#: ../templates/minimal/html/user.item.html:26 msgid "User${id}" msgstr "${id}. felhaszn??l??" -#: ../templates/classic/html/user.item.html:22 -#: ../templates/minimal/html/user.item.html:22 +#: ../templates/classic/html/user.item.html:29 +#: ../templates/minimal/html/user.item.html:29 msgid "User${id} Editing" msgstr "${id}. felhaszn??l?? szerkeszt??se" -#: ../templates/classic/html/user.item.html:43 +#: ../templates/classic/html/user.item.html:80 +#: ../templates/classic/html/user.register.html:33 +#: ../templates/minimal/html/user.item.html:80 +#: ../templates/minimal/html/user.register.html:41 +msgid "Roles" +msgstr "Szerepk??r??k" + +#: ../templates/classic/html/user.item.html:88 +#: ../templates/minimal/html/user.item.html:88 +msgid "(to give the user more than one role, enter a comma,separated,list)" +msgstr "" +"(egyn??l t??bb szerepk??r megad??s??hoz vessz??vel,elv??lasztott,list??t,adjon,meg)" + +#: ../templates/classic/html/user.item.html:109 +#: ../templates/minimal/html/user.item.html:109 +msgid "(this is a numeric hour offset, the default is ${zone})" +msgstr "(ez egy numerikus ??ra eltol??s, ${zone} az alap??rtelmezett)" + +#: ../templates/classic/html/user.item.html:130 +#: ../templates/classic/html/user.register.html:53 +#: ../templates/minimal/html/user.item.html:130 +#: ../templates/minimal/html/user.register.html:53 +msgid "Alternate E-mail addresses
        One address per line"
+msgstr "Alternatív e-mail címek
        soronk??nt egy c??m" + +#: ../templates/classic/html/user.register.html:4 +#: ../templates/classic/html/user.register.html:7 +#: ../templates/minimal/html/user.register.html:4 +#: ../templates/minimal/html/user.register.html:7 +msgid "Registering with ${tracker}" +msgstr "Regisztr??l??s a k??vetkez??n??l: ${tracker}" + #: ../templates/classic/html/user.register.html:21 -#: ../templates/minimal/html/user.item.html:40 -#: ../templates/minimal/html/user.register.html:26 +#: ../templates/minimal/html/user.register.html:29 msgid "Login Name" msgstr "Bejelentkez??si n??v" -#: ../templates/classic/html/user.item.html:47 #: ../templates/classic/html/user.register.html:25 -#: ../templates/minimal/html/user.item.html:44 -#: ../templates/minimal/html/user.register.html:30 +#: ../templates/minimal/html/user.register.html:33 msgid "Login Password" msgstr "Bejelentkez??si jelsz??" -#: ../templates/classic/html/user.item.html:51 #: ../templates/classic/html/user.register.html:29 -#: ../templates/minimal/html/user.item.html:48 -#: ../templates/minimal/html/user.register.html:34 +#: ../templates/minimal/html/user.register.html:37 msgid "Confirm Password" msgstr "Jelsz?? meger??s??t??se" -#: ../templates/classic/html/user.item.html:55 -#: ../templates/classic/html/user.register.html:33 -#: ../templates/minimal/html/user.item.html:52 -#: ../templates/minimal/html/user.register.html:38 -msgid "Roles" -msgstr "Szerepk??r??k" - -#: ../templates/classic/html/user.item.html:61 -#: ../templates/minimal/html/user.item.html:58 -msgid "(to give the user more than one role, enter a comma,separated,list)" -msgstr "(egyn??l t??bb szerepk??r megad??s??hoz vessz??vel,elv??lasztott,list??t,adj,meg)" - -#: ../templates/classic/html/user.item.html:66 #: ../templates/classic/html/user.register.html:41 msgid "Phone" msgstr "Telefon" -#: ../templates/classic/html/user.item.html:74 -msgid "Timezone" -msgstr "Id??z??na" - -#: ../templates/classic/html/user.item.html:78 -msgid "(this is a numeric hour offset, the default is ${zone})" -msgstr "(ez egy numerikus ??ra eltol??s, ${zone} az alap??rtelmezett)" - -#: ../templates/classic/html/user.item.html:83 #: ../templates/classic/html/user.register.html:49 -#: ../templates/minimal/html/user.item.html:63 -#: ../templates/minimal/html/user.register.html:46 +#: ../templates/minimal/html/user.register.html:49 msgid "E-mail address" msgstr "E-mail c??mek" -#: ../templates/classic/html/user.item.html:91 -#: ../templates/classic/html/user.register.html:53 -#: ../templates/minimal/html/user.item.html:71 -#: ../templates/minimal/html/user.register.html:50 -msgid "Alternate E-mail addresses
        One address per line"
-msgstr "Alternatív e-mail címek
        soronk??nt egy c??m " - -#: ../templates/classic/html/user.register.html:4 -#: ../templates/classic/html/user.register.html:7 -#: ../templates/minimal/html/user.register.html:4 -#: ../templates/minimal/html/user.register.html:7 -msgid "Registering with ${tracker}" -msgstr "Regisztr??l??s a k??vetkez??n??l: ${tracker}" - #: ../templates/classic/html/user.rego_progress.html:4 #: ../templates/minimal/html/user.rego_progress.html:4 msgid "Registration in progress - ${tracker}" @@ -2770,93 +3149,79 @@ #: ../templates/classic/html/user.rego_progress.html:10 #: ../templates/minimal/html/user.rego_progress.html:10 -msgid "You will shortly receive an email to confirm your registration. To complete the registration process, visit the link indicated in the email." -msgstr "R??videsen kapni fog egy e-mailt a regisztr??ci??j??nak meger??s??t??s??re.. A regisztr??l??s befejez??s??het k??rem k??vesse a lev??lben l??v?? linket.." - -#: ../templates/minimal/html/home.html:2 -msgid "Tracker home - ${tracker}" -msgstr "Hibak??vet?? - ${tracker}" - -#: ../templates/minimal/html/home.html:4 -msgid "Tracker home" -msgstr "Hibak??vet?? otthona" - -#: ../templates/minimal/html/home.html:16 -msgid "Please select from one of the menu options on the left." -msgstr "K??rem v??lasson a bal oldali men??b??l." - -#: ../templates/minimal/html/home.html:19 -msgid "Please log in or register." -msgstr "K??rem jelentkezzen be vagy regisztr??ljon." - -#: ../templates/minimal/html/page.html:55 -msgid "Hello,
        ${user}" -msgstr "Hell??,
        ${user}" +msgid "" +"You will shortly receive an email to confirm your registration. To complete " +"the registration process, visit the link indicated in the email." +msgstr "" +"R??videsen kapni fog egy e-mailt a regisztr??ci??j??nak meger??s??t??s??re. A " +"regisztr??ci?? befejez??s??het k??vesse a lev??lben l??v?? linket." # priority translations: #: ../templates/classic/initial_data.py:5 -#: ../templates/classic/html/page.html:246 msgid "critical" msgstr "kritikus" #: ../templates/classic/initial_data.py:6 -#: ../templates/classic/html/page.html:246 msgid "urgent" msgstr "s??rg??s" #: ../templates/classic/initial_data.py:7 -#: ../templates/classic/html/page.html:246 msgid "bug" msgstr "hiba" #: ../templates/classic/initial_data.py:8 -#: ../templates/classic/html/page.html:246 msgid "feature" msgstr "szolg??ltat??s" #: ../templates/classic/initial_data.py:9 -#: ../templates/classic/html/page.html:246 msgid "wish" msgstr "??haj" # status translations: -#: status ../templates/classic/initial_data.py:12 -#: ../templates/classic/html/page.html:246 +#: ../templates/classic/initial_data.py:12 msgid "unread" msgstr "nem olvasott" #: ../templates/classic/initial_data.py:13 -#: ../templates/classic/html/page.html:246 msgid "deferred" msgstr "elutas??tva" #: ../templates/classic/initial_data.py:14 -#: ../templates/classic/html/page.html:246 msgid "chatting" msgstr "megbesz??l??s" #: ../templates/classic/initial_data.py:15 -#: ../templates/classic/html/page.html:246 -msgid "in-progress" -msgstr "folyamatban" - -#: ../templates/classic/initial_data.py:16 -#: ../templates/classic/html/page.html:246 msgid "need-eg" msgstr "meger??s??t??sre v??r" +#: ../templates/classic/initial_data.py:16 +msgid "in-progress" +msgstr "folyamatban" + #: ../templates/classic/initial_data.py:17 -#: ../templates/classic/html/page.html:246 msgid "testing" msgstr "tesztel??s" #: ../templates/classic/initial_data.py:18 -#: ../templates/classic/html/page.html:246 msgid "done-cbb" -msgstr "elk??sz??lt" +msgstr "elk??sz??lt-lehetne jobb" #: ../templates/classic/initial_data.py:19 -#: ../templates/classic/html/page.html:246 msgid "resolved" msgstr "megoldva" +#: ../templates/minimal/html/home.html:2 +msgid "Tracker home - ${tracker}" +msgstr "Hibak??vet?? - ${tracker}" + +#: ../templates/minimal/html/home.html:4 +msgid "Tracker home" +msgstr "Hibak??vet??" + +#: ../templates/minimal/html/home.html:16 +msgid "Please select from one of the menu options on the left." +msgstr "K??rem v??lasszon a bal oldali men??b??l." + +#: ../templates/minimal/html/home.html:19 +msgid "Please log in or register." +msgstr "Jelentkezzen be vagy regisztr??ljon." Modified: tracker/roundup-src/locale/lt.po ============================================================================== --- tracker/roundup-src/locale/lt.po (original) +++ tracker/roundup-src/locale/lt.po Sun Mar 9 09:26:16 2008 @@ -1,7 +1,7 @@ # Lithuanian message file for Roundup Issue Tracker # Aiste Kesminaite , 2005 # -# $Id: lt.po,v 1.7 2006/11/23 05:23:26 a1s Exp $ +# $Id: lt.po,v 1.8 2007/01/04 17:14:29 a1s Exp $ # # roundup.pot revision 1.20 # @@ -2947,7 +2947,7 @@ #: ../templates/classic/html/issue.search.html:215 msgid "*: The \"all text\" field will look in message bodies and issue titles" -msgstr "*: Laukas ???visas tekstas??? i????oks prane??im?? tekste ir kreipini?? pavadinimuose" +msgstr "*: Laukas ???visas tekstas??? ie??kos prane??im?? tekste ir kreipini?? 
pavadinimuose" #: ../templates/classic/html/issue.search.html:218 msgid "**: If you supply a name, the query will be saved off and available as a link in the sidebar" Modified: tracker/roundup-src/locale/roundup.pot ============================================================================== --- tracker/roundup-src/locale/roundup.pot (original) +++ tracker/roundup-src/locale/roundup.pot Sun Mar 9 09:26:16 2008 @@ -8,7 +8,7 @@ msgstr "" "Project-Id-Version: PACKAGE VERSION\n" "Report-Msgid-Bugs-To: roundup-devel at lists.sourceforge.net\n" -"POT-Creation-Date: 2006-12-18 13:36+0200\n" +"POT-Creation-Date: 2007-09-27 11:18+0300\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" @@ -17,25 +17,25 @@ "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=INTEGER; plural=EXPRESSION;\n" -#: ../roundup/admin.py:85 ../roundup/admin.py:981 ../roundup/admin.py:1030 -#: ../roundup/admin.py:1052 ../roundup/admin.py:85:981 :1030:1052 +#: ../roundup/admin.py:86 ../roundup/admin.py:989 ../roundup/admin.py:1040 +#: ../roundup/admin.py:1063 ../roundup/admin.py:86:989 :1040:1063 #, python-format msgid "no such class \"%(classname)s\"" msgstr "" -#: ../roundup/admin.py:95 ../roundup/admin.py:99 ../roundup/admin.py:95:99 +#: ../roundup/admin.py:96 ../roundup/admin.py:100 ../roundup/admin.py:96:100 #, python-format msgid "argument \"%(arg)s\" not propname=value" msgstr "" -#: ../roundup/admin.py:112 +#: ../roundup/admin.py:113 #, python-format msgid "" "Problem: %(message)s\n" "\n" msgstr "" -#: ../roundup/admin.py:113 +#: ../roundup/admin.py:114 #, python-format msgid "" "%(message)sUsage: roundup-admin [options] [ ]\n" @@ -62,17 +62,17 @@ " roundup-admin help all -- all available help\n" msgstr "" -#: ../roundup/admin.py:140 +#: ../roundup/admin.py:141 msgid "Commands:" msgstr "" -#: ../roundup/admin.py:147 +#: ../roundup/admin.py:148 msgid "" "Commands may be abbreviated as long as the abbreviation\n" "matches only one command, e.g. l == li == lis == list." msgstr "" -#: ../roundup/admin.py:177 +#: ../roundup/admin.py:178 msgid "" "\n" "All commands (except help) require a tracker specifier. 
This is just\n" @@ -137,12 +137,12 @@ "Command help:\n" msgstr "" -#: ../roundup/admin.py:240 +#: ../roundup/admin.py:241 #, python-format msgid "%s:" msgstr "" -#: ../roundup/admin.py:245 +#: ../roundup/admin.py:246 msgid "" "Usage: help topic\n" " Give help about topic.\n" @@ -154,20 +154,20 @@ " " msgstr "" -#: ../roundup/admin.py:268 +#: ../roundup/admin.py:269 #, python-format msgid "Sorry, no help for \"%(topic)s\"" msgstr "" -#: ../roundup/admin.py:340 ../roundup/admin.py:396 ../roundup/admin.py:340:396 +#: ../roundup/admin.py:346 ../roundup/admin.py:402 ../roundup/admin.py:346:402 msgid "Templates:" msgstr "" -#: ../roundup/admin.py:343 ../roundup/admin.py:407 ../roundup/admin.py:343:407 +#: ../roundup/admin.py:349 ../roundup/admin.py:413 ../roundup/admin.py:349:413 msgid "Back ends:" msgstr "" -#: ../roundup/admin.py:346 +#: ../roundup/admin.py:352 msgid "" "Usage: install [template [backend [key=val[,key=val]]]]\n" " Install a new Roundup tracker.\n" @@ -193,22 +193,22 @@ " " msgstr "" -#: ../roundup/admin.py:369 ../roundup/admin.py:466 ../roundup/admin.py:527 -#: ../roundup/admin.py:606 ../roundup/admin.py:656 ../roundup/admin.py:714 -#: ../roundup/admin.py:735 ../roundup/admin.py:763 ../roundup/admin.py:834 -#: ../roundup/admin.py:901 ../roundup/admin.py:972 ../roundup/admin.py:1020 -#: ../roundup/admin.py:1042 ../roundup/admin.py:1072 ../roundup/admin.py:1171 -#: ../roundup/admin.py:1243 ../roundup/admin.py:369:466 :1020:1042 :1072:1171 -#: :1243 :527:606 :656:714 :735:763 :834:901 :972 +#: ../roundup/admin.py:375 ../roundup/admin.py:472 ../roundup/admin.py:533 +#: ../roundup/admin.py:612 ../roundup/admin.py:663 ../roundup/admin.py:721 +#: ../roundup/admin.py:742 ../roundup/admin.py:770 ../roundup/admin.py:842 +#: ../roundup/admin.py:909 ../roundup/admin.py:980 ../roundup/admin.py:1030 +#: ../roundup/admin.py:1053 ../roundup/admin.py:1084 ../roundup/admin.py:1180 +#: ../roundup/admin.py:1253 ../roundup/admin.py:375:472 :1030:1053 :1084:1180 +#: :1253 :533:612 :663:721 :742:770 :842:909 :980 msgid "Not enough arguments supplied" msgstr "" -#: ../roundup/admin.py:375 +#: ../roundup/admin.py:381 #, python-format msgid "Instance home parent directory \"%(parent)s\" does not exist" msgstr "" -#: ../roundup/admin.py:383 +#: ../roundup/admin.py:389 #, python-format msgid "" "WARNING: There appears to be a tracker in \"%(tracker_home)s\"!\n" @@ -216,20 +216,20 @@ "Erase it? Y/N: " msgstr "" -#: ../roundup/admin.py:398 +#: ../roundup/admin.py:404 msgid "Select template [classic]: " msgstr "" -#: ../roundup/admin.py:409 +#: ../roundup/admin.py:415 msgid "Select backend [anydbm]: " msgstr "" -#: ../roundup/admin.py:419 +#: ../roundup/admin.py:425 #, python-format msgid "Error in configuration settings: \"%s\"" msgstr "" -#: ../roundup/admin.py:428 +#: ../roundup/admin.py:434 #, python-format msgid "" "\n" @@ -238,11 +238,11 @@ " %(config_file)s" msgstr "" -#: ../roundup/admin.py:438 +#: ../roundup/admin.py:444 msgid " ... at a minimum, you must set following options:" msgstr "" -#: ../roundup/admin.py:443 +#: ../roundup/admin.py:449 #, python-format msgid "" "\n" @@ -258,7 +258,7 @@ "---------------------------------------------------------------------------\n" msgstr "" -#: ../roundup/admin.py:461 +#: ../roundup/admin.py:467 msgid "" "Usage: genconfig \n" " Generate a new tracker config file (ini style) with default values\n" @@ -267,7 +267,7 @@ msgstr "" #. 
password -#: ../roundup/admin.py:471 +#: ../roundup/admin.py:477 msgid "" "Usage: initialise [adminpw]\n" " Initialise a new Roundup tracker.\n" @@ -278,30 +278,30 @@ " " msgstr "" -#: ../roundup/admin.py:485 +#: ../roundup/admin.py:491 msgid "Admin Password: " msgstr "" -#: ../roundup/admin.py:486 +#: ../roundup/admin.py:492 msgid " Confirm: " msgstr "" -#: ../roundup/admin.py:490 +#: ../roundup/admin.py:496 msgid "Instance home does not exist" msgstr "" -#: ../roundup/admin.py:494 +#: ../roundup/admin.py:500 msgid "Instance has not been installed" msgstr "" -#: ../roundup/admin.py:499 +#: ../roundup/admin.py:505 msgid "" "WARNING: The database is already initialised!\n" "If you re-initialise it, you will lose all the data!\n" "Erase it? Y/N: " msgstr "" -#: ../roundup/admin.py:520 +#: ../roundup/admin.py:526 msgid "" "Usage: get property designator[,designator]*\n" " Get the given property of one or more designator(s).\n" @@ -311,23 +311,23 @@ " " msgstr "" -#: ../roundup/admin.py:560 ../roundup/admin.py:575 ../roundup/admin.py:560:575 +#: ../roundup/admin.py:566 ../roundup/admin.py:581 ../roundup/admin.py:566:581 #, python-format msgid "property %s is not of type Multilink or Link so -d flag does not apply." msgstr "" -#: ../roundup/admin.py:583 ../roundup/admin.py:983 ../roundup/admin.py:1032 -#: ../roundup/admin.py:1054 ../roundup/admin.py:583:983 :1032:1054 +#: ../roundup/admin.py:589 ../roundup/admin.py:991 ../roundup/admin.py:1042 +#: ../roundup/admin.py:1065 ../roundup/admin.py:589:991 :1042:1065 #, python-format msgid "no such %(classname)s node \"%(nodeid)s\"" msgstr "" -#: ../roundup/admin.py:585 +#: ../roundup/admin.py:591 #, python-format msgid "no such %(classname)s property \"%(propname)s\"" msgstr "" -#: ../roundup/admin.py:594 +#: ../roundup/admin.py:600 msgid "" "Usage: set items property=value property=value ...\n" " Set the given properties of one or more items(s).\n" @@ -342,7 +342,7 @@ " " msgstr "" -#: ../roundup/admin.py:648 +#: ../roundup/admin.py:655 msgid "" "Usage: find classname propname=value ...\n" " Find the nodes of the given class with a given link property value.\n" @@ -353,13 +353,13 @@ " " msgstr "" -#: ../roundup/admin.py:701 ../roundup/admin.py:854 ../roundup/admin.py:866 -#: ../roundup/admin.py:920 ../roundup/admin.py:701:854 :866:920 +#: ../roundup/admin.py:708 ../roundup/admin.py:862 ../roundup/admin.py:874 +#: ../roundup/admin.py:928 ../roundup/admin.py:708:862 :874:928 #, python-format msgid "%(classname)s has no property \"%(propname)s\"" msgstr "" -#: ../roundup/admin.py:708 +#: ../roundup/admin.py:715 msgid "" "Usage: specification classname\n" " Show the properties for a classname.\n" @@ -368,17 +368,17 @@ " " msgstr "" -#: ../roundup/admin.py:723 +#: ../roundup/admin.py:730 #, python-format msgid "%(key)s: %(value)s (key property)" msgstr "" -#: ../roundup/admin.py:725 +#: ../roundup/admin.py:732 ../roundup/admin.py:759 ../roundup/admin.py:732:759 #, python-format msgid "%(key)s: %(value)s" msgstr "" -#: ../roundup/admin.py:728 +#: ../roundup/admin.py:735 msgid "" "Usage: display designator[,designator]*\n" " Show the property values for the given node(s).\n" @@ -388,12 +388,7 @@ " " msgstr "" -#: ../roundup/admin.py:752 -#, python-format -msgid "%(key)s: %(value)r" -msgstr "" - -#: ../roundup/admin.py:755 +#: ../roundup/admin.py:762 msgid "" "Usage: create classname property=value ...\n" " Create a new entry of a given class.\n" @@ -405,31 +400,31 @@ " " msgstr "" -#: ../roundup/admin.py:782 +#: ../roundup/admin.py:789 #, python-format 
msgid "%(propname)s (Password): " msgstr "" -#: ../roundup/admin.py:784 +#: ../roundup/admin.py:791 #, python-format msgid " %(propname)s (Again): " msgstr "" -#: ../roundup/admin.py:786 +#: ../roundup/admin.py:793 msgid "Sorry, try again..." msgstr "" -#: ../roundup/admin.py:790 +#: ../roundup/admin.py:797 #, python-format msgid "%(propname)s (%(proptype)s): " msgstr "" -#: ../roundup/admin.py:808 +#: ../roundup/admin.py:815 #, python-format msgid "you must provide the \"%(propname)s\" property." msgstr "" -#: ../roundup/admin.py:819 +#: ../roundup/admin.py:827 msgid "" "Usage: list classname [property]\n" " List the instances of a class.\n" @@ -445,16 +440,16 @@ " " msgstr "" -#: ../roundup/admin.py:832 +#: ../roundup/admin.py:840 msgid "Too many arguments supplied" msgstr "" -#: ../roundup/admin.py:868 +#: ../roundup/admin.py:876 #, python-format msgid "%(nodeid)4s: %(value)s" msgstr "" -#: ../roundup/admin.py:872 +#: ../roundup/admin.py:880 msgid "" "Usage: table classname [property[,property]*]\n" " List the instances of a class in tabular form.\n" @@ -486,12 +481,12 @@ " " msgstr "" -#: ../roundup/admin.py:916 +#: ../roundup/admin.py:924 #, python-format msgid "\"%(spec)s\" not name:width" msgstr "" -#: ../roundup/admin.py:966 +#: ../roundup/admin.py:974 msgid "" "Usage: history designator\n" " Show the history entries of a designator.\n" @@ -500,7 +495,7 @@ " " msgstr "" -#: ../roundup/admin.py:987 +#: ../roundup/admin.py:995 msgid "" "Usage: commit\n" " Commit changes made to the database during an interactive session.\n" @@ -514,7 +509,7 @@ " " msgstr "" -#: ../roundup/admin.py:1001 +#: ../roundup/admin.py:1010 msgid "" "Usage: rollback\n" " Undo all changes that are pending commit to the database.\n" @@ -526,7 +521,7 @@ " " msgstr "" -#: ../roundup/admin.py:1013 +#: ../roundup/admin.py:1023 msgid "" "Usage: retire designator[,designator]*\n" " Retire the node specified by designator.\n" @@ -536,7 +531,7 @@ " " msgstr "" -#: ../roundup/admin.py:1036 +#: ../roundup/admin.py:1047 msgid "" "Usage: restore designator[,designator]*\n" " Restore the retired node specified by designator.\n" @@ -546,7 +541,7 @@ msgstr "" #. 
grab the directory to export to -#: ../roundup/admin.py:1058 +#: ../roundup/admin.py:1070 msgid "" "Usage: export [[-]class[,class]] export_dir\n" " Export the database to colon-separated-value files.\n" @@ -562,7 +557,7 @@ " " msgstr "" -#: ../roundup/admin.py:1136 +#: ../roundup/admin.py:1145 msgid "" "Usage: exporttables [[-]class[,class]] export_dir\n" " Export the database to colon-separated-value files, excluding the\n" @@ -579,7 +574,7 @@ " " msgstr "" -#: ../roundup/admin.py:1151 +#: ../roundup/admin.py:1160 msgid "" "Usage: import import_dir\n" " Import a database from the directory containing CSV files,\n" @@ -602,7 +597,7 @@ " " msgstr "" -#: ../roundup/admin.py:1225 +#: ../roundup/admin.py:1235 msgid "" "Usage: pack period | date\n" "\n" @@ -624,11 +619,11 @@ " " msgstr "" -#: ../roundup/admin.py:1253 +#: ../roundup/admin.py:1263 msgid "Invalid format" msgstr "" -#: ../roundup/admin.py:1263 +#: ../roundup/admin.py:1274 msgid "" "Usage: reindex [classname|designator]*\n" " Re-generate a tracker's search indexes.\n" @@ -638,137 +633,170 @@ " " msgstr "" -#: ../roundup/admin.py:1277 +#: ../roundup/admin.py:1288 #, python-format msgid "no such item \"%(designator)s\"" msgstr "" -#: ../roundup/admin.py:1287 +#: ../roundup/admin.py:1298 msgid "" "Usage: security [Role name]\n" " Display the Permissions available to one or all Roles.\n" " " msgstr "" -#: ../roundup/admin.py:1295 +#: ../roundup/admin.py:1306 #, python-format msgid "No such Role \"%(role)s\"" msgstr "" -#: ../roundup/admin.py:1301 +#: ../roundup/admin.py:1312 #, python-format msgid "New Web users get the Roles \"%(role)s\"" msgstr "" -#: ../roundup/admin.py:1303 +#: ../roundup/admin.py:1314 #, python-format msgid "New Web users get the Role \"%(role)s\"" msgstr "" -#: ../roundup/admin.py:1306 +#: ../roundup/admin.py:1317 #, python-format msgid "New Email users get the Roles \"%(role)s\"" msgstr "" -#: ../roundup/admin.py:1308 +#: ../roundup/admin.py:1319 #, python-format msgid "New Email users get the Role \"%(role)s\"" msgstr "" -#: ../roundup/admin.py:1311 +#: ../roundup/admin.py:1322 #, python-format msgid "Role \"%(name)s\":" msgstr "" -#: ../roundup/admin.py:1316 +#: ../roundup/admin.py:1327 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\": %(properties)s only)" msgstr "" -#: ../roundup/admin.py:1319 +#: ../roundup/admin.py:1330 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\" only)" msgstr "" -#: ../roundup/admin.py:1322 +#: ../roundup/admin.py:1333 #, python-format msgid " %(description)s (%(name)s)" msgstr "" -#: ../roundup/admin.py:1351 +#: ../roundup/admin.py:1362 #, python-format msgid "Unknown command \"%(command)s\" (\"help commands\" for a list)" msgstr "" -#: ../roundup/admin.py:1357 +#: ../roundup/admin.py:1368 #, python-format msgid "Multiple commands match \"%(command)s\": %(list)s" msgstr "" -#: ../roundup/admin.py:1364 +#: ../roundup/admin.py:1375 msgid "Enter tracker home: " msgstr "" -#: ../roundup/admin.py:1371 ../roundup/admin.py:1377 ../roundup/admin.py:1397 -#: ../roundup/admin.py:1371:1377 :1397 +#: ../roundup/admin.py:1382 ../roundup/admin.py:1388 ../roundup/admin.py:1408 +#: ../roundup/admin.py:1382:1388 :1408 #, python-format msgid "Error: %(message)s" msgstr "" -#: ../roundup/admin.py:1385 +#: ../roundup/admin.py:1396 #, python-format msgid "Error: Couldn't open tracker: %(message)s" msgstr "" -#: ../roundup/admin.py:1410 +#: ../roundup/admin.py:1421 #, python-format msgid "" "Roundup %s ready for input.\n" "Type \"help\" for help." 
msgstr "" -#: ../roundup/admin.py:1415 +#: ../roundup/admin.py:1426 msgid "Note: command history and editing not available" msgstr "" -#: ../roundup/admin.py:1419 +#: ../roundup/admin.py:1430 msgid "roundup> " msgstr "" -#: ../roundup/admin.py:1421 +#: ../roundup/admin.py:1432 msgid "exit..." msgstr "" -#: ../roundup/admin.py:1431 +#: ../roundup/admin.py:1442 msgid "There are unsaved changes. Commit them (y/N)? " msgstr "" -#: ../roundup/backends/back_anydbm.py:2000 +#: ../roundup/backends/back_anydbm.py:219 +#: ../roundup/backends/sessions_dbm.py:50 +msgid "Couldn't identify database type" +msgstr "" + +#: ../roundup/backends/back_anydbm.py:245 +#, python-format +msgid "Couldn't open database - the required module '%s' is not available" +msgstr "" + +#: ../roundup/backends/back_anydbm.py:795 +#: ../roundup/backends/back_anydbm.py:1070 +#: ../roundup/backends/back_anydbm.py:1267 +#: ../roundup/backends/back_anydbm.py:1285 +#: ../roundup/backends/back_anydbm.py:1331 +#: ../roundup/backends/back_anydbm.py:1901 +#: ../roundup/backends/back_anydbm.py:795:1070 +#: ../roundup/backends/back_metakit.py:567 +#: ../roundup/backends/back_metakit.py:834 +#: ../roundup/backends/back_metakit.py:866 +#: ../roundup/backends/back_metakit.py:1601 +#: ../roundup/backends/back_metakit.py:567:834 +#: ../roundup/backends/rdbms_common.py:1320 +#: ../roundup/backends/rdbms_common.py:1549 +#: ../roundup/backends/rdbms_common.py:1755 +#: ../roundup/backends/rdbms_common.py:1775 +#: ../roundup/backends/rdbms_common.py:1828 +#: ../roundup/backends/rdbms_common.py:2436 +#: ../roundup/backends/rdbms_common.py:1320:1549 :1267:1285 :1331:1901 +#: :1755:1775 :1828:2436 :866:1601 +msgid "Database open read-only" +msgstr "" + +#: ../roundup/backends/back_anydbm.py:2003 #, python-format msgid "WARNING: invalid date tuple %r" msgstr "" -#: ../roundup/backends/rdbms_common.py:1442 +#: ../roundup/backends/rdbms_common.py:1449 msgid "create" msgstr "" -#: ../roundup/backends/rdbms_common.py:1608 +#: ../roundup/backends/rdbms_common.py:1615 msgid "unlink" msgstr "" -#: ../roundup/backends/rdbms_common.py:1612 +#: ../roundup/backends/rdbms_common.py:1619 msgid "link" msgstr "" -#: ../roundup/backends/rdbms_common.py:1732 +#: ../roundup/backends/rdbms_common.py:1741 msgid "set" msgstr "" -#: ../roundup/backends/rdbms_common.py:1756 +#: ../roundup/backends/rdbms_common.py:1765 msgid "retired" msgstr "" -#: ../roundup/backends/rdbms_common.py:1786 +#: ../roundup/backends/rdbms_common.py:1795 msgid "restored" msgstr "" @@ -799,124 +827,124 @@ msgid "%(classname)s %(itemid)s has been retired" msgstr "" -#: ../roundup/cgi/actions.py:174 ../roundup/cgi/actions.py:202 -#: ../roundup/cgi/actions.py:174:202 +#: ../roundup/cgi/actions.py:169 ../roundup/cgi/actions.py:197 +#: ../roundup/cgi/actions.py:169:197 msgid "You do not have permission to edit queries" msgstr "" -#: ../roundup/cgi/actions.py:180 ../roundup/cgi/actions.py:209 -#: ../roundup/cgi/actions.py:180:209 +#: ../roundup/cgi/actions.py:175 ../roundup/cgi/actions.py:204 +#: ../roundup/cgi/actions.py:175:204 msgid "You do not have permission to store queries" msgstr "" -#: ../roundup/cgi/actions.py:298 +#: ../roundup/cgi/actions.py:310 #, python-format msgid "Not enough values on line %(line)s" msgstr "" -#: ../roundup/cgi/actions.py:345 +#: ../roundup/cgi/actions.py:357 msgid "Items edited OK" msgstr "" -#: ../roundup/cgi/actions.py:405 +#: ../roundup/cgi/actions.py:416 #, python-format msgid "%(class)s %(id)s %(properties)s edited ok" msgstr "" -#: ../roundup/cgi/actions.py:408 
+#: ../roundup/cgi/actions.py:419 #, python-format msgid "%(class)s %(id)s - nothing changed" msgstr "" -#: ../roundup/cgi/actions.py:420 +#: ../roundup/cgi/actions.py:431 #, python-format msgid "%(class)s %(id)s created" msgstr "" -#: ../roundup/cgi/actions.py:452 +#: ../roundup/cgi/actions.py:463 #, python-format msgid "You do not have permission to edit %(class)s" msgstr "" -#: ../roundup/cgi/actions.py:464 +#: ../roundup/cgi/actions.py:475 #, python-format msgid "You do not have permission to create %(class)s" msgstr "" -#: ../roundup/cgi/actions.py:488 +#: ../roundup/cgi/actions.py:499 msgid "You do not have permission to edit user roles" msgstr "" -#: ../roundup/cgi/actions.py:538 +#: ../roundup/cgi/actions.py:549 #, python-format msgid "" "Edit Error: someone else has edited this %s (%s). View their changes in a new window." msgstr "" -#: ../roundup/cgi/actions.py:566 +#: ../roundup/cgi/actions.py:577 #, python-format msgid "Edit Error: %s" msgstr "" -#: ../roundup/cgi/actions.py:597 ../roundup/cgi/actions.py:608 -#: ../roundup/cgi/actions.py:779 ../roundup/cgi/actions.py:798 -#: ../roundup/cgi/actions.py:597:608 :779:798 +#: ../roundup/cgi/actions.py:608 ../roundup/cgi/actions.py:619 +#: ../roundup/cgi/actions.py:790 ../roundup/cgi/actions.py:809 +#: ../roundup/cgi/actions.py:608:619 :790:809 #, python-format msgid "Error: %s" msgstr "" -#: ../roundup/cgi/actions.py:634 +#: ../roundup/cgi/actions.py:645 msgid "" "Invalid One Time Key!\n" "(a Mozilla bug may cause this message to show up erroneously, please check " "your email)" msgstr "" -#: ../roundup/cgi/actions.py:676 +#: ../roundup/cgi/actions.py:687 #, python-format msgid "Password reset and email sent to %s" msgstr "" -#: ../roundup/cgi/actions.py:685 +#: ../roundup/cgi/actions.py:696 msgid "Unknown username" msgstr "" -#: ../roundup/cgi/actions.py:693 +#: ../roundup/cgi/actions.py:704 msgid "Unknown email address" msgstr "" -#: ../roundup/cgi/actions.py:698 +#: ../roundup/cgi/actions.py:709 msgid "You need to specify a username or address" msgstr "" -#: ../roundup/cgi/actions.py:723 +#: ../roundup/cgi/actions.py:734 #, python-format msgid "Email sent to %s" msgstr "" -#: ../roundup/cgi/actions.py:742 +#: ../roundup/cgi/actions.py:753 msgid "You are now registered, welcome!" msgstr "" -#: ../roundup/cgi/actions.py:787 +#: ../roundup/cgi/actions.py:798 msgid "It is not permitted to supply roles at registration." msgstr "" -#: ../roundup/cgi/actions.py:879 +#: ../roundup/cgi/actions.py:890 msgid "You are logged out" msgstr "" -#: ../roundup/cgi/actions.py:896 +#: ../roundup/cgi/actions.py:907 msgid "Username required" msgstr "" -#: ../roundup/cgi/actions.py:931 ../roundup/cgi/actions.py:935 -#: ../roundup/cgi/actions.py:931:935 +#: ../roundup/cgi/actions.py:942 ../roundup/cgi/actions.py:946 +#: ../roundup/cgi/actions.py:942:946 msgid "Invalid login" msgstr "" -#: ../roundup/cgi/actions.py:941 +#: ../roundup/cgi/actions.py:952 msgid "You do not have permission to login" msgstr "" @@ -990,7 +1018,7 @@ msgid "undefined" msgstr "" -#: ../roundup/cgi/client.py:49 +#: ../roundup/cgi/client.py:51 msgid "" "An error has occurred\n" "

        An error has occurred

        \n" @@ -999,29 +1027,29 @@ "" msgstr "" -#: ../roundup/cgi/client.py:326 +#: ../roundup/cgi/client.py:377 msgid "Form Error: " msgstr "" -#: ../roundup/cgi/client.py:381 +#: ../roundup/cgi/client.py:432 #, python-format msgid "Unrecognized charset: %r" msgstr "" -#: ../roundup/cgi/client.py:509 +#: ../roundup/cgi/client.py:560 msgid "Anonymous users are not allowed to use the web interface" msgstr "" -#: ../roundup/cgi/client.py:664 +#: ../roundup/cgi/client.py:715 msgid "You are not allowed to view this file." msgstr "" -#: ../roundup/cgi/client.py:758 +#: ../roundup/cgi/client.py:808 #, python-format msgid "%(starttag)sTime elapsed: %(seconds)fs%(endtag)s\n" msgstr "" -#: ../roundup/cgi/client.py:762 +#: ../roundup/cgi/client.py:812 #, python-format msgid "" "%(starttag)sCache hits: %(cache_hits)d, misses %(cache_misses)d. Loading " @@ -1079,251 +1107,303 @@ msgid "File is empty" msgstr "" -#: ../roundup/cgi/templating.py:73 +#: ../roundup/cgi/templating.py:77 #, python-format msgid "You are not allowed to %(action)s items of class %(class)s" msgstr "" -#: ../roundup/cgi/templating.py:645 +#: ../roundup/cgi/templating.py:657 msgid "(list)" msgstr "" -#: ../roundup/cgi/templating.py:714 +#: ../roundup/cgi/templating.py:726 msgid "Submit New Entry" msgstr "" -#: ../roundup/cgi/templating.py:728 ../roundup/cgi/templating.py:862 -#: ../roundup/cgi/templating.py:1269 ../roundup/cgi/templating.py:1298 -#: ../roundup/cgi/templating.py:1318 ../roundup/cgi/templating.py:1364 -#: ../roundup/cgi/templating.py:1387 ../roundup/cgi/templating.py:1423 -#: ../roundup/cgi/templating.py:1460 ../roundup/cgi/templating.py:1513 -#: ../roundup/cgi/templating.py:1530 ../roundup/cgi/templating.py:1614 -#: ../roundup/cgi/templating.py:1634 ../roundup/cgi/templating.py:1652 -#: ../roundup/cgi/templating.py:1684 ../roundup/cgi/templating.py:1694 -#: ../roundup/cgi/templating.py:1746 ../roundup/cgi/templating.py:1935 -#: ../roundup/cgi/templating.py:728:862 :1269:1298 :1318:1364 :1387:1423 -#: :1460:1513 :1530:1614 :1634:1652 :1684:1694 :1746:1935 +#: ../roundup/cgi/templating.py:740 ../roundup/cgi/templating.py:873 +#: ../roundup/cgi/templating.py:1294 ../roundup/cgi/templating.py:1323 +#: ../roundup/cgi/templating.py:1343 ../roundup/cgi/templating.py:1356 +#: ../roundup/cgi/templating.py:1407 ../roundup/cgi/templating.py:1430 +#: ../roundup/cgi/templating.py:1466 ../roundup/cgi/templating.py:1503 +#: ../roundup/cgi/templating.py:1556 ../roundup/cgi/templating.py:1573 +#: ../roundup/cgi/templating.py:1657 ../roundup/cgi/templating.py:1677 +#: ../roundup/cgi/templating.py:1695 ../roundup/cgi/templating.py:1727 +#: ../roundup/cgi/templating.py:1737 ../roundup/cgi/templating.py:1789 +#: ../roundup/cgi/templating.py:1978 ../roundup/cgi/templating.py:740:873 +#: :1294:1323 :1343:1356 :1407:1430 :1466:1503 :1556:1573 :1657:1677 :1695:1727 +#: :1737:1789 :1978 msgid "[hidden]" msgstr "" -#: ../roundup/cgi/templating.py:729 +#: ../roundup/cgi/templating.py:741 msgid "New node - no history" msgstr "" -#: ../roundup/cgi/templating.py:844 +#: ../roundup/cgi/templating.py:855 msgid "Submit Changes" msgstr "" -#: ../roundup/cgi/templating.py:926 +#: ../roundup/cgi/templating.py:937 msgid "The indicated property no longer exists" msgstr "" -#: ../roundup/cgi/templating.py:927 +#: ../roundup/cgi/templating.py:938 #, python-format msgid "%s: %s\n" msgstr "" -#: ../roundup/cgi/templating.py:940 +#: ../roundup/cgi/templating.py:951 #, python-format msgid "The linked class %(classname)s no longer exists" msgstr "" -#: 
../roundup/cgi/templating.py:973 ../roundup/cgi/templating.py:997 -#: ../roundup/cgi/templating.py:973:997 +#: ../roundup/cgi/templating.py:984 ../roundup/cgi/templating.py:1008 +#: ../roundup/cgi/templating.py:984:1008 msgid "The linked node no longer exists" msgstr "" -#: ../roundup/cgi/templating.py:1050 +#: ../roundup/cgi/templating.py:1061 #, python-format msgid "%s: (no value)" msgstr "" -#: ../roundup/cgi/templating.py:1062 +#: ../roundup/cgi/templating.py:1073 msgid "" "This event is not handled by the history display!" msgstr "" -#: ../roundup/cgi/templating.py:1074 +#: ../roundup/cgi/templating.py:1085 msgid "Note:" msgstr "" -#: ../roundup/cgi/templating.py:1083 +#: ../roundup/cgi/templating.py:1094 msgid "History" msgstr "" -#: ../roundup/cgi/templating.py:1085 +#: ../roundup/cgi/templating.py:1096 msgid "Date" msgstr "" -#: ../roundup/cgi/templating.py:1086 +#: ../roundup/cgi/templating.py:1097 msgid "User" msgstr "" -#: ../roundup/cgi/templating.py:1087 +#: ../roundup/cgi/templating.py:1098 msgid "Action" msgstr "" -#: ../roundup/cgi/templating.py:1088 +#: ../roundup/cgi/templating.py:1099 msgid "Args" msgstr "" -#: ../roundup/cgi/templating.py:1130 +#: ../roundup/cgi/templating.py:1141 #, python-format msgid "Copy of %(class)s %(id)s" msgstr "" -#: ../roundup/cgi/templating.py:1391 +#: ../roundup/cgi/templating.py:1434 msgid "*encrypted*" msgstr "" -#: ../roundup/cgi/templating.py:1464 ../roundup/cgi/templating.py:1485 -#: ../roundup/cgi/templating.py:1491 ../roundup/cgi/templating.py:1039:1464 -#: :1485:1491 +#: ../roundup/cgi/templating.py:1507 ../roundup/cgi/templating.py:1528 +#: ../roundup/cgi/templating.py:1534 ../roundup/cgi/templating.py:1050:1507 +#: :1528:1534 msgid "No" msgstr "" -#: ../roundup/cgi/templating.py:1464 ../roundup/cgi/templating.py:1483 -#: ../roundup/cgi/templating.py:1488 ../roundup/cgi/templating.py:1039:1464 -#: :1483:1488 +#: ../roundup/cgi/templating.py:1507 ../roundup/cgi/templating.py:1526 +#: ../roundup/cgi/templating.py:1531 ../roundup/cgi/templating.py:1050:1507 +#: :1526:1531 msgid "Yes" msgstr "" -#: ../roundup/cgi/templating.py:1577 +#: ../roundup/cgi/templating.py:1620 msgid "" "default value for DateHTMLProperty must be either DateHTMLProperty or string " "date representation." 
msgstr "" -#: ../roundup/cgi/templating.py:1737 +#: ../roundup/cgi/templating.py:1780 #, python-format msgid "Attempt to look up %(attr)s on a missing value" msgstr "" -#: ../roundup/cgi/templating.py:1810 +#: ../roundup/cgi/templating.py:1853 #, python-format msgid "" msgstr "" -#: ../roundup/date.py:301 +#: ../roundup/date.py:300 msgid "" "Not a date spec: \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or \"yyyy-" "mm-dd.HH:MM:SS.SSS\"" msgstr "" -#: ../roundup/date.py:363 +#: ../roundup/date.py:359 #, python-format msgid "" "%r not a date / time spec \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" " "or \"yyyy-mm-dd.HH:MM:SS.SSS\"" msgstr "" -#: ../roundup/date.py:662 +#: ../roundup/date.py:666 msgid "" "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS] [date spec]" msgstr "" -#: ../roundup/date.py:681 +#: ../roundup/date.py:685 msgid "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS]" msgstr "" -#: ../roundup/date.py:818 +#: ../roundup/date.py:822 #, python-format msgid "%(number)s year" msgid_plural "%(number)s years" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:822 +#: ../roundup/date.py:826 #, python-format msgid "%(number)s month" msgid_plural "%(number)s months" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:826 +#: ../roundup/date.py:830 #, python-format msgid "%(number)s week" msgid_plural "%(number)s weeks" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:830 +#: ../roundup/date.py:834 #, python-format msgid "%(number)s day" msgid_plural "%(number)s days" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:834 +#: ../roundup/date.py:838 msgid "tomorrow" msgstr "" -#: ../roundup/date.py:836 +#: ../roundup/date.py:840 msgid "yesterday" msgstr "" -#: ../roundup/date.py:839 +#: ../roundup/date.py:843 #, python-format msgid "%(number)s hour" msgid_plural "%(number)s hours" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:843 +#: ../roundup/date.py:847 msgid "an hour" msgstr "" -#: ../roundup/date.py:845 +#: ../roundup/date.py:849 msgid "1 1/2 hours" msgstr "" -#: ../roundup/date.py:847 +#: ../roundup/date.py:851 #, python-format msgid "1 %(number)s/4 hours" msgid_plural "1 %(number)s/4 hours" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:851 +#: ../roundup/date.py:855 msgid "in a moment" msgstr "" -#: ../roundup/date.py:853 +#: ../roundup/date.py:857 msgid "just now" msgstr "" -#: ../roundup/date.py:856 +#: ../roundup/date.py:860 msgid "1 minute" msgstr "" -#: ../roundup/date.py:859 +#: ../roundup/date.py:863 #, python-format msgid "%(number)s minute" msgid_plural "%(number)s minutes" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:862 +#: ../roundup/date.py:866 msgid "1/2 an hour" msgstr "" -#: ../roundup/date.py:864 +#: ../roundup/date.py:868 #, python-format msgid "%(number)s/4 hour" msgid_plural "%(number)s/4 hours" msgstr[0] "" msgstr[1] "" -#: ../roundup/date.py:868 +#: ../roundup/date.py:872 #, python-format msgid "%s ago" msgstr "" -#: ../roundup/date.py:870 +#: ../roundup/date.py:874 #, python-format msgid "in %s" msgstr "" +#: ../roundup/hyperdb.py:87 +#, python-format +msgid "property %s: %s" +msgstr "" + +#: ../roundup/hyperdb.py:107 +#, python-format +msgid "property %s: %r is an invalid date (%s)" +msgstr "" + +#: ../roundup/hyperdb.py:124 +#, python-format +msgid "property %s: %r is an invalid date interval (%s)" +msgstr "" + +#: ../roundup/hyperdb.py:219 +#, python-format +msgid "property %s: %r is not currently an element" +msgstr "" + +#: ../roundup/hyperdb.py:263 +#, python-format +msgid "property %s: %r is not a number" 
+msgstr "" + +#: ../roundup/hyperdb.py:276 +#, python-format +msgid "\"%s\" not a node designator" +msgstr "" + +#: ../roundup/hyperdb.py:949 ../roundup/hyperdb.py:957 +#: ../roundup/hyperdb.py:949:957 +#, python-format +msgid "Not a property name: %s" +msgstr "" + +#: ../roundup/hyperdb.py:1240 +#, python-format +msgid "property %s: %r is not a %s." +msgstr "" + +#: ../roundup/hyperdb.py:1243 +#, python-format +msgid "you may only enter ID values for property %s" +msgstr "" + +#: ../roundup/hyperdb.py:1273 +#, python-format +msgid "%r is not a property of %s" +msgstr "" + #: ../roundup/init.py:134 #, python-format msgid "" @@ -1331,13 +1411,45 @@ "\tcontains old-style template - ignored" msgstr "" -#: ../roundup/mailgw.py:583 +#: ../roundup/mailgw.py:199 ../roundup/mailgw.py:211 +#: ../roundup/mailgw.py:199:211 +#, python-format +msgid "Message signed with unknown key: %s" +msgstr "" + +#: ../roundup/mailgw.py:202 +#, python-format +msgid "Message signed with an expired key: %s" +msgstr "" + +#: ../roundup/mailgw.py:205 +#, python-format +msgid "Message signed with a revoked key: %s" +msgstr "" + +#: ../roundup/mailgw.py:208 +msgid "Invalid PGP signature detected." +msgstr "" + +#: ../roundup/mailgw.py:404 +msgid "Unknown multipart/encrypted version." +msgstr "" + +#: ../roundup/mailgw.py:413 +msgid "Unable to decrypt your message." +msgstr "" + +#: ../roundup/mailgw.py:442 +msgid "No PGP signature found in message." +msgstr "" + +#: ../roundup/mailgw.py:749 msgid "" "\n" "Emails to Roundup trackers must include a Subject: line!\n" msgstr "" -#: ../roundup/mailgw.py:673 +#: ../roundup/mailgw.py:873 #, python-format msgid "" "\n" @@ -1354,30 +1466,46 @@ "Subject was: '%(subject)s'\n" msgstr "" -#: ../roundup/mailgw.py:704 +#: ../roundup/mailgw.py:911 #, python-format msgid "" "\n" -"The class name you identified in the subject line (\"%(classname)s\") does " -"not exist in the\n" -"database.\n" +"The class name you identified in the subject line (\"%(classname)s\") does\n" +"not exist in the database.\n" "\n" "Valid class names are: %(validname)s\n" "Subject was: \"%(subject)s\"\n" msgstr "" -#: ../roundup/mailgw.py:739 +#: ../roundup/mailgw.py:919 +#, python-format +msgid "" +"\n" +"You did not identify a class name in the subject line and there is no\n" +"default set for this tracker. The subject must contain a class name or\n" +"designator to indicate the 'topic' of the message. 
For example:\n" +" Subject: [issue] This is a new issue\n" +" - this will create a new issue in the tracker with the title 'This is\n" +" a new issue'.\n" +" Subject: [issue1234] This is a followup to issue 1234\n" +" - this will append the message's contents to the existing issue 1234\n" +" in the tracker.\n" +"\n" +"Subject was: '%(subject)s'\n" +msgstr "" + +#: ../roundup/mailgw.py:960 #, python-format msgid "" "\n" "I cannot match your message to a node in the database - you need to either\n" -"supply a full designator (with number, eg \"[issue123]\" or keep the\n" +"supply a full designator (with number, eg \"[issue123]\") or keep the\n" "previous subject title intact so I can match that.\n" "\n" "Subject was: \"%(subject)s\"\n" msgstr "" -#: ../roundup/mailgw.py:772 +#: ../roundup/mailgw.py:993 #, python-format msgid "" "\n" @@ -1387,7 +1515,7 @@ "Subject was: \"%(subject)s\"\n" msgstr "" -#: ../roundup/mailgw.py:800 +#: ../roundup/mailgw.py:1021 #, python-format msgid "" "\n" @@ -1396,7 +1524,7 @@ " %(current_class)s\n" msgstr "" -#: ../roundup/mailgw.py:823 +#: ../roundup/mailgw.py:1044 #, python-format msgid "" "\n" @@ -1405,30 +1533,30 @@ " %(errors)s\n" msgstr "" -#: ../roundup/mailgw.py:853 +#: ../roundup/mailgw.py:1084 #, python-format msgid "" "\n" -"You are not a registered user.\n" +"You are not a registered user.%(registration_info)s\n" "\n" "Unknown address: %(from_address)s\n" msgstr "" -#: ../roundup/mailgw.py:861 +#: ../roundup/mailgw.py:1092 msgid "You are not permitted to access this tracker." msgstr "" -#: ../roundup/mailgw.py:868 +#: ../roundup/mailgw.py:1099 #, python-format msgid "You are not permitted to edit %(classname)s." msgstr "" -#: ../roundup/mailgw.py:872 +#: ../roundup/mailgw.py:1103 #, python-format msgid "You are not permitted to create %(classname)s." msgstr "" -#: ../roundup/mailgw.py:919 +#: ../roundup/mailgw.py:1150 #, python-format msgid "" "\n" @@ -1438,27 +1566,34 @@ "Subject was: \"%(subject)s\"\n" msgstr "" -#: ../roundup/mailgw.py:947 +#: ../roundup/mailgw.py:1203 +msgid "" +"\n" +"This tracker has been configured to require all email be PGP signed or\n" +"encrypted." +msgstr "" + +#: ../roundup/mailgw.py:1209 msgid "" "\n" "Roundup requires the submission to be plain text. The message parser could\n" "not find a text/plain part to use.\n" msgstr "" -#: ../roundup/mailgw.py:969 +#: ../roundup/mailgw.py:1226 msgid "You are not permitted to create files." msgstr "" -#: ../roundup/mailgw.py:983 +#: ../roundup/mailgw.py:1240 #, python-format msgid "You are not permitted to add files to %(classname)s." msgstr "" -#: ../roundup/mailgw.py:1001 +#: ../roundup/mailgw.py:1258 msgid "You are not permitted to create messages." msgstr "" -#: ../roundup/mailgw.py:1009 +#: ../roundup/mailgw.py:1266 #, python-format msgid "" "\n" @@ -1466,17 +1601,17 @@ "%(error)s\n" msgstr "" -#: ../roundup/mailgw.py:1017 +#: ../roundup/mailgw.py:1274 #, python-format msgid "You are not permitted to add messages to %(classname)s." msgstr "" -#: ../roundup/mailgw.py:1044 +#: ../roundup/mailgw.py:1301 #, python-format msgid "You are not permitted to edit property %(prop)s of class %(classname)s." 
msgstr "" -#: ../roundup/mailgw.py:1052 +#: ../roundup/mailgw.py:1309 #, python-format msgid "" "\n" @@ -1484,79 +1619,85 @@ " %(message)s\n" msgstr "" -#: ../roundup/mailgw.py:1074 +#: ../roundup/mailgw.py:1331 msgid "not of form [arg=value,value,...;arg=value,value,...]" msgstr "" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "files" msgstr "" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "messages" msgstr "" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "nosy" msgstr "" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "superseder" msgstr "" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "title" msgstr "" -#: ../roundup/roundupdb.py:147 +#: ../roundup/roundupdb.py:148 msgid "assignedto" msgstr "" -#: ../roundup/roundupdb.py:147 -msgid "priority" +#: ../roundup/roundupdb.py:148 +msgid "keyword" msgstr "" -#: ../roundup/roundupdb.py:147 -msgid "status" +#: ../roundup/roundupdb.py:148 +msgid "priority" msgstr "" -#: ../roundup/roundupdb.py:147 -msgid "topic" +#: ../roundup/roundupdb.py:148 +msgid "status" msgstr "" -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "activity" msgstr "" #. following properties are common for all hyperdb classes #. they are listed here to keep things in one place -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "actor" msgstr "" -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "creation" msgstr "" -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "creator" msgstr "" -#: ../roundup/roundupdb.py:308 +#: ../roundup/roundupdb.py:309 #, python-format msgid "New submission from %(authname)s%(authaddr)s:" msgstr "" -#: ../roundup/roundupdb.py:311 +#: ../roundup/roundupdb.py:312 #, python-format msgid "%(authname)s%(authaddr)s added the comment:" msgstr "" -#: ../roundup/roundupdb.py:314 -msgid "System message:" +#: ../roundup/roundupdb.py:315 +#, python-format +msgid "Change by %(authname)s%(authaddr)s:" +msgstr "" + +#: ../roundup/roundupdb.py:342 +#, python-format +msgid "File '%(filename)s' not attached - you can download it from %(link)s." msgstr "" -#: ../roundup/roundupdb.py:597 +#: ../roundup/roundupdb.py:615 #, python-format msgid "" "\n" @@ -1675,58 +1816,62 @@ "\"imaps\"" msgstr "" -#: ../roundup/scripts/roundup_server.py:157 +#: ../roundup/scripts/roundup_server.py:76 +msgid "WARNING: generating temporary SSL certificate" +msgstr "" + +#: ../roundup/scripts/roundup_server.py:253 msgid "" "Roundup trackers index\n" "

        Roundup trackers index

          \n" msgstr "" -#: ../roundup/scripts/roundup_server.py:293 +#: ../roundup/scripts/roundup_server.py:389 #, python-format msgid "Error: %s: %s" msgstr "" -#: ../roundup/scripts/roundup_server.py:303 +#: ../roundup/scripts/roundup_server.py:399 msgid "WARNING: ignoring \"-g\" argument, not root" msgstr "" -#: ../roundup/scripts/roundup_server.py:309 +#: ../roundup/scripts/roundup_server.py:405 msgid "Can't change groups - no grp module" msgstr "" -#: ../roundup/scripts/roundup_server.py:318 +#: ../roundup/scripts/roundup_server.py:414 #, python-format msgid "Group %(group)s doesn't exist" msgstr "" -#: ../roundup/scripts/roundup_server.py:329 +#: ../roundup/scripts/roundup_server.py:425 msgid "Can't run as root!" msgstr "" -#: ../roundup/scripts/roundup_server.py:332 +#: ../roundup/scripts/roundup_server.py:428 msgid "WARNING: ignoring \"-u\" argument, not root" msgstr "" -#: ../roundup/scripts/roundup_server.py:338 +#: ../roundup/scripts/roundup_server.py:434 msgid "Can't change users - no pwd module" msgstr "" -#: ../roundup/scripts/roundup_server.py:347 +#: ../roundup/scripts/roundup_server.py:443 #, python-format msgid "User %(user)s doesn't exist" msgstr "" -#: ../roundup/scripts/roundup_server.py:481 +#: ../roundup/scripts/roundup_server.py:592 #, python-format msgid "Multiprocess mode \"%s\" is not available, switching to single-process" msgstr "" -#: ../roundup/scripts/roundup_server.py:504 +#: ../roundup/scripts/roundup_server.py:620 #, python-format msgid "Unable to bind to port %s, port already in use." msgstr "" -#: ../roundup/scripts/roundup_server.py:572 +#: ../roundup/scripts/roundup_server.py:688 msgid "" " -c Windows Service options.\n" " If you want to run the server as a Windows Service, you\n" @@ -1736,7 +1881,7 @@ " specifics." msgstr "" -#: ../roundup/scripts/roundup_server.py:579 +#: ../roundup/scripts/roundup_server.py:695 msgid "" " -u runs the Roundup web server as this UID\n" " -g runs the Roundup web server as this GID\n" @@ -1745,7 +1890,7 @@ " specified if -d is used." 
msgstr "" -#: ../roundup/scripts/roundup_server.py:586 +#: ../roundup/scripts/roundup_server.py:702 #, python-format msgid "" "%(message)sUsage: roundup-server [options] [name=tracker home]*\n" @@ -1760,6 +1905,9 @@ " -l log to the file indicated by fname instead of stderr/stdout\n" " -N log client machine names instead of IP addresses (much " "slower)\n" +" -i set tracker index template\n" +" -s enable SSL\n" +" -e PEM file containing SSL key and certificate\n" " -t multiprocess mode (default: %(mp_def)s).\n" " Allowed values: %(mp_types)s.\n" "%(os_part)s\n" @@ -1800,20 +1948,20 @@ " any url-unsafe characters like spaces, as these confuse IE.\n" msgstr "" -#: ../roundup/scripts/roundup_server.py:741 +#: ../roundup/scripts/roundup_server.py:860 msgid "Instances must be name=home" msgstr "" -#: ../roundup/scripts/roundup_server.py:755 +#: ../roundup/scripts/roundup_server.py:874 #, python-format msgid "Configuration saved to %s" msgstr "" -#: ../roundup/scripts/roundup_server.py:773 +#: ../roundup/scripts/roundup_server.py:892 msgid "Sorry, you can't run the server as a daemon on this Operating System" msgstr "" -#: ../roundup/scripts/roundup_server.py:788 +#: ../roundup/scripts/roundup_server.py:907 #, python-format msgid "Roundup server started on %(HOST)s:%(PORT)s" msgstr "" @@ -1888,21 +2036,21 @@ #: ../templates/classic/html/_generic.help.html:41 #: ../templates/classic/html/help.html:21 -#: ../templates/classic/html/issue.index.html:80 +#: ../templates/classic/html/issue.index.html:81 #: ../templates/minimal/html/_generic.help.html:41 msgid "<< previous" msgstr "" #: ../templates/classic/html/_generic.help.html:53 #: ../templates/classic/html/help.html:28 -#: ../templates/classic/html/issue.index.html:88 +#: ../templates/classic/html/issue.index.html:89 #: ../templates/minimal/html/_generic.help.html:53 msgid "${start}..${end} out of ${total}" msgstr "" #: ../templates/classic/html/_generic.help.html:57 #: ../templates/classic/html/help.html:32 -#: ../templates/classic/html/issue.index.html:91 +#: ../templates/classic/html/issue.index.html:92 #: ../templates/minimal/html/_generic.help.html:57 msgid "next >>" msgstr "" @@ -1932,6 +2080,7 @@ #: ../templates/minimal/html/_generic.index.html:19 #: ../templates/minimal/html/_generic.item.html:17 #: ../templates/minimal/html/user.index.html:13 +#: ../templates/minimal/html/user.item.html:39 #: ../templates/minimal/html/user.register.html:17 msgid "Please login with your username and password." 
msgstr "" @@ -2034,7 +2183,8 @@ msgstr "" #: ../templates/classic/html/issue.index.html:32 -msgid "Topic" +#: ../templates/classic/html/keyword.item.html:37 +msgid "Keyword" msgstr "" #: ../templates/classic/html/issue.index.html:33 @@ -2055,29 +2205,29 @@ msgid "Assigned To" msgstr "" -#: ../templates/classic/html/issue.index.html:104 +#: ../templates/classic/html/issue.index.html:105 msgid "Download as CSV" msgstr "" -#: ../templates/classic/html/issue.index.html:114 +#: ../templates/classic/html/issue.index.html:115 msgid "Sort on:" msgstr "" -#: ../templates/classic/html/issue.index.html:118 -#: ../templates/classic/html/issue.index.html:139 +#: ../templates/classic/html/issue.index.html:119 +#: ../templates/classic/html/issue.index.html:140 msgid "- nothing -" msgstr "" -#: ../templates/classic/html/issue.index.html:126 -#: ../templates/classic/html/issue.index.html:147 +#: ../templates/classic/html/issue.index.html:127 +#: ../templates/classic/html/issue.index.html:148 msgid "Descending:" msgstr "" -#: ../templates/classic/html/issue.index.html:135 +#: ../templates/classic/html/issue.index.html:136 msgid "Group on:" msgstr "" -#: ../templates/classic/html/issue.index.html:154 +#: ../templates/classic/html/issue.index.html:155 msgid "Redisplay" msgstr "" @@ -2122,7 +2272,9 @@ msgstr "" #: ../templates/classic/html/issue.item.html:78 -msgid "Topics" +#: ../templates/classic/html/page.html:103 +#: ../templates/minimal/html/page.html:102 +msgid "Keywords" msgstr "" #: ../templates/classic/html/issue.item.html:86 @@ -2138,9 +2290,9 @@ msgstr "" #: ../templates/classic/html/issue.item.html:114 -#: ../templates/classic/html/user.item.html:152 +#: ../templates/classic/html/user.item.html:153 #: ../templates/classic/html/user.register.html:69 -#: ../templates/minimal/html/user.item.html:147 +#: ../templates/minimal/html/user.item.html:153 msgid "" "
          Note:  highlighted  fields are required.
          " @@ -2236,91 +2388,92 @@ msgstr "" #: ../templates/classic/html/issue.search.html:56 -msgid "Topic:" +msgid "Keyword:" msgstr "" -#: ../templates/classic/html/issue.search.html:64 +#: ../templates/classic/html/issue.search.html:58 +#: ../templates/classic/html/issue.search.html:123 +#: ../templates/classic/html/issue.search.html:139 +msgid "not selected" +msgstr "" + +#: ../templates/classic/html/issue.search.html:67 msgid "ID:" msgstr "" -#: ../templates/classic/html/issue.search.html:72 +#: ../templates/classic/html/issue.search.html:75 msgid "Creation Date:" msgstr "" -#: ../templates/classic/html/issue.search.html:83 +#: ../templates/classic/html/issue.search.html:86 msgid "Creator:" msgstr "" -#: ../templates/classic/html/issue.search.html:85 +#: ../templates/classic/html/issue.search.html:88 msgid "created by me" msgstr "" -#: ../templates/classic/html/issue.search.html:94 +#: ../templates/classic/html/issue.search.html:97 msgid "Activity:" msgstr "" -#: ../templates/classic/html/issue.search.html:105 +#: ../templates/classic/html/issue.search.html:108 msgid "Actor:" msgstr "" -#: ../templates/classic/html/issue.search.html:107 +#: ../templates/classic/html/issue.search.html:110 msgid "done by me" msgstr "" -#: ../templates/classic/html/issue.search.html:118 +#: ../templates/classic/html/issue.search.html:121 msgid "Priority:" msgstr "" -#: ../templates/classic/html/issue.search.html:120 -#: ../templates/classic/html/issue.search.html:136 -msgid "not selected" -msgstr "" - -#: ../templates/classic/html/issue.search.html:131 +#: ../templates/classic/html/issue.search.html:134 msgid "Status:" msgstr "" -#: ../templates/classic/html/issue.search.html:134 +#: ../templates/classic/html/issue.search.html:137 msgid "not resolved" msgstr "" -#: ../templates/classic/html/issue.search.html:149 +#: ../templates/classic/html/issue.search.html:152 msgid "Assigned to:" msgstr "" -#: ../templates/classic/html/issue.search.html:152 +#: ../templates/classic/html/issue.search.html:155 msgid "assigned to me" msgstr "" -#: ../templates/classic/html/issue.search.html:154 +#: ../templates/classic/html/issue.search.html:157 msgid "unassigned" msgstr "" -#: ../templates/classic/html/issue.search.html:164 +#: ../templates/classic/html/issue.search.html:167 msgid "No Sort or group:" msgstr "" -#: ../templates/classic/html/issue.search.html:172 +#: ../templates/classic/html/issue.search.html:175 msgid "Pagesize:" msgstr "" -#: ../templates/classic/html/issue.search.html:178 +#: ../templates/classic/html/issue.search.html:181 msgid "Start With:" msgstr "" -#: ../templates/classic/html/issue.search.html:184 +#: ../templates/classic/html/issue.search.html:187 msgid "Sort Descending:" msgstr "" -#: ../templates/classic/html/issue.search.html:191 +#: ../templates/classic/html/issue.search.html:194 msgid "Group Descending:" msgstr "" -#: ../templates/classic/html/issue.search.html:198 +#: ../templates/classic/html/issue.search.html:201 msgid "Query name**:" msgstr "" -#: ../templates/classic/html/issue.search.html:210 +#: ../templates/classic/html/issue.search.html:213 #: ../templates/classic/html/page.html:43 #: ../templates/classic/html/page.html:92 #: ../templates/classic/html/user.help-search.html:69 @@ -2329,11 +2482,11 @@ msgid "Search" msgstr "" -#: ../templates/classic/html/issue.search.html:215 +#: ../templates/classic/html/issue.search.html:218 msgid "*: The \"all text\" field will look in message bodies and issue titles" msgstr "" -#: ../templates/classic/html/issue.search.html:218 +#: 
../templates/classic/html/issue.search.html:221 msgid "" "**: If you supply a name, the query will be saved off and available as a link " "in the sidebar" @@ -2361,10 +2514,6 @@ msgid "To create a new keyword, enter it below and click \"Submit New Entry\"." msgstr "" -#: ../templates/classic/html/keyword.item.html:37 -msgid "Keyword" -msgstr "" - #: ../templates/classic/html/msg.index.html:3 msgid "List of messages - ${tracker}" msgstr "" @@ -2441,11 +2590,6 @@ msgid "Show issue:" msgstr "" -#: ../templates/classic/html/page.html:103 -#: ../templates/minimal/html/page.html:102 -msgid "Keywords" -msgstr "" - #: ../templates/classic/html/page.html:108 #: ../templates/minimal/html/page.html:107 msgid "Edit Existing" @@ -2593,7 +2737,7 @@ msgstr "" #: ../templates/classic/html/query.edit.html:67 -#: ../templates/classic/html/query.edit.html:92 +#: ../templates/classic/html/query.edit.html:94 msgid "edit" msgstr "" @@ -2609,11 +2753,11 @@ msgid "Delete" msgstr "" -#: ../templates/classic/html/query.edit.html:94 +#: ../templates/classic/html/query.edit.html:96 msgid "[not yours to edit]" msgstr "" -#: ../templates/classic/html/query.edit.html:102 +#: ../templates/classic/html/query.edit.html:104 msgid "Save Selection" msgstr "" @@ -2735,26 +2879,26 @@ msgid "User${id} Editing" msgstr "" -#: ../templates/classic/html/user.item.html:79 +#: ../templates/classic/html/user.item.html:80 #: ../templates/classic/html/user.register.html:33 -#: ../templates/minimal/html/user.item.html:74 +#: ../templates/minimal/html/user.item.html:80 #: ../templates/minimal/html/user.register.html:41 msgid "Roles" msgstr "" -#: ../templates/classic/html/user.item.html:87 -#: ../templates/minimal/html/user.item.html:82 +#: ../templates/classic/html/user.item.html:88 +#: ../templates/minimal/html/user.item.html:88 msgid "(to give the user more than one role, enter a comma,separated,list)" msgstr "" -#: ../templates/classic/html/user.item.html:108 -#: ../templates/minimal/html/user.item.html:103 +#: ../templates/classic/html/user.item.html:109 +#: ../templates/minimal/html/user.item.html:109 msgid "(this is a numeric hour offset, the default is ${zone})" msgstr "" -#: ../templates/classic/html/user.item.html:129 +#: ../templates/classic/html/user.item.html:130 #: ../templates/classic/html/user.register.html:53 -#: ../templates/minimal/html/user.item.html:124 +#: ../templates/minimal/html/user.item.html:130 #: ../templates/minimal/html/user.register.html:53 msgid "Alternate E-mail addresses
          One address per line" msgstr "" Modified: tracker/roundup-src/locale/ru.po ============================================================================== --- tracker/roundup-src/locale/ru.po (original) +++ tracker/roundup-src/locale/ru.po Sun Mar 9 09:26:16 2008 @@ -1,16 +1,16 @@ # Russian message file for Roundup Issue Tracker # alexander smishlajev , 2004 # -# $Id: ru.po,v 1.15 2006/12/18 12:10:10 a1s Exp $ +# $Id: ru.po,v 1.16 2007/09/16 07:23:04 a1s Exp $ # -# roundup.pot revision 1.22 +# roundup.pot revision 1.23 # msgid "" msgstr "" "Project-Id-Version: Roundup 1.3.2\n" "Report-Msgid-Bugs-To: roundup-devel at lists.sourceforge.net\n" "POT-Creation-Date: 2006-04-27 09:02+0300\n" -"PO-Revision-Date: 2006-12-18 13:38+0200\n" +"PO-Revision-Date: 2007-09-16 10:20+0200\n" "Last-Translator: alexander smishlajev \n" "Language-Team: Russian\n" "MIME-Version: 1.0\n" @@ -19,21 +19,21 @@ "Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n" "X-Poedit-Language: Russian\n" -#: ../roundup/admin.py:85 -#: ../roundup/admin.py:981 -#: ../roundup/admin.py:1030 -#: ../roundup/admin.py:1052 +#: ../roundup/admin.py:86 +#: ../roundup/admin.py:989 +#: ../roundup/admin.py:1040 +#: ../roundup/admin.py:1063 #, python-format msgid "no such class \"%(classname)s\"" msgstr "????? \"%(classname)s\" ?? ??????????" -#: ../roundup/admin.py:95 -#: ../roundup/admin.py:99 +#: ../roundup/admin.py:96 +#: ../roundup/admin.py:100 #, python-format msgid "argument \"%(arg)s\" not propname=value" msgstr "???????? \"%(arg)s\" ?????? ????? ??? ???=????????" -#: ../roundup/admin.py:112 +#: ../roundup/admin.py:113 #, python-format msgid "" "Problem: %(message)s\n" @@ -42,7 +42,7 @@ "??????: %(message)s\n" "\n" -#: ../roundup/admin.py:113 +#: ../roundup/admin.py:114 #, python-format msgid "" "%(message)sUsage: roundup-admin [options] [ ]\n" @@ -89,11 +89,11 @@ " roundup-admin help -- ??????? ?? ???????\n" " roundup-admin help all -- ??? ?????????? ?????????\n" -#: ../roundup/admin.py:140 +#: ../roundup/admin.py:141 msgid "Commands:" msgstr "???????:" -#: ../roundup/admin.py:147 +#: ../roundup/admin.py:148 msgid "" "Commands may be abbreviated as long as the abbreviation\n" "matches only one command, e.g. l == li == lis == list." @@ -107,7 +107,7 @@ # ?? ??? ??? ?????? ?? ????????. # # ??? ????? ???????? ?????? "?????? ? ?????????"? -#: ../roundup/admin.py:177 +#: ../roundup/admin.py:178 msgid "" "\n" "All commands (except help) require a tracker specifier. This is just\n" @@ -236,12 +236,12 @@ "\n" "??????? ?? ????????:\n" -#: ../roundup/admin.py:240 +#: ../roundup/admin.py:241 #, python-format msgid "%s:" msgstr "" -#: ../roundup/admin.py:245 +#: ../roundup/admin.py:246 msgid "" "Usage: help topic\n" " Give help about topic.\n" @@ -261,22 +261,22 @@ " all -- ??? ???????\n" " " -#: ../roundup/admin.py:268 +#: ../roundup/admin.py:269 #, python-format msgid "Sorry, no help for \"%(topic)s\"" msgstr "???????, ??????? \"%(topic)s\" ?? ??????????." -#: ../roundup/admin.py:340 -#: ../roundup/admin.py:396 +#: ../roundup/admin.py:346 +#: ../roundup/admin.py:402 msgid "Templates:" msgstr "???????:" -#: ../roundup/admin.py:343 -#: ../roundup/admin.py:407 +#: ../roundup/admin.py:349 +#: ../roundup/admin.py:413 msgid "Back ends:" msgstr "???????:" -#: ../roundup/admin.py:346 +#: ../roundup/admin.py:352 msgid "" "Usage: install [template [backend [key=val[,key=val]]]]\n" " Install a new Roundup tracker.\n" @@ -327,31 +327,31 @@ " ??.????? 
\"help initopts\".\n" " " -#: ../roundup/admin.py:369 -#: ../roundup/admin.py:466 -#: ../roundup/admin.py:527 -#: ../roundup/admin.py:606 -#: ../roundup/admin.py:656 -#: ../roundup/admin.py:714 -#: ../roundup/admin.py:735 -#: ../roundup/admin.py:763 -#: ../roundup/admin.py:834 -#: ../roundup/admin.py:901 -#: ../roundup/admin.py:972 -#: ../roundup/admin.py:1020 -#: ../roundup/admin.py:1042 -#: ../roundup/admin.py:1072 -#: ../roundup/admin.py:1171 -#: ../roundup/admin.py:1243 +#: ../roundup/admin.py:375 +#: ../roundup/admin.py:472 +#: ../roundup/admin.py:533 +#: ../roundup/admin.py:612 +#: ../roundup/admin.py:663 +#: ../roundup/admin.py:721 +#: ../roundup/admin.py:742 +#: ../roundup/admin.py:770 +#: ../roundup/admin.py:842 +#: ../roundup/admin.py:909 +#: ../roundup/admin.py:980 +#: ../roundup/admin.py:1030 +#: ../roundup/admin.py:1053 +#: ../roundup/admin.py:1084 +#: ../roundup/admin.py:1180 +#: ../roundup/admin.py:1253 msgid "Not enough arguments supplied" msgstr "???????????? ??????????" -#: ../roundup/admin.py:375 +#: ../roundup/admin.py:381 #, python-format msgid "Instance home parent directory \"%(parent)s\" does not exist" msgstr "??????? \"%(parent)s\" ?? ??????????" -#: ../roundup/admin.py:383 +#: ../roundup/admin.py:389 #, python-format msgid "" "WARNING: There appears to be a tracker in \"%(tracker_home)s\"!\n" @@ -362,20 +362,20 @@ "????????? ????????? ????????? ??? ???? ??????!\n" "??????? ???????????? ??????? Y/N: " -#: ../roundup/admin.py:398 +#: ../roundup/admin.py:404 msgid "Select template [classic]: " msgstr "???????? ?????? [classic]: " -#: ../roundup/admin.py:409 +#: ../roundup/admin.py:415 msgid "Select backend [anydbm]: " msgstr "???????? ?????? [anydbm]: " -#: ../roundup/admin.py:419 +#: ../roundup/admin.py:425 #, python-format msgid "Error in configuration settings: \"%s\"" msgstr "?????? ? ?????????? ????????????: \"%s\"" -#: ../roundup/admin.py:428 +#: ../roundup/admin.py:434 #, python-format msgid "" "\n" @@ -388,12 +388,12 @@ " ?????? ??? ????? ????????? ???????????????? ???? ???????:\n" " %(config_file)s" -#: ../roundup/admin.py:438 +#: ../roundup/admin.py:444 msgid " ... at a minimum, you must set following options:" msgstr " ... ??? ???????, ?? ?????? ?????????? ?????????:" # ??????? ?????????? ???????? ????????? -#: ../roundup/admin.py:443 +#: ../roundup/admin.py:449 #, python-format msgid "" "\n" @@ -419,7 +419,7 @@ " ????? ????? ?? ?????? ????????? ??????? \"roundup-admin initialise\".\n" "---------------------------------------------------------------------------\n" -#: ../roundup/admin.py:461 +#: ../roundup/admin.py:467 msgid "" "Usage: genconfig \n" " Generate a new tracker config file (ini style) with default values\n" @@ -433,7 +433,7 @@ # password #. password -#: ../roundup/admin.py:471 +#: ../roundup/admin.py:477 msgid "" "Usage: initialise [adminpw]\n" " Initialise a new Roundup tracker.\n" @@ -451,23 +451,23 @@ " ????????????? ??????? ???????? ???????? dbinit.init()\n" " " -#: ../roundup/admin.py:485 +#: ../roundup/admin.py:491 msgid "Admin Password: " msgstr "?????? ??????????????: " -#: ../roundup/admin.py:486 +#: ../roundup/admin.py:492 msgid " Confirm: " msgstr " ??? ???: " -#: ../roundup/admin.py:490 +#: ../roundup/admin.py:496 msgid "Instance home does not exist" msgstr "???????? ??????? ??????? ?? ??????????" -#: ../roundup/admin.py:494 +#: ../roundup/admin.py:500 msgid "Instance has not been installed" msgstr "?????? ?? ??????????" 
-#: ../roundup/admin.py:499 +#: ../roundup/admin.py:505 msgid "" "WARNING: The database is already initialised!\n" "If you re-initialise it, you will lose all the data!\n" @@ -477,7 +477,7 @@ "????????? ????????????? ????????? ??? ???? ??????!\n" "??????? ???????????? ????? Y/N: " -#: ../roundup/admin.py:520 +#: ../roundup/admin.py:526 msgid "" "Usage: get property designator[,designator]*\n" " Get the given property of one or more designator(s).\n" @@ -494,26 +494,26 @@ " ????????????? ? ?????? ??????????.\n" " " -#: ../roundup/admin.py:560 -#: ../roundup/admin.py:575 +#: ../roundup/admin.py:566 +#: ../roundup/admin.py:581 #, python-format msgid "property %s is not of type Multilink or Link so -d flag does not apply." msgstr "???? '-d' ??????????, ?????? ??? ??? ???????? %s - ?? Link ? ?? Multilink" -#: ../roundup/admin.py:583 -#: ../roundup/admin.py:983 -#: ../roundup/admin.py:1032 -#: ../roundup/admin.py:1054 +#: ../roundup/admin.py:589 +#: ../roundup/admin.py:991 +#: ../roundup/admin.py:1042 +#: ../roundup/admin.py:1065 #, python-format msgid "no such %(classname)s node \"%(nodeid)s\"" msgstr "? ?????? %(classname)s ??? ??????? \"%(nodeid)s\"" -#: ../roundup/admin.py:585 +#: ../roundup/admin.py:591 #, python-format msgid "no such %(classname)s property \"%(propname)s\"" msgstr "? ?????? %(classname)s ??? ???????? \"%(propname)s\"" -#: ../roundup/admin.py:594 +#: ../roundup/admin.py:600 msgid "" "Usage: set items property=value property=value ...\n" " Set the given properties of one or more items(s).\n" @@ -541,7 +541,7 @@ " ???????. (????????, \"1,2,3\".)\n" " " -#: ../roundup/admin.py:648 +#: ../roundup/admin.py:655 msgid "" "Usage: find classname propname=value ...\n" " Find the nodes of the given class with a given link property value.\n" @@ -559,15 +559,15 @@ " ??????? ????????? ???????, ??? ?????? ????? ???????.\n" " " -#: ../roundup/admin.py:701 -#: ../roundup/admin.py:854 -#: ../roundup/admin.py:866 -#: ../roundup/admin.py:920 +#: ../roundup/admin.py:708 +#: ../roundup/admin.py:862 +#: ../roundup/admin.py:874 +#: ../roundup/admin.py:928 #, python-format msgid "%(classname)s has no property \"%(propname)s\"" msgstr "????? %(classname)s ?? ????? ???????? \"%(propname)s\"" -#: ../roundup/admin.py:708 +#: ../roundup/admin.py:715 msgid "" "Usage: specification classname\n" " Show the properties for a classname.\n" @@ -581,17 +581,17 @@ " ?????? ?????? ????????? ?????????? ??????.\n" " " -#: ../roundup/admin.py:723 +#: ../roundup/admin.py:730 #, python-format msgid "%(key)s: %(value)s (key property)" msgstr "%(key)s: %(value)s (???????? ???????)" -#: ../roundup/admin.py:725 +#: ../roundup/admin.py:732 #, python-format msgid "%(key)s: %(value)s" msgstr "" -#: ../roundup/admin.py:728 +#: ../roundup/admin.py:735 msgid "" "Usage: display designator[,designator]*\n" " Show the property values for the given node(s).\n" @@ -607,12 +607,12 @@ " ???????? ???????????.\n" " " -#: ../roundup/admin.py:752 +#: ../roundup/admin.py:759 #, python-format msgid "%(key)s: %(value)r" msgstr "" -#: ../roundup/admin.py:755 +#: ../roundup/admin.py:762 msgid "" "Usage: create classname property=value ...\n" " Create a new entry of a given class.\n" @@ -629,31 +629,31 @@ " ????? ??????? ?????????? ??????????.\n" " " -#: ../roundup/admin.py:782 +#: ../roundup/admin.py:789 #, python-format msgid "%(propname)s (Password): " msgstr " %(propname)s (??????): " -#: ../roundup/admin.py:784 +#: ../roundup/admin.py:791 #, python-format msgid " %(propname)s (Again): " msgstr "%(propname)s (??? 
???): " -#: ../roundup/admin.py:786 +#: ../roundup/admin.py:793 msgid "Sorry, try again..." msgstr "?????? ?? ???????. ?????????? ??? ???." -#: ../roundup/admin.py:790 +#: ../roundup/admin.py:797 #, python-format msgid "%(propname)s (%(proptype)s): " msgstr "" -#: ../roundup/admin.py:808 +#: ../roundup/admin.py:815 #, python-format msgid "you must provide the \"%(propname)s\" property." msgstr "??????? \"%(propname)s\" ?????? ???? ????????." -#: ../roundup/admin.py:819 +#: ../roundup/admin.py:827 msgid "" "Usage: list classname [property]\n" " List the instances of a class.\n" @@ -682,16 +682,16 @@ " ?????? ?????? ???????? ????? ????????.\n" " " -#: ../roundup/admin.py:832 +#: ../roundup/admin.py:840 msgid "Too many arguments supplied" msgstr "?????? ??????? ????? ??????????" -#: ../roundup/admin.py:868 +#: ../roundup/admin.py:876 #, python-format msgid "%(nodeid)4s: %(value)s" msgstr "" -#: ../roundup/admin.py:872 +#: ../roundup/admin.py:880 msgid "" "Usage: table classname [property[,property]*]\n" " List the instances of a class in tabular form.\n" @@ -751,12 +751,12 @@ " ???????? ???????? ??????? \"Name\" ?? ??????? ????????.\n" " " -#: ../roundup/admin.py:916 +#: ../roundup/admin.py:924 #, python-format msgid "\"%(spec)s\" not name:width" msgstr "???????? \"%(spec)s\" ?????? ???? ?????? ??? ???:??????" -#: ../roundup/admin.py:966 +#: ../roundup/admin.py:974 msgid "" "Usage: history designator\n" " Show the history entries of a designator.\n" @@ -771,7 +771,7 @@ " ????????? ??????????.\n" " " -#: ../roundup/admin.py:987 +#: ../roundup/admin.py:995 msgid "" "Usage: commit\n" " Commit changes made to the database during an interactive session.\n" @@ -795,7 +795,7 @@ " ?????????????, ???? ??? ?????????? ??????? ?? ????????? ??????.\n" " " -#: ../roundup/admin.py:1001 +#: ../roundup/admin.py:1010 msgid "" "Usage: rollback\n" " Undo all changes that are pending commit to the database.\n" @@ -816,7 +816,7 @@ " ???? ? ?????? ????????? ??????.\n" " " -#: ../roundup/admin.py:1013 +#: ../roundup/admin.py:1023 msgid "" "Usage: retire designator[,designator]*\n" " Retire the node specified by designator.\n" @@ -834,7 +834,7 @@ " ???????????? ? ?????? ????????.\n" " " -#: ../roundup/admin.py:1036 +#: ../roundup/admin.py:1047 msgid "" "Usage: restore designator[,designator]*\n" " Restore the retired node specified by designator.\n" @@ -850,7 +850,7 @@ " " #. grab the directory to export to -#: ../roundup/admin.py:1058 +#: ../roundup/admin.py:1070 msgid "" "Usage: export [[-]class[,class]] export_dir\n" " Export the database to colon-separated-value files.\n" @@ -885,7 +885,7 @@ " exporttables.\n" " " -#: ../roundup/admin.py:1136 +#: ../roundup/admin.py:1145 msgid "" "Usage: exporttables [[-]class[,class]] export_dir\n" " Export the database to colon-separated-value files, excluding the\n" @@ -920,7 +920,7 @@ " ?????????, ??????????? ??????? export.\n" " " -#: ../roundup/admin.py:1151 +#: ../roundup/admin.py:1160 msgid "" "Usage: import import_dir\n" " Import a database from the directory containing CSV files,\n" @@ -964,7 +964,7 @@ " ?? ???????????? ???? ??? ???????).\n" " " -#: ../roundup/admin.py:1225 +#: ../roundup/admin.py:1235 msgid "" "Usage: pack period | date\n" "\n" @@ -1003,11 +1003,11 @@ "\n" " " -#: ../roundup/admin.py:1253 +#: ../roundup/admin.py:1263 msgid "Invalid format" msgstr "???????????? ??????" 
-#: ../roundup/admin.py:1263 +#: ../roundup/admin.py:1274 msgid "" "Usage: reindex [classname|designator]*\n" " Re-generate a tracker's search indexes.\n" @@ -1023,12 +1023,12 @@ " ??????. ?????? ?????????? ???????? ?????????? ?????????????.\n" " " -#: ../roundup/admin.py:1277 +#: ../roundup/admin.py:1288 #, python-format msgid "no such item \"%(designator)s\"" msgstr "?????? \"%(designator)s\" ?? ??????????" -#: ../roundup/admin.py:1287 +#: ../roundup/admin.py:1298 msgid "" "Usage: security [Role name]\n" " Display the Permissions available to one or all Roles.\n" @@ -1039,78 +1039,78 @@ " ?????.\n" " " -#: ../roundup/admin.py:1295 +#: ../roundup/admin.py:1306 #, python-format msgid "No such Role \"%(role)s\"" msgstr "???? \"%(role)s\" ?? ??????????" -#: ../roundup/admin.py:1301 +#: ../roundup/admin.py:1312 #, python-format msgid "New Web users get the Roles \"%(role)s\"" msgstr "????? ???????????? web ???????? ???? \"%(role)s\"" -#: ../roundup/admin.py:1303 +#: ../roundup/admin.py:1314 #, python-format msgid "New Web users get the Role \"%(role)s\"" msgstr "????? ???????????? web ???????? ???? \"%(role)s\"" -#: ../roundup/admin.py:1306 +#: ../roundup/admin.py:1317 #, python-format msgid "New Email users get the Roles \"%(role)s\"" msgstr "????? ???????????? email ???????? ???? \"%(role)s\"" -#: ../roundup/admin.py:1308 +#: ../roundup/admin.py:1319 #, python-format msgid "New Email users get the Role \"%(role)s\"" msgstr "????? ???????????? email ???????? ???? \"%(role)s\"" -#: ../roundup/admin.py:1311 +#: ../roundup/admin.py:1322 #, python-format msgid "Role \"%(name)s\":" msgstr "???? \"%(name)s\":" -#: ../roundup/admin.py:1316 +#: ../roundup/admin.py:1327 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\": %(properties)s only)" msgstr " %(description)s (%(name)s ??? ?????? \"%(klass)s\": ?????? ???????? %(properties)s)" -#: ../roundup/admin.py:1319 +#: ../roundup/admin.py:1330 #, python-format msgid " %(description)s (%(name)s for \"%(klass)s\" only)" msgstr " %(description)s (%(name)s ?????? ??? ?????? \"%(klass)s\")" -#: ../roundup/admin.py:1322 +#: ../roundup/admin.py:1333 #, python-format msgid " %(description)s (%(name)s)" msgstr "" -#: ../roundup/admin.py:1351 +#: ../roundup/admin.py:1362 #, python-format msgid "Unknown command \"%(command)s\" (\"help commands\" for a list)" msgstr "??????? \"%(command)s\" ??????????. (\"help commands\" ?????? ?????? ??????)" -#: ../roundup/admin.py:1357 +#: ../roundup/admin.py:1368 #, python-format msgid "Multiple commands match \"%(command)s\": %(list)s" msgstr "\"%(command)s\" ????????????? ?????????? ????????: %(list)s" -#: ../roundup/admin.py:1364 +#: ../roundup/admin.py:1375 msgid "Enter tracker home: " msgstr "???????? ??????? ???????: " -#: ../roundup/admin.py:1371 -#: ../roundup/admin.py:1377 -#: ../roundup/admin.py:1397 +#: ../roundup/admin.py:1382 +#: ../roundup/admin.py:1388 +#: ../roundup/admin.py:1408 #, python-format msgid "Error: %(message)s" msgstr "??????: %(message)s" -#: ../roundup/admin.py:1385 +#: ../roundup/admin.py:1396 #, python-format msgid "Error: Couldn't open tracker: %(message)s" msgstr "??????: ?????? ?? ???????????: %(message)s" -#: ../roundup/admin.py:1410 +#: ../roundup/admin.py:1421 #, python-format msgid "" "Roundup %s ready for input.\n" @@ -1119,48 +1119,48 @@ "Roundup %s ? ????? ???????.\n" "??????? \"help\" ??? ???????." -#: ../roundup/admin.py:1415 +#: ../roundup/admin.py:1426 msgid "Note: command history and editing not available" msgstr "??????????: ???????? ???????? ? ??????? ??????" 
-#: ../roundup/admin.py:1419 +#: ../roundup/admin.py:1430 msgid "roundup> " msgstr "" -#: ../roundup/admin.py:1421 +#: ../roundup/admin.py:1432 msgid "exit..." msgstr "????????? ? ??? ???..." -#: ../roundup/admin.py:1431 +#: ../roundup/admin.py:1442 msgid "There are unsaved changes. Commit them (y/N)? " msgstr "??, ??? ????????????? ?????????. ???????? ? ???? ?????? (y/N)? " -#: ../roundup/backends/back_anydbm.py:2000 +#: ../roundup/backends/back_anydbm.py:2004 #, python-format msgid "WARNING: invalid date tuple %r" msgstr "????????! ???????? ????: %r" -#: ../roundup/backends/rdbms_common.py:1442 +#: ../roundup/backends/rdbms_common.py:1445 msgid "create" msgstr "????????" -#: ../roundup/backends/rdbms_common.py:1608 +#: ../roundup/backends/rdbms_common.py:1611 msgid "unlink" msgstr "???????" -#: ../roundup/backends/rdbms_common.py:1612 +#: ../roundup/backends/rdbms_common.py:1615 msgid "link" msgstr "????????" -#: ../roundup/backends/rdbms_common.py:1732 +#: ../roundup/backends/rdbms_common.py:1737 msgid "set" msgstr "?????????" -#: ../roundup/backends/rdbms_common.py:1756 +#: ../roundup/backends/rdbms_common.py:1761 msgid "retired" msgstr "??????????" -#: ../roundup/backends/rdbms_common.py:1786 +#: ../roundup/backends/rdbms_common.py:1791 msgid "restored" msgstr "??????????????" @@ -1191,73 +1191,73 @@ msgid "%(classname)s %(itemid)s has been retired" msgstr "%(classname)s %(itemid)s ??????" -#: ../roundup/cgi/actions.py:174 -#: ../roundup/cgi/actions.py:202 +#: ../roundup/cgi/actions.py:169 +#: ../roundup/cgi/actions.py:197 msgid "You do not have permission to edit queries" msgstr "? ??? ??? ?????????? ?? ?????????????? ????????" -#: ../roundup/cgi/actions.py:180 -#: ../roundup/cgi/actions.py:209 +#: ../roundup/cgi/actions.py:175 +#: ../roundup/cgi/actions.py:204 msgid "You do not have permission to store queries" msgstr "? ??? ??? ?????????? ?? ?????????? ????????" -#: ../roundup/cgi/actions.py:298 +#: ../roundup/cgi/actions.py:310 #, python-format msgid "Not enough values on line %(line)s" msgstr "? ?????? %(line)s ?? ??????? ????????" -#: ../roundup/cgi/actions.py:345 +#: ../roundup/cgi/actions.py:357 msgid "Items edited OK" msgstr "??????? ???????? ???????" -#: ../roundup/cgi/actions.py:405 +#: ../roundup/cgi/actions.py:416 #, python-format msgid "%(class)s %(id)s %(properties)s edited ok" msgstr "???????? ???????? %(properties)s ??????? %(class)s %(id)s" -#: ../roundup/cgi/actions.py:408 +#: ../roundup/cgi/actions.py:419 #, python-format msgid "%(class)s %(id)s - nothing changed" msgstr "%(class)s %(id)s - ??? ?????????" -#: ../roundup/cgi/actions.py:420 +#: ../roundup/cgi/actions.py:431 #, python-format msgid "%(class)s %(id)s created" msgstr "%(class)s %(id)s ??????" -#: ../roundup/cgi/actions.py:452 +#: ../roundup/cgi/actions.py:463 #, python-format msgid "You do not have permission to edit %(class)s" msgstr "? ??? ??? ?????????? ????????????? %(class)s" -#: ../roundup/cgi/actions.py:464 +#: ../roundup/cgi/actions.py:475 #, python-format msgid "You do not have permission to create %(class)s" msgstr "? ??? ??? ?????????? ????????? %(class)s" -#: ../roundup/cgi/actions.py:488 +#: ../roundup/cgi/actions.py:499 msgid "You do not have permission to edit user roles" msgstr "? ??? ??? ?????????? ?? ????????? ????? ?????????????" -#: ../roundup/cgi/actions.py:538 +#: ../roundup/cgi/actions.py:549 #, python-format msgid "Edit Error: someone else has edited this %s (%s). View their changes in a new window." msgstr "?????? ??????????????: %s (%s) ??????? ?????? ????????????. ??????????? 
??? ????????? ? ?????? ????." -#: ../roundup/cgi/actions.py:566 +#: ../roundup/cgi/actions.py:577 #, python-format msgid "Edit Error: %s" msgstr "?????? ??????????????: %s" -#: ../roundup/cgi/actions.py:597 #: ../roundup/cgi/actions.py:608 -#: ../roundup/cgi/actions.py:779 -#: ../roundup/cgi/actions.py:798 +#: ../roundup/cgi/actions.py:619 +#: ../roundup/cgi/actions.py:790 +#: ../roundup/cgi/actions.py:809 #, python-format msgid "Error: %s" msgstr "??????: %s" -#: ../roundup/cgi/actions.py:634 +#: ../roundup/cgi/actions.py:645 msgid "" "Invalid One Time Key!\n" "(a Mozilla bug may cause this message to show up erroneously, please check your email)" @@ -1265,50 +1265,50 @@ "???? ????????????? ??????????!\n" "(??-?? ?????? ? ???????? Mozilla ??? ????????? ????? ???? ????????. ????????? ???? ?????, ??????????.)" -#: ../roundup/cgi/actions.py:676 +#: ../roundup/cgi/actions.py:687 #, python-format msgid "Password reset and email sent to %s" msgstr "?????? ???????. ?? ?????? %s ?????????? ??????." -#: ../roundup/cgi/actions.py:685 +#: ../roundup/cgi/actions.py:696 msgid "Unknown username" msgstr "??????????? ??? ????????????" -#: ../roundup/cgi/actions.py:693 +#: ../roundup/cgi/actions.py:704 msgid "Unknown email address" msgstr "??????????? ????? email" -#: ../roundup/cgi/actions.py:698 +#: ../roundup/cgi/actions.py:709 msgid "You need to specify a username or address" msgstr "?? ?????? ??????? ??? ???????????? ??? ????? email" -#: ../roundup/cgi/actions.py:723 +#: ../roundup/cgi/actions.py:734 #, python-format msgid "Email sent to %s" msgstr "?????? ?????????? ?? %s" -#: ../roundup/cgi/actions.py:742 +#: ../roundup/cgi/actions.py:753 msgid "You are now registered, welcome!" msgstr "?? ????????????????. ????? ??????????!" -#: ../roundup/cgi/actions.py:787 +#: ../roundup/cgi/actions.py:798 msgid "It is not permitted to supply roles at registration." msgstr "?????? ????????? ???? ??? ???????????" -#: ../roundup/cgi/actions.py:879 +#: ../roundup/cgi/actions.py:890 msgid "You are logged out" msgstr "????? ?????? ????????" -#: ../roundup/cgi/actions.py:896 +#: ../roundup/cgi/actions.py:907 msgid "Username required" msgstr "?? ??????? ??? ????????????" -#: ../roundup/cgi/actions.py:931 -#: ../roundup/cgi/actions.py:935 +#: ../roundup/cgi/actions.py:942 +#: ../roundup/cgi/actions.py:946 msgid "Invalid login" msgstr "???????????? ?????? ??? ??? ????????????." -#: ../roundup/cgi/actions.py:941 +#: ../roundup/cgi/actions.py:952 msgid "You do not have permission to login" msgstr "? ??? ??? ?????????? ?? ?????? ? ????????" @@ -1403,29 +1403,29 @@ "?????????????? ??????? ???????? ????????? ?? ??????.

          \n" "" -#: ../roundup/cgi/client.py:326 +#: ../roundup/cgi/client.py:339 msgid "Form Error: " msgstr "?????? ?????: " -#: ../roundup/cgi/client.py:381 +#: ../roundup/cgi/client.py:394 #, python-format msgid "Unrecognized charset: %r" msgstr "????????? %r ?? ??????????" -#: ../roundup/cgi/client.py:509 +#: ../roundup/cgi/client.py:522 msgid "Anonymous users are not allowed to use the web interface" msgstr "????????? ????????????? ?? ????????? ???????????? ???-???????????." -#: ../roundup/cgi/client.py:664 +#: ../roundup/cgi/client.py:677 msgid "You are not allowed to view this file." msgstr "? ??? ??? ?????????? ?? ???????? ????? ?????." -#: ../roundup/cgi/client.py:758 +#: ../roundup/cgi/client.py:770 #, python-format msgid "%(starttag)sTime elapsed: %(seconds)fs%(endtag)s\n" msgstr "%(starttag)s??????????? ?????: %(seconds)fs%(endtag)s\n" -#: ../roundup/cgi/client.py:762 +#: ../roundup/cgi/client.py:774 #, python-format msgid "%(starttag)sCache hits: %(cache_hits)d, misses %(cache_misses)d. Loading items: %(get_items)f secs. Filtering: %(filtering)f secs.%(endtag)s\n" msgstr "%(starttag)s???????????? ????????: %(cache_hits)d, ???????????: %(cache_misses)d. ???????? ????????: %(get_items)f ???. ??????????: %(filtering)f ???.%(endtag)s\n" @@ -1479,158 +1479,159 @@ msgid "File is empty" msgstr "???? ????" -#: ../roundup/cgi/templating.py:73 +#: ../roundup/cgi/templating.py:77 #, python-format msgid "You are not allowed to %(action)s items of class %(class)s" msgstr "? ??? ??? ?????????? %(action)s ??? ?????? %(class)s" -#: ../roundup/cgi/templating.py:645 +#: ../roundup/cgi/templating.py:657 msgid "(list)" msgstr "(??????)" -#: ../roundup/cgi/templating.py:714 +#: ../roundup/cgi/templating.py:726 msgid "Submit New Entry" msgstr "????????" # ../roundup/cgi/templating.py:673 :792 :1166 :1187 :1231 :1253 :1287 :1326 # :1377 :1394 :1470 :1490 :1503 :1520 :1530 :1580 :1755 -#: ../roundup/cgi/templating.py:728 -#: ../roundup/cgi/templating.py:862 -#: ../roundup/cgi/templating.py:1269 -#: ../roundup/cgi/templating.py:1298 -#: ../roundup/cgi/templating.py:1318 -#: ../roundup/cgi/templating.py:1364 -#: ../roundup/cgi/templating.py:1387 -#: ../roundup/cgi/templating.py:1423 -#: ../roundup/cgi/templating.py:1460 -#: ../roundup/cgi/templating.py:1513 -#: ../roundup/cgi/templating.py:1530 -#: ../roundup/cgi/templating.py:1614 -#: ../roundup/cgi/templating.py:1634 -#: ../roundup/cgi/templating.py:1652 -#: ../roundup/cgi/templating.py:1684 -#: ../roundup/cgi/templating.py:1694 -#: ../roundup/cgi/templating.py:1746 -#: ../roundup/cgi/templating.py:1935 +#: ../roundup/cgi/templating.py:740 +#: ../roundup/cgi/templating.py:873 +#: ../roundup/cgi/templating.py:1294 +#: ../roundup/cgi/templating.py:1323 +#: ../roundup/cgi/templating.py:1343 +#: ../roundup/cgi/templating.py:1356 +#: ../roundup/cgi/templating.py:1407 +#: ../roundup/cgi/templating.py:1430 +#: ../roundup/cgi/templating.py:1466 +#: ../roundup/cgi/templating.py:1503 +#: ../roundup/cgi/templating.py:1556 +#: ../roundup/cgi/templating.py:1573 +#: ../roundup/cgi/templating.py:1657 +#: ../roundup/cgi/templating.py:1677 +#: ../roundup/cgi/templating.py:1695 +#: ../roundup/cgi/templating.py:1727 +#: ../roundup/cgi/templating.py:1737 +#: ../roundup/cgi/templating.py:1789 +#: ../roundup/cgi/templating.py:1978 msgid "[hidden]" msgstr "[??????????]" -#: ../roundup/cgi/templating.py:729 +#: ../roundup/cgi/templating.py:741 msgid "New node - no history" msgstr "????? ???????? - ??? ???????" 
-#: ../roundup/cgi/templating.py:844 +#: ../roundup/cgi/templating.py:855 msgid "Submit Changes" msgstr "????????" -#: ../roundup/cgi/templating.py:926 +#: ../roundup/cgi/templating.py:937 msgid "The indicated property no longer exists" msgstr "????????? ??????? ??? ?? ??????????." -#: ../roundup/cgi/templating.py:927 +#: ../roundup/cgi/templating.py:938 #, python-format msgid "%s: %s\n" msgstr "" -#: ../roundup/cgi/templating.py:940 +#: ../roundup/cgi/templating.py:951 #, python-format msgid "The linked class %(classname)s no longer exists" msgstr "????????? ????? %(classname)s ??? ?? ??????????" # :823 -#: ../roundup/cgi/templating.py:973 -#: ../roundup/cgi/templating.py:997 +#: ../roundup/cgi/templating.py:984 +#: ../roundup/cgi/templating.py:1008 msgid "The linked node no longer exists" msgstr "????????? ?????? ??? ?? ??????????" -#: ../roundup/cgi/templating.py:1050 +#: ../roundup/cgi/templating.py:1061 #, python-format msgid "%s: (no value)" msgstr "%s: (??? ????????)" -#: ../roundup/cgi/templating.py:1062 +#: ../roundup/cgi/templating.py:1073 msgid "This event is not handled by the history display!" msgstr "??????????? ??? ???????!" -#: ../roundup/cgi/templating.py:1074 +#: ../roundup/cgi/templating.py:1085 msgid "Note:" msgstr "??????????:" -#: ../roundup/cgi/templating.py:1083 +#: ../roundup/cgi/templating.py:1094 msgid "History" msgstr "???????" -#: ../roundup/cgi/templating.py:1085 +#: ../roundup/cgi/templating.py:1096 msgid "Date" msgstr "????" -#: ../roundup/cgi/templating.py:1086 +#: ../roundup/cgi/templating.py:1097 msgid "User" msgstr "????????????" -#: ../roundup/cgi/templating.py:1087 +#: ../roundup/cgi/templating.py:1098 msgid "Action" msgstr "????????" -#: ../roundup/cgi/templating.py:1088 +#: ../roundup/cgi/templating.py:1099 msgid "Args" msgstr "?????????" -#: ../roundup/cgi/templating.py:1130 +#: ../roundup/cgi/templating.py:1141 #, python-format msgid "Copy of %(class)s %(id)s" msgstr "?????: %(class)s %(id)s" -#: ../roundup/cgi/templating.py:1391 +#: ../roundup/cgi/templating.py:1434 msgid "*encrypted*" msgstr "*??????????*" -#: ../roundup/cgi/templating.py:1464 -#: ../roundup/cgi/templating.py:1485 -#: ../roundup/cgi/templating.py:1491 -#: ../roundup/cgi/templating.py:1039 +#: ../roundup/cgi/templating.py:1507 +#: ../roundup/cgi/templating.py:1528 +#: ../roundup/cgi/templating.py:1534 +#: ../roundup/cgi/templating.py:1050 msgid "No" msgstr "???" -#: ../roundup/cgi/templating.py:1464 -#: ../roundup/cgi/templating.py:1483 -#: ../roundup/cgi/templating.py:1488 -#: ../roundup/cgi/templating.py:1039 +#: ../roundup/cgi/templating.py:1507 +#: ../roundup/cgi/templating.py:1526 +#: ../roundup/cgi/templating.py:1531 +#: ../roundup/cgi/templating.py:1050 msgid "Yes" msgstr "??" -#: ../roundup/cgi/templating.py:1577 +#: ../roundup/cgi/templating.py:1620 msgid "default value for DateHTMLProperty must be either DateHTMLProperty or string date representation." msgstr "???????? ?? ????????? ??? DateHTMLProperty ?????? ???? ???????? DateHTMLProperty ??? ????????? ?????????????? ????." -#: ../roundup/cgi/templating.py:1737 +#: ../roundup/cgi/templating.py:1780 #, python-format msgid "Attempt to look up %(attr)s on a missing value" msgstr "??????? ???????? ??????? \"%(attr)s\" ??????????????? ???????" -#: ../roundup/cgi/templating.py:1810 +#: ../roundup/cgi/templating.py:1853 #, python-format msgid "" msgstr "" -#: ../roundup/date.py:301 +#: ../roundup/date.py:300 msgid "Not a date spec: \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or \"yyyy-mm-dd.HH:MM:SS.SSS\"" msgstr "???? 
?????? ???? ? ??????? \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" ??? \"yyyy-mm-dd.HH:MM:SS.SSS\"" -#: ../roundup/date.py:363 +#: ../roundup/date.py:359 #, python-format msgid "%r not a date / time spec \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" or \"yyyy-mm-dd.HH:MM:SS.SSS\"" msgstr "???????? ???????? ????/???????: %r. ???? ?????? ???? ? ??????? \"yyyy-mm-dd\", \"mm-dd\", \"HH:MM\", \"HH:MM:SS\" ??? \"yyyy-mm-dd.HH:MM:SS.SSS\"" -#: ../roundup/date.py:662 +#: ../roundup/date.py:666 msgid "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS] [date spec]" msgstr "???????? ?????? ???? ? ??????? [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS] [????]" -#: ../roundup/date.py:681 +#: ../roundup/date.py:685 msgid "Not an interval spec: [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS]" msgstr "???????? ?????? ???? ? ??????? [+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS]" -#: ../roundup/date.py:818 +#: ../roundup/date.py:822 #, python-format msgid "%(number)s year" msgid_plural "%(number)s years" @@ -1638,7 +1639,7 @@ msgstr[1] "%(number)s ????" msgstr[2] "%(number)s ???" -#: ../roundup/date.py:822 +#: ../roundup/date.py:826 #, python-format msgid "%(number)s month" msgid_plural "%(number)s months" @@ -1646,7 +1647,7 @@ msgstr[1] "%(number)s ??????" msgstr[2] "%(number)s ???????" -#: ../roundup/date.py:826 +#: ../roundup/date.py:830 #, python-format msgid "%(number)s week" msgid_plural "%(number)s weeks" @@ -1654,7 +1655,7 @@ msgstr[1] "%(number)s ??????" msgstr[2] "%(number)s ??????" -#: ../roundup/date.py:830 +#: ../roundup/date.py:834 #, python-format msgid "%(number)s day" msgid_plural "%(number)s days" @@ -1662,15 +1663,15 @@ msgstr[1] "%(number)s ???" msgstr[2] "%(number)s ????" -#: ../roundup/date.py:834 +#: ../roundup/date.py:838 msgid "tomorrow" msgstr "??????" -#: ../roundup/date.py:836 +#: ../roundup/date.py:840 msgid "yesterday" msgstr "?????" -#: ../roundup/date.py:839 +#: ../roundup/date.py:843 #, python-format msgid "%(number)s hour" msgid_plural "%(number)s hours" @@ -1678,16 +1679,16 @@ msgstr[1] "%(number)s ????" msgstr[2] "%(number)s ?????" -#: ../roundup/date.py:843 +#: ../roundup/date.py:847 msgid "an hour" msgstr "???" -#: ../roundup/date.py:845 +#: ../roundup/date.py:849 msgid "1 1/2 hours" msgstr "??????? ????" # third form ain't used -#: ../roundup/date.py:847 +#: ../roundup/date.py:851 #, python-format msgid "1 %(number)s/4 hours" msgid_plural "1 %(number)s/4 hours" @@ -1695,21 +1696,21 @@ msgstr[1] "??? ? %(number)s ????????" msgstr[2] "??? ? %(number)s ?????????" -#: ../roundup/date.py:851 +#: ../roundup/date.py:855 msgid "in a moment" msgstr "??????" -#: ../roundup/date.py:853 +#: ../roundup/date.py:857 msgid "just now" msgstr "?????? ???" # ???????????? ? ?????????? "????? ??????" ??? "?????? ?????" -#: ../roundup/date.py:856 +#: ../roundup/date.py:860 msgid "1 minute" msgstr "??????" # ???????????? ? ?????????? "????? 2 ??????" ??? "2 ?????? ?????" -#: ../roundup/date.py:859 +#: ../roundup/date.py:863 #, python-format msgid "%(number)s minute" msgid_plural "%(number)s minutes" @@ -1717,11 +1718,11 @@ msgstr[1] "%(number)s ??????" msgstr[2] "%(number)s ?????" -#: ../roundup/date.py:862 +#: ../roundup/date.py:866 msgid "1/2 an hour" msgstr "???????" -#: ../roundup/date.py:864 +#: ../roundup/date.py:868 #, python-format msgid "%(number)s/4 hour" msgid_plural "%(number)s/4 hours" @@ -1729,12 +1730,12 @@ msgstr[1] "%(number)s ???????? ????" msgstr[2] "%(number)s ????????? ????" 
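The date.py messages above enumerate the accepted date and interval spec formats ("yyyy-mm-dd.HH:MM:SS", "[+-] [#y] [#m] [#w] [#d] [[[H]H:MM]:SS]", and so on). A small sketch of handing those formats to roundup.date; the class names match this codebase, while the sample values are arbitrary.

    # Sketch: parsing the spec formats listed in the date.py messages above.
    from roundup.date import Date, Interval

    d = Date('2008-03-09.10:30:00')   # the "yyyy-mm-dd.HH:MM:SS" form
    i = Interval('- 2d')              # "[+-] ... [#d] ..." form: two days ago
    print d + i                       # Date plus Interval gives another Date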
-#: ../roundup/date.py:868 +#: ../roundup/date.py:872 #, python-format msgid "%s ago" msgstr "%s ?????" -#: ../roundup/date.py:870 +#: ../roundup/date.py:874 #, python-format msgid "in %s" msgstr "????? %s" @@ -1748,7 +1749,7 @@ "????????! ??????? '%s'\n" "\t???????? ?????? ??????? ??????? - ????????" -#: ../roundup/mailgw.py:583 +#: ../roundup/mailgw.py:584 msgid "" "\n" "Emails to Roundup trackers must include a Subject: line!\n" @@ -1756,7 +1757,7 @@ "\n" "? ??????? ??? ??????? Roundup ?????? ???? ??????? ???? ????????? (Subject).\n" -#: ../roundup/mailgw.py:673 +#: ../roundup/mailgw.py:708 #, python-format msgid "" "\n" @@ -1785,12 +1786,12 @@ " 1234, ??????? ??? ?????????? ? ???????.\n" "???? ?????? ??????: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:704 +#: ../roundup/mailgw.py:746 #, python-format msgid "" "\n" -"The class name you identified in the subject line (\"%(classname)s\") does not exist in the\n" -"database.\n" +"The class name you identified in the subject line (\"%(classname)s\") does\n" +"not exist in the database.\n" "\n" "Valid class names are: %(validname)s\n" "Subject was: \"%(subject)s\"\n" @@ -1802,12 +1803,42 @@ "????? ???????????? ???????: %(validname)s\n" "???? ?????? ??????: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:739 +#: ../roundup/mailgw.py:754 +#, python-format +msgid "" +"\n" +"You did not identify a class name in the subject line and there is no\n" +"default set for this tracker. The subject must contain a class name or\n" +"designator to indicate the 'topic' of the message. For example:\n" +" Subject: [issue] This is a new issue\n" +" - this will create a new issue in the tracker with the title 'This is\n" +" a new issue'.\n" +" Subject: [issue1234] This is a followup to issue 1234\n" +" - this will append the message's contents to the existing issue 1234\n" +" in the tracker.\n" +"\n" +"Subject was: '%(subject)s'\n" +msgstr "" +"\n" +"?? ?? ??????? ? ???? ?????? ????? ??????, ? ???????? ?? ?????????\n" +"?? ??????????? ??? ????? ???????. ? ???? \"Subject:\" ? ??????????\n" +"??????? ??????? ???? ?????? ????? ??? ????????? ???????, ? ????????\n" +"????????? ??? ?????????. ????????:\n" +" Subject: [issue] ??? ????? ??????\n" +" - ????? ?????? ??????? ? ??????? ????? ?????? (?????? ?????? issue)\n" +" ? ?????????? \"??? ????? ??????\".\n" +" Subject: [issue1234] ??? ????????? ? ?????? 1234\n" +" - ?????????? ????? ?????? ????? ????????? ? ?????? ????????? ??????\n" +" 1234, ??????? ??? ?????????? ? ???????.\n" +"\n" +"???? ?????? ??????: \"%(subject)s\"\n" + +#: ../roundup/mailgw.py:795 #, python-format msgid "" "\n" "I cannot match your message to a node in the database - you need to either\n" -"supply a full designator (with number, eg \"[issue123]\" or keep the\n" +"supply a full designator (with number, eg \"[issue123]\") or keep the\n" "previous subject title intact so I can match that.\n" "\n" "Subject was: \"%(subject)s\"\n" @@ -1821,7 +1852,7 @@ "\n" "???? ?????? ??????: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:772 +#: ../roundup/mailgw.py:828 #, python-format msgid "" "\n" @@ -1835,7 +1866,7 @@ "\n" "???? ?????? ??????: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:800 +#: ../roundup/mailgw.py:856 #, python-format msgid "" "\n" @@ -1849,7 +1880,7 @@ "? ??????????? ????????? ??????:\n" " %(current_class)s\n" -#: ../roundup/mailgw.py:823 +#: ../roundup/mailgw.py:879 #, python-format msgid "" "\n" @@ -1863,34 +1894,34 @@ "? ??????????? ????????? 
?????????:\n" " %(errors)s\n" -#: ../roundup/mailgw.py:853 +#: ../roundup/mailgw.py:919 #, python-format msgid "" "\n" -"You are not a registered user.\n" +"You are not a registered user.%(registration_info)s\n" "\n" "Unknown address: %(from_address)s\n" msgstr "" "\n" -"?????? ???????? ?????? ?????????????????? ?????????????.\n" +"?????? ???????? ?????? ?????????????????? ?????????????.%(registration_info)s\n" "\n" "??????????? ?????: %(from_address)s\n" -#: ../roundup/mailgw.py:861 +#: ../roundup/mailgw.py:927 msgid "You are not permitted to access this tracker." msgstr "? ??? ??? ?????????? ?? ?????? ? ????? ???????." -#: ../roundup/mailgw.py:868 +#: ../roundup/mailgw.py:934 #, python-format msgid "You are not permitted to edit %(classname)s." msgstr "? ??? ??? ?????????? ????????????? %(classname)s" -#: ../roundup/mailgw.py:872 +#: ../roundup/mailgw.py:938 #, python-format msgid "You are not permitted to create %(classname)s." msgstr "? ??? ??? ?????????? ????????? ??????? %(classname)s" -#: ../roundup/mailgw.py:919 +#: ../roundup/mailgw.py:985 #, python-format msgid "" "\n" @@ -1905,7 +1936,7 @@ "\n" "???? ??????: \"%(subject)s\"\n" -#: ../roundup/mailgw.py:947 +#: ../roundup/mailgw.py:1013 msgid "" "\n" "Roundup requires the submission to be plain text. The message parser could\n" @@ -1915,20 +1946,20 @@ "????????? ??? Roundup ?????? ???? ? ????????? ???????.\n" "? ????? ????????? ?? ??????? ????? ??????? text/plain.\n" -#: ../roundup/mailgw.py:969 +#: ../roundup/mailgw.py:1030 msgid "You are not permitted to create files." msgstr "? ??? ??? ?????????? ?? ???????? ??????." -#: ../roundup/mailgw.py:983 +#: ../roundup/mailgw.py:1044 #, python-format msgid "You are not permitted to add files to %(classname)s." msgstr "? ??? ??? ?????????? ????????? ????? ??? ?????? %(classname)s." -#: ../roundup/mailgw.py:1001 +#: ../roundup/mailgw.py:1062 msgid "You are not permitted to create messages." msgstr "? ??? ??? ?????????? ?? ???????? ?????????" -#: ../roundup/mailgw.py:1009 +#: ../roundup/mailgw.py:1070 #, python-format msgid "" "\n" @@ -1939,17 +1970,17 @@ "????????? ????????? ??????????.\n" "%(error)s\n" -#: ../roundup/mailgw.py:1017 +#: ../roundup/mailgw.py:1078 #, python-format msgid "You are not permitted to add messages to %(classname)s." msgstr "? ??? ??? ?????????? ????????? ????????? ??? ?????? %(classname)s." -#: ../roundup/mailgw.py:1044 +#: ../roundup/mailgw.py:1105 #, python-format msgid "You are not permitted to edit property %(prop)s of class %(classname)s." msgstr "? ??? ??? ?????????? ???????? ??????? %(prop)s ?????? %(classname)s" -#: ../roundup/mailgw.py:1052 +#: ../roundup/mailgw.py:1113 #, python-format msgid "" "\n" @@ -1960,79 +1991,85 @@ "??? ????????? ?????? ????????? ????????? ??????:\n" " %(message)s\n" -#: ../roundup/mailgw.py:1074 +#: ../roundup/mailgw.py:1135 msgid "not of form [arg=value,value,...;arg=value,value,...]" msgstr "????????? ?????? ???? ? ??????? [???=????????,????????,...;???=????????,????????,...]" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "files" msgstr "?????" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "messages" msgstr "?????????" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "nosy" msgstr "?????????" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "superseder" msgstr "?????????" -#: ../roundup/roundupdb.py:146 +#: ../roundup/roundupdb.py:147 msgid "title" msgstr "????????" 
-#: ../roundup/roundupdb.py:147 +#: ../roundup/roundupdb.py:148 msgid "assignedto" msgstr "???????????" -#: ../roundup/roundupdb.py:147 +#: ../roundup/roundupdb.py:148 +msgid "keyword" +msgstr "???????? ?????" + +#: ../roundup/roundupdb.py:148 msgid "priority" msgstr "?????????" -#: ../roundup/roundupdb.py:147 +#: ../roundup/roundupdb.py:148 msgid "status" msgstr "??????" -#: ../roundup/roundupdb.py:147 -msgid "topic" -msgstr "????" - -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "activity" msgstr "????????" #. following properties are common for all hyperdb classes #. they are listed here to keep things in one place -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "actor" msgstr "????????" -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "creation" msgstr "???? ????????" -#: ../roundup/roundupdb.py:150 +#: ../roundup/roundupdb.py:151 msgid "creator" msgstr "?????" -#: ../roundup/roundupdb.py:308 +#: ../roundup/roundupdb.py:309 #, python-format msgid "New submission from %(authname)s%(authaddr)s:" msgstr "????? ??????????? ?? %(authname)s%(authaddr)s:" -#: ../roundup/roundupdb.py:311 +#: ../roundup/roundupdb.py:312 #, python-format msgid "%(authname)s%(authaddr)s added the comment:" msgstr "%(authname)s%(authaddr)s ??????? ?????????:" -#: ../roundup/roundupdb.py:314 -msgid "System message:" -msgstr "????????? ???????:" +#: ../roundup/roundupdb.py:315 +#, python-format +msgid "Change by %(authname)s%(authaddr)s:" +msgstr "????????? %(authname)s%(authaddr)s:" + +#: ../roundup/roundupdb.py:342 +#, python-format +msgid "File '%(filename)s' not attached - you can download it from %(link)s." +msgstr "???? '%(filename)s' ?? ?????? - ?? ?????? ??????? ??? ?? ?????? %(link)s." -#: ../roundup/roundupdb.py:597 +#: ../roundup/roundupdb.py:615 #, python-format msgid "" "\n" @@ -2225,7 +2262,11 @@ msgid "Error: The source must be either \"mailbox\", \"pop\", \"apop\", \"imap\" or \"imaps\"" msgstr "??????: ??? ????????? ????? ?????? ???? \"mailbox\", \"pop\", \"apop\", \"imap\" ??? \"imaps\"" -#: ../roundup/scripts/roundup_server.py:157 +#: ../roundup/scripts/roundup_server.py:76 +msgid "WARNING: generating temporary SSL certificate" +msgstr "????????: ????????? ????????? ?????????? ??? SSL" + +#: ../roundup/scripts/roundup_server.py:253 msgid "" "Roundup trackers index\n" "

          Roundup trackers index

            \n" @@ -2233,52 +2274,52 @@ "?????? ???????? Roundup\n" "

            ?????? ???????? Roundup

              \n" -#: ../roundup/scripts/roundup_server.py:293 +#: ../roundup/scripts/roundup_server.py:389 #, python-format msgid "Error: %s: %s" msgstr "??????: %s: %s" -#: ../roundup/scripts/roundup_server.py:303 +#: ../roundup/scripts/roundup_server.py:399 msgid "WARNING: ignoring \"-g\" argument, not root" msgstr "????????: ???????? \"-g\" ?? ????????????, ?? ???????? ?????? ??? ???????????? root" -#: ../roundup/scripts/roundup_server.py:309 +#: ../roundup/scripts/roundup_server.py:405 msgid "Can't change groups - no grp module" msgstr "??????? ?????? ?????????? - ????? ?????? grp" -#: ../roundup/scripts/roundup_server.py:318 +#: ../roundup/scripts/roundup_server.py:414 #, python-format msgid "Group %(group)s doesn't exist" msgstr "?????? %(group)s ?? ??????????" -#: ../roundup/scripts/roundup_server.py:329 +#: ../roundup/scripts/roundup_server.py:425 msgid "Can't run as root!" msgstr "?????? ??????? ? ???????????? ???????????? root ????????!" -#: ../roundup/scripts/roundup_server.py:332 +#: ../roundup/scripts/roundup_server.py:428 msgid "WARNING: ignoring \"-u\" argument, not root" msgstr "????????: ???????? \"-u\" ?? ????????????, ?? ???????? ?????? ??? ???????????? root" -#: ../roundup/scripts/roundup_server.py:338 +#: ../roundup/scripts/roundup_server.py:434 msgid "Can't change users - no pwd module" msgstr "??????? ???????????? ?????????? - ????? ?????? pwd" -#: ../roundup/scripts/roundup_server.py:347 +#: ../roundup/scripts/roundup_server.py:443 #, python-format msgid "User %(user)s doesn't exist" msgstr "???????????? %(user)s ?? ??????????" -#: ../roundup/scripts/roundup_server.py:481 +#: ../roundup/scripts/roundup_server.py:592 #, python-format msgid "Multiprocess mode \"%s\" is not available, switching to single-process" msgstr "????? \"%s\" ??????????, ????????????? ? ???????????? ?????" -#: ../roundup/scripts/roundup_server.py:504 +#: ../roundup/scripts/roundup_server.py:620 #, python-format msgid "Unable to bind to port %s, port already in use." msgstr "?????????? ?????????? ?????? ?? ????? %s, ???? ??? ?????." -#: ../roundup/scripts/roundup_server.py:572 +#: ../roundup/scripts/roundup_server.py:688 msgid "" " -c Windows Service options.\n" " If you want to run the server as a Windows Service, you\n" @@ -2295,7 +2336,7 @@ " ???? ?????????. ??????? 'roundup-server -c help'\n" " ?????? ??????? ? ????????? ?????? ??????? Windows." -#: ../roundup/scripts/roundup_server.py:579 +#: ../roundup/scripts/roundup_server.py:695 msgid "" " -u runs the Roundup web server as this UID\n" " -g runs the Roundup web server as this GID\n" @@ -2309,7 +2350,7 @@ " ? ????????? ?????? ? ??????? ??????. ???? ??????? \"-d\",\n" " ???? ????????? *???????????* ?????? ???? ????? ?????? \"-l\"" -#: ../roundup/scripts/roundup_server.py:586 +#: ../roundup/scripts/roundup_server.py:702 #, python-format msgid "" "%(message)sUsage: roundup-server [options] [name=tracker home]*\n" @@ -2323,6 +2364,9 @@ " -p set the port to listen on (default: %(port)s)\n" " -l log to the file indicated by fname instead of stderr/stdout\n" " -N log client machine names instead of IP addresses (much slower)\n" +" -i set tracker index template\n" +" -s enable SSL\n" +" -e PEM file containing SSL key and certificate\n" " -t multiprocess mode (default: %(mp_def)s).\n" " Allowed values: %(mp_types)s.\n" "%(os_part)s\n" @@ -2374,6 +2418,9 @@ " -l ????? ???????? ? ????????? ????? (?????? stderr/stdout)\n" " -N ??????????????? ????? ????? ???????? ?????? IP-???????\n" " (?????? ????????? ??????).\n" +" -i ??????? ?????? ??? ?????? 
????????\n" +" -s ???????? SSL\n" +" -e PEM-????, ?????????? ???? ? ?????????? ??? SSL\n" " -t ????? ??????????????? (?? ????????? - %(mp_def)s).\n" " ????????? ??????: %(mp_types)s.\n" "%(os_part)s\n" @@ -2416,20 +2463,20 @@ " ?? ????? ?????????????? ? URL (???????, ??????? ????? ? ????.),\n" " ?????? ??? ????? ????? ??????? ? ????? ????????? ???????? ???? IE.\n" -#: ../roundup/scripts/roundup_server.py:741 +#: ../roundup/scripts/roundup_server.py:860 msgid "Instances must be name=home" msgstr "?????? ???????? ?????? ???? ? ??????? ???=???????" -#: ../roundup/scripts/roundup_server.py:755 +#: ../roundup/scripts/roundup_server.py:874 #, python-format msgid "Configuration saved to %s" msgstr "???????????? ???????? ? %s" -#: ../roundup/scripts/roundup_server.py:773 +#: ../roundup/scripts/roundup_server.py:892 msgid "Sorry, you can't run the server as a daemon on this Operating System" msgstr "????????, ? ???? ???????????? ??????? ?????? ? ??????? ?????? ??????????" -#: ../roundup/scripts/roundup_server.py:788 +#: ../roundup/scripts/roundup_server.py:907 #, python-format msgid "Roundup server started on %(HOST)s:%(PORT)s" msgstr "?????? Roundup ????? ? ?????? ?? ?????? %(HOST)s:%(PORT)s" @@ -2551,6 +2598,7 @@ #: ../templates/minimal/html/_generic.index.html:19 #: ../templates/minimal/html/_generic.item.html:17 #: ../templates/minimal/html/user.index.html:13 +#: ../templates/minimal/html/user.item.html:39 #: ../templates/minimal/html/user.register.html:17 msgid "Please login with your username and password." msgstr "??????? ??? ???????????? ? ?????? ??? ????? ? ???????." @@ -2645,8 +2693,9 @@ msgstr "????????" #: ../templates/classic/html/issue.index.html:32 -msgid "Topic" -msgstr "????" +#: ../templates/classic/html/keyword.item.html:37 +msgid "Keyword" +msgstr "???????? ?????" #: ../templates/classic/html/issue.index.html:33 #: ../templates/classic/html/issue.item.html:44 @@ -2733,8 +2782,10 @@ msgstr "???????????" #: ../templates/classic/html/issue.item.html:78 -msgid "Topics" -msgstr "????" +#: ../templates/classic/html/page.html:103 +#: ../templates/minimal/html/page.html:102 +msgid "Keywords" +msgstr "???????? ?????" #: ../templates/classic/html/issue.item.html:86 msgid "Change Note" @@ -2749,9 +2800,9 @@ msgstr "???????????" #: ../templates/classic/html/issue.item.html:114 -#: ../templates/classic/html/user.item.html:152 +#: ../templates/classic/html/user.item.html:153 #: ../templates/classic/html/user.register.html:69 -#: ../templates/minimal/html/user.item.html:147 +#: ../templates/minimal/html/user.item.html:153 msgid "
              Note:  highlighted  fields are required.
              " msgstr "
              ??????????: ?????????? ???? ?????? ???? ?????????.
              " @@ -2843,91 +2894,92 @@ msgstr "? ?????????:" #: ../templates/classic/html/issue.search.html:56 -msgid "Topic:" -msgstr "????:" +msgid "Keyword:" +msgstr "???????? ?????:" + +#: ../templates/classic/html/issue.search.html:58 +#: ../templates/classic/html/issue.search.html:123 +#: ../templates/classic/html/issue.search.html:139 +msgid "not selected" +msgstr "?? ??????????" -#: ../templates/classic/html/issue.search.html:64 +#: ../templates/classic/html/issue.search.html:67 msgid "ID:" msgstr "" -#: ../templates/classic/html/issue.search.html:72 +#: ../templates/classic/html/issue.search.html:75 msgid "Creation Date:" msgstr "???? ????????:" -#: ../templates/classic/html/issue.search.html:83 +#: ../templates/classic/html/issue.search.html:86 msgid "Creator:" msgstr "?????:" -#: ../templates/classic/html/issue.search.html:85 +#: ../templates/classic/html/issue.search.html:88 msgid "created by me" msgstr "??????? ????" -#: ../templates/classic/html/issue.search.html:94 +#: ../templates/classic/html/issue.search.html:97 msgid "Activity:" msgstr "????????:" -#: ../templates/classic/html/issue.search.html:105 +#: ../templates/classic/html/issue.search.html:108 msgid "Actor:" msgstr "????????:" -#: ../templates/classic/html/issue.search.html:107 +#: ../templates/classic/html/issue.search.html:110 msgid "done by me" msgstr "????????? ????" -#: ../templates/classic/html/issue.search.html:118 +#: ../templates/classic/html/issue.search.html:121 msgid "Priority:" msgstr "?????????:" -#: ../templates/classic/html/issue.search.html:120 -#: ../templates/classic/html/issue.search.html:136 -msgid "not selected" -msgstr "?? ??????????" - -#: ../templates/classic/html/issue.search.html:131 +#: ../templates/classic/html/issue.search.html:134 msgid "Status:" msgstr "??????:" -#: ../templates/classic/html/issue.search.html:134 +#: ../templates/classic/html/issue.search.html:137 msgid "not resolved" msgstr "?? ??????" -#: ../templates/classic/html/issue.search.html:149 +#: ../templates/classic/html/issue.search.html:152 msgid "Assigned to:" msgstr "???????????:" -#: ../templates/classic/html/issue.search.html:152 +#: ../templates/classic/html/issue.search.html:155 msgid "assigned to me" msgstr "???????? ???" -#: ../templates/classic/html/issue.search.html:154 +#: ../templates/classic/html/issue.search.html:157 msgid "unassigned" msgstr "???????????" -#: ../templates/classic/html/issue.search.html:164 +#: ../templates/classic/html/issue.search.html:167 msgid "No Sort or group:" msgstr "?? ??????????? / ?? ????????????" -#: ../templates/classic/html/issue.search.html:172 +#: ../templates/classic/html/issue.search.html:175 msgid "Pagesize:" msgstr "?????? ????????:" -#: ../templates/classic/html/issue.search.html:178 +#: ../templates/classic/html/issue.search.html:181 msgid "Start With:" msgstr "?????? ?:" -#: ../templates/classic/html/issue.search.html:184 +#: ../templates/classic/html/issue.search.html:187 msgid "Sort Descending:" msgstr "??????????? ?? ????????:" -#: ../templates/classic/html/issue.search.html:191 +#: ../templates/classic/html/issue.search.html:194 msgid "Group Descending:" msgstr "???????????? ?? ????????" -#: ../templates/classic/html/issue.search.html:198 +#: ../templates/classic/html/issue.search.html:201 msgid "Query name**:" msgstr "??? 
???????**:" -#: ../templates/classic/html/issue.search.html:210 +#: ../templates/classic/html/issue.search.html:213 #: ../templates/classic/html/page.html:43 #: ../templates/classic/html/page.html:92 #: ../templates/classic/html/user.help-search.html:69 @@ -2936,11 +2988,11 @@ msgid "Search" msgstr "?????" -#: ../templates/classic/html/issue.search.html:215 +#: ../templates/classic/html/issue.search.html:218 msgid "*: The \"all text\" field will look in message bodies and issue titles" msgstr "*: ????? ?? ????? ?????? ???? ????????? ?????? ? ?????????? ? ? ???? ?????????." -#: ../templates/classic/html/issue.search.html:218 +#: ../templates/classic/html/issue.search.html:221 msgid "**: If you supply a name, the query will be saved off and available as a link in the sidebar" msgstr "**: ???? ??????? ???, ?????? ????? ???????? ??? ???? ?????? ? ???????? ? ?????? ???????? ? ????." @@ -2964,10 +3016,6 @@ msgid "To create a new keyword, enter it below and click \"Submit New Entry\"." msgstr "????? ??????? ????? ???????? ?????, ????????? ???? ????? ? ??????? ?????? \"????????\"." -#: ../templates/classic/html/keyword.item.html:37 -msgid "Keyword" -msgstr "???????? ?????" - #: ../templates/classic/html/msg.index.html:3 msgid "List of messages - ${tracker}" msgstr "?????? ????????? - ${tracker}" @@ -3044,11 +3092,6 @@ msgid "Show issue:" msgstr "????????:" -#: ../templates/classic/html/page.html:103 -#: ../templates/minimal/html/page.html:102 -msgid "Keywords" -msgstr "???????? ?????" - #: ../templates/classic/html/page.html:108 #: ../templates/minimal/html/page.html:107 msgid "Edit Existing" @@ -3332,26 +3375,26 @@ msgid "User${id} Editing" msgstr "?????????????? ???????? ???????????? ${id}" -#: ../templates/classic/html/user.item.html:79 +#: ../templates/classic/html/user.item.html:80 #: ../templates/classic/html/user.register.html:33 -#: ../templates/minimal/html/user.item.html:74 +#: ../templates/minimal/html/user.item.html:80 #: ../templates/minimal/html/user.register.html:41 msgid "Roles" msgstr "????" -#: ../templates/classic/html/user.item.html:87 -#: ../templates/minimal/html/user.item.html:82 +#: ../templates/classic/html/user.item.html:88 +#: ../templates/minimal/html/user.item.html:88 msgid "(to give the user more than one role, enter a comma,separated,list)" msgstr "(???? ????? ?????????, ??????????? ?? ????? ???????)" -#: ../templates/classic/html/user.item.html:108 -#: ../templates/minimal/html/user.item.html:103 +#: ../templates/classic/html/user.item.html:109 +#: ../templates/minimal/html/user.item.html:109 msgid "(this is a numeric hour offset, the default is ${zone})" msgstr "(????? - ??????? ????? ??????? ? ??????????? ????????, ?? ????????? - ${zone})" -#: ../templates/classic/html/user.item.html:129 +#: ../templates/classic/html/user.item.html:130 #: ../templates/classic/html/user.register.html:53 -#: ../templates/minimal/html/user.item.html:124 +#: ../templates/minimal/html/user.item.html:130 #: ../templates/minimal/html/user.register.html:53 msgid "Alternate E-mail addresses
              One address per line" msgstr "?????????????? ?????? email
              ?? ?????? ?????? ? ??????" Modified: tracker/roundup-src/roundup/__init__.py ============================================================================== --- tracker/roundup-src/roundup/__init__.py (original) +++ tracker/roundup-src/roundup/__init__.py Sun Mar 9 09:26:16 2008 @@ -14,8 +14,8 @@ # FOR A PARTICULAR PURPOSE. THE CODE PROVIDED HEREUNDER IS ON AN "AS IS" # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. -# -# $Id: __init__.py,v 1.45 2006/12/19 03:03:37 richard Exp $ +# +# $Id: __init__.py,v 1.49 2007/12/23 01:52:07 richard Exp $ '''Roundup - issue tracking for knowledge workers. @@ -27,14 +27,14 @@ new issues, (b) find and edit existing issues, and (c) discuss issues with other participants. The system will facilitate communication among the participants by managing discussions and notifying interested parties when -issues are edited. +issues are edited. Roundup's structure is that of a cake:: _________________________________________________________________________ | E-mail Client | Web Browser | Detector Scripts | Shell | |------------------+-----------------+----------------------+-------------| - | E-mail User | Web User | Detector | Command | + | E-mail User | Web User | Detector | Command | |-------------------------------------------------------------------------| | Roundup Database Layer | |-------------------------------------------------------------------------| @@ -68,6 +68,6 @@ ''' __docformat__ = 'restructuredtext' -__version__ = '1.3.2' +__version__ = '1.4.2' # vim: set filetype=python ts=4 sw=4 et si Modified: tracker/roundup-src/roundup/admin.py ============================================================================== --- tracker/roundup-src/roundup/admin.py (original) +++ tracker/roundup-src/roundup/admin.py Sun Mar 9 09:26:16 2008 @@ -16,7 +16,7 @@ # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. # -# $Id: admin.py,v 1.105 2006/08/11 05:13:06 richard Exp $ +# $Id: admin.py,v 1.110 2008/02/07 03:28:33 richard Exp $ '''Administration commands for maintaining Roundup trackers. ''' @@ -75,6 +75,7 @@ self.help[k[5:]] = getattr(self, k) self.tracker_home = '' self.db = None + self.db_uncommitted = False def get_class(self, classname): '''Get the class - raise an exception if it doesn't exist. @@ -286,26 +287,31 @@ Look in the following places, where the later rules take precedence: - 1. /share/roundup/templates/* + 1. /../../share/roundup/templates/* + this is where they will be if we installed an egg via easy_install + 2. /share/roundup/templates/* this should be the standard place to find them when Roundup is installed - 2. /../templates/* + 3. /../templates/* this will be used if Roundup's run in the distro (aka. source) directory - 3. /* + 4. /* this is for when someone unpacks a 3rd-party template - 4. + 5. this is for someone who "cd"s to the 3rd-party template dir ''' # OK, try /share/roundup/templates + # and /share/roundup/templates # -- this module (roundup.admin) will be installed in something # like: - # /usr/lib/python2.2/site-packages/roundup/admin.py (5 dirs up) - # c:\python22\lib\site-packages\roundup\admin.py (4 dirs up) - # we're interested in where the "lib" directory is - ie. 
the /usr/ - # part + # /usr/lib/python2.5/site-packages/roundup/admin.py (5 dirs up) + # c:\python25\lib\site-packages\roundup\admin.py (4 dirs up) + # /usr/lib/python2.5/site-packages/roundup-1.3.3-py2.5-egg/roundup/admin.py + # (2 dirs up) + # + # we're interested in where the directory containing "share" is templates = {} - for N in 4, 5: + for N in 2, 4, 5: path = __file__ # move up N elements in the path for i in range(N): @@ -642,6 +648,7 @@ except (TypeError, IndexError, ValueError), message: import traceback; traceback.print_exc() raise UsageError, message + self.db_uncommitted = True return 0 def do_find(self, args): @@ -749,7 +756,7 @@ keys.sort() for key in keys: value = cl.get(nodeid, key) - print _('%(key)s: %(value)r')%locals() + print _('%(key)s: %(value)s')%locals() def do_create(self, args): ""'''Usage: create classname property=value ... @@ -813,6 +820,7 @@ print apply(cl.create, (), props) except (TypeError, IndexError, ValueError), message: raise UsageError, message + self.db_uncommitted = True return 0 def do_list(self, args): @@ -995,6 +1003,7 @@ they are successful. ''' self.db.commit() + self.db_uncommitted = False return 0 def do_rollback(self, args): @@ -1007,6 +1016,7 @@ immediately after would make no changes to the database. ''' self.db.rollback() + self.db_uncommitted = False return 0 def do_retire(self, args): @@ -1030,6 +1040,7 @@ raise UsageError, _('no such class "%(classname)s"')%locals() except IndexError: raise UsageError, _('no such %(classname)s node "%(nodeid)s"')%locals() + self.db_uncommitted = True return 0 def do_restore(self, args): @@ -1052,6 +1063,7 @@ raise UsageError, _('no such class "%(classname)s"')%locals() except IndexError: raise UsageError, _('no such %(classname)s node "%(nodeid)s"')%locals() + self.db_uncommitted = True return 0 def do_export(self, args, export_files=True): @@ -1110,10 +1122,7 @@ # all nodes for this class for nodeid in cl.getnodeids(): if self.verbose: - sys.stdout.write('Exporting %s - %s\r'%(classname, nodeid)) - sys.stdout.flush() - if self.verbose: - sys.stdout.write('Exporting %s - %s\r'%(classname, nodeid)) + sys.stdout.write('\rExporting %s - %s'%(classname, nodeid)) sys.stdout.flush() writer.writerow(cl.export_list(propnames, nodeid)) if export_files and hasattr(cl, 'export_files'): @@ -1198,7 +1207,7 @@ continue if self.verbose: - sys.stdout.write('Importing %s - %s\r'%(classname, n)) + sys.stdout.write('\rImporting %s - %s'%(classname, n)) sys.stdout.flush() # do the import and figure the current highest nodeid @@ -1219,6 +1228,7 @@ print 'setting', classname, maxid+1 self.db.setid(classname, str(maxid+1)) + self.db_uncommitted = True return 0 def do_pack(self, args): @@ -1257,6 +1267,7 @@ elif m['date']: pack_before = date.Date(value) self.db.pack(pack_before) + self.db_uncommitted = True return 0 def do_reindex(self, args, desre=re.compile('([A-Za-z]+)([0-9]+)')): @@ -1322,6 +1333,33 @@ print _(' %(description)s (%(name)s)')%d return 0 + + def do_migrate(self, args): + '''Usage: migrate + Update a tracker's database to be compatible with the Roundup + codebase. + + You should run the "migrate" command for your tracker once you've + installed the latest codebase. + + Do this before you use the web, command-line or mail interface and + before any users access the tracker. 
+ + This command will respond with either "Tracker updated" (if you've + not previously run it on an RDBMS backend) or "No migration action + required" (if you have run it, or have used another interface to the + tracker, or possibly because you are using anydbm). + + It's safe to run this even if it's not required, so just get into + the habit. + ''' + if getattr(self.db, 'db_version_updated'): + print _('Tracker updated') + self.db_uncommitted = True + else: + print _('No migration action required') + return 0 + def run_command(self, args): '''Run a single command ''' @@ -1427,7 +1465,7 @@ self.run_command(args) # exit.. check for transactions - if self.db and self.db.transactions: + if self.db and self.db_uncommitted: commit = raw_input(_('There are unsaved changes. Commit them (y/N)? ')) if commit and commit[0].lower() == 'y': self.db.commit() Modified: tracker/roundup-src/roundup/backends/__init__.py ============================================================================== --- tracker/roundup-src/roundup/backends/__init__.py (original) +++ tracker/roundup-src/roundup/backends/__init__.py Sun Mar 9 09:26:16 2008 @@ -15,7 +15,7 @@ # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. # -# $Id: __init__.py,v 1.39 2006/10/09 23:49:32 richard Exp $ +# $Id: __init__.py,v 1.40 2007/11/07 20:47:12 richard Exp $ '''Container for the hyperdb storage backend implementations. ''' @@ -80,7 +80,7 @@ ''' l = [] - for name in 'anydbm', 'mysql', 'sqlite', 'metakit', 'postgresql': + for name in 'anydbm', 'mysql', 'sqlite', 'postgresql': if have_backend(name): l.append(name) return l Modified: tracker/roundup-src/roundup/backends/back_anydbm.py ============================================================================== --- tracker/roundup-src/roundup/backends/back_anydbm.py (original) +++ tracker/roundup-src/roundup/backends/back_anydbm.py Sun Mar 9 09:26:16 2008 @@ -15,7 +15,7 @@ # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. # -#$Id: back_anydbm.py,v 1.202 2006/08/29 04:20:50 richard Exp $ +#$Id: back_anydbm.py,v 1.210 2008/02/07 00:57:59 richard Exp $ '''This module defines a backend that saves the hyperdatabase in a database chosen by anydbm. It is guaranteed to always be available in python versions >2.1.1 (the dumbdbm fallback in 2.1.1 and earlier has several @@ -85,6 +85,7 @@ Class.set(), Class.retire(), and Class.restore() methods are disabled. ''' + FileStorage.__init__(self, config.UMASK) self.config, self.journaltag = config, journaltag self.dir = config.DATABASE self.classes = {} @@ -214,7 +215,8 @@ if os.path.exists(path): db_type = whichdb.whichdb(path) if not db_type: - raise hyperdb.DatabaseError, "Couldn't identify database type" + raise hyperdb.DatabaseError, \ + _("Couldn't identify database type") elif os.path.exists(path+'.db'): # if the path ends in '.db', it's a dbm database, whether # anydbm says it's dbhash or not! 
@@ -240,8 +242,8 @@ dbm = __import__(db_type) except ImportError: raise hyperdb.DatabaseError, \ - "Couldn't open database - the required module '%s'"\ - " is not available"%db_type + _("Couldn't open database - the required module '%s'"\ + " is not available")%db_type if __debug__: logging.getLogger('hyperdb').debug("opendb %r.open(%r, %r)"%(db_type, path, mode)) @@ -376,6 +378,7 @@ # add the destroy commit action self.transactions.append((self.doDestroyNode, (classname, nodeid))) + self.transactions.append((FileStorage.destroy, (self, classname, nodeid))) def serialise(self, classname, node): '''Copy the node contents, converting non-marshallable data into @@ -619,6 +622,11 @@ db.close() del self.databases + # clear the transactions list now so the blobfile implementation + # doesn't think there's still pending file commits when it tries + # to access the file data + self.transactions = [] + # reindex the nodes that request it for classname, nodeid in filter(None, reindex.keys()): self.getclass(classname).index(nodeid) @@ -717,9 +725,6 @@ if db.has_key(nodeid): del db[nodeid] - # return the classname, nodeid so we reindex this content - return (classname, nodeid) - def rollback(self): ''' Reverse all actions from the current transaction. ''' @@ -792,7 +797,7 @@ raise KeyError, '"id" is reserved' if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' + raise hyperdb.DatabaseError, _('Database open read-only') if propvalues.has_key('creation') or propvalues.has_key('activity'): raise KeyError, '"creation" and "activity" are reserved' @@ -840,8 +845,10 @@ (self.classname, newid, key)) elif isinstance(prop, hyperdb.Multilink): - if type(value) != type([]): - raise TypeError, 'new property "%s" not a list of ids'%key + if value is None: + value = [] + if not hasattr(value, '__iter__'): + raise TypeError, 'new property "%s" not an iterable of ids'%key # clean up and validate the list of links link_class = self.properties[key].classname @@ -1065,7 +1072,7 @@ raise KeyError, '"id" is reserved' if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' + raise hyperdb.DatabaseError, _('Database open read-only') node = self.db.getnode(self.classname, nodeid) if node.has_key(self.db.RETIRED_FLAG): @@ -1132,8 +1139,10 @@ (self.classname, nodeid, propname)) elif isinstance(prop, hyperdb.Multilink): - if type(value) != type([]): - raise TypeError, 'new property "%s" not a list of'\ + if value is None: + value = [] + if not hasattr(value, '__iter__'): + raise TypeError, 'new property "%s" not an iterable of'\ ' ids'%propname link_class = self.properties[propname].classname l = [] @@ -1260,7 +1269,7 @@ to modify the "creation" or "activity" properties cause a KeyError. ''' if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' + raise hyperdb.DatabaseError, _('Database open read-only') self.fireAuditors('retire', nodeid, None) @@ -1278,7 +1287,7 @@ Make node available for all operations like it was before retirement. ''' if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' + raise hyperdb.DatabaseError, _('Database open read-only') node = self.db.getnode(self.classname, nodeid) # check if key property was overrided @@ -1324,7 +1333,7 @@ support the session storage of the cgi interface. 
''' if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' + raise hyperdb.DatabaseError, _('Database open read-only') self.db.destroynode(self.classname, nodeid) def history(self, nodeid): @@ -1517,7 +1526,7 @@ must_close = False if db is None: db = self.db.getclassdb(self.classname) - must_close = True + must_close = True try: res = res + db.keys() @@ -1556,7 +1565,7 @@ The filter must match all properties specificed. If the property value to match is a list: - + 1. String properties must match all elements in the list, and 2. Other properties must match any of the elements in the list. """ @@ -1640,7 +1649,7 @@ l.append((OTHER, k, [float(val) for val in v])) filterspec = l - + # now, find all the nodes that are active and pass filtering matches = [] cldb = self.db.getclassdb(cn) @@ -1894,7 +1903,7 @@ Return the nodeid of the node imported. ''' if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' + raise hyperdb.DatabaseError, _('Database open read-only') properties = self.getprops() # make the new node's property map @@ -1962,8 +1971,7 @@ prop = properties[propname] # make sure the params are eval()'able if value is None: - # don't export empties - continue + pass elif isinstance(prop, hyperdb.Date): # this is a hack - some dates are stored as strings if not isinstance(value, type('')): Modified: tracker/roundup-src/roundup/backends/back_metakit.py ============================================================================== --- tracker/roundup-src/roundup/backends/back_metakit.py (original) +++ tracker/roundup-src/roundup/backends/back_metakit.py Sun Mar 9 09:26:16 2008 @@ -1,2142 +0,0 @@ -# $Id: back_metakit.py,v 1.113 2006/08/29 04:20:50 richard Exp $ -'''Metakit backend for Roundup, originally by Gordon McMillan. - -Known Current Bugs: - -- You can't change a class' key properly. This shouldn't be too hard to fix. -- Some unit tests are overridden. - -Notes by Richard: - -This backend has some behaviour specific to metakit: - -- there's no concept of an explicit "unset" in metakit, so all types - have some "unset" value: - - ========= ===== ====================================================== - Type Value Action when fetching from mk - ========= ===== ====================================================== - Strings '' convert to None - Date 0 (seconds since 1970-01-01.00:00:00) convert to None - Interval '' convert to None - Number 0 ambiguious :( - do nothing (see BACKWARDS_COMPATIBLE) - Boolean 0 ambiguious :( - do nothing (see BACKWARDS_COMPATABILE) - Link 0 convert to None - Multilink [] actually, mk can handle this one ;) - Password '' convert to None - ========= ===== ====================================================== - - The get/set routines handle these values accordingly by converting - to/from None where they can. The Number/Boolean types are not able - to handle an "unset" at all, so they default the "unset" to 0. -- Metakit relies in reference counting to close the database, there is - no explicit close call. This can cause issues if a metakit - database is referenced multiple times, one might not actually be - closing the db. -- probably a bunch of stuff that I'm not aware of yet because I haven't - fully read through the source. One of these days.... -''' -__docformat__ = 'restructuredtext' -# Enable this flag to break backwards compatibility (i.e. can't read old -# databases) but comply with more roundup features, like adding NULL support. 
-BACKWARDS_COMPATIBLE = 1 - -from roundup import hyperdb, date, password, roundupdb, security -from roundup.support import reversed -import logging -import metakit -from sessions_dbm import Sessions, OneTimeKeys -import re, marshal, os, sys, time, calendar, shutil -from indexer_common import Indexer as CommonIndexer -import locking -from roundup.date import Range -from blobfiles import files_in_dir - -# view modes for opening -# XXX FIXME BPK -> these don't do anything, they are ignored -# should we just get rid of them for simplicities sake? -READ = 0 -READWRITE = 1 - -def db_exists(config): - return os.path.exists(os.path.join(config.TRACKER_HOME, 'db', - 'tracker.mk4')) - -def db_nuke(config): - shutil.rmtree(os.path.join(config.TRACKER_HOME, 'db')) - -# general metakit error -class MKBackendError(Exception): - pass - -_dbs = {} - -def Database(config, journaltag=None): - ''' Only have a single instance of the Database class for each instance - ''' - db = _dbs.get(config.DATABASE, None) - if db is None or db._db is None: - db = _Database(config, journaltag) - _dbs[config.DATABASE] = db - else: - db.journaltag = journaltag - return db - -class _Database(hyperdb.Database, roundupdb.Database): - # Metakit has no concept of an explicit NULL - BACKEND_MISSING_STRING = '' - BACKEND_MISSING_NUMBER = 0 - BACKEND_MISSING_BOOLEAN = 0 - - def __init__(self, config, journaltag=None): - self.config = config - self.journaltag = journaltag - self.classes = {} - self.dirty = 0 - self.lockfile = None - self._db = self.__open() - self.indexer = Indexer(self) - self.security = security.Security(self) - - self.stats = {'cache_hits': 0, 'cache_misses': 0, 'get_items': 0, - 'filtering': 0} - - os.umask(config.UMASK) - - def post_init(self): - if self.indexer.should_reindex(): - self.reindex() - - def refresh_database(self): - # XXX handle refresh - self.reindex() - - def reindex(self, classname=None): - if classname: - classes = [self.getclass(classname)] - else: - classes = self.classes.values() - for klass in classes: - for nodeid in klass.list(): - klass.index(nodeid) - self.indexer.save_index() - - def getSessionManager(self): - return Sessions(self) - - def getOTKManager(self): - return OneTimeKeys(self) - - # --- defined in ping's spec - def __getattr__(self, classname): - if classname == 'transactions': - return self.dirty - # fall back on the classes - try: - return self.getclass(classname) - except KeyError, msg: - # KeyError's not appropriate here - raise AttributeError, str(msg) - def getclass(self, classname): - try: - return self.classes[classname] - except KeyError: - raise KeyError, 'There is no class called "%s"'%classname - def getclasses(self): - return self.classes.keys() - # --- end of ping's spec - - # --- exposed methods - def commit(self, fail_ok=False): - ''' Commit the current transactions. - - Save all data changed since the database was opened or since the - last commit() or rollback(). - - fail_ok indicates that the commit is allowed to fail. This is used - in the web interface when committing cleaning of the session - database. We don't care if there's a concurrency issue there. - - The only backend this seems to affect is postgres. 
- ''' - if self.dirty: - self._db.commit() - for cl in self.classes.values(): - cl._commit() - self.indexer.save_index() - self.dirty = 0 - def rollback(self): - '''roll back all changes since the last commit''' - if self.dirty: - for cl in self.classes.values(): - cl._rollback() - self._db.rollback() - self._db = None - self._db = metakit.storage(self.dbnm, 1) - self.hist = self._db.view('history') - self.tables = self._db.view('tables') - self.indexer.rollback() - self.indexer.datadb = self._db - self.dirty = 0 - def clearCache(self): - '''clear the internal cache by committing all pending database changes''' - for cl in self.classes.values(): - cl._commit() - def clear(self): - '''clear the internal cache but don't commit any changes''' - for cl in self.classes.values(): - cl._clear() - def hasnode(self, classname, nodeid): - '''does a particular class contain a nodeid?''' - return self.getclass(classname).hasnode(nodeid) - def pack(self, pack_before): - ''' Delete all journal entries except "create" before 'pack_before'. - ''' - mindate = int(calendar.timegm(pack_before.get_tuple())) - i = 0 - while i < len(self.hist): - if self.hist[i].date < mindate and self.hist[i].action != _CREATE: - self.hist.delete(i) - else: - i = i + 1 - def addclass(self, cl): - ''' Add a Class to the hyperdatabase. - ''' - cn = cl.classname - self.classes[cn] = cl - if self.tables.find(name=cn) < 0: - self.tables.append(name=cn) - - # add default Edit and View permissions - self.security.addPermission(name="Create", klass=cn, - description="User is allowed to create "+cn) - self.security.addPermission(name="Edit", klass=cn, - description="User is allowed to edit "+cn) - self.security.addPermission(name="View", klass=cn, - description="User is allowed to access "+cn) - - def addjournal(self, tablenm, nodeid, action, params, creator=None, - creation=None): - ''' Journal the Action - 'action' may be: - - 'create' or 'set' -- 'params' is a dictionary of property values - 'link' or 'unlink' -- 'params' is (classname, nodeid, propname) - 'retire' -- 'params' is None - ''' - tblid = self.tables.find(name=tablenm) - if tblid == -1: - tblid = self.tables.append(name=tablenm) - if creator is None: - creator = int(self.getuid()) - else: - try: - creator = int(creator) - except TypeError: - creator = int(self.getclass('user').lookup(creator)) - if creation is None: - creation = int(time.time()) - elif isinstance(creation, date.Date): - creation = int(calendar.timegm(creation.get_tuple())) - # tableid:I,nodeid:I,date:I,user:I,action:I,params:B - self.hist.append(tableid=tblid, - nodeid=int(nodeid), - date=creation, - action=action, - user=creator, - params=marshal.dumps(params)) - - def setjournal(self, tablenm, nodeid, journal): - '''Set the journal to the "journal" list.''' - tblid = self.tables.find(name=tablenm) - if tblid == -1: - tblid = self.tables.append(name=tablenm) - for nodeid, date, user, action, params in journal: - # tableid:I,nodeid:I,date:I,user:I,action:I,params:B - self.hist.append(tableid=tblid, - nodeid=int(nodeid), - date=date, - action=action, - user=int(user), - params=marshal.dumps(params)) - - def getjournal(self, tablenm, nodeid): - ''' get the journal for id - ''' - rslt = [] - tblid = self.tables.find(name=tablenm) - if tblid == -1: - return rslt - q = self.hist.select(tableid=tblid, nodeid=int(nodeid)) - if len(q) == 0: - raise IndexError, "no history for id %s in %s" % (nodeid, tablenm) - i = 0 - #userclass = self.getclass('user') - for row in q: - try: - params = marshal.loads(row.params) 
- except ValueError: - logging.getLogger("hyperdb").error( - "history couldn't unmarshal %r" % row.params) - params = {} - #usernm = userclass.get(str(row.user), 'username') - dt = date.Date(time.gmtime(row.date)) - #rslt.append((nodeid, dt, usernm, _actionnames[row.action], params)) - rslt.append((nodeid, dt, str(row.user), _actionnames[row.action], - params)) - return rslt - - def destroyjournal(self, tablenm, nodeid): - nodeid = int(nodeid) - tblid = self.tables.find(name=tablenm) - if tblid == -1: - return - i = 0 - hist = self.hist - while i < len(hist): - if hist[i].tableid == tblid and hist[i].nodeid == nodeid: - hist.delete(i) - else: - i = i + 1 - self.dirty = 1 - - def close(self): - ''' Close off the connection. - ''' - # de-reference count the metakit databases, - # as this is the only way they will be closed - for cl in self.classes.values(): - cl.db = None - self._db = None - if self.lockfile is not None: - locking.release_lock(self.lockfile) - if _dbs.has_key(self.config.DATABASE): - del _dbs[self.config.DATABASE] - if self.lockfile is not None: - self.lockfile.close() - self.lockfile = None - self.classes = {} - - # force the indexer to close - self.indexer.close() - self.indexer = None - - # --- internal - def __open(self): - ''' Open the metakit database - ''' - # make the database dir if it doesn't exist - if not os.path.exists(self.config.DATABASE): - os.makedirs(self.config.DATABASE) - - # figure the file names - self.dbnm = db = os.path.join(self.config.DATABASE, 'tracker.mk4') - lockfilenm = db[:-3]+'lck' - - # get the database lock - self.lockfile = locking.acquire_lock(lockfilenm) - self.lockfile.write(str(os.getpid())) - self.lockfile.flush() - - # see if the schema has changed since last db access - self.fastopen = 0 - if os.path.exists(db): - dbtm = os.path.getmtime(db) - schemafile = os.path.join(self.config['HOME'], 'schema.py') - if not os.path.isfile(schemafile): - # try old-style schema - schemafile = os.path.join(self.config['HOME'], 'dbinit.py') - if os.path.isfile(schemafile) \ - and (os.path.getmtime(schemafile) < dbtm): - # found schema file - it's older than the db - self.fastopen = 1 - - # open the db - db = metakit.storage(db, 1) - hist = db.view('history') - tables = db.view('tables') - if not self.fastopen: - # create the database if it's brand new - if not hist.structure(): - hist = db.getas('history[tableid:I,nodeid:I,date:I,user:I,action:I,params:B]') - if not tables.structure(): - tables = db.getas('tables[name:S]') - db.commit() - - # we now have an open, initialised database - self.tables = tables - self.hist = hist - return db - - def setid(self, classname, maxid): - ''' No-op in metakit - ''' - cls = self.getclass(classname) - cls.setid(int(maxid)) - - def numfiles(self): - '''Get number of files in storage, even across subdirectories. - ''' - files_dir = os.path.join(self.config.DATABASE, 'files') - return files_in_dir(files_dir) - -_STRINGTYPE = type('') -_LISTTYPE = type([]) -_CREATE, _SET, _RETIRE, _LINK, _UNLINK, _RESTORE = range(6) - -_actionnames = { - _CREATE : 'create', - _SET : 'set', - _RETIRE : 'retire', - _RESTORE : 'restore', - _LINK : 'link', - _UNLINK : 'unlink', -} - -_names_to_actionnames = { - 'create': _CREATE, - 'set': _SET, - 'retire': _RETIRE, - 'restore': _RESTORE, - 'link': _LINK, - 'unlink': _UNLINK, -} - -_marker = [] - -_ALLOWSETTINGPRIVATEPROPS = 0 - -class Class(hyperdb.Class): - ''' The handle to a particular class of nodes in a hyperdatabase. 
- - All methods except __repr__ and getnode must be implemented by a - concrete backend Class of which this is one. - ''' - - privateprops = None - def __init__(self, db, classname, **properties): - if hasattr(db, classname): - raise ValueError, "Class %s already exists"%classname - - hyperdb.Class.__init__ (self, db, classname, **properties) - self.db = db # why isn't this a weakref as for other backends?? - self.key = None - self.ruprops = self.properties - self.privateprops = { 'id' : hyperdb.String(), - 'activity' : hyperdb.Date(), - 'actor' : hyperdb.Link('user'), - 'creation' : hyperdb.Date(), - 'creator' : hyperdb.Link('user') } - - self.idcache = {} - self.uncommitted = {} - self.comactions = [] - self.rbactions = [] - - view = self.__getview() - self.maxid = 1 - if view: - self.maxid = view[-1].id + 1 - - def setid(self, maxid): - self.maxid = maxid + 1 - - def enableJournalling(self): - '''Turn journalling on for this class - ''' - self.do_journal = 1 - - def disableJournalling(self): - '''Turn journalling off for this class - ''' - self.do_journal = 0 - - # --- the hyperdb.Class methods - def create(self, **propvalues): - ''' Create a new node of this class and return its id. - - The keyword arguments in 'propvalues' map property names to values. - - The values of arguments must be acceptable for the types of their - corresponding properties or a TypeError is raised. - - If this class has a key property, it must be present and its value - must not collide with other key strings or a ValueError is raised. - - Any other properties on this class that are missing from the - 'propvalues' dictionary are set to None. - - If an id in a link or multilink property does not refer to a valid - node, an IndexError is raised. - ''' - if not propvalues: - raise ValueError, "Need something to create!" - self.fireAuditors('create', None, propvalues) - newid = self.create_inner(**propvalues) - self.fireReactors('create', newid, None) - return newid - - def create_inner(self, **propvalues): - ''' Called by create, in-between the audit and react calls. - ''' - rowdict = {} - rowdict['id'] = newid = self.maxid - self.maxid += 1 - ndx = self.getview(READWRITE).append(rowdict) - propvalues['#ISNEW'] = 1 - try: - self.set_inner(str(newid), **propvalues) - except Exception: - self.maxid -= 1 - raise - return str(newid) - - def get(self, nodeid, propname, default=_marker, cache=1): - '''Get the value of a property on an existing node of this class. - - 'nodeid' must be the id of an existing node of this class or an - IndexError is raised. 'propname' must be the name of a property - of this class or a KeyError is raised. - - 'cache' exists for backwards compatibility, and is not used. 
- ''' - view = self.getview() - id = int(nodeid) - if cache == 0: - oldnode = self.uncommitted.get(id, None) - if oldnode and oldnode.has_key(propname): - raw = oldnode[propname] - converter = _converters.get(raw.__class__, None) - if converter: - return converter(raw) - return raw - ndx = self.idcache.get(id, None) - - if ndx is None: - ndx = view.find(id=id) - if ndx < 0: - raise IndexError, "%s has no node %s" % (self.classname, nodeid) - self.idcache[id] = ndx - try: - raw = getattr(view[ndx], propname) - except AttributeError: - raise KeyError, propname - rutyp = self.ruprops.get(propname, None) - - if rutyp is None: - rutyp = self.privateprops[propname] - - converter = _converters.get(rutyp.__class__, None) - if converter: - raw = converter(raw) - return raw - - def set(self, nodeid, **propvalues): - '''Modify a property on an existing node of this class. - - 'nodeid' must be the id of an existing node of this class or an - IndexError is raised. - - Each key in 'propvalues' must be the name of a property of this - class or a KeyError is raised. - - All values in 'propvalues' must be acceptable types for their - corresponding properties or a TypeError is raised. - - If the value of the key property is set, it must not collide with - other key strings or a ValueError is raised. - - If the value of a Link or Multilink property contains an invalid - node id, a ValueError is raised. - ''' - self.fireAuditors('set', nodeid, propvalues) - propvalues, oldnode = self.set_inner(nodeid, **propvalues) - self.fireReactors('set', nodeid, oldnode) - - def set_inner(self, nodeid, **propvalues): - '''Called outside of auditors''' - isnew = 0 - if propvalues.has_key('#ISNEW'): - isnew = 1 - del propvalues['#ISNEW'] - - if propvalues.has_key('id'): - raise KeyError, '"id" is reserved' - if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' - view = self.getview(READWRITE) - - # node must exist & not be retired - id = int(nodeid) - ndx = view.find(id=id) - if ndx < 0: - raise IndexError, "%s has no node %s" % (self.classname, nodeid) - row = view[ndx] - if row._isdel: - raise IndexError, "%s has no node %s" % (self.classname, nodeid) - oldnode = self.uncommitted.setdefault(id, {}) - changes = {} - - for key, value in propvalues.items(): - # this will raise the KeyError if the property isn't valid - # ... we don't use getprops() here because we only care about - # the writeable properties. 
- if _ALLOWSETTINGPRIVATEPROPS: - prop = self.ruprops.get(key, None) - if not prop: - prop = self.privateprops[key] - else: - prop = self.ruprops[key] - converter = _converters.get(prop.__class__, lambda v: v) - # if the value's the same as the existing value, no sense in - # doing anything - oldvalue = converter(getattr(row, key)) - if value == oldvalue: - del propvalues[key] - continue - - # check to make sure we're not duplicating an existing key - if key == self.key: - iv = self.getindexview(READWRITE) - ndx = iv.find(k=value) - if ndx == -1: - iv.append(k=value, i=row.id) - if not isnew: - ndx = iv.find(k=oldvalue) - if ndx > -1: - iv.delete(ndx) - else: - raise ValueError, 'node with key "%s" exists'%value - - # do stuff based on the prop type - if isinstance(prop, hyperdb.Link): - link_class = prop.classname - # must be a string or None - if value is not None and not isinstance(value, type('')): - raise ValueError, 'property "%s" link value be a string'%( - key) - # Roundup sets to "unselected" by passing None - if value is None: - value = 0 - # if it isn't a number, it's a key - try: - int(value) - except ValueError: - try: - value = self.db.getclass(link_class).lookup(value) - except (TypeError, KeyError): - raise IndexError, 'new property "%s": %s not a %s'%( - key, value, prop.classname) - - if (value is not None and - not self.db.getclass(link_class).hasnode(value)): - raise IndexError, '%s has no node %s'%(link_class, value) - - setattr(row, key, int(value)) - changes[key] = oldvalue - - if self.do_journal and prop.do_journal: - # register the unlink with the old linked node - if oldvalue: - self.db.addjournal(link_class, oldvalue, _UNLINK, - (self.classname, str(row.id), key)) - - # register the link with the newly linked node - if value: - self.db.addjournal(link_class, value, _LINK, - (self.classname, str(row.id), key)) - - elif isinstance(prop, hyperdb.Multilink): - if value is not None and type(value) != _LISTTYPE: - raise TypeError, 'new property "%s" not a list of ids'%key - link_class = prop.classname - l = [] - if value is None: - value = [] - for entry in value: - if type(entry) != _STRINGTYPE: - raise ValueError, 'new property "%s" link value ' \ - 'must be a string'%key - # if it isn't a number, it's a key - try: - int(entry) - except ValueError: - try: - entry = self.db.getclass(link_class).lookup(entry) - except (TypeError, KeyError): - raise IndexError, 'new property "%s": %s not a %s'%( - key, entry, prop.classname) - l.append(entry) - propvalues[key] = value = l - - # handle removals - rmvd = [] - for id in oldvalue: - if id not in value: - rmvd.append(id) - # register the unlink with the old linked node - if self.do_journal and prop.do_journal: - self.db.addjournal(link_class, id, _UNLINK, - (self.classname, str(row.id), key)) - - # handle additions - adds = [] - for id in value: - if id not in oldvalue: - if not self.db.getclass(link_class).hasnode(id): - raise IndexError, '%s has no node %s'%( - link_class, id) - adds.append(id) - # register the link with the newly linked node - if self.do_journal and prop.do_journal: - self.db.addjournal(link_class, id, _LINK, - (self.classname, str(row.id), key)) - - # perform the modifications on the actual property value - sv = getattr(row, key) - i = 0 - while i < len(sv): - if str(sv[i].fid) in rmvd: - sv.delete(i) - else: - i += 1 - for id in adds: - sv.append(fid=int(id)) - - # figure the journal entry - l = [] - if adds: - l.append(('+', adds)) - if rmvd: - l.append(('-', rmvd)) - if l: - changes[key] = tuple(l) - 
#changes[key] = oldvalue - - if not rmvd and not adds: - del propvalues[key] - - elif isinstance(prop, hyperdb.String): - if value is not None and type(value) != _STRINGTYPE: - raise TypeError, 'new property "%s" not a string'%key - if value is None: - value = '' - setattr(row, key, value) - changes[key] = oldvalue - if hasattr(prop, 'isfilename') and prop.isfilename: - propvalues[key] = os.path.basename(value) - if prop.indexme: - self.db.indexer.add_text((self.classname, nodeid, key), - value, 'text/plain') - - elif isinstance(prop, hyperdb.Password): - if value is not None and not isinstance(value, password.Password): - raise TypeError, 'new property "%s" not a Password'% key - if value is None: - value = '' - setattr(row, key, str(value)) - changes[key] = str(oldvalue) - propvalues[key] = str(value) - - elif isinstance(prop, hyperdb.Date): - if value is not None and not isinstance(value, date.Date): - raise TypeError, 'new property "%s" not a Date'% key - if value is None: - setattr(row, key, 0) - else: - setattr(row, key, int(calendar.timegm(value.get_tuple()))) - if oldvalue is None: - changes[key] = oldvalue - else: - changes[key] = str(oldvalue) - propvalues[key] = str(value) - - elif isinstance(prop, hyperdb.Interval): - if value is not None and not isinstance(value, date.Interval): - raise TypeError, 'new property "%s" not an Interval'% key - if value is None: - setattr(row, key, '') - else: - # kedder: we should store interval values serialized - setattr(row, key, value.serialise()) - changes[key] = str(oldvalue) - propvalues[key] = str(value) - - elif isinstance(prop, hyperdb.Number): - if value is None: - v = 0 - else: - try: - v = float(value) - except ValueError: - raise TypeError, "%s (%s) is not numeric"%(key, repr(value)) - if not BACKWARDS_COMPATIBLE: - if v >=0: - v = v + 1 - setattr(row, key, v) - changes[key] = oldvalue - propvalues[key] = value - - elif isinstance(prop, hyperdb.Boolean): - if value is None: - bv = 0 - elif value not in (0,1): - raise TypeError, "%s (%s) is not boolean"%(key, repr(value)) - else: - bv = value - if not BACKWARDS_COMPATIBLE: - bv += 1 - setattr(row, key, bv) - changes[key] = oldvalue - propvalues[key] = value - - oldnode[key] = oldvalue - - # nothing to do? - if not isnew and not propvalues: - return propvalues, oldnode - if not propvalues.has_key('activity'): - row.activity = int(time.time()) - if not propvalues.has_key('actor'): - row.actor = int(self.db.getuid()) - if isnew: - if not row.creation: - row.creation = int(time.time()) - if not row.creator: - row.creator = int(self.db.getuid()) - - self.db.dirty = 1 - - if self.do_journal: - if isnew: - self.db.addjournal(self.classname, nodeid, _CREATE, {}) - else: - self.db.addjournal(self.classname, nodeid, _SET, changes) - - return propvalues, oldnode - - def retire(self, nodeid): - '''Retire a node. - - The properties on the node remain available from the get() method, - and the node's id is never reused. - - Retired nodes are not returned by the find(), list(), or lookup() - methods, and other nodes may reuse the values of their key properties. 
- ''' - if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' - self.fireAuditors('retire', nodeid, None) - view = self.getview(READWRITE) - ndx = view.find(id=int(nodeid)) - if ndx < 0: - raise KeyError, "nodeid %s not found" % nodeid - - row = view[ndx] - oldvalues = self.uncommitted.setdefault(row.id, {}) - oldval = oldvalues['_isdel'] = row._isdel - row._isdel = 1 - - if self.do_journal: - self.db.addjournal(self.classname, nodeid, _RETIRE, {}) - if self.key: - iv = self.getindexview(READWRITE) - ndx = iv.find(k=getattr(row, self.key)) - # find is broken with multiple attribute lookups - # on ordered views - #ndx = iv.find(k=getattr(row, self.key),i=row.id) - if ndx > -1 and iv[ndx].i == row.id: - iv.delete(ndx) - - self.db.dirty = 1 - self.fireReactors('retire', nodeid, None) - - def restore(self, nodeid): - '''Restore a retired node. - - Make node available for all operations like it was before retirement. - ''' - if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' - - # check if key property was overrided - key = self.getkey() - keyvalue = self.get(nodeid, key) - - try: - id = self.lookup(keyvalue) - except KeyError: - pass - else: - raise KeyError, "Key property (%s) of retired node clashes with \ - existing one (%s)" % (key, keyvalue) - # Now we can safely restore node - self.fireAuditors('restore', nodeid, None) - view = self.getview(READWRITE) - ndx = view.find(id=int(nodeid)) - if ndx < 0: - raise KeyError, "nodeid %s not found" % nodeid - - row = view[ndx] - oldvalues = self.uncommitted.setdefault(row.id, {}) - oldval = oldvalues['_isdel'] = row._isdel - row._isdel = 0 - - if self.do_journal: - self.db.addjournal(self.classname, nodeid, _RESTORE, {}) - if self.key: - iv = self.getindexview(READWRITE) - ndx = iv.find(k=getattr(row, self.key),i=row.id) - if ndx > -1: - iv.delete(ndx) - self.db.dirty = 1 - self.fireReactors('restore', nodeid, None) - - def is_retired(self, nodeid): - '''Return true if the node is retired - ''' - view = self.getview(READWRITE) - # node must exist & not be retired - id = int(nodeid) - ndx = view.find(id=id) - if ndx < 0: - raise IndexError, "%s has no node %s" % (self.classname, nodeid) - row = view[ndx] - return row._isdel - - def history(self, nodeid): - '''Retrieve the journal of edits on a particular node. - - 'nodeid' must be the id of an existing node of this class or an - IndexError is raised. - - The returned list contains tuples of the form - - (nodeid, date, tag, action, params) - - 'date' is a Timestamp object specifying the time of the change and - 'tag' is the journaltag specified when the database was opened. - ''' - if not self.do_journal: - raise ValueError, 'Journalling is disabled for this class' - return self.db.getjournal(self.classname, nodeid) - - def setkey(self, propname): - '''Select a String property of this class to be the key property. - - 'propname' must be the name of a String property of this class or - None, or a TypeError is raised. The values of the key property on - all existing nodes must be unique or a ValueError is raised. 
- ''' - if self.key: - if propname == self.key: - return - else: - # drop the old key table - tablename = "_%s.%s"%(self.classname, self.key) - self.db._db.getas(tablename) - - #raise ValueError, "%s already indexed on %s"%(self.classname, - # self.key) - - prop = self.properties.get(propname, None) - if prop is None: - prop = self.privateprops.get(propname, None) - if prop is None: - raise KeyError, "no property %s" % propname - if not isinstance(prop, hyperdb.String): - raise TypeError, "%s is not a String" % propname - - # the way he index on properties is by creating a - # table named _%(classname)s.%(key)s, if this table - # exists then everything is okay. If this table - # doesn't exist, then generate a new table on the - # key value. - - # first setkey for this run or key has been changed - self.key = propname - tablename = "_%s.%s"%(self.classname, self.key) - - iv = self.db._db.view(tablename) - if self.db.fastopen and iv.structure(): - return - - # very first setkey ever or the key has changed - self.db.dirty = 1 - iv = self.db._db.getas('_%s[k:S,i:I]' % tablename) - iv = iv.ordered(1) - for row in self.getview(): - iv.append(k=getattr(row, propname), i=row.id) - self.db.commit() - - def getkey(self): - '''Return the name of the key property for this class or None.''' - return self.key - - def lookup(self, keyvalue): - '''Locate a particular node by its key property and return its id. - - If this class has no key property, a TypeError is raised. If the - keyvalue matches one of the values for the key property among - the nodes in this class, the matching node's id is returned; - otherwise a KeyError is raised. - ''' - if not self.key: - raise TypeError, 'No key property set for class %s'%self.classname - - if type(keyvalue) is not _STRINGTYPE: - raise TypeError, '%r is not a string'%keyvalue - - # XXX FIX ME -> this is a bit convoluted - # First we search the index view to get the id - # which is a quicker look up. - # Then we lookup the row with id=id - # if the _isdel property of the row is 0, return the - # string version of the id. (Why string version???) - # - # Otherwise, just lookup the non-indexed key - # in the non-index table and check the _isdel property - iv = self.getindexview() - if iv: - # look up the index view for the id, - # then instead of looking up the keyvalue, lookup the - # quicker id - ndx = iv.find(k=keyvalue) - if ndx > -1: - view = self.getview() - ndx = view.find(id=iv[ndx].i) - if ndx > -1: - row = view[ndx] - if not row._isdel: - return str(row.id) - else: - # perform the slower query - view = self.getview() - ndx = view.find({self.key:keyvalue}) - if ndx > -1: - row = view[ndx] - if not row._isdel: - return str(row.id) - - raise KeyError, keyvalue - - def destroy(self, id): - '''Destroy a node. - - WARNING: this method should never be used except in extremely rare - situations where there could never be links to the node being - deleted - - WARNING: use retire() instead - - WARNING: the properties of this node will not be available ever again - - WARNING: really, use retire() instead - - Well, I think that's enough warnings. This method exists mostly to - support the session storage of the cgi interface. - - The node is completely removed from the hyperdb, including all journal - entries. It will no longer be available, and will generally break code - if there are any references to the node. 
- ''' - view = self.getview(READWRITE) - ndx = view.find(id=int(id)) - if ndx > -1: - if self.key: - keyvalue = getattr(view[ndx], self.key) - iv = self.getindexview(READWRITE) - if iv: - ivndx = iv.find(k=keyvalue) - if ivndx > -1: - iv.delete(ivndx) - view.delete(ndx) - self.db.destroyjournal(self.classname, id) - self.db.dirty = 1 - - def find(self, **propspec): - '''Get the ids of nodes in this class which link to the given nodes. - - 'propspec' consists of keyword args propname=nodeid or - propname={nodeid:1, } - 'propname' must be the name of a property in this class, or a - KeyError is raised. That property must be a Link or - Multilink property, or a TypeError is raised. - - Any node in this class whose 'propname' property links to any of - the nodeids will be returned. Examples:: - - db.issue.find(messages='1') - db.issue.find(messages={'1':1,'3':1}, files={'7':1}) - ''' - propspec = propspec.items() - for propname, nodeid in propspec: - # check the prop is OK - prop = self.ruprops[propname] - if (not isinstance(prop, hyperdb.Link) and - not isinstance(prop, hyperdb.Multilink)): - raise TypeError, "'%s' not a Link/Multilink property"%propname - - vws = [] - for propname, ids in propspec: - if type(ids) is _STRINGTYPE: - ids = {int(ids):1} - elif ids is None: - ids = {0:1} - else: - d = {} - for id in ids.keys(): - if id is None: - d[0] = 1 - else: - d[int(id)] = 1 - ids = d - prop = self.ruprops[propname] - view = self.getview() - if isinstance(prop, hyperdb.Multilink): - def ff(row, nm=propname, ids=ids): - if not row._isdel: - sv = getattr(row, nm) - for sr in sv: - if ids.has_key(sr.fid): - return 1 - return 0 - else: - def ff(row, nm=propname, ids=ids): - return not row._isdel and ids.has_key(getattr(row, nm)) - ndxview = view.filter(ff) - vws.append(ndxview.unique()) - - # handle the empty match case - if not vws: - return [] - - ndxview = vws[0] - for v in vws[1:]: - ndxview = ndxview.union(v) - view = self.getview().remapwith(ndxview) - rslt = [] - for row in view: - rslt.append(str(row.id)) - return rslt - - - def list(self): - ''' Return a list of the ids of the active nodes in this class. - ''' - l = [] - for row in self.getview().select(_isdel=0): - l.append(str(row.id)) - return l - - def getnodeids(self, retired=None): - ''' Retrieve all the ids of the nodes for a particular Class. - - Set retired=None to get all nodes. Otherwise it'll get all the - retired or non-retired nodes, depending on the flag. 
- ''' - l = [] - if retired is False or retired is True: - result = self.getview().select(_isdel=retired) - else: - result = self.getview() - for row in result: - l.append(str(row.id)) - return l - - def count(self): - return len(self.getview()) - - def getprops(self, protected=1): - # protected is not in ping's spec - allprops = self.ruprops.copy() - if protected and self.privateprops is not None: - allprops.update(self.privateprops) - return allprops - - def addprop(self, **properties): - for key in properties.keys(): - if self.ruprops.has_key(key): - raise ValueError, "%s is already a property of %s"%(key, - self.classname) - self.ruprops.update(properties) - # Class structure has changed - self.db.fastopen = 0 - view = self.__getview() - self.db.commit() - # ---- end of ping's spec - - def _filter(self, search_matches, filterspec, proptree): - '''Return a list of the ids of the active nodes in this class that - match the 'filter' spec, sorted by the group spec and then the - sort spec - - "filterspec" is {propname: value(s)} - - "sort" and "group" are (dir, prop) where dir is '+', '-' or None - and prop is a prop name or None - - "search_matches" is {nodeid: marker} or None - - The filter must match all properties specificed - but if the - property value to match is a list, any one of the values in the - list may match for that property to match. - ''' - if __debug__: - start_t = time.time() - - where = {'_isdel':0} - wherehigh = {} - mlcriteria = {} - regexes = [] - orcriteria = {} - for propname, value in filterspec.items(): - prop = self.ruprops.get(propname, None) - if prop is None: - prop = self.privateprops[propname] - if isinstance(prop, hyperdb.Multilink): - if value in ('-1', ['-1']): - value = [] - elif type(value) is not _LISTTYPE: - value = [value] - # transform keys to ids - u = [] - for item in value: - try: - item = int(item) - except (TypeError, ValueError): - item = int(self.db.getclass(prop.classname).lookup(item)) - if item == -1: - item = 0 - u.append(item) - mlcriteria[propname] = u - elif isinstance(prop, hyperdb.Link): - if type(value) is not _LISTTYPE: - value = [value] - # transform keys to ids - u = [] - for item in value: - if item is None: - item = -1 - else: - try: - item = int(item) - except (TypeError, ValueError): - linkcl = self.db.getclass(prop.classname) - item = int(linkcl.lookup(item)) - if item == -1: - item = 0 - u.append(item) - if len(u) == 1: - where[propname] = u[0] - else: - orcriteria[propname] = u - elif isinstance(prop, hyperdb.String): - if type(value) is not type([]): - value = [value] - for v in value: - # simple glob searching - v = re.sub(r'([\|\{\}\\\.\+\[\]\(\)])', r'\\\1', v) - v = v.replace('?', '.') - v = v.replace('*', '.*?') - regexes.append((propname, re.compile(v, re.I))) - elif propname == 'id': - where[propname] = int(value) - elif isinstance(prop, hyperdb.Boolean): - if type(value) is _STRINGTYPE: - bv = value.lower() in ('yes', 'true', 'on', '1') - else: - bv = value - where[propname] = bv - elif isinstance(prop, hyperdb.Date): - try: - # Try to filter on range of dates - date_rng = prop.range_from_raw (value, self.db) - if date_rng.from_value: - t = date_rng.from_value.get_tuple() - where[propname] = int(calendar.timegm(t)) - else: - # use minimum possible value to exclude items without - # 'prop' property - where[propname] = 0 - if date_rng.to_value: - t = date_rng.to_value.get_tuple() - wherehigh[propname] = int(calendar.timegm(t)) - else: - wherehigh[propname] = None - except ValueError: - # If range creation fails - 
ignore that search parameter - pass - elif isinstance(prop, hyperdb.Interval): - try: - # Try to filter on range of intervals - date_rng = Range(value, date.Interval) - if date_rng.from_value: - #t = date_rng.from_value.get_tuple() - where[propname] = date_rng.from_value.serialise() - else: - # use minimum possible value to exclude items without - # 'prop' property - where[propname] = '-99999999999999' - if date_rng.to_value: - #t = date_rng.to_value.get_tuple() - wherehigh[propname] = date_rng.to_value.serialise() - else: - wherehigh[propname] = None - except ValueError: - # If range creation fails - ignore that search parameter - pass - elif isinstance(prop, hyperdb.Number): - if type(value) is _LISTTYPE: - orcriteria[propname] = [float(v) for v in value] - else: - where[propname] = float(value) - else: - where[propname] = str(value) - v = self.getview() - if where: - where_higherbound = where.copy() - where_higherbound.update(wherehigh) - v = v.select(where, where_higherbound) - - if mlcriteria: - # multilink - if any of the nodeids required by the - # filterspec aren't in this node's property, then skip it - def ff(row, ml=mlcriteria): - for propname, values in ml.items(): - sv = getattr(row, propname) - if not values and not sv: - return 1 - for id in values: - if sv.find(fid=id) != -1: - return 1 - return 0 - iv = v.filter(ff) - v = v.remapwith(iv) - - if orcriteria: - def ff(row, crit=orcriteria): - for propname, allowed in crit.items(): - val = getattr(row, propname) - if val not in allowed: - return 0 - return 1 - - iv = v.filter(ff) - v = v.remapwith(iv) - - if regexes: - def ff(row, r=regexes): - for propname, regex in r: - val = str(getattr(row, propname)) - if not regex.search(val): - return 0 - return 1 - - iv = v.filter(ff) - v = v.remapwith(iv) - - # Handle all the sorting we can inside Metakit. If we encounter - # transitive attributes or a Multilink on the way, we sort by - # what we have so far and defer the rest to the outer sorting - # routine. We mark the attributes for which sorting has been - # done with sort_done. Of course the whole thing works only if - # we do it backwards. - sortspec = [] - rev = [] - sa = [] - if proptree: - sa = reversed(proptree.sortattr) - for pt in sa: - if pt.parent != proptree: - break; - propname = pt.name - dir = pt.sort_direction - assert (dir and propname) - isreversed = 0 - if dir == '-': - isreversed = 1 - try: - prop = getattr(v, propname) - except AttributeError: - logging.getLogger("hyperdb").error( - "MK has no property %s" % propname) - continue - propclass = self.ruprops.get(propname, None) - if propclass is None: - propclass = self.privateprops.get(propname, None) - if propclass is None: - logging.getLogger("hyperdb").error( - "Schema has no property %s" % propname) - continue - # Dead code: We dont't find Links here (in sortattr we would - # see the order property of the link, but this is not in the - # first level of the tree). The code is left in because one - # day we might want to properly implement this. The code is - # broken because natural-joining to the Link-class can - # produce name-clashes wich result in broken sorting. 
- if isinstance(propclass, hyperdb.Link): - linkclass = self.db.getclass(propclass.classname) - lv = linkclass.getview() - lv = lv.rename('id', propname) - v = v.join(lv, prop, 1) - prop = getattr(v, linkclass.orderprop()) - if isreversed: - rev.append(prop) - sortspec.append(prop) - pt.sort_done = True - sortspec.reverse() - rev.reverse() - v = v.sortrev(sortspec, rev)[:] #XXX Metakit bug - - rslt = [] - for row in v: - id = str(row.id) - if search_matches is not None: - if search_matches.has_key(id): - rslt.append(id) - else: - rslt.append(id) - - if __debug__: - self.db.stats['filtering'] += (time.time() - start_t) - - return rslt - - def hasnode(self, nodeid): - '''Determine if the given nodeid actually exists - ''' - return int(nodeid) < self.maxid - - def stringFind(self, **requirements): - '''Locate a particular node by matching a set of its String - properties in a caseless search. - - If the property is not a String property, a TypeError is raised. - - The return is a list of the id of all nodes that match. - ''' - for propname in requirements.keys(): - prop = self.properties[propname] - if isinstance(not prop, hyperdb.String): - raise TypeError, "'%s' not a String property"%propname - requirements[propname] = requirements[propname].lower() - requirements['_isdel'] = 0 - - l = [] - for row in self.getview().select(requirements): - l.append(str(row.id)) - return l - - def addjournal(self, nodeid, action, params): - '''Add a journal to the given nodeid, - 'action' may be: - - 'create' or 'set' -- 'params' is a dictionary of property values - 'link' or 'unlink' -- 'params' is (classname, nodeid, propname) - 'retire' -- 'params' is None - ''' - self.db.addjournal(self.classname, nodeid, action, params) - - def index(self, nodeid): - ''' Add (or refresh) the node to search indexes ''' - # find all the String properties that have indexme - for prop, propclass in self.getprops().items(): - if isinstance(propclass, hyperdb.String) and propclass.indexme: - # index them under (classname, nodeid, property) - self.db.indexer.add_text((self.classname, nodeid, prop), - str(self.get(nodeid, prop))) - - # --- used by Database - def _commit(self): - ''' called post commit of the DB. - interested subclasses may override ''' - self.uncommitted = {} - for action in self.comactions: - action() - self.comactions = [] - self.rbactions = [] - self.idcache = {} - def _rollback(self): - ''' called pre rollback of the DB. - interested subclasses may override ''' - self.comactions = [] - for action in self.rbactions: - action() - self.rbactions = [] - self.uncommitted = {} - self.idcache = {} - def _clear(self): - view = self.getview(READWRITE) - if len(view): - view[:] = [] - self.db.dirty = 1 - iv = self.getindexview(READWRITE) - if iv: - iv[:] = [] - def commitaction(self, action): - ''' call this to register a callback called on commit - callback is removed on end of transaction ''' - self.comactions.append(action) - def rollbackaction(self, action): - ''' call this to register a callback called on rollback - callback is removed on end of transaction ''' - self.rbactions.append(action) - # --- internal - def __getview(self): - ''' Find the interface for a specific Class in the hyperdb. - - This method checks to see whether the schema has changed and - re-works the underlying metakit structure if it has. 
- ''' - db = self.db._db - view = db.view(self.classname) - mkprops = view.structure() - - # if we have structure in the database, and the structure hasn't - # changed - # note on view.ordered -> - # return a metakit view ordered on the id column - # id is always the first column. This speeds up - # look-ups on the id column. - - if mkprops and self.db.fastopen: - return view.ordered(1) - - # is the definition the same? - for nm, rutyp in self.ruprops.items(): - for mkprop in mkprops: - if mkprop.name == nm: - break - else: - mkprop = None - if mkprop is None: - break - if _typmap[rutyp.__class__] != mkprop.type: - break - else: - # make sure we have the 'actor' property too - for mkprop in mkprops: - if mkprop.name == 'actor': - return view.ordered(1) - - # The schema has changed. We need to create or restructure the mk view - # id comes first, so we can use view.ordered(1) so that - # MK will order it for us to allow binary-search quick lookups on - # the id column - self.db.dirty = 1 - s = ["%s[id:I" % self.classname] - - # these columns will always be added, we can't trample them :) - _columns = {"id":"I", "_isdel":"I", "activity":"I", "actor": "I", - "creation":"I", "creator":"I"} - - for nm, rutyp in self.ruprops.items(): - mktyp = _typmap[rutyp.__class__].upper() - if nm in _columns and _columns[nm] != mktyp: - # oops, two columns with the same name and different properties - raise MKBackendError("column %s for table %sis defined with multiple types"%(nm, self.classname)) - _columns[nm] = mktyp - s.append('%s:%s' % (nm, mktyp)) - if mktyp == 'V': - s[-1] += ('[fid:I]') - - # XXX FIX ME -> in some tests, creation:I becomes creation:S is this - # okay? Does this need to be supported? - s.append('_isdel:I,activity:I,actor:I,creation:I,creator:I]') - view = self.db._db.getas(','.join(s)) - self.db.commit() - return view.ordered(1) - def getview(self, RW=0): - # XXX FIX ME -> The RW flag doesn't do anything. - return self.db._db.view(self.classname).ordered(1) - def getindexview(self, RW=0): - # XXX FIX ME -> The RW flag doesn't do anything. - tablename = "_%s.%s"%(self.classname, self.key) - return self.db._db.view("_%s" % tablename).ordered(1) - - # - # import / export - # - def export_list(self, propnames, nodeid): - ''' Export a node - generate a list of CSV-able data in the order - specified by propnames for the given node. - ''' - properties = self.getprops() - l = [] - for prop in propnames: - proptype = properties[prop] - value = self.get(nodeid, prop) - # "marshal" data where needed - if value is None: - pass - elif isinstance(proptype, hyperdb.Date): - value = value.get_tuple() - elif isinstance(proptype, hyperdb.Interval): - value = value.get_tuple() - elif isinstance(proptype, hyperdb.Password): - value = str(value) - l.append(repr(value)) - - # append retired flag - l.append(repr(self.is_retired(nodeid))) - - return l - - def import_list(self, propnames, proplist): - ''' Import a node - all information including "id" is present and - should not be sanity checked. Triggers are not triggered. The - journal should be initialised using the "creator" and "creation" - information. - - Return the nodeid of the node imported. 
- ''' - if self.db.journaltag is None: - raise hyperdb.DatabaseError, 'Database open read-only' - properties = self.getprops() - - d = {} - view = self.getview(READWRITE) - for i in range(len(propnames)): - value = eval(proplist[i]) - if not value: - continue - - propname = propnames[i] - if propname == 'id': - newid = value = int(value) - elif propname == 'is retired': - # is the item retired? - if int(value): - d['_isdel'] = 1 - continue - elif value is None: - d[propname] = None - continue - - prop = properties[propname] - if isinstance(prop, hyperdb.Date): - value = int(calendar.timegm(value)) - elif isinstance(prop, hyperdb.Interval): - value = date.Interval(value).serialise() - elif isinstance(prop, hyperdb.Number): - value = float(value) - elif isinstance(prop, hyperdb.Boolean): - value = int(value) - elif isinstance(prop, hyperdb.Link) and value: - value = int(value) - elif isinstance(prop, hyperdb.Multilink): - # we handle multilinks separately - continue - d[propname] = value - - # possibly make a new node - if not d.has_key('id'): - d['id'] = newid = self.maxid - self.maxid += 1 - - # save off the node - view.append(d) - - # fix up multilinks - ndx = view.find(id=newid) - row = view[ndx] - for i in range(len(propnames)): - value = eval(proplist[i]) - propname = propnames[i] - if propname == 'is retired': - continue - prop = properties[propname] - if not isinstance(prop, hyperdb.Multilink): - continue - sv = getattr(row, propname) - for entry in value: - sv.append((int(entry),)) - - self.db.dirty = 1 - return newid - - def export_journals(self): - '''Export a class's journal - generate a list of lists of - CSV-able data: - - nodeid, date, user, action, params - - No heading here - the columns are fixed. - ''' - from roundup.hyperdb import Interval, Date, Password - properties = self.getprops() - r = [] - for nodeid in self.getnodeids(): - for nodeid, date, user, action, params in self.history(nodeid): - date = date.get_tuple() - if action == 'set': - export_data = {} - for propname, value in params.items(): - if not properties.has_key(propname): - # property no longer in the schema - continue - - prop = properties[propname] - # make sure the params are eval()'able - if value is None: - pass - elif isinstance(prop, Date): - value = value.get_tuple() - elif isinstance(prop, Interval): - value = value.get_tuple() - elif isinstance(prop, Password): - value = str(value) - export_data[propname] = value - params = export_data - l = [nodeid, date, user, action, params] - r.append(map(repr, l)) - return r - - def import_journals(self, entries): - '''Import a class's journal. 
- - Uses setjournal() to set the journal for each item.''' - properties = self.getprops() - d = {} - for l in entries: - l = map(eval, l) - nodeid, jdate, user, action, params = l - jdate = int(calendar.timegm(date.Date(jdate).get_tuple())) - r = d.setdefault(nodeid, []) - if action == 'set': - for propname, value in params.items(): - prop = properties[propname] - if value is None: - pass - elif isinstance(prop, hyperdb.Date): - value = date.Date(value) - elif isinstance(prop, hyperdb.Interval): - value = date.Interval(value) - elif isinstance(prop, hyperdb.Password): - pwd = password.Password() - pwd.unpack(value) - value = pwd - params[propname] = value - action = _names_to_actionnames[action] - r.append((nodeid, jdate, user, action, params)) - - for nodeid, l in d.items(): - self.db.setjournal(self.classname, nodeid, l) - -def _fetchML(sv): - l = [] - for row in sv: - if row.fid: - l.append(str(row.fid)) - return l - -def _fetchPW(s): - ''' Convert to a password.Password unless the password is '' which is - our sentinel for "unset". - ''' - if s == '': - return None - p = password.Password() - p.unpack(s) - return p - -def _fetchLink(n): - ''' Return None if the link is 0 - otherwise strify it. - ''' - return n and str(n) or None - -def _fetchDate(n): - ''' Convert the timestamp to a date.Date instance - unless it's 0 which - is our sentinel for "unset". - ''' - if n == 0: - return None - return date.Date(time.gmtime(n)) - -def _fetchInterval(n): - ''' Convert to a date.Interval unless the interval is '' which is our - sentinel for "unset". - ''' - if n == '': - return None - return date.Interval(n) - -# Converters for boolean and numbers to properly -# return None values. -# These are in conjunction with the setters above -# look for hyperdb.Boolean and hyperdb.Number -if BACKWARDS_COMPATIBLE: - def getBoolean(bool): return bool - def getNumber(number): return number -else: - def getBoolean(bool): - if not bool: res = None - else: res = bool - 1 - return res - - def getNumber(number): - if number == 0: res = None - elif number < 0: res = number - else: res = number - 1 - return res - -_converters = { - hyperdb.Date : _fetchDate, - hyperdb.Link : _fetchLink, - hyperdb.Multilink : _fetchML, - hyperdb.Interval : _fetchInterval, - hyperdb.Password : _fetchPW, - hyperdb.Boolean : getBoolean, - hyperdb.Number : getNumber, - hyperdb.String : lambda s: s and str(s) or None, -} - -class FileName(hyperdb.String): - isfilename = 1 - -_typmap = { - FileName : 'S', - hyperdb.String : 'S', - hyperdb.Date : 'I', - hyperdb.Link : 'I', - hyperdb.Multilink : 'V', - hyperdb.Interval : 'S', - hyperdb.Password : 'S', - hyperdb.Boolean : 'I', - hyperdb.Number : 'D', -} -class FileClass(hyperdb.FileClass, Class): - ''' like Class but with a content property - ''' - def __init__(self, db, classname, **properties): - '''The newly-created class automatically includes the "content" - and "type" properties. 
- ''' - if not properties.has_key('content'): - properties['content'] = hyperdb.String(indexme='yes') - if not properties.has_key('type'): - properties['type'] = hyperdb.String() - Class.__init__(self, db, classname, **properties) - - def gen_filename(self, nodeid): - nm = '%s%s' % (self.classname, nodeid) - sd = str(int(int(nodeid) / 1000)) - d = os.path.join(self.db.config.DATABASE, 'files', self.classname, sd) - if not os.path.exists(d): - os.makedirs(d) - return os.path.join(d, nm) - - def export_files(self, dirname, nodeid): - ''' Export the "content" property as a file, not csv column - ''' - source = self.gen_filename(nodeid) - x, filename = os.path.split(source) - x, subdir = os.path.split(x) - dest = os.path.join(dirname, self.classname+'-files', subdir, filename) - if not os.path.exists(os.path.dirname(dest)): - os.makedirs(os.path.dirname(dest)) - shutil.copyfile(source, dest) - - def import_files(self, dirname, nodeid): - ''' Import the "content" property as a file - ''' - dest = self.gen_filename(nodeid) - x, filename = os.path.split(dest) - x, subdir = os.path.split(x) - source = os.path.join(dirname, self.classname+'-files', subdir, - filename) - if not os.path.exists(os.path.dirname(dest)): - os.makedirs(os.path.dirname(dest)) - shutil.copyfile(source, dest) - - if self.properties['content'].indexme: - return - - mime_type = None - if self.getprops().has_key('type'): - mime_type = self.get(nodeid, 'type') - if not mime_type: - mime_type = self.default_mime_type - self.db.indexer.add_text((self.classname, nodeid, 'content'), - self.get(nodeid, 'content'), mime_type) - - def get(self, nodeid, propname, default=_marker, cache=1): - if propname == 'content': - poss_msg = 'Possibly an access right configuration problem.' - fnm = self.gen_filename(nodeid) - if not os.path.exists(fnm): - fnm = fnm + '.tmp' - try: - f = open(fnm, 'rb') - except IOError, (strerror): - # XXX by catching this we donot see an error in the log. - return 'ERROR reading file: %s%s\n%s\n%s'%( - self.classname, nodeid, poss_msg, strerror) - x = f.read() - f.close() - else: - x = Class.get(self, nodeid, propname, default) - return x - - def create(self, **propvalues): - if not propvalues: - raise ValueError, "Need something to create!" 
- self.fireAuditors('create', None, propvalues) - - content = propvalues['content'] - del propvalues['content'] - - newid = Class.create_inner(self, **propvalues) - if not content: - return newid - - # figure a filename - nm = self.gen_filename(newid) - - # make sure we don't register the rename action more than once - if not os.path.exists(nm + '.tmp'): - # register commit and rollback actions - def commit(fnm=nm): - os.rename(fnm + '.tmp', fnm) - self.commitaction(commit) - def undo(fnm=nm): - os.remove(fnm + '.tmp') - self.rollbackaction(undo) - - # save the tempfile - f = open(nm + '.tmp', 'wb') - f.write(content) - f.close() - - if not self.properties['content'].indexme: - return newid - - mimetype = propvalues.get('type', self.default_mime_type) - self.db.indexer.add_text((self.classname, newid, 'content'), content, - mimetype) - return newid - - def set(self, itemid, **propvalues): - if not propvalues: - return - self.fireAuditors('set', None, propvalues) - - content = propvalues.get('content', None) - if content is not None: - del propvalues['content'] - - propvalues, oldnode = Class.set_inner(self, itemid, **propvalues) - - # figure a filename - if content is not None: - nm = self.gen_filename(itemid) - - # make sure we don't register the rename action more than once - if not os.path.exists(nm + '.tmp'): - # register commit and rollback actions - def commit(fnm=nm): - if os.path.exists(fnm): - os.remove(fnm) - os.rename(fnm + '.tmp', fnm) - self.commitaction(commit) - def undo(fnm=nm): - os.remove(fnm + '.tmp') - self.rollbackaction(undo) - - f = open(nm + '.tmp', 'wb') - f.write(content) - f.close() - - if self.properties['content'].indexme: - mimetype = propvalues.get('type', self.default_mime_type) - self.db.indexer.add_text((self.classname, itemid, 'content'), - content, mimetype) - - self.fireReactors('set', oldnode, propvalues) - - def index(self, nodeid): - ''' Add (or refresh) the node to search indexes. - - Use the content-type property for the content property. - ''' - # find all the String properties that have indexme - for prop, propclass in self.getprops().items(): - if prop == 'content' and propclass.indexme: - mime_type = self.get(nodeid, 'type', self.default_mime_type) - self.db.indexer.add_text((self.classname, nodeid, 'content'), - str(self.get(nodeid, 'content')), mime_type) - elif isinstance(propclass, hyperdb.String) and propclass.indexme: - # index them under (classname, nodeid, property) - try: - value = str(self.get(nodeid, prop)) - except IndexError: - # node has been destroyed - continue - self.db.indexer.add_text((self.classname, nodeid, prop), value) - -class IssueClass(Class, roundupdb.IssueClass): - ''' The newly-created class automatically includes the "messages", - "files", "nosy", and "superseder" properties. If the 'properties' - dictionary attempts to specify any of these properties or a - "creation" or "activity" property, a ValueError is raised. - ''' - def __init__(self, db, classname, **properties): - if not properties.has_key('title'): - properties['title'] = hyperdb.String(indexme='yes') - if not properties.has_key('messages'): - properties['messages'] = hyperdb.Multilink("msg") - if not properties.has_key('files'): - properties['files'] = hyperdb.Multilink("file") - if not properties.has_key('nosy'): - # note: journalling is turned off as it really just wastes - # space. 
this behaviour may be overridden in an instance - properties['nosy'] = hyperdb.Multilink("user", do_journal="no") - if not properties.has_key('superseder'): - properties['superseder'] = hyperdb.Multilink(classname) - Class.__init__(self, db, classname, **properties) - -CURVERSION = 2 - -class MetakitIndexer(CommonIndexer): - def __init__(self, db): - CommonIndexer.__init__(self, db) - self.path = os.path.join(db.config.DATABASE, 'index.mk4') - self.db = metakit.storage(self.path, 1) - self.datadb = db._db - self.reindex = 0 - v = self.db.view('version') - if not v.structure(): - v = self.db.getas('version[vers:I]') - self.db.commit() - v.append(vers=CURVERSION) - self.reindex = 1 - elif v[0].vers != CURVERSION: - v[0].vers = CURVERSION - self.reindex = 1 - if self.reindex: - self.db.getas('ids[tblid:I,nodeid:I,propid:I,ignore:I]') - self.db.getas('index[word:S,hits[pos:I]]') - self.db.commit() - self.reindex = 1 - self.changed = 0 - self.propcache = {} - - def close(self): - '''close the indexing database''' - del self.db - self.db = None - - def force_reindex(self): - '''Force a reindexing of the database. This essentially - empties the tables ids and index and sets a flag so - that the databases are reindexed''' - v = self.db.view('ids') - v[:] = [] - v = self.db.view('index') - v[:] = [] - self.db.commit() - self.reindex = 1 - - def should_reindex(self): - '''returns True if the indexes need to be rebuilt''' - return self.reindex - - def _getprops(self, classname): - props = self.propcache.get(classname, None) - if props is None: - props = self.datadb.view(classname).structure() - props = [prop.name for prop in props] - self.propcache[classname] = props - return props - - def _getpropid(self, classname, propname): - return self._getprops(classname).index(propname) - - def _getpropname(self, classname, propid): - return self._getprops(classname)[propid] - - def add_text(self, identifier, text, mime_type='text/plain'): - if mime_type != 'text/plain': - return - classname, nodeid, property = identifier - tbls = self.datadb.view('tables') - tblid = tbls.find(name=classname) - if tblid < 0: - raise KeyError, "unknown class %r"%classname - nodeid = int(nodeid) - propid = self._getpropid(classname, property) - ids = self.db.view('ids') - oldpos = ids.find(tblid=tblid,nodeid=nodeid,propid=propid,ignore=0) - if oldpos > -1: - ids[oldpos].ignore = 1 - self.changed = 1 - pos = ids.append(tblid=tblid,nodeid=nodeid,propid=propid) - - wordlist = re.findall(r'\b\w{2,25}\b', text.upper()) - words = {} - for word in wordlist: - if not self.is_stopword(word): - words[word] = 1 - words = words.keys() - - index = self.db.view('index').ordered(1) - for word in words: - ndx = index.find(word=word) - if ndx < 0: - index.append(word=word) - ndx = index.find(word=word) - index[ndx].hits.append(pos=pos) - self.changed = 1 - - def find(self, wordlist): - '''look up all the words in the wordlist. 
- If none are found return an empty dictionary - * more rules here - ''' - hits = None - index = self.db.view('index').ordered(1) - for word in wordlist: - word = word.upper() - if not 2 < len(word) < 26: - continue - ndx = index.find(word=word) - if ndx < 0: - return {} - if hits is None: - hits = index[ndx].hits - else: - hits = hits.intersect(index[ndx].hits) - if len(hits) == 0: - return {} - if hits is None: - return [] - rslt = [] - ids = self.db.view('ids').remapwith(hits) - tbls = self.datadb.view('tables') - for i in range(len(ids)): - hit = ids[i] - if not hit.ignore: - classname = tbls[hit.tblid].name - nodeid = str(hit.nodeid) - property = self._getpropname(classname, hit.propid) - rslt.append((classname, nodeid, property)) - return rslt - - def save_index(self): - if self.changed: - self.db.commit() - self.changed = 0 - - def rollback(self): - if self.changed: - self.db.rollback() - self.db = metakit.storage(self.path, 1) - self.changed = 0 - -try: - from indexer_xapian import Indexer -except ImportError: - Indexer = MetakitIndexer - -# vim: set et sts=4 sw=4 : Modified: tracker/roundup-src/roundup/backends/back_mysql.py ============================================================================== --- tracker/roundup-src/roundup/backends/back_mysql.py (original) +++ tracker/roundup-src/roundup/backends/back_mysql.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -#$Id: back_mysql.py,v 1.71 2006/08/29 04:20:50 richard Exp $ +#$Id: back_mysql.py,v 1.74 2007/10/26 01:34:43 richard Exp $ # # Copyright (c) 2003 Martynas Sklyzmantas, Andrey Lebedev # @@ -13,7 +13,7 @@ How to implement AUTO_INCREMENT: mysql> create table foo (num integer auto_increment primary key, name -varchar(255)) AUTO_INCREMENT=1 type=InnoDB; +varchar(255)) AUTO_INCREMENT=1 ENGINE=InnoDB; ql> insert into foo (name) values ('foo5'); Query OK, 1 row affected (0.00 sec) @@ -166,10 +166,10 @@ if message[0] != ER.NO_SUCH_TABLE: raise DatabaseError, message self.init_dbschema() - self.sql("CREATE TABLE `schema` (`schema` TEXT) TYPE=%s"% + self.sql("CREATE TABLE `schema` (`schema` TEXT) ENGINE=%s"% self.mysql_backend) self.sql('''CREATE TABLE ids (name VARCHAR(255), - num INTEGER) TYPE=%s'''%self.mysql_backend) + num INTEGER) ENGINE=%s'''%self.mysql_backend) self.sql('create index ids_name_idx on ids(name)') self.create_version_2_tables() @@ -194,23 +194,26 @@ # OTK store self.sql('''CREATE TABLE otks (otk_key VARCHAR(255), otk_value TEXT, otk_time FLOAT(20)) - TYPE=%s'''%self.mysql_backend) + ENGINE=%s'''%self.mysql_backend) self.sql('CREATE INDEX otks_key_idx ON otks(otk_key)') # Sessions store self.sql('''CREATE TABLE sessions (session_key VARCHAR(255), session_time FLOAT(20), session_value TEXT) - TYPE=%s'''%self.mysql_backend) + ENGINE=%s'''%self.mysql_backend) self.sql('''CREATE INDEX sessions_key_idx ON sessions(session_key)''') # full-text indexing store self.sql('''CREATE TABLE __textids (_class VARCHAR(255), _itemid VARCHAR(255), _prop VARCHAR(255), _textid INT) - TYPE=%s'''%self.mysql_backend) + ENGINE=%s'''%self.mysql_backend) self.sql('''CREATE TABLE __words (_word VARCHAR(30), - _textid INT) TYPE=%s'''%self.mysql_backend) + _textid INT) ENGINE=%s'''%self.mysql_backend) self.sql('CREATE INDEX words_word_ids ON __words(_word)') + self.sql('CREATE INDEX words_by_id ON __words (_textid)') + self.sql('CREATE UNIQUE INDEX __textids_by_props ON ' + '__textids (_class, _itemid, _prop)') sql = 'insert into ids (name, num) values (%s,%s)'%(self.arg, self.arg) self.sql(sql, ('__textids', 1)) @@ -389,7 +392,7 @@ # create 
the base table scols = ','.join(['%s %s'%x for x in cols]) - sql = 'create table _%s (%s) type=%s'%(spec.classname, scols, + sql = 'create table _%s (%s) ENGINE=%s'%(spec.classname, scols, self.mysql_backend) self.sql(sql) @@ -450,7 +453,7 @@ for x in 'nodeid date tag action params'.split()]) sql = '''create table %s__journal ( nodeid integer, date datetime, tag varchar(255), - action varchar(255), params text) type=%s'''%( + action varchar(255), params text) ENGINE=%s'''%( spec.classname, self.mysql_backend) self.sql(sql) self.create_journal_table_indexes(spec) @@ -464,7 +467,7 @@ def create_multilink_table(self, spec, ml): sql = '''CREATE TABLE `%s_%s` (linkid VARCHAR(255), - nodeid VARCHAR(255)) TYPE=%s'''%(spec.classname, ml, + nodeid VARCHAR(255)) ENGINE=%s'''%(spec.classname, ml, self.mysql_backend) self.sql(sql) self.create_multilink_table_indexes(spec, ml) Modified: tracker/roundup-src/roundup/backends/back_postgresql.py ============================================================================== --- tracker/roundup-src/roundup/backends/back_postgresql.py (original) +++ tracker/roundup-src/roundup/backends/back_postgresql.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -#$Id: back_postgresql.py,v 1.37 2006/11/09 00:55:33 richard Exp $ +#$Id: back_postgresql.py,v 1.43 2007/09/28 15:15:06 jpend Exp $ # # Copyright (c) 2003 Martynas Sklyzmantas, Andrey Lebedev # @@ -13,13 +13,16 @@ try: import psycopg from psycopg import QuotedString + from psycopg import ProgrammingError except: from psycopg2 import psycopg1 as psycopg from psycopg2.extensions import QuotedString + from psycopg2.psycopg1 import ProgrammingError import logging from roundup import hyperdb, date from roundup.backends import rdbms_common +from roundup.backends import sessions_rdbms def connection_dict(config, dbnamestr=None): ''' read_default_group is MySQL-specific, ignore it ''' @@ -51,12 +54,12 @@ ''' template1 = connection_dict(config) template1['database'] = 'template1' - + try: conn = psycopg.connect(**template1) except psycopg.OperationalError, message: raise hyperdb.DatabaseError, message - + conn.set_isolation_level(0) cursor = conn.cursor() try: @@ -70,7 +73,7 @@ def pg_command(cursor, command): '''Execute the postgresql command, which may be blocked by some other user connecting to the database, and return a true value if it succeeds. - + If there is a concurrent update, retry the command. ''' try: @@ -79,7 +82,7 @@ response = str(err).split('\n')[0] if response.find('FATAL') != -1: raise RuntimeError, response - elif response.find('ERROR') != -1: + else: msgs = [ 'is being accessed by other users', 'could not serialize access due to concurrent update', @@ -104,12 +107,28 @@ except: return 0 +class Sessions(sessions_rdbms.Sessions): + def set(self, *args, **kwargs): + try: + sessions_rdbms.Sessions.set(self, *args, **kwargs) + except ProgrammingError, err: + response = str(err).split('\n')[0] + if -1 != response.find('ERROR') and \ + -1 != response.find('could not serialize access due to concurrent update'): + # another client just updated, and we're running on + # serializable isolation. 
+ # see http://www.postgresql.org/docs/7.4/interactive/transaction-iso.html + self.db.rollback() + class Database(rdbms_common.Database): arg = '%s' # used by some code to switch styles of query implements_intersect = 1 + def getSessionManager(self): + return Sessions(self) + def sql_open_connection(self): db = connection_dict(self.config, 'database') logging.getLogger('hyperdb').info('open database %r'%db['database']) @@ -266,3 +285,4 @@ class FileClass(PostgresqlClass, rdbms_common.FileClass): pass +# vim: set et sts=4 sw=4 : Modified: tracker/roundup-src/roundup/backends/back_sqlite.py ============================================================================== --- tracker/roundup-src/roundup/backends/back_sqlite.py (original) +++ tracker/roundup-src/roundup/backends/back_sqlite.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -# $Id: back_sqlite.py,v 1.50 2006/12/19 03:01:37 richard Exp $ +# $Id: back_sqlite.py,v 1.51 2007/06/21 07:35:50 schlatterbeck Exp $ '''Implements a backend for SQLite. See https://pysqlite.sourceforge.net/ for pysqlite info @@ -144,6 +144,9 @@ self.sql('CREATE TABLE __words (_word varchar, ' '_textid integer)') self.sql('CREATE INDEX words_word_ids ON __words(_word)') + self.sql('CREATE INDEX words_by_id ON __words (_textid)') + self.sql('CREATE UNIQUE INDEX __textids_by_props ON ' + '__textids (_class, _itemid, _prop)') sql = 'insert into ids (name, num) values (%s,%s)'%(self.arg, self.arg) self.sql(sql, ('__textids', 1)) Modified: tracker/roundup-src/roundup/backends/blobfiles.py ============================================================================== --- tracker/roundup-src/roundup/backends/blobfiles.py (original) +++ tracker/roundup-src/roundup/backends/blobfiles.py Sun Mar 9 09:26:16 2008 @@ -14,8 +14,8 @@ # FOR A PARTICULAR PURPOSE. THE CODE PROVIDED HEREUNDER IS ON AN "AS IS" # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. -# -#$Id: blobfiles.py,v 1.19 2005/06/08 03:35:18 anthonybaxter Exp $ +# +#$Id: blobfiles.py,v 1.24 2008/02/07 00:57:59 richard Exp $ '''This module exports file storage for roundup backends. Files are stored into a directory hierarchy. ''' @@ -36,22 +36,204 @@ return num_files class FileStorage: - """Store files in some directory structure""" + """Store files in some directory structure + + Some databases do not permit the storage of arbitrary data (i.e., + file content). And, some database schema explicitly store file + content in the fielsystem. In particular, if a class defines a + 'filename' property, it is assumed that the data is stored in the + indicated file, outside of whatever database Roundup is otherwise + using. + + In these situations, it is difficult to maintain the transactional + abstractions used elsewhere in Roundup. In particular, if a + file's content is edited, but then the containing transaction is + not committed, we do not want to commit the edit. Similarly, we + would like to guarantee that if a transaction is committed to the + database, then the edit has in fact taken place. + + This class provides an approximation of these transactional + requirements. + + For classes that do not have a 'filename' property, the file name + used to store the file's content is a deterministic function of + the classname and nodeid for the file. The 'filename' function + computes this name. The name will contain directories and + subdirectories, but, suppose, for the purposes of what follows, + that the filename is 'file'. 
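The deterministic naming scheme just described can be sketched in a few self-contained lines; it roughly mirrors subdirFilename(), with the tracker home '/tracker/db' and the class name 'msg' invented for the example:

    import os

    def blob_path(home, classname, nodeid, property=None, tempext='.tmp'):
        # 'msg42' normally, or 'msg42.content' when a property name is given
        if property:
            name = '%s%s.%s' % (classname, nodeid, property)
        else:
            name = '%s%s' % (classname, nodeid)
        # one subdirectory per thousand items, as in subdirFilename()
        subdir = str(int(nodeid) // 1000)
        committed = os.path.join(home, 'files', classname, subdir, name)
        # uncommitted content sits beside it with the temporary suffix
        return committed, committed + tempext

    # blob_path('/tracker/db', 'msg', '1042') ->
    #   ('/tracker/db/files/msg/1/msg1042', '/tracker/db/files/msg/1/msg1042.tmp')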
+ + Edit Protocol + ------------- + + When a file is created or edited, the following protocol is used: + + 1. The new content of the file is placed in 'file.tmp'. + + 2. A transaction is recorded in 'self.transactions' referencing the + 'doStoreFile' method of this class. + + 3. At some subsequent point, the database 'commit' function is + called. This function first performs a traditional database + commit (for example, by issuing a SQL command to commit the + current transaction), and, then, runs the transactions recorded + in 'self.transactions'. + + 4. The 'doStoreFile' method renames the 'file.tmp' to 'file'. + + If Step 3 never occurs, but, instead, the database 'rollback' + method is called, then that method, after rolling back the + database transaction, calls 'rollbackStoreFile', which removes + 'file.tmp'. + + Race Condition + -------------- + + If two Roundup instances (say, the mail gateway and a web client, + or two web clients running with a multi-process server) attempt + edits at the same time, both will write to 'file.tmp', and the + results will be indeterminate. + + Crash Analysis + -------------- + + There are several situations that may occur if a crash (whether + because the machine crashes, because an unhandled Python exception + is raised, or because the Python process is killed) occurs. + + Complexity ensues because backing up an RDBMS is generally more + complex than simply copying a file. Instead, some command is run + which stores a snapshot of the database in a file. So, if you + back up the database to a file, and then back up the filesystem, + it is likely that further database transactions have occurred + between the point of database backup and the point of filesystem + backup. + + For the purposes of this analysis, we assume that the filesystem + backup occurred after the database backup. Furthermore, we assume + that filesystem backups are atomic; i.e., that the filesystem is + not being modified during the backup. + + 1. Neither the 'commit' nor 'rollback' methods on the database are + ever called. + + In this case, the '.tmp' file should be ignored as the + transaction was not committed. + + 2. The 'commit' method is called. Subsequently, the machine + crashes, and is restored from backups. + + The most recent filesystem backup and the most recent database + backup are not in general from the same instant in time. + + This problem means that we can never be sure after a crash if + the contents of a file are what we intend. It is always + possible that an edit was made to the file that is not + reflected in the filesystem. + + 3. A crash occurs between the point of the database commit and the + call to 'doStoreFile'. + + If only one of 'file' and 'file.tmp' exists, then that + version should be used. However, if both 'file' and 'file.tmp' + exist, there is no way to know which version to use. + + Reading the File + ---------------- + + When determining the content of the file, we use the following + algorithm: + + 1. If 'self.transactions' reflects an edit of the file, then use + 'file.tmp'. + + We know that an edit to the file is in process so 'file.tmp' is + the right choice. If 'file.tmp' does not exist, raise an + exception; something has removed the content of the file while + we are in the process of editing it. + + 2. Otherwise, if 'file.tmp' exists, and 'file' does not, use + 'file.tmp'. + + We know that the file is supposed to exist because there is a + reference to it in the database.
Since 'file' does not exist, + we assume that Crash 3 occurred during the initial creation of + the file. + + 3. Otherwise, use 'file'. + + If 'file.tmp' is not present, this is obviously the best we can + do. This is always the right answer unless Crash 2 occurred, + in which case the contents of 'file' may be newer than they + were at the point of database backup. + + If 'file.tmp' is present, we know that we are not actively + editing the file. The possibilities are: + + a. Crash 1 has occurred. In this case, using 'file' is the + right answer, so we will have chosen correctly. + + b. Crash 3 has occurred. In this case, 'file.tmp' is the right + answer, so we will have chosen incorrectly. However, 'file' + was at least a previously committed value. + + Future Improvements + ------------------- + + One approach would be to take advantage of databases which do + allow the storage of arbitary date. For example, MySQL provides + the HUGE BLOB datatype for storing up to 4GB of data. + + Another approach would be to store a version ('v') in the actual + database and name files 'file.v'. Then, the editing protocol + would become: + + 1. Generate a new version 'v', guaranteed to be different from all + other versions ever used by the database. (The version need + not be in any particular sequence; a UUID would be fine.) + + 2. Store the content in 'file.v'. + + 3. Update the database to indicate that the version of the node is + 'v'. + + Now, if the transaction is committed, the database will refer to + 'file.v', where the content exists. If the transaction is rolled + back, or not committed, 'file.v' will never be referenced. In the + event of a crash, under the assumptions above, there may be + 'file.v' files that are not referenced by the database, but the + database will be consistent, so long as unreferenced 'file.v' + files are never removed until after the database has been backed + up. + """ + + tempext = '.tmp' + """The suffix added to files indicating that they are uncommitted.""" + + def __init__(self, umask): + self.umask = umask + def subdirFilename(self, classname, nodeid, property=None): """Determine what the filename and subdir for nodeid + classname is.""" if property: name = '%s%s.%s'%(classname, nodeid, property) else: - # roundupdb.FileClass never specified the property name, so don't + # roundupdb.FileClass never specified the property name, so don't # include it name = '%s%s'%(classname, nodeid) - + # have a separate subdir for every thousand messages subdir = str(int(nodeid) / 1000) return os.path.join(subdir, name) - + + def _tempfile(self, filename): + """Return a temporary filename. + + 'filename' -- The name of the eventual destination file.""" + + return filename + self.tempext + def filename(self, classname, nodeid, property=None, create=0): - '''Determine what the filename for the given node and optionally + '''Determine what the filename for the given node and optionally property is. Try a variety of different filenames - the file could be in the @@ -60,14 +242,45 @@ ''' filename = os.path.join(self.dir, 'files', classname, self.subdirFilename(classname, nodeid, property)) - if create or os.path.exists(filename): + # If the caller is going to create the file, return the + # post-commit filename. It is the callers responsibility to + # add self.tempext when actually creating the file. 
+ if create: return filename - # try .tmp - filename = filename + '.tmp' + tempfile = self._tempfile(filename) + + # If an edit to this file is in progress, then return the name + # of the temporary file containing the edited content. + for method, args in self.transactions: + if (method == self.doStoreFile and + args == (classname, nodeid, property)): + # There is an edit in progress for this file. + if not os.path.exists(tempfile): + raise IOError('content file for %s not found'%tempfile) + return tempfile + if os.path.exists(filename): return filename + # Otherwise, if the temporary file exists, then the probable + # explanation is that a crash occurred between the point that + # the database entry recording the creation of the file + # occured and the point at which the file was renamed from the + # temporary name to the final name. + if os.path.exists(tempfile): + try: + # Clean up, by performing the commit now. + os.rename(tempfile, filename) + except: + pass + # If two Roundup clients both try to rename the file + # at the same time, only one of them will succeed. + # So, tolerate such an error -- but no other. + if not os.path.exists(filename): + raise IOError('content file for %s not found'%filename) + return filename + # ok, try flat (very old-style) if property: filename = os.path.join(self.dir, 'files', '%s%s.%s'%(classname, @@ -94,13 +307,17 @@ os.makedirs(os.path.dirname(name)) # save to a temp file - name = name + '.tmp' + name = self._tempfile(name) # make sure we don't register the rename action more than once if not os.path.exists(name): # save off the rename action self.transactions.append((self.doStoreFile, (classname, nodeid, property))) + # always set umask before writing to make sure we have the proper one + # in multi-tracker (i.e. multi-umask) or modpython scenarios + # the umask may have changed since last we set it. + os.umask(self.umask) open(name, 'wb').write(content) def getfile(self, classname, nodeid, property): @@ -125,16 +342,16 @@ '''Store the file as part of a transaction commit. ''' # determine the name of the file to write to - name = self.filename(classname, nodeid, property) + name = self.filename(classname, nodeid, property, 1) # the file is currently ".tmp" - move it to its real name to commit - if name.endswith('.tmp'): + if name.endswith(self.tempext): # creation dstname = os.path.splitext(name)[0] else: # edit operation dstname = name - name = name + '.tmp' + name = self._tempfile(name) # content is being updated (and some platforms, eg. win32, won't # let us rename over the top of the old file) @@ -151,8 +368,25 @@ ''' # determine the name of the file to delete name = self.filename(classname, nodeid, property) - if not name.endswith('.tmp'): - name += '.tmp' + if not name.endswith(self.tempext): + name += self.tempext os.remove(name) + def isStoreFile(self, classname, nodeid): + '''See if there is actually any FileStorage for this node. + Is there a better way than using self.filename? 
+ ''' + try: + fname = self.filename(classname, nodeid) + return True + except IOError: + return False + + def destroy(self, classname, nodeid): + '''If there is actually FileStorage for this node + remove it from the filesystem + ''' + if self.isStoreFile(classname, nodeid): + os.remove(self.filename(classname, nodeid)) + # vim: set filetype=python ts=4 sw=4 et si Modified: tracker/roundup-src/roundup/backends/indexer_xapian.py ============================================================================== --- tracker/roundup-src/roundup/backends/indexer_xapian.py (original) +++ tracker/roundup-src/roundup/backends/indexer_xapian.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -#$Id: indexer_xapian.py,v 1.4 2006/02/10 00:16:13 richard Exp $ +#$Id: indexer_xapian.py,v 1.6 2007/10/25 07:02:42 richard Exp $ ''' This implements the full-text indexer using the Xapian indexer. ''' import re, os @@ -32,7 +32,7 @@ def close(self): '''close the indexing database''' pass - + def rollback(self): if not self.transaction_active: return @@ -92,7 +92,7 @@ word = match.group(0) if self.is_stopword(word): continue - term = stemmer.stem_word(word) + term = stemmer(word) doc.add_posting(term, match.start(0)) if docid: database.replace_document(docid, doc) @@ -103,7 +103,7 @@ '''look up all the words in the wordlist. If none are found return an empty dictionary * more rules here - ''' + ''' if not wordlist: return {} @@ -113,7 +113,7 @@ stemmer = xapian.Stem("english") terms = [] for term in [word.upper() for word in wordlist if 26 > len(word) > 2]: - terms.append(stemmer.stem_word(term.upper())) + terms.append(stemmer(term.upper())) query = xapian.Query(xapian.Query.OP_AND, terms) enquire.set_query(query) Modified: tracker/roundup-src/roundup/backends/rdbms_common.py ============================================================================== --- tracker/roundup-src/roundup/backends/rdbms_common.py (original) +++ tracker/roundup-src/roundup/backends/rdbms_common.py Sun Mar 9 09:26:16 2008 @@ -15,8 +15,8 @@ # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. # -#$Id: rdbms_common.py,v 1.182 2006/10/04 01:12:00 richard Exp $ -''' Relational database (SQL) backend common code. +#$Id: rdbms_common.py,v 1.195 2008/02/07 05:01:42 richard Exp $ +""" Relational database (SQL) backend common code. Basics: @@ -29,7 +29,7 @@ - journals are stored adjunct to the per-class tables - table names and columns have "_" prepended so the names can't clash with restricted names (like "order") -- retirement is determined by the __retired__ column being true +- retirement is determined by the __retired__ column being > 0 Database-specific changes may generally be pushed out to the overridable sql_* methods, since everything else should be fairly generic. There's @@ -42,7 +42,14 @@ that maps to a table. If that information differs from the hyperdb schema, then we update it. We also store in the schema dict a version which allows us to upgrade the database schema when necessary. See upgrade_db(). -''' + +To force a unqiueness constraint on the key properties we put the item +id into the __retired__ column duing retirement (so it's 0 for "active" +items) and place a unqiueness constraint on key + __retired__. This is +particularly important for the users class where multiple users may +try to have the same username, with potentially many retired users with +the same name. 
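A small sketch of the retirement scheme described above, assuming a hypothetical 'user' class with key property 'username' and a DB-API cursor using the '%s' parameter style of the MySQL and PostgreSQL backends:

    def retire_user(cursor, nodeid):
        # active rows keep __retired__ = 0; a retired row stores its own id,
        # so the unique index below only constrains active usernames while
        # any number of retired users may share a name
        cursor.execute('update _user set __retired__=%s where id=%s',
                       (nodeid, nodeid))

    def lookup_active_user(cursor, username):
        cursor.execute('select id from _user'
                       ' where _username=%s and __retired__=0', (username,))
        row = cursor.fetchone()
        return row and str(row[0])

    # the matching constraint, as created by
    # add_class_key_required_unique_constraint('user', 'username'):
    #   create unique index _user_key_retired_idx on _user(__retired__, _username)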
+""" __docformat__ = 'restructuredtext' # standard python modules @@ -54,6 +61,7 @@ Multilink, DatabaseError, Boolean, Number, Node from roundup.backends import locking from roundup.support import reversed +from roundup.i18n import _ # support from blobfiles import FileStorage @@ -84,8 +92,8 @@ return int(value) def connection_dict(config, dbnamestr=None): - ''' Used by Postgresql and MySQL to detemine the keyword args for - opening the database connection.''' + """ Used by Postgresql and MySQL to detemine the keyword args for + opening the database connection.""" d = { } if dbnamestr: d[dbnamestr] = config.RDBMS_NAME @@ -97,15 +105,16 @@ return d class Database(FileStorage, hyperdb.Database, roundupdb.Database): - ''' Wrapper around an SQL database that presents a hyperdb interface. + """ Wrapper around an SQL database that presents a hyperdb interface. - some functionality is specific to the actual SQL database, hence the sql_* methods that are NotImplemented - we keep a cache of the latest ROW_CACHE_SIZE row fetches. - ''' + """ def __init__(self, config, journaltag=None): - ''' Open the database and load the schema from it. - ''' + """ Open the database and load the schema from it. + """ + FileStorage.__init__(self, config.UMASK) self.config, self.journaltag = config, journaltag self.dir = config.DATABASE self.classes = {} @@ -139,15 +148,15 @@ return OneTimeKeys(self) def open_connection(self): - ''' Open a connection to the database, creating it if necessary. + """ Open a connection to the database, creating it if necessary. Must call self.load_dbschema() - ''' + """ raise NotImplemented def sql(self, sql, args=None): - ''' Execute the sql with the optional args. - ''' + """ Execute the sql with the optional args. + """ if __debug__: logging.getLogger('hyperdb').debug('SQL %r %r'%(sql, args)) if args: @@ -156,18 +165,18 @@ self.cursor.execute(sql) def sql_fetchone(self): - ''' Fetch a single row. If there's nothing to fetch, return None. - ''' + """ Fetch a single row. If there's nothing to fetch, return None. + """ return self.cursor.fetchone() def sql_fetchall(self): - ''' Fetch all rows. If there's nothing to fetch, return []. - ''' + """ Fetch all rows. If there's nothing to fetch, return []. + """ return self.cursor.fetchall() def sql_stringquote(self, value): - ''' Quote the string so it's safe to put in the 'sql quotes' - ''' + """ Quote the string so it's safe to put in the 'sql quotes' + """ return re.sub("'", "''", str(value)) def init_dbschema(self): @@ -177,8 +186,8 @@ } def load_dbschema(self): - ''' Load the schema definition that the database currently implements - ''' + """ Load the schema definition that the database currently implements + """ self.cursor.execute('select schema from schema') schema = self.cursor.fetchone() if schema: @@ -187,18 +196,18 @@ self.database_schema = {} def save_dbschema(self): - ''' Save the schema definition that the database currently implements - ''' + """ Save the schema definition that the database currently implements + """ s = repr(self.database_schema) self.sql('delete from schema') self.sql('insert into schema values (%s)'%self.arg, (s,)) def post_init(self): - ''' Called once the schema initialisation has finished. + """ Called once the schema initialisation has finished. We should now confirm that the schema defined by our "classes" attribute actually matches the schema in the database. 
- ''' + """ save = 0 # handle changes in the schema @@ -237,12 +246,13 @@ # update this number when we need to make changes to the SQL structure # of the backen database - current_db_version = 4 + current_db_version = 5 + db_version_updated = False def upgrade_db(self): - ''' Update the SQL database to reflect changes in the backend code. + """ Update the SQL database to reflect changes in the backend code. Return boolean whether we need to save the schema. - ''' + """ version = self.database_schema.get('version', 1) if version == self.current_db_version: # nothing to do @@ -270,7 +280,11 @@ if version < 4: self.fix_version_3_tables() + if version < 5: + self.fix_version_4_tables() + self.database_schema['version'] = self.current_db_version + self.db_version_updated = True return 1 def fix_version_3_tables(self): @@ -281,11 +295,23 @@ self.sql('ALTER TABLE %ss ADD %s_value TEXT'%(name, name)) def fix_version_2_tables(self): - '''Default (used by sqlite): NOOP''' + # Default (used by sqlite): NOOP pass + def fix_version_4_tables(self): + # note this is an explicit call now + c = self.cursor + for cn, klass in self.classes.items(): + c.execute('select id from _%s where __retired__<>0'%(cn,)) + for (id,) in c.fetchall(): + c.execute('update _%s set __retired__=%s where id=%s'%(cn, + self.arg, self.arg), (id, id)) + + if klass.key: + self.add_class_key_required_unique_constraint(cn, klass.key) + def _convert_journal_tables(self): - '''Get current journal table contents, drop the table and re-create''' + """Get current journal table contents, drop the table and re-create""" c = self.cursor cols = ','.join('nodeid date tag action params'.split()) for klass in self.classes.values(): @@ -307,8 +333,8 @@ self.cursor.execute(sql, row) def _convert_string_properties(self): - '''Get current Class tables that contain String properties, and - convert the VARCHAR columns to TEXT''' + """Get current Class tables that contain String properties, and + convert the VARCHAR columns to TEXT""" c = self.cursor for klass in self.classes.values(): # slurp and drop @@ -363,11 +389,11 @@ hyperdb.Number : 'REAL', } def determine_columns(self, properties): - ''' Figure the column names and multilink properties from the spec + """ Figure the column names and multilink properties from the spec "properties" is a list of (name, prop) where prop may be an instance of a hyperdb "type" _or_ a string repr of that type. - ''' + """ cols = [ ('_actor', self.hyperdb_to_sql_datatypes[hyperdb.Link]), ('_activity', self.hyperdb_to_sql_datatypes[hyperdb.Date]), @@ -397,11 +423,11 @@ return cols, mls def update_class(self, spec, old_spec, force=0): - ''' Determine the differences between the current spec and the + """ Determine the differences between the current spec and the database version of the spec, and update where necessary. If 'force' is true, update the database anyway. - ''' + """ new_has = spec.properties.has_key new_spec = spec.schema() new_spec[1].sort() @@ -500,8 +526,8 @@ return cols, mls def create_class_table(self, spec): - '''Create the class table for the given Class "spec". Creates the - indexes too.''' + """Create the class table for the given Class "spec". 
Creates the + indexes too.""" cols, mls = self.determine_all_columns(spec) # create the base table @@ -514,8 +540,8 @@ return cols, mls def create_class_table_indexes(self, spec): - ''' create the class table for the given spec - ''' + """ create the class table for the given spec + """ # create __retired__ index index_sql2 = 'create index _%s_retired_idx on _%s(__retired__)'%( spec.classname, spec.classname) @@ -528,9 +554,18 @@ spec.classname, spec.key) self.sql(index_sql3) + # and the unique index for key / retired(id) + self.add_class_key_required_unique_constraint(spec.classname, + spec.key) + # TODO: create indexes on (selected?) Link property columns, as # they're more likely to be used for lookup + def add_class_key_required_unique_constraint(self, cn, key): + sql = '''create unique index _%s_key_retired_idx + on _%s(__retired__, _%s)'''%(cn, cn, key) + self.sql(sql) + def drop_class_table_indexes(self, cn, key): # drop the old table indexes first l = ['_%s_id_idx'%cn, '_%s_retired_idx'%cn] @@ -545,29 +580,34 @@ self.sql(index_sql) def create_class_table_key_index(self, cn, key): - ''' create the class table for the given spec - ''' + """ create the class table for the given spec + """ sql = 'create index _%s_%s_idx on _%s(_%s)'%(cn, key, cn, key) self.sql(sql) def drop_class_table_key_index(self, cn, key): table_name = '_%s'%cn index_name = '_%s_%s_idx'%(cn, key) - if not self.sql_index_exists(table_name, index_name): - return - sql = 'drop index '+index_name - self.sql(sql) + if self.sql_index_exists(table_name, index_name): + sql = 'drop index '+index_name + self.sql(sql) + + # and now the retired unique index too + index_name = '_%s_key_retired_idx'%cn + if self.sql_index_exists(table_name, index_name): + sql = 'drop index '+index_name + self.sql(sql) def create_journal_table(self, spec): - ''' create the journal table for a class given the spec and + """ create the journal table for a class given the spec and already-determined cols - ''' + """ # journal table cols = ','.join(['%s varchar'%x for x in 'nodeid date tag action params'.split()]) - sql = '''create table %s__journal ( + sql = """create table %s__journal ( nodeid integer, date %s, tag varchar(255), - action varchar(255), params text)''' % (spec.classname, + action varchar(255), params text)""" % (spec.classname, self.hyperdb_to_sql_datatypes[hyperdb.Date]) self.sql(sql) self.create_journal_table_indexes(spec) @@ -586,9 +626,9 @@ self.sql(index_sql) def create_multilink_table(self, spec, ml): - ''' Create a multilink table for the "ml" property of the class + """ Create a multilink table for the "ml" property of the class given by the spec - ''' + """ # create the table sql = 'create table %s_%s (linkid INTEGER, nodeid INTEGER)'%( spec.classname, ml) @@ -619,8 +659,8 @@ self.sql(index_sql) def create_class(self, spec): - ''' Create a database table according to the given spec. - ''' + """ Create a database table according to the given spec. + """ cols, mls = self.create_class_table(spec) self.create_journal_table(spec) @@ -629,14 +669,14 @@ self.create_multilink_table(spec, ml) def drop_class(self, cn, spec): - ''' Drop the given table from the database. + """ Drop the given table from the database. Drop the journal and multilink tables too. 
- ''' + """ properties = spec[1] # figure the multilinks mls = [] - for propanme, prop in properties: + for propname, prop in properties: if isinstance(prop, Multilink): mls.append(propname) @@ -664,15 +704,15 @@ # Classes # def __getattr__(self, classname): - ''' A convenient way of calling self.getclass(classname). - ''' + """ A convenient way of calling self.getclass(classname). + """ if self.classes.has_key(classname): return self.classes[classname] raise AttributeError, classname def addclass(self, cl): - ''' Add a Class to the hyperdatabase. - ''' + """ Add a Class to the hyperdatabase. + """ cn = cl.classname if self.classes.has_key(cn): raise ValueError, cn @@ -687,28 +727,28 @@ description="User is allowed to access "+cn) def getclasses(self): - ''' Return a list of the names of all existing classes. - ''' + """ Return a list of the names of all existing classes. + """ l = self.classes.keys() l.sort() return l def getclass(self, classname): - '''Get the Class object representing a particular class. + """Get the Class object representing a particular class. If 'classname' is not a valid class name, a KeyError is raised. - ''' + """ try: return self.classes[classname] except KeyError: raise KeyError, 'There is no class called "%s"'%classname def clear(self): - '''Delete all database contents. + """Delete all database contents. Note: I don't commit here, which is different behaviour to the "nuke from orbit" behaviour in the dbs. - ''' + """ logging.getLogger('hyperdb').info('clear') for cn in self.classes.keys(): sql = 'delete from _%s'%cn @@ -730,8 +770,8 @@ hyperdb.Multilink : lambda x: x, # used in journal marshalling } def addnode(self, classname, nodeid, node): - ''' Add the specified node to its class's db. - ''' + """ Add the specified node to its class's db. + """ if __debug__: logging.getLogger('hyperdb').debug('addnode %s%s %r'%(classname, nodeid, node)) @@ -805,8 +845,8 @@ self.sql(sql, (entry, nodeid)) def setnode(self, classname, nodeid, values, multilink_changes={}): - ''' Change the specified node. - ''' + """ Change the specified node. + """ if __debug__: logging.getLogger('hyperdb').debug('setnode %s%s %r' % (classname, nodeid, values)) @@ -919,8 +959,8 @@ hyperdb.Multilink : lambda x: x, # used in journal marshalling } def getnode(self, classname, nodeid): - ''' Get a node from the database. - ''' + """ Get a node from the database. + """ # see if we have this node cached key = (classname, nodeid) if self.cache.has_key(key): @@ -990,9 +1030,9 @@ return node def destroynode(self, classname, nodeid): - '''Remove a node from the database. Called exclusively by the + """Remove a node from the database. Called exclusively by the destroy() method on Class. - ''' + """ logging.getLogger('hyperdb').info('destroynode %s%s'%(classname, nodeid)) # make sure the node exists @@ -1024,29 +1064,32 @@ sql = 'delete from %s__journal where nodeid=%s'%(classname, self.arg) self.sql(sql, (nodeid,)) + # cleanup any blob filestorage when we commit + self.transactions.append((FileStorage.destroy, (self, classname, nodeid))) + def hasnode(self, classname, nodeid): - ''' Determine if the database has a given node. - ''' + """ Determine if the database has a given node. + """ sql = 'select count(*) from _%s where id=%s'%(classname, self.arg) self.sql(sql, (nodeid,)) return int(self.cursor.fetchone()[0]) def countnodes(self, classname): - ''' Count the number of nodes that exist for a particular Class. - ''' + """ Count the number of nodes that exist for a particular Class. 
+ """ sql = 'select count(*) from _%s'%classname self.sql(sql) return self.cursor.fetchone()[0] def addjournal(self, classname, nodeid, action, params, creator=None, creation=None): - ''' Journal the Action + """ Journal the Action 'action' may be: 'create' or 'set' -- 'params' is a dictionary of property values 'link' or 'unlink' -- 'params' is (classname, nodeid, propname) 'retire' -- 'params' is None - ''' + """ # handle supply of the special journalling parameters (usually # supplied on importing an existing database) if creator: @@ -1078,7 +1121,7 @@ journaltag, action, params) def setjournal(self, classname, nodeid, journal): - '''Set the journal to the "journal" list.''' + """Set the journal to the "journal" list.""" # clear out any existing entries self.sql('delete from %s__journal where nodeid=%s'%(classname, self.arg), (nodeid,)) @@ -1102,8 +1145,8 @@ journaltag, action, params) def _journal_marshal(self, params, classname): - '''Convert the journal params values into safely repr'able and - eval'able values.''' + """Convert the journal params values into safely repr'able and + eval'able values.""" properties = self.getclass(classname).getprops() for param, value in params.items(): if not value: @@ -1120,8 +1163,8 @@ params[param] = cvt(value) def getjournal(self, classname, nodeid): - ''' get the journal for id - ''' + """ get the journal for id + """ # make sure the node exists if not self.hasnode(classname, nodeid): raise IndexError, '%s has no node %s'%(classname, nodeid) @@ -1158,8 +1201,8 @@ def save_journal(self, classname, cols, nodeid, journaldate, journaltag, action, params): - ''' Save the journal entry to the database - ''' + """ Save the journal entry to the database + """ entry = (nodeid, journaldate, journaltag, action, params) # do the insert @@ -1169,8 +1212,8 @@ self.sql(sql, entry) def load_journal(self, classname, cols, nodeid): - ''' Load the journal from the database - ''' + """ Load the journal from the database + """ # now get the journal entries sql = 'select %s from %s__journal where nodeid=%s order by date'%( cols, classname, self.arg) @@ -1178,8 +1221,8 @@ return self.cursor.fetchall() def pack(self, pack_before): - ''' Delete all journal entries except "create" before 'pack_before'. - ''' + """ Delete all journal entries except "create" before 'pack_before'. + """ date_stamp = self.hyperdb_to_sql_value[Date](pack_before) # do the delete @@ -1189,8 +1232,8 @@ self.sql(sql, (date_stamp,)) def sql_commit(self, fail_ok=False): - ''' Actually commit to the database. - ''' + """ Actually commit to the database. + """ logging.getLogger('hyperdb').info('commit') self.conn.commit() @@ -1199,7 +1242,7 @@ self.cursor = self.conn.cursor() def commit(self, fail_ok=False): - ''' Commit the current transactions. + """ Commit the current transactions. Save all data changed since the database was opened or since the last commit() or rollback(). @@ -1209,7 +1252,7 @@ database. We don't care if there's a concurrency issue there. The only backend this seems to affect is postgres. - ''' + """ # commit the database self.sql_commit(fail_ok) @@ -1227,11 +1270,11 @@ self.conn.rollback() def rollback(self): - ''' Reverse all actions from the current transaction. + """ Reverse all actions from the current transaction. Undo all the changes made since the database was opened or the last commit() or rollback() was performed. 
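The blobfiles.py changes above hook file content into this same commit/rollback cycle; a condensed, stand-alone sketch of that temp-file bookkeeping (not the real FileStorage class, just its shape):

    import os

    class TinyFileStore:
        def __init__(self):
            self.transactions = []      # (method, args) pairs, run on commit

        def storefile(self, path, content):
            # uncommitted edits always go to '<path>.tmp'
            open(path + '.tmp', 'wb').write(content)
            self.transactions.append((self.do_store, (path,)))

        def do_store(self, path):
            # commit: promote the temporary file to its real name
            os.rename(path + '.tmp', path)

        def commit(self):
            # the database commit would happen first, then the queued renames
            for method, args in self.transactions:
                method(*args)
            self.transactions = []

        def rollback(self):
            # undo: discard the uncommitted temporary files
            for method, (path,) in self.transactions:
                os.remove(path + '.tmp')
            self.transactions = []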
- ''' + """ logging.getLogger('hyperdb').info('rollback') self.sql_rollback() @@ -1251,8 +1294,8 @@ self.conn.close() def close(self): - ''' Close off the connection. - ''' + """ Close off the connection. + """ self.indexer.close() self.sql_close() @@ -1260,31 +1303,31 @@ # The base Class class # class Class(hyperdb.Class): - ''' The handle to a particular class of nodes in a hyperdatabase. + """ The handle to a particular class of nodes in a hyperdatabase. All methods except __repr__ and getnode must be implemented by a concrete backend Class. - ''' + """ def schema(self): - ''' A dumpable version of the schema that we can store in the + """ A dumpable version of the schema that we can store in the database - ''' + """ return (self.key, [(x, repr(y)) for x,y in self.properties.items()]) def enableJournalling(self): - '''Turn journalling on for this class - ''' + """Turn journalling on for this class + """ self.do_journal = 1 def disableJournalling(self): - '''Turn journalling off for this class - ''' + """Turn journalling off for this class + """ self.do_journal = 0 # Editing nodes: def create(self, **propvalues): - ''' Create a new node of this class and return its id. + """ Create a new node of this class and return its id. The keyword arguments in 'propvalues' map property names to values. @@ -1299,20 +1342,20 @@ If an id in a link or multilink property does not refer to a valid node, an IndexError is raised. - ''' + """ self.fireAuditors('create', None, propvalues) newid = self.create_inner(**propvalues) self.fireReactors('create', newid, None) return newid def create_inner(self, **propvalues): - ''' Called by create, in-between the audit and react calls. - ''' + """ Called by create, in-between the audit and react calls. + """ if propvalues.has_key('id'): raise KeyError, '"id" is reserved' if self.db.journaltag is None: - raise DatabaseError, 'Database open read-only' + raise DatabaseError, _('Database open read-only') if propvalues.has_key('creator') or propvalues.has_key('actor') or \ propvalues.has_key('creation') or propvalues.has_key('activity'): @@ -1363,8 +1406,10 @@ (self.classname, newid, key)) elif isinstance(prop, Multilink): - if type(value) != type([]): - raise TypeError, 'new property "%s" not a list of ids'%key + if value is None: + value = [] + if not hasattr(value, '__iter__'): + raise TypeError, 'new property "%s" not an iterable of ids'%key # clean up and validate the list of links link_class = self.properties[key].classname @@ -1445,14 +1490,14 @@ return str(newid) def get(self, nodeid, propname, default=_marker, cache=1): - '''Get the value of a property on an existing node of this class. + """Get the value of a property on an existing node of this class. 'nodeid' must be the id of an existing node of this class or an IndexError is raised. 'propname' must be the name of a property of this class or a KeyError is raised. 'cache' exists for backwards compatibility, and is not used. - ''' + """ if propname == 'id': return nodeid @@ -1501,7 +1546,7 @@ return d[propname] def set(self, nodeid, **propvalues): - '''Modify a property on an existing node of this class. + """Modify a property on an existing node of this class. 'nodeid' must be the id of an existing node of this class or an IndexError is raised. @@ -1517,7 +1562,7 @@ If the value of a Link or Multilink property contains an invalid node id, a ValueError is raised. 
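One practical effect of the Multilink change in create_inner()/set_inner() above is that any iterable of ids, or None, is now accepted where previously only a real list was. A minimal stand-alone version of that check:

    def clean_multilink(propname, value):
        # mirrors the new validation: None becomes the empty list and any
        # iterable of ids is allowed
        if value is None:
            value = []
        if not hasattr(value, '__iter__'):
            raise TypeError('new property "%s" not an iterable of ids' % propname)
        return [str(v) for v in value]

    # clean_multilink('nosy', ('2', '3')) -> ['2', '3']
    # clean_multilink('nosy', None)       -> []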
- ''' + """ self.fireAuditors('set', nodeid, propvalues) oldvalues = copy.deepcopy(self.db.getnode(self.classname, nodeid)) propvalues = self.set_inner(nodeid, **propvalues) @@ -1525,8 +1570,8 @@ return propvalues def set_inner(self, nodeid, **propvalues): - ''' Called by set, in-between the audit and react calls. - ''' + """ Called by set, in-between the audit and react calls. + """ if not propvalues: return propvalues @@ -1539,7 +1584,7 @@ raise KeyError, '"id" is reserved' if self.db.journaltag is None: - raise DatabaseError, 'Database open read-only' + raise DatabaseError, _('Database open read-only') node = self.db.getnode(self.classname, nodeid) if self.is_retired(nodeid): @@ -1613,8 +1658,10 @@ (self.classname, nodeid, propname)) elif isinstance(prop, Multilink): - if type(value) != type([]): - raise TypeError, 'new property "%s" not a list of'\ + if value is None: + value = [] + if not hasattr(value, '__iter__'): + raise TypeError, 'new property "%s" not an iterable of'\ ' ids'%propname link_class = self.properties[propname].classname l = [] @@ -1734,16 +1781,16 @@ return propvalues def retire(self, nodeid): - '''Retire a node. + """Retire a node. The properties on the node remain available from the get() method, and the node's id is never reused. Retired nodes are not returned by the find(), list(), or lookup() methods, and other nodes may reuse the values of their key properties. - ''' + """ if self.db.journaltag is None: - raise DatabaseError, 'Database open read-only' + raise DatabaseError, _('Database open read-only') self.fireAuditors('retire', nodeid, None) @@ -1751,19 +1798,19 @@ # conversion (hello, sqlite) sql = 'update _%s set __retired__=%s where id=%s'%(self.classname, self.db.arg, self.db.arg) - self.db.sql(sql, (1, nodeid)) + self.db.sql(sql, (nodeid, nodeid)) if self.do_journal: self.db.addjournal(self.classname, nodeid, ''"retired", None) self.fireReactors('retire', nodeid, None) def restore(self, nodeid): - '''Restore a retired node. + """Restore a retired node. Make node available for all operations like it was before retirement. - ''' + """ if self.db.journaltag is None: - raise DatabaseError, 'Database open read-only' + raise DatabaseError, _('Database open read-only') node = self.db.getnode(self.classname, nodeid) # check if key property was overrided @@ -1788,15 +1835,15 @@ self.fireReactors('restore', nodeid, None) def is_retired(self, nodeid): - '''Return true if the node is rerired - ''' + """Return true if the node is rerired + """ sql = 'select __retired__ from _%s where id=%s'%(self.classname, self.db.arg) self.db.sql(sql, (nodeid,)) - return int(self.db.sql_fetchone()[0]) + return int(self.db.sql_fetchone()[0]) > 0 def destroy(self, nodeid): - '''Destroy a node. + """Destroy a node. WARNING: this method should never be used except in extremely rare situations where there could never be links to the node being @@ -1814,13 +1861,13 @@ The node is completely removed from the hyperdb, including all journal entries. It will no longer be available, and will generally break code if there are any references to the node. - ''' + """ if self.db.journaltag is None: - raise DatabaseError, 'Database open read-only' + raise DatabaseError, _('Database open read-only') self.db.destroynode(self.classname, nodeid) def history(self, nodeid): - '''Retrieve the journal of edits on a particular node. + """Retrieve the journal of edits on a particular node. 'nodeid' must be the id of an existing node of this class or an IndexError is raised. 
@@ -1831,49 +1878,49 @@ 'date' is a Timestamp object specifying the time of the change and 'tag' is the journaltag specified when the database was opened. - ''' + """ if not self.do_journal: raise ValueError, 'Journalling is disabled for this class' return self.db.getjournal(self.classname, nodeid) # Locating nodes: def hasnode(self, nodeid): - '''Determine if the given nodeid actually exists - ''' + """Determine if the given nodeid actually exists + """ return self.db.hasnode(self.classname, nodeid) def setkey(self, propname): - '''Select a String property of this class to be the key property. + """Select a String property of this class to be the key property. 'propname' must be the name of a String property of this class or None, or a TypeError is raised. The values of the key property on all existing nodes must be unique or a ValueError is raised. - ''' + """ prop = self.getprops()[propname] if not isinstance(prop, String): raise TypeError, 'key properties must be String' self.key = propname def getkey(self): - '''Return the name of the key property for this class or None.''' + """Return the name of the key property for this class or None.""" return self.key def lookup(self, keyvalue): - '''Locate a particular node by its key property and return its id. + """Locate a particular node by its key property and return its id. If this class has no key property, a TypeError is raised. If the 'keyvalue' matches one of the values for the key property among the nodes in this class, the matching node's id is returned; otherwise a KeyError is raised. - ''' + """ if not self.key: raise TypeError, 'No key property set for class %s'%self.classname # use the arg to handle any odd database type conversion (hello, # sqlite) - sql = "select id from _%s where _%s=%s and __retired__ <> %s"%( + sql = "select id from _%s where _%s=%s and __retired__=%s"%( self.classname, self.key, self.db.arg, self.db.arg) - self.db.sql(sql, (keyvalue, 1)) + self.db.sql(sql, (keyvalue, 0)) # see if there was a result that's not retired row = self.db.sql_fetchone() @@ -1886,7 +1933,7 @@ return str(row[0]) def find(self, **propspec): - '''Get the ids of nodes in this class which link to the given nodes. + """Get the ids of nodes in this class which link to the given nodes. 
'propspec' consists of keyword args propname=nodeid or propname={nodeid:1, } @@ -1899,7 +1946,7 @@ db.issue.find(messages='1') db.issue.find(messages={'1':1,'3':1}, files={'7':1}) - ''' + """ # shortcut if not propspec: return [] @@ -1938,9 +1985,9 @@ s += '_%s in (%s)'%(prop, ','.join([a]*len(values))) where.append('(' + s +')') if where: - allvalues = (1, ) + allvalues - sql.append('''select id from _%s where __retired__ <> %s - and %s'''%(self.classname, a, ' and '.join(where))) + allvalues = (0, ) + allvalues + sql.append("""select id from _%s where __retired__=%s + and %s"""%(self.classname, a, ' and '.join(where))) # now multilinks for prop, values in propspec: @@ -1948,7 +1995,7 @@ continue if not values: continue - allvalues += (1, ) + allvalues += (0, ) if type(values) is type(''): allvalues += (values,) s = a @@ -1956,8 +2003,8 @@ allvalues += tuple(values.keys()) s = ','.join([a]*len(values)) tn = '%s_%s'%(self.classname, prop) - sql.append('''select id from _%s, %s where __retired__ <> %s - and id = %s.nodeid and %s.linkid in (%s)'''%(self.classname, + sql.append("""select id from _%s, %s where __retired__=%s + and id = %s.nodeid and %s.linkid in (%s)"""%(self.classname, tn, a, tn, tn, s)) if not sql: @@ -1969,13 +2016,13 @@ return l def stringFind(self, **requirements): - '''Locate a particular node by matching a set of its String + """Locate a particular node by matching a set of its String properties in a caseless search. If the property is not a String property, a TypeError is raised. The return is a list of the id of all nodes that match. - ''' + """ where = [] args = [] for propname in requirements.keys(): @@ -1987,33 +2034,34 @@ # generate the where clause s = ' and '.join(['lower(_%s)=%s'%(col, self.db.arg) for col in where]) - sql = 'select id from _%s where %s and __retired__<>%s'%( + sql = 'select id from _%s where %s and __retired__=%s'%( self.classname, s, self.db.arg) - args.append(1) + args.append(0) self.db.sql(sql, tuple(args)) # XXX numeric ids l = [str(x[0]) for x in self.db.sql_fetchall()] return l def list(self): - ''' Return a list of the ids of the active nodes in this class. - ''' + """ Return a list of the ids of the active nodes in this class. + """ return self.getnodeids(retired=0) def getnodeids(self, retired=None): - ''' Retrieve all the ids of the nodes for a particular Class. + """ Retrieve all the ids of the nodes for a particular Class. Set retired=None to get all nodes. Otherwise it'll get all the retired or non-retired nodes, depending on the flag. - ''' + """ # flip the sense of the 'retired' flag if we don't want all of them if retired is not None: + args = (0, ) if retired: - args = (0, ) + compare = '>' else: - args = (1, ) - sql = 'select id from _%s where __retired__ <> %s'%(self.classname, - self.db.arg) + compare = '=' + sql = 'select id from _%s where __retired__%s%s'%(self.classname, + compare, self.db.arg) else: args = () sql = 'select id from _%s'%self.classname @@ -2023,11 +2071,11 @@ return ids def _subselect(self, classname, multilink_table): - '''Create a subselect. This is factored out because some + """Create a subselect. This is factored out because some databases (hmm only one, so far) doesn't support subselects look for "I can't believe it's not a toy RDBMS" in the mysql backend. 
- ''' + """ return '_%s.id not in (select nodeid from %s)'%(classname, multilink_table) @@ -2040,7 +2088,7 @@ order_by_null_values = None def filter(self, search_matches, filterspec, sort=[], group=[]): - '''Return a list of the ids of the active nodes in this class that + """Return a list of the ids of the active nodes in this class that match the 'filter' spec, sorted by the group spec and then the sort spec @@ -2058,7 +2106,7 @@ 1. String properties must match all elements in the list, and 2. Other properties must match any of the elements in the list. - ''' + """ # we can't match anything if search_matches is empty if search_matches == {}: return [] @@ -2267,7 +2315,7 @@ props = self.getprops() # don't match retired nodes - where.append('_%s.__retired__ <> 1'%icn) + where.append('_%s.__retired__=0'%icn) # add results of full text search if search_matches is not None: @@ -2326,14 +2374,14 @@ return l def filter_sql(self, sql): - '''Return a list of the ids of the items in this class that match + """Return a list of the ids of the items in this class that match the SQL provided. The SQL is a complete "select" statement. The SQL select must include the item id as the first column. This function DOES NOT filter out retired items, add on a where - clause "__retired__ <> 1" if you don't want retired nodes. - ''' + clause "__retired__=0" if you don't want retired nodes. + """ if __debug__: start_t = time.time() @@ -2345,20 +2393,20 @@ return l def count(self): - '''Get the number of nodes in this class. + """Get the number of nodes in this class. If the returned integer is 'numnodes', the ids of all the nodes in this class run from 1 to numnodes, and numnodes+1 will be the id of the next node to be created in this class. - ''' + """ return self.db.countnodes(self.classname) # Manipulating properties: def getprops(self, protected=1): - '''Return a dictionary mapping property names to property objects. + """Return a dictionary mapping property names to property objects. If the "protected" flag is true, we include protected properties - those which may not be modified. - ''' + """ d = self.properties.copy() if protected: d['id'] = String() @@ -2369,21 +2417,21 @@ return d def addprop(self, **properties): - '''Add properties to this class. + """Add properties to this class. The keyword arguments in 'properties' must map names to property objects, or a TypeError is raised. None of the keys in 'properties' may collide with the names of existing properties, or a ValueError is raised before any properties have been added. - ''' + """ for key in properties.keys(): if self.properties.has_key(key): raise ValueError, key self.properties.update(properties) def index(self, nodeid): - '''Add (or refresh) the node to search indexes - ''' + """Add (or refresh) the node to search indexes + """ # find all the String properties that have indexme for prop, propclass in self.getprops().items(): if isinstance(propclass, String) and propclass.indexme: @@ -2394,9 +2442,9 @@ # import / export support # def export_list(self, propnames, nodeid): - ''' Export a node - generate a list of CSV-able data in the order + """ Export a node - generate a list of CSV-able data in the order specified by propnames for the given node. 
- ''' + """ properties = self.getprops() l = [] for prop in propnames: @@ -2416,15 +2464,15 @@ return l def import_list(self, propnames, proplist): - ''' Import a node - all information including "id" is present and + """ Import a node - all information including "id" is present and should not be sanity checked. Triggers are not triggered. The journal should be initialised using the "creator" and "created" information. Return the nodeid of the node imported. - ''' + """ if self.db.journaltag is None: - raise DatabaseError, 'Database open read-only' + raise DatabaseError, _('Database open read-only') properties = self.getprops() # make the new node's property map @@ -2493,17 +2541,17 @@ # conversion (hello, sqlite) sql = 'update _%s set __retired__=%s where id=%s'%(self.classname, self.db.arg, self.db.arg) - self.db.sql(sql, (1, newid)) + self.db.sql(sql, (newid, newid)) return newid def export_journals(self): - '''Export a class's journal - generate a list of lists of + """Export a class's journal - generate a list of lists of CSV-able data: nodeid, date, user, action, params No heading here - the columns are fixed. - ''' + """ properties = self.getprops() r = [] for nodeid in self.getnodeids(): @@ -2528,14 +2576,17 @@ value = str(value) export_data[propname] = value params = export_data + elif action == 'create' and params: + # old tracker with data stored in the create! + params = {} l = [nodeid, date, user, action, params] r.append(map(repr, l)) return r def import_journals(self, entries): - '''Import a class's journal. + """Import a class's journal. - Uses setjournal() to set the journal for each item.''' + Uses setjournal() to set the journal for each item.""" properties = self.getprops() d = {} for l in entries: @@ -2556,24 +2607,27 @@ pwd.unpack(value) value = pwd params[propname] = value + elif action == 'create' and params: + # old tracker with data stored in the create! + params = {} r.append((nodeid, date.Date(jdate), user, action, params)) for nodeid, l in d.items(): self.db.setjournal(self.classname, nodeid, l) class FileClass(hyperdb.FileClass, Class): - '''This class defines a large chunk of data. To support this, it has a + """This class defines a large chunk of data. To support this, it has a mandatory String property "content" which is typically saved off externally to the hyperdb. The default MIME type of this data is defined by the "default_mime_type" class attribute, which may be overridden by each node if the class defines a "type" String property. - ''' + """ def __init__(self, db, classname, **properties): - '''The newly-created class automatically includes the "content" + """The newly-created class automatically includes the "content" and "type" properties. - ''' + """ if not properties.has_key('content'): properties['content'] = hyperdb.String(indexme='yes') if not properties.has_key('type'): @@ -2581,8 +2635,8 @@ Class.__init__(self, db, classname, **properties) def create(self, **propvalues): - ''' snaffle the file propvalue and store in a file - ''' + """ snaffle the file propvalue and store in a file + """ # we need to fire the auditors now, or the content property won't # be in propvalues for the auditors to play with self.fireAuditors('create', None, propvalues) @@ -2610,10 +2664,10 @@ return newid def get(self, nodeid, propname, default=_marker, cache=1): - ''' Trap the content propname and get it from the file + """ Trap the content propname and get it from the file 'cache' exists for backwards compatibility, and is not used. 
- ''' + """ poss_msg = 'Possibly a access right configuration problem.' if propname == 'content': try: @@ -2627,21 +2681,9 @@ else: return Class.get(self, nodeid, propname) - def getprops(self, protected=1): - '''In addition to the actual properties on the node, these methods - provide the "content" property. If the "protected" flag is true, - we include protected properties - those which may not be - modified. - - Note that the content prop is indexed separately, hence no indexme. - ''' - d = Class.getprops(self, protected=protected).copy() - d['content'] = hyperdb.String(indexme='yes') - return d - def set(self, itemid, **propvalues): - ''' Snarf the "content" propvalue and update it in a file - ''' + """ Snarf the "content" propvalue and update it in a file + """ self.fireAuditors('set', itemid, propvalues) oldvalues = copy.deepcopy(self.db.getnode(self.classname, itemid)) @@ -2669,10 +2711,10 @@ return propvalues def index(self, nodeid): - ''' Add (or refresh) the node to search indexes. + """ Add (or refresh) the node to search indexes. Use the content-type property for the content property. - ''' + """ # find all the String properties that have indexme for prop, propclass in self.getprops().items(): if prop == 'content' and propclass.indexme: @@ -2692,12 +2734,12 @@ class IssueClass(Class, roundupdb.IssueClass): # Overridden methods: def __init__(self, db, classname, **properties): - '''The newly-created class automatically includes the "messages", + """The newly-created class automatically includes the "messages", "files", "nosy", and "superseder" properties. If the 'properties' dictionary attempts to specify any of these properties or a "creation", "creator", "activity" or "actor" property, a ValueError is raised. - ''' + """ if not properties.has_key('title'): properties['title'] = hyperdb.String(indexme='yes') if not properties.has_key('messages'): Modified: tracker/roundup-src/roundup/backends/sessions_dbm.py ============================================================================== --- tracker/roundup-src/roundup/backends/sessions_dbm.py (original) +++ tracker/roundup-src/roundup/backends/sessions_dbm.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -#$Id: sessions_dbm.py,v 1.7 2006/04/27 04:59:37 richard Exp $ +#$Id: sessions_dbm.py,v 1.9 2007/09/27 06:18:53 jpend Exp $ """This module defines a very basic store that's used by the CGI interface to store session and one-time-key information. @@ -8,6 +8,8 @@ __docformat__ = 'restructuredtext' import anydbm, whichdb, os, marshal, time +from roundup import hyperdb +from roundup.i18n import _ class BasicDatabase: ''' Provide a nice encapsulation of an anydbm store. @@ -44,7 +46,8 @@ if os.path.exists(path): db_type = whichdb.whichdb(path) if not db_type: - raise hyperdb.DatabaseError, "Couldn't identify database type" + raise hyperdb.DatabaseError, \ + _("Couldn't identify database type") elif os.path.exists(path+'.db'): # if the path ends in '.db', it's a dbm database, whether # anydbm says it's dbhash or not! 
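# --- Editorial sketch (not part of the patch above) ---------------------
# The rdbms_common.py hunks earlier in this change move from treating
# "__retired__ <> 1" as "active" to a convention where an active row
# stores 0 in __retired__ and a retired row stores a non-zero value, so
# active-only filters become "__retired__ = 0" and retired-only filters
# become "__retired__ > 0".  The helper below only illustrates that
# convention; the cursor and classname arguments are assumptions for the
# example, not code from the patch.
def select_ids(cursor, classname, retired=False):
    """Return ids of retired (or active) rows of a class table."""
    if retired:
        compare = '>'   # any non-zero value marks a retired row
    else:
        compare = '='   # zero marks an active row
    cursor.execute('select id from _%s where __retired__ %s 0' % (
        classname, compare))
    return [str(row[0]) for row in cursor.fetchall()]
# ------------------------------------------------------------------------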
@@ -155,3 +158,4 @@ class OneTimeKeys(BasicDatabase): name = 'otks' +# vim: set sts ts=4 sw=4 et si : Modified: tracker/roundup-src/roundup/backends/sessions_rdbms.py ============================================================================== --- tracker/roundup-src/roundup/backends/sessions_rdbms.py (original) +++ tracker/roundup-src/roundup/backends/sessions_rdbms.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -#$Id: sessions_rdbms.py,v 1.4 2006/04/27 04:03:11 richard Exp $ +#$Id: sessions_rdbms.py,v 1.7 2007/09/25 19:49:19 jpend Exp $ """This module defines a very basic store that's used by the CGI interface to store session and one-time-key information. @@ -77,7 +77,7 @@ self.name, self.db.arg), (infoid,)) def updateTimestamp(self, infoid): - ''' don't update every hit - once a minute should be OK ''' + """ don't update every hit - once a minute should be OK """ now = time.time() self.cursor.execute('''update %ss set %s_time=%s where %s_key=%s and %s_time < %s'''%(self.name, self.name, self.db.arg, @@ -97,3 +97,4 @@ class OneTimeKeys(BasicDatabase): name = 'otk' +# vim: set et sts=4 sw=4 : Modified: tracker/roundup-src/roundup/cgi/TranslationService.py ============================================================================== --- tracker/roundup-src/roundup/cgi/TranslationService.py (original) +++ tracker/roundup-src/roundup/cgi/TranslationService.py Sun Mar 9 09:26:16 2008 @@ -13,8 +13,8 @@ # translate(domain, msgid, mapping, context, target_language, default) # -__version__ = "$Revision: 1.3 $"[11:-2] -__date__ = "$Date: 2006/12/02 23:41:28 $"[7:-2] +__version__ = "$Revision: 1.4 $"[11:-2] +__date__ = "$Date: 2007/01/14 22:54:15 $"[7:-2] from roundup import i18n from roundup.cgi.PageTemplates import Expressions, PathIterator, TALES @@ -35,6 +35,8 @@ return _msg def gettext(self, msgid): + if not isinstance(msgid, unicode): + msgid = unicode(msgid, 'utf8') return self.ugettext(msgid).encode(self.OUTPUT_ENCODING) def ngettext(self, singular, plural, number): Modified: tracker/roundup-src/roundup/cgi/actions.py ============================================================================== --- tracker/roundup-src/roundup/cgi/actions.py (original) +++ tracker/roundup-src/roundup/cgi/actions.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -#$Id: actions.py,v 1.62 2006/08/11 05:41:32 richard Exp $ +#$Id: actions.py,v 1.71 2007/09/20 23:44:58 jpend Exp $ import re, cgi, StringIO, urllib, Cookie, time, random, csv, codecs @@ -148,21 +148,16 @@ """ self.fakeFilterVars() queryname = self.getQueryName() - + # editing existing query name? - old_queryname = '' - for key in ('@old-queryname', ':old-queryname'): - if self.form.has_key(key): - old_queryname = self.form[key].value.strip() + old_queryname = self.getFromForm('old-queryname') # handle saving the query params if queryname: # parse the environment and figure what the query _is_ req = templating.HTMLRequest(self.client) - # The [1:] strips off the '?' character, it isn't part of the - # query string. - url = req.indexargs_url('', {})[1:] + url = self.getCurrentURL(req) key = self.db.query.getkey() if key: @@ -247,12 +242,29 @@ self.form.value.append(cgi.MiniFieldStorage('@filter', key)) - def getQueryName(self): - for key in ('@queryname', ':queryname'): + def getCurrentURL(self, req): + """Get current URL for storing as a query. + + Note: We are removing the first character from the current URL, + because the leading '?' is not part of the query string. 
+ + Implementation note: + But maybe the template should be part of the stored query: + template = self.getFromForm('template') + if template: + return req.indexargs_url('', {'@template' : template})[1:] + """ + return req.indexargs_url('', {})[1:] + + def getFromForm(self, name): + for key in ('@' + name, ':' + name): if self.form.has_key(key): return self.form[key].value.strip() return '' + def getQueryName(self): + return self.getFromForm('queryname') + class EditCSVAction(Action): name = 'edit' permissionType = 'Edit' @@ -355,9 +367,11 @@ deps = {} links = {} for cn, nodeid, propname, vlist in all_links: - if not all_props.has_key((cn, nodeid)): + numeric_id = int (nodeid or 0) + if not (numeric_id > 0 or all_props.has_key((cn, nodeid))): # link item to link to doesn't (and won't) exist continue + for value in vlist: if not all_props.has_key(value): # link item to link to doesn't (and won't) exist @@ -389,36 +403,33 @@ m = [] for needed in order: props = all_props[needed] - if not props: - # nothing to do - continue cn, nodeid = needed - - if nodeid is not None and int(nodeid) > 0: - # make changes to the node - props = self._changenode(cn, nodeid, props) - - # and some nice feedback for the user - if props: - info = ', '.join(map(self._, props.keys())) - m.append( - self._('%(class)s %(id)s %(properties)s edited ok') - % {'class':cn, 'id':nodeid, 'properties':info}) + if props: + if nodeid is not None and int(nodeid) > 0: + # make changes to the node + props = self._changenode(cn, nodeid, props) + + # and some nice feedback for the user + if props: + info = ', '.join(map(self._, props.keys())) + m.append( + self._('%(class)s %(id)s %(properties)s edited ok') + % {'class':cn, 'id':nodeid, 'properties':info}) + else: + m.append(self._('%(class)s %(id)s - nothing changed') + % {'class':cn, 'id':nodeid}) else: - m.append(self._('%(class)s %(id)s - nothing changed') - % {'class':cn, 'id':nodeid}) - else: - assert props + assert props - # make a new node - newid = self._createnode(cn, props) - if nodeid is None: - self.nodeid = newid - nodeid = newid - - # and some nice feedback for the user - m.append(self._('%(class)s %(id)s created') - % {'class':cn, 'id':newid}) + # make a new node + newid = self._createnode(cn, props) + if nodeid is None: + self.nodeid = newid + nodeid = newid + + # and some nice feedback for the user + m.append(self._('%(class)s %(id)s created') + % {'class':cn, 'id':newid}) # fill in new ids in links if links.has_key(needed): @@ -760,7 +771,7 @@ except (ValueError, KeyError), message: self.client.error_message.append(str(message)) return - self.finishRego() + return self.finishRego() class RegisterAction(RegoCommon, EditCommon): name = 'register' Modified: tracker/roundup-src/roundup/cgi/client.py ============================================================================== --- tracker/roundup-src/roundup/cgi/client.py (original) +++ tracker/roundup-src/roundup/cgi/client.py Sun Mar 9 09:26:16 2008 @@ -1,4 +1,4 @@ -# $Id: client.py,v 1.229 2006/11/15 06:27:15 a1s Exp $ +# $Id: client.py,v 1.238 2007/09/22 21:20:57 jpend Exp $ """WWW request handler (also used in the stand-alone server). 
""" @@ -7,6 +7,7 @@ import base64, binascii, cgi, codecs, mimetypes, os import random, re, rfc822, stat, time, urllib, urlparse import Cookie, socket, errno +from Cookie import CookieError, BaseCookie, SimpleCookie from roundup import roundupdb, date, hyperdb, password from roundup.cgi import templating, cgitb, TranslationService @@ -46,12 +47,47 @@ return match.group(1) return '<%s>'%match.group(2) + error_message = ""'''An error has occurred

              An error has occurred

              A problem was encountered processing your request. The tracker maintainers have been notified of the problem.

              ''' + +class LiberalCookie(SimpleCookie): + ''' Python's SimpleCookie throws an exception if the cookie uses invalid + syntax. Other applications on the same server may have done precisely + this, preventing roundup from working through no fault of roundup. + Numerous other python apps have run into the same problem: + + trac: http://trac.edgewall.org/ticket/2256 + mailman: http://bugs.python.org/issue472646 + + This particular implementation comes from trac's solution to the + problem. Unfortunately it requires some hackery in SimpleCookie's + internals to provide a more liberal __set method. + ''' + def load(self, rawdata, ignore_parse_errors=True): + if ignore_parse_errors: + self.bad_cookies = [] + self._BaseCookie__set = self._loose_set + SimpleCookie.load(self, rawdata) + if ignore_parse_errors: + self._BaseCookie__set = self._strict_set + for key in self.bad_cookies: + del self[key] + + _strict_set = BaseCookie._BaseCookie__set + + def _loose_set(self, key, real_value, coded_value): + try: + self._strict_set(key, real_value, coded_value) + except CookieError: + self.bad_cookies.append(key) + dict.__setitem__(self, key, None) + + class Client: '''Instantiate to handle one CGI request. @@ -181,7 +217,9 @@ self.charset = self.STORAGE_CHARSET # parse cookies (used in charset and session lookups) - self.cookie = Cookie.SimpleCookie(self.env.get('HTTP_COOKIE', '')) + # use our own LiberalCookie to handle bad apps on the same + # server that have set cookies that are out of spec + self.cookie = LiberalCookie(self.env.get('HTTP_COOKIE', '')) self.user = None self.userid = None @@ -287,7 +325,12 @@ self.additional_headers['Expires'] = rfc822.formatdate(date) # render the content - self.write_html(self.renderContext()) + try: + self.write_html(self.renderContext()) + except IOError: + # IOErrors here are due to the client disconnecting before + # recieving the reply. + pass except SeriousError, message: self.write_html(str(message)) @@ -319,9 +362,17 @@ self.template = '' self.error_message.append(message) self.write_html(self.renderContext()) - except NotFound: - # pass through - raise + except NotFound, e: + self.response_code = 404 + self.template = '404' + try: + cl = self.db.getclass(self.classname) + self.write_html(self.renderContext()) + except KeyError: + # we can't map the URL to a class we know about + # reraise the NotFound and let roundup_server + # handle it + raise NotFound, e except FormError, e: self.error_message.append(self._('Form Error: ') + str(e)) self.write_html(self.renderContext()) @@ -734,8 +785,7 @@ # spit out headers self.additional_headers['Content-Type'] = mime_type self.additional_headers['Content-Length'] = str(len(content)) - lmt = rfc822.formatdate(lmt) - self.additional_headers['Last-Modified'] = lmt + self.additional_headers['Last-Modified'] = rfc822.formatdate(lmt) ims = None # see if there's an if-modified-since... 
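# --- Editorial sketch (not part of the patch above) ---------------------
# Minimal use of the LiberalCookie class added in the client.py hunk
# above.  Per its docstring, entries that would make the stock
# SimpleCookie raise CookieError are collected in bad_cookies and dropped
# instead of aborting the whole parse, while well-formed entries stay
# readable as usual.  The header value and cookie name below are
# illustrative assumptions.
jar = LiberalCookie('roundup_session=abc123')
session = jar.get('roundup_session')
if session is not None:
    session_key = session.value    # -> 'abc123'
# ------------------------------------------------------------------------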
@@ -868,7 +918,13 @@ try: call(*args, **kwargs) except socket.error, err: - if err.errno not in self.IGNORE_NET_ERRORS: + err_errno = getattr (err, 'errno', None) + if err_errno is None: + try: + err_errno = err[0] + except TypeError: + pass + if err_errno not in self.IGNORE_NET_ERRORS: raise def write(self, content): @@ -880,8 +936,9 @@ def write_html(self, content): if not self.headers_done: # at this point, we are sure about Content-Type - self.additional_headers['Content-Type'] = \ - 'text/html; charset=%s' % self.charset + if not self.additional_headers.has_key('Content-Type'): + self.additional_headers['Content-Type'] = \ + 'text/html; charset=%s' % self.charset self.header() if self.env['REQUEST_METHOD'] == 'HEAD': Modified: tracker/roundup-src/roundup/cgi/form_parser.py ============================================================================== --- tracker/roundup-src/roundup/cgi/form_parser.py (original) +++ tracker/roundup-src/roundup/cgi/form_parser.py Sun Mar 9 09:26:16 2008 @@ -541,7 +541,7 @@ cl = self.db.classes[self.classname] if cl.get(nodeid, entry) is not None: required.remove(entry) - + # any required values not present? if not required: continue Modified: tracker/roundup-src/roundup/cgi/templating.py ============================================================================== --- tracker/roundup-src/roundup/cgi/templating.py (original) +++ tracker/roundup-src/roundup/cgi/templating.py Sun Mar 9 09:26:16 2008 @@ -3,7 +3,7 @@ """Implements the API used in the HTML templating for the web interface. """ -todo = ''' +todo = """ - Most methods should have a "default" arg to supply a value when none appears in the hyperdb or request. - Multilink property additions: change_note and new_upload @@ -11,11 +11,11 @@ - NumberHTMLProperty should support numeric operations - LinkHTMLProperty should handle comparisons to strings (cf. linked name) - HTMLRequest.default(self, sort, group, filter, columns, **filterspec): - """Set the request's view arguments to the given values when no + '''Set the request's view arguments to the given values when no values are found in the CGI environment. - """ + ''' - have menu() methods accept filtering arguments -''' +""" __docformat__ = 'restructuredtext' @@ -42,6 +42,10 @@ import StructuredText except ImportError: StructuredText = None +try: + from docutils.core import publish_parts as ReStructuredText +except ImportError: + ReStructuredText = None # bring in the templating support from roundup.cgi.PageTemplates import PageTemplate, GlobalTranslationService @@ -75,8 +79,8 @@ 'action': self.action, 'class': self.klass} def find_template(dir, name, view): - ''' Find a template in the nominated dir - ''' + """ Find a template in the nominated dir + """ # find the source if view: filename = '%s.%s'%(name, view) @@ -122,8 +126,8 @@ self.dir = dir def precompileTemplates(self): - ''' Go through a directory and precompile all the templates therein - ''' + """ Go through a directory and precompile all the templates therein + """ for filename in os.listdir(self.dir): # skip subdirs if os.path.isdir(filename): @@ -147,7 +151,7 @@ self.get(filename, None) def get(self, name, extension=None): - ''' Interface to get a template, possibly loading a compiled template. + """ Interface to get a template, possibly loading a compiled template. "name" and "extension" indicate the template we're after, which in most cases will be "name.extension". 
If "extension" is None, then @@ -155,7 +159,7 @@ If the file "name.extension" doesn't exist, we look for "_generic.extension" as a fallback. - ''' + """ # default the name to "home" if name is None: name = 'home' @@ -290,12 +294,12 @@ return c class RoundupPageTemplate(PageTemplate.PageTemplate): - '''A Roundup-specific PageTemplate. + """A Roundup-specific PageTemplate. Interrogate the client to set up Roundup-specific template variables to be available. See 'context' function for the list of variables. - ''' + """ # 06-jun-2004 [als] i am not sure if this method is used yet def getContext(self, client, classname, request): @@ -327,8 +331,8 @@ return ''%self.id class HTMLDatabase: - ''' Return HTMLClasses for valid class fetches - ''' + """ Return HTMLClasses for valid class fetches + """ def __init__(self, client): self._client = client self._ = client._ @@ -362,26 +366,35 @@ m.append(HTMLClass(self._client, item)) return m -def lookupIds(db, prop, ids, fail_ok=0, num_re=re.compile('^-?\d+$')): - ''' "fail_ok" should be specified if we wish to pass through bad values +num_re = re.compile('^-?\d+$') + +def lookupIds(db, prop, ids, fail_ok=0, num_re=num_re, do_lookup=True): + """ "fail_ok" should be specified if we wish to pass through bad values (most likely form values that we wish to represent back to the user) - ''' + "do_lookup" is there for preventing lookup by key-value (if we + know that the value passed *is* an id) + """ cl = db.getclass(prop.classname) l = [] for entry in ids: - try: - l.append(cl.lookup(entry)) - except (TypeError, KeyError): - # if fail_ok, ignore lookup error - # otherwise entry must be existing object id rather than key value - if fail_ok or num_re.match(entry): - l.append(entry) + if do_lookup: + try: + item = cl.lookup(entry) + except (TypeError, KeyError): + pass + else: + l.append(item) + continue + # if fail_ok, ignore lookup error + # otherwise entry must be existing object id rather than key value + if fail_ok or num_re.match(entry): + l.append(entry) return l -def lookupKeys(linkcl, key, ids, num_re=re.compile('^-?\d+$')): - ''' Look up the "key" values for "ids" list - though some may already +def lookupKeys(linkcl, key, ids, num_re=num_re): + """ Look up the "key" values for "ids" list - though some may already be key values, not ids. - ''' + """ l = [] for entry in ids: if num_re.match(entry): @@ -409,7 +422,7 @@ def input_html4(**attrs): """Generate an 'input' (html4) element with given attributes""" - _set_input_default_args(attrs) + _set_input_default_args(attrs) return ''%' '.join(['%s="%s"'%(k,cgi.escape(str(v), True)) for k,v in attrs.items()]) @@ -420,7 +433,7 @@ for k,v in attrs.items()]) class HTMLInputMixin: - ''' requires a _client property ''' + """ requires a _client property """ def __init__(self): html_version = 'html4' if hasattr(self._client.instance.config, 'HTML_VERSION'): @@ -445,25 +458,25 @@ class HTMLPermissions: def view_check(self): - ''' Raise the Unauthorised exception if the user's not permitted to + """ Raise the Unauthorised exception if the user's not permitted to view this class. - ''' + """ if not self.is_view_ok(): raise Unauthorised("view", self._classname, translator=self._client.translator) def edit_check(self): - ''' Raise the Unauthorised exception if the user's not permitted to + """ Raise the Unauthorised exception if the user's not permitted to edit items of this class. 
- ''' + """ if not self.is_edit_ok(): raise Unauthorised("edit", self._classname, translator=self._client.translator) class HTMLClass(HTMLInputMixin, HTMLPermissions): - ''' Accesses through a class (either through *class* or *db.*) - ''' + """ Accesses through a class (either through *class* or *db.*) + """ def __init__(self, client, classname, anonymous=0): self._client = client self._ = client._ @@ -479,29 +492,28 @@ HTMLInputMixin.__init__(self) def is_edit_ok(self): - ''' Is the user allowed to Create the current class? - ''' + """ Is the user allowed to Create the current class? + """ return self._db.security.hasPermission('Create', self._client.userid, self._classname) def is_view_ok(self): - ''' Is the user allowed to View the current class? - ''' + """ Is the user allowed to View the current class? + """ return self._db.security.hasPermission('View', self._client.userid, self._classname) def is_only_view_ok(self): - ''' Is the user only allowed to View (ie. not Create) the current class? - ''' + """ Is the user only allowed to View (ie. not Create) the current class? + """ return self.is_view_ok() and not self.is_edit_ok() def __repr__(self): return ''%(id(self), self.classname) def __getitem__(self, item): - ''' return an HTMLProperty instance - ''' - #print 'HTMLClass.getitem', (self, item) + """ return an HTMLProperty instance + """ # we don't exist if item == 'id': @@ -543,19 +555,19 @@ raise KeyError, item def __getattr__(self, attr): - ''' convenience access ''' + """ convenience access """ try: return self[attr] except KeyError: raise AttributeError, attr def designator(self): - ''' Return this class' designator (classname) ''' + """ Return this class' designator (classname) """ return self._classname - def getItem(self, itemid, num_re=re.compile('^-?\d+$')): - ''' Get an item of this class by its item id. - ''' + def getItem(self, itemid, num_re=num_re): + """ Get an item of this class by its item id. + """ # make sure we're looking at an itemid if not isinstance(itemid, type(1)) and not num_re.match(itemid): itemid = self._klass.lookup(itemid) @@ -563,8 +575,8 @@ return HTMLItem(self._client, self.classname, itemid) def properties(self, sort=1): - ''' Return HTMLProperty for all of this class' properties. - ''' + """ Return HTMLProperty for all of this class' properties. + """ l = [] for name, prop in self._props.items(): for klass, htmlklass in propclasses: @@ -580,8 +592,8 @@ return l def list(self, sort_on=None): - ''' List all items in this class. - ''' + """ List all items in this class. + """ # get the list and sort it nicely l = self._klass.list() sortfunc = make_sort_function(self._db, self._classname, sort_on) @@ -597,8 +609,8 @@ return l def csv(self): - ''' Return the items of this class as a chunk of CSV text. - ''' + """ Return the items of this class as a chunk of CSV text. + """ props = self.propnames() s = StringIO.StringIO() writer = csv.writer(s) @@ -617,18 +629,18 @@ return s.getvalue() def propnames(self): - ''' Return the list of the names of the properties of this class. - ''' + """ Return the list of the names of the properties of this class. 
+ """ idlessprops = self._klass.getprops(protected=0).keys() idlessprops.sort() return ['id'] + idlessprops def filter(self, request=None, filterspec={}, sort=[], group=[]): - ''' Return a list of items from this class, filtered and sorted + """ Return a list of items from this class, filtered and sorted by the current requested filterspec/filter/sort/group args "request" takes precedence over the other three arguments. - ''' + """ if request is not None: filterspec = request.filterspec sort = request.sort @@ -645,7 +657,7 @@ def classhelp(self, properties=None, label=''"(list)", width='500', height='400', property='', form='itemSynopsis', pagesize=50, inputtype="checkbox", sort=None, filter=None): - '''Pop up a javascript window with class help + """Pop up a javascript window with class help This generates a link to a popup window which displays the properties indicated by "properties" of the class named by @@ -674,7 +686,7 @@ If the "form" arg is given, it's passed through to the javascript help_window function. - it's the name of the form the "property" belongs to. - ''' + """ if properties is None: properties = self._klass.getprops(protected=0).keys() properties.sort() @@ -711,15 +723,15 @@ return '%s' % \ (help_url, onclick, self._(label)) - def submit(self, label=''"Submit New Entry"): - ''' Generate a submit button (and action hidden element) + def submit(self, label=''"Submit New Entry", action="new"): + """ Generate a submit button (and action hidden element) Generate nothing if we're not editable. - ''' + """ if not self.is_edit_ok(): return '' - return self.input(type="hidden", name="@action", value="new") + \ + return self.input(type="hidden", name="@action", value=action) + \ '\n' + \ self.input(type="submit", name="submit_button", value=self._(label)) @@ -729,8 +741,8 @@ return self._('New node - no history') def renderWith(self, name, **kwargs): - ''' Render this class with the given template. - ''' + """ Render this class with the given template. + """ # create a new request and override the specified args req = HTMLRequest(self._client) req.classname = self.classname @@ -747,8 +759,8 @@ return pt.render(self._client, self.classname, req, **args) class _HTMLItem(HTMLInputMixin, HTMLPermissions): - ''' Accesses through an *item* - ''' + """ Accesses through an *item* + """ def __init__(self, client, classname, nodeid, anonymous=0): self._client = client self._db = client.db @@ -763,22 +775,22 @@ HTMLInputMixin.__init__(self) def is_edit_ok(self): - ''' Is the user allowed to Edit the current class? - ''' + """ Is the user allowed to Edit the current class? + """ return self._db.security.hasPermission('Edit', self._client.userid, self._classname, itemid=self._nodeid) def is_view_ok(self): - ''' Is the user allowed to View the current class? - ''' + """ Is the user allowed to View the current class? + """ if self._db.security.hasPermission('View', self._client.userid, self._classname, itemid=self._nodeid): return 1 return self.is_edit_ok() def is_only_view_ok(self): - ''' Is the user only allowed to View (ie. not Edit) the current class? - ''' + """ Is the user only allowed to View (ie. not Edit) the current class? 
+ """ return self.is_view_ok() and not self.is_edit_ok() def __repr__(self): @@ -786,11 +798,10 @@ self._nodeid) def __getitem__(self, item): - ''' return an HTMLProperty instance + """ return an HTMLProperty instance this now can handle transitive lookups where item is of the form x.y.z - ''' - #print 'HTMLItem.getitem', (self, item) + """ if item == 'id': return self._nodeid @@ -827,7 +838,7 @@ raise KeyError, item def __getattr__(self, attr): - ''' convenience access to properties ''' + """ convenience access to properties """ try: return self[attr] except KeyError: @@ -841,19 +852,19 @@ """Is this item retired?""" return self._klass.is_retired(self._nodeid) - def submit(self, label=''"Submit Changes"): + def submit(self, label=''"Submit Changes", action="edit"): """Generate a submit button. Also sneak in the lastactivity and action hidden elements. """ return self.input(type="hidden", name="@lastactivity", value=self.activity.local(0)) + '\n' + \ - self.input(type="hidden", name="@action", value="edit") + '\n' + \ + self.input(type="hidden", name="@action", value=action) + '\n' + \ self.input(type="submit", name="submit_button", value=self._(label)) def journal(self, direction='descending'): - ''' Return a list of HTMLJournalEntry instances. - ''' + """ Return a list of HTMLJournalEntry instances. + """ # XXX do this return [] @@ -1091,8 +1102,8 @@ return '\n'.join(l) def renderQueryForm(self): - ''' Render this item, which is a query, as a search form. - ''' + """ Render this item, which is a query, as a search form. + """ # create a new request and override the specified args req = HTMLRequest(self._client) req.classname = self._klass.get(self._nodeid, 'klass') @@ -1107,9 +1118,9 @@ return pt.render(self._client, req.classname, req) def download_url(self): - ''' Assume that this item is a FileClass and that it has a name + """ Assume that this item is a FileClass and that it has a name and content. Construct a URL for the download of the content. - ''' + """ name = self._klass.get(self._nodeid, 'name') url = '%s%s/%s'%(self._classname, self._nodeid, name) return urllib.quote(url) @@ -1138,23 +1149,23 @@ for key, value in query.items()]) class _HTMLUser(_HTMLItem): - '''Add ability to check for permissions on users. - ''' + """Add ability to check for permissions on users. + """ _marker = [] def hasPermission(self, permission, classname=_marker, property=None, itemid=None): - '''Determine if the user has the Permission. + """Determine if the user has the Permission. The class being tested defaults to the template's class, but may be overidden for this test by suppling an alternate classname. - ''' + """ if classname is self._marker: classname = self._client.classname return self._db.security.hasPermission(permission, self._nodeid, classname, property, itemid) def hasRole(self, rolename): - '''Determine whether the user has the Role.''' + """Determine whether the user has the Role.""" roles = self._db.user.get(self._nodeid, 'roles').split(',') for role in roles: if role.strip() == rolename: return True @@ -1167,7 +1178,7 @@ return _HTMLItem(client, classname, nodeid, anonymous) class HTMLProperty(HTMLInputMixin, HTMLPermissions): - ''' String, Number, Date, Interval HTMLProperty + """ String, Number, Date, Interval HTMLProperty Has useful attributes: @@ -1175,7 +1186,7 @@ _value the value of the property if any A wrapper object which may be stringified for the plain() behaviour. 
- ''' + """ def __init__(self, client, classname, nodeid, prop, name, value, anonymous=0): self._client = client @@ -1208,14 +1219,14 @@ return not not self._value def isset(self): - '''Is my _value not None?''' + """Is my _value not None?""" return self._value is not None def is_edit_ok(self): - '''Should the user be allowed to use an edit form field for this + """Should the user be allowed to use an edit form field for this property. Check "Create" for new items, or "Edit" for existing ones. - ''' + """ if self._nodeid: return self._db.security.hasPermission('Edit', self._client.userid, self._classname, self._name, self._nodeid) @@ -1223,8 +1234,8 @@ self._classname, self._name) def is_view_ok(self): - ''' Is the user allowed to View the current class? - ''' + """ Is the user allowed to View the current class? + """ if self._db.security.hasPermission('View', self._client.userid, self._classname, self._name, self._nodeid): return 1 @@ -1234,6 +1245,19 @@ hyper_re = re.compile(r'((?P\w{3,6}://\S+[\w/])|' r'(?P[-+=%/\w\.]+@[\w\.\-]+)|' r'(?P(?P[A-Za-z_]+)(\s*)(?P\d+)))') + def _hyper_repl_item(self,match,replacement): + item = match.group('item') + cls = match.group('class').lower() + id = match.group('id') + try: + # make sure cls is a valid tracker classname + cl = self._db.getclass(cls) + if not cl.hasnode(id): + return item + return replacement % locals() + except KeyError: + return item + def _hyper_repl(self, match): if match.group('url'): s = match.group('url') @@ -1242,29 +1266,30 @@ s = match.group('email') return '%s'%(s, s) else: - s = match.group('item') - s1 = match.group('class').lower() - s2 = match.group('id') - try: - # make sure s1 is a valid tracker classname - cl = self._db.getclass(s1) - if not cl.hasnode(s2): - return s - return '%s'%(s1, s2, s) - except KeyError: - return s + return self._hyper_repl_item(match, + '%(item)s') + + def _hyper_repl_rst(self, match): + if match.group('url'): + s = match.group('url') + return '`%s <%s>`_'%(s, s) + elif match.group('email'): + s = match.group('email') + return '`%s `_'%(s, s) + else: + return self._hyper_repl_item(match,'`%(item)s <%(cls)s%(id)s>`_') def hyperlinked(self): - ''' Render a "hyperlinked" version of the text ''' + """ Render a "hyperlinked" version of the text """ return self.plain(hyperlink=1) def plain(self, escape=0, hyperlink=0): - '''Render a "plain" representation of the property + """Render a "plain" representation of the property - "escape" turns on/off HTML quoting - "hyperlink" turns on/off in-text hyperlinking of URLs, email addresses and designators - ''' + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1282,7 +1307,7 @@ return s def wrapped(self, escape=1, hyperlink=1): - '''Render a "wrapped" representation of the property. + """Render a "wrapped" representation of the property. We wrap long lines at 80 columns on the nearest whitespace. Lines with no whitespace are not broken to force wrapping. @@ -1293,7 +1318,7 @@ - "escape" turns on/off HTML quoting - "hyperlink" turns on/off in-text hyperlinking of URLs, email addresses and designators - ''' + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1310,10 +1335,10 @@ return s def stext(self, escape=0, hyperlink=1): - ''' Render the value of the property as StructuredText. + """ Render the value of the property as StructuredText. This requires the StructureText module to be installed separately. 
- ''' + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1322,11 +1347,27 @@ return s return StructuredText(s,level=1,header=0) + def rst(self, hyperlink=1): + """ Render the value of the property as ReStructuredText. + + This requires docutils to be installed separately. + """ + if not self.is_view_ok(): + return self._('[hidden]') + + if not ReStructuredText: + return self.plain(escape=0, hyperlink=hyperlink) + s = self.plain(escape=0, hyperlink=0) + if hyperlink: + s = self.hyper_re.sub(self._hyper_repl_rst, s) + return ReStructuredText(s, writer_name="html")["body"].encode("utf-8", + "replace") + def field(self, **kwargs): - ''' Render the property as a field in HTML. + """ Render the property as a field in HTML. If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() @@ -1338,11 +1379,11 @@ kwargs.update({"name": self._formname, "value": value}) return self.input(**kwargs) - def multiline(self, escape=0, rows=5, cols=40): - ''' Render a multiline form edit field for the property. + def multiline(self, escape=0, rows=5, cols=40, **kwargs): + """ Render a multiline form edit field for the property. If not editable, just display the plain() value in a
<pre>
            tag.
-        '''
+        """
        if not self.is_edit_ok():
            return '<pre>%s</pre>
              '%self.plain() @@ -1353,13 +1394,15 @@ value = '"'.join(value.split('"')) name = self._formname - return ('') % locals() def email(self, escape=1): - ''' Render the value of the property as an obscured email address - ''' + """ Render the value of the property as an obscured email address + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1381,8 +1424,8 @@ class PasswordHTMLProperty(HTMLProperty): def plain(self): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1391,22 +1434,22 @@ return self._('*encrypted*') def field(self, size=30): - ''' Render a form edit field for the property. + """ Render a form edit field for the property. If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() return self.input(type="password", name=self._formname, size=size) def confirm(self, size=30): - ''' Render a second form edit field for the property, used for + """ Render a second form edit field for the property, used for confirmation that the user typed the password correctly. Generates a field with name "@confirm at name". If not editable, display nothing. - ''' + """ if not self.is_edit_ok(): return '' @@ -1417,8 +1460,8 @@ class NumberHTMLProperty(HTMLProperty): def plain(self): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1428,10 +1471,10 @@ return str(self._value) def field(self, size=30): - ''' Render a form edit field for the property. + """ Render a form edit field for the property. If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() @@ -1442,20 +1485,20 @@ return self.input(name=self._formname, value=value, size=size) def __int__(self): - ''' Return an int of me - ''' + """ Return an int of me + """ return int(self._value) def __float__(self): - ''' Return a float of me - ''' + """ Return a float of me + """ return float(self._value) class BooleanHTMLProperty(HTMLProperty): def plain(self): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1464,10 +1507,10 @@ return self._value and self._("Yes") or self._("No") def field(self): - ''' Render a form edit field for the property + """ Render a form edit field for the property If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() @@ -1507,8 +1550,8 @@ self._offset = self._prop.offset (self._db) def plain(self): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1521,11 +1564,11 @@ return str(self._value.local(offset)) def now(self, str_interval=None): - ''' Return the current time. + """ Return the current time. This is useful for defaulting a new value. Returns a DateHTMLProperty. - ''' + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1546,7 +1589,7 @@ self._prop, self._formname, ret) def field(self, size=30, default=None, format=_marker, popcal=True): - '''Render a form edit field for the property + """Render a form edit field for the property If not editable, just display the value via plain(). @@ -1554,7 +1597,7 @@ Default=yes. 
The format string is a standard python strftime format string. - ''' + """ if not self.is_edit_ok(): if format is self._marker: return self.plain() @@ -1568,7 +1611,7 @@ raw_value = None else: if isinstance(default, basestring): - raw_value = Date(default, translator=self._client) + raw_value = date.Date(default, translator=self._client) elif isinstance(default, date.Date): raw_value = default elif isinstance(default, DateHTMLProperty): @@ -1606,10 +1649,10 @@ return s def reldate(self, pretty=1): - ''' Render the interval between the date and now. + """ Render the interval between the date and now. If the "pretty" flag is true, then make the display pretty. - ''' + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1623,13 +1666,13 @@ return str(interval) def pretty(self, format=_marker): - ''' Render the date in a pretty format (eg. month names, spaces). + """ Render the date in a pretty format (eg. month names, spaces). The format string is a standard python strftime format string. Note that if the day is zero, and appears at the start of the string, then it'll be stripped from the output. This is handy for the situation when a date only specifies a month and a year. - ''' + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1646,8 +1689,8 @@ return self._value.local(offset).pretty() def local(self, offset): - ''' Return the date/time as a local (timezone offset) date/time. - ''' + """ Return the date/time as a local (timezone offset) date/time. + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1678,8 +1721,8 @@ self._value.setTranslator(self._client.translator) def plain(self): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1688,18 +1731,18 @@ return str(self._value) def pretty(self): - ''' Render the interval in a pretty format (eg. "yesterday") - ''' + """ Render the interval in a pretty format (eg. "yesterday") + """ if not self.is_view_ok(): return self._('[hidden]') return self._value.pretty() def field(self, size=30): - ''' Render a form edit field for the property + """ Render a form edit field for the property If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() @@ -1710,7 +1753,7 @@ return self.input(name=self._formname, value=value, size=size) class LinkHTMLProperty(HTMLProperty): - ''' Link HTMLProperty + """ Link HTMLProperty Include the above as well as being able to access the class information. Stringifying the object itself results in the value from the item being displayed. 
Accessing attributes of this object @@ -1718,7 +1761,7 @@ property accessed (so item/assignedto/name would look up the user entry identified by the assignedto property on item, and then the name property of that user) - ''' + """ def __init__(self, *args, **kw): HTMLProperty.__init__(self, *args, **kw) # if we're representing a form value, then the -1 from the form really @@ -1727,7 +1770,7 @@ self._value = None def __getattr__(self, attr): - ''' return a new HTMLItem ''' + """ return a new HTMLItem """ if not self._value: # handle a special page templates lookup if attr == '__render_with_namespace__': @@ -1740,8 +1783,8 @@ return getattr(i, attr) def plain(self, escape=0): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1749,16 +1792,19 @@ return '' linkcl = self._db.classes[self._prop.classname] k = linkcl.labelprop(1) - value = str(linkcl.get(self._value, k)) + if num_re.match(self._value): + value = str(linkcl.get(self._value, k)) + else : + value = self._value if escape: value = cgi.escape(value) return value def field(self, showid=0, size=None): - ''' Render a form edit field for the property + """ Render a form edit field for the property If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() @@ -1768,7 +1814,7 @@ value = '' else: k = linkcl.getkey() - if k: + if k and num_re.match(self._value): value = linkcl.get(self._value, k) else: value = self._value @@ -1776,7 +1822,7 @@ def menu(self, size=None, height=None, showid=0, additional=[], value=None, sort_on=None, **conditions): - ''' Render a form select list for this property + """ Render a form select list for this property "size" is used to limit the length of the list labels "height" is used to set the ') return '\n'.join(l) @@ -1863,16 +1909,16 @@ class MultilinkHTMLProperty(HTMLProperty): - ''' Multilink HTMLProperty + """ Multilink HTMLProperty Also be iterable, returning a wrapper object like the Link case for each entry in the multilink. 
- ''' + """ def __init__(self, *args, **kwargs): HTMLProperty.__init__(self, *args, **kwargs) if self._value: display_value = lookupIds(self._db, self._prop, self._value, - fail_ok=1) + fail_ok=1, do_lookup=False) sortfun = make_sort_function(self._db, self._prop.classname) # sorting fails if the value contains # items not yet stored in the database @@ -1884,15 +1930,15 @@ self._value = display_value def __len__(self): - ''' length of the multilink ''' + """ length of the multilink """ return len(self._value) def __getattr__(self, attr): - ''' no extended attribute accesses make sense here ''' + """ no extended attribute accesses make sense here """ raise AttributeError, attr def viewableGenerator(self, values): - '''Used to iterate over only the View'able items in a class.''' + """Used to iterate over only the View'able items in a class.""" check = self._db.security.hasPermission userid = self._client.userid classname = self._prop.classname @@ -1901,36 +1947,36 @@ yield HTMLItem(self._client, classname, value) def __iter__(self): - ''' iterate and return a new HTMLItem - ''' + """ iterate and return a new HTMLItem + """ return self.viewableGenerator(self._value) def reverse(self): - ''' return the list in reverse order - ''' + """ return the list in reverse order + """ l = self._value[:] l.reverse() return self.viewableGenerator(l) def sorted(self, property): - ''' Return this multilink sorted by the given property ''' + """ Return this multilink sorted by the given property """ value = list(self.__iter__()) value.sort(lambda a,b:cmp(a[property], b[property])) return value def __contains__(self, value): - ''' Support the "in" operator. We have to make sure the passed-in + """ Support the "in" operator. We have to make sure the passed-in value is a string first, not a HTMLProperty. - ''' + """ return str(value) in self._value def isset(self): - '''Is my _value not []?''' + """Is my _value not []?""" return self._value != [] def plain(self, escape=0): - ''' Render a "plain" representation of the property - ''' + """ Render a "plain" representation of the property + """ if not self.is_view_ok(): return self._('[hidden]') @@ -1948,10 +1994,10 @@ return value def field(self, size=30, showid=0): - ''' Render a form edit field for the property + """ Render a form edit field for the property If not editable, just display the value via plain(). - ''' + """ if not self.is_edit_ok(): return self.plain() @@ -1968,7 +2014,7 @@ def menu(self, size=None, height=None, showid=0, additional=[], value=None, sort_on=None, **conditions): - ''' Render a form list for this property. "size" is used to limit the length of the list labels "height" is used to set the + + query @@ -93,7 +95,7 @@ [not yours to edit] - + Modified: tracker/roundup-src/templates/classic/html/user.item.html ============================================================================== --- tracker/roundup-src/templates/classic/html/user.item.html (original) +++ tracker/roundup-src/templates/classic/html/user.item.html Sun Mar 9 09:26:16 2008 @@ -43,6 +43,7 @@
              -

              You are not - allowed to view this page.

              +

              + You are not allowed to view this page.

              + +

              + Please login with your username and password.

              To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: Subject: [issue] Testing... @@ -154,7 +155,7 @@ charset="iso-8859-1" From: Chef To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: [issue] Testing... @@ -175,7 +176,7 @@ charset="iso-8859-1" From: Chef To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: [issue] Testing... @@ -189,7 +190,7 @@ def testAlternateAddress(self): self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: John Doe +From: John Doe To: issue_tracker at your.tracker.email.domain.example Message-Id: Subject: [issue] Testing... @@ -206,7 +207,7 @@ charset="iso-8859-1" From: Chef To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: Testing... @@ -228,16 +229,17 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, mary at test, richard at test +TO: chef at bork.bork.bork, mary at test.test, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... -To: chef at bork.bork.bork, mary at test, richard at test +To: chef at bork.bork.bork, mary at test.test, richard at test.test From: "Bork, Chef" Reply-To: Roundup issue tracker MIME-Version: 1.0 Message-Id: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: unread Content-Transfer-Encoding: quoted-printable @@ -258,20 +260,196 @@ _______________________________________________________________________ ''') - # BUG - # def testMultipart(self): - # '''With more than one part''' - # see MultipartEnc tests: but if there is more than one part - # we return a multipart/mixed and the boundary contains - # the ip address of the test machine. + def testNewIssueNoAuthorInfo(self): + self.db.config.MAIL_ADD_AUTHORINFO = 'no' + self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +Subject: [issue] Testing... [nosy=mary; assignedto=richard] - # BUG should test some binary attamchent too. +This is a test submission of a new issue. +''') + self.compareMessages(self._get_mail(), +'''FROM: roundup-admin at your.tracker.email.domain.example +TO: chef at bork.bork.bork, mary at test.test, richard at test.test +Content-Type: text/plain; charset=utf-8 +Subject: [issue1] Testing... +To: mary at test.test, richard at test.test +From: "Bork, Chef" +Reply-To: Roundup issue tracker +MIME-Version: 1.0 +Message-Id: +X-Roundup-Name: Roundup issue tracker +X-Roundup-Loop: hello +X-Roundup-Issue-Status: unread +Content-Transfer-Encoding: quoted-printable + +This is a test submission of a new issue. + +---------- +assignedto: richard +messages: 1 +nosy: Chef, mary, richard +status: unread +title: Testing... + +_______________________________________________________________________ +Roundup issue tracker + +_______________________________________________________________________ +''') + + def testNewIssueNoAuthorEmail(self): + self.db.config.MAIL_ADD_AUTHOREMAIL = 'no' + self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +Subject: [issue] Testing... 
[nosy=mary; assignedto=richard] + +This is a test submission of a new issue. +''') + self.compareMessages(self._get_mail(), +'''FROM: roundup-admin at your.tracker.email.domain.example +TO: chef at bork.bork.bork, mary at test.test, richard at test.test +Content-Type: text/plain; charset=utf-8 +Subject: [issue1] Testing... +To: mary at test.test, richard at test.test +From: "Bork, Chef" +Reply-To: Roundup issue tracker +MIME-Version: 1.0 +Message-Id: +X-Roundup-Name: Roundup issue tracker +X-Roundup-Loop: hello +X-Roundup-Issue-Status: unread +Content-Transfer-Encoding: quoted-printable + +New submission from Bork, Chef: + +This is a test submission of a new issue. + +---------- +assignedto: richard +messages: 1 +nosy: Chef, mary, richard +status: unread +title: Testing... + +_______________________________________________________________________ +Roundup issue tracker + +_______________________________________________________________________ +''') + + multipart_msg = '''Content-Type: text/plain; + charset="iso-8859-1" +From: mary +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +In-Reply-To: +Subject: [issue1] Testing... +Content-Type: multipart/mixed; boundary="bxyzzy" +Content-Disposition: inline + + +--bxyzzy +Content-Type: multipart/alternative; boundary="bCsyhTFzCvuiizWE" +Content-Disposition: inline + +--bCsyhTFzCvuiizWE +Content-Type: text/plain; charset=us-ascii +Content-Disposition: inline + +test attachment first text/plain + +--bCsyhTFzCvuiizWE +Content-Type: application/octet-stream +Content-Disposition: attachment; filename="first.dvi" +Content-Transfer-Encoding: base64 + +SnVzdCBhIHRlc3QgAQo= + +--bCsyhTFzCvuiizWE +Content-Type: text/plain; charset=us-ascii +Content-Disposition: inline + +test attachment second text/plain + +--bCsyhTFzCvuiizWE +Content-Type: text/html +Content-Disposition: inline + + +to be ignored. 
+ + +--bCsyhTFzCvuiizWE-- + +--bxyzzy +Content-Type: multipart/alternative; boundary="bCsyhTFzCvuiizWF" +Content-Disposition: inline + +--bCsyhTFzCvuiizWF +Content-Type: text/plain; charset=us-ascii +Content-Disposition: inline + +test attachment third text/plain + +--bCsyhTFzCvuiizWF +Content-Type: application/octet-stream +Content-Disposition: attachment; filename="second.dvi" +Content-Transfer-Encoding: base64 + +SnVzdCBhIHRlc3QK + +--bCsyhTFzCvuiizWF-- + +--bxyzzy-- +''' + + def testMultipartKeepAlternatives(self): + self.doNewIssue() + self._handle_mail(self.multipart_msg) + messages = self.db.issue.get('1', 'messages') + messages.sort() + msg = self.db.msg.getnode (messages[-1]) + assert(len(msg.files) == 5) + names = {0 : 'first.dvi', 4 : 'second.dvi'} + content = {3 : 'test attachment third text/plain\n', + 4 : 'Just a test\n'} + for n, id in enumerate (msg.files): + f = self.db.file.getnode (id) + self.assertEqual(f.name, names.get (n, 'unnamed')) + if n in content : + self.assertEqual(f.content, content [n]) + self.assertEqual(msg.content, 'test attachment second text/plain') + + def testMultipartDropAlternatives(self): + self.doNewIssue() + self.db.config.MAILGW_IGNORE_ALTERNATIVES = True + self._handle_mail(self.multipart_msg) + messages = self.db.issue.get('1', 'messages') + messages.sort() + msg = self.db.msg.getnode (messages[-1]) + assert(len(msg.files) == 2) + names = {1 : 'second.dvi'} + content = {0 : 'test attachment third text/plain\n', + 1 : 'Just a test\n'} + for n, id in enumerate (msg.files): + f = self.db.file.getnode (id) + self.assertEqual(f.name, names.get (n, 'unnamed')) + if n in content : + self.assertEqual(f.content, content [n]) + self.assertEqual(msg.content, 'test attachment second text/plain') def testSimpleFollowup(self): self.doNewIssue() self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: mary +From: mary To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -281,10 +459,10 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, richard at test +TO: chef at bork.bork.bork, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... -To: chef at bork.bork.bork, richard at test +To: chef at bork.bork.bork, richard at test.test From: "Contrary, Mary" Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -292,10 +470,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -Contrary, Mary added the comment: +Contrary, Mary added the comment: This is a second followup @@ -313,7 +492,7 @@ self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -328,10 +507,10 @@ self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, john at test, mary at test +TO: chef at bork.bork.bork, john at test.test, mary at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... 
-To: chef at bork.bork.bork, john at test, mary at test +To: chef at bork.bork.bork, john at test.test, mary at test.test From: richard Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -339,10 +518,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -richard added the comment: +richard added the comment: This is a followup @@ -357,6 +537,50 @@ _______________________________________________________________________ ''') + def testPropertyChangeOnly(self): + self.doNewIssue() + oldvalues = self.db.getnode('issue', '1').copy() + oldvalues['assignedto'] = None + self.db.issue.set('1', assignedto=self.chef_id) + self.db.commit() + self.db.issue.nosymessage('1', None, oldvalues) + + new_mail = "" + for line in self._get_mail().split("\n"): + if "Message-Id: " in line: + continue + if "Date: " in line: + continue + new_mail += line+"\n" + + self.compareMessages(new_mail, """ +FROM: roundup-admin at your.tracker.email.domain.example +TO: chef at bork.bork.bork, richard at test.test +Content-Type: text/plain; charset=utf-8 +Subject: [issue1] Testing... +To: chef at bork.bork.bork, richard at test.test +From: "Bork, Chef" +X-Roundup-Name: Roundup issue tracker +X-Roundup-Loop: hello +X-Roundup-Issue-Status: unread +X-Roundup-Version: 1.3.3 +MIME-Version: 1.0 +Reply-To: Roundup issue tracker +Content-Transfer-Encoding: quoted-printable + + +Changes by Bork, Chef : + + +---------- +assignedto: -> Chef + +_______________________________________________________________________ +Roundup issue tracker + +_______________________________________________________________________ +""") + # # FOLLOWUP TITLE MATCH @@ -365,7 +589,7 @@ self.doNewIssue() self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: Subject: Re: Testing... [assignedto=mary; nosy=+john] @@ -374,10 +598,10 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, john at test, mary at test +TO: chef at bork.bork.bork, john at test.test, mary at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... -To: chef at bork.bork.bork, john at test, mary at test +To: chef at bork.bork.bork, john at test.test, mary at test.test From: richard Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -385,10 +609,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -richard added the comment: +richard added the comment: This is a followup @@ -403,12 +628,36 @@ _______________________________________________________________________ ''') + def testFollowupTitleMatchMultiRe(self): + nodeid1 = self.doNewIssue() + nodeid2 = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: richard +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +Subject: Re: Testing... [assignedto=mary; nosy=+john] + +This is a followup +''') + + nodeid3 = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: richard +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +Subject: Ang: Re: Testing... 
+ +This is a followup +''') + self.assertEqual(nodeid1, nodeid2) + self.assertEqual(nodeid1, nodeid3) + def testFollowupTitleMatchNever(self): nodeid = self.doNewIssue() self.db.config.MAILGW_SUBJECT_CONTENT_MATCH = 'never' self.assertNotEqual(self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: Subject: Re: Testing... @@ -423,7 +672,7 @@ self.db.config.MAILGW_SUBJECT_CONTENT_MATCH = 'creation 00:00:01' self.assertNotEqual(self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: Subject: Re: Testing... @@ -434,7 +683,7 @@ self.db.config.MAILGW_SUBJECT_CONTENT_MATCH = 'creation +1d' self.assertEqual(self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: Subject: Re: Testing... @@ -448,7 +697,7 @@ self.db.config.ADD_AUTHOR_TO_NOSY = 'yes' self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: john at test +From: john at test.test To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -459,10 +708,10 @@ self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, richard at test +TO: chef at bork.bork.bork, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... -To: chef at bork.bork.bork, richard at test +To: chef at bork.bork.bork, richard at test.test From: John Doe Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -470,10 +719,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -John Doe added the comment: +John Doe added the comment: This is a followup @@ -493,9 +743,9 @@ self.db.config.ADD_RECIPIENTS_TO_NOSY = 'yes' self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard at test +From: richard at test.test To: issue_tracker at your.tracker.email.domain.example -Cc: john at test +Cc: john at test.test Message-Id: In-Reply-To: Subject: [issue1] Testing... @@ -515,10 +765,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -richard added the comment: +richard added the comment: This is a followup @@ -539,7 +790,7 @@ self.db.config.MESSAGES_TO_AUTHOR = 'yes' self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: john at test +From: john at test.test To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -549,10 +800,10 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, john at test, richard at test +TO: chef at bork.bork.bork, john at test.test, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... 
-To: chef at bork.bork.bork, john at test, richard at test +To: chef at bork.bork.bork, john at test.test, richard at test.test From: John Doe Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -560,10 +811,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -John Doe added the comment: +John Doe added the comment: This is a followup @@ -583,7 +835,7 @@ self.instance.config.ADD_AUTHOR_TO_NOSY = 'no' self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: john at test +From: john at test.test To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -593,10 +845,10 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, richard at test +TO: chef at bork.bork.bork, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... -To: chef at bork.bork.bork, richard at test +To: chef at bork.bork.bork, richard at test.test From: John Doe Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -604,10 +856,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -John Doe added the comment: +John Doe added the comment: This is a followup @@ -626,9 +879,9 @@ self.instance.config.ADD_RECIPIENTS_TO_NOSY = 'no' self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard at test +From: richard at test.test To: issue_tracker at your.tracker.email.domain.example -Cc: john at test +Cc: john at test.test Message-Id: In-Reply-To: Subject: [issue1] Testing... @@ -648,10 +901,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -richard added the comment: +richard added the comment: This is a followup @@ -670,7 +924,7 @@ self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -690,7 +944,7 @@ self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -710,7 +964,7 @@ self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -742,7 +996,48 @@ This is a test submission of a new issue. ''' - self.assertRaises(Unauthorized, self._handle_mail, message) + try: + self._handle_mail(message) + except Unauthorized, value: + body_diff = self.compareMessages(str(value), """ +You are not a registered user. + +Unknown address: fubar at bork.bork.bork +""") + + assert not body_diff, body_diff + + else: + raise AssertionError, "Unathorized not raised when handling mail" + + # Add Web Access role to anonymous, and try again to make sure + # we get a "please register at:" message this time. + p = [ + self.db.security.getPermission('Create', 'user'), + self.db.security.getPermission('Web Access', None), + ] + + self.db.security.role['anonymous'].permissions=p + + try: + self._handle_mail(message) + except Unauthorized, value: + body_diff = self.compareMessages(str(value), """ +You are not a registered user. 
Please register at: + +http://tracker.example/cgi-bin/roundup.cgi/bugs/user?template=register + +...before sending mail to the tracker. + +Unknown address: fubar at bork.bork.bork +""") + + assert not body_diff, body_diff + + else: + raise AssertionError, "Unathorized not raised when handling mail" + + # Make sure list of users is the same as before. m = self.db.user.list() m.sort() self.assertEqual(l, m) @@ -762,7 +1057,7 @@ self.doNewIssue() self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: mary +From: mary To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -776,10 +1071,10 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, richard at test +TO: chef at bork.bork.bork, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... -To: chef at bork.bork.bork, richard at test +To: chef at bork.bork.bork, richard at test.test From: "Contrary, Mary" Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -787,10 +1082,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -Contrary, Mary added the comment: +Contrary, Mary added the comment: A message with encoding (encoded oe =C3=B6) @@ -808,7 +1104,7 @@ self.doNewIssue() self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: mary +From: mary To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -829,10 +1125,10 @@ ''') self.compareMessages(self._get_mail(), '''FROM: roundup-admin at your.tracker.email.domain.example -TO: chef at bork.bork.bork, richard at test +TO: chef at bork.bork.bork, richard at test.test Content-Type: text/plain; charset=utf-8 Subject: [issue1] Testing... 
-To: chef at bork.bork.bork, richard at test +To: chef at bork.bork.bork, richard at test.test From: "Contrary, Mary" Reply-To: Roundup issue tracker MIME-Version: 1.0 @@ -840,10 +1136,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -Contrary, Mary added the comment: +Contrary, Mary added the comment: A message with first part encoded (encoded oe =C3=B6) @@ -860,7 +1157,7 @@ self.doNewIssue() self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: mary +From: mary To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -878,22 +1175,24 @@ --bCsyhTFzCvuiizWE Content-Type: application/octet-stream Content-Disposition: attachment; filename="main.dvi" +Content-Transfer-Encoding: base64 -xxxxxx +SnVzdCBhIHRlc3QgAQo= --bCsyhTFzCvuiizWE-- ''') messages = self.db.issue.get('1', 'messages') messages.sort() - file = self.db.msg.get(messages[-1], 'files')[0] - self.assertEqual(self.db.file.get(file, 'name'), 'main.dvi') + file = self.db.file.getnode (self.db.msg.get(messages[-1], 'files')[0]) + self.assertEqual(file.name, 'main.dvi') + self.assertEqual(file.content, 'Just a test \001\n') def testFollowupStupidQuoting(self): self.doNewIssue() self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -914,10 +1213,11 @@ In-Reply-To: X-Roundup-Name: Roundup issue tracker X-Roundup-Loop: hello +X-Roundup-Issue-Status: chatting Content-Transfer-Encoding: quoted-printable -richard added the comment: +richard added the comment: This is a followup @@ -952,7 +1252,7 @@ self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -1005,7 +1305,7 @@ charset="iso-8859-1" From: Chef To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: Re: Complete your registration to Roundup issue tracker -- key %s @@ -1018,7 +1318,7 @@ self.db.keyword.create(name='Foo') self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -1031,9 +1331,9 @@ nodeid = self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" From: Chef -Resent-From: mary +Resent-From: mary To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: [issue] Testing... @@ -1052,7 +1352,7 @@ From: Chef X-Roundup-Loop: hello To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: Re: [issue] Testing... @@ -1066,7 +1366,7 @@ From: Chef Precedence: bulk To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: Re: [issue] Testing... 
@@ -1079,11 +1379,11 @@ charset="iso-8859-1" From: Chef To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Message-Id: Subject: Re: [issue] Out of office AutoReply: Back next week -Hi, I'm back in the office next week +Hi, I am back in the office next week ''') def testNoSubject(self): @@ -1092,7 +1392,7 @@ charset="iso-8859-1" From: Chef To: issue_tracker at your.tracker.email.domain.example -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1108,7 +1408,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: [frobulated] testing -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1119,7 +1419,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: [issue12345] testing -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1132,7 +1432,23 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: [frobulated] testing -Cc: richard at test +Cc: richard at test.test +Reply-To: chef at bork.bork.bork +Message-Id: + +''') + assert not os.path.exists(SENDMAILDEBUG) + self.assertEqual(self.db.issue.get(nodeid, 'title'), + '[frobulated] testing') + + def testInvalidClassLooseReply(self): + self.instance.config.MAILGW_SUBJECT_PREFIX_PARSING = 'loose' + nodeid = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Subject: Re: [frobulated] testing +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1148,7 +1464,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: [issue1234] testing -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1165,7 +1481,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: [keyword1] Testing... [name=Bar] -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1173,6 +1489,40 @@ assert not os.path.exists(SENDMAILDEBUG) self.assertEqual(self.db.keyword.get('1', 'name'), 'Bar') + def testClassStrictInvalid(self): + self.instance.config.MAILGW_SUBJECT_PREFIX_PARSING = 'strict' + self.instance.config.MAILGW_DEFAULT_CLASS = '' + + message = '''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Subject: Testing... +Cc: richard at test.test +Reply-To: chef at bork.bork.bork +Message-Id: + +''' + self.assertRaises(MailUsageError, self._handle_mail, message) + + def testClassStrictValid(self): + self.instance.config.MAILGW_SUBJECT_PREFIX_PARSING = 'strict' + self.instance.config.MAILGW_DEFAULT_CLASS = '' + + nodeid = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Subject: [issue] Testing... 
+Cc: richard at test.test +Reply-To: chef at bork.bork.bork +Message-Id: + +''') + + assert not os.path.exists(SENDMAILDEBUG) + self.assertEqual(self.db.issue.get(nodeid, 'title'), 'Testing...') + # # TEST FOR INVALID COMMANDS HANDLING # @@ -1183,7 +1533,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: testing [frobulated] -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1196,7 +1546,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: testing [frobulated] -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1212,7 +1562,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: testing [frobulated] -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1228,7 +1578,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: testing [assignedto=mary] -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1244,7 +1594,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: testing {assignedto=mary} -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1258,7 +1608,7 @@ self.db.keyword.create(name='Foo') self._handle_mail('''Content-Type: text/plain; charset="iso-8859-1" -From: richard +From: richard To: issue_tracker at your.tracker.email.domain.example Message-Id: In-Reply-To: @@ -1275,7 +1625,7 @@ From: Chef To: issue_tracker at your.tracker.email.domain.example Subject: testing [assignedto=mary] -Cc: richard at test +Cc: richard at test.test Reply-To: chef at bork.bork.bork Message-Id: @@ -1285,6 +1635,97 @@ 'testing [assignedto=mary]') self.assertEqual(self.db.issue.get(nodeid, 'assignedto'), None) + def testReplytoMatch(self): + self.instance.config.MAILGW_SUBJECT_PREFIX_PARSING = 'loose' + nodeid = self.doNewIssue() + nodeid2 = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +In-Reply-To: +Subject: Testing... + +Followup message. +''') + + nodeid3 = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +In-Reply-To: +Subject: Testing... + +Yet another message in the same thread/issue. +''') + + self.assertEqual(nodeid, nodeid2) + self.assertEqual(nodeid, nodeid3) + + def testHelpSubject(self): + message = '''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +In-Reply-To: +Subject: hElp + + +''' + self.assertRaises(MailUsageHelp, self._handle_mail, message) + + def testMaillistSubject(self): + self.instance.config.MAILGW_SUBJECT_SUFFIX_DELIMITERS = '[]' + self.db.keyword.create(name='Foo') + self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Subject: [mailinglist-name] [keyword1] Testing.. 
[name=Bar] +Cc: richard at test.test +Reply-To: chef at bork.bork.bork +Message-Id: + +''') + + assert not os.path.exists(SENDMAILDEBUG) + self.assertEqual(self.db.keyword.get('1', 'name'), 'Bar') + + def testUnknownPrefixSubject(self): + self.db.keyword.create(name='Foo') + self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: Chef +To: issue_tracker at your.tracker.email.domain.example +Subject: VeryStrangeRe: [keyword1] Testing.. [name=Bar] +Cc: richard at test.test +Reply-To: chef at bork.bork.bork +Message-Id: + +''') + + assert not os.path.exists(SENDMAILDEBUG) + self.assertEqual(self.db.keyword.get('1', 'name'), 'Bar') + + def testIssueidLast(self): + nodeid1 = self.doNewIssue() + nodeid2 = self._handle_mail('''Content-Type: text/plain; + charset="iso-8859-1" +From: mary +To: issue_tracker at your.tracker.email.domain.example +Message-Id: +In-Reply-To: +Subject: New title [issue1] + +This is a second followup +''') + + assert nodeid1 == nodeid2 + self.assertEqual(self.db.issue.get(nodeid2, 'title'), "Testing...") + + def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(MailgwTestCase)) Modified: tracker/roundup-src/test/test_metakit.py ============================================================================== --- tracker/roundup-src/test/test_metakit.py (original) +++ tracker/roundup-src/test/test_metakit.py Sun Mar 9 09:26:16 2008 @@ -1,83 +0,0 @@ -# -# Copyright (c) 2001 Bizar Software Pty Ltd (http://www.bizarsoftware.com.au/) -# This module is free software, and you may redistribute it and/or modify -# under the same terms as Python, so long as this copyright message and -# disclaimer are retained in their original form. -# -# IN NO EVENT SHALL BIZAR SOFTWARE PTY LTD BE LIABLE TO ANY PARTY FOR -# DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING -# OUT OF THE USE OF THIS CODE, EVEN IF THE AUTHOR HAS BEEN ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# -# BIZAR SOFTWARE PTY LTD SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, -# BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE. THE CODE PROVIDED HEREUNDER IS ON AN "AS IS" -# BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, -# SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. 
-# -# $Id: test_metakit.py,v 1.7 2004/11/18 16:33:43 a1s Exp $ -import unittest, os, shutil, time, weakref - -from db_test_base import DBTest, ROTest, SchemaTest, ClassicInitTest, config, password - -from roundup.backends import get_backend, have_backend - -class metakitOpener: - if have_backend('metakit'): - module = get_backend('metakit') - module._instances = weakref.WeakValueDictionary() - - def nuke_database(self): - shutil.rmtree(config.DATABASE) - -class metakitDBTest(metakitOpener, DBTest): - def testBooleanUnset(self): - # XXX: metakit can't unset Booleans :( - nid = self.db.user.create(username='foo', assignable=1) - self.db.user.set(nid, assignable=None) - self.assertEqual(self.db.user.get(nid, "assignable"), 0) - - def testNumberUnset(self): - # XXX: metakit can't unset Numbers :( - nid = self.db.user.create(username='foo', age=1) - self.db.user.set(nid, age=None) - self.assertEqual(self.db.user.get(nid, "age"), 0) - - def testPasswordUnset(self): - # XXX: metakit can't unset Numbers (id's) :( - x = password.Password('x') - nid = self.db.user.create(username='foo', password=x) - self.db.user.set(nid, assignable=None) - self.assertEqual(self.db.user.get(nid, "assignable"), 0) - -class metakitROTest(metakitOpener, ROTest): - pass - -class metakitSchemaTest(metakitOpener, SchemaTest): - pass - -class metakitClassicInitTest(ClassicInitTest): - backend = 'metakit' - -from session_common import DBMTest -class metakitSessionTest(metakitOpener, DBMTest): - pass - -def test_suite(): - suite = unittest.TestSuite() - if not have_backend('metakit'): - print 'Skipping metakit tests' - return suite - print 'Including metakit tests' - suite.addTest(unittest.makeSuite(metakitDBTest)) - suite.addTest(unittest.makeSuite(metakitROTest)) - suite.addTest(unittest.makeSuite(metakitSchemaTest)) - suite.addTest(unittest.makeSuite(metakitClassicInitTest)) - suite.addTest(unittest.makeSuite(metakitSessionTest)) - return suite - -if __name__ == '__main__': - runner = unittest.TextTestRunner() - unittest.main(testRunner=runner) - -# vim: set et sts=4 sw=4 : Modified: tracker/roundup-src/test/test_multipart.py ============================================================================== --- tracker/roundup-src/test/test_multipart.py (original) +++ tracker/roundup-src/test/test_multipart.py Sun Mar 9 09:26:16 2008 @@ -15,7 +15,7 @@ # BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, # SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. # -# $Id: test_multipart.py,v 1.7 2004/01/17 13:49:06 jlgijsbers Exp $ +# $Id: test_multipart.py,v 1.8 2007/09/22 07:25:35 jpend Exp $ import unittest from cStringIO import StringIO @@ -30,7 +30,7 @@ 'application/pgp-signature': ' name="foo.gpg"\nfoo\n', 'application/pdf': ' name="foo.pdf"\nfoo\n', 'message/rfc822': 'Subject: foo\n\nfoo\n'} - + def __init__(self, spec): """Create a basic MIME message according to 'spec'. @@ -44,10 +44,10 @@ content_type = line.strip() if not content_type: continue - + indent = self.getIndent(line) if indent: - parts.append('--boundary-%s\n' % indent) + parts.append('\n--boundary-%s\n' % indent) parts.append('Content-type: %s;\n' % content_type) parts.append(self.table[content_type] % {'indent': indent + 1}) @@ -68,7 +68,7 @@ w = self.fp.write w('Content-Type: multipart/mixed; boundary="foo"\r\n\r\n') w('This is a multipart message. 
Ignore this bit.\r\n') - w('--foo\r\n') + w('\r\n--foo\r\n') w('Content-Type: text/plain\r\n\r\n') w('Hello, world!\r\n') @@ -76,26 +76,26 @@ w('Blah blah\r\n') w('foo\r\n') w('-foo\r\n') - w('--foo\r\n') + w('\r\n--foo\r\n') w('Content-Type: multipart/alternative; boundary="bar"\r\n\r\n') w('This is a multipart message. Ignore this bit.\r\n') - w('--bar\r\n') + w('\r\n--bar\r\n') w('Content-Type: text/plain\r\n\r\n') w('Hello, world!\r\n') w('\r\n') w('Blah blah\r\n') - w('--bar\r\n') + w('\r\n--bar\r\n') w('Content-Type: text/html\r\n\r\n') w('Hello, world!\r\n') - w('--bar--\r\n') - w('--foo\r\n') + w('\r\n--bar--\r\n') + w('\r\n--foo\r\n') w('Content-Type: text/plain\r\n\r\n') w('Last bit\n') - w('--foo--\r\n') + w('\r\n--foo--\r\n') self.fp.seek(0) def testMultipart(self): @@ -185,7 +185,7 @@ text/plain application/pdf """, ('foo\n', [('foo.pdf', 'application/pdf', 'foo\n')])) - + def testSignedText(self): self.TestExtraction(""" multipart/signed From python-checkins at python.org Sun Mar 9 09:46:20 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 9 Mar 2008 09:46:20 +0100 (CET) Subject: [Python-checkins] r61321 - tracker/instances/python-dev/detectors/messagesummary.py Message-ID: <20080309084620.82ECC1E4006@bag.python.org> Author: martin.v.loewis Date: Sun Mar 9 09:46:20 2008 New Revision: 61321 Modified: tracker/instances/python-dev/detectors/messagesummary.py Log: Perform 1.3.2 to 1.4.2 migration. Modified: tracker/instances/python-dev/detectors/messagesummary.py ============================================================================== --- tracker/instances/python-dev/detectors/messagesummary.py (original) +++ tracker/instances/python-dev/detectors/messagesummary.py Sun Mar 9 09:46:20 2008 @@ -8,7 +8,7 @@ if newvalues.has_key('summary') or not newvalues.has_key('content'): return - summary, content = parseContent(newvalues['content'], 1, 1) + summary, content = parseContent(newvalues['content'], config=db.config) newvalues['summary'] = summary From python-checkins at python.org Sun Mar 9 09:47:00 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 9 Mar 2008 09:47:00 +0100 (CET) Subject: [Python-checkins] r61322 - tracker/instances/meta/detectors/messagesummary.py Message-ID: <20080309084700.A94011E4006@bag.python.org> Author: martin.v.loewis Date: Sun Mar 9 09:47:00 2008 New Revision: 61322 Modified: tracker/instances/meta/detectors/messagesummary.py Log: Perform 1.3.2 to 1.4.2 migration. Modified: tracker/instances/meta/detectors/messagesummary.py ============================================================================== --- tracker/instances/meta/detectors/messagesummary.py (original) +++ tracker/instances/meta/detectors/messagesummary.py Sun Mar 9 09:47:00 2008 @@ -8,7 +8,7 @@ if newvalues.has_key('summary') or not newvalues.has_key('content'): return - summary, content = parseContent(newvalues['content'], 1, 1) + summary, content = parseContent(newvalues['content'], config=db.config) newvalues['summary'] = summary From python-checkins at python.org Sun Mar 9 09:47:32 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 9 Mar 2008 09:47:32 +0100 (CET) Subject: [Python-checkins] r61323 - tracker/instances/jython/detectors/messagesummary.py Message-ID: <20080309084732.29CE61E4006@bag.python.org> Author: martin.v.loewis Date: Sun Mar 9 09:47:31 2008 New Revision: 61323 Modified: tracker/instances/jython/detectors/messagesummary.py Log: Perform 1.3.2 to 1.4.2 migration. 
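For readers who have not seen these messagesummary.py files before: they are Roundup detectors, auditors that run when a msg node is created and fill in its summary, and the 1.3.2 to 1.4.2 migration above only changes how parseContent is called. A minimal sketch of such a detector follows; the function body mirrors the post-migration lines in the diffs above, while the surrounding names (the roundup.mailgw import, summarygenerator, init and the audit registration) are assumptions about the usual detector layout, not the exact code of these tracker instances.

    # Sketch of a messagesummary-style detector, assuming Roundup's usual
    # auditor layout; only the function body is taken from the diffs above.
    from roundup.mailgw import parseContent

    def summarygenerator(db, cl, nodeid, newvalues):
        """Fill in the summary of a message when none is supplied."""
        if newvalues.has_key('summary') or not newvalues.has_key('content'):
            return
        # Roundup 1.4 passes the tracker config instead of the old flag arguments.
        summary, content = parseContent(newvalues['content'], config=db.config)
        newvalues['summary'] = summary

    def init(db):
        # run as an auditor before each new msg node is stored
        db.msg.audit('create', summarygenerator)
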
Modified: tracker/instances/jython/detectors/messagesummary.py ============================================================================== --- tracker/instances/jython/detectors/messagesummary.py (original) +++ tracker/instances/jython/detectors/messagesummary.py Sun Mar 9 09:47:31 2008 @@ -8,7 +8,7 @@ if newvalues.has_key('summary') or not newvalues.has_key('content'): return - summary, content = parseContent(newvalues['content'], 1, 1) + summary, content = parseContent(newvalues['content'], config=db.config) newvalues['summary'] = summary From python-checkins at python.org Sun Mar 9 09:48:02 2008 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 9 Mar 2008 09:48:02 +0100 (CET) Subject: [Python-checkins] r61324 - tracker/instances/jobs/detectors/messagesummary.py Message-ID: <20080309084802.1CF141E4006@bag.python.org> Author: martin.v.loewis Date: Sun Mar 9 09:48:01 2008 New Revision: 61324 Modified: tracker/instances/jobs/detectors/messagesummary.py Log: Perform 1.3.2 to 1.4.2 migration. Modified: tracker/instances/jobs/detectors/messagesummary.py ============================================================================== --- tracker/instances/jobs/detectors/messagesummary.py (original) +++ tracker/instances/jobs/detectors/messagesummary.py Sun Mar 9 09:48:01 2008 @@ -8,7 +8,7 @@ if newvalues.has_key('summary') or not newvalues.has_key('content'): return - summary, content = parseContent(newvalues['content'], 1, 1) + summary, content = parseContent(newvalues['content'], config=db.config) newvalues['summary'] = summary From python-checkins at python.org Sun Mar 9 11:39:09 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 11:39:09 +0100 (CET) Subject: [Python-checkins] r61325 - doctools/trunk/sphinx/templates/layout.html Message-ID: <20080309103909.A46651E4006@bag.python.org> Author: georg.brandl Date: Sun Mar 9 11:39:09 2008 New Revision: 61325 Modified: doctools/trunk/sphinx/templates/layout.html Log: Add more blocks for custom page content. Modified: doctools/trunk/sphinx/templates/layout.html ============================================================================== --- doctools/trunk/sphinx/templates/layout.html (original) +++ doctools/trunk/sphinx/templates/layout.html Sun Mar 9 11:39:09 2008 @@ -55,6 +55,7 @@ +{%- block beforerelbar %}{% endblock %} {%- filter capture('relbar') %} {%- block relbar %} {%- endblock %} {%- endfilter %} +{%- block afterrelbar %}{% endblock %}
              @@ -94,6 +96,7 @@ {%- endif %}
              +{%- block beforesidebar %}{% endblock %} {%- block sidebar %} {%- if builder != 'htmlhelp' %} {%- endif %} {%- endblock %} +{%- block aftersidebar %}{% endblock %}
              {%- block bottomrelbar %} {{ relbar }} {%- endblock %} +{%- block beforefooter %} {%- block footer %} {%- endblock %} +{%- block afterfooter %} From python-checkins at python.org Sun Mar 9 11:40:28 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 11:40:28 +0100 (CET) Subject: [Python-checkins] r61326 - doctools/trunk/sphinx/templates/layout.html Message-ID: <20080309104028.045231E4006 at bag.python.org> Author: georg.brandl Date: Sun Mar 9 11:40:27 2008 New Revision: 61326 Modified: doctools/trunk/sphinx/templates/layout.html Log: Add missing endblock directives. Modified: doctools/trunk/sphinx/templates/layout.html ============================================================================== --- doctools/trunk/sphinx/templates/layout.html (original) +++ doctools/trunk/sphinx/templates/layout.html Sun Mar 9 11:40:27 2008 @@ -150,7 +150,7 @@ {%- block bottomrelbar %} {{ relbar }} {%- endblock %} -{%- block beforefooter %} +{%- block beforefooter %}{% endblock %} {%- block footer %} {%- endblock %} -{%- block afterfooter %} +{%- block afterfooter %}{% endblock %} From gh at ghaering.de Sun Mar 9 12:39:22 2008 From: gh at ghaering.de (Gerhard Häring) Date: Sun, 09 Mar 2008 12:39:22 +0100 Subject: [Python-checkins] SQLite test hangs - was: Re: Python Regression Test Failures basics (1) In-Reply-To: References: <20080301113313.GA25006 at python.psfb.org> Message-ID: <47D3CC6A.5070302 at ghaering.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Neal Norwitz wrote: > Gerhard, > > There is another problem with test_sqlite on the trunk. In > Lib/sqlite3/test/transactions.py two functions cause hangs: > > CheckLocking > CheckRaiseTimeout > > which is causing the buildbots to fail. > > Could you take a look? I tried to investigate this one. First, I couldn't find the hangs you described in the buildbot status pages. Second, my first guess was that it was caused by old SQLite versions rather than pysqlite. I set up a test environment in which I build and test pysqlite trunk (which is almost exactly the same code as the sqlite3 module) against all SQLite versions from 3.0.8 to 3.5.6. I couldn't reproduce the hang (this is Linux on x86; Ubuntu 7.10). I'm currently out of ideas, as I don't have a hardware/software combination available in which I can reproduce the hang. What I did find, however, was that two aggregate-related tests fail with SQLite version 3.5.5 and 3.5.6. I'm 99.999 % sure it's caused by SQLite swapping out its stack-based VM in favour of a new register-based one. So I can wait out this particular problem ;-) - -- Gerhard -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFH08xqdIO4ozGCH14RAthNAKCQO/cKWdFIxyx15nHIlBrq3jah8wCgnmEZ +qIFzgak5R7FWEcJwwnPDb4= =TsPC -----END PGP SIGNATURE----- From python-checkins at python.org Sun Mar 9 13:01:03 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 13:01:03 +0100 (CET) Subject: [Python-checkins] r61327 - doctools/trunk/sphinx/builder.py Message-ID: <20080309120103.9AEEC1E4024 at bag.python.org> Author: georg.brandl Date: Sun Mar 9 13:01:03 2008 New Revision: 61327 Modified: doctools/trunk/sphinx/builder.py Log: Fix to load user-provided templates before system-provided ones.
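The ordering wrinkle in this change comes up again in Nick Coghlan's reply further down in this thread: reversing the whole list does put user templates ahead of the system ones, but it also flips the relative order of the user-supplied paths themselves. The sketch below uses plain lists and made-up path names rather than Sphinx's actual loader to show the difference between extend-plus-reverse and prepending with a slice assignment.

    # Stand-in lists only; the real search path is built in sphinx/builder.py.
    system_path = ['sphinx/templates']
    user_paths = ['my/templates', 'shared/templates']   # order given in conf.py

    # r61327 approach: extend, then reverse the whole list.
    p1 = list(system_path)
    p1.extend(user_paths)
    p1.reverse()
    # p1 == ['shared/templates', 'my/templates', 'sphinx/templates']
    # user paths win over the built-in one, but in the opposite of their order

    # Prepending keeps the configured order and still searches user paths first.
    p2 = list(system_path)
    p2[0:0] = user_paths
    # p2 == ['my/templates', 'shared/templates', 'sphinx/templates']
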
Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Sun Mar 9 13:01:03 2008 @@ -81,6 +81,7 @@ self.templates = {} templates_path = [path.join(path.dirname(__file__), 'templates')] templates_path.extend(self.config.templates_path) + templates_path.reverse() self.jinja_env = Environment(loader=SphinxFileSystemLoader(templates_path), # disable traceback, more likely that something # in the application is broken than in the templates From python-checkins at python.org Sun Mar 9 13:01:16 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 13:01:16 +0100 (CET) Subject: [Python-checkins] r61328 - doctools/trunk/sphinx/static/contents.png doctools/trunk/sphinx/static/navigation.png doctools/trunk/sphinx/static/sphinxdoc.css Message-ID: <20080309120116.4FD721E400D@bag.python.org> Author: georg.brandl Date: Sun Mar 9 13:01:15 2008 New Revision: 61328 Added: doctools/trunk/sphinx/static/contents.png (contents, props changed) doctools/trunk/sphinx/static/navigation.png (contents, props changed) doctools/trunk/sphinx/static/sphinxdoc.css Log: Add a new experimental style. Added: doctools/trunk/sphinx/static/contents.png ============================================================================== Binary file. No diff available. Added: doctools/trunk/sphinx/static/navigation.png ============================================================================== Binary file. No diff available. Added: doctools/trunk/sphinx/static/sphinxdoc.css ============================================================================== --- (empty file) +++ doctools/trunk/sphinx/static/sphinxdoc.css Sun Mar 9 13:01:15 2008 @@ -0,0 +1,449 @@ +/** + * Alternate Sphinx design + * Originally created by Armin Ronacher for Werkzeug, adapted by Georg Brandl. 
+ */ + +body { + font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; + font-size: 14px; + letter-spacing: -0.01em; + line-height: 150%; + text-align: center; + /*background-color: #AFC1C4; */ + background-color: #BFD1D4; + color: black; + padding: 0; + border: 1px solid #aaa; + + margin: 0px 80px 0px 80px; + min-width: 740px; +} + +a { + color: #CA7900; + text-decoration: none; +} + +a:hover { + color: #2491CF; +} + +pre { + font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; + font-size: 0.95em; + letter-spacing: 0.015em; + padding: 0.5em; + border: 1px solid #ccc; + background-color: #f8f8f8; +} + +cite, code, tt { + font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; + font-size: 0.95em; + letter-spacing: 0.01em; + font-style: normal; +} + +hr { + border: 1px solid #abc; + margin: 2em; +} + +tt { + background-color: #f2f2f2; + border-bottom: 1px solid #ddd; + color: #333; +} + +tt.descname { + background-color: transparent; + font-weight: bold; + font-size: 1.2em; + border: 0; +} + +tt.descclassname { + background-color: transparent; +} + +tt.xref, a tt { + background-color: transparent; + font-weight: bold; +} + +dl { + margin-bottom: 15px; + clear: both; +} + +dd p { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.refcount { + color: #060; +} + +dt:target, +.highlight { + background-color: #fbe54e; +} + +/* +dt { + margin-top: 0.8em; +} + +dd p.first { + margin-top: 0; +} + +dd p.last { + margin-bottom: 0; +} +*/ + +pre { + line-height: 120%; +} + +pre a { + color: inherit; + text-decoration: underline; +} + +div.syntax { + background-color: transparent; +} + +div.document { + background-color: white; + text-align: left; + background-image: url(contents.png); + background-repeat: repeat-x; +} + +div.documentwrapper { + float: left; + width: 100%; +} + +div.clearer { + clear: both; +} + +div.header { + background-image: url(header.png); + height: 100px; +} + +div.header h1 { + float: right; + position: absolute; + margin: -30px 0 0 585px; + height: 180px; + width: 180px; +} + +div.header h1 a { + display: block; + background-image: url(werkzeug.png); + background-repeat: no-repeat; + height: 180px; + width: 180px; + text-decoration: none; + color: white!important; +} + +div.header span { + display: none; +} + +div.header p { + background-image: url(header_invert.png); + margin: 0; + padding: 10px; + height: 80px; + color: white; + display: none; +} + +div.related h3 { + display: none; +} + +div.related ul { + background-image: url(navigation.png); + height: 2em; + list-style: none; + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 0; + padding-left: 10px; +} + +div.related ul li { + margin: 0; + padding: 0; + height: 2em; + float: left; +} + +div.related ul li.right { + float: right; + margin-right: 5px; +} + +div.related ul li a { + margin: 0; + padding: 0 5px 0 5px; + line-height: 1.75em; + color: #EE9816; +} + +div.related ul li a:hover { + color: #3CA8E7; +} + +div.body { + margin: 0; + padding: 0; +} + +div.bodywrapper { + margin: 0 240px 0 0; + padding: 0.5em 20px 20px 20px; + border-right: 1px solid #ccc; +} + +div.sidebar { + margin: 0; + padding: 0.5em 15px 15px 0; + width: 210px; + float: right; + margin-left: -100%; +} + +div.sidebar h4, div.sidebar h3 { + margin: 1em 0 0.5em 0; + font-size: 0.9em; + padding: 0.1em 0 0.1em 0.5em; + color: white; + border: 
1px solid #86989B; + background-color: #AFC1C4; +} + +div.sidebar ul { + padding-left: 1.5em; + list-style: none; + padding: 0; + line-height: 110%; +} + +div.sidebar ul li { + margin-bottom: 7px; +} + +div.sidebar ul ul { + list-style: square; + margin-left: 20px; +} + +p { + margin: 0.8em 0 0.5em 0; +} + +h1 { + margin: 0; + padding: 0.7em 0 0.3em 0; + font-size: 1.5em; + color: #11557C; +} + +h2 { + margin: 1.3em 0 0.2em 0; + font-size: 1.35em; + padding: 0; +} + +h3 { + margin: 1em 0 -0.3em 0; + font-size: 1.2em; +} + +h1 a, h2 a, h3 a, h4 a, h5 a, h6 a { + color: black!important; +} + +h1 a.anchor, h2 a.anchor, h3 a.anchor, h4 a.anchor, h5 a.anchor, h6 a.anchor { + display: none; + margin: 0 0 0 0.3em; + padding: 0 0.2em 0 0.2em; + color: #aaa!important; +} + +h1:hover a.anchor, h2:hover a.anchor, h3:hover a.anchor, h4:hover a.anchor, +h5:hover a.anchor, h6:hover a.anchor { + display: inline; +} + +h1 a.anchor:hover, h2 a.anchor:hover, h3 a.anchor:hover, h4 a.anchor:hover, +h5 a.anchor:hover, h6 a.anchor:hover { + color: #777; + background-color: #eee; +} + +table { + border-collapse: collapse; + margin: 0 -0.5em 0 -0.5em; +} + +table td, table th { + padding: 0.2em 0.5em 0.2em 0.5em; +} + +div.footer { + background-color: #E3EFF1; + color: #86989B; + padding: 3px 8px 3px 0; + clear: both; + font-size: 0.8em; + text-align: right; +} + +div.footer a { + color: #86989B; + text-decoration: underline; +} + +div.pagination { + margin-top: 2em; + padding-top: 0.5em; + border-top: 1px solid black; + text-align: center; +} + +p.noshell em { + color: #3ca8e7; + text-decoration: underline; + font-style: normal; +} + +p.noshell:hover, div.nutshell { + background-color: white; +} + +p.noshell { + cursor: pointer; +} + +div.sidebar ul.toc { + margin: 1em 0 1em 0; + padding: 0 0 0 0.5em; + list-style: none; +} + +div.sidebar ul.toc li { + margin: 0.5em 0 0.5em 0; + font-size: 0.9em; + line-height: 130%; +} + +div.sidebar ul.toc li p { + margin: 0; + padding: 0; +} + +div.sidebar ul.toc ul { + margin: 0.2em 0 0.2em 0; + padding: 0 0 0 1.8em; +} + +div.sidebar ul.toc ul li { + padding: 0; +} + +div.admonition, div.warning { + font-size: 0.9em; + margin: 1em 0 0 0; + border: 1px solid #86989B; + background-color: #f7f7f7; +} + +div.admonition p, div.warning p { + margin: 0.5em 1em 0.5em 1em; + padding: 0; +} + +div.admonition pre, div.warning pre { + margin: 0.4em 1em 0.4em 1em; +} + +div.admonition p.admonition-title, +div.warning p.admonition-title { + margin: 0; + padding: 0.1em 0 0.1em 0.5em; + color: white; + border-bottom: 1px solid #86989B; + font-weight: bold; + background-color: #AFC1C4; +} + +div.warning { + border: 1px solid #940000; +} + +div.warning p.admonition-title { + background-color: #CF0000; + border-bottom-color: #940000; +} + +div.admonition ul, div.admonition ol, +div.warning ul, div.warning ol { + margin: 0.1em 0.5em 0.5em 3em; + padding: 0; +} + +div.versioninfo { + margin: 1em 0 0 0; + border: 1px solid #ccc; + background-color: #DDEAF0; + padding: 8px; + line-height: 1.3em; + font-size: 0.9em; +} + + +a.headerlink { + color: #c60f0f!important; + font-size: 1em; + margin-left: 6px; + padding: 0 4px 0 4px; + text-decoration: none; + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink { + visibility: visible; +} + +a.headerlink:hover { + background-color: #ccc; + color: white!important; +} From ncoghlan at gmail.com Sun Mar 9 
15:25:15 2008 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 10 Mar 2008 00:25:15 +1000 Subject: [Python-checkins] r61327 - doctools/trunk/sphinx/builder.py In-Reply-To: <20080309120103.9AEEC1E4024@bag.python.org> References: <20080309120103.9AEEC1E4024@bag.python.org> Message-ID: <47D3F34B.90304@gmail.com> georg.brandl wrote: > Author: georg.brandl > Date: Sun Mar 9 13:01:03 2008 > New Revision: 61327 > > Modified: > doctools/trunk/sphinx/builder.py > Log: > Fix to load user-provided templates before system-provided ones. > > > Modified: doctools/trunk/sphinx/builder.py > ============================================================================== > --- doctools/trunk/sphinx/builder.py (original) > +++ doctools/trunk/sphinx/builder.py Sun Mar 9 13:01:03 2008 > @@ -81,6 +81,7 @@ > self.templates = {} > templates_path = [path.join(path.dirname(__file__), 'templates')] > templates_path.extend(self.config.templates_path) > + templates_path.reverse() Won't that end up searching the supplied path entries in the opposite order as well? Perhaps this would be a better option (would replace the extend and reverse lines): templates_path[0:0] = self.config.templates_path Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From python-checkins at python.org Sun Mar 9 16:11:40 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 16:11:40 +0100 (CET) Subject: [Python-checkins] r61329 - python/trunk/Doc/library/unittest.rst Message-ID: <20080309151140.4BBAE1E4021@bag.python.org> Author: georg.brandl Date: Sun Mar 9 16:11:39 2008 New Revision: 61329 Modified: python/trunk/Doc/library/unittest.rst Log: #2249: document assertTrue and assertFalse. Modified: python/trunk/Doc/library/unittest.rst ============================================================================== --- python/trunk/Doc/library/unittest.rst (original) +++ python/trunk/Doc/library/unittest.rst Sun Mar 9 16:11:39 2008 @@ -566,6 +566,7 @@ .. method:: TestCase.assert_(expr[, msg]) TestCase.failUnless(expr[, msg]) + TestCase.assertTrue(expr[, msg]) Signal a test failure if *expr* is false; the explanation for the error will be *msg* if given, otherwise it will be :const:`None`. @@ -622,6 +623,7 @@ .. method:: TestCase.failIf(expr[, msg]) + TestCase.assertFalse(expr[, msg]) The inverse of the :meth:`failUnless` method is the :meth:`failIf` method. This signals a test failure if *expr* is true, with *msg* or :const:`None` for the From buildbot at python.org Sun Mar 9 17:57:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 09 Mar 2008 16:57:37 +0000 Subject: [Python-checkins] buildbot failure in x86 XP trunk Message-ID: <20080309165737.935781E4002@bag.python.org> The Buildbot has detected a new failure of x86 XP trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%20trunk/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: The web-page 'force build' button was pressed by 'joseph Armbruster': test build Build Source Stamp: [branch trunk] 61328 Blamelist: BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Sun Mar 9 19:18:31 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 19:18:31 +0100 (CET) Subject: [Python-checkins] r61330 - python/trunk/Doc/conf.py Message-ID: <20080309181831.0F7461E4002@bag.python.org> Author: georg.brandl Date: Sun Mar 9 19:18:30 2008 New Revision: 61330 Modified: python/trunk/Doc/conf.py Log: Update for newest Sphinx. Modified: python/trunk/Doc/conf.py ============================================================================== --- python/trunk/Doc/conf.py (original) +++ python/trunk/Doc/conf.py Sun Mar 9 19:18:30 2008 @@ -14,6 +14,7 @@ # --------------------- extensions = ['sphinx.ext.refcounting', 'sphinx.ext.coverage'] +templates_path = ['tools/sphinxext'] # General substitutions. project = 'Python' @@ -73,16 +74,16 @@ html_use_smartypants = True # Content template for the index page, filename relative to this file. -html_index = 'tools/sphinxext/indexcontent.html' +html_index = 'indexcontent.html' # Custom sidebar templates, filenames relative to this file. html_sidebars = { - 'index': 'tools/sphinxext/indexsidebar.html', + 'index': 'indexsidebar.html', } # Additional templates that should be rendered to pages. html_additional_pages = { - 'download': 'tools/sphinxext/download.html', + 'download': 'download.html', } # Output file base name for HTML help builder. From python-checkins at python.org Sun Mar 9 19:18:42 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 19:18:42 +0100 (CET) Subject: [Python-checkins] r61331 - in doctools/trunk/sphinx: _jinja.py application.py builder.py config.py directives.py environment.py quickstart.py roles.py static/sphinxdoc.css Message-ID: <20080309181842.274231E4002@bag.python.org> Author: georg.brandl Date: Sun Mar 9 19:18:41 2008 New Revision: 61331 Modified: doctools/trunk/sphinx/ (props changed) doctools/trunk/sphinx/_jinja.py doctools/trunk/sphinx/application.py doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/config.py doctools/trunk/sphinx/directives.py doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/quickstart.py doctools/trunk/sphinx/roles.py doctools/trunk/sphinx/static/sphinxdoc.css Log: * Allow registering arbitrary cross-referencing directives/roles. * Allow labels anywhere, and allow giving an explicit caption in :ref: links. * Some fixes to the sphinxdoc style. * Add an option to show author information in the output. * Search user-defined templates in the order they occur in the config (thanks Nick). Modified: doctools/trunk/sphinx/_jinja.py ============================================================================== --- doctools/trunk/sphinx/_jinja.py (original) +++ doctools/trunk/sphinx/_jinja.py Sun Mar 9 19:18:41 2008 @@ -25,12 +25,19 @@ paths, or from an absolute path. 
""" - def __init__(self, paths): - self.searchpaths = map(path.abspath, paths) + def __init__(self, basepath, extpaths): + self.basepath = path.abspath(basepath) + self.extpaths = map(path.abspath, extpaths) + self.searchpaths = self.extpaths + [self.basepath] def get_source(self, environment, name, parent): name = name.replace('/', path.sep) - if path.isabs(name): + if name.startswith('!'): + name = name[1:] + if not path.exists(path.join(self.basepath, name)): + raise TemplateNotFound(name) + filename = path.join(self.basepath, name) + elif path.isabs(name): if not path.exists(name): raise TemplateNotFound(name) filename = name Modified: doctools/trunk/sphinx/application.py ============================================================================== --- doctools/trunk/sphinx/application.py (original) +++ doctools/trunk/sphinx/application.py Sun Mar 9 19:18:41 2008 @@ -18,8 +18,10 @@ from docutils.parsers.rst import directives, roles import sphinx +from sphinx.roles import xfileref_role from sphinx.config import Config from sphinx.builder import builtin_builders +from sphinx.directives import desc_directive, additional_xref_types from sphinx.util.console import bold @@ -185,3 +187,9 @@ def add_role(self, name, role): roles.register_canonical_role(name, role) + + def add_description_unit(self, directivename, rolename, indexdesc='', + parse_node=None): + additional_xref_types[directivename] = (rolename, indexdesc, parse_node) + directives.register_directive(directivename, desc_directive) + roles.register_canonical_role(rolename, xfileref_role) Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Sun Mar 9 19:18:41 2008 @@ -79,10 +79,9 @@ # load templates self.templates = {} - templates_path = [path.join(path.dirname(__file__), 'templates')] - templates_path.extend(self.config.templates_path) - templates_path.reverse() - self.jinja_env = Environment(loader=SphinxFileSystemLoader(templates_path), + base_templates_path = path.join(path.dirname(__file__), 'templates') + loader = SphinxFileSystemLoader(base_templates_path, self.config.templates_path) + self.jinja_env = Environment(loader=loader, # disable traceback, more likely that something # in the application is broken than in the templates friendly_traceback=False) @@ -438,13 +437,11 @@ # additional pages from conf.py for pagename, template in self.config.html_additional_pages.items(): - template = path.join(self.srcdir, template) self.handle_page(pagename, {}, template) # the index page indextemplate = self.config.html_index if indextemplate: - indextemplate = path.join(self.srcdir, indextemplate) self.handle_page('index', {'indextemplate': indextemplate}, 'index.html') # copy static files @@ -515,7 +512,7 @@ ctx['hasdoc'] = lambda name: name in self.env.all_docs sidebarfile = self.config.html_sidebars.get(pagename) if sidebarfile: - ctx['customsidebar'] = path.join(self.srcdir, sidebarfile) + ctx['customsidebar'] = sidebarfile ctx.update(addctx) output = self.get_template(templatename).render(ctx) Modified: doctools/trunk/sphinx/config.py ============================================================================== --- doctools/trunk/sphinx/config.py (original) +++ doctools/trunk/sphinx/config.py Sun Mar 9 19:18:41 2008 @@ -20,7 +20,7 @@ # the values are: (default, needs fresh doctrees if changed) # If you add a value here, don't forget to include it in the - # quickstart.py file 
template as well! + # quickstart.py file template as well as in the docs! config_values = dict( # general substitutions @@ -41,6 +41,7 @@ unused_docs = ([], True), add_function_parentheses = (True, True), add_module_names = (True, True), + show_authors = (False, True), pygments_style = ('sphinx', False), # HTML options Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Sun Mar 9 19:18:41 2008 @@ -310,21 +310,30 @@ targetname, targetname) env.note_reftarget('option', optname, targetname) continue - elif desctype == 'envvar': + elif desctype == 'describe': signode.clear() signode += addnodes.desc_name(sig, sig) + continue + else: + # another registered generic x-ref directive + rolename, indextext, parse_node = additional_xref_types[desctype] + if parse_node: + parse_node(sig, signode) + else: + signode.clear() + signode += addnodes.desc_name(sig, sig) if not noindex: - targetname = 'envvar-' + sig + targetname = '%s-%s' % (rolename, sig) signode['ids'].append(targetname) state.document.note_explicit_target(signode) - env.note_index_entry('pair', 'environment variable; %s' % sig, - targetname, targetname) - env.note_reftarget('envvar', sig, targetname) + if indextext: + env.note_index_entry('pair', '%s; %s' % (indextext, sig), + targetname, targetname) + env.note_reftarget(rolename, sig, targetname) + # don't use object indexing below continue - else: - # for "describe": use generic fallback - raise ValueError except ValueError, err: + # signature parsing failed signode.clear() signode += addnodes.desc_name(sig, sig) continue # we don't want an index entry here @@ -384,15 +393,22 @@ 'cvar', # the odd one 'opcode', - # the generic ones - 'cmdoption', # for command line options - 'envvar', # for environment variables + # for command line options + 'cmdoption', + # the generic one 'describe', + 'envvar', ] for _name in desctypes: directives.register_directive(_name, desc_directive) +# Generic cross-reference types; they can be registered in the application +additional_xref_types = { + # directive name: (role name, index text) + 'envvar': ('envvar', 'environment variable', None), +} + # ------ versionadded/versionchanged ----------------------------------------------- @@ -526,8 +542,23 @@ def author_directive(name, arguments, options, content, lineno, content_offset, block_text, state, state_machine): - # The author directives aren't included in the built document - return [] + # Show authors only if the show_authors option is on + env = state.document.settings.env + if not env.config.show_authors: + return [] + para = nodes.paragraph() + emph = nodes.emphasis() + para += emph + if name == 'sectionauthor': + text = 'Section author: ' + elif name == 'moduleauthor': + text = 'Module author: ' + else: + text = 'Author: ' + emph += nodes.Text(text, text) + inodes, messages = state.inline_text(arguments[0], lineno) + emph.extend(inodes) + return [para] + messages author_directive.arguments = (1, 0, 1) directives.register_directive('sectionauthor', author_directive) Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Sun Mar 9 19:18:41 2008 @@ -44,6 +44,7 @@ from sphinx import addnodes from sphinx.util import get_matching_docs, SEP +from sphinx.directives import 
additional_xref_types default_settings = { 'embed_stylesheet': False, @@ -56,7 +57,7 @@ # This is increased every time a new environment attribute is added # to properly invalidate pickle files. -ENV_VERSION = 16 +ENV_VERSION = 17 def walk_depth(node, depth, maxdepth): @@ -226,6 +227,7 @@ self.filemodules = {} # docname -> [modules] self.modules = {} # modname -> docname, synopsis, platform, deprecated self.labels = {} # labelname -> docname, labelid, sectionname + self.anonlabels = {} # labelname -> docname, labelid self.reftargets = {} # (type, name) -> docname, labelid # where type is term, token, option, envvar @@ -471,13 +473,19 @@ continue labelid = document.nameids[name] node = document.ids[labelid] - if not isinstance(node, nodes.section): - # e.g. desc-signatures + if name.isdigit() or node.has_key('refuri') or \ + node.tagname.startswith('desc_'): + # ignore footnote labels, labels automatically generated from a + # link and description units continue - sectname = node[0].astext() # node[0] == title node if name in self.labels: self.warn(docname, 'duplicate label %s, ' % name + 'other instance in %s' % self.doc2path(self.labels[name][0])) + self.anonlabels[name] = docname, labelid + if not isinstance(node, nodes.section): + # anonymous-only labels + continue + sectname = node[0].astext() # node[0] == title node self.labels[name] = docname, labelid, sectname def note_toctree(self, docname, toctreenode): @@ -654,23 +662,37 @@ typ = node['reftype'] target = node['reftarget'] + reftarget_roles = set(('token', 'term', 'option')) + # add all custom xref types too + reftarget_roles.update(i[0] for i in additional_xref_types.values()) + try: if typ == 'ref': - # reference to the named label; the final node will contain the - # section name after the label - docname, labelid, sectname = self.labels.get(target, ('','','')) - if not docname: - newnode = doctree.reporter.system_message( - 2, 'undefined label: %s' % target) - #self.warn(fromdocname, 'undefined label: %s' % target) + if node['refcaption']: + # reference to anonymous label; the reference uses the supplied + # link caption + docname, labelid = self.anonlabels.get(target, ('','')) + sectname = node.astext() + if not docname: + newnode = doctree.reporter.system_message( + 2, 'undefined label: %s' % target) else: + # reference to the named label; the final node will contain the + # section name after the label + docname, labelid, sectname = self.labels.get(target, ('','','')) + if not docname: + newnode = doctree.reporter.system_message( + 2, 'undefined label: %s -- if you don\'t ' % target + + 'give a link caption the label must precede a section ' + 'header.') + if docname: newnode = nodes.reference('', '') innernode = nodes.emphasis(sectname, sectname) if docname == fromdocname: newnode['refid'] = labelid else: - # set more info in contnode in case the following call - # raises NoUri, the builder will have to resolve these + # set more info in contnode in case the get_relative_uri call + # raises NoUri, the builder will then have to resolve these contnode = addnodes.pending_xref('') contnode['refdocname'] = docname contnode['refsectname'] = sectname @@ -693,7 +715,7 @@ newnode['refuri'] = builder.get_relative_uri( fromdocname, docname) + '#' + labelid newnode.append(contnode) - elif typ in ('token', 'term', 'envvar', 'option'): + elif typ in reftarget_roles: docname, labelid = self.reftargets.get((typ, target), ('', '')) if not docname: if typ == 'term': Modified: doctools/trunk/sphinx/quickstart.py 
============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Sun Mar 9 19:18:41 2008 @@ -78,6 +78,10 @@ # unit titles (such as .. function::). #add_module_names = True +# If true, sectionauthor and moduleauthor directives will be shown in the +# output. They are ignored by default. +#show_authors = False + # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' @@ -103,14 +107,14 @@ # typographically correct entities. #html_use_smartypants = True -# Content template for the index page, filename relative to this file. +# Content template for the index page. #html_index = '' -# Custom sidebar templates, maps page names to filenames relative to this file. +# Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to -# filenames relative to this file. +# template names. #html_additional_pages = {} # If true, the reST sources are included in the HTML build as _sources/. Modified: doctools/trunk/sphinx/roles.py ============================================================================== --- doctools/trunk/sphinx/roles.py (original) +++ doctools/trunk/sphinx/roles.py Sun Mar 9 19:18:41 2008 @@ -17,6 +17,7 @@ from sphinx import addnodes ws_re = re.compile(r'\s+') +caption_ref_re = re.compile(r'^([^<]+?)\s*<(.+)>$') generic_docroles = { 'command' : nodes.strong, @@ -127,6 +128,21 @@ pnode['refspecific'] = True if typ == 'term': pnode['reftarget'] = ws_re.sub(' ', text).lower() + elif typ == 'ref': + brace = text.find('<') + if brace != -1: + pnode['refcaption'] = True + m = caption_ref_re.match(text) + if not m: + # fallback + pnode['reftarget'] = text[brace+1:] + text = text[:brace] + else: + pnode['reftarget'] = m.group(2) + text = m.group(1) + else: + pnode['refcaption'] = False + pnode['reftarget'] = ws_re.sub('', text) elif typ == 'option': if text[0] in '-/': pnode['reftarget'] = text[1:] Modified: doctools/trunk/sphinx/static/sphinxdoc.css ============================================================================== --- doctools/trunk/sphinx/static/sphinxdoc.css (original) +++ doctools/trunk/sphinx/static/sphinxdoc.css Sun Mar 9 19:18:41 2008 @@ -64,11 +64,24 @@ tt.descclassname { background-color: transparent; + border: 0; } -tt.xref, a tt { +tt.xref { background-color: transparent; font-weight: bold; + border: 0; +} + +a tt { + background-color: transparent; + font-weight: bold; + border: 0; + color: #CA7900; +} + +a tt:hover { + color: #2491CF; } dl { @@ -99,19 +112,10 @@ background-color: #fbe54e; } -/* -dt { - margin-top: 0.8em; -} - -dd p.first { - margin-top: 0; -} - -dd p.last { - margin-bottom: 0; +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; } -*/ pre { line-height: 120%; @@ -122,10 +126,6 @@ text-decoration: underline; } -div.syntax { - background-color: transparent; -} - div.document { background-color: white; text-align: left; @@ -226,6 +226,10 @@ border-right: 1px solid #ccc; } +div.body a { + text-decoration: underline; +} + div.sidebar { margin: 0; padding: 0.5em 15px 15px 0; @@ -447,3 +451,50 @@ background-color: #ccc; color: white!important; } + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable dl, table.indextable dd { + margin-top: 0; + margin-bottom: 0; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: 
#f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +form.pfform { + margin: 10px 0 20px 0; +} + +table.contentstable { + width: 90%; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} From nnorwitz at gmail.com Sun Mar 9 19:32:54 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 9 Mar 2008 10:32:54 -0800 Subject: [Python-checkins] SQLite test hangs - was: Re: Python Regression Test Failures basics (1) In-Reply-To: <47D3CC6A.5070302@ghaering.de> References: <20080301113313.GA25006@python.psfb.org> <47D3CC6A.5070302@ghaering.de> Message-ID: On Sun, Mar 9, 2008 at 3:39 AM, Gerhard H?ring wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Neal Norwitz wrote: > > Gerhard, > > > > There is another problem with test_sqlite on the trunk. In > > Lib/sqlite3/test/transactions.py two functions cause hangs: > > > > CheckLocking > > CheckRaiseTimeout > > > > which is causing the builtbots to fail. > > > > Could you take a look? > > I tried to investigate this one. > > First, I couldn't find the hangs you described in the buildbot status > pages. It's only one machine. Here's the most recent: http://www.python.org/dev/buildbot/all/x86%20gentoo%20trunk/builds/3169/step-test/0 > Second, my first guess was that it was caused by old SQLite versions > rather than SQLite. I set up a test environment in which I build and > test pysqlite trunk (which is almost exactly the same code as the > sqlite3 module) against all SQLite versons from 3.0.8 to 3.5.6. > > I couldn't reproduce the hang (this is Linux on x86; Ubuntu 7.10). I'm > currently out of ideas, as I don't have a hardware/software > combination available in which I can reproduce the hang. If you give me some ideas for where to look/how to debug, I can try to find the problem on this machine. I don't know if it's specific to o/s, sqlite version, compiler version, etc. It could be indicative of a larger problem or not. No way to know without finding the cause, so I'd like to try if possible. > What I did find, however, was that two aggregate-related tests fail > with SQLite verson 3.5.5 and 3.5.6. I'm 99.999 % sure it's caused by > SQLite swapping out its stack-based VM in favour of a new > register-based one. So I can wait out this particular problem ;-) :-) n From python-checkins at python.org Sun Mar 9 20:03:42 2008 From: python-checkins at python.org (neal.norwitz) Date: Sun, 9 Mar 2008 20:03:42 +0100 (CET) Subject: [Python-checkins] r61332 - python/trunk/Lib/test/test_ssl.py Message-ID: <20080309190342.66D011E400C@bag.python.org> Author: neal.norwitz Date: Sun Mar 9 20:03:42 2008 New Revision: 61332 Modified: python/trunk/Lib/test/test_ssl.py Log: Introduce a lock to fix a race condition which caused an exception in the test. Some buildbots were consistently failing (e.g., amd64). Also remove a couple of semi-colons. 
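[Editorial sketch, not part of the checkin: the locking pattern this change describes, shown in isolation. This is a minimal Python 3 example with made-up names (StoppableServer, EchoHandler) using socketserver; the actual change to the test's HTTPS server wrapper is in the diff that follows. The point is that the "still active?" check and the use of the socket happen under the same lock that the close path takes, so another thread cannot close the socket in between.]

import socket
import socketserver
import threading


class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Echo one chunk back to the client.
        self.request.sendall(self.request.recv(1024))


class StoppableServer(socketserver.TCPServer):
    # Illustrative class, not the test's real server.
    timeout = 0.5  # makes handle_request() return periodically instead of blocking forever

    def __init__(self, addr):
        socketserver.TCPServer.__init__(self, addr, EchoHandler)
        self.active = True                 # cleared by stop()
        self.active_lock = threading.Lock()

    def serve(self):
        while True:
            with self.active_lock:
                # Flag check and socket use are atomic with respect to stop(),
                # so we never call handle_request() on a closed socket.
                if not self.active:
                    break
                self.handle_request()

    def stop(self):
        with self.active_lock:             # cannot interleave with serve()
            self.active = False
            self.server_close()


if __name__ == '__main__':
    server = StoppableServer(('127.0.0.1', 0))
    worker = threading.Thread(target=server.serve)
    worker.start()
    with socket.create_connection(server.server_address) as s:
        s.sendall(b'ping')
        print(s.recv(1024))                # b'ping'
    server.stop()
    worker.join()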
Modified: python/trunk/Lib/test/test_ssl.py ============================================================================== --- python/trunk/Lib/test/test_ssl.py (original) +++ python/trunk/Lib/test/test_ssl.py Sun Mar 9 20:03:42 2008 @@ -337,6 +337,7 @@ # we assume the certfile contains both private key and certificate self.certfile = certfile self.active = False + self.active_lock = threading.Lock() self.allow_reuse_address = True def get_request (self): @@ -361,23 +362,32 @@ # We want this to run in a thread, so we use a slightly # modified version of "forever". self.active = True - while self.active: + while 1: try: - self.handle_request() + # We need to lock while handling the request. + # Another thread can close the socket after self.active + # has been checked and before the request is handled. + # This causes an exception when using the closed socket. + with self.active_lock: + if not self.active: + break + self.handle_request() except socket.timeout: pass except KeyboardInterrupt: self.server_close() return except: - sys.stdout.write(''.join(traceback.format_exception(*sys.exc_info()))); + sys.stdout.write(''.join(traceback.format_exception(*sys.exc_info()))) + break def server_close(self): # Again, we want this to run in a thread, so we need to override # close to clear the "active" flag, so that serve_forever() will # terminate. - HTTPServer.server_close(self) - self.active = False + with self.active_lock: + HTTPServer.server_close(self) + self.active = False class RootedHTTPRequestHandler(SimpleHTTPRequestHandler): @@ -664,7 +674,7 @@ not in cert['subject']): raise test_support.TestFailed( "Missing or invalid 'organizationName' field in certificate subject; " - "should be 'Python Software Foundation'."); + "should be 'Python Software Foundation'.") s.close() finally: server.stop() From buildbot at python.org Sun Mar 9 20:32:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 09 Mar 2008 19:32:26 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080309193227.175791E400C@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/342 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sun Mar 9 20:55:45 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 20:55:45 +0100 (CET) Subject: [Python-checkins] r61334 - doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/latexwriter.py Message-ID: <20080309195545.3F90D1E400C@bag.python.org> Author: georg.brandl Date: Sun Mar 9 20:55:44 2008 New Revision: 61334 Modified: doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/latexwriter.py Log: Actually honor the "title" value in latex_documents, if present. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Sun Mar 9 20:55:44 2008 @@ -717,6 +717,7 @@ self.info("writing... 
", nonl=1) doctree.settings = docsettings doctree.settings.author = author + doctree.settings.title = title doctree.settings.docname = docname doctree.settings.docclass = docclass docwriter.write(doctree, destination) Modified: doctools/trunk/sphinx/latexwriter.py ============================================================================== --- doctools/trunk/sphinx/latexwriter.py (original) +++ doctools/trunk/sphinx/latexwriter.py Sun Mar 9 20:55:44 2008 @@ -102,7 +102,8 @@ 'preamble': builder.config.latex_preamble, 'author': document.settings.author, 'docname': document.settings.docname, - 'title': None, # is determined later + # if empty, the title is set to the first section title + 'title': document.settings.title, 'release': builder.config.release, 'date': date, } @@ -208,7 +209,8 @@ elif self.this_is_the_title: if len(node.children) != 1 and not isinstance(node.children[0], nodes.Text): self.builder.warn('document title is not a single Text node') - self.options['title'] = node.astext() + if not self.options['title']: + self.options['title'] = node.astext() self.this_is_the_title = 0 raise nodes.SkipNode elif isinstance(node.parent, nodes.section): From python-checkins at python.org Sun Mar 9 22:31:43 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 22:31:43 +0100 (CET) Subject: [Python-checkins] r61335 - doctools/trunk/sphinx/roles.py Message-ID: <20080309213143.1FBE31E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 9 22:31:42 2008 New Revision: 61335 Modified: doctools/trunk/sphinx/roles.py Log: A leading '~' in a object cross-reference hides the module part. Modified: doctools/trunk/sphinx/roles.py ============================================================================== --- doctools/trunk/sphinx/roles.py (original) +++ doctools/trunk/sphinx/roles.py Sun Mar 9 22:31:42 2008 @@ -120,12 +120,21 @@ rawtext, text, classes=['xref'])], [] pnode = addnodes.pending_xref(rawtext) pnode['reftype'] = typ - # if the first character is a dot, search more specific namespaces first - # else search builtins first - if text[0:1] == '.' 
and \ - typ in ('data', 'exc', 'func', 'class', 'const', 'attr', 'meth'): - text = text[1:] - pnode['refspecific'] = True + innertext = text + # special actions for Python object cross-references + if typ in ('data', 'exc', 'func', 'class', 'const', 'attr', 'meth'): + # if the first character is a dot, search more specific namespaces first + # else search builtins first + if text[0:1] == '.': + text = text[1:] + pnode['refspecific'] = True + # if the first character is a tilde, don't display the module/class parts + # of the contents + if text[0:1] == '~': + text = text[1:] + dot = text.rfind('.') + if dot != -1: + innertext = text[dot+1:] if typ == 'term': pnode['reftarget'] = ws_re.sub(' ', text).lower() elif typ == 'ref': @@ -152,7 +161,7 @@ pnode['reftarget'] = ws_re.sub('', text) pnode['modname'] = env.currmodule pnode['classname'] = env.currclass - pnode += innernodetypes.get(typ, nodes.literal)(rawtext, text, classes=['xref']) + pnode += innernodetypes.get(typ, nodes.literal)(rawtext, innertext, classes=['xref']) return [pnode], [] From python-checkins at python.org Sun Mar 9 22:31:53 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 22:31:53 +0100 (CET) Subject: [Python-checkins] r61336 - in doctools/trunk/sphinx: application.py quickstart.py static/sphinxdoc.css templates/layout.html Message-ID: <20080309213153.2FB781E4027@bag.python.org> Author: georg.brandl Date: Sun Mar 9 22:31:52 2008 New Revision: 61336 Modified: doctools/trunk/sphinx/application.py doctools/trunk/sphinx/quickstart.py doctools/trunk/sphinx/static/sphinxdoc.css doctools/trunk/sphinx/templates/layout.html Log: Some miscellaneous fixes. Modified: doctools/trunk/sphinx/application.py ============================================================================== --- doctools/trunk/sphinx/application.py (original) +++ doctools/trunk/sphinx/application.py Sun Mar 9 22:31:52 2008 @@ -44,7 +44,7 @@ return self.message -# List of all known events. Maps name to arguments description. +# List of all known core events. Maps name to arguments description. 
events = { 'builder-inited': '', 'doctree-read' : 'the doctree before being pickled', @@ -68,6 +68,8 @@ self._warning = warning self._warncount = 0 + self._events = events.copy() + # read config self.config = Config(srcdir, 'conf.py') if confoverrides: @@ -137,7 +139,7 @@ def _validate_event(self, event): event = intern(event) - if event not in events: + if event not in self._events: raise ExtensionError('Unknown event name: %s' % event) def connect(self, event, callback): @@ -173,17 +175,22 @@ def add_config_value(self, name, default, rebuild_env): if name in self.config.values: - raise ExtensionError('Config value %r already present') + raise ExtensionError('Config value %r already present' % name) self.config.values[name] = (default, rebuild_env) + def add_event(self, name): + if name in self._events: + raise ExtensionError('Event %r already present' % name) + self._events[name] = '' + def add_node(self, node): nodes._add_node_class_names([node.__name__]) - def add_directive(self, name, cls, content, arguments, **options): - cls.content = content - cls.arguments = arguments - cls.options = options - directives.register_directive(name, cls) + def add_directive(self, name, func, content, arguments, **options): + func.content = content + func.arguments = arguments + func.options = options + directives.register_directive(name, func) def add_role(self, name, role): roles.register_canonical_role(name, role) Modified: doctools/trunk/sphinx/quickstart.py ============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Sun Mar 9 22:31:52 2008 @@ -138,7 +138,7 @@ #latex_documents = [] # Additional stuff for the LaTeX preamble. -#latex_preamble = ' +#latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] Modified: doctools/trunk/sphinx/static/sphinxdoc.css ============================================================================== --- doctools/trunk/sphinx/static/sphinxdoc.css (original) +++ doctools/trunk/sphinx/static/sphinxdoc.css Sun Mar 9 22:31:52 2008 @@ -264,13 +264,10 @@ div.sidebar ul { padding-left: 1.5em; + margin-top: 7px; list-style: none; padding: 0; - line-height: 110%; -} - -div.sidebar ul li { - margin-bottom: 7px; + line-height: 130%; } div.sidebar ul ul { @@ -448,7 +445,7 @@ font-size: 1em; margin-left: 6px; padding: 0 4px 0 4px; - text-decoration: none; + text-decoration: none!important; visibility: hidden; } Modified: doctools/trunk/sphinx/templates/layout.html ============================================================================== --- doctools/trunk/sphinx/templates/layout.html (original) +++ doctools/trunk/sphinx/templates/layout.html Sun Mar 9 22:31:52 2008 @@ -73,10 +73,13 @@
             settings |
             {%- endif %}
-            {{ project }} v{{ release }} Documentation »
+            {%- block rootrellink %}
+            {{ project }} v{{ release }} documentation »
+            {%- endblock %}
             {%- for parent in parents %}
               {{ parent.title }} »
             {%- endfor %}
+            {%- block relbaritems %}{% endblock %}
              {%- endblock %} From python-checkins at python.org Sun Mar 9 22:32:25 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 22:32:25 +0100 (CET) Subject: [Python-checkins] r61337 - in doctools/trunk/doc: .build .static .static/sphinx.png .templates .templates/index.html .templates/indexsidebar.html .templates/layout.html builders.rst concepts.rst conf.py config.rst contents.rst ext.py extensions.rst glossary.rst intro.rst markup.rst rest.rst templating.rst Message-ID: <20080309213225.8B0D71E4022@bag.python.org> Author: georg.brandl Date: Sun Mar 9 22:32:24 2008 New Revision: 61337 Added: doctools/trunk/doc/ doctools/trunk/doc/.build/ (props changed) doctools/trunk/doc/.static/ doctools/trunk/doc/.static/sphinx.png (contents, props changed) doctools/trunk/doc/.templates/ doctools/trunk/doc/.templates/index.html doctools/trunk/doc/.templates/indexsidebar.html doctools/trunk/doc/.templates/layout.html doctools/trunk/doc/builders.rst doctools/trunk/doc/concepts.rst doctools/trunk/doc/conf.py doctools/trunk/doc/config.rst doctools/trunk/doc/contents.rst doctools/trunk/doc/ext.py doctools/trunk/doc/extensions.rst doctools/trunk/doc/glossary.rst doctools/trunk/doc/intro.rst doctools/trunk/doc/markup.rst doctools/trunk/doc/rest.rst doctools/trunk/doc/templating.rst Log: First pass at Sphinx documentation. Most of it still needs to be written :) Added: doctools/trunk/doc/.static/sphinx.png ============================================================================== Binary file. No diff available. Added: doctools/trunk/doc/.templates/index.html ============================================================================== --- (empty file) +++ doctools/trunk/doc/.templates/index.html Sun Mar 9 22:32:24 2008 @@ -0,0 +1,60 @@ +{% extends "layout.html" %} +{% set title = 'Overview' %} +{% block body %} +

Welcome

Sphinx is a tool that makes it easy to create intelligent and beautiful
documentation for Python projects, written by Georg Brandl. It was originally
created to translate the new Python documentation, but has now been cleaned up
in the hope that it will be useful to many other projects. (Of course, this
site is also created from reStructuredText sources using Sphinx!)

Although it is still under constant development, the following features are
already present, work fine and can be seen “in action” in the Python docs:

• Output formats: HTML (including Windows HTML Help) and LaTeX, for printable
  PDF versions
• Extensive cross-references: semantic markup and automatic links for
  functions, classes, glossary terms and similar pieces of information
• Hierarchical structure: easy definition of a document tree, with automatic
  links to siblings, parents and children
• Automatic indices: general index as well as a module index
• Code handling: automatic highlighting using the Pygments highlighter

Sphinx uses reStructuredText as its markup language, and many of its strengths
come from the power and straightforwardness of reStructuredText and its parsing
and translating suite, the Docutils.

Documentation

Get Sphinx

Sphinx is available as an easy-installable package on the Python Package Index.

{% endblock %}

Added: doctools/trunk/doc/.templates/indexsidebar.html
==============================================================================

Download

Get Sphinx from the Python Package Index.

Questions? Suggestions?

Send them to <georg at python org>, or come to the #python-docs channel on
FreeNode.

              \ No newline at end of file Added: doctools/trunk/doc/.templates/layout.html ============================================================================== --- (empty file) +++ doctools/trunk/doc/.templates/layout.html Sun Mar 9 22:32:24 2008 @@ -0,0 +1,12 @@ +{% extends "!layout.html" %} + +{% block rootrellink %} +
+        Sphinx home
+        Documentation »
+{% endblock %}
+
+{% block beforerelbar %}
              +{% endblock %} Added: doctools/trunk/doc/builders.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/builders.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,9 @@ +.. _builders: + +Builders and the environment +============================ + +.. module:: sphinx.builder + :synopsis: Available built-in builder classes. + + Added: doctools/trunk/doc/concepts.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/concepts.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,14 @@ +.. _concepts: + +Sphinx concepts +=============== + + +The TOC tree +------------ + + +Document names +-------------- + + Added: doctools/trunk/doc/conf.py ============================================================================== --- (empty file) +++ doctools/trunk/doc/conf.py Sun Mar 9 22:32:24 2008 @@ -0,0 +1,125 @@ +# -*- coding: utf-8 -*- +# +# Sphinx documentation build configuration file, created by +# sphinx-quickstart.py on Sat Mar 8 21:47:50 2008. +# +# This file is execfile()d with the current directory set to its containing dir. +# +# The contents of this file are pickled, so don't put values in the namespace +# that aren't pickleable (module imports are okay, they're removed automatically). +# +# All configuration values have a default value; values that are commented out +# serve to show the default value. + +import sys + +# If your extensions are in another directory, add it here. +sys.path.append('.') + +# General configuration +# --------------------- + +# Add any Sphinx extension module names here, as strings. They can be extensions +# coming with Sphinx (named 'sphinx.addons.*') or your custom ones. +extensions = ['ext'] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['.templates'] + +# The suffix of source filenames. +source_suffix = '.rst' + +# The master toctree document. +master_doc = 'contents' + +# General substitutions. +project = 'Sphinx' +copyright = '2008, Georg Brandl' + +# The default replacements for |version| and |release|, also used in various +# other places throughout the built documents. +# +# The short X.Y version. +version = '0.1' +# The full version, including alpha/beta/rc tags. +release = '0.1' + +# There are two options for replacing |today|: either, you set today to some +# non-false value, then it is used: +#today = '' +# Else, today_fmt is used as the format for a strftime call. +today_fmt = '%B %d, %Y' + +# List of documents that shouldn't be included in the build. +#unused_docs = [] + +# If true, '()' will be appended to :func: etc. cross-reference text. +#add_function_parentheses = True + +# If true, the current module name will be prepended to all description +# unit titles (such as .. function::). +#add_module_names = True + +show_authors = True + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'friendly' + + +# Options for HTML output +# ----------------------- + +# The style sheet to use for HTML and HTML Help pages. A file of that name +# must exist either in Sphinx' static/ path, or in one of the custom paths +# given in html_static_path. +html_style = 'sphinxdoc.css' + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". 
+html_static_path = ['.static'] + +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, +# using the given strftime format. +html_last_updated_fmt = '%b %d, %Y' + +# If true, SmartyPants will be used to convert quotes and dashes to +# typographically correct entities. +#html_use_smartypants = True + +# Content template for the index page. +html_index = 'index.html' + +# Custom sidebar templates, maps page names to templates. +html_sidebars = {'index': 'indexsidebar.html'} + +# Additional templates that should be rendered to pages, maps page names to +# templates. +#html_additional_pages = {} + +# If true, the reST sources are included in the HTML build as _sources/. +#html_copy_source = True + +# Output file base name for HTML help builder. +htmlhelp_basename = 'Sphinxdoc' + + +# Options for LaTeX output +# ------------------------ + +# The paper size ('letter' or 'a4'). +#latex_paper_size = 'letter' + +# The font size ('10pt', '11pt' or '12pt'). +#latex_font_size = '10pt' + +# Grouping the document tree into LaTeX files. List of tuples +# (source start file, target name, title, author, document class [howto/manual]). +latex_documents = [('contents', 'sphinx.tex', 'Sphinx Documentation', + 'Georg Brandl', 'manual')] + +# Additional stuff for the LaTeX preamble. +#latex_preamble = '' + +# Documents to append as an appendix to all manuals. +#latex_appendices = [] Added: doctools/trunk/doc/config.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/config.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,204 @@ +.. highlightlang:: python + +The build configuration file +============================ + +.. module:: conf + :synopsis: Build configuration file. + +The :term:`documentation root` must contain a file named :file:`conf.py`. This +file (containing Python code) is called the "build configuration file" and +contains all configuration needed to customize Sphinx input and output behavior. + +The configuration file if executed as Python code at build time (using +:func:`execfile`, and with the current directory set to the documentation root), +and therefore can execute arbitrarily complex code. Sphinx then reads simple +names from the file's namespace as its configuration. + +Two conventions are important to keep in mind here: Relative paths are always +used relative to the documentation root, and document names must always be given +without file name extension. + +The contents of the namespace are pickled (so that Sphinx can find out when +configuration changes), so it may not contain unpickleable values -- delete them +from the namespace with ``del`` if appropriate. Modules are removed +automatically, so you don't need to ``del`` your imports after use. + +The configuration values can be separated in several groups. If not otherwise +documented, values must be strings, and their default is the empty string. + + +General configuration +--------------------- + +.. confval:: extensions + + A list of strings that are module names of Sphinx extensions. These can be + extensions coming with Sphinx (named ``sphinx.addons.*``) or custom ones. + + Note that you can extend :data:`sys.path` within the conf file if your + extensions live in another directory. + +.. confval:: templates_path + + A list of paths that contain extra templates (or templates that overwrite + builtin templates). + +.. confval:: source_suffix + + The file name extension of source files. Only files with this suffix will be + read as sources. 
Default is ``.rst``. + +.. confval:: master_doc + + The document name of the "master" document, that is, the document that + contains the root :dir:`toctree` directive. Default is ``'contents'``. + +.. confval:: project + + The documented project's name. + +.. confval:: copyright + + A copyright statement in the style ``'2008, Author Name'``. + +.. confval:: version + + The major project version, used as the replacement for ``|version|``. For + example, for the Python documentation, this may be something like ``2.6``. + +.. confval:: release + + The full project version, used as the replacement for ``|release|`` and + e.g. in the HTML templates. For example, for the Python documentation, this + may be something like ``2.6.0rc1``. + + If you don't need the separation provided between :confval:`version` and + :confval:`release`, just set them both to the same value. + +.. confval:: today + today_fmt + + These values determine how to format the current date, used as the + replacement for ``|today|``. + + * If you set :confval:`today` to a non-empty value, it is used. + * Otherwise, the current time is formatted using :func:`time.strftime` and + the format given in :confval:`today_fmt`. + + The default is no :confval:`today` and a :confval:`today_fmt` of ``'%B %d, + %Y'``. + +.. confval:: unused_docs + + A list of document names that are present, but not currently included in the + toctree. Use this setting to suppress the warning that is normally emitted + in that case. + +.. confval:: add_function_parentheses + + A boolean that decides whether parentheses are appended to function and + method role text (e.g. the content of ``:func:`input```) to signify that the + name is callable. Default is ``True``. + +.. confval:: add_module_names + + A boolean that decides whether module names are prepended to all + :term:`description unit` titles, e.g. for :dir:`function` directives. + Default is ``True``. + +.. confval:: show_authors + + A boolean that decides whether :dir:`moduleauthor` and :dir:`sectionauthor` + directives produce any output in the built files. + +.. confval:: pygments_style + + The style name to use for Pygments highlighting of source code. Default is + ``'sphinx'``, which is a builtin style designed to match Sphinx' default + style. + + +Options for HTML output +----------------------- + +These options influence HTML as well as HTML Help output, and other builders +that use Sphinx' HTMLWriter class. + +.. confval:: html_style + + The style sheet to use for HTML pages. A file of that name must exist either + in Sphinx' :file:`static/` path, or in one of the custom paths given in + :confval:`html_static_path`. Default is ``'default.css'``. + +.. confval:: html_static_path + + A list of paths that contain custom static files (such as style sheets or + script files). They are copied to the output directory after the builtin + static files, so a file named :file:`default.css` will overwrite the builtin + :file:`default.css`. + +.. confval:: html_last_updated_fmt + + If this is not the empty string, a 'Last updated on:' timestamp is inserted + at every page bottom, using the given :func:`strftime` format. Default is + ``'%b %d, %Y'``. + +.. confval:: html_use_smartypants + + If true, *SmartyPants* will be used to convert quotes and dashes to + typographically correct entities. Default: ``True``. + +.. confval:: html_index + + Content template for the index page, filename relative to this file. 
If this + is not the empty string, the "index" document will not be created from a + reStructuredText file but from this template. + +.. confval:: html_sidebars + + Custom sidebar templates, must be a dictionary that maps document names to + template names. + +.. confval:: html_additional_pages + + Additional templates that should be rendered to HTML pages, must be a + dictionary that maps document names to template names. + +.. confval:: html_copy_source + + If true, the reST sources are included in the HTML build as + :file:`_sources/{name}`. + +.. confval:: htmlhelp_basename + + Output file base name for HTML help builder. Default is ``'pydoc'``. + + +Options for LaTeX output +------------------------ + +These options influence LaTeX output. + +.. confval:: latex_paper_size + + The output paper size (``'letter'`` or ``'a4'``). Default is ``'letter'``. + +.. confval:: latex_font_size + + The font size ('10pt', '11pt' or '12pt'). Default is ``'10pt'``. + +.. confval:: latex_documents + + Grouping the document tree into LaTeX files. List of tuples (source start + file, target name, title, author, document class [howto/manual]). + + XXX expand. + +.. confval:: latex_appendices + + Documents to append as an appendix to all manuals. + +.. confval:: latex_preamble + + Additional LaTeX markup for the preamble. Added: doctools/trunk/doc/contents.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/contents.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,27 @@ +.. _contents: + +Sphinx documentation contents +============================= + +.. toctree:: + :maxdepth: 1 + + intro.rst + concepts.rst + rest.rst + markup.rst + builders.rst + config.rst + templating.rst + extensions.rst + + glossary.rst + + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` +* :ref:`glossary` Added: doctools/trunk/doc/ext.py ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext.py Sun Mar 9 22:32:24 2008 @@ -0,0 +1,4 @@ +def setup(app): + app.add_description_unit('directive', 'dir', 'directive') + app.add_description_unit('role', 'role', 'role') + app.add_description_unit('confval', 'confval', 'configuration value') Added: doctools/trunk/doc/extensions.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/extensions.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,110 @@ +.. _extensions: + +Sphinx Extensions +================= + +.. module:: sphinx.application + :synopsis: Application class and extensibility interface. + +Since many projects will need special features in their documentation, Sphinx is +designed to be extensible on several levels. + +First, you can add new :term:`builder`\s to support new output formats or +actions on the parsed documents. Then, it is possible to register custom +reStructuredText roles and directives, extending the markup. And finally, there +are so-called "hook points" at strategic places throughout the build process, +where an extension can register a hook and run specialized code. + +Each Sphinx extension is a Python module with at least a :func:`setup` function. +This function is called at initialization time with one argument, the +application object representing the Sphinx process. This application object has +the following public API: + +.. method:: Application.add_builder(builder) + + Register a new builder. 
*builder* must be a class that inherits from + :class:`~sphinx.builder.Builder`. + +.. method:: Application.add_config_value(name, default, rebuild_env) + + Register a configuration value. This is necessary for Sphinx to recognize + new values and set default values accordingly. The *name* should be prefixed + with the extension name, to avoid clashes. The *default* value can be any + Python object. The boolean value *rebuild_env* must be ``True`` if a change + in the setting only takes effect when a document is parsed -- this means that + the whole environment must be rebuilt. + +.. method:: Application.add_event(name) + + Register an event called *name*. + +.. method:: Application.add_node(node) + + Register a Docutils node class. This is necessary for Docutils internals. + It may also be used in the future to validate nodes in the parsed documents. + +.. method:: Application.add_directive(name, cls, content, arguments, **options) + + Register a Docutils directive. *name* must be the prospective directive + name, *func* the directive function (see the Docutils documentation - XXX + ref) for details about the signature and return value. *content*, + *arguments* and *options* are set as attributes on the function and determine + whether the directive has content, arguments and options, respectively. For + their exact meaning, please consult the Docutils documentation. + +.. method:: Application.add_role(name, role) + + Register a Docutils role. *name* must be the role name that occurs in the + source, *role* the role function (see the Docutils documentation on details). + +.. method:: Application.add_description_unit(directivename, rolename, indexdesc='', parse_node=None) + + XXX + +.. method:: Application.connect(event, callback) + + Register *callback* to be called when *event* is emitted. For details on + available core events and the arguments of callback functions, please see + :ref:`events`. + + The method returns a "listener ID" that can be used as an argument to + :meth:`disconnect`. + +.. method:: Application.disconnect(listener_id) + + Unregister callback *listener_id*. + +.. method:: Application.emit(event, *arguments) + + Emit *event* and pass *arguments* to the callback functions. Do not emit + core Sphinx events in extensions! + + +.. exception:: ExtensionError + + All these functions raise this exception if something went wrong with the + extension API. + +Examples of using the Sphinx extension API can be seen in the :mod:`sphinx.ext` +package. + + +.. _events: + +Sphinx core events +------------------ + +These events are known to the core: + +====================== =================================== ========= +Event name Emitted when Arguments +====================== =================================== ========= +``'builder-inited'`` the builder object has been created -none- +``'doctree-read'`` a doctree has been parsed and read *doctree* + by the environment, and is about to + be pickled +``'doctree-resolved'`` a doctree has been "resolved" by *doctree*, *docname* + the environment, that is, all + references and TOCs have been + inserted +====================== =================================== ========= Added: doctools/trunk/doc/glossary.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/glossary.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,22 @@ +.. _glossary: + +Glossary +======== + +.. 
glossary:: + + builder + A class (inheriting from :class:`~sphinx.builder.Builder`) that takes + parsed documents and performs an action on them. Normally, builders + translate the documents to an output format, but it is also possible to + use the builder builders that e.g. check for broken links in the + documentation, or build coverage information. + + See :ref:`builders` for an overview over Sphinx' built-in builders. + + description unit + XXX + + documentation root + The directory which contains the documentation's :file:`conf.py` file and + is therefore seen as one Sphinx project. Added: doctools/trunk/doc/intro.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/intro.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,11 @@ +Introduction +============ + + + +Prerequisites +------------- + + +Running a build +--------------- Added: doctools/trunk/doc/markup.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,835 @@ +.. highlight:: rest + :linenothreshold: 5 + +.. XXX missing: glossary + + +Sphinx Markup Constructs +======================== + +Sphinx adds a lot of new directives and interpreted text roles to standard reST +markup. This section contains the reference material for these facilities. + + +File-wide metadata +------------------ + +reST has the concept of "field lists"; these are a sequence of fields marked up +like this:: + + :Field name: Field content + +A field list at the very top of a file is parsed as the "docinfo", which in +normal documents can be used to record the author, date of publication and +other metadata. In Sphinx, the docinfo is used as metadata, too, but not +displayed in the output. + +At the moment, only one metadata field is recognized: + +``nocomments`` + If set, the web application won't display a comment form for a page generated + from this source file. + + +Meta-information markup +----------------------- + +.. directive:: sectionauthor + + Identifies the author of the current section. The argument should include + the author's name such that it can be used for presentation and email + address. The domain name portion of the address should be lower case. + Example:: + + .. sectionauthor:: Guido van Rossum + + By default, this markup isn't reflected in the output in any way (it helps + keep track of contributions), but you can set the configuration value + :confval:`show_authors` to True to make them produce a paragraph in the + output. + + +Module-specific markup +---------------------- + +The markup described in this section is used to provide information about a +module being documented. Each module should be documented in its own file. +Normally this markup appears after the title heading of that file; a typical +file might start like this:: + + :mod:`parrot` -- Dead parrot access + =================================== + + .. module:: parrot + :platform: Unix, Windows + :synopsis: Analyze and reanimate dead parrots. + .. moduleauthor:: Eric Cleese + .. moduleauthor:: John Idle + +As you can see, the module-specific markup consists of two directives, the +``module`` directive and the ``moduleauthor`` directive. + +.. directive:: module + + This directive marks the beginning of the description of a module (or package + submodule, in which case the name should be fully qualified, including the + package name). 
+ + The ``platform`` option, if present, is a comma-separated list of the + platforms on which the module is available (if it is available on all + platforms, the option should be omitted). The keys are short identifiers; + examples that are in use include "IRIX", "Mac", "Windows", and "Unix". It is + important to use a key which has already been used when applicable. + + The ``synopsis`` option should consist of one sentence describing the + module's purpose -- it is currently only used in the Global Module Index. + + The ``deprecated`` option can be given (with no value) to mark a module as + deprecated; it will be designated as such in various locations then. + +.. directive:: moduleauthor + + The ``moduleauthor`` directive, which can appear multiple times, names the + authors of the module code, just like ``sectionauthor`` names the author(s) + of a piece of documentation. It too only produces output if the + :confval:`show_authors` configuration value is True. + + +.. note:: + + It is important to make the section title of a module-describing file + meaningful since that value will be inserted in the table-of-contents trees + in overview files. + + +Information units +----------------- + +There are a number of directives used to describe specific features provided by +modules. Each directive requires one or more signatures to provide basic +information about what is being described, and the content should be the +description. The basic version makes entries in the general index; if no index +entry is desired, you can give the directive option flag ``:noindex:``. The +following example shows all of the features of this directive type:: + + .. function:: spam(eggs) + ham(eggs) + :noindex: + + Spam or ham the foo. + +The signatures of object methods or data attributes should always include the +type name (``.. method:: FileInput.input(...)``), even if it is obvious from the +context which type they belong to; this is to enable consistent +cross-references. If you describe methods belonging to an abstract protocol, +such as "context managers", include a (pseudo-)type name too to make the +index entries more informative. + +The directives are: + +.. directive:: cfunction + + Describes a C function. The signature should be given as in C, e.g.:: + + .. cfunction:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems) + + This is also used to describe function-like preprocessor macros. The names + of the arguments should be given so they may be used in the description. + + Note that you don't have to backslash-escape asterisks in the signature, + as it is not parsed by the reST inliner. + +.. directive:: cmember + + Describes a C struct member. Example signature:: + + .. cmember:: PyObject* PyTypeObject.tp_bases + + The text of the description should include the range of values allowed, how + the value should be interpreted, and whether the value can be changed. + References to structure members in text should use the ``member`` role. + +.. directive:: cmacro + + Describes a "simple" C macro. Simple macros are macros which are used + for code expansion, but which do not take arguments so cannot be described as + functions. This is not to be used for simple constant definitions. Examples + of its use in the Python documentation include :cmacro:`PyObject_HEAD` and + :cmacro:`Py_BEGIN_ALLOW_THREADS`. + +.. directive:: ctype + + Describes a C type. The signature should just be the type name. + +.. directive:: cvar + + Describes a global C variable. 
The signature should include the type, such + as:: + + .. cvar:: PyObject* PyClass_Type + +.. directive:: data + + Describes global data in a module, including both variables and values used + as "defined constants." Class and object attributes are not documented + using this environment. + +.. directive:: exception + + Describes an exception class. The signature can, but need not include + parentheses with constructor arguments. + +.. directive:: function + + Describes a module-level function. The signature should include the + parameters, enclosing optional parameters in brackets. Default values can be + given if it enhances clarity. For example:: + + .. function:: Timer.repeat([repeat=3[, number=1000000]]) + + Object methods are not documented using this directive. Bound object methods + placed in the module namespace as part of the public interface of the module + are documented using this, as they are equivalent to normal functions for + most purposes. + + The description should include information about the parameters required and + how they are used (especially whether mutable objects passed as parameters + are modified), side effects, and possible exceptions. A small example may be + provided. + +.. directive:: class + + Describes a class. The signature can include parentheses with parameters + which will be shown as the constructor arguments. + +.. directive:: attribute + + Describes an object data attribute. The description should include + information about the type of the data to be expected and whether it may be + changed directly. + +.. directive:: method + + Describes an object method. The parameters should not include the ``self`` + parameter. The description should include similar information to that + described for ``function``. + +.. directive:: opcode + + Describes a Python bytecode instruction (this is not very useful for projects + other than Python itself). + +.. directive:: cmdoption + + Describes a command line option or switch. Option argument names should be + enclosed in angle brackets. Example:: + + .. cmdoption:: -m + + Run a module as a script. + +.. directive:: envvar + + Describes an environment variable that the documented code uses or defines. + + +There is also a generic version of these directives: + +.. directive:: describe + + This directive produces the same formatting as the specific ones explained + above but does not create index entries or cross-referencing targets. It is + used, for example, to describe the directives in this document. Example:: + + .. describe:: opcode + + Describes a Python bytecode instruction. + + +Showing code examples +--------------------- + +Examples of Python source code or interactive sessions are represented using +standard reST literal blocks. They are started by a ``::`` at the end of the +preceding paragraph and delimited by indentation. + +Representing an interactive session requires including the prompts and output +along with the Python code. No special markup is required for interactive +sessions. After the last line of input or output presented, there should not be +an "unused" primary prompt; this is an example of what *not* to do:: + + >>> 1 + 1 + 2 + >>> + +Syntax highlighting is handled in a smart way: + +* There is a "highlighting language" for each source file. Per default, + this is ``'python'`` as the majority of files will have to highlight Python + snippets. + +* Within Python highlighting mode, interactive sessions are recognized + automatically and highlighted appropriately. 
+ +* The highlighting language can be changed using the ``highlightlang`` + directive, used as follows:: + + .. highlightlang:: c + + This language is used until the next ``highlightlang`` directive is + encountered. + +* The valid values for the highlighting language are: + + * ``python`` (the default) + * ``c`` + * ``rest`` + * ``none`` (no highlighting) + +* If highlighting with the current language fails, the block is not highlighted + in any way. + +Longer displays of verbatim text may be included by storing the example text in +an external file containing only plain text. The file may be included using the +``literalinclude`` directive. [1]_ For example, to include the Python source file +:file:`example.py`, use:: + + .. literalinclude:: example.py + +The file name is relative to the current file's path. Documentation-specific +include files should be placed in the ``Doc/includes`` subdirectory. + + +Inline markup +------------- + +As said before, Sphinx uses interpreted text roles to insert semantic markup in +documents. + +Variable names are an exception, they should be marked simply with ``*var*``. + +For all other roles, you have to write ``:rolename:`content```. + +.. note:: + + For all cross-referencing roles, if you prefix the content with ``!``, no + reference/hyperlink will be created. + +The following roles refer to objects in modules and are possibly hyperlinked if +a matching identifier is found: + +.. role:: mod + + The name of a module; a dotted name may be used. This should also be used for + package names. + +.. role:: func + + The name of a Python function; dotted names may be used. The role text + should include trailing parentheses to enhance readability. The parentheses + are stripped when searching for identifiers. + +.. role:: data + + The name of a module-level variable. + +.. role:: const + + The name of a "defined" constant. This may be a C-language ``#define`` + or a Python variable that is not intended to be changed. + +.. role:: class + + A class name; a dotted name may be used. + +.. role:: meth + + The name of a method of an object. The role text should include the type + name, method name and the trailing parentheses. A dotted name may be used. + +.. role:: attr + + The name of a data attribute of an object. + +.. role:: exc + + The name of an exception. A dotted name may be used. + +The name enclosed in this markup can include a module name and/or a class name. +For example, ``:func:`filter``` could refer to a function named ``filter`` in +the current module, or the built-in function of that name. In contrast, +``:func:`foo.filter``` clearly refers to the ``filter`` function in the ``foo`` +module. + +Normally, names in these roles are searched first without any further +qualification, then with the current module name prepended, then with the +current module and class name (if any) prepended. If you prefix the name with a +dot, this order is reversed. For example, in the documentation of the +:mod:`codecs` module, ``:func:`open``` always refers to the built-in function, +while ``:func:`.open``` refers to :func:`codecs.open`. + +A similar heuristic is used to determine whether the name is an attribute of +the currently documented class. + +The following roles create cross-references to C-language constructs if they +are defined in the API documentation: + +.. role:: cdata + + The name of a C-language variable. + +.. role:: cfunc + + The name of a C-language function. Should include trailing parentheses. + +.. 
role:: cmacro + + The name of a "simple" C macro, as defined above. + +.. role:: ctype + + The name of a C-language type. + + +The following roles do possibly create a cross-reference, but do not refer +to objects: + +.. role:: token + + The name of a grammar token (used in the reference manual to create links + between production displays). + +.. role:: keyword + + The name of a keyword in Python. This creates a link to a reference label + with that name, if it exists. + + +The following role creates a cross-reference to the term in the glossary: + +.. role:: term + + Reference to a term in the glossary. The glossary is created using the + ``glossary`` directive containing a definition list with terms and + definitions. It does not have to be in the same file as the ``term`` markup, + for example the Python docs have one global glossary in the ``glossary.rst`` + file. + + If you use a term that's not explained in a glossary, you'll get a warning + during build. + +--------- + +The following roles don't do anything special except formatting the text +in a different style: + +.. role:: command + + The name of an OS-level command, such as ``rm``. + +.. role:: dfn + + Mark the defining instance of a term in the text. (No index entries are + generated.) + +.. role:: envvar + + An environment variable. Index entries are generated. + +.. role:: file + + The name of a file or directory. Within the contents, you can use curly + braces to indicate a "variable" part, for example:: + + ... is installed in :file:`/usr/lib/python2.{x}/site-packages` ... + + In the built documentation, the ``x`` will be displayed differently to + indicate that it is to be replaced by the Python minor version. + +.. role:: guilabel + + Labels presented as part of an interactive user interface should be marked + using ``guilabel``. This includes labels from text-based interfaces such as + those created using :mod:`curses` or other text-based libraries. Any label + used in the interface should be marked with this role, including button + labels, window titles, field names, menu and menu selection names, and even + values in selection lists. + +.. role:: kbd + + Mark a sequence of keystrokes. What form the key sequence takes may depend + on platform- or application-specific conventions. When there are no relevant + conventions, the names of modifier keys should be spelled out, to improve + accessibility for new users and non-native speakers. For example, an + *xemacs* key sequence may be marked like ``:kbd:`C-x C-f```, but without + reference to a specific application or platform, the same sequence should be + marked as ``:kbd:`Control-x Control-f```. + +.. role:: mailheader + + The name of an RFC 822-style mail header. This markup does not imply that + the header is being used in an email message, but can be used to refer to any + header of the same "style." This is also used for headers defined by the + various MIME specifications. The header name should be entered in the same + way it would normally be found in practice, with the camel-casing conventions + being preferred where there is more than one common usage. For example: + ``:mailheader:`Content-Type```. + +.. role:: makevar + + The name of a :command:`make` variable. + +.. role:: manpage + + A reference to a Unix manual page including the section, + e.g. ``:manpage:`ls(1)```. + +.. role:: menuselection + + Menu selections should be marked using the ``menuselection`` role. 
This is + used to mark a complete sequence of menu selections, including selecting + submenus and choosing a specific operation, or any subsequence of such a + sequence. The names of individual selections should be separated by + ``-->``. + + For example, to mark the selection "Start > Programs", use this markup:: + + :menuselection:`Start --> Programs` + + When including a selection that includes some trailing indicator, such as the + ellipsis some operating systems use to indicate that the command opens a + dialog, the indicator should be omitted from the selection name. + +.. role:: mimetype + + The name of a MIME type, or a component of a MIME type (the major or minor + portion, taken alone). + +.. role:: newsgroup + + The name of a Usenet newsgroup. + +.. role:: option + + A command-line option to an executable program. The leading hyphen(s) must + be included. + +.. role:: program + + The name of an executable program. This may differ from the file name for + the executable for some platforms. In particular, the ``.exe`` (or other) + extension should be omitted for Windows programs. + +.. role:: regexp + + A regular expression. Quotes should not be included. + +.. role:: samp + + A piece of literal text, such as code. Within the contents, you can use + curly braces to indicate a "variable" part, as in ``:file:``. + + If you don't need the "variable part" indication, use the standard + ````code```` instead. + +.. role:: var + + A Python or C variable or parameter name. + + +The following roles generate external links: + +.. role:: pep + + A reference to a Python Enhancement Proposal. This generates appropriate + index entries. The text "PEP *number*\ " is generated; in the HTML output, + this text is a hyperlink to an online copy of the specified PEP. + +.. role:: rfc + + A reference to an Internet Request for Comments. This generates appropriate + index entries. The text "RFC *number*\ " is generated; in the HTML output, + this text is a hyperlink to an online copy of the specified RFC. + + +Note that there are no special roles for including hyperlinks, as you can use +the standard reST markup for that purpose. + + +.. _doc-ref-role: + +Cross-linking markup +-------------------- + +.. XXX add new :ref: syntax alternative + +To support cross-referencing to arbitrary sections in the documentation, the +standard reST labels are "abused" a bit: Every label must precede a section +title; and every label name must be unique throughout the entire documentation +source. + +You can then refer to these sections using the ``:ref:`label-name``` role. + +Example:: + + .. _my-reference-label: + + Section to cross-reference + -------------------------- + + This is the text of the section. + + It refers to the section itself, see :ref:`my-reference-label`. + +The ``:ref:`` invocation is replaced with the section title. + + +Paragraph-level markup +---------------------- + +These directives create short paragraphs and can be used inside information +units as well as normal text: + +.. directive:: note + + An especially important bit of information about an API that a user should be + aware of when using whatever bit of API the note pertains to. The content of + the directive should be written in complete sentences and include all + appropriate punctuation. + + Example:: + + .. note:: + + This function is not suitable for sending spam e-mails. + +.. 
directive:: warning + + An important bit of information about an API that a user should be very aware + of when using whatever bit of API the warning pertains to. The content of + the directive should be written in complete sentences and include all + appropriate punctuation. This differs from ``note`` in that it is recommended + over ``note`` for information regarding security. + +.. directive:: versionadded + + This directive documents the version of the project which added the described + feature to the library or C API. When this applies to an entire module, it + should be placed at the top of the module section before any prose. + + The first argument must be given and is the version in question; you can add + a second argument consisting of a *brief* explanation of the change. + + Example:: + + .. versionadded:: 2.5 + The `spam` parameter. + + Note that there must be no blank line between the directive head and the + explanation; this is to make these blocks visually continuous in the markup. + +.. directive:: versionchanged + + Similar to ``versionadded``, but describes when and what changed in the named + feature in some way (new parameters, changed side effects, etc.). + +-------------- + +.. directive:: seealso + + Many sections include a list of references to module documentation or + external documents. These lists are created using the ``seealso`` directive. + + The ``seealso`` directive is typically placed in a section just before any + sub-sections. For the HTML output, it is shown boxed off from the main flow + of the text. + + The content of the ``seealso`` directive should be a reST definition list. + Example:: + + .. seealso:: + + Module :mod:`zipfile` + Documentation of the :mod:`zipfile` standard module. + + `GNU tar manual, Basic Tar Format `_ + Documentation for tar archive files, including GNU tar extensions. + +.. directive:: rubric + + This directive creates a paragraph heading that is not used to create a + table of contents node. It is currently used for the "Footnotes" caption. + +.. directive:: centered + + This directive creates a centered boldfaced paragraph. Use it as follows:: + + .. centered:: + + Paragraph contents. + + +Table-of-contents markup +------------------------ + +Since reST does not have facilities to interconnect several documents, or split +documents into multiple output files, Sphinx uses a custom directive to add +relations between the single files the documentation is made of, as well as +tables of contents. The ``toctree`` directive is the central element. + +.. directive:: toctree + + This directive inserts a "TOC tree" at the current location, using the + individual TOCs (including "sub-TOC trees") of the files given in the + directive body. A numeric ``maxdepth`` option may be given to indicate the + depth of the tree; by default, all levels are included. + + Consider this example (taken from the library reference index):: + + .. toctree:: + :maxdepth: 2 + + intro.rst + strings.rst + datatypes.rst + numeric.rst + (many more files listed here) + + This accomplishes two things: + + * Tables of contents from all those files are inserted, with a maximum depth + of two, which means one nested heading. ``toctree`` directives in those + files are also taken into account (see the sketch after this list). + * Sphinx knows the relative order of the files ``intro.rst``, + ``strings.rst`` and so forth, and it knows that they are children of the + shown file, the library index. From this information it generates "next + chapter", "previous chapter" and "parent chapter" links. 
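For instance (the file names here are invented for illustration), a child document such as ``strings.rst`` could itself begin with a smaller ``toctree``; its entries are then woven into the parent's TOC tree, subject to the parent's ``maxdepth``::

   .. toctree::
      :maxdepth: 1

      str-methods.rst
      string-formatting.rst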
+ + In the end, all files included in the build process must occur in one + ``toctree`` directive; Sphinx will emit a warning if it finds a file that is + not included, because that means that this file will not be reachable through + standard navigation. + + The special file ``contents.rst`` at the root of the source directory is the + "root" of the TOC tree hierarchy; from it the "Contents" page is generated. + + +Index-generating markup +----------------------- + +Sphinx automatically creates index entries from all information units (like +functions, classes or attributes) as discussed before. + +However, there is also an explicit directive available to make the index more +comprehensive and enable index entries in documents where information is not +mainly contained in information units, such as the language reference. + +The directive is ``index`` and contains one or more index entries. Each entry +consists of a type and a value, separated by a colon. + +For example:: + + .. index:: + single: execution; context + module: __main__ + module: sys + triple: module; search; path + +This directive contains five entries, which will be converted to entries in the +generated index which link to the exact location of the index statement (or, in +case of offline media, the corresponding page number). + +The possible entry types are: + +single + Creates a single index entry. Can be made a subentry by separating the + subentry text with a semicolon (this notation is also used below to describe + what entries are created). +pair + ``pair: loop; statement`` is a shortcut that creates two index entries, + namely ``loop; statement`` and ``statement; loop``. +triple + Likewise, ``triple: module; search; path`` is a shortcut that creates three + index entries, which are ``module; search path``, ``search; path, module`` and + ``path; module search``. +module, keyword, operator, object, exception, statement, builtin + These all create two index entries. For example, ``module: hashlib`` creates + the entries ``module; hashlib`` and ``hashlib; module``. + +For index directives containing only "single" entries, there is a shorthand +notation:: + + .. index:: BNF, grammar, syntax, notation + +This creates four index entries. + + +Grammar production displays +--------------------------- + +Special markup is available for displaying the productions of a formal grammar. +The markup is simple and does not attempt to model all aspects of BNF (or any +derived forms), but provides enough to allow context-free grammars to be +displayed in a way that causes uses of a symbol to be rendered as hyperlinks to +the definition of the symbol. There is this directive: + +.. directive:: productionlist + + This directive is used to enclose a group of productions. Each production is + given on a single line and consists of a name, separated by a colon from the + following definition. If the definition spans multiple lines, each + continuation line must begin with a colon placed at the same column as in the + first line. + + Blank lines are not allowed within ``productionlist`` directive arguments. + + The definition can contain token names which are marked as interpreted text + (e.g. ``sum ::= `integer` "+" `integer```) -- this generates cross-references + to the productions of these tokens. + + Note that no further reST parsing is done in the production, so that you + don't have to escape ``*`` or ``|`` characters. + + +.. 
XXX describe optional first parameter + +The following is an example taken from the Python Reference Manual:: + + .. productionlist:: + try_stmt: try1_stmt | try2_stmt + try1_stmt: "try" ":" `suite` + : ("except" [`expression` ["," `target`]] ":" `suite`)+ + : ["else" ":" `suite`] + : ["finally" ":" `suite`] + try2_stmt: "try" ":" `suite` + : "finally" ":" `suite` + + +Substitutions +------------- + +The documentation system provides three substitutions that are defined by default. +They are set in the build configuration file, see :ref:`doc-build-config`. + +.. describe:: |release| + + Replaced by the project release the documentation refers to. This is meant + to be the full version string including alpha/beta/release candidate tags, + e.g. ``2.5.2b3``. + +.. describe:: |version| + + Replaced by the project version the documentation refers to. This is meant to + consist only of the major and minor version parts, e.g. ``2.5``, even for + version 2.5.1. + +.. describe:: |today| + + Replaced by either today's date, or the date set in the build configuration + file. Normally has the format ``April 14, 2007``. + + +.. rubric:: Footnotes + +.. [1] There is a standard ``.. include`` directive, but it raises errors if the + file is not found. This one only emits a warning. Added: doctools/trunk/doc/rest.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/rest.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,251 @@ +.. highlightlang:: rest + +reStructuredText Primer +======================= + +This section is a brief introduction to reStructuredText (reST) concepts and +syntax, intended to provide authors with enough information to author documents +productively. Since reST was designed to be a simple, unobtrusive markup +language, this will not take too long. + +.. seealso:: + + The authoritative `reStructuredText User + Documentation `_. + + +Paragraphs +---------- + +The paragraph is the most basic block in a reST document. Paragraphs are simply +chunks of text separated by one or more blank lines. As in Python, indentation +is significant in reST, so all lines of the same paragraph must be left-aligned +to the same level of indentation. + + +Inline markup +------------- + +The standard reST inline markup is quite simple: use + +* one asterisk: ``*text*`` for emphasis (italics), +* two asterisks: ``**text**`` for strong emphasis (boldface), and +* backquotes: ````text```` for code samples. + +If asterisks or backquotes appear in running text and could be confused with +inline markup delimiters, they have to be escaped with a backslash. + +Be aware of some restrictions of this markup: + +* it may not be nested, +* content may not start or end with whitespace: ``* text*`` is wrong, +* it must be separated from surrounding text by non-word characters. Use a + backslash escaped space to work around that: ``thisis\ *one*\ word``. + +These restrictions may be lifted in future versions of the docutils. + +reST also allows for custom "interpreted text roles"', which signify that the +enclosed text should be interpreted in a specific way. Sphinx uses this to +provide semantic markup and cross-referencing of identifiers, as described in +the appropriate section. The general syntax is ``:rolename:`content```. + + +Lists and Quotes +---------------- + +List markup is natural: just place an asterisk at the start of a paragraph and +indent properly. 
The same goes for numbered lists; they can also be +autonumbered using a ``#`` sign:: + + * This is a bulleted list. + * It has two items, the second + item uses two lines. + + 1. This is a numbered list. + 2. It has two items too. + + #. This is a numbered list. + #. It has two items too. + +Note that Sphinx disables the use of enumerated lists introduced by alphabetic +or roman numerals, such as :: + + A. First item + B. Second item + + +Nested lists are possible, but be aware that they must be separated from the +parent list items by blank lines:: + + * this is + * a list + + * with a nested list + * and some subitems + + * and here the parent list continues + +Definition lists are created as follows:: + + term (up to a line of text) + Definition of the term, which must be indented + + and can even consist of multiple paragraphs + + next term + Description. + + +Paragraphs are quoted by just indenting them more than the surrounding +paragraphs. + + +Source Code +----------- + +Literal code blocks are introduced by ending a paragraph with the special marker +``::``. The literal block must be indented, to be able to include blank lines:: + + This is a normal text paragraph. The next paragraph is a code sample:: + + It is not processed in any way, except + that the indentation is removed. + + It can span multiple lines. + + This is a normal text paragraph again. + +The handling of the ``::`` marker is smart: + +* If it occurs as a paragraph of its own, that paragraph is completely left + out of the document. +* If it is preceded by whitespace, the marker is removed. +* If it is preceded by non-whitespace, the marker is replaced by a single + colon. + +That way, the second sentence in the above example's first paragraph would be +rendered as "The next paragraph is a code sample:". + + +Hyperlinks +---------- + +External links +^^^^^^^^^^^^^^ + +Use ```Link text `_`` for inline web links. If the link text +should be the web address, you don't need special markup at all, the parser +finds links and mail addresses in ordinary text. + +Internal links +^^^^^^^^^^^^^^ + +Internal linking is done via a special reST role, see the section on specific +markup, :ref:`doc-ref-role`. + + +Sections +-------- + +Section headers are created by underlining (and optionally overlining) the +section title with a punctuation character, at least as long as the text:: + + ================= + This is a heading + ================= + +Normally, there are no heading levels assigned to certain characters as the +structure is determined from the succession of headings. However, for the +Python documentation, this convention is used which you may follow: + +* ``#`` with overline, for parts +* ``*`` with overline, for chapters +* ``=``, for sections +* ``-``, for subsections +* ``^``, for subsubsections +* ``"``, for paragraphs + + +Explicit Markup +--------------- + +"Explicit markup" is used in reST for most constructs that need special +handling, such as footnotes, specially-highlighted paragraphs, comments, and +generic directives. + +An explicit markup block begins with a line starting with ``..`` followed by +whitespace and is terminated by the next paragraph at the same level of +indentation. (There needs to be a blank line between explicit markup and normal +paragraphs. This may all sound a bit complicated, but it is intuitive enough +when you write it.) + + +Directives +---------- + +A directive is a generic block of explicit markup. 
Besides roles, it is one of +the extension mechanisms of reST, and Sphinx makes heavy use of it. + +Basically, a directive consists of a name, arguments, options and content. (Keep +this terminology in mind, it is used in the next chapter describing custom +directives.) Looking at this example, :: + + .. function:: foo(x) + foo(y, z) + :bar: no + + Return a line of text input from the user. + +``function`` is the directive name. It is given two arguments here, the +remainder of the first line and the second line, as well as one option ``bar`` +(as you can see, options are given in the lines immediately following the +arguments and indicated by the colons). + +The directive content follows after a blank line and is indented relative to the +directive start. + + +Footnotes +--------- + +For footnotes, use ``[#]_`` to mark the footnote location, and add the footnote +body at the bottom of the document after a "Footnotes" rubric heading, like so:: + + Lorem ipsum [#]_ dolor sit amet ... [#]_ + + .. rubric:: Footnotes + + .. [#] Text of the first footnote. + .. [#] Text of the second footnote. + +You can also explicitly number the footnotes for better context. + + +Comments +-------- + +Every explicit markup block which isn't a valid markup construct (like the +footnotes above) is regarded as a comment. + + +Source encoding +--------------- + +Since the easiest way to include special characters like em dashes or copyright +signs in reST is to directly write them as Unicode characters, one has to +specify an encoding: + +All documentation source files must be in UTF-8 encoding, and the HTML +documents written from them will be in that encoding as well. + + +Gotchas +------- + +There are some problems one commonly runs into while authoring reST documents: + +* **Separation of inline markup:** As said above, inline markup spans must be + separated from the surrounding text by non-word characters, you have to use + an escaped space to get around that. + +.. XXX more? Added: doctools/trunk/doc/templating.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/templating.rst Sun Mar 9 22:32:24 2008 @@ -0,0 +1,4 @@ +.. _templating: + +Templating +========== From python-checkins at python.org Sun Mar 9 22:42:38 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 9 Mar 2008 22:42:38 +0100 (CET) Subject: [Python-checkins] r61338 - doctools/trunk/sphinx/__init__.py Message-ID: <20080309214238.7BC0F1E4023@bag.python.org> Author: georg.brandl Date: Sun Mar 9 22:42:38 2008 New Revision: 61338 Modified: doctools/trunk/sphinx/__init__.py (contents, props changed) Log: Actually use the Python repository's revision number. Modified: doctools/trunk/sphinx/__init__.py ============================================================================== --- doctools/trunk/sphinx/__init__.py (original) +++ doctools/trunk/sphinx/__init__.py Sun Mar 9 22:42:38 2008 @@ -18,7 +18,7 @@ from sphinx.application import Sphinx from sphinx.util.console import nocolor -__version__ = '$Revision: 5369 $'[11:-2] +__version__ = '$Revision$'[11:-2] def usage(argv, msg=None): From buildbot at python.org Mon Mar 10 00:18:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 09 Mar 2008 23:18:55 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080309231856.38AEC1E400D@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/1 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: The web-page 'force build' button was pressed by 'Joseph Armbruster': Test Build Build Source Stamp: [branch branches/py3k] 61311 Blamelist: BUILD FAILED: failed test Excerpt from the test logfile: 6 tests failed: test___all__ test_bsddb3 test_mailbox test_socketserver test_ssl test_xmlrpc_net ====================================================================== ERROR: test_all (test.test___all__.AllTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test___all__.py", line 82, in test_all self.check_all("locale") File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test___all__.py", line 19, in check_all exec("from %s import *" % modname, names) File "", line 1, in AttributeError: 'module' object has no attribute 'strxfrm' ====================================================================== ERROR: test00_associateDBError (bsddb.test.test_associate.AssociateErrorTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 108, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.AssociateHashTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.AssociateHashTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.AssociateBTreeTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.AssociateBTreeTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' 
====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.AssociateRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.AssociateRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.AssociateBTreeTxnTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.AssociateBTreeTxnTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test13_associate_in_transaction (bsddb.test.test_associate.AssociateBTreeTxnTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.ShelveAssociateHashTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.ShelveAssociateHashTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' 
====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.ShelveAssociateBTreeTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.ShelveAssociateBTreeTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.ShelveAssociateRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.ShelveAssociateRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.ThreadedAssociateHashTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.ThreadedAssociateHashTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.ThreadedAssociateBTreeTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' 
====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.ThreadedAssociateBTreeTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_associateWithDB (bsddb.test.test_associate.ThreadedAssociateRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_associateAfterDB (bsddb.test.test_associate.ThreadedAssociateRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_associate.py", line 164, in setUp os.remove(file) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_GetsAndPuts (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_DictionaryMethods (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_SimpleCursorStuff (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithGetReturnsNone1 (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithoutGetReturnsNone0 (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03c_SimpleCursorGetReturnsNone2 (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_PartialGetAndPut (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_GetSize (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Truncate (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test07_EnvRemoveAndRename (bsddb.test.test_basics.BasicBTreeWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_GetsAndPuts (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", 
line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_DictionaryMethods (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_SimpleCursorStuff (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithGetReturnsNone1 (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithoutGetReturnsNone0 (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp 
test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03c_SimpleCursorGetReturnsNone2 (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_PartialGetAndPut (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_GetSize (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Truncate (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test07_EnvRemoveAndRename (bsddb.test.test_basics.BasicHashWithEnvTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_GetsAndPuts (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_DictionaryMethods (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_SimpleCursorStuff (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithGetReturnsNone1 (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithoutGetReturnsNone0 (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03c_SimpleCursorGetReturnsNone2 (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_PartialGetAndPut (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_GetSize (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Transactions (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Truncate (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test07_TxnTruncate (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in 
rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test08_TxnLateUse (bsddb.test.test_basics.BTreeTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_GetsAndPuts (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_DictionaryMethods (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_SimpleCursorStuff (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithGetReturnsNone1 (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithoutGetReturnsNone0 (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03c_SimpleCursorGetReturnsNone2 (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_PartialGetAndPut (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_GetSize (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Transactions (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Truncate (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test07_TxnTruncate (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree 
onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test08_TxnLateUse (bsddb.test.test_basics.HashTransactionTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_GetsAndPuts (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_DictionaryMethods (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_SimpleCursorStuff (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithGetReturnsNone1 (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithoutGetReturnsNone0 (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03c_SimpleCursorGetReturnsNone2 (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_PartialGetAndPut (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_GetSize (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Truncate (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test09_MultiDB (bsddb.test.test_basics.BTreeMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_GetsAndPuts (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) 
WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_DictionaryMethods (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_SimpleCursorStuff (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithGetReturnsNone1 (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03b_SimpleCursorWithoutGetReturnsNone0 (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 
'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03c_SimpleCursorGetReturnsNone2 (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_PartialGetAndPut (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_GetSize (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test06_Truncate (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' 
====================================================================== ERROR: test09_MultiDB (bsddb.test.test_basics.HashMultiDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_basics.py", line 62, in setUp test_support.rmtree(homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_cannot_assign_twice (bsddb.test.test_compare.BtreeExceptionsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_compare_function_bad_return (bsddb.test.test_compare.BtreeExceptionsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_compare_function_exception (bsddb.test.test_compare.BtreeExceptionsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_compare_function_incorrect (bsddb.test.test_compare.BtreeExceptionsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_raises_non_callable (bsddb.test.test_compare.BtreeExceptionsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') 
====================================================================== ERROR: test_set_bt_compare_with_function (bsddb.test.test_compare.BtreeExceptionsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_compare_function_useless (bsddb.test.test_compare.BtreeKeyCompareTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_lexical_ordering (bsddb.test.test_compare.BtreeKeyCompareTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_reverse_lexical_ordering (bsddb.test.test_compare.BtreeKeyCompareTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_compare.py", line 84, in setUp | db.DB_INIT_LOCK | db.DB_THREAD) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_both (bsddb.test.test_dbobj.dbobjTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbobj.py", line 37, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_dbobj_dict_interface (bsddb.test.test_dbobj.dbobjTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbobj.py", line 37, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 
184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_dbobj_type_before_open (bsddb.test.test_dbobj.dbobjTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbobj.py", line 37, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most 
recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvThreadBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvThreadBTreeShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvThreadBTreeShelveTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test01_basics (bsddb.test.test_dbshelve.EnvThreadHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test02_cursors (bsddb.test.test_dbshelve.EnvThreadHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test03_append (bsddb.test.test_dbshelve.EnvThreadHashShelveTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 267, in setUp self.do_open() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbshelve.py", line 278, in do_open self.d.open(self.filename, self.dbtype, self.dbflags) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\dbshelve.py", line 147, in open self.db.open(*args, **kwargs) bsddb.db.DBPermissionsError: (1, 'Operation not permitted') ====================================================================== ERROR: test01_close_dbenv_before_db (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_env_close.py", line 55, in test01_close_dbenv_before_db 0o666) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_close_dbenv_before_db (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_env_close.py", line 49, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_close_dbenv_delete_db_success (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_env_close.py", line 80, in test02_close_dbenv_delete_db_success 0o666) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test02_close_dbenv_delete_db_success (bsddb.test.test_env_close.DBEnvClosedEarlyCrash) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_env_close.py", line 49, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_join (bsddb.test.test_join.JoinTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_join.py", line 60, in setUp self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK ) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_badpointer (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_misc.py", line 36, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_db_home 
(bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_misc.py", line 36, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test03_repr_closed_db (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_misc.py", line 36, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test04_double_free_make_key_dbt (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_misc.py", line 36, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test05_key_with_null_bytes (bsddb.test.test_misc.MiscTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_misc.py", line 36, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_DB_set_flags_persists (bsddb.test.test_misc.MiscTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_misc.py", line 36, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_pickle_DBError (bsddb.test.test_pickle.pickleTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_pickle.py", line 39, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test02_WithSource (bsddb.test.test_recno.SimpleRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_recno.py", line 39, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test01_1WriterMultiReaders (bsddb.test.test_thread.BTreeConcurrentDataStore) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test01_1WriterMultiReaders (bsddb.test.test_thread.HashConcurrentDataStore) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing 
environment') ====================================================================== ERROR: test02_SimpleLocks (bsddb.test.test_thread.BTreeSimpleThreaded) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test02_SimpleLocks (bsddb.test.test_thread.HashSimpleThreaded) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.BTreeThreadedTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.HashThreadedTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.BTreeThreadedNoWaitTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test03_ThreadedTransactions (bsddb.test.test_thread.HashThreadedNoWaitTransactions) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_thread.py", line 62, in setUp self.env.open(self.homeDir, self.envflags | db.DB_CREATE) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- configured environment flags incompatible with existing environment') ====================================================================== ERROR: test_cachesize (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_flags (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_get (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_get_dbp (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_get_key (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown 
test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_range (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_remove (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_stat (bsddb.test.test_sequence.DBSequenceTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_sequence.py", line 48, in tearDown test_support.rmtree(self.homeDir) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== ERROR: test_pget (bsddb.test.test_cursor_pget_bug.pget_bugTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_cursor_pget_bug.py", line 47, in tearDown test_support.rmtree(self.homeDir) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_support.py", line 70, in rmtree shutil.rmtree(path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\joe\\locals~1\\temp\\db_home1508\\dbshelve_db_file.db' ====================================================================== FAIL: test01_both (bsddb.test.test_dbobj.dbobjTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\bsddb\test\test_dbobj.py", line 52, in test01_both "overridden dbobj.DB.put() method failed [1]" AssertionError: overridden dbobj.DB.put() method failed [1] ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 F' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 941, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively 
os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. 
Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== 
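
A note on the recurring bsddb failures above: every tearDown traceback ends in WindowsError [Error 5] because shutil.rmtree() tries to delete dbshelve_db_file.db while a Berkeley DB handle is still open, and the db_home directory that is consequently left behind is the likely cause of the later DBInvalidArgError ("configured environment flags incompatible with existing environment") setUp failures. Below is a minimal, purely illustrative sketch of a retry-based cleanup helper of the kind that can paper over the Windows file-locking window; rmtree_with_retry is a hypothetical name, this is not the actual test_support.rmtree code, and the real fix is to close the DB/DBEnv handles before tearDown runs.

import shutil
import time

def rmtree_with_retry(path, attempts=5, delay=0.5):
    # Retry deletion a few times: on Windows a just-closed DB file can stay
    # locked briefly, raising WindowsError 5 ("Access is denied").
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
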
FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' 
====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nF' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== 
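
The mailbox failures above fall into two groups: comparisons that differ only in line endings (test_dump_message builds its expected string with _sample_message.replace('\n', os.linesep), i.e. '\r\n' on this Windows slave, while the dumped copy contains bare '\n'), and messages whose stored payloads pick up stray leading '\n' or trailing 'F'/'\x01' bytes, after which header lookups such as msg['from'] come back None and key counts come out as 6 or 0 instead of 3. A minimal, purely illustrative sketch of a line-ending-neutral comparison follows; normalize_newlines is a hypothetical helper, not part of the test suite.

import os

def normalize_newlines(text):
    # Collapse platform line endings to '\n' so a dump produced in binary
    # mode can be compared against text built with os.linesep.
    return text.replace('\r\n', '\n').replace('\r', '\n')

sample = 'From: foo\n\n0\n'                    # hypothetical message text
expected = sample.replace('\n', os.linesep)    # what test_dump_message expects
assert normalize_newlines(expected) == sample  # holds on any platform
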
FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nF' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 22 23:24:47 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 
-0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' 
====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01' != 'From: foo\n\noriginal 0' 
====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) 
AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 
(EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove 
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: test_TCPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_socketserver.py", line 129, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_ThreadingTCPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_socketserver.py", line 129, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. 
AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_ThreadingUDPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_socketserver.py", line 129, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: test_UDPServer (test.test_socketserver.SocketServerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_socketserver.py", line 129, in setUp signal.alarm(20) # Kill deadlocks after 20 seconds. AttributeError: 'module' object has no attribute 'alarm' ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 812, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 681, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 630, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 833, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 681, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 630, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 845, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 681, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 630, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 681, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 630, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 728, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 28, in test_current_time self.assert_(delta.days <= 1) AssertionError: None sincerely, -The Buildbot From buildbot at python.org Mon Mar 10 06:28:10 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 10 Mar 2008 05:28:10 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080310052811.30DB31E400D@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/628 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_xmlrpc_net make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 10 11:34:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 10 Mar 2008 10:34:06 +0000 Subject: [Python-checkins] buildbot failure in ia64 Ubuntu 3.0 Message-ID: <20080310103417.C9BCA1E4007@bag.python.org> The Buildbot has detected a new failure of ia64 Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ia64%20Ubuntu%203.0/builds/600 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ia64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From gh at ghaering.de Mon Mar 10 13:18:32 2008 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Mon, 10 Mar 2008 13:18:32 +0100 Subject: [Python-checkins] SQLite test hangs - was: Re: Python Regression Test Failures basics (1) In-Reply-To: References: <20080301113313.GA25006@python.psfb.org> <47D3CC6A.5070302@ghaering.de> Message-ID: <47D52718.9040905@ghaering.de> Neal Norwitz wrote: >> [...] >> First, I couldn't find the hangs you described in the buildbot status >> pages. > > It's only one machine. Here's the most recent: > http://www.python.org/dev/buildbot/all/x86%20gentoo%20trunk/builds/3169/step-test/0 The test_bsddb3 runs in a deadlock error here. Is this really a coincidence? > [...] > If you give me some ideas for where to look/how to debug, I can try to > find the problem on this machine. I don't know if it's specific to > o/s, sqlite version, compiler version, etc. It could be indicative of > a larger problem or not. No way to know without finding the cause, so > I'd like to try if possible. [...] I'd first try to build against a freshly compiled SQLite version that's known to work like http://sqlite.org/sqlite-3.5.4.tar.gz Be sure to ./configure with --enable-threadsafe Random guesses for causes of the problem: - the SQLite library was not compiled with --enable-threadsafe - the SQLite locking did not work because of the filesystem (NFS, ...) - SQLite headers and libraries were of different versions -- Gerhard From buildbot at python.org Mon Mar 10 15:40:44 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 10 Mar 2008 14:40:44 +0000 Subject: [Python-checkins] buildbot failure in AMD64 W2k8 2.5 Message-ID: <20080310144044.46F001E4014@bag.python.org> The Buildbot has detected a new failure of AMD64 W2k8 2.5. 
Full details are available at: http://www.python.org/dev/buildbot/all/AMD64%20W2k8%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-win64 Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Mon Mar 10 17:22:25 2008 From: python-checkins at python.org (phillip.eby) Date: Mon, 10 Mar 2008 17:22:25 +0100 (CET) Subject: [Python-checkins] r61341 - sandbox/trunk/setuptools/setuptools/command/easy_install.py Message-ID: <20080310162225.DDFFA1E400B@bag.python.org> Author: phillip.eby Date: Mon Mar 10 17:22:25 2008 New Revision: 61341 Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: Support installing pywin32 as an egg, albeit without registering COM support, shortcuts, etc. Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Mon Mar 10 17:22:25 2008 @@ -744,8 +744,9 @@ native_libs = [] top_level = {} def process(src,dst): + s = src.lower() for old,new in prefixes: - if src.startswith(old): + if s.startswith(old): src = new+src[len(old):] parts = src.split('/') dst = os.path.join(egg_tmp, *parts) @@ -761,7 +762,6 @@ if not src.endswith('.pth'): log.warn("WARNING: can't process %s", src) return None - # extract, tracking .pyd/.dll->native_libs and .py -> to_compile unpack_archive(dist_filename, egg_tmp, process) stubs = [] @@ -1273,7 +1273,7 @@ """Get exe->egg path translations for a given .exe file""" prefixes = [ - ('PURELIB/', ''), + ('PURELIB/', ''), ('PLATLIB/pywin32_system32', ''), ('PLATLIB/', ''), ('SCRIPTS/', 'EGG-INFO/scripts/') ] @@ -1290,14 +1290,14 @@ continue if name.endswith('-nspkg.pth'): continue - if parts[0] in ('PURELIB','PLATLIB'): + if parts[0].upper() in ('PURELIB','PLATLIB'): for pth in yield_lines(z.read(name)): pth = pth.strip().replace('\\','/') if not pth.startswith('import'): prefixes.append((('%s/%s/' % (parts[0],pth)), '')) finally: z.close() - + prefixes = [(x.lower(),y) for x, y in prefixes] prefixes.sort(); prefixes.reverse() return prefixes From python-checkins at python.org Mon Mar 10 18:02:33 2008 From: python-checkins at python.org (phillip.eby) Date: Mon, 10 Mar 2008 18:02:33 +0100 (CET) Subject: [Python-checkins] r61342 - in sandbox/branches/setuptools-0.6: EasyInstall.txt setuptools/command/easy_install.py Message-ID: <20080310170233.3112D1E4011@bag.python.org> Author: phillip.eby Date: Mon Mar 10 18:02:32 2008 New Revision: 61342 Modified: sandbox/branches/setuptools-0.6/EasyInstall.txt sandbox/branches/setuptools-0.6/setuptools/command/easy_install.py Log: Fixed ``win32.exe`` support for .pth files, so unnecessary directory nesting is flattened out in the resulting egg. (There was a case-sensitivity problem that affected some distributions, notably ``pywin32``.) 
(backport from trunk) Modified: sandbox/branches/setuptools-0.6/EasyInstall.txt ============================================================================== --- sandbox/branches/setuptools-0.6/EasyInstall.txt (original) +++ sandbox/branches/setuptools-0.6/EasyInstall.txt Mon Mar 10 18:02:32 2008 @@ -1234,6 +1234,10 @@ ============================ 0.6final + * Fixed ``win32.exe`` support for .pth files, so unnecessary directory nesting + is flattened out in the resulting egg. (There was a case-sensitivity + problem that affected some distributions, notably ``pywin32``.) + * Prevent ``--help-commands`` and other junk from showing under Python 2.5 when running ``easy_install --help``. Modified: sandbox/branches/setuptools-0.6/setuptools/command/easy_install.py ============================================================================== --- sandbox/branches/setuptools-0.6/setuptools/command/easy_install.py (original) +++ sandbox/branches/setuptools-0.6/setuptools/command/easy_install.py Mon Mar 10 18:02:32 2008 @@ -744,8 +744,9 @@ native_libs = [] top_level = {} def process(src,dst): + s = src.lower() for old,new in prefixes: - if src.startswith(old): + if s.startswith(old): src = new+src[len(old):] parts = src.split('/') dst = os.path.join(egg_tmp, *parts) @@ -761,7 +762,6 @@ if not src.endswith('.pth'): log.warn("WARNING: can't process %s", src) return None - # extract, tracking .pyd/.dll->native_libs and .py -> to_compile unpack_archive(dist_filename, egg_tmp, process) stubs = [] @@ -1273,7 +1273,7 @@ """Get exe->egg path translations for a given .exe file""" prefixes = [ - ('PURELIB/', ''), + ('PURELIB/', ''), ('PLATLIB/pywin32_system32', ''), ('PLATLIB/', ''), ('SCRIPTS/', 'EGG-INFO/scripts/') ] @@ -1290,14 +1290,14 @@ continue if name.endswith('-nspkg.pth'): continue - if parts[0] in ('PURELIB','PLATLIB'): + if parts[0].upper() in ('PURELIB','PLATLIB'): for pth in yield_lines(z.read(name)): pth = pth.strip().replace('\\','/') if not pth.startswith('import'): prefixes.append((('%s/%s/' % (parts[0],pth)), '')) finally: z.close() - + prefixes = [(x.lower(),y) for x, y in prefixes] prefixes.sort(); prefixes.reverse() return prefixes From python-checkins at python.org Mon Mar 10 22:15:03 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 10 Mar 2008 22:15:03 +0100 (CET) Subject: [Python-checkins] r61343 - tracker/roundup-src/roundup/cgi/actions.py Message-ID: <20080310211503.6B05E1E400B@bag.python.org> Author: martin.v.loewis Date: Mon Mar 10 22:15:03 2008 New Revision: 61343 Modified: tracker/roundup-src/roundup/cgi/actions.py Log: Protect against connection loss. 
Modified: tracker/roundup-src/roundup/cgi/actions.py ============================================================================== --- tracker/roundup-src/roundup/cgi/actions.py (original) +++ tracker/roundup-src/roundup/cgi/actions.py Mon Mar 10 22:15:03 2008 @@ -998,11 +998,13 @@ self.client.STORAGE_CHARSET, self.client.charset, 'replace') writer = csv.writer(wfile) - writer.writerow(columns) + # mvl: protect against connection loss + self.client._socket_op(writer.writerow, columns) # and search for itemid in klass.filter(matches, filterspec, sort, group): - writer.writerow([str(klass.get(itemid, col)) for col in columns]) + # mvl: likewise + self.client._socket_op(writer.writerow, [str(klass.get(itemid, col)) for col in columns]) return '\n' From python-checkins at python.org Tue Mar 11 01:19:07 2008 From: python-checkins at python.org (raymond.hettinger) Date: Tue, 11 Mar 2008 01:19:07 +0100 (CET) Subject: [Python-checkins] r61344 - in python/trunk: Doc/library/itertools.rst Lib/test/test_itertools.py Message-ID: <20080311001907.82AA01E400B@bag.python.org> Author: raymond.hettinger Date: Tue Mar 11 01:19:07 2008 New Revision: 61344 Modified: python/trunk/Doc/library/itertools.rst python/trunk/Lib/test/test_itertools.py Log: Add recipe to docs. Modified: python/trunk/Doc/library/itertools.rst ============================================================================== --- python/trunk/Doc/library/itertools.rst (original) +++ python/trunk/Doc/library/itertools.rst Tue Mar 11 01:19:07 2008 @@ -692,3 +692,8 @@ for n in xrange(2**len(pairs)): yield set(x for m, x in pairs if m&n) + def compress(data, selectors): + "compress('abcdef', [1,0,1,0,1,1]) --> a c e f" + for d, s in izip(data, selectors): + if s: + yield d Modified: python/trunk/Lib/test/test_itertools.py ============================================================================== --- python/trunk/Lib/test/test_itertools.py (original) +++ python/trunk/Lib/test/test_itertools.py Tue Mar 11 01:19:07 2008 @@ -1279,6 +1279,12 @@ ... for n in xrange(2**len(pairs)): ... yield set(x for m, x in pairs if m&n) +>>> def compress(data, selectors): +... "compress('abcdef', [1,0,1,0,1,1]) --> a c e f" +... for d, s in izip(data, selectors): +... if s: +... yield d + This is not part of the examples but it tests to make sure the definitions perform as purported. 
@@ -1353,6 +1359,9 @@ >>> map(sorted, powerset('ab')) [[], ['a'], ['b'], ['a', 'b']] +>>> list(compress('abcdef', [1,0,1,0,1,1])) +['a', 'c', 'e', 'f'] + """ __test__ = {'libreftest' : libreftest} From python-checkins at python.org Tue Mar 11 18:59:54 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 11 Mar 2008 18:59:54 +0100 (CET) Subject: [Python-checkins] r61345 - in python/branches/release24-maint: Include/patchlevel.h Lib/idlelib/NEWS.txt Lib/idlelib/idlever.py Misc/NEWS README Message-ID: <20080311175954.07C8B1E4009@bag.python.org> Author: martin.v.loewis Date: Tue Mar 11 18:59:53 2008 New Revision: 61345 Modified: python/branches/release24-maint/Include/patchlevel.h python/branches/release24-maint/Lib/idlelib/NEWS.txt python/branches/release24-maint/Lib/idlelib/idlever.py python/branches/release24-maint/Misc/NEWS python/branches/release24-maint/README Log: Prepare for 2.4.5 Modified: python/branches/release24-maint/Include/patchlevel.h ============================================================================== --- python/branches/release24-maint/Include/patchlevel.h (original) +++ python/branches/release24-maint/Include/patchlevel.h Tue Mar 11 18:59:53 2008 @@ -22,11 +22,11 @@ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 4 #define PY_MICRO_VERSION 5 -#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_GAMMA -#define PY_RELEASE_SERIAL 1 +#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL +#define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.4.5c1" +#define PY_VERSION "2.4.5" /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. Use this for numeric comparisons, e.g. #if PY_VERSION_HEX >= ... */ Modified: python/branches/release24-maint/Lib/idlelib/NEWS.txt ============================================================================== --- python/branches/release24-maint/Lib/idlelib/NEWS.txt (original) +++ python/branches/release24-maint/Lib/idlelib/NEWS.txt Tue Mar 11 18:59:53 2008 @@ -1,3 +1,8 @@ +What's New in IDLE 1.1.5? +========================= + +*Release date: 11-Mar-2006* + What's New in IDLE 1.1.5c1? ========================= Modified: python/branches/release24-maint/Lib/idlelib/idlever.py ============================================================================== --- python/branches/release24-maint/Lib/idlelib/idlever.py (original) +++ python/branches/release24-maint/Lib/idlelib/idlever.py Tue Mar 11 18:59:53 2008 @@ -1 +1 @@ -IDLE_VERSION = "1.1.5c1" +IDLE_VERSION = "1.1.5" Modified: python/branches/release24-maint/Misc/NEWS ============================================================================== --- python/branches/release24-maint/Misc/NEWS (original) +++ python/branches/release24-maint/Misc/NEWS Tue Mar 11 18:59:53 2008 @@ -4,10 +4,15 @@ (editors: check NEWS.help for information about editing NEWS using ReST.) +What's New in Python 2.4.5? +============================= + +*Release date: 11-Mar-2008* + What's New in Python 2.4.5c1? ============================= -*Release date: 20-Mar-2008* +*Release date: 02-Mar-2008* Core and builtins Modified: python/branches/release24-maint/README ============================================================================== --- python/branches/release24-maint/README (original) +++ python/branches/release24-maint/README Tue Mar 11 18:59:53 2008 @@ -1,5 +1,5 @@ -This is Python version 2.4.5c1 -============================== +This is Python version 2.4.5 +============================ Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation. 
All rights reserved. From python-checkins at python.org Tue Mar 11 19:00:09 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 11 Mar 2008 19:00:09 +0100 (CET) Subject: [Python-checkins] r61346 - in python/branches/release23-maint: Include/patchlevel.h Lib/idlelib/NEWS.txt Lib/idlelib/idlever.py Misc/NEWS README Message-ID: <20080311180009.17F6A1E401A@bag.python.org> Author: martin.v.loewis Date: Tue Mar 11 19:00:08 2008 New Revision: 61346 Modified: python/branches/release23-maint/Include/patchlevel.h python/branches/release23-maint/Lib/idlelib/NEWS.txt python/branches/release23-maint/Lib/idlelib/idlever.py python/branches/release23-maint/Misc/NEWS python/branches/release23-maint/README Log: Prepare for 2.3.7. Modified: python/branches/release23-maint/Include/patchlevel.h ============================================================================== --- python/branches/release23-maint/Include/patchlevel.h (original) +++ python/branches/release23-maint/Include/patchlevel.h Tue Mar 11 19:00:08 2008 @@ -22,11 +22,11 @@ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 3 #define PY_MICRO_VERSION 7 -#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_GAMMA -#define PY_RELEASE_SERIAL 1 +#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL +#define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.3.7c1" +#define PY_VERSION "2.3.7" /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. Use this for numeric comparisons, e.g. #if PY_VERSION_HEX >= ... */ Modified: python/branches/release23-maint/Lib/idlelib/NEWS.txt ============================================================================== --- python/branches/release23-maint/Lib/idlelib/NEWS.txt (original) +++ python/branches/release23-maint/Lib/idlelib/NEWS.txt Tue Mar 11 19:00:08 2008 @@ -1,3 +1,8 @@ +What's New in IDLE 1.0.7? +========================= + +*Release date: 11-Mar-2007* + What's New in IDLE 1.0.6? ========================= Modified: python/branches/release23-maint/Lib/idlelib/idlever.py ============================================================================== --- python/branches/release23-maint/Lib/idlelib/idlever.py (original) +++ python/branches/release23-maint/Lib/idlelib/idlever.py Tue Mar 11 19:00:08 2008 @@ -1 +1 @@ -IDLE_VERSION = "1.0.6" +IDLE_VERSION = "1.0.7" Modified: python/branches/release23-maint/Misc/NEWS ============================================================================== --- python/branches/release23-maint/Misc/NEWS (original) +++ python/branches/release23-maint/Misc/NEWS Tue Mar 11 19:00:08 2008 @@ -4,6 +4,11 @@ (editors: check NEWS.help for information about editing NEWS using ReST.) +What's New in Python 2.3.7? +=========================== + +*Release date: 11-Mar-2008* + What's New in Python 2.3.7c1? =========================== Modified: python/branches/release23-maint/README ============================================================================== --- python/branches/release23-maint/README (original) +++ python/branches/release23-maint/README Tue Mar 11 19:00:08 2008 @@ -1,5 +1,5 @@ -This is Python version 2.3.7c1 -============================== +This is Python version 2.3.7 +============================ Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation. All rights reserved. 
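As a side note on the two release-preparation commits above (r61345 and r61346): the Include/patchlevel.h hunks they touch feed the PY_VERSION_HEX value that the surrounding comment describes as "a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2". The sketch below is only an illustration of that encoding, not part of either commit; it unpacks the same fields from sys.hexversion at runtime, and the helper name decode_hexversion is invented for this example.

    import sys

    # Illustrative sketch of the PY_VERSION_HEX / sys.hexversion layout
    # documented in Include/patchlevel.h; not part of r61345 or r61346.
    LEVELS = {0xA: "alpha", 0xB: "beta", 0xC: "candidate", 0xF: "final"}

    def decode_hexversion(hexversion):
        major = (hexversion >> 24) & 0xFF
        minor = (hexversion >> 16) & 0xFF
        micro = (hexversion >> 8) & 0xFF
        level = LEVELS.get((hexversion >> 4) & 0xF, "unknown")
        serial = hexversion & 0xF
        return major, minor, micro, level, serial

    print(decode_hexversion(0x010502B2))     # (1, 5, 2, 'beta', 2), per the patchlevel.h comment
    print(decode_hexversion(0x020405F0))     # (2, 4, 5, 'final', 0), i.e. the 2.4.5 tagged above
    print(decode_hexversion(sys.hexversion)) # fields of the running interpreter

The PY_RELEASE_LEVEL_GAMMA constant replaced in the diffs corresponds to the 0xC "candidate" level, which is why the 2.4.5c1 and 2.3.7c1 pre-releases carried that level before the switch to FINAL.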
From python-checkins at python.org Tue Mar 11 19:00:58 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 11 Mar 2008 19:00:58 +0100 (CET) Subject: [Python-checkins] r61347 - python/tags/r237 Message-ID: <20080311180058.B62DE1E4009@bag.python.org> Author: martin.v.loewis Date: Tue Mar 11 19:00:58 2008 New Revision: 61347 Added: python/tags/r237/ - copied from r61346, python/branches/release23-maint/ Log: Tag 2.3.7. From python-checkins at python.org Tue Mar 11 19:01:22 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 11 Mar 2008 19:01:22 +0100 (CET) Subject: [Python-checkins] r61348 - python/tags/r245 Message-ID: <20080311180122.2B1671E4028@bag.python.org> Author: martin.v.loewis Date: Tue Mar 11 19:01:21 2008 New Revision: 61348 Added: python/tags/r245/ - copied from r61347, python/branches/release24-maint/ Log: Tag 2.4.5. From python-checkins at python.org Tue Mar 11 22:14:55 2008 From: python-checkins at python.org (guido.van.rossum) Date: Tue, 11 Mar 2008 22:14:55 +0100 (CET) Subject: [Python-checkins] r61349 - python/branches/release25-maint/Objects/stringobject.c python/branches/release25-maint/Objects/unicodeobject.c Message-ID: <20080311211455.394BF1E4004@bag.python.org> Author: guido.van.rossum Date: Tue Mar 11 22:14:54 2008 New Revision: 61349 Modified: python/branches/release25-maint/Objects/stringobject.c python/branches/release25-maint/Objects/unicodeobject.c Log: Fix the overflows in expandtabs(). "This time for sure!" (Exploit at request.) Modified: python/branches/release25-maint/Objects/stringobject.c ============================================================================== --- python/branches/release25-maint/Objects/stringobject.c (original) +++ python/branches/release25-maint/Objects/stringobject.c Tue Mar 11 22:14:54 2008 @@ -3299,9 +3299,9 @@ static PyObject* string_expandtabs(PyStringObject *self, PyObject *args) { - const char *e, *p; + const char *e, *p, *qe; char *q; - Py_ssize_t i, j, old_j; + Py_ssize_t i, j, incr; PyObject *u; int tabsize = 8; @@ -3309,63 +3309,70 @@ return NULL; /* First pass: determine size of output string */ - i = j = old_j = 0; - e = PyString_AS_STRING(self) + PyString_GET_SIZE(self); + i = 0; /* chars up to and including most recent \n or \r */ + j = 0; /* chars since most recent \n or \r (use in tab calculations) */ + e = PyString_AS_STRING(self) + PyString_GET_SIZE(self); /* end of input */ for (p = PyString_AS_STRING(self); p < e; p++) if (*p == '\t') { if (tabsize > 0) { - j += tabsize - (j % tabsize); - if (old_j > j) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } - old_j = j; + incr = tabsize - (j % tabsize); + if (j > PY_SSIZE_T_MAX - incr) + goto overflow1; + j += incr; } } else { + if (j > PY_SSIZE_T_MAX - 1) + goto overflow1; j++; if (*p == '\n' || *p == '\r') { + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; i += j; - old_j = j = 0; - if (i < 0) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } + j = 0; } } - if ((i + j) < 0) { - PyErr_SetString(PyExc_OverflowError, "new string is too long"); - return NULL; - } + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; /* Second pass: create output string and fill it */ u = PyString_FromStringAndSize(NULL, i + j); if (!u) return NULL; - j = 0; - q = PyString_AS_STRING(u); + j = 0; /* same as in first pass */ + q = PyString_AS_STRING(u); /* next output char */ + qe = PyString_AS_STRING(u) + PyString_GET_SIZE(u); /* end of output */ for (p = PyString_AS_STRING(self); 
p < e; p++) if (*p == '\t') { if (tabsize > 0) { i = tabsize - (j % tabsize); j += i; - while (i--) + while (i--) { + if (q >= qe) + goto overflow2; *q++ = ' '; + } } } else { - j++; + if (q >= qe) + goto overflow2; *q++ = *p; + j++; if (*p == '\n' || *p == '\r') j = 0; } return u; + + overflow2: + Py_DECREF(u); + overflow1: + PyErr_SetString(PyExc_OverflowError, "new string is too long"); + return NULL; } Py_LOCAL_INLINE(PyObject *) Modified: python/branches/release25-maint/Objects/unicodeobject.c ============================================================================== --- python/branches/release25-maint/Objects/unicodeobject.c (original) +++ python/branches/release25-maint/Objects/unicodeobject.c Tue Mar 11 22:14:54 2008 @@ -5689,7 +5689,8 @@ Py_UNICODE *e; Py_UNICODE *p; Py_UNICODE *q; - Py_ssize_t i, j, old_j; + Py_UNICODE *qe; + Py_ssize_t i, j, incr; PyUnicodeObject *u; int tabsize = 8; @@ -5697,63 +5698,70 @@ return NULL; /* First pass: determine size of output string */ - i = j = old_j = 0; - e = self->str + self->length; + i = 0; /* chars up to and including most recent \n or \r */ + j = 0; /* chars since most recent \n or \r (use in tab calculations) */ + e = self->str + self->length; /* end of input */ for (p = self->str; p < e; p++) if (*p == '\t') { if (tabsize > 0) { - j += tabsize - (j % tabsize); - if (old_j > j) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } - old_j = j; - } + incr = tabsize - (j % tabsize); /* cannot overflow */ + if (j > PY_SSIZE_T_MAX - incr) + goto overflow1; + j += incr; + } } else { + if (j > PY_SSIZE_T_MAX - 1) + goto overflow1; j++; if (*p == '\n' || *p == '\r') { + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; i += j; - old_j = j = 0; - if (i < 0) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } + j = 0; } } - if ((i + j) < 0) { - PyErr_SetString(PyExc_OverflowError, "new string is too long"); - return NULL; - } + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; /* Second pass: create output string and fill it */ u = _PyUnicode_New(i + j); if (!u) return NULL; - j = 0; - q = u->str; + j = 0; /* same as in first pass */ + q = u->str; /* next output char */ + qe = u->str + u->length; /* end of output */ for (p = self->str; p < e; p++) if (*p == '\t') { if (tabsize > 0) { i = tabsize - (j % tabsize); j += i; - while (i--) + while (i--) { + if (q >= qe) + goto overflow2; *q++ = ' '; + } } } else { - j++; + if (q >= qe) + goto overflow2; *q++ = *p; + j++; if (*p == '\n' || *p == '\r') j = 0; } return (PyObject*) u; + + overflow2: + Py_DECREF(u); + overflow1: + PyErr_SetString(PyExc_OverflowError, "new string is too long"); + return NULL; } PyDoc_STRVAR(find__doc__, From python-checkins at python.org Tue Mar 11 22:18:06 2008 From: python-checkins at python.org (guido.van.rossum) Date: Tue, 11 Mar 2008 22:18:06 +0100 (CET) Subject: [Python-checkins] r61350 - python/trunk/Objects/stringobject.c python/trunk/Objects/unicodeobject.c Message-ID: <20080311211806.809981E4004@bag.python.org> Author: guido.van.rossum Date: Tue Mar 11 22:18:06 2008 New Revision: 61350 Modified: python/trunk/Objects/stringobject.c python/trunk/Objects/unicodeobject.c Log: Fix the overflows in expandtabs(). "This time for sure!" (Exploit at request.) 
Modified: python/trunk/Objects/stringobject.c ============================================================================== --- python/trunk/Objects/stringobject.c (original) +++ python/trunk/Objects/stringobject.c Tue Mar 11 22:18:06 2008 @@ -3363,9 +3363,9 @@ static PyObject* string_expandtabs(PyStringObject *self, PyObject *args) { - const char *e, *p; + const char *e, *p, *qe; char *q; - Py_ssize_t i, j, old_j; + Py_ssize_t i, j, incr; PyObject *u; int tabsize = 8; @@ -3373,63 +3373,70 @@ return NULL; /* First pass: determine size of output string */ - i = j = old_j = 0; - e = PyString_AS_STRING(self) + PyString_GET_SIZE(self); + i = 0; /* chars up to and including most recent \n or \r */ + j = 0; /* chars since most recent \n or \r (use in tab calculations) */ + e = PyString_AS_STRING(self) + PyString_GET_SIZE(self); /* end of input */ for (p = PyString_AS_STRING(self); p < e; p++) if (*p == '\t') { if (tabsize > 0) { - j += tabsize - (j % tabsize); - if (old_j > j) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } - old_j = j; + incr = tabsize - (j % tabsize); + if (j > PY_SSIZE_T_MAX - incr) + goto overflow1; + j += incr; } } else { + if (j > PY_SSIZE_T_MAX - 1) + goto overflow1; j++; if (*p == '\n' || *p == '\r') { + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; i += j; - old_j = j = 0; - if (i < 0) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } + j = 0; } } - if ((i + j) < 0) { - PyErr_SetString(PyExc_OverflowError, "new string is too long"); - return NULL; - } + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; /* Second pass: create output string and fill it */ u = PyString_FromStringAndSize(NULL, i + j); if (!u) return NULL; - j = 0; - q = PyString_AS_STRING(u); + j = 0; /* same as in first pass */ + q = PyString_AS_STRING(u); /* next output char */ + qe = PyString_AS_STRING(u) + PyString_GET_SIZE(u); /* end of output */ for (p = PyString_AS_STRING(self); p < e; p++) if (*p == '\t') { if (tabsize > 0) { i = tabsize - (j % tabsize); j += i; - while (i--) + while (i--) { + if (q >= qe) + goto overflow2; *q++ = ' '; + } } } else { - j++; + if (q >= qe) + goto overflow2; *q++ = *p; + j++; if (*p == '\n' || *p == '\r') j = 0; } return u; + + overflow2: + Py_DECREF(u); + overflow1: + PyErr_SetString(PyExc_OverflowError, "new string is too long"); + return NULL; } Py_LOCAL_INLINE(PyObject *) Modified: python/trunk/Objects/unicodeobject.c ============================================================================== --- python/trunk/Objects/unicodeobject.c (original) +++ python/trunk/Objects/unicodeobject.c Tue Mar 11 22:18:06 2008 @@ -6495,7 +6495,8 @@ Py_UNICODE *e; Py_UNICODE *p; Py_UNICODE *q; - Py_ssize_t i, j, old_j; + Py_UNICODE *qe; + Py_ssize_t i, j, incr; PyUnicodeObject *u; int tabsize = 8; @@ -6503,63 +6504,70 @@ return NULL; /* First pass: determine size of output string */ - i = j = old_j = 0; - e = self->str + self->length; + i = 0; /* chars up to and including most recent \n or \r */ + j = 0; /* chars since most recent \n or \r (use in tab calculations) */ + e = self->str + self->length; /* end of input */ for (p = self->str; p < e; p++) if (*p == '\t') { if (tabsize > 0) { - j += tabsize - (j % tabsize); - if (old_j > j) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } - old_j = j; - } + incr = tabsize - (j % tabsize); /* cannot overflow */ + if (j > PY_SSIZE_T_MAX - incr) + goto overflow1; + j += incr; + } } else { + if (j > PY_SSIZE_T_MAX - 1) + goto 
overflow1; j++; if (*p == '\n' || *p == '\r') { + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; i += j; - old_j = j = 0; - if (i < 0) { - PyErr_SetString(PyExc_OverflowError, - "new string is too long"); - return NULL; - } + j = 0; } } - if ((i + j) < 0) { - PyErr_SetString(PyExc_OverflowError, "new string is too long"); - return NULL; - } + if (i > PY_SSIZE_T_MAX - j) + goto overflow1; /* Second pass: create output string and fill it */ u = _PyUnicode_New(i + j); if (!u) return NULL; - j = 0; - q = u->str; + j = 0; /* same as in first pass */ + q = u->str; /* next output char */ + qe = u->str + u->length; /* end of output */ for (p = self->str; p < e; p++) if (*p == '\t') { if (tabsize > 0) { i = tabsize - (j % tabsize); j += i; - while (i--) + while (i--) { + if (q >= qe) + goto overflow2; *q++ = ' '; + } } } else { - j++; + if (q >= qe) + goto overflow2; *q++ = *p; + j++; if (*p == '\n' || *p == '\r') j = 0; } return (PyObject*) u; + + overflow2: + Py_DECREF(u); + overflow1: + PyErr_SetString(PyExc_OverflowError, "new string is too long"); + return NULL; } PyDoc_STRVAR(find__doc__, From buildbot at python.org Tue Mar 11 22:28:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 11 Mar 2008 21:28:19 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD 2 2.5 Message-ID: <20080311212819.537B41E4004@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD 2 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%202%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: werven-freebsd Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: guido.van.rossum BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Tue Mar 11 22:34:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 11 Mar 2008 21:34:02 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 2.5 Message-ID: <20080311213403.027321E4004@bag.python.org> The Buildbot has detected a new failure of x86 XP 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: guido.van.rossum BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Tue Mar 11 22:37:47 2008 From: python-checkins at python.org (raymond.hettinger) Date: Tue, 11 Mar 2008 22:37:47 +0100 (CET) Subject: [Python-checkins] r61351 - python/trunk/Doc/library/operator.rst Message-ID: <20080311213747.2E3B31E4004@bag.python.org> Author: raymond.hettinger Date: Tue Mar 11 22:37:46 2008 New Revision: 61351 Modified: python/trunk/Doc/library/operator.rst Log: Improve docs for itemgetter(). Show that it works with slices. Modified: python/trunk/Doc/library/operator.rst ============================================================================== --- python/trunk/Doc/library/operator.rst (original) +++ python/trunk/Doc/library/operator.rst Tue Mar 11 22:37:46 2008 @@ -517,25 +517,46 @@ .. function:: itemgetter(item[, args...]) - Return a callable object that fetches *item* from its operand. If more than one - item is requested, returns a tuple of items. After, ``f=itemgetter(2)``, the - call ``f(b)`` returns ``b[2]``. After, ``f=itemgetter(2,5,3)``, the call - ``f(b)`` returns ``(b[2], b[5], b[3])``. 
+ Return a callable object that fetches *item* from its operand using the + operand's :meth:`__getitem__` method. If multiple items are specified, + returns a tuple of lookup values. Equivalent to:: + + def itemgetter(*items): + if len(items) == 1: + item = items[0] + def g(obj): + return obj[item] + else: + def g(obj): + return tuple(obj[item] for item in items) + return g + + The items can be any type accepted by the operand's :meth:`__getitem__` + method. Dictionaries accept any hashable value. Lists, tuples, and + strings accept an index or a slice:: + + >>> itemgetter(1)('ABCDEFG') + 'B' + >>> itemgetter(1,3,5)('ABCDEFG') + ('B', 'D', 'F') + >>> itemgetter(slice(2,None))('ABCDEFG') + 'CDEFG' .. versionadded:: 2.4 .. versionchanged:: 2.5 Added support for multiple item extraction. -Examples:: + Example of using :func:`itemgetter` to retrieve specific fields from a + tuple record:: - >>> from operator import itemgetter - >>> inventory = [('apple', 3), ('banana', 2), ('pear', 5), ('orange', 1)] - >>> getcount = itemgetter(1) - >>> map(getcount, inventory) - [3, 2, 5, 1] - >>> sorted(inventory, key=getcount) - [('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)] + >>> from operator import itemgetter + >>> inventory = [('apple', 3), ('banana', 2), ('pear', 5), ('orange', 1)] + >>> getcount = itemgetter(1) + >>> map(getcount, inventory) + [3, 2, 5, 1] + >>> sorted(inventory, key=getcount) + [('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)] .. function:: methodcaller(name[, args...]) From nnorwitz at gmail.com Tue Mar 11 23:25:46 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Mar 2008 17:25:46 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080311222546.GA26490@python.psfb.org> More important issues: ---------------------- test_telnetlib leaked [-78, 0, 0] references, sum=-78 Less important issues: ---------------------- test_smtplib leaked [86, -86, 119] references, sum=119 test_threadsignals leaked [0, 0, -8] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From buildbot at python.org Tue Mar 11 23:53:48 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 11 Mar 2008 22:53:48 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 2.5 Message-ID: <20080311225348.3D2EB1E4009@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%202.5/builds/58 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: guido.van.rossum BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tarfile make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 12 00:19:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 11 Mar 2008 23:19:53 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080311231953.77C8D1E4009@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/714 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: guido.van.rossum BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_smtplib sincerely, -The Buildbot From buildbot at python.org Wed Mar 12 01:56:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 12 Mar 2008 00:56:31 +0000 Subject: [Python-checkins] buildbot failure in hppa Ubuntu 2.5 Message-ID: <20080312005631.DB04B1E400F@bag.python.org> The Buildbot has detected a new failure of hppa Ubuntu 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/hppa%20Ubuntu%202.5/builds/170 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-hppa Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: guido.van.rossum BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout ====================================================================== FAIL: testConnectTimeout (test.test_timeout.TimeoutTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/2.5.klose-ubuntu-hppa/build/Lib/test/test_timeout.py", line 122, in testConnectTimeout self.addr_remote) AssertionError: error not raised make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 12 10:43:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 12 Mar 2008 09:43:28 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080312094328.B96861E4010@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/695 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From nnorwitz at gmail.com Wed Mar 12 11:28:37 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Mar 2008 05:28:37 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080312102837.GA9897@python.psfb.org> More important issues: ---------------------- test_threadedtempfile leaked [0, 0, 100] references, sum=100 Less important issues: ---------------------- test_cmd_line leaked [23, 0, -23] references, sum=0 test_smtplib leaked [-5, 5, 119] references, sum=119 test_socketserver leaked [1, -81, 0] references, sum=-80 test_threadsignals leaked [0, -8, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From nnorwitz at gmail.com Wed Mar 12 11:48:32 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Mar 2008 05:48:32 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080312104832.GA13062@python.psfb.org> 318 tests OK. 
1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 Exception in thread reader 0: Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/threading.py", line 490, in __bootstrap_inner self.run() File "/tmp/python-test/local/lib/python2.6/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/tmp/python-test/local/lib/python2.6/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/tmp/python-test/local/lib/python2.6/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Exception in thread writer 1: Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/threading.py", line 490, in __bootstrap_inner self.run() File "/tmp/python-test/local/lib/python2.6/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/tmp/python-test/local/lib/python2.6/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/tmp/python-test/local/lib/python2.6/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '1007-1007-1007-1007-1007' Exception in thread writer 0: Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/threading.py", line 490, in __bootstrap_inner self.run() File "/tmp/python-test/local/lib/python2.6/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/tmp/python-test/local/lib/python2.6/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/tmp/python-test/local/lib/python2.6/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '0004-0004-0004-0004-0004' Exception in thread writer 2: Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/threading.py", line 490, in __bootstrap_inner self.run() File "/tmp/python-test/local/lib/python2.6/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/tmp/python-test/local/lib/python2.6/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/tmp/python-test/local/lib/python2.6/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '2002-2002-2002-2002-2002' test_buffer test_bufio test_bz2 test_calendar 
test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. 
ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:74: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_softspace test_sort test_sqlite test_ssl test test_ssl failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_ssl.py", line 136, in testFetchServerCert pem = ssl.get_server_certificate(("svn.python.org", 443)) File "/tmp/python-test/local/lib/python2.6/ssl.py", line 526, in get_server_certificate s.connect(addr) File "/tmp/python-test/local/lib/python2.6/ssl.py", line 204, in connect self.ca_certs) SSLError: [Errno 8] _ssl.c:429: EOF occurred in violation of protocol test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 1 test failed: test_ssl 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [576377 refs] From MAILER-DAEMON at bag.python.org Wed Mar 12 15:11:17 2008 From: MAILER-DAEMON at bag.python.org (Mail Delivery System) Date: Wed, 12 Mar 2008 15:11:17 +0100 (CET) Subject: [Python-checkins] Undelivered Mail Returned to Sender Message-ID: <20080312141117.D1E4A1E4011@bag.python.org> This is the mail system at host bag.python.org. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system : host smtp0.xs4all.nl[194.109.24.26] said: 552 5.6.0 Headers too large (32768 max) (in reply to end of DATA command) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/rfc822-headers Size: 2667 bytes Desc: Undelivered Message Headers Url : http://mail.python.org/pipermail/python-checkins/attachments/20080312/d4cb5108/attachment.bin From python-checkins at python.org Wed Mar 12 22:07:53 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 12 Mar 2008 22:07:53 +0100 (CET) Subject: [Python-checkins] r61354 - doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/quickstart.py Message-ID: <20080312210753.DF3271E4014@bag.python.org> Author: georg.brandl Date: Wed Mar 12 22:07:53 2008 New Revision: 61354 Modified: doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/quickstart.py Log: Template dirs are relative to the root. 
Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Wed Mar 12 22:07:53 2008 @@ -80,7 +80,9 @@ # load templates self.templates = {} base_templates_path = path.join(path.dirname(__file__), 'templates') - loader = SphinxFileSystemLoader(base_templates_path, self.config.templates_path) + ext_templates_path = [path.join(self.srcdir, dir) + for dir in self.config.templates_path] + loader = SphinxFileSystemLoader(base_templates_path, ext_templates_path) self.jinja_env = Environment(loader=loader, # disable traceback, more likely that something # in the application is broken than in the templates Modified: doctools/trunk/sphinx/quickstart.py ============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Wed Mar 12 22:07:53 2008 @@ -278,9 +278,10 @@ mkdir_p(srcdir) if separate: - mkdir_p(path.join(d['path'], 'build')) + builddir = path.join(d['path'], 'build') else: - mkdir_p(path.join(srcdir, d['dot'] + 'build')) + builddir = path.join(srcdir, d['dot'] + 'build') + mkdir_p(builddir) mkdir_p(path.join(srcdir, d['dot'] + 'templates')) mkdir_p(path.join(srcdir, d['dot'] + 'static')) @@ -297,8 +298,10 @@ print bold('Finished: An initial directory structure has been created.') print ''' You should now populate your master file %s and create other documentation -source files. Use the sphinx-build.py script to build the docs. -''' % (masterfile) +source files. Use the sphinx-build.py script to build the docs, like so: + + sphinx-build.py -b %s %s +''' % (masterfile, srcdir, builddir) def main(argv=sys.argv): From python-checkins at python.org Wed Mar 12 22:37:53 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 12 Mar 2008 22:37:53 +0100 (CET) Subject: [Python-checkins] r61355 - in doctools/trunk/doc: builders.rst ext ext/api.rst ext/autodoc.rst ext/coverage.rst ext/doctest.rst ext/ifconfig.rst ext/refcounting.rst extensions.rst intro.rst templating.rst Message-ID: <20080312213753.191281E4015@bag.python.org> Author: georg.brandl Date: Wed Mar 12 22:37:22 2008 New Revision: 61355 Added: doctools/trunk/doc/ext/ doctools/trunk/doc/ext/api.rst doctools/trunk/doc/ext/autodoc.rst doctools/trunk/doc/ext/coverage.rst doctools/trunk/doc/ext/doctest.rst doctools/trunk/doc/ext/ifconfig.rst doctools/trunk/doc/ext/refcounting.rst Modified: doctools/trunk/doc/builders.rst doctools/trunk/doc/extensions.rst doctools/trunk/doc/intro.rst doctools/trunk/doc/templating.rst Log: Some more documentation. Modified: doctools/trunk/doc/builders.rst ============================================================================== --- doctools/trunk/doc/builders.rst (original) +++ doctools/trunk/doc/builders.rst Wed Mar 12 22:37:22 2008 @@ -7,3 +7,17 @@ :synopsis: Available built-in builder classes. +.. class:: Builder + +.. class:: StandaloneHTMLBuilder + +.. class:: WebHTMLBuilder + +.. class:: HTMLHelpBuilder + +.. class:: LaTeXBuilder + +.. class:: ChangesBuilder + +.. class:: CheckExternalLinksBuilder + Added: doctools/trunk/doc/ext/api.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/api.rst Wed Mar 12 22:37:22 2008 @@ -0,0 +1,96 @@ +Extension API +============= + +Each Sphinx extension is a Python module with at least a :func:`setup` function. 
+This function is called at initialization time with one argument, the +application object representing the Sphinx process. This application object has +the following public API: + +.. method:: Application.add_builder(builder) + + Register a new builder. *builder* must be a class that inherits from + :class:`~sphinx.builder.Builder`. + +.. method:: Application.add_config_value(name, default, rebuild_env) + + Register a configuration value. This is necessary for Sphinx to recognize + new values and set default values accordingly. The *name* should be prefixed + with the extension name, to avoid clashes. The *default* value can be any + Python object. The boolean value *rebuild_env* must be ``True`` if a change + in the setting only takes effect when a document is parsed -- this means that + the whole environment must be rebuilt. + +.. method:: Application.add_event(name) + + Register an event called *name*. + +.. method:: Application.add_node(node) + + Register a Docutils node class. This is necessary for Docutils internals. + It may also be used in the future to validate nodes in the parsed documents. + +.. method:: Application.add_directive(name, cls, content, arguments, **options) + + Register a Docutils directive. *name* must be the prospective directive + name, *func* the directive function (see the Docutils documentation - XXX + ref) for details about the signature and return value. *content*, + *arguments* and *options* are set as attributes on the function and determine + whether the directive has content, arguments and options, respectively. For + their exact meaning, please consult the Docutils documentation. + +.. method:: Application.add_role(name, role) + + Register a Docutils role. *name* must be the role name that occurs in the + source, *role* the role function (see the Docutils documentation on details). + +.. method:: Application.add_description_unit(directivename, rolename, indexdesc='', parse_node=None) + + XXX + +.. method:: Application.connect(event, callback) + + Register *callback* to be called when *event* is emitted. For details on + available core events and the arguments of callback functions, please see + :ref:`events`. + + The method returns a "listener ID" that can be used as an argument to + :meth:`disconnect`. + +.. method:: Application.disconnect(listener_id) + + Unregister callback *listener_id*. + +.. method:: Application.emit(event, *arguments) + + Emit *event* and pass *arguments* to the callback functions. Do not emit + core Sphinx events in extensions! + + +.. exception:: ExtensionError + + All these functions raise this exception if something went wrong with the + extension API. + +Examples of using the Sphinx extension API can be seen in the :mod:`sphinx.ext` +package. + + +.. 
_events: + +Sphinx core events +------------------ + +These events are known to the core: + +====================== =================================== ========= +Event name Emitted when Arguments +====================== =================================== ========= +``'builder-inited'`` the builder object has been created -none- +``'doctree-read'`` a doctree has been parsed and read *doctree* + by the environment, and is about to + be pickled +``'doctree-resolved'`` a doctree has been "resolved" by *doctree*, *docname* + the environment, that is, all + references and TOCs have been + inserted +====================== =================================== ========= Added: doctools/trunk/doc/ext/autodoc.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/autodoc.rst Wed Mar 12 22:37:22 2008 @@ -0,0 +1,5 @@ +:mod:`sphinx.ext.autodoc` -- Include documentation from docstrings +================================================================== + +.. module:: sphinx.ext.autodoc + :synopsis: Include documentation from docstrings. Added: doctools/trunk/doc/ext/coverage.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/coverage.rst Wed Mar 12 22:37:22 2008 @@ -0,0 +1,29 @@ +:mod:`sphinx.ext.coverage` -- Collect doc coverage stats +======================================================== + +.. module:: sphinx.ext.coverage + :synopsis: Check Python modules and C API for coverage in the documentation. + + +This extension features one additional builder, the :class:`CoverageBuilder`. + +.. class:: CoverageBuilder + + To use this builder, activate the coverage extension in your configuration + file and give ``-b coverage`` on the command line. + + +Several new configuration values can be used to specify what the builder +should check: + +.. confval:: coverage_ignore_modules + +.. confval:: coverage_ignore_functions + +.. confval:: coverage_ignore_classes + +.. confval:: coverage_c_path + +.. confval:: coverage_c_regexes + +.. confval:: coverage_ignore_c_items Added: doctools/trunk/doc/ext/doctest.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/doctest.rst Wed Mar 12 22:37:22 2008 @@ -0,0 +1,5 @@ +:mod:`sphinx.ext.doctest` -- Test snippets in the documentation +=============================================================== + +.. module:: sphinx.ext.doctest + :synopsis: Test snippets in the documentation. Added: doctools/trunk/doc/ext/ifconfig.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/ifconfig.rst Wed Mar 12 22:37:22 2008 @@ -0,0 +1,21 @@ +.. highlight:: rest + +:mod:`sphinx.ext.ifconfig` -- Include content based on configuration +==================================================================== + +.. module:: sphinx.ext.ifconfig + :synopsis: Include documentation content based on configuration values. + +This extension is quite simple, and features only one directive: + +.. directive:: ifconfig + + Include content of the directive only if the Python expression given as an + argument is ``True``, evaluated in the namespace of the project's + configuration (that is, all variables from :file:`conf.py` are available). + + For example, one could write :: + + .. ifconfig:: releaselevel in ('alpha', 'beta', 'rc') + + This stuff is only included in the built docs for unstable versions. 
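The extension API documented above in the new ext/api.rst is easiest to see in a small, self-contained module. A minimal sketch, assuming a hypothetical extension named hellonote whose config value and callback are invented for illustration; the callback signature simply follows the argument table for 'doctree-resolved' above::

    # hellonote.py -- enabled with  extensions = ['hellonote']  in conf.py
    def on_doctree_resolved(doctree, docname):
        # arguments as listed in the core-events table; a real extension
        # would inspect or rewrite nodes of the resolved doctree here
        pass

    def setup(app):
        # called once at initialization with the Sphinx application object
        app.add_config_value('hellonote_enabled', True, True)
        app.connect('doctree-resolved', on_doctree_resolved)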
Added: doctools/trunk/doc/ext/refcounting.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/refcounting.rst Wed Mar 12 22:37:22 2008 @@ -0,0 +1,5 @@ +:mod:`sphinx.ext.refcounting` -- Keep track of reference counting behavior +========================================================================== + +.. module:: sphinx.ext.refcounting + :synopsis: Keep track of reference counting behavior. Modified: doctools/trunk/doc/extensions.rst ============================================================================== --- doctools/trunk/doc/extensions.rst (original) +++ doctools/trunk/doc/extensions.rst Wed Mar 12 22:37:22 2008 @@ -15,96 +15,21 @@ are so-called "hook points" at strategic places throughout the build process, where an extension can register a hook and run specialized code. -Each Sphinx extension is a Python module with at least a :func:`setup` function. -This function is called at initialization time with one argument, the -application object representing the Sphinx process. This application object has -the following public API: +.. toctree:: -.. method:: Application.add_builder(builder) + ext/api.rst - Register a new builder. *builder* must be a class that inherits from - :class:`~sphinx.builder.Builder`. -.. method:: Application.add_config_value(name, default, rebuild_env) +Builtin Sphinx extensions +------------------------- - Register a configuration value. This is necessary for Sphinx to recognize - new values and set default values accordingly. The *name* should be prefixed - with the extension name, to avoid clashes. The *default* value can be any - Python object. The boolean value *rebuild_env* must be ``True`` if a change - in the setting only takes effect when a document is parsed -- this means that - the whole environment must be rebuilt. +These extensions are built in and can be activated by respective entries in the +:confval:`extensions` configuration value: -.. method:: Application.add_event(name) +.. toctree:: - Register an event called *name*. - -.. method:: Application.add_node(node) - - Register a Docutils node class. This is necessary for Docutils internals. - It may also be used in the future to validate nodes in the parsed documents. - -.. method:: Application.add_directive(name, cls, content, arguments, **options) - - Register a Docutils directive. *name* must be the prospective directive - name, *func* the directive function (see the Docutils documentation - XXX - ref) for details about the signature and return value. *content*, - *arguments* and *options* are set as attributes on the function and determine - whether the directive has content, arguments and options, respectively. For - their exact meaning, please consult the Docutils documentation. - -.. method:: Application.add_role(name, role) - - Register a Docutils role. *name* must be the role name that occurs in the - source, *role* the role function (see the Docutils documentation on details). - -.. method:: Application.add_description_unit(directivename, rolename, indexdesc='', parse_node=None) - - XXX - -.. method:: Application.connect(event, callback) - - Register *callback* to be called when *event* is emitted. For details on - available core events and the arguments of callback functions, please see - :ref:`events`. - - The method returns a "listener ID" that can be used as an argument to - :meth:`disconnect`. - -.. method:: Application.disconnect(listener_id) - - Unregister callback *listener_id*. - -.. 
method:: Application.emit(event, *arguments) - - Emit *event* and pass *arguments* to the callback functions. Do not emit - core Sphinx events in extensions! - - -.. exception:: ExtensionError - - All these functions raise this exception if something went wrong with the - extension API. - -Examples of using the Sphinx extension API can be seen in the :mod:`sphinx.ext` -package. - - -.. _events: - -Sphinx core events ------------------- - -These events are known to the core: - -====================== =================================== ========= -Event name Emitted when Arguments -====================== =================================== ========= -``'builder-inited'`` the builder object has been created -none- -``'doctree-read'`` a doctree has been parsed and read *doctree* - by the environment, and is about to - be pickled -``'doctree-resolved'`` a doctree has been "resolved" by *doctree*, *docname* - the environment, that is, all - references and TOCs have been - inserted -====================== =================================== ========= + ext/autodoc.rst + ext/doctest.rst + ext/refcounting.rst + ext/ifconfig.rst + ext/coverage.rst Modified: doctools/trunk/doc/intro.rst ============================================================================== --- doctools/trunk/doc/intro.rst (original) +++ doctools/trunk/doc/intro.rst Wed Mar 12 22:37:22 2008 @@ -1,11 +1,90 @@ Introduction ============ +This is the documentation for the Sphinx documentation builder. Sphinx is a +tool that translates a set of reStructuredText_ source files into various output +formats, automatically producing cross-references, indices etc. +.. XXX web app Prerequisites ------------- +Sphinx needs at least **Python 2.4** to run. If you like to have source code +highlighting support, you must also install the Pygments_ library, which you can +do via setuptools' easy_install. + +.. _reStructuredText: http://docutils.sf.net/rst.html +.. _Pygments: http://pygments.org + + +Setting up a documentation root +------------------------------- + +The root directory of a documentation collection is called the +:dfn:`documentation root`. There's nothing special about it; it just needs to +contain the Sphinx configuration file, :file:`conf.py`. + +Sphinx comes with a script called :program:`sphinx-quickstart.py` that sets up a +documentation root and creates a default :file:`conf.py` from a few questions +it asks you. Just run :: + + $ sphinx-quickstart.py + +and answer the questions. + +.. XXX environment + Running a build --------------- + +A build is started with the :program:`sphinx-build.py` script. It is called +like this:: + + $ sphinx-build.py -b latex sourcedir builddir + +where *sourcedir* is the :term:`documentation root`, and *builddir* is the +directory in which you want to place the built documentation (it must be an +existing directory). The :option:`-b` option selects a builder; in this example +Sphinx will build LaTeX files. + +The :program:`sphinx-build.py` script has several more options: + +**-a** + If given, always write all output files. The default is to only write output + files for new and changed source files. (This may not apply to all + builders.) + +**-E** + Don't use a saved :term:`environment` (the structure caching all + cross-references), but rebuild it completely. The default is to only read + and parse source files that are new or have changed since the last run. 
+ +**-d** *path* + Since Sphinx has to read and parse all source files before it can write an + output file, the parsed source files are cached as "doctree pickles". + Normally, these files are put in a directory called :file:`.doctrees` under + the build directory; with this option you can select a different cache + directory (the doctrees can be shared between all builders). + +**-D** *setting=value* + Override a configuration value set in the :file:`conf.py` file. (The value + must be a string value.) + +**-N** + Do not do colored output. (On Windows, colored output is disabled in any + case.) + +**-q** + Do not output anything on standard output, only write warnings to standard + error. + +**-P** + (Useful for debugging only.) Run the Python debugger, :mod:`pdb`, if an + unhandled exception occurs while building. + + +You can also give one or more filenames on the command line after the source and +build directories. Sphinx will then try to build only these output files (and +their dependencies). Modified: doctools/trunk/doc/templating.rst ============================================================================== --- doctools/trunk/doc/templating.rst (original) +++ doctools/trunk/doc/templating.rst Wed Mar 12 22:37:22 2008 @@ -2,3 +2,61 @@ Templating ========== + +Sphinx uses the `Jinja ` templating engine for its HTML +templates. Jinja is a text-based engine, and inspired by Django templates, so +anyone having used Django will already be familiar with it. It also has +excellent documentation for those who need to make themselves familiar with it. + +The most important concept in Jinja is :dfn:`template inheritance`, which means +that you can overwrite only specific blocks within a template, customizing it +while also keeping the changes at a minimum. + +Inheritance is done via two directives, ``extends`` and ``block``. + +.. template path + blocks + extends !template + +These are the blocks that are predefined in Sphinx' ``layout.html`` template: + +**doctype** + The doctype, by default HTML 4 Transitional. + +**rellinks** + HTML ```` tag, by default empty. + +**beforerelbar** + Block before the "related bar" (the navigation links at the page top), by + default empty. Use this to insert a page header. + +**relbar** + The "related bar" by default. Overwrite this block to customize the entire + navigation bar. + +**rootrellink** + The most parent relbar link, by default pointing to the "index" document with + a caption of e.g. "Project v0.1 documentation". + +**relbaritems** + Block in the ``
                `` used for relbar items, by default empty. Use this to + add more items. + +**afterrelbar** + Block between relbar and document body, by default empty. + +**body** + Block in the document body. This should be overwritten by every child + template, e.g. :file:`page.html` puts the page content there. + +**beforesidebar** + Block between body and sidebar, by default empty. + +**sidebar** + Contains the whole sidebar. + +**aftersidebar** + Block between From nnorwitz at gmail.com Wed Mar 12 23:19:27 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Mar 2008 17:19:27 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080312221927.GA11990@python.psfb.org> More important issues: ---------------------- test_threading leaked [0, 0, 7] references, sum=7 Less important issues: ---------------------- test_asynchat leaked [0, 0, 113] references, sum=113 test_cmd_line leaked [23, 0, 0] references, sum=23 test_smtplib leaked [-98, 5, 86] references, sum=-7 test_threadsignals leaked [0, -8, 0] references, sum=-8 test_urllib2_localnet leaked [3, 182, -176] references, sum=9 From buildbot at python.org Thu Mar 13 04:09:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 13 Mar 2008 03:09:13 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080313030914.1509A1E4017@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/597 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_cgi make: *** [buildbottest] Error 1 sincerely, -The Buildbot From srujannajurs at gmail.com Thu Mar 13 06:30:36 2008 From: srujannajurs at gmail.com (srujan reddy) Date: Thu, 13 Mar 2008 11:00:36 +0530 Subject: [Python-checkins] request for expon180.1e6 file Message-ID: respected sir/madam, i am need of the expon180.1e6 file for my project.....unfortunately i could'nt get the file for it was deleted from ur link.....i would be very thankful to you if you can send me the corresponding file......i also require the files of other distributions like random and triangular....... I will be very grateful to you if you can send me the corresponding files. with regards srujan (PHD scholar, IIT kharagpur, INDIA) From buildbot at python.org Thu Mar 13 06:47:50 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 13 Mar 2008 05:47:50 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080313054750.60FE91E4018@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/89 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_xmlrpc_net make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Thu Mar 13 08:15:57 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 13 Mar 2008 08:15:57 +0100 (CET) Subject: [Python-checkins] r61363 - python/trunk/Doc/library/mailbox.rst Message-ID: <20080313071557.247A41E4020@bag.python.org> Author: georg.brandl Date: Thu Mar 13 08:15:56 2008 New Revision: 61363 Modified: python/trunk/Doc/library/mailbox.rst Log: #2265: fix example. Modified: python/trunk/Doc/library/mailbox.rst ============================================================================== --- python/trunk/Doc/library/mailbox.rst (original) +++ python/trunk/Doc/library/mailbox.rst Thu Mar 13 08:15:56 2008 @@ -1630,7 +1630,7 @@ destination = mailbox.MH('~/Mail') destination.lock() for message in mailbox.Babyl('~/RMAIL'): - destination.add(MHMessage(message)) + destination.add(mailbox.MHMessage(message)) destination.flush() destination.unlock() From python-checkins at python.org Thu Mar 13 08:17:14 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 13 Mar 2008 08:17:14 +0100 (CET) Subject: [Python-checkins] r61364 - python/trunk/Doc/library/xml.dom.rst Message-ID: <20080313071714.72CF91E4014@bag.python.org> Author: georg.brandl Date: Thu Mar 13 08:17:14 2008 New Revision: 61364 Modified: python/trunk/Doc/library/xml.dom.rst Log: #2270: fix typo. Modified: python/trunk/Doc/library/xml.dom.rst ============================================================================== --- python/trunk/Doc/library/xml.dom.rst (original) +++ python/trunk/Doc/library/xml.dom.rst Thu Mar 13 08:17:14 2008 @@ -517,7 +517,7 @@ ^^^^^^^^^^^^^^^^ A :class:`Document` represents an entire XML document, including its constituent -elements, attributes, processing instructions, comments etc. Remeber that it +elements, attributes, processing instructions, comments etc. Remember that it inherits properties from :class:`Node`. From python-checkins at python.org Thu Mar 13 08:21:42 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 13 Mar 2008 08:21:42 +0100 (CET) Subject: [Python-checkins] r61365 - python/trunk/Doc/library/threading.rst Message-ID: <20080313072142.2B6EE1E4003@bag.python.org> Author: georg.brandl Date: Thu Mar 13 08:21:41 2008 New Revision: 61365 Modified: python/trunk/Doc/library/threading.rst Log: #1720705: add docs about import/threading interaction, wording by Nick. Modified: python/trunk/Doc/library/threading.rst ============================================================================== --- python/trunk/Doc/library/threading.rst (original) +++ python/trunk/Doc/library/threading.rst Thu Mar 13 08:21:41 2008 @@ -731,3 +731,26 @@ with some_rlock: print "some_rlock is locked while this executes" + +.. 
_threaded-imports: + +Importing in threaded code +-------------------------- + +While the import machinery is thread safe, there are two key +restrictions on threaded imports due to inherent limitations in the way +that thread safety is provided: + +* Firstly, other than in the main module, an import should not have the + side effect of spawning a new thread and then waiting for that thread in + any way. Failing to abide by this restriction can lead to a deadlock if + the spawned thread directly or indirectly attempts to import a module. +* Secondly, all import attempts must be completed before the interpreter + starts shutting itself down. This can be most easily achieved by only + performing imports from non-daemon threads created through the threading + module. Daemon threads and threads created directly with the thread + module will require some other form of synchronization to ensure they do + not attempt imports after system shutdown has commenced. Failure to + abide by this restriction will lead to intermittent exceptions and + crashes during interpreter shutdown (as the late imports attempt to + access machinery which is no longer in a valid state). From python-checkins at python.org Thu Mar 13 12:07:35 2008 From: python-checkins at python.org (andrew.kuchling) Date: Thu, 13 Mar 2008 12:07:35 +0100 (CET) Subject: [Python-checkins] r61366 - python/trunk/Doc/reference/compound_stmts.rst Message-ID: <20080313110735.E3D301E4003@bag.python.org> Author: andrew.kuchling Date: Thu Mar 13 12:07:35 2008 New Revision: 61366 Modified: python/trunk/Doc/reference/compound_stmts.rst Log: Add class decorators Modified: python/trunk/Doc/reference/compound_stmts.rst ============================================================================== --- python/trunk/Doc/reference/compound_stmts.rst (original) +++ python/trunk/Doc/reference/compound_stmts.rst Thu Mar 13 12:07:35 2008 @@ -50,6 +50,7 @@ : | `with_stmt` : | `funcdef` : | `classdef` + : | `decorated` suite: `stmt_list` NEWLINE | NEWLINE INDENT `statement`+ DEDENT statement: `stmt_list` NEWLINE | `compound_stmt` stmt_list: `simple_stmt` (";" `simple_stmt`)* [";"] @@ -400,9 +401,10 @@ :ref:`types`): .. productionlist:: - funcdef: [`decorators`] "def" `funcname` "(" [`parameter_list`] ")" ":" `suite` + decorated: decorators (classdef | funcdef) decorators: `decorator`+ decorator: "@" `dotted_name` ["(" [`argument_list` [","]] ")"] NEWLINE + funcdef: "def" `funcname` "(" [`parameter_list`] ")" ":" `suite` dotted_name: `identifier` ("." `identifier`)* parameter_list: (`defparameter` ",")* : ( "*" `identifier` [, "**" `identifier`] @@ -529,6 +531,11 @@ class`\es, descriptors can be used to create instance variables with different implementation details. +Class definitions, like function definitions, may be wrapped by one or +more :term:`decorator` expressions. The evaluation rules for the +decorator expressions are the same as for functions. The result must +be a class object, which is then bound to the class name. + .. rubric:: Footnotes .. 
[#] The exception is propagated to the invocation stack only if there is no From python-checkins at python.org Thu Mar 13 17:43:17 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 13 Mar 2008 17:43:17 +0100 (CET) Subject: [Python-checkins] r61367 - python/trunk/Modules/itertoolsmodule.c Message-ID: <20080313164317.8F36E1E4003@bag.python.org> Author: raymond.hettinger Date: Thu Mar 13 17:43:17 2008 New Revision: 61367 Modified: python/trunk/Modules/itertoolsmodule.c Log: Add 2-to-3 support for the itertools moved to builtins or renamed. Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Thu Mar 13 17:43:17 2008 @@ -1445,6 +1445,11 @@ imapobject *lz; Py_ssize_t numargs, i; + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, itertools.imap() was moved to builtin map()") < 0) + return NULL; + if (type == &imap_type && !_PyArg_NoKeywords("imap()", kwds)) return NULL; @@ -2536,6 +2541,11 @@ PyObject *it; ifilterobject *lz; + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, itertools.ifilter() was moved to builtin filter().") < 0) + return NULL; + if (type == &ifilter_type && !_PyArg_NoKeywords("ifilter()", kwds)) return NULL; @@ -2679,6 +2689,11 @@ PyObject *it; ifilterfalseobject *lz; + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, ifilterfalse() was renamed to filterfalse().") < 0) + return NULL; + if (type == &ifilterfalse_type && !_PyArg_NoKeywords("ifilterfalse()", kwds)) return NULL; @@ -2985,6 +3000,11 @@ PyObject *result; Py_ssize_t tuplesize = PySequence_Length(args); + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, itertools.izip() was moved to builtin zip()") < 0) + return NULL; + if (type == &izip_type && !_PyArg_NoKeywords("izip()", kwds)) return NULL; @@ -3321,6 +3341,11 @@ PyObject *fillvalue = Py_None; Py_ssize_t tuplesize = PySequence_Length(args); + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, izip_longest() is renamed to zip_longest().") < 0) + return NULL; + if (kwds != NULL && PyDict_CheckExact(kwds) && PyDict_Size(kwds) > 0) { fillvalue = PyDict_GetItemString(kwds, "fillvalue"); if (fillvalue == NULL || PyDict_Size(kwds) > 1) { From python-checkins at python.org Thu Mar 13 17:44:00 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 13 Mar 2008 17:44:00 +0100 (CET) Subject: [Python-checkins] r61368 - python/trunk/Modules/itertoolsmodule.c Message-ID: <20080313164400.28C271E4003@bag.python.org> Author: raymond.hettinger Date: Thu Mar 13 17:43:59 2008 New Revision: 61368 Modified: python/trunk/Modules/itertoolsmodule.c Log: Consistent tense. 
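Taken together, r61367 and the wording fix in r61368 mean that the moved or renamed itertools entry points warn when the interpreter runs with the -3 flag. A minimal sketch of what user code would observe, assuming a 2.6 trunk build with these commits; the script name is invented and the warning capture is shown only for illustration::

    # run as:  python -3 show_itertools_warnings.py
    import warnings
    import itertools

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        list(itertools.imap(abs, [-1, -2]))    # "... was moved to builtin map()"
        list(itertools.izip([1, 2], [3, 4]))   # "... was moved to builtin zip()"

    for w in caught:
        print w.category.__name__, w.message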
Modified: python/trunk/Modules/itertoolsmodule.c ============================================================================== --- python/trunk/Modules/itertoolsmodule.c (original) +++ python/trunk/Modules/itertoolsmodule.c Thu Mar 13 17:43:59 2008 @@ -3343,7 +3343,7 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "In 3.x, izip_longest() is renamed to zip_longest().") < 0) + "In 3.x, izip_longest() was renamed to zip_longest().") < 0) return NULL; if (kwds != NULL && PyDict_CheckExact(kwds) && PyDict_Size(kwds) > 0) { From buildbot at python.org Thu Mar 13 18:51:51 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 13 Mar 2008 17:51:51 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080313175151.6C3B11E4014@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3002 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request 
self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_xmlrpc make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Thu Mar 13 20:03:52 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 13 Mar 2008 20:03:52 +0100 (CET) Subject: [Python-checkins] r61369 - in python/trunk: Doc/library/heapq.rst Lib/heapq.py Lib/test/test_heapq.py Misc/NEWS Modules/_heapqmodule.c Message-ID: <20080313190352.4DDE51E4014@bag.python.org> Author: raymond.hettinger Date: Thu Mar 13 20:03:51 2008 New Revision: 61369 Modified: python/trunk/Doc/library/heapq.rst python/trunk/Lib/heapq.py python/trunk/Lib/test/test_heapq.py python/trunk/Misc/NEWS python/trunk/Modules/_heapqmodule.c Log: Issue 2274: Add heapq.heappushpop(). 
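Before the diffs below (docs, pure-Python version, tests and C version), a quick orientation: the new function targets the common "keep the n largest items of a stream" loop, where it replaces a heappush() followed by a separate heappop() with a single call. A minimal usage sketch, mirroring test_nbest_with_pushpop from the test diff and assuming a trunk build that includes this commit::

    import heapq
    import random

    data = [random.randrange(2000) for i in range(1000)]
    largest = data[:10]                     # seed a 10-element heap
    heapq.heapify(largest)
    for item in data[10:]:
        # push item, then pop the current minimum, in one combined step
        heapq.heappushpop(largest, item)
    print sorted(largest, reverse=True)     # the ten largest values in data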
Modified: python/trunk/Doc/library/heapq.rst ============================================================================== --- python/trunk/Doc/library/heapq.rst (original) +++ python/trunk/Doc/library/heapq.rst Thu Mar 13 20:03:51 2008 @@ -45,6 +45,13 @@ Pop and return the smallest item from the *heap*, maintaining the heap invariant. If the heap is empty, :exc:`IndexError` is raised. +.. function:: heappushpop(heap, item) + + Push *item* on the heap, then pop and return the smallest item from the + *heap*. The combined action runs more efficiently than :func:`heappush` + followed by a separate call to :func:`heappop`. + + .. versionadded:: 2.6 .. function:: heapify(x) Modified: python/trunk/Lib/heapq.py ============================================================================== --- python/trunk/Lib/heapq.py (original) +++ python/trunk/Lib/heapq.py Thu Mar 13 20:03:51 2008 @@ -127,7 +127,7 @@ """ __all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', - 'nlargest', 'nsmallest'] + 'nlargest', 'nsmallest', 'heappushpop'] from itertools import islice, repeat, count, imap, izip, tee from operator import itemgetter, neg @@ -165,6 +165,13 @@ _siftup(heap, 0) return returnitem +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and item > heap[0]: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + def heapify(x): """Transform list into a heap, in-place, in O(len(heap)) time.""" n = len(x) @@ -304,7 +311,7 @@ # If available, use C implementation try: - from _heapq import heappush, heappop, heapify, heapreplace, nlargest, nsmallest + from _heapq import heappush, heappop, heapify, heapreplace, nlargest, nsmallest, heappushpop except ImportError: pass Modified: python/trunk/Lib/test/test_heapq.py ============================================================================== --- python/trunk/Lib/test/test_heapq.py (original) +++ python/trunk/Lib/test/test_heapq.py Thu Mar 13 20:03:51 2008 @@ -107,6 +107,34 @@ self.assertRaises(TypeError, self.module.heapreplace, None, None) self.assertRaises(IndexError, self.module.heapreplace, [], None) + def test_nbest_with_pushpop(self): + data = [random.randrange(2000) for i in range(1000)] + heap = data[:10] + self.module.heapify(heap) + for item in data[10:]: + self.module.heappushpop(heap, item) + self.assertEqual(list(self.heapiter(heap)), sorted(data)[-10:]) + self.assertEqual(self.module.heappushpop([], 'x'), 'x') + + def test_heappushpop(self): + h = [] + x = self.module.heappushpop(h, 10) + self.assertEqual((h, x), ([], 10)) + + h = [10] + x = self.module.heappushpop(h, 10.0) + self.assertEqual((h, x), ([10], 10.0)) + self.assertEqual(type(h[0]), int) + self.assertEqual(type(x), float) + + h = [10]; + x = self.module.heappushpop(h, 9) + self.assertEqual((h, x), ([10], 9)) + + h = [10]; + x = self.module.heappushpop(h, 11) + self.assertEqual((h, x), ([11], 10)) + def test_heapsort(self): # Exercise everything with repeated heapsort checks for trial in xrange(100): Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 13 20:03:51 2008 @@ -491,6 +491,8 @@ Library ------- +- #2274 Add heapq.heappushpop(). + - Add inspect.isabstract(object) to fix bug #2223 - Add a __format__ method to Decimal, to support PEP 3101. 
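One subtlety pinned down by test_heappushpop above and preserved by the C implementation below: when the pushed item compares equal to the current minimum, the pushed item itself is returned and the heap is left untouched. A small sketch of that behaviour, again assuming a build with this commit::

    import heapq

    h = [10]
    x = heapq.heappushpop(h, 10.0)
    print x, h                    # 10.0 [10] -- the float comes back, the int stays
    print type(x), type(h[0])     # <type 'float'> <type 'int'>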
Modified: python/trunk/Modules/_heapqmodule.c ============================================================================== --- python/trunk/Modules/_heapqmodule.c (original) +++ python/trunk/Modules/_heapqmodule.c Thu Mar 13 20:03:51 2008 @@ -162,6 +162,11 @@ { PyObject *heap, *item, *returnitem; + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, heapreplace() was removed. Use heappushpop() instead.") < 0) + return NULL; + if (!PyArg_UnpackTuple(args, "heapreplace", 2, 2, &heap, &item)) return NULL; @@ -196,6 +201,48 @@ item = heapreplace(heap, item)\n"); static PyObject * +heappushpop(PyObject *self, PyObject *args) +{ + PyObject *heap, *item, *returnitem; + int cmp; + + if (!PyArg_UnpackTuple(args, "heappushpop", 2, 2, &heap, &item)) + return NULL; + + if (!PyList_Check(heap)) { + PyErr_SetString(PyExc_TypeError, "heap argument must be a list"); + return NULL; + } + + if (PyList_GET_SIZE(heap) < 1) { + Py_INCREF(item); + return item; + } + + cmp = PyObject_RichCompareBool(item, PyList_GET_ITEM(heap, 0), Py_LE); + if (cmp == -1) + return NULL; + if (cmp == 1) { + Py_INCREF(item); + return item; + } + + returnitem = PyList_GET_ITEM(heap, 0); + Py_INCREF(item); + PyList_SET_ITEM(heap, 0, item); + if (_siftup((PyListObject *)heap, 0) == -1) { + Py_DECREF(returnitem); + return NULL; + } + return returnitem; +} + +PyDoc_STRVAR(heappushpop_doc, +"Push item on the heap, then pop and return the smallest item\n\ +from the heap. The combined action runs more efficiently than\n\ +heappush() followed by a separate call to heappop()."); + +static PyObject * heapify(PyObject *self, PyObject *heap) { Py_ssize_t i, n; @@ -468,6 +515,8 @@ static PyMethodDef heapq_methods[] = { {"heappush", (PyCFunction)heappush, METH_VARARGS, heappush_doc}, + {"heappushpop", (PyCFunction)heappushpop, + METH_VARARGS, heappushpop_doc}, {"heappop", (PyCFunction)heappop, METH_O, heappop_doc}, {"heapreplace", (PyCFunction)heapreplace, From buildbot at python.org Thu Mar 13 20:30:48 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 13 Mar 2008 19:30:48 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080313193048.E39E61E401B@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1037 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') sincerely, -The Buildbot From python-checkins at python.org Thu Mar 13 20:33:35 2008 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 13 Mar 2008 20:33:35 +0100 (CET) Subject: [Python-checkins] r61370 - python/trunk/Lib/heapq.py Message-ID: <20080313193335.165B31E4014@bag.python.org> Author: raymond.hettinger Date: Thu Mar 13 20:33:34 2008 New Revision: 61370 Modified: python/trunk/Lib/heapq.py Log: Simplify the nlargest() code using heappushpop(). Modified: python/trunk/Lib/heapq.py ============================================================================== --- python/trunk/Lib/heapq.py (original) +++ python/trunk/Lib/heapq.py Thu Mar 13 20:33:34 2008 @@ -193,13 +193,9 @@ if not result: return result heapify(result) - _heapreplace = heapreplace - sol = result[0] # sol --> smallest of the nlargest + _heappushpop = heappushpop for elem in it: - if elem <= sol: - continue - _heapreplace(result, elem) - sol = result[0] + heappushpop(result, elem) result.sort(reverse=True) return result From python-checkins at python.org Thu Mar 13 21:27:01 2008 From: python-checkins at python.org (brett.cannon) Date: Thu, 13 Mar 2008 21:27:01 +0100 (CET) Subject: [Python-checkins] r61371 - in python/trunk: Lib/test/output/test_thread Lib/test/test_thread.py Misc/NEWS Message-ID: <20080313202701.CF6D81E402C@bag.python.org> Author: brett.cannon Date: Thu Mar 13 21:27:00 2008 New Revision: 61371 Removed: python/trunk/Lib/test/output/test_thread Modified: python/trunk/Lib/test/test_thread.py python/trunk/Misc/NEWS Log: Move test_thread over to unittest. Commits GHOP 237. Thanks Benjamin Peterson for the patch. 
Deleted: /python/trunk/Lib/test/output/test_thread ============================================================================== --- /python/trunk/Lib/test/output/test_thread Thu Mar 13 21:27:00 2008 +++ (empty file) @@ -1,18 +0,0 @@ -test_thread -waiting for all tasks to complete -all tasks done - -*** Barrier Test *** -all tasks done - -*** Changing thread stack size *** -caught expected ValueError setting stack_size(4096) -successfully set stack_size(262144) -successfully set stack_size(1048576) -successfully set stack_size(0) -trying stack_size = 262144 -waiting for all tasks to complete -all tasks done -trying stack_size = 1048576 -waiting for all tasks to complete -all tasks done Modified: python/trunk/Lib/test/test_thread.py ============================================================================== --- python/trunk/Lib/test/test_thread.py (original) +++ python/trunk/Lib/test/test_thread.py Thu Mar 13 21:27:00 2008 @@ -1,160 +1,164 @@ -# Very rudimentary test of thread module - -# Create a bunch of threads, let each do some work, wait until all are done - -from test.test_support import verbose +import os +import unittest import random +from test import test_support import thread import time -mutex = thread.allocate_lock() -rmutex = thread.allocate_lock() # for calls to random -running = 0 -done = thread.allocate_lock() -done.acquire() - -numtasks = 10 - -def task(ident): - global running - rmutex.acquire() - delay = random.random() * numtasks - rmutex.release() - if verbose: - print 'task', ident, 'will run for', round(delay, 1), 'sec' - time.sleep(delay) - if verbose: - print 'task', ident, 'done' - mutex.acquire() - running = running - 1 - if running == 0: - done.release() - mutex.release() - -next_ident = 0 -def newtask(): - global next_ident, running - mutex.acquire() - next_ident = next_ident + 1 - if verbose: - print 'creating task', next_ident - thread.start_new_thread(task, (next_ident,)) - running = running + 1 - mutex.release() - -for i in range(numtasks): - newtask() - -print 'waiting for all tasks to complete' -done.acquire() -print 'all tasks done' - -class barrier: - def __init__(self, n): - self.n = n + +NUMTASKS = 10 +NUMTRIPS = 3 + + +def verbose_print(arg): + """Helper function for printing out debugging output.""" + if test_support.verbose: + print arg + + +class BasicThreadTest(unittest.TestCase): + + def setUp(self): + self.done_mutex = thread.allocate_lock() + self.done_mutex.acquire() + self.running_mutex = thread.allocate_lock() + self.random_mutex = thread.allocate_lock() + self.running = 0 + self.next_ident = 0 + + +class ThreadRunningTests(BasicThreadTest): + + def newtask(self): + with self.running_mutex: + self.next_ident += 1 + verbose_print("creating task %s" % self.next_ident) + thread.start_new_thread(self.task, (self.next_ident,)) + self.running += 1 + + def task(self, ident): + with self.random_mutex: + delay = random.random() * NUMTASKS + verbose_print("task %s will run for %s" % (ident, round(delay, 1))) + time.sleep(delay) + verbose_print("task %s done" % ident) + with self.running_mutex: + self.running -= 1 + if self.running == 0: + self.done_mutex.release() + + def test_starting_threads(self): + # Basic test for thread creation. + for i in range(NUMTASKS): + self.newtask() + verbose_print("waiting for tasks to complete...") + self.done_mutex.acquire() + verbose_print("all tasks done") + + def test_stack_size(self): + # Various stack size tests. 
+ self.assertEquals(thread.stack_size(), 0, "intial stack size is not 0") + + thread.stack_size(0) + self.assertEquals(thread.stack_size(), 0, "stack_size not reset to default") + + if os.name not in ("nt", "os2", "posix"): + return + + tss_supported = True + try: + thread.stack_size(4096) + except ValueError: + verbose_print("caught expected ValueError setting " + "stack_size(4096)") + except thread.error: + tss_supported = False + verbose_print("platform does not support changing thread stack " + "size") + + if tss_supported: + fail_msg = "stack_size(%d) failed - should succeed" + for tss in (262144, 0x100000, 0): + thread.stack_size(tss) + self.assertEquals(thread.stack_size(), tss, fail_msg % tss) + verbose_print("successfully set stack_size(%d)" % tss) + + for tss in (262144, 0x100000): + verbose_print("trying stack_size = (%d)" % tss) + self.next_ident = 0 + for i in range(NUMTASKS): + self.newtask() + + verbose_print("waiting for all tasks to complete") + self.done_mutex.acquire() + verbose_print("all tasks done") + + thread.stack_size(0) + + +class Barrier: + def __init__(self, num_threads): + self.num_threads = num_threads self.waiting = 0 - self.checkin = thread.allocate_lock() - self.checkout = thread.allocate_lock() - self.checkout.acquire() + self.checkin_mutex = thread.allocate_lock() + self.checkout_mutex = thread.allocate_lock() + self.checkout_mutex.acquire() def enter(self): - checkin, checkout = self.checkin, self.checkout - - checkin.acquire() + self.checkin_mutex.acquire() self.waiting = self.waiting + 1 - if self.waiting == self.n: - self.waiting = self.n - 1 - checkout.release() + if self.waiting == self.num_threads: + self.waiting = self.num_threads - 1 + self.checkout_mutex.release() return - checkin.release() + self.checkin_mutex.release() - checkout.acquire() + self.checkout_mutex.acquire() self.waiting = self.waiting - 1 if self.waiting == 0: - checkin.release() + self.checkin_mutex.release() return - checkout.release() + self.checkout_mutex.release() -numtrips = 3 -def task2(ident): - global running - for i in range(numtrips): - if ident == 0: - # give it a good chance to enter the next - # barrier before the others are all out - # of the current one - delay = 0.001 - else: - rmutex.acquire() - delay = random.random() * numtasks - rmutex.release() - if verbose: - print 'task', ident, 'will run for', round(delay, 1), 'sec' - time.sleep(delay) - if verbose: - print 'task', ident, 'entering barrier', i - bar.enter() - if verbose: - print 'task', ident, 'leaving barrier', i - mutex.acquire() - running -= 1 - # Must release mutex before releasing done, else the main thread can - # exit and set mutex to None as part of global teardown; then - # mutex.release() raises AttributeError. 
- finished = running == 0 - mutex.release() - if finished: - done.release() - -print '\n*** Barrier Test ***' -if done.acquire(0): - raise ValueError, "'done' should have remained acquired" -bar = barrier(numtasks) -running = numtasks -for i in range(numtasks): - thread.start_new_thread(task2, (i,)) -done.acquire() -print 'all tasks done' - -# not all platforms support changing thread stack size -print '\n*** Changing thread stack size ***' -if thread.stack_size() != 0: - raise ValueError, "initial stack_size not 0" - -thread.stack_size(0) -if thread.stack_size() != 0: - raise ValueError, "stack_size not reset to default" - -from os import name as os_name -if os_name in ("nt", "os2", "posix"): - - tss_supported = 1 - try: - thread.stack_size(4096) - except ValueError: - print 'caught expected ValueError setting stack_size(4096)' - except thread.error: - tss_supported = 0 - print 'platform does not support changing thread stack size' - - if tss_supported: - failed = lambda s, e: s != e - fail_msg = "stack_size(%d) failed - should succeed" - for tss in (262144, 0x100000, 0): - thread.stack_size(tss) - if failed(thread.stack_size(), tss): - raise ValueError, fail_msg % tss - print 'successfully set stack_size(%d)' % tss - - for tss in (262144, 0x100000): - print 'trying stack_size = %d' % tss - next_ident = 0 - for i in range(numtasks): - newtask() - - print 'waiting for all tasks to complete' - done.acquire() - print 'all tasks done' - # reset stack size to default - thread.stack_size(0) +class BarrierTest(BasicThreadTest): + + def test_barrier(self): + self.bar = Barrier(NUMTASKS) + self.running = NUMTASKS + for i in range(NUMTASKS): + thread.start_new_thread(self.task2, (i,)) + verbose_print("waiting for tasks to end") + self.done_mutex.acquire() + verbose_print("tasks done") + + def task2(self, ident): + for i in range(NUMTRIPS): + if ident == 0: + # give it a good chance to enter the next + # barrier before the others are all out + # of the current one + delay = 0.001 + else: + with self.random_mutex: + delay = random.random() * NUMTASKS + verbose_print("task %s will run for %s" % (ident, round(delay, 1))) + time.sleep(delay) + verbose_print("task %s entering %s" % (ident, i)) + self.bar.enter() + verbose_print("task %s leaving barrier" % ident) + with self.running_mutex: + self.running -= 1 + # Must release mutex before releasing done, else the main thread can + # exit and set mutex to None as part of global teardown; then + # mutex.release() raises AttributeError. + finished = self.running == 0 + if finished: + self.done_mutex.release() + + +def test_main(): + test_support.run_unittest(ThreadRunningTests, BarrierTest) + +if __name__ == "__main__": + test_main() Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 13 21:27:00 2008 @@ -50,6 +50,8 @@ Tests ----- +- GHOP 237: Rewrite test_thread using unittest. + - Patch #2232: os.tmpfile might fail on Windows if the user has no permission to create files in the root directory. 
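A side note on the rewritten test above: it relies on the fact that the lock objects returned by thread.allocate_lock() work as context managers, so each explicit acquire()/release() pair becomes a with block, and it keeps the usual __main__ guard so it can still be run directly (e.g. ./python Lib/test/test_thread.py). A minimal sketch of the locking idiom, with the counter and helper name invented for illustration::

    import thread

    counter_lock = thread.allocate_lock()
    counter = 0

    def bump():
        global counter
        with counter_lock:        # acquire() on entry, release() on exit
            counter += 1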
From python-checkins at python.org Thu Mar 13 21:33:11 2008 From: python-checkins at python.org (brett.cannon) Date: Thu, 13 Mar 2008 21:33:11 +0100 (CET) Subject: [Python-checkins] r61372 - in python/trunk: Lib/test/output/test_tokenize Lib/test/test_tokenize.py Misc/ACKS Misc/NEWS Message-ID: <20080313203311.3C8811E4014@bag.python.org> Author: brett.cannon Date: Thu Mar 13 21:33:10 2008 New Revision: 61372 Removed: python/trunk/Lib/test/output/test_tokenize Modified: python/trunk/Lib/test/test_tokenize.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS Log: Move test_tokenize to doctest. Done as GHOP 238 by Josip Dzolonga. Deleted: /python/trunk/Lib/test/output/test_tokenize ============================================================================== --- /python/trunk/Lib/test/output/test_tokenize Thu Mar 13 21:33:10 2008 +++ (empty file) @@ -1,684 +0,0 @@ -test_tokenize -1,0-1,34: COMMENT "# Tests for the 'tokenize' module." -1,34-1,35: NL '\n' -2,0-2,42: COMMENT '# Large bits stolen from test_grammar.py. ' -2,42-2,43: NL '\n' -3,0-3,1: NL '\n' -4,0-4,10: COMMENT '# Comments' -4,10-4,11: NL '\n' -5,0-5,3: STRING '"#"' -5,3-5,4: NEWLINE '\n' -6,0-6,2: COMMENT "#'" -6,2-6,3: NL '\n' -7,0-7,2: COMMENT '#"' -7,2-7,3: NL '\n' -8,0-8,2: COMMENT '#\\' -8,2-8,3: NL '\n' -9,7-9,8: COMMENT '#' -9,8-9,9: NL '\n' -10,4-10,9: COMMENT '# abc' -10,9-10,10: NL '\n' -11,0-12,4: STRING "'''#\n#'''" -12,4-12,5: NEWLINE '\n' -13,0-13,1: NL '\n' -14,0-14,1: NAME 'x' -14,2-14,3: OP '=' -14,4-14,5: NUMBER '1' -14,7-14,8: COMMENT '#' -14,8-14,9: NEWLINE '\n' -15,0-15,1: NL '\n' -16,0-16,24: COMMENT '# Balancing continuation' -16,24-16,25: NL '\n' -17,0-17,1: NL '\n' -18,0-18,1: NAME 'a' -18,2-18,3: OP '=' -18,4-18,5: OP '(' -18,5-18,6: NUMBER '3' -18,6-18,7: OP ',' -18,8-18,9: NUMBER '4' -18,9-18,10: OP ',' -18,10-18,11: NL '\n' -19,2-19,3: NUMBER '5' -19,3-19,4: OP ',' -19,5-19,6: NUMBER '6' -19,6-19,7: OP ')' -19,7-19,8: NEWLINE '\n' -20,0-20,1: NAME 'y' -20,2-20,3: OP '=' -20,4-20,5: OP '[' -20,5-20,6: NUMBER '3' -20,6-20,7: OP ',' -20,8-20,9: NUMBER '4' -20,9-20,10: OP ',' -20,10-20,11: NL '\n' -21,2-21,3: NUMBER '5' -21,3-21,4: OP ']' -21,4-21,5: NEWLINE '\n' -22,0-22,1: NAME 'z' -22,2-22,3: OP '=' -22,4-22,5: OP '{' -22,5-22,8: STRING "'a'" -22,8-22,9: OP ':' -22,9-22,10: NUMBER '5' -22,10-22,11: OP ',' -22,11-22,12: NL '\n' -23,2-23,5: STRING "'b'" -23,5-23,6: OP ':' -23,6-23,7: NUMBER '6' -23,7-23,8: OP '}' -23,8-23,9: NEWLINE '\n' -24,0-24,1: NAME 'x' -24,2-24,3: OP '=' -24,4-24,5: OP '(' -24,5-24,8: NAME 'len' -24,8-24,9: OP '(' -24,9-24,10: OP '`' -24,10-24,11: NAME 'y' -24,11-24,12: OP '`' -24,12-24,13: OP ')' -24,14-24,15: OP '+' -24,16-24,17: NUMBER '5' -24,17-24,18: OP '*' -24,18-24,19: NAME 'x' -24,20-24,21: OP '-' -24,22-24,23: NAME 'a' -24,23-24,24: OP '[' -24,24-24,25: NL '\n' -25,3-25,4: NUMBER '3' -25,5-25,6: OP ']' -25,6-25,7: NL '\n' -26,3-26,4: OP '-' -26,5-26,6: NAME 'x' -26,7-26,8: OP '+' -26,9-26,12: NAME 'len' -26,12-26,13: OP '(' -26,13-26,14: OP '{' -26,14-26,15: NL '\n' -27,3-27,4: OP '}' -27,4-27,5: NL '\n' -28,4-28,5: OP ')' -28,5-28,6: NL '\n' -29,2-29,3: OP ')' -29,3-29,4: NEWLINE '\n' -30,0-30,1: NL '\n' -31,0-31,36: COMMENT '# Backslash means line continuation:' -31,36-31,37: NL '\n' -32,0-32,1: NAME 'x' -32,2-32,3: OP '=' -32,4-32,5: NUMBER '1' -33,0-33,1: OP '+' -33,2-33,3: NUMBER '1' -33,3-33,4: NEWLINE '\n' -34,0-34,1: NL '\n' -35,0-35,54: COMMENT '# Backslash does not means continuation in comments :\\' -35,54-35,55: NL '\n' -36,0-36,1: NAME 'x' -36,2-36,3: OP '=' 
-36,4-36,5: NUMBER '0' -36,5-36,6: NEWLINE '\n' -37,0-37,1: NL '\n' -38,0-38,19: COMMENT '# Ordinary integers' -38,19-38,20: NL '\n' -39,0-39,4: NUMBER '0xff' -39,5-39,7: OP '<>' -39,8-39,11: NUMBER '255' -39,11-39,12: NEWLINE '\n' -40,0-40,4: NUMBER '0377' -40,5-40,7: OP '<>' -40,8-40,11: NUMBER '255' -40,11-40,12: NEWLINE '\n' -41,0-41,10: NUMBER '2147483647' -41,13-41,15: OP '!=' -41,16-41,28: NUMBER '017777777777' -41,28-41,29: NEWLINE '\n' -42,0-42,1: OP '-' -42,1-42,11: NUMBER '2147483647' -42,11-42,12: OP '-' -42,12-42,13: NUMBER '1' -42,14-42,16: OP '!=' -42,17-42,29: NUMBER '020000000000' -42,29-42,30: NEWLINE '\n' -43,0-43,12: NUMBER '037777777777' -43,13-43,15: OP '!=' -43,16-43,17: OP '-' -43,17-43,18: NUMBER '1' -43,18-43,19: NEWLINE '\n' -44,0-44,10: NUMBER '0xffffffff' -44,11-44,13: OP '!=' -44,14-44,15: OP '-' -44,15-44,16: NUMBER '1' -44,16-44,17: NEWLINE '\n' -45,0-45,1: NL '\n' -46,0-46,15: COMMENT '# Long integers' -46,15-46,16: NL '\n' -47,0-47,1: NAME 'x' -47,2-47,3: OP '=' -47,4-47,6: NUMBER '0L' -47,6-47,7: NEWLINE '\n' -48,0-48,1: NAME 'x' -48,2-48,3: OP '=' -48,4-48,6: NUMBER '0l' -48,6-48,7: NEWLINE '\n' -49,0-49,1: NAME 'x' -49,2-49,3: OP '=' -49,4-49,23: NUMBER '0xffffffffffffffffL' -49,23-49,24: NEWLINE '\n' -50,0-50,1: NAME 'x' -50,2-50,3: OP '=' -50,4-50,23: NUMBER '0xffffffffffffffffl' -50,23-50,24: NEWLINE '\n' -51,0-51,1: NAME 'x' -51,2-51,3: OP '=' -51,4-51,23: NUMBER '077777777777777777L' -51,23-51,24: NEWLINE '\n' -52,0-52,1: NAME 'x' -52,2-52,3: OP '=' -52,4-52,23: NUMBER '077777777777777777l' -52,23-52,24: NEWLINE '\n' -53,0-53,1: NAME 'x' -53,2-53,3: OP '=' -53,4-53,35: NUMBER '123456789012345678901234567890L' -53,35-53,36: NEWLINE '\n' -54,0-54,1: NAME 'x' -54,2-54,3: OP '=' -54,4-54,35: NUMBER '123456789012345678901234567890l' -54,35-54,36: NEWLINE '\n' -55,0-55,1: NL '\n' -56,0-56,24: COMMENT '# Floating-point numbers' -56,24-56,25: NL '\n' -57,0-57,1: NAME 'x' -57,2-57,3: OP '=' -57,4-57,8: NUMBER '3.14' -57,8-57,9: NEWLINE '\n' -58,0-58,1: NAME 'x' -58,2-58,3: OP '=' -58,4-58,8: NUMBER '314.' 
-58,8-58,9: NEWLINE '\n' -59,0-59,1: NAME 'x' -59,2-59,3: OP '=' -59,4-59,9: NUMBER '0.314' -59,9-59,10: NEWLINE '\n' -60,0-60,17: COMMENT '# XXX x = 000.314' -60,17-60,18: NL '\n' -61,0-61,1: NAME 'x' -61,2-61,3: OP '=' -61,4-61,8: NUMBER '.314' -61,8-61,9: NEWLINE '\n' -62,0-62,1: NAME 'x' -62,2-62,3: OP '=' -62,4-62,8: NUMBER '3e14' -62,8-62,9: NEWLINE '\n' -63,0-63,1: NAME 'x' -63,2-63,3: OP '=' -63,4-63,8: NUMBER '3E14' -63,8-63,9: NEWLINE '\n' -64,0-64,1: NAME 'x' -64,2-64,3: OP '=' -64,4-64,9: NUMBER '3e-14' -64,9-64,10: NEWLINE '\n' -65,0-65,1: NAME 'x' -65,2-65,3: OP '=' -65,4-65,9: NUMBER '3e+14' -65,9-65,10: NEWLINE '\n' -66,0-66,1: NAME 'x' -66,2-66,3: OP '=' -66,4-66,9: NUMBER '3.e14' -66,9-66,10: NEWLINE '\n' -67,0-67,1: NAME 'x' -67,2-67,3: OP '=' -67,4-67,9: NUMBER '.3e14' -67,9-67,10: NEWLINE '\n' -68,0-68,1: NAME 'x' -68,2-68,3: OP '=' -68,4-68,9: NUMBER '3.1e4' -68,9-68,10: NEWLINE '\n' -69,0-69,1: NL '\n' -70,0-70,17: COMMENT '# String literals' -70,17-70,18: NL '\n' -71,0-71,1: NAME 'x' -71,2-71,3: OP '=' -71,4-71,6: STRING "''" -71,6-71,7: OP ';' -71,8-71,9: NAME 'y' -71,10-71,11: OP '=' -71,12-71,14: STRING '""' -71,14-71,15: OP ';' -71,15-71,16: NEWLINE '\n' -72,0-72,1: NAME 'x' -72,2-72,3: OP '=' -72,4-72,8: STRING "'\\''" -72,8-72,9: OP ';' -72,10-72,11: NAME 'y' -72,12-72,13: OP '=' -72,14-72,17: STRING '"\'"' -72,17-72,18: OP ';' -72,18-72,19: NEWLINE '\n' -73,0-73,1: NAME 'x' -73,2-73,3: OP '=' -73,4-73,7: STRING '\'"\'' -73,7-73,8: OP ';' -73,9-73,10: NAME 'y' -73,11-73,12: OP '=' -73,13-73,17: STRING '"\\""' -73,17-73,18: OP ';' -73,18-73,19: NEWLINE '\n' -74,0-74,1: NAME 'x' -74,2-74,3: OP '=' -74,4-74,32: STRING '"doesn\'t \\"shrink\\" does it"' -74,32-74,33: NEWLINE '\n' -75,0-75,1: NAME 'y' -75,2-75,3: OP '=' -75,4-75,31: STRING '\'doesn\\\'t "shrink" does it\'' -75,31-75,32: NEWLINE '\n' -76,0-76,1: NAME 'x' -76,2-76,3: OP '=' -76,4-76,32: STRING '"does \\"shrink\\" doesn\'t it"' -76,32-76,33: NEWLINE '\n' -77,0-77,1: NAME 'y' -77,2-77,3: OP '=' -77,4-77,31: STRING '\'does "shrink" doesn\\\'t it\'' -77,31-77,32: NEWLINE '\n' -78,0-78,1: NAME 'x' -78,2-78,3: OP '=' -78,4-83,3: STRING '"""\nThe "quick"\nbrown fox\njumps over\nthe \'lazy\' dog.\n"""' -83,3-83,4: NEWLINE '\n' -84,0-84,1: NAME 'y' -84,2-84,3: OP '=' -84,4-84,63: STRING '\'\\nThe "quick"\\nbrown fox\\njumps over\\nthe \\\'lazy\\\' dog.\\n\'' -84,63-84,64: NEWLINE '\n' -85,0-85,1: NAME 'y' -85,2-85,3: OP '=' -85,4-90,3: STRING '\'\'\'\nThe "quick"\nbrown fox\njumps over\nthe \'lazy\' dog.\n\'\'\'' -90,3-90,4: OP ';' -90,4-90,5: NEWLINE '\n' -91,0-91,1: NAME 'y' -91,2-91,3: OP '=' -91,4-96,1: STRING '"\\n\\\nThe \\"quick\\"\\n\\\nbrown fox\\n\\\njumps over\\n\\\nthe \'lazy\' dog.\\n\\\n"' -96,1-96,2: OP ';' -96,2-96,3: NEWLINE '\n' -97,0-97,1: NAME 'y' -97,2-97,3: OP '=' -97,4-102,1: STRING '\'\\n\\\nThe \\"quick\\"\\n\\\nbrown fox\\n\\\njumps over\\n\\\nthe \\\'lazy\\\' dog.\\n\\\n\'' -102,1-102,2: OP ';' -102,2-102,3: NEWLINE '\n' -103,0-103,1: NAME 'x' -103,2-103,3: OP '=' -103,4-103,9: STRING "r'\\\\'" -103,10-103,11: OP '+' -103,12-103,17: STRING "R'\\\\'" -103,17-103,18: NEWLINE '\n' -104,0-104,1: NAME 'x' -104,2-104,3: OP '=' -104,4-104,9: STRING "r'\\''" -104,10-104,11: OP '+' -104,12-104,14: STRING "''" -104,14-104,15: NEWLINE '\n' -105,0-105,1: NAME 'y' -105,2-105,3: OP '=' -105,4-107,6: STRING "r'''\nfoo bar \\\\\nbaz'''" -107,7-107,8: OP '+' -107,9-108,6: STRING "R'''\nfoo'''" -108,6-108,7: NEWLINE '\n' -109,0-109,1: NAME 'y' -109,2-109,3: OP '=' -109,4-111,3: STRING 'r"""foo\nbar 
\\\\ baz\n"""' -111,4-111,5: OP '+' -111,6-112,3: STRING "R'''spam\n'''" -112,3-112,4: NEWLINE '\n' -113,0-113,1: NAME 'x' -113,2-113,3: OP '=' -113,4-113,10: STRING "u'abc'" -113,11-113,12: OP '+' -113,13-113,19: STRING "U'ABC'" -113,19-113,20: NEWLINE '\n' -114,0-114,1: NAME 'y' -114,2-114,3: OP '=' -114,4-114,10: STRING 'u"abc"' -114,11-114,12: OP '+' -114,13-114,19: STRING 'U"ABC"' -114,19-114,20: NEWLINE '\n' -115,0-115,1: NAME 'x' -115,2-115,3: OP '=' -115,4-115,11: STRING "ur'abc'" -115,12-115,13: OP '+' -115,14-115,21: STRING "Ur'ABC'" -115,22-115,23: OP '+' -115,24-115,31: STRING "uR'ABC'" -115,32-115,33: OP '+' -115,34-115,41: STRING "UR'ABC'" -115,41-115,42: NEWLINE '\n' -116,0-116,1: NAME 'y' -116,2-116,3: OP '=' -116,4-116,11: STRING 'ur"abc"' -116,12-116,13: OP '+' -116,14-116,21: STRING 'Ur"ABC"' -116,22-116,23: OP '+' -116,24-116,31: STRING 'uR"ABC"' -116,32-116,33: OP '+' -116,34-116,41: STRING 'UR"ABC"' -116,41-116,42: NEWLINE '\n' -117,0-117,1: NAME 'x' -117,2-117,3: OP '=' -117,4-117,10: STRING "ur'\\\\'" -117,11-117,12: OP '+' -117,13-117,19: STRING "UR'\\\\'" -117,19-117,20: NEWLINE '\n' -118,0-118,1: NAME 'x' -118,2-118,3: OP '=' -118,4-118,10: STRING "ur'\\''" -118,11-118,12: OP '+' -118,13-118,15: STRING "''" -118,15-118,16: NEWLINE '\n' -119,0-119,1: NAME 'y' -119,2-119,3: OP '=' -119,4-121,6: STRING "ur'''\nfoo bar \\\\\nbaz'''" -121,7-121,8: OP '+' -121,9-122,6: STRING "UR'''\nfoo'''" -122,6-122,7: NEWLINE '\n' -123,0-123,1: NAME 'y' -123,2-123,3: OP '=' -123,4-125,3: STRING 'Ur"""foo\nbar \\\\ baz\n"""' -125,4-125,5: OP '+' -125,6-126,3: STRING "uR'''spam\n'''" -126,3-126,4: NEWLINE '\n' -127,0-127,1: NL '\n' -128,0-128,13: COMMENT '# Indentation' -128,13-128,14: NL '\n' -129,0-129,2: NAME 'if' -129,3-129,4: NUMBER '1' -129,4-129,5: OP ':' -129,5-129,6: NEWLINE '\n' -130,0-130,4: INDENT ' ' -130,4-130,5: NAME 'x' -130,6-130,7: OP '=' -130,8-130,9: NUMBER '2' -130,9-130,10: NEWLINE '\n' -131,0-131,0: DEDENT '' -131,0-131,2: NAME 'if' -131,3-131,4: NUMBER '1' -131,4-131,5: OP ':' -131,5-131,6: NEWLINE '\n' -132,0-132,8: INDENT ' ' -132,8-132,9: NAME 'x' -132,10-132,11: OP '=' -132,12-132,13: NUMBER '2' -132,13-132,14: NEWLINE '\n' -133,0-133,0: DEDENT '' -133,0-133,2: NAME 'if' -133,3-133,4: NUMBER '1' -133,4-133,5: OP ':' -133,5-133,6: NEWLINE '\n' -134,0-134,4: INDENT ' ' -134,4-134,9: NAME 'while' -134,10-134,11: NUMBER '0' -134,11-134,12: OP ':' -134,12-134,13: NEWLINE '\n' -135,0-135,5: INDENT ' ' -135,5-135,7: NAME 'if' -135,8-135,9: NUMBER '0' -135,9-135,10: OP ':' -135,10-135,11: NEWLINE '\n' -136,0-136,11: INDENT ' ' -136,11-136,12: NAME 'x' -136,13-136,14: OP '=' -136,15-136,16: NUMBER '2' -136,16-136,17: NEWLINE '\n' -137,5-137,5: DEDENT '' -137,5-137,6: NAME 'x' -137,7-137,8: OP '=' -137,9-137,10: NUMBER '2' -137,10-137,11: NEWLINE '\n' -138,0-138,0: DEDENT '' -138,0-138,0: DEDENT '' -138,0-138,2: NAME 'if' -138,3-138,4: NUMBER '0' -138,4-138,5: OP ':' -138,5-138,6: NEWLINE '\n' -139,0-139,2: INDENT ' ' -139,2-139,4: NAME 'if' -139,5-139,6: NUMBER '2' -139,6-139,7: OP ':' -139,7-139,8: NEWLINE '\n' -140,0-140,3: INDENT ' ' -140,3-140,8: NAME 'while' -140,9-140,10: NUMBER '0' -140,10-140,11: OP ':' -140,11-140,12: NEWLINE '\n' -141,0-141,8: INDENT ' ' -141,8-141,10: NAME 'if' -141,11-141,12: NUMBER '1' -141,12-141,13: OP ':' -141,13-141,14: NEWLINE '\n' -142,0-142,10: INDENT ' ' -142,10-142,11: NAME 'x' -142,12-142,13: OP '=' -142,14-142,15: NUMBER '2' -142,15-142,16: NEWLINE '\n' -143,0-143,1: NL '\n' -144,0-144,11: COMMENT '# Operators' 
-144,11-144,12: NL '\n' -145,0-145,1: NL '\n' -146,0-146,0: DEDENT '' -146,0-146,0: DEDENT '' -146,0-146,0: DEDENT '' -146,0-146,0: DEDENT '' -146,0-146,3: NAME 'def' -146,4-146,7: NAME 'd22' -146,7-146,8: OP '(' -146,8-146,9: NAME 'a' -146,9-146,10: OP ',' -146,11-146,12: NAME 'b' -146,12-146,13: OP ',' -146,14-146,15: NAME 'c' -146,15-146,16: OP '=' -146,16-146,17: NUMBER '1' -146,17-146,18: OP ',' -146,19-146,20: NAME 'd' -146,20-146,21: OP '=' -146,21-146,22: NUMBER '2' -146,22-146,23: OP ')' -146,23-146,24: OP ':' -146,25-146,29: NAME 'pass' -146,29-146,30: NEWLINE '\n' -147,0-147,3: NAME 'def' -147,4-147,8: NAME 'd01v' -147,8-147,9: OP '(' -147,9-147,10: NAME 'a' -147,10-147,11: OP '=' -147,11-147,12: NUMBER '1' -147,12-147,13: OP ',' -147,14-147,15: OP '*' -147,15-147,20: NAME 'restt' -147,20-147,21: OP ',' -147,22-147,24: OP '**' -147,24-147,29: NAME 'restd' -147,29-147,30: OP ')' -147,30-147,31: OP ':' -147,32-147,36: NAME 'pass' -147,36-147,37: NEWLINE '\n' -148,0-148,1: NL '\n' -149,0-149,1: OP '(' -149,1-149,2: NAME 'x' -149,2-149,3: OP ',' -149,4-149,5: NAME 'y' -149,5-149,6: OP ')' -149,7-149,9: OP '<>' -149,10-149,11: OP '(' -149,11-149,12: OP '{' -149,12-149,15: STRING "'a'" -149,15-149,16: OP ':' -149,16-149,17: NUMBER '1' -149,17-149,18: OP '}' -149,18-149,19: OP ',' -149,20-149,21: OP '{' -149,21-149,24: STRING "'b'" -149,24-149,25: OP ':' -149,25-149,26: NUMBER '2' -149,26-149,27: OP '}' -149,27-149,28: OP ')' -149,28-149,29: NEWLINE '\n' -150,0-150,1: NL '\n' -151,0-151,12: COMMENT '# comparison' -151,12-151,13: NL '\n' -152,0-152,2: NAME 'if' -152,3-152,4: NUMBER '1' -152,5-152,6: OP '<' -152,7-152,8: NUMBER '1' -152,9-152,10: OP '>' -152,11-152,12: NUMBER '1' -152,13-152,15: OP '==' -152,16-152,17: NUMBER '1' -152,18-152,20: OP '>=' -152,21-152,22: NUMBER '1' -152,23-152,25: OP '<=' -152,26-152,27: NUMBER '1' -152,28-152,30: OP '<>' -152,31-152,32: NUMBER '1' -152,33-152,35: OP '!=' -152,36-152,37: NUMBER '1' -152,38-152,40: NAME 'in' -152,41-152,42: NUMBER '1' -152,43-152,46: NAME 'not' -152,47-152,49: NAME 'in' -152,50-152,51: NUMBER '1' -152,52-152,54: NAME 'is' -152,55-152,56: NUMBER '1' -152,57-152,59: NAME 'is' -152,60-152,63: NAME 'not' -152,64-152,65: NUMBER '1' -152,65-152,66: OP ':' -152,67-152,71: NAME 'pass' -152,71-152,72: NEWLINE '\n' -153,0-153,1: NL '\n' -154,0-154,8: COMMENT '# binary' -154,8-154,9: NL '\n' -155,0-155,1: NAME 'x' -155,2-155,3: OP '=' -155,4-155,5: NUMBER '1' -155,6-155,7: OP '&' -155,8-155,9: NUMBER '1' -155,9-155,10: NEWLINE '\n' -156,0-156,1: NAME 'x' -156,2-156,3: OP '=' -156,4-156,5: NUMBER '1' -156,6-156,7: OP '^' -156,8-156,9: NUMBER '1' -156,9-156,10: NEWLINE '\n' -157,0-157,1: NAME 'x' -157,2-157,3: OP '=' -157,4-157,5: NUMBER '1' -157,6-157,7: OP '|' -157,8-157,9: NUMBER '1' -157,9-157,10: NEWLINE '\n' -158,0-158,1: NL '\n' -159,0-159,7: COMMENT '# shift' -159,7-159,8: NL '\n' -160,0-160,1: NAME 'x' -160,2-160,3: OP '=' -160,4-160,5: NUMBER '1' -160,6-160,8: OP '<<' -160,9-160,10: NUMBER '1' -160,11-160,13: OP '>>' -160,14-160,15: NUMBER '1' -160,15-160,16: NEWLINE '\n' -161,0-161,1: NL '\n' -162,0-162,10: COMMENT '# additive' -162,10-162,11: NL '\n' -163,0-163,1: NAME 'x' -163,2-163,3: OP '=' -163,4-163,5: NUMBER '1' -163,6-163,7: OP '-' -163,8-163,9: NUMBER '1' -163,10-163,11: OP '+' -163,12-163,13: NUMBER '1' -163,14-163,15: OP '-' -163,16-163,17: NUMBER '1' -163,18-163,19: OP '+' -163,20-163,21: NUMBER '1' -163,21-163,22: NEWLINE '\n' -164,0-164,1: NL '\n' -165,0-165,16: COMMENT '# multiplicative' -165,16-165,17: NL '\n' 
-166,0-166,1: NAME 'x' -166,2-166,3: OP '=' -166,4-166,5: NUMBER '1' -166,6-166,7: OP '/' -166,8-166,9: NUMBER '1' -166,10-166,11: OP '*' -166,12-166,13: NUMBER '1' -166,14-166,15: OP '%' -166,16-166,17: NUMBER '1' -166,17-166,18: NEWLINE '\n' -167,0-167,1: NL '\n' -168,0-168,7: COMMENT '# unary' -168,7-168,8: NL '\n' -169,0-169,1: NAME 'x' -169,2-169,3: OP '=' -169,4-169,5: OP '~' -169,5-169,6: NUMBER '1' -169,7-169,8: OP '^' -169,9-169,10: NUMBER '1' -169,11-169,12: OP '&' -169,13-169,14: NUMBER '1' -169,15-169,16: OP '|' -169,17-169,18: NUMBER '1' -169,19-169,20: OP '&' -169,21-169,22: NUMBER '1' -169,23-169,24: OP '^' -169,25-169,26: OP '-' -169,26-169,27: NUMBER '1' -169,27-169,28: NEWLINE '\n' -170,0-170,1: NAME 'x' -170,2-170,3: OP '=' -170,4-170,5: OP '-' -170,5-170,6: NUMBER '1' -170,6-170,7: OP '*' -170,7-170,8: NUMBER '1' -170,8-170,9: OP '/' -170,9-170,10: NUMBER '1' -170,11-170,12: OP '+' -170,13-170,14: NUMBER '1' -170,14-170,15: OP '*' -170,15-170,16: NUMBER '1' -170,17-170,18: OP '-' -170,19-170,20: OP '-' -170,20-170,21: OP '-' -170,21-170,22: OP '-' -170,22-170,23: NUMBER '1' -170,23-170,24: OP '*' -170,24-170,25: NUMBER '1' -170,25-170,26: NEWLINE '\n' -171,0-171,1: NL '\n' -172,0-172,10: COMMENT '# selector' -172,10-172,11: NL '\n' -173,0-173,6: NAME 'import' -173,7-173,10: NAME 'sys' -173,10-173,11: OP ',' -173,12-173,16: NAME 'time' -173,16-173,17: NEWLINE '\n' -174,0-174,1: NAME 'x' -174,2-174,3: OP '=' -174,4-174,7: NAME 'sys' -174,7-174,8: OP '.' -174,8-174,15: NAME 'modules' -174,15-174,16: OP '[' -174,16-174,22: STRING "'time'" -174,22-174,23: OP ']' -174,23-174,24: OP '.' -174,24-174,28: NAME 'time' -174,28-174,29: OP '(' -174,29-174,30: OP ')' -174,30-174,31: NEWLINE '\n' -175,0-175,1: NL '\n' -176,0-176,1: OP '@' -176,1-176,13: NAME 'staticmethod' -176,13-176,14: NEWLINE '\n' -177,0-177,3: NAME 'def' -177,4-177,7: NAME 'foo' -177,7-177,8: OP '(' -177,8-177,9: OP ')' -177,9-177,10: OP ':' -177,11-177,15: NAME 'pass' -177,15-177,16: NEWLINE '\n' -178,0-178,1: NL '\n' -179,0-179,0: ENDMARKER '' Modified: python/trunk/Lib/test/test_tokenize.py ============================================================================== --- python/trunk/Lib/test/test_tokenize.py (original) +++ python/trunk/Lib/test/test_tokenize.py Thu Mar 13 21:33:10 2008 @@ -1,112 +1,502 @@ -"""Tests for the tokenize module. +doctests = """ +Tests for the tokenize module. -The tests were originally written in the old Python style, where the -test output was compared to a golden file. This docstring represents -the first steps towards rewriting the entire test as a doctest. + >>> import glob, random, sys -The tests can be really simple. Given a small fragment of source -code, print out a table with the tokens. The ENDMARK is omitted for +The tests can be really simple. Given a small fragment of source +code, print out a table with thokens. The ENDMARK is omitted for brevity. ->>> dump_tokens("1 + 1") -NUMBER '1' (1, 0) (1, 1) -OP '+' (1, 2) (1, 3) -NUMBER '1' (1, 4) (1, 5) - -A comment generates a token here, unlike in the parser module. The -comment token is followed by an NL or a NEWLINE token, depending on -whether the line contains the completion of a statement. - ->>> dump_tokens("if False:\\n" -... " # NL\\n" -... 
" True = False # NEWLINE\\n") -NAME 'if' (1, 0) (1, 2) -NAME 'False' (1, 3) (1, 8) -OP ':' (1, 8) (1, 9) -NEWLINE '\\n' (1, 9) (1, 10) -COMMENT '# NL' (2, 4) (2, 8) -NL '\\n' (2, 8) (2, 9) -INDENT ' ' (3, 0) (3, 4) -NAME 'True' (3, 4) (3, 8) -OP '=' (3, 9) (3, 10) -NAME 'False' (3, 11) (3, 16) -COMMENT '# NEWLINE' (3, 17) (3, 26) -NEWLINE '\\n' (3, 26) (3, 27) -DEDENT '' (4, 0) (4, 0) - - -There will be a bunch more tests of specific source patterns. - -The tokenize module also defines an untokenize function that should -regenerate the original program text from the tokens. - -There are some standard formatting practices that are easy to get right. - ->>> roundtrip("if x == 1:\\n" -... " print x\\n") -if x == 1: - print x + >>> dump_tokens("1 + 1") + NUMBER '1' (1, 0) (1, 1) + OP '+' (1, 2) (1, 3) + NUMBER '1' (1, 4) (1, 5) + + >>> dump_tokens("if False:\\n" + ... " # NL\\n" + ... " True = False # NEWLINE\\n") + NAME 'if' (1, 0) (1, 2) + NAME 'False' (1, 3) (1, 8) + OP ':' (1, 8) (1, 9) + NEWLINE '\\n' (1, 9) (1, 10) + COMMENT '# NL' (2, 4) (2, 8) + NL '\\n' (2, 8) (2, 9) + INDENT ' ' (3, 0) (3, 4) + NAME 'True' (3, 4) (3, 8) + OP '=' (3, 9) (3, 10) + NAME 'False' (3, 11) (3, 16) + COMMENT '# NEWLINE' (3, 17) (3, 26) + NEWLINE '\\n' (3, 26) (3, 27) + DEDENT '' (4, 0) (4, 0) + + >>> indent_error_file = \""" + ... def k(x): + ... x += 2 + ... x += 5 + ... \""" + + >>> for tok in generate_tokens(StringIO(indent_error_file).readline): pass + Traceback (most recent call last): + ... + IndentationError: unindent does not match any outer indentation level + +Test roundtrip for `untokenize`. `f` is an open file or a string. The source +code in f is tokenized, converted back to source code via tokenize.untokenize(), +and tokenized again from the latter. The test fails if the second tokenization +doesn't match the first. + + >>> def roundtrip(f): + ... if isinstance(f, str): f = StringIO(f) + ... token_list = list(generate_tokens(f.readline)) + ... f.close() + ... tokens1 = [tok[:2] for tok in token_list] + ... new_text = untokenize(tokens1) + ... readline = iter(new_text.splitlines(1)).next + ... tokens2 = [tok[:2] for tok in generate_tokens(readline)] + ... return tokens1 == tokens2 + ... + +There are some standard formattig practises that are easy to get right. + + >>> roundtrip("if x == 1:\\n" + ... " print x\\n") + True -Some people use different formatting conventions, which makes -untokenize a little trickier. Note that this test involves trailing -whitespace after the colon. Note that we use hex escapes to make the -two trailing blanks apparent in the expected output. - ->>> roundtrip("if x == 1 : \\n" -... " print x\\n") -if x == 1 :\x20\x20 - print x - -Comments need to go in the right place. - ->>> roundtrip("if x == 1:\\n" -... " # A comment by itself.\\n" -... " print x # Comment here, too.\\n" -... " # Another comment.\\n" -... "after_if = True\\n") -if x == 1: - # A comment by itself. - print x # Comment here, too. - # Another comment. -after_if = True - ->>> roundtrip("if (x # The comments need to go in the right place\\n" -... " == 1):\\n" -... " print 'x == 1'\\n") -if (x # The comments need to go in the right place - == 1): - print 'x == 1' + >>> roundtrip("# This is a comment\\n# This also") + True +Some people use different formatting conventions, which makes +untokenize a little trickier. Note that this test involves trailing +whitespace after the colon. Note that we use hex escapes to make the +two trailing blanks apperant in the expected output. 
+ + >>> roundtrip("if x == 1 : \\n" + ... " print x\\n") + True + + >>> f = test_support.findfile("tokenize_tests" + os.extsep + "txt") + >>> roundtrip(open(f)) + True + + >>> roundtrip("if x == 1:\\n" + ... " # A comment by itself.\\n" + ... " print x # Comment here, too.\\n" + ... " # Another comment.\\n" + ... "after_if = True\\n") + True + + >>> roundtrip("if (x # The comments need to go in the right place\\n" + ... " == 1):\\n" + ... " print 'x==1'\\n") + True + + >>> roundtrip("class Test: # A comment here\\n" + ... " # A comment with weird indent\\n" + ... " after_com = 5\\n" + ... " def x(m): return m*5 # a one liner\\n" + ... " def y(m): # A whitespace after the colon\\n" + ... " return y*4 # 3-space indent\\n") + True + +Some error-handling code + + >>> roundtrip("try: import somemodule\\n" + ... "except ImportError: # comment\\n" + ... " print 'Can not import' # comment2\\n" + ... "else: print 'Loaded'\\n") + True + +Balancing contunuation + + >>> roundtrip("a = (3,4, \\n" + ... "5,6)\\n" + ... "y = [3, 4,\\n" + ... "5]\\n" + ... "z = {'a': 5,\\n" + ... "'b':15, 'c':True}\\n" + ... "x = len(y) + 5 - a[\\n" + ... "3] - a[2]\\n" + ... "+ len(z) - z[\\n" + ... "'b']\\n") + True + +Ordinary integers and binary operators + + >>> dump_tokens("0xff <= 255") + NUMBER '0xff' (1, 0) (1, 4) + OP '<=' (1, 5) (1, 7) + NUMBER '255' (1, 8) (1, 11) + >>> dump_tokens("01234567 > ~0x15") + NUMBER '01234567' (1, 0) (1, 8) + OP '>' (1, 9) (1, 10) + OP '~' (1, 11) (1, 12) + NUMBER '0x15' (1, 12) (1, 16) + >>> dump_tokens("2134568 != 01231515") + NUMBER '2134568' (1, 0) (1, 7) + OP '!=' (1, 8) (1, 10) + NUMBER '01231515' (1, 11) (1, 19) + >>> dump_tokens("(-124561-1) & 0200000000") + OP '(' (1, 0) (1, 1) + OP '-' (1, 1) (1, 2) + NUMBER '124561' (1, 2) (1, 8) + OP '-' (1, 8) (1, 9) + NUMBER '1' (1, 9) (1, 10) + OP ')' (1, 10) (1, 11) + OP '&' (1, 12) (1, 13) + NUMBER '0200000000' (1, 14) (1, 24) + >>> dump_tokens("0xdeadbeef != -1") + NUMBER '0xdeadbeef' (1, 0) (1, 10) + OP '!=' (1, 11) (1, 13) + OP '-' (1, 14) (1, 15) + NUMBER '1' (1, 15) (1, 16) + >>> dump_tokens("0xdeadc0de & 012345") + NUMBER '0xdeadc0de' (1, 0) (1, 10) + OP '&' (1, 11) (1, 12) + NUMBER '012345' (1, 13) (1, 19) + >>> dump_tokens("0xFF & 0x15 | 1234") + NUMBER '0xFF' (1, 0) (1, 4) + OP '&' (1, 5) (1, 6) + NUMBER '0x15' (1, 7) (1, 11) + OP '|' (1, 12) (1, 13) + NUMBER '1234' (1, 14) (1, 18) + +Long integers + + >>> dump_tokens("x = 0L") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '0L' (1, 4) (1, 6) + >>> dump_tokens("x = 0xfffffffffff") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '0xffffffffff (1, 4) (1, 17) + >>> dump_tokens("x = 123141242151251616110l") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '123141242151 (1, 4) (1, 26) + >>> dump_tokens("x = -15921590215012591L") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + OP '-' (1, 4) (1, 5) + NUMBER '159215902150 (1, 5) (1, 23) + +Floating point numbers + + >>> dump_tokens("x = 3.14159") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '3.14159' (1, 4) (1, 11) + >>> dump_tokens("x = 314159.") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '314159.' 
(1, 4) (1, 11) + >>> dump_tokens("x = .314159") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '.314159' (1, 4) (1, 11) + >>> dump_tokens("x = 3e14159") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '3e14159' (1, 4) (1, 11) + >>> dump_tokens("x = 3E123") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '3E123' (1, 4) (1, 9) + >>> dump_tokens("x+y = 3e-1230") + NAME 'x' (1, 0) (1, 1) + OP '+' (1, 1) (1, 2) + NAME 'y' (1, 2) (1, 3) + OP '=' (1, 4) (1, 5) + NUMBER '3e-1230' (1, 6) (1, 13) + >>> dump_tokens("x = 3.14e159") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '3.14e159' (1, 4) (1, 12) + +String literals + + >>> dump_tokens("x = ''; y = \\\"\\\"") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING "''" (1, 4) (1, 6) + OP ';' (1, 6) (1, 7) + NAME 'y' (1, 8) (1, 9) + OP '=' (1, 10) (1, 11) + STRING '""' (1, 12) (1, 14) + >>> dump_tokens("x = '\\\"'; y = \\\"'\\\"") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING '\\'"\\'' (1, 4) (1, 7) + OP ';' (1, 7) (1, 8) + NAME 'y' (1, 9) (1, 10) + OP '=' (1, 11) (1, 12) + STRING '"\\'"' (1, 13) (1, 16) + >>> dump_tokens("x = \\\"doesn't \\\"shrink\\\", does it\\\"") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING '"doesn\\'t "' (1, 4) (1, 14) + NAME 'shrink' (1, 14) (1, 20) + STRING '", does it"' (1, 20) (1, 31) + >>> dump_tokens("x = u'abc' + U'ABC'") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING "u'abc'" (1, 4) (1, 10) + OP '+' (1, 11) (1, 12) + STRING "U'ABC'" (1, 13) (1, 19) + >>> dump_tokens('y = u"ABC" + U"ABC"') + NAME 'y' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING 'u"ABC"' (1, 4) (1, 10) + OP '+' (1, 11) (1, 12) + STRING 'U"ABC"' (1, 13) (1, 19) + >>> dump_tokens("x = ur'abc' + Ur'ABC' + uR'ABC' + UR'ABC'") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING "ur'abc'" (1, 4) (1, 11) + OP '+' (1, 12) (1, 13) + STRING "Ur'ABC'" (1, 14) (1, 21) + OP '+' (1, 22) (1, 23) + STRING "uR'ABC'" (1, 24) (1, 31) + OP '+' (1, 32) (1, 33) + STRING "UR'ABC'" (1, 34) (1, 41) + >>> dump_tokens('y = ur"abc" + Ur"ABC" + uR"ABC" + UR"ABC"') + NAME 'y' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + STRING 'ur"abc"' (1, 4) (1, 11) + OP '+' (1, 12) (1, 13) + STRING 'Ur"ABC"' (1, 14) (1, 21) + OP '+' (1, 22) (1, 23) + STRING 'uR"ABC"' (1, 24) (1, 31) + OP '+' (1, 32) (1, 33) + STRING 'UR"ABC"' (1, 34) (1, 41) + +Operators + + >>> dump_tokens("def d22(a, b, c=2, d=2, *k): pass") + NAME 'def' (1, 0) (1, 3) + NAME 'd22' (1, 4) (1, 7) + OP '(' (1, 7) (1, 8) + NAME 'a' (1, 8) (1, 9) + OP ',' (1, 9) (1, 10) + NAME 'b' (1, 11) (1, 12) + OP ',' (1, 12) (1, 13) + NAME 'c' (1, 14) (1, 15) + OP '=' (1, 15) (1, 16) + NUMBER '2' (1, 16) (1, 17) + OP ',' (1, 17) (1, 18) + NAME 'd' (1, 19) (1, 20) + OP '=' (1, 20) (1, 21) + NUMBER '2' (1, 21) (1, 22) + OP ',' (1, 22) (1, 23) + OP '*' (1, 24) (1, 25) + NAME 'k' (1, 25) (1, 26) + OP ')' (1, 26) (1, 27) + OP ':' (1, 27) (1, 28) + NAME 'pass' (1, 29) (1, 33) + >>> dump_tokens("def d01v_(a=1, *k, **w): pass") + NAME 'def' (1, 0) (1, 3) + NAME 'd01v_' (1, 4) (1, 9) + OP '(' (1, 9) (1, 10) + NAME 'a' (1, 10) (1, 11) + OP '=' (1, 11) (1, 12) + NUMBER '1' (1, 12) (1, 13) + OP ',' (1, 13) (1, 14) + OP '*' (1, 15) (1, 16) + NAME 'k' (1, 16) (1, 17) + OP ',' (1, 17) (1, 18) + OP '**' (1, 19) (1, 21) + NAME 'w' (1, 21) (1, 22) + OP ')' (1, 22) (1, 23) + OP ':' (1, 23) (1, 24) + NAME 'pass' (1, 25) (1, 29) + +Comparison + + >>> dump_tokens("if 1 < 1 > 1 == 1 >= 5 <= 0x15 <= 0x12 != " + + ... 
"1 and 5 in 1 not in 1 is 1 or 5 is not 1: pass") + NAME 'if' (1, 0) (1, 2) + NUMBER '1' (1, 3) (1, 4) + OP '<' (1, 5) (1, 6) + NUMBER '1' (1, 7) (1, 8) + OP '>' (1, 9) (1, 10) + NUMBER '1' (1, 11) (1, 12) + OP '==' (1, 13) (1, 15) + NUMBER '1' (1, 16) (1, 17) + OP '>=' (1, 18) (1, 20) + NUMBER '5' (1, 21) (1, 22) + OP '<=' (1, 23) (1, 25) + NUMBER '0x15' (1, 26) (1, 30) + OP '<=' (1, 31) (1, 33) + NUMBER '0x12' (1, 34) (1, 38) + OP '!=' (1, 39) (1, 41) + NUMBER '1' (1, 42) (1, 43) + NAME 'and' (1, 44) (1, 47) + NUMBER '5' (1, 48) (1, 49) + NAME 'in' (1, 50) (1, 52) + NUMBER '1' (1, 53) (1, 54) + NAME 'not' (1, 55) (1, 58) + NAME 'in' (1, 59) (1, 61) + NUMBER '1' (1, 62) (1, 63) + NAME 'is' (1, 64) (1, 66) + NUMBER '1' (1, 67) (1, 68) + NAME 'or' (1, 69) (1, 71) + NUMBER '5' (1, 72) (1, 73) + NAME 'is' (1, 74) (1, 76) + NAME 'not' (1, 77) (1, 80) + NUMBER '1' (1, 81) (1, 82) + OP ':' (1, 82) (1, 83) + NAME 'pass' (1, 84) (1, 88) + +Shift + + >>> dump_tokens("x = 1 << 1 >> 5") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '1' (1, 4) (1, 5) + OP '<<' (1, 6) (1, 8) + NUMBER '1' (1, 9) (1, 10) + OP '>>' (1, 11) (1, 13) + NUMBER '5' (1, 14) (1, 15) + +Additive + + >>> dump_tokens("x = 1 - y + 15 - 01 + 0x124 + z + a[5]") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '1' (1, 4) (1, 5) + OP '-' (1, 6) (1, 7) + NAME 'y' (1, 8) (1, 9) + OP '+' (1, 10) (1, 11) + NUMBER '15' (1, 12) (1, 14) + OP '-' (1, 15) (1, 16) + NUMBER '01' (1, 17) (1, 19) + OP '+' (1, 20) (1, 21) + NUMBER '0x124' (1, 22) (1, 27) + OP '+' (1, 28) (1, 29) + NAME 'z' (1, 30) (1, 31) + OP '+' (1, 32) (1, 33) + NAME 'a' (1, 34) (1, 35) + OP '[' (1, 35) (1, 36) + NUMBER '5' (1, 36) (1, 37) + OP ']' (1, 37) (1, 38) + +Multiplicative + + >>> dump_tokens("x = 1//1*1/5*12%0x12") + NAME 'x' (1, 0) (1, 1) + OP '=' (1, 2) (1, 3) + NUMBER '1' (1, 4) (1, 5) + OP '//' (1, 5) (1, 7) + NUMBER '1' (1, 7) (1, 8) + OP '*' (1, 8) (1, 9) + NUMBER '1' (1, 9) (1, 10) + OP '/' (1, 10) (1, 11) + NUMBER '5' (1, 11) (1, 12) + OP '*' (1, 12) (1, 13) + NUMBER '12' (1, 13) (1, 15) + OP '%' (1, 15) (1, 16) + NUMBER '0x12' (1, 16) (1, 20) + +Unary + + >>> dump_tokens("~1 ^ 1 & 1 |1 ^ -1") + OP '~' (1, 0) (1, 1) + NUMBER '1' (1, 1) (1, 2) + OP '^' (1, 3) (1, 4) + NUMBER '1' (1, 5) (1, 6) + OP '&' (1, 7) (1, 8) + NUMBER '1' (1, 9) (1, 10) + OP '|' (1, 11) (1, 12) + NUMBER '1' (1, 12) (1, 13) + OP '^' (1, 14) (1, 15) + OP '-' (1, 16) (1, 17) + NUMBER '1' (1, 17) (1, 18) + >>> dump_tokens("-1*1/1+1*1//1 - ---1**1") + OP '-' (1, 0) (1, 1) + NUMBER '1' (1, 1) (1, 2) + OP '*' (1, 2) (1, 3) + NUMBER '1' (1, 3) (1, 4) + OP '/' (1, 4) (1, 5) + NUMBER '1' (1, 5) (1, 6) + OP '+' (1, 6) (1, 7) + NUMBER '1' (1, 7) (1, 8) + OP '*' (1, 8) (1, 9) + NUMBER '1' (1, 9) (1, 10) + OP '//' (1, 10) (1, 12) + NUMBER '1' (1, 12) (1, 13) + OP '-' (1, 14) (1, 15) + OP '-' (1, 16) (1, 17) + OP '-' (1, 17) (1, 18) + OP '-' (1, 18) (1, 19) + NUMBER '1' (1, 19) (1, 20) + OP '**' (1, 20) (1, 22) + NUMBER '1' (1, 22) (1, 23) + +Selector + + >>> dump_tokens("import sys, time\\nx = sys.modules['time'].time()") + NAME 'import' (1, 0) (1, 6) + NAME 'sys' (1, 7) (1, 10) + OP ',' (1, 10) (1, 11) + NAME 'time' (1, 12) (1, 16) + NEWLINE '\\n' (1, 16) (1, 17) + NAME 'x' (2, 0) (2, 1) + OP '=' (2, 2) (2, 3) + NAME 'sys' (2, 4) (2, 7) + OP '.' (2, 7) (2, 8) + NAME 'modules' (2, 8) (2, 15) + OP '[' (2, 15) (2, 16) + STRING "'time'" (2, 16) (2, 22) + OP ']' (2, 22) (2, 23) + OP '.' 
(2, 23) (2, 24) + NAME 'time' (2, 24) (2, 28) + OP '(' (2, 28) (2, 29) + OP ')' (2, 29) (2, 30) + +Methods + + >>> dump_tokens("@staticmethod\\ndef foo(x,y): pass") + OP '@' (1, 0) (1, 1) + NAME 'staticmethod (1, 1) (1, 13) + NEWLINE '\\n' (1, 13) (1, 14) + NAME 'def' (2, 0) (2, 3) + NAME 'foo' (2, 4) (2, 7) + OP '(' (2, 7) (2, 8) + NAME 'x' (2, 8) (2, 9) + OP ',' (2, 9) (2, 10) + NAME 'y' (2, 10) (2, 11) + OP ')' (2, 11) (2, 12) + OP ':' (2, 12) (2, 13) + NAME 'pass' (2, 14) (2, 18) + +Backslash means line continuation, except for comments + + >>> roundtrip("x=1+\\\\n" + ... "1\\n" + ... "# This is a comment\\\\n" + ... "# This also\\n") + True + >>> roundtrip("# Comment \\\\nx = 0") + True + + >>> + >>> tempdir = os.path.dirname(f) or os.curdir + >>> testfiles = glob.glob(os.path.join(tempdir, "test*.py")) + >>> if not test_support.is_resource_enabled("compiler"): + ... testfiles = random.sample(testfiles, 10) + ... + >>> for testfile in testfiles: + ... if not roundtrip(open(testfile)): break + ... else: True + True """ -import os, glob, random, time, sys -from cStringIO import StringIO -from test.test_support import (verbose, findfile, is_resource_enabled, - TestFailed) -from tokenize import (tokenize, generate_tokens, untokenize, tok_name, - ENDMARKER, NUMBER, NAME, OP, STRING, COMMENT) - -# How much time in seconds can pass before we print a 'Still working' message. -_PRINT_WORKING_MSG_INTERVAL = 5 * 60 - -# Test roundtrip for `untokenize`. `f` is a file path. The source code in f -# is tokenized, converted back to source code via tokenize.untokenize(), -# and tokenized again from the latter. The test fails if the second -# tokenization doesn't match the first. -def test_roundtrip(f): - ## print 'Testing:', f - fobj = open(f) - try: - fulltok = list(generate_tokens(fobj.readline)) - finally: - fobj.close() - - t1 = [tok[:2] for tok in fulltok] - newtext = untokenize(t1) - readline = iter(newtext.splitlines(1)).next - t2 = [tok[:2] for tok in generate_tokens(readline)] - if t1 != t2: - raise TestFailed("untokenize() roundtrip failed for %r" % f) + +from test import test_support +from tokenize import (tokenize, untokenize, generate_tokens, NUMBER, NAME, OP, + STRING, ENDMARKER, tok_name) +from StringIO import StringIO +import os def dump_tokens(s): """Print out the tokens in s in a table format. @@ -118,12 +508,7 @@ if type == ENDMARKER: break type = tok_name[type] - print "%(type)-10.10s %(token)-13.13r %(start)s %(end)s" % locals() - -def roundtrip(s): - f = StringIO(s) - source = untokenize(generate_tokens(f.readline)) - print source, + print("%(type)-10.10s %(token)-13.13r %(start)s %(end)s" % locals()) # This is an example from the docs, set up as a doctest. def decistmt(s): @@ -163,61 +548,13 @@ result.append((toknum, tokval)) return untokenize(result) -def test_main(): - if verbose: - print 'starting...' - - next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL - # This displays the tokenization of tokenize_tests.py to stdout, and - # regrtest.py checks that this equals the expected output (in the - # test/output/ directory). - f = open(findfile('tokenize_tests' + os.extsep + 'txt')) - tokenize(f.readline) - f.close() - - # Now run test_roundtrip() over tokenize_test.py too, and over all - # (if the "compiler" resource is enabled) or a small random sample (if - # "compiler" is not enabled) of the test*.py files. 
- f = findfile('tokenize_tests' + os.extsep + 'txt') - test_roundtrip(f) - - testdir = os.path.dirname(f) or os.curdir - testfiles = glob.glob(testdir + os.sep + 'test*.py') - if not is_resource_enabled('compiler'): - testfiles = random.sample(testfiles, 10) - - for f in testfiles: - # Print still working message since this test can be really slow - if next_time <= time.time(): - next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL - print >>sys.__stdout__, ' test_main still working, be patient...' - sys.__stdout__.flush() - - test_roundtrip(f) - - # Test detecton of IndentationError. - sampleBadText = """\ -def foo(): - bar - baz -""" +__test__ = {"doctests" : doctests, 'decistmt': decistmt} - try: - for tok in generate_tokens(StringIO(sampleBadText).readline): - pass - except IndentationError: - pass - else: - raise TestFailed("Did not detect IndentationError:") - - # Run the doctests in this module. - from test import test_tokenize # i.e., this module - from test.test_support import run_doctest - run_doctest(test_tokenize, verbose) - if verbose: - print 'finished' +def test_main(): + from test import test_tokenize + test_support.run_doctest(test_tokenize, True) if __name__ == "__main__": test_main() Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Thu Mar 13 21:33:10 2008 @@ -178,6 +178,7 @@ Andy Dustman Gary Duzan Eugene Dvurechenski +Josip Dzolonga Maxim Dzumanenko Hans Eckardt Grant Edwards Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 13 21:33:10 2008 @@ -50,6 +50,8 @@ Tests ----- +- GHOP 238: Convert test_tokenize to use doctest. + - GHOP 237: Rewrite test_thread using unittest. - Patch #2232: os.tmpfile might fail on Windows if the user has no From python-checkins at python.org Thu Mar 13 21:47:41 2008 From: python-checkins at python.org (brett.cannon) Date: Thu, 13 Mar 2008 21:47:41 +0100 (CET) Subject: [Python-checkins] r61373 - in python/trunk: Lib/test/test_crypt.py Lib/test/test_select.py Misc/ACKS Misc/NEWS Message-ID: <20080313204741.A98A91E4014@bag.python.org> Author: brett.cannon Date: Thu Mar 13 21:47:41 2008 New Revision: 61373 Modified: python/trunk/Lib/test/test_crypt.py python/trunk/Lib/test/test_select.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS Log: Convert test_contains, test_crypt, and test_select to unittest. Patch from GHOP 294 by David Marek. Modified: python/trunk/Lib/test/test_crypt.py ============================================================================== --- python/trunk/Lib/test/test_crypt.py (original) +++ python/trunk/Lib/test/test_crypt.py Thu Mar 13 21:47:41 2008 @@ -1,11 +1,16 @@ -#! /usr/bin/env python -"""Simple test script for cryptmodule.c - Roger E. 
Masse -""" - -from test.test_support import verbose +from test import test_support +import unittest import crypt -c = crypt.crypt('mypassword', 'ab') -if verbose: - print 'Test encryption: ', c +class CryptTestCase(unittest.TestCase): + + def test_crypt(self): + c = crypt.crypt('mypassword', 'ab') + if test_support.verbose: + print 'Test encryption: ', c + +def test_main(): + test_support.run_unittest(CryptTestCase) + +if __name__ == "__main__": + test_main() Modified: python/trunk/Lib/test/test_select.py ============================================================================== --- python/trunk/Lib/test/test_select.py (original) +++ python/trunk/Lib/test/test_select.py Thu Mar 13 21:47:41 2008 @@ -1,70 +1,53 @@ -# Testing select module -from test.test_support import verbose, reap_children +from test import test_support +import unittest import select import os +import sys -# test some known error conditions -try: - rfd, wfd, xfd = select.select(1, 2, 3) -except TypeError: - pass -else: - print 'expected TypeError exception not raised' - -class Nope: - pass - -class Almost: - def fileno(self): - return 'fileno' - -try: - rfd, wfd, xfd = select.select([Nope()], [], []) -except TypeError: - pass -else: - print 'expected TypeError exception not raised' - -try: - rfd, wfd, xfd = select.select([Almost()], [], []) -except TypeError: - pass -else: - print 'expected TypeError exception not raised' - -try: - rfd, wfd, xfd = select.select([], [], [], 'not a number') -except TypeError: - pass -else: - print 'expected TypeError exception not raised' - - -def test(): - import sys - if sys.platform[:3] in ('win', 'mac', 'os2', 'riscos'): - if verbose: - print "Can't test select easily on", sys.platform - return - cmd = 'for i in 0 1 2 3 4 5 6 7 8 9; do echo testing...; sleep 1; done' - p = os.popen(cmd, 'r') - for tout in (0, 1, 2, 4, 8, 16) + (None,)*10: - if verbose: - print 'timeout =', tout - rfd, wfd, xfd = select.select([p], [], [], tout) - if (rfd, wfd, xfd) == ([], [], []): - continue - if (rfd, wfd, xfd) == ([p], [], []): - line = p.readline() - if verbose: - print repr(line) - if not line: - if verbose: - print 'EOF' - break - continue - print 'Unexpected return values from select():', rfd, wfd, xfd - p.close() - reap_children() +class SelectTestCase(unittest.TestCase): -test() + class Nope: + pass + + class Almost: + def fileno(self): + return 'fileno' + + def test_error_conditions(self): + self.assertRaises(TypeError, select.select, 1, 2, 3) + self.assertRaises(TypeError, select.select, [self.Nope()], [], []) + self.assertRaises(TypeError, select.select, [self.Almost()], [], []) + self.assertRaises(TypeError, select.select, [], [], [], "not a number") + + def test_select(self): + if sys.platform[:3] in ('win', 'mac', 'os2', 'riscos'): + if test_support.verbose: + print "Can't test select easily on", sys.platform + return + cmd = 'for i in 0 1 2 3 4 5 6 7 8 9; do echo testing...; sleep 1; done' + p = os.popen(cmd, 'r') + for tout in (0, 1, 2, 4, 8, 16) + (None,)*10: + if test_support.verbose: + print 'timeout =', tout + rfd, wfd, xfd = select.select([p], [], [], tout) + if (rfd, wfd, xfd) == ([], [], []): + continue + if (rfd, wfd, xfd) == ([p], [], []): + line = p.readline() + if test_support.verbose: + print repr(line) + if not line: + if test_support.verbose: + print 'EOF' + break + continue + self.fail('Unexpected return values from select():', rfd, wfd, xfd) + p.close() + + +def test_main(): + test_support.run_unittest(SelectTestCase) + test_support.reap_children() + +if __name__ == 
"__main__": + test_main() Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Thu Mar 13 21:47:41 2008 @@ -428,6 +428,7 @@ Grzegorz Makarewicz Ken Manheimer Vladimir Marangozov +David Marek Doug Marien Alex Martelli Anthony Martin Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 13 21:47:41 2008 @@ -50,6 +50,8 @@ Tests ----- +- GHOP 294: Convert test_contains, test_crypt, and test_select to unittest. + - GHOP 238: Convert test_tokenize to use doctest. - GHOP 237: Rewrite test_thread using unittest. From python-checkins at python.org Thu Mar 13 22:02:16 2008 From: python-checkins at python.org (brett.cannon) Date: Thu, 13 Mar 2008 22:02:16 +0100 (CET) Subject: [Python-checkins] r61374 - in python/trunk: Lib/test/test_gdbm.py Misc/ACKS Misc/NEWS Message-ID: <20080313210216.C12921E402B@bag.python.org> Author: brett.cannon Date: Thu Mar 13 22:02:16 2008 New Revision: 61374 Modified: python/trunk/Lib/test/test_gdbm.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS Log: Move test_gdbm to use unittest. Closes issue #1960. Thanks Giampaolo Rodola. Modified: python/trunk/Lib/test/test_gdbm.py ============================================================================== --- python/trunk/Lib/test/test_gdbm.py (original) +++ python/trunk/Lib/test/test_gdbm.py Thu Mar 13 22:02:16 2008 @@ -1,46 +1,83 @@ -#! /usr/bin/env python -"""Test script for the gdbm module - Roger E. Masse -""" - import gdbm -from gdbm import error -from test.test_support import verbose, verify, TestFailed, TESTFN +import unittest +import os +from test.test_support import verbose, TESTFN, run_unittest, unlink + filename = TESTFN -g = gdbm.open(filename, 'c') -verify(g.keys() == []) -g['a'] = 'b' -g['12345678910'] = '019237410982340912840198242' -a = g.keys() -if verbose: - print 'Test gdbm file keys: ', a - -g.has_key('a') -g.close() -try: - g['a'] -except error: - pass -else: - raise TestFailed, "expected gdbm.error accessing closed database" -g = gdbm.open(filename, 'r') -g.close() -g = gdbm.open(filename, 'w') -g.close() -g = gdbm.open(filename, 'n') -g.close() -try: - g = gdbm.open(filename, 'rx') - g.close() -except error: - pass -else: - raise TestFailed, "expected gdbm.error when passing invalid open flags" - -try: - import os - os.unlink(filename) -except: - pass +class TestGdbm(unittest.TestCase): + + def setUp(self): + self.g = None + + def tearDown(self): + if self.g is not None: + self.g.close() + unlink(filename) + + def test_key_methods(self): + self.g = gdbm.open(filename, 'c') + self.assertEqual(self.g.keys(), []) + self.g['a'] = 'b' + self.g['12345678910'] = '019237410982340912840198242' + key_set = set(self.g.keys()) + self.assertEqual(key_set, frozenset(['a', '12345678910'])) + self.assert_(self.g.has_key('a')) + key = self.g.firstkey() + while key: + self.assert_(key in key_set) + key_set.remove(key) + key = self.g.nextkey(key) + self.assertRaises(KeyError, lambda: self.g['xxx']) + + def test_error_conditions(self): + # Try to open a non-existent database. + unlink(filename) + self.assertRaises(gdbm.error, gdbm.open, filename, 'r') + self.assertRaises(gdbm.error, gdbm.open, filename, 'w') + # Try to access a closed database. 
+ self.g = gdbm.open(filename, 'c') + self.g.close() + self.assertRaises(gdbm.error, lambda: self.g['a']) + # try pass an invalid open flag + self.assertRaises(gdbm.error, lambda: gdbm.open(filename, 'rx').close()) + + def test_flags(self): + # Test the flag parameter open() by trying all supported flag modes. + all = set(gdbm.open_flags) + # Test standard flags (presumably "crwn"). + modes = all - set('fsu') + for mode in modes: + self.g = gdbm.open(filename, mode) + self.g.close() + + # Test additional flags (presumably "fsu"). + flags = all - set('crwn') + for mode in modes: + for flag in flags: + self.g = gdbm.open(filename, mode + flag) + self.g.close() + + def test_reorganize(self): + self.g = gdbm.open(filename, 'c') + size0 = os.path.getsize(filename) + + self.g['x'] = 'x' * 10000 + size1 = os.path.getsize(filename) + self.assert_(size0 < size1) + + del self.g['x'] + # 'size' is supposed to be the same even after deleting an entry. + self.assertEqual(os.path.getsize(filename), size1) + + self.g.reorganize() + size2 = os.path.getsize(filename) + self.assert_(size1 > size2 >= size0) + + +def test_main(): + run_unittest(TestGdbm) + +if __name__ == '__main__': + test_main() Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Thu Mar 13 22:02:16 2008 @@ -559,6 +559,7 @@ Andy Robinson Jim Robinson Kevin Rodgers +Giampaolo Rodola Mike Romberg Case Roole Timothy Roscoe Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 13 22:02:16 2008 @@ -50,6 +50,8 @@ Tests ----- +- Issue 1960: Convert test_gdbm to unittest. + - GHOP 294: Convert test_contains, test_crypt, and test_select to unittest. - GHOP 238: Convert test_tokenize to use doctest. From python-checkins at python.org Thu Mar 13 22:09:29 2008 From: python-checkins at python.org (brett.cannon) Date: Thu, 13 Mar 2008 22:09:29 +0100 (CET) Subject: [Python-checkins] r61375 - in python/trunk: Lib/test/test_fcntl.py Misc/NEWS Message-ID: <20080313210929.2F8D31E4035@bag.python.org> Author: brett.cannon Date: Thu Mar 13 22:09:28 2008 New Revision: 61375 Modified: python/trunk/Lib/test/test_fcntl.py python/trunk/Misc/NEWS Log: Convert test_fcntl to unittest. Closes issue #2055. Thanks Giampaolo Rodola. Modified: python/trunk/Lib/test/test_fcntl.py ============================================================================== --- python/trunk/Lib/test/test_fcntl.py (original) +++ python/trunk/Lib/test/test_fcntl.py Thu Mar 13 22:09:28 2008 @@ -1,69 +1,89 @@ -#! /usr/bin/env python """Test program for the fcntl C module. - OS/2+EMX doesn't support the file locking operations. - Roger E. Masse + +OS/2+EMX doesn't support the file locking operations. + """ import struct import fcntl import os, sys -from test.test_support import verbose, TESTFN +import unittest +from test.test_support import verbose, TESTFN, unlink, run_unittest + +# TODO - Write tests for ioctl(), flock() and lockf(). 
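(As an aside, not part of the committed patch: the core call the new test_fcntl_fileno test keeps from the library docs is marking a descriptor non-blocking with F_SETFL. A minimal standalone sketch follows; the file name fcntl_demo.tmp is arbitrary and chosen only for illustration.)

    import os
    import fcntl

    f = open('fcntl_demo.tmp', 'w')
    # Same call as in the test: set O_NONBLOCK on the file descriptor.
    # (A more careful variant reads the current flags with F_GETFL first
    # and ORs O_NONBLOCK into them instead of overwriting them.)
    rv = fcntl.fcntl(f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
    print 'Status from fcntl with O_NONBLOCK:', rv
    f.close()
    os.unlink('fcntl_demo.tmp')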
-filename = TESTFN -try: - os.O_LARGEFILE -except AttributeError: - start_len = "ll" -else: - start_len = "qq" - -if sys.platform.startswith('atheos'): - start_len = "qq" - -if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', - 'Darwin1.2', 'darwin', - 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', - 'freebsd6', 'freebsd7', 'freebsd8', - 'bsdos2', 'bsdos3', 'bsdos4', - 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4'): - if struct.calcsize('l') == 8: - off_t = 'l' - pid_t = 'i' +def get_lockdata(): + if sys.platform.startswith('atheos'): + start_len = "qq" else: - off_t = 'lxxxx' - pid_t = 'l' - lockdata = struct.pack(off_t+off_t+pid_t+'hh', 0, 0, 0, fcntl.F_WRLCK, 0) -elif sys.platform in ['aix3', 'aix4', 'hp-uxB', 'unixware7']: - lockdata = struct.pack('hhlllii', fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0) -elif sys.platform in ['os2emx']: - lockdata = None -else: - lockdata = struct.pack('hh'+start_len+'hh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) -if lockdata: - if verbose: - print 'struct.pack: ', repr(lockdata) - -# the example from the library docs -f = open(filename, 'w') -rv = fcntl.fcntl(f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK) -if verbose: - print 'Status from fcntl with O_NONBLOCK: ', rv - -if sys.platform not in ['os2emx']: - rv = fcntl.fcntl(f.fileno(), fcntl.F_SETLKW, lockdata) - if verbose: - print 'String from fcntl with F_SETLKW: ', repr(rv) - -f.close() -os.unlink(filename) - - -# Again, but pass the file rather than numeric descriptor: -f = open(filename, 'w') -rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NONBLOCK) + try: + os.O_LARGEFILE + except AttributeError: + start_len = "ll" + else: + start_len = "qq" + + if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', + 'Darwin1.2', 'darwin', + 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', + 'freebsd6', 'freebsd7', 'freebsd8', + 'bsdos2', 'bsdos3', 'bsdos4', + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4'): + if struct.calcsize('l') == 8: + off_t = 'l' + pid_t = 'i' + else: + off_t = 'lxxxx' + pid_t = 'l' + lockdata = struct.pack(off_t + off_t + pid_t + 'hh', 0, 0, 0, + fcntl.F_WRLCK, 0) + elif sys.platform in ['aix3', 'aix4', 'hp-uxB', 'unixware7']: + lockdata = struct.pack('hhlllii', fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0) + elif sys.platform in ['os2emx']: + lockdata = None + else: + lockdata = struct.pack('hh'+start_len+'hh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) + if lockdata: + if verbose: + print 'struct.pack: ', repr(lockdata) + return lockdata + +lockdata = get_lockdata() + + +class TestFcntl(unittest.TestCase): + + def setUp(self): + self.f = None + + def tearDown(self): + if not self.f.closed: + self.f.close() + unlink(TESTFN) + + def test_fcntl_fileno(self): + # the example from the library docs + self.f = open(TESTFN, 'w') + rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK) + if verbose: + print 'Status from fcntl with O_NONBLOCK: ', rv + if sys.platform not in ['os2emx']: + rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETLKW, lockdata) + if verbose: + print 'String from fcntl with F_SETLKW: ', repr(rv) + self.f.close() + + def test_fcntl_file_descriptor(self): + # again, but pass the file rather than numeric descriptor + self.f = open(TESTFN, 'w') + rv = fcntl.fcntl(self.f, fcntl.F_SETFL, os.O_NONBLOCK) + if sys.platform not in ['os2emx']: + rv = fcntl.fcntl(self.f, fcntl.F_SETLKW, lockdata) + self.f.close() + -if sys.platform not in ['os2emx']: - rv = fcntl.fcntl(f, fcntl.F_SETLKW, lockdata) +def test_main(): + run_unittest(TestFcntl) -f.close() -os.unlink(filename) +if __name__ == '__main__': + test_main() Modified: 
python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 13 22:09:28 2008 @@ -50,6 +50,8 @@ Tests ----- +- Issue #2055: Convert test_fcntl to unittest. + - Issue 1960: Convert test_gdbm to unittest. - GHOP 294: Convert test_contains, test_crypt, and test_select to unittest. From buildbot at python.org Thu Mar 13 23:25:15 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 13 Mar 2008 22:25:15 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080313222516.0B2201E4017@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2674 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: brett.cannon BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From python-checkins at python.org Fri Mar 14 06:03:45 2008 From: python-checkins at python.org (raymond.hettinger) Date: Fri, 14 Mar 2008 06:03:45 +0100 (CET) Subject: [Python-checkins] r61376 - python/trunk/Modules/_heapqmodule.c Message-ID: <20080314050345.203B91E4005@bag.python.org> Author: raymond.hettinger Date: Fri Mar 14 06:03:44 2008 New Revision: 61376 Modified: python/trunk/Modules/_heapqmodule.c Log: Leave heapreplace() unchanged. Modified: python/trunk/Modules/_heapqmodule.c ============================================================================== --- python/trunk/Modules/_heapqmodule.c (original) +++ python/trunk/Modules/_heapqmodule.c Fri Mar 14 06:03:44 2008 @@ -162,11 +162,6 @@ { PyObject *heap, *item, *returnitem; - if (Py_Py3kWarningFlag && - PyErr_Warn(PyExc_DeprecationWarning, - "In 3.x, heapreplace() was removed. Use heappushpop() instead.") < 0) - return NULL; - if (!PyArg_UnpackTuple(args, "heapreplace", 2, 2, &heap, &item)) return NULL; From buildbot at python.org Fri Mar 14 06:26:27 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 05:26:27 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080314052627.CCE381E4005@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/80 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Fri Mar 14 06:48:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 05:48:31 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080314054831.DBFC61E4024@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/488 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From nnorwitz at gmail.com Fri Mar 14 10:08:51 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 14 Mar 2008 04:08:51 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (1) Message-ID: <20080314090851.GA31989@python.psfb.org> 313 tests OK. 1 test failed: test_gdbm 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test test_gdbm failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_gdbm.py", line 38, in 
test_error_conditions self.assertRaises(gdbm.error, gdbm.open, filename, 'w') AssertionError: error not raised test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... 
[8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 1 test failed: test_gdbm 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [565936 refs] From nnorwitz at gmail.com Fri Mar 14 10:15:58 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 14 Mar 2008 04:15:58 -0500 Subject: [Python-checkins] Python Regression Test Failures opt (1) Message-ID: <20080314091558.GA1533@python.psfb.org> 313 tests OK. 
1 test failed: test_gdbm 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils [10077 refs] test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test test_gdbm failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_gdbm.py", line 38, in test_error_conditions self.assertRaises(gdbm.error, gdbm.open, filename, 'w') AssertionError: error not raised test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter 
test_iterlen test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 313 tests OK. 1 test failed: test_gdbm 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [565526 refs] From nnorwitz at gmail.com Fri Mar 14 11:36:54 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 14 Mar 2008 05:36:54 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080314103654.GA19921@python.psfb.org> 318 tests OK. 
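The "1 skip unexpected on linux2: test_ioctl" line in these summaries is the usual symptom of a run without a controlling terminal: test_ioctl skips itself when /dev/tty cannot be opened, and regrtest's per-platform expectations do not list that skip for linux2. A quick way to see whether a given environment (cron job, buildslave, detached shell) will hit it:

    import os

    try:
        fd = os.open("/dev/tty", os.O_RDWR)
    except OSError, e:
        print "no controlling terminal (%s); test_ioctl will skip itself" % e
    else:
        os.close(fd)
        print "controlling terminal present; test_ioctl can run"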
1 test failed: test_gdbm 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 Exception in thread reader 0: Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/threading.py", line 490, in __bootstrap_inner self.run() File "/tmp/python-test/local/lib/python2.6/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/tmp/python-test/local/lib/python2.6/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/tmp/python-test/local/lib/python2.6/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... 
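The test_bsddb3 "Exception in thread reader 0" a little further up is Berkeley DB breaking a deadlock by killing one of the lockers; dbutils.DeadlockWrap(c.next, max_retries=10) is bsddb's retry helper for exactly that situation, and the traceback shows the case where the deadlock error still escaped it. The idea behind the helper, sketched from the call visible in the traceback rather than from the real dbutils source, and assuming the bundled bsddb package:

    import time
    from bsddb import db

    def deadlock_wrap(function, max_retries=10, *args, **kwargs):
        # Retry `function` when Berkeley DB kills it to resolve a deadlock.
        for attempt in range(max_retries + 1):
            try:
                return function(*args, **kwargs)
            except db.DBLockDeadlockError:
                if attempt == max_retries:
                    raise                            # give up, as in the log above
                time.sleep(0.01 * (attempt + 1))     # small back-off, then retry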
test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test test_gdbm failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_gdbm.py", line 38, in test_error_conditions self.assertRaises(gdbm.error, gdbm.open, filename, 'w') AssertionError: error not raised test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:74: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. 
ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 318 tests OK. 
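The DeprecationWarning repeated through this run is test_socket_ssl still calling the old socket.ssl() wrapper; the replacement the warning points at is the 2.6 ssl module, so the migration it asks for is essentially:

    import socket
    import ssl

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.org", 443))     # placeholder host, purely illustrative

    # old, deprecated spelling:
    #     ssl_sock = socket.ssl(s)
    # new spelling:
    ssl_sock = ssl.wrap_socket(s)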
1 test failed: test_gdbm 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [577589 refs] From python-checkins at python.org Fri Mar 14 14:56:09 2008 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 14 Mar 2008 14:56:09 +0100 (CET) Subject: [Python-checkins] r61378 - in python/trunk: Misc/NEWS PCbuild/rt.bat Message-ID: <20080314135609.676171E401C@bag.python.org> Author: martin.v.loewis Date: Fri Mar 14 14:56:09 2008 New Revision: 61378 Modified: python/trunk/Misc/NEWS python/trunk/PCbuild/rt.bat Log: Patch #2284: add -x64 option to rt.bat. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 14 14:56:09 2008 @@ -63,6 +63,10 @@ - Patch #2232: os.tmpfile might fail on Windows if the user has no permission to create files in the root directory. +Build +----- + +- Patch #2284: Add -x64 option to rt.bat. What's New in Python 2.6 alpha 1? ================================= Modified: python/trunk/PCbuild/rt.bat ============================================================================== --- python/trunk/PCbuild/rt.bat (original) +++ python/trunk/PCbuild/rt.bat Fri Mar 14 14:56:09 2008 @@ -1,11 +1,13 @@ @echo off rem Run Tests. Run the regression test suite. -rem Usage: rt [-d] [-O] [-q] regrtest_args +rem Usage: rt [-d] [-O] [-q] [-x64] regrtest_args rem -d Run Debug build (python_d.exe). Else release build. rem -O Run python.exe or python_d.exe (see -d) with -O. rem -q "quick" -- normally the tests are run twice, the first time rem after deleting all the .py[co] files reachable from Lib/. rem -q runs the tests just once, and without deleting .py[co] files. +rem -x64 Run the 64-bit build of python (or python_d if -d was specified) +rem from the 'amd64' dir instead of the 32-bit build in this dir. rem All leading instances of these switches are shifted off, and rem whatever remains is passed to regrtest.py. For example, rem rt -O -d -x test_thread @@ -24,16 +26,20 @@ setlocal -set exe=python +set prefix=.\ +set suffix= set qmode= set dashO= -PATH %PATH%;..\..\tcltk\bin +set tcltk= :CheckOpts if "%1"=="-O" (set dashO=-O) & shift & goto CheckOpts if "%1"=="-q" (set qmode=yes) & shift & goto CheckOpts -if "%1"=="-d" (set exe=python_d) & shift & goto CheckOpts +if "%1"=="-d" (set suffix=_d) & shift & goto CheckOpts +if "%1"=="-x64" (set prefix=amd64) & (set tcltk=tcltk64) & shift & goto CheckOpts +PATH %PATH%;..\..\%tcltk%\bin +set exe=%prefix%\python%suffix% set cmd=%exe% %dashO% -E -tt ../lib/test/regrtest.py %1 %2 %3 %4 %5 %6 %7 %8 %9 if defined qmode goto Qmode From python-checkins at python.org Fri Mar 14 14:57:59 2008 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 14 Mar 2008 14:57:59 +0100 (CET) Subject: [Python-checkins] r61379 - python/trunk/Tools/buildbot/test-amd64.bat Message-ID: <20080314135759.618231E4006@bag.python.org> Author: martin.v.loewis Date: Fri Mar 14 14:57:59 2008 New Revision: 61379 Modified: python/trunk/Tools/buildbot/test-amd64.bat Log: Use -x64 flag. 
Modified: python/trunk/Tools/buildbot/test-amd64.bat ============================================================================== --- python/trunk/Tools/buildbot/test-amd64.bat (original) +++ python/trunk/Tools/buildbot/test-amd64.bat Fri Mar 14 14:57:59 2008 @@ -1,3 +1,3 @@ @rem Used by the buildbot "test" step. -cd PC\VS7.1 -call rt.bat -q -uall -rw +cd PC +call rt.bat -q -uall -rw -x64 From python-checkins at python.org Fri Mar 14 14:58:31 2008 From: python-checkins at python.org (mark.dickinson) Date: Fri, 14 Mar 2008 14:58:31 +0100 (CET) Subject: [Python-checkins] r61380 - python/branches/trunk-math/Modules/cmathmodule.c Message-ID: <20080314135831.DAF901E4019@bag.python.org> Author: mark.dickinson Date: Fri Mar 14 14:58:31 2008 New Revision: 61380 Modified: python/branches/trunk-math/Modules/cmathmodule.c Log: General code cleanup and extra commenting in cmathmodule.c. Fix dependence on FLT_RADIX=2; FLT_RADIX=16 should now also work (untested). Modified: python/branches/trunk-math/Modules/cmathmodule.c ============================================================================== --- python/branches/trunk-math/Modules/cmathmodule.c (original) +++ python/branches/trunk-math/Modules/cmathmodule.c Fri Mar 14 14:58:31 2008 @@ -3,11 +3,14 @@ /* much code borrowed from mathmodule.c */ #include "Python.h" - -/* we need DBL_MAX, DBL_MIN, DBL_EPSILON and DBL_MANT_DIG from float.h */ -/* We assume that FLT_RADIX is 2, not 10 or 16. */ +/* we need DBL_MAX, DBL_MIN, DBL_EPSILON, DBL_MANT_DIG and FLT_RADIX from + float.h. We assume that FLT_RADIX is either 2 or 16. */ #include +#if (FLT_RADIX != 2 && FLT_RADIX != 16) +#error "Modules/cmathmodule.c expects FLT_RADIX to be 2 or 16" +#endif + #ifndef M_LN2 #define M_LN2 (0.6931471805599453094) /* natural log of 2 */ #endif @@ -28,10 +31,20 @@ #define CM_LOG_LARGE_DOUBLE (log(CM_LARGE_DOUBLE)) #define CM_SQRT_DBL_MIN (sqrt(DBL_MIN)) -/* CM_SCALE_UP defines the power of 2 to multiply by to turn a subnormal into - a normal; used in sqrt. must be odd */ -#define CM_SCALE_UP 2*(DBL_MANT_DIG/2) + 1 -#define CM_SCALE_DOWN -(DBL_MANT_DIG/2 + 1) +/* + CM_SCALE_UP is an odd integer chosen such that multiplication by + 2**CM_SCALE_UP is sufficient to turn a subnormal into a normal. + CM_SCALE_DOWN is (-(CM_SCALE_UP+1)/2). These scalings are used to compute + square roots accurately when the real and imaginary parts of the argument + are subnormal. +*/ + +#if FLT_RADIX==2 +#define CM_SCALE_UP (2*(DBL_MANT_DIG/2) + 1) +#elif FLT_RADIX==16 +#define CM_SCALE_UP (4*DBL_MANT_DIG+1) +#endif +#define CM_SCALE_DOWN (-(CM_SCALE_UP+1)/2) /* forward declarations */ static Py_complex c_asinh(Py_complex); @@ -96,7 +109,7 @@ #define P34 0.75*Py_MATH_PI #ifdef MS_WINDOWS /* On Windows HUGE_VAL is an extern variable and not a constant. Since the - special value arrays need a constant we have to role our own infinity + special value arrays need a constant we have to roll our own infinity and nan. */ # define INF (DBL_MAX*DBL_MAX) # define N (INF*0.) @@ -106,16 +119,23 @@ #endif /* MS_WINDOWS */ #define U -9.5426319407711027e33 /* unlikely value, used as placeholder */ -/* First, the C functions that do the real work */ +/* First, the C functions that do the real work. 
Each of the c_* + functions computes and returns the C99 Annex G recommended result + and also sets errno as follows: errno = 0 if no floating-point + exception is associated with the result; errno = EDOM if C99 Annex + G recommends raising divide-by-zero or invalid for this result; and + errno = ERANGE where the overflow floating-point signal should be + raised. +*/ static Py_complex acos_special_values[7][7] = { - {{P34,INF}, {P,INF}, {P,INF}, {P,-INF}, {P, -INF}, {P34,-INF}, {N,INF}}, - {{P12,INF}, {U,U}, {U,U}, {U,U}, {U,U}, {P12,-INF}, {N,N}}, - {{P12,INF}, {U,U}, {P12,0.}, {P12,-0.}, {U,U}, {P12,-INF}, {P12,N}}, - {{P12,INF}, {U,U}, {P12,0.}, {P12,-0.}, {U,U}, {P12,-INF}, {P12,N}}, - {{P12,INF}, {U,U}, {U,U}, {U,U}, {U,U}, {P12,-INF}, {N,N}}, - {{P14,INF}, {0.,INF},{0.,INF}, {0.,-INF}, {0.,-INF}, {P14,-INF}, {N,INF}}, - {{N,INF}, {N,N}, {N,N}, {N,N}, {N,N}, {N,-INF}, {N,N}} + {{P34,INF},{P,INF}, {P,INF}, {P,-INF}, {P,-INF}, {P34,-INF},{N,INF}}, + {{P12,INF},{U,U}, {U,U}, {U,U}, {U,U}, {P12,-INF},{N,N}}, + {{P12,INF},{U,U}, {P12,0.},{P12,-0.},{U,U}, {P12,-INF},{P12,N}}, + {{P12,INF},{U,U}, {P12,0.},{P12,-0.},{U,U}, {P12,-INF},{P12,N}}, + {{P12,INF},{U,U}, {U,U}, {U,U}, {U,U}, {P12,-INF},{N,N}}, + {{P14,INF},{0.,INF},{0.,INF},{0.,-INF},{0.,-INF},{P14,-INF},{N,INF}}, + {{N,INF}, {N,N}, {N,N}, {N,N}, {N,N}, {N,-INF}, {N,N}} }; static Py_complex @@ -125,7 +145,7 @@ SPECIAL_VALUE(z, acos_special_values); - if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) { + if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) { /* avoid unnecessary overflow for large arguments */ r.real = atan2(fabs(z.imag), z.real); /* split into cases to make sure that the branch cut has the @@ -158,13 +178,13 @@ static Py_complex acosh_special_values[7][7] = { - {{INF,-P34}, {INF,-P}, {INF,-P}, {INF,P}, {INF,P}, {INF,P34}, {INF,N}}, - {{INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {0.,-P12}, {0.,P12}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {0.,-P12}, {0.,P12}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P14}, {INF,-0.},{INF,-0.}, {INF,0.}, {INF,0.},{INF,P14}, {INF,N}}, - {{INF,N}, {N,N}, {N,N}, {N,N}, {N,N}, {INF,N}, {N,N}} + {{INF,-P34},{INF,-P}, {INF,-P}, {INF,P}, {INF,P}, {INF,P34},{INF,N}}, + {{INF,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {INF,P12},{N,N}}, + {{INF,-P12},{U,U}, {0.,-P12},{0.,P12},{U,U}, {INF,P12},{N,N}}, + {{INF,-P12},{U,U}, {0.,-P12},{0.,P12},{U,U}, {INF,P12},{N,N}}, + {{INF,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {INF,P12},{N,N}}, + {{INF,-P14},{INF,-0.},{INF,-0.},{INF,0.},{INF,0.},{INF,P14},{INF,N}}, + {{INF,N}, {N,N}, {N,N}, {N,N}, {N,N}, {INF,N}, {N,N}} }; static Py_complex @@ -174,7 +194,7 @@ SPECIAL_VALUE(z, acosh_special_values); - if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) { + if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) { /* avoid unnecessary overflow for large arguments */ r.real = log(hypot(z.real/2., z.imag/2.)) + M_LN2*2.; r.imag = atan2(z.imag, z.real); @@ -218,13 +238,13 @@ static Py_complex asinh_special_values[7][7] = { - {{-INF,-P14}, {-INF,-0.},{-INF,-0.}, {-INF,0.}, {-INF,0.},{-INF,P14}, {-INF,N}}, - {{-INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {-INF,P12}, {N,N}}, - {{-INF,-P12}, {U,U}, {-0.,-0.},{-0.,0.},{U,U}, {-INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {0.,-0.}, {0.,0.}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P14}, {INF,-0.}, {INF,-0.}, {INF,0.}, 
{INF,0.}, {INF,P14}, {INF,N}}, - {{INF,N}, {N,N}, {N,-0.}, {N,0.}, {N,N}, {INF,N}, {N,N}} + {{-INF,-P14},{-INF,-0.},{-INF,-0.},{-INF,0.},{-INF,0.},{-INF,P14},{-INF,N}}, + {{-INF,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {-INF,P12},{N,N}}, + {{-INF,-P12},{U,U}, {-0.,-0.}, {-0.,0.}, {U,U}, {-INF,P12},{N,N}}, + {{INF,-P12}, {U,U}, {0.,-0.}, {0.,0.}, {U,U}, {INF,P12}, {N,N}}, + {{INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, + {{INF,-P14}, {INF,-0.}, {INF,-0.}, {INF,0.}, {INF,0.}, {INF,P14}, {INF,N}}, + {{INF,N}, {N,N}, {N,-0.}, {N,0.}, {N,N}, {INF,N}, {N,N}} }; static Py_complex @@ -234,7 +254,7 @@ SPECIAL_VALUE(z, asinh_special_values); - if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) { + if (fabs(z.real) > CM_LARGE_DOUBLE || fabs(z.imag) > CM_LARGE_DOUBLE) { if (z.imag >= 0.) { r.real = copysign(log(hypot(z.real/2., z.imag/2.)) + M_LN2*2., z.real); @@ -304,13 +324,13 @@ static Py_complex atanh_special_values[7][7] = { - {{-0.,-P12},{-0.,-P12}, {-0.,-P12}, {-0.,P12}, {-0.,P12}, {-0.,P12},{-0.,N}}, - {{-0.,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {-0.,P12},{N,N}}, - {{-0.,-P12},{U,U}, {-0.,-0.}, {-0.,0.}, {U,U}, {-0.,P12},{-0.,N}}, - {{0.,-P12}, {U,U}, {0.,-0.}, {0.,0.}, {U,U}, {0.,P12}, {0.,N}}, - {{0.,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {0.,P12}, {N,N}}, - {{0.,-P12}, {0.,-P12}, {0.,-P12}, {0.,P12}, {0.,P12}, {0.,P12}, {0.,N}}, - {{0.,-P12}, {N,N}, {N,N}, {N,N}, {N,N}, {0.,P12}, {N,N}} + {{-0.,-P12},{-0.,-P12},{-0.,-P12},{-0.,P12},{-0.,P12},{-0.,P12},{-0.,N}}, + {{-0.,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {-0.,P12},{N,N}}, + {{-0.,-P12},{U,U}, {-0.,-0.}, {-0.,0.}, {U,U}, {-0.,P12},{-0.,N}}, + {{0.,-P12}, {U,U}, {0.,-0.}, {0.,0.}, {U,U}, {0.,P12}, {0.,N}}, + {{0.,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {0.,P12}, {N,N}}, + {{0.,-P12}, {0.,-P12}, {0.,-P12}, {0.,P12}, {0.,P12}, {0.,P12}, {0.,N}}, + {{0.,-P12}, {N,N}, {N,N}, {N,N}, {N,N}, {0.,P12}, {N,N}} }; static Py_complex @@ -385,13 +405,13 @@ /* cosh(infinity + i*y) needs to be dealt with specially */ static Py_complex cosh_special_values[7][7] = { - {{INF,N}, {U,U},{INF,0.}, {INF,-0.}, {U,U},{INF,N}, {INF,N}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{N,0.},{U,U},{1.,0.}, {1.,-0.},{U,U},{N,0.},{N,0.}}, - {{N,0.},{U,U},{1.,-0.},{1.,0.}, {U,U},{N,0.},{N,0.}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{INF,N}, {U,U},{INF,-0.}, {INF,0.}, {U,U},{INF,N}, {INF,N}}, - {{N,N}, {N,N},{N,0.}, {N,0.}, {N,N},{N,N}, {N,N}} + {{INF,N},{U,U},{INF,0.}, {INF,-0.},{U,U},{INF,N},{INF,N}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{N,0.}, {U,U},{1.,0.}, {1.,-0.}, {U,U},{N,0.}, {N,0.}}, + {{N,0.}, {U,U},{1.,-0.}, {1.,0.}, {U,U},{N,0.}, {N,0.}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{INF,N} {U,U},{INF,-0.},{INF,0.}, {U,U},{INF,N},{INF,N}}, + {{N,N}, {N,N},{N,0.}, {N,0.}, {N,N},{N,N}, {N,N}} }; static Py_complex @@ -453,13 +473,13 @@ /* exp(infinity + i*y) and exp(-infinity + i*y) need special treatment for finite y */ static Py_complex exp_special_values[7][7] = { - {{0.,0.},{U,U},{0.,-0.},{0.,0.},{U,U},{0.,0.},{0.,0.}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{N,N}, {U,U},{1.,-0.},{1.,0.},{U,U},{N,N}, {N,N}}, - {{N,N}, {U,U},{1.,-0.},{1.,0.},{U,U},{N,N}, {N,N}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{INF,N}, {U,U},{INF,-0.}, {INF,0.}, {U,U},{INF,N}, {INF,N}}, - {{N,N}, {N,N},{N,-0.}, {N,0.}, {N,N},{N,N}, {N,N}} + {{0.,0.},{U,U},{0.,-0.}, {0.,0.}, {U,U},{0.,0.},{0.,0.}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{N,N}, {U,U},{1.,-0.}, {1.,0.}, {U,U},{N,N}, 
{N,N}}, + {{N,N}, {U,U},{1.,-0.}, {1.,0.}, {U,U},{N,N}, {N,N}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{INF,N},{U,U},{INF,-0.},{INF,0.},{U,U},{INF,N},{INF,N}}, + {{N,N}, {N,N},{N,-0.}, {N,0.}, {N,N},{N,N}, {N,N}} }; static Py_complex @@ -519,13 +539,13 @@ static Py_complex log_special_values[7][7] = { - {{INF,-P34}, {INF,-P}, {INF,-P}, {INF,P}, {INF,P}, {INF,P34}, {INF,N}}, - {{INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {-INF,-P}, {-INF,P}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {-INF,-0.},{-INF,0.},{U,U}, {INF,P12}, {N,N}}, - {{INF,-P12}, {U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, - {{INF,-P14}, {INF,-0.},{INF,-0.}, {INF,0.}, {INF,0.},{INF,P14}, {INF,N}}, - {{INF,N}, {N,N}, {N,N}, {N,N}, {N,N}, {INF,N}, {N,N}} + {{INF,-P34},{INF,-P}, {INF,-P}, {INF,P}, {INF,P}, {INF,P34}, {INF,N}}, + {{INF,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, + {{INF,-P12},{U,U}, {-INF,-P}, {-INF,P}, {U,U}, {INF,P12}, {N,N}}, + {{INF,-P12},{U,U}, {-INF,-0.},{-INF,0.},{U,U}, {INF,P12}, {N,N}}, + {{INF,-P12},{U,U}, {U,U}, {U,U}, {U,U}, {INF,P12}, {N,N}}, + {{INF,-P14},{INF,-0.},{INF,-0.}, {INF,0.}, {INF,0.},{INF,P14}, {INF,N}}, + {{INF,N}, {N,N}, {N,N}, {N,N}, {N,N}, {INF,N}, {N,N}} }; static Py_complex @@ -639,13 +659,13 @@ /* sinh(infinity + i*y) needs to be dealt with specially */ static Py_complex sinh_special_values[7][7] = { - {{INF,N}, {U,U},{-INF,-0.}, {-INF,0.}, {U,U},{INF,N}, {INF,N}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{0.,N},{U,U},{-0.,-0.},{-0.,0.},{U,U},{0.,N},{0.,N}}, - {{0.,N},{U,U},{0.,-0.}, {0.,0.}, {U,U},{0.,N},{0.,N}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{INF,N}, {U,U},{INF,-0.}, {INF,0.}, {U,U},{INF,N}, {INF,N}}, - {{N,N}, {N,N},{N,-0.}, {N,0.}, {N,N},{N,N}, {N,N}} + {{INF,N},{U,U},{-INF,-0.},{-INF,0.},{U,U},{INF,N},{INF,N}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{0.,N}, {U,U},{-0.,-0.}, {-0.,0.}, {U,U},{0.,N}, {0.,N}}, + {{0.,N}, {U,U},{0.,-0.}, {0.,0.}, {U,U},{0.,N}, {0.,N}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{INF,N},{U,U},{INF,-0.}, {INF,0.}, {U,U},{INF,N},{INF,N}}, + {{N,N}, {N,N},{N,-0.}, {N,0.}, {N,N},{N,N}, {N,N}} }; static Py_complex @@ -695,7 +715,6 @@ else errno = 0; return r; - } PyDoc_STRVAR(c_sinh_doc, @@ -705,13 +724,13 @@ static Py_complex sqrt_special_values[7][7] = { - {{INF,-INF},{0.,-INF},{0.,-INF}, {0.,INF}, {0.,INF},{INF,INF},{N,INF}}, - {{INF,-INF},{U,U}, {U,U}, {U,U}, {U,U}, {INF,INF},{N,N}}, - {{INF,-INF},{U,U}, {0.,-0.},{0.,0.},{U,U}, {INF,INF},{N,N}}, - {{INF,-INF},{U,U}, {0.,-0.},{0.,0.},{U,U}, {INF,INF},{N,N}}, - {{INF,-INF},{U,U}, {U,U}, {U,U}, {U,U}, {INF,INF},{N,N}}, - {{INF,-INF},{INF,-0.},{INF,-0.}, {INF,0.}, {INF,0.},{INF,INF},{INF,N}}, - {{INF,-INF},{N,N}, {N,N}, {N,N}, {N,N}, {INF,INF},{N,N}} + {{INF,-INF},{0.,-INF},{0.,-INF},{0.,INF},{0.,INF},{INF,INF},{N,INF}}, + {{INF,-INF},{U,U}, {U,U}, {U,U}, {U,U}, {INF,INF},{N,N}}, + {{INF,-INF},{U,U}, {0.,-0.}, {0.,0.}, {U,U}, {INF,INF},{N,N}}, + {{INF,-INF},{U,U}, {0.,-0.}, {0.,0.}, {U,U}, {INF,INF},{N,N}}, + {{INF,-INF},{U,U}, {U,U}, {U,U}, {U,U}, {INF,INF},{N,N}}, + {{INF,-INF},{INF,-0.},{INF,-0.},{INF,0.},{INF,0.},{INF,INF},{INF,N}}, + {{INF,-INF},{N,N}, {N,N}, {N,N}, {N,N}, {INF,INF},{N,N}} }; static Py_complex @@ -768,7 +787,7 @@ ax /= 8.; s = 2.*sqrt(ax + hypot(ax, ay/8.)); } - d = ay/(2.*s); + d = ay/(2.*s); if (z.real >= 0.) 
{ r.real = s; @@ -808,13 +827,13 @@ /* tanh(infinity + i*y) needs to be dealt with specially */ static Py_complex tanh_special_values[7][7] = { - {{-1.,0.},{U,U},{-1.,-0.},{-1.,0.},{U,U},{-1.,0.},{-1.,0.}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{N,N}, {U,U},{-0.,-0.},{-0.,0.},{U,U},{N,N}, {N,N}}, - {{N,N}, {U,U},{0.,-0.}, {0.,0.}, {U,U},{N,N}, {N,N}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{1.,0.}, {U,U},{1.,-0.}, {1.,0.}, {U,U},{1.,0.}, {1.,0.}}, - {{N,N}, {N,N},{N,-0.}, {N,0.}, {N,N},{N,N}, {N,N}} + {{-1.,0.},{U,U},{-1.,-0.},{-1.,0.},{U,U},{-1.,0.},{-1.,0.}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{N,N}, {U,U},{-0.,-0.},{-0.,0.},{U,U},{N,N}, {N,N}}, + {{N,N}, {U,U},{0.,-0.}, {0.,0.}, {U,U},{N,N}, {N,N}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{1.,0.}, {U,U},{1.,-0.}, {1.,0.}, {U,U},{1.,0.}, {1.,0.}}, + {{N,N}, {N,N},{N,-0.}, {N,0.}, {N,N},{N,N}, {N,N}} }; static Py_complex @@ -867,7 +886,7 @@ /* danger of overflow in 2.*z.imag !*/ if (fabs(z.real) > CM_LOG_LARGE_DOUBLE) { r.real = copysign(1., z.real); - r.imag = 4.*sin(z.imag)*cos(z.imag)*exp(-2.*fabs(z.real)); + r.imag = 4.*sin(z.imag)*cos(z.imag)*exp(-2.*fabs(z.real)); } else { tx = tanh(z.real); ty = tan(z.imag); @@ -1025,13 +1044,13 @@ */ static Py_complex rect_special_values[7][7] = { - {{INF,N},{U,U},{-INF,0.},{-INF,-0.},{U,U},{INF,N},{INF,N}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{0.,0.},{U,U},{-0.,0.}, {-0.,-0.}, {U,U},{0.,0.},{0.,0.}}, - {{0.,0.},{U,U},{0.,-0.}, {0.,0.}, {U,U},{0.,0.},{0.,0.}}, - {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{INF,N},{U,U},{INF,-0.},{INF,0.}, {U,U},{INF,N},{INF,N}}, - {{N,N}, {N,N},{N,0.}, {N,0.}, {N,N},{N,N}, {N,N}} + {{INF,N},{U,U},{-INF,0.},{-INF,-0.},{U,U},{INF,N},{INF,N}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{0.,0.},{U,U},{-0.,0.}, {-0.,-0.}, {U,U},{0.,0.},{0.,0.}}, + {{0.,0.},{U,U},{0.,-0.}, {0.,0.}, {U,U},{0.,0.},{0.,0.}}, + {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, + {{INF,N},{U,U},{INF,-0.},{INF,0.}, {U,U},{INF,N},{INF,N}}, + {{N,N}, {N,N},{N,0.}, {N,0.}, {N,N},{N,N}, {N,N}} }; static PyObject * From python-checkins at python.org Fri Mar 14 15:00:29 2008 From: python-checkins at python.org (mark.dickinson) Date: Fri, 14 Mar 2008 15:00:29 +0100 (CET) Subject: [Python-checkins] r61381 - python/branches/trunk-math/Modules/cmathmodule.c Message-ID: <20080314140029.48E8A1E4006@bag.python.org> Author: mark.dickinson Date: Fri Mar 14 15:00:29 2008 New Revision: 61381 Modified: python/branches/trunk-math/Modules/cmathmodule.c Log: Fix accidentally deleted comma Modified: python/branches/trunk-math/Modules/cmathmodule.c ============================================================================== --- python/branches/trunk-math/Modules/cmathmodule.c (original) +++ python/branches/trunk-math/Modules/cmathmodule.c Fri Mar 14 15:00:29 2008 @@ -410,7 +410,7 @@ {{N,0.}, {U,U},{1.,0.}, {1.,-0.}, {U,U},{N,0.}, {N,0.}}, {{N,0.}, {U,U},{1.,-0.}, {1.,0.}, {U,U},{N,0.}, {N,0.}}, {{N,N}, {U,U},{U,U}, {U,U}, {U,U},{N,N}, {N,N}}, - {{INF,N} {U,U},{INF,-0.},{INF,0.}, {U,U},{INF,N},{INF,N}}, + {{INF,N},{U,U},{INF,-0.},{INF,0.}, {U,U},{INF,N},{INF,N}}, {{N,N}, {N,N},{N,0.}, {N,0.}, {N,N},{N,N}, {N,N}} }; From python-checkins at python.org Fri Mar 14 15:03:10 2008 From: python-checkins at python.org (brett.cannon) Date: Fri, 14 Mar 2008 15:03:10 +0100 (CET) Subject: [Python-checkins] r61382 - python/trunk/Lib/test/test_gdbm.py Message-ID: <20080314140310.B45A21E4006@bag.python.org> Author: brett.cannon 
Date: Fri Mar 14 15:03:10 2008 New Revision: 61382 Modified: python/trunk/Lib/test/test_gdbm.py Log: Remove a bad test. Modified: python/trunk/Lib/test/test_gdbm.py ============================================================================== --- python/trunk/Lib/test/test_gdbm.py (original) +++ python/trunk/Lib/test/test_gdbm.py Fri Mar 14 15:03:10 2008 @@ -35,7 +35,6 @@ # Try to open a non-existent database. unlink(filename) self.assertRaises(gdbm.error, gdbm.open, filename, 'r') - self.assertRaises(gdbm.error, gdbm.open, filename, 'w') # Try to access a closed database. self.g = gdbm.open(filename, 'c') self.g.close() From python-checkins at python.org Fri Mar 14 15:23:38 2008 From: python-checkins at python.org (mark.dickinson) Date: Fri, 14 Mar 2008 15:23:38 +0100 (CET) Subject: [Python-checkins] r61383 - in python/trunk: Lib/test/test_struct.py Misc/NEWS Objects/floatobject.c Message-ID: <20080314142338.A19361E4006@bag.python.org> Author: mark.dickinson Date: Fri Mar 14 15:23:37 2008 New Revision: 61383 Modified: python/trunk/Lib/test/test_struct.py python/trunk/Misc/NEWS python/trunk/Objects/floatobject.c Log: Issue 705836: Fix struct.pack(">f", 1e40) to behave consistently across platforms: it should now raise OverflowError on all platforms. (Previously it raised OverflowError only on non IEEE 754 platforms.) Also fix the (already existing) test for this behaviour so that it actually raises TestFailed instead of just referencing it. Modified: python/trunk/Lib/test/test_struct.py ============================================================================== --- python/trunk/Lib/test/test_struct.py (original) +++ python/trunk/Lib/test/test_struct.py Fri Mar 14 15:23:37 2008 @@ -482,7 +482,7 @@ except OverflowError: pass else: - TestFailed("expected OverflowError") + raise TestFailed("expected OverflowError") test_705836() Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 14 15:23:37 2008 @@ -21,6 +21,10 @@ Library ------- +- Issue #705836: struct.pack(">f", x) now raises OverflowError on all + platforms when x is too large to fit into an IEEE 754 float; previously + it only raised OverflowError on non IEEE 754 platforms. + - Issue #1106316: pdb.post_mortem()'s parameter, "traceback", is now optional: it defaults to the traceback of the exception that is currently being handled (is mandatory to be in the middle of an exception, otherwise Modified: python/trunk/Objects/floatobject.c ============================================================================== --- python/trunk/Objects/floatobject.c (original) +++ python/trunk/Objects/floatobject.c Fri Mar 14 15:23:37 2008 @@ -1751,9 +1751,6 @@ /*---------------------------------------------------------------------------- * _PyFloat_{Pack,Unpack}{4,8}. See floatobject.h. - * - * TODO: On platforms that use the standard IEEE-754 single and double - * formats natively, these routines could simply copy the bytes. 
*/ int _PyFloat_Pack4(double x, unsigned char *p, int le) @@ -1833,28 +1830,31 @@ /* Done */ return 0; - Overflow: - PyErr_SetString(PyExc_OverflowError, - "float too large to pack with f format"); - return -1; } else { float y = (float)x; const char *s = (char*)&y; int i, incr = 1; + if (Py_IS_INFINITY(y) && !Py_IS_INFINITY(x)) + goto Overflow; + if ((float_format == ieee_little_endian_format && !le) || (float_format == ieee_big_endian_format && le)) { p += 3; incr = -1; } - + for (i = 0; i < 4; i++) { *p = *s++; p += incr; } return 0; } + Overflow: + PyErr_SetString(PyExc_OverflowError, + "float too large to pack with f format"); + return -1; } int From buildbot at python.org Fri Mar 14 15:26:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 14:26:19 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080314142620.2C1F11E4028@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/699 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Fri Mar 14 15:54:23 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 14:54:23 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080314145423.E1DE81E402C@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1044 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: brett.cannon,mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout sincerely, -The Buildbot From buildbot at python.org Fri Mar 14 16:32:40 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 15:32:40 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080314153240.B10FE1E4006@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
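To make r61383 above concrete: packing a double that cannot fit in an IEEE 754 single used to overflow to infinity silently on IEEE platforms and raise only on the others; with the extra Py_IS_INFINITY check in _PyFloat_Pack4 the OverflowError is raised everywhere. Illustrative values only:

    import struct

    struct.pack(">f", 1e20)      # fits in a single (with rounding), packs fine
    try:
        struct.pack(">f", 1e40)  # far beyond the single-precision range
    except OverflowError:
        print "OverflowError: float too large to pack with f format"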
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2676 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From python-checkins at python.org Fri Mar 14 16:40:31 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 14 Mar 2008 16:40:31 +0100 (CET) Subject: [Python-checkins] r61384 - doctools/trunk/sphinx/builder.py Message-ID: <20080314154031.E8D921E4006@bag.python.org> Author: georg.brandl Date: Fri Mar 14 16:40:31 2008 New Revision: 61384 Modified: doctools/trunk/sphinx/builder.py Log: Fix typo. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Fri Mar 14 16:40:31 2008 @@ -977,7 +977,7 @@ f = urlopen(uri) f.close() except HTTPError, err: - if err.code == 403 and uri.startwith('http://en.wikipedia.org/'): + if err.code == 403 and uri.startswith('http://en.wikipedia.org/'): # Wikipedia blocks requests from urllib User-Agent return 0 return (2, str(err)) From buildbot at python.org Fri Mar 14 16:40:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 15:40:53 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080314154054.12E8A1E4006@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. 
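Stepping back to the cmathmodule.c cleanup in r61380/r61381 above: each 7x7 *_special_values table is consulted only when the real or imaginary part is non-finite, and it is indexed by classifying each part into one of seven kinds, in the order the rows and columns are written, which reads as (-inf, negative, -0, +0, positive, +inf, nan). A rough Python sketch of that classification (my reading of the table layout, not the C code itself; the names are mine):

    import math

    NINF, NEG, NZERO, PZERO, POS, PINF, NAN = range(7)

    def special_kind(x):
        if math.isnan(x):
            return NAN
        if math.isinf(x):
            return PINF if x > 0 else NINF
        if x == 0.0:
            # tell +0.0 and -0.0 apart by their sign bit
            return PZERO if math.copysign(1.0, x) > 0 else NZERO
        return POS if x > 0 else NEG

    # conceptually: result = acos_special_values[special_kind(z.real)][special_kind(z.imag)]

The comment added in the same commit also pins down the errno convention for these helpers: 0 when no floating-point exception applies, EDOM where C99 Annex G recommends the invalid or divide-by-zero signal, and ERANGE where overflow should be signalled.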
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/720 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_smtplib sincerely, -The Buildbot From python-checkins at python.org Fri Mar 14 16:41:33 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 14 Mar 2008 16:41:33 +0100 (CET) Subject: [Python-checkins] r61385 - doctools/trunk/sphinx/builder.py Message-ID: <20080314154133.66C3B1E4006@bag.python.org> Author: georg.brandl Date: Fri Mar 14 16:41:33 2008 New Revision: 61385 Modified: doctools/trunk/sphinx/builder.py Log: Another typo. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Fri Mar 14 16:41:33 2008 @@ -979,7 +979,7 @@ except HTTPError, err: if err.code == 403 and uri.startswith('http://en.wikipedia.org/'): # Wikipedia blocks requests from urllib User-Agent - return 0 + return (0, 0) return (2, str(err)) except Exception, err: return (2, str(err)) From tnelson at onresolve.com Fri Mar 14 22:00:06 2008 From: tnelson at onresolve.com (Trent Nelson) Date: Fri, 14 Mar 2008 14:00:06 -0700 Subject: [Python-checkins] r61379 - python/trunk/Tools/buildbot/test-amd64.bat In-Reply-To: <20080314135759.618231E4006@bag.python.org> References: <20080314135759.618231E4006@bag.python.org> Message-ID: <87D3F9C72FBF214DB39FA4E3FE618CDC6E168ACFA5@EXMBX04.exchhosting.com> > Modified: python/trunk/Tools/buildbot/test-amd64.bat > ======================================================================= > ======= > --- python/trunk/Tools/buildbot/test-amd64.bat (original) > +++ python/trunk/Tools/buildbot/test-amd64.bat Fri Mar 14 14:57:59 > 2008 > @@ -1,3 +1,3 @@ > @rem Used by the buildbot "test" step. > -cd PC\VS7.1 > -call rt.bat -q -uall -rw > +cd PC > +call rt.bat -q -uall -rw -x64 You'll want 'rt.bat -q -d -x64 -uall -rw' there ;-) Trent. 
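Tracing Trent's suggested invocation through the rt.bat from r61378 earlier in this batch: "rt.bat -q -d -x64 -uall -rw" sets prefix=amd64 and suffix=_d, adds ..\..\tcltk64\bin to PATH, and because of -q runs the suite a single time as, roughly,

    amd64\python_d -E -tt ../lib/test/regrtest.py -uall -rw

which is what r61388 below switches the buildbot's test-amd64.bat to call, after first changing into PCbuild.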
From python-checkins at python.org Fri Mar 14 22:06:21 2008 From: python-checkins at python.org (thomas.heller) Date: Fri, 14 Mar 2008 22:06:21 +0100 (CET) Subject: [Python-checkins] r61387 - python/trunk/Modules/_ctypes/_ctypes.c Message-ID: <20080314210621.713AD1E4002@bag.python.org> Author: thomas.heller Date: Fri Mar 14 22:06:21 2008 New Revision: 61387 Modified: python/trunk/Modules/_ctypes/_ctypes.c Log: Remove unneeded initializer. Modified: python/trunk/Modules/_ctypes/_ctypes.c ============================================================================== --- python/trunk/Modules/_ctypes/_ctypes.c (original) +++ python/trunk/Modules/_ctypes/_ctypes.c Fri Mar 14 22:06:21 2008 @@ -208,7 +208,7 @@ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ + 0, /* tp_new */ 0, /* tp_free */ }; From python-checkins at python.org Fri Mar 14 22:19:28 2008 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 14 Mar 2008 22:19:28 +0100 (CET) Subject: [Python-checkins] r61388 - python/trunk/Tools/buildbot/test-amd64.bat Message-ID: <20080314211928.9D4261E4002@bag.python.org> Author: martin.v.loewis Date: Fri Mar 14 22:19:28 2008 New Revision: 61388 Modified: python/trunk/Tools/buildbot/test-amd64.bat Log: Run debug version, cd to PCbuild. Modified: python/trunk/Tools/buildbot/test-amd64.bat ============================================================================== --- python/trunk/Tools/buildbot/test-amd64.bat (original) +++ python/trunk/Tools/buildbot/test-amd64.bat Fri Mar 14 22:19:28 2008 @@ -1,3 +1,3 @@ @rem Used by the buildbot "test" step. -cd PC -call rt.bat -q -uall -rw -x64 +cd PCbuild +call rt.bat -q -d -x64 -uall -rw From martin at v.loewis.de Fri Mar 14 22:19:46 2008 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMwpp3aXMi?=) Date: Fri, 14 Mar 2008 16:19:46 -0500 Subject: [Python-checkins] r61379 - python/trunk/Tools/buildbot/test-amd64.bat In-Reply-To: <87D3F9C72FBF214DB39FA4E3FE618CDC6E168ACFA5@EXMBX04.exchhosting.com> References: <20080314135759.618231E4006@bag.python.org> <87D3F9C72FBF214DB39FA4E3FE618CDC6E168ACFA5@EXMBX04.exchhosting.com> Message-ID: <47DAEBF2.20400@v.loewis.de> > You'll want 'rt.bat -q -d -x64 -uall -rw' there ;-) Thanks, fixed. It should also cd to PCbuild, right? Martin From buildbot at python.org Fri Mar 14 22:44:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 21:44:31 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080314214431.6C6711E4002@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/174 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Fri Mar 14 22:47:04 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 21:47:04 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080314214704.7963F1E4002@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/356 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Fri Mar 14 23:00:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 22:00:28 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080314220028.C4D1E1E4016@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3010 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: 
[Errno 35] Resource temporarily unavailable 1 test failed: test_xmlrpc Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable ====================================================================== FAIL: test_simple1 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 361, in test_simple1 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_basic (test.test_xmlrpc.FailingServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 517, in test_basic self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 14 23:18:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 22:18:03 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080314221803.C50511E4002@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2678 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From python-checkins at python.org Fri Mar 14 23:35:04 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 14 Mar 2008 23:35:04 +0100 (CET) Subject: [Python-checkins] r61389 - doctools/trunk/sphinx/htmlwriter.py Message-ID: <20080314223504.E73671E400A@bag.python.org> Author: georg.brandl Date: Fri Mar 14 23:35:04 2008 New Revision: 61389 Modified: doctools/trunk/sphinx/htmlwriter.py Log: Correctly handle doctest blocks in HTML writer. Modified: doctools/trunk/sphinx/htmlwriter.py ============================================================================== --- doctools/trunk/sphinx/htmlwriter.py (original) +++ doctools/trunk/sphinx/htmlwriter.py Fri Mar 14 23:35:04 2008 @@ -187,6 +187,9 @@ self.body.append(self.highlighter.highlight_block(node.rawsource, lang, linenos)) raise nodes.SkipNode + def visit_doctest_block(self, node): + self.visit_literal_block(node) + # overwritten def visit_literal(self, node): if len(node.children) == 1 and \ From python-checkins at python.org Fri Mar 14 23:39:37 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 14 Mar 2008 23:39:37 +0100 (CET) Subject: [Python-checkins] r61390 - doctools/trunk/sphinx/directives.py Message-ID: <20080314223937.13F5A1E4002@bag.python.org> Author: georg.brandl Date: Fri Mar 14 23:39:36 2008 New Revision: 61390 Modified: doctools/trunk/sphinx/directives.py Log: Fix behavior for .. method directives inside a .. class. Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Fri Mar 14 23:39:36 2008 @@ -133,10 +133,12 @@ if m is None: raise ValueError classname, name, arglist = m.groups() + add_module = True if env.currclass: if classname and classname.startswith(env.currclass): fullname = classname + name classname = classname[len(env.currclass):].lstrip('.') + add_module = False elif classname: fullname = env.currclass + '.' + classname + name else: @@ -148,7 +150,7 @@ signode += addnodes.desc_classname(classname, classname) # exceptions are a special case, since they are documented in the # 'exceptions' module. - elif env.config.add_module_names and \ + elif add_module and env.config.add_module_names and \ env.currmodule and env.currmodule != 'exceptions': nodetext = env.currmodule + '.' signode += addnodes.desc_classname(nodetext, nodetext) From buildbot at python.org Fri Mar 14 23:39:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 14 Mar 2008 22:39:55 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080314223955.F27941E4002@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/2951 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ctypes sincerely, -The Buildbot From python-checkins at python.org Sat Mar 15 00:08:23 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 00:08:23 +0100 (CET) Subject: [Python-checkins] r61391 - in doctools/trunk/sphinx: ext/autodoc.py ext/coverage.py util/__init__.py Message-ID: <20080314230823.1F6641E4002@bag.python.org> Author: georg.brandl Date: Sat Mar 15 00:08:22 2008 New Revision: 61391 Added: doctools/trunk/sphinx/ext/autodoc.py Modified: doctools/trunk/sphinx/ext/coverage.py doctools/trunk/sphinx/util/__init__.py Log: Add first version of sphinx.ext.autodoc that generates documentation from docstrings. Added: doctools/trunk/sphinx/ext/autodoc.py ============================================================================== --- (empty file) +++ doctools/trunk/sphinx/ext/autodoc.py Sat Mar 15 00:08:22 2008 @@ -0,0 +1,219 @@ +# -*- coding: utf-8 -*- +""" + sphinx.ext.autodoc + ~~~~~~~~~~~~~~~~~~ + + Automatically insert docstrings for functions, classes or whole modules into + the doctree, thus avoiding duplication between docstrings and documentation + for those who like elaborate docstrings. + + :copyright: 2008 by Georg Brandl. + :license: BSD. +""" + +import types +import inspect +import textwrap + +from docutils import nodes +from docutils.parsers.rst import directives +from docutils.statemachine import ViewList + +from sphinx import addnodes +from sphinx.util import rpartition + +try: + base_exception = BaseException +except NameError: + base_exception = Exception + + +def prepare_docstring(s): + """Convert a docstring into lines of parseable reST.""" + if not s or s.isspace(): + return [''] + nl = s.rstrip().find('\n') + if nl == -1: + # Only one line... + return [s.strip(), ''] + # The first line may be indented differently... 
+ firstline = s[:nl].strip() + otherlines = textwrap.dedent(s[nl+1:]) + return [firstline] + otherlines.splitlines() + [''] + + +def generate_rst(what, name, members, undoc, add_content, + document, lineno, indent=''): + env = document.settings.env + + # find out what to import + if what == 'module': + mod = obj = name + objpath = [] + elif what in ('class', 'exception', 'function'): + mod, obj = rpartition(name, '.') + if not mod: + mod = env.autodoc_current_module + if not mod: + mod = env.currmodule + objpath = [obj] + else: + mod_cls, obj = rpartition(name, '.') + if not mod_cls: + mod_cls = env.autodoc_current_class + if not mod_cls: + mod_cls = env.currclass + mod, cls = rpartition(mod_cls, '.') + if not mod: + mod = env.autodoc_current_module + if not mod: + mod = env.currmodule + objpath = [cls, obj] + + result = ViewList() + + try: + todoc = module = __import__(mod, None, None, ['foo']) + for part in objpath: + todoc = getattr(todoc, part) + if hasattr(todoc, '__module__'): + if todoc.__module__ != mod: + return [], result + docstring = todoc.__doc__ + except (ImportError, AttributeError): + warning = document.reporter.warning( + 'autodoc can\'t import/find %s %r, check your spelling ' + 'and sys.path' % (what, str(name)), line=lineno) + return [warning], result + + # add directive header + try: + if what == 'class': + args = inspect.formatargspec(*inspect.getargspec(todoc.__init__)) + elif what in ('function', 'method'): + args = inspect.formatargspec(*inspect.getargspec(todoc)) + if what == 'method': + if args[1:7] == 'self, ': + args = '(' + args[7:] + elif args == '(self)': + args = '()' + else: + args = '' + except: + args = '' + if len(objpath) == 2: + qualname = '%s.%s' % (cls, obj) + else: + qualname = obj + result.append(indent + '.. %s:: %s%s' % (what, qualname, args), '') + result.append('', '') + + # the module directive doesn't like content + if what != 'module': + indent += ' ' + + # add docstring content + if what == 'module' and env.config.automodule_skip_lines: + docstring = '\n'.join(docstring.splitlines() + [env.config.automodule_skip_lines:]) + docstring = prepare_docstring(docstring) + for i, line in enumerate(docstring): + result.append(indent + line, '' % name, i) + + # add source content, if present + if add_content: + for line, src in zip(add_content.data, add_content.items): + result.append(indent + line, src[0], src[1]) + + if not members or what in ('function', 'method', 'attribute'): + return [], result + + env.autodoc_current_module = mod + if objpath: + env.autodoc_current_class = objpath[0] + + warnings = [] + # add members, if possible + _all = members == ['__all__'] + if _all: + all_members = sorted(inspect.getmembers(todoc)) + else: + all_members = [(mname, getattr(todoc, mname)) for mname in members] + for (membername, member) in all_members: + if _all and membername.startswith('_'): + continue + doc = getattr(member, '__doc__', None) + if not undoc and not doc: + continue + if what == 'module': + if isinstance(member, types.FunctionType): + memberwhat = 'function' + elif isinstance(member, types.ClassType) or \ + isinstance(member, type): + if issubclass(member, base_exception): + memberwhat = 'exception' + else: + memberwhat = 'class' + else: + # XXX: todo -- attribute docs + continue + else: + if callable(member): + memberwhat = 'method' + elif isinstance(member, property): + memberwhat = 'attribute' + else: + # XXX: todo -- attribute docs + continue + full_membername = name + '.' 
+ membername + subwarn, subres = generate_rst(memberwhat, full_membername, ['__all__'], + undoc, None, document, lineno, indent) + warnings.extend(subwarn) + result.extend(subres) + + env.autodoc_current_module = None + env.autodoc_current_class = None + + return warnings, result + + + +def _auto_directive(dirname, arguments, options, content, lineno, + content_offset, block_text, state, state_machine): + what = dirname[4:] + name = arguments[0] + members = options.get('members', []) + undoc = 'undoc-members' in options + + warnings, result = generate_rst(what, name, members, undoc, content, + state.document, lineno) + + node = nodes.paragraph() + state.nested_parse(result, content_offset, node) + return warnings + [node] + +def auto_directive(*args, **kwds): + return _auto_directive(*args, **kwds) + +def auto_directive_withmembers(*args, **kwds): + return _auto_directive(*args, **kwds) + + +def members_directive(arg): + if arg is None: + return ['__all__'] + return [x.strip() for x in arg.split(',')] + + +def setup(app): + options = {'members': members_directive, 'undoc-members': directives.flag} + app.add_directive('automodule', auto_directive_withmembers, + 1, (1, 0, 1), **options) + app.add_directive('autoclass', auto_directive_withmembers, + 1, (1, 0, 1), **options) + app.add_directive('autoexception', auto_directive_withmembers, + 1, (1, 0, 1), **options) + app.add_directive('autofunction', auto_directive, 1, (1, 0, 1)) + app.add_directive('automethod', auto_directive, 1, (1, 0, 1)) + app.add_directive('autoattribute', auto_directive, 1, (1, 0, 1)) + app.add_config_value('automodule_skip_lines', 0, True) Modified: doctools/trunk/sphinx/ext/coverage.py ============================================================================== --- doctools/trunk/sphinx/ext/coverage.py (original) +++ doctools/trunk/sphinx/ext/coverage.py Sat Mar 15 00:08:22 2008 @@ -36,9 +36,6 @@ class CoverageBuilder(Builder): - """ - Checks the completeness of Python's C-API documentation. 
- """ name = 'coverage' @@ -237,10 +234,10 @@ def setup(app): app.add_builder(CoverageBuilder) - app.add_config_value('coverage_c_path', [], False) - app.add_config_value('coverage_c_regexes', [], False) app.add_config_value('coverage_ignore_modules', [], False) app.add_config_value('coverage_ignore_functions', [], False) app.add_config_value('coverage_ignore_classes', [], False) - app.add_config_value('coverage_ignore_c_items', [], False) + app.add_config_value('coverage_c_path', [], False) + app.add_config_value('coverage_c_regexes', {}, False) + app.add_config_value('coverage_ignore_c_items', {}, False) Modified: doctools/trunk/sphinx/util/__init__.py ============================================================================== --- doctools/trunk/sphinx/util/__init__.py (original) +++ doctools/trunk/sphinx/util/__init__.py Sat Mar 15 00:08:22 2008 @@ -112,3 +112,11 @@ def fmt_ex(ex): """Format a single line with an exception description.""" return traceback.format_exception_only(ex.__class__, ex)[-1].strip() + + +def rpartition(s, t): + """Similar to str.rpartition from 2.5.""" + i = s.rfind(t) + if i != -1: + return s[:i], s[i+len(t):] + return '', s From python-checkins at python.org Sat Mar 15 00:10:35 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 00:10:35 +0100 (CET) Subject: [Python-checkins] r61392 - python/trunk/Doc/library/urllib.rst Message-ID: <20080314231035.268651E4002@bag.python.org> Author: georg.brandl Date: Sat Mar 15 00:10:34 2008 New Revision: 61392 Modified: python/trunk/Doc/library/urllib.rst Log: Remove obsolete paragraph. #2288. Modified: python/trunk/Doc/library/urllib.rst ============================================================================== --- python/trunk/Doc/library/urllib.rst (original) +++ python/trunk/Doc/library/urllib.rst Sat Mar 15 00:10:34 2008 @@ -107,10 +107,6 @@ filehandle = urllib.urlopen(some_url, proxies=None) filehandle = urllib.urlopen(some_url) - The :func:`urlopen` function does not support explicit proxy specification. If - you need to override environmental proxy settings, use :class:`URLopener`, or a - subclass such as :class:`FancyURLopener`. - Proxies which require authentication for use are not currently supported; this is considered an implementation limitation. From python-checkins at python.org Sat Mar 15 00:35:08 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 00:35:08 +0100 (CET) Subject: [Python-checkins] r61393 - in doctools/trunk/doc: conf.py ext/autodoc.rst Message-ID: <20080314233508.BE6E91E401C@bag.python.org> Author: georg.brandl Date: Sat Mar 15 00:35:08 2008 New Revision: 61393 Modified: doctools/trunk/doc/conf.py doctools/trunk/doc/ext/autodoc.rst Log: Add documentation for autodoc. Modified: doctools/trunk/doc/conf.py ============================================================================== --- doctools/trunk/doc/conf.py (original) +++ doctools/trunk/doc/conf.py Sat Mar 15 00:35:08 2008 @@ -21,7 +21,7 @@ # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.addons.*') or your custom ones. -extensions = ['ext'] +extensions = ['ext', 'sphinx.ext.autodoc'] # Add any paths that contain templates here, relative to this directory. templates_path = ['.templates'] @@ -123,3 +123,5 @@ # Documents to append as an appendix to all manuals. 
#latex_appendices = [] + +automodule_skip_lines = 4 Modified: doctools/trunk/doc/ext/autodoc.rst ============================================================================== --- doctools/trunk/doc/ext/autodoc.rst (original) +++ doctools/trunk/doc/ext/autodoc.rst Sat Mar 15 00:35:08 2008 @@ -1,5 +1,98 @@ +.. highlight:: rest + :mod:`sphinx.ext.autodoc` -- Include documentation from docstrings ================================================================== .. module:: sphinx.ext.autodoc :synopsis: Include documentation from docstrings. + +.. index:: pair: automatic; documentation + single: docstring + +This extension can import the modules you are documenting, and pull in +documentation from docstrings in a semi-automatic way. + +For this to work, the docstrings must of course be written in correct +reStructuredText. You can then use all of the usual Sphinx markup in the +docstrings, and it will end up correctly in the documentation. Together with +hand-written documentation, this technique eases the pain of having to maintain +two locations for documentation, while at the same time avoiding +auto-generated-looking pure API documentation. + +:mod:`autodoc` provides several directives that are versions of the usual +:dir:`module`, :dir:`class` and so forth. On parsing time, they import the +corresponding module and extract the docstring of the given objects, inserting +them into the page source under a suitable :dir:`module`, :dir:`class` etc. +directive. + +.. note:: + + Just as :dir:`class` respects the current :dir:`module`, :dir:`autoclass` + will also do so, and likewise with :dir:`method` and :dir:`class`. + + +.. directive:: automodule + autoclass + autoexception + + Document a module, class or exception. All three directives will by default + only insert the docstring of the object itself:: + + .. autoclass:: Noodle + + will produce source like this:: + + .. class:: Noodle + + Noodle's docstring. + + If you want to automatically document members, there's a ``members`` + option:: + + .. autoclass:: Noodle + :members: + + will document all non-private member functions and properties (that is, those + whose name doesn't start with ``_``), while :: + + .. autoclass:: Noodle + :members: eat, slurp + + will document exactly the specified members. + + Members without docstrings will be left out, unless you give the + ``undoc-members`` flag option. + + The "auto" directives can also contain content of their own, it will be + inserted into the resulting non-auto directive source after the docstring + (but before any automatic member documentation). + + Therefore, you can also mix automatic and non-automatic member documentation, + like so:: + + .. autoclass:: Noodle + :members: eat, slurp + + .. method:: boil(time=10) + + Boil the noodle *time* minutes. + + +.. directive:: autofunction + automethod + autoattribute + + These work exactly like :dir:`autoclass` etc., but do not offer the options + used for automatic member documentation. + + +There's also one new config value that you can set: + +.. confval:: automodule_skip_lines + + This value (whose default is ``0``) can be used to skip an amount of lines in + every module docstring that is processed by an :dir:`automodule` directive. + This is provided because some projects like to put headings in the module + docstring, which would then interfere with your sectioning, or automatic + fields with version control tags, that you don't want to put in the generated + documentation. 
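
[Editorial sketch, not part of the archived commit.] To make the autodoc workflow documented above a little more concrete, here is a minimal sketch of the kind of module those directives import. The module name ``noodles`` and everything in it are hypothetical; only the ``Noodle`` class name is borrowed from the example in the documentation, and the sketch assumes the module is importable on ``sys.path`` when Sphinx runs.

# noodles.py -- hypothetical example module, not part of Sphinx or Python;
# it only illustrates what the autodoc directives described above operate on.
"""Utilities for dealing with noodles.

This module docstring is what ``.. automodule:: noodles`` would insert.
"""


class Noodle(object):
    """A noodle; ``.. autoclass:: Noodle`` pulls in this docstring."""

    def eat(self, quickly=True):
        """Eat the noodle.

        With the ``:members:`` option, autodoc would document this method
        under a generated ``.. method:: Noodle.eat(quickly=True)`` entry.
        """
        if quickly:
            return 'gone'
        return 'still being slurped'

With the extension from r61391 enabled, ``.. autoclass:: Noodle`` with ``:members:`` in a reST source file would then expand, roughly, to the hand-written ``.. class:: Noodle`` form shown in the documentation above.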
From python-checkins at python.org Sat Mar 15 00:47:31 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 00:47:31 +0100 (CET) Subject: [Python-checkins] r61394 - doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/linkcheck.py Message-ID: <20080314234731.2041E1E4002@bag.python.org> Author: georg.brandl Date: Sat Mar 15 00:47:30 2008 New Revision: 61394 Added: doctools/trunk/sphinx/linkcheck.py Modified: doctools/trunk/sphinx/builder.py Log: Move link checker to its own file. Use different user-agent to enable Wikipedia lookup. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Sat Mar 15 00:47:30 2008 @@ -5,7 +5,7 @@ Builder classes for different output formats. - :copyright: 2007-2008 by Georg Brandl, Thomas Lamb. + :copyright: 2007-2008 by Georg Brandl. :license: BSD. """ @@ -13,11 +13,9 @@ import time import codecs import shutil -import socket import cPickle as pickle from os import path from cgi import escape -from urllib2 import urlopen, HTTPError from docutils import nodes from docutils.io import StringOutput, FileOutput, DocTreeInput @@ -891,108 +889,7 @@ pass -class CheckExternalLinksBuilder(Builder): - """ - Checks for broken external links. - """ - name = 'linkcheck' - - def init(self): - self.good = set() - self.broken = {} - self.redirected = {} - # set a timeout for non-responding servers - socket.setdefaulttimeout(5.0) - # create output file - open(path.join(self.outdir, 'output.txt'), 'w').close() - - def get_target_uri(self, docname, typ=None): - return '' - - def get_outdated_docs(self): - return self.env.all_docs - - def prepare_writing(self, docnames): - return - - def write_doc(self, docname, doctree): - self.info() - for node in doctree.traverse(nodes.reference): - try: - self.check(node, docname) - except KeyError: - continue - - def check(self, node, docname): - uri = node['refuri'] - - if '#' in uri: - uri = uri.split('#')[0] - - if uri in self.good: - return - - if uri[0:5] == 'http:' or uri[0:6] == 'https:': - self.info(uri, nonl=1) - lineno = None - while lineno is None and node: - node = node.parent - lineno = node.line - - if uri in self.broken: - (r, s) = self.broken[uri] - elif uri in self.redirected: - (r, s) = self.redirected[uri] - else: - (r, s) = self.resolve(uri) - - if r == 0: - self.info(' - ' + darkgreen('working')) - self.good.add(uri) - elif r == 2: - self.info(' - ' + red('broken: ') + s) - self.broken[uri] = (r, s) - self.write_entry('broken', docname, lineno, uri + ': ' + s) - else: - self.info(' - ' + purple('redirected') + ' to ' + s) - self.redirected[uri] = (r, s) - self.write_entry('redirected', docname, lineno, uri + ' to ' + s) - - elif len(uri) == 0 or uri[0:7] == 'mailto:' or uri[0:4] == 'ftp:': - return - else: - self.info(uri + ' - ' + red('malformed!')) - self.write_entry('malformed', docname, lineno, uri) - - return - - def write_entry(self, what, docname, line, uri): - output = open(path.join(self.outdir, 'output.txt'), 'a') - output.write("%s:%s [%s] %s\n" % (self.env.doc2path(docname, None), - line, what, uri)) - output.close() - - def resolve(self, uri): - try: - f = urlopen(uri) - f.close() - except HTTPError, err: - if err.code == 403 and uri.startswith('http://en.wikipedia.org/'): - # Wikipedia blocks requests from urllib User-Agent - return (0, 0) - return (2, str(err)) - except Exception, err: - return (2, str(err)) - if f.url.rstrip('/') 
== uri.rstrip('/'): - return (0, 0) - else: - return (1, f.url) - - def finish(self): - return - - - +from sphinx.linkcheck import CheckExternalLinksBuilder builtin_builders = { 'html': StandaloneHTMLBuilder, Added: doctools/trunk/sphinx/linkcheck.py ============================================================================== --- (empty file) +++ doctools/trunk/sphinx/linkcheck.py Sat Mar 15 00:47:30 2008 @@ -0,0 +1,124 @@ +# -*- coding: utf-8 -*- +""" + sphinx.linkcheck + ~~~~~~~~~~~~~~~~ + + The CheckExternalLinksBuilder class. + + :copyright: 2008 by Georg Brandl, Thomas Lamb. + :license: BSD. +""" + +import socket +from os import path +from urllib2 import build_opener, HTTPError + +from docutils import nodes + +from sphinx.builder import Builder +from sphinx.util.console import bold, purple, red, darkgreen + +# create an opener that will simulate a browser user-agent +opener = build_opener() +opener.addheaders = [('User-agent', 'Mozilla/5.0')] + + +class CheckExternalLinksBuilder(Builder): + """ + Checks for broken external links. + """ + name = 'linkcheck' + + def init(self): + self.good = set() + self.broken = {} + self.redirected = {} + # set a timeout for non-responding servers + socket.setdefaulttimeout(5.0) + # create output file + open(path.join(self.outdir, 'output.txt'), 'w').close() + + def get_target_uri(self, docname, typ=None): + return '' + + def get_outdated_docs(self): + return self.env.all_docs + + def prepare_writing(self, docnames): + return + + def write_doc(self, docname, doctree): + self.info() + for node in doctree.traverse(nodes.reference): + try: + self.check(node, docname) + except KeyError: + continue + + def check(self, node, docname): + uri = node['refuri'] + + if '#' in uri: + uri = uri.split('#')[0] + + if uri in self.good: + return + + if uri[0:5] == 'http:' or uri[0:6] == 'https:': + self.info(uri, nonl=1) + lineno = None + while lineno is None and node: + node = node.parent + lineno = node.line + + if uri in self.broken: + (r, s) = self.broken[uri] + elif uri in self.redirected: + (r, s) = self.redirected[uri] + else: + (r, s) = self.resolve(uri) + + if r == 0: + self.info(' - ' + darkgreen('working')) + self.good.add(uri) + elif r == 2: + self.info(' - ' + red('broken: ') + s) + self.broken[uri] = (r, s) + self.write_entry('broken', docname, lineno, uri + ': ' + s) + else: + self.info(' - ' + purple('redirected') + ' to ' + s) + self.redirected[uri] = (r, s) + self.write_entry('redirected', docname, lineno, uri + ' to ' + s) + + elif len(uri) == 0 or uri[0:7] == 'mailto:' or uri[0:4] == 'ftp:': + return + else: + self.info(uri + ' - ' + red('malformed!')) + self.write_entry('malformed', docname, lineno, uri) + + return + + def write_entry(self, what, docname, line, uri): + output = open(path.join(self.outdir, 'output.txt'), 'a') + output.write("%s:%s [%s] %s\n" % (self.env.doc2path(docname, None), + line, what, uri)) + output.close() + + def resolve(self, uri): + try: + f = opener.open(uri) + f.close() + except HTTPError, err: + #if err.code == 403 and uri.startswith('http://en.wikipedia.org/'): + # # Wikipedia blocks requests from urllib User-Agent + # return (0, 0) + return (2, str(err)) + except Exception, err: + return (2, str(err)) + if f.url.rstrip('/') == uri.rstrip('/'): + return (0, 0) + else: + return (1, f.url) + + def finish(self): + return From python-checkins at python.org Sat Mar 15 01:20:21 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 01:20:21 +0100 (CET) Subject: [Python-checkins] r61395 - in 
python/trunk/Doc: bugs.rst distutils/apiref.rst distutils/examples.rst distutils/setupscript.rst documenting/style.rst howto/advocacy.rst howto/curses.rst howto/regex.rst howto/unicode.rst howto/urllib2.rst install/index.rst library/aepack.rst library/cookielib.rst library/imaplib.rst library/mailbox.rst library/mimetools.rst library/mimetypes.rst library/othergui.rst library/robotparser.rst library/sha.rst library/tix.rst library/tkinter.rst library/zipfile.rst library/zipimport.rst license.rst reference/introduction.rst tutorial/whatnow.rst Message-ID: <20080315002021.0BD391E4024@bag.python.org> Author: georg.brandl Date: Sat Mar 15 01:20:19 2008 New Revision: 61395 Modified: python/trunk/Doc/bugs.rst python/trunk/Doc/distutils/apiref.rst python/trunk/Doc/distutils/examples.rst python/trunk/Doc/distutils/setupscript.rst python/trunk/Doc/documenting/style.rst python/trunk/Doc/howto/advocacy.rst python/trunk/Doc/howto/curses.rst python/trunk/Doc/howto/regex.rst python/trunk/Doc/howto/unicode.rst python/trunk/Doc/howto/urllib2.rst python/trunk/Doc/install/index.rst python/trunk/Doc/library/aepack.rst python/trunk/Doc/library/cookielib.rst python/trunk/Doc/library/imaplib.rst python/trunk/Doc/library/mailbox.rst python/trunk/Doc/library/mimetools.rst python/trunk/Doc/library/mimetypes.rst python/trunk/Doc/library/othergui.rst python/trunk/Doc/library/robotparser.rst python/trunk/Doc/library/sha.rst python/trunk/Doc/library/tix.rst python/trunk/Doc/library/tkinter.rst python/trunk/Doc/library/zipfile.rst python/trunk/Doc/library/zipimport.rst python/trunk/Doc/license.rst python/trunk/Doc/reference/introduction.rst python/trunk/Doc/tutorial/whatnow.rst Log: Fix lots of broken links in the docs, found by Sphinx' external link checker. Modified: python/trunk/Doc/bugs.rst ============================================================================== --- python/trunk/Doc/bugs.rst (original) +++ python/trunk/Doc/bugs.rst Sat Mar 15 01:20:19 2008 @@ -53,7 +53,7 @@ Article which goes into some detail about how to create a useful bug report. This describes what kind of information is useful and why it is useful. - `Bug Writing Guidelines `_ + `Bug Writing Guidelines `_ Information about writing a good bug report. Some of this is specific to the Mozilla project, but describes general good practices. Modified: python/trunk/Doc/distutils/apiref.rst ============================================================================== --- python/trunk/Doc/distutils/apiref.rst (original) +++ python/trunk/Doc/distutils/apiref.rst Sat Mar 15 01:20:19 2008 @@ -73,7 +73,7 @@ +--------------------+--------------------------------+-------------------------------------------------------------+ | *classifiers* | A list of categories for the | The list of available | | | package | categorizations is at | - | | | http://cheeseshop.python.org/pypi?:action=list_classifiers. | + | | | http://pypi.python.org/pypi?:action=list_classifiers. | +--------------------+--------------------------------+-------------------------------------------------------------+ | *distclass* | the :class:`Distribution` | A subclass of | | | class to use | :class:`distutils.core.Distribution` | Modified: python/trunk/Doc/distutils/examples.rst ============================================================================== --- python/trunk/Doc/distutils/examples.rst (original) +++ python/trunk/Doc/distutils/examples.rst Sat Mar 15 01:20:19 2008 @@ -11,7 +11,7 @@ .. 
seealso:: - `Distutils Cookbook `_ + `Distutils Cookbook `_ Collection of recipes showing how to achieve more control over distutils. Modified: python/trunk/Doc/distutils/setupscript.rst ============================================================================== --- python/trunk/Doc/distutils/setupscript.rst (original) +++ python/trunk/Doc/distutils/setupscript.rst Sat Mar 15 01:20:19 2008 @@ -580,7 +580,7 @@ (4) These fields should not be used if your package is to be compatible with Python versions prior to 2.2.3 or 2.3. The list is available from the `PyPI website - `_. + `_. 'short string' A single line of text, not more than 200 characters. Modified: python/trunk/Doc/documenting/style.rst ============================================================================== --- python/trunk/Doc/documenting/style.rst (original) +++ python/trunk/Doc/documenting/style.rst Sat Mar 15 01:20:19 2008 @@ -66,5 +66,5 @@ 1970s. -.. _Apple Publications Style Guide: http://developer.apple.com/documentation/UserExperience/Conceptual/APStyleGuide/AppleStyleGuide2003.pdf +.. _Apple Publications Style Guide: http://developer.apple.com/documentation/UserExperience/Conceptual/APStyleGuide/AppleStyleGuide2006.pdf Modified: python/trunk/Doc/howto/advocacy.rst ============================================================================== --- python/trunk/Doc/howto/advocacy.rst (original) +++ python/trunk/Doc/howto/advocacy.rst Sat Mar 15 01:20:19 2008 @@ -346,7 +346,7 @@ wasn't written commercially. This site presents arguments that show how open source software can have considerable advantages over closed-source software. -http://sunsite.unc.edu/LDP/HOWTO/mini/Advocacy.html +http://www.faqs.org/docs/Linux-mini/Advocacy.html The Linux Advocacy mini-HOWTO was the inspiration for this document, and is also well worth reading for general suggestions on winning acceptance for a new technology, such as Linux or Python. In general, you won't make much progress Modified: python/trunk/Doc/howto/curses.rst ============================================================================== --- python/trunk/Doc/howto/curses.rst (original) +++ python/trunk/Doc/howto/curses.rst Sat Mar 15 01:20:19 2008 @@ -52,7 +52,7 @@ No one has made a Windows port of the curses module. On a Windows platform, try the Console module written by Fredrik Lundh. The Console module provides cursor-addressable text output, plus full support for mouse and keyboard input, -and is available from http://effbot.org/efflib/console. +and is available from http://effbot.org/zone/console-index.htm. The Python curses module @@ -432,5 +432,5 @@ If you write an interesting little program, feel free to contribute it as another demo. We can always use more of them! -The ncurses FAQ: http://dickey.his.com/ncurses/ncurses.faq.html +The ncurses FAQ: http://invisible-island.net/ncurses/ncurses.faq.html Modified: python/trunk/Doc/howto/regex.rst ============================================================================== --- python/trunk/Doc/howto/regex.rst (original) +++ python/trunk/Doc/howto/regex.rst Sat Mar 15 01:20:19 2008 @@ -367,8 +367,8 @@ Python distribution. It allows you to enter REs and strings, and displays whether the RE matches or fails. :file:`redemo.py` can be quite useful when trying to debug a complicated RE. Phil Schwartz's `Kodos -`_ is also an interactive tool for -developing and testing RE patterns. +`_ is also an interactive tool for developing and +testing RE patterns. This HOWTO uses the standard Python interpreter for its examples. 
First, run the Python interpreter, import the :mod:`re` module, and compile a RE:: Modified: python/trunk/Doc/howto/unicode.rst ============================================================================== --- python/trunk/Doc/howto/unicode.rst (original) +++ python/trunk/Doc/howto/unicode.rst Sat Mar 15 01:20:19 2008 @@ -210,10 +210,6 @@ to reading the Unicode character tables, available at . -Roman Czyborra wrote another explanation of Unicode's basic principles; it's at -. Czyborra has written a number of -other Unicode-related documentation, available from . - Two other good introductory articles were written by Joel Spolsky and Jason Orendorff . If this introduction didn't make @@ -490,7 +486,7 @@ Marc-Andr? Lemburg gave a presentation at EuroPython 2002 titled "Python and Unicode". A PDF version of his slides is available at -, and is an +, and is an excellent overview of the design of Python's Unicode features. @@ -677,7 +673,7 @@ The PDF slides for Marc-Andr? Lemburg's presentation "Writing Unicode-aware Applications in Python" are available at - + and discuss questions of character encodings as well as how to internationalize and localize an application. Modified: python/trunk/Doc/howto/urllib2.rst ============================================================================== --- python/trunk/Doc/howto/urllib2.rst (original) +++ python/trunk/Doc/howto/urllib2.rst Sat Mar 15 01:20:19 2008 @@ -8,7 +8,7 @@ There is an French translation of an earlier revision of this HOWTO, available at `urllib2 - Le Manuel manquant - `_. + `_. Modified: python/trunk/Doc/install/index.rst ============================================================================== --- python/trunk/Doc/install/index.rst (original) +++ python/trunk/Doc/install/index.rst Sat Mar 15 01:20:19 2008 @@ -872,10 +872,10 @@ -Borland C++ -^^^^^^^^^^^ +Borland/CodeGear C++ +^^^^^^^^^^^^^^^^^^^^ -This subsection describes the necessary steps to use Distutils with the Borland +This subsection describes the necessary steps to use Distutils with the Borland C++ compiler version 5.5. First you have to know that Borland's object file format (OMF) is different from the format used by the Python version you can download from the Python or ActiveState Web site. (Python is built with @@ -915,7 +915,7 @@ .. seealso:: - `C++Builder Compiler `_ + `C++Builder Compiler `_ Information about the free C++ compiler from Borland, including links to the download pages. @@ -938,9 +938,7 @@ These compilers require some special libraries. This task is more complex than for Borland's C++, because there is no program to convert the library. First you have to create a list of symbols which the Python DLL exports. (You can find -a good program for this task at -http://starship.python.net/crew/kernr/mingw32/Notes.html, see at PExports 0.42h -there.) +a good program for this task at http://www.emmestech.com/software/cygwin/pexports-0.43/download_pexports.html) .. I don't understand what the next line means. --amk .. (inclusive the references on data structures.) @@ -984,9 +982,6 @@ `Building Python modules on MS Windows platform with MinGW `_ Information about building the required libraries for the MinGW environment. - http://pyopengl.sourceforge.net/ftp/win32-stuff/ - Converted import libraries in Cygwin/MinGW and Borland format, and a script to - create the registry entries needed for Distutils to locate the built Python. .. 
rubric:: Footnotes Modified: python/trunk/Doc/library/aepack.rst ============================================================================== --- python/trunk/Doc/library/aepack.rst (original) +++ python/trunk/Doc/library/aepack.rst Sat Mar 15 01:20:19 2008 @@ -84,7 +84,3 @@ Module :mod:`aetypes` Python definitions of codes for Apple Event descriptor types. - - `Inside Macintosh: Interapplication Communication `_ - Information about inter-process communications on the Macintosh. - Modified: python/trunk/Doc/library/cookielib.rst ============================================================================== --- python/trunk/Doc/library/cookielib.rst (original) +++ python/trunk/Doc/library/cookielib.rst Sat Mar 15 01:20:19 2008 @@ -121,7 +121,7 @@ Extensions to this module, including a class for reading Microsoft Internet Explorer cookies on Windows. - http://www.netscape.com/newsref/std/cookie_spec.html + http://wp.netscape.com/newsref/std/cookie_spec.html The specification of the original Netscape cookie protocol. Though this is still the dominant protocol, the 'Netscape cookie protocol' implemented by all the major browsers (and :mod:`cookielib`) only bears a passing resemblance to Modified: python/trunk/Doc/library/imaplib.rst ============================================================================== --- python/trunk/Doc/library/imaplib.rst (original) +++ python/trunk/Doc/library/imaplib.rst Sat Mar 15 01:20:19 2008 @@ -117,7 +117,7 @@ Documents describing the protocol, and sources and binaries for servers implementing it, can all be found at the University of Washington's *IMAP - Information Center* (http://www.cac.washington.edu/imap/). + Information Center* (http://www.washington.edu/imap/). .. _imap4-objects: Modified: python/trunk/Doc/library/mailbox.rst ============================================================================== --- python/trunk/Doc/library/mailbox.rst (original) +++ python/trunk/Doc/library/mailbox.rst Sat Mar 15 01:20:19 2008 @@ -404,7 +404,7 @@ Notes on Maildir by its inventor. Includes an updated name-creation scheme and details on "info" semantics. - `maildir man page from Courier `_ + `maildir man page from Courier `_ Another specification of the format. Describes a common extension for supporting folders. @@ -461,7 +461,7 @@ `mbox man page from tin `_ Another specification of the format, with details on locking. - `Configuring Netscape Mail on Unix: Why The Content-Length Format is Bad `_ + `Configuring Netscape Mail on Unix: Why The Content-Length Format is Bad `_ An argument for using the original mbox format rather than a variation. `"mbox" is a family of several mutually incompatible mailbox formats `_ @@ -665,7 +665,7 @@ `Format of Version 5 Babyl Files `_ A specification of the Babyl format. - `Reading Mail with Rmail `_ + `Reading Mail with Rmail `_ The Rmail manual, with some information on Babyl semantics. @@ -1541,10 +1541,6 @@ :class:`UnixMailbox` except that individual messages are separated by only ``From`` lines. - For more information, see `Configuring Netscape Mail on Unix: Why the - Content-Length Format is Bad - `_. - .. class:: PortableUnixMailbox(fp[, factory]) Modified: python/trunk/Doc/library/mimetools.rst ============================================================================== --- python/trunk/Doc/library/mimetools.rst (original) +++ python/trunk/Doc/library/mimetools.rst Sat Mar 15 01:20:19 2008 @@ -73,7 +73,7 @@ Module :mod:`multifile` Support for reading files which contain distinct parts, such as MIME data. 
- http://www.cs.uu.nl/wais/html/na-dir/mail/mime-faq/.html + http://faqs.cs.uu.nl/na-dir/mail/mime-faq/.html The MIME Frequently Asked Questions document. For an overview of MIME, see the answer to question 1.1 in Part 1 of this document. Modified: python/trunk/Doc/library/mimetypes.rst ============================================================================== --- python/trunk/Doc/library/mimetypes.rst (original) +++ python/trunk/Doc/library/mimetypes.rst Sat Mar 15 01:20:19 2008 @@ -41,7 +41,7 @@ Optional *strict* is a flag specifying whether the list of known MIME types is limited to only the official types `registered with IANA - `_ are recognized. + `_ are recognized. When *strict* is true (the default), only the IANA types are supported; when *strict* is false, some additional non-standard but commonly used MIME types are also recognized. Modified: python/trunk/Doc/library/othergui.rst ============================================================================== --- python/trunk/Doc/library/othergui.rst (original) +++ python/trunk/Doc/library/othergui.rst Sat Mar 15 01:20:19 2008 @@ -36,14 +36,12 @@ `PyGTK `_ is a set of bindings for the `GTK `_ widget set. It - provides an object oriented interface that is slightly higher level than the C - one. It comes with many more widgets than Tkinter provides, and - has good Python-specific reference documentation. There are also `bindings - `_ to `GNOME `_. - One well known PyGTK application is - `PythonCAD `_. An - online `tutorial `_ is - available. + provides an object oriented interface that is slightly higher level than + the C one. It comes with many more widgets than Tkinter provides, and has + good Python-specific reference documentation. There are also bindings to + `GNOME `_. One well known PyGTK application is + `PythonCAD `_. An online `tutorial + `_ is available. `PyQt `_ PyQt is a :program:`sip`\ -wrapped binding to the Qt toolkit. Qt is an Modified: python/trunk/Doc/library/robotparser.rst ============================================================================== --- python/trunk/Doc/library/robotparser.rst (original) +++ python/trunk/Doc/library/robotparser.rst Sat Mar 15 01:20:19 2008 @@ -15,9 +15,8 @@ This module provides a single class, :class:`RobotFileParser`, which answers questions about whether or not a particular user agent can fetch a URL on the -Web site that published the :file:`robots.txt` file. For more details on the -structure of :file:`robots.txt` files, see -http://www.robotstxt.org/wc/norobots.html. +Web site that published the :file:`robots.txt` file. For more details on the +structure of :file:`robots.txt` files, see http://www.robotstxt.org/orig.html. .. class:: RobotFileParser() Modified: python/trunk/Doc/library/sha.rst ============================================================================== --- python/trunk/Doc/library/sha.rst (original) +++ python/trunk/Doc/library/sha.rst Sat Mar 15 01:20:19 2008 @@ -82,6 +82,6 @@ `_, published in August 2002. - `Cryptographic Toolkit (Secure Hashing) `_ + `Cryptographic Toolkit (Secure Hashing) `_ Links from NIST to various information on secure hashing. Modified: python/trunk/Doc/library/tix.rst ============================================================================== --- python/trunk/Doc/library/tix.rst (original) +++ python/trunk/Doc/library/tix.rst Sat Mar 15 01:20:19 2008 @@ -35,7 +35,7 @@ `Tix Programming Guide `_ On-line version of the programmer's reference material. 
- `Tix Development Applications `_ + `Tix Development Applications `_ Tix applications for development of Tix and Tkinter programs. Tide applications work under Tk or Tkinter, and include :program:`TixInspect`, an inspector to remotely modify and debug Tix/Tk/Tkinter applications. Modified: python/trunk/Doc/library/tkinter.rst ============================================================================== --- python/trunk/Doc/library/tkinter.rst (original) +++ python/trunk/Doc/library/tkinter.rst Sat Mar 15 01:20:19 2008 @@ -21,7 +21,7 @@ `An Introduction to Tkinter `_ Fredrik Lundh's on-line reference material. - `Tkinter reference: a GUI for Python `_ + `Tkinter reference: a GUI for Python `_ On-line reference material. `Tkinter for JPython `_ Modified: python/trunk/Doc/library/zipfile.rst ============================================================================== --- python/trunk/Doc/library/zipfile.rst (original) +++ python/trunk/Doc/library/zipfile.rst Sat Mar 15 01:20:19 2008 @@ -13,7 +13,7 @@ provides tools to create, read, write, append, and list a ZIP file. Any advanced use of this module will require an understanding of the format, as defined in `PKZIP Application Note -`_. +`_. This module does not currently handle multi-disk ZIP files, or ZIP files which have appended comments (although it correctly handles comments @@ -83,7 +83,7 @@ .. seealso:: - `PKZIP Application Note `_ + `PKZIP Application Note `_ Documentation on the ZIP file format by Phil Katz, the creator of the format and algorithms used. @@ -373,7 +373,7 @@ .. attribute:: ZipInfo.extra Expansion field data. The `PKZIP Application Note - `_ contains + `_ contains some comments on the internal structure of the data contained in this string. Modified: python/trunk/Doc/library/zipimport.rst ============================================================================== --- python/trunk/Doc/library/zipimport.rst (original) +++ python/trunk/Doc/library/zipimport.rst Sat Mar 15 01:20:19 2008 @@ -35,7 +35,7 @@ .. seealso:: - `PKZIP Application Note `_ + `PKZIP Application Note `_ Documentation on the ZIP file format by Phil Katz, the creator of the format and algorithms used. Modified: python/trunk/Doc/license.rst ============================================================================== --- python/trunk/Doc/license.rst (original) +++ python/trunk/Doc/license.rst Sat Mar 15 01:20:19 2008 @@ -343,7 +343,7 @@ The :mod:`socket` module uses the functions, :func:`getaddrinfo`, and :func:`getnameinfo`, which are coded in separate source files from the WIDE -Project, http://www.wide.ad.jp/about/index.html. :: +Project, http://www.wide.ad.jp/. :: Copyright (C) 1995, 1996, 1997, and 1998 WIDE Project. All rights reserved. Modified: python/trunk/Doc/reference/introduction.rst ============================================================================== --- python/trunk/Doc/reference/introduction.rst (original) +++ python/trunk/Doc/reference/introduction.rst Sat Mar 15 01:20:19 2008 @@ -59,7 +59,7 @@ This implementation actually uses the CPython implementation, but is a managed .NET application and makes .NET libraries available. It was created by Brian Lloyd. For more information, see the `Python for .NET home page - `_. + `_. IronPython An alternate Python for .NET. 
Unlike Python.NET, this is a complete Python Modified: python/trunk/Doc/tutorial/whatnow.rst ============================================================================== --- python/trunk/Doc/tutorial/whatnow.rst (original) +++ python/trunk/Doc/tutorial/whatnow.rst Sat Mar 15 01:20:19 2008 @@ -38,9 +38,9 @@ * http://docs.python.org: Fast access to Python's documentation. -* http://cheeseshop.python.org: The Python Package Index, nicknamed the Cheese - Shop, is an index of user-created Python modules that are available for - download. Once you begin releasing code, you can register it here so that +* http://pypi.python.org: The Python Package Index, previously also nicknamed + the Cheese Shop, is an index of user-created Python modules that are available + for download. Once you begin releasing code, you can register it here so that others can find it. * http://aspn.activestate.com/ASPN/Python/Cookbook/: The Python Cookbook is a From python-checkins at python.org Sat Mar 15 03:32:49 2008 From: python-checkins at python.org (skip.montanaro) Date: Sat, 15 Mar 2008 03:32:49 +0100 (CET) Subject: [Python-checkins] r61396 - python/trunk/Doc/library/os.rst Message-ID: <20080315023249.85F061E4002@bag.python.org> Author: skip.montanaro Date: Sat Mar 15 03:32:49 2008 New Revision: 61396 Modified: python/trunk/Doc/library/os.rst Log: note that fork and forkpty raise OSError on failure Modified: python/trunk/Doc/library/os.rst ============================================================================== --- python/trunk/Doc/library/os.rst (original) +++ python/trunk/Doc/library/os.rst Sat Mar 15 03:32:49 2008 @@ -1631,7 +1631,8 @@ .. function:: fork() Fork a child process. Return ``0`` in the child and the child's process id in the - parent. Availability: Macintosh, Unix. + parent. If an error occurs :exc:`OSError` is raised. + Availability: Macintosh, Unix. .. function:: forkpty() @@ -1640,7 +1641,8 @@ terminal. Return a pair of ``(pid, fd)``, where *pid* is ``0`` in the child, the new child's process id in the parent, and *fd* is the file descriptor of the master end of the pseudo-terminal. For a more portable approach, use the - :mod:`pty` module. Availability: Macintosh, some flavors of Unix. + :mod:`pty` module. If an error occurs :exc:`OSError` is raised. + Availability: Macintosh, some flavors of Unix. .. function:: kill(pid, sig) From python-checkins at python.org Sat Mar 15 09:02:20 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 09:02:20 +0100 (CET) Subject: [Python-checkins] r61398 - doctools/trunk/sphinx/linkcheck.py Message-ID: <20080315080220.50B6A1E4002@bag.python.org> Author: georg.brandl Date: Sat Mar 15 09:02:19 2008 New Revision: 61398 Modified: doctools/trunk/sphinx/linkcheck.py Log: Make the linkchecker output usable with Emacs' grep-mode. 
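
[Editorial sketch, not part of the archived commit.] Purely as an illustration of the output format this change switches to, the following snippet uses made-up values (file name, line number, URI and error text are all hypothetical) to show why the ``file:line:`` prefix matters:

# Hypothetical values; not taken from any real linkcheck run.
entry = "%s:%s: [%s] %s\n" % ("doc/intro.rst", 42, "broken",
                              "http://example.invalid/page: 404 Not Found")
print entry.rstrip()
# -> doc/intro.rst:42: [broken] http://example.invalid/page: 404 Not Found
# The leading "file:line:" is exactly what Emacs' grep-mode (and M-x compile)
# parses to jump straight to the offending source line.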
Modified: doctools/trunk/sphinx/linkcheck.py ============================================================================== --- doctools/trunk/sphinx/linkcheck.py (original) +++ doctools/trunk/sphinx/linkcheck.py Sat Mar 15 09:02:19 2008 @@ -100,8 +100,8 @@ def write_entry(self, what, docname, line, uri): output = open(path.join(self.outdir, 'output.txt'), 'a') - output.write("%s:%s [%s] %s\n" % (self.env.doc2path(docname, None), - line, what, uri)) + output.write("%s:%s: [%s] %s\n" % (self.env.doc2path(docname, None), + line, what, uri)) output.close() def resolve(self, uri): From python-checkins at python.org Sat Mar 15 10:06:05 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 15 Mar 2008 10:06:05 +0100 (CET) Subject: [Python-checkins] r61399 - in doctools/trunk/doc: concepts.rst contents.rst glossary.rst markup markup.rst markup/code.rst markup/index.rst markup/infounits.rst markup/inline.rst markup/misc.rst markup/para.rst Message-ID: <20080315090605.39C011E4026@bag.python.org> Author: georg.brandl Date: Sat Mar 15 10:06:04 2008 New Revision: 61399 Added: doctools/trunk/doc/markup/ doctools/trunk/doc/markup/code.rst doctools/trunk/doc/markup/index.rst doctools/trunk/doc/markup/infounits.rst doctools/trunk/doc/markup/inline.rst doctools/trunk/doc/markup/misc.rst doctools/trunk/doc/markup/para.rst Removed: doctools/trunk/doc/markup.rst Modified: doctools/trunk/doc/concepts.rst doctools/trunk/doc/contents.rst doctools/trunk/doc/glossary.rst Log: Expand the markup chapter a bit. Modified: doctools/trunk/doc/concepts.rst ============================================================================== --- doctools/trunk/doc/concepts.rst (original) +++ doctools/trunk/doc/concepts.rst Sat Mar 15 10:06:04 2008 @@ -1,14 +1,58 @@ +.. highlight:: rest + .. _concepts: Sphinx concepts =============== +Document names +-------------- -The TOC tree ------------- -Document names --------------- +The TOC tree +------------ +Since reST does not have facilities to interconnect several documents, or split +documents into multiple output files, Sphinx uses a custom directive to add +relations between the single files the documentation is made of, as well as +tables of contents. The ``toctree`` directive is the central element. + +.. directive:: toctree + + This directive inserts a "TOC tree" at the current location, using the + individual TOCs (including "sub-TOC trees") of the files given in the + directive body. A numeric ``maxdepth`` option may be given to indicate the + depth of the tree; by default, all levels are included. + + Consider this example (taken from the Python docs' library reference index):: + + .. toctree:: + :maxdepth: 2 + + intro.rst + strings.rst + datatypes.rst + numeric.rst + (many more files listed here) + + This accomplishes two things: + + * Tables of contents from all those files are inserted, with a maximum depth + of two, that means one nested heading. ``toctree`` directives in those + files are also taken into account. + * Sphinx knows that the relative order of the files ``intro.rst``, + ``strings.rst`` and so forth, and it knows that they are children of the + shown file, the library index. From this information it generates "next + chapter", "previous chapter" and "parent chapter" links. + + In the end, all files included in the build process must occur in one + ``toctree`` directive; Sphinx will emit a warning if it finds a file that is + not included, because that means that this file will not be reachable through + standard navigation. 
Use :confval:`unused_documents` to explicitly exclude + documents from this check. + + The "master file" (selected by :confval:`master_file`) is the "root" of the + TOC tree hierarchy. It can be used as the documentation's main page, or as a + "full table of contents" if you don't give a ``maxdepth`` option. Modified: doctools/trunk/doc/contents.rst ============================================================================== --- doctools/trunk/doc/contents.rst (original) +++ doctools/trunk/doc/contents.rst Sat Mar 15 10:06:04 2008 @@ -9,7 +9,7 @@ intro.rst concepts.rst rest.rst - markup.rst + markup/index.rst builders.rst config.rst templating.rst Modified: doctools/trunk/doc/glossary.rst ============================================================================== --- doctools/trunk/doc/glossary.rst (original) +++ doctools/trunk/doc/glossary.rst Sat Mar 15 10:06:04 2008 @@ -20,3 +20,9 @@ documentation root The directory which contains the documentation's :file:`conf.py` file and is therefore seen as one Sphinx project. + + environment + A structure where information about all documents under the root is saved, + and used for cross-referencing. The environment is pickled after the + parsing stage, so that successive runs only need to read and parse new and + changed documents. Deleted: /doctools/trunk/doc/markup.rst ============================================================================== --- /doctools/trunk/doc/markup.rst Sat Mar 15 10:06:04 2008 +++ (empty file) @@ -1,835 +0,0 @@ -.. highlight:: rest - :linenothreshold: 5 - -.. XXX missing: glossary - - -Sphinx Markup Constructs -======================== - -Sphinx adds a lot of new directives and interpreted text roles to standard reST -markup. This section contains the reference material for these facilities. - - -File-wide metadata ------------------- - -reST has the concept of "field lists"; these are a sequence of fields marked up -like this:: - - :Field name: Field content - -A field list at the very top of a file is parsed as the "docinfo", which in -normal documents can be used to record the author, date of publication and -other metadata. In Sphinx, the docinfo is used as metadata, too, but not -displayed in the output. - -At the moment, only one metadata field is recognized: - -``nocomments`` - If set, the web application won't display a comment form for a page generated - from this source file. - - -Meta-information markup ------------------------ - -.. directive:: sectionauthor - - Identifies the author of the current section. The argument should include - the author's name such that it can be used for presentation and email - address. The domain name portion of the address should be lower case. - Example:: - - .. sectionauthor:: Guido van Rossum - - By default, this markup isn't reflected in the output in any way (it helps - keep track of contributions), but you can set the configuration value - :confval:`show_authors` to True to make them produce a paragraph in the - output. - - -Module-specific markup ----------------------- - -The markup described in this section is used to provide information about a -module being documented. Each module should be documented in its own file. -Normally this markup appears after the title heading of that file; a typical -file might start like this:: - - :mod:`parrot` -- Dead parrot access - =================================== - - .. module:: parrot - :platform: Unix, Windows - :synopsis: Analyze and reanimate dead parrots. - .. moduleauthor:: Eric Cleese - .. 
moduleauthor:: John Idle - -As you can see, the module-specific markup consists of two directives, the -``module`` directive and the ``moduleauthor`` directive. - -.. directive:: module - - This directive marks the beginning of the description of a module (or package - submodule, in which case the name should be fully qualified, including the - package name). - - The ``platform`` option, if present, is a comma-separated list of the - platforms on which the module is available (if it is available on all - platforms, the option should be omitted). The keys are short identifiers; - examples that are in use include "IRIX", "Mac", "Windows", and "Unix". It is - important to use a key which has already been used when applicable. - - The ``synopsis`` option should consist of one sentence describing the - module's purpose -- it is currently only used in the Global Module Index. - - The ``deprecated`` option can be given (with no value) to mark a module as - deprecated; it will be designated as such in various locations then. - -.. directive:: moduleauthor - - The ``moduleauthor`` directive, which can appear multiple times, names the - authors of the module code, just like ``sectionauthor`` names the author(s) - of a piece of documentation. It too only produces output if the - :confval:`show_authors` configuration value is True. - - -.. note:: - - It is important to make the section title of a module-describing file - meaningful since that value will be inserted in the table-of-contents trees - in overview files. - - -Information units ------------------ - -There are a number of directives used to describe specific features provided by -modules. Each directive requires one or more signatures to provide basic -information about what is being described, and the content should be the -description. The basic version makes entries in the general index; if no index -entry is desired, you can give the directive option flag ``:noindex:``. The -following example shows all of the features of this directive type:: - - .. function:: spam(eggs) - ham(eggs) - :noindex: - - Spam or ham the foo. - -The signatures of object methods or data attributes should always include the -type name (``.. method:: FileInput.input(...)``), even if it is obvious from the -context which type they belong to; this is to enable consistent -cross-references. If you describe methods belonging to an abstract protocol, -such as "context managers", include a (pseudo-)type name too to make the -index entries more informative. - -The directives are: - -.. directive:: cfunction - - Describes a C function. The signature should be given as in C, e.g.:: - - .. cfunction:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems) - - This is also used to describe function-like preprocessor macros. The names - of the arguments should be given so they may be used in the description. - - Note that you don't have to backslash-escape asterisks in the signature, - as it is not parsed by the reST inliner. - -.. directive:: cmember - - Describes a C struct member. Example signature:: - - .. cmember:: PyObject* PyTypeObject.tp_bases - - The text of the description should include the range of values allowed, how - the value should be interpreted, and whether the value can be changed. - References to structure members in text should use the ``member`` role. - -.. directive:: cmacro - - Describes a "simple" C macro. Simple macros are macros which are used - for code expansion, but which do not take arguments so cannot be described as - functions. 
This is not to be used for simple constant definitions. Examples - of its use in the Python documentation include :cmacro:`PyObject_HEAD` and - :cmacro:`Py_BEGIN_ALLOW_THREADS`. - -.. directive:: ctype - - Describes a C type. The signature should just be the type name. - -.. directive:: cvar - - Describes a global C variable. The signature should include the type, such - as:: - - .. cvar:: PyObject* PyClass_Type - -.. directive:: data - - Describes global data in a module, including both variables and values used - as "defined constants." Class and object attributes are not documented - using this environment. - -.. directive:: exception - - Describes an exception class. The signature can, but need not include - parentheses with constructor arguments. - -.. directive:: function - - Describes a module-level function. The signature should include the - parameters, enclosing optional parameters in brackets. Default values can be - given if it enhances clarity. For example:: - - .. function:: Timer.repeat([repeat=3[, number=1000000]]) - - Object methods are not documented using this directive. Bound object methods - placed in the module namespace as part of the public interface of the module - are documented using this, as they are equivalent to normal functions for - most purposes. - - The description should include information about the parameters required and - how they are used (especially whether mutable objects passed as parameters - are modified), side effects, and possible exceptions. A small example may be - provided. - -.. directive:: class - - Describes a class. The signature can include parentheses with parameters - which will be shown as the constructor arguments. - -.. directive:: attribute - - Describes an object data attribute. The description should include - information about the type of the data to be expected and whether it may be - changed directly. - -.. directive:: method - - Describes an object method. The parameters should not include the ``self`` - parameter. The description should include similar information to that - described for ``function``. - -.. directive:: opcode - - Describes a Python bytecode instruction (this is not very useful for projects - other than Python itself). - -.. directive:: cmdoption - - Describes a command line option or switch. Option argument names should be - enclosed in angle brackets. Example:: - - .. cmdoption:: -m - - Run a module as a script. - -.. directive:: envvar - - Describes an environment variable that the documented code uses or defines. - - -There is also a generic version of these directives: - -.. directive:: describe - - This directive produces the same formatting as the specific ones explained - above but does not create index entries or cross-referencing targets. It is - used, for example, to describe the directives in this document. Example:: - - .. describe:: opcode - - Describes a Python bytecode instruction. - - -Showing code examples ---------------------- - -Examples of Python source code or interactive sessions are represented using -standard reST literal blocks. They are started by a ``::`` at the end of the -preceding paragraph and delimited by indentation. - -Representing an interactive session requires including the prompts and output -along with the Python code. No special markup is required for interactive -sessions. 
After the last line of input or output presented, there should not be -an "unused" primary prompt; this is an example of what *not* to do:: - - >>> 1 + 1 - 2 - >>> - -Syntax highlighting is handled in a smart way: - -* There is a "highlighting language" for each source file. Per default, - this is ``'python'`` as the majority of files will have to highlight Python - snippets. - -* Within Python highlighting mode, interactive sessions are recognized - automatically and highlighted appropriately. - -* The highlighting language can be changed using the ``highlightlang`` - directive, used as follows:: - - .. highlightlang:: c - - This language is used until the next ``highlightlang`` directive is - encountered. - -* The valid values for the highlighting language are: - - * ``python`` (the default) - * ``c`` - * ``rest`` - * ``none`` (no highlighting) - -* If highlighting with the current language fails, the block is not highlighted - in any way. - -Longer displays of verbatim text may be included by storing the example text in -an external file containing only plain text. The file may be included using the -``literalinclude`` directive. [1]_ For example, to include the Python source file -:file:`example.py`, use:: - - .. literalinclude:: example.py - -The file name is relative to the current file's path. Documentation-specific -include files should be placed in the ``Doc/includes`` subdirectory. - - -Inline markup -------------- - -As said before, Sphinx uses interpreted text roles to insert semantic markup in -documents. - -Variable names are an exception, they should be marked simply with ``*var*``. - -For all other roles, you have to write ``:rolename:`content```. - -.. note:: - - For all cross-referencing roles, if you prefix the content with ``!``, no - reference/hyperlink will be created. - -The following roles refer to objects in modules and are possibly hyperlinked if -a matching identifier is found: - -.. role:: mod - - The name of a module; a dotted name may be used. This should also be used for - package names. - -.. role:: func - - The name of a Python function; dotted names may be used. The role text - should include trailing parentheses to enhance readability. The parentheses - are stripped when searching for identifiers. - -.. role:: data - - The name of a module-level variable. - -.. role:: const - - The name of a "defined" constant. This may be a C-language ``#define`` - or a Python variable that is not intended to be changed. - -.. role:: class - - A class name; a dotted name may be used. - -.. role:: meth - - The name of a method of an object. The role text should include the type - name, method name and the trailing parentheses. A dotted name may be used. - -.. role:: attr - - The name of a data attribute of an object. - -.. role:: exc - - The name of an exception. A dotted name may be used. - -The name enclosed in this markup can include a module name and/or a class name. -For example, ``:func:`filter``` could refer to a function named ``filter`` in -the current module, or the built-in function of that name. In contrast, -``:func:`foo.filter``` clearly refers to the ``filter`` function in the ``foo`` -module. - -Normally, names in these roles are searched first without any further -qualification, then with the current module name prepended, then with the -current module and class name (if any) prepended. If you prefix the name with a -dot, this order is reversed. 
For example, in the documentation of the -:mod:`codecs` module, ``:func:`open``` always refers to the built-in function, -while ``:func:`.open``` refers to :func:`codecs.open`. - -A similar heuristic is used to determine whether the name is an attribute of -the currently documented class. - -The following roles create cross-references to C-language constructs if they -are defined in the API documentation: - -.. role:: cdata - - The name of a C-language variable. - -.. role:: cfunc - - The name of a C-language function. Should include trailing parentheses. - -.. role:: cmacro - - The name of a "simple" C macro, as defined above. - -.. role:: ctype - - The name of a C-language type. - - -The following roles do possibly create a cross-reference, but do not refer -to objects: - -.. role:: token - - The name of a grammar token (used in the reference manual to create links - between production displays). - -.. role:: keyword - - The name of a keyword in Python. This creates a link to a reference label - with that name, if it exists. - - -The following role creates a cross-reference to the term in the glossary: - -.. role:: term - - Reference to a term in the glossary. The glossary is created using the - ``glossary`` directive containing a definition list with terms and - definitions. It does not have to be in the same file as the ``term`` markup, - for example the Python docs have one global glossary in the ``glossary.rst`` - file. - - If you use a term that's not explained in a glossary, you'll get a warning - during build. - ---------- - -The following roles don't do anything special except formatting the text -in a different style: - -.. role:: command - - The name of an OS-level command, such as ``rm``. - -.. role:: dfn - - Mark the defining instance of a term in the text. (No index entries are - generated.) - -.. role:: envvar - - An environment variable. Index entries are generated. - -.. role:: file - - The name of a file or directory. Within the contents, you can use curly - braces to indicate a "variable" part, for example:: - - ... is installed in :file:`/usr/lib/python2.{x}/site-packages` ... - - In the built documentation, the ``x`` will be displayed differently to - indicate that it is to be replaced by the Python minor version. - -.. role:: guilabel - - Labels presented as part of an interactive user interface should be marked - using ``guilabel``. This includes labels from text-based interfaces such as - those created using :mod:`curses` or other text-based libraries. Any label - used in the interface should be marked with this role, including button - labels, window titles, field names, menu and menu selection names, and even - values in selection lists. - -.. role:: kbd - - Mark a sequence of keystrokes. What form the key sequence takes may depend - on platform- or application-specific conventions. When there are no relevant - conventions, the names of modifier keys should be spelled out, to improve - accessibility for new users and non-native speakers. For example, an - *xemacs* key sequence may be marked like ``:kbd:`C-x C-f```, but without - reference to a specific application or platform, the same sequence should be - marked as ``:kbd:`Control-x Control-f```. - -.. role:: mailheader - - The name of an RFC 822-style mail header. This markup does not imply that - the header is being used in an email message, but can be used to refer to any - header of the same "style." This is also used for headers defined by the - various MIME specifications. 
The header name should be entered in the same - way it would normally be found in practice, with the camel-casing conventions - being preferred where there is more than one common usage. For example: - ``:mailheader:`Content-Type```. - -.. role:: makevar - - The name of a :command:`make` variable. - -.. role:: manpage - - A reference to a Unix manual page including the section, - e.g. ``:manpage:`ls(1)```. - -.. role:: menuselection - - Menu selections should be marked using the ``menuselection`` role. This is - used to mark a complete sequence of menu selections, including selecting - submenus and choosing a specific operation, or any subsequence of such a - sequence. The names of individual selections should be separated by - ``-->``. - - For example, to mark the selection "Start > Programs", use this markup:: - - :menuselection:`Start --> Programs` - - When including a selection that includes some trailing indicator, such as the - ellipsis some operating systems use to indicate that the command opens a - dialog, the indicator should be omitted from the selection name. - -.. role:: mimetype - - The name of a MIME type, or a component of a MIME type (the major or minor - portion, taken alone). - -.. role:: newsgroup - - The name of a Usenet newsgroup. - -.. role:: option - - A command-line option to an executable program. The leading hyphen(s) must - be included. - -.. role:: program - - The name of an executable program. This may differ from the file name for - the executable for some platforms. In particular, the ``.exe`` (or other) - extension should be omitted for Windows programs. - -.. role:: regexp - - A regular expression. Quotes should not be included. - -.. role:: samp - - A piece of literal text, such as code. Within the contents, you can use - curly braces to indicate a "variable" part, as in ``:file:``. - - If you don't need the "variable part" indication, use the standard - ````code```` instead. - -.. role:: var - - A Python or C variable or parameter name. - - -The following roles generate external links: - -.. role:: pep - - A reference to a Python Enhancement Proposal. This generates appropriate - index entries. The text "PEP *number*\ " is generated; in the HTML output, - this text is a hyperlink to an online copy of the specified PEP. - -.. role:: rfc - - A reference to an Internet Request for Comments. This generates appropriate - index entries. The text "RFC *number*\ " is generated; in the HTML output, - this text is a hyperlink to an online copy of the specified RFC. - - -Note that there are no special roles for including hyperlinks as you can use -the standard reST markup for that purpose. - - -.. _doc-ref-role: - -Cross-linking markup --------------------- - -.. XXX add new :ref: syntax alternative - -To support cross-referencing to arbitrary sections in the documentation, the -standard reST labels are "abused" a bit: Every label must precede a section -title; and every label name must be unique throughout the entire documentation -source. - -You can then reference to these sections using the ``:ref:`label-name``` role. - -Example:: - - .. _my-reference-label: - - Section to cross-reference - -------------------------- - - This is the text of the section. - - It refers to the section itself, see :ref:`my-reference-label`. - -The ``:ref:`` invocation is replaced with the section title. - - -Paragraph-level markup ----------------------- - -These directives create short paragraphs and can be used inside information -units as well as normal text: - -.. 
directive:: note - - An especially important bit of information about an API that a user should be - aware of when using whatever bit of API the note pertains to. The content of - the directive should be written in complete sentences and include all - appropriate punctuation. - - Example:: - - .. note:: - - This function is not suitable for sending spam e-mails. - -.. directive:: warning - - An important bit of information about an API that a user should be very aware - of when using whatever bit of API the warning pertains to. The content of - the directive should be written in complete sentences and include all - appropriate punctuation. This differs from ``note`` in that it is recommended - over ``note`` for information regarding security. - -.. directive:: versionadded - - This directive documents the version of the project which added the described - feature to the library or C API. When this applies to an entire module, it - should be placed at the top of the module section before any prose. - - The first argument must be given and is the version in question; you can add - a second argument consisting of a *brief* explanation of the change. - - Example:: - - .. versionadded:: 2.5 - The `spam` parameter. - - Note that there must be no blank line between the directive head and the - explanation; this is to make these blocks visually continuous in the markup. - -.. directive:: versionchanged - - Similar to ``versionadded``, but describes when and what changed in the named - feature in some way (new parameters, changed side effects, etc.). - --------------- - -.. directive:: seealso - - Many sections include a list of references to module documentation or - external documents. These lists are created using the ``seealso`` directive. - - The ``seealso`` directive is typically placed in a section just before any - sub-sections. For the HTML output, it is shown boxed off from the main flow - of the text. - - The content of the ``seealso`` directive should be a reST definition list. - Example:: - - .. seealso:: - - Module :mod:`zipfile` - Documentation of the :mod:`zipfile` standard module. - - `GNU tar manual, Basic Tar Format `_ - Documentation for tar archive files, including GNU tar extensions. - -.. directive:: rubric - - This directive creates a paragraph heading that is not used to create a - table of contents node. It is currently used for the "Footnotes" caption. - -.. directive:: centered - - This directive creates a centered boldfaced paragraph. Use it as follows:: - - .. centered:: - - Paragraph contents. - - -Table-of-contents markup ------------------------- - -Since reST does not have facilities to interconnect several documents, or split -documents into multiple output files, Sphinx uses a custom directive to add -relations between the single files the documentation is made of, as well as -tables of contents. The ``toctree`` directive is the central element. - -.. directive:: toctree - - This directive inserts a "TOC tree" at the current location, using the - individual TOCs (including "sub-TOC trees") of the files given in the - directive body. A numeric ``maxdepth`` option may be given to indicate the - depth of the tree; by default, all levels are included. - - Consider this example (taken from the library reference index):: - - .. 
toctree:: - :maxdepth: 2 - - intro.rst - strings.rst - datatypes.rst - numeric.rst - (many more files listed here) - - This accomplishes two things: - - * Tables of contents from all those files are inserted, with a maximum depth - of two, that means one nested heading. ``toctree`` directives in those - files are also taken into account. - * Sphinx knows that the relative order of the files ``intro.rst``, - ``strings.rst`` and so forth, and it knows that they are children of the - shown file, the library index. From this information it generates "next - chapter", "previous chapter" and "parent chapter" links. - - In the end, all files included in the build process must occur in one - ``toctree`` directive; Sphinx will emit a warning if it finds a file that is - not included, because that means that this file will not be reachable through - standard navigation. - - The special file ``contents.rst`` at the root of the source directory is the - "root" of the TOC tree hierarchy; from it the "Contents" page is generated. - - -Index-generating markup ------------------------ - -Sphinx automatically creates index entries from all information units (like -functions, classes or attributes) like discussed before. - -However, there is also an explicit directive available, to make the index more -comprehensive and enable index entries in documents where information is not -mainly contained in information units, such as the language reference. - -The directive is ``index`` and contains one or more index entries. Each entry -consists of a type and a value, separated by a colon. - -For example:: - - .. index:: - single: execution; context - module: __main__ - module: sys - triple: module; search; path - -This directive contains five entries, which will be converted to entries in the -generated index which link to the exact location of the index statement (or, in -case of offline media, the corresponding page number). - -The possible entry types are: - -single - Creates a single index entry. Can be made a subentry by separating the - subentry text with a semicolon (this notation is also used below to describe - what entries are created). -pair - ``pair: loop; statement`` is a shortcut that creates two index entries, - namely ``loop; statement`` and ``statement; loop``. -triple - Likewise, ``triple: module; search; path`` is a shortcut that creates three - index entries, which are ``module; search path``, ``search; path, module`` and - ``path; module search``. -module, keyword, operator, object, exception, statement, builtin - These all create two index entries. For example, ``module: hashlib`` creates - the entries ``module; hashlib`` and ``hashlib; module``. - -For index directives containing only "single" entries, there is a shorthand -notation:: - - .. index:: BNF, grammar, syntax, notation - -This creates four index entries. - - -Grammar production displays ---------------------------- - -Special markup is available for displaying the productions of a formal grammar. -The markup is simple and does not attempt to model all aspects of BNF (or any -derived forms), but provides enough to allow context-free grammars to be -displayed in a way that causes uses of a symbol to be rendered as hyperlinks to -the definition of the symbol. There is this directive: - -.. directive:: productionlist - - This directive is used to enclose a group of productions. Each production is - given on a single line and consists of a name, separated by a colon from the - following definition. 
If the definition spans multiple lines, each - continuation line must begin with a colon placed at the same column as in the - first line. - - Blank lines are not allowed within ``productionlist`` directive arguments. - - The definition can contain token names which are marked as interpreted text - (e.g. ``sum ::= `integer` "+" `integer```) -- this generates cross-references - to the productions of these tokens. - - Note that no further reST parsing is done in the production, so that you - don't have to escape ``*`` or ``|`` characters. - - -.. XXX describe optional first parameter - -The following is an example taken from the Python Reference Manual:: - - .. productionlist:: - try_stmt: try1_stmt | try2_stmt - try1_stmt: "try" ":" `suite` - : ("except" [`expression` ["," `target`]] ":" `suite`)+ - : ["else" ":" `suite`] - : ["finally" ":" `suite`] - try2_stmt: "try" ":" `suite` - : "finally" ":" `suite` - - -Substitutions -------------- - -The documentation system provides three substitutions that are defined by default. -They are set in the build configuration file, see :ref:`doc-build-config`. - -.. describe:: |release| - - Replaced by the project release the documentation refers to. This is meant - to be the full version string including alpha/beta/release candidate tags, - e.g. ``2.5.2b3``. - -.. describe:: |version| - - Replaced by the project version the documentation refers to. This is meant to - consist only of the major and minor version parts, e.g. ``2.5``, even for - version 2.5.1. - -.. describe:: |today| - - Replaced by either today's date, or the date set in the build configuration - file. Normally has the format ``April 14, 2007``. - - -.. rubric:: Footnotes - -.. [1] There is a standard ``.. include`` directive, but it raises errors if the - file is not found. This one only emits a warning. Added: doctools/trunk/doc/markup/code.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/code.rst Sat Mar 15 10:06:04 2008 @@ -0,0 +1,94 @@ +.. highlight:: rest + +Showing code examples +--------------------- + +Examples of Python source code or interactive sessions are represented using +standard reST literal blocks. They are started by a ``::`` at the end of the +preceding paragraph and delimited by indentation. + +Representing an interactive session requires including the prompts and output +along with the Python code. No special markup is required for interactive +sessions. After the last line of input or output presented, there should not be +an "unused" primary prompt; this is an example of what *not* to do:: + + >>> 1 + 1 + 2 + >>> + +Syntax highlighting is done with `Pygments `_ (if it's +installed) and handled in a smart way: + +* There is a "highlighting language" for each source file. Per default, this is + ``'python'`` as the majority of files will have to highlight Python snippets. + +* Within Python highlighting mode, interactive sessions are recognized + automatically and highlighted appropriately. + +* The highlighting language can be changed using the ``highlight`` directive, + used as follows:: + + .. highlight:: c + + This language is used until the next ``highlight`` directive is encountered. + +* For documents that have to show snippets in different languages, there's also + a :dir:`code-block` directive that is given the highlighting language + directly:: + + .. code-block:: ruby + + Some Ruby code. + + The directive's alias name :dir:`sourcecode` works as well. 
+ +* The valid values for the highlighting language are: + + * ``none`` (no highlighting) + * ``python`` (the default) + * ``rest`` + * ``c`` + * ... and any other lexer name that Pygments supports. + +* If highlighting with the selected language fails, the block is not highlighted + in any way. + +Line numbers +^^^^^^^^^^^^ + +If installed, Pygments can generate line numbers for code blocks. For +automatically-highlighted blocks (those started by ``::``), line numbers must be +switched on in a :dir:`highlight` directive, with the ``linenothreshold`` +option:: + + .. highlight:: python + :linenothreshold: 5 + +This will produce line numbers for all code blocks longer than five lines. + +For :dir:`code-block` blocks, a ``linenos`` flag option can be given to switch +on line numbers for the individual block:: + + .. code-block:: ruby + :linenos: + + Some more Ruby code. + + +Includes +^^^^^^^^ + +Longer displays of verbatim text may be included by storing the example text in +an external file containing only plain text. The file may be included using the +``literalinclude`` directive. [1]_ For example, to include the Python source file +:file:`example.py`, use:: + + .. literalinclude:: example.py + +The file name is relative to the current file's path. + + +.. rubric:: Footnotes + +.. [1] There is a standard ``.. include`` directive, but it raises errors if the + file is not found. This one only emits a warning. Added: doctools/trunk/doc/markup/index.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/index.rst Sat Mar 15 10:06:04 2008 @@ -0,0 +1,15 @@ +.. XXX missing: glossary + +Sphinx Markup Constructs +======================== + +Sphinx adds a lot of new directives and interpreted text roles to standard reST +markup. This section contains the reference material for these facilities. + +.. toctree:: + + infounits.rst + para.rst + code.rst + inline.rst + misc.rst Added: doctools/trunk/doc/markup/infounits.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/infounits.rst Sat Mar 15 10:06:04 2008 @@ -0,0 +1,197 @@ +.. highlight:: rest + +Module-specific markup +---------------------- + +The markup described in this section is used to provide information about a +module being documented. Each module should be documented in its own file. +Normally this markup appears after the title heading of that file; a typical +file might start like this:: + + :mod:`parrot` -- Dead parrot access + =================================== + + .. module:: parrot + :platform: Unix, Windows + :synopsis: Analyze and reanimate dead parrots. + .. moduleauthor:: Eric Cleese + .. moduleauthor:: John Idle + +As you can see, the module-specific markup consists of two directives, the +``module`` directive and the ``moduleauthor`` directive. + +.. directive:: module + + This directive marks the beginning of the description of a module (or package + submodule, in which case the name should be fully qualified, including the + package name). + + The ``platform`` option, if present, is a comma-separated list of the + platforms on which the module is available (if it is available on all + platforms, the option should be omitted). The keys are short identifiers; + examples that are in use include "IRIX", "Mac", "Windows", and "Unix". It is + important to use a key which has already been used when applicable. 
+ + The ``synopsis`` option should consist of one sentence describing the + module's purpose -- it is currently only used in the Global Module Index. + + The ``deprecated`` option can be given (with no value) to mark a module as + deprecated; it will be designated as such in various locations then. + +.. directive:: moduleauthor + + The ``moduleauthor`` directive, which can appear multiple times, names the + authors of the module code, just like ``sectionauthor`` names the author(s) + of a piece of documentation. It too only produces output if the + :confval:`show_authors` configuration value is True. + + +.. note:: + + It is important to make the section title of a module-describing file + meaningful since that value will be inserted in the table-of-contents trees + in overview files. + + +Information units +----------------- + +There are a number of directives used to describe specific features provided by +modules. Each directive requires one or more signatures to provide basic +information about what is being described, and the content should be the +description. The basic version makes entries in the general index; if no index +entry is desired, you can give the directive option flag ``:noindex:``. The +following example shows all of the features of this directive type:: + + .. function:: spam(eggs) + ham(eggs) + :noindex: + + Spam or ham the foo. + +The signatures of object methods or data attributes should always include the +type name (``.. method:: FileInput.input(...)``), even if it is obvious from the +context which type they belong to; this is to enable consistent +cross-references. If you describe methods belonging to an abstract protocol, +such as "context managers", include a (pseudo-)type name too to make the +index entries more informative. + +The directives are: + +.. directive:: cfunction + + Describes a C function. The signature should be given as in C, e.g.:: + + .. cfunction:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems) + + This is also used to describe function-like preprocessor macros. The names + of the arguments should be given so they may be used in the description. + + Note that you don't have to backslash-escape asterisks in the signature, + as it is not parsed by the reST inliner. + +.. directive:: cmember + + Describes a C struct member. Example signature:: + + .. cmember:: PyObject* PyTypeObject.tp_bases + + The text of the description should include the range of values allowed, how + the value should be interpreted, and whether the value can be changed. + References to structure members in text should use the ``member`` role. + +.. directive:: cmacro + + Describes a "simple" C macro. Simple macros are macros which are used + for code expansion, but which do not take arguments so cannot be described as + functions. This is not to be used for simple constant definitions. Examples + of its use in the Python documentation include :cmacro:`PyObject_HEAD` and + :cmacro:`Py_BEGIN_ALLOW_THREADS`. + +.. directive:: ctype + + Describes a C type. The signature should just be the type name. + +.. directive:: cvar + + Describes a global C variable. The signature should include the type, such + as:: + + .. cvar:: PyObject* PyClass_Type + +.. directive:: data + + Describes global data in a module, including both variables and values used + as "defined constants." Class and object attributes are not documented + using this environment. + +.. directive:: exception + + Describes an exception class. 
The signature can, but need not include + parentheses with constructor arguments. + +.. directive:: function + + Describes a module-level function. The signature should include the + parameters, enclosing optional parameters in brackets. Default values can be + given if it enhances clarity. For example:: + + .. function:: Timer.repeat([repeat=3[, number=1000000]]) + + Object methods are not documented using this directive. Bound object methods + placed in the module namespace as part of the public interface of the module + are documented using this, as they are equivalent to normal functions for + most purposes. + + The description should include information about the parameters required and + how they are used (especially whether mutable objects passed as parameters + are modified), side effects, and possible exceptions. A small example may be + provided. + +.. directive:: class + + Describes a class. The signature can include parentheses with parameters + which will be shown as the constructor arguments. + +.. directive:: attribute + + Describes an object data attribute. The description should include + information about the type of the data to be expected and whether it may be + changed directly. + +.. directive:: method + + Describes an object method. The parameters should not include the ``self`` + parameter. The description should include similar information to that + described for ``function``. + +.. directive:: opcode + + Describes a Python bytecode instruction (this is not very useful for projects + other than Python itself). + +.. directive:: cmdoption + + Describes a command line option or switch. Option argument names should be + enclosed in angle brackets. Example:: + + .. cmdoption:: -m + + Run a module as a script. + +.. directive:: envvar + + Describes an environment variable that the documented code uses or defines. + + +There is also a generic version of these directives: + +.. directive:: describe + + This directive produces the same formatting as the specific ones explained + above but does not create index entries or cross-referencing targets. It is + used, for example, to describe the directives in this document. Example:: + + .. describe:: opcode + + Describes a Python bytecode instruction. Added: doctools/trunk/doc/markup/inline.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/inline.rst Sat Mar 15 10:06:04 2008 @@ -0,0 +1,311 @@ +.. highlight:: rest + +Inline markup +------------- + +As said before, Sphinx uses interpreted text roles to insert semantic markup in +documents. + +Variable names are an exception, they should be marked simply with ``*var*``. + +For all other roles, you have to write ``:rolename:`content```. + +.. note:: + + For all cross-referencing roles, if you prefix the content with ``!``, no + reference/hyperlink will be created. + +The following roles refer to objects in modules and are possibly hyperlinked if +a matching identifier is found: + +.. role:: mod + + The name of a module; a dotted name may be used. This should also be used for + package names. + +.. role:: func + + The name of a Python function; dotted names may be used. The role text + should include trailing parentheses to enhance readability. The parentheses + are stripped when searching for identifiers. + +.. role:: data + + The name of a module-level variable. + +.. role:: const + + The name of a "defined" constant. 
This may be a C-language ``#define`` + or a Python variable that is not intended to be changed. + +.. role:: class + + A class name; a dotted name may be used. + +.. role:: meth + + The name of a method of an object. The role text should include the type + name, method name and the trailing parentheses. A dotted name may be used. + +.. role:: attr + + The name of a data attribute of an object. + +.. role:: exc + + The name of an exception. A dotted name may be used. + +The name enclosed in this markup can include a module name and/or a class name. +For example, ``:func:`filter``` could refer to a function named ``filter`` in +the current module, or the built-in function of that name. In contrast, +``:func:`foo.filter``` clearly refers to the ``filter`` function in the ``foo`` +module. + +Normally, names in these roles are searched first without any further +qualification, then with the current module name prepended, then with the +current module and class name (if any) prepended. If you prefix the name with a +dot, this order is reversed. For example, in the documentation of the +:mod:`codecs` module, ``:func:`open``` always refers to the built-in function, +while ``:func:`.open``` refers to :func:`codecs.open`. + +A similar heuristic is used to determine whether the name is an attribute of +the currently documented class. + +The following roles create cross-references to C-language constructs if they +are defined in the API documentation: + +.. role:: cdata + + The name of a C-language variable. + +.. role:: cfunc + + The name of a C-language function. Should include trailing parentheses. + +.. role:: cmacro + + The name of a "simple" C macro, as defined above. + +.. role:: ctype + + The name of a C-language type. + + +The following roles do possibly create a cross-reference, but do not refer +to objects: + +.. role:: token + + The name of a grammar token (used in the reference manual to create links + between production displays). + +.. role:: keyword + + The name of a keyword in Python. This creates a link to a reference label + with that name, if it exists. + + +The following role creates a cross-reference to the term in the glossary: + +.. role:: term + + Reference to a term in the glossary. The glossary is created using the + ``glossary`` directive containing a definition list with terms and + definitions. It does not have to be in the same file as the ``term`` markup, + for example the Python docs have one global glossary in the ``glossary.rst`` + file. + + If you use a term that's not explained in a glossary, you'll get a warning + during build. + +--------- + +The following roles don't do anything special except formatting the text +in a different style: + +.. role:: command + + The name of an OS-level command, such as ``rm``. + +.. role:: dfn + + Mark the defining instance of a term in the text. (No index entries are + generated.) + +.. role:: envvar + + An environment variable. Index entries are generated. + +.. role:: file + + The name of a file or directory. Within the contents, you can use curly + braces to indicate a "variable" part, for example:: + + ... is installed in :file:`/usr/lib/python2.{x}/site-packages` ... + + In the built documentation, the ``x`` will be displayed differently to + indicate that it is to be replaced by the Python minor version. + +.. role:: guilabel + + Labels presented as part of an interactive user interface should be marked + using ``guilabel``. 
This includes labels from text-based interfaces such as + those created using :mod:`curses` or other text-based libraries. Any label + used in the interface should be marked with this role, including button + labels, window titles, field names, menu and menu selection names, and even + values in selection lists. + +.. role:: kbd + + Mark a sequence of keystrokes. What form the key sequence takes may depend + on platform- or application-specific conventions. When there are no relevant + conventions, the names of modifier keys should be spelled out, to improve + accessibility for new users and non-native speakers. For example, an + *xemacs* key sequence may be marked like ``:kbd:`C-x C-f```, but without + reference to a specific application or platform, the same sequence should be + marked as ``:kbd:`Control-x Control-f```. + +.. role:: mailheader + + The name of an RFC 822-style mail header. This markup does not imply that + the header is being used in an email message, but can be used to refer to any + header of the same "style." This is also used for headers defined by the + various MIME specifications. The header name should be entered in the same + way it would normally be found in practice, with the camel-casing conventions + being preferred where there is more than one common usage. For example: + ``:mailheader:`Content-Type```. + +.. role:: makevar + + The name of a :command:`make` variable. + +.. role:: manpage + + A reference to a Unix manual page including the section, + e.g. ``:manpage:`ls(1)```. + +.. role:: menuselection + + Menu selections should be marked using the ``menuselection`` role. This is + used to mark a complete sequence of menu selections, including selecting + submenus and choosing a specific operation, or any subsequence of such a + sequence. The names of individual selections should be separated by + ``-->``. + + For example, to mark the selection "Start > Programs", use this markup:: + + :menuselection:`Start --> Programs` + + When including a selection that includes some trailing indicator, such as the + ellipsis some operating systems use to indicate that the command opens a + dialog, the indicator should be omitted from the selection name. + +.. role:: mimetype + + The name of a MIME type, or a component of a MIME type (the major or minor + portion, taken alone). + +.. role:: newsgroup + + The name of a Usenet newsgroup. + +.. role:: option + + A command-line option to an executable program. The leading hyphen(s) must + be included. + +.. role:: program + + The name of an executable program. This may differ from the file name for + the executable for some platforms. In particular, the ``.exe`` (or other) + extension should be omitted for Windows programs. + +.. role:: regexp + + A regular expression. Quotes should not be included. + +.. role:: samp + + A piece of literal text, such as code. Within the contents, you can use + curly braces to indicate a "variable" part, as in ``:file:``. + + If you don't need the "variable part" indication, use the standard + ````code```` instead. + +.. role:: var + + A Python or C variable or parameter name. + + +The following roles generate external links: + +.. role:: pep + + A reference to a Python Enhancement Proposal. This generates appropriate + index entries. The text "PEP *number*\ " is generated; in the HTML output, + this text is a hyperlink to an online copy of the specified PEP. + +.. role:: rfc + + A reference to an Internet Request for Comments. This generates appropriate + index entries. 
The text "RFC *number*\ " is generated; in the HTML output, + this text is a hyperlink to an online copy of the specified RFC. + + +Note that there are no special roles for including hyperlinks as you can use +the standard reST markup for that purpose. + + +Substitutions +------------- + +The documentation system provides three substitutions that are defined by default. +They are set in the build configuration file. + +.. describe:: |release| + + Replaced by the project release the documentation refers to. This is meant + to be the full version string including alpha/beta/release candidate tags, + e.g. ``2.5.2b3``. Set by :confval:`release`. + +.. describe:: |version| + + Replaced by the project version the documentation refers to. This is meant to + consist only of the major and minor version parts, e.g. ``2.5``, even for + version 2.5.1. Set by :confval:`version`. + +.. describe:: |today| + + Replaced by either today's date, or the date set in the build configuration + file. Normally has the format ``April 14, 2007``. Set by + :confval:`today_fmt` and :confval:`today`. + + +.. _doc-ref-role: + +Cross-linking markup +-------------------- + +To support cross-referencing to arbitrary sections in the documentation, the +standard reST labels used. Of course, for this to work label names must be +unique throughout the entire documentation. There are two ways in which you can +refer to labels: + +* If you place a label directly before a section title, you can reference to it + with ``:ref:`label-name```. Example:: + + .. _my-reference-label: + + Section to cross-reference + -------------------------- + + This is the text of the section. + + It refers to the section itself, see :ref:`my-reference-label`. + + The ``:ref:`` role would then generate a link to the section, with the link + title being "Section to cross-reference". + +* Labels that aren't placed before a section title can still be referenced to, + but you must give the link an explicit title, using this syntax: ``:ref:`Link + title ```. Added: doctools/trunk/doc/markup/misc.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/misc.rst Sat Mar 15 10:06:04 2008 @@ -0,0 +1,41 @@ +.. highlight:: rest + +Miscellaneous markup +==================== + +File-wide metadata +------------------ + +reST has the concept of "field lists"; these are a sequence of fields marked up +like this:: + + :Field name: Field content + +A field list at the very top of a file is parsed as the "docinfo", which in +normal documents can be used to record the author, date of publication and +other metadata. In Sphinx, the docinfo is used as metadata, too, but not +displayed in the output. + +At the moment, only one metadata field is recognized: + +``nocomments`` + If set, the web application won't display a comment form for a page generated + from this source file. + + +Meta-information markup +----------------------- + +.. directive:: sectionauthor + + Identifies the author of the current section. The argument should include + the author's name such that it can be used for presentation and email + address. The domain name portion of the address should be lower case. + Example:: + + .. sectionauthor:: Guido van Rossum + + By default, this markup isn't reflected in the output in any way (it helps + keep track of contributions), but you can set the configuration value + :confval:`show_authors` to True to make them produce a paragraph in the + output. 
Added: doctools/trunk/doc/markup/para.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/para.rst Sat Mar 15 10:06:04 2008 @@ -0,0 +1,185 @@ +.. highlight:: rest + +Paragraph-level markup +---------------------- + +These directives create short paragraphs and can be used inside information +units as well as normal text: + +.. directive:: note + + An especially important bit of information about an API that a user should be + aware of when using whatever bit of API the note pertains to. The content of + the directive should be written in complete sentences and include all + appropriate punctuation. + + Example:: + + .. note:: + + This function is not suitable for sending spam e-mails. + +.. directive:: warning + + An important bit of information about an API that a user should be very aware + of when using whatever bit of API the warning pertains to. The content of + the directive should be written in complete sentences and include all + appropriate punctuation. This differs from ``note`` in that it is recommended + over ``note`` for information regarding security. + +.. directive:: versionadded + + This directive documents the version of the project which added the described + feature to the library or C API. When this applies to an entire module, it + should be placed at the top of the module section before any prose. + + The first argument must be given and is the version in question; you can add + a second argument consisting of a *brief* explanation of the change. + + Example:: + + .. versionadded:: 2.5 + The `spam` parameter. + + Note that there must be no blank line between the directive head and the + explanation; this is to make these blocks visually continuous in the markup. + +.. directive:: versionchanged + + Similar to ``versionadded``, but describes when and what changed in the named + feature in some way (new parameters, changed side effects, etc.). + +-------------- + +.. directive:: seealso + + Many sections include a list of references to module documentation or + external documents. These lists are created using the ``seealso`` directive. + + The ``seealso`` directive is typically placed in a section just before any + sub-sections. For the HTML output, it is shown boxed off from the main flow + of the text. + + The content of the ``seealso`` directive should be a reST definition list. + Example:: + + .. seealso:: + + Module :mod:`zipfile` + Documentation of the :mod:`zipfile` standard module. + + `GNU tar manual, Basic Tar Format `_ + Documentation for tar archive files, including GNU tar extensions. + +.. directive:: rubric + + This directive creates a paragraph heading that is not used to create a + table of contents node. It is currently used for the "Footnotes" caption. + +.. directive:: centered + + This directive creates a centered boldfaced paragraph. Use it as follows:: + + .. centered:: + + Paragraph contents. + + +Table-of-contents markup +------------------------ + +The :dir:`toctree` directive, which generates tables of contents of +subdocuments, is described in "Sphinx concepts". + +For local tables of contents, use the standard reST :dir:`contents` directive. + + +Index-generating markup +----------------------- + +Sphinx automatically creates index entries from all information units (like +functions, classes or attributes) like discussed before. 
+ +However, there is also an explicit directive available, to make the index more +comprehensive and enable index entries in documents where information is not +mainly contained in information units, such as the language reference. + +The directive is ``index`` and contains one or more index entries. Each entry +consists of a type and a value, separated by a colon. + +For example:: + + .. index:: + single: execution; context + module: __main__ + module: sys + triple: module; search; path + +This directive contains five entries, which will be converted to entries in the +generated index which link to the exact location of the index statement (or, in +case of offline media, the corresponding page number). + +The possible entry types are: + +single + Creates a single index entry. Can be made a subentry by separating the + subentry text with a semicolon (this notation is also used below to describe + what entries are created). +pair + ``pair: loop; statement`` is a shortcut that creates two index entries, + namely ``loop; statement`` and ``statement; loop``. +triple + Likewise, ``triple: module; search; path`` is a shortcut that creates three + index entries, which are ``module; search path``, ``search; path, module`` and + ``path; module search``. +module, keyword, operator, object, exception, statement, builtin + These all create two index entries. For example, ``module: hashlib`` creates + the entries ``module; hashlib`` and ``hashlib; module``. + +For index directives containing only "single" entries, there is a shorthand +notation:: + + .. index:: BNF, grammar, syntax, notation + +This creates four index entries. + + +Grammar production displays +--------------------------- + +Special markup is available for displaying the productions of a formal grammar. +The markup is simple and does not attempt to model all aspects of BNF (or any +derived forms), but provides enough to allow context-free grammars to be +displayed in a way that causes uses of a symbol to be rendered as hyperlinks to +the definition of the symbol. There is this directive: + +.. directive:: productionlist + + This directive is used to enclose a group of productions. Each production is + given on a single line and consists of a name, separated by a colon from the + following definition. If the definition spans multiple lines, each + continuation line must begin with a colon placed at the same column as in the + first line. + + Blank lines are not allowed within ``productionlist`` directive arguments. + + The definition can contain token names which are marked as interpreted text + (e.g. ``sum ::= `integer` "+" `integer```) -- this generates cross-references + to the productions of these tokens. + + Note that no further reST parsing is done in the production, so that you + don't have to escape ``*`` or ``|`` characters. + + +.. XXX describe optional first parameter + +The following is an example taken from the Python Reference Manual:: + + .. 
productionlist:: + try_stmt: try1_stmt | try2_stmt + try1_stmt: "try" ":" `suite` + : ("except" [`expression` ["," `target`]] ":" `suite`)+ + : ["else" ":" `suite`] + : ["finally" ":" `suite`] + try2_stmt: "try" ":" `suite` + : "finally" ":" `suite` From python-checkins at python.org Sat Mar 15 15:37:18 2008 From: python-checkins at python.org (collin.winter) Date: Sat, 15 Mar 2008 15:37:18 +0100 (CET) Subject: [Python-checkins] r61401 - in sandbox/trunk/2to3: fixes/fix_methodattrs.py tests/test_fixers.py Message-ID: <20080315143718.804621E4007@bag.python.org> Author: collin.winter Date: Sat Mar 15 15:37:18 2008 New Revision: 61401 Modified: sandbox/trunk/2to3/fixes/fix_methodattrs.py sandbox/trunk/2to3/tests/test_fixers.py Log: Fix two tab/space mixtures. Modified: sandbox/trunk/2to3/fixes/fix_methodattrs.py ============================================================================== --- sandbox/trunk/2to3/fixes/fix_methodattrs.py (original) +++ sandbox/trunk/2to3/fixes/fix_methodattrs.py Sat Mar 15 15:37:18 2008 @@ -19,5 +19,5 @@ def transform(self, node, results): attr = results["attr"][0] - new = MAP[attr.value] + new = MAP[attr.value] attr.replace(Name(new, prefix=attr.get_prefix())) Modified: sandbox/trunk/2to3/tests/test_fixers.py ============================================================================== --- sandbox/trunk/2to3/tests/test_fixers.py (original) +++ sandbox/trunk/2to3/tests/test_fixers.py Sat Mar 15 15:37:18 2008 @@ -2207,7 +2207,7 @@ a = "from %s import %s" % (mod, new) self.check(b, a) - s = "from foo import %s" % old + s = "from foo import %s" % old self.unchanged(s) def test_import_from_as(self): From python-checkins at python.org Sat Mar 15 17:04:46 2008 From: python-checkins at python.org (skip.montanaro) Date: Sat, 15 Mar 2008 17:04:46 +0100 (CET) Subject: [Python-checkins] r61402 - in python/trunk: Doc/library/datetime.rst Lib/_strptime.py Lib/test/test_datetime.py Lib/test/test_strptime.py Modules/datetimemodule.c Modules/timemodule.c Message-ID: <20080315160446.80E991E4007@bag.python.org> Author: skip.montanaro Date: Sat Mar 15 17:04:45 2008 New Revision: 61402 Modified: python/trunk/Doc/library/datetime.rst python/trunk/Lib/_strptime.py python/trunk/Lib/test/test_datetime.py python/trunk/Lib/test/test_strptime.py python/trunk/Modules/datetimemodule.c python/trunk/Modules/timemodule.c Log: add %f format to datetime - issue 1158 Modified: python/trunk/Doc/library/datetime.rst ============================================================================== --- python/trunk/Doc/library/datetime.rst (original) +++ python/trunk/Doc/library/datetime.rst Sat Mar 15 17:04:45 2008 @@ -1489,9 +1489,31 @@ be used, as time objects have no such values. If they're used anyway, ``1900`` is substituted for the year, and ``0`` for the month and day. -For :class:`date` objects, the format codes for hours, minutes, and seconds -should not be used, as :class:`date` objects have no such values. If they're -used anyway, ``0`` is substituted for them. +For :class:`date` objects, the format codes for hours, minutes, seconds, and +microseconds should not be used, as :class:`date` objects have no such +values. If they're used anyway, ``0`` is substituted for them. + +:class:`time` and :class:`datetime` objects support a ``%f`` format code +which expands to the number of microseconds in the object, zero-padded on +the left to six places. + +.. versionadded:: 2.6 + +For a naive object, the ``%z`` and ``%Z`` format codes are replaced by empty +strings. 
+ +For an aware object: + +``%z`` + :meth:`utcoffset` is transformed into a 5-character string of the form +HHMM or + -HHMM, where HH is a 2-digit string giving the number of UTC offset hours, and + MM is a 2-digit string giving the number of UTC offset minutes. For example, if + :meth:`utcoffset` returns ``timedelta(hours=-3, minutes=-30)``, ``%z`` is + replaced with the string ``'-0330'``. + +``%Z`` + If :meth:`tzname` returns ``None``, ``%Z`` is replaced by an empty string. + Otherwise ``%Z`` is replaced by the returned value, which must be a string. The full set of format codes supported varies across platforms, because Python calls the platform C library's :func:`strftime` function, and platform @@ -1524,6 +1546,10 @@ | ``%d`` | Day of the month as a decimal | | | | number [01,31]. | | +-----------+--------------------------------+-------+ +| ``%f`` | Microsecond as a decimal | \(1) | +| | number [0,999999], zero-padded | | +| | on the left | | ++-----------+--------------------------------+-------+ | ``%H`` | Hour (24-hour clock) as a | | | | decimal number [00,23]. | | +-----------+--------------------------------+-------+ @@ -1539,13 +1565,13 @@ | ``%M`` | Minute as a decimal number | | | | [00,59]. | | +-----------+--------------------------------+-------+ -| ``%p`` | Locale's equivalent of either | \(1) | +| ``%p`` | Locale's equivalent of either | \(2) | | | AM or PM. | | +-----------+--------------------------------+-------+ -| ``%S`` | Second as a decimal number | \(2) | +| ``%S`` | Second as a decimal number | \(3) | | | [00,61]. | | +-----------+--------------------------------+-------+ -| ``%U`` | Week number of the year | \(3) | +| ``%U`` | Week number of the year | \(4) | | | (Sunday as the first day of | | | | the week) as a decimal number | | | | [00,53]. All days in a new | | @@ -1556,7 +1582,7 @@ | ``%w`` | Weekday as a decimal number | | | | [0(Sunday),6]. | | +-----------+--------------------------------+-------+ -| ``%W`` | Week number of the year | \(3) | +| ``%W`` | Week number of the year | \(4) | | | (Monday as the first day of | | | | the week) as a decimal number | | | | [00,53]. All days in a new | | @@ -1576,7 +1602,7 @@ | ``%Y`` | Year with century as a decimal | | | | number. | | +-----------+--------------------------------+-------+ -| ``%z`` | UTC offset in the form +HHMM | \(4) | +| ``%z`` | UTC offset in the form +HHMM | \(5) | | | or -HHMM (empty string if the | | | | the object is naive). | | +-----------+--------------------------------+-------+ @@ -1589,17 +1615,22 @@ Notes: (1) + When used with the :func:`strptime` function, the ``%f`` directive + accepts from one to six digits and zero pads on the right. ``%f`` is + an extension to the set of format characters in the C standard. + +(2) When used with the :func:`strptime` function, the ``%p`` directive only affects the output hour field if the ``%I`` directive is used to parse the hour. -(2) +(3) The range really is ``0`` to ``61``; this accounts for leap seconds and the (very rare) double leap seconds. -(3) +(4) When used with the :func:`strptime` function, ``%U`` and ``%W`` are only used in calculations when the day of the week and the year are specified. -(4) +(5) For example, if :meth:`utcoffset` returns ``timedelta(hours=-3, minutes=-30)``, ``%z`` is replaced with the string ``'-0330'``. 
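(A quick illustration of the %f behaviour documented in the diff above; this is a minimal sketch that assumes a Python 2.6 build with the issue #1158 patch applied, and the sample value mirrors the one used in the patched test_datetime test.)

    # %f in strptime accepts one to six digits and pads them on the right;
    # %f in strftime always renders six zero-padded digits.
    from datetime import datetime

    dt = datetime.strptime("2004-12-01 13:02:47.197", "%Y-%m-%d %H:%M:%S.%f")
    assert dt.microsecond == 197000
    assert dt.strftime("%S.%f") == "47.197000"
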
Modified: python/trunk/Lib/_strptime.py ============================================================================== --- python/trunk/Lib/_strptime.py (original) +++ python/trunk/Lib/_strptime.py Sat Mar 15 17:04:45 2008 @@ -22,7 +22,7 @@ except: from dummy_thread import allocate_lock as _thread_allocate_lock -__all__ = ['strptime'] +__all__ = [] def _getlang(): # Figure out what the current language is set to. @@ -190,6 +190,7 @@ base.__init__({ # The " \d" part of the regex is to make %c from ANSI C work 'd': r"(?P3[0-1]|[1-2]\d|0[1-9]|[1-9]| [1-9])", + 'f': r"(?P[0-9]{1,6})", 'H': r"(?P2[0-3]|[0-1]\d|\d)", 'I': r"(?P1[0-2]|0[1-9]|[1-9])", 'j': r"(?P36[0-6]|3[0-5]\d|[1-2]\d\d|0[1-9]\d|00[1-9]|[1-9]\d|0[1-9]|[1-9])", @@ -291,7 +292,7 @@ return 1 + days_to_week + day_of_week -def strptime(data_string, format="%a %b %d %H:%M:%S %Y"): +def _strptime(data_string, format="%a %b %d %H:%M:%S %Y"): """Return a time struct based on the input string and the format string.""" global _TimeRE_cache, _regex_cache with _cache_lock: @@ -327,7 +328,7 @@ data_string[found.end():]) year = 1900 month = day = 1 - hour = minute = second = 0 + hour = minute = second = fraction = 0 tz = -1 # Default to -1 to signify that values not known; not critical to have, # though @@ -384,6 +385,11 @@ minute = int(found_dict['M']) elif group_key == 'S': second = int(found_dict['S']) + elif group_key == 'f': + s = found_dict['f'] + # Pad to always return microseconds. + s += "0" * (6 - len(s)) + fraction = int(s) elif group_key == 'A': weekday = locale_time.f_weekday.index(found_dict['A'].lower()) elif group_key == 'a': @@ -440,6 +446,9 @@ day = datetime_result.day if weekday == -1: weekday = datetime_date(year, month, day).weekday() - return time.struct_time((year, month, day, - hour, minute, second, - weekday, julian, tz)) + return (time.struct_time((year, month, day, + hour, minute, second, + weekday, julian, tz)), fraction) + +def _strptime_time(data_string, format="%a %b %d %H:%M:%S %Y"): + return _strptime(data_string, format)[0] Modified: python/trunk/Lib/test/test_datetime.py ============================================================================== --- python/trunk/Lib/test/test_datetime.py (original) +++ python/trunk/Lib/test/test_datetime.py Sat Mar 15 17:04:45 2008 @@ -1507,11 +1507,12 @@ self.failUnless(abs(from_timestamp - from_now) <= tolerance) def test_strptime(self): - import time + import _strptime - string = '2004-12-01 13:02:47' - format = '%Y-%m-%d %H:%M:%S' - expected = self.theclass(*(time.strptime(string, format)[0:6])) + string = '2004-12-01 13:02:47.197' + format = '%Y-%m-%d %H:%M:%S.%f' + result, frac = _strptime._strptime(string, format) + expected = self.theclass(*(result[0:6]+(frac,))) got = self.theclass.strptime(string, format) self.assertEqual(expected, got) @@ -1539,9 +1540,9 @@ def test_more_strftime(self): # This tests fields beyond those tested by the TestDate.test_strftime. - t = self.theclass(2004, 12, 31, 6, 22, 33) - self.assertEqual(t.strftime("%m %d %y %S %M %H %j"), - "12 31 04 33 22 06 366") + t = self.theclass(2004, 12, 31, 6, 22, 33, 47) + self.assertEqual(t.strftime("%m %d %y %f %S %M %H %j"), + "12 31 04 000047 33 22 06 366") def test_extract(self): dt = self.theclass(2002, 3, 4, 18, 45, 3, 1234) @@ -1814,7 +1815,7 @@ def test_strftime(self): t = self.theclass(1, 2, 3, 4) - self.assertEqual(t.strftime('%H %M %S'), "01 02 03") + self.assertEqual(t.strftime('%H %M %S %f'), "01 02 03 000004") # A naive object replaces %z and %Z with empty strings. 
self.assertEqual(t.strftime("'%z' '%Z'"), "'' ''") Modified: python/trunk/Lib/test/test_strptime.py ============================================================================== --- python/trunk/Lib/test/test_strptime.py (original) +++ python/trunk/Lib/test/test_strptime.py Sat Mar 15 17:04:45 2008 @@ -208,11 +208,11 @@ def test_ValueError(self): # Make sure ValueError is raised when match fails or format is bad - self.assertRaises(ValueError, _strptime.strptime, data_string="%d", + self.assertRaises(ValueError, _strptime._strptime_time, data_string="%d", format="%A") for bad_format in ("%", "% ", "%e"): try: - _strptime.strptime("2005", bad_format) + _strptime._strptime_time("2005", bad_format) except ValueError: continue except Exception, err: @@ -223,12 +223,12 @@ def test_unconverteddata(self): # Check ValueError is raised when there is unconverted data - self.assertRaises(ValueError, _strptime.strptime, "10 12", "%m") + self.assertRaises(ValueError, _strptime._strptime_time, "10 12", "%m") def helper(self, directive, position): """Helper fxn in testing.""" strf_output = time.strftime("%" + directive, self.time_tuple) - strp_output = _strptime.strptime(strf_output, "%" + directive) + strp_output = _strptime._strptime_time(strf_output, "%" + directive) self.failUnless(strp_output[position] == self.time_tuple[position], "testing of '%s' directive failed; '%s' -> %s != %s" % (directive, strf_output, strp_output[position], @@ -241,7 +241,7 @@ # Must also make sure %y values are correct for bounds set by Open Group for century, bounds in ((1900, ('69', '99')), (2000, ('00', '68'))): for bound in bounds: - strp_output = _strptime.strptime(bound, '%y') + strp_output = _strptime._strptime_time(bound, '%y') expected_result = century + int(bound) self.failUnless(strp_output[0] == expected_result, "'y' test failed; passed in '%s' " @@ -260,7 +260,7 @@ # Test hour directives self.helper('H', 3) strf_output = time.strftime("%I %p", self.time_tuple) - strp_output = _strptime.strptime(strf_output, "%I %p") + strp_output = _strptime._strptime_time(strf_output, "%I %p") self.failUnless(strp_output[3] == self.time_tuple[3], "testing of '%%I %%p' directive failed; '%s' -> %s != %s" % (strf_output, strp_output[3], self.time_tuple[3])) @@ -273,6 +273,12 @@ # Test second directives self.helper('S', 5) + def test_fraction(self): + import datetime + now = datetime.datetime.now() + tup, frac = _strptime._strptime(str(now), format="%Y-%m-%d %H:%M:%S.%f") + self.assertEqual(frac, now.microsecond) + def test_weekday(self): # Test weekday directives for directive in ('A', 'a', 'w'): @@ -287,16 +293,16 @@ # When gmtime() is used with %Z, entire result of strftime() is empty. # Check for equal timezone names deals with bad locale info when this # occurs; first found in FreeBSD 4.4. 
- strp_output = _strptime.strptime("UTC", "%Z") + strp_output = _strptime._strptime_time("UTC", "%Z") self.failUnlessEqual(strp_output.tm_isdst, 0) - strp_output = _strptime.strptime("GMT", "%Z") + strp_output = _strptime._strptime_time("GMT", "%Z") self.failUnlessEqual(strp_output.tm_isdst, 0) if sys.platform == "mac": # Timezones don't really work on MacOS9 return time_tuple = time.localtime() strf_output = time.strftime("%Z") #UTC does not have a timezone - strp_output = _strptime.strptime(strf_output, "%Z") + strp_output = _strptime._strptime_time(strf_output, "%Z") locale_time = _strptime.LocaleTime() if time.tzname[0] != time.tzname[1] or not time.daylight: self.failUnless(strp_output[8] == time_tuple[8], @@ -320,7 +326,7 @@ original_daylight = time.daylight time.tzname = (tz_name, tz_name) time.daylight = 1 - tz_value = _strptime.strptime(tz_name, "%Z")[8] + tz_value = _strptime._strptime_time(tz_name, "%Z")[8] self.failUnlessEqual(tz_value, -1, "%s lead to a timezone value of %s instead of -1 when " "time.daylight set to %s and passing in %s" % @@ -347,7 +353,7 @@ def test_percent(self): # Make sure % signs are handled properly strf_output = time.strftime("%m %% %Y", self.time_tuple) - strp_output = _strptime.strptime(strf_output, "%m %% %Y") + strp_output = _strptime._strptime_time(strf_output, "%m %% %Y") self.failUnless(strp_output[0] == self.time_tuple[0] and strp_output[1] == self.time_tuple[1], "handling of percent sign failed") @@ -355,17 +361,17 @@ def test_caseinsensitive(self): # Should handle names case-insensitively. strf_output = time.strftime("%B", self.time_tuple) - self.failUnless(_strptime.strptime(strf_output.upper(), "%B"), + self.failUnless(_strptime._strptime_time(strf_output.upper(), "%B"), "strptime does not handle ALL-CAPS names properly") - self.failUnless(_strptime.strptime(strf_output.lower(), "%B"), + self.failUnless(_strptime._strptime_time(strf_output.lower(), "%B"), "strptime does not handle lowercase names properly") - self.failUnless(_strptime.strptime(strf_output.capitalize(), "%B"), + self.failUnless(_strptime._strptime_time(strf_output.capitalize(), "%B"), "strptime does not handle capword names properly") def test_defaults(self): # Default return value should be (1900, 1, 1, 0, 0, 0, 0, 1, 0) defaults = (1900, 1, 1, 0, 0, 0, 0, 1, -1) - strp_output = _strptime.strptime('1', '%m') + strp_output = _strptime._strptime_time('1', '%m') self.failUnless(strp_output == defaults, "Default values for strptime() are incorrect;" " %s != %s" % (strp_output, defaults)) @@ -377,7 +383,7 @@ # escaped. # Test instigated by bug #796149 . 
need_escaping = ".^$*+?{}\[]|)(" - self.failUnless(_strptime.strptime(need_escaping, need_escaping)) + self.failUnless(_strptime._strptime_time(need_escaping, need_escaping)) class Strptime12AMPMTests(unittest.TestCase): """Test a _strptime regression in '%I %p' at 12 noon (12 PM)""" @@ -386,8 +392,8 @@ eq = self.assertEqual eq(time.strptime('12 PM', '%I %p')[3], 12) eq(time.strptime('12 AM', '%I %p')[3], 0) - eq(_strptime.strptime('12 PM', '%I %p')[3], 12) - eq(_strptime.strptime('12 AM', '%I %p')[3], 0) + eq(_strptime._strptime_time('12 PM', '%I %p')[3], 12) + eq(_strptime._strptime_time('12 AM', '%I %p')[3], 0) class JulianTests(unittest.TestCase): @@ -397,7 +403,7 @@ eq = self.assertEqual for i in range(1, 367): # use 2004, since it is a leap year, we have 366 days - eq(_strptime.strptime('%d 2004' % i, '%j %Y')[7], i) + eq(_strptime._strptime_time('%d 2004' % i, '%j %Y')[7], i) class CalculationTests(unittest.TestCase): """Test that strptime() fills in missing info correctly""" @@ -408,7 +414,7 @@ def test_julian_calculation(self): # Make sure that when Julian is missing that it is calculated format_string = "%Y %m %d %H %M %S %w %Z" - result = _strptime.strptime(time.strftime(format_string, self.time_tuple), + result = _strptime._strptime_time(time.strftime(format_string, self.time_tuple), format_string) self.failUnless(result.tm_yday == self.time_tuple.tm_yday, "Calculation of tm_yday failed; %s != %s" % @@ -417,7 +423,7 @@ def test_gregorian_calculation(self): # Test that Gregorian date can be calculated from Julian day format_string = "%Y %H %M %S %w %j %Z" - result = _strptime.strptime(time.strftime(format_string, self.time_tuple), + result = _strptime._strptime_time(time.strftime(format_string, self.time_tuple), format_string) self.failUnless(result.tm_year == self.time_tuple.tm_year and result.tm_mon == self.time_tuple.tm_mon and @@ -431,7 +437,7 @@ def test_day_of_week_calculation(self): # Test that the day of the week is calculated as needed format_string = "%Y %m %d %H %S %j %Z" - result = _strptime.strptime(time.strftime(format_string, self.time_tuple), + result = _strptime._strptime_time(time.strftime(format_string, self.time_tuple), format_string) self.failUnless(result.tm_wday == self.time_tuple.tm_wday, "Calculation of day of the week failed;" @@ -445,7 +451,7 @@ format_string = "%%Y %%%s %%w" % directive dt_date = datetime_date(*ymd_tuple) strp_input = dt_date.strftime(format_string) - strp_output = _strptime.strptime(strp_input, format_string) + strp_output = _strptime._strptime_time(strp_input, format_string) self.failUnless(strp_output[:3] == ymd_tuple, "%s(%s) test failed w/ '%s': %s != %s (%s != %s)" % (test_reason, directive, strp_input, @@ -484,11 +490,11 @@ def test_time_re_recreation(self): # Make sure cache is recreated when current locale does not match what # cached object was created with. 
- _strptime.strptime("10", "%d") - _strptime.strptime("2005", "%Y") + _strptime._strptime_time("10", "%d") + _strptime._strptime_time("2005", "%Y") _strptime._TimeRE_cache.locale_time.lang = "Ni" original_time_re = id(_strptime._TimeRE_cache) - _strptime.strptime("10", "%d") + _strptime._strptime_time("10", "%d") self.failIfEqual(original_time_re, id(_strptime._TimeRE_cache)) self.failUnlessEqual(len(_strptime._regex_cache), 1) @@ -502,7 +508,7 @@ while len(_strptime._regex_cache) <= _strptime._CACHE_MAX_SIZE: _strptime._regex_cache[bogus_key] = None bogus_key += 1 - _strptime.strptime("10", "%d") + _strptime._strptime_time("10", "%d") self.failUnlessEqual(len(_strptime._regex_cache), 1) def test_new_localetime(self): @@ -510,7 +516,7 @@ # is created. locale_time_id = id(_strptime._TimeRE_cache.locale_time) _strptime._TimeRE_cache.locale_time.lang = "Ni" - _strptime.strptime("10", "%d") + _strptime._strptime_time("10", "%d") self.failIfEqual(locale_time_id, id(_strptime._TimeRE_cache.locale_time)) @@ -522,13 +528,13 @@ except locale.Error: return try: - _strptime.strptime('10', '%d') + _strptime._strptime_time('10', '%d') # Get id of current cache object. first_time_re_id = id(_strptime._TimeRE_cache) try: # Change the locale and force a recreation of the cache. locale.setlocale(locale.LC_TIME, ('de_DE', 'UTF8')) - _strptime.strptime('10', '%d') + _strptime._strptime_time('10', '%d') # Get the new cache object's id. second_time_re_id = id(_strptime._TimeRE_cache) # They should not be equal. Modified: python/trunk/Modules/datetimemodule.c ============================================================================== --- python/trunk/Modules/datetimemodule.c (original) +++ python/trunk/Modules/datetimemodule.c Sat Mar 15 17:04:45 2008 @@ -1130,10 +1130,24 @@ return 0; } +static PyObject * +make_freplacement(PyObject *object) +{ + char freplacement[7]; + if (PyTime_Check(object)) + sprintf(freplacement, "%06d", TIME_GET_MICROSECOND(object)); + else if (PyDateTime_Check(object)) + sprintf(freplacement, "%06d", DATE_GET_MICROSECOND(object)); + else + sprintf(freplacement, "%06d", 0); + + return PyString_FromStringAndSize(freplacement, strlen(freplacement)); +} + /* I sure don't want to reproduce the strftime code from the time module, * so this imports the module and calls it. All the hair is due to - * giving special meanings to the %z and %Z format codes via a preprocessing - * step on the format string. + * giving special meanings to the %z, %Z and %f format codes via a + * preprocessing step on the format string. * tzinfoarg is the argument to pass to the object's tzinfo method, if * needed. */ @@ -1145,6 +1159,7 @@ PyObject *zreplacement = NULL; /* py string, replacement for %z */ PyObject *Zreplacement = NULL; /* py string, replacement for %Z */ + PyObject *freplacement = NULL; /* py string, replacement for %f */ char *pin; /* pointer to next char in input format */ char ch; /* next char in input format */ @@ -1186,11 +1201,11 @@ } } - /* Scan the input format, looking for %z and %Z escapes, building + /* Scan the input format, looking for %z/%Z/%f escapes, building * a new format. Since computing the replacements for those codes * is expensive, don't unless they're actually used. 
*/ - totalnew = PyString_Size(format) + 1; /* realistic if no %z/%Z */ + totalnew = PyString_Size(format) + 1; /* realistic if no %z/%Z/%f */ newfmt = PyString_FromStringAndSize(NULL, totalnew); if (newfmt == NULL) goto Done; pnew = PyString_AsString(newfmt); @@ -1272,6 +1287,18 @@ ptoappend = PyString_AS_STRING(Zreplacement); ntoappend = PyString_GET_SIZE(Zreplacement); } + else if (ch == 'f') { + /* format microseconds */ + if (freplacement == NULL) { + freplacement = make_freplacement(object); + if (freplacement == NULL) + goto Done; + } + assert(freplacement != NULL); + assert(PyString_Check(freplacement)); + ptoappend = PyString_AS_STRING(freplacement); + ntoappend = PyString_GET_SIZE(freplacement); + } else { /* percent followed by neither z nor Z */ ptoappend = pin - 2; @@ -1313,6 +1340,7 @@ Py_DECREF(time); } Done: + Py_XDECREF(freplacement); Py_XDECREF(zreplacement); Py_XDECREF(Zreplacement); Py_XDECREF(newfmt); @@ -3853,43 +3881,69 @@ static PyObject * datetime_strptime(PyObject *cls, PyObject *args) { - PyObject *result = NULL, *obj, *module; + static PyObject *module = NULL; + PyObject *result = NULL, *obj, *st = NULL, *frac = NULL; const char *string, *format; if (!PyArg_ParseTuple(args, "ss:strptime", &string, &format)) return NULL; - if ((module = PyImport_ImportModuleNoBlock("time")) == NULL) + if (module == NULL && + (module = PyImport_ImportModuleNoBlock("_strptime")) == NULL) return NULL; - obj = PyObject_CallMethod(module, "strptime", "ss", string, format); - Py_DECREF(module); + /* _strptime._strptime returns a two-element tuple. The first + element is a time.struct_time object. The second is the + microseconds (which are not defined for time.struct_time). */ + obj = PyObject_CallMethod(module, "_strptime", "ss", string, format); if (obj != NULL) { int i, good_timetuple = 1; - long int ia[6]; - if (PySequence_Check(obj) && PySequence_Size(obj) >= 6) - for (i=0; i < 6; i++) { - PyObject *p = PySequence_GetItem(obj, i); - if (p == NULL) { - Py_DECREF(obj); - return NULL; + long int ia[7]; + if (PySequence_Check(obj) && PySequence_Size(obj) == 2) { + st = PySequence_GetItem(obj, 0); + frac = PySequence_GetItem(obj, 1); + if (st == NULL || frac == NULL) + good_timetuple = 0; + /* copy y/m/d/h/m/s values out of the + time.struct_time */ + if (good_timetuple && + PySequence_Check(st) && + PySequence_Size(st) >= 6) { + for (i=0; i < 6; i++) { + PyObject *p = PySequence_GetItem(st, i); + if (p == NULL) { + good_timetuple = 0; + break; + } + if (PyInt_Check(p)) + ia[i] = PyInt_AsLong(p); + else + good_timetuple = 0; + Py_DECREF(p); } - if (PyInt_Check(p)) - ia[i] = PyInt_AsLong(p); - else - good_timetuple = 0; - Py_DECREF(p); } + else + good_timetuple = 0; + /* follow that up with a little dose of microseconds */ + if (PyInt_Check(frac)) + ia[6] = PyInt_AsLong(frac); + else + good_timetuple = 0; + } else good_timetuple = 0; if (good_timetuple) - result = PyObject_CallFunction(cls, "iiiiii", - ia[0], ia[1], ia[2], ia[3], ia[4], ia[5]); + result = PyObject_CallFunction(cls, "iiiiiii", + ia[0], ia[1], ia[2], + ia[3], ia[4], ia[5], + ia[6]); else PyErr_SetString(PyExc_ValueError, - "unexpected value from time.strptime"); - Py_DECREF(obj); + "unexpected value from _strptime._strptime"); } + Py_XDECREF(obj); + Py_XDECREF(st); + Py_XDECREF(frac); return result; } Modified: python/trunk/Modules/timemodule.c ============================================================================== --- python/trunk/Modules/timemodule.c (original) +++ python/trunk/Modules/timemodule.c Sat Mar 15 
17:04:45 2008 @@ -520,7 +520,7 @@ if (!strptime_module) return NULL; - strptime_result = PyObject_CallMethod(strptime_module, "strptime", "O", args); + strptime_result = PyObject_CallMethod(strptime_module, "_strptime_time", "O", args); Py_DECREF(strptime_module); return strptime_result; } From python-checkins at python.org Sat Mar 15 17:07:11 2008 From: python-checkins at python.org (skip.montanaro) Date: Sat, 15 Mar 2008 17:07:11 +0100 (CET) Subject: [Python-checkins] r61403 - python/trunk/Misc/NEWS Message-ID: <20080315160711.CE8F31E4007@bag.python.org> Author: skip.montanaro Date: Sat Mar 15 17:07:11 2008 New Revision: 61403 Modified: python/trunk/Misc/NEWS Log: . Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 15 17:07:11 2008 @@ -21,6 +21,10 @@ Library ------- +- Issue #1158: add %f format (fractions of a second represented as + microseconds) to datetime objects. Understood by both strptime and + strftime. + - Issue #705836: struct.pack(">f", x) now raises OverflowError on all platforms when x is too large to fit into an IEEE 754 float; previously it only raised OverflowError on non IEEE 754 platforms. From buildbot at python.org Sat Mar 15 17:58:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 16:58:55 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080315165855.D15551E4007@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3012 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,skip.montanaro BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_xmlrpc make: *** [buildbottest] Error 1 sincerely, -The Buildbot From ncoghlan at gmail.com Sat Mar 15 19:37:59 2008 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 16 Mar 2008 04:37:59 +1000 Subject: [Python-checkins] r61403 - python/trunk/Misc/NEWS In-Reply-To: <20080315160711.CE8F31E4007@bag.python.org> References: <20080315160711.CE8F31E4007@bag.python.org> Message-ID: <47DC1787.9020707@gmail.com> skip.montanaro wrote: > Author: skip.montanaro > Date: Sat Mar 15 17:07:11 2008 > New Revision: 
61403 > > Modified: > python/trunk/Misc/NEWS > Log: > . > > > Modified: python/trunk/Misc/NEWS > ============================================================================== > --- python/trunk/Misc/NEWS (original) > +++ python/trunk/Misc/NEWS Sat Mar 15 17:07:11 2008 > @@ -21,6 +21,10 @@ > Library > ------- > > +- Issue #1158: add %f format (fractions of a second represented as > + microseconds) to datetime objects. Understood by both strptime and > + strftime. %f makes me think femtoseconds :) Any particular reason we can't use '%u' to align with the convention of abbreviating microseconds as 'us' when a character encoding doesn't provide convenient access to the Greek letter mu? (e.g. ASCII) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From python-checkins at python.org Sat Mar 15 21:02:05 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sat, 15 Mar 2008 21:02:05 +0100 (CET) Subject: [Python-checkins] r61404 - in python/trunk/Lib: numbers.py test/test_abstract_numbers.py Message-ID: <20080315200205.0858D1E4016@bag.python.org> Author: raymond.hettinger Date: Sat Mar 15 21:02:04 2008 New Revision: 61404 Modified: python/trunk/Lib/numbers.py python/trunk/Lib/test/test_abstract_numbers.py Log: Removed Exact/Inexact after discussion with Yasskin. Unlike Scheme where exactness is implemented as taints, the Python implementation associated exactness with data types. This created inheritance issues (making an exact subclass of floats would result in the subclass having both an explicit Exact registration and an inherited Inexact registration). This was a problem for the decimal module which was designed to span both exact and inexact arithmetic. There was also a question of use cases and no examples were found where ABCs for exactness could be used to improve code. One other issue was having separate tags for both the affirmative and negative cases. This is at odds with the approach taken elsewhere in the Python (i.e. we don't have an ABC both Hashable and Unhashable). Modified: python/trunk/Lib/numbers.py ============================================================================== --- python/trunk/Lib/numbers.py (original) +++ python/trunk/Lib/numbers.py Sat Mar 15 21:02:04 2008 @@ -8,10 +8,7 @@ from __future__ import division from abc import ABCMeta, abstractmethod, abstractproperty -__all__ = ["Number", "Exact", "Inexact", - "Complex", "Real", "Rational", "Integral", - ] - +__all__ = ["Number", "Complex", "Real", "Rational", "Integral"] class Number(object): """All numbers inherit from this class. @@ -22,60 +19,13 @@ __metaclass__ = ABCMeta -class Exact(Number): - """Operations on instances of this type are exact. - - As long as the result of a homogenous operation is of the same - type, you can assume that it was computed exactly, and there are - no round-off errors. Laws like commutativity and associativity - hold. - """ - -Exact.register(int) -Exact.register(long) - - -class Inexact(Number): - """Operations on instances of this type are inexact. - - Given X, an instance of Inexact, it is possible that (X + -X) + 3 - == 3, but X + (-X + 3) == 0. The exact form this error takes will - vary by type, but it's generally unsafe to compare this type for - equality. 
- """ - -Inexact.register(complex) -Inexact.register(float) -# Inexact.register(decimal.Decimal) - - ## Notes on Decimal ## ---------------- ## Decimal has all of the methods specified by the Real abc, but it should ## not be registered as a Real because decimals do not interoperate with -## binary floats. -## -## Decimal has some of the characteristics of Integrals. It provides -## logical operations but not as operators. The logical operations only apply -## to a subset of decimals (those that are non-negative, have a zero exponent, -## and have digits that are only 0 or 1). It does provide __long__() and -## a three argument form of __pow__ that includes exactness guarantees. -## It does not provide an __index__() method. -## -## Depending on context, decimal operations may be exact or inexact. -## -## When decimal is run in a context with small precision and automatic rounding, -## it is Inexact. See the "Floating point notes" section of the decimal docs -## for an example of losing the associative and distributive properties of -## addition. -## -## When decimal is used for high precision integer arithmetic, it is Exact. -## When the decimal used as fixed-point, it is Exact. -## When it is run with sufficient precision, it is Exact. -## When the decimal.Inexact trap is set, decimal operations are Exact. -## For an example, see the float_to_decimal() recipe in the "Decimal FAQ" -## section of the docs -- it shows an how traps are used in conjunction -## with variable precision to reliably achieve exact results. +## binary floats (i.e. Decimal('3.14') + 2.71828 is undefined). But, +## abstract reals are expected to interoperate (i.e. R1 + R2 should be +## expected to work if R1 and R2 are both Reals). class Complex(Number): """Complex defines the operations that work on the builtin complex type. Modified: python/trunk/Lib/test/test_abstract_numbers.py ============================================================================== --- python/trunk/Lib/test/test_abstract_numbers.py (original) +++ python/trunk/Lib/test/test_abstract_numbers.py Sat Mar 15 21:02:04 2008 @@ -4,7 +4,6 @@ import operator import unittest from numbers import Complex, Real, Rational, Integral -from numbers import Exact, Inexact from numbers import Number from test import test_support @@ -12,8 +11,6 @@ def test_int(self): self.failUnless(issubclass(int, Integral)) self.failUnless(issubclass(int, Complex)) - self.failUnless(issubclass(int, Exact)) - self.failIf(issubclass(int, Inexact)) self.assertEqual(7, int(7).real) self.assertEqual(0, int(7).imag) @@ -24,8 +21,6 @@ def test_long(self): self.failUnless(issubclass(long, Integral)) self.failUnless(issubclass(long, Complex)) - self.failUnless(issubclass(long, Exact)) - self.failIf(issubclass(long, Inexact)) self.assertEqual(7, long(7).real) self.assertEqual(0, long(7).imag) @@ -36,8 +31,6 @@ def test_float(self): self.failIf(issubclass(float, Rational)) self.failUnless(issubclass(float, Real)) - self.failIf(issubclass(float, Exact)) - self.failUnless(issubclass(float, Inexact)) self.assertEqual(7.3, float(7.3).real) self.assertEqual(0, float(7.3).imag) @@ -46,8 +39,6 @@ def test_complex(self): self.failIf(issubclass(complex, Real)) self.failUnless(issubclass(complex, Complex)) - self.failIf(issubclass(complex, Exact)) - self.failUnless(issubclass(complex, Inexact)) c1, c2 = complex(3, 2), complex(4,1) # XXX: This is not ideal, but see the comment in math_trunc(). 
From buildbot at python.org Sat Mar 15 21:27:10 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 20:27:10 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080315202710.A033A1E4016@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/87 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "../lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "S:\buildbots\python\trunk.nelson-windows\build\lib\test\test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "S:\buildbots\python\trunk.nelson-windows\build\lib\numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "../lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "S:\buildbots\python\trunk.nelson-windows\build\lib\test\test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "S:\buildbots\python\trunk.nelson-windows\build\lib\fractions.py", line 8, in import numbers File "S:\buildbots\python\trunk.nelson-windows\build\lib\numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "../lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "S:\buildbots\python\trunk.nelson-windows\build\lib\test\test_fractions.py", line 7, in import fractions File "S:\buildbots\python\trunk.nelson-windows\build\lib\fractions.py", line 8, in import numbers File "S:\buildbots\python\trunk.nelson-windows\build\lib\numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined sincerely, -The Buildbot From buildbot at python.org Sat Mar 15 21:29:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 20:29:12 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080315202912.63C401E4016@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1048 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "../lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "C:\buildbot\work\trunk.heller-windows\build\lib\numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "../lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "C:\buildbot\work\trunk.heller-windows\build\lib\fractions.py", line 8, in import numbers File "C:\buildbot\work\trunk.heller-windows\build\lib\numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "../lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_fractions.py", line 7, in import fractions File "C:\buildbot\work\trunk.heller-windows\build\lib\fractions.py", line 8, in import numbers File "C:\buildbot\work\trunk.heller-windows\build\lib\numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined sincerely, -The Buildbot From buildbot at python.org Sat Mar 15 21:30:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 20:30:25 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080315203025.B0A8C1E4029@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/358 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') 4 tests failed: test_abstract_numbers test_builtin test_fractions test_urllibnet Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/fractions.py", line 8, in import numbers File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_fractions.py", line 7, in import fractions File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/fractions.py", line 8, in import numbers File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 15 21:37:50 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sat, 15 Mar 2008 21:37:50 +0100 (CET) Subject: [Python-checkins] r61405 - python/trunk/Lib/numbers.py Message-ID: <20080315203750.CBB721E4016@bag.python.org> Author: raymond.hettinger Date: Sat Mar 15 21:37:50 2008 New Revision: 61405 Modified: python/trunk/Lib/numbers.py Log: Zap one more use of Exact/Inexact. 
Modified: python/trunk/Lib/numbers.py ============================================================================== --- python/trunk/Lib/numbers.py (original) +++ python/trunk/Lib/numbers.py Sat Mar 15 21:37:50 2008 @@ -259,7 +259,7 @@ Real.register(float) -class Rational(Real, Exact): +class Rational(Real): """.numerator and .denominator should be in lowest terms.""" @abstractproperty From buildbot at python.org Sat Mar 15 21:41:50 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 20:41:50 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080315204150.7F42F1E4016@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/986 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_fractions.py", line 7, in import fractions File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From nnorwitz at gmail.com Sat Mar 15 22:08:32 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 15 Mar 2008 16:08:32 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (3) Message-ID: <20080315210832.GA29332@python.psfb.org> 311 tests OK. 
3 tests failed: test_abstract_numbers test_builtin test_fractions 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test test_builtin crashed -- : name 'Exact' is not defined test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test test_abstract_numbers crashed -- : name 'Exact' is not defined test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test test_fractions crashed -- : name 'Exact' is not defined test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools 
test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 311 tests OK. 3 tests failed: test_abstract_numbers test_builtin test_fractions 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [554837 refs] From buildbot at python.org Sat Mar 15 21:46:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 20:46:02 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080315204602.3C0901E4016@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/494 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_fractions.py", line 7, in import fractions File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 15 21:50:52 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 20:50:52 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080315205052.B65D91E4017@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. 
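The traceback above, and the near-identical ones in the reports that follow, all bottom out at the same line: Lib/numbers.py line 262 defines class Rational(Real, Exact), but the Exact ABC no longer exists, so numbers fails at class-creation time and everything that imports it, directly or via fractions, fails with it. A minimal sketch of that failure mode, using stand-in names rather than the real numbers.py code:

    # Stand-ins for the classes in Lib/numbers.py; only the failure
    # pattern is real, the names here are illustrative.
    class Real(object):
        pass

    try:
        # A class statement evaluates its base classes immediately, so a
        # base name that no longer exists raises NameError at import time.
        class Rational(Real, Exact):
            pass
    except NameError as exc:
        print(exc)   # name 'Exact' is not defined

That is also why test_builtin and test_fractions crash alongside test_abstract_numbers: both import fractions, which imports numbers.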
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/2953 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/fractions.py", line 8, in import numbers File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_fractions.py", line 7, in import fractions File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/fractions.py", line 8, in import numbers File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined sincerely, -The Buildbot From nnorwitz at gmail.com Sat Mar 15 22:17:02 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 15 Mar 2008 16:17:02 -0500 Subject: [Python-checkins] Python Regression Test Failures opt (3) Message-ID: <20080315211702.GA9265@python.psfb.org> 311 tests OK. 
3 tests failed: test_abstract_numbers test_builtin test_fractions 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test test_builtin crashed -- : name 'Exact' is not defined test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test test_abstract_numbers crashed -- : name 'Exact' is not defined test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils [10077 refs] test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test test_fractions crashed -- : name 'Exact' is not defined test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen 
test_itertools test_largefile test_linuxaudiodev test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 311 tests OK. 3 tests failed: test_abstract_numbers test_builtin test_fractions 28 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_gl test_imageop test_imgfile test_ioctl test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [554427 refs] From buildbot at python.org Sat Mar 15 22:10:32 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 21:10:32 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080315211032.2BCD91E4016@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/177 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_fractions.py", line 7, in import fractions File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 15 22:15:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 21:15:01 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080315211501.605DA1E4017@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2681 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 5 tests failed: test_abstract_numbers test_asynchat test_builtin test_fractions test_smtplib Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/fractions.py", line 8, in import numbers File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_fractions.py", line 7, in import fractions File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/fractions.py", line 8, in import numbers File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined sincerely, -The Buildbot From buildbot at python.org Sat Mar 15 22:41:16 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 21:41:16 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD trunk Message-ID: <20080315214116.62D9A1E4016@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%20trunk/builds/725 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-freebsd Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/fractions.py", line 8, in import numbers File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/test/test_fractions.py", line 7, in import fractions File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/fractions.py", line 8, in import numbers File "/usr/home/db3l/buildarea/trunk.bolen-freebsd/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined sincerely, -The Buildbot From buildbot at python.org Sat Mar 15 23:00:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 22:00:53 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080315220053.6B7551E4016@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/344 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_fractions.py", line 7, in import fractions File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 15 23:03:18 2008 From: python-checkins at python.org (neal.norwitz) Date: Sat, 15 Mar 2008 23:03:18 +0100 (CET) Subject: [Python-checkins] r61406 - in python/trunk: Misc/NEWS Python/compile.c Message-ID: <20080315220318.E43681E4016@bag.python.org> Author: neal.norwitz Date: Sat Mar 15 23:03:18 2008 New Revision: 61406 Modified: python/trunk/Misc/NEWS python/trunk/Python/compile.c Log: Add a warning for code like: assert (0, 'message') An empty tuple does not create a warning. While questionable usage: assert (), 'message' should not display a warning. Tested manually. The warning message could be improved. Feel free to update it. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 15 23:03:18 2008 @@ -12,6 +12,8 @@ Core and builtins ----------------- +- Add a warning when asserting a non-empty tuple which is always true. 
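The NEWS entry just above summarizes what r61406 does: the compiler now emits a SyntaxWarning when an assert's condition is a non-empty tuple, because such an assertion can never fail. A short illustration of the pitfall (my own sketch, not code from the checkin):

    value = 0
    # Always passes: the parentheses turn (condition, message) into a
    # two-element tuple, and any non-empty tuple is true.
    assert (value == 1, "expected value to be 1")
    # Intended form: bare condition, then the message after the comma.
    assert value == 0, "expected value to be 0"

An empty tuple, as in assert (), 'message', still fails normally, which matches the asdl_seq_LEN(...) > 0 guard in the compile.c hunk below.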
+ - Issue #2179: speed up with statement execution by storing the exit method on the stack instead of in a temporary variable (patch by Jeffrey Yaskin) Modified: python/trunk/Python/compile.c ============================================================================== --- python/trunk/Python/compile.c (original) +++ python/trunk/Python/compile.c Sat Mar 15 23:03:18 2008 @@ -2056,6 +2056,14 @@ if (assertion_error == NULL) return 0; } + if (s->v.Assert.test->kind == Tuple_kind && + asdl_seq_LEN(s->v.Assert.test->v.Tuple.elts) > 0) { + const char* msg = + "assertion is always true, perhaps remove parentheses?"; + if (PyErr_WarnExplicit(PyExc_SyntaxWarning, msg, c->c_filename, + c->u->u_lineno, NULL, NULL) == -1) + return 0; + } VISIT(c, expr, s->v.Assert.test); end = compiler_new_block(c); if (end == NULL) From buildbot at python.org Sat Mar 15 23:04:58 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 22:04:58 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080315220458.C6A591E4016@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/191 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_abstract_numbers test_builtin test_fractions Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_abstract_numbers.py", line 6, in from numbers import Complex, Real, Rational, Integral File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_builtin.py", line 8, in import sys, warnings, cStringIO, random, fractions, UserDict File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined Traceback (most recent call last): File "./Lib/test/regrtest.py", line 550, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_fractions.py", line 7, in import fractions File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/fractions.py", line 8, in import numbers File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/numbers.py", line 262, in class Rational(Real, Exact): NameError: name 'Exact' is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From nnorwitz at gmail.com Sat Mar 15 23:41:11 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 15 Mar 2008 17:41:11 -0500 Subject: [Python-checkins] Python Regression Test Failures all (3) Message-ID: <20080315224111.GA9584@python.psfb.org> 316 tests OK. 
3 tests failed: test_abstract_numbers test_builtin test_fractions 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl test_grammar test_opcodes test_dict test_builtin test test_builtin crashed -- : name 'Exact' is not defined test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test test_abstract_numbers crashed -- : name 'Exact' is not defined test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... 
test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test test_fractions crashed -- : name 'Exact' is not defined test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hexoct test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8018 refs] [8018 refs] [8018 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_pyclbr test_pyexpat test_queue test_quopri [8395 refs] [8395 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test_site test_slice test_smtplib test_socket test_socket_ssl /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:108: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ssl_sock = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:74: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:159: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. ss = socket.ssl(s) /tmp/python-test/local/lib/python2.6/test/test_socket_ssl.py:173: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. 
ss = socket.ssl(s) test_socketserver test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8013 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8015 refs] [9938 refs] [8231 refs] [8015 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] [8013 refs] . [8013 refs] [8013 refs] this bit of output is from a test of stdout in a different process ... [8013 refs] [8013 refs] [8231 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8013 refs] [8013 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8018 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11151 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:472: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() /tmp/python-test/local/lib/python2.6/test/test_zipfile.py:399: DeprecationWarning: struct integer overflow masking is deprecated zipfp.close() test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 316 tests OK. 3 tests failed: test_abstract_numbers test_builtin test_fractions 20 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_gl test_imageop test_imgfile test_ioctl test_macostools test_pep277 test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 1 skip unexpected on linux2: test_ioctl [566481 refs] From buildbot at python.org Sat Mar 15 23:29:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 15 Mar 2008 22:29:28 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080315222928.9ABE41E4016@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1050 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 248, in writerThread self.assertEqual(data, self.makeData(key)) File "C:\buildbot\work\trunk.heller-windows\build\lib\unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '1153-1153-1153-1153-1153' Traceback (most recent call last): File "C:\buildbot\work\trunk.hellertest___all__ test_builtin 1 test failed: test_timeout Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "C:\buildbot\work\trunk.heller-windows\build\lib\unittest.py", line 343, in 
failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '2000-2000-2000-2000-2000' sincerely, -The Buildbot From python-checkins at python.org Sat Mar 15 23:36:01 2008 From: python-checkins at python.org (neal.norwitz) Date: Sat, 15 Mar 2008 23:36:01 +0100 (CET) Subject: [Python-checkins] r61407 - python/trunk/Python/symtable.c Message-ID: <20080315223601.8D2AB1E4016@bag.python.org> Author: neal.norwitz Date: Sat Mar 15 23:36:01 2008 New Revision: 61407 Modified: python/trunk/Python/symtable.c Log: Handle memory allocation failure. Found by Adam Olsen Modified: python/trunk/Python/symtable.c ============================================================================== --- python/trunk/Python/symtable.c (original) +++ python/trunk/Python/symtable.c Sat Mar 15 23:36:01 2008 @@ -27,8 +27,9 @@ k = PyLong_FromVoidPtr(key); if (k == NULL) goto fail; - ste = (PySTEntryObject *)PyObject_New(PySTEntryObject, - &PySTEntry_Type); + ste = PyObject_New(PySTEntryObject, &PySTEntry_Type); + if (ste == NULL) + goto fail; ste->ste_table = st; ste->ste_id = k; ste->ste_tmpname = 0; From buildbot at python.org Sun Mar 16 01:19:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 00:19:20 +0000 Subject: [Python-checkins] buildbot failure in x86 gentoo 3.0 Message-ID: <20080316001920.A0D411E4017@bag.python.org> The Buildbot has detected a new failure of x86 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20gentoo%203.0/builds/638 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-x86 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: make: *** [buildbottest] Unknown signal 32 sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 01:32:07 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 00:32:07 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080316003207.3295A1E402A@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/157 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_datetime test_struct test_tokenize ====================================================================== ERROR: test_strptime (test.test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_datetime.py", line 1530, in test_strptime got = self.theclass.strptime(string, format) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/_strptime.py", line 320, in _strptime raise ValueError("stray %% in format '%s'" % format) ValueError: stray % in format '%' ====================================================================== ERROR: test_strptime (test.test_datetime.TestDateTimeTZ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_datetime.py", line 1530, in test_strptime got = self.theclass.strptime(string, format) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/_strptime.py", line 320, in _strptime raise ValueError("stray %% in format '%s'" % format) ValueError: stray % in format '%' Traceback (most recent call last): File "./Lib/test/regrtest.py", line 590, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_struct.py", line 689, in test_bool() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_struct.py", line 686, in test_bool if struct.unpack('>?', c)[0] is not True: TypeError: 'int' does not have the buffer interface make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 01:39:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 00:39:33 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080316003933.B68801E4037@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
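The test_struct traceback repeated in these py3k reports ("'int' does not have the buffer interface") is a bytes/int mismatch: indexing or iterating a bytes object yields ints in Python 3, while struct.unpack() expects a bytes-like buffer. A hedged sketch of the distinction, not the actual test_struct code:

    import struct

    data = b'\x01'
    item = data[0]      # in Python 3 this is the int 1, not a bytes object
    # struct.unpack('>?', item) would raise a TypeError much like the one
    # in the logs above, because an int is not a buffer.
    print(struct.unpack('>?', data[0:1]))   # slicing keeps bytes: prints (True,)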
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/637 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_datetime test_struct test_tokenize ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTimeTZ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) Traceback (most recent call last): File "./Lib/test/regrtest.py", line 590, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_struct.py", line 689, in test_bool() File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_struct.py", line 686, in test_bool if struct.unpack('>?', c)[0] is not True: TypeError: 'int' does not have the buffer interface make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 01:53:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 00:53:47 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20080316005348.165DE1E401F@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/662 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_datetime test_struct test_tokenize ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTimeTZ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) Traceback (most recent call last): File "./Lib/test/regrtest.py", line 590, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_struct.py", line 689, in test_bool() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_struct.py", line 686, in test_bool if struct.unpack('>?', c)[0] is not True: TypeError: 'int' does not have the buffer interface sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 02:07:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 01:07:36 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080316010738.2F7FB1E401A@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/601 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_datetime test_struct test_tokenize ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/3.0.psf-g4/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTimeTZ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/3.0.psf-g4/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) Traceback (most recent call last): File "./Lib/test/regrtest.py", line 590, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/Users/buildslave/bb/3.0.psf-g4/build/Lib/test/test_struct.py", line 689, in test_bool() File "/Users/buildslave/bb/3.0.psf-g4/build/Lib/test/test_struct.py", line 686, in test_bool if struct.unpack('>?', c)[0] is not True: TypeError: 'int' does not have the buffer interface make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 02:14:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 01:14:26 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080316011438.2E9401E4024@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/93 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_datetime test_struct test_tokenize ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) ====================================================================== FAIL: test_strptime (test.test_datetime.TestDateTimeTZ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_datetime.py", line 1531, in test_strptime self.assertEqual(expected, got) AssertionError: datetime.datetime(2004, 12, 1, 13, 2, 47, 197000) != datetime.datetime(1900, 1, 1, 0, 0) Traceback (most recent call last): File "./Lib/test/regrtest.py", line 590, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_struct.py", line 689, in test_bool() File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_struct.py", line 686, in test_bool if struct.unpack('>?', c)[0] is not True: TypeError: 'int' does not have the buffer interface make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 03:08:11 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 02:08:11 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080316020811.7CE0C1E4017@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/702 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 03:49:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 02:49:47 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 3.0 Message-ID: <20080316024948.3ABEE1E4017@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%203.0/builds/41 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_datetime test_mailbox test_tokenize ====================================================================== ERROR: test_strptime (test.test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_datetime.py", line 1530, in test_strptime got = self.theclass.strptime(string, format) File "S:\buildbots\python\3.0.nelson-windows\build\lib\_strptime.py", line 320, in _strptime raise ValueError("stray %% in format '%s'" % format) ValueError: stray % in format '%' ====================================================================== ERROR: test_strptime (test.test_datetime.TestDateTimeTZ) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_datetime.py", line 1530, in test_strptime got = self.theclass.strptime(string, format) File "S:\buildbots\python\3.0.nelson-windows\build\lib\_strptime.py", line 320, in _strptime raise ValueError("stray %% in format '%s'" % format) ValueError: stray % in format '%' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 F' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem 
self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 941, in tearDown self._delete_recursively(self._path) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; 
Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:43 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 156, in 
test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 
====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nF' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nF' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by 
andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 02:47:45 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter 
(test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) 
---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: 
\nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: 
by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestBabyl) 
---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "S:\buildbots\python\3.0.nelson-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 04:09:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 03:09:06 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080316030906.B5FF51E4017@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/93 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 04:13:42 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 03:13:42 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080316031342.B08041E4017@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/163 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Sun Mar 16 05:38:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 16 Mar 2008 04:38:56 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080316043856.85A4C1E4017@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. 
Full details are available at:
 http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/705

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: norwitz-tru64

Build Reason: 
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: mark.dickinson

BUILD FAILED: failed test

Excerpt from the test logfile:

sincerely,
 -The Buildbot

From python-checkins at python.org  Sun Mar 16 06:20:44 2008
From: python-checkins at python.org (raymond.hettinger)
Date: Sun, 16 Mar 2008 06:20:44 +0100 (CET)
Subject: [Python-checkins] r61413 - python/trunk/Doc/library/numbers.rst
Message-ID: <20080316052044.4E3F01E4017@bag.python.org>

Author: raymond.hettinger
Date: Sun Mar 16 06:20:42 2008
New Revision: 61413

Modified: python/trunk/Doc/library/numbers.rst
Log: Update docs to reflect removal of Exact/Inexact

Modified: python/trunk/Doc/library/numbers.rst
==============================================================================
--- python/trunk/Doc/library/numbers.rst (original)
+++ python/trunk/Doc/library/numbers.rst Sun Mar 16 06:20:42 2008
@@ -8,9 +8,8 @@
 
 The :mod:`numbers` module (:pep:`3141`) defines a hierarchy of numeric abstract
-base classes which progressively define more operations. These concepts also
-provide a way to distinguish exact from inexact types. None of the types defined
-in this module can be instantiated.
+base classes which progressively define more operations. None of the types
+defined in this module can be instantiated.
 
 .. class:: Number
 
@@ -19,27 +18,6 @@
    *x* is a number, without caring what kind, use ``isinstance(x, Number)``.
 
 
-Exact and inexact operations
-----------------------------
-
-.. class:: Exact
-
-   Subclasses of this type have exact operations.
-
-   As long as the result of a homogenous operation is of the same type, you can
-   assume that it was computed exactly, and there are no round-off errors. Laws
-   like commutativity and associativity hold.
-
-
-.. class:: Inexact
-
-   Subclasses of this type have inexact operations.
-
-   Given X, an instance of :class:`Inexact`, it is possible that ``(X + -X) + 3
-   == 3``, but ``X + (-X + 3) == 0``. The exact form this error takes will vary
-   by type, but it's generally unsafe to compare this type for equality.
-
-
 The numeric tower
 -----------------
 
@@ -79,7 +57,7 @@
 
 .. class:: Rational
 
-   Subtypes both :class:`Real` and :class:`Exact`, and adds
+   Subtypes :class:`Real` and adds
    :attr:`Rational.numerator` and :attr:`Rational.denominator` properties, which
    should be in lowest terms. With these, it provides a default for
    :func:`float`.
 
@@ -239,4 +217,4 @@
 
     __add__, __radd__ = _operator_fallbacks(_add, operator.add)
 
-    # ...
\ No newline at end of file
+    # ...
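A minimal sketch of the behaviour the revised numbers documentation describes, assuming Python 2.6 or later with the fractions module; the describe() helper below is illustrative only, not part of the stdlib or of this checkin:

    # Sketch only: isinstance checks against the numeric tower after r61413.
    import numbers
    from fractions import Fraction  # Fraction implements numbers.Rational

    def describe(x):
        """Name the rungs of the numeric tower that x belongs to."""
        rungs = (numbers.Complex, numbers.Real, numbers.Rational, numbers.Integral)
        return [cls.__name__ for cls in rungs if isinstance(x, cls)]

    # "To check whether x is a number, without caring what kind":
    assert isinstance(3.5, numbers.Number)
    assert not hasattr(numbers, "Exact")        # Exact/Inexact no longer exist

    f = Fraction(3, 4)
    assert isinstance(f, numbers.Rational)      # Rational now subtypes Real directly
    assert (f.numerator, f.denominator) == (3, 4)   # lowest-terms properties
    assert float(f) == 0.75                     # default float() built from them

    print(describe(2))    # ['Complex', 'Real', 'Rational', 'Integral']
    print(describe(f))    # ['Complex', 'Real', 'Rational']
    print(describe(2.0))  # ['Complex', 'Real']

Concrete types such as int, float, and Fraction plug into this tower by subclassing or registering with the appropriate ABC.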
From buildbot at python.org  Sun Mar 16 07:49:48 2008
From: buildbot at python.org (buildbot at python.org)
Date: Sun, 16 Mar 2008 06:49:48 +0000
Subject: [Python-checkins] buildbot failure in x86 XP-4 3.0
Message-ID: <20080316064948.C865D1E4022@bag.python.org>

The Buildbot has detected a new failure of x86 XP-4 3.0.
Full details are available at:
 http://www.python.org/dev/buildbot/all/x86%20XP-4%203.0/builds/588

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: bolen-windows

Build Reason: 
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: mark.dickinson

BUILD FAILED: failed test

Excerpt from the test logfile:
3 tests failed:
    test_mailbox test_tokenize test_winsound

======================================================================
ERROR: test_flush (test.test_mailbox.TestMbox)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 718, in tearDown
    self._delete_recursively(self._path)
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively
    os.remove(target)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test'

======================================================================
ERROR: test_popitem (test.test_mailbox.TestMbox)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem
    self.assertEqual(int(msg.get_payload()), keys.index(key))
ValueError: invalid literal for int() with base 10: 'From: foo 0 F'

======================================================================
ERROR: test_flush (test.test_mailbox.TestMMDF)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 718, in tearDown
    self._delete_recursively(self._path)
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively
    os.remove(target)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test'

======================================================================
ERROR: test_popitem (test.test_mailbox.TestMMDF)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem
    self.assertEqual(int(msg.get_payload()), keys.index(key))
ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01'

======================================================================
ERROR: test_flush (test.test_mailbox.TestBabyl)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 941, in tearDown
    self._delete_recursively(self._path)
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively
    os.remove(target)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test'

======================================================================
ERROR: test_popitem (test.test_mailbox.TestBabyl)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem
self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:36 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' 
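
All of the TestMbox failures above have the same shape: the string handed back by the mailbox carries a spurious leading '\n' and, in several cases, trailing text belonging to the next message, which is consistent with the mbox From_ scanning keying on os.linesep rather than '\n' on this Windows buildslave. A minimal sketch of the round trip these tests exercise, assuming the 3.0-era mailbox API with get_string() and using an illustrative temporary path rather than the tests' own:

    import mailbox
    import os
    import tempfile

    # Illustrative location; the real tests manage their own temp file.
    path = os.path.join(tempfile.mkdtemp(), 'sample.mbox')

    box = mailbox.mbox(path)
    key = box.add('From: foo\n\n0\n')   # same shape as the tests' template
    box.flush()

    # The tests expect exactly 'From: foo\n\n0' back; the failures above
    # show a leading '\n' plus bytes from the following message instead.
    print(repr(box.get_string(key)))
    box.close()
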
====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nF' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' 
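
The 'None != foo' failures in the iteration tests are the same breakage seen from the message side: the leading blank line visible in the get_string() output means the parser sees an empty header block, so 'From: foo' lands in the body and reading the header back yields None. Roughly what those tests do, again with an illustrative path:

    import mailbox
    import os
    import tempfile

    # Illustrative location, not taken from the build.
    path = os.path.join(tempfile.mkdtemp(), 'sample.mbox')

    box = mailbox.mbox(path)
    key = box.add('From: foo\n\n0\n')
    msg = box[key]                 # a mailbox.mboxMessage

    # The iteration tests read this header for every stored message and
    # expect 'foo'; in the log it comes back as None.
    print(msg['from'])
    box.close()
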
====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nF' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' 
====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nF' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sun Mar 16 06:45:41 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; 
Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' 
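
The TestMMDF block repeats the pattern, except that the stray bytes are the '\x01\x01\x01\x01' lines MMDF uses to bracket each message rather than an mbox From_ line, and several tests find the mailbox empty afterwards (0 != 3). The equivalent round trip with mailbox.MMDF, once more with an illustrative path:

    import mailbox
    import os
    import tempfile

    # Illustrative location, not taken from the build.
    path = os.path.join(tempfile.mkdtemp(), 'sample.mmdf')

    box = mailbox.MMDF(path)
    key = box.add('From: foo\n\n0\n')
    box.flush()

    # The tests expect 'From: foo\n\n0'; the failures above show the
    # Control-A delimiter lines bleeding into the returned string.
    print(repr(box.get_string(key)))
    box.close()
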
====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01' != 'From: foo\n\noriginal 
0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message 
_sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 
(EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' 
====================================================================== FAIL: test_remove (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_extremes (test.test_winsound.BeepTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_winsound.py", line 30, in test_extremes self.assertRaises(RuntimeError, winsound.Beep, 37, 75) AssertionError: RuntimeError not raised by Beep sincerely, -The Buildbot From python-checkins at python.org Sun Mar 16 09:00:20 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 09:00:20 +0100 (CET) Subject: [Python-checkins] r61414 - python/trunk/Doc/extending/newtypes.rst Message-ID: <20080316080020.5B4761E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 09:00:19 2008 New Revision: 61414 Modified: python/trunk/Doc/extending/newtypes.rst Log: #2299: typos in newtypes.rst. 
Modified: python/trunk/Doc/extending/newtypes.rst ============================================================================== --- python/trunk/Doc/extending/newtypes.rst (original) +++ python/trunk/Doc/extending/newtypes.rst Sun Mar 16 09:00:19 2008 @@ -428,7 +428,7 @@ * when decrementing a reference count in a :attr:`tp_dealloc` handler when garbage-collections is not supported [#]_ -We want to want to expose our instance variables as attributes. There are a +We want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is to define member definitions:: static PyMemberDef Noddy_members[] = { @@ -616,7 +616,7 @@ Noddy_getseters, /* tp_getset */ -to register out attribute getters and setters. +to register our attribute getters and setters. The last item in a :ctype:`PyGetSetDef` structure is the closure mentioned above. In this case, we aren't using the closure, so we just pass *NULL*. @@ -1575,7 +1575,7 @@ less careful about decrementing their reference counts, however, we accept instances of string subclasses. Even though deallocating normal strings won't call back into our objects, we can't guarantee that deallocating an instance of - a string subclass won't. call back into out objects. + a string subclass won't call back into our objects. .. [#] Even in the third version, we aren't guaranteed to avoid cycles. Instances of string subclasses are allowed and string subclasses could allow cycles even if From python-checkins at python.org Sun Mar 16 12:09:32 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:09:32 +0100 (CET) Subject: [Python-checkins] r61415 - doctools/trunk/sphinx/directives.py doctools/trunk/sphinx/htmlwriter.py doctools/trunk/sphinx/latexwriter.py Message-ID: <20080316110932.C28AF1E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:09:32 2008 New Revision: 61415 Modified: doctools/trunk/sphinx/directives.py doctools/trunk/sphinx/htmlwriter.py doctools/trunk/sphinx/latexwriter.py Log: Add language and linenos options to literalinclude directive. 
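The change that follows extends the old-style Docutils function-directive API: options are declared as a dict of converter functions attached to the directive function, and recognized options are copied onto the resulting node. A minimal, self-contained sketch of that pattern -- the directive name ``include-sketch`` and the bare file read are illustrative only, not part of the checkin below::

    from docutils import nodes
    from docutils.parsers.rst import directives

    def include_sketch_directive(name, arguments, options, content, lineno,
                                 content_offset, block_text, state, state_machine):
        # Read the file named by the single required argument.
        fn = arguments[0]
        f = open(fn)
        text = f.read()
        f.close()
        retnode = nodes.literal_block(text, text, source=fn)
        # Copy recognized options onto the node, as r61415 does below
        # for 'language' and 'linenos'.
        if options.get('language', ''):
            retnode['language'] = options['language']
        if 'linenos' in options:
            retnode['linenos'] = True
        return [retnode]

    include_sketch_directive.options = {'linenos': directives.flag,
                                        'language': directives.unchanged}
    include_sketch_directive.content = 0
    include_sketch_directive.arguments = (1, 0, 0)
    directives.register_directive('include-sketch', include_sketch_directive)

In reST source the new options then appear as field-style options under the directive; the ``literalinclude`` documentation added in r61419 further down shows ``:language: ruby`` and ``:linenos:`` used exactly this way.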
Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Sun Mar 16 12:09:32 2008 @@ -677,8 +677,14 @@ else: retnode = nodes.literal_block(text, text, source=fn) retnode.line = 1 + if options.get('language', ''): + retnode['language'] = options['language'] + if 'linenos' in options: + retnode['linenos'] = True return [retnode] +literalinclude_directive.options = {'linenos': directives.flag, + 'language': directives.unchanged} literalinclude_directive.content = 0 literalinclude_directive.arguments = (1, 0, 0) directives.register_directive('literalinclude', literalinclude_directive) Modified: doctools/trunk/sphinx/htmlwriter.py ============================================================================== --- doctools/trunk/sphinx/htmlwriter.py (original) +++ doctools/trunk/sphinx/htmlwriter.py Sun Mar 16 12:09:32 2008 @@ -183,6 +183,7 @@ if node.has_key('language'): # code-block directives lang = node['language'] + if node.has_key('linenos'): linenos = node['linenos'] self.body.append(self.highlighter.highlight_block(node.rawsource, lang, linenos)) raise nodes.SkipNode Modified: doctools/trunk/sphinx/latexwriter.py ============================================================================== --- doctools/trunk/sphinx/latexwriter.py (original) +++ doctools/trunk/sphinx/latexwriter.py Sun Mar 16 12:09:32 2008 @@ -656,6 +656,7 @@ if node.has_key('language'): # code-block directives lang = node['language'] + if node.has_key('linenos'): linenos = node['linenos'] hlcode = self.highlighter.highlight_block(code, lang, linenos) # workaround for Unicode issue From python-checkins at python.org Sun Mar 16 12:10:03 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:10:03 +0100 (CET) Subject: [Python-checkins] r61416 - doctools/trunk/sphinx/environment.py Message-ID: <20080316111003.D51E41E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:10:03 2008 New Revision: 61416 Modified: doctools/trunk/sphinx/environment.py Log: Don't warn for unknown keywords. Give Python refs a link title. Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Sun Mar 16 12:10:03 2008 @@ -705,7 +705,7 @@ # keywords are referenced by named labels docname, labelid, _ = self.labels.get(target, ('','','')) if not docname: - self.warn(fromdocname, 'unknown keyword: %s' % target) + #self.warn(fromdocname, 'unknown keyword: %s' % target) newnode = contnode else: newnode = nodes.reference('', '') @@ -766,6 +766,7 @@ newnode['refuri'] = ( builder.get_relative_uri(fromdocname, desc[0]) + '#' + name) + newnode['reftitle'] = name newnode.append(contnode) else: raise RuntimeError('unknown xfileref node encountered: %s' % node) From python-checkins at python.org Sun Mar 16 12:10:31 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:10:31 +0100 (CET) Subject: [Python-checkins] r61417 - doctools/trunk/sphinx/ext/autodoc.py Message-ID: <20080316111031.D63A81E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:10:31 2008 New Revision: 61417 Modified: doctools/trunk/sphinx/ext/autodoc.py Log: Fix autodoc for some conditions where env.autodoc_* is not set. 
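The autodoc fix that follows guards access to attributes that are only set on the build environment once autodoc has run; without the ``hasattr`` check, a plain ``env.autodoc_current_module`` raises ``AttributeError`` on builds where the extension never set it. A small stand-alone illustration of the fallback chain -- the ``Env`` class and names here are made up for the example and are not Sphinx code::

    class Env(object):
        # Attribute that is always present on the environment.
        currmodule = 'fallback.module'

    def resolve_module(env, explicit=None):
        mod = explicit
        # Consult the optional attribute only if it has ever been set.
        if not mod and hasattr(env, 'autodoc_current_module'):
            mod = env.autodoc_current_module
        if not mod:
            mod = env.currmodule
        return mod

    env = Env()
    print resolve_module(env)           # fallback.module
    env.autodoc_current_module = 'pkg.mod'
    print resolve_module(env)           # pkg.mod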
Modified: doctools/trunk/sphinx/ext/autodoc.py ============================================================================== --- doctools/trunk/sphinx/ext/autodoc.py (original) +++ doctools/trunk/sphinx/ext/autodoc.py Sun Mar 16 12:10:31 2008 @@ -52,19 +52,19 @@ objpath = [] elif what in ('class', 'exception', 'function'): mod, obj = rpartition(name, '.') - if not mod: + if not mod and hasattr(env, 'autodoc_current_module'): mod = env.autodoc_current_module if not mod: mod = env.currmodule objpath = [obj] else: mod_cls, obj = rpartition(name, '.') - if not mod_cls: + if not mod_cls and hasattr(env, 'autodoc_current_class'): mod_cls = env.autodoc_current_class if not mod_cls: mod_cls = env.currclass mod, cls = rpartition(mod_cls, '.') - if not mod: + if not mod and hasattr(env, 'autodoc_current_module'): mod = env.autodoc_current_module if not mod: mod = env.currmodule From python-checkins at python.org Sun Mar 16 12:18:55 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:18:55 +0100 (CET) Subject: [Python-checkins] r61418 - doctools/trunk/sphinx/roles.py Message-ID: <20080316111855.D9E971E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:18:55 2008 New Revision: 61418 Modified: doctools/trunk/sphinx/roles.py Log: Enable :role:`title ` syntax for all xref role types. Modified: doctools/trunk/sphinx/roles.py ============================================================================== --- doctools/trunk/sphinx/roles.py (original) +++ doctools/trunk/sphinx/roles.py Sun Mar 16 12:18:55 2008 @@ -37,9 +37,9 @@ roles.register_generic_role(rolename, nodeclass) -def indexmarkup_role(typ, rawtext, text, lineno, inliner, options={}, content=[]): +def indexmarkup_role(typ, rawtext, etext, lineno, inliner, options={}, content=[]): env = inliner.document.settings.env - text = utils.unescape(text) + text = utils.unescape(etext) targetid = 'index-%s' % env.index_num env.index_num += 1 indexnode = addnodes.index() @@ -52,11 +52,9 @@ indexnode['entries'] = [('single', text, targetid, text), ('single', 'environment variable; %s' % text, targetid, text)] - pnode = addnodes.pending_xref(rawtext) - pnode['reftype'] = 'envvar' - pnode['reftarget'] = text - pnode += nodes.strong(text, text, classes=['xref']) - return [indexnode, targetnode, pnode], [] + xref_nodes = xfileref_role(typ, rawtext, etext, lineno, inliner, + options, content)[0] + return [indexnode, targetnode] + xref_nodes, [] elif typ == 'pep': env.note_index_entry('single', 'Python Enhancement Proposals!PEP %s' % text, targetid, 'PEP %s' % text) @@ -100,6 +98,7 @@ 'ref': nodes.emphasis, 'term': nodes.emphasis, 'token': nodes.strong, + 'envvar': nodes.strong, 'option': addnodes.literal_emphasis, } @@ -118,9 +117,9 @@ text = text[1:] return [innernodetypes.get(typ, nodes.literal)( rawtext, text, classes=['xref'])], [] - pnode = addnodes.pending_xref(rawtext) - pnode['reftype'] = typ - innertext = text + # we want a cross-reference, create the reference node + pnode = addnodes.pending_xref(rawtext, reftype=typ, refcaption=False, + modname=env.currmodule, classname=env.currclass) # special actions for Python object cross-references if typ in ('data', 'exc', 'func', 'class', 'const', 'attr', 'meth'): # if the first character is a dot, search more specific namespaces first @@ -130,37 +129,38 @@ pnode['refspecific'] = True # if the first character is a tilde, don't display the module/class parts # of the contents - if text[0:1] == '~': + elif text[0:1] == '~': text = text[1:] dot = text.rfind('.') if dot != -1: 
innertext = text[dot+1:] - if typ == 'term': - pnode['reftarget'] = ws_re.sub(' ', text).lower() - elif typ == 'ref': - brace = text.find('<') - if brace != -1: - pnode['refcaption'] = True - m = caption_ref_re.match(text) - if not m: - # fallback - pnode['reftarget'] = text[brace+1:] - text = text[:brace] - else: - pnode['reftarget'] = m.group(2) - text = m.group(1) + innertext = text + # look if explicit title and target are given + brace = text.find('<') + if brace != -1: + pnode['refcaption'] = True + m = caption_ref_re.match(text) + if m: + target = m.group(2) + innertext = m.group(1) else: - pnode['refcaption'] = False - pnode['reftarget'] = ws_re.sub('', text) + # fallback: everything after '<' is the target + target = text[brace+1:] + innertext = text[:brace] + # else, generate target from title + elif typ == 'term': + # normalize whitespace in definition terms (if the term reference is + # broken over a line, a newline will be in text) + target = ws_re.sub(' ', text).lower() elif typ == 'option': + # strip option marker from target if text[0] in '-/': - pnode['reftarget'] = text[1:] + target = text[1:] else: - pnode['reftarget'] = text + target = text else: - pnode['reftarget'] = ws_re.sub('', text) - pnode['modname'] = env.currmodule - pnode['classname'] = env.currclass + target = ws_re.sub('', text) + pnode['reftarget'] = target pnode += innernodetypes.get(typ, nodes.literal)(rawtext, innertext, classes=['xref']) return [pnode], [] From python-checkins at python.org Sun Mar 16 12:19:27 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:19:27 +0100 (CET) Subject: [Python-checkins] r61419 - in doctools/trunk/doc: builders.rst concepts.rst config.rst contents.rst ext.py ext/api.rst ext/appapi.rst ext/builderapi.rst extensions.rst intro.rst markup/code.rst markup/desc.rst markup/index.rst markup/infounits.rst markup/inline.rst markup/para.rst Message-ID: <20080316111927.41BC61E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:19:26 2008 New Revision: 61419 Added: doctools/trunk/doc/ext/appapi.rst doctools/trunk/doc/ext/builderapi.rst doctools/trunk/doc/markup/desc.rst Removed: doctools/trunk/doc/ext/api.rst doctools/trunk/doc/markup/infounits.rst Modified: doctools/trunk/doc/builders.rst doctools/trunk/doc/concepts.rst doctools/trunk/doc/config.rst doctools/trunk/doc/contents.rst doctools/trunk/doc/ext.py doctools/trunk/doc/extensions.rst doctools/trunk/doc/intro.rst doctools/trunk/doc/markup/code.rst doctools/trunk/doc/markup/index.rst doctools/trunk/doc/markup/inline.rst doctools/trunk/doc/markup/para.rst Log: Update documentation, add more content. Modified: doctools/trunk/doc/builders.rst ============================================================================== --- doctools/trunk/doc/builders.rst (original) +++ doctools/trunk/doc/builders.rst Sun Mar 16 12:19:26 2008 @@ -1,23 +1,68 @@ .. _builders: -Builders and the environment -============================ +Available builders +================== .. module:: sphinx.builder :synopsis: Available built-in builder classes. +These are the built-in Sphinx builders. More builders can be added by +:ref:`extensions `. + +The builder's "name" must be given to the **-b** command-line option of +:program:`sphinx-build.py` to select a builder. -.. class:: Builder .. class:: StandaloneHTMLBuilder -.. class:: WebHTMLBuilder + This is the standard HTML builder. Its output is a directory with HTML + files, complete with style sheets and optionally the reST sources. 
There are + quite a few configuration values that customize the output of this builder, + see the chapter :ref:`html-options` for details. + + Its name is ``html``. .. class:: HTMLHelpBuilder + This builder produces the same output as the standalone HTML builder, but + also generates HTML Help support files that allow the Microsoft HTML Help + Workshop to compile them into a CHM file. + + Its name is ``htmlhelp``. + +.. class:: WebHTMLBuilder + + This builder produces a directory with pickle files containing mostly HTML + fragments and TOC information, for use of a web application (or custom + postprocessing tool) that doesn't use the standard HTML templates. + + It also is the format used by the Sphinx Web application. Its name is + ``web``. + .. class:: LaTeXBuilder + This builder produces a bunch of LaTeX files in the output directory. You + have to specify which documents are to be included in which LaTeX files via + the :confval:`latex_documents` configuration value. There are a few + configuration values that customize the output of this builder, see the + chapter :ref:`latex-options` for details. + + Its name is ``latex``. + .. class:: ChangesBuilder + This builder produces an HTML overview of all :dir:`versionadded`, + :dir:`versionchanged` and :dir:`deprecated` directives for the current + :confval:`version`. This is useful to generate a ChangeLog file, for + example. + + Its name is ``changes``. + .. class:: CheckExternalLinksBuilder + This builder scans all documents for external links, tries to open them with + :mod:`urllib2`, and writes an overview which ones are broken and redirected + to standard output and to :file:`output.txt` in the output directory. + + Its name is ``linkcheck``. + Modified: doctools/trunk/doc/concepts.rst ============================================================================== --- doctools/trunk/doc/concepts.rst (original) +++ doctools/trunk/doc/concepts.rst Sun Mar 16 12:19:26 2008 @@ -8,12 +8,20 @@ Document names -------------- - +Since the reST source files can have different extensions (some people like +``.txt``, some like ``.rst`` -- the extension can be configured with +:confval:`source_suffix`) and different OSes have different path separators, +Sphinx abstracts them: all "document names" are relative to the +:term:`documentation root`, the extension is stripped, and path separators are +converted to slashes. All values, parameters and suchlike referring to +"documents" expect such a document name. The TOC tree ------------ +.. index:: pair: table of; contents + Since reST does not have facilities to interconnect several documents, or split documents into multiple output files, Sphinx uses a custom directive to add relations between the single files the documentation is made of, as well as @@ -22,37 +30,38 @@ .. directive:: toctree This directive inserts a "TOC tree" at the current location, using the - individual TOCs (including "sub-TOC trees") of the files given in the - directive body. A numeric ``maxdepth`` option may be given to indicate the - depth of the tree; by default, all levels are included. + individual TOCs (including "sub-TOC trees") of the documents given in the + directive body (whose path is relative to the document the directive occurs + in). A numeric ``maxdepth`` option may be given to indicate the depth of the + tree; by default, all levels are included. Consider this example (taken from the Python docs' library reference index):: .. 
toctree:: :maxdepth: 2 - intro.rst - strings.rst - datatypes.rst - numeric.rst - (many more files listed here) + intro + strings + datatypes + numeric + (many more documents listed here) This accomplishes two things: - * Tables of contents from all those files are inserted, with a maximum depth - of two, that means one nested heading. ``toctree`` directives in those - files are also taken into account. - * Sphinx knows that the relative order of the files ``intro.rst``, - ``strings.rst`` and so forth, and it knows that they are children of the - shown file, the library index. From this information it generates "next + * Tables of contents from all those documents are inserted, with a maximum + depth of two, that means one nested heading. ``toctree`` directives in + those documents are also taken into account. + * Sphinx knows that the relative order of the documents ``intro``, + ``strings`` and so forth, and it knows that they are children of the shown + document, the library index. From this information it generates "next chapter", "previous chapter" and "parent chapter" links. - In the end, all files included in the build process must occur in one - ``toctree`` directive; Sphinx will emit a warning if it finds a file that is - not included, because that means that this file will not be reachable through - standard navigation. Use :confval:`unused_documents` to explicitly exclude - documents from this check. - - The "master file" (selected by :confval:`master_file`) is the "root" of the - TOC tree hierarchy. It can be used as the documentation's main page, or as a - "full table of contents" if you don't give a ``maxdepth`` option. + In the end, all documents under the :term:`documentation root` must occur in + one ``toctree`` directive; Sphinx will emit a warning if it finds a file that + is not included, because that means that this file will not be reachable + through standard navigation. Use :confval:`unused_documents` to explicitly + exclude documents from this check. + + The "master document" (selected by :confval:`master_doc`) is the "root" of + the TOC tree hierarchy. It can be used as the documentation's main page, or + as a "full table of contents" if you don't give a ``maxdepth`` option. Modified: doctools/trunk/doc/config.rst ============================================================================== --- doctools/trunk/doc/config.rst (original) +++ doctools/trunk/doc/config.rst Sun Mar 16 12:19:26 2008 @@ -119,6 +119,8 @@ style. +.. _html-options: + Options for HTML output ----------------------- @@ -175,6 +177,8 @@ Output file base name for HTML help builder. Default is ``'pydoc'``. +.. _latex-options: + Options for LaTeX output ------------------------ Modified: doctools/trunk/doc/contents.rst ============================================================================== --- doctools/trunk/doc/contents.rst (original) +++ doctools/trunk/doc/contents.rst Sun Mar 16 12:19:26 2008 @@ -6,16 +6,16 @@ .. 
toctree:: :maxdepth: 1 - intro.rst - concepts.rst - rest.rst - markup/index.rst - builders.rst - config.rst - templating.rst - extensions.rst + intro + concepts + rest + markup/index + builders + config + templating + extensions - glossary.rst + glossary Indices and tables Modified: doctools/trunk/doc/ext.py ============================================================================== --- doctools/trunk/doc/ext.py (original) +++ doctools/trunk/doc/ext.py Sun Mar 16 12:19:26 2008 @@ -1,4 +1,38 @@ +# -*- coding: utf-8 -*- +""" + ext.py -- Sphinx extension for the Sphinx documentation + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + :copyright: 2008 by Georg Brandl. + :license: BSD. +""" + +import re + +from sphinx import addnodes + +dir_sig_re = re.compile(r'\.\. ([^:]+)::(.*)$') + +def parse_directive(sig, signode): + if not sig.startswith('.'): + sig = '.. %s::' % sig + signode += addnodes.desc_name(sig, sig) + return + m = dir_sig_re.match(sig) + if not m: + signode += addnodes.desc_name(sig, sig) + return + name, args = m.groups() + name = '.. %s::' % name + signode += addnodes.desc_name(name, name) + signode += addnodes.desc_classname(args, args) + + +def parse_role(sig, signode): + signode += addnodes.desc_name(':%s:' % sig, ':%s:' % sig) + + def setup(app): - app.add_description_unit('directive', 'dir', 'directive') - app.add_description_unit('role', 'role', 'role') + app.add_description_unit('directive', 'dir', 'directive', parse_directive) + app.add_description_unit('role', 'role', 'role', parse_role) app.add_description_unit('confval', 'confval', 'configuration value') Deleted: /doctools/trunk/doc/ext/api.rst ============================================================================== --- /doctools/trunk/doc/ext/api.rst Sun Mar 16 12:19:26 2008 +++ (empty file) @@ -1,96 +0,0 @@ -Extension API -============= - -Each Sphinx extension is a Python module with at least a :func:`setup` function. -This function is called at initialization time with one argument, the -application object representing the Sphinx process. This application object has -the following public API: - -.. method:: Application.add_builder(builder) - - Register a new builder. *builder* must be a class that inherits from - :class:`~sphinx.builder.Builder`. - -.. method:: Application.add_config_value(name, default, rebuild_env) - - Register a configuration value. This is necessary for Sphinx to recognize - new values and set default values accordingly. The *name* should be prefixed - with the extension name, to avoid clashes. The *default* value can be any - Python object. The boolean value *rebuild_env* must be ``True`` if a change - in the setting only takes effect when a document is parsed -- this means that - the whole environment must be rebuilt. - -.. method:: Application.add_event(name) - - Register an event called *name*. - -.. method:: Application.add_node(node) - - Register a Docutils node class. This is necessary for Docutils internals. - It may also be used in the future to validate nodes in the parsed documents. - -.. method:: Application.add_directive(name, cls, content, arguments, **options) - - Register a Docutils directive. *name* must be the prospective directive - name, *func* the directive function (see the Docutils documentation - XXX - ref) for details about the signature and return value. *content*, - *arguments* and *options* are set as attributes on the function and determine - whether the directive has content, arguments and options, respectively. 
For - their exact meaning, please consult the Docutils documentation. - -.. method:: Application.add_role(name, role) - - Register a Docutils role. *name* must be the role name that occurs in the - source, *role* the role function (see the Docutils documentation on details). - -.. method:: Application.add_description_unit(directivename, rolename, indexdesc='', parse_node=None) - - XXX - -.. method:: Application.connect(event, callback) - - Register *callback* to be called when *event* is emitted. For details on - available core events and the arguments of callback functions, please see - :ref:`events`. - - The method returns a "listener ID" that can be used as an argument to - :meth:`disconnect`. - -.. method:: Application.disconnect(listener_id) - - Unregister callback *listener_id*. - -.. method:: Application.emit(event, *arguments) - - Emit *event* and pass *arguments* to the callback functions. Do not emit - core Sphinx events in extensions! - - -.. exception:: ExtensionError - - All these functions raise this exception if something went wrong with the - extension API. - -Examples of using the Sphinx extension API can be seen in the :mod:`sphinx.ext` -package. - - -.. _events: - -Sphinx core events ------------------- - -These events are known to the core: - -====================== =================================== ========= -Event name Emitted when Arguments -====================== =================================== ========= -``'builder-inited'`` the builder object has been created -none- -``'doctree-read'`` a doctree has been parsed and read *doctree* - by the environment, and is about to - be pickled -``'doctree-resolved'`` a doctree has been "resolved" by *doctree*, *docname* - the environment, that is, all - references and TOCs have been - inserted -====================== =================================== ========= Added: doctools/trunk/doc/ext/appapi.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/appapi.rst Sun Mar 16 12:19:26 2008 @@ -0,0 +1,96 @@ +Extension API +============= + +Each Sphinx extension is a Python module with at least a :func:`setup` function. +This function is called at initialization time with one argument, the +application object representing the Sphinx process. This application object has +the following public API: + +.. method:: Application.add_builder(builder) + + Register a new builder. *builder* must be a class that inherits from + :class:`~sphinx.builder.Builder`. + +.. method:: Application.add_config_value(name, default, rebuild_env) + + Register a configuration value. This is necessary for Sphinx to recognize + new values and set default values accordingly. The *name* should be prefixed + with the extension name, to avoid clashes. The *default* value can be any + Python object. The boolean value *rebuild_env* must be ``True`` if a change + in the setting only takes effect when a document is parsed -- this means that + the whole environment must be rebuilt. + +.. method:: Application.add_event(name) + + Register an event called *name*. + +.. method:: Application.add_node(node) + + Register a Docutils node class. This is necessary for Docutils internals. + It may also be used in the future to validate nodes in the parsed documents. + +.. method:: Application.add_directive(name, cls, content, arguments, **options) + + Register a Docutils directive. 
*name* must be the prospective directive + name, *func* the directive function (see the Docutils documentation - XXX + ref) for details about the signature and return value. *content*, + *arguments* and *options* are set as attributes on the function and determine + whether the directive has content, arguments and options, respectively. For + their exact meaning, please consult the Docutils documentation. + +.. method:: Application.add_role(name, role) + + Register a Docutils role. *name* must be the role name that occurs in the + source, *role* the role function (see the Docutils documentation on details). + +.. method:: Application.add_description_unit(directivename, rolename, indexdesc='', parse_node=None) + + XXX + +.. method:: Application.connect(event, callback) + + Register *callback* to be called when *event* is emitted. For details on + available core events and the arguments of callback functions, please see + :ref:`events`. + + The method returns a "listener ID" that can be used as an argument to + :meth:`disconnect`. + +.. method:: Application.disconnect(listener_id) + + Unregister callback *listener_id*. + +.. method:: Application.emit(event, *arguments) + + Emit *event* and pass *arguments* to the callback functions. Do not emit + core Sphinx events in extensions! + + +.. exception:: ExtensionError + + All these functions raise this exception if something went wrong with the + extension API. + +Examples of using the Sphinx extension API can be seen in the :mod:`sphinx.ext` +package. + + +.. _events: + +Sphinx core events +------------------ + +These events are known to the core: + +====================== =================================== ========= +Event name Emitted when Arguments +====================== =================================== ========= +``'builder-inited'`` the builder object has been created -none- +``'doctree-read'`` a doctree has been parsed and read *doctree* + by the environment, and is about to + be pickled +``'doctree-resolved'`` a doctree has been "resolved" by *doctree*, *docname* + the environment, that is, all + references and TOCs have been + inserted +====================== =================================== ========= Added: doctools/trunk/doc/ext/builderapi.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/ext/builderapi.rst Sun Mar 16 12:19:26 2008 @@ -0,0 +1,24 @@ +Writing new builders +==================== + +.. class:: sphinx.builder.Builder + + This is the base class for all builders. + + These methods are predefined and will be called from the application: + + .. automethod:: load_env + .. automethod:: get_relative_uri + .. automethod:: build_all + .. automethod:: build_specific + .. automethod:: build_update + .. automethod:: build + + These methods must be overridden in concrete builder classes: + + .. automethod:: init + .. automethod:: get_outdated_docs + .. automethod:: get_target_uri + .. automethod:: prepare_writing + .. automethod:: write_doc + .. automethod:: finish Modified: doctools/trunk/doc/extensions.rst ============================================================================== --- doctools/trunk/doc/extensions.rst (original) +++ doctools/trunk/doc/extensions.rst Sun Mar 16 12:19:26 2008 @@ -17,7 +17,8 @@ .. toctree:: - ext/api.rst + ext/appapi + ext/builderapi Builtin Sphinx extensions @@ -28,8 +29,8 @@ .. 
toctree:: - ext/autodoc.rst - ext/doctest.rst - ext/refcounting.rst - ext/ifconfig.rst - ext/coverage.rst + ext/autodoc + ext/doctest + ext/refcounting + ext/ifconfig + ext/coverage Modified: doctools/trunk/doc/intro.rst ============================================================================== --- doctools/trunk/doc/intro.rst (original) +++ doctools/trunk/doc/intro.rst Sun Mar 16 12:19:26 2008 @@ -33,8 +33,6 @@ and answer the questions. -.. XXX environment - Running a build --------------- Modified: doctools/trunk/doc/markup/code.rst ============================================================================== --- doctools/trunk/doc/markup/code.rst (original) +++ doctools/trunk/doc/markup/code.rst Sun Mar 16 12:19:26 2008 @@ -3,6 +3,9 @@ Showing code examples --------------------- +.. index:: pair: code; examples + single: sourcecode + Examples of Python source code or interactive sessions are represented using standard reST literal blocks. They are started by a ``::`` at the end of the preceding paragraph and delimited by indentation. @@ -78,14 +81,24 @@ Includes ^^^^^^^^ -Longer displays of verbatim text may be included by storing the example text in -an external file containing only plain text. The file may be included using the -``literalinclude`` directive. [1]_ For example, to include the Python source file -:file:`example.py`, use:: - - .. literalinclude:: example.py +.. directive:: .. literalinclude:: filename -The file name is relative to the current file's path. + Longer displays of verbatim text may be included by storing the example text in + an external file containing only plain text. The file may be included using the + ``literalinclude`` directive. [1]_ For example, to include the Python source file + :file:`example.py`, use:: + + .. literalinclude:: example.py + + The file name is relative to the current file's path. + + The directive also supports the ``linenos`` flag option to switch on line + numbers, and a ``language`` option to select a language different from the + current file's standard language. Example with options:: + + .. literalinclude:: example.rb + :language: ruby + :linenos: .. rubric:: Footnotes Added: doctools/trunk/doc/markup/desc.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/markup/desc.rst Sun Mar 16 12:19:26 2008 @@ -0,0 +1,197 @@ +.. highlight:: rest + +Module-specific markup +---------------------- + +The markup described in this section is used to provide information about a +module being documented. Each module should be documented in its own file. +Normally this markup appears after the title heading of that file; a typical +file might start like this:: + + :mod:`parrot` -- Dead parrot access + =================================== + + .. module:: parrot + :platform: Unix, Windows + :synopsis: Analyze and reanimate dead parrots. + .. moduleauthor:: Eric Cleese + .. moduleauthor:: John Idle + +As you can see, the module-specific markup consists of two directives, the +``module`` directive and the ``moduleauthor`` directive. + +.. directive:: .. module:: name + + This directive marks the beginning of the description of a module (or package + submodule, in which case the name should be fully qualified, including the + package name). + + The ``platform`` option, if present, is a comma-separated list of the + platforms on which the module is available (if it is available on all + platforms, the option should be omitted). 
The keys are short identifiers; + examples that are in use include "IRIX", "Mac", "Windows", and "Unix". It is + important to use a key which has already been used when applicable. + + The ``synopsis`` option should consist of one sentence describing the + module's purpose -- it is currently only used in the Global Module Index. + + The ``deprecated`` option can be given (with no value) to mark a module as + deprecated; it will be designated as such in various locations then. + +.. directive:: .. moduleauthor:: name + + The ``moduleauthor`` directive, which can appear multiple times, names the + authors of the module code, just like ``sectionauthor`` names the author(s) + of a piece of documentation. It too only produces output if the + :confval:`show_authors` configuration value is True. + + +.. note:: + + It is important to make the section title of a module-describing file + meaningful since that value will be inserted in the table-of-contents trees + in overview files. + + +Description units +----------------- + +There are a number of directives used to describe specific features provided by +modules. Each directive requires one or more signatures to provide basic +information about what is being described, and the content should be the +description. The basic version makes entries in the general index; if no index +entry is desired, you can give the directive option flag ``:noindex:``. The +following example shows all of the features of this directive type:: + + .. function:: spam(eggs) + ham(eggs) + :noindex: + + Spam or ham the foo. + +The signatures of object methods or data attributes should always include the +type name (``.. method:: FileInput.input(...)``), even if it is obvious from the +context which type they belong to; this is to enable consistent +cross-references. If you describe methods belonging to an abstract protocol, +such as "context managers", include a (pseudo-)type name too to make the +index entries more informative. + +The directives are: + +.. directive:: .. cfunction:: type name(signature) + + Describes a C function. The signature should be given as in C, e.g.:: + + .. cfunction:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems) + + This is also used to describe function-like preprocessor macros. The names + of the arguments should be given so they may be used in the description. + + Note that you don't have to backslash-escape asterisks in the signature, + as it is not parsed by the reST inliner. + +.. directive:: .. cmember:: type name + + Describes a C struct member. Example signature:: + + .. cmember:: PyObject* PyTypeObject.tp_bases + + The text of the description should include the range of values allowed, how + the value should be interpreted, and whether the value can be changed. + References to structure members in text should use the ``member`` role. + +.. directive:: .. cmacro:: name + + Describes a "simple" C macro. Simple macros are macros which are used + for code expansion, but which do not take arguments so cannot be described as + functions. This is not to be used for simple constant definitions. Examples + of its use in the Python documentation include :cmacro:`PyObject_HEAD` and + :cmacro:`Py_BEGIN_ALLOW_THREADS`. + +.. directive:: .. ctype:: name + + Describes a C type. The signature should just be the type name. + +.. directive:: .. cvar:: type name + + Describes a global C variable. The signature should include the type, such + as:: + + .. cvar:: PyObject* PyClass_Type + +.. directive:: .. 
data:: name + + Describes global data in a module, including both variables and values used + as "defined constants." Class and object attributes are not documented + using this environment. + +.. directive:: .. exception:: name + + Describes an exception class. The signature can, but need not include + parentheses with constructor arguments. + +.. directive:: .. function:: name(signature) + + Describes a module-level function. The signature should include the + parameters, enclosing optional parameters in brackets. Default values can be + given if it enhances clarity. For example:: + + .. function:: Timer.repeat([repeat=3[, number=1000000]]) + + Object methods are not documented using this directive. Bound object methods + placed in the module namespace as part of the public interface of the module + are documented using this, as they are equivalent to normal functions for + most purposes. + + The description should include information about the parameters required and + how they are used (especially whether mutable objects passed as parameters + are modified), side effects, and possible exceptions. A small example may be + provided. + +.. directive:: .. class:: name[(signature)] + + Describes a class. The signature can include parentheses with parameters + which will be shown as the constructor arguments. + +.. directive:: .. attribute:: name + + Describes an object data attribute. The description should include + information about the type of the data to be expected and whether it may be + changed directly. + +.. directive:: .. method:: name(signature) + + Describes an object method. The parameters should not include the ``self`` + parameter. The description should include similar information to that + described for ``function``. + +.. directive:: .. opcode:: name + + Describes a Python bytecode instruction (this is not very useful for projects + other than Python itself). + +.. directive:: .. cmdoption:: name args + + Describes a command line option or switch. Option argument names should be + enclosed in angle brackets. Example:: + + .. cmdoption:: -m + + Run a module as a script. + +.. directive:: .. envvar:: name + + Describes an environment variable that the documented code uses or defines. + + +There is also a generic version of these directives: + +.. directive:: .. describe:: text + + This directive produces the same formatting as the specific ones explained + above but does not create index entries or cross-referencing targets. It is + used, for example, to describe the directives in this document. Example:: + + .. describe:: opcode + + Describes a Python bytecode instruction. Modified: doctools/trunk/doc/markup/index.rst ============================================================================== --- doctools/trunk/doc/markup/index.rst (original) +++ doctools/trunk/doc/markup/index.rst Sun Mar 16 12:19:26 2008 @@ -1,5 +1,3 @@ -.. XXX missing: glossary - Sphinx Markup Constructs ======================== @@ -8,8 +6,8 @@ .. toctree:: - infounits.rst - para.rst - code.rst - inline.rst - misc.rst + desc + para + code + inline + misc Deleted: /doctools/trunk/doc/markup/infounits.rst ============================================================================== --- /doctools/trunk/doc/markup/infounits.rst Sun Mar 16 12:19:26 2008 +++ (empty file) @@ -1,197 +0,0 @@ -.. highlight:: rest - -Module-specific markup ----------------------- - -The markup described in this section is used to provide information about a -module being documented. 
Each module should be documented in its own file. -Normally this markup appears after the title heading of that file; a typical -file might start like this:: - - :mod:`parrot` -- Dead parrot access - =================================== - - .. module:: parrot - :platform: Unix, Windows - :synopsis: Analyze and reanimate dead parrots. - .. moduleauthor:: Eric Cleese - .. moduleauthor:: John Idle - -As you can see, the module-specific markup consists of two directives, the -``module`` directive and the ``moduleauthor`` directive. - -.. directive:: module - - This directive marks the beginning of the description of a module (or package - submodule, in which case the name should be fully qualified, including the - package name). - - The ``platform`` option, if present, is a comma-separated list of the - platforms on which the module is available (if it is available on all - platforms, the option should be omitted). The keys are short identifiers; - examples that are in use include "IRIX", "Mac", "Windows", and "Unix". It is - important to use a key which has already been used when applicable. - - The ``synopsis`` option should consist of one sentence describing the - module's purpose -- it is currently only used in the Global Module Index. - - The ``deprecated`` option can be given (with no value) to mark a module as - deprecated; it will be designated as such in various locations then. - -.. directive:: moduleauthor - - The ``moduleauthor`` directive, which can appear multiple times, names the - authors of the module code, just like ``sectionauthor`` names the author(s) - of a piece of documentation. It too only produces output if the - :confval:`show_authors` configuration value is True. - - -.. note:: - - It is important to make the section title of a module-describing file - meaningful since that value will be inserted in the table-of-contents trees - in overview files. - - -Information units ------------------ - -There are a number of directives used to describe specific features provided by -modules. Each directive requires one or more signatures to provide basic -information about what is being described, and the content should be the -description. The basic version makes entries in the general index; if no index -entry is desired, you can give the directive option flag ``:noindex:``. The -following example shows all of the features of this directive type:: - - .. function:: spam(eggs) - ham(eggs) - :noindex: - - Spam or ham the foo. - -The signatures of object methods or data attributes should always include the -type name (``.. method:: FileInput.input(...)``), even if it is obvious from the -context which type they belong to; this is to enable consistent -cross-references. If you describe methods belonging to an abstract protocol, -such as "context managers", include a (pseudo-)type name too to make the -index entries more informative. - -The directives are: - -.. directive:: cfunction - - Describes a C function. The signature should be given as in C, e.g.:: - - .. cfunction:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems) - - This is also used to describe function-like preprocessor macros. The names - of the arguments should be given so they may be used in the description. - - Note that you don't have to backslash-escape asterisks in the signature, - as it is not parsed by the reST inliner. - -.. directive:: cmember - - Describes a C struct member. Example signature:: - - .. 
cmember:: PyObject* PyTypeObject.tp_bases - - The text of the description should include the range of values allowed, how - the value should be interpreted, and whether the value can be changed. - References to structure members in text should use the ``member`` role. - -.. directive:: cmacro - - Describes a "simple" C macro. Simple macros are macros which are used - for code expansion, but which do not take arguments so cannot be described as - functions. This is not to be used for simple constant definitions. Examples - of its use in the Python documentation include :cmacro:`PyObject_HEAD` and - :cmacro:`Py_BEGIN_ALLOW_THREADS`. - -.. directive:: ctype - - Describes a C type. The signature should just be the type name. - -.. directive:: cvar - - Describes a global C variable. The signature should include the type, such - as:: - - .. cvar:: PyObject* PyClass_Type - -.. directive:: data - - Describes global data in a module, including both variables and values used - as "defined constants." Class and object attributes are not documented - using this environment. - -.. directive:: exception - - Describes an exception class. The signature can, but need not include - parentheses with constructor arguments. - -.. directive:: function - - Describes a module-level function. The signature should include the - parameters, enclosing optional parameters in brackets. Default values can be - given if it enhances clarity. For example:: - - .. function:: Timer.repeat([repeat=3[, number=1000000]]) - - Object methods are not documented using this directive. Bound object methods - placed in the module namespace as part of the public interface of the module - are documented using this, as they are equivalent to normal functions for - most purposes. - - The description should include information about the parameters required and - how they are used (especially whether mutable objects passed as parameters - are modified), side effects, and possible exceptions. A small example may be - provided. - -.. directive:: class - - Describes a class. The signature can include parentheses with parameters - which will be shown as the constructor arguments. - -.. directive:: attribute - - Describes an object data attribute. The description should include - information about the type of the data to be expected and whether it may be - changed directly. - -.. directive:: method - - Describes an object method. The parameters should not include the ``self`` - parameter. The description should include similar information to that - described for ``function``. - -.. directive:: opcode - - Describes a Python bytecode instruction (this is not very useful for projects - other than Python itself). - -.. directive:: cmdoption - - Describes a command line option or switch. Option argument names should be - enclosed in angle brackets. Example:: - - .. cmdoption:: -m - - Run a module as a script. - -.. directive:: envvar - - Describes an environment variable that the documented code uses or defines. - - -There is also a generic version of these directives: - -.. directive:: describe - - This directive produces the same formatting as the specific ones explained - above but does not create index entries or cross-referencing targets. It is - used, for example, to describe the directives in this document. Example:: - - .. describe:: opcode - - Describes a Python bytecode instruction. 
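Taken together, the description-unit directives added in desc.rst above let a module be documented quite compactly. A minimal sketch (the module ``spamlib`` and everything in it is invented purely for illustration)::

    :mod:`spamlib` -- Spam processing
    =================================

    .. module:: spamlib
       :synopsis: Refine and can spam.
    .. moduleauthor:: A. N. Author

    .. function:: refine(spam[, count])

       Refine *spam* up to *count* times and return the result.

    .. class:: Canner(label)

       Packs refined spam under the given *label*.

    .. method:: Canner.seal()

       Seal the current can.

    .. envvar:: SPAMLIB_CACHE

       Directory used by :mod:`spamlib` for temporary files.

Each signature makes an index entry and a cross-reference target, so the objects can later be referenced with roles such as ``:func:`spamlib.refine``` or ``:meth:`Canner.seal```; adding the ``:noindex:`` flag to a directive suppresses its index entry.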
Modified: doctools/trunk/doc/markup/inline.rst ============================================================================== --- doctools/trunk/doc/markup/inline.rst (original) +++ doctools/trunk/doc/markup/inline.rst Sun Mar 16 12:19:26 2008 @@ -1,10 +1,9 @@ .. highlight:: rest Inline markup -------------- +============= -As said before, Sphinx uses interpreted text roles to insert semantic markup in -documents. +Sphinx uses interpreted text roles to insert semantic markup into documents. Variable names are an exception, they should be marked simply with ``*var*``. @@ -12,8 +11,38 @@ .. note:: - For all cross-referencing roles, if you prefix the content with ``!``, no - reference/hyperlink will be created. + The default role (```content```) has no special meaning by default. You are + free to use it for anything you like. + + +Cross-referencing syntax +------------------------ + +Cross-references are generated by many semantic interpreted text roles. +Basically, you only need to write ``:role:`target```, and a link will be created +to the item named *target* of the type indicated by *role*. The links's text +will be the same as *target*. + +There are some additional facilities, however, that make cross-referencing roles +more versatile: + +* You may supply an explicit title and reference target, like in reST direct + hyperlinks: ``:role:`title ``` will refer to *target*, but the link + text will be *title*. + +* If you prefix the content with ``!``, no reference/hyperlink will be created. + +* For the Python object roles, if you prefix the content with ``~``, the link + text will only be the last component of the target. For example, + ``:meth:`~Queue.Queue.get``` will refer to ``Queue.Queue.get`` but only + display ``get`` as the link text. + + In HTML output, the link's ``title`` attribute (that is e.g. shown as a + tool-tip on mouse-hover) will always be the full target name. + + +Cross-referencing Python objects +-------------------------------- The following roles refer to objects in modules and are possibly hyperlinked if a matching identifier is found: @@ -64,15 +93,19 @@ Normally, names in these roles are searched first without any further qualification, then with the current module name prepended, then with the current module and class name (if any) prepended. If you prefix the name with a -dot, this order is reversed. For example, in the documentation of the +dot, this order is reversed. For example, in the documentation of Python's :mod:`codecs` module, ``:func:`open``` always refers to the built-in function, while ``:func:`.open``` refers to :func:`codecs.open`. A similar heuristic is used to determine whether the name is an attribute of the currently documented class. + +Cross-referencing C constructs +------------------------------ + The following roles create cross-references to C-language constructs if they -are defined in the API documentation: +are defined in the documentation: .. role:: cdata @@ -91,19 +124,33 @@ The name of a C-language type. -The following roles do possibly create a cross-reference, but do not refer -to objects: +Cross-referencing other items of interest +----------------------------------------- + +The following roles do possibly create a cross-reference, but do not refer to +objects: + +.. role:: envvar + + An environment variable. Index entries are generated. Also generates a link + to the matching :dir:`envvar` directive, if it exists. .. 
role:: token - The name of a grammar token (used in the reference manual to create links - between production displays). + The name of a grammar token (used to create links between + :dir:`productionlist` directives). .. role:: keyword The name of a keyword in Python. This creates a link to a reference label with that name, if it exists. +.. role:: option + + A command-line option to an executable program. The leading hyphen(s) must + be included. This generates a link to a :dir:`cmdoption` directive, if it + exists. + The following role creates a cross-reference to the term in the glossary: @@ -118,7 +165,37 @@ If you use a term that's not explained in a glossary, you'll get a warning during build. ---------- + +Cross-referencing arbitrary locations +------------------------------------- + +To support cross-referencing to arbitrary locations in the documentation, the +standard reST labels used. Of course, for this to work label names must be +unique throughout the entire documentation. There are two ways in which you can +refer to labels: + +* If you place a label directly before a section title, you can reference to it + with ``:ref:`label-name```. Example:: + + .. _my-reference-label: + + Section to cross-reference + -------------------------- + + This is the text of the section. + + It refers to the section itself, see :ref:`my-reference-label`. + + The ``:ref:`` role would then generate a link to the section, with the link + title being "Section to cross-reference". + +* Labels that aren't placed before a section title can still be referenced to, + but you must give the link an explicit title, using this syntax: ``:ref:`Link + title ```. + + +Other semantic markup +--------------------- The following roles don't do anything special except formatting the text in a different style: @@ -132,10 +209,6 @@ Mark the defining instance of a term in the text. (No index entries are generated.) -.. role:: envvar - - An environment variable. Index entries are generated. - .. role:: file The name of a file or directory. Within the contents, you can use curly @@ -209,11 +282,6 @@ The name of a Usenet newsgroup. -.. role:: option - - A command-line option to an executable program. The leading hyphen(s) must - be included. - .. role:: program The name of an executable program. This may differ from the file name for @@ -232,10 +300,6 @@ If you don't need the "variable part" indication, use the standard ````code```` instead. -.. role:: var - - A Python or C variable or parameter name. - The following roles generate external links: @@ -279,33 +343,3 @@ Replaced by either today's date, or the date set in the build configuration file. Normally has the format ``April 14, 2007``. Set by :confval:`today_fmt` and :confval:`today`. - - -.. _doc-ref-role: - -Cross-linking markup --------------------- - -To support cross-referencing to arbitrary sections in the documentation, the -standard reST labels used. Of course, for this to work label names must be -unique throughout the entire documentation. There are two ways in which you can -refer to labels: - -* If you place a label directly before a section title, you can reference to it - with ``:ref:`label-name```. Example:: - - .. _my-reference-label: - - Section to cross-reference - -------------------------- - - This is the text of the section. - - It refers to the section itself, see :ref:`my-reference-label`. - - The ``:ref:`` role would then generate a link to the section, with the link - title being "Section to cross-reference". 
- -* Labels that aren't placed before a section title can still be referenced to, - but you must give the link an explicit title, using this syntax: ``:ref:`Link - title ```. Modified: doctools/trunk/doc/markup/para.rst ============================================================================== --- doctools/trunk/doc/markup/para.rst (original) +++ doctools/trunk/doc/markup/para.rst Sun Mar 16 12:19:26 2008 @@ -3,6 +3,9 @@ Paragraph-level markup ---------------------- +.. index:: note, warning + pair: changes; in version + These directives create short paragraphs and can be used inside information units as well as normal text: @@ -27,7 +30,7 @@ appropriate punctuation. This differs from ``note`` in that it is recommended over ``note`` for information regarding security. -.. directive:: versionadded +.. directive:: .. versionadded:: version This directive documents the version of the project which added the described feature to the library or C API. When this applies to an entire module, it @@ -44,7 +47,7 @@ Note that there must be no blank line between the directive head and the explanation; this is to make these blocks visually continuous in the markup. -.. directive:: versionchanged +.. directive:: .. versionchanged:: version Similar to ``versionadded``, but describes when and what changed in the named feature in some way (new parameters, changed side effects, etc.). @@ -71,7 +74,7 @@ `GNU tar manual, Basic Tar Format `_ Documentation for tar archive files, including GNU tar extensions. -.. directive:: rubric +.. directive:: .. rubric:: title This directive creates a paragraph heading that is not used to create a table of contents node. It is currently used for the "Footnotes" caption. @@ -104,44 +107,69 @@ comprehensive and enable index entries in documents where information is not mainly contained in information units, such as the language reference. -The directive is ``index`` and contains one or more index entries. Each entry -consists of a type and a value, separated by a colon. +.. directive:: .. index:: + + This directive contains one or more index entries. Each entry consists of a + type and a value, separated by a colon. + + For example:: + + .. index:: + single: execution; context + module: __main__ + module: sys + triple: module; search; path + + This directive contains five entries, which will be converted to entries in the + generated index which link to the exact location of the index statement (or, in + case of offline media, the corresponding page number). + + The possible entry types are: + + single + Creates a single index entry. Can be made a subentry by separating the + subentry text with a semicolon (this notation is also used below to describe + what entries are created). + pair + ``pair: loop; statement`` is a shortcut that creates two index entries, + namely ``loop; statement`` and ``statement; loop``. + triple + Likewise, ``triple: module; search; path`` is a shortcut that creates three + index entries, which are ``module; search path``, ``search; path, module`` and + ``path; module search``. + module, keyword, operator, object, exception, statement, builtin + These all create two index entries. For example, ``module: hashlib`` creates + the entries ``module; hashlib`` and ``hashlib; module``. -For example:: + For index directives containing only "single" entries, there is a shorthand + notation:: - .. 
index:: - single: execution; context - module: __main__ - module: sys - triple: module; search; path - -This directive contains five entries, which will be converted to entries in the -generated index which link to the exact location of the index statement (or, in -case of offline media, the corresponding page number). - -The possible entry types are: - -single - Creates a single index entry. Can be made a subentry by separating the - subentry text with a semicolon (this notation is also used below to describe - what entries are created). -pair - ``pair: loop; statement`` is a shortcut that creates two index entries, - namely ``loop; statement`` and ``statement; loop``. -triple - Likewise, ``triple: module; search; path`` is a shortcut that creates three - index entries, which are ``module; search path``, ``search; path, module`` and - ``path; module search``. -module, keyword, operator, object, exception, statement, builtin - These all create two index entries. For example, ``module: hashlib`` creates - the entries ``module; hashlib`` and ``hashlib; module``. + .. index:: BNF, grammar, syntax, notation -For index directives containing only "single" entries, there is a shorthand -notation:: + This creates four index entries. - .. index:: BNF, grammar, syntax, notation -This creates four index entries. +Glossary +-------- + +.. directive:: glossary + + This directive must contain a reST definition list with terms and + definitions. The definitions will then be referencable with the :role:`term` + role. Example:: + + .. glossary:: + + documentation root + The directory which contains the documentation's :file:`conf.py` file and + is therefore seen as one Sphinx project. + + environment + A structure where information about all documents under the root is saved, + and used for cross-referencing. The environment is pickled after the + parsing stage, so that successive runs only need to read and parse new and + changed documents. + Grammar production displays @@ -170,7 +198,6 @@ Note that no further reST parsing is done in the production, so that you don't have to escape ``*`` or ``|`` characters. - .. XXX describe optional first parameter The following is an example taken from the Python Reference Manual:: From python-checkins at python.org Sun Mar 16 12:22:40 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:22:40 +0100 (CET) Subject: [Python-checkins] r61420 - in doctools/trunk/doc: markup/inline.rst rest.rst Message-ID: <20080316112240.D62AB1E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:22:40 2008 New Revision: 61420 Modified: doctools/trunk/doc/markup/inline.rst doctools/trunk/doc/rest.rst Log: Fix internal link. Modified: doctools/trunk/doc/markup/inline.rst ============================================================================== --- doctools/trunk/doc/markup/inline.rst (original) +++ doctools/trunk/doc/markup/inline.rst Sun Mar 16 12:22:40 2008 @@ -166,6 +166,8 @@ during build. +.. _ref-role: + Cross-referencing arbitrary locations ------------------------------------- Modified: doctools/trunk/doc/rest.rst ============================================================================== --- doctools/trunk/doc/rest.rst (original) +++ doctools/trunk/doc/rest.rst Sun Mar 16 12:22:40 2008 @@ -141,7 +141,7 @@ ^^^^^^^^^^^^^^ Internal linking is done via a special reST role, see the section on specific -markup, :ref:`doc-ref-role`. +markup, :ref:`ref-role`. 
Sections From python-checkins at python.org Sun Mar 16 12:44:35 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 12:44:35 +0100 (CET) Subject: [Python-checkins] r61421 - doctools/trunk/sphinx-web.py Message-ID: <20080316114435.466CE1E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 12:44:35 2008 New Revision: 61421 Modified: doctools/trunk/sphinx-web.py Log: sphinx.web is currently broken. Modified: doctools/trunk/sphinx-web.py ============================================================================== --- doctools/trunk/sphinx-web.py (original) +++ doctools/trunk/sphinx-web.py Sun Mar 16 12:44:35 2008 @@ -10,5 +10,8 @@ import sys if __name__ == '__main__': + print 'sphinx.web currently doesn\'t work -- it will undergo a ' \ + 'serious rewrite soon.' + sys.exit() from sphinx.web import main sys.exit(main(sys.argv)) From python-checkins at python.org Sun Mar 16 13:00:01 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 16 Mar 2008 13:00:01 +0100 (CET) Subject: [Python-checkins] r61422 - in doctools/trunk: doc/.static/sphinx.png doc/.templates/index.html doc/.templates/layout.html sphinx/static/sphinxdoc.css sphinx/templates/layout.html sphinx/templates/macros.html Message-ID: <20080316120001.EA4EF1E400B@bag.python.org> Author: georg.brandl Date: Sun Mar 16 13:00:01 2008 New Revision: 61422 Added: doctools/trunk/sphinx/templates/macros.html Modified: doctools/trunk/doc/.static/sphinx.png doctools/trunk/doc/.templates/index.html doctools/trunk/doc/.templates/layout.html doctools/trunk/sphinx/static/sphinxdoc.css doctools/trunk/sphinx/templates/layout.html Log: Make the sphinxdoc layout work with IE. Modified: doctools/trunk/doc/.static/sphinx.png ============================================================================== Binary files. No diff available. Modified: doctools/trunk/doc/.templates/index.html ============================================================================== --- doctools/trunk/doc/.templates/index.html (original) +++ doctools/trunk/doc/.templates/index.html Sun Mar 16 13:00:01 2008 @@ -1,6 +1,11 @@ {% extends "layout.html" %} {% set title = 'Overview' %} {% block body %} +

                + Attention: this is a preview. Sphinx is not released yet on PyPI, + and the contents of this documentation are subject to change. +

                +

                Welcome

                @@ -10,6 +15,7 @@ new Python documentation, but has now been cleaned up in the hope that it will be useful to many other projects. (Of course, this site is also created from reStructuredText sources using Sphinx!) +

                Although it is still under constant development, the following features are Modified: doctools/trunk/doc/.templates/layout.html ============================================================================== --- doctools/trunk/doc/.templates/layout.html (original) +++ doctools/trunk/doc/.templates/layout.html Sun Mar 16 13:00:01 2008 @@ -10,3 +10,7 @@

              {% endblock %} + +{# put the sidebar before the body #} +{% block sidebar1 %}{{ sidebar() }}{% endblock %} +{% block sidebar2 %}{% endblock %} \ No newline at end of file Modified: doctools/trunk/sphinx/static/sphinxdoc.css ============================================================================== --- doctools/trunk/sphinx/static/sphinxdoc.css (original) +++ doctools/trunk/sphinx/static/sphinxdoc.css Sun Mar 16 13:00:01 2008 @@ -101,7 +101,6 @@ dl { margin-bottom: 15px; - clear: both; } dd p { @@ -148,51 +147,16 @@ background-repeat: repeat-x; } +/* div.documentwrapper { - float: left; width: 100%; } +*/ div.clearer { clear: both; } -div.header { - background-image: url(header.png); - height: 100px; -} - -div.header h1 { - float: right; - position: absolute; - margin: -30px 0 0 585px; - height: 180px; - width: 180px; -} - -div.header h1 a { - display: block; - background-image: url(werkzeug.png); - background-repeat: no-repeat; - height: 180px; - width: 180px; - text-decoration: none; - color: white!important; -} - -div.header span { - display: none; -} - -div.header p { - background-image: url(header_invert.png); - margin: 0; - padding: 10px; - height: 80px; - color: white; - display: none; -} - div.related h3 { display: none; } @@ -250,7 +214,8 @@ padding: 0.5em 15px 15px 0; width: 210px; float: right; - margin-left: -100%; + text-align: left; +/* margin-left: -100%; */ } div.sidebar h4, div.sidebar h3 { Modified: doctools/trunk/sphinx/templates/layout.html ============================================================================== --- doctools/trunk/sphinx/templates/layout.html (original) +++ doctools/trunk/sphinx/templates/layout.html Sun Mar 16 13:00:01 2008 @@ -1,7 +1,8 @@ -{% block doctype -%} +{%- include "macros.html" %} +{%- block doctype -%} -{% endblock -%} +{%- endblock %} @@ -56,36 +57,14 @@ {%- block beforerelbar %}{% endblock %} -{%- filter capture('relbar') %} -{%- block relbar %} - -{%- endblock %} -{%- endfilter %} +{%- block relbar1 %}{{ relbar() }}{% endblock %} {%- block afterrelbar %}{% endblock %} +{%- block beforesidebar1 %}{% endblock %} +{%- block sidebar1 %}{# possible location for sidebar #}{% endblock %} +{%- block aftersidebar1 %}{% endblock %} + +{%- block beforedocument %}{% endblock %}
              {%- if builder != 'htmlhelp' %} @@ -98,61 +77,16 @@
              {%- endif %}
              +{%- block afterdocument %}{% endblock %} -{%- block beforesidebar %}{% endblock %} -{%- block sidebar %} - {%- if builder != 'htmlhelp' %} - - {%- endif %} -{%- endblock %} -{%- block aftersidebar %}{% endblock %} +{%- block beforesidebar2 %}{% endblock %} +{%- block sidebar2 %}{{ sidebar() }}{% endblock %} +{%- block aftersidebar2 %}{% endblock %}
              -{%- block bottomrelbar %} -{{ relbar }} -{%- endblock %} + +{%- block relbar2 %}{{ relbar() }}{% endblock %} + {%- block beforefooter %}{% endblock %} {%- block footer %} {%- endif %} {%- endmacro %} -{%- macro relbar %} - -{%- endmacro %} + From python-checkins at python.org Tue Mar 18 20:52:26 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 18 Mar 2008 20:52:26 +0100 (CET) Subject: [Python-checkins] r61554 - doctools/trunk/doc/Makefile Message-ID: <20080318195226.AC2251E4003@bag.python.org> Author: georg.brandl Date: Tue Mar 18 20:52:26 2008 New Revision: 61554 Added: doctools/trunk/doc/Makefile Log: Add doc Makefile. Added: doctools/trunk/doc/Makefile ============================================================================== --- (empty file) +++ doctools/trunk/doc/Makefile Tue Mar 18 20:52:26 2008 @@ -0,0 +1,66 @@ +# Makefile for Sphinx documentation +# + +# You can set these variables from the command line. +SPHINXOPTS = +SPHINXBUILD = python ../sphinx-build.py +PAPER = + +ALLSPHINXOPTS = -d .build/doctrees -D latex_paper_size=$(PAPER) \ + $(SPHINXOPTS) . + +.PHONY: help clean html web htmlhelp latex changes linkcheck + +help: + @echo "Please use \`make ' where is one of" + @echo " html to make standalone HTML files" + @echo " web to make files usable by Sphinx.web" + @echo " htmlhelp to make HTML files and a HTML help project" + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " changes to make an overview over all changed/added/deprecated items" + @echo " linkcheck to check all external links for integrity" + +clean: + -rm -rf .build/* + +html: + mkdir -p .build/html .build/doctrees + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) .build/html + @echo + @echo "Build finished. The HTML pages are in .build/html." + +web: + mkdir -p .build/web .build/doctrees + $(SPHINXBUILD) -b web $(ALLSPHINXOPTS) .build/web + @echo + @echo "Build finished; now you can run" + @echo " python -m sphinx.web .build/web" + @echo "to start the server." + +htmlhelp: + mkdir -p .build/htmlhelp .build/doctrees + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) .build/htmlhelp + @echo + @echo "Build finished; now you can run HTML Help Workshop with the" \ + ".hhp project file in .build/htmlhelp." + +latex: + mkdir -p .build/latex .build/doctrees + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) .build/latex + @echo + @echo "Build finished; the LaTeX files are in .build/latex." + @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ + "run these through (pdf)latex." + +changes: + mkdir -p .build/changes .build/doctrees + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) .build/changes + @echo + @echo "The overview file is in .build/changes." + +linkcheck: + mkdir -p .build/linkcheck .build/doctrees + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) .build/linkcheck + @echo + @echo "Link check complete; look for any errors in the above output " \ + "or in .build/linkcheck/output.txt." 
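For reference, the ``html`` target in this Makefile reduces to a single sphinx-build call; run from the doc/ directory it is roughly equivalent to the following (a sketch that drops the empty ``-D latex_paper_size=`` option and assumes the in-tree ``../sphinx-build.py`` wrapper):

    mkdir -p .build/html .build/doctrees
    python ../sphinx-build.py -b html -d .build/doctrees . .build/html

or simply ``make html``. The ``-d`` option keeps the pickled doctrees in a directory shared by all targets, so the other builders (web, htmlhelp, latex) can reuse the parsed documents.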
From python-checkins at python.org Tue Mar 18 20:54:48 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 18 Mar 2008 20:54:48 +0100 (CET) Subject: [Python-checkins] r61555 - in doctools/trunk: doc/config.rst sphinx/builder.py sphinx/config.py sphinx/latexwriter.py sphinx/quickstart.py sphinx/templates/layout.html Message-ID: <20080318195448.924001E4003@bag.python.org> Author: georg.brandl Date: Tue Mar 18 20:54:45 2008 New Revision: 61555 Modified: doctools/trunk/doc/config.rst doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/config.py doctools/trunk/sphinx/latexwriter.py doctools/trunk/sphinx/quickstart.py doctools/trunk/sphinx/templates/layout.html Log: Make it possible to deactivate the module index. Modified: doctools/trunk/doc/config.rst ============================================================================== --- doctools/trunk/doc/config.rst (original) +++ doctools/trunk/doc/config.rst Tue Mar 18 20:54:45 2008 @@ -167,6 +167,10 @@ Additional templates that should be rendered to HTML pages, must be a dictionary that maps document names to template names. +.. confval:: html_use_modindex + + If true, add a module index to the HTML documents. Default is ``True``. + .. confval:: html_copy_source If true, the reST sources are included in the HTML build as @@ -217,3 +221,7 @@ .. confval:: latex_preamble Additional LaTeX markup for the preamble. + +.. confval:: latex_use_modindex + + If true, add a module index to LaTeX documents. Default is ``True``. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Tue Mar 18 20:54:45 2008 @@ -315,6 +315,7 @@ version = self.config.version, last_updated = self.last_updated, style = self.config.html_style, + use_modindex = self.config.html_use_modindex, builder = self.name, parents = [], titles = {}, @@ -387,46 +388,47 @@ # the global module index - # the sorted list of all modules, for the global module index - modules = sorted(((mn, (self.get_relative_uri('modindex', fn) + - '#module-' + mn, sy, pl, dep)) - for (mn, (fn, sy, pl, dep)) in self.env.modules.iteritems()), - key=lambda x: x[0].lower()) - # collect all platforms - platforms = set() - # sort out collapsable modules - modindexentries = [] - pmn = '' - cg = 0 # collapse group - fl = '' # first letter - for mn, (fn, sy, pl, dep) in modules: - pl = pl and pl.split(', ') or [] - platforms.update(pl) - if fl != mn[0].lower() and mn[0] != '_': - modindexentries.append(['', False, 0, False, - mn[0].upper(), '', [], False]) - tn = mn.split('.')[0] - if tn != mn: - # submodule - if pmn == tn: - # first submodule - make parent collapsable - modindexentries[-1][1] = True - elif not pmn.startswith(tn): - # submodule without parent in list, add dummy entry + if self.config.html_use_modindex: + # the sorted list of all modules, for the global module index + modules = sorted(((mn, (self.get_relative_uri('modindex', fn) + + '#module-' + mn, sy, pl, dep)) + for (mn, (fn, sy, pl, dep)) in self.env.modules.iteritems()), + key=lambda x: x[0].lower()) + # collect all platforms + platforms = set() + # sort out collapsable modules + modindexentries = [] + pmn = '' + cg = 0 # collapse group + fl = '' # first letter + for mn, (fn, sy, pl, dep) in modules: + pl = pl and pl.split(', ') or [] + platforms.update(pl) + if fl != mn[0].lower() and mn[0] != '_': + modindexentries.append(['', False, 0, False, + mn[0].upper(), '', [], False]) + tn = 
mn.split('.')[0] + if tn != mn: + # submodule + if pmn == tn: + # first submodule - make parent collapsable + modindexentries[-1][1] = True + elif not pmn.startswith(tn): + # submodule without parent in list, add dummy entry + cg += 1 + modindexentries.append([tn, True, cg, False, '', '', [], False]) + else: cg += 1 - modindexentries.append([tn, True, cg, False, '', '', [], False]) - else: - cg += 1 - modindexentries.append([mn, False, cg, (tn != mn), fn, sy, pl, dep]) - pmn = mn - fl = mn[0].lower() - platforms = sorted(platforms) - - modindexcontext = dict( - modindexentries = modindexentries, - platforms = platforms, - ) - self.handle_page('modindex', modindexcontext, 'modindex.html') + modindexentries.append([mn, False, cg, (tn != mn), fn, sy, pl, dep]) + pmn = mn + fl = mn[0].lower() + platforms = sorted(platforms) + + modindexcontext = dict( + modindexentries = modindexentries, + platforms = platforms, + ) + self.handle_page('modindex', modindexcontext, 'modindex.html') # the search page self.handle_page('search', {}, 'search.html') Modified: doctools/trunk/sphinx/config.py ============================================================================== --- doctools/trunk/sphinx/config.py (original) +++ doctools/trunk/sphinx/config.py Tue Mar 18 20:54:45 2008 @@ -53,6 +53,7 @@ html_index = ('', False), html_sidebars = ({}, False), html_additional_pages = ({}, False), + html_use_modindex = (True, False), html_copy_source = (True, False), # HTML help options @@ -64,6 +65,7 @@ latex_documents = ([], False), latex_preamble = ('', False), latex_appendices = ([], False), + latex_use_modindex = (True, False), ) def __init__(self, dirname, filename): Modified: doctools/trunk/sphinx/latexwriter.py ============================================================================== --- doctools/trunk/sphinx/latexwriter.py (original) +++ doctools/trunk/sphinx/latexwriter.py Tue Mar 18 20:54:45 2008 @@ -33,11 +33,9 @@ \author{%(author)s} %(preamble)s \makeindex -\makemodindex ''' FOOTER = r''' -\printmodindex \printindex \end{document} ''' @@ -57,15 +55,9 @@ self.builder = builder def translate(self): - try: - visitor = LaTeXTranslator(self.document, self.builder) - self.document.walkabout(visitor) - self.output = visitor.astext() - except: - import pdb, sys, traceback - traceback.print_exc() - tb = sys.exc_info()[2] - pdb.post_mortem(tb) + visitor = LaTeXTranslator(self.document, self.builder) + self.document.walkabout(visitor) + self.output = visitor.astext() # Helper classes @@ -100,6 +92,7 @@ 'papersize': paper, 'pointsize': builder.config.latex_font_size, 'preamble': builder.config.latex_preamble, + 'modindex': builder.config.latex_use_modindex, 'author': document.settings.author, 'docname': document.settings.docname, # if empty, the title is set to the first section title @@ -127,8 +120,10 @@ def astext(self): return (HEADER % self.options) + \ + (self.options['modindex'] and '\\makemodindex\n' or '') + \ self.highlighter.get_stylesheet() + '\n\n' + \ u''.join(self.body) + \ + (self.options['modindex'] and '\\printmodindex\n' or '') + \ (FOOTER % self.options) def visit_document(self, node): Modified: doctools/trunk/sphinx/quickstart.py ============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Tue Mar 18 20:54:45 2008 @@ -117,6 +117,9 @@ # template names. #html_additional_pages = {} +# If false, no module index is generated. 
+#html_use_modindex = True + # If true, the reST sources are included in the HTML build as _sources/. #html_copy_source = True @@ -142,6 +145,9 @@ # Documents to append as an appendix to all manuals. #latex_appendices = [] + +# If false, no module index is generated. +#latex_use_modindex = True ''' MASTER_FILE = '''\ Modified: doctools/trunk/sphinx/templates/layout.html ============================================================================== --- doctools/trunk/sphinx/templates/layout.html (original) +++ doctools/trunk/sphinx/templates/layout.html Tue Mar 18 20:54:45 2008 @@ -9,7 +9,9 @@

              Navigation

              • index
              • + {%- if use_modindex %}
              • modules |
              • + {%- endif %} {%- if next %}
              • next |
              • {%- endif %} From python-checkins at python.org Tue Mar 18 20:59:15 2008 From: python-checkins at python.org (steven.bethard) Date: Tue, 18 Mar 2008 20:59:15 +0100 (CET) Subject: [Python-checkins] r61556 - python/trunk/Lib/test/test_atexit.py Message-ID: <20080318195915.3331A1E4003@bag.python.org> Author: steven.bethard Date: Tue Mar 18 20:59:14 2008 New Revision: 61556 Modified: python/trunk/Lib/test/test_atexit.py Log: Fix test_atexit so that it still passes when -3 is supplied. (It was catching the warning messages on stdio from using the reload() function.) Modified: python/trunk/Lib/test/test_atexit.py ============================================================================== --- python/trunk/Lib/test/test_atexit.py (original) +++ python/trunk/Lib/test/test_atexit.py Tue Mar 18 20:59:14 2008 @@ -41,13 +41,13 @@ def test_sys_override(self): # be sure a preset sys.exitfunc is handled properly - s = StringIO.StringIO() - sys.stdout = sys.stderr = s save_handlers = atexit._exithandlers atexit._exithandlers = [] exfunc = sys.exitfunc sys.exitfunc = self.h1 reload(atexit) + s = StringIO.StringIO() + sys.stdout = sys.stderr = s try: atexit.register(self.h2) atexit._run_exitfuncs() From buildbot at python.org Tue Mar 18 21:12:45 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 18 Mar 2008 20:12:45 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080318201245.8B7C61E4003@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3037 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: brett.cannon,sean.reifschneider,steven.bethard BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_xmlrpc Traceback (most recent call last): File "./Lib/test/regrtest.py", line 563, in runtest_inner test_times.append((test_time, test)) AttributeError: 'NoneType' object has no attribute 'append' make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Tue Mar 18 21:30:38 2008 From: python-checkins at python.org (neal.norwitz) Date: Tue, 18 Mar 2008 21:30:38 +0100 (CET) Subject: [Python-checkins] r61559 - python/trunk/Lib/test/test_extcall.py Message-ID: <20080318203038.94D3D1E4003@bag.python.org> Author: neal.norwitz Date: Tue Mar 18 21:30:38 2008 New Revision: 61559 Modified: python/trunk/Lib/test/test_extcall.py Log: Import the test properly. This is especially important for py3k. Modified: python/trunk/Lib/test/test_extcall.py ============================================================================== --- python/trunk/Lib/test/test_extcall.py (original) +++ python/trunk/Lib/test/test_extcall.py Tue Mar 18 21:30:38 2008 @@ -255,7 +255,7 @@ from test import test_support def test_main(): - import test_extcall # self import + from test import test_extcall # self import test_support.run_doctest(test_extcall, True) if __name__ == '__main__': From python-checkins at python.org Tue Mar 18 21:40:02 2008 From: python-checkins at python.org (gregory.p.smith) Date: Tue, 18 Mar 2008 21:40:02 +0100 (CET) Subject: [Python-checkins] r61560 - python/trunk/Misc/NEWS Message-ID: <20080318204002.0CA0A1E4003@bag.python.org> Author: gregory.p.smith Date: Tue Mar 18 21:40:01 2008 New Revision: 61560 Modified: python/trunk/Misc/NEWS Log: news entry for the chown fix Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 18 21:40:01 2008 @@ -44,6 +44,9 @@ Library ------- +- Issue #1747858: Fix chown to work with large uid's and gid's on 64-bit + platforms. + - Issue #1202: zlib.crc32 and zlib.adler32 no longer return different values on 32-bit vs. 64-bit python interpreters. 
Both were correct, but they now both return a signed integer object for consistency. From python-checkins at python.org Tue Mar 18 22:12:42 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 18 Mar 2008 22:12:42 +0100 (CET) Subject: [Python-checkins] r61563 - python/trunk Message-ID: <20080318211242.6333E1E4003@bag.python.org> Author: brett.cannon Date: Tue Mar 18 22:12:42 2008 New Revision: 61563 Modified: python/trunk/ (props changed) Log: Ignore BIG5HKSCS-2004.TXT which is downloaded as part of a test. From python-checkins at python.org Tue Mar 18 22:20:26 2008 From: python-checkins at python.org (david.wolever) Date: Tue, 18 Mar 2008 22:20:26 +0100 (CET) Subject: [Python-checkins] r61564 - python/trunk/Python/bltinmodule.c Message-ID: <20080318212026.2D6071E402C@bag.python.org> Author: david.wolever Date: Tue Mar 18 22:20:25 2008 New Revision: 61564 Modified: python/trunk/Python/bltinmodule.c Log: Added a warning when -3 is enabled and None is passed to filter as the first argument. Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Tue Mar 18 22:20:25 2008 @@ -296,6 +296,13 @@ } if (func == (PyObject *)&PyBool_Type || func == Py_None) { + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "filter with None as a first argument " + "is not supported in 3.x. Use a list " + "comprehension instead.") < 0) + return NULL; + ok = PyObject_IsTrue(item); } else { From python-checkins at python.org Tue Mar 18 22:30:14 2008 From: python-checkins at python.org (steven.bethard) Date: Tue, 18 Mar 2008 22:30:14 +0100 (CET) Subject: [Python-checkins] r61565 - python/trunk/Lib/test/regrtest.py Message-ID: <20080318213014.243A41E401D@bag.python.org> Author: steven.bethard Date: Tue Mar 18 22:30:13 2008 New Revision: 61565 Modified: python/trunk/Lib/test/regrtest.py Log: Have regrtest skip test_py3kwarn when the -3 flag is missing. Modified: python/trunk/Lib/test/regrtest.py ============================================================================== --- python/trunk/Lib/test/regrtest.py (original) +++ python/trunk/Lib/test/regrtest.py Tue Mar 18 22:30:13 2008 @@ -1164,6 +1164,14 @@ self.expected.add('test_sunaudiodev') self.expected.add('test_nis') + # TODO: This is a hack to raise TestSkipped if -3 is not enabled. + # Instead of relying on callable to have a warning, we should expose + # the -3 flag to Python code somehow + with test_support.catch_warning() as w: + callable(int) + if w.message is None: + self.expected.add('test_py3kwarn') + self.valid = True def isvalid(self): From buildbot at python.org Tue Mar 18 22:59:15 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 18 Mar 2008 21:59:15 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080318215915.5ADF91E400E@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/28 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Tue Mar 18 23:01:27 2008 From: python-checkins at python.org (david.wolever) Date: Tue, 18 Mar 2008 23:01:27 +0100 (CET) Subject: [Python-checkins] r61569 - in sandbox/trunk/2to3/lib2to3: fixes/fix_filter.py tests/test_fixers.py Message-ID: <20080318220127.8787F1E4013@bag.python.org> Author: david.wolever Date: Tue Mar 18 23:01:27 2008 New Revision: 61569 Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Log: Fixed 2to3's handing of filter(None, seq). Now it will return [_f for _f in seq if _f]. Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py (original) +++ sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py Tue Mar 18 23:01:27 2008 @@ -40,6 +40,11 @@ | power< 'filter' + trailer< '(' arglist< none='None' ',' seq=any > ')' > + > + | + power< + 'filter' args=trailer< '(' [any] ')' > > """ @@ -65,6 +70,13 @@ results.get("fp").clone(), results.get("it").clone(), results.get("xp").clone()) + + elif "none" in results: + new = ListComp(Name("_f"), + Name("_f"), + results["seq"].clone(), + Name("_f")) + else: if in_special_context(node): return None Modified: sandbox/trunk/2to3/lib2to3/tests/test_fixers.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/tests/test_fixers.py (original) +++ sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Tue Mar 18 23:01:27 2008 @@ -2308,13 +2308,17 @@ fixer = "filter" def test_prefix_preservation(self): - b = """x = filter( None, 'abc' )""" - a = """x = list(filter( None, 'abc' ))""" + b = """x = filter( foo, 'abc' )""" + a = """x = list(filter( foo, 'abc' ))""" + self.check(b, a) + + b = """x = filter( None , 'abc' )""" + a = """x = [_f for _f in 'abc' if _f]""" self.check(b, a) def test_filter_basic(self): b = """x = filter(None, 'abc')""" - a = """x = list(filter(None, 'abc'))""" + a = """x = [_f for _f in 'abc' if _f]""" self.check(b, a) b = """x = len(filter(f, 'abc'))""" From python-checkins at python.org Tue Mar 18 23:08:20 2008 From: python-checkins at python.org (steven.bethard) Date: Tue, 18 Mar 2008 23:08:20 +0100 (CET) Subject: [Python-checkins] r61570 - in python/trunk: Lib/test/test_py3kwarn.py Objects/codeobject.c Objects/methodobject.c Message-ID: <20080318220820.EE58C1E4003@bag.python.org> Author: steven.bethard Date: Tue Mar 18 23:08:20 2008 New Revision: 61570 Modified: python/trunk/Lib/test/test_py3kwarn.py python/trunk/Objects/codeobject.c python/trunk/Objects/methodobject.c Log: Add py3k warnings for code and method inequality comparisons. This should resolve issue 2373. The codeobject.c and methodobject.c changes are both just backports of the Python 3 code. 
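For context, these warnings are gated on the ``-3`` flag (``Py_Py3kWarningFlag``), so they are silent in a normal run. A rough sketch of what triggers them, mirroring the tests below (the script name is only illustrative):

    # run as:  python -3 demo.py
    def f(x): pass
    def g(x): pass

    # Ordering comparisons on code objects now warn:
    #   DeprecationWarning: code inequality comparisons not supported in 3.x.
    f.func_code < g.func_code

    # Likewise for built-in functions and methods:
    #   DeprecationWarning: builtin_function_or_method inequality
    #   comparisons not supported in 3.x.
    eval < {}.get

    # == and != between two code objects are unaffected.
    f.func_code == g.func_code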
Modified: python/trunk/Lib/test/test_py3kwarn.py ============================================================================== --- python/trunk/Lib/test/test_py3kwarn.py (original) +++ python/trunk/Lib/test/test_py3kwarn.py Tue Mar 18 23:08:20 2008 @@ -50,6 +50,35 @@ with catch_warning() as w: self.assertWarning(cell0 < cell1, w, expected) + def test_code_inequality_comparisons(self): + expected = 'code inequality comparisons not supported in 3.x.' + def f(x): + pass + def g(x): + pass + with catch_warning() as w: + self.assertWarning(f.func_code < g.func_code, w, expected) + with catch_warning() as w: + self.assertWarning(f.func_code <= g.func_code, w, expected) + with catch_warning() as w: + self.assertWarning(f.func_code >= g.func_code, w, expected) + with catch_warning() as w: + self.assertWarning(f.func_code > g.func_code, w, expected) + + def test_builtin_function_or_method_comparisons(self): + expected = ('builtin_function_or_method ' + 'inequality comparisons not supported in 3.x.') + func = eval + meth = {}.get + with catch_warning() as w: + self.assertWarning(func < meth, w, expected) + with catch_warning() as w: + self.assertWarning(func > meth, w, expected) + with catch_warning() as w: + self.assertWarning(meth <= func, w, expected) + with catch_warning() as w: + self.assertWarning(meth >= func, w, expected) + def assertWarning(self, _, warning, expected_message): self.assertEqual(str(warning.message), expected_message) Modified: python/trunk/Objects/codeobject.c ============================================================================== --- python/trunk/Objects/codeobject.c (original) +++ python/trunk/Objects/codeobject.c Tue Mar 18 23:08:20 2008 @@ -327,6 +327,72 @@ return 0; } +static PyObject * +code_richcompare(PyObject *self, PyObject *other, int op) +{ + PyCodeObject *co, *cp; + int eq; + PyObject *res; + + if ((op != Py_EQ && op != Py_NE) || + !PyCode_Check(self) || + !PyCode_Check(other)) { + + /* Py3K warning if types are not equal and comparison isn't == or != */ + if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, + "code inequality comparisons not supported in 3.x.") < 0) { + return NULL; + } + + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } + + co = (PyCodeObject *)self; + cp = (PyCodeObject *)other; + + eq = PyObject_RichCompareBool(co->co_name, cp->co_name, Py_EQ); + if (eq <= 0) goto unequal; + eq = co->co_argcount == cp->co_argcount; + if (!eq) goto unequal; + eq = co->co_nlocals == cp->co_nlocals; + if (!eq) goto unequal; + eq = co->co_flags == cp->co_flags; + if (!eq) goto unequal; + eq = co->co_firstlineno == cp->co_firstlineno; + if (!eq) goto unequal; + eq = PyObject_RichCompareBool(co->co_code, cp->co_code, Py_EQ); + if (eq <= 0) goto unequal; + eq = PyObject_RichCompareBool(co->co_consts, cp->co_consts, Py_EQ); + if (eq <= 0) goto unequal; + eq = PyObject_RichCompareBool(co->co_names, cp->co_names, Py_EQ); + if (eq <= 0) goto unequal; + eq = PyObject_RichCompareBool(co->co_varnames, cp->co_varnames, Py_EQ); + if (eq <= 0) goto unequal; + eq = PyObject_RichCompareBool(co->co_freevars, cp->co_freevars, Py_EQ); + if (eq <= 0) goto unequal; + eq = PyObject_RichCompareBool(co->co_cellvars, cp->co_cellvars, Py_EQ); + if (eq <= 0) goto unequal; + + if (op == Py_EQ) + res = Py_True; + else + res = Py_False; + goto done; + + unequal: + if (eq < 0) + return NULL; + if (op == Py_NE) + res = Py_True; + else + res = Py_False; + + done: + Py_INCREF(res); + return res; +} + static long code_hash(PyCodeObject *co) { @@ -377,7 +443,7 @@ 
code_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ - 0, /* tp_richcompare */ + code_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ Modified: python/trunk/Objects/methodobject.c ============================================================================== --- python/trunk/Objects/methodobject.c (original) +++ python/trunk/Objects/methodobject.c Tue Mar 18 23:08:20 2008 @@ -223,6 +223,40 @@ return 1; } +static PyObject * +meth_richcompare(PyObject *self, PyObject *other, int op) +{ + PyCFunctionObject *a, *b; + PyObject *res; + int eq; + + if ((op != Py_EQ && op != Py_NE) || + !PyCFunction_Check(self) || + !PyCFunction_Check(other)) + { + /* Py3K warning if types are not equal and comparison isn't == or != */ + if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, + "builtin_function_or_method " + "inequality comparisons not supported in 3.x.") < 0) { + return NULL; + } + + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } + a = (PyCFunctionObject *)self; + b = (PyCFunctionObject *)other; + eq = a->m_self == b->m_self; + if (eq) + eq = a->m_ml->ml_meth == b->m_ml->ml_meth; + if (op == Py_EQ) + res = eq ? Py_True : Py_False; + else + res = eq ? Py_False : Py_True; + Py_INCREF(res); + return res; +} + static long meth_hash(PyCFunctionObject *a) { @@ -268,7 +302,7 @@ 0, /* tp_doc */ (traverseproc)meth_traverse, /* tp_traverse */ 0, /* tp_clear */ - 0, /* tp_richcompare */ + meth_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ From python-checkins at python.org Tue Mar 18 23:27:41 2008 From: python-checkins at python.org (gregory.p.smith) Date: Tue, 18 Mar 2008 23:27:41 +0100 (CET) Subject: [Python-checkins] r61571 - in python/trunk: Lib/test/test_zlib.py Modules/binascii.c Message-ID: <20080318222741.AA2E41E4003@bag.python.org> Author: gregory.p.smith Date: Tue Mar 18 23:27:41 2008 New Revision: 61571 Modified: python/trunk/Lib/test/test_zlib.py python/trunk/Modules/binascii.c Log: Add a test to make sure zlib.crc32 and binascii.crc32 return the same thing. Fix a buglet in binascii.crc32, the second optional argument could previously have a signedness mismatch with the C variable its going into. 
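The r61571 log above concerns agreement between the two CRC-32 entry points. A minimal sketch of the property the new test asserts, plus the usual masking idiom for callers who want an unsigned 32-bit value regardless of the signedness of the return (the sample data is arbitrary):

    import zlib
    import binascii

    data = 'abcdefghijklmnop'
    # Both modules implement the same CRC-32, so the results must agree.
    assert zlib.crc32(data) == binascii.crc32(data)
    # Mask to view the checksum as an unsigned 32-bit value on any version.
    print binascii.crc32(data) & 0xffffffff
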
Modified: python/trunk/Lib/test/test_zlib.py ============================================================================== --- python/trunk/Lib/test/test_zlib.py (original) +++ python/trunk/Lib/test/test_zlib.py Tue Mar 18 23:27:41 2008 @@ -1,6 +1,7 @@ import unittest from test import test_support import zlib +import binascii import random @@ -47,6 +48,11 @@ self.assertEqual(zlib.adler32(foo+foo), -721416943) self.assertEqual(zlib.adler32('spam'), 72286642) + def test_same_as_binascii_crc32(self): + foo = 'abcdefghijklmnop' + self.assertEqual(binascii.crc32(foo), zlib.crc32(foo)) + self.assertEqual(binascii.crc32('spam'), zlib.crc32('spam')) + class ExceptionTestCase(unittest.TestCase): Modified: python/trunk/Modules/binascii.c ============================================================================== --- python/trunk/Modules/binascii.c (original) +++ python/trunk/Modules/binascii.c Tue Mar 18 23:27:41 2008 @@ -874,7 +874,7 @@ Py_ssize_t len; long result; - if ( !PyArg_ParseTuple(args, "s#|l:crc32", &bin_data, &len, &crc) ) + if ( !PyArg_ParseTuple(args, "s#|k:crc32", &bin_data, &len, &crc) ) return NULL; crc = ~ crc; From buildbot at python.org Tue Mar 18 23:45:18 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 18 Mar 2008 22:45:18 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080318224518.EE28D1E400E@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2699 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith,steven.bethard BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_tarfile ====================================================================== ERROR: test_check_members (test.test_tarfile.GzipMiscReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 221, in test_check_members for tarinfo in self.tar: File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2407, in next tarinfo = self.tarfile.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_fileobj_with_offset (test.test_tarfile.GzipMiscReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 190, in test_fileobj_with_offset tar.getmembers() File 
"/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_find_members (test.test_tarfile.GzipMiscReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 230, in test_find_members self.assert_(self.tar.getmembers()[-1].name == "misc/eof", File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_v7_dirtype (test.test_tarfile.GzipMiscReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 207, in test_v7_dirtype tarinfo = self.tar.getmember("misc/dirtype-old-v7") File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1778, in getmember tarinfo = self._getmember(name) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2338, in _getmember members = self.getmembers() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File 
"/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_xstar_type (test.test_tarfile.GzipMiscReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 216, in test_xstar_type self.tar.getmember("misc/regtype-xstar") File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1778, in getmember tarinfo = self._getmember(name) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2338, in _getmember members = self.getmembers() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_fileobj_iter (test.test_tarfile.GzipUstarReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 81, in test_fileobj_iter self.tar.extract("ustar/regtype", TEMPDIR) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2056, in extract tarinfo = self.getmember(member) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1778, in getmember tarinfo = self._getmember(name) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2338, in _getmember members = self.getmembers() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File 
"/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_fileobj_readlines (test.test_tarfile.GzipUstarReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 65, in test_fileobj_readlines self.tar.extract("ustar/regtype", TEMPDIR) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2056, in extract tarinfo = self.getmember(member) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1778, in getmember tarinfo = self._getmember(name) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2338, in _getmember members = self.getmembers() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_fileobj_regular_file (test.test_tarfile.GzipUstarReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 58, in test_fileobj_regular_file tarinfo = self.tar.getmember("ustar/regtype") File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1778, in getmember tarinfo = self._getmember(name) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2338, in _getmember members = self.getmembers() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() 
File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_fileobj_seek (test.test_tarfile.GzipUstarReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 91, in test_fileobj_seek self.tar.extract("ustar/regtype", TEMPDIR) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2056, in extract tarinfo = self.getmember(member) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1778, in getmember tarinfo = self._getmember(name) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2338, in _getmember members = self.getmembers() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1789, in getmembers self._load() # all members, we first have to File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2354, in _load tarinfo = self.next() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 2309, in next self.fileobj.seek(self.offset) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 401, in seek self.read(1024) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 235, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed ====================================================================== ERROR: test_exclude (test.test_tarfile.GzipWriteTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 648, in test_exclude tar = tarfile.open(tmpname, "r") File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/tarfile.py", line 1649, in open raise ReadError("file could not be opened successfully") ReadError: file could not be opened successfully ====================================================================== ERROR: test_stream_padding (test.test_tarfile.GzipStreamWriteTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 666, in test_stream_padding data = fobj.read() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 228, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 300, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 319, in _read_eof raise IOError, "CRC check failed" IOError: CRC check failed sincerely, -The Buildbot From jimjjewett at gmail.com Tue Mar 18 23:54:16 2008 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 18 Mar 2008 18:54:16 -0400 Subject: [Python-checkins] logging shutdown (was: Re: 
r61431 - python/trunk/Doc/library/logging.rst) Message-ID: I think (repeatedly) testing an app through IDLE is a reasonable use case. Would it be reasonable for shutdown to remove logging from sys.modules, so that a rerun has some chance of succeeding via its own import? -jJ On 3/16/08, vinay.sajip wrote: > Author: vinay.sajip > Date: Sun Mar 16 22:35:58 2008 > New Revision: 61431 > > Modified: > python/trunk/Doc/library/logging.rst > Log: > Clarified documentation on use of shutdown(). > > Modified: python/trunk/Doc/library/logging.rst > ============================================================================== > --- python/trunk/Doc/library/logging.rst (original) > +++ python/trunk/Doc/library/logging.rst Sun Mar 16 22:35:58 2008 > @@ -732,7 +732,8 @@ > .. function:: shutdown() > > Informs the logging system to perform an orderly shutdown by flushing and > - closing all handlers. > + closing all handlers. This should be called at application exit and no > + further use of the logging system should be made after this call. > > > .. function:: setLoggerClass(klass) > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From python-checkins at python.org Tue Mar 18 23:56:06 2008 From: python-checkins at python.org (david.wolever) Date: Tue, 18 Mar 2008 23:56:06 +0100 (CET) Subject: [Python-checkins] r61574 - sandbox/trunk/2to3/lib2to3 Message-ID: <20080318225606.CB6A71E401E@bag.python.org> Author: david.wolever Date: Tue Mar 18 23:56:06 2008 New Revision: 61574 Modified: sandbox/trunk/2to3/lib2to3/ (props changed) Log: Added svn:ignore keywords From python-checkins at python.org Wed Mar 19 00:22:30 2008 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 19 Mar 2008 00:22:30 +0100 (CET) Subject: [Python-checkins] r61575 - python/trunk/Objects/abstract.c Message-ID: <20080318232230.0873B1E4003@bag.python.org> Author: raymond.hettinger Date: Wed Mar 19 00:22:29 2008 New Revision: 61575 Modified: python/trunk/Objects/abstract.c Log: Speed-up isinstance() for one easy case. Modified: python/trunk/Objects/abstract.c ============================================================================== --- python/trunk/Objects/abstract.c (original) +++ python/trunk/Objects/abstract.c Wed Mar 19 00:22:29 2008 @@ -2909,6 +2909,11 @@ static PyObject *name = NULL; PyObject *t, *v, *tb; PyObject *checker; + + /* Quick test for an exact match */ + if (Py_TYPE(inst) == cls) + return 1; + PyErr_Fetch(&t, &v, &tb); if (name == NULL) { From python-checkins at python.org Wed Mar 19 00:33:08 2008 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 19 Mar 2008 00:33:08 +0100 (CET) Subject: [Python-checkins] r61576 - python/trunk/Objects/listobject.c Message-ID: <20080318233308.7B5121E4003@bag.python.org> Author: raymond.hettinger Date: Wed Mar 19 00:33:08 2008 New Revision: 61576 Modified: python/trunk/Objects/listobject.c Log: Issue: 2354: Add 3K warning for the cmp argument to list.sort() and sorted(). 
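As context for the r61576 log just above, here is a minimal sketch of the migration the new -3 warning points at: dropping the cmp= argument, which 3.x no longer accepts, in favour of an equivalent key= function. The sample data is illustrative only.

    names = ['Banana', 'apple', 'Cherry']

    # 2.x-only spelling, the target of the new warning:
    #   names.sort(cmp=lambda a, b: cmp(a.lower(), b.lower()))

    # Equivalent spelling that also works in 3.x:
    names.sort(key=lambda s: s.lower())
    print names   # ['apple', 'Banana', 'Cherry']
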
Modified: python/trunk/Objects/listobject.c ============================================================================== --- python/trunk/Objects/listobject.c (original) +++ python/trunk/Objects/listobject.c Wed Mar 19 00:33:08 2008 @@ -2037,6 +2037,11 @@ } if (compare == Py_None) compare = NULL; + if (compare == NULL && + Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "In 3.x, the cmp argument is no longer supported.") < 0) + return NULL; if (keyfunc == Py_None) keyfunc = NULL; if (compare != NULL && keyfunc != NULL) { From python-checkins at python.org Wed Mar 19 00:45:50 2008 From: python-checkins at python.org (eric.smith) Date: Wed, 19 Mar 2008 00:45:50 +0100 (CET) Subject: [Python-checkins] r61577 - in python/trunk: Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_print.py Misc/ACKS Misc/NEWS Parser/parser.c Parser/parsetok.c Python/bltinmodule.c Python/future.c Python/pythonrun.c Message-ID: <20080318234550.096EE1E4011@bag.python.org> Author: eric.smith Date: Wed Mar 19 00:45:49 2008 New Revision: 61577 Added: python/trunk/Lib/test/test_print.py Modified: python/trunk/Include/code.h python/trunk/Include/compile.h python/trunk/Include/parsetok.h python/trunk/Include/pythonrun.h python/trunk/Lib/__future__.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Parser/parser.c python/trunk/Parser/parsetok.c python/trunk/Python/bltinmodule.c python/trunk/Python/future.c python/trunk/Python/pythonrun.c Log: Backport of the print function, using a __future__ import. This work is substantially Anthony Baxter's, from issue 1633807. I just freshened it, made a few minor tweaks, and added the test cases. I also created issue 2412, which is to check for 2to3's behavior with the print function. I also added myself to ACKS. Modified: python/trunk/Include/code.h ============================================================================== --- python/trunk/Include/code.h (original) +++ python/trunk/Include/code.h Wed Mar 19 00:45:49 2008 @@ -48,11 +48,12 @@ #define CO_FUTURE_DIVISION 0x2000 #define CO_FUTURE_ABSOLUTE_IMPORT 0x4000 /* do absolute imports by default */ #define CO_FUTURE_WITH_STATEMENT 0x8000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 /* This should be defined if a future statement modifies the syntax. For example, when a keyword is added. 
*/ -#if 0 +#if 1 #define PY_PARSER_REQUIRES_FUTURE_KEYWORD #endif Modified: python/trunk/Include/compile.h ============================================================================== --- python/trunk/Include/compile.h (original) +++ python/trunk/Include/compile.h Wed Mar 19 00:45:49 2008 @@ -24,6 +24,8 @@ #define FUTURE_DIVISION "division" #define FUTURE_ABSOLUTE_IMPORT "absolute_import" #define FUTURE_WITH_STATEMENT "with_statement" +#define FUTURE_PRINT_FUNCTION "print_function" + struct _mod; /* Declare the existence of this type */ PyAPI_FUNC(PyCodeObject *) PyAST_Compile(struct _mod *, const char *, Modified: python/trunk/Include/parsetok.h ============================================================================== --- python/trunk/Include/parsetok.h (original) +++ python/trunk/Include/parsetok.h Wed Mar 19 00:45:49 2008 @@ -27,6 +27,10 @@ #define PyPARSE_WITH_IS_KEYWORD 0x0003 #endif +#define PyPARSE_PRINT_IS_FUNCTION 0x0004 + + + PyAPI_FUNC(node *) PyParser_ParseString(const char *, grammar *, int, perrdetail *); PyAPI_FUNC(node *) PyParser_ParseFile (FILE *, const char *, grammar *, int, Modified: python/trunk/Include/pythonrun.h ============================================================================== --- python/trunk/Include/pythonrun.h (original) +++ python/trunk/Include/pythonrun.h Wed Mar 19 00:45:49 2008 @@ -8,7 +8,7 @@ #endif #define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ - CO_FUTURE_WITH_STATEMENT) + CO_FUTURE_WITH_STATEMENT|CO_FUTURE_PRINT_FUNCTION) #define PyCF_MASK_OBSOLETE (CO_NESTED) #define PyCF_SOURCE_IS_UTF8 0x0100 #define PyCF_DONT_IMPLY_DEDENT 0x0200 Modified: python/trunk/Lib/__future__.py ============================================================================== --- python/trunk/Lib/__future__.py (original) +++ python/trunk/Lib/__future__.py Wed Mar 19 00:45:49 2008 @@ -53,6 +53,7 @@ "division", "absolute_import", "with_statement", + "print_function", ] __all__ = ["all_feature_names"] + all_feature_names @@ -66,6 +67,7 @@ CO_FUTURE_DIVISION = 0x2000 # division CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function class _Feature: def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): @@ -114,3 +116,7 @@ with_statement = _Feature((2, 5, 0, "alpha", 1), (2, 6, 0, "alpha", 0), CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) Added: python/trunk/Lib/test/test_print.py ============================================================================== --- (empty file) +++ python/trunk/Lib/test/test_print.py Wed Mar 19 00:45:49 2008 @@ -0,0 +1,129 @@ +"""Test correct operation of the print function. 
+""" + +from __future__ import print_function + +import unittest +from test import test_support + +import sys +try: + # 3.x + from io import StringIO +except ImportError: + # 2.x + from StringIO import StringIO + +from contextlib import contextmanager + +NotDefined = object() + +# A dispatch table all 8 combinations of providing +# sep, end, and file +# I use this machinery so that I'm not just passing default +# values to print, I'm eiher passing or not passing in the +# arguments +dispatch = { + (False, False, False): + lambda args, sep, end, file: print(*args), + (False, False, True): + lambda args, sep, end, file: print(file=file, *args), + (False, True, False): + lambda args, sep, end, file: print(end=end, *args), + (False, True, True): + lambda args, sep, end, file: print(end=end, file=file, *args), + (True, False, False): + lambda args, sep, end, file: print(sep=sep, *args), + (True, False, True): + lambda args, sep, end, file: print(sep=sep, file=file, *args), + (True, True, False): + lambda args, sep, end, file: print(sep=sep, end=end, *args), + (True, True, True): + lambda args, sep, end, file: print(sep=sep, end=end, file=file, *args), + } + + at contextmanager +def stdout_redirected(new_stdout): + save_stdout = sys.stdout + sys.stdout = new_stdout + try: + yield None + finally: + sys.stdout = save_stdout + +# Class used to test __str__ and print +class ClassWith__str__: + def __init__(self, x): + self.x = x + def __str__(self): + return self.x + +class TestPrint(unittest.TestCase): + def check(self, expected, args, + sep=NotDefined, end=NotDefined, file=NotDefined): + # Capture sys.stdout in a StringIO. Call print with args, + # and with sep, end, and file, if they're defined. Result + # must match expected. + + # Look up the actual function to call, based on if sep, end, and file + # are defined + fn = dispatch[(sep is not NotDefined, + end is not NotDefined, + file is not NotDefined)] + + t = StringIO() + with stdout_redirected(t): + fn(args, sep, end, file) + + self.assertEqual(t.getvalue(), expected) + + def test_print(self): + def x(expected, args, sep=NotDefined, end=NotDefined): + # Run the test 2 ways: not using file, and using + # file directed to a StringIO + + self.check(expected, args, sep=sep, end=end) + + # When writing to a file, stdout is expected to be empty + o = StringIO() + self.check('', args, sep=sep, end=end, file=o) + + # And o will contain the expected output + self.assertEqual(o.getvalue(), expected) + + x('\n', ()) + x('a\n', ('a',)) + x('None\n', (None,)) + x('1 2\n', (1, 2)) + x('1 2\n', (1, ' ', 2)) + x('1*2\n', (1, 2), sep='*') + x('1 s', (1, 's'), end='') + x('a\nb\n', ('a', 'b'), sep='\n') + x('1.01', (1.0, 1), sep='', end='') + x('1*a*1.3+', (1, 'a', 1.3), sep='*', end='+') + x('a\n\nb\n', ('a\n', 'b'), sep='\n') + x('\0+ +\0\n', ('\0', ' ', '\0'), sep='+') + + x('a\n b\n', ('a\n', 'b')) + x('a\n b\n', ('a\n', 'b'), sep=None) + x('a\n b\n', ('a\n', 'b'), end=None) + x('a\n b\n', ('a\n', 'b'), sep=None, end=None) + + x('*\n', (ClassWith__str__('*'),)) + x('abc 1\n', (ClassWith__str__('abc'), 1)) + + # 2.x unicode tests + x(u'1 2\n', ('1', u'2')) + x(u'u\1234\n', (u'u\1234',)) + x(u' abc 1\n', (' ', ClassWith__str__(u'abc'), 1)) + + # errors + self.assertRaises(TypeError, print, '', sep=3) + self.assertRaises(TypeError, print, '', end=3) + self.assertRaises(AttributeError, print, '', file='') + +def test_main(): + test_support.run_unittest(TestPrint) + +if __name__ == "__main__": + test_main() Modified: python/trunk/Misc/ACKS 
============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Wed Mar 19 00:45:49 2008 @@ -622,6 +622,7 @@ J. Sipprell Kragen Sitaker Christopher Smith +Eric V. Smith Gregory P. Smith Rafal Smotrzyk Dirk Soede Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 19 00:45:49 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Issue 1745. Backport print function with: + from __future__ import print_function + - Issue 2332: add new attribute names for instance method objects. The two changes are: im_self -> __self__ and im_func -> __func__ Modified: python/trunk/Parser/parser.c ============================================================================== --- python/trunk/Parser/parser.c (original) +++ python/trunk/Parser/parser.c Wed Mar 19 00:45:49 2008 @@ -149,12 +149,10 @@ strcmp(l->lb_str, s) != 0) continue; #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - if (!(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { - if (s[0] == 'w' && strcmp(s, "with") == 0) - break; /* not a keyword yet */ - else if (s[0] == 'a' && strcmp(s, "as") == 0) - break; /* not a keyword yet */ - } + if (ps->p_flags & CO_FUTURE_PRINT_FUNCTION && + s[0] == 'p' && strcmp(s, "print") == 0) { + break; /* no longer a keyword */ + } #endif D(printf("It's a keyword\n")); return n - i; @@ -208,6 +206,10 @@ strcmp(STR(CHILD(cch, 0)), "with_statement") == 0) { ps->p_flags |= CO_FUTURE_WITH_STATEMENT; break; + } else if (NCH(cch) >= 1 && TYPE(CHILD(cch, 0)) == NAME && + strcmp(STR(CHILD(cch, 0)), "print_function") == 0) { + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; + break; } } } Modified: python/trunk/Parser/parsetok.c ============================================================================== --- python/trunk/Parser/parsetok.c (original) +++ python/trunk/Parser/parsetok.c Wed Mar 19 00:45:49 2008 @@ -123,8 +123,8 @@ return NULL; } #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - if (flags & PyPARSE_WITH_IS_KEYWORD) - ps->p_flags |= CO_FUTURE_WITH_STATEMENT; + if (flags & PyPARSE_PRINT_IS_FUNCTION) + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; #endif for (;;) { @@ -167,26 +167,6 @@ str[len] = '\0'; #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - /* This is only necessary to support the "as" warning, but - we don't want to warn about "as" in import statements. */ - if (type == NAME && - len == 6 && str[0] == 'i' && strcmp(str, "import") == 0) - handling_import = 1; - - /* Warn about with as NAME */ - if (type == NAME && - !(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { - if (len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) - warn(with_msg, err_ret->filename, tok->lineno); - else if (!(handling_import || handling_with) && - len == 2 && str[0] == 'a' && - strcmp(str, "as") == 0) - warn(as_msg, err_ret->filename, tok->lineno); - } - else if (type == NAME && - (ps->p_flags & CO_FUTURE_WITH_STATEMENT) && - len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) - handling_with = 1; #endif if (a >= tok->line_start) col_offset = a - tok->line_start; Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Wed Mar 19 00:45:49 2008 @@ -1486,6 +1486,78 @@ equivalent to (x**y) % z, but may be more efficient (e.g. 
for longs)."); +static PyObject * +builtin_print(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"sep", "end", "file", 0}; + static PyObject *dummy_args; + PyObject *sep = NULL, *end = NULL, *file = NULL; + int i, err; + + if (dummy_args == NULL) { + if (!(dummy_args = PyTuple_New(0))) + return NULL; + } + if (!PyArg_ParseTupleAndKeywords(dummy_args, kwds, "|OOO:print", + kwlist, &sep, &end, &file)) + return NULL; + if (file == NULL || file == Py_None) { + file = PySys_GetObject("stdout"); + /* sys.stdout may be None when FILE* stdout isn't connected */ + if (file == Py_None) + Py_RETURN_NONE; + } + + if (sep && sep != Py_None && !PyString_Check(sep) && + !PyUnicode_Check(sep)) { + PyErr_Format(PyExc_TypeError, + "sep must be None, str or unicode, not %.200s", + sep->ob_type->tp_name); + return NULL; + } + if (end && end != Py_None && !PyString_Check(end) && + !PyUnicode_Check(end)) { + PyErr_Format(PyExc_TypeError, + "end must be None, str or unicode, not %.200s", + end->ob_type->tp_name); + return NULL; + } + + for (i = 0; i < PyTuple_Size(args); i++) { + if (i > 0) { + if (sep == NULL || sep == Py_None) + err = PyFile_WriteString(" ", file); + else + err = PyFile_WriteObject(sep, file, + Py_PRINT_RAW); + if (err) + return NULL; + } + err = PyFile_WriteObject(PyTuple_GetItem(args, i), file, + Py_PRINT_RAW); + if (err) + return NULL; + } + + if (end == NULL || end == Py_None) + err = PyFile_WriteString("\n", file); + else + err = PyFile_WriteObject(end, file, Py_PRINT_RAW); + if (err) + return NULL; + + Py_RETURN_NONE; +} + +PyDoc_STRVAR(print_doc, +"print(value, ..., sep=' ', end='\\n', file=sys.stdout)\n\ +\n\ +Prints the values to a stream, or to sys.stdout by default.\n\ +Optional keyword arguments:\n\ +file: a file-like object (stream); defaults to the current sys.stdout.\n\ +sep: string inserted between values, default a space.\n\ +end: string appended after the last value, default a newline."); + /* Return number of items in range (lo, hi, step), when arguments are * PyInt or PyLong objects. step > 0 required. Return a value < 0 if @@ -2424,6 +2496,7 @@ {"open", (PyCFunction)builtin_open, METH_VARARGS | METH_KEYWORDS, open_doc}, {"ord", builtin_ord, METH_O, ord_doc}, {"pow", builtin_pow, METH_VARARGS, pow_doc}, + {"print", (PyCFunction)builtin_print, METH_VARARGS | METH_KEYWORDS, print_doc}, {"range", builtin_range, METH_VARARGS, range_doc}, {"raw_input", builtin_raw_input, METH_VARARGS, raw_input_doc}, {"reduce", builtin_reduce, METH_VARARGS, reduce_doc}, Modified: python/trunk/Python/future.c ============================================================================== --- python/trunk/Python/future.c (original) +++ python/trunk/Python/future.c Wed Mar 19 00:45:49 2008 @@ -33,6 +33,8 @@ ff->ff_features |= CO_FUTURE_ABSOLUTE_IMPORT; } else if (strcmp(feature, FUTURE_WITH_STATEMENT) == 0) { ff->ff_features |= CO_FUTURE_WITH_STATEMENT; + } else if (strcmp(feature, FUTURE_PRINT_FUNCTION) == 0) { + ff->ff_features |= CO_FUTURE_PRINT_FUNCTION; } else if (strcmp(feature, "braces") == 0) { PyErr_SetString(PyExc_SyntaxError, "not a chance"); Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Wed Mar 19 00:45:49 2008 @@ -738,18 +738,19 @@ } } +#if 0 /* compute parser flags based on compiler flags */ #define PARSER_FLAGS(flags) \ ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? 
\ PyPARSE_DONT_IMPLY_DEDENT : 0)) : 0) - -#if 0 +#endif +#if 1 /* Keep an example of flags with future keyword support. */ #define PARSER_FLAGS(flags) \ ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? \ PyPARSE_DONT_IMPLY_DEDENT : 0) \ - | ((flags)->cf_flags & CO_FUTURE_WITH_STATEMENT ? \ - PyPARSE_WITH_IS_KEYWORD : 0)) : 0) + | ((flags)->cf_flags & CO_FUTURE_PRINT_FUNCTION ? \ + PyPARSE_PRINT_IS_FUNCTION : 0)) : 0) #endif int From buildbot at python.org Wed Mar 19 01:15:38 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 19 Mar 2008 00:15:38 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080319001538.7123C1E4003@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1099 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_compiler ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\pycodegen.py", line 64, in compile gen.compile() File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\future.py", line 59, in find_futures walk(node, p1) File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\visitor.py", line 106, in walk walker.preorder(tree, visitor) File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\visitor.py", line 57, in dispatch return meth(node, *args) File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\future.py", line 27, in visitModule if not self.check_stmt(s): File "C:\buildbot\work\trunk.heller-windows\build\lib\compiler\future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined sincerely, -The Buildbot From buildbot at python.org Wed Mar 19 01:30:04 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 19 Mar 2008 00:30:04 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080319003004.2E5C91E4003@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/133 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_compiler ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\trunk.nelson-windows\build\lib\test\test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 64, in compile gen.compile() File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 59, in find_futures walk(node, p1) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 106, in walk walker.preorder(tree, visitor) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 57, in dispatch return meth(node, *args) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 27, in visitModule if not self.check_stmt(s): File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined sincerely, -The Buildbot From buildbot at python.org Wed Mar 19 01:51:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 19 Mar 2008 00:51:56 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080319005157.096901E4003@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/403 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_compiler ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/pycodegen.py", line 64, in compile gen.compile() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/future.py", line 59, in find_futures walk(node, p1) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/visitor.py", line 106, in walk walker.preorder(tree, visitor) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/visitor.py", line 57, in dispatch return meth(node, *args) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/future.py", line 27, in visitModule if not self.check_stmt(s): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/compiler/future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Mar 19 02:05:35 2008 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 19 Mar 2008 02:05:35 +0100 (CET) Subject: [Python-checkins] r61580 - python/trunk/Misc/developers.txt Message-ID: <20080319010535.761441E401C@bag.python.org> Author: andrew.kuchling Date: Wed Mar 19 02:05:35 2008 New Revision: 61580 Modified: python/trunk/Misc/developers.txt Log: Add Jeff Rush Modified: python/trunk/Misc/developers.txt ============================================================================== --- python/trunk/Misc/developers.txt (original) +++ python/trunk/Misc/developers.txt Wed Mar 19 02:05:35 2008 @@ -17,6 +17,8 @@ Permissions History ------------------- +- Jeff Rush was given SVN access on 18 March 2008 by AMK, for Distutils work. + - David Wolever was given SVN access on 17 March 2008 by MvL, for 2to3 work. From tnelson at onresolve.com Wed Mar 19 02:16:33 2008 From: tnelson at onresolve.com (Trent Nelson) Date: Tue, 18 Mar 2008 18:16:33 -0700 Subject: [Python-checkins] r61577 - in python/trunk: Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_print.py Misc/ACKS Misc/NEWS Parser/parser.c Parser/parsetok.c Python/bltinmodule.c Python/future.c ... 
In-Reply-To: <20080318234550.096EE1E4011@bag.python.org> References: <20080318234550.096EE1E4011@bag.python.org> Message-ID: <87D3F9C72FBF214DB39FA4E3FE618CDC6E160CEE23@EXMBX04.exchhosting.com> This change breaks all the trunk buildbots: ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "S:\buildbots\python\trunk.nelson-windows\build\lib\test\test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 64, in compile gen.compile() File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 59, in find_futures walk(node, p1) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 106, in walk walker.preorder(tree, visitor) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 57, in dispatch return meth(node, *args) File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 27, in visitModule if not self.check_stmt(s): File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined ________________________________________ From: python-checkins-bounces+tnelson=onresolve.com at python.org [python-checkins-bounces+tnelson=onresolve.com at python.org] On Behalf Of eric.smith [python-checkins at python.org] Sent: 18 March 2008 19:45 To: python-checkins at python.org Subject: [Python-checkins] r61577 - in python/trunk: Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_print.py Misc/ACKS Misc/NEWS Parser/parser.c Parser/parsetok.c Python/bltinmodule.c Python/future.c Pyth... Author: eric.smith Date: Wed Mar 19 00:45:49 2008 New Revision: 61577 Added: python/trunk/Lib/test/test_print.py Modified: python/trunk/Include/code.h python/trunk/Include/compile.h python/trunk/Include/parsetok.h python/trunk/Include/pythonrun.h python/trunk/Lib/__future__.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Parser/parser.c python/trunk/Parser/parsetok.c python/trunk/Python/bltinmodule.c python/trunk/Python/future.c python/trunk/Python/pythonrun.c Log: Backport of the print function, using a __future__ import. This work is substantially Anthony Baxter's, from issue 1633807. I just freshened it, made a few minor tweaks, and added the test cases. I also created issue 2412, which is to check for 2to3's behavior with the print function. I also added myself to ACKS. 
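For readers skimming the quoted log, a minimal sketch of what the backported feature looks like from user code, assuming a 2.6 build that already contains r61577 (the strings and stream choice are arbitrary):

    from __future__ import print_function
    import sys

    print('spam', 'eggs', sep=', ', end='!\n')      # -> spam, eggs!
    print('something went wrong', file=sys.stderr)  # write to another stream
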
Modified: python/trunk/Include/code.h ============================================================================== --- python/trunk/Include/code.h (original) +++ python/trunk/Include/code.h Wed Mar 19 00:45:49 2008 @@ -48,11 +48,12 @@ #define CO_FUTURE_DIVISION 0x2000 #define CO_FUTURE_ABSOLUTE_IMPORT 0x4000 /* do absolute imports by default */ #define CO_FUTURE_WITH_STATEMENT 0x8000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 /* This should be defined if a future statement modifies the syntax. For example, when a keyword is added. */ -#if 0 +#if 1 #define PY_PARSER_REQUIRES_FUTURE_KEYWORD #endif Modified: python/trunk/Include/compile.h ============================================================================== --- python/trunk/Include/compile.h (original) +++ python/trunk/Include/compile.h Wed Mar 19 00:45:49 2008 @@ -24,6 +24,8 @@ #define FUTURE_DIVISION "division" #define FUTURE_ABSOLUTE_IMPORT "absolute_import" #define FUTURE_WITH_STATEMENT "with_statement" +#define FUTURE_PRINT_FUNCTION "print_function" + struct _mod; /* Declare the existence of this type */ PyAPI_FUNC(PyCodeObject *) PyAST_Compile(struct _mod *, const char *, Modified: python/trunk/Include/parsetok.h ============================================================================== --- python/trunk/Include/parsetok.h (original) +++ python/trunk/Include/parsetok.h Wed Mar 19 00:45:49 2008 @@ -27,6 +27,10 @@ #define PyPARSE_WITH_IS_KEYWORD 0x0003 #endif +#define PyPARSE_PRINT_IS_FUNCTION 0x0004 + + + PyAPI_FUNC(node *) PyParser_ParseString(const char *, grammar *, int, perrdetail *); PyAPI_FUNC(node *) PyParser_ParseFile (FILE *, const char *, grammar *, int, Modified: python/trunk/Include/pythonrun.h ============================================================================== --- python/trunk/Include/pythonrun.h (original) +++ python/trunk/Include/pythonrun.h Wed Mar 19 00:45:49 2008 @@ -8,7 +8,7 @@ #endif #define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ - CO_FUTURE_WITH_STATEMENT) + CO_FUTURE_WITH_STATEMENT|CO_FUTURE_PRINT_FUNCTION) #define PyCF_MASK_OBSOLETE (CO_NESTED) #define PyCF_SOURCE_IS_UTF8 0x0100 #define PyCF_DONT_IMPLY_DEDENT 0x0200 Modified: python/trunk/Lib/__future__.py ============================================================================== --- python/trunk/Lib/__future__.py (original) +++ python/trunk/Lib/__future__.py Wed Mar 19 00:45:49 2008 @@ -53,6 +53,7 @@ "division", "absolute_import", "with_statement", + "print_function", ] __all__ = ["all_feature_names"] + all_feature_names @@ -66,6 +67,7 @@ CO_FUTURE_DIVISION = 0x2000 # division CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function class _Feature: def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): @@ -114,3 +116,7 @@ with_statement = _Feature((2, 5, 0, "alpha", 1), (2, 6, 0, "alpha", 0), CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) Added: python/trunk/Lib/test/test_print.py ============================================================================== --- (empty file) +++ python/trunk/Lib/test/test_print.py Wed Mar 19 00:45:49 2008 @@ -0,0 +1,129 @@ +"""Test correct operation of the print function. 
+""" + +from __future__ import print_function + +import unittest +from test import test_support + +import sys +try: + # 3.x + from io import StringIO +except ImportError: + # 2.x + from StringIO import StringIO + +from contextlib import contextmanager + +NotDefined = object() + +# A dispatch table all 8 combinations of providing +# sep, end, and file +# I use this machinery so that I'm not just passing default +# values to print, I'm eiher passing or not passing in the +# arguments +dispatch = { + (False, False, False): + lambda args, sep, end, file: print(*args), + (False, False, True): + lambda args, sep, end, file: print(file=file, *args), + (False, True, False): + lambda args, sep, end, file: print(end=end, *args), + (False, True, True): + lambda args, sep, end, file: print(end=end, file=file, *args), + (True, False, False): + lambda args, sep, end, file: print(sep=sep, *args), + (True, False, True): + lambda args, sep, end, file: print(sep=sep, file=file, *args), + (True, True, False): + lambda args, sep, end, file: print(sep=sep, end=end, *args), + (True, True, True): + lambda args, sep, end, file: print(sep=sep, end=end, file=file, *args), + } + + at contextmanager +def stdout_redirected(new_stdout): + save_stdout = sys.stdout + sys.stdout = new_stdout + try: + yield None + finally: + sys.stdout = save_stdout + +# Class used to test __str__ and print +class ClassWith__str__: + def __init__(self, x): + self.x = x + def __str__(self): + return self.x + +class TestPrint(unittest.TestCase): + def check(self, expected, args, + sep=NotDefined, end=NotDefined, file=NotDefined): + # Capture sys.stdout in a StringIO. Call print with args, + # and with sep, end, and file, if they're defined. Result + # must match expected. + + # Look up the actual function to call, based on if sep, end, and file + # are defined + fn = dispatch[(sep is not NotDefined, + end is not NotDefined, + file is not NotDefined)] + + t = StringIO() + with stdout_redirected(t): + fn(args, sep, end, file) + + self.assertEqual(t.getvalue(), expected) + + def test_print(self): + def x(expected, args, sep=NotDefined, end=NotDefined): + # Run the test 2 ways: not using file, and using + # file directed to a StringIO + + self.check(expected, args, sep=sep, end=end) + + # When writing to a file, stdout is expected to be empty + o = StringIO() + self.check('', args, sep=sep, end=end, file=o) + + # And o will contain the expected output + self.assertEqual(o.getvalue(), expected) + + x('\n', ()) + x('a\n', ('a',)) + x('None\n', (None,)) + x('1 2\n', (1, 2)) + x('1 2\n', (1, ' ', 2)) + x('1*2\n', (1, 2), sep='*') + x('1 s', (1, 's'), end='') + x('a\nb\n', ('a', 'b'), sep='\n') + x('1.01', (1.0, 1), sep='', end='') + x('1*a*1.3+', (1, 'a', 1.3), sep='*', end='+') + x('a\n\nb\n', ('a\n', 'b'), sep='\n') + x('\0+ +\0\n', ('\0', ' ', '\0'), sep='+') + + x('a\n b\n', ('a\n', 'b')) + x('a\n b\n', ('a\n', 'b'), sep=None) + x('a\n b\n', ('a\n', 'b'), end=None) + x('a\n b\n', ('a\n', 'b'), sep=None, end=None) + + x('*\n', (ClassWith__str__('*'),)) + x('abc 1\n', (ClassWith__str__('abc'), 1)) + + # 2.x unicode tests + x(u'1 2\n', ('1', u'2')) + x(u'u\1234\n', (u'u\1234',)) + x(u' abc 1\n', (' ', ClassWith__str__(u'abc'), 1)) + + # errors + self.assertRaises(TypeError, print, '', sep=3) + self.assertRaises(TypeError, print, '', end=3) + self.assertRaises(AttributeError, print, '', file='') + +def test_main(): + test_support.run_unittest(TestPrint) + +if __name__ == "__main__": + test_main() Modified: python/trunk/Misc/ACKS 
============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Wed Mar 19 00:45:49 2008 @@ -622,6 +622,7 @@ J. Sipprell Kragen Sitaker Christopher Smith +Eric V. Smith Gregory P. Smith Rafal Smotrzyk Dirk Soede Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 19 00:45:49 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Issue 1745. Backport print function with: + from __future__ import print_function + - Issue 2332: add new attribute names for instance method objects. The two changes are: im_self -> __self__ and im_func -> __func__ Modified: python/trunk/Parser/parser.c ============================================================================== --- python/trunk/Parser/parser.c (original) +++ python/trunk/Parser/parser.c Wed Mar 19 00:45:49 2008 @@ -149,12 +149,10 @@ strcmp(l->lb_str, s) != 0) continue; #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - if (!(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { - if (s[0] == 'w' && strcmp(s, "with") == 0) - break; /* not a keyword yet */ - else if (s[0] == 'a' && strcmp(s, "as") == 0) - break; /* not a keyword yet */ - } + if (ps->p_flags & CO_FUTURE_PRINT_FUNCTION && + s[0] == 'p' && strcmp(s, "print") == 0) { + break; /* no longer a keyword */ + } #endif D(printf("It's a keyword\n")); return n - i; @@ -208,6 +206,10 @@ strcmp(STR(CHILD(cch, 0)), "with_statement") == 0) { ps->p_flags |= CO_FUTURE_WITH_STATEMENT; break; + } else if (NCH(cch) >= 1 && TYPE(CHILD(cch, 0)) == NAME && + strcmp(STR(CHILD(cch, 0)), "print_function") == 0) { + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; + break; } } } Modified: python/trunk/Parser/parsetok.c ============================================================================== --- python/trunk/Parser/parsetok.c (original) +++ python/trunk/Parser/parsetok.c Wed Mar 19 00:45:49 2008 @@ -123,8 +123,8 @@ return NULL; } #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - if (flags & PyPARSE_WITH_IS_KEYWORD) - ps->p_flags |= CO_FUTURE_WITH_STATEMENT; + if (flags & PyPARSE_PRINT_IS_FUNCTION) + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; #endif for (;;) { @@ -167,26 +167,6 @@ str[len] = '\0'; #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - /* This is only necessary to support the "as" warning, but - we don't want to warn about "as" in import statements. */ - if (type == NAME && - len == 6 && str[0] == 'i' && strcmp(str, "import") == 0) - handling_import = 1; - - /* Warn about with as NAME */ - if (type == NAME && - !(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { - if (len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) - warn(with_msg, err_ret->filename, tok->lineno); - else if (!(handling_import || handling_with) && - len == 2 && str[0] == 'a' && - strcmp(str, "as") == 0) - warn(as_msg, err_ret->filename, tok->lineno); - } - else if (type == NAME && - (ps->p_flags & CO_FUTURE_WITH_STATEMENT) && - len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) - handling_with = 1; #endif if (a >= tok->line_start) col_offset = a - tok->line_start; Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Wed Mar 19 00:45:49 2008 @@ -1486,6 +1486,78 @@ equivalent to (x**y) % z, but may be more efficient (e.g. 
for longs)."); +static PyObject * +builtin_print(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"sep", "end", "file", 0}; + static PyObject *dummy_args; + PyObject *sep = NULL, *end = NULL, *file = NULL; + int i, err; + + if (dummy_args == NULL) { + if (!(dummy_args = PyTuple_New(0))) + return NULL; + } + if (!PyArg_ParseTupleAndKeywords(dummy_args, kwds, "|OOO:print", + kwlist, &sep, &end, &file)) + return NULL; + if (file == NULL || file == Py_None) { + file = PySys_GetObject("stdout"); + /* sys.stdout may be None when FILE* stdout isn't connected */ + if (file == Py_None) + Py_RETURN_NONE; + } + + if (sep && sep != Py_None && !PyString_Check(sep) && + !PyUnicode_Check(sep)) { + PyErr_Format(PyExc_TypeError, + "sep must be None, str or unicode, not %.200s", + sep->ob_type->tp_name); + return NULL; + } + if (end && end != Py_None && !PyString_Check(end) && + !PyUnicode_Check(end)) { + PyErr_Format(PyExc_TypeError, + "end must be None, str or unicode, not %.200s", + end->ob_type->tp_name); + return NULL; + } + + for (i = 0; i < PyTuple_Size(args); i++) { + if (i > 0) { + if (sep == NULL || sep == Py_None) + err = PyFile_WriteString(" ", file); + else + err = PyFile_WriteObject(sep, file, + Py_PRINT_RAW); + if (err) + return NULL; + } + err = PyFile_WriteObject(PyTuple_GetItem(args, i), file, + Py_PRINT_RAW); + if (err) + return NULL; + } + + if (end == NULL || end == Py_None) + err = PyFile_WriteString("\n", file); + else + err = PyFile_WriteObject(end, file, Py_PRINT_RAW); + if (err) + return NULL; + + Py_RETURN_NONE; +} + +PyDoc_STRVAR(print_doc, +"print(value, ..., sep=' ', end='\\n', file=sys.stdout)\n\ +\n\ +Prints the values to a stream, or to sys.stdout by default.\n\ +Optional keyword arguments:\n\ +file: a file-like object (stream); defaults to the current sys.stdout.\n\ +sep: string inserted between values, default a space.\n\ +end: string appended after the last value, default a newline."); + /* Return number of items in range (lo, hi, step), when arguments are * PyInt or PyLong objects. step > 0 required. Return a value < 0 if @@ -2424,6 +2496,7 @@ {"open", (PyCFunction)builtin_open, METH_VARARGS | METH_KEYWORDS, open_doc}, {"ord", builtin_ord, METH_O, ord_doc}, {"pow", builtin_pow, METH_VARARGS, pow_doc}, + {"print", (PyCFunction)builtin_print, METH_VARARGS | METH_KEYWORDS, print_doc}, {"range", builtin_range, METH_VARARGS, range_doc}, {"raw_input", builtin_raw_input, METH_VARARGS, raw_input_doc}, {"reduce", builtin_reduce, METH_VARARGS, reduce_doc}, Modified: python/trunk/Python/future.c ============================================================================== --- python/trunk/Python/future.c (original) +++ python/trunk/Python/future.c Wed Mar 19 00:45:49 2008 @@ -33,6 +33,8 @@ ff->ff_features |= CO_FUTURE_ABSOLUTE_IMPORT; } else if (strcmp(feature, FUTURE_WITH_STATEMENT) == 0) { ff->ff_features |= CO_FUTURE_WITH_STATEMENT; + } else if (strcmp(feature, FUTURE_PRINT_FUNCTION) == 0) { + ff->ff_features |= CO_FUTURE_PRINT_FUNCTION; } else if (strcmp(feature, "braces") == 0) { PyErr_SetString(PyExc_SyntaxError, "not a chance"); Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Wed Mar 19 00:45:49 2008 @@ -738,18 +738,19 @@ } } +#if 0 /* compute parser flags based on compiler flags */ #define PARSER_FLAGS(flags) \ ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? 
\ PyPARSE_DONT_IMPLY_DEDENT : 0)) : 0) - -#if 0 +#endif +#if 1 /* Keep an example of flags with future keyword support. */ #define PARSER_FLAGS(flags) \ ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? \ PyPARSE_DONT_IMPLY_DEDENT : 0) \ - | ((flags)->cf_flags & CO_FUTURE_WITH_STATEMENT ? \ - PyPARSE_WITH_IS_KEYWORD : 0)) : 0) + | ((flags)->cf_flags & CO_FUTURE_PRINT_FUNCTION ? \ + PyPARSE_PRINT_IS_FUNCTION : 0)) : 0) #endif int _______________________________________________ Python-checkins mailing list Python-checkins at python.org http://mail.python.org/mailman/listinfo/python-checkins From python-checkins at python.org Wed Mar 19 02:38:35 2008 From: python-checkins at python.org (gregory.p.smith) Date: Wed, 19 Mar 2008 02:38:35 +0100 (CET) Subject: [Python-checkins] r61581 - in python/trunk: Doc/library/hashlib.rst Lib/hashlib.py Message-ID: <20080319013835.8362C1E4003@bag.python.org> Author: gregory.p.smith Date: Wed Mar 19 02:38:35 2008 New Revision: 61581 Modified: python/trunk/Doc/library/hashlib.rst python/trunk/Lib/hashlib.py Log: Mention that crc32 and adler32 are available in a different module (zlib). Some people look for them in hashlib. Modified: python/trunk/Doc/library/hashlib.rst ============================================================================== --- python/trunk/Doc/library/hashlib.rst (original) +++ python/trunk/Doc/library/hashlib.rst Wed Mar 19 02:38:35 2008 @@ -21,6 +21,10 @@ digest are interchangeable. Older algorithms were called message digests. The modern term is secure hash. +.. note:: + If you want the adler32 or crc32 hash functions they are available in + the :mod:`zlib` module. + .. warning:: Some algorithms have known hash collision weaknesses, see the FAQ at the end. Modified: python/trunk/Lib/hashlib.py ============================================================================== --- python/trunk/Lib/hashlib.py (original) +++ python/trunk/Lib/hashlib.py Wed Mar 19 02:38:35 2008 @@ -18,6 +18,9 @@ More algorithms may be available on your platform but the above are guaranteed to exist. +NOTE: If you want the adler32 or crc32 hash functions they are available in +the zlib module. + Choose your hash function wisely. Some have known collision weaknesses. sha384 and sha512 will be slow on 32 bit platforms. From eric+python-dev at trueblade.com Wed Mar 19 02:23:29 2008 From: eric+python-dev at trueblade.com (Eric Smith) Date: Tue, 18 Mar 2008 21:23:29 -0400 Subject: [Python-checkins] [Python-Dev] r61577 - in python/trunk: Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_print.py Misc/ACKS Misc/NEWS Parser/parser.c Parser/parsetok.c Python/bltinmodule.c Python/future.c ... In-Reply-To: <87D3F9C72FBF214DB39FA4E3FE618CDC6E160CEE23@EXMBX04.exchhosting.com> References: <20080318234550.096EE1E4011@bag.python.org> <87D3F9C72FBF214DB39FA4E3FE618CDC6E160CEE23@EXMBX04.exchhosting.com> Message-ID: <47E06B11.80904@trueblade.com> Yes, I know, and I'm looking at it. It doesn't fail on my Linux or Mac OS X boxes. I'm trying to duplicate the problem. I'm going to try it on my Windows box when I get home in about an hour. I'll fix it tonight. I realize there's a beer riding on the buildbots being green! Eric. 
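For reference, the failure does not need a buildbot to reproduce: the pure-Python compiler package keeps its own whitelist of __future__ features in Lib/compiler/future.py (the module named in the traceback), so it rejects the new feature name until that list is updated. A minimal reproduction on a 2.x interpreter whose compiler package has not yet learned about print_function:

    # Mirrors what test_compiler does when it recompiles the stdlib and
    # reaches the new Lib/test/test_print.py.
    import compiler

    src = "from __future__ import print_function\n"
    compiler.compile(src, "<test>", "exec")
    # raises: SyntaxError: future feature print_function is not defined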
Trent Nelson wrote: > This change breaks all the trunk buildbots: > > ====================================================================== > ERROR: testCompileLibrary (test.test_compiler.CompilerTest) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "S:\buildbots\python\trunk.nelson-windows\build\lib\test\test_compiler.py", line 52, in testCompileLibrary > compiler.compile(buf, basename, "exec") > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 64, in compile > gen.compile() > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 112, in compile > gen = ModuleCodeGenerator(tree) > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\pycodegen.py", line 1275, in __init__ > self.futures = future.find_futures(tree) > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 59, in find_futures > walk(node, p1) > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 106, in walk > walker.preorder(tree, visitor) > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 63, in preorder > self.dispatch(tree, *args) # XXX *args make sense? > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\visitor.py", line 57, in dispatch > return meth(node, *args) > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 27, in visitModule > if not self.check_stmt(s): > File "S:\buildbots\python\trunk.nelson-windows\build\lib\compiler\future.py", line 37, in check_stmt > "future feature %s is not defined" % name > SyntaxError: future feature print_function is not defined > > ________________________________________ > From: python-checkins-bounces+tnelson=onresolve.com at python.org [python-checkins-bounces+tnelson=onresolve.com at python.org] On Behalf Of eric.smith [python-checkins at python.org] > Sent: 18 March 2008 19:45 > To: python-checkins at python.org > Subject: [Python-checkins] r61577 - in python/trunk: Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_print.py Misc/ACKS Misc/NEWS Parser/parser.c Parser/parsetok.c Python/bltinmodule.c Python/future.c Pyth... > > Author: eric.smith > Date: Wed Mar 19 00:45:49 2008 > New Revision: 61577 > > Added: > python/trunk/Lib/test/test_print.py > Modified: > python/trunk/Include/code.h > python/trunk/Include/compile.h > python/trunk/Include/parsetok.h > python/trunk/Include/pythonrun.h > python/trunk/Lib/__future__.py > python/trunk/Misc/ACKS > python/trunk/Misc/NEWS > python/trunk/Parser/parser.c > python/trunk/Parser/parsetok.c > python/trunk/Python/bltinmodule.c > python/trunk/Python/future.c > python/trunk/Python/pythonrun.c > Log: > Backport of the print function, using a __future__ import. > This work is substantially Anthony Baxter's, from issue > 1633807. I just freshened it, made a few minor tweaks, > and added the test cases. I also created issue 2412, > which is to check for 2to3's behavior with the print > function. I also added myself to ACKS. 
> > Modified: python/trunk/Include/code.h > ============================================================================== > --- python/trunk/Include/code.h (original) > +++ python/trunk/Include/code.h Wed Mar 19 00:45:49 2008 > @@ -48,11 +48,12 @@ > #define CO_FUTURE_DIVISION 0x2000 > #define CO_FUTURE_ABSOLUTE_IMPORT 0x4000 /* do absolute imports by default */ > #define CO_FUTURE_WITH_STATEMENT 0x8000 > +#define CO_FUTURE_PRINT_FUNCTION 0x10000 > > /* This should be defined if a future statement modifies the syntax. > For example, when a keyword is added. > */ > -#if 0 > +#if 1 > #define PY_PARSER_REQUIRES_FUTURE_KEYWORD > #endif > > > Modified: python/trunk/Include/compile.h > ============================================================================== > --- python/trunk/Include/compile.h (original) > +++ python/trunk/Include/compile.h Wed Mar 19 00:45:49 2008 > @@ -24,6 +24,8 @@ > #define FUTURE_DIVISION "division" > #define FUTURE_ABSOLUTE_IMPORT "absolute_import" > #define FUTURE_WITH_STATEMENT "with_statement" > +#define FUTURE_PRINT_FUNCTION "print_function" > + > > struct _mod; /* Declare the existence of this type */ > PyAPI_FUNC(PyCodeObject *) PyAST_Compile(struct _mod *, const char *, > > Modified: python/trunk/Include/parsetok.h > ============================================================================== > --- python/trunk/Include/parsetok.h (original) > +++ python/trunk/Include/parsetok.h Wed Mar 19 00:45:49 2008 > @@ -27,6 +27,10 @@ > #define PyPARSE_WITH_IS_KEYWORD 0x0003 > #endif > > +#define PyPARSE_PRINT_IS_FUNCTION 0x0004 > + > + > + > PyAPI_FUNC(node *) PyParser_ParseString(const char *, grammar *, int, > perrdetail *); > PyAPI_FUNC(node *) PyParser_ParseFile (FILE *, const char *, grammar *, int, > > Modified: python/trunk/Include/pythonrun.h > ============================================================================== > --- python/trunk/Include/pythonrun.h (original) > +++ python/trunk/Include/pythonrun.h Wed Mar 19 00:45:49 2008 > @@ -8,7 +8,7 @@ > #endif > > #define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ > - CO_FUTURE_WITH_STATEMENT) > + CO_FUTURE_WITH_STATEMENT|CO_FUTURE_PRINT_FUNCTION) > #define PyCF_MASK_OBSOLETE (CO_NESTED) > #define PyCF_SOURCE_IS_UTF8 0x0100 > #define PyCF_DONT_IMPLY_DEDENT 0x0200 > > Modified: python/trunk/Lib/__future__.py > ============================================================================== > --- python/trunk/Lib/__future__.py (original) > +++ python/trunk/Lib/__future__.py Wed Mar 19 00:45:49 2008 > @@ -53,6 +53,7 @@ > "division", > "absolute_import", > "with_statement", > + "print_function", > ] > > __all__ = ["all_feature_names"] + all_feature_names > @@ -66,6 +67,7 @@ > CO_FUTURE_DIVISION = 0x2000 # division > CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default > CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement > +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function > > class _Feature: > def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): > @@ -114,3 +116,7 @@ > with_statement = _Feature((2, 5, 0, "alpha", 1), > (2, 6, 0, "alpha", 0), > CO_FUTURE_WITH_STATEMENT) > + > +print_function = _Feature((2, 6, 0, "alpha", 2), > + (3, 0, 0, "alpha", 0), > + CO_FUTURE_PRINT_FUNCTION) > > Added: python/trunk/Lib/test/test_print.py > ============================================================================== > --- (empty file) > +++ python/trunk/Lib/test/test_print.py Wed Mar 19 00:45:49 2008 > @@ -0,0 +1,129 @@ > +"""Test correct operation of the print function. 
> +""" > + > +from __future__ import print_function > + > +import unittest > +from test import test_support > + > +import sys > +try: > + # 3.x > + from io import StringIO > +except ImportError: > + # 2.x > + from StringIO import StringIO > + > +from contextlib import contextmanager > + > +NotDefined = object() > + > +# A dispatch table all 8 combinations of providing > +# sep, end, and file > +# I use this machinery so that I'm not just passing default > +# values to print, I'm eiher passing or not passing in the > +# arguments > +dispatch = { > + (False, False, False): > + lambda args, sep, end, file: print(*args), > + (False, False, True): > + lambda args, sep, end, file: print(file=file, *args), > + (False, True, False): > + lambda args, sep, end, file: print(end=end, *args), > + (False, True, True): > + lambda args, sep, end, file: print(end=end, file=file, *args), > + (True, False, False): > + lambda args, sep, end, file: print(sep=sep, *args), > + (True, False, True): > + lambda args, sep, end, file: print(sep=sep, file=file, *args), > + (True, True, False): > + lambda args, sep, end, file: print(sep=sep, end=end, *args), > + (True, True, True): > + lambda args, sep, end, file: print(sep=sep, end=end, file=file, *args), > + } > + > + at contextmanager > +def stdout_redirected(new_stdout): > + save_stdout = sys.stdout > + sys.stdout = new_stdout > + try: > + yield None > + finally: > + sys.stdout = save_stdout > + > +# Class used to test __str__ and print > +class ClassWith__str__: > + def __init__(self, x): > + self.x = x > + def __str__(self): > + return self.x > + > +class TestPrint(unittest.TestCase): > + def check(self, expected, args, > + sep=NotDefined, end=NotDefined, file=NotDefined): > + # Capture sys.stdout in a StringIO. Call print with args, > + # and with sep, end, and file, if they're defined. Result > + # must match expected. 
> + > + # Look up the actual function to call, based on if sep, end, and file > + # are defined > + fn = dispatch[(sep is not NotDefined, > + end is not NotDefined, > + file is not NotDefined)] > + > + t = StringIO() > + with stdout_redirected(t): > + fn(args, sep, end, file) > + > + self.assertEqual(t.getvalue(), expected) > + > + def test_print(self): > + def x(expected, args, sep=NotDefined, end=NotDefined): > + # Run the test 2 ways: not using file, and using > + # file directed to a StringIO > + > + self.check(expected, args, sep=sep, end=end) > + > + # When writing to a file, stdout is expected to be empty > + o = StringIO() > + self.check('', args, sep=sep, end=end, file=o) > + > + # And o will contain the expected output > + self.assertEqual(o.getvalue(), expected) > + > + x('\n', ()) > + x('a\n', ('a',)) > + x('None\n', (None,)) > + x('1 2\n', (1, 2)) > + x('1 2\n', (1, ' ', 2)) > + x('1*2\n', (1, 2), sep='*') > + x('1 s', (1, 's'), end='') > + x('a\nb\n', ('a', 'b'), sep='\n') > + x('1.01', (1.0, 1), sep='', end='') > + x('1*a*1.3+', (1, 'a', 1.3), sep='*', end='+') > + x('a\n\nb\n', ('a\n', 'b'), sep='\n') > + x('\0+ +\0\n', ('\0', ' ', '\0'), sep='+') > + > + x('a\n b\n', ('a\n', 'b')) > + x('a\n b\n', ('a\n', 'b'), sep=None) > + x('a\n b\n', ('a\n', 'b'), end=None) > + x('a\n b\n', ('a\n', 'b'), sep=None, end=None) > + > + x('*\n', (ClassWith__str__('*'),)) > + x('abc 1\n', (ClassWith__str__('abc'), 1)) > + > + # 2.x unicode tests > + x(u'1 2\n', ('1', u'2')) > + x(u'u\1234\n', (u'u\1234',)) > + x(u' abc 1\n', (' ', ClassWith__str__(u'abc'), 1)) > + > + # errors > + self.assertRaises(TypeError, print, '', sep=3) > + self.assertRaises(TypeError, print, '', end=3) > + self.assertRaises(AttributeError, print, '', file='') > + > +def test_main(): > + test_support.run_unittest(TestPrint) > + > +if __name__ == "__main__": > + test_main() > > Modified: python/trunk/Misc/ACKS > ============================================================================== > --- python/trunk/Misc/ACKS (original) > +++ python/trunk/Misc/ACKS Wed Mar 19 00:45:49 2008 > @@ -622,6 +622,7 @@ > J. Sipprell > Kragen Sitaker > Christopher Smith > +Eric V. Smith > Gregory P. Smith > Rafal Smotrzyk > Dirk Soede > > Modified: python/trunk/Misc/NEWS > ============================================================================== > --- python/trunk/Misc/NEWS (original) > +++ python/trunk/Misc/NEWS Wed Mar 19 00:45:49 2008 > @@ -12,6 +12,9 @@ > Core and builtins > ----------------- > > +- Issue 1745. Backport print function with: > + from __future__ import print_function > + > - Issue 2332: add new attribute names for instance method objects. 
> The two changes are: im_self -> __self__ and im_func -> __func__ > > > Modified: python/trunk/Parser/parser.c > ============================================================================== > --- python/trunk/Parser/parser.c (original) > +++ python/trunk/Parser/parser.c Wed Mar 19 00:45:49 2008 > @@ -149,12 +149,10 @@ > strcmp(l->lb_str, s) != 0) > continue; > #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD > - if (!(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { > - if (s[0] == 'w' && strcmp(s, "with") == 0) > - break; /* not a keyword yet */ > - else if (s[0] == 'a' && strcmp(s, "as") == 0) > - break; /* not a keyword yet */ > - } > + if (ps->p_flags & CO_FUTURE_PRINT_FUNCTION && > + s[0] == 'p' && strcmp(s, "print") == 0) { > + break; /* no longer a keyword */ > + } > #endif > D(printf("It's a keyword\n")); > return n - i; > @@ -208,6 +206,10 @@ > strcmp(STR(CHILD(cch, 0)), "with_statement") == 0) { > ps->p_flags |= CO_FUTURE_WITH_STATEMENT; > break; > + } else if (NCH(cch) >= 1 && TYPE(CHILD(cch, 0)) == NAME && > + strcmp(STR(CHILD(cch, 0)), "print_function") == 0) { > + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; > + break; > } > } > } > > Modified: python/trunk/Parser/parsetok.c > ============================================================================== > --- python/trunk/Parser/parsetok.c (original) > +++ python/trunk/Parser/parsetok.c Wed Mar 19 00:45:49 2008 > @@ -123,8 +123,8 @@ > return NULL; > } > #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD > - if (flags & PyPARSE_WITH_IS_KEYWORD) > - ps->p_flags |= CO_FUTURE_WITH_STATEMENT; > + if (flags & PyPARSE_PRINT_IS_FUNCTION) > + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; > #endif > > for (;;) { > @@ -167,26 +167,6 @@ > str[len] = '\0'; > > #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD > - /* This is only necessary to support the "as" warning, but > - we don't want to warn about "as" in import statements. */ > - if (type == NAME && > - len == 6 && str[0] == 'i' && strcmp(str, "import") == 0) > - handling_import = 1; > - > - /* Warn about with as NAME */ > - if (type == NAME && > - !(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { > - if (len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) > - warn(with_msg, err_ret->filename, tok->lineno); > - else if (!(handling_import || handling_with) && > - len == 2 && str[0] == 'a' && > - strcmp(str, "as") == 0) > - warn(as_msg, err_ret->filename, tok->lineno); > - } > - else if (type == NAME && > - (ps->p_flags & CO_FUTURE_WITH_STATEMENT) && > - len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) > - handling_with = 1; > #endif > if (a >= tok->line_start) > col_offset = a - tok->line_start; > > Modified: python/trunk/Python/bltinmodule.c > ============================================================================== > --- python/trunk/Python/bltinmodule.c (original) > +++ python/trunk/Python/bltinmodule.c Wed Mar 19 00:45:49 2008 > @@ -1486,6 +1486,78 @@ > equivalent to (x**y) % z, but may be more efficient (e.g. 
for longs)."); > > > +static PyObject * > +builtin_print(PyObject *self, PyObject *args, PyObject *kwds) > +{ > + static char *kwlist[] = {"sep", "end", "file", 0}; > + static PyObject *dummy_args; > + PyObject *sep = NULL, *end = NULL, *file = NULL; > + int i, err; > + > + if (dummy_args == NULL) { > + if (!(dummy_args = PyTuple_New(0))) > + return NULL; > + } > + if (!PyArg_ParseTupleAndKeywords(dummy_args, kwds, "|OOO:print", > + kwlist, &sep, &end, &file)) > + return NULL; > + if (file == NULL || file == Py_None) { > + file = PySys_GetObject("stdout"); > + /* sys.stdout may be None when FILE* stdout isn't connected */ > + if (file == Py_None) > + Py_RETURN_NONE; > + } > + > + if (sep && sep != Py_None && !PyString_Check(sep) && > + !PyUnicode_Check(sep)) { > + PyErr_Format(PyExc_TypeError, > + "sep must be None, str or unicode, not %.200s", > + sep->ob_type->tp_name); > + return NULL; > + } > + if (end && end != Py_None && !PyString_Check(end) && > + !PyUnicode_Check(end)) { > + PyErr_Format(PyExc_TypeError, > + "end must be None, str or unicode, not %.200s", > + end->ob_type->tp_name); > + return NULL; > + } > + > + for (i = 0; i < PyTuple_Size(args); i++) { > + if (i > 0) { > + if (sep == NULL || sep == Py_None) > + err = PyFile_WriteString(" ", file); > + else > + err = PyFile_WriteObject(sep, file, > + Py_PRINT_RAW); > + if (err) > + return NULL; > + } > + err = PyFile_WriteObject(PyTuple_GetItem(args, i), file, > + Py_PRINT_RAW); > + if (err) > + return NULL; > + } > + > + if (end == NULL || end == Py_None) > + err = PyFile_WriteString("\n", file); > + else > + err = PyFile_WriteObject(end, file, Py_PRINT_RAW); > + if (err) > + return NULL; > + > + Py_RETURN_NONE; > +} > + > +PyDoc_STRVAR(print_doc, > +"print(value, ..., sep=' ', end='\\n', file=sys.stdout)\n\ > +\n\ > +Prints the values to a stream, or to sys.stdout by default.\n\ > +Optional keyword arguments:\n\ > +file: a file-like object (stream); defaults to the current sys.stdout.\n\ > +sep: string inserted between values, default a space.\n\ > +end: string appended after the last value, default a newline."); > + > > /* Return number of items in range (lo, hi, step), when arguments are > * PyInt or PyLong objects. step > 0 required. 
Return a value < 0 if > @@ -2424,6 +2496,7 @@ > {"open", (PyCFunction)builtin_open, METH_VARARGS | METH_KEYWORDS, open_doc}, > {"ord", builtin_ord, METH_O, ord_doc}, > {"pow", builtin_pow, METH_VARARGS, pow_doc}, > + {"print", (PyCFunction)builtin_print, METH_VARARGS | METH_KEYWORDS, print_doc}, > {"range", builtin_range, METH_VARARGS, range_doc}, > {"raw_input", builtin_raw_input, METH_VARARGS, raw_input_doc}, > {"reduce", builtin_reduce, METH_VARARGS, reduce_doc}, > > Modified: python/trunk/Python/future.c > ============================================================================== > --- python/trunk/Python/future.c (original) > +++ python/trunk/Python/future.c Wed Mar 19 00:45:49 2008 > @@ -33,6 +33,8 @@ > ff->ff_features |= CO_FUTURE_ABSOLUTE_IMPORT; > } else if (strcmp(feature, FUTURE_WITH_STATEMENT) == 0) { > ff->ff_features |= CO_FUTURE_WITH_STATEMENT; > + } else if (strcmp(feature, FUTURE_PRINT_FUNCTION) == 0) { > + ff->ff_features |= CO_FUTURE_PRINT_FUNCTION; > } else if (strcmp(feature, "braces") == 0) { > PyErr_SetString(PyExc_SyntaxError, > "not a chance"); > > Modified: python/trunk/Python/pythonrun.c > ============================================================================== > --- python/trunk/Python/pythonrun.c (original) > +++ python/trunk/Python/pythonrun.c Wed Mar 19 00:45:49 2008 > @@ -738,18 +738,19 @@ > } > } > > +#if 0 > /* compute parser flags based on compiler flags */ > #define PARSER_FLAGS(flags) \ > ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? \ > PyPARSE_DONT_IMPLY_DEDENT : 0)) : 0) > - > -#if 0 > +#endif > +#if 1 > /* Keep an example of flags with future keyword support. */ > #define PARSER_FLAGS(flags) \ > ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? \ > PyPARSE_DONT_IMPLY_DEDENT : 0) \ > - | ((flags)->cf_flags & CO_FUTURE_WITH_STATEMENT ? \ > - PyPARSE_WITH_IS_KEYWORD : 0)) : 0) > + | ((flags)->cf_flags & CO_FUTURE_PRINT_FUNCTION ? \ > + PyPARSE_PRINT_IS_FUNCTION : 0)) : 0) > #endif > > int > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/eric%2Bpython-dev%40trueblade.com > From python-checkins at python.org Wed Mar 19 02:46:10 2008 From: python-checkins at python.org (gregory.p.smith) Date: Wed, 19 Mar 2008 02:46:10 +0100 (CET) Subject: [Python-checkins] r61582 - python/trunk/Lib/zipfile.py Message-ID: <20080319014610.7EC521E4011@bag.python.org> Author: gregory.p.smith Date: Wed Mar 19 02:46:10 2008 New Revision: 61582 Modified: python/trunk/Lib/zipfile.py Log: Use zlib's crc32 routine instead of binascii when available. zlib's is faster when compiled properly optimized and about the same speed otherwise. 
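The import-with-fallback pattern in this patch is handy on its own; a small sketch of the same idea outside zipfile (both functions take a string plus an optional running CRC and return the same CRC-32 value, so either can stand in for the other; the r61581 hashlib doc note above points people at the same zlib functions):

    # Prefer zlib's crc32 when the zlib module was built; binascii.crc32
    # computes the identical CRC-32 and serves as the fallback.
    try:
        from zlib import crc32
    except ImportError:
        from binascii import crc32

    crc = crc32("hello ")
    crc = crc32("world", crc)           # the CRC can be fed incrementally
    assert crc == crc32("hello world")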
Modified: python/trunk/Lib/zipfile.py ============================================================================== --- python/trunk/Lib/zipfile.py (original) +++ python/trunk/Lib/zipfile.py Wed Mar 19 02:46:10 2008 @@ -6,8 +6,10 @@ try: import zlib # We may need its compression method + crc32 = zlib.crc32 except ImportError: zlib = None + crc32 = binascii.crc32 __all__ = ["BadZipfile", "error", "ZIP_STORED", "ZIP_DEFLATED", "is_zipfile", "ZipInfo", "ZipFile", "PyZipFile", "LargeZipFile" ] @@ -940,7 +942,7 @@ if not buf: break file_size = file_size + len(buf) - CRC = binascii.crc32(buf, CRC) + CRC = crc32(buf, CRC) if cmpr: buf = cmpr.compress(buf) compress_size = compress_size + len(buf) @@ -983,7 +985,7 @@ zinfo.header_offset = self.fp.tell() # Start of header bytes self._writecheck(zinfo) self._didModify = True - zinfo.CRC = binascii.crc32(bytes) # CRC-32 checksum + zinfo.CRC = crc32(bytes) # CRC-32 checksum if zinfo.compress_type == ZIP_DEFLATED: co = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, -15) @@ -1041,7 +1043,7 @@ if extra: # Append a ZIP64 field to the extra's extra_data = struct.pack( - ' The Buildbot has detected a new failure of sparc solaris10 gcc trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/2983 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_compiler ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/pycodegen.py", line 64, in compile gen.compile() File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/future.py", line 59, in find_futures walk(node, p1) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/visitor.py", line 106, in walk walker.preorder(tree, visitor) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? 
File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/visitor.py", line 57, in dispatch return meth(node, *args) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/future.py", line 27, in visitModule if not self.check_stmt(s): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/compiler/future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined sincerely, -The Buildbot From buildbot at python.org Wed Mar 19 02:54:27 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 19 Mar 2008 01:54:27 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080319015427.ACA0C1E4003@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1019 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_compiler ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/pycodegen.py", line 64, in compile gen.compile() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/future.py", line 59, in find_futures walk(node, p1) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/visitor.py", line 106, in walk walker.preorder(tree, visitor) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/visitor.py", line 57, in dispatch return meth(node, *args) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/future.py", line 27, in visitModule if not self.check_stmt(s): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/compiler/future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 19 02:56:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 19 Mar 2008 01:56:22 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080319015622.70B6B1E4003@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/532 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_compiler ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_compiler.py", line 52, in testCompileLibrary compiler.compile(buf, basename, "exec") File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/pycodegen.py", line 64, in compile gen.compile() File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/pycodegen.py", line 1275, in __init__ self.futures = future.find_futures(tree) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/future.py", line 59, in find_futures walk(node, p1) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/visitor.py", line 106, in walk walker.preorder(tree, visitor) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/visitor.py", line 57, in dispatch return meth(node, *args) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/future.py", line 27, in visitModule if not self.check_stmt(s): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/compiler/future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature print_function is not defined make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Mar 19 03:00:38 2008 From: python-checkins at python.org (trent.nelson) Date: Wed, 19 Mar 2008 03:00:38 +0100 (CET) Subject: [Python-checkins] r61583 - external/sqlite-source-3.5.7.x external/sqlite-source-3.5.7.x/alter.c external/sqlite-source-3.5.7.x/analyze.c external/sqlite-source-3.5.7.x/attach.c external/sqlite-source-3.5.7.x/auth.c external/sqlite-source-3.5.7.x/bitvec.c external/sqlite-source-3.5.7.x/btmutex.c external/sqlite-source-3.5.7.x/btree.c external/sqlite-source-3.5.7.x/btree.h external/sqlite-source-3.5.7.x/btreeInt.h external/sqlite-source-3.5.7.x/build.c external/sqlite-source-3.5.7.x/callback.c external/sqlite-source-3.5.7.x/complete.c external/sqlite-source-3.5.7.x/config.h external/sqlite-source-3.5.7.x/date.c external/sqlite-source-3.5.7.x/delete.c external/sqlite-source-3.5.7.x/expr.c external/sqlite-source-3.5.7.x/fault.c external/sqlite-source-3.5.7.x/fts3.c external/sqlite-source-3.5.7.x/fts3.h external/sqlite-source-3.5.7.x/fts3_hash.c external/sqlite-source-3.5.7.x/fts3_hash.h external/sqlite-source-3.5.7.x/fts3_icu.c external/sqlite-source-3.5.7.x/fts3_porter.c external/sqlite-source-3.5.7.x/fts3_tokenizer.c external/sqlite-source-3.5.7.x/fts3_tokenizer.h external/sqlite-source-3.5.7.x/fts3_tokenizer1.c external/sqlite-source-3.5.7.x/func.c external/sqlite-source-3.5.7.x/hash.c external/sqlite-source-3.5.7.x/hash.h 
external/sqlite-source-3.5.7.x/insert.c external/sqlite-source-3.5.7.x/journal.c external/sqlite-source-3.5.7.x/keywordhash.h external/sqlite-source-3.5.7.x/legacy.c external/sqlite-source-3.5.7.x/loadext.c external/sqlite-source-3.5.7.x/main.c external/sqlite-source-3.5.7.x/malloc.c external/sqlite-source-3.5.7.x/mem1.c external/sqlite-source-3.5.7.x/mem2.c external/sqlite-source-3.5.7.x/mem3.c external/sqlite-source-3.5.7.x/mem4.c external/sqlite-source-3.5.7.x/mem5.c external/sqlite-source-3.5.7.x/mutex.c external/sqlite-source-3.5.7.x/mutex.h external/sqlite-source-3.5.7.x/mutex_os2.c external/sqlite-source-3.5.7.x/mutex_unix.c external/sqlite-source-3.5.7.x/mutex_w32.c external/sqlite-source-3.5.7.x/opcodes.c external/sqlite-source-3.5.7.x/opcodes.h external/sqlite-source-3.5.7.x/os.c external/sqlite-source-3.5.7.x/os.h external/sqlite-source-3.5.7.x/os_common.h external/sqlite-source-3.5.7.x/os_os2.c external/sqlite-source-3.5.7.x/os_unix.c external/sqlite-source-3.5.7.x/os_win.c external/sqlite-source-3.5.7.x/pager.c external/sqlite-source-3.5.7.x/pager.h external/sqlite-source-3.5.7.x/parse.c external/sqlite-source-3.5.7.x/parse.h external/sqlite-source-3.5.7.x/pragma.c external/sqlite-source-3.5.7.x/prepare.c external/sqlite-source-3.5.7.x/printf.c external/sqlite-source-3.5.7.x/random.c external/sqlite-source-3.5.7.x/select.c external/sqlite-source-3.5.7.x/shell.c external/sqlite-source-3.5.7.x/sqlite3.h external/sqlite-source-3.5.7.x/sqlite3ext.h external/sqlite-source-3.5.7.x/sqliteInt.h external/sqlite-source-3.5.7.x/sqliteLimit.h external/sqlite-source-3.5.7.x/table.c external/sqlite-source-3.5.7.x/tclsqlite.c external/sqlite-source-3.5.7.x/tokenize.c external/sqlite-source-3.5.7.x/trigger.c external/sqlite-source-3.5.7.x/update.c external/sqlite-source-3.5.7.x/utf.c external/sqlite-source-3.5.7.x/util.c external/sqlite-source-3.5.7.x/vacuum.c external/sqlite-source-3.5.7.x/vdbe.c external/sqlite-source-3.5.7.x/vdbe.h external/sqlite-source-3.5.7.x/vdbeInt.h external/sqlite-source-3.5.7.x/vdbeapi.c external/sqlite-source-3.5.7.x/vdbeaux.c external/sqlite-source-3.5.7.x/vdbeblob.c external/sqlite-source-3.5.7.x/vdbefifo.c external/sqlite-source-3.5.7.x/vdbemem.c external/sqlite-source-3.5.7.x/vtab.c external/sqlite-source-3.5.7.x/where.c Message-ID: <20080319020038.236661E4003@bag.python.org> Author: trent.nelson Date: Wed Mar 19 03:00:27 2008 New Revision: 61583 Added: external/sqlite-source-3.5.7.x/ external/sqlite-source-3.5.7.x/alter.c external/sqlite-source-3.5.7.x/analyze.c external/sqlite-source-3.5.7.x/attach.c external/sqlite-source-3.5.7.x/auth.c external/sqlite-source-3.5.7.x/bitvec.c external/sqlite-source-3.5.7.x/btmutex.c external/sqlite-source-3.5.7.x/btree.c external/sqlite-source-3.5.7.x/btree.h external/sqlite-source-3.5.7.x/btreeInt.h external/sqlite-source-3.5.7.x/build.c external/sqlite-source-3.5.7.x/callback.c external/sqlite-source-3.5.7.x/complete.c external/sqlite-source-3.5.7.x/config.h external/sqlite-source-3.5.7.x/date.c external/sqlite-source-3.5.7.x/delete.c external/sqlite-source-3.5.7.x/expr.c external/sqlite-source-3.5.7.x/fault.c external/sqlite-source-3.5.7.x/fts3.c external/sqlite-source-3.5.7.x/fts3.h external/sqlite-source-3.5.7.x/fts3_hash.c external/sqlite-source-3.5.7.x/fts3_hash.h external/sqlite-source-3.5.7.x/fts3_icu.c external/sqlite-source-3.5.7.x/fts3_porter.c external/sqlite-source-3.5.7.x/fts3_tokenizer.c external/sqlite-source-3.5.7.x/fts3_tokenizer.h external/sqlite-source-3.5.7.x/fts3_tokenizer1.c 
external/sqlite-source-3.5.7.x/func.c external/sqlite-source-3.5.7.x/hash.c external/sqlite-source-3.5.7.x/hash.h external/sqlite-source-3.5.7.x/insert.c external/sqlite-source-3.5.7.x/journal.c external/sqlite-source-3.5.7.x/keywordhash.h external/sqlite-source-3.5.7.x/legacy.c external/sqlite-source-3.5.7.x/loadext.c external/sqlite-source-3.5.7.x/main.c external/sqlite-source-3.5.7.x/malloc.c external/sqlite-source-3.5.7.x/mem1.c external/sqlite-source-3.5.7.x/mem2.c external/sqlite-source-3.5.7.x/mem3.c external/sqlite-source-3.5.7.x/mem4.c external/sqlite-source-3.5.7.x/mem5.c external/sqlite-source-3.5.7.x/mutex.c external/sqlite-source-3.5.7.x/mutex.h external/sqlite-source-3.5.7.x/mutex_os2.c external/sqlite-source-3.5.7.x/mutex_unix.c external/sqlite-source-3.5.7.x/mutex_w32.c external/sqlite-source-3.5.7.x/opcodes.c external/sqlite-source-3.5.7.x/opcodes.h external/sqlite-source-3.5.7.x/os.c external/sqlite-source-3.5.7.x/os.h external/sqlite-source-3.5.7.x/os_common.h external/sqlite-source-3.5.7.x/os_os2.c external/sqlite-source-3.5.7.x/os_unix.c external/sqlite-source-3.5.7.x/os_win.c external/sqlite-source-3.5.7.x/pager.c external/sqlite-source-3.5.7.x/pager.h external/sqlite-source-3.5.7.x/parse.c external/sqlite-source-3.5.7.x/parse.h external/sqlite-source-3.5.7.x/pragma.c external/sqlite-source-3.5.7.x/prepare.c external/sqlite-source-3.5.7.x/printf.c external/sqlite-source-3.5.7.x/random.c external/sqlite-source-3.5.7.x/select.c external/sqlite-source-3.5.7.x/shell.c external/sqlite-source-3.5.7.x/sqlite3.h external/sqlite-source-3.5.7.x/sqlite3ext.h external/sqlite-source-3.5.7.x/sqliteInt.h external/sqlite-source-3.5.7.x/sqliteLimit.h external/sqlite-source-3.5.7.x/table.c external/sqlite-source-3.5.7.x/tclsqlite.c external/sqlite-source-3.5.7.x/tokenize.c external/sqlite-source-3.5.7.x/trigger.c external/sqlite-source-3.5.7.x/update.c external/sqlite-source-3.5.7.x/utf.c external/sqlite-source-3.5.7.x/util.c external/sqlite-source-3.5.7.x/vacuum.c external/sqlite-source-3.5.7.x/vdbe.c external/sqlite-source-3.5.7.x/vdbe.h external/sqlite-source-3.5.7.x/vdbeInt.h external/sqlite-source-3.5.7.x/vdbeapi.c external/sqlite-source-3.5.7.x/vdbeaux.c external/sqlite-source-3.5.7.x/vdbeblob.c external/sqlite-source-3.5.7.x/vdbefifo.c external/sqlite-source-3.5.7.x/vdbemem.c external/sqlite-source-3.5.7.x/vtab.c external/sqlite-source-3.5.7.x/where.c Log: Initial import of sqlite-source-3.5.7. Added: external/sqlite-source-3.5.7.x/alter.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/alter.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,630 @@ +/* +** 2005 February 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains C code routines that used to generate VDBE code +** that implements the ALTER TABLE command. +** +** $Id: alter.c,v 1.42 2008/02/09 14:30:30 drh Exp $ +*/ +#include "sqliteInt.h" +#include + +/* +** The code in this file only exists if we are not omitting the +** ALTER TABLE logic from the build. +*/ +#ifndef SQLITE_OMIT_ALTERTABLE + + +/* +** This function is used by SQL generated to implement the +** ALTER TABLE command. 
The first argument is the text of a CREATE TABLE or +** CREATE INDEX command. The second is a table name. The table name in +** the CREATE TABLE or CREATE INDEX statement is replaced with the third +** argument and the result returned. Examples: +** +** sqlite_rename_table('CREATE TABLE abc(a, b, c)', 'def') +** -> 'CREATE TABLE def(a, b, c)' +** +** sqlite_rename_table('CREATE INDEX i ON abc(a)', 'def') +** -> 'CREATE INDEX i ON def(a, b, c)' +*/ +static void renameTableFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + unsigned char const *zSql = sqlite3_value_text(argv[0]); + unsigned char const *zTableName = sqlite3_value_text(argv[1]); + + int token; + Token tname; + unsigned char const *zCsr = zSql; + int len = 0; + char *zRet; + + sqlite3 *db = sqlite3_user_data(context); + + /* The principle used to locate the table name in the CREATE TABLE + ** statement is that the table name is the first token that is immediatedly + ** followed by a left parenthesis - TK_LP - or "USING" TK_USING. + */ + if( zSql ){ + do { + if( !*zCsr ){ + /* Ran out of input before finding an opening bracket. Return NULL. */ + return; + } + + /* Store the token that zCsr points to in tname. */ + tname.z = zCsr; + tname.n = len; + + /* Advance zCsr to the next token. Store that token type in 'token', + ** and its length in 'len' (to be used next iteration of this loop). + */ + do { + zCsr += len; + len = sqlite3GetToken(zCsr, &token); + } while( token==TK_SPACE ); + assert( len>0 ); + } while( token!=TK_LP && token!=TK_USING ); + + zRet = sqlite3MPrintf(db, "%.*s\"%w\"%s", tname.z - zSql, zSql, + zTableName, tname.z+tname.n); + sqlite3_result_text(context, zRet, -1, sqlite3_free); + } +} + +#ifndef SQLITE_OMIT_TRIGGER +/* This function is used by SQL generated to implement the +** ALTER TABLE command. The first argument is the text of a CREATE TRIGGER +** statement. The second is a table name. The table name in the CREATE +** TRIGGER statement is replaced with the third argument and the result +** returned. This is analagous to renameTableFunc() above, except for CREATE +** TRIGGER, not CREATE INDEX and CREATE TABLE. +*/ +static void renameTriggerFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + unsigned char const *zSql = sqlite3_value_text(argv[0]); + unsigned char const *zTableName = sqlite3_value_text(argv[1]); + + int token; + Token tname; + int dist = 3; + unsigned char const *zCsr = zSql; + int len = 0; + char *zRet; + + sqlite3 *db = sqlite3_user_data(context); + + /* The principle used to locate the table name in the CREATE TRIGGER + ** statement is that the table name is the first token that is immediatedly + ** preceded by either TK_ON or TK_DOT and immediatedly followed by one + ** of TK_WHEN, TK_BEGIN or TK_FOR. + */ + if( zSql ){ + do { + + if( !*zCsr ){ + /* Ran out of input before finding the table name. Return NULL. */ + return; + } + + /* Store the token that zCsr points to in tname. */ + tname.z = zCsr; + tname.n = len; + + /* Advance zCsr to the next token. Store that token type in 'token', + ** and its length in 'len' (to be used next iteration of this loop). + */ + do { + zCsr += len; + len = sqlite3GetToken(zCsr, &token); + }while( token==TK_SPACE ); + assert( len>0 ); + + /* Variable 'dist' stores the number of tokens read since the most + ** recent TK_DOT or TK_ON. This means that when a WHEN, FOR or BEGIN + ** token is read and 'dist' equals 2, the condition stated above + ** to be met. 
+ ** + ** Note that ON cannot be a database, table or column name, so + ** there is no need to worry about syntax like + ** "CREATE TRIGGER ... ON ON.ON BEGIN ..." etc. + */ + dist++; + if( token==TK_DOT || token==TK_ON ){ + dist = 0; + } + } while( dist!=2 || (token!=TK_WHEN && token!=TK_FOR && token!=TK_BEGIN) ); + + /* Variable tname now contains the token that is the old table-name + ** in the CREATE TRIGGER statement. + */ + zRet = sqlite3MPrintf(db, "%.*s\"%w\"%s", tname.z - zSql, zSql, + zTableName, tname.z+tname.n); + sqlite3_result_text(context, zRet, -1, sqlite3_free); + } +} +#endif /* !SQLITE_OMIT_TRIGGER */ + +/* +** Register built-in functions used to help implement ALTER TABLE +*/ +void sqlite3AlterFunctions(sqlite3 *db){ + static const struct { + char *zName; + signed char nArg; + void (*xFunc)(sqlite3_context*,int,sqlite3_value **); + } aFuncs[] = { + { "sqlite_rename_table", 2, renameTableFunc}, +#ifndef SQLITE_OMIT_TRIGGER + { "sqlite_rename_trigger", 2, renameTriggerFunc}, +#endif + }; + int i; + + for(i=0; idb->aDb[1].pSchema; /* Temp db schema */ + + /* If the table is not located in the temp-db (in which case NULL is + ** returned, loop through the tables list of triggers. For each trigger + ** that is not part of the temp-db schema, add a clause to the WHERE + ** expression being built up in zWhere. + */ + if( pTab->pSchema!=pTempSchema ){ + sqlite3 *db = pParse->db; + for( pTrig=pTab->pTrigger; pTrig; pTrig=pTrig->pNext ){ + if( pTrig->pSchema==pTempSchema ){ + if( !zWhere ){ + zWhere = sqlite3MPrintf(db, "name=%Q", pTrig->name); + }else{ + tmp = zWhere; + zWhere = sqlite3MPrintf(db, "%s OR name=%Q", zWhere, pTrig->name); + sqlite3_free(tmp); + } + } + } + } + return zWhere; +} + +/* +** Generate code to drop and reload the internal representation of table +** pTab from the database, including triggers and temporary triggers. +** Argument zName is the name of the table in the database schema at +** the time the generated code is executed. This can be different from +** pTab->zName if this function is being called to code part of an +** "ALTER TABLE RENAME TO" statement. +*/ +static void reloadTableSchema(Parse *pParse, Table *pTab, const char *zName){ + Vdbe *v; + char *zWhere; + int iDb; /* Index of database containing pTab */ +#ifndef SQLITE_OMIT_TRIGGER + Trigger *pTrig; +#endif + + v = sqlite3GetVdbe(pParse); + if( !v ) return; + assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); + iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + assert( iDb>=0 ); + +#ifndef SQLITE_OMIT_TRIGGER + /* Drop any table triggers from the internal schema. */ + for(pTrig=pTab->pTrigger; pTrig; pTrig=pTrig->pNext){ + int iTrigDb = sqlite3SchemaToIndex(pParse->db, pTrig->pSchema); + assert( iTrigDb==iDb || iTrigDb==1 ); + sqlite3VdbeAddOp4(v, OP_DropTrigger, iTrigDb, 0, 0, pTrig->name, 0); + } +#endif + + /* Drop the table and index from the internal schema */ + sqlite3VdbeAddOp4(v, OP_DropTable, iDb, 0, 0, pTab->zName, 0); + + /* Reload the table, index and permanent trigger schemas. */ + zWhere = sqlite3MPrintf(pParse->db, "tbl_name=%Q", zName); + if( !zWhere ) return; + sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, zWhere, P4_DYNAMIC); + +#ifndef SQLITE_OMIT_TRIGGER + /* Now, if the table is not stored in the temp database, reload any temp + ** triggers. Don't use IN(...) in case SQLITE_OMIT_SUBQUERY is defined. 
+ */ + if( (zWhere=whereTempTriggers(pParse, pTab))!=0 ){ + sqlite3VdbeAddOp4(v, OP_ParseSchema, 1, 0, 0, zWhere, P4_DYNAMIC); + } +#endif +} + +/* +** Generate code to implement the "ALTER TABLE xxx RENAME TO yyy" +** command. +*/ +void sqlite3AlterRenameTable( + Parse *pParse, /* Parser context. */ + SrcList *pSrc, /* The table to rename. */ + Token *pName /* The new table name. */ +){ + int iDb; /* Database that contains the table */ + char *zDb; /* Name of database iDb */ + Table *pTab; /* Table being renamed */ + char *zName = 0; /* NULL-terminated version of pName */ + sqlite3 *db = pParse->db; /* Database connection */ + int nTabName; /* Number of UTF-8 characters in zTabName */ + const char *zTabName; /* Original name of the table */ + Vdbe *v; +#ifndef SQLITE_OMIT_TRIGGER + char *zWhere = 0; /* Where clause to locate temp triggers */ +#endif + int isVirtualRename = 0; /* True if this is a v-table with an xRename() */ + + if( db->mallocFailed ) goto exit_rename_table; + assert( pSrc->nSrc==1 ); + assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); + + pTab = sqlite3LocateTable(pParse, 0, pSrc->a[0].zName, pSrc->a[0].zDatabase); + if( !pTab ) goto exit_rename_table; + iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + zDb = db->aDb[iDb].zName; + + /* Get a NULL terminated version of the new table name. */ + zName = sqlite3NameFromToken(db, pName); + if( !zName ) goto exit_rename_table; + + /* Check that a table or index named 'zName' does not already exist + ** in database iDb. If so, this is an error. + */ + if( sqlite3FindTable(db, zName, zDb) || sqlite3FindIndex(db, zName, zDb) ){ + sqlite3ErrorMsg(pParse, + "there is already another table or index with this name: %s", zName); + goto exit_rename_table; + } + + /* Make sure it is not a system table being altered, or a reserved name + ** that the table is being renamed to. + */ + if( strlen(pTab->zName)>6 && 0==sqlite3StrNICmp(pTab->zName, "sqlite_", 7) ){ + sqlite3ErrorMsg(pParse, "table %s may not be altered", pTab->zName); + goto exit_rename_table; + } + if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ + goto exit_rename_table; + } + +#ifndef SQLITE_OMIT_VIEW + if( pTab->pSelect ){ + sqlite3ErrorMsg(pParse, "view %s may not be altered", pTab->zName); + goto exit_rename_table; + } +#endif + +#ifndef SQLITE_OMIT_AUTHORIZATION + /* Invoke the authorization callback. */ + if( sqlite3AuthCheck(pParse, SQLITE_ALTER_TABLE, zDb, pTab->zName, 0) ){ + goto exit_rename_table; + } +#endif + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( sqlite3ViewGetColumnNames(pParse, pTab) ){ + goto exit_rename_table; + } + if( IsVirtual(pTab) && pTab->pMod->pModule->xRename ){ + isVirtualRename = 1; + } +#endif + + /* Begin a transaction and code the VerifyCookie for database iDb. + ** Then modify the schema cookie (since the ALTER TABLE modifies the + ** schema). Open a statement transaction if the table is a virtual + ** table. + */ + v = sqlite3GetVdbe(pParse); + if( v==0 ){ + goto exit_rename_table; + } + sqlite3BeginWriteOperation(pParse, isVirtualRename, iDb); + sqlite3ChangeCookie(pParse, iDb); + + /* If this is a virtual table, invoke the xRename() function if + ** one is defined. The xRename() callback will modify the names + ** of any resources used by the v-table implementation (including other + ** SQLite tables) that are identified by the name of the virtual table. 
+ */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( isVirtualRename ){ + int i = ++pParse->nMem; + sqlite3VdbeAddOp4(v, OP_String8, 0, i, 0, zName, 0); + sqlite3VdbeAddOp4(v, OP_VRename, i, 0, 0,(const char*)pTab->pVtab, P4_VTAB); + } +#endif + + /* figure out how many UTF-8 characters are in zName */ + zTabName = pTab->zName; + nTabName = sqlite3Utf8CharLen(zTabName, -1); + + /* Modify the sqlite_master table to use the new table name. */ + sqlite3NestedParse(pParse, + "UPDATE %Q.%s SET " +#ifdef SQLITE_OMIT_TRIGGER + "sql = sqlite_rename_table(sql, %Q), " +#else + "sql = CASE " + "WHEN type = 'trigger' THEN sqlite_rename_trigger(sql, %Q)" + "ELSE sqlite_rename_table(sql, %Q) END, " +#endif + "tbl_name = %Q, " + "name = CASE " + "WHEN type='table' THEN %Q " + "WHEN name LIKE 'sqlite_autoindex%%' AND type='index' THEN " + "'sqlite_autoindex_' || %Q || substr(name,%d+18) " + "ELSE name END " + "WHERE tbl_name=%Q AND " + "(type='table' OR type='index' OR type='trigger');", + zDb, SCHEMA_TABLE(iDb), zName, zName, zName, +#ifndef SQLITE_OMIT_TRIGGER + zName, +#endif + zName, nTabName, zTabName + ); + +#ifndef SQLITE_OMIT_AUTOINCREMENT + /* If the sqlite_sequence table exists in this database, then update + ** it with the new table name. + */ + if( sqlite3FindTable(db, "sqlite_sequence", zDb) ){ + sqlite3NestedParse(pParse, + "UPDATE \"%w\".sqlite_sequence set name = %Q WHERE name = %Q", + zDb, zName, pTab->zName); + } +#endif + +#ifndef SQLITE_OMIT_TRIGGER + /* If there are TEMP triggers on this table, modify the sqlite_temp_master + ** table. Don't do this if the table being ALTERed is itself located in + ** the temp database. + */ + if( (zWhere=whereTempTriggers(pParse, pTab))!=0 ){ + sqlite3NestedParse(pParse, + "UPDATE sqlite_temp_master SET " + "sql = sqlite_rename_trigger(sql, %Q), " + "tbl_name = %Q " + "WHERE %s;", zName, zName, zWhere); + sqlite3_free(zWhere); + } +#endif + + /* Drop and reload the internal table schema. */ + reloadTableSchema(pParse, pTab, zName); + +exit_rename_table: + sqlite3SrcListDelete(pSrc); + sqlite3_free(zName); +} + + +/* +** This function is called after an "ALTER TABLE ... ADD" statement +** has been parsed. Argument pColDef contains the text of the new +** column definition. +** +** The Table structure pParse->pNewTable was extended to include +** the new column during parsing. +*/ +void sqlite3AlterFinishAddColumn(Parse *pParse, Token *pColDef){ + Table *pNew; /* Copy of pParse->pNewTable */ + Table *pTab; /* Table being altered */ + int iDb; /* Database number */ + const char *zDb; /* Database name */ + const char *zTab; /* Table name */ + char *zCol; /* Null-terminated column definition */ + Column *pCol; /* The new column */ + Expr *pDflt; /* Default value for the new column */ + sqlite3 *db; /* The database connection; */ + + if( pParse->nErr ) return; + pNew = pParse->pNewTable; + assert( pNew ); + + db = pParse->db; + assert( sqlite3BtreeHoldsAllMutexes(db) ); + iDb = sqlite3SchemaToIndex(db, pNew->pSchema); + zDb = db->aDb[iDb].zName; + zTab = pNew->zName; + pCol = &pNew->aCol[pNew->nCol-1]; + pDflt = pCol->pDflt; + pTab = sqlite3FindTable(db, zTab, zDb); + assert( pTab ); + +#ifndef SQLITE_OMIT_AUTHORIZATION + /* Invoke the authorization callback. */ + if( sqlite3AuthCheck(pParse, SQLITE_ALTER_TABLE, zDb, pTab->zName, 0) ){ + return; + } +#endif + + /* If the default value for the new column was specified with a + ** literal NULL, then set pDflt to 0. This simplifies checking + ** for an SQL NULL default below. 
+ */ + if( pDflt && pDflt->op==TK_NULL ){ + pDflt = 0; + } + + /* Check that the new column is not specified as PRIMARY KEY or UNIQUE. + ** If there is a NOT NULL constraint, then the default value for the + ** column must not be NULL. + */ + if( pCol->isPrimKey ){ + sqlite3ErrorMsg(pParse, "Cannot add a PRIMARY KEY column"); + return; + } + if( pNew->pIndex ){ + sqlite3ErrorMsg(pParse, "Cannot add a UNIQUE column"); + return; + } + if( pCol->notNull && !pDflt ){ + sqlite3ErrorMsg(pParse, + "Cannot add a NOT NULL column with default value NULL"); + return; + } + + /* Ensure the default expression is something that sqlite3ValueFromExpr() + ** can handle (i.e. not CURRENT_TIME etc.) + */ + if( pDflt ){ + sqlite3_value *pVal; + if( sqlite3ValueFromExpr(db, pDflt, SQLITE_UTF8, SQLITE_AFF_NONE, &pVal) ){ + db->mallocFailed = 1; + return; + } + if( !pVal ){ + sqlite3ErrorMsg(pParse, "Cannot add a column with non-constant default"); + return; + } + sqlite3ValueFree(pVal); + } + + /* Modify the CREATE TABLE statement. */ + zCol = sqlite3DbStrNDup(db, (char*)pColDef->z, pColDef->n); + if( zCol ){ + char *zEnd = &zCol[pColDef->n-1]; + while( (zEnd>zCol && *zEnd==';') || isspace(*(unsigned char *)zEnd) ){ + *zEnd-- = '\0'; + } + sqlite3NestedParse(pParse, + "UPDATE \"%w\".%s SET " + "sql = substr(sql,1,%d) || ', ' || %Q || substr(sql,%d) " + "WHERE type = 'table' AND name = %Q", + zDb, SCHEMA_TABLE(iDb), pNew->addColOffset, zCol, pNew->addColOffset+1, + zTab + ); + sqlite3_free(zCol); + } + + /* If the default value of the new column is NULL, then set the file + ** format to 2. If the default value of the new column is not NULL, + ** the file format becomes 3. + */ + sqlite3MinimumFileFormat(pParse, iDb, pDflt ? 3 : 2); + + /* Reload the schema of the modified table. */ + reloadTableSchema(pParse, pTab, pTab->zName); +} + +/* +** This function is called by the parser after the table-name in +** an "ALTER TABLE ADD" statement is parsed. Argument +** pSrc is the full-name of the table being altered. +** +** This routine makes a (partial) copy of the Table structure +** for the table being altered and sets Parse.pNewTable to point +** to it. Routines called by the parser as the column definition +** is parsed (i.e. sqlite3AddColumn()) add the new Column data to +** the copy. The copy of the Table structure is deleted by tokenize.c +** after parsing is finished. +** +** Routine sqlite3AlterFinishAddColumn() will be called to complete +** coding the "ALTER TABLE ... ADD" statement. +*/ +void sqlite3AlterBeginAddColumn(Parse *pParse, SrcList *pSrc){ + Table *pNew; + Table *pTab; + Vdbe *v; + int iDb; + int i; + int nAlloc; + sqlite3 *db = pParse->db; + + /* Look up the table being altered. */ + assert( pParse->pNewTable==0 ); + assert( sqlite3BtreeHoldsAllMutexes(db) ); + if( db->mallocFailed ) goto exit_begin_add_column; + pTab = sqlite3LocateTable(pParse, 0, pSrc->a[0].zName, pSrc->a[0].zDatabase); + if( !pTab ) goto exit_begin_add_column; + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pTab) ){ + sqlite3ErrorMsg(pParse, "virtual tables may not be altered"); + goto exit_begin_add_column; + } +#endif + + /* Make sure this is not an attempt to ALTER a view. */ + if( pTab->pSelect ){ + sqlite3ErrorMsg(pParse, "Cannot add a column to a view"); + goto exit_begin_add_column; + } + + assert( pTab->addColOffset>0 ); + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + + /* Put a copy of the Table struct in Parse.pNewTable for the + ** sqlite3AddColumn() function and friends to modify. 
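The checks in sqlite3AlterFinishAddColumn() above determine which ADD COLUMN statements are accepted. A brief sketch of the resulting behaviour, assuming an open connection db and an existing table t1; the column names are illustrative.

    #include <sqlite3.h>

    /* Each statement below is vetted by sqlite3AlterFinishAddColumn(). */
    static void add_column_examples(sqlite3 *db){
      /* Accepted: plain column with a constant default. */
      sqlite3_exec(db, "ALTER TABLE t1 ADD COLUMN c1 TEXT DEFAULT 'none'", 0, 0, 0);

      /* Rejected: "Cannot add a PRIMARY KEY column". */
      sqlite3_exec(db, "ALTER TABLE t1 ADD COLUMN c2 INTEGER PRIMARY KEY", 0, 0, 0);

      /* Rejected: "Cannot add a NOT NULL column with default value NULL". */
      sqlite3_exec(db, "ALTER TABLE t1 ADD COLUMN c3 TEXT NOT NULL", 0, 0, 0);

      /* Rejected: "Cannot add a column with non-constant default". */
      sqlite3_exec(db, "ALTER TABLE t1 ADD COLUMN c4 TEXT DEFAULT CURRENT_TIME", 0, 0, 0);
    }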
+ */ + pNew = (Table*)sqlite3DbMallocZero(db, sizeof(Table)); + if( !pNew ) goto exit_begin_add_column; + pParse->pNewTable = pNew; + pNew->nRef = 1; + pNew->nCol = pTab->nCol; + assert( pNew->nCol>0 ); + nAlloc = (((pNew->nCol-1)/8)*8)+8; + assert( nAlloc>=pNew->nCol && nAlloc%8==0 && nAlloc-pNew->nCol<8 ); + pNew->aCol = (Column*)sqlite3DbMallocZero(db, sizeof(Column)*nAlloc); + pNew->zName = sqlite3DbStrDup(db, pTab->zName); + if( !pNew->aCol || !pNew->zName ){ + db->mallocFailed = 1; + goto exit_begin_add_column; + } + memcpy(pNew->aCol, pTab->aCol, sizeof(Column)*pNew->nCol); + for(i=0; inCol; i++){ + Column *pCol = &pNew->aCol[i]; + pCol->zName = sqlite3DbStrDup(db, pCol->zName); + pCol->zColl = 0; + pCol->zType = 0; + pCol->pDflt = 0; + } + pNew->pSchema = db->aDb[iDb].pSchema; + pNew->addColOffset = pTab->addColOffset; + pNew->nRef = 1; + + /* Begin a transaction and increment the schema cookie. */ + sqlite3BeginWriteOperation(pParse, 0, iDb); + v = sqlite3GetVdbe(pParse); + if( !v ) goto exit_begin_add_column; + sqlite3ChangeCookie(pParse, iDb); + +exit_begin_add_column: + sqlite3SrcListDelete(pSrc); + return; +} +#endif /* SQLITE_ALTER_TABLE */ Added: external/sqlite-source-3.5.7.x/analyze.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/analyze.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,425 @@ +/* +** 2005 July 8 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code associated with the ANALYZE command. +** +** @(#) $Id: analyze.c,v 1.41 2008/01/25 15:04:49 drh Exp $ +*/ +#ifndef SQLITE_OMIT_ANALYZE +#include "sqliteInt.h" + +/* +** This routine generates code that opens the sqlite_stat1 table on cursor +** iStatCur. +** +** If the sqlite_stat1 tables does not previously exist, it is created. +** If it does previously exist, all entires associated with table zWhere +** are removed. If zWhere==0 then all entries are removed. +*/ +static void openStatTable( + Parse *pParse, /* Parsing context */ + int iDb, /* The database we are looking in */ + int iStatCur, /* Open the sqlite_stat1 table on this cursor */ + const char *zWhere /* Delete entries associated with this table */ +){ + sqlite3 *db = pParse->db; + Db *pDb; + int iRootPage; + int createStat1 = 0; + Table *pStat; + Vdbe *v = sqlite3GetVdbe(pParse); + + if( v==0 ) return; + assert( sqlite3BtreeHoldsAllMutexes(db) ); + assert( sqlite3VdbeDb(v)==db ); + pDb = &db->aDb[iDb]; + if( (pStat = sqlite3FindTable(db, "sqlite_stat1", pDb->zName))==0 ){ + /* The sqlite_stat1 tables does not exist. Create it. + ** Note that a side-effect of the CREATE TABLE statement is to leave + ** the rootpage of the new table in register pParse->regRoot. This is + ** important because the OpenWrite opcode below will be needing it. */ + sqlite3NestedParse(pParse, + "CREATE TABLE %Q.sqlite_stat1(tbl,idx,stat)", + pDb->zName + ); + iRootPage = pParse->regRoot; + createStat1 = 1; /* Cause rootpage to be taken from top of stack */ + }else if( zWhere ){ + /* The sqlite_stat1 table exists. Delete all entries associated with + ** the table zWhere. 
*/ + sqlite3NestedParse(pParse, + "DELETE FROM %Q.sqlite_stat1 WHERE tbl=%Q", + pDb->zName, zWhere + ); + iRootPage = pStat->tnum; + }else{ + /* The sqlite_stat1 table already exists. Delete all rows. */ + iRootPage = pStat->tnum; + sqlite3VdbeAddOp2(v, OP_Clear, pStat->tnum, iDb); + } + + /* Open the sqlite_stat1 table for writing. Unless it was created + ** by this vdbe program, lock it for writing at the shared-cache level. + ** If this vdbe did create the sqlite_stat1 table, then it must have + ** already obtained a schema-lock, making the write-lock redundant. + */ + if( !createStat1 ){ + sqlite3TableLock(pParse, iDb, iRootPage, 1, "sqlite_stat1"); + } + sqlite3VdbeAddOp3(v, OP_OpenWrite, iStatCur, iRootPage, iDb); + sqlite3VdbeChangeP5(v, createStat1); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, iStatCur, 3); +} + +/* +** Generate code to do an analysis of all indices associated with +** a single table. +*/ +static void analyzeOneTable( + Parse *pParse, /* Parser context */ + Table *pTab, /* Table whose indices are to be analyzed */ + int iStatCur, /* Cursor that writes to the sqlite_stat1 table */ + int iMem /* Available memory locations begin here */ +){ + Index *pIdx; /* An index to being analyzed */ + int iIdxCur; /* Cursor number for index being analyzed */ + int nCol; /* Number of columns in the index */ + Vdbe *v; /* The virtual machine being built up */ + int i; /* Loop counter */ + int topOfLoop; /* The top of the loop */ + int endOfLoop; /* The end of the loop */ + int addr; /* The address of an instruction */ + int iDb; /* Index of database containing pTab */ + + v = sqlite3GetVdbe(pParse); + if( v==0 || pTab==0 || pTab->pIndex==0 ){ + /* Do no analysis for tables that have no indices */ + return; + } + assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); + iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + assert( iDb>=0 ); +#ifndef SQLITE_OMIT_AUTHORIZATION + if( sqlite3AuthCheck(pParse, SQLITE_ANALYZE, pTab->zName, 0, + pParse->db->aDb[iDb].zName ) ){ + return; + } +#endif + + /* Establish a read-lock on the table at the shared-cache level. */ + sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName); + + iIdxCur = pParse->nTab; + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx); + int regFields; /* Register block for building records */ + int regRec; /* Register holding completed record */ + int regTemp; /* Temporary use register */ + int regCol; /* Content of a column from the table being analyzed */ + int regRowid; /* Rowid for the inserted record */ + int regF2; + + /* Open a cursor to the index to be analyzed + */ + assert( iDb==sqlite3SchemaToIndex(pParse->db, pIdx->pSchema) ); + sqlite3VdbeAddOp4(v, OP_OpenRead, iIdxCur, pIdx->tnum, iDb, + (char *)pKey, P4_KEYINFO_HANDOFF); + VdbeComment((v, "%s", pIdx->zName)); + nCol = pIdx->nColumn; + regFields = iMem+nCol*2; + regTemp = regRowid = regCol = regFields+3; + regRec = regCol+1; + if( regRec>pParse->nMem ){ + pParse->nMem = regRec; + } + sqlite3VdbeAddOp2(v, OP_SetNumColumns, iIdxCur, nCol+1); + + /* Memory cells are used as follows: + ** + ** mem[iMem]: The total number of rows in the table. + ** mem[iMem+1]: Number of distinct values in column 1 + ** ... + ** mem[iMem+nCol]: Number of distinct values in column N + ** mem[iMem+nCol+1] Last observed value of column 1 + ** ... + ** mem[iMem+nCol+nCol]: Last observed value of column N + ** + ** Cells iMem through iMem+nCol are initialized to 0. The others + ** are initialized to NULL. 
+ */ + for(i=0; i<=nCol; i++){ + sqlite3VdbeAddOp2(v, OP_Integer, 0, iMem+i); + } + for(i=0; i0 then it is always the case the D>0 so division by zero + ** is never possible. + */ + addr = sqlite3VdbeAddOp1(v, OP_IfNot, iMem); + sqlite3VdbeAddOp4(v, OP_String8, 0, regFields, 0, pTab->zName, 0); + sqlite3VdbeAddOp4(v, OP_String8, 0, regFields+1, 0, pIdx->zName, 0); + regF2 = regFields+2; + sqlite3VdbeAddOp2(v, OP_SCopy, iMem, regF2); + for(i=0; idb; + Schema *pSchema = db->aDb[iDb].pSchema; /* Schema of database iDb */ + HashElem *k; + int iStatCur; + int iMem; + + sqlite3BeginWriteOperation(pParse, 0, iDb); + iStatCur = pParse->nTab++; + openStatTable(pParse, iDb, iStatCur, 0); + iMem = pParse->nMem+1; + for(k=sqliteHashFirst(&pSchema->tblHash); k; k=sqliteHashNext(k)){ + Table *pTab = (Table*)sqliteHashData(k); + analyzeOneTable(pParse, pTab, iStatCur, iMem); + } + loadAnalysis(pParse, iDb); +} + +/* +** Generate code that will do an analysis of a single table in +** a database. +*/ +static void analyzeTable(Parse *pParse, Table *pTab){ + int iDb; + int iStatCur; + + assert( pTab!=0 ); + assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); + iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + sqlite3BeginWriteOperation(pParse, 0, iDb); + iStatCur = pParse->nTab++; + openStatTable(pParse, iDb, iStatCur, pTab->zName); + analyzeOneTable(pParse, pTab, iStatCur, pParse->nMem+1); + loadAnalysis(pParse, iDb); +} + +/* +** Generate code for the ANALYZE command. The parser calls this routine +** when it recognizes an ANALYZE command. +** +** ANALYZE -- 1 +** ANALYZE -- 2 +** ANALYZE ?.? -- 3 +** +** Form 1 causes all indices in all attached databases to be analyzed. +** Form 2 analyzes all indices the single database named. +** Form 3 analyzes all indices associated with the named table. +*/ +void sqlite3Analyze(Parse *pParse, Token *pName1, Token *pName2){ + sqlite3 *db = pParse->db; + int iDb; + int i; + char *z, *zDb; + Table *pTab; + Token *pTableName; + + /* Read the database schema. If an error occurs, leave an error message + ** and code in pParse and return NULL. */ + assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ + return; + } + + if( pName1==0 ){ + /* Form 1: Analyze everything */ + for(i=0; inDb; i++){ + if( i==1 ) continue; /* Do not analyze the TEMP database */ + analyzeDatabase(pParse, i); + } + }else if( pName2==0 || pName2->n==0 ){ + /* Form 2: Analyze the database or table named */ + iDb = sqlite3FindDb(db, pName1); + if( iDb>=0 ){ + analyzeDatabase(pParse, iDb); + }else{ + z = sqlite3NameFromToken(db, pName1); + if( z ){ + pTab = sqlite3LocateTable(pParse, 0, z, 0); + sqlite3_free(z); + if( pTab ){ + analyzeTable(pParse, pTab); + } + } + } + }else{ + /* Form 3: Analyze the fully qualified table name */ + iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pTableName); + if( iDb>=0 ){ + zDb = db->aDb[iDb].zName; + z = sqlite3NameFromToken(db, pTableName); + if( z ){ + pTab = sqlite3LocateTable(pParse, 0, z, zDb); + sqlite3_free(z); + if( pTab ){ + analyzeTable(pParse, pTab); + } + } + } + } +} + +/* +** Used to pass information from the analyzer reader through to the +** callback routine. +*/ +typedef struct analysisInfo analysisInfo; +struct analysisInfo { + sqlite3 *db; + const char *zDatabase; +}; + +/* +** This callback is invoked once for each index when reading the +** sqlite_stat1 table. 
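The three ANALYZE forms handled by sqlite3Analyze() above all end up writing rows of the shape (tbl, idx, stat) into sqlite_stat1. A minimal application-side sketch; the attached database aux and the table t1 are assumptions.

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print one sqlite_stat1 row; the columns are tbl, idx, stat as created
    ** by openStatTable(). */
    static int print_stat1_row(void *pNotUsed, int nCol, char **azVal, char **azCol){
      printf("%s  %s  %s\n", azVal[0],
             azVal[1] ? azVal[1] : "NULL",
             azVal[2] ? azVal[2] : "NULL");
      return 0;
    }

    static void analyze_examples(sqlite3 *db){
      sqlite3_exec(db, "ANALYZE", 0, 0, 0);          /* form 1: everything   */
      sqlite3_exec(db, "ANALYZE aux", 0, 0, 0);      /* form 2: one database */
      sqlite3_exec(db, "ANALYZE main.t1", 0, 0, 0);  /* form 3: one table    */
      sqlite3_exec(db, "SELECT tbl, idx, stat FROM sqlite_stat1",
                   print_stat1_row, 0, 0);
    }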
+** +** argv[0] = name of the index +** argv[1] = results of analysis - on integer for each column +*/ +static int analysisLoader(void *pData, int argc, char **argv, char **azNotUsed){ + analysisInfo *pInfo = (analysisInfo*)pData; + Index *pIndex; + int i, c; + unsigned int v; + const char *z; + + assert( argc==2 ); + if( argv==0 || argv[0]==0 || argv[1]==0 ){ + return 0; + } + pIndex = sqlite3FindIndex(pInfo->db, argv[0], pInfo->zDatabase); + if( pIndex==0 ){ + return 0; + } + z = argv[1]; + for(i=0; *z && i<=pIndex->nColumn; i++){ + v = 0; + while( (c=z[0])>='0' && c<='9' ){ + v = v*10 + c - '0'; + z++; + } + pIndex->aiRowEst[i] = v; + if( *z==' ' ) z++; + } + return 0; +} + +/* +** Load the content of the sqlite_stat1 table into the index hash tables. +*/ +int sqlite3AnalysisLoad(sqlite3 *db, int iDb){ + analysisInfo sInfo; + HashElem *i; + char *zSql; + int rc; + + assert( iDb>=0 && iDbnDb ); + assert( db->aDb[iDb].pBt!=0 ); + assert( sqlite3BtreeHoldsMutex(db->aDb[iDb].pBt) ); + + /* Clear any prior statistics */ + for(i=sqliteHashFirst(&db->aDb[iDb].pSchema->idxHash);i;i=sqliteHashNext(i)){ + Index *pIdx = sqliteHashData(i); + sqlite3DefaultRowEst(pIdx); + } + + /* Check to make sure the sqlite_stat1 table existss */ + sInfo.db = db; + sInfo.zDatabase = db->aDb[iDb].zName; + if( sqlite3FindTable(db, "sqlite_stat1", sInfo.zDatabase)==0 ){ + return SQLITE_ERROR; + } + + + /* Load new statistics out of the sqlite_stat1 table */ + zSql = sqlite3MPrintf(db, "SELECT idx, stat FROM %Q.sqlite_stat1", + sInfo.zDatabase); + (void)sqlite3SafetyOff(db); + rc = sqlite3_exec(db, zSql, analysisLoader, &sInfo, 0); + (void)sqlite3SafetyOn(db); + sqlite3_free(zSql); + return rc; +} + + +#endif /* SQLITE_OMIT_ANALYZE */ Added: external/sqlite-source-3.5.7.x/attach.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/attach.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,527 @@ +/* +** 2003 April 6 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code used to implement the ATTACH and DETACH commands. +** +** $Id: attach.c,v 1.72 2008/02/13 18:25:27 danielk1977 Exp $ +*/ +#include "sqliteInt.h" + +#ifndef SQLITE_OMIT_ATTACH +/* +** Resolve an expression that was part of an ATTACH or DETACH statement. This +** is slightly different from resolving a normal SQL expression, because simple +** identifiers are treated as strings, not possible column names or aliases. +** +** i.e. if the parser sees: +** +** ATTACH DATABASE abc AS def +** +** it treats the two expressions as literal strings 'abc' and 'def' instead of +** looking for columns of the same name. +** +** This only applies to the root node of pExpr, so the statement: +** +** ATTACH DATABASE abc||def AS 'db2' +** +** will fail because neither abc or def can be resolved. 
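A small sketch of what this resolution rule means for application code; the file name aux.db and the alias aux are illustrative, and only the connection handle db is assumed.

    #include <sqlite3.h>

    static int attach_detach(sqlite3 *db){
      int rc;

      /* A quoted file name always works; per the resolveAttachExpr() rules
      ** described above, a bare identifier such as
      **   ATTACH DATABASE backup AS aux
      ** would likewise be taken as the literal file name "backup". */
      rc = sqlite3_exec(db, "ATTACH DATABASE 'aux.db' AS aux", 0, 0, 0);
      if( rc!=SQLITE_OK ) return rc;

      /* ... work with aux.* objects here ... */

      return sqlite3_exec(db, "DETACH DATABASE aux", 0, 0, 0);
    }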
+*/ +static int resolveAttachExpr(NameContext *pName, Expr *pExpr) +{ + int rc = SQLITE_OK; + if( pExpr ){ + if( pExpr->op!=TK_ID ){ + rc = sqlite3ExprResolveNames(pName, pExpr); + if( rc==SQLITE_OK && !sqlite3ExprIsConstant(pExpr) ){ + sqlite3ErrorMsg(pName->pParse, "invalid name: \"%T\"", &pExpr->span); + return SQLITE_ERROR; + } + }else{ + pExpr->op = TK_STRING; + } + } + return rc; +} + +/* +** An SQL user-function registered to do the work of an ATTACH statement. The +** three arguments to the function come directly from an attach statement: +** +** ATTACH DATABASE x AS y KEY z +** +** SELECT sqlite_attach(x, y, z) +** +** If the optional "KEY z" syntax is omitted, an SQL NULL is passed as the +** third argument. +*/ +static void attachFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int i; + int rc = 0; + sqlite3 *db = sqlite3_user_data(context); + const char *zName; + const char *zFile; + Db *aNew; + char *zErrDyn = 0; + char zErr[128]; + + zFile = (const char *)sqlite3_value_text(argv[0]); + zName = (const char *)sqlite3_value_text(argv[1]); + if( zFile==0 ) zFile = ""; + if( zName==0 ) zName = ""; + + /* Check for the following errors: + ** + ** * Too many attached databases, + ** * Transaction currently open + ** * Specified database name already being used. + */ + if( db->nDb>=SQLITE_MAX_ATTACHED+2 ){ + sqlite3_snprintf( + sizeof(zErr), zErr, "too many attached databases - max %d", + SQLITE_MAX_ATTACHED + ); + goto attach_error; + } + if( !db->autoCommit ){ + sqlite3_snprintf(sizeof(zErr), zErr, + "cannot ATTACH database within transaction"); + goto attach_error; + } + for(i=0; inDb; i++){ + char *z = db->aDb[i].zName; + if( z && zName && sqlite3StrICmp(z, zName)==0 ){ + sqlite3_snprintf(sizeof(zErr), zErr, + "database %s is already in use", zName); + goto attach_error; + } + } + + /* Allocate the new entry in the db->aDb[] array and initialise the schema + ** hash tables. + */ + if( db->aDb==db->aDbStatic ){ + aNew = sqlite3_malloc( sizeof(db->aDb[0])*3 ); + if( aNew==0 ){ + db->mallocFailed = 1; + return; + } + memcpy(aNew, db->aDb, sizeof(db->aDb[0])*2); + }else{ + aNew = sqlite3_realloc(db->aDb, sizeof(db->aDb[0])*(db->nDb+1) ); + if( aNew==0 ){ + db->mallocFailed = 1; + return; + } + } + db->aDb = aNew; + aNew = &db->aDb[db->nDb++]; + memset(aNew, 0, sizeof(*aNew)); + + /* Open the database file. If the btree is successfully opened, use + ** it to obtain the database schema. At this point the schema may + ** or may not be initialised. 
+ */ + rc = sqlite3BtreeFactory(db, zFile, 0, SQLITE_DEFAULT_CACHE_SIZE, + db->openFlags | SQLITE_OPEN_MAIN_DB, + &aNew->pBt); + if( rc==SQLITE_OK ){ + aNew->pSchema = sqlite3SchemaGet(db, aNew->pBt); + if( !aNew->pSchema ){ + rc = SQLITE_NOMEM; + }else if( aNew->pSchema->file_format && aNew->pSchema->enc!=ENC(db) ){ + sqlite3_snprintf(sizeof(zErr), zErr, + "attached databases must use the same text encoding as main database"); + goto attach_error; + } + sqlite3PagerLockingMode(sqlite3BtreePager(aNew->pBt), db->dfltLockMode); + } + aNew->zName = sqlite3DbStrDup(db, zName); + aNew->safety_level = 3; + +#if SQLITE_HAS_CODEC + { + extern int sqlite3CodecAttach(sqlite3*, int, const void*, int); + extern void sqlite3CodecGetKey(sqlite3*, int, void**, int*); + int nKey; + char *zKey; + int t = sqlite3_value_type(argv[2]); + switch( t ){ + case SQLITE_INTEGER: + case SQLITE_FLOAT: + zErrDyn = sqlite3DbStrDup(db, "Invalid key value"); + rc = SQLITE_ERROR; + break; + + case SQLITE_TEXT: + case SQLITE_BLOB: + nKey = sqlite3_value_bytes(argv[2]); + zKey = (char *)sqlite3_value_blob(argv[2]); + sqlite3CodecAttach(db, db->nDb-1, zKey, nKey); + break; + + case SQLITE_NULL: + /* No key specified. Use the key from the main database */ + sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey); + sqlite3CodecAttach(db, db->nDb-1, zKey, nKey); + break; + } + } +#endif + + /* If the file was opened successfully, read the schema for the new database. + ** If this fails, or if opening the file failed, then close the file and + ** remove the entry from the db->aDb[] array. i.e. put everything back the way + ** we found it. + */ + if( rc==SQLITE_OK ){ + (void)sqlite3SafetyOn(db); + sqlite3BtreeEnterAll(db); + rc = sqlite3Init(db, &zErrDyn); + sqlite3BtreeLeaveAll(db); + (void)sqlite3SafetyOff(db); + } + if( rc ){ + int iDb = db->nDb - 1; + assert( iDb>=2 ); + if( db->aDb[iDb].pBt ){ + sqlite3BtreeClose(db->aDb[iDb].pBt); + db->aDb[iDb].pBt = 0; + db->aDb[iDb].pSchema = 0; + } + sqlite3ResetInternalSchema(db, 0); + db->nDb = iDb; + if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ){ + db->mallocFailed = 1; + sqlite3_snprintf(sizeof(zErr),zErr, "out of memory"); + }else{ + sqlite3_snprintf(sizeof(zErr),zErr, "unable to open database: %s", zFile); + } + goto attach_error; + } + + return; + +attach_error: + /* Return an error if we get here */ + if( zErrDyn ){ + sqlite3_result_error(context, zErrDyn, -1); + sqlite3_free(zErrDyn); + }else{ + zErr[sizeof(zErr)-1] = 0; + sqlite3_result_error(context, zErr, -1); + } + if( rc ) sqlite3_result_error_code(context, rc); +} + +/* +** An SQL user-function registered to do the work of an DETACH statement. 
The +** three arguments to the function come directly from a detach statement: +** +** DETACH DATABASE x +** +** SELECT sqlite_detach(x) +*/ +static void detachFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zName = (const char *)sqlite3_value_text(argv[0]); + sqlite3 *db = sqlite3_user_data(context); + int i; + Db *pDb = 0; + char zErr[128]; + + if( zName==0 ) zName = ""; + for(i=0; inDb; i++){ + pDb = &db->aDb[i]; + if( pDb->pBt==0 ) continue; + if( sqlite3StrICmp(pDb->zName, zName)==0 ) break; + } + + if( i>=db->nDb ){ + sqlite3_snprintf(sizeof(zErr),zErr, "no such database: %s", zName); + goto detach_error; + } + if( i<2 ){ + sqlite3_snprintf(sizeof(zErr),zErr, "cannot detach database %s", zName); + goto detach_error; + } + if( !db->autoCommit ){ + sqlite3_snprintf(sizeof(zErr), zErr, + "cannot DETACH database within transaction"); + goto detach_error; + } + if( sqlite3BtreeIsInReadTrans(pDb->pBt) ){ + sqlite3_snprintf(sizeof(zErr),zErr, "database %s is locked", zName); + goto detach_error; + } + + sqlite3BtreeClose(pDb->pBt); + pDb->pBt = 0; + pDb->pSchema = 0; + sqlite3ResetInternalSchema(db, 0); + return; + +detach_error: + sqlite3_result_error(context, zErr, -1); +} + +/* +** This procedure generates VDBE code for a single invocation of either the +** sqlite_detach() or sqlite_attach() SQL user functions. +*/ +static void codeAttach( + Parse *pParse, /* The parser context */ + int type, /* Either SQLITE_ATTACH or SQLITE_DETACH */ + const char *zFunc, /* Either "sqlite_attach" or "sqlite_detach */ + int nFunc, /* Number of args to pass to zFunc */ + Expr *pAuthArg, /* Expression to pass to authorization callback */ + Expr *pFilename, /* Name of database file */ + Expr *pDbname, /* Name of the database to use internally */ + Expr *pKey /* Database key for encryption extension */ +){ + int rc; + NameContext sName; + Vdbe *v; + FuncDef *pFunc; + sqlite3* db = pParse->db; + int regArgs; + +#ifndef SQLITE_OMIT_AUTHORIZATION + assert( db->mallocFailed || pAuthArg ); + if( pAuthArg ){ + char *zAuthArg = sqlite3NameFromToken(db, &pAuthArg->span); + if( !zAuthArg ){ + goto attach_end; + } + rc = sqlite3AuthCheck(pParse, type, zAuthArg, 0, 0); + sqlite3_free(zAuthArg); + if(rc!=SQLITE_OK ){ + goto attach_end; + } + } +#endif /* SQLITE_OMIT_AUTHORIZATION */ + + memset(&sName, 0, sizeof(NameContext)); + sName.pParse = pParse; + + if( + SQLITE_OK!=(rc = resolveAttachExpr(&sName, pFilename)) || + SQLITE_OK!=(rc = resolveAttachExpr(&sName, pDbname)) || + SQLITE_OK!=(rc = resolveAttachExpr(&sName, pKey)) + ){ + pParse->nErr++; + goto attach_end; + } + + v = sqlite3GetVdbe(pParse); + regArgs = sqlite3GetTempRange(pParse, 4); + sqlite3ExprCode(pParse, pFilename, regArgs); + sqlite3ExprCode(pParse, pDbname, regArgs+1); + sqlite3ExprCode(pParse, pKey, regArgs+2); + + assert( v || db->mallocFailed ); + if( v ){ + sqlite3VdbeAddOp3(v, OP_Function, 0, regArgs+3-nFunc, regArgs+3); + sqlite3VdbeChangeP5(v, nFunc); + pFunc = sqlite3FindFunction(db, zFunc, strlen(zFunc), nFunc, SQLITE_UTF8,0); + sqlite3VdbeChangeP4(v, -1, (char *)pFunc, P4_FUNCDEF); + + /* Code an OP_Expire. For an ATTACH statement, set P1 to true (expire this + ** statement only). For DETACH, set it to false (expire all existing + ** statements). + */ + sqlite3VdbeAddOp1(v, OP_Expire, (type==SQLITE_ATTACH)); + } + +attach_end: + sqlite3ExprDelete(pFilename); + sqlite3ExprDelete(pDbname); + sqlite3ExprDelete(pKey); +} + +/* +** Called by the parser to compile a DETACH statement. 
+** +** DETACH pDbname +*/ +void sqlite3Detach(Parse *pParse, Expr *pDbname){ + codeAttach(pParse, SQLITE_DETACH, "sqlite_detach", 1, pDbname, 0, 0, pDbname); +} + +/* +** Called by the parser to compile an ATTACH statement. +** +** ATTACH p AS pDbname KEY pKey +*/ +void sqlite3Attach(Parse *pParse, Expr *p, Expr *pDbname, Expr *pKey){ + codeAttach(pParse, SQLITE_ATTACH, "sqlite_attach", 3, p, p, pDbname, pKey); +} +#endif /* SQLITE_OMIT_ATTACH */ + +/* +** Register the functions sqlite_attach and sqlite_detach. +*/ +void sqlite3AttachFunctions(sqlite3 *db){ +#ifndef SQLITE_OMIT_ATTACH + static const int enc = SQLITE_UTF8; + sqlite3CreateFunc(db, "sqlite_attach", 3, enc, db, attachFunc, 0, 0); + sqlite3CreateFunc(db, "sqlite_detach", 1, enc, db, detachFunc, 0, 0); +#endif +} + +/* +** Initialize a DbFixer structure. This routine must be called prior +** to passing the structure to one of the sqliteFixAAAA() routines below. +** +** The return value indicates whether or not fixation is required. TRUE +** means we do need to fix the database references, FALSE means we do not. +*/ +int sqlite3FixInit( + DbFixer *pFix, /* The fixer to be initialized */ + Parse *pParse, /* Error messages will be written here */ + int iDb, /* This is the database that must be used */ + const char *zType, /* "view", "trigger", or "index" */ + const Token *pName /* Name of the view, trigger, or index */ +){ + sqlite3 *db; + + if( iDb<0 || iDb==1 ) return 0; + db = pParse->db; + assert( db->nDb>iDb ); + pFix->pParse = pParse; + pFix->zDb = db->aDb[iDb].zName; + pFix->zType = zType; + pFix->pName = pName; + return 1; +} + +/* +** The following set of routines walk through the parse tree and assign +** a specific database to all table references where the database name +** was left unspecified in the original SQL statement. The pFix structure +** must have been initialized by a prior call to sqlite3FixInit(). +** +** These routines are used to make sure that an index, trigger, or +** view in one database does not refer to objects in a different database. +** (Exception: indices, triggers, and views in the TEMP database are +** allowed to refer to anything.) If a reference is explicitly made +** to an object in a different database, an error message is added to +** pParse->zErrMsg and these routines return non-zero. If everything +** checks out, these routines return 0. 
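The practical effect of these fix-up routines is easiest to see from the application side: a persistent object may not reach across databases, while TEMP objects may. A short sketch, assuming an open connection db and an attached database aux containing a table t1.

    #include <sqlite3.h>
    #include <stdio.h>

    static void fixer_example(sqlite3 *db){
      char *zErr = 0;

      /* Rejected: a view stored in "main" may not reference objects in "aux";
      ** sqlite3FixSrcList() reports that the view cannot reference objects in
      ** database aux. */
      if( sqlite3_exec(db, "CREATE VIEW v1 AS SELECT * FROM aux.t1",
                       0, 0, &zErr)!=SQLITE_OK ){
        fprintf(stderr, "%s\n", zErr);
        sqlite3_free(zErr);
      }

      /* Allowed: objects in the TEMP database are exempt from the restriction. */
      sqlite3_exec(db, "CREATE TEMP VIEW v2 AS SELECT * FROM aux.t1", 0, 0, 0);
    }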
+*/ +int sqlite3FixSrcList( + DbFixer *pFix, /* Context of the fixation */ + SrcList *pList /* The Source list to check and modify */ +){ + int i; + const char *zDb; + struct SrcList_item *pItem; + + if( pList==0 ) return 0; + zDb = pFix->zDb; + for(i=0, pItem=pList->a; inSrc; i++, pItem++){ + if( pItem->zDatabase==0 ){ + pItem->zDatabase = sqlite3DbStrDup(pFix->pParse->db, zDb); + }else if( sqlite3StrICmp(pItem->zDatabase,zDb)!=0 ){ + sqlite3ErrorMsg(pFix->pParse, + "%s %T cannot reference objects in database %s", + pFix->zType, pFix->pName, pItem->zDatabase); + return 1; + } +#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER) + if( sqlite3FixSelect(pFix, pItem->pSelect) ) return 1; + if( sqlite3FixExpr(pFix, pItem->pOn) ) return 1; +#endif + } + return 0; +} +#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER) +int sqlite3FixSelect( + DbFixer *pFix, /* Context of the fixation */ + Select *pSelect /* The SELECT statement to be fixed to one database */ +){ + while( pSelect ){ + if( sqlite3FixExprList(pFix, pSelect->pEList) ){ + return 1; + } + if( sqlite3FixSrcList(pFix, pSelect->pSrc) ){ + return 1; + } + if( sqlite3FixExpr(pFix, pSelect->pWhere) ){ + return 1; + } + if( sqlite3FixExpr(pFix, pSelect->pHaving) ){ + return 1; + } + pSelect = pSelect->pPrior; + } + return 0; +} +int sqlite3FixExpr( + DbFixer *pFix, /* Context of the fixation */ + Expr *pExpr /* The expression to be fixed to one database */ +){ + while( pExpr ){ + if( sqlite3FixSelect(pFix, pExpr->pSelect) ){ + return 1; + } + if( sqlite3FixExprList(pFix, pExpr->pList) ){ + return 1; + } + if( sqlite3FixExpr(pFix, pExpr->pRight) ){ + return 1; + } + pExpr = pExpr->pLeft; + } + return 0; +} +int sqlite3FixExprList( + DbFixer *pFix, /* Context of the fixation */ + ExprList *pList /* The expression to be fixed to one database */ +){ + int i; + struct ExprList_item *pItem; + if( pList==0 ) return 0; + for(i=0, pItem=pList->a; inExpr; i++, pItem++){ + if( sqlite3FixExpr(pFix, pItem->pExpr) ){ + return 1; + } + } + return 0; +} +#endif + +#ifndef SQLITE_OMIT_TRIGGER +int sqlite3FixTriggerStep( + DbFixer *pFix, /* Context of the fixation */ + TriggerStep *pStep /* The trigger step be fixed to one database */ +){ + while( pStep ){ + if( sqlite3FixSelect(pFix, pStep->pSelect) ){ + return 1; + } + if( sqlite3FixExpr(pFix, pStep->pWhere) ){ + return 1; + } + if( sqlite3FixExprList(pFix, pStep->pExprList) ){ + return 1; + } + pStep = pStep->pNext; + } + return 0; +} +#endif Added: external/sqlite-source-3.5.7.x/auth.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/auth.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,234 @@ +/* +** 2003 January 11 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code used to implement the sqlite3_set_authorizer() +** API. This facility is an optional feature of the library. Embedded +** systems that do not need this facility may omit it by recompiling +** the library with -DSQLITE_OMIT_AUTHORIZATION=1 +** +** $Id: auth.c,v 1.29 2007/09/18 15:55:07 drh Exp $ +*/ +#include "sqliteInt.h" + +/* +** All of the code in this file may be omitted by defining a single +** macro. 
+*/ +#ifndef SQLITE_OMIT_AUTHORIZATION + +/* +** Set or clear the access authorization function. +** +** The access authorization function is be called during the compilation +** phase to verify that the user has read and/or write access permission on +** various fields of the database. The first argument to the auth function +** is a copy of the 3rd argument to this routine. The second argument +** to the auth function is one of these constants: +** +** SQLITE_CREATE_INDEX +** SQLITE_CREATE_TABLE +** SQLITE_CREATE_TEMP_INDEX +** SQLITE_CREATE_TEMP_TABLE +** SQLITE_CREATE_TEMP_TRIGGER +** SQLITE_CREATE_TEMP_VIEW +** SQLITE_CREATE_TRIGGER +** SQLITE_CREATE_VIEW +** SQLITE_DELETE +** SQLITE_DROP_INDEX +** SQLITE_DROP_TABLE +** SQLITE_DROP_TEMP_INDEX +** SQLITE_DROP_TEMP_TABLE +** SQLITE_DROP_TEMP_TRIGGER +** SQLITE_DROP_TEMP_VIEW +** SQLITE_DROP_TRIGGER +** SQLITE_DROP_VIEW +** SQLITE_INSERT +** SQLITE_PRAGMA +** SQLITE_READ +** SQLITE_SELECT +** SQLITE_TRANSACTION +** SQLITE_UPDATE +** +** The third and fourth arguments to the auth function are the name of +** the table and the column that are being accessed. The auth function +** should return either SQLITE_OK, SQLITE_DENY, or SQLITE_IGNORE. If +** SQLITE_OK is returned, it means that access is allowed. SQLITE_DENY +** means that the SQL statement will never-run - the sqlite3_exec() call +** will return with an error. SQLITE_IGNORE means that the SQL statement +** should run but attempts to read the specified column will return NULL +** and attempts to write the column will be ignored. +** +** Setting the auth function to NULL disables this hook. The default +** setting of the auth function is NULL. +*/ +int sqlite3_set_authorizer( + sqlite3 *db, + int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), + void *pArg +){ + sqlite3_mutex_enter(db->mutex); + db->xAuth = xAuth; + db->pAuthArg = pArg; + sqlite3ExpirePreparedStatements(db); + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} + +/* +** Write an error message into pParse->zErrMsg that explains that the +** user-supplied authorization function returned an illegal value. +*/ +static void sqliteAuthBadReturnCode(Parse *pParse, int rc){ + sqlite3ErrorMsg(pParse, "illegal return value (%d) from the " + "authorization function - should be SQLITE_OK, SQLITE_IGNORE, " + "or SQLITE_DENY", rc); + pParse->rc = SQLITE_ERROR; +} + +/* +** The pExpr should be a TK_COLUMN expression. The table referred to +** is in pTabList or else it is the NEW or OLD table of a trigger. +** Check to see if it is OK to read this particular column. +** +** If the auth function returns SQLITE_IGNORE, change the TK_COLUMN +** instruction into a TK_NULL. If the auth function returns SQLITE_DENY, +** then generate an error. 
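Seen from the application side, the contract above boils down to returning SQLITE_OK, SQLITE_DENY or SQLITE_IGNORE from a callback installed with sqlite3_set_authorizer(). A minimal sketch; the policy and the column name "secret" are purely illustrative.

    #include <sqlite3.h>
    #include <string.h>

    /* Deny DROP TABLE outright and hide any column named "secret": returning
    ** SQLITE_IGNORE for SQLITE_READ makes reads of that column yield NULL. */
    static int xAuthExample(void *pNotUsed, int op, const char *zArg1,
                            const char *zArg2, const char *zArg3,
                            const char *zArg4){
      if( op==SQLITE_DROP_TABLE ) return SQLITE_DENY;
      if( op==SQLITE_READ && zArg2 && strcmp(zArg2, "secret")==0 ){
        return SQLITE_IGNORE;
      }
      return SQLITE_OK;
    }

    static int install_authorizer(sqlite3 *db){
      return sqlite3_set_authorizer(db, xAuthExample, 0);
    }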
+*/ +void sqlite3AuthRead( + Parse *pParse, /* The parser context */ + Expr *pExpr, /* The expression to check authorization on */ + Schema *pSchema, /* The schema of the expression */ + SrcList *pTabList /* All table that pExpr might refer to */ +){ + sqlite3 *db = pParse->db; + int rc; + Table *pTab = 0; /* The table being read */ + const char *zCol; /* Name of the column of the table */ + int iSrc; /* Index in pTabList->a[] of table being read */ + const char *zDBase; /* Name of database being accessed */ + TriggerStack *pStack; /* The stack of current triggers */ + int iDb; /* The index of the database the expression refers to */ + + if( db->xAuth==0 ) return; + if( pExpr->op!=TK_COLUMN ) return; + iDb = sqlite3SchemaToIndex(pParse->db, pSchema); + if( iDb<0 ){ + /* An attempt to read a column out of a subquery or other + ** temporary table. */ + return; + } + for(iSrc=0; pTabList && iSrcnSrc; iSrc++){ + if( pExpr->iTable==pTabList->a[iSrc].iCursor ) break; + } + if( iSrc>=0 && pTabList && iSrcnSrc ){ + pTab = pTabList->a[iSrc].pTab; + }else if( (pStack = pParse->trigStack)!=0 ){ + /* This must be an attempt to read the NEW or OLD pseudo-tables + ** of a trigger. + */ + assert( pExpr->iTable==pStack->newIdx || pExpr->iTable==pStack->oldIdx ); + pTab = pStack->pTab; + } + if( pTab==0 ) return; + if( pExpr->iColumn>=0 ){ + assert( pExpr->iColumnnCol ); + zCol = pTab->aCol[pExpr->iColumn].zName; + }else if( pTab->iPKey>=0 ){ + assert( pTab->iPKeynCol ); + zCol = pTab->aCol[pTab->iPKey].zName; + }else{ + zCol = "ROWID"; + } + assert( iDb>=0 && iDbnDb ); + zDBase = db->aDb[iDb].zName; + rc = db->xAuth(db->pAuthArg, SQLITE_READ, pTab->zName, zCol, zDBase, + pParse->zAuthContext); + if( rc==SQLITE_IGNORE ){ + pExpr->op = TK_NULL; + }else if( rc==SQLITE_DENY ){ + if( db->nDb>2 || iDb!=0 ){ + sqlite3ErrorMsg(pParse, "access to %s.%s.%s is prohibited", + zDBase, pTab->zName, zCol); + }else{ + sqlite3ErrorMsg(pParse, "access to %s.%s is prohibited",pTab->zName,zCol); + } + pParse->rc = SQLITE_AUTH; + }else if( rc!=SQLITE_OK ){ + sqliteAuthBadReturnCode(pParse, rc); + } +} + +/* +** Do an authorization check using the code and arguments given. Return +** either SQLITE_OK (zero) or SQLITE_IGNORE or SQLITE_DENY. If SQLITE_DENY +** is returned, then the error count and error message in pParse are +** modified appropriately. +*/ +int sqlite3AuthCheck( + Parse *pParse, + int code, + const char *zArg1, + const char *zArg2, + const char *zArg3 +){ + sqlite3 *db = pParse->db; + int rc; + + /* Don't do any authorization checks if the database is initialising + ** or if the parser is being invoked from within sqlite3_declare_vtab. + */ + if( db->init.busy || IN_DECLARE_VTAB ){ + return SQLITE_OK; + } + + if( db->xAuth==0 ){ + return SQLITE_OK; + } + rc = db->xAuth(db->pAuthArg, code, zArg1, zArg2, zArg3, pParse->zAuthContext); + if( rc==SQLITE_DENY ){ + sqlite3ErrorMsg(pParse, "not authorized"); + pParse->rc = SQLITE_AUTH; + }else if( rc!=SQLITE_OK && rc!=SQLITE_IGNORE ){ + rc = SQLITE_DENY; + sqliteAuthBadReturnCode(pParse, rc); + } + return rc; +} + +/* +** Push an authorization context. After this routine is called, the +** zArg3 argument to authorization callbacks will be zContext until +** popped. Or if pParse==0, this routine is a no-op. 
+*/ +void sqlite3AuthContextPush( + Parse *pParse, + AuthContext *pContext, + const char *zContext +){ + pContext->pParse = pParse; + if( pParse ){ + pContext->zAuthContext = pParse->zAuthContext; + pParse->zAuthContext = zContext; + } +} + +/* +** Pop an authorization context that was previously pushed +** by sqlite3AuthContextPush +*/ +void sqlite3AuthContextPop(AuthContext *pContext){ + if( pContext->pParse ){ + pContext->pParse->zAuthContext = pContext->zAuthContext; + pContext->pParse = 0; + } +} + +#endif /* SQLITE_OMIT_AUTHORIZATION */ Added: external/sqlite-source-3.5.7.x/bitvec.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/bitvec.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,211 @@ +/* +** 2008 February 16 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file implements an object that represents a fixed-length +** bitmap. Bits are numbered starting with 1. +** +** A bitmap is used to record what pages a database file have been +** journalled during a transaction. Usually only a few pages are +** journalled. So the bitmap is usually sparse and has low cardinality. +** But sometimes (for example when during a DROP of a large table) most +** or all of the pages get journalled. In those cases, the bitmap becomes +** dense. The algorithm needs to handle both cases well. +** +** The size of the bitmap is fixed when the object is created. +** +** All bits are clear when the bitmap is created. Individual bits +** may be set or cleared one at a time. +** +** Test operations are about 100 times more common that set operations. +** Clear operations are exceedingly rare. There are usually between +** 5 and 500 set operations per Bitvec object, though the number of sets can +** sometimes grow into tens of thousands or larger. The size of the +** Bitvec object is the number of pages in the database file at the +** start of a transaction, and is thus usually less than a few thousand, +** but can be as large as 2 billion for a really big database. +** +** @(#) $Id: bitvec.c,v 1.2 2008/03/14 13:02:08 mlcreech Exp $ +*/ +#include "sqliteInt.h" + +#define BITVEC_SZ 512 +/* Round the union size down to the nearest pointer boundary, since that's how +** it will be aligned within the Bitvec struct. */ +#define BITVEC_USIZE (((BITVEC_SZ-12)/sizeof(Bitvec *))*sizeof(Bitvec *)) +#define BITVEC_NCHAR BITVEC_USIZE +#define BITVEC_NBIT (BITVEC_NCHAR*8) +#define BITVEC_NINT (BITVEC_USIZE/4) +#define BITVEC_MXHASH (BITVEC_NINT/2) +#define BITVEC_NPTR (BITVEC_USIZE/sizeof(Bitvec *)) + +#define BITVEC_HASH(X) (((X)*37)%BITVEC_NINT) + +/* +** A bitmap is an instance of the following structure. +** +** This bitmap records the existance of zero or more bits +** with values between 1 and iSize, inclusive. +** +** There are three possible representations of the bitmap. +** If iSize<=BITVEC_NBIT, then Bitvec.u.aBitmap[] is a straight +** bitmap. The least significant bit is bit 1. +** +** If iSize>BITVEC_NBIT and iDivisor==0 then Bitvec.u.aHash[] is +** a hash table that will hold up to BITVEC_MXHASH distinct values. +** +** Otherwise, the value i is redirected into one of BITVEC_NPTR +** sub-bitmaps pointed to by Bitvec.u.apSub[]. 
Each subbitmap +** handles up to iDivisor separate values of i. apSub[0] holds +** values between 1 and iDivisor. apSub[1] holds values between +** iDivisor+1 and 2*iDivisor. apSub[N] holds values between +** N*iDivisor+1 and (N+1)*iDivisor. Each subbitmap is normalized +** to hold deal with values between 1 and iDivisor. +*/ +struct Bitvec { + u32 iSize; /* Maximum bit index */ + u32 nSet; /* Number of bits that are set */ + u32 iDivisor; /* Number of bits handled by each apSub[] entry */ + union { + u8 aBitmap[BITVEC_NCHAR]; /* Bitmap representation */ + u32 aHash[BITVEC_NINT]; /* Hash table representation */ + Bitvec *apSub[BITVEC_NPTR]; /* Recursive representation */ + } u; +}; + +/* +** Create a new bitmap object able to handle bits between 0 and iSize, +** inclusive. Return a pointer to the new object. Return NULL if +** malloc fails. +*/ +Bitvec *sqlite3BitvecCreate(u32 iSize){ + Bitvec *p; + assert( sizeof(*p)==BITVEC_SZ ); + p = sqlite3MallocZero( sizeof(*p) ); + if( p ){ + p->iSize = iSize; + } + return p; +} + +/* +** Check to see if the i-th bit is set. Return true or false. +** If p is NULL (if the bitmap has not been created) or if +** i is out of range, then return false. +*/ +int sqlite3BitvecTest(Bitvec *p, u32 i){ + assert( i>0 ); + if( p==0 ) return 0; + if( i>p->iSize ) return 0; + if( p->iSize<=BITVEC_NBIT ){ + i--; + return (p->u.aBitmap[i/8] & (1<<(i&7)))!=0; + } + if( p->iDivisor>0 ){ + u32 bin = (i-1)/p->iDivisor; + i = (i-1)%p->iDivisor + 1; + return sqlite3BitvecTest(p->u.apSub[bin], i); + }else{ + u32 h = BITVEC_HASH(i); + while( p->u.aHash[h] ){ + if( p->u.aHash[h]==i ) return 1; + h++; + if( h>=BITVEC_NINT ) h = 0; + } + return 0; + } +} + +/* +** Set the i-th bit. Return 0 on success and an error code if +** anything goes wrong. +*/ +int sqlite3BitvecSet(Bitvec *p, u32 i){ + u32 h; + assert( p!=0 ); + if( p->iSize<=BITVEC_NBIT ){ + i--; + p->u.aBitmap[i/8] |= 1 << (i&7); + return SQLITE_OK; + } + if( p->iDivisor ){ + u32 bin = (i-1)/p->iDivisor; + i = (i-1)%p->iDivisor + 1; + if( p->u.apSub[bin]==0 ){ + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, 1); + p->u.apSub[bin] = sqlite3BitvecCreate( p->iDivisor ); + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, 0); + if( p->u.apSub[bin]==0 ) return SQLITE_NOMEM; + } + return sqlite3BitvecSet(p->u.apSub[bin], i); + } + h = BITVEC_HASH(i); + while( p->u.aHash[h] ){ + if( p->u.aHash[h]==i ) return SQLITE_OK; + h++; + if( h==BITVEC_NINT ) h = 0; + } + p->nSet++; + if( p->nSet>=BITVEC_MXHASH ){ + int j, rc; + u32 aiValues[BITVEC_NINT]; + memcpy(aiValues, p->u.aHash, sizeof(aiValues)); + memset(p->u.apSub, 0, sizeof(p->u.apSub[0])*BITVEC_NPTR); + p->iDivisor = (p->iSize + BITVEC_NPTR - 1)/BITVEC_NPTR; + sqlite3BitvecSet(p, i); + for(rc=j=0; ju.aHash[h] = i; + return SQLITE_OK; +} + +/* +** Clear the i-th bit. Return 0 on success and an error code if +** anything goes wrong. 
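Because Bitvec is internal to the library (declared in sqliteInt.h), the routines above can only be exercised from code compiled as part of SQLite itself. A small sketch of the intended calling pattern, together with sqlite3BitvecDestroy(), which follows below; the size 1000 and the page numbers are arbitrary.

    #include "sqliteInt.h"
    #include <assert.h>

    /* Record and query journalled pages, mirroring how the pager uses Bitvec. */
    static int bitvec_demo(void){
      Bitvec *pVec = sqlite3BitvecCreate(1000);   /* valid bits are 1..1000 */
      if( pVec==0 ) return SQLITE_NOMEM;

      sqlite3BitvecSet(pVec, 7);                  /* mark page 7 as journalled */
      assert( sqlite3BitvecTest(pVec, 7) );
      assert( !sqlite3BitvecTest(pVec, 8) );

      sqlite3BitvecClear(pVec, 7);
      assert( !sqlite3BitvecTest(pVec, 7) );

      sqlite3BitvecDestroy(pVec);
      return SQLITE_OK;
    }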
+*/ +void sqlite3BitvecClear(Bitvec *p, u32 i){ + assert( p!=0 ); + if( p->iSize<=BITVEC_NBIT ){ + i--; + p->u.aBitmap[i/8] &= ~(1 << (i&7)); + }else if( p->iDivisor ){ + u32 bin = (i-1)/p->iDivisor; + i = (i-1)%p->iDivisor + 1; + if( p->u.apSub[bin] ){ + sqlite3BitvecClear(p->u.apSub[bin], i); + } + }else{ + int j; + u32 aiValues[BITVEC_NINT]; + memcpy(aiValues, p->u.aHash, sizeof(aiValues)); + memset(p->u.aHash, 0, sizeof(p->u.aHash[0])*BITVEC_NINT); + p->nSet = 0; + for(j=0; jiDivisor ){ + int i; + for(i=0; iu.apSub[i]); + } + } + sqlite3_free(p); +} Added: external/sqlite-source-3.5.7.x/btmutex.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/btmutex.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,317 @@ +/* +** 2007 August 27 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** $Id: btmutex.c,v 1.9 2008/01/23 12:52:41 drh Exp $ +** +** This file contains code used to implement mutexes on Btree objects. +** This code really belongs in btree.c. But btree.c is getting too +** big and we want to break it down some. This packaged seemed like +** a good breakout. +*/ +#include "btreeInt.h" +#if SQLITE_THREADSAFE && !defined(SQLITE_OMIT_SHARED_CACHE) + + +/* +** Enter a mutex on the given BTree object. +** +** If the object is not sharable, then no mutex is ever required +** and this routine is a no-op. The underlying mutex is non-recursive. +** But we keep a reference count in Btree.wantToLock so the behavior +** of this interface is recursive. +** +** To avoid deadlocks, multiple Btrees are locked in the same order +** by all database connections. The p->pNext is a list of other +** Btrees belonging to the same database connection as the p Btree +** which need to be locked after p. If we cannot get a lock on +** p, then first unlock all of the others on p->pNext, then wait +** for the lock to become available on p, then relock all of the +** subsequent Btrees that desire a lock. +*/ +void sqlite3BtreeEnter(Btree *p){ + Btree *pLater; + + /* Some basic sanity checking on the Btree. The list of Btrees + ** connected by pNext and pPrev should be in sorted order by + ** Btree.pBt value. All elements of the list should belong to + ** the same connection. Only shared Btrees are on the list. */ + assert( p->pNext==0 || p->pNext->pBt>p->pBt ); + assert( p->pPrev==0 || p->pPrev->pBtpBt ); + assert( p->pNext==0 || p->pNext->db==p->db ); + assert( p->pPrev==0 || p->pPrev->db==p->db ); + assert( p->sharable || (p->pNext==0 && p->pPrev==0) ); + + /* Check for locking consistency */ + assert( !p->locked || p->wantToLock>0 ); + assert( p->sharable || p->wantToLock==0 ); + + /* We should already hold a lock on the database connection */ + assert( sqlite3_mutex_held(p->db->mutex) ); + + if( !p->sharable ) return; + p->wantToLock++; + if( p->locked ) return; + +#ifndef SQLITE_MUTEX_NOOP + /* In most cases, we should be able to acquire the lock we + ** want without having to go throught the ascending lock + ** procedure that follows. Just be sure not to block. + */ + if( sqlite3_mutex_try(p->pBt->mutex)==SQLITE_OK ){ + p->locked = 1; + return; + } + + /* To avoid deadlock, first release all locks with a larger + ** BtShared address. 
Then acquire our lock. Then reacquire + ** the other BtShared locks that we used to hold in ascending + ** order. + */ + for(pLater=p->pNext; pLater; pLater=pLater->pNext){ + assert( pLater->sharable ); + assert( pLater->pNext==0 || pLater->pNext->pBt>pLater->pBt ); + assert( !pLater->locked || pLater->wantToLock>0 ); + if( pLater->locked ){ + sqlite3_mutex_leave(pLater->pBt->mutex); + pLater->locked = 0; + } + } + sqlite3_mutex_enter(p->pBt->mutex); + p->locked = 1; + for(pLater=p->pNext; pLater; pLater=pLater->pNext){ + if( pLater->wantToLock ){ + sqlite3_mutex_enter(pLater->pBt->mutex); + pLater->locked = 1; + } + } +#endif /* SQLITE_MUTEX_NOOP */ +} + +/* +** Exit the recursive mutex on a Btree. +*/ +void sqlite3BtreeLeave(Btree *p){ + if( p->sharable ){ + assert( p->wantToLock>0 ); + p->wantToLock--; + if( p->wantToLock==0 ){ + assert( p->locked ); + sqlite3_mutex_leave(p->pBt->mutex); + p->locked = 0; + } + } +} + +#ifndef NDEBUG +/* +** Return true if the BtShared mutex is held on the btree. +** +** This routine makes no determination one why or another if the +** database connection mutex is held. +** +** This routine is used only from within assert() statements. +*/ +int sqlite3BtreeHoldsMutex(Btree *p){ + return (p->sharable==0 || + (p->locked && p->wantToLock && sqlite3_mutex_held(p->pBt->mutex))); +} +#endif + + +#ifndef SQLITE_OMIT_INCRBLOB +/* +** Enter and leave a mutex on a Btree given a cursor owned by that +** Btree. These entry points are used by incremental I/O and can be +** omitted if that module is not used. +*/ +void sqlite3BtreeEnterCursor(BtCursor *pCur){ + sqlite3BtreeEnter(pCur->pBtree); +} +void sqlite3BtreeLeaveCursor(BtCursor *pCur){ + sqlite3BtreeLeave(pCur->pBtree); +} +#endif /* SQLITE_OMIT_INCRBLOB */ + + +/* +** Enter the mutex on every Btree associated with a database +** connection. This is needed (for example) prior to parsing +** a statement since we will be comparing table and column names +** against all schemas and we do not want those schemas being +** reset out from under us. +** +** There is a corresponding leave-all procedures. +** +** Enter the mutexes in accending order by BtShared pointer address +** to avoid the possibility of deadlock when two threads with +** two or more btrees in common both try to lock all their btrees +** at the same instant. +*/ +void sqlite3BtreeEnterAll(sqlite3 *db){ + int i; + Btree *p, *pLater; + assert( sqlite3_mutex_held(db->mutex) ); + for(i=0; inDb; i++){ + p = db->aDb[i].pBt; + if( p && p->sharable ){ + p->wantToLock++; + if( !p->locked ){ + assert( p->wantToLock==1 ); + while( p->pPrev ) p = p->pPrev; + while( p->locked && p->pNext ) p = p->pNext; + for(pLater = p->pNext; pLater; pLater=pLater->pNext){ + if( pLater->locked ){ + sqlite3_mutex_leave(pLater->pBt->mutex); + pLater->locked = 0; + } + } + while( p ){ + sqlite3_mutex_enter(p->pBt->mutex); + p->locked++; + p = p->pNext; + } + } + } + } +} +void sqlite3BtreeLeaveAll(sqlite3 *db){ + int i; + Btree *p; + assert( sqlite3_mutex_held(db->mutex) ); + for(i=0; inDb; i++){ + p = db->aDb[i].pBt; + if( p && p->sharable ){ + assert( p->wantToLock>0 ); + p->wantToLock--; + if( p->wantToLock==0 ){ + assert( p->locked ); + sqlite3_mutex_leave(p->pBt->mutex); + p->locked = 0; + } + } + } +} + +#ifndef NDEBUG +/* +** Return true if the current thread holds the database connection +** mutex and all required BtShared mutexes. +** +** This routine is used inside assert() statements only. 
+*/ +int sqlite3BtreeHoldsAllMutexes(sqlite3 *db){ + int i; + if( !sqlite3_mutex_held(db->mutex) ){ + return 0; + } + for(i=0; inDb; i++){ + Btree *p; + p = db->aDb[i].pBt; + if( p && p->sharable && + (p->wantToLock==0 || !sqlite3_mutex_held(p->pBt->mutex)) ){ + return 0; + } + } + return 1; +} +#endif /* NDEBUG */ + +/* +** Potentially dd a new Btree pointer to a BtreeMutexArray. +** Really only add the Btree if it can possibly be shared with +** another database connection. +** +** The Btrees are kept in sorted order by pBtree->pBt. That +** way when we go to enter all the mutexes, we can enter them +** in order without every having to backup and retry and without +** worrying about deadlock. +** +** The number of shared btrees will always be small (usually 0 or 1) +** so an insertion sort is an adequate algorithm here. +*/ +void sqlite3BtreeMutexArrayInsert(BtreeMutexArray *pArray, Btree *pBtree){ + int i, j; + BtShared *pBt; + if( pBtree==0 || pBtree->sharable==0 ) return; +#ifndef NDEBUG + { + for(i=0; inMutex; i++){ + assert( pArray->aBtree[i]!=pBtree ); + } + } +#endif + assert( pArray->nMutex>=0 ); + assert( pArray->nMutexaBtree)/sizeof(pArray->aBtree[0])-1 ); + pBt = pBtree->pBt; + for(i=0; inMutex; i++){ + assert( pArray->aBtree[i]!=pBtree ); + if( pArray->aBtree[i]->pBt>pBt ){ + for(j=pArray->nMutex; j>i; j--){ + pArray->aBtree[j] = pArray->aBtree[j-1]; + } + pArray->aBtree[i] = pBtree; + pArray->nMutex++; + return; + } + } + pArray->aBtree[pArray->nMutex++] = pBtree; +} + +/* +** Enter the mutex of every btree in the array. This routine is +** called at the beginning of sqlite3VdbeExec(). The mutexes are +** exited at the end of the same function. +*/ +void sqlite3BtreeMutexArrayEnter(BtreeMutexArray *pArray){ + int i; + for(i=0; inMutex; i++){ + Btree *p = pArray->aBtree[i]; + /* Some basic sanity checking */ + assert( i==0 || pArray->aBtree[i-1]->pBtpBt ); + assert( !p->locked || p->wantToLock>0 ); + + /* We should already hold a lock on the database connection */ + assert( sqlite3_mutex_held(p->db->mutex) ); + + p->wantToLock++; + if( !p->locked && p->sharable ){ + sqlite3_mutex_enter(p->pBt->mutex); + p->locked = 1; + } + } +} + +/* +** Leave the mutex of every btree in the group. +*/ +void sqlite3BtreeMutexArrayLeave(BtreeMutexArray *pArray){ + int i; + for(i=0; inMutex; i++){ + Btree *p = pArray->aBtree[i]; + /* Some basic sanity checking */ + assert( i==0 || pArray->aBtree[i-1]->pBtpBt ); + assert( p->locked || !p->sharable ); + assert( p->wantToLock>0 ); + + /* We should already hold a lock on the database connection */ + assert( sqlite3_mutex_held(p->db->mutex) ); + + p->wantToLock--; + if( p->wantToLock==0 && p->locked ){ + sqlite3_mutex_leave(p->pBt->mutex); + p->locked = 0; + } + } +} + + +#endif /* SQLITE_THREADSAFE && !SQLITE_OMIT_SHARED_CACHE */ Added: external/sqlite-source-3.5.7.x/btree.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/btree.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,6943 @@ +/* +** 2004 April 6 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
+** +************************************************************************* +** $Id: btree.c,v 1.440 2008/03/04 17:45:01 mlcreech Exp $ +** +** This file implements a external (disk-based) database using BTrees. +** See the header comment on "btreeInt.h" for additional information. +** Including a description of file format and an overview of operation. +*/ +#include "btreeInt.h" + +/* +** The header string that appears at the beginning of every +** SQLite database. +*/ +static const char zMagicHeader[] = SQLITE_FILE_HEADER; + +/* +** Set this global variable to 1 to enable tracing using the TRACE +** macro. +*/ +#if SQLITE_TEST +int sqlite3BtreeTrace=0; /* True to enable tracing */ +#endif + + + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** A flag to indicate whether or not shared cache is enabled. Also, +** a list of BtShared objects that are eligible for participation +** in shared cache. The variables have file scope during normal builds, +** but the test harness needs to access these variables so we make them +** global for test builds. +*/ +#ifdef SQLITE_TEST +BtShared *sqlite3SharedCacheList = 0; +int sqlite3SharedCacheEnabled = 0; +#else +static BtShared *sqlite3SharedCacheList = 0; +static int sqlite3SharedCacheEnabled = 0; +#endif +#endif /* SQLITE_OMIT_SHARED_CACHE */ + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** Enable or disable the shared pager and schema features. +** +** This routine has no effect on existing database connections. +** The shared cache setting effects only future calls to +** sqlite3_open(), sqlite3_open16(), or sqlite3_open_v2(). +*/ +int sqlite3_enable_shared_cache(int enable){ + sqlite3SharedCacheEnabled = enable; + return SQLITE_OK; +} +#endif + + +/* +** Forward declaration +*/ +static int checkReadLocks(Btree*,Pgno,BtCursor*); + + +#ifdef SQLITE_OMIT_SHARED_CACHE + /* + ** The functions queryTableLock(), lockTable() and unlockAllTables() + ** manipulate entries in the BtShared.pLock linked list used to store + ** shared-cache table level locks. If the library is compiled with the + ** shared-cache feature disabled, then there is only ever one user + ** of each BtShared structure and so this locking is not necessary. + ** So define the lock related functions as no-ops. + */ + #define queryTableLock(a,b,c) SQLITE_OK + #define lockTable(a,b,c) SQLITE_OK + #define unlockAllTables(a) +#endif + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** Query to see if btree handle p may obtain a lock of type eLock +** (READ_LOCK or WRITE_LOCK) on the table with root-page iTab. Return +** SQLITE_OK if the lock may be obtained (by calling lockTable()), or +** SQLITE_LOCKED if not. +*/ +static int queryTableLock(Btree *p, Pgno iTab, u8 eLock){ + BtShared *pBt = p->pBt; + BtLock *pIter; + + assert( sqlite3BtreeHoldsMutex(p) ); + + /* This is a no-op if the shared-cache is not enabled */ + if( !p->sharable ){ + return SQLITE_OK; + } + + /* If some other connection is holding an exclusive lock, the + ** requested lock may not be obtained. + */ + if( pBt->pExclusive && pBt->pExclusive!=p ){ + return SQLITE_LOCKED; + } + + /* This (along with lockTable()) is where the ReadUncommitted flag is + ** dealt with. If the caller is querying for a read-lock and the flag is + ** set, it is unconditionally granted - even if there are write-locks + ** on the table. If a write-lock is requested, the ReadUncommitted flag + ** is not considered. 
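/*
** Illustrative usage sketch, not from the SQLite sources: the public
** interfaces touched on above are sqlite3_enable_shared_cache(), which
** only affects connections opened afterwards, and the read_uncommitted
** pragma that sets the SQLITE_ReadUncommitted flag tested here.  The
** filename "shared.db" and the helper name are made up; error handling
** is reduced to early returns.
*/
#include "sqlite3.h"

static int open_shared_pair(sqlite3 **pDb1, sqlite3 **pDb2){
  int rc;
  sqlite3_enable_shared_cache(1);            /* must precede the opens */
  rc = sqlite3_open("shared.db", pDb1);
  if( rc==SQLITE_OK ) rc = sqlite3_open("shared.db", pDb2);
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(*pDb2, "PRAGMA read_uncommitted=1", 0, 0, 0);
  }
  return rc;
}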
+ ** + ** In function lockTable(), if a read-lock is demanded and the + ** ReadUncommitted flag is set, no entry is added to the locks list + ** (BtShared.pLock). + ** + ** To summarize: If the ReadUncommitted flag is set, then read cursors do + ** not create or respect table locks. The locking procedure for a + ** write-cursor does not change. + */ + if( + !p->db || + 0==(p->db->flags&SQLITE_ReadUncommitted) || + eLock==WRITE_LOCK || + iTab==MASTER_ROOT + ){ + for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){ + if( pIter->pBtree!=p && pIter->iTable==iTab && + (pIter->eLock!=eLock || eLock!=READ_LOCK) ){ + return SQLITE_LOCKED; + } + } + } + return SQLITE_OK; +} +#endif /* !SQLITE_OMIT_SHARED_CACHE */ + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** Add a lock on the table with root-page iTable to the shared-btree used +** by Btree handle p. Parameter eLock must be either READ_LOCK or +** WRITE_LOCK. +** +** SQLITE_OK is returned if the lock is added successfully. SQLITE_BUSY and +** SQLITE_NOMEM may also be returned. +*/ +static int lockTable(Btree *p, Pgno iTable, u8 eLock){ + BtShared *pBt = p->pBt; + BtLock *pLock = 0; + BtLock *pIter; + + assert( sqlite3BtreeHoldsMutex(p) ); + + /* This is a no-op if the shared-cache is not enabled */ + if( !p->sharable ){ + return SQLITE_OK; + } + + assert( SQLITE_OK==queryTableLock(p, iTable, eLock) ); + + /* If the read-uncommitted flag is set and a read-lock is requested, + ** return early without adding an entry to the BtShared.pLock list. See + ** comment in function queryTableLock() for more info on handling + ** the ReadUncommitted flag. + */ + if( + (p->db) && + (p->db->flags&SQLITE_ReadUncommitted) && + (eLock==READ_LOCK) && + iTable!=MASTER_ROOT + ){ + return SQLITE_OK; + } + + /* First search the list for an existing lock on this table. */ + for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){ + if( pIter->iTable==iTable && pIter->pBtree==p ){ + pLock = pIter; + break; + } + } + + /* If the above search did not find a BtLock struct associating Btree p + ** with table iTable, allocate one and link it into the list. + */ + if( !pLock ){ + pLock = (BtLock *)sqlite3MallocZero(sizeof(BtLock)); + if( !pLock ){ + return SQLITE_NOMEM; + } + pLock->iTable = iTable; + pLock->pBtree = p; + pLock->pNext = pBt->pLock; + pBt->pLock = pLock; + } + + /* Set the BtLock.eLock variable to the maximum of the current lock + ** and the requested lock. This means if a write-lock was already held + ** and a read-lock requested, we don't incorrectly downgrade the lock. + */ + assert( WRITE_LOCK>READ_LOCK ); + if( eLock>pLock->eLock ){ + pLock->eLock = eLock; + } + + return SQLITE_OK; +} +#endif /* !SQLITE_OMIT_SHARED_CACHE */ + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** Release all the table locks (locks obtained via calls to the lockTable() +** procedure) held by Btree handle p. 
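/*
** Illustrative aside, a standalone toy rather than the SQLite code:
** the escalation rule applied by lockTable() above.  A lock record is
** raised to the stronger of its current and requested levels and never
** downgraded; WRITE is assumed to compare greater than READ, which is
** exactly what the assert( WRITE_LOCK>READ_LOCK ) above checks.
*/
#define TOY_READ_LOCK  1
#define TOY_WRITE_LOCK 2

typedef struct ToyTableLock {
  unsigned iTable;       /* root page of the locked table       */
  unsigned char eLock;   /* TOY_READ_LOCK or TOY_WRITE_LOCK     */
} ToyTableLock;

/* Upgrade but never downgrade: a write-locked table stays write-locked
** even if a later caller only asks for a read lock. */
static void toy_escalate(ToyTableLock *pLock, unsigned char eRequested){
  if( eRequested>pLock->eLock ){
    pLock->eLock = eRequested;
  }
}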
+*/ +static void unlockAllTables(Btree *p){ + BtShared *pBt = p->pBt; + BtLock **ppIter = &pBt->pLock; + + assert( sqlite3BtreeHoldsMutex(p) ); + assert( p->sharable || 0==*ppIter ); + + while( *ppIter ){ + BtLock *pLock = *ppIter; + assert( pBt->pExclusive==0 || pBt->pExclusive==pLock->pBtree ); + if( pLock->pBtree==p ){ + *ppIter = pLock->pNext; + sqlite3_free(pLock); + }else{ + ppIter = &pLock->pNext; + } + } + + if( pBt->pExclusive==p ){ + pBt->pExclusive = 0; + } +} +#endif /* SQLITE_OMIT_SHARED_CACHE */ + +static void releasePage(MemPage *pPage); /* Forward reference */ + +/* +** Verify that the cursor holds a mutex on the BtShared +*/ +#ifndef NDEBUG +static int cursorHoldsMutex(BtCursor *p){ + return sqlite3_mutex_held(p->pBt->mutex); +} +#endif + + +#ifndef SQLITE_OMIT_INCRBLOB +/* +** Invalidate the overflow page-list cache for cursor pCur, if any. +*/ +static void invalidateOverflowCache(BtCursor *pCur){ + assert( cursorHoldsMutex(pCur) ); + sqlite3_free(pCur->aOverflow); + pCur->aOverflow = 0; +} + +/* +** Invalidate the overflow page-list cache for all cursors opened +** on the shared btree structure pBt. +*/ +static void invalidateAllOverflowCache(BtShared *pBt){ + BtCursor *p; + assert( sqlite3_mutex_held(pBt->mutex) ); + for(p=pBt->pCursor; p; p=p->pNext){ + invalidateOverflowCache(p); + } +} +#else + #define invalidateOverflowCache(x) + #define invalidateAllOverflowCache(x) +#endif + +/* +** Save the current cursor position in the variables BtCursor.nKey +** and BtCursor.pKey. The cursor's state is set to CURSOR_REQUIRESEEK. +*/ +static int saveCursorPosition(BtCursor *pCur){ + int rc; + + assert( CURSOR_VALID==pCur->eState ); + assert( 0==pCur->pKey ); + assert( cursorHoldsMutex(pCur) ); + + rc = sqlite3BtreeKeySize(pCur, &pCur->nKey); + + /* If this is an intKey table, then the above call to BtreeKeySize() + ** stores the integer key in pCur->nKey. In this case this value is + ** all that is required. Otherwise, if pCur is not open on an intKey + ** table, then malloc space for and store the pCur->nKey bytes of key + ** data. + */ + if( rc==SQLITE_OK && 0==pCur->pPage->intKey){ + void *pKey = sqlite3_malloc(pCur->nKey); + if( pKey ){ + rc = sqlite3BtreeKey(pCur, 0, pCur->nKey, pKey); + if( rc==SQLITE_OK ){ + pCur->pKey = pKey; + }else{ + sqlite3_free(pKey); + } + }else{ + rc = SQLITE_NOMEM; + } + } + assert( !pCur->pPage->intKey || !pCur->pKey ); + + if( rc==SQLITE_OK ){ + releasePage(pCur->pPage); + pCur->pPage = 0; + pCur->eState = CURSOR_REQUIRESEEK; + } + + invalidateOverflowCache(pCur); + return rc; +} + +/* +** Save the positions of all cursors except pExcept open on the table +** with root-page iRoot. Usually, this is called just before cursor +** pExcept is used to modify the table (BtreeDelete() or BtreeInsert()). +*/ +static int saveAllCursors(BtShared *pBt, Pgno iRoot, BtCursor *pExcept){ + BtCursor *p; + assert( sqlite3_mutex_held(pBt->mutex) ); + assert( pExcept==0 || pExcept->pBt==pBt ); + for(p=pBt->pCursor; p; p=p->pNext){ + if( p!=pExcept && (0==iRoot || p->pgnoRoot==iRoot) && + p->eState==CURSOR_VALID ){ + int rc = saveCursorPosition(p); + if( SQLITE_OK!=rc ){ + return rc; + } + } + } + return SQLITE_OK; +} + +/* +** Clear the current cursor position. +*/ +static void clearCursorPosition(BtCursor *pCur){ + assert( cursorHoldsMutex(pCur) ); + sqlite3_free(pCur->pKey); + pCur->pKey = 0; + pCur->eState = CURSOR_INVALID; +} + +/* +** Restore the cursor to the position it was in (or as close to as possible) +** when saveCursorPosition() was called. 
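/*
** Illustrative aside, a standalone sketch rather than the SQLite code:
** the cursor life-cycle implied by saveCursorPosition() and
** clearCursorPosition() above.  The enum values are toy stand-ins for
** the CURSOR_* constants used in the real code.
*/
enum ToyCursorState {
  TOY_CURSOR_INVALID,     /* no usable position                            */
  TOY_CURSOR_VALID,       /* points at an entry on an in-memory page       */
  TOY_CURSOR_REQUIRESEEK  /* position saved as a key; page reference freed */
};

typedef struct ToyCursor { enum ToyCursorState eState; } ToyCursor;

static void toy_save_position(ToyCursor *p){
  if( p->eState==TOY_CURSOR_VALID ) p->eState = TOY_CURSOR_REQUIRESEEK;
}
static void toy_clear_position(ToyCursor *p){
  p->eState = TOY_CURSOR_INVALID;        /* discard any saved key */
}
static void toy_restore_position(ToyCursor *p){
  if( p->eState==TOY_CURSOR_REQUIRESEEK ) p->eState = TOY_CURSOR_VALID;
}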
Note that this call deletes the +** saved position info stored by saveCursorPosition(), so there can be +** at most one effective restoreOrClearCursorPosition() call after each +** saveCursorPosition(). +** +** If the second argument argument - doSeek - is false, then instead of +** returning the cursor to its saved position, any saved position is deleted +** and the cursor state set to CURSOR_INVALID. +*/ +int sqlite3BtreeRestoreOrClearCursorPosition(BtCursor *pCur){ + int rc; + assert( cursorHoldsMutex(pCur) ); + assert( pCur->eState>=CURSOR_REQUIRESEEK ); + if( pCur->eState==CURSOR_FAULT ){ + return pCur->skip; + } +#ifndef SQLITE_OMIT_INCRBLOB + if( pCur->isIncrblobHandle ){ + return SQLITE_ABORT; + } +#endif + pCur->eState = CURSOR_INVALID; + rc = sqlite3BtreeMoveto(pCur, pCur->pKey, pCur->nKey, 0, &pCur->skip); + if( rc==SQLITE_OK ){ + sqlite3_free(pCur->pKey); + pCur->pKey = 0; + assert( pCur->eState==CURSOR_VALID || pCur->eState==CURSOR_INVALID ); + } + return rc; +} + +#define restoreOrClearCursorPosition(p) \ + (p->eState>=CURSOR_REQUIRESEEK ? \ + sqlite3BtreeRestoreOrClearCursorPosition(p) : \ + SQLITE_OK) + +#ifndef SQLITE_OMIT_AUTOVACUUM +/* +** Given a page number of a regular database page, return the page +** number for the pointer-map page that contains the entry for the +** input page number. +*/ +static Pgno ptrmapPageno(BtShared *pBt, Pgno pgno){ + int nPagesPerMapPage, iPtrMap, ret; + assert( sqlite3_mutex_held(pBt->mutex) ); + nPagesPerMapPage = (pBt->usableSize/5)+1; + iPtrMap = (pgno-2)/nPagesPerMapPage; + ret = (iPtrMap*nPagesPerMapPage) + 2; + if( ret==PENDING_BYTE_PAGE(pBt) ){ + ret++; + } + return ret; +} + +/* +** Write an entry into the pointer map. +** +** This routine updates the pointer map entry for page number 'key' +** so that it maps to type 'eType' and parent page number 'pgno'. +** An error code is returned if something goes wrong, otherwise SQLITE_OK. +*/ +static int ptrmapPut(BtShared *pBt, Pgno key, u8 eType, Pgno parent){ + DbPage *pDbPage; /* The pointer map page */ + u8 *pPtrmap; /* The pointer map data */ + Pgno iPtrmap; /* The pointer map page number */ + int offset; /* Offset in pointer map page */ + int rc; + + assert( sqlite3_mutex_held(pBt->mutex) ); + /* The master-journal page number must never be used as a pointer map page */ + assert( 0==PTRMAP_ISPAGE(pBt, PENDING_BYTE_PAGE(pBt)) ); + + assert( pBt->autoVacuum ); + if( key==0 ){ + return SQLITE_CORRUPT_BKPT; + } + iPtrmap = PTRMAP_PAGENO(pBt, key); + rc = sqlite3PagerGet(pBt->pPager, iPtrmap, &pDbPage); + if( rc!=SQLITE_OK ){ + return rc; + } + offset = PTRMAP_PTROFFSET(pBt, key); + pPtrmap = (u8 *)sqlite3PagerGetData(pDbPage); + + if( eType!=pPtrmap[offset] || get4byte(&pPtrmap[offset+1])!=parent ){ + TRACE(("PTRMAP_UPDATE: %d->(%d,%d)\n", key, eType, parent)); + rc = sqlite3PagerWrite(pDbPage); + if( rc==SQLITE_OK ){ + pPtrmap[offset] = eType; + put4byte(&pPtrmap[offset+1], parent); + } + } + + sqlite3PagerUnref(pDbPage); + return rc; +} + +/* +** Read an entry from the pointer map. +** +** This routine retrieves the pointer map entry for page 'key', writing +** the type and parent page number to *pEType and *pPgno respectively. +** An error code is returned if something goes wrong, otherwise SQLITE_OK. 
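/*
** Illustrative aside, not from the SQLite sources: the arithmetic used
** by ptrmapPageno() above, repeated as a standalone function.  Each
** pointer-map entry is 5 bytes, so a page with usableSize bytes holds
** usableSize/5 entries and "covers" itself plus the usableSize/5 pages
** that follow it.  The PENDING_BYTE special case is omitted here.
*/
#include <stdio.h>

static unsigned ptrmap_page_for(unsigned usableSize, unsigned pgno){
  unsigned nPerGroup = usableSize/5 + 1;   /* map page + pages it covers */
  unsigned iGroup = (pgno - 2)/nPerGroup;  /* which group pgno falls in  */
  return iGroup*nPerGroup + 2;             /* first page of that group   */
}

int main(void){
  /* With a 1024-byte usable size the pointer-map pages are 2, 207,
  ** 412, ...: page 2 covers pages 3..206, page 207 covers 208..411. */
  printf("%u %u %u\n",
         ptrmap_page_for(1024, 3),         /* prints 2   */
         ptrmap_page_for(1024, 206),       /* prints 2   */
         ptrmap_page_for(1024, 207));      /* prints 207 */
  return 0;
}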
+*/ +static int ptrmapGet(BtShared *pBt, Pgno key, u8 *pEType, Pgno *pPgno){ + DbPage *pDbPage; /* The pointer map page */ + int iPtrmap; /* Pointer map page index */ + u8 *pPtrmap; /* Pointer map page data */ + int offset; /* Offset of entry in pointer map */ + int rc; + + assert( sqlite3_mutex_held(pBt->mutex) ); + + iPtrmap = PTRMAP_PAGENO(pBt, key); + rc = sqlite3PagerGet(pBt->pPager, iPtrmap, &pDbPage); + if( rc!=0 ){ + return rc; + } + pPtrmap = (u8 *)sqlite3PagerGetData(pDbPage); + + offset = PTRMAP_PTROFFSET(pBt, key); + assert( pEType!=0 ); + *pEType = pPtrmap[offset]; + if( pPgno ) *pPgno = get4byte(&pPtrmap[offset+1]); + + sqlite3PagerUnref(pDbPage); + if( *pEType<1 || *pEType>5 ) return SQLITE_CORRUPT_BKPT; + return SQLITE_OK; +} + +#endif /* SQLITE_OMIT_AUTOVACUUM */ + +/* +** Given a btree page and a cell index (0 means the first cell on +** the page, 1 means the second cell, and so forth) return a pointer +** to the cell content. +** +** This routine works only for pages that do not contain overflow cells. +*/ +#define findCell(pPage, iCell) \ + ((pPage)->aData + get2byte(&(pPage)->aData[(pPage)->cellOffset+2*(iCell)])) +#ifdef SQLITE_TEST +u8 *sqlite3BtreeFindCell(MemPage *pPage, int iCell){ + assert( iCell>=0 ); + assert( iCellaData[pPage->hdrOffset+3]) ); + return findCell(pPage, iCell); +} +#endif + +/* +** This a more complex version of sqlite3BtreeFindCell() that works for +** pages that do contain overflow cells. See insert +*/ +static u8 *findOverflowCell(MemPage *pPage, int iCell){ + int i; + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + for(i=pPage->nOverflow-1; i>=0; i--){ + int k; + struct _OvflCell *pOvfl; + pOvfl = &pPage->aOvfl[i]; + k = pOvfl->idx; + if( k<=iCell ){ + if( k==iCell ){ + return pOvfl->pCell; + } + iCell--; + } + } + return findCell(pPage, iCell); +} + +/* +** Parse a cell content block and fill in the CellInfo structure. There +** are two versions of this function. sqlite3BtreeParseCell() takes a +** cell index as the second argument and sqlite3BtreeParseCellPtr() +** takes a pointer to the body of the cell as its second argument. +** +** Within this file, the parseCell() macro can be called instead of +** sqlite3BtreeParseCellPtr(). Using some compilers, this will be faster. +*/ +void sqlite3BtreeParseCellPtr( + MemPage *pPage, /* Page containing the cell */ + u8 *pCell, /* Pointer to the cell text. */ + CellInfo *pInfo /* Fill in this structure */ +){ + int n; /* Number bytes in cell content header */ + u32 nPayload; /* Number of bytes of cell payload */ + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + + pInfo->pCell = pCell; + assert( pPage->leaf==0 || pPage->leaf==1 ); + n = pPage->childPtrSize; + assert( n==4-4*pPage->leaf ); + if( pPage->hasData ){ + n += getVarint32(&pCell[n], &nPayload); + }else{ + nPayload = 0; + } + pInfo->nData = nPayload; + if( pPage->intKey ){ + n += getVarint(&pCell[n], (u64 *)&pInfo->nKey); + }else{ + u32 x; + n += getVarint32(&pCell[n], &x); + pInfo->nKey = x; + nPayload += x; + } + pInfo->nPayload = nPayload; + pInfo->nHeader = n; + if( nPayload<=pPage->maxLocal ){ + /* This is the (easy) common case where the entire payload fits + ** on the local page. No overflow is required. 
+ */ + int nSize; /* Total size of cell content in bytes */ + pInfo->nLocal = nPayload; + pInfo->iOverflow = 0; + nSize = nPayload + n; + if( nSize<4 ){ + nSize = 4; /* Minimum cell size is 4 */ + } + pInfo->nSize = nSize; + }else{ + /* If the payload will not fit completely on the local page, we have + ** to decide how much to store locally and how much to spill onto + ** overflow pages. The strategy is to minimize the amount of unused + ** space on overflow pages while keeping the amount of local storage + ** in between minLocal and maxLocal. + ** + ** Warning: changing the way overflow payload is distributed in any + ** way will result in an incompatible file format. + */ + int minLocal; /* Minimum amount of payload held locally */ + int maxLocal; /* Maximum amount of payload held locally */ + int surplus; /* Overflow payload available for local storage */ + + minLocal = pPage->minLocal; + maxLocal = pPage->maxLocal; + surplus = minLocal + (nPayload - minLocal)%(pPage->pBt->usableSize - 4); + if( surplus <= maxLocal ){ + pInfo->nLocal = surplus; + }else{ + pInfo->nLocal = minLocal; + } + pInfo->iOverflow = pInfo->nLocal + n; + pInfo->nSize = pInfo->iOverflow + 4; + } +} +#define parseCell(pPage, iCell, pInfo) \ + sqlite3BtreeParseCellPtr((pPage), findCell((pPage), (iCell)), (pInfo)) +void sqlite3BtreeParseCell( + MemPage *pPage, /* Page containing the cell */ + int iCell, /* The cell index. First cell is 0 */ + CellInfo *pInfo /* Fill in this structure */ +){ + parseCell(pPage, iCell, pInfo); +} + +/* +** Compute the total number of bytes that a Cell needs in the cell +** data area of the btree-page. The return number includes the cell +** data header and the local payload, but not any overflow page or +** the space used by the cell pointer. +*/ +#ifndef NDEBUG +static u16 cellSize(MemPage *pPage, int iCell){ + CellInfo info; + sqlite3BtreeParseCell(pPage, iCell, &info); + return info.nSize; +} +#endif +static u16 cellSizePtr(MemPage *pPage, u8 *pCell){ + CellInfo info; + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + return info.nSize; +} + +#ifndef SQLITE_OMIT_AUTOVACUUM +/* +** If the cell pCell, part of page pPage contains a pointer +** to an overflow page, insert an entry into the pointer-map +** for the overflow page. +*/ +static int ptrmapPutOvflPtr(MemPage *pPage, u8 *pCell){ + if( pCell ){ + CellInfo info; + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + assert( (info.nData+(pPage->intKey?0:info.nKey))==info.nPayload ); + if( (info.nData+(pPage->intKey?0:info.nKey))>info.nLocal ){ + Pgno ovfl = get4byte(&pCell[info.iOverflow]); + return ptrmapPut(pPage->pBt, ovfl, PTRMAP_OVERFLOW1, pPage->pgno); + } + } + return SQLITE_OK; +} +/* +** If the cell with index iCell on page pPage contains a pointer +** to an overflow page, insert an entry into the pointer-map +** for the overflow page. +*/ +static int ptrmapPutOvfl(MemPage *pPage, int iCell){ + u8 *pCell; + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + pCell = findOverflowCell(pPage, iCell); + return ptrmapPutOvflPtr(pPage, pCell); +} +#endif + + +/* +** Defragment the page given. All Cells are moved to the +** end of the page and all free space is collected into one +** big FreeBlk that occurs in between the header and cell +** pointer array and the cell content area. 
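/*
** Illustrative aside, not from the SQLite sources: the local/overflow
** split computed by sqlite3BtreeParseCellPtr() above, as a standalone
** helper.  Payloads up to maxLocal stay entirely on the b-tree page;
** larger payloads keep between minLocal and maxLocal bytes locally,
** and the modulus makes the last overflow page as full as possible
** (each overflow page carries usableSize-4 payload bytes, the other
** 4 bytes being the next-overflow-page pointer).
*/
static int toy_local_payload(int nPayload, int minLocal, int maxLocal,
                             int usableSize){
  int surplus;
  if( nPayload<=maxLocal ) return nPayload;   /* no overflow needed */
  surplus = minLocal + (nPayload - minLocal)%(usableSize - 4);
  return (surplus<=maxLocal) ? surplus : minLocal;
}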
+*/ +static int defragmentPage(MemPage *pPage){ + int i; /* Loop counter */ + int pc; /* Address of a i-th cell */ + int addr; /* Offset of first byte after cell pointer array */ + int hdr; /* Offset to the page header */ + int size; /* Size of a cell */ + int usableSize; /* Number of usable bytes on a page */ + int cellOffset; /* Offset to the cell pointer array */ + int brk; /* Offset to the cell content area */ + int nCell; /* Number of cells on the page */ + unsigned char *data; /* The page data */ + unsigned char *temp; /* Temp area for cell content */ + + assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + assert( pPage->pBt!=0 ); + assert( pPage->pBt->usableSize <= SQLITE_MAX_PAGE_SIZE ); + assert( pPage->nOverflow==0 ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + temp = sqlite3PagerTempSpace(pPage->pBt->pPager); + data = pPage->aData; + hdr = pPage->hdrOffset; + cellOffset = pPage->cellOffset; + nCell = pPage->nCell; + assert( nCell==get2byte(&data[hdr+3]) ); + usableSize = pPage->pBt->usableSize; + brk = get2byte(&data[hdr+5]); + memcpy(&temp[brk], &data[brk], usableSize - brk); + brk = usableSize; + for(i=0; ipBt->usableSize ); + size = cellSizePtr(pPage, &temp[pc]); + brk -= size; + memcpy(&data[brk], &temp[pc], size); + put2byte(pAddr, brk); + } + assert( brk>=cellOffset+2*nCell ); + put2byte(&data[hdr+5], brk); + data[hdr+1] = 0; + data[hdr+2] = 0; + data[hdr+7] = 0; + addr = cellOffset+2*nCell; + memset(&data[addr], 0, brk-addr); + return SQLITE_OK; +} + +/* +** Allocate nByte bytes of space on a page. +** +** Return the index into pPage->aData[] of the first byte of +** the new allocation. Or return 0 if there is not enough free +** space on the page to satisfy the allocation request. +** +** If the page contains nBytes of free space but does not contain +** nBytes of contiguous free space, then this routine automatically +** calls defragementPage() to consolidate all free space before +** allocating the new chunk. +*/ +static int allocateSpace(MemPage *pPage, int nByte){ + int addr, pc, hdr; + int size; + int nFrag; + int top; + int nCell; + int cellOffset; + unsigned char *data; + + data = pPage->aData; + assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + assert( pPage->pBt ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + if( nByte<4 ) nByte = 4; + if( pPage->nFreenOverflow>0 ) return 0; + pPage->nFree -= nByte; + hdr = pPage->hdrOffset; + + nFrag = data[hdr+7]; + if( nFrag<60 ){ + /* Search the freelist looking for a slot big enough to satisfy the + ** space request. */ + addr = hdr+1; + while( (pc = get2byte(&data[addr]))>0 ){ + size = get2byte(&data[pc+2]); + if( size>=nByte ){ + if( sizecellOffset; + if( nFrag>=60 || cellOffset + 2*nCell > top - nByte ){ + if( defragmentPage(pPage) ) return 0; + top = get2byte(&data[hdr+5]); + } + top -= nByte; + assert( cellOffset + 2*nCell <= top ); + put2byte(&data[hdr+5], top); + return top; +} + +/* +** Return a section of the pPage->aData to the freelist. +** The first byte of the new free block is pPage->aDisk[start] +** and the size of the block is "size" bytes. +** +** Most of the effort here is involved in coalesing adjacent +** free blocks into a single big free block. 
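/*
** Illustrative aside, not from the SQLite sources: the 2-byte
** big-endian accessors that the header manipulation above depends on
** (get2byte()/put2byte() in the real code), together with the header
** offsets that defragmentPage() and allocateSpace() read and write:
** hdr+1 = first freeblock, hdr+3 = cell count, hdr+5 = start of the
** cell content area, hdr+7 = count of fragmented free bytes.
*/
static unsigned toy_get2byte(const unsigned char *p){
  return (unsigned)((p[0]<<8) | p[1]);
}
static void toy_put2byte(unsigned char *p, unsigned v){
  p[0] = (unsigned char)(v>>8);
  p[1] = (unsigned char)(v & 0xff);
}

/* Read the cell count of a page whose header begins at data[hdr]. */
static unsigned toy_cell_count(const unsigned char *data, int hdr){
  return toy_get2byte(&data[hdr+3]);
}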
+*/ +static void freeSpace(MemPage *pPage, int start, int size){ + int addr, pbegin, hdr; + unsigned char *data = pPage->aData; + + assert( pPage->pBt!=0 ); + assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + assert( start>=pPage->hdrOffset+6+(pPage->leaf?0:4) ); + assert( (start + size)<=pPage->pBt->usableSize ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + if( size<4 ) size = 4; + +#ifdef SQLITE_SECURE_DELETE + /* Overwrite deleted information with zeros when the SECURE_DELETE + ** option is enabled at compile-time */ + memset(&data[start], 0, size); +#endif + + /* Add the space back into the linked list of freeblocks */ + hdr = pPage->hdrOffset; + addr = hdr + 1; + while( (pbegin = get2byte(&data[addr]))0 ){ + assert( pbegin<=pPage->pBt->usableSize-4 ); + assert( pbegin>addr ); + addr = pbegin; + } + assert( pbegin<=pPage->pBt->usableSize-4 ); + assert( pbegin>addr || pbegin==0 ); + put2byte(&data[addr], start); + put2byte(&data[start], pbegin); + put2byte(&data[start+2], size); + pPage->nFree += size; + + /* Coalesce adjacent free blocks */ + addr = pPage->hdrOffset + 1; + while( (pbegin = get2byte(&data[addr]))>0 ){ + int pnext, psize; + assert( pbegin>addr ); + assert( pbegin<=pPage->pBt->usableSize-4 ); + pnext = get2byte(&data[pbegin]); + psize = get2byte(&data[pbegin+2]); + if( pbegin + psize + 3 >= pnext && pnext>0 ){ + int frag = pnext - (pbegin+psize); + assert( frag<=data[pPage->hdrOffset+7] ); + data[pPage->hdrOffset+7] -= frag; + put2byte(&data[pbegin], get2byte(&data[pnext])); + put2byte(&data[pbegin+2], pnext+get2byte(&data[pnext+2])-pbegin); + }else{ + addr = pbegin; + } + } + + /* If the cell content area begins with a freeblock, remove it. */ + if( data[hdr+1]==data[hdr+5] && data[hdr+2]==data[hdr+6] ){ + int top; + pbegin = get2byte(&data[hdr+1]); + memcpy(&data[hdr+1], &data[pbegin], 2); + top = get2byte(&data[hdr+5]); + put2byte(&data[hdr+5], top + get2byte(&data[pbegin+2])); + } +} + +/* +** Decode the flags byte (the first byte of the header) for a page +** and initialize fields of the MemPage structure accordingly. +*/ +static void decodeFlags(MemPage *pPage, int flagByte){ + BtShared *pBt; /* A copy of pPage->pBt */ + + assert( pPage->hdrOffset==(pPage->pgno==1 ? 100 : 0) ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + pPage->intKey = (flagByte & (PTF_INTKEY|PTF_LEAFDATA))!=0; + pPage->zeroData = (flagByte & PTF_ZERODATA)!=0; + pPage->leaf = (flagByte & PTF_LEAF)!=0; + pPage->childPtrSize = 4*(pPage->leaf==0); + pBt = pPage->pBt; + if( flagByte & PTF_LEAFDATA ){ + pPage->leafData = 1; + pPage->maxLocal = pBt->maxLeaf; + pPage->minLocal = pBt->minLeaf; + }else{ + pPage->leafData = 0; + pPage->maxLocal = pBt->maxLocal; + pPage->minLocal = pBt->minLocal; + } + pPage->hasData = !(pPage->zeroData || (!pPage->leaf && pPage->leafData)); +} + +/* +** Initialize the auxiliary information for a disk block. +** +** The pParent parameter must be a pointer to the MemPage which +** is the parent of the page being initialized. The root of a +** BTree has no parent and so for that page, pParent==NULL. +** +** Return SQLITE_OK on success. If we see that the page does +** not contain a well-formed database page, then return +** SQLITE_CORRUPT. Note that a return of SQLITE_OK does not +** guarantee that the page is well-formed. It only shows that +** we failed to detect any corruption. +*/ +int sqlite3BtreeInitPage( + MemPage *pPage, /* The page to be initialized */ + MemPage *pParent /* The parent. 
Might be NULL */ +){ + int pc; /* Address of a freeblock within pPage->aData[] */ + int hdr; /* Offset to beginning of page header */ + u8 *data; /* Equal to pPage->aData */ + BtShared *pBt; /* The main btree structure */ + int usableSize; /* Amount of usable space on each page */ + int cellOffset; /* Offset from start of page to first cell pointer */ + int nFree; /* Number of unused bytes on the page */ + int top; /* First byte of the cell content area */ + + pBt = pPage->pBt; + assert( pBt!=0 ); + assert( pParent==0 || pParent->pBt==pBt ); + assert( sqlite3_mutex_held(pBt->mutex) ); + assert( pPage->pgno==sqlite3PagerPagenumber(pPage->pDbPage) ); + assert( pPage == sqlite3PagerGetExtra(pPage->pDbPage) ); + assert( pPage->aData == sqlite3PagerGetData(pPage->pDbPage) ); + if( pPage->pParent!=pParent && (pPage->pParent!=0 || pPage->isInit) ){ + /* The parent page should never change unless the file is corrupt */ + return SQLITE_CORRUPT_BKPT; + } + if( pPage->isInit ) return SQLITE_OK; + if( pPage->pParent==0 && pParent!=0 ){ + pPage->pParent = pParent; + sqlite3PagerRef(pParent->pDbPage); + } + hdr = pPage->hdrOffset; + data = pPage->aData; + decodeFlags(pPage, data[hdr]); + pPage->nOverflow = 0; + pPage->idxShift = 0; + usableSize = pBt->usableSize; + pPage->cellOffset = cellOffset = hdr + 12 - 4*pPage->leaf; + top = get2byte(&data[hdr+5]); + pPage->nCell = get2byte(&data[hdr+3]); + if( pPage->nCell>MX_CELL(pBt) ){ + /* To many cells for a single page. The page must be corrupt */ + return SQLITE_CORRUPT_BKPT; + } + if( pPage->nCell==0 && pParent!=0 && pParent->pgno!=1 ){ + /* All pages must have at least one cell, except for root pages */ + return SQLITE_CORRUPT_BKPT; + } + + /* Compute the total free space on the page */ + pc = get2byte(&data[hdr+1]); + nFree = data[hdr+7] + top - (cellOffset + 2*pPage->nCell); + while( pc>0 ){ + int next, size; + if( pc>usableSize-4 ){ + /* Free block is off the page */ + return SQLITE_CORRUPT_BKPT; + } + next = get2byte(&data[pc]); + size = get2byte(&data[pc+2]); + if( next>0 && next<=pc+size+3 ){ + /* Free blocks must be in accending order */ + return SQLITE_CORRUPT_BKPT; + } + nFree += size; + pc = next; + } + pPage->nFree = nFree; + if( nFree>=usableSize ){ + /* Free space cannot exceed total page size */ + return SQLITE_CORRUPT_BKPT; + } + + pPage->isInit = 1; + return SQLITE_OK; +} + +/* +** Set up a raw page so that it looks like a database page holding +** no entries. +*/ +static void zeroPage(MemPage *pPage, int flags){ + unsigned char *data = pPage->aData; + BtShared *pBt = pPage->pBt; + int hdr = pPage->hdrOffset; + int first; + + assert( sqlite3PagerPagenumber(pPage->pDbPage)==pPage->pgno ); + assert( sqlite3PagerGetExtra(pPage->pDbPage) == (void*)pPage ); + assert( sqlite3PagerGetData(pPage->pDbPage) == data ); + assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + assert( sqlite3_mutex_held(pBt->mutex) ); + memset(&data[hdr], 0, pBt->usableSize - hdr); + data[hdr] = flags; + first = hdr + 8 + 4*((flags&PTF_LEAF)==0); + memset(&data[hdr+1], 0, 4); + data[hdr+7] = 0; + put2byte(&data[hdr+5], pBt->usableSize); + pPage->nFree = pBt->usableSize - first; + decodeFlags(pPage, flags); + pPage->hdrOffset = hdr; + pPage->cellOffset = first; + pPage->nOverflow = 0; + pPage->idxShift = 0; + pPage->nCell = 0; + pPage->isInit = 1; +} + +/* +** Get a page from the pager. Initialize the MemPage.pBt and +** MemPage.aData elements if needed. +** +** If the noContent flag is set, it means that we do not care about +** the content of the page at this time. 
So do not go to the disk +** to fetch the content. Just fill in the content with zeros for now. +** If in the future we call sqlite3PagerWrite() on this page, that +** means we have started to be concerned about content and the disk +** read should occur at that point. +*/ +int sqlite3BtreeGetPage( + BtShared *pBt, /* The btree */ + Pgno pgno, /* Number of the page to fetch */ + MemPage **ppPage, /* Return the page in this parameter */ + int noContent /* Do not load page content if true */ +){ + int rc; + MemPage *pPage; + DbPage *pDbPage; + + assert( sqlite3_mutex_held(pBt->mutex) ); + rc = sqlite3PagerAcquire(pBt->pPager, pgno, (DbPage**)&pDbPage, noContent); + if( rc ) return rc; + pPage = (MemPage *)sqlite3PagerGetExtra(pDbPage); + pPage->aData = sqlite3PagerGetData(pDbPage); + pPage->pDbPage = pDbPage; + pPage->pBt = pBt; + pPage->pgno = pgno; + pPage->hdrOffset = pPage->pgno==1 ? 100 : 0; + *ppPage = pPage; + return SQLITE_OK; +} + +/* +** Get a page from the pager and initialize it. This routine +** is just a convenience wrapper around separate calls to +** sqlite3BtreeGetPage() and sqlite3BtreeInitPage(). +*/ +static int getAndInitPage( + BtShared *pBt, /* The database file */ + Pgno pgno, /* Number of the page to get */ + MemPage **ppPage, /* Write the page pointer here */ + MemPage *pParent /* Parent of the page */ +){ + int rc; + assert( sqlite3_mutex_held(pBt->mutex) ); + if( pgno==0 ){ + return SQLITE_CORRUPT_BKPT; + } + rc = sqlite3BtreeGetPage(pBt, pgno, ppPage, 0); + if( rc==SQLITE_OK && (*ppPage)->isInit==0 ){ + rc = sqlite3BtreeInitPage(*ppPage, pParent); + } + return rc; +} + +/* +** Release a MemPage. This should be called once for each prior +** call to sqlite3BtreeGetPage. +*/ +static void releasePage(MemPage *pPage){ + if( pPage ){ + assert( pPage->aData ); + assert( pPage->pBt ); + assert( sqlite3PagerGetExtra(pPage->pDbPage) == (void*)pPage ); + assert( sqlite3PagerGetData(pPage->pDbPage)==pPage->aData ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + sqlite3PagerUnref(pPage->pDbPage); + } +} + +/* +** This routine is called when the reference count for a page +** reaches zero. We need to unref the pParent pointer when that +** happens. +*/ +static void pageDestructor(DbPage *pData, int pageSize){ + MemPage *pPage; + assert( (pageSize & 7)==0 ); + pPage = (MemPage *)sqlite3PagerGetExtra(pData); + assert( pPage->isInit==0 || sqlite3_mutex_held(pPage->pBt->mutex) ); + if( pPage->pParent ){ + MemPage *pParent = pPage->pParent; + assert( pParent->pBt==pPage->pBt ); + pPage->pParent = 0; + releasePage(pParent); + } + pPage->isInit = 0; +} + +/* +** During a rollback, when the pager reloads information into the cache +** so that the cache is restored to its original state at the start of +** the transaction, for each page restored this routine is called. +** +** This routine needs to reset the extra data section at the end of the +** page to agree with the restored data. +*/ +static void pageReinit(DbPage *pData, int pageSize){ + MemPage *pPage; + assert( (pageSize & 7)==0 ); + pPage = (MemPage *)sqlite3PagerGetExtra(pData); + if( pPage->isInit ){ + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + pPage->isInit = 0; + sqlite3BtreeInitPage(pPage, pPage->pParent); + } +} + +/* +** Invoke the busy handler for a btree. 
+*/ +static int sqlite3BtreeInvokeBusyHandler(void *pArg, int n){ + BtShared *pBt = (BtShared*)pArg; + assert( pBt->db ); + assert( sqlite3_mutex_held(pBt->db->mutex) ); + return sqlite3InvokeBusyHandler(&pBt->db->busyHandler); +} + +/* +** Open a database file. +** +** zFilename is the name of the database file. If zFilename is NULL +** a new database with a random name is created. This randomly named +** database file will be deleted when sqlite3BtreeClose() is called. +** If zFilename is ":memory:" then an in-memory database is created +** that is automatically destroyed when it is closed. +*/ +int sqlite3BtreeOpen( + const char *zFilename, /* Name of the file containing the BTree database */ + sqlite3 *db, /* Associated database handle */ + Btree **ppBtree, /* Pointer to new Btree object written here */ + int flags, /* Options */ + int vfsFlags /* Flags passed through to sqlite3_vfs.xOpen() */ +){ + sqlite3_vfs *pVfs; /* The VFS to use for this btree */ + BtShared *pBt = 0; /* Shared part of btree structure */ + Btree *p; /* Handle to return */ + int rc = SQLITE_OK; + int nReserve; + unsigned char zDbHeader[100]; + + /* Set the variable isMemdb to true for an in-memory database, or + ** false for a file-based database. This symbol is only required if + ** either of the shared-data or autovacuum features are compiled + ** into the library. + */ +#if !defined(SQLITE_OMIT_SHARED_CACHE) || !defined(SQLITE_OMIT_AUTOVACUUM) + #ifdef SQLITE_OMIT_MEMORYDB + const int isMemdb = 0; + #else + const int isMemdb = zFilename && !strcmp(zFilename, ":memory:"); + #endif +#endif + + assert( db!=0 ); + assert( sqlite3_mutex_held(db->mutex) ); + + pVfs = db->pVfs; + p = sqlite3MallocZero(sizeof(Btree)); + if( !p ){ + return SQLITE_NOMEM; + } + p->inTrans = TRANS_NONE; + p->db = db; + +#if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO) + /* + ** If this Btree is a candidate for shared cache, try to find an + ** existing BtShared object that we can share with + */ + if( (flags & BTREE_PRIVATE)==0 + && isMemdb==0 + && (db->flags & SQLITE_Vtab)==0 + && zFilename && zFilename[0] + ){ + if( sqlite3SharedCacheEnabled ){ + int nFullPathname = pVfs->mxPathname+1; + char *zFullPathname = (char *)sqlite3_malloc(nFullPathname); + sqlite3_mutex *mutexShared; + p->sharable = 1; + if( db ){ + db->flags |= SQLITE_SharedCache; + } + if( !zFullPathname ){ + sqlite3_free(p); + return SQLITE_NOMEM; + } + sqlite3OsFullPathname(pVfs, zFilename, nFullPathname, zFullPathname); + mutexShared = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); + sqlite3_mutex_enter(mutexShared); + for(pBt=sqlite3SharedCacheList; pBt; pBt=pBt->pNext){ + assert( pBt->nRef>0 ); + if( 0==strcmp(zFullPathname, sqlite3PagerFilename(pBt->pPager)) + && sqlite3PagerVfs(pBt->pPager)==pVfs ){ + p->pBt = pBt; + pBt->nRef++; + break; + } + } + sqlite3_mutex_leave(mutexShared); + sqlite3_free(zFullPathname); + } +#ifdef SQLITE_DEBUG + else{ + /* In debug mode, we mark all persistent databases as sharable + ** even when they are not. This exercises the locking code and + ** gives more opportunity for asserts(sqlite3_mutex_held()) + ** statements to find locking problems. + */ + p->sharable = 1; + } +#endif + } +#endif + if( pBt==0 ){ + /* + ** The following asserts make sure that structures used by the btree are + ** the right size. This is to guard against size changes that result + ** when compiling on a different architecture. 
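/*
** Illustrative aside, not from the SQLite sources: the reason the code
** above compares canonical full pathnames (via sqlite3OsFullPathname())
** is so that two different spellings of the same file, for example
** "./test.db" and an absolute path to it, end up sharing one BtShared.
** A POSIX-only sketch of that idea; realpath() requires the file to
** exist, and the helper name is made up.
*/
#include <limits.h>
#include <stdlib.h>
#include <string.h>

static int same_database_file(const char *zA, const char *zB){
  char a[PATH_MAX], b[PATH_MAX];
  if( realpath(zA, a)==0 || realpath(zB, b)==0 ) return 0;
  return strcmp(a, b)==0;
}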
+ */ + assert( sizeof(i64)==8 || sizeof(i64)==4 ); + assert( sizeof(u64)==8 || sizeof(u64)==4 ); + assert( sizeof(u32)==4 ); + assert( sizeof(u16)==2 ); + assert( sizeof(Pgno)==4 ); + + pBt = sqlite3MallocZero( sizeof(*pBt) ); + if( pBt==0 ){ + rc = SQLITE_NOMEM; + goto btree_open_out; + } + pBt->busyHdr.xFunc = sqlite3BtreeInvokeBusyHandler; + pBt->busyHdr.pArg = pBt; + rc = sqlite3PagerOpen(pVfs, &pBt->pPager, zFilename, + EXTRA_SIZE, flags, vfsFlags); + if( rc==SQLITE_OK ){ + rc = sqlite3PagerReadFileheader(pBt->pPager,sizeof(zDbHeader),zDbHeader); + } + if( rc!=SQLITE_OK ){ + goto btree_open_out; + } + sqlite3PagerSetBusyhandler(pBt->pPager, &pBt->busyHdr); + p->pBt = pBt; + + sqlite3PagerSetDestructor(pBt->pPager, pageDestructor); + sqlite3PagerSetReiniter(pBt->pPager, pageReinit); + pBt->pCursor = 0; + pBt->pPage1 = 0; + pBt->readOnly = sqlite3PagerIsreadonly(pBt->pPager); + pBt->pageSize = get2byte(&zDbHeader[16]); + if( pBt->pageSize<512 || pBt->pageSize>SQLITE_MAX_PAGE_SIZE + || ((pBt->pageSize-1)&pBt->pageSize)!=0 ){ + pBt->pageSize = 0; + sqlite3PagerSetPagesize(pBt->pPager, &pBt->pageSize); + pBt->maxEmbedFrac = 64; /* 25% */ + pBt->minEmbedFrac = 32; /* 12.5% */ + pBt->minLeafFrac = 32; /* 12.5% */ +#ifndef SQLITE_OMIT_AUTOVACUUM + /* If the magic name ":memory:" will create an in-memory database, then + ** leave the autoVacuum mode at 0 (do not auto-vacuum), even if + ** SQLITE_DEFAULT_AUTOVACUUM is true. On the other hand, if + ** SQLITE_OMIT_MEMORYDB has been defined, then ":memory:" is just a + ** regular file-name. In this case the auto-vacuum applies as per normal. + */ + if( zFilename && !isMemdb ){ + pBt->autoVacuum = (SQLITE_DEFAULT_AUTOVACUUM ? 1 : 0); + pBt->incrVacuum = (SQLITE_DEFAULT_AUTOVACUUM==2 ? 1 : 0); + } +#endif + nReserve = 0; + }else{ + nReserve = zDbHeader[20]; + pBt->maxEmbedFrac = zDbHeader[21]; + pBt->minEmbedFrac = zDbHeader[22]; + pBt->minLeafFrac = zDbHeader[23]; + pBt->pageSizeFixed = 1; +#ifndef SQLITE_OMIT_AUTOVACUUM + pBt->autoVacuum = (get4byte(&zDbHeader[36 + 4*4])?1:0); + pBt->incrVacuum = (get4byte(&zDbHeader[36 + 7*4])?1:0); +#endif + } + pBt->usableSize = pBt->pageSize - nReserve; + assert( (pBt->pageSize & 7)==0 ); /* 8-byte alignment of pageSize */ + sqlite3PagerSetPagesize(pBt->pPager, &pBt->pageSize); + +#if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO) + /* Add the new BtShared object to the linked list sharable BtShareds. + */ + if( p->sharable ){ + sqlite3_mutex *mutexShared; + pBt->nRef = 1; + mutexShared = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); + if( SQLITE_THREADSAFE ){ + pBt->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST); + if( pBt->mutex==0 ){ + rc = SQLITE_NOMEM; + db->mallocFailed = 0; + goto btree_open_out; + } + } + sqlite3_mutex_enter(mutexShared); + pBt->pNext = sqlite3SharedCacheList; + sqlite3SharedCacheList = pBt; + sqlite3_mutex_leave(mutexShared); + } +#endif + } + +#if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO) + /* If the new Btree uses a sharable pBtShared, then link the new + ** Btree into the list of all sharable Btrees for the same connection. + ** The list is kept in ascending order by pBt address. 
+ */ + if( p->sharable ){ + int i; + Btree *pSib; + for(i=0; inDb; i++){ + if( (pSib = db->aDb[i].pBt)!=0 && pSib->sharable ){ + while( pSib->pPrev ){ pSib = pSib->pPrev; } + if( p->pBtpBt ){ + p->pNext = pSib; + p->pPrev = 0; + pSib->pPrev = p; + }else{ + while( pSib->pNext && pSib->pNext->pBtpBt ){ + pSib = pSib->pNext; + } + p->pNext = pSib->pNext; + p->pPrev = pSib; + if( p->pNext ){ + p->pNext->pPrev = p; + } + pSib->pNext = p; + } + break; + } + } + } +#endif + *ppBtree = p; + +btree_open_out: + if( rc!=SQLITE_OK ){ + if( pBt && pBt->pPager ){ + sqlite3PagerClose(pBt->pPager); + } + sqlite3_free(pBt); + sqlite3_free(p); + *ppBtree = 0; + } + return rc; +} + +/* +** Decrement the BtShared.nRef counter. When it reaches zero, +** remove the BtShared structure from the sharing list. Return +** true if the BtShared.nRef counter reaches zero and return +** false if it is still positive. +*/ +static int removeFromSharingList(BtShared *pBt){ +#ifndef SQLITE_OMIT_SHARED_CACHE + sqlite3_mutex *pMaster; + BtShared *pList; + int removed = 0; + + assert( sqlite3_mutex_notheld(pBt->mutex) ); + pMaster = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); + sqlite3_mutex_enter(pMaster); + pBt->nRef--; + if( pBt->nRef<=0 ){ + if( sqlite3SharedCacheList==pBt ){ + sqlite3SharedCacheList = pBt->pNext; + }else{ + pList = sqlite3SharedCacheList; + while( pList && pList->pNext!=pBt ){ + pList=pList->pNext; + } + if( pList ){ + pList->pNext = pBt->pNext; + } + } + if( SQLITE_THREADSAFE ){ + sqlite3_mutex_free(pBt->mutex); + } + removed = 1; + } + sqlite3_mutex_leave(pMaster); + return removed; +#else + return 1; +#endif +} + +/* +** Close an open database and invalidate all cursors. +*/ +int sqlite3BtreeClose(Btree *p){ + BtShared *pBt = p->pBt; + BtCursor *pCur; + + /* Close all cursors opened via this handle. */ + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + pBt->db = p->db; + pCur = pBt->pCursor; + while( pCur ){ + BtCursor *pTmp = pCur; + pCur = pCur->pNext; + if( pTmp->pBtree==p ){ + sqlite3BtreeCloseCursor(pTmp); + } + } + + /* Rollback any active transaction and free the handle structure. + ** The call to sqlite3BtreeRollback() drops any table-locks held by + ** this handle. + */ + sqlite3BtreeRollback(p); + sqlite3BtreeLeave(p); + + /* If there are still other outstanding references to the shared-btree + ** structure, return now. The remainder of this procedure cleans + ** up the shared-btree. + */ + assert( p->wantToLock==0 && p->locked==0 ); + if( !p->sharable || removeFromSharingList(pBt) ){ + /* The pBt is no longer on the sharing list, so we can access + ** it without having to hold the mutex. + ** + ** Clean out and delete the BtShared object. + */ + assert( !pBt->pCursor ); + sqlite3PagerClose(pBt->pPager); + if( pBt->xFreeSchema && pBt->pSchema ){ + pBt->xFreeSchema(pBt->pSchema); + } + sqlite3_free(pBt->pSchema); + sqlite3_free(pBt); + } + +#ifndef SQLITE_OMIT_SHARED_CACHE + assert( p->wantToLock==0 ); + assert( p->locked==0 ); + if( p->pPrev ) p->pPrev->pNext = p->pNext; + if( p->pNext ) p->pNext->pPrev = p->pPrev; +#endif + + sqlite3_free(p); + return SQLITE_OK; +} + +/* +** Change the limit on the number of pages allowed in the cache. +** +** The maximum number of cache pages is set to the absolute +** value of mxPage. If mxPage is negative, the pager will +** operate asynchronously - it will not stop to do fsync()s +** to insure data is written to the disk surface before +** continuing. 
Transactions still work if synchronous is off, +** and the database cannot be corrupted if this program +** crashes. But if the operating system crashes or there is +** an abrupt power failure when synchronous is off, the database +** could be left in an inconsistent and unrecoverable state. +** Synchronous is on by default so database corruption is not +** normally a worry. +*/ +int sqlite3BtreeSetCacheSize(Btree *p, int mxPage){ + BtShared *pBt = p->pBt; + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + sqlite3PagerSetCachesize(pBt->pPager, mxPage); + sqlite3BtreeLeave(p); + return SQLITE_OK; +} + +/* +** Change the way data is synced to disk in order to increase or decrease +** how well the database resists damage due to OS crashes and power +** failures. Level 1 is the same as asynchronous (no syncs() occur and +** there is a high probability of damage) Level 2 is the default. There +** is a very low but non-zero probability of damage. Level 3 reduces the +** probability of damage to near zero but with a write performance reduction. +*/ +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +int sqlite3BtreeSetSafetyLevel(Btree *p, int level, int fullSync){ + BtShared *pBt = p->pBt; + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + sqlite3PagerSetSafetyLevel(pBt->pPager, level, fullSync); + sqlite3BtreeLeave(p); + return SQLITE_OK; +} +#endif + +/* +** Return TRUE if the given btree is set to safety level 1. In other +** words, return TRUE if no sync() occurs on the disk files. +*/ +int sqlite3BtreeSyncDisabled(Btree *p){ + BtShared *pBt = p->pBt; + int rc; + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + assert( pBt && pBt->pPager ); + rc = sqlite3PagerNosync(pBt->pPager); + sqlite3BtreeLeave(p); + return rc; +} + +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM) +/* +** Change the default pages size and the number of reserved bytes per page. +** +** The page size must be a power of 2 between 512 and 65536. If the page +** size supplied does not meet this constraint then the page size is not +** changed. +** +** Page sizes are constrained to be a power of two so that the region +** of the database file used for locking (beginning at PENDING_BYTE, +** the first byte past the 1GB boundary, 0x40000000) needs to occur +** at the beginning of a page. +** +** If parameter nReserve is less than zero, then the number of reserved +** bytes per page is left unchanged. +*/ +int sqlite3BtreeSetPageSize(Btree *p, int pageSize, int nReserve){ + int rc = SQLITE_OK; + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + if( pBt->pageSizeFixed ){ + sqlite3BtreeLeave(p); + return SQLITE_READONLY; + } + if( nReserve<0 ){ + nReserve = pBt->pageSize - pBt->usableSize; + } + if( pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE && + ((pageSize-1)&pageSize)==0 ){ + assert( (pageSize & 7)==0 ); + assert( !pBt->pPage1 && !pBt->pCursor ); + pBt->pageSize = pageSize; + rc = sqlite3PagerSetPagesize(pBt->pPager, &pBt->pageSize); + } + pBt->usableSize = pBt->pageSize - nReserve; + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Return the currently defined page size +*/ +int sqlite3BtreeGetPageSize(Btree *p){ + return p->pBt->pageSize; +} +int sqlite3BtreeGetReserve(Btree *p){ + int n; + sqlite3BtreeEnter(p); + n = p->pBt->pageSize - p->pBt->usableSize; + sqlite3BtreeLeave(p); + return n; +} + +/* +** Set the maximum page count for a database if mxPage is positive. +** No changes are made if mxPage is 0 or negative. 
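/*
** Illustrative aside, not from the SQLite sources: the validity test
** applied by sqlite3BtreeSetPageSize() above.  For a power of two,
** (n-1)&n clears the single set bit and yields zero; for any other
** non-zero value at least one bit survives, so the expression is a
** compact power-of-two check.
*/
static int toy_valid_page_size(int pageSize, int mxPageSize){
  return pageSize>=512
      && pageSize<=mxPageSize
      && ((pageSize-1)&pageSize)==0;
}
/* toy_valid_page_size(1024, 32768)==1, toy_valid_page_size(1000, 32768)==0 */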
+** Regardless of the value of mxPage, return the maximum page count. +*/ +int sqlite3BtreeMaxPageCount(Btree *p, int mxPage){ + int n; + sqlite3BtreeEnter(p); + n = sqlite3PagerMaxPageCount(p->pBt->pPager, mxPage); + sqlite3BtreeLeave(p); + return n; +} +#endif /* !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM) */ + +/* +** Change the 'auto-vacuum' property of the database. If the 'autoVacuum' +** parameter is non-zero, then auto-vacuum mode is enabled. If zero, it +** is disabled. The default value for the auto-vacuum property is +** determined by the SQLITE_DEFAULT_AUTOVACUUM macro. +*/ +int sqlite3BtreeSetAutoVacuum(Btree *p, int autoVacuum){ +#ifdef SQLITE_OMIT_AUTOVACUUM + return SQLITE_READONLY; +#else + BtShared *pBt = p->pBt; + int rc = SQLITE_OK; + int av = (autoVacuum?1:0); + + sqlite3BtreeEnter(p); + if( pBt->pageSizeFixed && av!=pBt->autoVacuum ){ + rc = SQLITE_READONLY; + }else{ + pBt->autoVacuum = av; + } + sqlite3BtreeLeave(p); + return rc; +#endif +} + +/* +** Return the value of the 'auto-vacuum' property. If auto-vacuum is +** enabled 1 is returned. Otherwise 0. +*/ +int sqlite3BtreeGetAutoVacuum(Btree *p){ +#ifdef SQLITE_OMIT_AUTOVACUUM + return BTREE_AUTOVACUUM_NONE; +#else + int rc; + sqlite3BtreeEnter(p); + rc = ( + (!p->pBt->autoVacuum)?BTREE_AUTOVACUUM_NONE: + (!p->pBt->incrVacuum)?BTREE_AUTOVACUUM_FULL: + BTREE_AUTOVACUUM_INCR + ); + sqlite3BtreeLeave(p); + return rc; +#endif +} + + +/* +** Get a reference to pPage1 of the database file. This will +** also acquire a readlock on that file. +** +** SQLITE_OK is returned on success. If the file is not a +** well-formed database file, then SQLITE_CORRUPT is returned. +** SQLITE_BUSY is returned if the database is locked. SQLITE_NOMEM +** is returned if we run out of memory. +*/ +static int lockBtree(BtShared *pBt){ + int rc, pageSize; + MemPage *pPage1; + + assert( sqlite3_mutex_held(pBt->mutex) ); + if( pBt->pPage1 ) return SQLITE_OK; + rc = sqlite3BtreeGetPage(pBt, 1, &pPage1, 0); + if( rc!=SQLITE_OK ) return rc; + + + /* Do some checking to help insure the file we opened really is + ** a valid database file. + */ + rc = SQLITE_NOTADB; + if( sqlite3PagerPagecount(pBt->pPager)>0 ){ + u8 *page1 = pPage1->aData; + if( memcmp(page1, zMagicHeader, 16)!=0 ){ + goto page1_init_failed; + } + if( page1[18]>1 ){ + pBt->readOnly = 1; + } + if( page1[19]>1 ){ + goto page1_init_failed; + } + pageSize = get2byte(&page1[16]); + if( ((pageSize-1)&pageSize)!=0 || pageSize<512 || + (SQLITE_MAX_PAGE_SIZE<32768 && pageSize>SQLITE_MAX_PAGE_SIZE) + ){ + goto page1_init_failed; + } + assert( (pageSize & 7)==0 ); + pBt->pageSize = pageSize; + pBt->usableSize = pageSize - page1[20]; + if( pBt->usableSize<500 ){ + goto page1_init_failed; + } + pBt->maxEmbedFrac = page1[21]; + pBt->minEmbedFrac = page1[22]; + pBt->minLeafFrac = page1[23]; +#ifndef SQLITE_OMIT_AUTOVACUUM + pBt->autoVacuum = (get4byte(&page1[36 + 4*4])?1:0); + pBt->incrVacuum = (get4byte(&page1[36 + 7*4])?1:0); +#endif + } + + /* maxLocal is the maximum amount of payload to store locally for + ** a cell. Make sure it is small enough so that at least minFanout + ** cells can will fit on one page. We assume a 10-byte page header. 
+ ** Besides the payload, the cell must store: + ** 2-byte pointer to the cell + ** 4-byte child pointer + ** 9-byte nKey value + ** 4-byte nData value + ** 4-byte overflow page pointer + ** So a cell consists of a 2-byte poiner, a header which is as much as + ** 17 bytes long, 0 to N bytes of payload, and an optional 4 byte overflow + ** page pointer. + */ + pBt->maxLocal = (pBt->usableSize-12)*pBt->maxEmbedFrac/255 - 23; + pBt->minLocal = (pBt->usableSize-12)*pBt->minEmbedFrac/255 - 23; + pBt->maxLeaf = pBt->usableSize - 35; + pBt->minLeaf = (pBt->usableSize-12)*pBt->minLeafFrac/255 - 23; + if( pBt->minLocal>pBt->maxLocal || pBt->maxLocal<0 ){ + goto page1_init_failed; + } + assert( pBt->maxLeaf + 23 <= MX_CELL_SIZE(pBt) ); + pBt->pPage1 = pPage1; + return SQLITE_OK; + +page1_init_failed: + releasePage(pPage1); + pBt->pPage1 = 0; + return rc; +} + +/* +** This routine works like lockBtree() except that it also invokes the +** busy callback if there is lock contention. +*/ +static int lockBtreeWithRetry(Btree *pRef){ + int rc = SQLITE_OK; + + assert( sqlite3BtreeHoldsMutex(pRef) ); + if( pRef->inTrans==TRANS_NONE ){ + u8 inTransaction = pRef->pBt->inTransaction; + btreeIntegrity(pRef); + rc = sqlite3BtreeBeginTrans(pRef, 0); + pRef->pBt->inTransaction = inTransaction; + pRef->inTrans = TRANS_NONE; + if( rc==SQLITE_OK ){ + pRef->pBt->nTransaction--; + } + btreeIntegrity(pRef); + } + return rc; +} + + +/* +** If there are no outstanding cursors and we are not in the middle +** of a transaction but there is a read lock on the database, then +** this routine unrefs the first page of the database file which +** has the effect of releasing the read lock. +** +** If there are any outstanding cursors, this routine is a no-op. +** +** If there is a transaction in progress, this routine is a no-op. +*/ +static void unlockBtreeIfUnused(BtShared *pBt){ + assert( sqlite3_mutex_held(pBt->mutex) ); + if( pBt->inTransaction==TRANS_NONE && pBt->pCursor==0 && pBt->pPage1!=0 ){ + if( sqlite3PagerRefcount(pBt->pPager)>=1 ){ + assert( pBt->pPage1->aData ); +#if 0 + if( pBt->pPage1->aData==0 ){ + MemPage *pPage = pBt->pPage1; + pPage->aData = sqlite3PagerGetData(pPage->pDbPage); + pPage->pBt = pBt; + pPage->pgno = 1; + } +#endif + releasePage(pBt->pPage1); + } + pBt->pPage1 = 0; + pBt->inStmt = 0; + } +} + +/* +** Create a new database by initializing the first page of the +** file. +*/ +static int newDatabase(BtShared *pBt){ + MemPage *pP1; + unsigned char *data; + int rc; + + assert( sqlite3_mutex_held(pBt->mutex) ); + if( sqlite3PagerPagecount(pBt->pPager)>0 ) return SQLITE_OK; + pP1 = pBt->pPage1; + assert( pP1!=0 ); + data = pP1->aData; + rc = sqlite3PagerWrite(pP1->pDbPage); + if( rc ) return rc; + memcpy(data, zMagicHeader, sizeof(zMagicHeader)); + assert( sizeof(zMagicHeader)==16 ); + put2byte(&data[16], pBt->pageSize); + data[18] = 1; + data[19] = 1; + data[20] = pBt->pageSize - pBt->usableSize; + data[21] = pBt->maxEmbedFrac; + data[22] = pBt->minEmbedFrac; + data[23] = pBt->minLeafFrac; + memset(&data[24], 0, 100-24); + zeroPage(pP1, PTF_INTKEY|PTF_LEAF|PTF_LEAFDATA ); + pBt->pageSizeFixed = 1; +#ifndef SQLITE_OMIT_AUTOVACUUM + assert( pBt->autoVacuum==1 || pBt->autoVacuum==0 ); + assert( pBt->incrVacuum==1 || pBt->incrVacuum==0 ); + put4byte(&data[36 + 4*4], pBt->autoVacuum); + put4byte(&data[36 + 7*4], pBt->incrVacuum); +#endif + return SQLITE_OK; +} + +/* +** Attempt to start a new transaction. 
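/*
** Illustrative aside, not from the SQLite sources: the limits computed
** above, evaluated for a 1024-byte usable page with the default
** fractions written by sqlite3BtreeOpen() (maxEmbedFrac=64,
** minEmbedFrac=32, minLeafFrac=32).  Integer arithmetic gives:
**
**   maxLocal = (1024-12)*64/255 - 23 = 230
**   minLocal = (1024-12)*32/255 - 23 = 103
**   maxLeaf  =  1024-35             = 989
**   minLeaf  = (1024-12)*32/255 - 23 = 103
*/
static int toy_payload_limit(int usableSize, int embedFrac){
  return (usableSize-12)*embedFrac/255 - 23;
}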
A write-transaction +** is started if the second argument is nonzero, otherwise a read- +** transaction. If the second argument is 2 or more and exclusive +** transaction is started, meaning that no other process is allowed +** to access the database. A preexisting transaction may not be +** upgraded to exclusive by calling this routine a second time - the +** exclusivity flag only works for a new transaction. +** +** A write-transaction must be started before attempting any +** changes to the database. None of the following routines +** will work unless a transaction is started first: +** +** sqlite3BtreeCreateTable() +** sqlite3BtreeCreateIndex() +** sqlite3BtreeClearTable() +** sqlite3BtreeDropTable() +** sqlite3BtreeInsert() +** sqlite3BtreeDelete() +** sqlite3BtreeUpdateMeta() +** +** If an initial attempt to acquire the lock fails because of lock contention +** and the database was previously unlocked, then invoke the busy handler +** if there is one. But if there was previously a read-lock, do not +** invoke the busy handler - just return SQLITE_BUSY. SQLITE_BUSY is +** returned when there is already a read-lock in order to avoid a deadlock. +** +** Suppose there are two processes A and B. A has a read lock and B has +** a reserved lock. B tries to promote to exclusive but is blocked because +** of A's read lock. A tries to promote to reserved but is blocked by B. +** One or the other of the two processes must give way or there can be +** no progress. By returning SQLITE_BUSY and not invoking the busy callback +** when A already has a read lock, we encourage A to give up and let B +** proceed. +*/ +int sqlite3BtreeBeginTrans(Btree *p, int wrflag){ + BtShared *pBt = p->pBt; + int rc = SQLITE_OK; + + sqlite3BtreeEnter(p); + pBt->db = p->db; + btreeIntegrity(p); + + /* If the btree is already in a write-transaction, or it + ** is already in a read-transaction and a read-transaction + ** is requested, this is a no-op. + */ + if( p->inTrans==TRANS_WRITE || (p->inTrans==TRANS_READ && !wrflag) ){ + goto trans_begun; + } + + /* Write transactions are not possible on a read-only database */ + if( pBt->readOnly && wrflag ){ + rc = SQLITE_READONLY; + goto trans_begun; + } + + /* If another database handle has already opened a write transaction + ** on this shared-btree structure and a second write transaction is + ** requested, return SQLITE_BUSY. 
+ */ + if( pBt->inTransaction==TRANS_WRITE && wrflag ){ + rc = SQLITE_BUSY; + goto trans_begun; + } + +#ifndef SQLITE_OMIT_SHARED_CACHE + if( wrflag>1 ){ + BtLock *pIter; + for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){ + if( pIter->pBtree!=p ){ + rc = SQLITE_BUSY; + goto trans_begun; + } + } + } +#endif + + do { + if( pBt->pPage1==0 ){ + rc = lockBtree(pBt); + } + + if( rc==SQLITE_OK && wrflag ){ + if( pBt->readOnly ){ + rc = SQLITE_READONLY; + }else{ + rc = sqlite3PagerBegin(pBt->pPage1->pDbPage, wrflag>1); + if( rc==SQLITE_OK ){ + rc = newDatabase(pBt); + } + } + } + + if( rc==SQLITE_OK ){ + if( wrflag ) pBt->inStmt = 0; + }else{ + unlockBtreeIfUnused(pBt); + } + }while( rc==SQLITE_BUSY && pBt->inTransaction==TRANS_NONE && + sqlite3BtreeInvokeBusyHandler(pBt, 0) ); + + if( rc==SQLITE_OK ){ + if( p->inTrans==TRANS_NONE ){ + pBt->nTransaction++; + } + p->inTrans = (wrflag?TRANS_WRITE:TRANS_READ); + if( p->inTrans>pBt->inTransaction ){ + pBt->inTransaction = p->inTrans; + } +#ifndef SQLITE_OMIT_SHARED_CACHE + if( wrflag>1 ){ + assert( !pBt->pExclusive ); + pBt->pExclusive = p; + } +#endif + } + + +trans_begun: + btreeIntegrity(p); + sqlite3BtreeLeave(p); + return rc; +} + +#ifndef SQLITE_OMIT_AUTOVACUUM + +/* +** Set the pointer-map entries for all children of page pPage. Also, if +** pPage contains cells that point to overflow pages, set the pointer +** map entries for the overflow pages as well. +*/ +static int setChildPtrmaps(MemPage *pPage){ + int i; /* Counter variable */ + int nCell; /* Number of cells in page pPage */ + int rc; /* Return code */ + BtShared *pBt = pPage->pBt; + int isInitOrig = pPage->isInit; + Pgno pgno = pPage->pgno; + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + rc = sqlite3BtreeInitPage(pPage, pPage->pParent); + if( rc!=SQLITE_OK ){ + goto set_child_ptrmaps_out; + } + nCell = pPage->nCell; + + for(i=0; ileaf ){ + Pgno childPgno = get4byte(pCell); + rc = ptrmapPut(pBt, childPgno, PTRMAP_BTREE, pgno); + if( rc!=SQLITE_OK ) goto set_child_ptrmaps_out; + } + } + + if( !pPage->leaf ){ + Pgno childPgno = get4byte(&pPage->aData[pPage->hdrOffset+8]); + rc = ptrmapPut(pBt, childPgno, PTRMAP_BTREE, pgno); + } + +set_child_ptrmaps_out: + pPage->isInit = isInitOrig; + return rc; +} + +/* +** Somewhere on pPage, which is guarenteed to be a btree page, not an overflow +** page, is a pointer to page iFrom. Modify this pointer so that it points to +** iTo. Parameter eType describes the type of pointer to be modified, as +** follows: +** +** PTRMAP_BTREE: pPage is a btree-page. The pointer points at a child +** page of pPage. +** +** PTRMAP_OVERFLOW1: pPage is a btree-page. The pointer points at an overflow +** page pointed to by one of the cells on pPage. +** +** PTRMAP_OVERFLOW2: pPage is an overflow-page. The pointer points at the next +** overflow page in the list. +*/ +static int modifyPagePointer(MemPage *pPage, Pgno iFrom, Pgno iTo, u8 eType){ + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + if( eType==PTRMAP_OVERFLOW2 ){ + /* The pointer is always the first 4 bytes of the page in this case. 
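/*
** Illustrative aside, not from the SQLite sources: the retry shape of
** the do/while loop in sqlite3BtreeBeginTrans() above, reduced to a
** standalone sketch.  The status codes and callback types are toy
** stand-ins, not real SQLite interfaces.
*/
enum { TOY_OK = 0, TOY_BUSY = 1 };

typedef int (*toy_attempt_fn)(void *ctx);      /* one locking attempt   */
typedef int (*toy_busy_fn)(void *ctx, int n);  /* nonzero: please retry */

static int toy_retry_while_busy(toy_attempt_fn attempt, toy_busy_fn busy,
                                void *ctx){
  int rc, nBusy = 0;
  do{
    rc = attempt(ctx);
  }while( rc==TOY_BUSY && busy(ctx, nBusy++) );
  return rc;
}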
*/ + if( get4byte(pPage->aData)!=iFrom ){ + return SQLITE_CORRUPT_BKPT; + } + put4byte(pPage->aData, iTo); + }else{ + int isInitOrig = pPage->isInit; + int i; + int nCell; + + sqlite3BtreeInitPage(pPage, 0); + nCell = pPage->nCell; + + for(i=0; iaData[pPage->hdrOffset+8])!=iFrom ){ + return SQLITE_CORRUPT_BKPT; + } + put4byte(&pPage->aData[pPage->hdrOffset+8], iTo); + } + + pPage->isInit = isInitOrig; + } + return SQLITE_OK; +} + + +/* +** Move the open database page pDbPage to location iFreePage in the +** database. The pDbPage reference remains valid. +*/ +static int relocatePage( + BtShared *pBt, /* Btree */ + MemPage *pDbPage, /* Open page to move */ + u8 eType, /* Pointer map 'type' entry for pDbPage */ + Pgno iPtrPage, /* Pointer map 'page-no' entry for pDbPage */ + Pgno iFreePage /* The location to move pDbPage to */ +){ + MemPage *pPtrPage; /* The page that contains a pointer to pDbPage */ + Pgno iDbPage = pDbPage->pgno; + Pager *pPager = pBt->pPager; + int rc; + + assert( eType==PTRMAP_OVERFLOW2 || eType==PTRMAP_OVERFLOW1 || + eType==PTRMAP_BTREE || eType==PTRMAP_ROOTPAGE ); + assert( sqlite3_mutex_held(pBt->mutex) ); + assert( pDbPage->pBt==pBt ); + + /* Move page iDbPage from its current location to page number iFreePage */ + TRACE(("AUTOVACUUM: Moving %d to free page %d (ptr page %d type %d)\n", + iDbPage, iFreePage, iPtrPage, eType)); + rc = sqlite3PagerMovepage(pPager, pDbPage->pDbPage, iFreePage); + if( rc!=SQLITE_OK ){ + return rc; + } + pDbPage->pgno = iFreePage; + + /* If pDbPage was a btree-page, then it may have child pages and/or cells + ** that point to overflow pages. The pointer map entries for all these + ** pages need to be changed. + ** + ** If pDbPage is an overflow page, then the first 4 bytes may store a + ** pointer to a subsequent overflow page. If this is the case, then + ** the pointer map needs to be updated for the subsequent overflow page. + */ + if( eType==PTRMAP_BTREE || eType==PTRMAP_ROOTPAGE ){ + rc = setChildPtrmaps(pDbPage); + if( rc!=SQLITE_OK ){ + return rc; + } + }else{ + Pgno nextOvfl = get4byte(pDbPage->aData); + if( nextOvfl!=0 ){ + rc = ptrmapPut(pBt, nextOvfl, PTRMAP_OVERFLOW2, iFreePage); + if( rc!=SQLITE_OK ){ + return rc; + } + } + } + + /* Fix the database pointer on page iPtrPage that pointed at iDbPage so + ** that it points at iFreePage. Also fix the pointer map entry for + ** iPtrPage. + */ + if( eType!=PTRMAP_ROOTPAGE ){ + rc = sqlite3BtreeGetPage(pBt, iPtrPage, &pPtrPage, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = sqlite3PagerWrite(pPtrPage->pDbPage); + if( rc!=SQLITE_OK ){ + releasePage(pPtrPage); + return rc; + } + rc = modifyPagePointer(pPtrPage, iDbPage, iFreePage, eType); + releasePage(pPtrPage); + if( rc==SQLITE_OK ){ + rc = ptrmapPut(pBt, iFreePage, eType, iPtrPage); + } + } + return rc; +} + +/* Forward declaration required by incrVacuumStep(). */ +static int allocateBtreePage(BtShared *, MemPage **, Pgno *, Pgno, u8); + +/* +** Perform a single step of an incremental-vacuum. If successful, +** return SQLITE_OK. If there is no work to do (and therefore no +** point in calling this function again), return SQLITE_DONE. +** +** More specificly, this function attempts to re-organize the +** database so that the last page of the file currently in use +** is no longer in use. 
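+**
+** (Editor's sketch, not part of this change: the essence of one step, using
+** the helpers above, is roughly
+**
+**    allocateBtreePage(pBt, &pFreePg, &iFreePg, 0, 0); /* free page near the start */
+**    relocatePage(pBt, pLastPg, eType, iPtrPage, iFreePg);
+**    pBt->nTrunc = iLastPg - 1;                        /* file shrinks by one page */
+**
+** so repeated calls migrate the tail of the file onto free pages until the
+** file can be truncated at commit time.)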
+**
+** If the nFin parameter is non-zero, the implementation assumes
+** that the caller will keep calling incrVacuumStep() until
+** it returns SQLITE_DONE or an error, and that nFin is the
+** number of pages the database file will contain after this
+** process is complete.
+*/
+static int incrVacuumStep(BtShared *pBt, Pgno nFin){
+  Pgno iLastPg;             /* Last page in the database */
+  Pgno nFreeList;           /* Number of pages still on the free-list */
+
+  assert( sqlite3_mutex_held(pBt->mutex) );
+  iLastPg = pBt->nTrunc;
+  if( iLastPg==0 ){
+    iLastPg = sqlite3PagerPagecount(pBt->pPager);
+  }
+
+  if( !PTRMAP_ISPAGE(pBt, iLastPg) && iLastPg!=PENDING_BYTE_PAGE(pBt) ){
+    int rc;
+    u8 eType;
+    Pgno iPtrPage;
+
+    nFreeList = get4byte(&pBt->pPage1->aData[36]);
+    if( nFreeList==0 || nFin==iLastPg ){
+      return SQLITE_DONE;
+    }
+
+    rc = ptrmapGet(pBt, iLastPg, &eType, &iPtrPage);
+    if( rc!=SQLITE_OK ){
+      return rc;
+    }
+    if( eType==PTRMAP_ROOTPAGE ){
+      return SQLITE_CORRUPT_BKPT;
+    }
+
+    if( eType==PTRMAP_FREEPAGE ){
+      if( nFin==0 ){
+        /* Remove the page from the file's free-list. This is not required
+        ** if nFin is non-zero. In that case, the free-list will be
+        ** truncated to zero after this function returns, so it doesn't
+        ** matter if it still contains some garbage entries.
+        */
+        Pgno iFreePg;
+        MemPage *pFreePg;
+        rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, iLastPg, 1);
+        if( rc!=SQLITE_OK ){
+          return rc;
+        }
+        assert( iFreePg==iLastPg );
+        releasePage(pFreePg);
+      }
+    } else {
+      Pgno iFreePg;             /* Index of free page to move pLastPg to */
+      MemPage *pLastPg;
+
+      rc = sqlite3BtreeGetPage(pBt, iLastPg, &pLastPg, 0);
+      if( rc!=SQLITE_OK ){
+        return rc;
+      }
+
+      /* If nFin is zero, this loop runs exactly once and page pLastPg
+      ** is swapped with the first free page pulled off the free list.
+      **
+      ** On the other hand, if nFin is greater than zero, then keep
+      ** looping until a free page located within the first nFin pages
+      ** of the file is found.
+      */
+      do {
+        MemPage *pFreePg;
+        rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, 0, 0);
+        if( rc!=SQLITE_OK ){
+          releasePage(pLastPg);
+          return rc;
+        }
+        releasePage(pFreePg);
+      }while( nFin!=0 && iFreePg>nFin );
+      assert( iFreePg<iLastPg );
+
+      rc = sqlite3PagerWrite(pLastPg->pDbPage);
+      if( rc==SQLITE_OK ){
+        rc = relocatePage(pBt, pLastPg, eType, iPtrPage, iFreePg);
+      }
+      releasePage(pLastPg);
+      if( rc!=SQLITE_OK ){
+        return rc;
+      }
+    }
+  }
+
+  pBt->nTrunc = iLastPg - 1;
+  while( pBt->nTrunc==PENDING_BYTE_PAGE(pBt)||PTRMAP_ISPAGE(pBt, pBt->nTrunc) ){
+    pBt->nTrunc--;
+  }
+  return SQLITE_OK;
+}
+
+/*
+** A write-transaction must be opened before calling this function.
+** It performs a single unit of work towards an incremental vacuum.
+**
+** If the incremental vacuum is finished after this function has run,
+** SQLITE_DONE is returned. If it is not finished, but no error occurred,
+** SQLITE_OK is returned. Otherwise an SQLite error code is returned.
+*/
+int sqlite3BtreeIncrVacuum(Btree *p){
+  int rc;
+  BtShared *pBt = p->pBt;
+
+  sqlite3BtreeEnter(p);
+  pBt->db = p->db;
+  assert( pBt->inTransaction==TRANS_WRITE && p->inTrans==TRANS_WRITE );
+  if( !pBt->autoVacuum ){
+    rc = SQLITE_DONE;
+  }else{
+    invalidateAllOverflowCache(pBt);
+    rc = incrVacuumStep(pBt, 0);
+  }
+  sqlite3BtreeLeave(p);
+  return rc;
+}
+
+/*
+** This routine is called prior to sqlite3PagerCommit when a transaction
+** is committed for an auto-vacuum database.
+**
+** If SQLITE_OK is returned, then *pnTrunc is set to the number of pages
+** the database file should be truncated to during the commit process.
+** i.e.
the database has been reorganized so that only the first *pnTrunc +** pages are in use. +*/ +static int autoVacuumCommit(BtShared *pBt, Pgno *pnTrunc){ + int rc = SQLITE_OK; + Pager *pPager = pBt->pPager; +#ifndef NDEBUG + int nRef = sqlite3PagerRefcount(pPager); +#endif + + assert( sqlite3_mutex_held(pBt->mutex) ); + invalidateAllOverflowCache(pBt); + assert(pBt->autoVacuum); + if( !pBt->incrVacuum ){ + Pgno nFin = 0; + + if( pBt->nTrunc==0 ){ + Pgno nFree; + Pgno nPtrmap; + const int pgsz = pBt->pageSize; + Pgno nOrig = sqlite3PagerPagecount(pBt->pPager); + + if( PTRMAP_ISPAGE(pBt, nOrig) ){ + return SQLITE_CORRUPT_BKPT; + } + if( nOrig==PENDING_BYTE_PAGE(pBt) ){ + nOrig--; + } + nFree = get4byte(&pBt->pPage1->aData[36]); + nPtrmap = (nFree-nOrig+PTRMAP_PAGENO(pBt, nOrig)+pgsz/5)/(pgsz/5); + nFin = nOrig - nFree - nPtrmap; + if( nOrig>PENDING_BYTE_PAGE(pBt) && nFin<=PENDING_BYTE_PAGE(pBt) ){ + nFin--; + } + while( PTRMAP_ISPAGE(pBt, nFin) || nFin==PENDING_BYTE_PAGE(pBt) ){ + nFin--; + } + } + + while( rc==SQLITE_OK ){ + rc = incrVacuumStep(pBt, nFin); + } + if( rc==SQLITE_DONE ){ + assert(nFin==0 || pBt->nTrunc==0 || nFin<=pBt->nTrunc); + rc = SQLITE_OK; + if( pBt->nTrunc ){ + rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); + put4byte(&pBt->pPage1->aData[32], 0); + put4byte(&pBt->pPage1->aData[36], 0); + pBt->nTrunc = nFin; + } + } + if( rc!=SQLITE_OK ){ + sqlite3PagerRollback(pPager); + } + } + + if( rc==SQLITE_OK ){ + *pnTrunc = pBt->nTrunc; + pBt->nTrunc = 0; + } + assert( nRef==sqlite3PagerRefcount(pPager) ); + return rc; +} + +#endif + +/* +** This routine does the first phase of a two-phase commit. This routine +** causes a rollback journal to be created (if it does not already exist) +** and populated with enough information so that if a power loss occurs +** the database can be restored to its original state by playing back +** the journal. Then the contents of the journal are flushed out to +** the disk. After the journal is safely on oxide, the changes to the +** database are written into the database file and flushed to oxide. +** At the end of this call, the rollback journal still exists on the +** disk and we are still holding all locks, so the transaction has not +** committed. See sqlite3BtreeCommit() for the second phase of the +** commit process. +** +** This call is a no-op if no write-transaction is currently active on pBt. +** +** Otherwise, sync the database file for the btree pBt. zMaster points to +** the name of a master journal file that should be written into the +** individual journal file, or is NULL, indicating no master journal file +** (single database transaction). +** +** When this is called, the master journal should already have been +** created, populated with this journal pointer and synced to disk. +** +** Once this is routine has returned, the only thing required to commit +** the write-transaction for this database file is to delete the journal. +*/ +int sqlite3BtreeCommitPhaseOne(Btree *p, const char *zMaster){ + int rc = SQLITE_OK; + if( p->inTrans==TRANS_WRITE ){ + BtShared *pBt = p->pBt; + Pgno nTrunc = 0; + sqlite3BtreeEnter(p); + pBt->db = p->db; +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + rc = autoVacuumCommit(pBt, &nTrunc); + if( rc!=SQLITE_OK ){ + sqlite3BtreeLeave(p); + return rc; + } + } +#endif + rc = sqlite3PagerCommitPhaseOne(pBt->pPager, zMaster, nTrunc); + sqlite3BtreeLeave(p); + } + return rc; +} + +/* +** Commit the transaction currently in progress. 
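+**
+** (Editor's sketch, not part of this change: for a single-database
+** transaction the two phases are simply invoked back to back,
+**
+**    rc = sqlite3BtreeCommitPhaseOne(p, 0);                  /* journal and db synced */
+**    if( rc==SQLITE_OK ) rc = sqlite3BtreeCommitPhaseTwo(p); /* journal deleted */
+**
+** which is exactly what sqlite3BtreeCommit() further down does; a non-NULL
+** zMaster in phase one is only needed for multi-database transactions.)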
+** +** This routine implements the second phase of a 2-phase commit. The +** sqlite3BtreeSync() routine does the first phase and should be invoked +** prior to calling this routine. The sqlite3BtreeSync() routine did +** all the work of writing information out to disk and flushing the +** contents so that they are written onto the disk platter. All this +** routine has to do is delete or truncate the rollback journal +** (which causes the transaction to commit) and drop locks. +** +** This will release the write lock on the database file. If there +** are no active cursors, it also releases the read lock. +*/ +int sqlite3BtreeCommitPhaseTwo(Btree *p){ + BtShared *pBt = p->pBt; + + sqlite3BtreeEnter(p); + pBt->db = p->db; + btreeIntegrity(p); + + /* If the handle has a write-transaction open, commit the shared-btrees + ** transaction and set the shared state to TRANS_READ. + */ + if( p->inTrans==TRANS_WRITE ){ + int rc; + assert( pBt->inTransaction==TRANS_WRITE ); + assert( pBt->nTransaction>0 ); + rc = sqlite3PagerCommitPhaseTwo(pBt->pPager); + if( rc!=SQLITE_OK ){ + sqlite3BtreeLeave(p); + return rc; + } + pBt->inTransaction = TRANS_READ; + pBt->inStmt = 0; + } + unlockAllTables(p); + + /* If the handle has any kind of transaction open, decrement the transaction + ** count of the shared btree. If the transaction count reaches 0, set + ** the shared state to TRANS_NONE. The unlockBtreeIfUnused() call below + ** will unlock the pager. + */ + if( p->inTrans!=TRANS_NONE ){ + pBt->nTransaction--; + if( 0==pBt->nTransaction ){ + pBt->inTransaction = TRANS_NONE; + } + } + + /* Set the handles current transaction state to TRANS_NONE and unlock + ** the pager if this call closed the only read or write transaction. + */ + p->inTrans = TRANS_NONE; + unlockBtreeIfUnused(pBt); + + btreeIntegrity(p); + sqlite3BtreeLeave(p); + return SQLITE_OK; +} + +/* +** Do both phases of a commit. +*/ +int sqlite3BtreeCommit(Btree *p){ + int rc; + sqlite3BtreeEnter(p); + rc = sqlite3BtreeCommitPhaseOne(p, 0); + if( rc==SQLITE_OK ){ + rc = sqlite3BtreeCommitPhaseTwo(p); + } + sqlite3BtreeLeave(p); + return rc; +} + +#ifndef NDEBUG +/* +** Return the number of write-cursors open on this handle. This is for use +** in assert() expressions, so it is only compiled if NDEBUG is not +** defined. +** +** For the purposes of this routine, a write-cursor is any cursor that +** is capable of writing to the databse. That means the cursor was +** originally opened for writing and the cursor has not be disabled +** by having its state changed to CURSOR_FAULT. +*/ +static int countWriteCursors(BtShared *pBt){ + BtCursor *pCur; + int r = 0; + for(pCur=pBt->pCursor; pCur; pCur=pCur->pNext){ + if( pCur->wrFlag && pCur->eState!=CURSOR_FAULT ) r++; + } + return r; +} +#endif + +/* +** This routine sets the state to CURSOR_FAULT and the error +** code to errCode for every cursor on BtShared that pBtree +** references. +** +** Every cursor is tripped, including cursors that belong +** to other database connections that happen to be sharing +** the cache with pBtree. +** +** This routine gets called when a rollback occurs. +** All cursors using the same cache must be tripped +** to prevent them from trying to use the btree after +** the rollback. The rollback may have deleted tables +** or moved root pages, so it is not sufficient to +** save the state of the cursor. The cursor must be +** invalidated. 
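+**
+** (Editor's note, not part of this change: a tripped cursor reports the
+** saved error code on its next use; for example moveToRoot() below does
+**
+**    if( pCur->eState==CURSOR_FAULT ){ return pCur->skip; }
+**
+** so a statement that still holds an old cursor fails with errCode instead
+** of reading the rolled-back tree.)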
+*/ +void sqlite3BtreeTripAllCursors(Btree *pBtree, int errCode){ + BtCursor *p; + sqlite3BtreeEnter(pBtree); + for(p=pBtree->pBt->pCursor; p; p=p->pNext){ + clearCursorPosition(p); + p->eState = CURSOR_FAULT; + p->skip = errCode; + } + sqlite3BtreeLeave(pBtree); +} + +/* +** Rollback the transaction in progress. All cursors will be +** invalided by this operation. Any attempt to use a cursor +** that was open at the beginning of this operation will result +** in an error. +** +** This will release the write lock on the database file. If there +** are no active cursors, it also releases the read lock. +*/ +int sqlite3BtreeRollback(Btree *p){ + int rc; + BtShared *pBt = p->pBt; + MemPage *pPage1; + + sqlite3BtreeEnter(p); + pBt->db = p->db; + rc = saveAllCursors(pBt, 0, 0); +#ifndef SQLITE_OMIT_SHARED_CACHE + if( rc!=SQLITE_OK ){ + /* This is a horrible situation. An IO or malloc() error occured whilst + ** trying to save cursor positions. If this is an automatic rollback (as + ** the result of a constraint, malloc() failure or IO error) then + ** the cache may be internally inconsistent (not contain valid trees) so + ** we cannot simply return the error to the caller. Instead, abort + ** all queries that may be using any of the cursors that failed to save. + */ + sqlite3BtreeTripAllCursors(p, rc); + } +#endif + btreeIntegrity(p); + unlockAllTables(p); + + if( p->inTrans==TRANS_WRITE ){ + int rc2; + +#ifndef SQLITE_OMIT_AUTOVACUUM + pBt->nTrunc = 0; +#endif + + assert( TRANS_WRITE==pBt->inTransaction ); + rc2 = sqlite3PagerRollback(pBt->pPager); + if( rc2!=SQLITE_OK ){ + rc = rc2; + } + + /* The rollback may have destroyed the pPage1->aData value. So + ** call sqlite3BtreeGetPage() on page 1 again to make + ** sure pPage1->aData is set correctly. */ + if( sqlite3BtreeGetPage(pBt, 1, &pPage1, 0)==SQLITE_OK ){ + releasePage(pPage1); + } + assert( countWriteCursors(pBt)==0 ); + pBt->inTransaction = TRANS_READ; + } + + if( p->inTrans!=TRANS_NONE ){ + assert( pBt->nTransaction>0 ); + pBt->nTransaction--; + if( 0==pBt->nTransaction ){ + pBt->inTransaction = TRANS_NONE; + } + } + + p->inTrans = TRANS_NONE; + pBt->inStmt = 0; + unlockBtreeIfUnused(pBt); + + btreeIntegrity(p); + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Start a statement subtransaction. The subtransaction can +** can be rolled back independently of the main transaction. +** You must start a transaction before starting a subtransaction. +** The subtransaction is ended automatically if the main transaction +** commits or rolls back. +** +** Only one subtransaction may be active at a time. It is an error to try +** to start a new subtransaction if another subtransaction is already active. +** +** Statement subtransactions are used around individual SQL statements +** that are contained within a BEGIN...COMMIT block. If a constraint +** error occurs within the statement, the effect of that one statement +** can be rolled back without having to rollback the entire transaction. +*/ +int sqlite3BtreeBeginStmt(Btree *p){ + int rc; + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + pBt->db = p->db; + if( (p->inTrans!=TRANS_WRITE) || pBt->inStmt ){ + rc = pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR; + }else{ + assert( pBt->inTransaction==TRANS_WRITE ); + rc = pBt->readOnly ? SQLITE_OK : sqlite3PagerStmtBegin(pBt->pPager); + pBt->inStmt = 1; + } + sqlite3BtreeLeave(p); + return rc; +} + + +/* +** Commit the statment subtransaction currently in progress. If no +** subtransaction is active, this is a no-op. 
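+**
+** (Editor's sketch, not part of this change: the intended calling pattern
+** around one SQL statement inside an open write-transaction is
+**
+**    sqlite3BtreeBeginStmt(p);
+**    ...apply the statement's changes...
+**    if( failed ) sqlite3BtreeRollbackStmt(p); /* undo this statement only */
+**    else         sqlite3BtreeCommitStmt(p);   /* keep it; transaction stays open */
+**
+** where "failed" stands for any constraint or I/O error detected while the
+** statement ran; the enclosing transaction is committed or rolled back
+** separately.)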
+*/ +int sqlite3BtreeCommitStmt(Btree *p){ + int rc; + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + pBt->db = p->db; + if( pBt->inStmt && !pBt->readOnly ){ + rc = sqlite3PagerStmtCommit(pBt->pPager); + }else{ + rc = SQLITE_OK; + } + pBt->inStmt = 0; + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Rollback the active statement subtransaction. If no subtransaction +** is active this routine is a no-op. +** +** All cursors will be invalidated by this operation. Any attempt +** to use a cursor that was open at the beginning of this operation +** will result in an error. +*/ +int sqlite3BtreeRollbackStmt(Btree *p){ + int rc = SQLITE_OK; + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + pBt->db = p->db; + if( pBt->inStmt && !pBt->readOnly ){ + rc = sqlite3PagerStmtRollback(pBt->pPager); + assert( countWriteCursors(pBt)==0 ); + pBt->inStmt = 0; + } + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Default key comparison function to be used if no comparison function +** is specified on the sqlite3BtreeCursor() call. +*/ +static int dfltCompare( + void *NotUsed, /* User data is not used */ + int n1, const void *p1, /* First key to compare */ + int n2, const void *p2 /* Second key to compare */ +){ + int c; + c = memcmp(p1, p2, n1pBt; + + assert( sqlite3BtreeHoldsMutex(p) ); + *ppCur = 0; + if( wrFlag ){ + if( pBt->readOnly ){ + return SQLITE_READONLY; + } + if( checkReadLocks(p, iTable, 0) ){ + return SQLITE_LOCKED; + } + } + + if( pBt->pPage1==0 ){ + rc = lockBtreeWithRetry(p); + if( rc!=SQLITE_OK ){ + return rc; + } + if( pBt->readOnly && wrFlag ){ + return SQLITE_READONLY; + } + } + pCur = sqlite3MallocZero( sizeof(*pCur) ); + if( pCur==0 ){ + rc = SQLITE_NOMEM; + goto create_cursor_exception; + } + pCur->pgnoRoot = (Pgno)iTable; + if( iTable==1 && sqlite3PagerPagecount(pBt->pPager)==0 ){ + rc = SQLITE_EMPTY; + goto create_cursor_exception; + } + rc = getAndInitPage(pBt, pCur->pgnoRoot, &pCur->pPage, 0); + if( rc!=SQLITE_OK ){ + goto create_cursor_exception; + } + + /* Now that no other errors can occur, finish filling in the BtCursor + ** variables, link the cursor into the BtShared list and set *ppCur (the + ** output argument to this function). + */ + pCur->xCompare = xCmp ? xCmp : dfltCompare; + pCur->pArg = pArg; + pCur->pBtree = p; + pCur->pBt = pBt; + pCur->wrFlag = wrFlag; + pCur->pNext = pBt->pCursor; + if( pCur->pNext ){ + pCur->pNext->pPrev = pCur; + } + pBt->pCursor = pCur; + pCur->eState = CURSOR_INVALID; + *ppCur = pCur; + + return SQLITE_OK; + +create_cursor_exception: + if( pCur ){ + releasePage(pCur->pPage); + sqlite3_free(pCur); + } + unlockBtreeIfUnused(pBt); + return rc; +} +int sqlite3BtreeCursor( + Btree *p, /* The btree */ + int iTable, /* Root page of table to open */ + int wrFlag, /* 1 to write. 0 read-only */ + int (*xCmp)(void*,int,const void*,int,const void*), /* Key Comparison func */ + void *pArg, /* First arg to xCompare() */ + BtCursor **ppCur /* Write new cursor here */ +){ + int rc; + sqlite3BtreeEnter(p); + p->pBt->db = p->db; + rc = btreeCursor(p, iTable, wrFlag, xCmp, pArg, ppCur); + sqlite3BtreeLeave(p); + return rc; +} + + +/* +** Close a cursor. The read lock on the database file is released +** when the last cursor is closed. 
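+**
+** (Editor's sketch, not part of this change: a read-only cursor is opened
+** and closed with
+**
+**    BtCursor *pCur;
+**    rc = sqlite3BtreeCursor(p, iTable, 0, 0, 0, &pCur); /* NULL xCmp selects dfltCompare */
+**    ...
+**    sqlite3BtreeCloseCursor(pCur);
+**
+** where iTable is the root page number of the table being read.)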
+*/ +int sqlite3BtreeCloseCursor(BtCursor *pCur){ + BtShared *pBt = pCur->pBt; + Btree *pBtree = pCur->pBtree; + + sqlite3BtreeEnter(pBtree); + pBt->db = pBtree->db; + clearCursorPosition(pCur); + if( pCur->pPrev ){ + pCur->pPrev->pNext = pCur->pNext; + }else{ + pBt->pCursor = pCur->pNext; + } + if( pCur->pNext ){ + pCur->pNext->pPrev = pCur->pPrev; + } + releasePage(pCur->pPage); + unlockBtreeIfUnused(pBt); + invalidateOverflowCache(pCur); + sqlite3_free(pCur); + sqlite3BtreeLeave(pBtree); + return SQLITE_OK; +} + +/* +** Make a temporary cursor by filling in the fields of pTempCur. +** The temporary cursor is not on the cursor list for the Btree. +*/ +void sqlite3BtreeGetTempCursor(BtCursor *pCur, BtCursor *pTempCur){ + assert( cursorHoldsMutex(pCur) ); + memcpy(pTempCur, pCur, sizeof(*pCur)); + pTempCur->pNext = 0; + pTempCur->pPrev = 0; + if( pTempCur->pPage ){ + sqlite3PagerRef(pTempCur->pPage->pDbPage); + } +} + +/* +** Delete a temporary cursor such as was made by the CreateTemporaryCursor() +** function above. +*/ +void sqlite3BtreeReleaseTempCursor(BtCursor *pCur){ + assert( cursorHoldsMutex(pCur) ); + if( pCur->pPage ){ + sqlite3PagerUnref(pCur->pPage->pDbPage); + } +} + +/* +** Make sure the BtCursor* given in the argument has a valid +** BtCursor.info structure. If it is not already valid, call +** sqlite3BtreeParseCell() to fill it in. +** +** BtCursor.info is a cache of the information in the current cell. +** Using this cache reduces the number of calls to sqlite3BtreeParseCell(). +** +** 2007-06-25: There is a bug in some versions of MSVC that cause the +** compiler to crash when getCellInfo() is implemented as a macro. +** But there is a measureable speed advantage to using the macro on gcc +** (when less compiler optimizations like -Os or -O0 are used and the +** compiler is not doing agressive inlining.) So we use a real function +** for MSVC and a macro for everything else. Ticket #2457. +*/ +#ifndef NDEBUG + static void assertCellInfo(BtCursor *pCur){ + CellInfo info; + memset(&info, 0, sizeof(info)); + sqlite3BtreeParseCell(pCur->pPage, pCur->idx, &info); + assert( memcmp(&info, &pCur->info, sizeof(info))==0 ); + } +#else + #define assertCellInfo(x) +#endif +#ifdef _MSC_VER + /* Use a real function in MSVC to work around bugs in that compiler. */ + static void getCellInfo(BtCursor *pCur){ + if( pCur->info.nSize==0 ){ + sqlite3BtreeParseCell(pCur->pPage, pCur->idx, &pCur->info); + }else{ + assertCellInfo(pCur); + } + } +#else /* if not _MSC_VER */ + /* Use a macro in all other compilers so that the function is inlined */ +#define getCellInfo(pCur) \ + if( pCur->info.nSize==0 ){ \ + sqlite3BtreeParseCell(pCur->pPage, pCur->idx, &pCur->info); \ + }else{ \ + assertCellInfo(pCur); \ + } +#endif /* _MSC_VER */ + +/* +** Set *pSize to the size of the buffer needed to hold the value of +** the key for the current entry. If the cursor is not pointing +** to a valid entry, *pSize is set to 0. +** +** For a table with the INTKEY flag set, this routine returns the key +** itself, not the number of bytes in the key. 
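+**
+** (Editor's example, not part of this change:
+**
+**    i64 nKey;
+**    rc = sqlite3BtreeKeySize(pCur, &nKey);
+**    /* intkey (table) btree: nKey is the integer key (rowid) itself */
+**    /* index btree:          nKey is the size of the key in bytes   */
+** )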
+*/ +int sqlite3BtreeKeySize(BtCursor *pCur, i64 *pSize){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + rc = restoreOrClearCursorPosition(pCur); + if( rc==SQLITE_OK ){ + assert( pCur->eState==CURSOR_INVALID || pCur->eState==CURSOR_VALID ); + if( pCur->eState==CURSOR_INVALID ){ + *pSize = 0; + }else{ + getCellInfo(pCur); + *pSize = pCur->info.nKey; + } + } + return rc; +} + +/* +** Set *pSize to the number of bytes of data in the entry the +** cursor currently points to. Always return SQLITE_OK. +** Failure is not possible. If the cursor is not currently +** pointing to an entry (which can happen, for example, if +** the database is empty) then *pSize is set to 0. +*/ +int sqlite3BtreeDataSize(BtCursor *pCur, u32 *pSize){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + rc = restoreOrClearCursorPosition(pCur); + if( rc==SQLITE_OK ){ + assert( pCur->eState==CURSOR_INVALID || pCur->eState==CURSOR_VALID ); + if( pCur->eState==CURSOR_INVALID ){ + /* Not pointing at a valid entry - set *pSize to 0. */ + *pSize = 0; + }else{ + getCellInfo(pCur); + *pSize = pCur->info.nData; + } + } + return rc; +} + +/* +** Given the page number of an overflow page in the database (parameter +** ovfl), this function finds the page number of the next page in the +** linked list of overflow pages. If possible, it uses the auto-vacuum +** pointer-map data instead of reading the content of page ovfl to do so. +** +** If an error occurs an SQLite error code is returned. Otherwise: +** +** Unless pPgnoNext is NULL, the page number of the next overflow +** page in the linked list is written to *pPgnoNext. If page ovfl +** is the last page in its linked list, *pPgnoNext is set to zero. +** +** If ppPage is not NULL, *ppPage is set to the MemPage* handle +** for page ovfl. The underlying pager page may have been requested +** with the noContent flag set, so the page data accessable via +** this handle may not be trusted. +*/ +static int getOverflowPage( + BtShared *pBt, + Pgno ovfl, /* Overflow page */ + MemPage **ppPage, /* OUT: MemPage handle */ + Pgno *pPgnoNext /* OUT: Next overflow page number */ +){ + Pgno next = 0; + int rc; + + assert( sqlite3_mutex_held(pBt->mutex) ); + /* One of these must not be NULL. Otherwise, why call this function? */ + assert(ppPage || pPgnoNext); + + /* If pPgnoNext is NULL, then this function is being called to obtain + ** a MemPage* reference only. No page-data is required in this case. + */ + if( !pPgnoNext ){ + return sqlite3BtreeGetPage(pBt, ovfl, ppPage, 1); + } + +#ifndef SQLITE_OMIT_AUTOVACUUM + /* Try to find the next page in the overflow list using the + ** autovacuum pointer-map pages. Guess that the next page in + ** the overflow list is page number (ovfl+1). If that guess turns + ** out to be wrong, fall back to loading the data of page + ** number ovfl to determine the next page number. 
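+  **
+  ** (Editor's example, not part of this change: if page 500 is an overflow
+  ** page, the guess is page 501; the guess is accepted only when
+  **
+  **    ptrmapGet(pBt, 501, &eType, &pgno)
+  **
+  ** reports eType==PTRMAP_OVERFLOW2 with pgno==500, i.e. the pointer map
+  ** already records 501 as the page that follows 500 in some chain. The
+  ** page numbers here are illustrative only.)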
+ */ + if( pBt->autoVacuum ){ + Pgno pgno; + Pgno iGuess = ovfl+1; + u8 eType; + + while( PTRMAP_ISPAGE(pBt, iGuess) || iGuess==PENDING_BYTE_PAGE(pBt) ){ + iGuess++; + } + + if( iGuess<=sqlite3PagerPagecount(pBt->pPager) ){ + rc = ptrmapGet(pBt, iGuess, &eType, &pgno); + if( rc!=SQLITE_OK ){ + return rc; + } + if( eType==PTRMAP_OVERFLOW2 && pgno==ovfl ){ + next = iGuess; + } + } + } +#endif + + if( next==0 || ppPage ){ + MemPage *pPage = 0; + + rc = sqlite3BtreeGetPage(pBt, ovfl, &pPage, next!=0); + assert(rc==SQLITE_OK || pPage==0); + if( next==0 && rc==SQLITE_OK ){ + next = get4byte(pPage->aData); + } + + if( ppPage ){ + *ppPage = pPage; + }else{ + releasePage(pPage); + } + } + *pPgnoNext = next; + + return rc; +} + +/* +** Copy data from a buffer to a page, or from a page to a buffer. +** +** pPayload is a pointer to data stored on database page pDbPage. +** If argument eOp is false, then nByte bytes of data are copied +** from pPayload to the buffer pointed at by pBuf. If eOp is true, +** then sqlite3PagerWrite() is called on pDbPage and nByte bytes +** of data are copied from the buffer pBuf to pPayload. +** +** SQLITE_OK is returned on success, otherwise an error code. +*/ +static int copyPayload( + void *pPayload, /* Pointer to page data */ + void *pBuf, /* Pointer to buffer */ + int nByte, /* Number of bytes to copy */ + int eOp, /* 0 -> copy from page, 1 -> copy to page */ + DbPage *pDbPage /* Page containing pPayload */ +){ + if( eOp ){ + /* Copy data from buffer to page (a write operation) */ + int rc = sqlite3PagerWrite(pDbPage); + if( rc!=SQLITE_OK ){ + return rc; + } + memcpy(pPayload, pBuf, nByte); + }else{ + /* Copy data from page to buffer (a read operation) */ + memcpy(pBuf, pPayload, nByte); + } + return SQLITE_OK; +} + +/* +** This function is used to read or overwrite payload information +** for the entry that the pCur cursor is pointing to. If the eOp +** parameter is 0, this is a read operation (data copied into +** buffer pBuf). If it is non-zero, a write (data copied from +** buffer pBuf). +** +** A total of "amt" bytes are read or written beginning at "offset". +** Data is read to or from the buffer pBuf. +** +** This routine does not make a distinction between key and data. +** It just reads or writes bytes from the payload area. Data might +** appear on the main page or be scattered out on multiple overflow +** pages. +** +** If the BtCursor.isIncrblobHandle flag is set, and the current +** cursor entry uses one or more overflow pages, this function +** allocates space for and lazily popluates the overflow page-list +** cache array (BtCursor.aOverflow). Subsequent calls use this +** cache to make seeking to the supplied offset more efficient. +** +** Once an overflow page-list cache has been allocated, it may be +** invalidated if some other cursor writes to the same table, or if +** the cursor is moved to a different row. Additionally, in auto-vacuum +** mode, the following events may invalidate an overflow page-list cache. +** +** * An incremental vacuum, +** * A commit in auto_vacuum="full" mode, +** * Creating a table (may require moving an overflow page). +*/ +static int accessPayload( + BtCursor *pCur, /* Cursor pointing to entry to read from */ + int offset, /* Begin reading this far into payload */ + int amt, /* Read this many bytes */ + unsigned char *pBuf, /* Write the bytes into this buffer */ + int skipKey, /* offset begins at data if this is true */ + int eOp /* zero to read. non-zero to write. 
*/
+){
+  unsigned char *aPayload;
+  int rc = SQLITE_OK;
+  u32 nKey;
+  int iIdx = 0;
+  MemPage *pPage = pCur->pPage;     /* Btree page of current cursor entry */
+  BtShared *pBt;                    /* Btree this cursor belongs to */
+
+  assert( pPage );
+  assert( pCur->eState==CURSOR_VALID );
+  assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
+  assert( offset>=0 );
+  assert( cursorHoldsMutex(pCur) );
+
+  getCellInfo(pCur);
+  aPayload = pCur->info.pCell + pCur->info.nHeader;
+  nKey = (pPage->intKey ? 0 : pCur->info.nKey);
+
+  if( skipKey ){
+    offset += nKey;
+  }
+  if( offset+amt > nKey+pCur->info.nData ){
+    /* Trying to read or write past the end of the data is an error */
+    return SQLITE_ERROR;
+  }
+
+  /* Check if data must be read/written to/from the btree page itself. */
+  if( offset<pCur->info.nLocal ){
+    int a = amt;
+    if( a+offset>pCur->info.nLocal ){
+      a = pCur->info.nLocal - offset;
+    }
+    rc = copyPayload(&aPayload[offset], pBuf, a, eOp, pPage->pDbPage);
+    offset = 0;
+    pBuf += a;
+    amt -= a;
+  }else{
+    offset -= pCur->info.nLocal;
+  }
+
+  pBt = pCur->pBt;
+  if( rc==SQLITE_OK && amt>0 ){
+    const int ovflSize = pBt->usableSize - 4;  /* Bytes of content per ovfl page */
+    Pgno nextPage;
+
+    nextPage = get4byte(&aPayload[pCur->info.nLocal]);
+
+#ifndef SQLITE_OMIT_INCRBLOB
+    /* If the isIncrblobHandle flag is set and the BtCursor.aOverflow[]
+    ** has not been allocated, allocate it now. The array is sized at
+    ** one entry for each overflow page in the overflow chain. The
+    ** page number of the first overflow page is stored in aOverflow[0],
+    ** etc. A value of 0 in the aOverflow[] array means "not yet known"
+    ** (the cache is lazily populated).
+    */
+    if( pCur->isIncrblobHandle && !pCur->aOverflow ){
+      int nOvfl = (pCur->info.nPayload-pCur->info.nLocal+ovflSize-1)/ovflSize;
+      pCur->aOverflow = (Pgno *)sqlite3MallocZero(sizeof(Pgno)*nOvfl);
+      if( nOvfl && !pCur->aOverflow ){
+        rc = SQLITE_NOMEM;
+      }
+    }
+
+    /* If the overflow page-list cache has been allocated and the
+    ** entry for the first required overflow page is valid, skip
+    ** directly to it.
+    */
+    if( pCur->aOverflow && pCur->aOverflow[offset/ovflSize] ){
+      iIdx = (offset/ovflSize);
+      nextPage = pCur->aOverflow[iIdx];
+      offset = (offset%ovflSize);
+    }
+#endif
+
+    for( ; rc==SQLITE_OK && amt>0 && nextPage; iIdx++){
+
+#ifndef SQLITE_OMIT_INCRBLOB
+      /* If required, populate the overflow page-list cache. */
+      if( pCur->aOverflow ){
+        assert(!pCur->aOverflow[iIdx] || pCur->aOverflow[iIdx]==nextPage);
+        pCur->aOverflow[iIdx] = nextPage;
+      }
+#endif
+
+      if( offset>=ovflSize ){
+        /* The only reason to read this page is to obtain the page
+        ** number for the next page in the overflow chain. The page
+        ** data is not required. So first try to look up the overflow
+        ** page-list cache, if any, then fall back to the getOverflowPage()
+        ** function.
+        */
+#ifndef SQLITE_OMIT_INCRBLOB
+        if( pCur->aOverflow && pCur->aOverflow[iIdx+1] ){
+          nextPage = pCur->aOverflow[iIdx+1];
+        } else
+#endif
+          rc = getOverflowPage(pBt, nextPage, 0, &nextPage);
+        offset -= ovflSize;
+      }else{
+        /* Need to read this page properly. It contains some of the
+        ** range of data that is being read (eOp==0) or written (eOp!=0).
+ */ + DbPage *pDbPage; + int a = amt; + rc = sqlite3PagerGet(pBt->pPager, nextPage, &pDbPage); + if( rc==SQLITE_OK ){ + aPayload = sqlite3PagerGetData(pDbPage); + nextPage = get4byte(aPayload); + if( a + offset > ovflSize ){ + a = ovflSize - offset; + } + rc = copyPayload(&aPayload[offset+4], pBuf, a, eOp, pDbPage); + sqlite3PagerUnref(pDbPage); + offset = 0; + amt -= a; + pBuf += a; + } + } + } + } + + if( rc==SQLITE_OK && amt>0 ){ + return SQLITE_CORRUPT_BKPT; + } + return rc; +} + +/* +** Read part of the key associated with cursor pCur. Exactly +** "amt" bytes will be transfered into pBuf[]. The transfer +** begins at "offset". +** +** Return SQLITE_OK on success or an error code if anything goes +** wrong. An error is returned if "offset+amt" is larger than +** the available payload. +*/ +int sqlite3BtreeKey(BtCursor *pCur, u32 offset, u32 amt, void *pBuf){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + rc = restoreOrClearCursorPosition(pCur); + if( rc==SQLITE_OK ){ + assert( pCur->eState==CURSOR_VALID ); + assert( pCur->pPage!=0 ); + if( pCur->pPage->intKey ){ + return SQLITE_CORRUPT_BKPT; + } + assert( pCur->pPage->intKey==0 ); + assert( pCur->idx>=0 && pCur->idxpPage->nCell ); + rc = accessPayload(pCur, offset, amt, (unsigned char*)pBuf, 0, 0); + } + return rc; +} + +/* +** Read part of the data associated with cursor pCur. Exactly +** "amt" bytes will be transfered into pBuf[]. The transfer +** begins at "offset". +** +** Return SQLITE_OK on success or an error code if anything goes +** wrong. An error is returned if "offset+amt" is larger than +** the available payload. +*/ +int sqlite3BtreeData(BtCursor *pCur, u32 offset, u32 amt, void *pBuf){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + rc = restoreOrClearCursorPosition(pCur); + if( rc==SQLITE_OK ){ + assert( pCur->eState==CURSOR_VALID ); + assert( pCur->pPage!=0 ); + assert( pCur->idx>=0 && pCur->idxpPage->nCell ); + rc = accessPayload(pCur, offset, amt, pBuf, 1, 0); + } + return rc; +} + +/* +** Return a pointer to payload information from the entry that the +** pCur cursor is pointing to. The pointer is to the beginning of +** the key if skipKey==0 and it points to the beginning of data if +** skipKey==1. The number of bytes of available key/data is written +** into *pAmt. If *pAmt==0, then the value returned will not be +** a valid pointer. +** +** This routine is an optimization. It is common for the entire key +** and data to fit on the local page and for there to be no overflow +** pages. When that is so, this routine can be used to access the +** key and data without making a copy. If the key and/or data spills +** onto overflow pages, then accessPayload() must be used to reassembly +** the key/data and copy it into a preallocated buffer. +** +** The pointer returned by this routine looks directly into the cached +** page of the database. The data might change or move the next time +** any btree routine is called. 
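+**
+** (Editor's sketch, not part of this change: the two access paths side by
+** side, for a cursor positioned on an entry,
+**
+**    int amt;
+**    const void *pLocal = sqlite3BtreeDataFetch(pCur, &amt); /* zero-copy, local bytes only */
+**    rc = sqlite3BtreeData(pCur, 0, nData, pBuf);            /* copies, follows overflow pages */
+**
+** where pBuf is a caller-supplied buffer and nData would first be obtained
+** from sqlite3BtreeDataSize().)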
+*/ +static const unsigned char *fetchPayload( + BtCursor *pCur, /* Cursor pointing to entry to read from */ + int *pAmt, /* Write the number of available bytes here */ + int skipKey /* read beginning at data if this is true */ +){ + unsigned char *aPayload; + MemPage *pPage; + u32 nKey; + int nLocal; + + assert( pCur!=0 && pCur->pPage!=0 ); + assert( pCur->eState==CURSOR_VALID ); + assert( cursorHoldsMutex(pCur) ); + pPage = pCur->pPage; + assert( pCur->idx>=0 && pCur->idxnCell ); + getCellInfo(pCur); + aPayload = pCur->info.pCell; + aPayload += pCur->info.nHeader; + if( pPage->intKey ){ + nKey = 0; + }else{ + nKey = pCur->info.nKey; + } + if( skipKey ){ + aPayload += nKey; + nLocal = pCur->info.nLocal - nKey; + }else{ + nLocal = pCur->info.nLocal; + if( nLocal>nKey ){ + nLocal = nKey; + } + } + *pAmt = nLocal; + return aPayload; +} + + +/* +** For the entry that cursor pCur is point to, return as +** many bytes of the key or data as are available on the local +** b-tree page. Write the number of available bytes into *pAmt. +** +** The pointer returned is ephemeral. The key/data may move +** or be destroyed on the next call to any Btree routine, +** including calls from other threads against the same cache. +** Hence, a mutex on the BtShared should be held prior to calling +** this routine. +** +** These routines is used to get quick access to key and data +** in the common case where no overflow pages are used. +*/ +const void *sqlite3BtreeKeyFetch(BtCursor *pCur, int *pAmt){ + assert( cursorHoldsMutex(pCur) ); + if( pCur->eState==CURSOR_VALID ){ + return (const void*)fetchPayload(pCur, pAmt, 0); + } + return 0; +} +const void *sqlite3BtreeDataFetch(BtCursor *pCur, int *pAmt){ + assert( cursorHoldsMutex(pCur) ); + if( pCur->eState==CURSOR_VALID ){ + return (const void*)fetchPayload(pCur, pAmt, 1); + } + return 0; +} + + +/* +** Move the cursor down to a new child page. The newPgno argument is the +** page number of the child page to move to. +*/ +static int moveToChild(BtCursor *pCur, u32 newPgno){ + int rc; + MemPage *pNewPage; + MemPage *pOldPage; + BtShared *pBt = pCur->pBt; + + assert( cursorHoldsMutex(pCur) ); + assert( pCur->eState==CURSOR_VALID ); + rc = getAndInitPage(pBt, newPgno, &pNewPage, pCur->pPage); + if( rc ) return rc; + pNewPage->idxParent = pCur->idx; + pOldPage = pCur->pPage; + pOldPage->idxShift = 0; + releasePage(pOldPage); + pCur->pPage = pNewPage; + pCur->idx = 0; + pCur->info.nSize = 0; + if( pNewPage->nCell<1 ){ + return SQLITE_CORRUPT_BKPT; + } + return SQLITE_OK; +} + +/* +** Return true if the page is the virtual root of its table. +** +** The virtual root page is the root page for most tables. But +** for the table rooted on page 1, sometime the real root page +** is empty except for the right-pointer. In such cases the +** virtual root page is the page that the right-pointer of page +** 1 is pointing to. +*/ +int sqlite3BtreeIsRootPage(MemPage *pPage){ + MemPage *pParent; + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + pParent = pPage->pParent; + if( pParent==0 ) return 1; + if( pParent->pgno>1 ) return 0; + if( get2byte(&pParent->aData[pParent->hdrOffset+3])==0 ) return 1; + return 0; +} + +/* +** Move the cursor up to the parent page. +** +** pCur->idx is set to the cell index that contains the pointer +** to the page we are coming from. If we are coming from the +** right-most child page then pCur->idx is set to one more than +** the largest cell index. 
+*/ +void sqlite3BtreeMoveToParent(BtCursor *pCur){ + MemPage *pParent; + MemPage *pPage; + int idxParent; + + assert( cursorHoldsMutex(pCur) ); + assert( pCur->eState==CURSOR_VALID ); + pPage = pCur->pPage; + assert( pPage!=0 ); + assert( !sqlite3BtreeIsRootPage(pPage) ); + pParent = pPage->pParent; + assert( pParent!=0 ); + idxParent = pPage->idxParent; + sqlite3PagerRef(pParent->pDbPage); + releasePage(pPage); + pCur->pPage = pParent; + pCur->info.nSize = 0; + assert( pParent->idxShift==0 ); + pCur->idx = idxParent; +} + +/* +** Move the cursor to the root page +*/ +static int moveToRoot(BtCursor *pCur){ + MemPage *pRoot; + int rc = SQLITE_OK; + Btree *p = pCur->pBtree; + BtShared *pBt = p->pBt; + + assert( cursorHoldsMutex(pCur) ); + assert( CURSOR_INVALID < CURSOR_REQUIRESEEK ); + assert( CURSOR_VALID < CURSOR_REQUIRESEEK ); + assert( CURSOR_FAULT > CURSOR_REQUIRESEEK ); + if( pCur->eState>=CURSOR_REQUIRESEEK ){ + if( pCur->eState==CURSOR_FAULT ){ + return pCur->skip; + } + clearCursorPosition(pCur); + } + pRoot = pCur->pPage; + if( pRoot && pRoot->pgno==pCur->pgnoRoot ){ + assert( pRoot->isInit ); + }else{ + if( + SQLITE_OK!=(rc = getAndInitPage(pBt, pCur->pgnoRoot, &pRoot, 0)) + ){ + pCur->eState = CURSOR_INVALID; + return rc; + } + releasePage(pCur->pPage); + pCur->pPage = pRoot; + } + pCur->idx = 0; + pCur->info.nSize = 0; + if( pRoot->nCell==0 && !pRoot->leaf ){ + Pgno subpage; + assert( pRoot->pgno==1 ); + subpage = get4byte(&pRoot->aData[pRoot->hdrOffset+8]); + assert( subpage>0 ); + pCur->eState = CURSOR_VALID; + rc = moveToChild(pCur, subpage); + } + pCur->eState = ((pCur->pPage->nCell>0)?CURSOR_VALID:CURSOR_INVALID); + return rc; +} + +/* +** Move the cursor down to the left-most leaf entry beneath the +** entry to which it is currently pointing. +** +** The left-most leaf is the one with the smallest key - the first +** in ascending order. +*/ +static int moveToLeftmost(BtCursor *pCur){ + Pgno pgno; + int rc = SQLITE_OK; + MemPage *pPage; + + assert( cursorHoldsMutex(pCur) ); + assert( pCur->eState==CURSOR_VALID ); + while( rc==SQLITE_OK && !(pPage = pCur->pPage)->leaf ){ + assert( pCur->idx>=0 && pCur->idxnCell ); + pgno = get4byte(findCell(pPage, pCur->idx)); + rc = moveToChild(pCur, pgno); + } + return rc; +} + +/* +** Move the cursor down to the right-most leaf entry beneath the +** page to which it is currently pointing. Notice the difference +** between moveToLeftmost() and moveToRightmost(). moveToLeftmost() +** finds the left-most entry beneath the *entry* whereas moveToRightmost() +** finds the right-most entry beneath the *page*. +** +** The right-most entry is the one with the largest key - the last +** key in ascending order. +*/ +static int moveToRightmost(BtCursor *pCur){ + Pgno pgno; + int rc = SQLITE_OK; + MemPage *pPage; + + assert( cursorHoldsMutex(pCur) ); + assert( pCur->eState==CURSOR_VALID ); + while( rc==SQLITE_OK && !(pPage = pCur->pPage)->leaf ){ + pgno = get4byte(&pPage->aData[pPage->hdrOffset+8]); + pCur->idx = pPage->nCell; + rc = moveToChild(pCur, pgno); + } + if( rc==SQLITE_OK ){ + pCur->idx = pPage->nCell - 1; + pCur->info.nSize = 0; + } + return SQLITE_OK; +} + +/* Move the cursor to the first entry in the table. Return SQLITE_OK +** on success. Set *pRes to 0 if the cursor actually points to something +** or set *pRes to 1 if the table is empty. 
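+**
+** (Editor's sketch, not part of this change: a full forward scan of a table
+** combines this routine with sqlite3BtreeNext(),
+**
+**    int res;
+**    for(rc=sqlite3BtreeFirst(pCur, &res); rc==SQLITE_OK && res==0;
+**        rc=sqlite3BtreeNext(pCur, &res)){
+**      ...read the current entry...
+**    }
+** )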
+*/ +int sqlite3BtreeFirst(BtCursor *pCur, int *pRes){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); + rc = moveToRoot(pCur); + if( rc==SQLITE_OK ){ + if( pCur->eState==CURSOR_INVALID ){ + assert( pCur->pPage->nCell==0 ); + *pRes = 1; + rc = SQLITE_OK; + }else{ + assert( pCur->pPage->nCell>0 ); + *pRes = 0; + rc = moveToLeftmost(pCur); + } + } + return rc; +} + +/* Move the cursor to the last entry in the table. Return SQLITE_OK +** on success. Set *pRes to 0 if the cursor actually points to something +** or set *pRes to 1 if the table is empty. +*/ +int sqlite3BtreeLast(BtCursor *pCur, int *pRes){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); + rc = moveToRoot(pCur); + if( rc==SQLITE_OK ){ + if( CURSOR_INVALID==pCur->eState ){ + assert( pCur->pPage->nCell==0 ); + *pRes = 1; + }else{ + assert( pCur->eState==CURSOR_VALID ); + *pRes = 0; + rc = moveToRightmost(pCur); + } + } + return rc; +} + +/* Move the cursor so that it points to an entry near pKey/nKey. +** Return a success code. +** +** For INTKEY tables, only the nKey parameter is used. pKey is +** ignored. For other tables, nKey is the number of bytes of data +** in pKey. The comparison function specified when the cursor was +** created is used to compare keys. +** +** If an exact match is not found, then the cursor is always +** left pointing at a leaf page which would hold the entry if it +** were present. The cursor might point to an entry that comes +** before or after the key. +** +** The result of comparing the key with the entry to which the +** cursor is written to *pRes if pRes!=NULL. The meaning of +** this value is as follows: +** +** *pRes<0 The cursor is left pointing at an entry that +** is smaller than pKey or if the table is empty +** and the cursor is therefore left point to nothing. +** +** *pRes==0 The cursor is left pointing at an entry that +** exactly matches pKey. +** +** *pRes>0 The cursor is left pointing at an entry that +** is larger than pKey. +** +*/ +int sqlite3BtreeMoveto( + BtCursor *pCur, /* The cursor to be moved */ + const void *pKey, /* The key content for indices. Not used by tables */ + i64 nKey, /* Size of pKey. 
Or the key for tables */ + int biasRight, /* If true, bias the search to the high end */ + int *pRes /* Search result flag */ +){ + int rc; + + assert( cursorHoldsMutex(pCur) ); + assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); + rc = moveToRoot(pCur); + if( rc ){ + return rc; + } + assert( pCur->pPage ); + assert( pCur->pPage->isInit ); + if( pCur->eState==CURSOR_INVALID ){ + *pRes = -1; + assert( pCur->pPage->nCell==0 ); + return SQLITE_OK; + } + for(;;){ + int lwr, upr; + Pgno chldPg; + MemPage *pPage = pCur->pPage; + int c = -1; /* pRes return if table is empty must be -1 */ + lwr = 0; + upr = pPage->nCell-1; + if( !pPage->intKey && pKey==0 ){ + return SQLITE_CORRUPT_BKPT; + } + if( biasRight ){ + pCur->idx = upr; + }else{ + pCur->idx = (upr+lwr)/2; + } + if( lwr<=upr ) for(;;){ + void *pCellKey; + i64 nCellKey; + pCur->info.nSize = 0; + if( pPage->intKey ){ + u8 *pCell; + pCell = findCell(pPage, pCur->idx) + pPage->childPtrSize; + if( pPage->hasData ){ + u32 dummy; + pCell += getVarint32(pCell, &dummy); + } + getVarint(pCell, (u64 *)&nCellKey); + if( nCellKeynKey ){ + c = +1; + }else{ + c = 0; + } + }else{ + int available; + pCellKey = (void *)fetchPayload(pCur, &available, 0); + nCellKey = pCur->info.nKey; + if( available>=nCellKey ){ + c = pCur->xCompare(pCur->pArg, nCellKey, pCellKey, nKey, pKey); + }else{ + pCellKey = sqlite3_malloc( nCellKey ); + if( pCellKey==0 ) return SQLITE_NOMEM; + rc = sqlite3BtreeKey(pCur, 0, nCellKey, (void *)pCellKey); + c = pCur->xCompare(pCur->pArg, nCellKey, pCellKey, nKey, pKey); + sqlite3_free(pCellKey); + if( rc ){ + return rc; + } + } + } + if( c==0 ){ + if( pPage->leafData && !pPage->leaf ){ + lwr = pCur->idx; + upr = lwr - 1; + break; + }else{ + if( pRes ) *pRes = 0; + return SQLITE_OK; + } + } + if( c<0 ){ + lwr = pCur->idx+1; + }else{ + upr = pCur->idx-1; + } + if( lwr>upr ){ + break; + } + pCur->idx = (lwr+upr)/2; + } + assert( lwr==upr+1 ); + assert( pPage->isInit ); + if( pPage->leaf ){ + chldPg = 0; + }else if( lwr>=pPage->nCell ){ + chldPg = get4byte(&pPage->aData[pPage->hdrOffset+8]); + }else{ + chldPg = get4byte(findCell(pPage, lwr)); + } + if( chldPg==0 ){ + assert( pCur->idx>=0 && pCur->idxpPage->nCell ); + if( pRes ) *pRes = c; + return SQLITE_OK; + } + pCur->idx = lwr; + pCur->info.nSize = 0; + rc = moveToChild(pCur, chldPg); + if( rc ){ + return rc; + } + } + /* NOT REACHED */ +} + + +/* +** Return TRUE if the cursor is not pointing at an entry of the table. +** +** TRUE will be returned after a call to sqlite3BtreeNext() moves +** past the last entry in the table or sqlite3BtreePrev() moves past +** the first entry. TRUE is also returned if the table is empty. +*/ +int sqlite3BtreeEof(BtCursor *pCur){ + /* TODO: What if the cursor is in CURSOR_REQUIRESEEK but all table entries + ** have been deleted? This API will need to change to return an error code + ** as well as the boolean result value. + */ + return (CURSOR_VALID!=pCur->eState); +} + +/* +** Return the database connection handle for a cursor. +*/ +sqlite3 *sqlite3BtreeCursorDb(const BtCursor *pCur){ + assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); + return pCur->pBtree->db; +} + +/* +** Advance the cursor to the next entry in the database. If +** successful then set *pRes=0. If the cursor +** was already pointing to the last entry in the database before +** this routine was called, then set *pRes=1. 
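+**
+** (Editor's sketch, not part of this change: combining sqlite3BtreeMoveto()
+** above with this routine gives a scan of all entries at or after a
+** caller-supplied startRowid in an intkey table,
+**
+**    int res;
+**    rc = sqlite3BtreeMoveto(pCur, 0, (i64)startRowid, 0, &res);
+**    if( rc==SQLITE_OK && res<0 ) rc = sqlite3BtreeNext(pCur, &res);
+**    while( rc==SQLITE_OK && res==0 ){ ...visit entry...; rc = sqlite3BtreeNext(pCur, &res); }
+**
+** res<0 after the seek means the cursor stopped on a smaller entry, so one
+** extra step forward is needed before the loop.)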
+*/ +static int btreeNext(BtCursor *pCur, int *pRes){ + int rc; + MemPage *pPage; + + assert( cursorHoldsMutex(pCur) ); + rc = restoreOrClearCursorPosition(pCur); + if( rc!=SQLITE_OK ){ + return rc; + } + assert( pRes!=0 ); + pPage = pCur->pPage; + if( CURSOR_INVALID==pCur->eState ){ + *pRes = 1; + return SQLITE_OK; + } + if( pCur->skip>0 ){ + pCur->skip = 0; + *pRes = 0; + return SQLITE_OK; + } + pCur->skip = 0; + + assert( pPage->isInit ); + assert( pCur->idxnCell ); + + pCur->idx++; + pCur->info.nSize = 0; + if( pCur->idx>=pPage->nCell ){ + if( !pPage->leaf ){ + rc = moveToChild(pCur, get4byte(&pPage->aData[pPage->hdrOffset+8])); + if( rc ) return rc; + rc = moveToLeftmost(pCur); + *pRes = 0; + return rc; + } + do{ + if( sqlite3BtreeIsRootPage(pPage) ){ + *pRes = 1; + pCur->eState = CURSOR_INVALID; + return SQLITE_OK; + } + sqlite3BtreeMoveToParent(pCur); + pPage = pCur->pPage; + }while( pCur->idx>=pPage->nCell ); + *pRes = 0; + if( pPage->leafData ){ + rc = sqlite3BtreeNext(pCur, pRes); + }else{ + rc = SQLITE_OK; + } + return rc; + } + *pRes = 0; + if( pPage->leaf ){ + return SQLITE_OK; + } + rc = moveToLeftmost(pCur); + return rc; +} +int sqlite3BtreeNext(BtCursor *pCur, int *pRes){ + int rc; + assert( cursorHoldsMutex(pCur) ); + rc = btreeNext(pCur, pRes); + return rc; +} + + +/* +** Step the cursor to the back to the previous entry in the database. If +** successful then set *pRes=0. If the cursor +** was already pointing to the first entry in the database before +** this routine was called, then set *pRes=1. +*/ +static int btreePrevious(BtCursor *pCur, int *pRes){ + int rc; + Pgno pgno; + MemPage *pPage; + + assert( cursorHoldsMutex(pCur) ); + rc = restoreOrClearCursorPosition(pCur); + if( rc!=SQLITE_OK ){ + return rc; + } + if( CURSOR_INVALID==pCur->eState ){ + *pRes = 1; + return SQLITE_OK; + } + if( pCur->skip<0 ){ + pCur->skip = 0; + *pRes = 0; + return SQLITE_OK; + } + pCur->skip = 0; + + pPage = pCur->pPage; + assert( pPage->isInit ); + assert( pCur->idx>=0 ); + if( !pPage->leaf ){ + pgno = get4byte( findCell(pPage, pCur->idx) ); + rc = moveToChild(pCur, pgno); + if( rc ){ + return rc; + } + rc = moveToRightmost(pCur); + }else{ + while( pCur->idx==0 ){ + if( sqlite3BtreeIsRootPage(pPage) ){ + pCur->eState = CURSOR_INVALID; + *pRes = 1; + return SQLITE_OK; + } + sqlite3BtreeMoveToParent(pCur); + pPage = pCur->pPage; + } + pCur->idx--; + pCur->info.nSize = 0; + if( pPage->leafData && !pPage->leaf ){ + rc = sqlite3BtreePrevious(pCur, pRes); + }else{ + rc = SQLITE_OK; + } + } + *pRes = 0; + return rc; +} +int sqlite3BtreePrevious(BtCursor *pCur, int *pRes){ + int rc; + assert( cursorHoldsMutex(pCur) ); + rc = btreePrevious(pCur, pRes); + return rc; +} + +/* +** Allocate a new page from the database file. +** +** The new page is marked as dirty. (In other words, sqlite3PagerWrite() +** has already been called on the new page.) The new page has also +** been referenced and the calling routine is responsible for calling +** sqlite3PagerUnref() on the new page when it is done. +** +** SQLITE_OK is returned on success. Any other return value indicates +** an error. *ppPage and *pPgno are undefined in the event of an error. +** Do not invoke sqlite3PagerUnref() on *ppPage if an error is returned. +** +** If the "nearby" parameter is not 0, then a (feeble) effort is made to +** locate a page close to the page number "nearby". This can be used in an +** attempt to keep related pages close to each other in the database file, +** which in turn can make database access faster. 
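+**
+** (Editor's sketch, not part of this change: a call that favours locality,
+** as when extending an overflow chain, looks like
+**
+**    rc = allocateBtreePage(pBt, &pOvfl, &pgnoOvfl, pgnoOvfl, 0);
+**
+** i.e. "any free page will do, but prefer one close to page pgnoOvfl";
+** fillInCell() further down makes essentially this call.)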
+** +** If the "exact" parameter is not 0, and the page-number nearby exists +** anywhere on the free-list, then it is guarenteed to be returned. This +** is only used by auto-vacuum databases when allocating a new table. +*/ +static int allocateBtreePage( + BtShared *pBt, + MemPage **ppPage, + Pgno *pPgno, + Pgno nearby, + u8 exact +){ + MemPage *pPage1; + int rc; + int n; /* Number of pages on the freelist */ + int k; /* Number of leaves on the trunk of the freelist */ + MemPage *pTrunk = 0; + MemPage *pPrevTrunk = 0; + + assert( sqlite3_mutex_held(pBt->mutex) ); + pPage1 = pBt->pPage1; + n = get4byte(&pPage1->aData[36]); + if( n>0 ){ + /* There are pages on the freelist. Reuse one of those pages. */ + Pgno iTrunk; + u8 searchList = 0; /* If the free-list must be searched for 'nearby' */ + + /* If the 'exact' parameter was true and a query of the pointer-map + ** shows that the page 'nearby' is somewhere on the free-list, then + ** the entire-list will be searched for that page. + */ +#ifndef SQLITE_OMIT_AUTOVACUUM + if( exact && nearby<=sqlite3PagerPagecount(pBt->pPager) ){ + u8 eType; + assert( nearby>0 ); + assert( pBt->autoVacuum ); + rc = ptrmapGet(pBt, nearby, &eType, 0); + if( rc ) return rc; + if( eType==PTRMAP_FREEPAGE ){ + searchList = 1; + } + *pPgno = nearby; + } +#endif + + /* Decrement the free-list count by 1. Set iTrunk to the index of the + ** first free-list trunk page. iPrevTrunk is initially 1. + */ + rc = sqlite3PagerWrite(pPage1->pDbPage); + if( rc ) return rc; + put4byte(&pPage1->aData[36], n-1); + + /* The code within this loop is run only once if the 'searchList' variable + ** is not true. Otherwise, it runs once for each trunk-page on the + ** free-list until the page 'nearby' is located. + */ + do { + pPrevTrunk = pTrunk; + if( pPrevTrunk ){ + iTrunk = get4byte(&pPrevTrunk->aData[0]); + }else{ + iTrunk = get4byte(&pPage1->aData[32]); + } + rc = sqlite3BtreeGetPage(pBt, iTrunk, &pTrunk, 0); + if( rc ){ + pTrunk = 0; + goto end_allocate_page; + } + + k = get4byte(&pTrunk->aData[4]); + if( k==0 && !searchList ){ + /* The trunk has no leaves and the list is not being searched. + ** So extract the trunk page itself and use it as the newly + ** allocated page */ + assert( pPrevTrunk==0 ); + rc = sqlite3PagerWrite(pTrunk->pDbPage); + if( rc ){ + goto end_allocate_page; + } + *pPgno = iTrunk; + memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4); + *ppPage = pTrunk; + pTrunk = 0; + TRACE(("ALLOCATE: %d trunk - %d free pages left\n", *pPgno, n-1)); + }else if( k>pBt->usableSize/4 - 8 ){ + /* Value of k is out of range. Database corruption */ + rc = SQLITE_CORRUPT_BKPT; + goto end_allocate_page; +#ifndef SQLITE_OMIT_AUTOVACUUM + }else if( searchList && nearby==iTrunk ){ + /* The list is being searched and this trunk page is the page + ** to allocate, regardless of whether it has leaves. + */ + assert( *pPgno==iTrunk ); + *ppPage = pTrunk; + searchList = 0; + rc = sqlite3PagerWrite(pTrunk->pDbPage); + if( rc ){ + goto end_allocate_page; + } + if( k==0 ){ + if( !pPrevTrunk ){ + memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4); + }else{ + memcpy(&pPrevTrunk->aData[0], &pTrunk->aData[0], 4); + } + }else{ + /* The trunk page is required by the caller but it contains + ** pointers to free-list leaves. The first leaf becomes a trunk + ** page in this case. 
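+        **
+        ** (Editor's note, not part of this change: a freelist trunk page is
+        ** laid out as
+        **
+        **    bytes 0..3   page number of the next trunk page (0 if none)
+        **    bytes 4..7   the number k of leaf page numbers on this trunk
+        **    bytes 8..    k big-endian 4-byte leaf page numbers
+        **
+        ** so the code below copies the next-trunk pointer, stores k-1 at
+        ** offset 4, and moves the remaining leaves up from offset 12.)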
+ */ + MemPage *pNewTrunk; + Pgno iNewTrunk = get4byte(&pTrunk->aData[8]); + rc = sqlite3BtreeGetPage(pBt, iNewTrunk, &pNewTrunk, 0); + if( rc!=SQLITE_OK ){ + goto end_allocate_page; + } + rc = sqlite3PagerWrite(pNewTrunk->pDbPage); + if( rc!=SQLITE_OK ){ + releasePage(pNewTrunk); + goto end_allocate_page; + } + memcpy(&pNewTrunk->aData[0], &pTrunk->aData[0], 4); + put4byte(&pNewTrunk->aData[4], k-1); + memcpy(&pNewTrunk->aData[8], &pTrunk->aData[12], (k-1)*4); + releasePage(pNewTrunk); + if( !pPrevTrunk ){ + put4byte(&pPage1->aData[32], iNewTrunk); + }else{ + rc = sqlite3PagerWrite(pPrevTrunk->pDbPage); + if( rc ){ + goto end_allocate_page; + } + put4byte(&pPrevTrunk->aData[0], iNewTrunk); + } + } + pTrunk = 0; + TRACE(("ALLOCATE: %d trunk - %d free pages left\n", *pPgno, n-1)); +#endif + }else{ + /* Extract a leaf from the trunk */ + int closest; + Pgno iPage; + unsigned char *aData = pTrunk->aData; + rc = sqlite3PagerWrite(pTrunk->pDbPage); + if( rc ){ + goto end_allocate_page; + } + if( nearby>0 ){ + int i, dist; + closest = 0; + dist = get4byte(&aData[8]) - nearby; + if( dist<0 ) dist = -dist; + for(i=1; isqlite3PagerPagecount(pBt->pPager) ){ + /* Free page off the end of the file */ + return SQLITE_CORRUPT_BKPT; + } + TRACE(("ALLOCATE: %d was leaf %d of %d on trunk %d" + ": %d more free pages\n", + *pPgno, closest+1, k, pTrunk->pgno, n-1)); + if( closestpDbPage); + rc = sqlite3PagerWrite((*ppPage)->pDbPage); + if( rc!=SQLITE_OK ){ + releasePage(*ppPage); + } + } + searchList = 0; + } + } + releasePage(pPrevTrunk); + pPrevTrunk = 0; + }while( searchList ); + }else{ + /* There are no pages on the freelist, so create a new page at the + ** end of the file */ + *pPgno = sqlite3PagerPagecount(pBt->pPager) + 1; + +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->nTrunc ){ + /* An incr-vacuum has already run within this transaction. So the + ** page to allocate is not from the physical end of the file, but + ** at pBt->nTrunc. + */ + *pPgno = pBt->nTrunc+1; + if( *pPgno==PENDING_BYTE_PAGE(pBt) ){ + (*pPgno)++; + } + } + if( pBt->autoVacuum && PTRMAP_ISPAGE(pBt, *pPgno) ){ + /* If *pPgno refers to a pointer-map page, allocate two new pages + ** at the end of the file instead of one. The first allocated page + ** becomes a new pointer-map page, the second is used by the caller. + */ + TRACE(("ALLOCATE: %d from end of file (pointer-map page)\n", *pPgno)); + assert( *pPgno!=PENDING_BYTE_PAGE(pBt) ); + (*pPgno)++; + if( *pPgno==PENDING_BYTE_PAGE(pBt) ){ (*pPgno)++; } + } + if( pBt->nTrunc ){ + pBt->nTrunc = *pPgno; + } +#endif + + assert( *pPgno!=PENDING_BYTE_PAGE(pBt) ); + rc = sqlite3BtreeGetPage(pBt, *pPgno, ppPage, 0); + if( rc ) return rc; + rc = sqlite3PagerWrite((*ppPage)->pDbPage); + if( rc!=SQLITE_OK ){ + releasePage(*ppPage); + } + TRACE(("ALLOCATE: %d from end of file\n", *pPgno)); + } + + assert( *pPgno!=PENDING_BYTE_PAGE(pBt) ); + +end_allocate_page: + releasePage(pTrunk); + releasePage(pPrevTrunk); + return rc; +} + +/* +** Add a page of the database file to the freelist. +** +** sqlite3PagerUnref() is NOT called for pPage. 
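+**
+** (Editor's note, not part of this change: because the page reference is not
+** dropped here, callers pair this routine with an explicit unref; the
+** overflow-chain walk in clearCell() below does
+**
+**    rc = freePage(pOvfl);
+**    sqlite3PagerUnref(pOvfl->pDbPage);
+** )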
+*/ +static int freePage(MemPage *pPage){ + BtShared *pBt = pPage->pBt; + MemPage *pPage1 = pBt->pPage1; + int rc, n, k; + + /* Prepare the page for freeing */ + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + assert( pPage->pgno>1 ); + pPage->isInit = 0; + releasePage(pPage->pParent); + pPage->pParent = 0; + + /* Increment the free page count on pPage1 */ + rc = sqlite3PagerWrite(pPage1->pDbPage); + if( rc ) return rc; + n = get4byte(&pPage1->aData[36]); + put4byte(&pPage1->aData[36], n+1); + +#ifdef SQLITE_SECURE_DELETE + /* If the SQLITE_SECURE_DELETE compile-time option is enabled, then + ** always fully overwrite deleted information with zeros. + */ + rc = sqlite3PagerWrite(pPage->pDbPage); + if( rc ) return rc; + memset(pPage->aData, 0, pPage->pBt->pageSize); +#endif + +#ifndef SQLITE_OMIT_AUTOVACUUM + /* If the database supports auto-vacuum, write an entry in the pointer-map + ** to indicate that the page is free. + */ + if( pBt->autoVacuum ){ + rc = ptrmapPut(pBt, pPage->pgno, PTRMAP_FREEPAGE, 0); + if( rc ) return rc; + } +#endif + + if( n==0 ){ + /* This is the first free page */ + rc = sqlite3PagerWrite(pPage->pDbPage); + if( rc ) return rc; + memset(pPage->aData, 0, 8); + put4byte(&pPage1->aData[32], pPage->pgno); + TRACE(("FREE-PAGE: %d first\n", pPage->pgno)); + }else{ + /* Other free pages already exist. Retrive the first trunk page + ** of the freelist and find out how many leaves it has. */ + MemPage *pTrunk; + rc = sqlite3BtreeGetPage(pBt, get4byte(&pPage1->aData[32]), &pTrunk, 0); + if( rc ) return rc; + k = get4byte(&pTrunk->aData[4]); + if( k>=pBt->usableSize/4 - 8 ){ + /* The trunk is full. Turn the page being freed into a new + ** trunk page with no leaves. */ + rc = sqlite3PagerWrite(pPage->pDbPage); + if( rc==SQLITE_OK ){ + put4byte(pPage->aData, pTrunk->pgno); + put4byte(&pPage->aData[4], 0); + put4byte(&pPage1->aData[32], pPage->pgno); + TRACE(("FREE-PAGE: %d new trunk page replacing %d\n", + pPage->pgno, pTrunk->pgno)); + } + }else if( k<0 ){ + rc = SQLITE_CORRUPT; + }else{ + /* Add the newly freed page as a leaf on the current trunk */ + rc = sqlite3PagerWrite(pTrunk->pDbPage); + if( rc==SQLITE_OK ){ + put4byte(&pTrunk->aData[4], k+1); + put4byte(&pTrunk->aData[8+k*4], pPage->pgno); +#ifndef SQLITE_SECURE_DELETE + sqlite3PagerDontWrite(pPage->pDbPage); +#endif + } + TRACE(("FREE-PAGE: %d leaf on trunk page %d\n",pPage->pgno,pTrunk->pgno)); + } + releasePage(pTrunk); + } + return rc; +} + +/* +** Free any overflow pages associated with the given Cell. +*/ +static int clearCell(MemPage *pPage, unsigned char *pCell){ + BtShared *pBt = pPage->pBt; + CellInfo info; + Pgno ovflPgno; + int rc; + int nOvfl; + int ovflPageSize; + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + if( info.iOverflow==0 ){ + return SQLITE_OK; /* No overflow pages. 
Return without doing anything */ + } + ovflPgno = get4byte(&pCell[info.iOverflow]); + ovflPageSize = pBt->usableSize - 4; + nOvfl = (info.nPayload - info.nLocal + ovflPageSize - 1)/ovflPageSize; + assert( ovflPgno==0 || nOvfl>0 ); + while( nOvfl-- ){ + MemPage *pOvfl; + if( ovflPgno==0 || ovflPgno>sqlite3PagerPagecount(pBt->pPager) ){ + return SQLITE_CORRUPT_BKPT; + } + + rc = getOverflowPage(pBt, ovflPgno, &pOvfl, (nOvfl==0)?0:&ovflPgno); + if( rc ) return rc; + rc = freePage(pOvfl); + sqlite3PagerUnref(pOvfl->pDbPage); + if( rc ) return rc; + } + return SQLITE_OK; +} + +/* +** Create the byte sequence used to represent a cell on page pPage +** and write that byte sequence into pCell[]. Overflow pages are +** allocated and filled in as necessary. The calling procedure +** is responsible for making sure sufficient space has been allocated +** for pCell[]. +** +** Note that pCell does not necessary need to point to the pPage->aData +** area. pCell might point to some temporary storage. The cell will +** be constructed in this temporary area then copied into pPage->aData +** later. +*/ +static int fillInCell( + MemPage *pPage, /* The page that contains the cell */ + unsigned char *pCell, /* Complete text of the cell */ + const void *pKey, i64 nKey, /* The key */ + const void *pData,int nData, /* The data */ + int nZero, /* Extra zero bytes to append to pData */ + int *pnSize /* Write cell size here */ +){ + int nPayload; + const u8 *pSrc; + int nSrc, n, rc; + int spaceLeft; + MemPage *pOvfl = 0; + MemPage *pToRelease = 0; + unsigned char *pPrior; + unsigned char *pPayload; + BtShared *pBt = pPage->pBt; + Pgno pgnoOvfl = 0; + int nHeader; + CellInfo info; + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + + /* Fill in the header. */ + nHeader = 0; + if( !pPage->leaf ){ + nHeader += 4; + } + if( pPage->hasData ){ + nHeader += putVarint(&pCell[nHeader], nData+nZero); + }else{ + nData = nZero = 0; + } + nHeader += putVarint(&pCell[nHeader], *(u64*)&nKey); + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + assert( info.nHeader==nHeader ); + assert( info.nKey==nKey ); + assert( info.nData==nData+nZero ); + + /* Fill in the payload */ + nPayload = nData + nZero; + if( pPage->intKey ){ + pSrc = pData; + nSrc = nData; + nData = 0; + }else{ + nPayload += nKey; + pSrc = pKey; + nSrc = nKey; + } + *pnSize = info.nSize; + spaceLeft = info.nLocal; + pPayload = &pCell[nHeader]; + pPrior = &pCell[info.iOverflow]; + + while( nPayload>0 ){ + if( spaceLeft==0 ){ + int isExact = 0; +#ifndef SQLITE_OMIT_AUTOVACUUM + Pgno pgnoPtrmap = pgnoOvfl; /* Overflow page pointer-map entry page */ + if( pBt->autoVacuum ){ + do{ + pgnoOvfl++; + } while( + PTRMAP_ISPAGE(pBt, pgnoOvfl) || pgnoOvfl==PENDING_BYTE_PAGE(pBt) + ); + if( pgnoOvfl>1 ){ + /* isExact = 1; */ + } + } +#endif + rc = allocateBtreePage(pBt, &pOvfl, &pgnoOvfl, pgnoOvfl, isExact); +#ifndef SQLITE_OMIT_AUTOVACUUM + /* If the database supports auto-vacuum, and the second or subsequent + ** overflow page is being allocated, add an entry to the pointer-map + ** for that page now. + ** + ** If this is the first overflow page, then write a partial entry + ** to the pointer-map. If we write nothing to this pointer-map slot, + ** then the optimistic overflow chain processing in clearCell() + ** may misinterpret the uninitialised values and delete the + ** wrong pages from the database. 
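
clearCell() above derives the length of the overflow chain purely from arithmetic: every overflow page reserves its first 4 bytes for the next overflow page number and carries usableSize-4 bytes of payload. A small sketch of that calculation (overflow_page_count is a hypothetical helper, not part of btree.c):

    #include <stdio.h>

    /* Given the total payload of a cell, the portion stored locally on the
    ** b-tree page (nLocal) and the page's usable size, compute how many
    ** overflow pages the cell needs.  Mirrors the arithmetic in clearCell():
    ** each overflow page holds usableSize-4 payload bytes because its first
    ** 4 bytes name the next overflow page (0 on the last one). */
    static int overflow_page_count(int nPayload, int nLocal, int usableSize){
      int ovflPageSize = usableSize - 4;
      if( nPayload<=nLocal ) return 0;
      return (nPayload - nLocal + ovflPageSize - 1)/ovflPageSize;
    }

    int main(void){
      /* Example: a 10000-byte payload with 1000 bytes stored locally on a
      ** page with 1024 usable bytes needs ceil(9000/1020) = 9 overflow pages. */
      printf("%d\n", overflow_page_count(10000, 1000, 1024));
      return 0;
    }
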
+ */ + if( pBt->autoVacuum && rc==SQLITE_OK ){ + u8 eType = (pgnoPtrmap?PTRMAP_OVERFLOW2:PTRMAP_OVERFLOW1); + rc = ptrmapPut(pBt, pgnoOvfl, eType, pgnoPtrmap); + if( rc ){ + releasePage(pOvfl); + } + } +#endif + if( rc ){ + releasePage(pToRelease); + return rc; + } + put4byte(pPrior, pgnoOvfl); + releasePage(pToRelease); + pToRelease = pOvfl; + pPrior = pOvfl->aData; + put4byte(pPrior, 0); + pPayload = &pOvfl->aData[4]; + spaceLeft = pBt->usableSize - 4; + } + n = nPayload; + if( n>spaceLeft ) n = spaceLeft; + if( nSrc>0 ){ + if( n>nSrc ) n = nSrc; + assert( pSrc ); + memcpy(pPayload, pSrc, n); + }else{ + memset(pPayload, 0, n); + } + nPayload -= n; + pPayload += n; + pSrc += n; + nSrc -= n; + spaceLeft -= n; + if( nSrc==0 ){ + nSrc = nData; + pSrc = pData; + } + } + releasePage(pToRelease); + return SQLITE_OK; +} + +/* +** Change the MemPage.pParent pointer on the page whose number is +** given in the second argument so that MemPage.pParent holds the +** pointer in the third argument. +*/ +static int reparentPage(BtShared *pBt, Pgno pgno, MemPage *pNewParent, int idx){ + MemPage *pThis; + DbPage *pDbPage; + + assert( sqlite3_mutex_held(pBt->mutex) ); + assert( pNewParent!=0 ); + if( pgno==0 ) return SQLITE_OK; + assert( pBt->pPager!=0 ); + pDbPage = sqlite3PagerLookup(pBt->pPager, pgno); + if( pDbPage ){ + pThis = (MemPage *)sqlite3PagerGetExtra(pDbPage); + if( pThis->isInit ){ + assert( pThis->aData==sqlite3PagerGetData(pDbPage) ); + if( pThis->pParent!=pNewParent ){ + if( pThis->pParent ) sqlite3PagerUnref(pThis->pParent->pDbPage); + pThis->pParent = pNewParent; + sqlite3PagerRef(pNewParent->pDbPage); + } + pThis->idxParent = idx; + } + sqlite3PagerUnref(pDbPage); + } + +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + return ptrmapPut(pBt, pgno, PTRMAP_BTREE, pNewParent->pgno); + } +#endif + return SQLITE_OK; +} + + + +/* +** Change the pParent pointer of all children of pPage to point back +** to pPage. +** +** In other words, for every child of pPage, invoke reparentPage() +** to make sure that each child knows that pPage is its parent. +** +** This routine gets called after you memcpy() one page into +** another. +*/ +static int reparentChildPages(MemPage *pPage){ + int i; + BtShared *pBt = pPage->pBt; + int rc = SQLITE_OK; + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + if( pPage->leaf ) return SQLITE_OK; + + for(i=0; inCell; i++){ + u8 *pCell = findCell(pPage, i); + if( !pPage->leaf ){ + rc = reparentPage(pBt, get4byte(pCell), pPage, i); + if( rc!=SQLITE_OK ) return rc; + } + } + if( !pPage->leaf ){ + rc = reparentPage(pBt, get4byte(&pPage->aData[pPage->hdrOffset+8]), + pPage, i); + pPage->idxShift = 0; + } + return rc; +} + +/* +** Remove the i-th cell from pPage. This routine effects pPage only. +** The cell content is not freed or deallocated. It is assumed that +** the cell content has been copied someplace else. This routine just +** removes the reference to the cell from pPage. +** +** "sz" must be the number of bytes in the cell. 
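
dropCell(), defined just below, works on the array of 2-byte cell-pointer slots that starts at pPage->cellOffset: it frees the cell's content area and then shifts every slot above the deleted index down by one. A toy model of just that slot shuffling, assuming a bare byte array (drop_pointer is invented for the illustration; the real routine also updates nCell, nFree and the page header):

    #include <stdio.h>
    #include <string.h>

    /* Remove slot "idx" from an array of nCell 2-byte cell pointers by
    ** shifting the higher slots down, the same movement the loop in
    ** dropCell() performs one byte pair at a time. */
    static void drop_pointer(unsigned char *aPtr, int nCell, int idx){
      memmove(&aPtr[2*idx], &aPtr[2*(idx+1)], (size_t)2*(nCell-1-idx));
    }

    int main(void){
      unsigned char a[] = { 0,10, 0,20, 0,30, 0,40 };   /* four 2-byte offsets */
      drop_pointer(a, 4, 1);                            /* drop the 2nd slot  */
      for(int i=0; i<3; i++) printf("%d ", a[2*i+1]);   /* prints: 10 30 40   */
      printf("\n");
      return 0;
    }
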
+*/ +static void dropCell(MemPage *pPage, int idx, int sz){ + int i; /* Loop counter */ + int pc; /* Offset to cell content of cell being deleted */ + u8 *data; /* pPage->aData */ + u8 *ptr; /* Used to move bytes around within data[] */ + + assert( idx>=0 && idxnCell ); + assert( sz==cellSize(pPage, idx) ); + assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + data = pPage->aData; + ptr = &data[pPage->cellOffset + 2*idx]; + pc = get2byte(ptr); + assert( pc>10 && pc+sz<=pPage->pBt->usableSize ); + freeSpace(pPage, pc, sz); + for(i=idx+1; inCell; i++, ptr+=2){ + ptr[0] = ptr[2]; + ptr[1] = ptr[3]; + } + pPage->nCell--; + put2byte(&data[pPage->hdrOffset+3], pPage->nCell); + pPage->nFree += 2; + pPage->idxShift = 1; +} + +/* +** Insert a new cell on pPage at cell index "i". pCell points to the +** content of the cell. +** +** If the cell content will fit on the page, then put it there. If it +** will not fit, then make a copy of the cell content into pTemp if +** pTemp is not null. Regardless of pTemp, allocate a new entry +** in pPage->aOvfl[] and make it point to the cell content (either +** in pTemp or the original pCell) and also record its index. +** Allocating a new entry in pPage->aCell[] implies that +** pPage->nOverflow is incremented. +** +** If nSkip is non-zero, then do not copy the first nSkip bytes of the +** cell. The caller will overwrite them after this function returns. If +** nSkip is non-zero, then pCell may not point to an invalid memory location +** (but pCell+nSkip is always valid). +*/ +static int insertCell( + MemPage *pPage, /* Page into which we are copying */ + int i, /* New cell becomes the i-th cell of the page */ + u8 *pCell, /* Content of the new cell */ + int sz, /* Bytes of content in pCell */ + u8 *pTemp, /* Temp storage space for pCell, if needed */ + u8 nSkip /* Do not write the first nSkip bytes of the cell */ +){ + int idx; /* Where to write new cell content in data[] */ + int j; /* Loop counter */ + int top; /* First byte of content for any cell in data[] */ + int end; /* First byte past the last cell pointer in data[] */ + int ins; /* Index in data[] where new cell pointer is inserted */ + int hdr; /* Offset into data[] of the page header */ + int cellOffset; /* Address of first cell pointer in data[] */ + u8 *data; /* The content of the whole page */ + u8 *ptr; /* Used for moving information around in data[] */ + + assert( i>=0 && i<=pPage->nCell+pPage->nOverflow ); + assert( sz==cellSizePtr(pPage, pCell) ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + if( pPage->nOverflow || sz+2>pPage->nFree ){ + if( pTemp ){ + memcpy(pTemp+nSkip, pCell+nSkip, sz-nSkip); + pCell = pTemp; + } + j = pPage->nOverflow++; + assert( jaOvfl)/sizeof(pPage->aOvfl[0]) ); + pPage->aOvfl[j].pCell = pCell; + pPage->aOvfl[j].idx = i; + pPage->nFree = 0; + }else{ + int rc = sqlite3PagerWrite(pPage->pDbPage); + if( rc!=SQLITE_OK ){ + return rc; + } + assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + data = pPage->aData; + hdr = pPage->hdrOffset; + top = get2byte(&data[hdr+5]); + cellOffset = pPage->cellOffset; + end = cellOffset + 2*pPage->nCell + 2; + ins = cellOffset + 2*i; + if( end > top - sz ){ + rc = defragmentPage(pPage); + if( rc!=SQLITE_OK ) return rc; + top = get2byte(&data[hdr+5]); + assert( end + sz <= top ); + } + idx = allocateSpace(pPage, sz); + assert( idx>0 ); + assert( end <= get2byte(&data[hdr+5]) ); + pPage->nCell++; + pPage->nFree -= 2; + memcpy(&data[idx+nSkip], pCell+nSkip, sz-nSkip); + 
for(j=end-2, ptr=&data[j]; j>ins; j-=2, ptr-=2){ + ptr[0] = ptr[-2]; + ptr[1] = ptr[-1]; + } + put2byte(&data[ins], idx); + put2byte(&data[hdr+3], pPage->nCell); + pPage->idxShift = 1; +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pPage->pBt->autoVacuum ){ + /* The cell may contain a pointer to an overflow page. If so, write + ** the entry for the overflow page into the pointer map. + */ + CellInfo info; + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + assert( (info.nData+(pPage->intKey?0:info.nKey))==info.nPayload ); + if( (info.nData+(pPage->intKey?0:info.nKey))>info.nLocal ){ + Pgno pgnoOvfl = get4byte(&pCell[info.iOverflow]); + rc = ptrmapPut(pPage->pBt, pgnoOvfl, PTRMAP_OVERFLOW1, pPage->pgno); + if( rc!=SQLITE_OK ) return rc; + } + } +#endif + } + + return SQLITE_OK; +} + +/* +** Add a list of cells to a page. The page should be initially empty. +** The cells are guaranteed to fit on the page. +*/ +static void assemblePage( + MemPage *pPage, /* The page to be assemblied */ + int nCell, /* The number of cells to add to this page */ + u8 **apCell, /* Pointers to cell bodies */ + u16 *aSize /* Sizes of the cells */ +){ + int i; /* Loop counter */ + int totalSize; /* Total size of all cells */ + int hdr; /* Index of page header */ + int cellptr; /* Address of next cell pointer */ + int cellbody; /* Address of next cell body */ + u8 *data; /* Data for the page */ + + assert( pPage->nOverflow==0 ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + totalSize = 0; + for(i=0; inFree ); + assert( pPage->nCell==0 ); + cellptr = pPage->cellOffset; + data = pPage->aData; + hdr = pPage->hdrOffset; + put2byte(&data[hdr+3], nCell); + if( nCell ){ + cellbody = allocateSpace(pPage, totalSize); + assert( cellbody>0 ); + assert( pPage->nFree >= 2*nCell ); + pPage->nFree -= 2*nCell; + for(i=0; ipBt->usableSize ); + } + pPage->nCell = nCell; +} + +/* +** The following parameters determine how many adjacent pages get involved +** in a balancing operation. NN is the number of neighbors on either side +** of the page that participate in the balancing operation. NB is the +** total number of pages that participate, including the target page and +** NN neighbors on either side. +** +** The minimum value of NN is 1 (of course). Increasing NN above 1 +** (to 2 or 3) gives a modest improvement in SELECT and DELETE performance +** in exchange for a larger degradation in INSERT and UPDATE performance. +** The value of NN appears to give the best results overall. +*/ +#define NN 1 /* Number of neighbors on either side of pPage */ +#define NB (NN*2+1) /* Total pages involved in the balance */ + +/* Forward reference */ +static int balance(MemPage*, int); + +#ifndef SQLITE_OMIT_QUICKBALANCE +/* +** This version of balance() handles the common special case where +** a new entry is being inserted on the extreme right-end of the +** tree, in other words, when the new entry will become the largest +** entry in the tree. +** +** Instead of trying balance the 3 right-most leaf pages, just add +** a new page to the right-hand side and put the one new entry in +** that page. This leaves the right side of the tree somewhat +** unbalanced. But odds are that we will be inserting new entries +** at the end soon afterwards so the nearly empty page will quickly +** fill up. On average. +** +** pPage is the leaf page which is the right-most page in the tree. +** pParent is its parent. pPage must have a single overflow entry +** which is also the right-most entry on the page. 
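
Both insertCell() above and assemblePage() keep two page-header fields consistent: the 2-byte cell count at hdr+3 and the 2-byte offset of the start of the cell content area at hdr+5, where hdr is 100 on page 1 (because of the database file header) and 0 on every other page. A small reader for those two fields, assuming a raw page image (read_be16 and page_summary are illustrative, not part of this code):

    #include <stdio.h>

    /* Stand-in for get2byte(): big-endian 16-bit decode. */
    static int read_be16(const unsigned char *p){ return (p[0]<<8) | p[1]; }

    /* Report the cell count and the start of the cell content area for a
    ** page image, using the hdr+3 and hdr+5 fields maintained above. */
    static void page_summary(const unsigned char *aData, unsigned pgno){
      int hdr = (pgno==1) ? 100 : 0;
      printf("page %u: nCell=%d, content area starts at %d\n",
             pgno, read_be16(&aData[hdr+3]), read_be16(&aData[hdr+5]));
    }

    int main(void){
      unsigned char aPage[1024] = {0};
      aPage[4] = 2;                      /* nCell = 2 */
      aPage[5] = 0x03; aPage[6] = 0x84;  /* content starts at offset 900 */
      page_summary(aPage, 2);
      return 0;
    }
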
+*/ +static int balance_quick(MemPage *pPage, MemPage *pParent){ + int rc; + MemPage *pNew; + Pgno pgnoNew; + u8 *pCell; + u16 szCell; + CellInfo info; + BtShared *pBt = pPage->pBt; + int parentIdx = pParent->nCell; /* pParent new divider cell index */ + int parentSize; /* Size of new divider cell */ + u8 parentCell[64]; /* Space for the new divider cell */ + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + + /* Allocate a new page. Insert the overflow cell from pPage + ** into it. Then remove the overflow cell from pPage. + */ + rc = allocateBtreePage(pBt, &pNew, &pgnoNew, 0, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + pCell = pPage->aOvfl[0].pCell; + szCell = cellSizePtr(pPage, pCell); + zeroPage(pNew, pPage->aData[0]); + assemblePage(pNew, 1, &pCell, &szCell); + pPage->nOverflow = 0; + + /* Set the parent of the newly allocated page to pParent. */ + pNew->pParent = pParent; + sqlite3PagerRef(pParent->pDbPage); + + /* pPage is currently the right-child of pParent. Change this + ** so that the right-child is the new page allocated above and + ** pPage is the next-to-right child. + */ + assert( pPage->nCell>0 ); + pCell = findCell(pPage, pPage->nCell-1); + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + rc = fillInCell(pParent, parentCell, 0, info.nKey, 0, 0, 0, &parentSize); + if( rc!=SQLITE_OK ){ + return rc; + } + assert( parentSize<64 ); + rc = insertCell(pParent, parentIdx, parentCell, parentSize, 0, 4); + if( rc!=SQLITE_OK ){ + return rc; + } + put4byte(findOverflowCell(pParent,parentIdx), pPage->pgno); + put4byte(&pParent->aData[pParent->hdrOffset+8], pgnoNew); + +#ifndef SQLITE_OMIT_AUTOVACUUM + /* If this is an auto-vacuum database, update the pointer map + ** with entries for the new page, and any pointer from the + ** cell on the page to an overflow page. + */ + if( pBt->autoVacuum ){ + rc = ptrmapPut(pBt, pgnoNew, PTRMAP_BTREE, pParent->pgno); + if( rc==SQLITE_OK ){ + rc = ptrmapPutOvfl(pNew, 0); + } + if( rc!=SQLITE_OK ){ + releasePage(pNew); + return rc; + } + } +#endif + + /* Release the reference to the new page and balance the parent page, + ** in case the divider cell inserted caused it to become overfull. + */ + releasePage(pNew); + return balance(pParent, 0); +} +#endif /* SQLITE_OMIT_QUICKBALANCE */ + +/* +** This routine redistributes Cells on pPage and up to NN*2 siblings +** of pPage so that all pages have about the same amount of free space. +** Usually NN siblings on either side of pPage is used in the balancing, +** though more siblings might come from one side if pPage is the first +** or last child of its parent. If pPage has fewer than 2*NN siblings +** (something which can only happen if pPage is the root page or a +** child of root) then all available siblings participate in the balancing. +** +** The number of siblings of pPage might be increased or decreased by one or +** two in an effort to keep pages nearly full but not over full. The root page +** is special and is allowed to be nearly empty. If pPage is +** the root page, then the depth of the tree might be increased +** or decreased by one, as necessary, to keep the root page from being +** overfull or completely empty. +** +** Note that when this routine is called, some of the Cells on pPage +** might not actually be stored in pPage->aData[]. This can happen +** if the page is overfull. Part of the job of this routine is to +** make sure all Cells for pPage once again fit in pPage->aData[]. 
+** +** In the course of balancing the siblings of pPage, the parent of pPage +** might become overfull or underfull. If that happens, then this routine +** is called recursively on the parent. +** +** If this routine fails for any reason, it might leave the database +** in a corrupted state. So if this routine fails, the database should +** be rolled back. +*/ +static int balance_nonroot(MemPage *pPage){ + MemPage *pParent; /* The parent of pPage */ + BtShared *pBt; /* The whole database */ + int nCell = 0; /* Number of cells in apCell[] */ + int nMaxCells = 0; /* Allocated size of apCell, szCell, aFrom. */ + int nOld; /* Number of pages in apOld[] */ + int nNew; /* Number of pages in apNew[] */ + int nDiv; /* Number of cells in apDiv[] */ + int i, j, k; /* Loop counters */ + int idx; /* Index of pPage in pParent->aCell[] */ + int nxDiv; /* Next divider slot in pParent->aCell[] */ + int rc; /* The return code */ + int leafCorrection; /* 4 if pPage is a leaf. 0 if not */ + int leafData; /* True if pPage is a leaf of a LEAFDATA tree */ + int usableSpace; /* Bytes in pPage beyond the header */ + int pageFlags; /* Value of pPage->aData[0] */ + int subtotal; /* Subtotal of bytes in cells on one page */ + int iSpace = 0; /* First unused byte of aSpace[] */ + MemPage *apOld[NB]; /* pPage and up to two siblings */ + Pgno pgnoOld[NB]; /* Page numbers for each page in apOld[] */ + MemPage *apCopy[NB]; /* Private copies of apOld[] pages */ + MemPage *apNew[NB+2]; /* pPage and up to NB siblings after balancing */ + Pgno pgnoNew[NB+2]; /* Page numbers for each page in apNew[] */ + u8 *apDiv[NB]; /* Divider cells in pParent */ + int cntNew[NB+2]; /* Index in aCell[] of cell after i-th page */ + int szNew[NB+2]; /* Combined size of cells place on i-th page */ + u8 **apCell = 0; /* All cells begin balanced */ + u16 *szCell; /* Local size of all cells in apCell[] */ + u8 *aCopy[NB]; /* Space for holding data of apCopy[] */ + u8 *aSpace; /* Space to hold copies of dividers cells */ +#ifndef SQLITE_OMIT_AUTOVACUUM + u8 *aFrom = 0; +#endif + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + + /* + ** Find the parent page. + */ + assert( pPage->isInit ); + assert( sqlite3PagerIswriteable(pPage->pDbPage) || pPage->nOverflow==1 ); + pBt = pPage->pBt; + pParent = pPage->pParent; + assert( pParent ); + if( SQLITE_OK!=(rc = sqlite3PagerWrite(pParent->pDbPage)) ){ + return rc; + } + TRACE(("BALANCE: begin page %d child of %d\n", pPage->pgno, pParent->pgno)); + +#ifndef SQLITE_OMIT_QUICKBALANCE + /* + ** A special case: If a new entry has just been inserted into a + ** table (that is, a btree with integer keys and all data at the leaves) + ** and the new entry is the right-most entry in the tree (it has the + ** largest key) then use the special balance_quick() routine for + ** balancing. balance_quick() is much faster and results in a tighter + ** packing of data in the common case. + */ + if( pPage->leaf && + pPage->intKey && + pPage->leafData && + pPage->nOverflow==1 && + pPage->aOvfl[0].idx==pPage->nCell && + pPage->pParent->pgno!=1 && + get4byte(&pParent->aData[pParent->hdrOffset+8])==pPage->pgno + ){ + /* + ** TODO: Check the siblings to the left of pPage. It may be that + ** they are not full and no new page is required. + */ + return balance_quick(pPage, pParent); + } +#endif + + if( SQLITE_OK!=(rc = sqlite3PagerWrite(pPage->pDbPage)) ){ + return rc; + } + + /* + ** Find the cell in the parent page whose left child points back + ** to pPage. The "idx" variable is the index of that cell. 
If pPage + ** is the rightmost child of pParent then set idx to pParent->nCell + */ + if( pParent->idxShift ){ + Pgno pgno; + pgno = pPage->pgno; + assert( pgno==sqlite3PagerPagenumber(pPage->pDbPage) ); + for(idx=0; idxnCell; idx++){ + if( get4byte(findCell(pParent, idx))==pgno ){ + break; + } + } + assert( idxnCell + || get4byte(&pParent->aData[pParent->hdrOffset+8])==pgno ); + }else{ + idx = pPage->idxParent; + } + + /* + ** Initialize variables so that it will be safe to jump + ** directly to balance_cleanup at any moment. + */ + nOld = nNew = 0; + sqlite3PagerRef(pParent->pDbPage); + + /* + ** Find sibling pages to pPage and the cells in pParent that divide + ** the siblings. An attempt is made to find NN siblings on either + ** side of pPage. More siblings are taken from one side, however, if + ** pPage there are fewer than NN siblings on the other side. If pParent + ** has NB or fewer children then all children of pParent are taken. + */ + nxDiv = idx - NN; + if( nxDiv + NB > pParent->nCell ){ + nxDiv = pParent->nCell - NB + 1; + } + if( nxDiv<0 ){ + nxDiv = 0; + } + nDiv = 0; + for(i=0, k=nxDiv; inCell ){ + apDiv[i] = findCell(pParent, k); + nDiv++; + assert( !pParent->leaf ); + pgnoOld[i] = get4byte(apDiv[i]); + }else if( k==pParent->nCell ){ + pgnoOld[i] = get4byte(&pParent->aData[pParent->hdrOffset+8]); + }else{ + break; + } + rc = getAndInitPage(pBt, pgnoOld[i], &apOld[i], pParent); + if( rc ) goto balance_cleanup; + apOld[i]->idxParent = k; + apCopy[i] = 0; + assert( i==nOld ); + nOld++; + nMaxCells += 1+apOld[i]->nCell+apOld[i]->nOverflow; + } + + /* Make nMaxCells a multiple of 4 in order to preserve 8-byte + ** alignment */ + nMaxCells = (nMaxCells + 3)&~3; + + /* + ** Allocate space for memory structures + */ + apCell = sqlite3_malloc( + nMaxCells*sizeof(u8*) /* apCell */ + + nMaxCells*sizeof(u16) /* szCell */ + + (ROUND8(sizeof(MemPage))+pBt->pageSize)*NB /* aCopy */ + + pBt->pageSize*5 /* aSpace */ + + (ISAUTOVACUUM ? nMaxCells : 0) /* aFrom */ + ); + if( apCell==0 ){ + rc = SQLITE_NOMEM; + goto balance_cleanup; + } + szCell = (u16*)&apCell[nMaxCells]; + aCopy[0] = (u8*)&szCell[nMaxCells]; + assert( ((aCopy[0] - (u8*)apCell) & 7)==0 ); /* 8-byte alignment required */ + for(i=1; ipageSize+ROUND8(sizeof(MemPage))]; + assert( ((aCopy[i] - (u8*)apCell) & 7)==0 ); /* 8-byte alignment required */ + } + aSpace = &aCopy[NB-1][pBt->pageSize+ROUND8(sizeof(MemPage))]; + assert( ((aSpace - (u8*)apCell) & 7)==0 ); /* 8-byte alignment required */ +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + aFrom = &aSpace[5*pBt->pageSize]; + } +#endif + + /* + ** Make copies of the content of pPage and its siblings into aOld[]. + ** The rest of this function will use data from the copies rather + ** that the original pages since the original pages will be in the + ** process of being overwritten. + */ + for(i=0; iaData = (void*)&p[1]; + memcpy(p->aData, apOld[i]->aData, pBt->pageSize); + } + + /* + ** Load pointers to all cells on sibling pages and the divider cells + ** into the local apCell[] array. Make copies of the divider cells + ** into space obtained form aSpace[] and remove the the divider Cells + ** from pParent. + ** + ** If the siblings are on leaf pages, then the child pointers of the + ** divider cells are stripped from the cells before they are copied + ** into aSpace[]. In this way, all cells in apCell[] are without + ** child pointers. If siblings are not leaves, then all cell in + ** apCell[] include child pointers. Either way, all cells in apCell[] + ** are alike. 
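
The single sqlite3_malloc() above is deliberately carved into several arrays (apCell, szCell, aCopy, aSpace and optionally aFrom) so that one allocation serves the whole balance, with each region kept 8-byte aligned. The same pattern in miniature, for two arrays of different element sizes (the names and the ROUND8 macro mirror the code above, but the layout is simplified for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define ROUND8(x) (((x)+7)&~7)

    int main(void){
      int nMaxCells = 10;
      /* One allocation, carved into a pointer array followed by a u16 array;
      ** rounding the first region up to a multiple of 8 keeps the second one
      ** 8-byte aligned, as the assert()s in balance_nonroot() demand. */
      unsigned char *zAlloc = malloc( ROUND8(nMaxCells*sizeof(void*))
                                    + nMaxCells*sizeof(uint16_t) );
      if( zAlloc==0 ) return 1;
      void **apCell = (void**)zAlloc;
      uint16_t *szCell = (uint16_t*)&zAlloc[ROUND8(nMaxCells*sizeof(void*))];
      printf("apCell aligned: %d, szCell aligned: %d\n",
             ((uintptr_t)apCell % 8)==0, ((uintptr_t)szCell % 8)==0);
      free(zAlloc);
      return 0;
    }
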
+ ** + ** leafCorrection: 4 if pPage is a leaf. 0 if pPage is not a leaf. + ** leafData: 1 if pPage holds key+data and pParent holds only keys. + */ + nCell = 0; + leafCorrection = pPage->leaf*4; + leafData = pPage->leafData && pPage->leaf; + for(i=0; inCell+pOld->nOverflow; + for(j=0; jautoVacuum ){ + int a; + aFrom[nCell] = i; + for(a=0; anOverflow; a++){ + if( pOld->aOvfl[a].pCell==apCell[nCell] ){ + aFrom[nCell] = 0xFF; + break; + } + } + } +#endif + nCell++; + } + if( ipageSize*5 ); + memcpy(pTemp, apDiv[i], sz); + apCell[nCell] = pTemp+leafCorrection; +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + aFrom[nCell] = 0xFF; + } +#endif + dropCell(pParent, nxDiv, sz); + szCell[nCell] -= leafCorrection; + assert( get4byte(pTemp)==pgnoOld[i] ); + if( !pOld->leaf ){ + assert( leafCorrection==0 ); + /* The right pointer of the child page pOld becomes the left + ** pointer of the divider cell */ + memcpy(apCell[nCell], &pOld->aData[pOld->hdrOffset+8], 4); + }else{ + assert( leafCorrection==4 ); + if( szCell[nCell]<4 ){ + /* Do not allow any cells smaller than 4 bytes. */ + szCell[nCell] = 4; + } + } + nCell++; + } + } + } + + /* + ** Figure out the number of pages needed to hold all nCell cells. + ** Store this number in "k". Also compute szNew[] which is the total + ** size of all cells on the i-th page and cntNew[] which is the index + ** in apCell[] of the cell that divides page i from page i+1. + ** cntNew[k] should equal nCell. + ** + ** Values computed by this block: + ** + ** k: The total number of sibling pages + ** szNew[i]: Spaced used on the i-th sibling page. + ** cntNew[i]: Index in apCell[] and szCell[] for the first cell to + ** the right of the i-th sibling page. + ** usableSpace: Number of bytes of space available on each sibling. + ** + */ + usableSpace = pBt->usableSize - 12 + leafCorrection; + for(subtotal=k=i=0; i usableSpace ){ + szNew[k] = subtotal - szCell[i]; + cntNew[k] = i; + if( leafData ){ i--; } + subtotal = 0; + k++; + } + } + szNew[k] = subtotal; + cntNew[k] = nCell; + k++; + + /* + ** The packing computed by the previous block is biased toward the siblings + ** on the left side. The left siblings are always nearly full, while the + ** right-most sibling might be nearly empty. This block of code attempts + ** to adjust the packing of siblings to get a better balance. + ** + ** This adjustment is more than an optimization. The packing above might + ** be so out of balance as to be illegal. For example, the right-most + ** sibling might be completely empty. This adjustment is not optional. + */ + for(i=k-1; i>0; i--){ + int szRight = szNew[i]; /* Size of sibling on the right */ + int szLeft = szNew[i-1]; /* Size of sibling on the left */ + int r; /* Index of right-most cell in left sibling */ + int d; /* Index of first cell to the left of right sibling */ + + r = cntNew[i-1] - 1; + d = r + 1 - leafData; + assert( d0) or we are the + ** a virtual root page. A virtual root page is when the real root + ** page is page 1 and we are the only child of that page. + */ + assert( cntNew[0]>0 || (pParent->pgno==1 && pParent->nCell==0) ); + + /* + ** Allocate k new pages. Reuse old pages where possible. 
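
The two packing passes above are the heart of the redistribution: a greedy left-to-right fill followed by a right-to-left correction so the right-most sibling is never left illegally empty. The same strategy restated over plain arrays, ignoring divider cells and the leafData case and assuming each individual cell fits on a page by itself (pack_siblings is an illustration of the idea, not the routine itself):

    #include <stdio.h>

    /* Pack nCell cells of sizes szCell[] onto sibling pages of capacity
    ** usableSpace (2 bytes of cell-pointer overhead per cell).  Pass one
    ** fills pages from the left; pass two moves cells from each left page
    ** into its right neighbour until the pair is roughly even.  Assumes
    ** every cell fits on a page by itself.  Returns the page count and
    ** fills szNew[] (bytes per page) and cntNew[] (first cell to the right
    ** of each page). */
    static int pack_siblings(const int *szCell, int nCell, int usableSpace,
                             int *szNew, int *cntNew){
      int i, k = 0, subtotal = 0;
      for(i=0; i<nCell; i++){
        subtotal += szCell[i] + 2;
        if( subtotal > usableSpace ){
          szNew[k] = subtotal - (szCell[i]+2);
          cntNew[k] = i;
          subtotal = 0;
          k++;
          i--;                     /* retry this cell on the next page */
        }
      }
      szNew[k] = subtotal;
      cntNew[k] = nCell;
      k++;
      for(i=k-1; i>0; i--){
        while( cntNew[i-1]>1 ){
          int r = cntNew[i-1]-1;   /* right-most cell still on the left page */
          if( szNew[i]!=0 && szNew[i]+szCell[r]+2 > szNew[i-1]-(szCell[r]+2) ) break;
          szNew[i] += szCell[r]+2;
          szNew[i-1] -= szCell[r]+2;
          cntNew[i-1]--;
        }
      }
      return k;
    }

    int main(void){
      int szCell[] = {400, 400, 400, 400, 30};
      int szNew[8], cntNew[8];
      int k = pack_siblings(szCell, 5, 1000, szNew, cntNew);
      for(int i=0; i<k; i++){
        printf("sibling %d: %d bytes, cells [%d..%d)\n",
               i, szNew[i], i==0 ? 0 : cntNew[i-1], cntNew[i]);
      }
      return 0;
    }
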
+ */ + assert( pPage->pgno>1 ); + pageFlags = pPage->aData[0]; + for(i=0; ipDbPage); + nNew++; + if( rc ) goto balance_cleanup; + }else{ + assert( i>0 ); + rc = allocateBtreePage(pBt, &pNew, &pgnoNew[i], pgnoNew[i-1], 0); + if( rc ) goto balance_cleanup; + apNew[i] = pNew; + nNew++; + } + zeroPage(pNew, pageFlags); + } + + /* Free any old pages that were not reused as new pages. + */ + while( ii ){ + int t; + MemPage *pT; + t = pgnoNew[i]; + pT = apNew[i]; + pgnoNew[i] = pgnoNew[minI]; + apNew[i] = apNew[minI]; + pgnoNew[minI] = t; + apNew[minI] = pT; + } + } + TRACE(("BALANCE: old: %d %d %d new: %d(%d) %d(%d) %d(%d) %d(%d) %d(%d)\n", + pgnoOld[0], + nOld>=2 ? pgnoOld[1] : 0, + nOld>=3 ? pgnoOld[2] : 0, + pgnoNew[0], szNew[0], + nNew>=2 ? pgnoNew[1] : 0, nNew>=2 ? szNew[1] : 0, + nNew>=3 ? pgnoNew[2] : 0, nNew>=3 ? szNew[2] : 0, + nNew>=4 ? pgnoNew[3] : 0, nNew>=4 ? szNew[3] : 0, + nNew>=5 ? pgnoNew[4] : 0, nNew>=5 ? szNew[4] : 0)); + + /* + ** Evenly distribute the data in apCell[] across the new pages. + ** Insert divider cells into pParent as necessary. + */ + j = 0; + for(i=0; ipgno==pgnoNew[i] ); + assemblePage(pNew, cntNew[i]-j, &apCell[j], &szCell[j]); + assert( pNew->nCell>0 || (nNew==1 && cntNew[0]==0) ); + assert( pNew->nOverflow==0 ); + +#ifndef SQLITE_OMIT_AUTOVACUUM + /* If this is an auto-vacuum database, update the pointer map entries + ** that point to the siblings that were rearranged. These can be: left + ** children of cells, the right-child of the page, or overflow pages + ** pointed to by cells. + */ + if( pBt->autoVacuum ){ + for(k=j; kpgno!=pNew->pgno ){ + rc = ptrmapPutOvfl(pNew, k-j); + if( rc!=SQLITE_OK ){ + goto balance_cleanup; + } + } + } + } +#endif + + j = cntNew[i]; + + /* If the sibling page assembled above was not the right-most sibling, + ** insert a divider cell into the parent page. + */ + if( ileaf ){ + memcpy(&pNew->aData[8], pCell, 4); + pTemp = 0; + }else if( leafData ){ + /* If the tree is a leaf-data tree, and the siblings are leaves, + ** then there is no divider cell in apCell[]. Instead, the divider + ** cell consists of the integer key for the right-most cell of + ** the sibling-page assembled above only. + */ + CellInfo info; + j--; + sqlite3BtreeParseCellPtr(pNew, apCell[j], &info); + pCell = &aSpace[iSpace]; + fillInCell(pParent, pCell, 0, info.nKey, 0, 0, 0, &sz); + iSpace += sz; + assert( iSpace<=pBt->pageSize*5 ); + pTemp = 0; + }else{ + pCell -= 4; + pTemp = &aSpace[iSpace]; + iSpace += sz; + assert( iSpace<=pBt->pageSize*5 ); + /* Obscure case for non-leaf-data trees: If the cell at pCell was + ** previously stored on a leaf node, and its reported size was 4 + ** bytes, then it may actually be smaller than this + ** (see sqlite3BtreeParseCellPtr(), 4 bytes is the minimum size of + ** any cell). But it is important to pass the correct size to + ** insertCell(), so reparse the cell now. + ** + ** Note that this can never happen in an SQLite data file, as all + ** cells are at least 4 bytes. It only happens in b-trees used + ** to evaluate "IN (SELECT ...)" and similar clauses. 
+ */ + if( szCell[j]==4 ){ + assert(leafCorrection==4); + sz = cellSizePtr(pParent, pCell); + } + } + rc = insertCell(pParent, nxDiv, pCell, sz, pTemp, 4); + if( rc!=SQLITE_OK ) goto balance_cleanup; + put4byte(findOverflowCell(pParent,nxDiv), pNew->pgno); +#ifndef SQLITE_OMIT_AUTOVACUUM + /* If this is an auto-vacuum database, and not a leaf-data tree, + ** then update the pointer map with an entry for the overflow page + ** that the cell just inserted points to (if any). + */ + if( pBt->autoVacuum && !leafData ){ + rc = ptrmapPutOvfl(pParent, nxDiv); + if( rc!=SQLITE_OK ){ + goto balance_cleanup; + } + } +#endif + j++; + nxDiv++; + } + } + assert( j==nCell ); + assert( nOld>0 ); + assert( nNew>0 ); + if( (pageFlags & PTF_LEAF)==0 ){ + memcpy(&apNew[nNew-1]->aData[8], &apCopy[nOld-1]->aData[8], 4); + } + if( nxDiv==pParent->nCell+pParent->nOverflow ){ + /* Right-most sibling is the right-most child of pParent */ + put4byte(&pParent->aData[pParent->hdrOffset+8], pgnoNew[nNew-1]); + }else{ + /* Right-most sibling is the left child of the first entry in pParent + ** past the right-most divider entry */ + put4byte(findOverflowCell(pParent, nxDiv), pgnoNew[nNew-1]); + } + + /* + ** Reparent children of all cells. + */ + for(i=0; iisInit ); + rc = balance(pParent, 0); + + /* + ** Cleanup before returning. + */ +balance_cleanup: + sqlite3_free(apCell); + for(i=0; ipgno, nOld, nNew, nCell)); + return rc; +} + +/* +** This routine is called for the root page of a btree when the root +** page contains no cells. This is an opportunity to make the tree +** shallower by one level. +*/ +static int balance_shallower(MemPage *pPage){ + MemPage *pChild; /* The only child page of pPage */ + Pgno pgnoChild; /* Page number for pChild */ + int rc = SQLITE_OK; /* Return code from subprocedures */ + BtShared *pBt; /* The main BTree structure */ + int mxCellPerPage; /* Maximum number of cells per page */ + u8 **apCell; /* All cells from pages being balanced */ + u16 *szCell; /* Local size of all cells */ + + assert( pPage->pParent==0 ); + assert( pPage->nCell==0 ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + pBt = pPage->pBt; + mxCellPerPage = MX_CELL(pBt); + apCell = sqlite3_malloc( mxCellPerPage*(sizeof(u8*)+sizeof(u16)) ); + if( apCell==0 ) return SQLITE_NOMEM; + szCell = (u16*)&apCell[mxCellPerPage]; + if( pPage->leaf ){ + /* The table is completely empty */ + TRACE(("BALANCE: empty table %d\n", pPage->pgno)); + }else{ + /* The root page is empty but has one child. Transfer the + ** information from that one child into the root page if it + ** will fit. This reduces the depth of the tree by one. + ** + ** If the root page is page 1, it has less space available than + ** its child (due to the 100 byte header that occurs at the beginning + ** of the database fle), so it might not be able to hold all of the + ** information currently contained in the child. If this is the + ** case, then do not do the transfer. Leave page 1 empty except + ** for the right-pointer to the child page. The child page becomes + ** the virtual root of the tree. 
+ */
+ pgnoChild = get4byte(&pPage->aData[pPage->hdrOffset+8]);
+ assert( pgnoChild>0 );
+ assert( pgnoChild<=sqlite3PagerPagecount(pPage->pBt->pPager) );
+ rc = sqlite3BtreeGetPage(pPage->pBt, pgnoChild, &pChild, 0);
+ if( rc ) goto end_shallow_balance;
+ if( pPage->pgno==1 ){
+ rc = sqlite3BtreeInitPage(pChild, pPage);
+ if( rc ) goto end_shallow_balance;
+ assert( pChild->nOverflow==0 );
+ if( pChild->nFree>=100 ){
+ /* The child information will fit on the root page, so do the
+ ** copy */
+ int i;
+ zeroPage(pPage, pChild->aData[0]);
+ for(i=0; i<pChild->nCell; i++){
+ apCell[i] = findCell(pChild,i);
+ szCell[i] = cellSizePtr(pChild, apCell[i]);
+ }
+ assemblePage(pPage, pChild->nCell, apCell, szCell);
+ /* Copy the right-pointer of the child to the parent. */
+ put4byte(&pPage->aData[pPage->hdrOffset+8],
+ get4byte(&pChild->aData[pChild->hdrOffset+8]));
+ freePage(pChild);
+ TRACE(("BALANCE: child %d transfer to page 1\n", pChild->pgno));
+ }else{
+ /* The child has more information than will fit on the root.
+ ** The tree is already balanced. Do nothing. */
+ TRACE(("BALANCE: child %d will not fit on page 1\n", pChild->pgno));
+ }
+ }else{
+ memcpy(pPage->aData, pChild->aData, pPage->pBt->usableSize);
+ pPage->isInit = 0;
+ pPage->pParent = 0;
+ rc = sqlite3BtreeInitPage(pPage, 0);
+ assert( rc==SQLITE_OK );
+ freePage(pChild);
+ TRACE(("BALANCE: transfer child %d into root %d\n",
+ pChild->pgno, pPage->pgno));
+ }
+ rc = reparentChildPages(pPage);
+ assert( pPage->nOverflow==0 );
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ int i;
+ for(i=0; i<pPage->nCell; i++){
+ rc = ptrmapPutOvfl(pPage, i);
+ if( rc!=SQLITE_OK ){
+ goto end_shallow_balance;
+ }
+ }
+ }
+#endif
+ releasePage(pChild);
+ }
+end_shallow_balance:
+ sqlite3_free(apCell);
+ return rc;
+}
+
+
+/*
+** The root page is overfull.
+**
+** When this happens, create a new child page and copy the
+** contents of the root into the child. Then make the root
+** page an empty page with rightChild pointing to the new
+** child. Finally, call balance_internal() on the new child
+** to cause it to split.
+*/
+static int balance_deeper(MemPage *pPage){
+ int rc; /* Return value from subprocedures */
+ MemPage *pChild; /* Pointer to a new child page */
+ Pgno pgnoChild; /* Page number of the new child page */
+ BtShared *pBt; /* The BTree */
+ int usableSize; /* Total usable size of a page */
+ u8 *data; /* Content of the parent page */
+ u8 *cdata; /* Content of the child page */
+ int hdr; /* Offset to page header in parent */
+ int brk; /* Offset to content of first cell in parent */
+
+ assert( pPage->pParent==0 );
+ assert( pPage->nOverflow>0 );
+ pBt = pPage->pBt;
+ assert( sqlite3_mutex_held(pBt->mutex) );
+ rc = allocateBtreePage(pBt, &pChild, &pgnoChild, pPage->pgno, 0);
+ if( rc ) return rc;
+ assert( sqlite3PagerIswriteable(pChild->pDbPage) );
+ usableSize = pBt->usableSize;
+ data = pPage->aData;
+ hdr = pPage->hdrOffset;
+ brk = get2byte(&data[hdr+5]);
+ cdata = pChild->aData;
+ memcpy(cdata, &data[hdr], pPage->cellOffset+2*pPage->nCell-hdr);
+ memcpy(&cdata[brk], &data[brk], usableSize-brk);
+ assert( pChild->isInit==0 );
+ rc = sqlite3BtreeInitPage(pChild, pPage);
+ if( rc ) goto balancedeeper_out;
+ memcpy(pChild->aOvfl, pPage->aOvfl, pPage->nOverflow*sizeof(pPage->aOvfl[0]));
+ pChild->nOverflow = pPage->nOverflow;
+ if( pChild->nOverflow ){
+ pChild->nFree = 0;
+ }
+ assert( pChild->nCell==pPage->nCell );
+ zeroPage(pPage, pChild->aData[0] & ~PTF_LEAF);
+ put4byte(&pPage->aData[pPage->hdrOffset+8], pgnoChild);
+ TRACE(("BALANCE: copy root %d into %d\n", pPage->pgno, pChild->pgno));
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ int i;
+ rc = ptrmapPut(pBt, pChild->pgno, PTRMAP_BTREE, pPage->pgno);
+ if( rc ) goto balancedeeper_out;
+ for(i=0; i<pChild->nCell; i++){
+ rc = ptrmapPutOvfl(pChild, i);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+ }
+#endif
+ rc = balance_nonroot(pChild);
+
+balancedeeper_out:
+ releasePage(pChild);
+ return rc;
+}
+
+/*
+** Decide if the page pPage needs to be balanced. If balancing is
+** required, call the appropriate balancing routine.
+*/
+static int balance(MemPage *pPage, int insert){
+ int rc = SQLITE_OK;
+ assert( sqlite3_mutex_held(pPage->pBt->mutex) );
+ if( pPage->pParent==0 ){
+ rc = sqlite3PagerWrite(pPage->pDbPage);
+ if( rc==SQLITE_OK && pPage->nOverflow>0 ){
+ rc = balance_deeper(pPage);
+ }
+ if( rc==SQLITE_OK && pPage->nCell==0 ){
+ rc = balance_shallower(pPage);
+ }
+ }else{
+ if( pPage->nOverflow>0 ||
+ (!insert && pPage->nFree>pPage->pBt->usableSize*2/3) ){
+ rc = balance_nonroot(pPage);
+ }
+ }
+ return rc;
+}
+
+/*
+** This routine checks all cursors that point to table pgnoRoot.
+** If any of those cursors were opened with wrFlag==0 in a different
+** database connection (a database connection that shares the pager
+** cache with the current connection) and that other connection
+** is not in the ReadUncommitted state, then this routine returns
+** SQLITE_LOCKED.
+**
+** In addition to checking for read-locks (where a read-lock
+** means a cursor opened with wrFlag==0) this routine also moves
+** all write cursors so that they are pointing to the
+** first Cell on the root page. This is necessary because an insert
+** or delete might change the number of cells on a page or delete
+** a page entirely and we do not want to leave any cursors
+** pointing to non-existent pages or cells.
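
balance() above reduces to a simple rule for non-root pages: rebalance when the page has overflow cells, or when (outside of an insert) more than two thirds of its usable space is free. Restated as a stand-alone predicate (needs_balance is illustrative only):

    #include <stdbool.h>
    #include <stdio.h>

    /* The rebalancing trigger for a non-root page, over plain numbers: the
    ** page is rebalanced if it holds overflow cells, or if, outside of an
    ** insert, more than two thirds of its usable space is free. */
    static bool needs_balance(int nOverflow, int nFree, int usableSize, int isInsert){
      return nOverflow>0 || (!isInsert && nFree > usableSize*2/3);
    }

    int main(void){
      printf("%d\n", needs_balance(0, 700, 1024, 0));  /* 1: page is too empty   */
      printf("%d\n", needs_balance(0, 700, 1024, 1));  /* 0: not checked on insert */
      return 0;
    }
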
+*/ +static int checkReadLocks(Btree *pBtree, Pgno pgnoRoot, BtCursor *pExclude){ + BtCursor *p; + BtShared *pBt = pBtree->pBt; + sqlite3 *db = pBtree->db; + assert( sqlite3BtreeHoldsMutex(pBtree) ); + for(p=pBt->pCursor; p; p=p->pNext){ + if( p==pExclude ) continue; + if( p->eState!=CURSOR_VALID ) continue; + if( p->pgnoRoot!=pgnoRoot ) continue; + if( p->wrFlag==0 ){ + sqlite3 *dbOther = p->pBtree->db; + if( dbOther==0 || + (dbOther!=db && (dbOther->flags & SQLITE_ReadUncommitted)==0) ){ + return SQLITE_LOCKED; + } + }else if( p->pPage->pgno!=p->pgnoRoot ){ + moveToRoot(p); + } + } + return SQLITE_OK; +} + +/* +** Insert a new record into the BTree. The key is given by (pKey,nKey) +** and the data is given by (pData,nData). The cursor is used only to +** define what table the record should be inserted into. The cursor +** is left pointing at a random location. +** +** For an INTKEY table, only the nKey value of the key is used. pKey is +** ignored. For a ZERODATA table, the pData and nData are both ignored. +*/ +int sqlite3BtreeInsert( + BtCursor *pCur, /* Insert data into the table of this cursor */ + const void *pKey, i64 nKey, /* The key of the new record */ + const void *pData, int nData, /* The data of the new record */ + int nZero, /* Number of extra 0 bytes to append to data */ + int appendBias /* True if this is likely an append */ +){ + int rc; + int loc; + int szNew; + MemPage *pPage; + Btree *p = pCur->pBtree; + BtShared *pBt = p->pBt; + unsigned char *oldCell; + unsigned char *newCell = 0; + + assert( cursorHoldsMutex(pCur) ); + if( pBt->inTransaction!=TRANS_WRITE ){ + /* Must start a transaction before doing an insert */ + rc = pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR; + return rc; + } + assert( !pBt->readOnly ); + if( !pCur->wrFlag ){ + return SQLITE_PERM; /* Cursor not open for writing */ + } + if( checkReadLocks(pCur->pBtree, pCur->pgnoRoot, pCur) ){ + return SQLITE_LOCKED; /* The table pCur points to has a read lock */ + } + if( pCur->eState==CURSOR_FAULT ){ + return pCur->skip; + } + + /* Save the positions of any other cursors open on this table */ + clearCursorPosition(pCur); + if( + SQLITE_OK!=(rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur)) || + SQLITE_OK!=(rc = sqlite3BtreeMoveto(pCur, pKey, nKey, appendBias, &loc)) + ){ + return rc; + } + + pPage = pCur->pPage; + assert( pPage->intKey || nKey>=0 ); + assert( pPage->leaf || !pPage->leafData ); + TRACE(("INSERT: table=%d nkey=%lld ndata=%d page=%d %s\n", + pCur->pgnoRoot, nKey, nData, pPage->pgno, + loc==0 ? 
"overwrite" : "new entry")); + assert( pPage->isInit ); + newCell = sqlite3_malloc( MX_CELL_SIZE(pBt) ); + if( newCell==0 ) return SQLITE_NOMEM; + rc = fillInCell(pPage, newCell, pKey, nKey, pData, nData, nZero, &szNew); + if( rc ) goto end_insert; + assert( szNew==cellSizePtr(pPage, newCell) ); + assert( szNew<=MX_CELL_SIZE(pBt) ); + if( loc==0 && CURSOR_VALID==pCur->eState ){ + u16 szOld; + assert( pCur->idx>=0 && pCur->idxnCell ); + rc = sqlite3PagerWrite(pPage->pDbPage); + if( rc ){ + goto end_insert; + } + oldCell = findCell(pPage, pCur->idx); + if( !pPage->leaf ){ + memcpy(newCell, oldCell, 4); + } + szOld = cellSizePtr(pPage, oldCell); + rc = clearCell(pPage, oldCell); + if( rc ) goto end_insert; + dropCell(pPage, pCur->idx, szOld); + }else if( loc<0 && pPage->nCell>0 ){ + assert( pPage->leaf ); + pCur->idx++; + pCur->info.nSize = 0; + }else{ + assert( pPage->leaf ); + } + rc = insertCell(pPage, pCur->idx, newCell, szNew, 0, 0); + if( rc!=SQLITE_OK ) goto end_insert; + rc = balance(pPage, 1); + /* sqlite3BtreePageDump(pCur->pBt, pCur->pgnoRoot, 1); */ + /* fflush(stdout); */ + if( rc==SQLITE_OK ){ + moveToRoot(pCur); + } +end_insert: + sqlite3_free(newCell); + return rc; +} + +/* +** Delete the entry that the cursor is pointing to. The cursor +** is left pointing at a random location. +*/ +int sqlite3BtreeDelete(BtCursor *pCur){ + MemPage *pPage = pCur->pPage; + unsigned char *pCell; + int rc; + Pgno pgnoChild = 0; + Btree *p = pCur->pBtree; + BtShared *pBt = p->pBt; + + assert( cursorHoldsMutex(pCur) ); + assert( pPage->isInit ); + if( pBt->inTransaction!=TRANS_WRITE ){ + /* Must start a transaction before doing a delete */ + rc = pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR; + return rc; + } + assert( !pBt->readOnly ); + if( pCur->eState==CURSOR_FAULT ){ + return pCur->skip; + } + if( pCur->idx >= pPage->nCell ){ + return SQLITE_ERROR; /* The cursor is not pointing to anything */ + } + if( !pCur->wrFlag ){ + return SQLITE_PERM; /* Did not open this cursor for writing */ + } + if( checkReadLocks(pCur->pBtree, pCur->pgnoRoot, pCur) ){ + return SQLITE_LOCKED; /* The table pCur points to has a read lock */ + } + + /* Restore the current cursor position (a no-op if the cursor is not in + ** CURSOR_REQUIRESEEK state) and save the positions of any other cursors + ** open on the same table. Then call sqlite3PagerWrite() on the page + ** that the entry will be deleted from. + */ + if( + (rc = restoreOrClearCursorPosition(pCur))!=0 || + (rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur))!=0 || + (rc = sqlite3PagerWrite(pPage->pDbPage))!=0 + ){ + return rc; + } + + /* Locate the cell within its page and leave pCell pointing to the + ** data. The clearCell() call frees any overflow pages associated with the + ** cell. The cell itself is still intact. + */ + pCell = findCell(pPage, pCur->idx); + if( !pPage->leaf ){ + pgnoChild = get4byte(pCell); + } + rc = clearCell(pPage, pCell); + if( rc ){ + return rc; + } + + if( !pPage->leaf ){ + /* + ** The entry we are about to delete is not a leaf so if we do not + ** do something we will leave a hole on an internal page. + ** We have to fill the hole by moving in a cell from a leaf. The + ** next Cell after the one to be deleted is guaranteed to exist and + ** to be a leaf so we can use it. 
+ */ + BtCursor leafCur; + unsigned char *pNext; + int notUsed; + unsigned char *tempCell = 0; + assert( !pPage->leafData ); + sqlite3BtreeGetTempCursor(pCur, &leafCur); + rc = sqlite3BtreeNext(&leafCur, ¬Used); + if( rc==SQLITE_OK ){ + rc = sqlite3PagerWrite(leafCur.pPage->pDbPage); + } + if( rc==SQLITE_OK ){ + u16 szNext; + TRACE(("DELETE: table=%d delete internal from %d replace from leaf %d\n", + pCur->pgnoRoot, pPage->pgno, leafCur.pPage->pgno)); + dropCell(pPage, pCur->idx, cellSizePtr(pPage, pCell)); + pNext = findCell(leafCur.pPage, leafCur.idx); + szNext = cellSizePtr(leafCur.pPage, pNext); + assert( MX_CELL_SIZE(pBt)>=szNext+4 ); + tempCell = sqlite3_malloc( MX_CELL_SIZE(pBt) ); + if( tempCell==0 ){ + rc = SQLITE_NOMEM; + } + if( rc==SQLITE_OK ){ + rc = insertCell(pPage, pCur->idx, pNext-4, szNext+4, tempCell, 0); + } + if( rc==SQLITE_OK ){ + put4byte(findOverflowCell(pPage, pCur->idx), pgnoChild); + rc = balance(pPage, 0); + } + if( rc==SQLITE_OK ){ + dropCell(leafCur.pPage, leafCur.idx, szNext); + rc = balance(leafCur.pPage, 0); + } + } + sqlite3_free(tempCell); + sqlite3BtreeReleaseTempCursor(&leafCur); + }else{ + TRACE(("DELETE: table=%d delete from leaf %d\n", + pCur->pgnoRoot, pPage->pgno)); + dropCell(pPage, pCur->idx, cellSizePtr(pPage, pCell)); + rc = balance(pPage, 0); + } + if( rc==SQLITE_OK ){ + moveToRoot(pCur); + } + return rc; +} + +/* +** Create a new BTree table. Write into *piTable the page +** number for the root page of the new table. +** +** The type of type is determined by the flags parameter. Only the +** following values of flags are currently in use. Other values for +** flags might not work: +** +** BTREE_INTKEY|BTREE_LEAFDATA Used for SQL tables with rowid keys +** BTREE_ZERODATA Used for SQL indices +*/ +static int btreeCreateTable(Btree *p, int *piTable, int flags){ + BtShared *pBt = p->pBt; + MemPage *pRoot; + Pgno pgnoRoot; + int rc; + + assert( sqlite3BtreeHoldsMutex(p) ); + if( pBt->inTransaction!=TRANS_WRITE ){ + /* Must start a transaction first */ + rc = pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR; + return rc; + } + assert( !pBt->readOnly ); + +#ifdef SQLITE_OMIT_AUTOVACUUM + rc = allocateBtreePage(pBt, &pRoot, &pgnoRoot, 1, 0); + if( rc ){ + return rc; + } +#else + if( pBt->autoVacuum ){ + Pgno pgnoMove; /* Move a page here to make room for the root-page */ + MemPage *pPageMove; /* The page to move to. */ + + /* Creating a new table may probably require moving an existing database + ** to make room for the new tables root page. In case this page turns + ** out to be an overflow page, delete all overflow page-map caches + ** held by open cursors. + */ + invalidateAllOverflowCache(pBt); + + /* Read the value of meta[3] from the database to determine where the + ** root page of the new table should go. meta[3] is the largest root-page + ** created so far, so the new root-page is (meta[3]+1). + */ + rc = sqlite3BtreeGetMeta(p, 4, &pgnoRoot); + if( rc!=SQLITE_OK ){ + return rc; + } + pgnoRoot++; + + /* The new root-page may not be allocated on a pointer-map page, or the + ** PENDING_BYTE page. + */ + while( pgnoRoot==PTRMAP_PAGENO(pBt, pgnoRoot) || + pgnoRoot==PENDING_BYTE_PAGE(pBt) ){ + pgnoRoot++; + } + assert( pgnoRoot>=3 ); + + /* Allocate a page. The page that currently resides at pgnoRoot will + ** be moved to the allocated page (unless the allocated page happens + ** to reside at pgnoRoot). 
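
The PTRMAP_PAGENO and PTRMAP_ISPAGE tests used above follow from the pointer-map format: in an auto-vacuum database each entry is 5 bytes (a type byte plus a 4-byte parent page number), so one pointer-map page covers usableSize/5 database pages, and the first pointer-map page is page 2. A sketch of that arithmetic under those assumptions (is_ptrmap_page is an illustration, not the macro itself):

    #include <stdio.h>

    /* Decide whether a page number would be a pointer-map page, assuming
    ** 5-byte entries: pointer-map pages sit at 2, 2+(usableSize/5)+1, and
    ** so on, each followed by the pages it describes. */
    static int is_ptrmap_page(unsigned pgno, unsigned usableSize){
      unsigned nEntry = usableSize/5;     /* pages covered per ptrmap page */
      if( pgno<2 ) return 0;
      return ((pgno-2) % (nEntry+1))==0;
    }

    int main(void){
      /* With 1024 usable bytes a pointer-map page covers 204 pages, so
      ** pages 2, 207, 412, ... are pointer-map pages. */
      printf("%d %d %d\n", is_ptrmap_page(2,1024), is_ptrmap_page(3,1024),
                           is_ptrmap_page(207,1024));
      return 0;
    }
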
+ */ + rc = allocateBtreePage(pBt, &pPageMove, &pgnoMove, pgnoRoot, 1); + if( rc!=SQLITE_OK ){ + return rc; + } + + if( pgnoMove!=pgnoRoot ){ + /* pgnoRoot is the page that will be used for the root-page of + ** the new table (assuming an error did not occur). But we were + ** allocated pgnoMove. If required (i.e. if it was not allocated + ** by extending the file), the current page at position pgnoMove + ** is already journaled. + */ + u8 eType; + Pgno iPtrPage; + + releasePage(pPageMove); + + /* Move the page currently at pgnoRoot to pgnoMove. */ + rc = sqlite3BtreeGetPage(pBt, pgnoRoot, &pRoot, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = ptrmapGet(pBt, pgnoRoot, &eType, &iPtrPage); + if( rc!=SQLITE_OK || eType==PTRMAP_ROOTPAGE || eType==PTRMAP_FREEPAGE ){ + releasePage(pRoot); + return rc; + } + assert( eType!=PTRMAP_ROOTPAGE ); + assert( eType!=PTRMAP_FREEPAGE ); + rc = sqlite3PagerWrite(pRoot->pDbPage); + if( rc!=SQLITE_OK ){ + releasePage(pRoot); + return rc; + } + rc = relocatePage(pBt, pRoot, eType, iPtrPage, pgnoMove); + releasePage(pRoot); + + /* Obtain the page at pgnoRoot */ + if( rc!=SQLITE_OK ){ + return rc; + } + rc = sqlite3BtreeGetPage(pBt, pgnoRoot, &pRoot, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = sqlite3PagerWrite(pRoot->pDbPage); + if( rc!=SQLITE_OK ){ + releasePage(pRoot); + return rc; + } + }else{ + pRoot = pPageMove; + } + + /* Update the pointer-map and meta-data with the new root-page number. */ + rc = ptrmapPut(pBt, pgnoRoot, PTRMAP_ROOTPAGE, 0); + if( rc ){ + releasePage(pRoot); + return rc; + } + rc = sqlite3BtreeUpdateMeta(p, 4, pgnoRoot); + if( rc ){ + releasePage(pRoot); + return rc; + } + + }else{ + rc = allocateBtreePage(pBt, &pRoot, &pgnoRoot, 1, 0); + if( rc ) return rc; + } +#endif + assert( sqlite3PagerIswriteable(pRoot->pDbPage) ); + zeroPage(pRoot, flags | PTF_LEAF); + sqlite3PagerUnref(pRoot->pDbPage); + *piTable = (int)pgnoRoot; + return SQLITE_OK; +} +int sqlite3BtreeCreateTable(Btree *p, int *piTable, int flags){ + int rc; + sqlite3BtreeEnter(p); + p->pBt->db = p->db; + rc = btreeCreateTable(p, piTable, flags); + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Erase the given database page and all its children. Return +** the page to the freelist. +*/ +static int clearDatabasePage( + BtShared *pBt, /* The BTree that contains the table */ + Pgno pgno, /* Page number to clear */ + MemPage *pParent, /* Parent page. NULL for the root */ + int freePageFlag /* Deallocate page if true */ +){ + MemPage *pPage = 0; + int rc; + unsigned char *pCell; + int i; + + assert( sqlite3_mutex_held(pBt->mutex) ); + if( pgno>sqlite3PagerPagecount(pBt->pPager) ){ + return SQLITE_CORRUPT_BKPT; + } + + rc = getAndInitPage(pBt, pgno, &pPage, pParent); + if( rc ) goto cleardatabasepage_out; + for(i=0; inCell; i++){ + pCell = findCell(pPage, i); + if( !pPage->leaf ){ + rc = clearDatabasePage(pBt, get4byte(pCell), pPage->pParent, 1); + if( rc ) goto cleardatabasepage_out; + } + rc = clearCell(pPage, pCell); + if( rc ) goto cleardatabasepage_out; + } + if( !pPage->leaf ){ + rc = clearDatabasePage(pBt, get4byte(&pPage->aData[8]), pPage->pParent, 1); + if( rc ) goto cleardatabasepage_out; + } + if( freePageFlag ){ + rc = freePage(pPage); + }else if( (rc = sqlite3PagerWrite(pPage->pDbPage))==0 ){ + zeroPage(pPage, pPage->aData[0] | PTF_LEAF); + } + +cleardatabasepage_out: + releasePage(pPage); + return rc; +} + +/* +** Delete all information from a single table in the database. iTable is +** the page number of the root of the table. 
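
clearDatabasePage(), shown above, is a post-order walk: clear every child reachable from a cell, then the right-most child, then dispose of the page itself. The same shape on a toy in-memory tree (Node and clear_subtree are invented for the illustration; the real routine also clears overflow chains and either frees or zeroes the page depending on freePageFlag):

    #include <stdlib.h>

    typedef struct Node Node;
    struct Node {
      int nChild;       /* number of children */
      Node **aChild;    /* child pointers, nChild of them */
    };

    /* Free an entire subtree, children first, then the node itself. */
    static void clear_subtree(Node *p){
      for(int i=0; i<p->nChild; i++){
        clear_subtree(p->aChild[i]);
      }
      free(p->aChild);
      free(p);
    }

    int main(void){
      /* Build a root with two empty leaves, then tear the tree down. */
      Node *pLeft = calloc(1, sizeof(Node));
      Node *pRight = calloc(1, sizeof(Node));
      Node *pRoot = malloc(sizeof(Node));
      pRoot->nChild = 2;
      pRoot->aChild = malloc(2*sizeof(Node*));
      pRoot->aChild[0] = pLeft;
      pRoot->aChild[1] = pRight;
      clear_subtree(pRoot);
      return 0;
    }
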
After this routine returns, +** the root page is empty, but still exists. +** +** This routine will fail with SQLITE_LOCKED if there are any open +** read cursors on the table. Open write cursors are moved to the +** root of the table. +*/ +int sqlite3BtreeClearTable(Btree *p, int iTable){ + int rc; + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + pBt->db = p->db; + if( p->inTrans!=TRANS_WRITE ){ + rc = pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR; + }else if( (rc = checkReadLocks(p, iTable, 0))!=SQLITE_OK ){ + /* nothing to do */ + }else if( SQLITE_OK!=(rc = saveAllCursors(pBt, iTable, 0)) ){ + /* nothing to do */ + }else{ + rc = clearDatabasePage(pBt, (Pgno)iTable, 0, 0); + } + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Erase all information in a table and add the root of the table to +** the freelist. Except, the root of the principle table (the one on +** page 1) is never added to the freelist. +** +** This routine will fail with SQLITE_LOCKED if there are any open +** cursors on the table. +** +** If AUTOVACUUM is enabled and the page at iTable is not the last +** root page in the database file, then the last root page +** in the database file is moved into the slot formerly occupied by +** iTable and that last slot formerly occupied by the last root page +** is added to the freelist instead of iTable. In this say, all +** root pages are kept at the beginning of the database file, which +** is necessary for AUTOVACUUM to work right. *piMoved is set to the +** page number that used to be the last root page in the file before +** the move. If no page gets moved, *piMoved is set to 0. +** The last root page is recorded in meta[3] and the value of +** meta[3] is updated by this procedure. +*/ +static int btreeDropTable(Btree *p, int iTable, int *piMoved){ + int rc; + MemPage *pPage = 0; + BtShared *pBt = p->pBt; + + assert( sqlite3BtreeHoldsMutex(p) ); + if( p->inTrans!=TRANS_WRITE ){ + return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR; + } + + /* It is illegal to drop a table if any cursors are open on the + ** database. This is because in auto-vacuum mode the backend may + ** need to move another root-page to fill a gap left by the deleted + ** root page. If an open cursor was using this page a problem would + ** occur. + */ + if( pBt->pCursor ){ + return SQLITE_LOCKED; + } + + rc = sqlite3BtreeGetPage(pBt, (Pgno)iTable, &pPage, 0); + if( rc ) return rc; + rc = sqlite3BtreeClearTable(p, iTable); + if( rc ){ + releasePage(pPage); + return rc; + } + + *piMoved = 0; + + if( iTable>1 ){ +#ifdef SQLITE_OMIT_AUTOVACUUM + rc = freePage(pPage); + releasePage(pPage); +#else + if( pBt->autoVacuum ){ + Pgno maxRootPgno; + rc = sqlite3BtreeGetMeta(p, 4, &maxRootPgno); + if( rc!=SQLITE_OK ){ + releasePage(pPage); + return rc; + } + + if( iTable==maxRootPgno ){ + /* If the table being dropped is the table with the largest root-page + ** number in the database, put the root page on the free list. + */ + rc = freePage(pPage); + releasePage(pPage); + if( rc!=SQLITE_OK ){ + return rc; + } + }else{ + /* The table being dropped does not have the largest root-page + ** number in the database. So move the page that does into the + ** gap left by the deleted root-page. 
+ */ + MemPage *pMove; + releasePage(pPage); + rc = sqlite3BtreeGetPage(pBt, maxRootPgno, &pMove, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = relocatePage(pBt, pMove, PTRMAP_ROOTPAGE, 0, iTable); + releasePage(pMove); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = sqlite3BtreeGetPage(pBt, maxRootPgno, &pMove, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = freePage(pMove); + releasePage(pMove); + if( rc!=SQLITE_OK ){ + return rc; + } + *piMoved = maxRootPgno; + } + + /* Set the new 'max-root-page' value in the database header. This + ** is the old value less one, less one more if that happens to + ** be a root-page number, less one again if that is the + ** PENDING_BYTE_PAGE. + */ + maxRootPgno--; + if( maxRootPgno==PENDING_BYTE_PAGE(pBt) ){ + maxRootPgno--; + } + if( maxRootPgno==PTRMAP_PAGENO(pBt, maxRootPgno) ){ + maxRootPgno--; + } + assert( maxRootPgno!=PENDING_BYTE_PAGE(pBt) ); + + rc = sqlite3BtreeUpdateMeta(p, 4, maxRootPgno); + }else{ + rc = freePage(pPage); + releasePage(pPage); + } +#endif + }else{ + /* If sqlite3BtreeDropTable was called on page 1. */ + zeroPage(pPage, PTF_INTKEY|PTF_LEAF ); + releasePage(pPage); + } + return rc; +} +int sqlite3BtreeDropTable(Btree *p, int iTable, int *piMoved){ + int rc; + sqlite3BtreeEnter(p); + p->pBt->db = p->db; + rc = btreeDropTable(p, iTable, piMoved); + sqlite3BtreeLeave(p); + return rc; +} + + +/* +** Read the meta-information out of a database file. Meta[0] +** is the number of free pages currently in the database. Meta[1] +** through meta[15] are available for use by higher layers. Meta[0] +** is read-only, the others are read/write. +** +** The schema layer numbers meta values differently. At the schema +** layer (and the SetCookie and ReadCookie opcodes) the number of +** free pages is not visible. So Cookie[0] is the same as Meta[1]. +*/ +int sqlite3BtreeGetMeta(Btree *p, int idx, u32 *pMeta){ + DbPage *pDbPage; + int rc; + unsigned char *pP1; + BtShared *pBt = p->pBt; + + sqlite3BtreeEnter(p); + pBt->db = p->db; + + /* Reading a meta-data value requires a read-lock on page 1 (and hence + ** the sqlite_master table. We grab this lock regardless of whether or + ** not the SQLITE_ReadUncommitted flag is set (the table rooted at page + ** 1 is treated as a special case by queryTableLock() and lockTable()). + */ + rc = queryTableLock(p, 1, READ_LOCK); + if( rc!=SQLITE_OK ){ + sqlite3BtreeLeave(p); + return rc; + } + + assert( idx>=0 && idx<=15 ); + rc = sqlite3PagerGet(pBt->pPager, 1, &pDbPage); + if( rc ){ + sqlite3BtreeLeave(p); + return rc; + } + pP1 = (unsigned char *)sqlite3PagerGetData(pDbPage); + *pMeta = get4byte(&pP1[36 + idx*4]); + sqlite3PagerUnref(pDbPage); + + /* If autovacuumed is disabled in this build but we are trying to + ** access an autovacuumed database, then make the database readonly. + */ +#ifdef SQLITE_OMIT_AUTOVACUUM + if( idx==4 && *pMeta>0 ) pBt->readOnly = 1; +#endif + + /* Grab the read-lock on page 1. */ + rc = lockTable(p, 1, READ_LOCK); + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Write meta-information back into the database. Meta[0] is +** read-only and may not be written. +*/ +int sqlite3BtreeUpdateMeta(Btree *p, int idx, u32 iMeta){ + BtShared *pBt = p->pBt; + unsigned char *pP1; + int rc; + assert( idx>=1 && idx<=15 ); + sqlite3BtreeEnter(p); + pBt->db = p->db; + if( p->inTrans!=TRANS_WRITE ){ + rc = pBt->readOnly ? 
SQLITE_READONLY : SQLITE_ERROR; + }else{ + assert( pBt->pPage1!=0 ); + pP1 = pBt->pPage1->aData; + rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); + if( rc==SQLITE_OK ){ + put4byte(&pP1[36 + idx*4], iMeta); +#ifndef SQLITE_OMIT_AUTOVACUUM + if( idx==7 ){ + assert( pBt->autoVacuum || iMeta==0 ); + assert( iMeta==0 || iMeta==1 ); + pBt->incrVacuum = iMeta; + } +#endif + } + } + sqlite3BtreeLeave(p); + return rc; +} + +/* +** Return the flag byte at the beginning of the page that the cursor +** is currently pointing to. +*/ +int sqlite3BtreeFlags(BtCursor *pCur){ + /* TODO: What about CURSOR_REQUIRESEEK state? Probably need to call + ** restoreOrClearCursorPosition() here. + */ + MemPage *pPage; + restoreOrClearCursorPosition(pCur); + pPage = pCur->pPage; + assert( cursorHoldsMutex(pCur) ); + assert( pPage->pBt==pCur->pBt ); + return pPage ? pPage->aData[pPage->hdrOffset] : 0; +} + + +/* +** Return the pager associated with a BTree. This routine is used for +** testing and debugging only. +*/ +Pager *sqlite3BtreePager(Btree *p){ + return p->pBt->pPager; +} + +#ifndef SQLITE_OMIT_INTEGRITY_CHECK +/* +** Append a message to the error message string. +*/ +static void checkAppendMsg( + IntegrityCk *pCheck, + char *zMsg1, + const char *zFormat, + ... +){ + va_list ap; + char *zMsg2; + if( !pCheck->mxErr ) return; + pCheck->mxErr--; + pCheck->nErr++; + va_start(ap, zFormat); + zMsg2 = sqlite3VMPrintf(0, zFormat, ap); + va_end(ap); + if( zMsg1==0 ) zMsg1 = ""; + if( pCheck->zErrMsg ){ + char *zOld = pCheck->zErrMsg; + pCheck->zErrMsg = 0; + sqlite3SetString(&pCheck->zErrMsg, zOld, "\n", zMsg1, zMsg2, (char*)0); + sqlite3_free(zOld); + }else{ + sqlite3SetString(&pCheck->zErrMsg, zMsg1, zMsg2, (char*)0); + } + sqlite3_free(zMsg2); +} +#endif /* SQLITE_OMIT_INTEGRITY_CHECK */ + +#ifndef SQLITE_OMIT_INTEGRITY_CHECK +/* +** Add 1 to the reference count for page iPage. If this is the second +** reference to the page, add an error message to pCheck->zErrMsg. +** Return 1 if there are 2 ore more references to the page and 0 if +** if this is the first reference to the page. +** +** Also check that the page number is in bounds. +*/ +static int checkRef(IntegrityCk *pCheck, int iPage, char *zContext){ + if( iPage==0 ) return 1; + if( iPage>pCheck->nPage || iPage<0 ){ + checkAppendMsg(pCheck, zContext, "invalid page number %d", iPage); + return 1; + } + if( pCheck->anRef[iPage]==1 ){ + checkAppendMsg(pCheck, zContext, "2nd reference to page %d", iPage); + return 1; + } + return (pCheck->anRef[iPage]++)>1; +} + +#ifndef SQLITE_OMIT_AUTOVACUUM +/* +** Check that the entry in the pointer-map for page iChild maps to +** page iParent, pointer type ptrType. If not, append an error message +** to pCheck. +*/ +static void checkPtrmap( + IntegrityCk *pCheck, /* Integrity check context */ + Pgno iChild, /* Child page number */ + u8 eType, /* Expected pointer map type */ + Pgno iParent, /* Expected pointer map parent page number */ + char *zContext /* Context description (used for error msg) */ +){ + int rc; + u8 ePtrmapType; + Pgno iPtrmapParent; + + rc = ptrmapGet(pCheck->pBt, iChild, &ePtrmapType, &iPtrmapParent); + if( rc!=SQLITE_OK ){ + checkAppendMsg(pCheck, zContext, "Failed to read ptrmap key=%d", iChild); + return; + } + + if( ePtrmapType!=eType || iPtrmapParent!=iParent ){ + checkAppendMsg(pCheck, zContext, + "Bad ptr map entry key=%d expected=(%d,%d) got=(%d,%d)", + iChild, eType, iParent, ePtrmapType, iPtrmapParent); + } +} +#endif + +/* +** Check the integrity of the freelist or of an overflow page list. 
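**
** [Editorial aside -- illustrative sketch only, not part of the imported
** sources.  For the freelist case, the list starts at the trunk page named
** by bytes 32..35 of page 1 and its total length is recorded at bytes
** 36..39; each trunk page holds a 4-byte "next trunk" pointer followed by a
** 4-byte count of leaf-page numbers.  Assuming pBt->pPage1 has already been
** loaded, counting the freelist by hand looks roughly like this
** (hypothetical helper, no error recovery):
**
**   static int exampleCountFreelist(BtShared *pBt){
**     Pgno iTrunk = get4byte(&pBt->pPage1->aData[32]);
**     int nFree = 0;
**     while( iTrunk!=0 ){
**       DbPage *pDbPage;
**       unsigned char *a;
**       if( sqlite3PagerGet(pBt->pPager, iTrunk, &pDbPage)!=SQLITE_OK ) break;
**       a = (unsigned char *)sqlite3PagerGetData(pDbPage);
**       nFree += 1 + get4byte(&a[4]);   /* this trunk page plus its leaf entries */
**       iTrunk = get4byte(&a[0]);       /* page number of the next trunk page */
**       sqlite3PagerUnref(pDbPage);
**     }
**     return nFree;
**   }
**
** The result should match get4byte(&pBt->pPage1->aData[36]), which is what
** checkList() below verifies page by page. ]
**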
+** Verify that the number of pages on the list is N. +*/ +static void checkList( + IntegrityCk *pCheck, /* Integrity checking context */ + int isFreeList, /* True for a freelist. False for overflow page list */ + int iPage, /* Page number for first page in the list */ + int N, /* Expected number of pages in the list */ + char *zContext /* Context for error messages */ +){ + int i; + int expected = N; + int iFirst = iPage; + while( N-- > 0 && pCheck->mxErr ){ + DbPage *pOvflPage; + unsigned char *pOvflData; + if( iPage<1 ){ + checkAppendMsg(pCheck, zContext, + "%d of %d pages missing from overflow list starting at %d", + N+1, expected, iFirst); + break; + } + if( checkRef(pCheck, iPage, zContext) ) break; + if( sqlite3PagerGet(pCheck->pPager, (Pgno)iPage, &pOvflPage) ){ + checkAppendMsg(pCheck, zContext, "failed to get page %d", iPage); + break; + } + pOvflData = (unsigned char *)sqlite3PagerGetData(pOvflPage); + if( isFreeList ){ + int n = get4byte(&pOvflData[4]); +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pCheck->pBt->autoVacuum ){ + checkPtrmap(pCheck, iPage, PTRMAP_FREEPAGE, 0, zContext); + } +#endif + if( n>pCheck->pBt->usableSize/4-8 ){ + checkAppendMsg(pCheck, zContext, + "freelist leaf count too big on page %d", iPage); + N--; + }else{ + for(i=0; ipBt->autoVacuum ){ + checkPtrmap(pCheck, iFreePage, PTRMAP_FREEPAGE, 0, zContext); + } +#endif + checkRef(pCheck, iFreePage, zContext); + } + N -= n; + } + } +#ifndef SQLITE_OMIT_AUTOVACUUM + else{ + /* If this database supports auto-vacuum and iPage is not the last + ** page in this overflow list, check that the pointer-map entry for + ** the following page matches iPage. + */ + if( pCheck->pBt->autoVacuum && N>0 ){ + i = get4byte(pOvflData); + checkPtrmap(pCheck, i, PTRMAP_OVERFLOW2, iPage, zContext); + } + } +#endif + iPage = get4byte(pOvflData); + sqlite3PagerUnref(pOvflPage); + } +} +#endif /* SQLITE_OMIT_INTEGRITY_CHECK */ + +#ifndef SQLITE_OMIT_INTEGRITY_CHECK +/* +** Do various sanity checks on a single page of a tree. Return +** the tree depth. Root pages return 0. Parents of root pages +** return 1, and so forth. +** +** These checks are done: +** +** 1. Make sure that cells and freeblocks do not overlap +** but combine to completely cover the page. +** NO 2. Make sure cell keys are in order. +** NO 3. Make sure no key is less than or equal to zLowerBound. +** NO 4. Make sure no key is greater than or equal to zUpperBound. +** 5. Check the integrity of overflow pages. +** 6. Recursively call checkTreePage on all children. +** 7. Verify that the depth of all children is the same. +** 8. Make sure this page is at least 33% full or else it is +** the root of the tree. +*/ +static int checkTreePage( + IntegrityCk *pCheck, /* Context for the sanity check */ + int iPage, /* Page number of the page to check */ + MemPage *pParent, /* Parent page */ + char *zParentContext /* Parent context */ +){ + MemPage *pPage; + int i, rc, depth, d2, pgno, cnt; + int hdr, cellStart; + int nCell; + u8 *data; + BtShared *pBt; + int usableSize; + char zContext[100]; + char *hit; + + sqlite3_snprintf(sizeof(zContext), zContext, "Page %d: ", iPage); + + /* Check that the page exists + */ + pBt = pCheck->pBt; + usableSize = pBt->usableSize; + if( iPage==0 ) return 0; + if( checkRef(pCheck, iPage, zParentContext) ) return 0; + if( (rc = sqlite3BtreeGetPage(pBt, (Pgno)iPage, &pPage, 0))!=0 ){ + checkAppendMsg(pCheck, zContext, + "unable to get the page. 
error code=%d", rc); + return 0; + } + if( (rc = sqlite3BtreeInitPage(pPage, pParent))!=0 ){ + checkAppendMsg(pCheck, zContext, + "sqlite3BtreeInitPage() returns error code %d", rc); + releasePage(pPage); + return 0; + } + + /* Check out all the cells. + */ + depth = 0; + for(i=0; inCell && pCheck->mxErr; i++){ + u8 *pCell; + int sz; + CellInfo info; + + /* Check payload overflow pages + */ + sqlite3_snprintf(sizeof(zContext), zContext, + "On tree page %d cell %d: ", iPage, i); + pCell = findCell(pPage,i); + sqlite3BtreeParseCellPtr(pPage, pCell, &info); + sz = info.nData; + if( !pPage->intKey ) sz += info.nKey; + assert( sz==info.nPayload ); + if( sz>info.nLocal ){ + int nPage = (sz - info.nLocal + usableSize - 5)/(usableSize - 4); + Pgno pgnoOvfl = get4byte(&pCell[info.iOverflow]); +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + checkPtrmap(pCheck, pgnoOvfl, PTRMAP_OVERFLOW1, iPage, zContext); + } +#endif + checkList(pCheck, 0, pgnoOvfl, nPage, zContext); + } + + /* Check sanity of left child page. + */ + if( !pPage->leaf ){ + pgno = get4byte(pCell); +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage, zContext); + } +#endif + d2 = checkTreePage(pCheck,pgno,pPage,zContext); + if( i>0 && d2!=depth ){ + checkAppendMsg(pCheck, zContext, "Child page depth differs"); + } + depth = d2; + } + } + if( !pPage->leaf ){ + pgno = get4byte(&pPage->aData[pPage->hdrOffset+8]); + sqlite3_snprintf(sizeof(zContext), zContext, + "On page %d at right child: ", iPage); +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage, 0); + } +#endif + checkTreePage(pCheck, pgno, pPage, zContext); + } + + /* Check for complete coverage of the page + */ + data = pPage->aData; + hdr = pPage->hdrOffset; + hit = sqlite3MallocZero( usableSize ); + if( hit ){ + memset(hit, 1, get2byte(&data[hdr+5])); + nCell = get2byte(&data[hdr+3]); + cellStart = hdr + 12 - 4*pPage->leaf; + for(i=0; i=usableSize || pc<0 ){ + checkAppendMsg(pCheck, 0, + "Corruption detected in cell %d on page %d",i,iPage,0); + }else{ + for(j=pc+size-1; j>=pc; j--) hit[j]++; + } + } + for(cnt=0, i=get2byte(&data[hdr+1]); i>0 && i=usableSize || i<0 ){ + checkAppendMsg(pCheck, 0, + "Corruption detected in cell %d on page %d",i,iPage,0); + }else{ + for(j=i+size-1; j>=i; j--) hit[j]++; + } + i = get2byte(&data[i]); + } + for(i=cnt=0; i1 ){ + checkAppendMsg(pCheck, 0, + "Multiple uses for byte %d of page %d", i, iPage); + break; + } + } + if( cnt!=data[hdr+7] ){ + checkAppendMsg(pCheck, 0, + "Fragmented space is %d byte reported as %d on page %d", + cnt, data[hdr+7], iPage); + } + } + sqlite3_free(hit); + + releasePage(pPage); + return depth+1; +} +#endif /* SQLITE_OMIT_INTEGRITY_CHECK */ + +#ifndef SQLITE_OMIT_INTEGRITY_CHECK +/* +** This routine does a complete check of the given BTree file. aRoot[] is +** an array of pages numbers were each page number is the root page of +** a table. nRoot is the number of entries in aRoot. +** +** If everything checks out, this routine returns NULL. If something is +** amiss, an error message is written into memory obtained from malloc() +** and a pointer to that error message is returned. The calling function +** is responsible for freeing the error message when it is done. 
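**
** [Editorial aside -- a hedged usage sketch, not part of the imported
** sources.  aRoot[] would normally hold the root-page numbers read out of
** sqlite_master (page 1, MASTER_ROOT, is itself always a root), and the
** mxErr argument simply caps the number of messages collected:
**
**   int nErr = 0;
**   int aRoot[] = { 1 };              /* just sqlite_master in this sketch */
**   char *zErr = sqlite3BtreeIntegrityCheck(p, aRoot, 1, 100, &nErr);
**   if( zErr ){
**     /* nErr problems were found; zErr holds the newline-separated text. */
**     sqlite3_free(zErr);             /* the caller owns the message */
**   }
**
** Releasing the string with sqlite3_free() matches how the integrity_check
** pragma consumes it, but treat that detail as an assumption here. ]
**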
+*/ +char *sqlite3BtreeIntegrityCheck( + Btree *p, /* The btree to be checked */ + int *aRoot, /* An array of root pages numbers for individual trees */ + int nRoot, /* Number of entries in aRoot[] */ + int mxErr, /* Stop reporting errors after this many */ + int *pnErr /* Write number of errors seen to this variable */ +){ + int i; + int nRef; + IntegrityCk sCheck; + BtShared *pBt = p->pBt; + + sqlite3BtreeEnter(p); + pBt->db = p->db; + nRef = sqlite3PagerRefcount(pBt->pPager); + if( lockBtreeWithRetry(p)!=SQLITE_OK ){ + sqlite3BtreeLeave(p); + return sqlite3StrDup("Unable to acquire a read lock on the database"); + } + sCheck.pBt = pBt; + sCheck.pPager = pBt->pPager; + sCheck.nPage = sqlite3PagerPagecount(sCheck.pPager); + sCheck.mxErr = mxErr; + sCheck.nErr = 0; + *pnErr = 0; +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->nTrunc!=0 ){ + sCheck.nPage = pBt->nTrunc; + } +#endif + if( sCheck.nPage==0 ){ + unlockBtreeIfUnused(pBt); + sqlite3BtreeLeave(p); + return 0; + } + sCheck.anRef = sqlite3_malloc( (sCheck.nPage+1)*sizeof(sCheck.anRef[0]) ); + if( !sCheck.anRef ){ + unlockBtreeIfUnused(pBt); + *pnErr = 1; + sqlite3BtreeLeave(p); + return sqlite3MPrintf(p->db, "Unable to malloc %d bytes", + (sCheck.nPage+1)*sizeof(sCheck.anRef[0])); + } + for(i=0; i<=sCheck.nPage; i++){ sCheck.anRef[i] = 0; } + i = PENDING_BYTE_PAGE(pBt); + if( i<=sCheck.nPage ){ + sCheck.anRef[i] = 1; + } + sCheck.zErrMsg = 0; + + /* Check the integrity of the freelist + */ + checkList(&sCheck, 1, get4byte(&pBt->pPage1->aData[32]), + get4byte(&pBt->pPage1->aData[36]), "Main freelist: "); + + /* Check all the tables. + */ + for(i=0; iautoVacuum && aRoot[i]>1 ){ + checkPtrmap(&sCheck, aRoot[i], PTRMAP_ROOTPAGE, 0, 0); + } +#endif + checkTreePage(&sCheck, aRoot[i], 0, "List of tree roots: "); + } + + /* Make sure every page in the file is referenced + */ + for(i=1; i<=sCheck.nPage && sCheck.mxErr; i++){ +#ifdef SQLITE_OMIT_AUTOVACUUM + if( sCheck.anRef[i]==0 ){ + checkAppendMsg(&sCheck, 0, "Page %d is never used", i); + } +#else + /* If the database supports auto-vacuum, make sure no tables contain + ** references to pointer-map pages. + */ + if( sCheck.anRef[i]==0 && + (PTRMAP_PAGENO(pBt, i)!=i || !pBt->autoVacuum) ){ + checkAppendMsg(&sCheck, 0, "Page %d is never used", i); + } + if( sCheck.anRef[i]!=0 && + (PTRMAP_PAGENO(pBt, i)==i && pBt->autoVacuum) ){ + checkAppendMsg(&sCheck, 0, "Pointer map page %d is referenced", i); + } +#endif + } + + /* Make sure this analysis did not leave any unref() pages + */ + unlockBtreeIfUnused(pBt); + if( nRef != sqlite3PagerRefcount(pBt->pPager) ){ + checkAppendMsg(&sCheck, 0, + "Outstanding page count goes from %d to %d during this analysis", + nRef, sqlite3PagerRefcount(pBt->pPager) + ); + } + + /* Clean up and report errors. + */ + sqlite3BtreeLeave(p); + sqlite3_free(sCheck.anRef); + *pnErr = sCheck.nErr; + return sCheck.zErrMsg; +} +#endif /* SQLITE_OMIT_INTEGRITY_CHECK */ + +/* +** Return the full pathname of the underlying database file. +** +** The pager filename is invariant as long as the pager is +** open so it is safe to access without the BtShared mutex. +*/ +const char *sqlite3BtreeGetFilename(Btree *p){ + assert( p->pBt->pPager!=0 ); + return sqlite3PagerFilename(p->pBt->pPager); +} + +/* +** Return the pathname of the directory that contains the database file. +** +** The pager directory name is invariant as long as the pager is +** open so it is safe to access without the BtShared mutex. 
+*/ +const char *sqlite3BtreeGetDirname(Btree *p){ + assert( p->pBt->pPager!=0 ); + return sqlite3PagerDirname(p->pBt->pPager); +} + +/* +** Return the pathname of the journal file for this database. The return +** value of this routine is the same regardless of whether the journal file +** has been created or not. +** +** The pager journal filename is invariant as long as the pager is +** open so it is safe to access without the BtShared mutex. +*/ +const char *sqlite3BtreeGetJournalname(Btree *p){ + assert( p->pBt->pPager!=0 ); + return sqlite3PagerJournalname(p->pBt->pPager); +} + +#ifndef SQLITE_OMIT_VACUUM +/* +** Copy the complete content of pBtFrom into pBtTo. A transaction +** must be active for both files. +** +** The size of file pBtFrom may be reduced by this operation. +** If anything goes wrong, the transaction on pBtFrom is rolled back. +*/ +static int btreeCopyFile(Btree *pTo, Btree *pFrom){ + int rc = SQLITE_OK; + Pgno i, nPage, nToPage, iSkip; + + BtShared *pBtTo = pTo->pBt; + BtShared *pBtFrom = pFrom->pBt; + pBtTo->db = pTo->db; + pBtFrom->db = pFrom->db; + + + if( pTo->inTrans!=TRANS_WRITE || pFrom->inTrans!=TRANS_WRITE ){ + return SQLITE_ERROR; + } + if( pBtTo->pCursor ) return SQLITE_BUSY; + nToPage = sqlite3PagerPagecount(pBtTo->pPager); + nPage = sqlite3PagerPagecount(pBtFrom->pPager); + iSkip = PENDING_BYTE_PAGE(pBtTo); + for(i=1; rc==SQLITE_OK && i<=nPage; i++){ + DbPage *pDbPage; + if( i==iSkip ) continue; + rc = sqlite3PagerGet(pBtFrom->pPager, i, &pDbPage); + if( rc ) break; + rc = sqlite3PagerOverwrite(pBtTo->pPager, i, sqlite3PagerGetData(pDbPage)); + sqlite3PagerUnref(pDbPage); + } + + /* If the file is shrinking, journal the pages that are being truncated + ** so that they can be rolled back if the commit fails. + */ + for(i=nPage+1; rc==SQLITE_OK && i<=nToPage; i++){ + DbPage *pDbPage; + if( i==iSkip ) continue; + rc = sqlite3PagerGet(pBtTo->pPager, i, &pDbPage); + if( rc ) break; + rc = sqlite3PagerWrite(pDbPage); + sqlite3PagerDontWrite(pDbPage); + /* Yeah. It seems wierd to call DontWrite() right after Write(). But + ** that is because the names of those procedures do not exactly + ** represent what they do. Write() really means "put this page in the + ** rollback journal and mark it as dirty so that it will be written + ** to the database file later." DontWrite() undoes the second part of + ** that and prevents the page from being written to the database. The + ** page is still on the rollback journal, though. And that is the whole + ** point of this loop: to put pages on the rollback journal. */ + sqlite3PagerUnref(pDbPage); + } + if( !rc && nPagepPager, nPage); + } + + if( rc ){ + sqlite3BtreeRollback(pTo); + } + return rc; +} +int sqlite3BtreeCopyFile(Btree *pTo, Btree *pFrom){ + int rc; + sqlite3BtreeEnter(pTo); + sqlite3BtreeEnter(pFrom); + rc = btreeCopyFile(pTo, pFrom); + sqlite3BtreeLeave(pFrom); + sqlite3BtreeLeave(pTo); + return rc; +} + +#endif /* SQLITE_OMIT_VACUUM */ + +/* +** Return non-zero if a transaction is active. +*/ +int sqlite3BtreeIsInTrans(Btree *p){ + assert( p==0 || sqlite3_mutex_held(p->db->mutex) ); + return (p && (p->inTrans==TRANS_WRITE)); +} + +/* +** Return non-zero if a statement transaction is active. +*/ +int sqlite3BtreeIsInStmt(Btree *p){ + assert( sqlite3BtreeHoldsMutex(p) ); + return (p->pBt && p->pBt->inStmt); +} + +/* +** Return non-zero if a read (or write) transaction is active. 
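**
** [Editorial aside -- not part of the imported sources.  Two observations
** about btreeCopyFile() above.  Its header comment says the transaction on
** pBtFrom is rolled back on failure, but the code rolls back pTo; the code
** appears to be what is intended, since pTo is the file being overwritten.
** Also, both handles must already hold write transactions, so a caller
** (VACUUM is the in-tree user of this interface) looks roughly like:
**
**   rc = sqlite3BtreeBeginTrans(pTo, 1);
**   if( rc==SQLITE_OK ) rc = sqlite3BtreeBeginTrans(pFrom, 1);
**   if( rc==SQLITE_OK ) rc = sqlite3BtreeCopyFile(pTo, pFrom);
**   if( rc==SQLITE_OK ) rc = sqlite3BtreeCommit(pTo);
**   if( rc!=SQLITE_OK ) sqlite3BtreeRollback(pTo);  /* CopyFile may have done this */
**
** Error handling is abbreviated; a real caller also has to finish the
** transaction on pFrom. ]
**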
+*/ +int sqlite3BtreeIsInReadTrans(Btree *p){ + assert( sqlite3_mutex_held(p->db->mutex) ); + return (p && (p->inTrans!=TRANS_NONE)); +} + +/* +** This function returns a pointer to a blob of memory associated with +** a single shared-btree. The memory is used by client code for its own +** purposes (for example, to store a high-level schema associated with +** the shared-btree). The btree layer manages reference counting issues. +** +** The first time this is called on a shared-btree, nBytes bytes of memory +** are allocated, zeroed, and returned to the caller. For each subsequent +** call the nBytes parameter is ignored and a pointer to the same blob +** of memory returned. +** +** Just before the shared-btree is closed, the function passed as the +** xFree argument when the memory allocation was made is invoked on the +** blob of allocated memory. This function should not call sqlite3_free() +** on the memory, the btree layer does that. +*/ +void *sqlite3BtreeSchema(Btree *p, int nBytes, void(*xFree)(void *)){ + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + if( !pBt->pSchema ){ + pBt->pSchema = sqlite3MallocZero(nBytes); + pBt->xFreeSchema = xFree; + } + sqlite3BtreeLeave(p); + return pBt->pSchema; +} + +/* +** Return true if another user of the same shared btree as the argument +** handle holds an exclusive lock on the sqlite_master table. +*/ +int sqlite3BtreeSchemaLocked(Btree *p){ + int rc; + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + rc = (queryTableLock(p, MASTER_ROOT, READ_LOCK)!=SQLITE_OK); + sqlite3BtreeLeave(p); + return rc; +} + + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** Obtain a lock on the table whose root page is iTab. The +** lock is a write lock if isWritelock is true or a read lock +** if it is false. +*/ +int sqlite3BtreeLockTable(Btree *p, int iTab, u8 isWriteLock){ + int rc = SQLITE_OK; + u8 lockType = (isWriteLock?WRITE_LOCK:READ_LOCK); + sqlite3BtreeEnter(p); + rc = queryTableLock(p, iTab, lockType); + if( rc==SQLITE_OK ){ + rc = lockTable(p, iTab, lockType); + } + sqlite3BtreeLeave(p); + return rc; +} +#endif + +#ifndef SQLITE_OMIT_INCRBLOB +/* +** Argument pCsr must be a cursor opened for writing on an +** INTKEY table currently pointing at a valid table entry. +** This function modifies the data stored as part of that entry. +** Only the data content may only be modified, it is not possible +** to change the length of the data stored. +*/ +int sqlite3BtreePutData(BtCursor *pCsr, u32 offset, u32 amt, void *z){ + assert( cursorHoldsMutex(pCsr) ); + assert( sqlite3_mutex_held(pCsr->pBtree->db->mutex) ); + assert(pCsr->isIncrblobHandle); + if( pCsr->eState>=CURSOR_REQUIRESEEK ){ + if( pCsr->eState==CURSOR_FAULT ){ + return pCsr->skip; + }else{ + return SQLITE_ABORT; + } + } + + /* Check some preconditions: + ** (a) the cursor is open for writing, + ** (b) there is no read-lock on the table being modified and + ** (c) the cursor points at a valid row of an intKey table. + */ + if( !pCsr->wrFlag ){ + return SQLITE_READONLY; + } + assert( !pCsr->pBt->readOnly + && pCsr->pBt->inTransaction==TRANS_WRITE ); + if( checkReadLocks(pCsr->pBtree, pCsr->pgnoRoot, pCsr) ){ + return SQLITE_LOCKED; /* The table pCur points to has a read lock */ + } + if( pCsr->eState==CURSOR_INVALID || !pCsr->pPage->intKey ){ + return SQLITE_ERROR; + } + + return accessPayload(pCsr, offset, amt, (unsigned char *)z, 0, 1); +} + +/* +** Set a flag on this cursor to cache the locations of pages from the +** overflow list for the current row. 
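**
** [Editorial aside -- a minimal sketch of the incremental-blob write path,
** not part of the imported sources.  iTable, iRow, n and zBuf are
** hypothetical, error handling is omitted, a write transaction is assumed
** to be open, and passing 0 for the key comparison callback relies on the
** cursor being opened on an INTKEY table:
**
**   BtCursor *pCsr = 0;
**   int res, rc;
**   rc = sqlite3BtreeCursor(p, iTable, 1, 0, 0, &pCsr);    /* wrFlag=1 */
**   if( rc==SQLITE_OK ){
**     rc = sqlite3BtreeMoveto(pCsr, 0, iRow, 0, &res);     /* seek to rowid */
**     if( rc==SQLITE_OK && res==0 ){
**       sqlite3BtreeCacheOverflow(pCsr);      /* mark as an incrblob handle */
**       rc = sqlite3BtreePutData(pCsr, 0, n, zBuf);  /* overwrite first n bytes */
**     }
**     sqlite3BtreeCloseCursor(pCsr);
**   }
**
** sqlite3BtreeCacheOverflow() must be called before sqlite3BtreePutData(),
** since PutData() asserts that the cursor is an incrblob handle. ]
**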
This is used by cursors opened +** for incremental blob IO only. +** +** This function sets a flag only. The actual page location cache +** (stored in BtCursor.aOverflow[]) is allocated and used by function +** accessPayload() (the worker function for sqlite3BtreeData() and +** sqlite3BtreePutData()). +*/ +void sqlite3BtreeCacheOverflow(BtCursor *pCur){ + assert( cursorHoldsMutex(pCur) ); + assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); + assert(!pCur->isIncrblobHandle); + assert(!pCur->aOverflow); + pCur->isIncrblobHandle = 1; +} +#endif Added: external/sqlite-source-3.5.7.x/btree.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/btree.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,203 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This header file defines the interface that the sqlite B-Tree file +** subsystem. See comments in the source code for a detailed description +** of what each interface routine does. +** +** @(#) $Id: btree.h,v 1.94 2007/12/07 18:55:28 drh Exp $ +*/ +#ifndef _BTREE_H_ +#define _BTREE_H_ + +/* TODO: This definition is just included so other modules compile. It +** needs to be revisited. +*/ +#define SQLITE_N_BTREE_META 10 + +/* +** If defined as non-zero, auto-vacuum is enabled by default. Otherwise +** it must be turned on for each database using "PRAGMA auto_vacuum = 1". +*/ +#ifndef SQLITE_DEFAULT_AUTOVACUUM + #define SQLITE_DEFAULT_AUTOVACUUM 0 +#endif + +#define BTREE_AUTOVACUUM_NONE 0 /* Do not do auto-vacuum */ +#define BTREE_AUTOVACUUM_FULL 1 /* Do full auto-vacuum */ +#define BTREE_AUTOVACUUM_INCR 2 /* Incremental vacuum */ + +/* +** Forward declarations of structure +*/ +typedef struct Btree Btree; +typedef struct BtCursor BtCursor; +typedef struct BtShared BtShared; +typedef struct BtreeMutexArray BtreeMutexArray; + +/* +** This structure records all of the Btrees that need to hold +** a mutex before we enter sqlite3VdbeExec(). The Btrees are +** are placed in aBtree[] in order of aBtree[]->pBt. That way, +** we can always lock and unlock them all quickly. +*/ +struct BtreeMutexArray { + int nMutex; + Btree *aBtree[SQLITE_MAX_ATTACHED+1]; +}; + + +int sqlite3BtreeOpen( + const char *zFilename, /* Name of database file to open */ + sqlite3 *db, /* Associated database connection */ + Btree **, /* Return open Btree* here */ + int flags, /* Flags */ + int vfsFlags /* Flags passed through to VFS open */ +); + +/* The flags parameter to sqlite3BtreeOpen can be the bitwise or of the +** following values. +** +** NOTE: These values must match the corresponding PAGER_ values in +** pager.h. +*/ +#define BTREE_OMIT_JOURNAL 1 /* Do not use journal. No argument */ +#define BTREE_NO_READLOCK 2 /* Omit readlocks on readonly files */ +#define BTREE_MEMORY 4 /* In-memory DB. No argument */ +#define BTREE_READONLY 8 /* Open the database in read-only mode */ +#define BTREE_READWRITE 16 /* Open for both reading and writing */ +#define BTREE_CREATE 32 /* Create the database if it does not exist */ + +/* Additional values for the 4th argument of sqlite3BtreeOpen that +** are not associated with PAGER_ values. 
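**
** [Editorial aside -- illustrative only.  The flags argument is a bitwise
** OR of the BTREE_* values above, while vfsFlags is forwarded to the VFS
** xOpen(); the SQLITE_OPEN_* values shown are an assumption about how a
** main database file would typically be opened, and db is an
** already-allocated sqlite3* connection:
**
**   Btree *pBtree = 0;
**   int rc = sqlite3BtreeOpen("test.db", db, &pBtree,
**                BTREE_READWRITE | BTREE_CREATE,
**                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_MAIN_DB);
** ]
**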
+*/ +#define BTREE_PRIVATE 64 /* Never share with other connections */ + +int sqlite3BtreeClose(Btree*); +int sqlite3BtreeSetCacheSize(Btree*,int); +int sqlite3BtreeSetSafetyLevel(Btree*,int,int); +int sqlite3BtreeSyncDisabled(Btree*); +int sqlite3BtreeSetPageSize(Btree*,int,int); +int sqlite3BtreeGetPageSize(Btree*); +int sqlite3BtreeMaxPageCount(Btree*,int); +int sqlite3BtreeGetReserve(Btree*); +int sqlite3BtreeSetAutoVacuum(Btree *, int); +int sqlite3BtreeGetAutoVacuum(Btree *); +int sqlite3BtreeBeginTrans(Btree*,int); +int sqlite3BtreeCommitPhaseOne(Btree*, const char *zMaster); +int sqlite3BtreeCommitPhaseTwo(Btree*); +int sqlite3BtreeCommit(Btree*); +int sqlite3BtreeRollback(Btree*); +int sqlite3BtreeBeginStmt(Btree*); +int sqlite3BtreeCommitStmt(Btree*); +int sqlite3BtreeRollbackStmt(Btree*); +int sqlite3BtreeCreateTable(Btree*, int*, int flags); +int sqlite3BtreeIsInTrans(Btree*); +int sqlite3BtreeIsInStmt(Btree*); +int sqlite3BtreeIsInReadTrans(Btree*); +void *sqlite3BtreeSchema(Btree *, int, void(*)(void *)); +int sqlite3BtreeSchemaLocked(Btree *); +int sqlite3BtreeLockTable(Btree *, int, u8); + +const char *sqlite3BtreeGetFilename(Btree *); +const char *sqlite3BtreeGetDirname(Btree *); +const char *sqlite3BtreeGetJournalname(Btree *); +int sqlite3BtreeCopyFile(Btree *, Btree *); + +int sqlite3BtreeIncrVacuum(Btree *); + +/* The flags parameter to sqlite3BtreeCreateTable can be the bitwise OR +** of the following flags: +*/ +#define BTREE_INTKEY 1 /* Table has only 64-bit signed integer keys */ +#define BTREE_ZERODATA 2 /* Table has keys only - no data */ +#define BTREE_LEAFDATA 4 /* Data stored in leaves only. Implies INTKEY */ + +int sqlite3BtreeDropTable(Btree*, int, int*); +int sqlite3BtreeClearTable(Btree*, int); +int sqlite3BtreeGetMeta(Btree*, int idx, u32 *pValue); +int sqlite3BtreeUpdateMeta(Btree*, int idx, u32 value); +void sqlite3BtreeTripAllCursors(Btree*, int); + +int sqlite3BtreeCursor( + Btree*, /* BTree containing table to open */ + int iTable, /* Index of root page */ + int wrFlag, /* 1 for writing. 
0 for read-only */ + int(*)(void*,int,const void*,int,const void*), /* Key comparison function */ + void*, /* First argument to compare function */ + BtCursor **ppCursor /* Returned cursor */ +); + +int sqlite3BtreeCloseCursor(BtCursor*); +int sqlite3BtreeMoveto(BtCursor*,const void *pKey,i64 nKey,int bias,int *pRes); +int sqlite3BtreeDelete(BtCursor*); +int sqlite3BtreeInsert(BtCursor*, const void *pKey, i64 nKey, + const void *pData, int nData, + int nZero, int bias); +int sqlite3BtreeFirst(BtCursor*, int *pRes); +int sqlite3BtreeLast(BtCursor*, int *pRes); +int sqlite3BtreeNext(BtCursor*, int *pRes); +int sqlite3BtreeEof(BtCursor*); +int sqlite3BtreeFlags(BtCursor*); +int sqlite3BtreePrevious(BtCursor*, int *pRes); +int sqlite3BtreeKeySize(BtCursor*, i64 *pSize); +int sqlite3BtreeKey(BtCursor*, u32 offset, u32 amt, void*); +sqlite3 *sqlite3BtreeCursorDb(const BtCursor*); +const void *sqlite3BtreeKeyFetch(BtCursor*, int *pAmt); +const void *sqlite3BtreeDataFetch(BtCursor*, int *pAmt); +int sqlite3BtreeDataSize(BtCursor*, u32 *pSize); +int sqlite3BtreeData(BtCursor*, u32 offset, u32 amt, void*); + +char *sqlite3BtreeIntegrityCheck(Btree*, int *aRoot, int nRoot, int, int*); +struct Pager *sqlite3BtreePager(Btree*); + +int sqlite3BtreePutData(BtCursor*, u32 offset, u32 amt, void*); +void sqlite3BtreeCacheOverflow(BtCursor *); + +#ifdef SQLITE_TEST +int sqlite3BtreeCursorInfo(BtCursor*, int*, int); +void sqlite3BtreeCursorList(Btree*); +int sqlite3BtreePageDump(Btree*, int, int recursive); +#endif + +/* +** If we are not using shared cache, then there is no need to +** use mutexes to access the BtShared structures. So make the +** Enter and Leave procedures no-ops. +*/ +#if !defined(SQLITE_OMIT_SHARED_CACHE) && SQLITE_THREADSAFE + void sqlite3BtreeEnter(Btree*); + void sqlite3BtreeLeave(Btree*); + int sqlite3BtreeHoldsMutex(Btree*); + void sqlite3BtreeEnterCursor(BtCursor*); + void sqlite3BtreeLeaveCursor(BtCursor*); + void sqlite3BtreeEnterAll(sqlite3*); + void sqlite3BtreeLeaveAll(sqlite3*); + int sqlite3BtreeHoldsAllMutexes(sqlite3*); + void sqlite3BtreeMutexArrayEnter(BtreeMutexArray*); + void sqlite3BtreeMutexArrayLeave(BtreeMutexArray*); + void sqlite3BtreeMutexArrayInsert(BtreeMutexArray*, Btree*); +#else +# define sqlite3BtreeEnter(X) +# define sqlite3BtreeLeave(X) +# define sqlite3BtreeHoldsMutex(X) 1 +# define sqlite3BtreeEnterCursor(X) +# define sqlite3BtreeLeaveCursor(X) +# define sqlite3BtreeEnterAll(X) +# define sqlite3BtreeLeaveAll(X) +# define sqlite3BtreeHoldsAllMutexes(X) 1 +# define sqlite3BtreeMutexArrayEnter(X) +# define sqlite3BtreeMutexArrayLeave(X) +# define sqlite3BtreeMutexArrayInsert(X,Y) +#endif + + +#endif /* _BTREE_H_ */ Added: external/sqlite-source-3.5.7.x/btreeInt.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/btreeInt.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,651 @@ +/* +** 2004 April 6 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** $Id: btreeInt.h,v 1.17 2008/03/04 17:45:01 mlcreech Exp $ +** +** This file implements a external (disk-based) database using BTrees. +** For a detailed discussion of BTrees, refer to +** +** Donald E. 
Knuth, THE ART OF COMPUTER PROGRAMMING, Volume 3: +** "Sorting And Searching", pages 473-480. Addison-Wesley +** Publishing Company, Reading, Massachusetts. +** +** The basic idea is that each page of the file contains N database +** entries and N+1 pointers to subpages. +** +** ---------------------------------------------------------------- +** | Ptr(0) | Key(0) | Ptr(1) | Key(1) | ... | Key(N-1) | Ptr(N) | +** ---------------------------------------------------------------- +** +** All of the keys on the page that Ptr(0) points to have values less +** than Key(0). All of the keys on page Ptr(1) and its subpages have +** values greater than Key(0) and less than Key(1). All of the keys +** on Ptr(N) and its subpages have values greater than Key(N-1). And +** so forth. +** +** Finding a particular key requires reading O(log(M)) pages from the +** disk where M is the number of entries in the tree. +** +** In this implementation, a single file can hold one or more separate +** BTrees. Each BTree is identified by the index of its root page. The +** key and data for any entry are combined to form the "payload". A +** fixed amount of payload can be carried directly on the database +** page. If the payload is larger than the preset amount then surplus +** bytes are stored on overflow pages. The payload for an entry +** and the preceding pointer are combined to form a "Cell". Each +** page has a small header which contains the Ptr(N) pointer and other +** information such as the size of key and data. +** +** FORMAT DETAILS +** +** The file is divided into pages. The first page is called page 1, +** the second is page 2, and so forth. A page number of zero indicates +** "no such page". The page size can be anything between 512 and 65536. +** Each page can be either a btree page, a freelist page or an overflow +** page. +** +** The first page is always a btree page. The first 100 bytes of the first +** page contain a special header (the "file header") that describes the file. +** The format of the file header is as follows: +** +** OFFSET SIZE DESCRIPTION +** 0 16 Header string: "SQLite format 3\000" +** 16 2 Page size in bytes. +** 18 1 File format write version +** 19 1 File format read version +** 20 1 Bytes of unused space at the end of each page +** 21 1 Max embedded payload fraction +** 22 1 Min embedded payload fraction +** 23 1 Min leaf payload fraction +** 24 4 File change counter +** 28 4 Reserved for future use +** 32 4 First freelist page +** 36 4 Number of freelist pages in the file +** 40 60 15 4-byte meta values passed to higher layers +** +** All of the integer values are big-endian (most significant byte first). +** +** The file change counter is incremented when the database is changed +** This counter allows other processes to know when the file has changed +** and thus when they need to flush their cache. +** +** The max embedded payload fraction is the amount of the total usable +** space in a page that can be consumed by a single cell for standard +** B-tree (non-LEAFDATA) tables. A value of 255 means 100%. The default +** is to limit the maximum cell size so that at least 4 cells will fit +** on one page. Thus the default max embedded payload fraction is 64. +** +** If the payload for a cell is larger than the max payload, then extra +** payload is spilled to overflow pages. Once an overflow page is allocated, +** as many bytes as possible are moved into the overflow pages without letting +** the cell size drop below the min embedded payload fraction. 
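**
** [Editorial aside -- a small sketch of decoding the file header documented
** above; all multi-byte fields are big-endian, get4byte() is the helper
** defined later in this header, and pPage1 stands for the MemPage of page 1
** (BtShared.pPage1):
**
**   unsigned char *h = pPage1->aData;             /* the 100-byte file header */
**   int pageSize    = (h[16]<<8) | h[17];         /* bytes 16..17 */
**   int nReserve    = h[20];                      /* unused bytes per page */
**   int usableSize  = pageSize - nReserve;
**   u32 changeCount = get4byte(&h[24]);           /* file change counter */
**   u32 firstTrunk  = get4byte(&h[32]);           /* first freelist trunk page */
**   u32 nFreePages  = get4byte(&h[36]);           /* pages on the freelist */
**   /* h[0..15] must be "SQLite format 3" followed by a NUL byte. */
**
** This is roughly what lockBtree() verifies when a database is first
** opened. ]
**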
+** +** The min leaf payload fraction is like the min embedded payload fraction +** except that it applies to leaf nodes in a LEAFDATA tree. The maximum +** payload fraction for a LEAFDATA tree is always 100% (or 255) and it +** not specified in the header. +** +** Each btree pages is divided into three sections: The header, the +** cell pointer array, and the cell content area. Page 1 also has a 100-byte +** file header that occurs before the page header. +** +** |----------------| +** | file header | 100 bytes. Page 1 only. +** |----------------| +** | page header | 8 bytes for leaves. 12 bytes for interior nodes +** |----------------| +** | cell pointer | | 2 bytes per cell. Sorted order. +** | array | | Grows downward +** | | v +** |----------------| +** | unallocated | +** | space | +** |----------------| ^ Grows upwards +** | cell content | | Arbitrary order interspersed with freeblocks. +** | area | | and free space fragments. +** |----------------| +** +** The page headers looks like this: +** +** OFFSET SIZE DESCRIPTION +** 0 1 Flags. 1: intkey, 2: zerodata, 4: leafdata, 8: leaf +** 1 2 byte offset to the first freeblock +** 3 2 number of cells on this page +** 5 2 first byte of the cell content area +** 7 1 number of fragmented free bytes +** 8 4 Right child (the Ptr(N) value). Omitted on leaves. +** +** The flags define the format of this btree page. The leaf flag means that +** this page has no children. The zerodata flag means that this page carries +** only keys and no data. The intkey flag means that the key is a integer +** which is stored in the key size entry of the cell header rather than in +** the payload area. +** +** The cell pointer array begins on the first byte after the page header. +** The cell pointer array contains zero or more 2-byte numbers which are +** offsets from the beginning of the page to the cell content in the cell +** content area. The cell pointers occur in sorted order. The system strives +** to keep free space after the last cell pointer so that new cells can +** be easily added without having to defragment the page. +** +** Cell content is stored at the very end of the page and grows toward the +** beginning of the page. +** +** Unused space within the cell content area is collected into a linked list of +** freeblocks. Each freeblock is at least 4 bytes in size. The byte offset +** to the first freeblock is given in the header. Freeblocks occur in +** increasing order. Because a freeblock must be at least 4 bytes in size, +** any group of 3 or fewer unused bytes in the cell content area cannot +** exist on the freeblock chain. A group of 3 or fewer free bytes is called +** a fragment. The total number of bytes in all fragments is recorded. +** in the page header at offset 7. +** +** SIZE DESCRIPTION +** 2 Byte offset of the next freeblock +** 2 Bytes in this freeblock +** +** Cells are of variable length. Cells are stored in the cell content area at +** the end of the page. Pointers to the cells are in the cell pointer array +** that immediately follows the page header. Cells is not necessarily +** contiguous or in order, but cell pointers are contiguous and in order. +** +** Cell content makes use of variable length integers. A variable +** length integer is 1 to 9 bytes where the lower 7 bits of each +** byte are used. The integer consists of all bytes that have bit 8 set and +** the first byte with bit 8 clear. The most significant byte of the integer +** appears first. A variable-length integer may not be more than 9 bytes long. 
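**
** [Editorial aside -- a sketch of a decoder for the varint format just
** described; the real implementation is sqlite3GetVarint() in util.c.  As
** the next sentence notes, the ninth byte contributes data bits, but "all 8
** bytes of the 9th byte" should read "all 8 bits"; and in the worked
** examples that follow, the encoding of 0x12345678 appears to need 0x81
** (not 0x8a) as its first byte.
**
**   static int exampleGetVarint(const unsigned char *p, u64 *pResult){
**     u64 v = 0;
**     int i;
**     for(i=0; i<8; i++){
**       v = (v<<7) | (p[i] & 0x7f);
**       if( (p[i] & 0x80)==0 ){
**         *pResult = v;
**         return i+1;                /* number of bytes consumed */
**       }
**     }
**     v = (v<<8) | p[8];             /* 9th byte contributes all 8 bits */
**     *pResult = v;
**     return 9;
**   }
**
** u64 is the 64-bit unsigned typedef from sqliteInt.h. ]
**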
+** As a special case, all 8 bytes of the 9th byte are used as data. This +** allows a 64-bit integer to be encoded in 9 bytes. +** +** 0x00 becomes 0x00000000 +** 0x7f becomes 0x0000007f +** 0x81 0x00 becomes 0x00000080 +** 0x82 0x00 becomes 0x00000100 +** 0x80 0x7f becomes 0x0000007f +** 0x8a 0x91 0xd1 0xac 0x78 becomes 0x12345678 +** 0x81 0x81 0x81 0x81 0x01 becomes 0x10204081 +** +** Variable length integers are used for rowids and to hold the number of +** bytes of key and data in a btree cell. +** +** The content of a cell looks like this: +** +** SIZE DESCRIPTION +** 4 Page number of the left child. Omitted if leaf flag is set. +** var Number of bytes of data. Omitted if the zerodata flag is set. +** var Number of bytes of key. Or the key itself if intkey flag is set. +** * Payload +** 4 First page of the overflow chain. Omitted if no overflow +** +** Overflow pages form a linked list. Each page except the last is completely +** filled with data (pagesize - 4 bytes). The last page can have as little +** as 1 byte of data. +** +** SIZE DESCRIPTION +** 4 Page number of next overflow page +** * Data +** +** Freelist pages come in two subtypes: trunk pages and leaf pages. The +** file header points to the first in a linked list of trunk page. Each trunk +** page points to multiple leaf pages. The content of a leaf page is +** unspecified. A trunk page looks like this: +** +** SIZE DESCRIPTION +** 4 Page number of next trunk page +** 4 Number of leaf pointers on this page +** * zero or more pages numbers of leaves +*/ +#include "sqliteInt.h" +#include "pager.h" +#include "btree.h" +#include "os.h" +#include + +/* Round up a number to the next larger multiple of 8. This is used +** to force 8-byte alignment on 64-bit architectures. +*/ +#define ROUND8(x) ((x+7)&~7) + + +/* The following value is the maximum cell size assuming a maximum page +** size give above. +*/ +#define MX_CELL_SIZE(pBt) (pBt->pageSize-8) + +/* The maximum number of cells on a single page of the database. This +** assumes a minimum cell size of 6 bytes (4 bytes for the cell itself +** plus 2 bytes for the index to the cell in the page header). Such +** small cells will be rare, but they are possible. +*/ +#define MX_CELL(pBt) ((pBt->pageSize-8)/6) + +/* Forward declarations */ +typedef struct MemPage MemPage; +typedef struct BtLock BtLock; + +/* +** This is a magic string that appears at the beginning of every +** SQLite database in order to identify the file as a real database. +** +** You can change this value at compile-time by specifying a +** -DSQLITE_FILE_HEADER="..." on the compiler command-line. The +** header must be exactly 16 bytes including the zero-terminator so +** the string itself should be 15 characters long. If you change +** the header, then your custom library will not be able to read +** databases generated by the standard tools and the standard tools +** will not be able to read databases created by your custom library. +*/ +#ifndef SQLITE_FILE_HEADER /* 123456789 123456 */ +# define SQLITE_FILE_HEADER "SQLite format 3" +#endif + +/* +** Page type flags. An ORed combination of these flags appear as the +** first byte of on-disk image of every BTree page. +*/ +#define PTF_INTKEY 0x01 +#define PTF_ZERODATA 0x02 +#define PTF_LEAFDATA 0x04 +#define PTF_LEAF 0x08 + +/* +** As each page of the file is loaded into memory, an instance of the following +** structure is appended and initialized to zero. This structure stores +** information about the page that is decoded from the raw file page. 
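**
** [Editorial aside -- illustrative only.  The intKey/leaf/zeroData/leafData
** fields of MemPage below are decoded from the flag byte at the start of
** the page header (offset hdrOffset in aData), the same byte returned by
** sqlite3BtreeFlags().  Mapping the combinations the SQL layer actually
** creates would look like this (hypothetical helper):
**
**   static const char *examplePageKind(u8 flags){
**     switch( flags ){
**       case PTF_INTKEY|PTF_LEAFDATA:           return "table interior";  /* 0x05 */
**       case PTF_INTKEY|PTF_LEAFDATA|PTF_LEAF:  return "table leaf";      /* 0x0d */
**       case PTF_ZERODATA:                      return "index interior";  /* 0x02 */
**       case PTF_ZERODATA|PTF_LEAF:             return "index leaf";      /* 0x0a */
**       default:                                return "unexpected flags";
**     }
**   }
**
** The table/index split follows from the BTREE_INTKEY and BTREE_ZERODATA
** flags passed to sqlite3BtreeCreateTable() in btree.h. ]
**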
+** +** The pParent field points back to the parent page. This allows us to +** walk up the BTree from any leaf to the root. Care must be taken to +** unref() the parent page pointer when this page is no longer referenced. +** The pageDestructor() routine handles that chore. +** +** Access to all fields of this structure is controlled by the mutex +** stored in MemPage.pBt->mutex. +*/ +struct MemPage { + u8 isInit; /* True if previously initialized. MUST BE FIRST! */ + u8 idxShift; /* True if Cell indices have changed */ + u8 nOverflow; /* Number of overflow cell bodies in aCell[] */ + u8 intKey; /* True if intkey flag is set */ + u8 leaf; /* True if leaf flag is set */ + u8 zeroData; /* True if table stores keys only */ + u8 leafData; /* True if tables stores data on leaves only */ + u8 hasData; /* True if this page stores data */ + u8 hdrOffset; /* 100 for page 1. 0 otherwise */ + u8 childPtrSize; /* 0 if leaf==1. 4 if leaf==0 */ + u16 maxLocal; /* Copy of BtShared.maxLocal or BtShared.maxLeaf */ + u16 minLocal; /* Copy of BtShared.minLocal or BtShared.minLeaf */ + u16 cellOffset; /* Index in aData of first cell pointer */ + u16 idxParent; /* Index in parent of this node */ + u16 nFree; /* Number of free bytes on the page */ + u16 nCell; /* Number of cells on this page, local and ovfl */ + struct _OvflCell { /* Cells that will not fit on aData[] */ + u8 *pCell; /* Pointers to the body of the overflow cell */ + u16 idx; /* Insert this cell before idx-th non-overflow cell */ + } aOvfl[5]; + BtShared *pBt; /* Pointer to BtShared that this page is part of */ + u8 *aData; /* Pointer to disk image of the page data */ + DbPage *pDbPage; /* Pager page handle */ + Pgno pgno; /* Page number for this page */ + MemPage *pParent; /* The parent of this page. NULL for root */ +}; + +/* +** The in-memory image of a disk page has the auxiliary information appended +** to the end. EXTRA_SIZE is the number of bytes of space needed to hold +** that extra information. +*/ +#define EXTRA_SIZE sizeof(MemPage) + +/* A Btree handle +** +** A database connection contains a pointer to an instance of +** this object for every database file that it has open. This structure +** is opaque to the database connection. The database connection cannot +** see the internals of this structure and only deals with pointers to +** this structure. +** +** For some database files, the same underlying database cache might be +** shared between multiple connections. In that case, each contection +** has it own pointer to this object. But each instance of this object +** points to the same BtShared object. The database cache and the +** schema associated with the database file are all contained within +** the BtShared object. +** +** All fields in this structure are accessed under sqlite3.mutex. +** The pBt pointer itself may not be changed while there exists cursors +** in the referenced BtShared that point back to this Btree since those +** cursors have to do go through this Btree to find their BtShared and +** they often do so without holding sqlite3.mutex. 
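**
** [Editorial aside -- illustrative only.  The nCell, cellOffset, nFree and
** hdrOffset fields of MemPage above cache values that live in the on-disk
** page header documented earlier in this file.  Reading them directly, for
** some cell index i, looks roughly like:
**
**   u8 *data       = pPage->aData;
**   int hdr        = pPage->hdrOffset;              /* 100 on page 1, else 0 */
**   int firstFree  = get2byte(&data[hdr+1]);        /* first freeblock offset */
**   int nCell      = get2byte(&data[hdr+3]);        /* cells on this page */
**   int cellStart  = hdr + (pPage->leaf ? 8 : 12);  /* cell pointer array */
**   int iCell      = get2byte(&data[cellStart+2*i]);/* offset of the i-th cell */
**   Pgno rightmost = pPage->leaf ? 0 : get4byte(&data[hdr+8]);
**
** This is the same arithmetic used by checkTreePage() in btree.c. ]
**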
+*/ +struct Btree { + sqlite3 *db; /* The database connection holding this btree */ + BtShared *pBt; /* Sharable content of this btree */ + u8 inTrans; /* TRANS_NONE, TRANS_READ or TRANS_WRITE */ + u8 sharable; /* True if we can share pBt with another db */ + u8 locked; /* True if db currently has pBt locked */ + int wantToLock; /* Number of nested calls to sqlite3BtreeEnter() */ + Btree *pNext; /* List of other sharable Btrees from the same db */ + Btree *pPrev; /* Back pointer of the same list */ +}; + +/* +** Btree.inTrans may take one of the following values. +** +** If the shared-data extension is enabled, there may be multiple users +** of the Btree structure. At most one of these may open a write transaction, +** but any number may have active read transactions. +*/ +#define TRANS_NONE 0 +#define TRANS_READ 1 +#define TRANS_WRITE 2 + +/* +** An instance of this object represents a single database file. +** +** A single database file can be in use as the same time by two +** or more database connections. When two or more connections are +** sharing the same database file, each connection has it own +** private Btree object for the file and each of those Btrees points +** to this one BtShared object. BtShared.nRef is the number of +** connections currently sharing this database file. +** +** Fields in this structure are accessed under the BtShared.mutex +** mutex, except for nRef and pNext which are accessed under the +** global SQLITE_MUTEX_STATIC_MASTER mutex. The pPager field +** may not be modified once it is initially set as long as nRef>0. +** The pSchema field may be set once under BtShared.mutex and +** thereafter is unchanged as long as nRef>0. +*/ +struct BtShared { + Pager *pPager; /* The page cache */ + sqlite3 *db; /* Database connection currently using this Btree */ + BtCursor *pCursor; /* A list of all open cursors */ + MemPage *pPage1; /* First page of the database */ + u8 inStmt; /* True if we are in a statement subtransaction */ + u8 readOnly; /* True if the underlying file is readonly */ + u8 maxEmbedFrac; /* Maximum payload as % of total page size */ + u8 minEmbedFrac; /* Minimum payload as % of total page size */ + u8 minLeafFrac; /* Minimum leaf payload as % of total page size */ + u8 pageSizeFixed; /* True if the page size can no longer be changed */ +#ifndef SQLITE_OMIT_AUTOVACUUM + u8 autoVacuum; /* True if auto-vacuum is enabled */ + u8 incrVacuum; /* True if incr-vacuum is enabled */ + Pgno nTrunc; /* Non-zero if the db will be truncated (incr vacuum) */ +#endif + u16 pageSize; /* Total number of bytes on a page */ + u16 usableSize; /* Number of usable bytes on each page */ + int maxLocal; /* Maximum local payload in non-LEAFDATA tables */ + int minLocal; /* Minimum local payload in non-LEAFDATA tables */ + int maxLeaf; /* Maximum local payload in a LEAFDATA table */ + int minLeaf; /* Minimum local payload in a LEAFDATA table */ + u8 inTransaction; /* Transaction state */ + int nTransaction; /* Number of open transactions (read + write) */ + void *pSchema; /* Pointer to space allocated by sqlite3BtreeSchema() */ + void (*xFreeSchema)(void*); /* Destructor for BtShared.pSchema */ + sqlite3_mutex *mutex; /* Non-recursive mutex required to access this struct */ + BusyHandler busyHdr; /* The busy handler for this btree */ +#ifndef SQLITE_OMIT_SHARED_CACHE + int nRef; /* Number of references to this structure */ + BtShared *pNext; /* Next on a list of sharable BtShared structs */ + BtLock *pLock; /* List of locks held on this shared-btree struct */ + Btree 
*pExclusive; /* Btree with an EXCLUSIVE lock on the whole db */ +#endif +}; + +/* +** An instance of the following structure is used to hold information +** about a cell. The parseCellPtr() function fills in this structure +** based on information extract from the raw disk page. +*/ +typedef struct CellInfo CellInfo; +struct CellInfo { + u8 *pCell; /* Pointer to the start of cell content */ + i64 nKey; /* The key for INTKEY tables, or number of bytes in key */ + u32 nData; /* Number of bytes of data */ + u32 nPayload; /* Total amount of payload */ + u16 nHeader; /* Size of the cell content header in bytes */ + u16 nLocal; /* Amount of payload held locally */ + u16 iOverflow; /* Offset to overflow page number. Zero if no overflow */ + u16 nSize; /* Size of the cell content on the main b-tree page */ +}; + +/* +** A cursor is a pointer to a particular entry within a particular +** b-tree within a database file. +** +** The entry is identified by its MemPage and the index in +** MemPage.aCell[] of the entry. +** +** When a single database file can shared by two more database connections, +** but cursors cannot be shared. Each cursor is associated with a +** particular database connection identified BtCursor.pBtree.db. +** +** Fields in this structure are accessed under the BtShared.mutex +** found at self->pBt->mutex. +*/ +struct BtCursor { + Btree *pBtree; /* The Btree to which this cursor belongs */ + BtShared *pBt; /* The BtShared this cursor points to */ + BtCursor *pNext, *pPrev; /* Forms a linked list of all cursors */ + int (*xCompare)(void*,int,const void*,int,const void*); /* Key comp func */ + void *pArg; /* First arg to xCompare() */ + Pgno pgnoRoot; /* The root page of this tree */ + MemPage *pPage; /* Page that contains the entry */ + int idx; /* Index of the entry in pPage->aCell[] */ + CellInfo info; /* A parse of the cell we are pointing at */ + u8 wrFlag; /* True if writable */ + u8 eState; /* One of the CURSOR_XXX constants (see below) */ + void *pKey; /* Saved key that was cursor's last known position */ + i64 nKey; /* Size of pKey, or last integer key */ + int skip; /* (skip<0) -> Prev() is a no-op. (skip>0) -> Next() is */ +#ifndef SQLITE_OMIT_INCRBLOB + u8 isIncrblobHandle; /* True if this cursor is an incr. io handle */ + Pgno *aOverflow; /* Cache of overflow page locations */ +#endif +}; + +/* +** Potential values for BtCursor.eState. +** +** CURSOR_VALID: +** Cursor points to a valid entry. getPayload() etc. may be called. +** +** CURSOR_INVALID: +** Cursor does not point to a valid entry. This can happen (for example) +** because the table is empty or because BtreeCursorFirst() has not been +** called. +** +** CURSOR_REQUIRESEEK: +** The table that this cursor was opened on still exists, but has been +** modified since the cursor was last used. The cursor position is saved +** in variables BtCursor.pKey and BtCursor.nKey. When a cursor is in +** this state, restoreOrClearCursorPosition() can be called to attempt to +** seek the cursor to the saved position. +** +** CURSOR_FAULT: +** A unrecoverable error (an I/O error or a malloc failure) has occurred +** on a different connection that shares the BtShared cache with this +** cursor. The error has left the cache in an inconsistent state. +** Do nothing else with this cursor. 
Any attempt to use the cursor +** should return the error code stored in BtCursor.skip +*/ +#define CURSOR_INVALID 0 +#define CURSOR_VALID 1 +#define CURSOR_REQUIRESEEK 2 +#define CURSOR_FAULT 3 + +/* +** The TRACE macro will print high-level status information about the +** btree operation when the global variable sqlite3BtreeTrace is +** enabled. +*/ +#if SQLITE_TEST +# define TRACE(X) if( sqlite3BtreeTrace ){ printf X; fflush(stdout); } +#else +# define TRACE(X) +#endif + +/* +** Routines to read and write variable-length integers. These used to +** be defined locally, but now we use the varint routines in the util.c +** file. +*/ +#define getVarint sqlite3GetVarint +#define getVarint32(A,B) ((*B=*(A))<=0x7f?1:sqlite3GetVarint32(A,B)) +#define putVarint sqlite3PutVarint + +/* The database page the PENDING_BYTE occupies. This page is never used. +** TODO: This macro is very similary to PAGER_MJ_PGNO() in pager.c. They +** should possibly be consolidated (presumably in pager.h). +** +** If disk I/O is omitted (meaning that the database is stored purely +** in memory) then there is no pending byte. +*/ +#ifdef SQLITE_OMIT_DISKIO +# define PENDING_BYTE_PAGE(pBt) 0x7fffffff +#else +# define PENDING_BYTE_PAGE(pBt) ((PENDING_BYTE/(pBt)->pageSize)+1) +#endif + +/* +** A linked list of the following structures is stored at BtShared.pLock. +** Locks are added (or upgraded from READ_LOCK to WRITE_LOCK) when a cursor +** is opened on the table with root page BtShared.iTable. Locks are removed +** from this list when a transaction is committed or rolled back, or when +** a btree handle is closed. +*/ +struct BtLock { + Btree *pBtree; /* Btree handle holding this lock */ + Pgno iTable; /* Root page of table */ + u8 eLock; /* READ_LOCK or WRITE_LOCK */ + BtLock *pNext; /* Next in BtShared.pLock list */ +}; + +/* Candidate values for BtLock.eLock */ +#define READ_LOCK 1 +#define WRITE_LOCK 2 + +/* +** These macros define the location of the pointer-map entry for a +** database page. The first argument to each is the number of usable +** bytes on each page of the database (often 1024). The second is the +** page number to look up in the pointer map. +** +** PTRMAP_PAGENO returns the database page number of the pointer-map +** page that stores the required pointer. PTRMAP_PTROFFSET returns +** the offset of the requested map entry. +** +** If the pgno argument passed to PTRMAP_PAGENO is a pointer-map page, +** then pgno is returned. So (pgno==PTRMAP_PAGENO(pgsz, pgno)) can be +** used to test if pgno is a pointer-map page. PTRMAP_ISPAGE implements +** this test. +*/ +#define PTRMAP_PAGENO(pBt, pgno) ptrmapPageno(pBt, pgno) +#define PTRMAP_PTROFFSET(pBt, pgno) (5*(pgno-ptrmapPageno(pBt, pgno)-1)) +#define PTRMAP_ISPAGE(pBt, pgno) (PTRMAP_PAGENO((pBt),(pgno))==(pgno)) + +/* +** The pointer map is a lookup table that identifies the parent page for +** each child page in the database file. The parent page is the page that +** contains a pointer to the child. Every page in the database contains +** 0 or 1 parent pages. (In this context 'database page' refers +** to any page that is not part of the pointer map itself.) Each pointer map +** entry consists of a single byte 'type' and a 4 byte parent page number. +** The PTRMAP_XXX identifiers below are the valid types. +** +** The purpose of the pointer map is to facility moving pages from one +** position in the file to another as part of autovacuum. When a page +** is moved, the pointer in its parent must be updated to point to the +** new location. 
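**
** [Editorial aside -- a sketch of the geometry behind PTRMAP_PAGENO and
** PTRMAP_PTROFFSET above, mirroring what ptrmapPageno() in btree.c
** computes; treat the exact code as an assumption, though the 5-byte entry
** size follows from the 1-byte type plus 4-byte parent described above:
**
**   static Pgno examplePtrmapPageno(BtShared *pBt, Pgno pgno){
**     int nPagesPerMapPage = (pBt->usableSize/5) + 1; /* map page + pages it covers */
**     Pgno iMap = (pgno-2)/nPagesPerMapPage;          /* first map page is page 2 */
**     Pgno ret  = iMap*nPagesPerMapPage + 2;
**     if( ret==PENDING_BYTE_PAGE(pBt) ) ret++;        /* never the pending-byte page */
**     return ret;
**   }
**
** The matching entry then lives on that page at byte offset
** PTRMAP_PTROFFSET(pBt, pgno), i.e. 5*(pgno - examplePtrmapPageno(pBt,
** pgno) - 1). ]
**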
The pointer map is used to locate the parent page quickly. +** +** PTRMAP_ROOTPAGE: The database page is a root-page. The page-number is not +** used in this case. +** +** PTRMAP_FREEPAGE: The database page is an unused (free) page. The page-number +** is not used in this case. +** +** PTRMAP_OVERFLOW1: The database page is the first page in a list of +** overflow pages. The page number identifies the page that +** contains the cell with a pointer to this overflow page. +** +** PTRMAP_OVERFLOW2: The database page is the second or later page in a list of +** overflow pages. The page-number identifies the previous +** page in the overflow page list. +** +** PTRMAP_BTREE: The database page is a non-root btree page. The page number +** identifies the parent page in the btree. +*/ +#define PTRMAP_ROOTPAGE 1 +#define PTRMAP_FREEPAGE 2 +#define PTRMAP_OVERFLOW1 3 +#define PTRMAP_OVERFLOW2 4 +#define PTRMAP_BTREE 5 + +/* A bunch of assert() statements to check the transaction state variables +** of handle p (type Btree*) are internally consistent. +*/ +#define btreeIntegrity(p) \ + assert( p->pBt->inTransaction!=TRANS_NONE || p->pBt->nTransaction==0 ); \ + assert( p->pBt->inTransaction>=p->inTrans ); + + +/* +** The ISAUTOVACUUM macro is used within balance_nonroot() to determine +** if the database supports auto-vacuum or not. Because it is used +** within an expression that is an argument to another macro +** (sqliteMallocRaw), it is not possible to use conditional compilation. +** So, this macro is defined instead. +*/ +#ifndef SQLITE_OMIT_AUTOVACUUM +#define ISAUTOVACUUM (pBt->autoVacuum) +#else +#define ISAUTOVACUUM 0 +#endif + + +/* +** This structure is passed around through all the sanity checking routines +** in order to keep track of some global state information. +*/ +typedef struct IntegrityCk IntegrityCk; +struct IntegrityCk { + BtShared *pBt; /* The tree being checked out */ + Pager *pPager; /* The associated pager. Also accessible by pBt->pPager */ + int nPage; /* Number of pages in the database */ + int *anRef; /* Number of times each page is referenced */ + int mxErr; /* Stop accumulating errors when this reaches zero */ + char *zErrMsg; /* An error message. NULL if no errors seen. */ + int nErr; /* Number of messages written to zErrMsg so far */ +}; + +/* +** Read or write a two- and four-byte big-endian integer values. +*/ +#define get2byte(x) ((x)[0]<<8 | (x)[1]) +#define put2byte(p,v) ((p)[0] = (v)>>8, (p)[1] = (v)) +#define get4byte sqlite3Get4byte +#define put4byte sqlite3Put4byte + +/* +** Internal routines that should be accessed by the btree layer only. +*/ +int sqlite3BtreeGetPage(BtShared*, Pgno, MemPage**, int); +int sqlite3BtreeInitPage(MemPage *pPage, MemPage *pParent); +void sqlite3BtreeParseCellPtr(MemPage*, u8*, CellInfo*); +void sqlite3BtreeParseCell(MemPage*, int, CellInfo*); +#ifdef SQLITE_TEST +u8 *sqlite3BtreeFindCell(MemPage *pPage, int iCell); +#endif +int sqlite3BtreeRestoreOrClearCursorPosition(BtCursor *pCur); +void sqlite3BtreeGetTempCursor(BtCursor *pCur, BtCursor *pTempCur); +void sqlite3BtreeReleaseTempCursor(BtCursor *pCur); +int sqlite3BtreeIsRootPage(MemPage *pPage); +void sqlite3BtreeMoveToParent(BtCursor *pCur); Added: external/sqlite-source-3.5.7.x/build.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/build.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,3467 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. 
In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains C code routines that are called by the SQLite parser +** when syntax rules are reduced. The routines in this file handle the +** following kinds of SQL syntax: +** +** CREATE TABLE +** DROP TABLE +** CREATE INDEX +** DROP INDEX +** creating ID lists +** BEGIN TRANSACTION +** COMMIT +** ROLLBACK +** +** $Id: build.c,v 1.474 2008/03/06 09:58:50 mlcreech Exp $ +*/ +#include "sqliteInt.h" +#include + +/* +** This routine is called when a new SQL statement is beginning to +** be parsed. Initialize the pParse structure as needed. +*/ +void sqlite3BeginParse(Parse *pParse, int explainFlag){ + pParse->explain = explainFlag; + pParse->nVar = 0; +} + +#ifndef SQLITE_OMIT_SHARED_CACHE +/* +** The TableLock structure is only used by the sqlite3TableLock() and +** codeTableLocks() functions. +*/ +struct TableLock { + int iDb; /* The database containing the table to be locked */ + int iTab; /* The root page of the table to be locked */ + u8 isWriteLock; /* True for write lock. False for a read lock */ + const char *zName; /* Name of the table */ +}; + +/* +** Record the fact that we want to lock a table at run-time. +** +** The table to be locked has root page iTab and is found in database iDb. +** A read or a write lock can be taken depending on isWritelock. +** +** This routine just records the fact that the lock is desired. The +** code to make the lock occur is generated by a later call to +** codeTableLocks() which occurs during sqlite3FinishCoding(). +*/ +void sqlite3TableLock( + Parse *pParse, /* Parsing context */ + int iDb, /* Index of the database containing the table to lock */ + int iTab, /* Root page number of the table to be locked */ + u8 isWriteLock, /* True for a write lock */ + const char *zName /* Name of the table to be locked */ +){ + int i; + int nBytes; + TableLock *p; + + if( iDb<0 ){ + return; + } + + for(i=0; inTableLock; i++){ + p = &pParse->aTableLock[i]; + if( p->iDb==iDb && p->iTab==iTab ){ + p->isWriteLock = (p->isWriteLock || isWriteLock); + return; + } + } + + nBytes = sizeof(TableLock) * (pParse->nTableLock+1); + pParse->aTableLock = + sqlite3DbReallocOrFree(pParse->db, pParse->aTableLock, nBytes); + if( pParse->aTableLock ){ + p = &pParse->aTableLock[pParse->nTableLock++]; + p->iDb = iDb; + p->iTab = iTab; + p->isWriteLock = isWriteLock; + p->zName = zName; + }else{ + pParse->nTableLock = 0; + pParse->db->mallocFailed = 1; + } +} + +/* +** Code an OP_TableLock instruction for each table locked by the +** statement (configured by calls to sqlite3TableLock()). +*/ +static void codeTableLocks(Parse *pParse){ + int i; + Vdbe *pVdbe; + + if( 0==(pVdbe = sqlite3GetVdbe(pParse)) ){ + return; + } + + for(i=0; inTableLock; i++){ + TableLock *p = &pParse->aTableLock[i]; + int p1 = p->iDb; + if( p->isWriteLock ){ + p1 = -1*(p1+1); + } + sqlite3VdbeAddOp4(pVdbe, OP_TableLock, p1, p->iTab, 0, p->zName, P4_STATIC); + } +} +#else + #define codeTableLocks(x) +#endif + +/* +** This routine is called after a single SQL statement has been +** parsed and a VDBE program to execute that statement has been +** prepared. This routine puts the finishing touches on the +** VDBE program and resets the pParse structure for the next +** parse. 
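[Editorial sketch, not part of the checked-in file: a minimal example of how the lock-recording routine sqlite3TableLock() shown above is typically invoked; the argument values are illustrative and pTab is assumed to be a Table* already located in database iDb.]

/* Sketch only: record that the statement needs a read lock (isWriteLock==0)
** on the table rooted at pTab->tnum; the matching OP_TableLock opcodes are
** emitted later by codeTableLocks() during sqlite3FinishCoding().          */
sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName);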
+** +** Note that if an error occurred, it might be the case that +** no VDBE code was generated. +*/ +void sqlite3FinishCoding(Parse *pParse){ + sqlite3 *db; + Vdbe *v; + + db = pParse->db; + if( db->mallocFailed ) return; + if( pParse->nested ) return; + if( pParse->nErr ) return; + if( !pParse->pVdbe ){ + if( pParse->rc==SQLITE_OK && pParse->nErr ){ + pParse->rc = SQLITE_ERROR; + return; + } + } + + /* Begin by generating some termination code at the end of the + ** vdbe program + */ + v = sqlite3GetVdbe(pParse); + if( v ){ + sqlite3VdbeAddOp0(v, OP_Halt); + + /* The cookie mask contains one bit for each database file open. + ** (Bit 0 is for main, bit 1 is for temp, and so forth.) Bits are + ** set for each database that is used. Generate code to start a + ** transaction on each used database and to verify the schema cookie + ** on each used database. + */ + if( pParse->cookieGoto>0 ){ + u32 mask; + int iDb; + sqlite3VdbeJumpHere(v, pParse->cookieGoto-1); + for(iDb=0, mask=1; iDbnDb; mask<<=1, iDb++){ + if( (mask & pParse->cookieMask)==0 ) continue; + sqlite3VdbeUsesBtree(v, iDb); + sqlite3VdbeAddOp2(v,OP_Transaction, iDb, (mask & pParse->writeMask)!=0); + sqlite3VdbeAddOp2(v,OP_VerifyCookie, iDb, pParse->cookieValue[iDb]); + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( pParse->pVirtualLock ){ + char *vtab = (char *)pParse->pVirtualLock->pVtab; + sqlite3VdbeAddOp4(v, OP_VBegin, 0, 0, 0, vtab, P4_VTAB); + } +#endif + + /* Once all the cookies have been verified and transactions opened, + ** obtain the required table-locks. This is a no-op unless the + ** shared-cache feature is enabled. + */ + codeTableLocks(pParse); + sqlite3VdbeAddOp2(v, OP_Goto, 0, pParse->cookieGoto); + } + +#ifndef SQLITE_OMIT_TRACE + if( !db->init.busy ){ + /* Change the P4 argument of the first opcode (which will always be + ** an OP_Trace) to be the complete text of the current SQL statement. + */ + VdbeOp *pOp = sqlite3VdbeGetOp(v, 0); + if( pOp && pOp->opcode==OP_Trace ){ + sqlite3VdbeChangeP4(v, 0, pParse->zSql, pParse->zTail-pParse->zSql); + } + } +#endif /* SQLITE_OMIT_TRACE */ + } + + + /* Get the VDBE program ready for execution + */ + if( v && pParse->nErr==0 && !db->mallocFailed ){ +#ifdef SQLITE_DEBUG + FILE *trace = (db->flags & SQLITE_VdbeTrace)!=0 ? stdout : 0; + sqlite3VdbeTrace(v, trace); +#endif + sqlite3VdbeMakeReady(v, pParse->nVar, pParse->nMem+3, + pParse->nTab+3, pParse->explain); + pParse->rc = SQLITE_DONE; + pParse->colNamesSet = 0; + }else if( pParse->rc==SQLITE_OK ){ + pParse->rc = SQLITE_ERROR; + } + pParse->nTab = 0; + pParse->nMem = 0; + pParse->nSet = 0; + pParse->nVar = 0; + pParse->cookieMask = 0; + pParse->cookieGoto = 0; +} + +/* +** Run the parser and code generator recursively in order to generate +** code for the SQL statement given onto the end of the pParse context +** currently under construction. When the parser is run recursively +** this way, the final OP_Halt is not appended and other initialization +** and finalization steps are omitted because those are handling by the +** outermost parser. +** +** Not everything is nestable. This facility is designed to permit +** INSERT, UPDATE, and DELETE operations against SQLITE_MASTER. Use +** care if you decide to try to use this routine for some other purposes. 
+*/ +void sqlite3NestedParse(Parse *pParse, const char *zFormat, ...){ + va_list ap; + char *zSql; +# define SAVE_SZ (sizeof(Parse) - offsetof(Parse,nVar)) + char saveBuf[SAVE_SZ]; + + if( pParse->nErr ) return; + assert( pParse->nested<10 ); /* Nesting should only be of limited depth */ + va_start(ap, zFormat); + zSql = sqlite3VMPrintf(pParse->db, zFormat, ap); + va_end(ap); + if( zSql==0 ){ + pParse->db->mallocFailed = 1; + return; /* A malloc must have failed */ + } + pParse->nested++; + memcpy(saveBuf, &pParse->nVar, SAVE_SZ); + memset(&pParse->nVar, 0, SAVE_SZ); + sqlite3RunParser(pParse, zSql, 0); + sqlite3_free(zSql); + memcpy(&pParse->nVar, saveBuf, SAVE_SZ); + pParse->nested--; +} + +/* +** Locate the in-memory structure that describes a particular database +** table given the name of that table and (optionally) the name of the +** database containing the table. Return NULL if not found. +** +** If zDatabase is 0, all databases are searched for the table and the +** first matching table is returned. (No checking for duplicate table +** names is done.) The search order is TEMP first, then MAIN, then any +** auxiliary databases added using the ATTACH command. +** +** See also sqlite3LocateTable(). +*/ +Table *sqlite3FindTable(sqlite3 *db, const char *zName, const char *zDatabase){ + Table *p = 0; + int i; + assert( zName!=0 ); + for(i=OMIT_TEMPDB; inDb; i++){ + int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */ + if( zDatabase!=0 && sqlite3StrICmp(zDatabase, db->aDb[j].zName) ) continue; + p = sqlite3HashFind(&db->aDb[j].pSchema->tblHash, zName, strlen(zName)+1); + if( p ) break; + } + return p; +} + +/* +** Locate the in-memory structure that describes a particular database +** table given the name of that table and (optionally) the name of the +** database containing the table. Return NULL if not found. Also leave an +** error message in pParse->zErrMsg. +** +** The difference between this routine and sqlite3FindTable() is that this +** routine leaves an error message in pParse->zErrMsg where +** sqlite3FindTable() does not. +*/ +Table *sqlite3LocateTable( + Parse *pParse, /* context in which to report errors */ + int isView, /* True if looking for a VIEW rather than a TABLE */ + const char *zName, /* Name of the table we are looking for */ + const char *zDbase /* Name of the database. Might be NULL */ +){ + Table *p; + + /* Read the database schema. If an error occurs, leave an error message + ** and code in pParse and return NULL. */ + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ + return 0; + } + + p = sqlite3FindTable(pParse->db, zName, zDbase); + if( p==0 ){ + const char *zMsg = isView ? "no such view" : "no such table"; + if( zDbase ){ + sqlite3ErrorMsg(pParse, "%s: %s.%s", zMsg, zDbase, zName); + }else{ + sqlite3ErrorMsg(pParse, "%s: %s", zMsg, zName); + } + pParse->checkSchema = 1; + } + return p; +} + +/* +** Locate the in-memory structure that describes +** a particular index given the name of that index +** and the name of the database that contains the index. +** Return NULL if not found. +** +** If zDatabase is 0, all databases are searched for the +** table and the first matching index is returned. (No checking +** for duplicate index names is done.) The search order is +** TEMP first, then MAIN, then any auxiliary databases added +** using the ATTACH command. +*/ +Index *sqlite3FindIndex(sqlite3 *db, const char *zName, const char *zDb){ + Index *p = 0; + int i; + for(i=OMIT_TEMPDB; inDb; i++){ + int j = (i<2) ? 
i^1 : i; /* Search TEMP before MAIN */ + Schema *pSchema = db->aDb[j].pSchema; + if( zDb && sqlite3StrICmp(zDb, db->aDb[j].zName) ) continue; + assert( pSchema || (j==1 && !db->aDb[1].pBt) ); + if( pSchema ){ + p = sqlite3HashFind(&pSchema->idxHash, zName, strlen(zName)+1); + } + if( p ) break; + } + return p; +} + +/* +** Reclaim the memory used by an index +*/ +static void freeIndex(Index *p){ + sqlite3_free(p->zColAff); + sqlite3_free(p); +} + +/* +** Remove the given index from the index hash table, and free +** its memory structures. +** +** The index is removed from the database hash tables but +** it is not unlinked from the Table that it indexes. +** Unlinking from the Table must be done by the calling function. +*/ +static void sqliteDeleteIndex(Index *p){ + Index *pOld; + const char *zName = p->zName; + + pOld = sqlite3HashInsert(&p->pSchema->idxHash, zName, strlen( zName)+1, 0); + assert( pOld==0 || pOld==p ); + freeIndex(p); +} + +/* +** For the index called zIdxName which is found in the database iDb, +** unlike that index from its Table then remove the index from +** the index hash table and free all memory structures associated +** with the index. +*/ +void sqlite3UnlinkAndDeleteIndex(sqlite3 *db, int iDb, const char *zIdxName){ + Index *pIndex; + int len; + Hash *pHash = &db->aDb[iDb].pSchema->idxHash; + + len = strlen(zIdxName); + pIndex = sqlite3HashInsert(pHash, zIdxName, len+1, 0); + if( pIndex ){ + if( pIndex->pTable->pIndex==pIndex ){ + pIndex->pTable->pIndex = pIndex->pNext; + }else{ + Index *p; + for(p=pIndex->pTable->pIndex; p && p->pNext!=pIndex; p=p->pNext){} + if( p && p->pNext==pIndex ){ + p->pNext = pIndex->pNext; + } + } + freeIndex(pIndex); + } + db->flags |= SQLITE_InternChanges; +} + +/* +** Erase all schema information from the in-memory hash tables of +** a single database. This routine is called to reclaim memory +** before the database closes. It is also called during a rollback +** if there were schema changes during the transaction or if a +** schema-cookie mismatch occurs. +** +** If iDb<=0 then reset the internal schema tables for all database +** files. If iDb>=2 then reset the internal schema for only the +** single file indicated. +*/ +void sqlite3ResetInternalSchema(sqlite3 *db, int iDb){ + int i, j; + assert( iDb>=0 && iDbnDb ); + + if( iDb==0 ){ + sqlite3BtreeEnterAll(db); + } + for(i=iDb; inDb; i++){ + Db *pDb = &db->aDb[i]; + if( pDb->pSchema ){ + assert(i==1 || (pDb->pBt && sqlite3BtreeHoldsMutex(pDb->pBt))); + sqlite3SchemaFree(pDb->pSchema); + } + if( iDb>0 ) return; + } + assert( iDb==0 ); + db->flags &= ~SQLITE_InternChanges; + sqlite3BtreeLeaveAll(db); + + /* If one or more of the auxiliary database files has been closed, + ** then remove them from the auxiliary database list. We take the + ** opportunity to do this here since we have just deleted all of the + ** schema hash tables and therefore do not have to make any changes + ** to any of those tables. 
+ */ + for(i=0; inDb; i++){ + struct Db *pDb = &db->aDb[i]; + if( pDb->pBt==0 ){ + if( pDb->pAux && pDb->xFreeAux ) pDb->xFreeAux(pDb->pAux); + pDb->pAux = 0; + } + } + for(i=j=2; inDb; i++){ + struct Db *pDb = &db->aDb[i]; + if( pDb->pBt==0 ){ + sqlite3_free(pDb->zName); + pDb->zName = 0; + continue; + } + if( jaDb[j] = db->aDb[i]; + } + j++; + } + memset(&db->aDb[j], 0, (db->nDb-j)*sizeof(db->aDb[j])); + db->nDb = j; + if( db->nDb<=2 && db->aDb!=db->aDbStatic ){ + memcpy(db->aDbStatic, db->aDb, 2*sizeof(db->aDb[0])); + sqlite3_free(db->aDb); + db->aDb = db->aDbStatic; + } +} + +/* +** This routine is called when a commit occurs. +*/ +void sqlite3CommitInternalChanges(sqlite3 *db){ + db->flags &= ~SQLITE_InternChanges; +} + +/* +** Clear the column names from a table or view. +*/ +static void sqliteResetColumnNames(Table *pTable){ + int i; + Column *pCol; + assert( pTable!=0 ); + if( (pCol = pTable->aCol)!=0 ){ + for(i=0; inCol; i++, pCol++){ + sqlite3_free(pCol->zName); + sqlite3ExprDelete(pCol->pDflt); + sqlite3_free(pCol->zType); + sqlite3_free(pCol->zColl); + } + sqlite3_free(pTable->aCol); + } + pTable->aCol = 0; + pTable->nCol = 0; +} + +/* +** Remove the memory data structures associated with the given +** Table. No changes are made to disk by this routine. +** +** This routine just deletes the data structure. It does not unlink +** the table data structure from the hash table. Nor does it remove +** foreign keys from the sqlite.aFKey hash table. But it does destroy +** memory structures of the indices and foreign keys associated with +** the table. +*/ +void sqlite3DeleteTable(Table *pTable){ + Index *pIndex, *pNext; + FKey *pFKey, *pNextFKey; + + if( pTable==0 ) return; + + /* Do not delete the table until the reference count reaches zero. */ + pTable->nRef--; + if( pTable->nRef>0 ){ + return; + } + assert( pTable->nRef==0 ); + + /* Delete all indices associated with this table + */ + for(pIndex = pTable->pIndex; pIndex; pIndex=pNext){ + pNext = pIndex->pNext; + assert( pIndex->pSchema==pTable->pSchema ); + sqliteDeleteIndex(pIndex); + } + +#ifndef SQLITE_OMIT_FOREIGN_KEY + /* Delete all foreign keys associated with this table. The keys + ** should have already been unlinked from the pSchema->aFKey hash table + */ + for(pFKey=pTable->pFKey; pFKey; pFKey=pNextFKey){ + pNextFKey = pFKey->pNextFrom; + assert( sqlite3HashFind(&pTable->pSchema->aFKey, + pFKey->zTo, strlen(pFKey->zTo)+1)!=pFKey ); + sqlite3_free(pFKey); + } +#endif + + /* Delete the Table structure itself. + */ + sqliteResetColumnNames(pTable); + sqlite3_free(pTable->zName); + sqlite3_free(pTable->zColAff); + sqlite3SelectDelete(pTable->pSelect); +#ifndef SQLITE_OMIT_CHECK + sqlite3ExprDelete(pTable->pCheck); +#endif + sqlite3VtabClear(pTable); + sqlite3_free(pTable); +} + +/* +** Unlink the given table from the hash tables and the delete the +** table structure with all its indices and foreign keys. 
+*/ +void sqlite3UnlinkAndDeleteTable(sqlite3 *db, int iDb, const char *zTabName){ + Table *p; + FKey *pF1, *pF2; + Db *pDb; + + assert( db!=0 ); + assert( iDb>=0 && iDbnDb ); + assert( zTabName && zTabName[0] ); + pDb = &db->aDb[iDb]; + p = sqlite3HashInsert(&pDb->pSchema->tblHash, zTabName, strlen(zTabName)+1,0); + if( p ){ +#ifndef SQLITE_OMIT_FOREIGN_KEY + for(pF1=p->pFKey; pF1; pF1=pF1->pNextFrom){ + int nTo = strlen(pF1->zTo) + 1; + pF2 = sqlite3HashFind(&pDb->pSchema->aFKey, pF1->zTo, nTo); + if( pF2==pF1 ){ + sqlite3HashInsert(&pDb->pSchema->aFKey, pF1->zTo, nTo, pF1->pNextTo); + }else{ + while( pF2 && pF2->pNextTo!=pF1 ){ pF2=pF2->pNextTo; } + if( pF2 ){ + pF2->pNextTo = pF1->pNextTo; + } + } + } +#endif + sqlite3DeleteTable(p); + } + db->flags |= SQLITE_InternChanges; +} + +/* +** Given a token, return a string that consists of the text of that +** token with any quotations removed. Space to hold the returned string +** is obtained from sqliteMalloc() and must be freed by the calling +** function. +** +** Tokens are often just pointers into the original SQL text and so +** are not \000 terminated and are not persistent. The returned string +** is \000 terminated and is persistent. +*/ +char *sqlite3NameFromToken(sqlite3 *db, Token *pName){ + char *zName; + if( pName ){ + zName = sqlite3DbStrNDup(db, (char*)pName->z, pName->n); + sqlite3Dequote(zName); + }else{ + zName = 0; + } + return zName; +} + +/* +** Open the sqlite_master table stored in database number iDb for +** writing. The table is opened using cursor 0. +*/ +void sqlite3OpenMasterTable(Parse *p, int iDb){ + Vdbe *v = sqlite3GetVdbe(p); + sqlite3TableLock(p, iDb, MASTER_ROOT, 1, SCHEMA_TABLE(iDb)); + sqlite3VdbeAddOp3(v, OP_OpenWrite, 0, MASTER_ROOT, iDb); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, 0, 5); /* sqlite_master has 5 columns */ +} + +/* +** The token *pName contains the name of a database (either "main" or +** "temp" or the name of an attached db). This routine returns the +** index of the named database in db->aDb[], or -1 if the named db +** does not exist. +*/ +int sqlite3FindDb(sqlite3 *db, Token *pName){ + int i = -1; /* Database number */ + int n; /* Number of characters in the name */ + Db *pDb; /* A database whose name space is being searched */ + char *zName; /* Name we are searching for */ + + zName = sqlite3NameFromToken(db, pName); + if( zName ){ + n = strlen(zName); + for(i=(db->nDb-1), pDb=&db->aDb[i]; i>=0; i--, pDb--){ + if( (!OMIT_TEMPDB || i!=1 ) && n==strlen(pDb->zName) && + 0==sqlite3StrICmp(pDb->zName, zName) ){ + break; + } + } + sqlite3_free(zName); + } + return i; +} + +/* The table or view or trigger name is passed to this routine via tokens +** pName1 and pName2. If the table name was fully qualified, for example: +** +** CREATE TABLE xxx.yyy (...); +** +** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if +** the table name is not fully qualified, i.e.: +** +** CREATE TABLE yyy(...); +** +** Then pName1 is set to "yyy" and pName2 is "". +** +** This routine sets the *ppUnqual pointer to point at the token (pName1 or +** pName2) that stores the unqualified table name. The index of the +** database "xxx" is returned. 
+*/ +int sqlite3TwoPartName( + Parse *pParse, /* Parsing and code generating context */ + Token *pName1, /* The "xxx" in the name "xxx.yyy" or "xxx" */ + Token *pName2, /* The "yyy" in the name "xxx.yyy" */ + Token **pUnqual /* Write the unqualified object name here */ +){ + int iDb; /* Database holding the object */ + sqlite3 *db = pParse->db; + + if( pName2 && pName2->n>0 ){ + assert( !db->init.busy ); + *pUnqual = pName2; + iDb = sqlite3FindDb(db, pName1); + if( iDb<0 ){ + sqlite3ErrorMsg(pParse, "unknown database %T", pName1); + pParse->nErr++; + return -1; + } + }else{ + assert( db->init.iDb==0 || db->init.busy ); + iDb = db->init.iDb; + *pUnqual = pName1; + } + return iDb; +} + +/* +** This routine is used to check if the UTF-8 string zName is a legal +** unqualified name for a new schema object (table, index, view or +** trigger). All names are legal except those that begin with the string +** "sqlite_" (in upper, lower or mixed case). This portion of the namespace +** is reserved for internal use. +*/ +int sqlite3CheckObjectName(Parse *pParse, const char *zName){ + if( !pParse->db->init.busy && pParse->nested==0 + && (pParse->db->flags & SQLITE_WriteSchema)==0 + && 0==sqlite3StrNICmp(zName, "sqlite_", 7) ){ + sqlite3ErrorMsg(pParse, "object name reserved for internal use: %s", zName); + return SQLITE_ERROR; + } + return SQLITE_OK; +} + +/* +** Begin constructing a new table representation in memory. This is +** the first of several action routines that get called in response +** to a CREATE TABLE statement. In particular, this routine is called +** after seeing tokens "CREATE" and "TABLE" and the table name. The isTemp +** flag is true if the table should be stored in the auxiliary database +** file instead of in the main database file. This is normally the case +** when the "TEMP" or "TEMPORARY" keyword occurs in between +** CREATE and TABLE. +** +** The new table record is initialized and put in pParse->pNewTable. +** As more of the CREATE TABLE statement is parsed, additional action +** routines will be called to add more information to this record. +** At the end of the CREATE TABLE statement, the sqlite3EndTable() routine +** is called to complete the construction of the new table record. +*/ +void sqlite3StartTable( + Parse *pParse, /* Parser context */ + Token *pName1, /* First part of the name of the table or view */ + Token *pName2, /* Second part of the name of the table or view */ + int isTemp, /* True if this is a TEMP table */ + int isView, /* True if this is a VIEW */ + int isVirtual, /* True if this is a VIRTUAL table */ + int noErr /* Do nothing if table already exists */ +){ + Table *pTable; + char *zName = 0; /* The name of the new table */ + sqlite3 *db = pParse->db; + Vdbe *v; + int iDb; /* Database number to create the table in */ + Token *pName; /* Unqualified name of the table to create */ + + /* The table or view name to create is passed to this routine via tokens + ** pName1 and pName2. If the table name was fully qualified, for example: + ** + ** CREATE TABLE xxx.yyy (...); + ** + ** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if + ** the table name is not fully qualified, i.e.: + ** + ** CREATE TABLE yyy(...); + ** + ** Then pName1 is set to "yyy" and pName2 is "". + ** + ** The call below sets the pName pointer to point at the token (pName1 or + ** pName2) that stores the unqualified table name. The variable iDb is + ** set to the index of the database that the table or view is to be + ** created in. 
+ */ + iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName); + if( iDb<0 ) return; + if( !OMIT_TEMPDB && isTemp && iDb>1 ){ + /* If creating a temp table, the name may not be qualified */ + sqlite3ErrorMsg(pParse, "temporary table name must be unqualified"); + return; + } + if( !OMIT_TEMPDB && isTemp ) iDb = 1; + + pParse->sNameToken = *pName; + zName = sqlite3NameFromToken(db, pName); + if( zName==0 ) return; + if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ + goto begin_table_error; + } + if( db->init.iDb==1 ) isTemp = 1; +#ifndef SQLITE_OMIT_AUTHORIZATION + assert( (isTemp & 1)==isTemp ); + { + int code; + char *zDb = db->aDb[iDb].zName; + if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(isTemp), 0, zDb) ){ + goto begin_table_error; + } + if( isView ){ + if( !OMIT_TEMPDB && isTemp ){ + code = SQLITE_CREATE_TEMP_VIEW; + }else{ + code = SQLITE_CREATE_VIEW; + } + }else{ + if( !OMIT_TEMPDB && isTemp ){ + code = SQLITE_CREATE_TEMP_TABLE; + }else{ + code = SQLITE_CREATE_TABLE; + } + } + if( !isVirtual && sqlite3AuthCheck(pParse, code, zName, 0, zDb) ){ + goto begin_table_error; + } + } +#endif + + /* Make sure the new table name does not collide with an existing + ** index or table name in the same database. Issue an error message if + ** it does. The exception is if the statement being parsed was passed + ** to an sqlite3_declare_vtab() call. In that case only the column names + ** and types will be used, so there is no need to test for namespace + ** collisions. + */ + if( !IN_DECLARE_VTAB ){ + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ + goto begin_table_error; + } + pTable = sqlite3FindTable(db, zName, db->aDb[iDb].zName); + if( pTable ){ + if( !noErr ){ + sqlite3ErrorMsg(pParse, "table %T already exists", pName); + } + goto begin_table_error; + } + if( sqlite3FindIndex(db, zName, 0)!=0 && (iDb==0 || !db->init.busy) ){ + sqlite3ErrorMsg(pParse, "there is already an index named %s", zName); + goto begin_table_error; + } + } + + pTable = sqlite3DbMallocZero(db, sizeof(Table)); + if( pTable==0 ){ + db->mallocFailed = 1; + pParse->rc = SQLITE_NOMEM; + pParse->nErr++; + goto begin_table_error; + } + pTable->zName = zName; + pTable->iPKey = -1; + pTable->pSchema = db->aDb[iDb].pSchema; + pTable->nRef = 1; + if( pParse->pNewTable ) sqlite3DeleteTable(pParse->pNewTable); + pParse->pNewTable = pTable; + + /* If this is the magic sqlite_sequence table used by autoincrement, + ** then record a pointer to this table in the main database structure + ** so that INSERT can find the table easily. + */ +#ifndef SQLITE_OMIT_AUTOINCREMENT + if( !pParse->nested && strcmp(zName, "sqlite_sequence")==0 ){ + pTable->pSchema->pSeqTab = pTable; + } +#endif + + /* Begin generating the code that will insert the table record into + ** the SQLITE_MASTER table. Note in particular that we must go ahead + ** and allocate the record number for the table entry now. Before any + ** PRIMARY KEY or UNIQUE keywords are parsed. Those keywords will cause + ** indices to be created and the table record must come before the + ** indices. Hence, the record number for the table must be allocated + ** now. + */ + if( !db->init.busy && (v = sqlite3GetVdbe(pParse))!=0 ){ + int j1; + int fileFormat; + int reg1, reg2, reg3; + sqlite3BeginWriteOperation(pParse, 0, iDb); + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( isVirtual ){ + sqlite3VdbeAddOp0(v, OP_VBegin); + } +#endif + + /* If the file format and encoding in the database have not been set, + ** set them now. 
+ */ + reg1 = pParse->regRowid = ++pParse->nMem; + reg2 = pParse->regRoot = ++pParse->nMem; + reg3 = ++pParse->nMem; + sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, reg3, 1); /* file_format */ + sqlite3VdbeUsesBtree(v, iDb); + j1 = sqlite3VdbeAddOp1(v, OP_If, reg3); + fileFormat = (db->flags & SQLITE_LegacyFileFmt)!=0 ? + 1 : SQLITE_MAX_FILE_FORMAT; + sqlite3VdbeAddOp2(v, OP_Integer, fileFormat, reg3); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, 1, reg3); + sqlite3VdbeAddOp2(v, OP_Integer, ENC(db), reg3); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, 4, reg3); + sqlite3VdbeJumpHere(v, j1); + + /* This just creates a place-holder record in the sqlite_master table. + ** The record created does not contain anything yet. It will be replaced + ** by the real entry in code generated at sqlite3EndTable(). + ** + ** The rowid for the new entry is left on the top of the stack. + ** The rowid value is needed by the code that sqlite3EndTable will + ** generate. + */ +#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) + if( isView || isVirtual ){ + sqlite3VdbeAddOp2(v, OP_Integer, 0, reg2); + }else +#endif + { + sqlite3VdbeAddOp2(v, OP_CreateTable, iDb, reg2); + } + sqlite3OpenMasterTable(pParse, iDb); + sqlite3VdbeAddOp2(v, OP_NewRowid, 0, reg1); + sqlite3VdbeAddOp2(v, OP_Null, 0, reg3); + sqlite3VdbeAddOp3(v, OP_Insert, 0, reg3, reg1); + sqlite3VdbeChangeP5(v, OPFLAG_APPEND); + sqlite3VdbeAddOp0(v, OP_Close); + } + + /* Normal (non-error) return. */ + return; + + /* If an error occurs, we jump here */ +begin_table_error: + sqlite3_free(zName); + return; +} + +/* +** This macro is used to compare two strings in a case-insensitive manner. +** It is slightly faster than calling sqlite3StrICmp() directly, but +** produces larger code. +** +** WARNING: This macro is not compatible with the strcmp() family. It +** returns true if the two strings are equal, otherwise false. +*/ +#define STRICMP(x, y) (\ +sqlite3UpperToLower[*(unsigned char *)(x)]== \ +sqlite3UpperToLower[*(unsigned char *)(y)] \ +&& sqlite3StrICmp((x)+1,(y)+1)==0 ) + +/* +** Add a new column to the table currently being constructed. +** +** The parser calls this routine once for each column declaration +** in a CREATE TABLE statement. sqlite3StartTable() gets called +** first to get things going. Then this routine is called for each +** column. +*/ +void sqlite3AddColumn(Parse *pParse, Token *pName){ + Table *p; + int i; + char *z; + Column *pCol; + if( (p = pParse->pNewTable)==0 ) return; + if( p->nCol+1>SQLITE_MAX_COLUMN ){ + sqlite3ErrorMsg(pParse, "too many columns on %s", p->zName); + return; + } + z = sqlite3NameFromToken(pParse->db, pName); + if( z==0 ) return; + for(i=0; inCol; i++){ + if( STRICMP(z, p->aCol[i].zName) ){ + sqlite3ErrorMsg(pParse, "duplicate column name: %s", z); + sqlite3_free(z); + return; + } + } + if( (p->nCol & 0x7)==0 ){ + Column *aNew; + aNew = sqlite3DbRealloc(pParse->db,p->aCol,(p->nCol+8)*sizeof(p->aCol[0])); + if( aNew==0 ){ + sqlite3_free(z); + return; + } + p->aCol = aNew; + } + pCol = &p->aCol[p->nCol]; + memset(pCol, 0, sizeof(p->aCol[0])); + pCol->zName = z; + + /* If there is no type specified, columns have the default affinity + ** 'NONE'. If there is a type specified, then sqlite3AddColumnType() will + ** be called next to set pCol->affinity correctly. + */ + pCol->affinity = SQLITE_AFF_NONE; + p->nCol++; +} + +/* +** This routine is called by the parser while in the middle of +** parsing a CREATE TABLE statement. A "NOT NULL" constraint has +** been seen on a column. 
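[Editorial illustration, not part of the checked-in file: the STRICMP() macro defined above returns true on equality, the opposite convention from strcmp(); the names below are invented and <assert.h> is assumed.]

/* Sketch only: STRICMP() is a case-insensitive equality test, not an
** ordering function.                                                  */
assert(  STRICMP("rowid", "ROWID") );   /* equal, ignoring case */
assert( !STRICMP("rowid", "oid") );     /* not equal            */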
This routine sets the notNull flag on +** the column currently under construction. +*/ +void sqlite3AddNotNull(Parse *pParse, int onError){ + Table *p; + int i; + if( (p = pParse->pNewTable)==0 ) return; + i = p->nCol-1; + if( i>=0 ) p->aCol[i].notNull = onError; +} + +/* +** Scan the column type name zType (length nType) and return the +** associated affinity type. +** +** This routine does a case-independent search of zType for the +** substrings in the following table. If one of the substrings is +** found, the corresponding affinity is returned. If zType contains +** more than one of the substrings, entries toward the top of +** the table take priority. For example, if zType is 'BLOBINT', +** SQLITE_AFF_INTEGER is returned. +** +** Substring | Affinity +** -------------------------------- +** 'INT' | SQLITE_AFF_INTEGER +** 'CHAR' | SQLITE_AFF_TEXT +** 'CLOB' | SQLITE_AFF_TEXT +** 'TEXT' | SQLITE_AFF_TEXT +** 'BLOB' | SQLITE_AFF_NONE +** 'REAL' | SQLITE_AFF_REAL +** 'FLOA' | SQLITE_AFF_REAL +** 'DOUB' | SQLITE_AFF_REAL +** +** If none of the substrings in the above table are found, +** SQLITE_AFF_NUMERIC is returned. +*/ +char sqlite3AffinityType(const Token *pType){ + u32 h = 0; + char aff = SQLITE_AFF_NUMERIC; + const unsigned char *zIn = pType->z; + const unsigned char *zEnd = &pType->z[pType->n]; + + while( zIn!=zEnd ){ + h = (h<<8) + sqlite3UpperToLower[*zIn]; + zIn++; + if( h==(('c'<<24)+('h'<<16)+('a'<<8)+'r') ){ /* CHAR */ + aff = SQLITE_AFF_TEXT; + }else if( h==(('c'<<24)+('l'<<16)+('o'<<8)+'b') ){ /* CLOB */ + aff = SQLITE_AFF_TEXT; + }else if( h==(('t'<<24)+('e'<<16)+('x'<<8)+'t') ){ /* TEXT */ + aff = SQLITE_AFF_TEXT; + }else if( h==(('b'<<24)+('l'<<16)+('o'<<8)+'b') /* BLOB */ + && (aff==SQLITE_AFF_NUMERIC || aff==SQLITE_AFF_REAL) ){ + aff = SQLITE_AFF_NONE; +#ifndef SQLITE_OMIT_FLOATING_POINT + }else if( h==(('r'<<24)+('e'<<16)+('a'<<8)+'l') /* REAL */ + && aff==SQLITE_AFF_NUMERIC ){ + aff = SQLITE_AFF_REAL; + }else if( h==(('f'<<24)+('l'<<16)+('o'<<8)+'a') /* FLOA */ + && aff==SQLITE_AFF_NUMERIC ){ + aff = SQLITE_AFF_REAL; + }else if( h==(('d'<<24)+('o'<<16)+('u'<<8)+'b') /* DOUB */ + && aff==SQLITE_AFF_NUMERIC ){ + aff = SQLITE_AFF_REAL; +#endif + }else if( (h&0x00FFFFFF)==(('i'<<16)+('n'<<8)+'t') ){ /* INT */ + aff = SQLITE_AFF_INTEGER; + break; + } + } + + return aff; +} + +/* +** This routine is called by the parser while in the middle of +** parsing a CREATE TABLE statement. The pFirst token is the first +** token in the sequence of tokens that describe the type of the +** column currently under construction. pLast is the last token +** in the sequence. Use this information to construct a string +** that contains the typename of the column and store that string +** in zType. +*/ +void sqlite3AddColumnType(Parse *pParse, Token *pType){ + Table *p; + int i; + Column *pCol; + + if( (p = pParse->pNewTable)==0 ) return; + i = p->nCol-1; + if( i<0 ) return; + pCol = &p->aCol[i]; + sqlite3_free(pCol->zType); + pCol->zType = sqlite3NameFromToken(pParse->db, pType); + pCol->affinity = sqlite3AffinityType(pType); +} + +/* +** The expression is the default value for the most recently added column +** of the table currently under construction. +** +** Default value expressions must be constant. Raise an exception if this +** is not the case. +** +** This routine is called by the parser while in the middle of +** parsing a CREATE TABLE statement. 
+*/ +void sqlite3AddDefaultValue(Parse *pParse, Expr *pExpr){ + Table *p; + Column *pCol; + if( (p = pParse->pNewTable)!=0 ){ + pCol = &(p->aCol[p->nCol-1]); + if( !sqlite3ExprIsConstantOrFunction(pExpr) ){ + sqlite3ErrorMsg(pParse, "default value of column [%s] is not constant", + pCol->zName); + }else{ + Expr *pCopy; + sqlite3 *db = pParse->db; + sqlite3ExprDelete(pCol->pDflt); + pCol->pDflt = pCopy = sqlite3ExprDup(db, pExpr); + if( pCopy ){ + sqlite3TokenCopy(db, &pCopy->span, &pExpr->span); + } + } + } + sqlite3ExprDelete(pExpr); +} + +/* +** Designate the PRIMARY KEY for the table. pList is a list of names +** of columns that form the primary key. If pList is NULL, then the +** most recently added column of the table is the primary key. +** +** A table can have at most one primary key. If the table already has +** a primary key (and this is the second primary key) then create an +** error. +** +** If the PRIMARY KEY is on a single column whose datatype is INTEGER, +** then we will try to use that column as the rowid. Set the Table.iPKey +** field of the table under construction to be the index of the +** INTEGER PRIMARY KEY column. Table.iPKey is set to -1 if there is +** no INTEGER PRIMARY KEY. +** +** If the key is not an INTEGER PRIMARY KEY, then create a unique +** index for the key. No index is created for INTEGER PRIMARY KEYs. +*/ +void sqlite3AddPrimaryKey( + Parse *pParse, /* Parsing context */ + ExprList *pList, /* List of field names to be indexed */ + int onError, /* What to do with a uniqueness conflict */ + int autoInc, /* True if the AUTOINCREMENT keyword is present */ + int sortOrder /* SQLITE_SO_ASC or SQLITE_SO_DESC */ +){ + Table *pTab = pParse->pNewTable; + char *zType = 0; + int iCol = -1, i; + if( pTab==0 || IN_DECLARE_VTAB ) goto primary_key_exit; + if( pTab->hasPrimKey ){ + sqlite3ErrorMsg(pParse, + "table \"%s\" has more than one primary key", pTab->zName); + goto primary_key_exit; + } + pTab->hasPrimKey = 1; + if( pList==0 ){ + iCol = pTab->nCol - 1; + pTab->aCol[iCol].isPrimKey = 1; + }else{ + for(i=0; inExpr; i++){ + for(iCol=0; iColnCol; iCol++){ + if( sqlite3StrICmp(pList->a[i].zName, pTab->aCol[iCol].zName)==0 ){ + break; + } + } + if( iColnCol ){ + pTab->aCol[iCol].isPrimKey = 1; + } + } + if( pList->nExpr>1 ) iCol = -1; + } + if( iCol>=0 && iColnCol ){ + zType = pTab->aCol[iCol].zType; + } + if( zType && sqlite3StrICmp(zType, "INTEGER")==0 + && sortOrder==SQLITE_SO_ASC ){ + pTab->iPKey = iCol; + pTab->keyConf = onError; + pTab->autoInc = autoInc; + }else if( autoInc ){ +#ifndef SQLITE_OMIT_AUTOINCREMENT + sqlite3ErrorMsg(pParse, "AUTOINCREMENT is only allowed on an " + "INTEGER PRIMARY KEY"); +#endif + }else{ + sqlite3CreateIndex(pParse, 0, 0, 0, pList, onError, 0, 0, sortOrder, 0); + pList = 0; + } + +primary_key_exit: + sqlite3ExprListDelete(pList); + return; +} + +/* +** Add a new CHECK constraint to the table currently under construction. 
+*/ +void sqlite3AddCheckConstraint( + Parse *pParse, /* Parsing context */ + Expr *pCheckExpr /* The check expression */ +){ +#ifndef SQLITE_OMIT_CHECK + Table *pTab = pParse->pNewTable; + sqlite3 *db = pParse->db; + if( pTab && !IN_DECLARE_VTAB ){ + /* The CHECK expression must be duplicated so that tokens refer + ** to malloced space and not the (ephemeral) text of the CREATE TABLE + ** statement */ + pTab->pCheck = sqlite3ExprAnd(db, pTab->pCheck, + sqlite3ExprDup(db, pCheckExpr)); + } +#endif + sqlite3ExprDelete(pCheckExpr); +} + +/* +** Set the collation function of the most recently parsed table column +** to the CollSeq given. +*/ +void sqlite3AddCollateType(Parse *pParse, Token *pToken){ + Table *p; + int i; + char *zColl; /* Dequoted name of collation sequence */ + + if( (p = pParse->pNewTable)==0 ) return; + i = p->nCol-1; + + zColl = sqlite3NameFromToken(pParse->db, pToken); + if( !zColl ) return; + + if( sqlite3LocateCollSeq(pParse, zColl, -1) ){ + Index *pIdx; + p->aCol[i].zColl = zColl; + + /* If the column is declared as " PRIMARY KEY COLLATE ", + ** then an index may have been created on this column before the + ** collation type was added. Correct this if it is the case. + */ + for(pIdx=p->pIndex; pIdx; pIdx=pIdx->pNext){ + assert( pIdx->nColumn==1 ); + if( pIdx->aiColumn[0]==i ){ + pIdx->azColl[0] = p->aCol[i].zColl; + } + } + }else{ + sqlite3_free(zColl); + } +} + +/* +** This function returns the collation sequence for database native text +** encoding identified by the string zName, length nName. +** +** If the requested collation sequence is not available, or not available +** in the database native encoding, the collation factory is invoked to +** request it. If the collation factory does not supply such a sequence, +** and the sequence is available in another text encoding, then that is +** returned instead. +** +** If no versions of the requested collations sequence are available, or +** another error occurs, NULL is returned and an error message written into +** pParse. +** +** This routine is a wrapper around sqlite3FindCollSeq(). This routine +** invokes the collation factory if the named collation cannot be found +** and generates an error message. +*/ +CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char *zName, int nName){ + sqlite3 *db = pParse->db; + u8 enc = ENC(db); + u8 initbusy = db->init.busy; + CollSeq *pColl; + + pColl = sqlite3FindCollSeq(db, enc, zName, nName, initbusy); + if( !initbusy && (!pColl || !pColl->xCmp) ){ + pColl = sqlite3GetCollSeq(db, pColl, zName, nName); + if( !pColl ){ + if( nName<0 ){ + nName = strlen(zName); + } + sqlite3ErrorMsg(pParse, "no such collation sequence: %.*s", nName, zName); + pColl = 0; + } + } + + return pColl; +} + + +/* +** Generate code that will increment the schema cookie. +** +** The schema cookie is used to determine when the schema for the +** database changes. After each schema change, the cookie value +** changes. When a process first reads the schema it records the +** cookie. Thereafter, whenever it goes to access the database, +** it checks the cookie to make sure the schema has not changed +** since it was last read. +** +** This plan is not completely bullet-proof. It is possible for +** the schema to change multiple times and for the cookie to be +** set back to prior value. But schema changes are infrequent +** and the probability of hitting the same cookie value is only +** 1 chance in 2^32. So we're safe enough. 
+*/ +void sqlite3ChangeCookie(Parse *pParse, int iDb){ + int r1 = sqlite3GetTempReg(pParse); + sqlite3 *db = pParse->db; + Vdbe *v = pParse->pVdbe; + sqlite3VdbeAddOp2(v, OP_Integer, db->aDb[iDb].pSchema->schema_cookie+1, r1); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, 0, r1); + sqlite3ReleaseTempReg(pParse, r1); +} + +/* +** Measure the number of characters needed to output the given +** identifier. The number returned includes any quotes used +** but does not include the null terminator. +** +** The estimate is conservative. It might be larger that what is +** really needed. +*/ +static int identLength(const char *z){ + int n; + for(n=0; *z; n++, z++){ + if( *z=='"' ){ n++; } + } + return n + 2; +} + +/* +** Write an identifier onto the end of the given string. Add +** quote characters as needed. +*/ +static void identPut(char *z, int *pIdx, char *zSignedIdent){ + unsigned char *zIdent = (unsigned char*)zSignedIdent; + int i, j, needQuote; + i = *pIdx; + for(j=0; zIdent[j]; j++){ + if( !isalnum(zIdent[j]) && zIdent[j]!='_' ) break; + } + needQuote = zIdent[j]!=0 || isdigit(zIdent[0]) + || sqlite3KeywordCode(zIdent, j)!=TK_ID; + if( needQuote ) z[i++] = '"'; + for(j=0; zIdent[j]; j++){ + z[i++] = zIdent[j]; + if( zIdent[j]=='"' ) z[i++] = '"'; + } + if( needQuote ) z[i++] = '"'; + z[i] = 0; + *pIdx = i; +} + +/* +** Generate a CREATE TABLE statement appropriate for the given +** table. Memory to hold the text of the statement is obtained +** from sqliteMalloc() and must be freed by the calling function. +*/ +static char *createTableStmt(sqlite3 *db, Table *p, int isTemp){ + int i, k, n; + char *zStmt; + char *zSep, *zSep2, *zEnd, *z; + Column *pCol; + n = 0; + for(pCol = p->aCol, i=0; inCol; i++, pCol++){ + n += identLength(pCol->zName); + z = pCol->zType; + if( z ){ + n += (strlen(z) + 1); + } + } + n += identLength(p->zName); + if( n<50 ){ + zSep = ""; + zSep2 = ","; + zEnd = ")"; + }else{ + zSep = "\n "; + zSep2 = ",\n "; + zEnd = "\n)"; + } + n += 35 + 6*p->nCol; + zStmt = sqlite3_malloc( n ); + if( zStmt==0 ){ + db->mallocFailed = 1; + return 0; + } + sqlite3_snprintf(n, zStmt, + !OMIT_TEMPDB&&isTemp ? "CREATE TEMP TABLE ":"CREATE TABLE "); + k = strlen(zStmt); + identPut(zStmt, &k, p->zName); + zStmt[k++] = '('; + for(pCol=p->aCol, i=0; inCol; i++, pCol++){ + sqlite3_snprintf(n-k, &zStmt[k], zSep); + k += strlen(&zStmt[k]); + zSep = zSep2; + identPut(zStmt, &k, pCol->zName); + if( (z = pCol->zType)!=0 ){ + zStmt[k++] = ' '; + assert( strlen(z)+k+1<=n ); + sqlite3_snprintf(n-k, &zStmt[k], "%s", z); + k += strlen(z); + } + } + sqlite3_snprintf(n-k, &zStmt[k], "%s", zEnd); + return zStmt; +} + +/* +** This routine is called to report the final ")" that terminates +** a CREATE TABLE statement. +** +** The table structure that other action routines have been building +** is added to the internal hash tables, assuming no errors have +** occurred. +** +** An entry for the table is made in the master table on disk, unless +** this is a temporary table or db->init.busy==1. When db->init.busy==1 +** it means we are reading the sqlite_master table because we just +** connected to the database or because the sqlite_master table has +** recently changed, so the entry for this table already exists in +** the sqlite_master table. We do not want to create it again. +** +** If the pSelect argument is not NULL, it means that this routine +** was called to create a table generated from a +** "CREATE TABLE ... AS SELECT ..." statement. 
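[Editorial illustration, not part of the checked-in file: a worked example of the identifier quoting that identPut() above performs when createTableStmt() rebuilds a CREATE TABLE statement; the identifiers are invented.]

/* Sketch only: quoting added by identPut().
**   employees   ->  employees      (plain identifier, left as is)
**   select      ->  "select"       (keyword, so quoted)
**   2nd_try     ->  "2nd_try"      (leading digit, so quoted)
**   my"col      ->  "my""col"      (non-alnum char; embedded quote doubled)
*/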
The column names of +** the new table will match the result set of the SELECT. +*/ +void sqlite3EndTable( + Parse *pParse, /* Parse context */ + Token *pCons, /* The ',' token after the last column defn. */ + Token *pEnd, /* The final ')' token in the CREATE TABLE */ + Select *pSelect /* Select from a "CREATE ... AS SELECT" */ +){ + Table *p; + sqlite3 *db = pParse->db; + int iDb; + + if( (pEnd==0 && pSelect==0) || pParse->nErr || db->mallocFailed ) { + return; + } + p = pParse->pNewTable; + if( p==0 ) return; + + assert( !db->init.busy || !pSelect ); + + iDb = sqlite3SchemaToIndex(db, p->pSchema); + +#ifndef SQLITE_OMIT_CHECK + /* Resolve names in all CHECK constraint expressions. + */ + if( p->pCheck ){ + SrcList sSrc; /* Fake SrcList for pParse->pNewTable */ + NameContext sNC; /* Name context for pParse->pNewTable */ + + memset(&sNC, 0, sizeof(sNC)); + memset(&sSrc, 0, sizeof(sSrc)); + sSrc.nSrc = 1; + sSrc.a[0].zName = p->zName; + sSrc.a[0].pTab = p; + sSrc.a[0].iCursor = -1; + sNC.pParse = pParse; + sNC.pSrcList = &sSrc; + sNC.isCheck = 1; + if( sqlite3ExprResolveNames(&sNC, p->pCheck) ){ + return; + } + } +#endif /* !defined(SQLITE_OMIT_CHECK) */ + + /* If the db->init.busy is 1 it means we are reading the SQL off the + ** "sqlite_master" or "sqlite_temp_master" table on the disk. + ** So do not write to the disk again. Extract the root page number + ** for the table from the db->init.newTnum field. (The page number + ** should have been put there by the sqliteOpenCb routine.) + */ + if( db->init.busy ){ + p->tnum = db->init.newTnum; + } + + /* If not initializing, then create a record for the new table + ** in the SQLITE_MASTER table of the database. The record number + ** for the new table entry should already be on the stack. + ** + ** If this is a TEMPORARY table, write the entry into the auxiliary + ** file instead of into the main database file. + */ + if( !db->init.busy ){ + int n; + Vdbe *v; + char *zType; /* "view" or "table" */ + char *zType2; /* "VIEW" or "TABLE" */ + char *zStmt; /* Text of the CREATE TABLE or CREATE VIEW statement */ + + v = sqlite3GetVdbe(pParse); + if( v==0 ) return; + + sqlite3VdbeAddOp1(v, OP_Close, 0); + + /* Create the rootpage for the new table and push it onto the stack. + ** A view has no rootpage, so just push a zero onto the stack for + ** views. Initialize zType at the same time. + */ + if( p->pSelect==0 ){ + /* A regular table */ + zType = "table"; + zType2 = "TABLE"; +#ifndef SQLITE_OMIT_VIEW + }else{ + /* A view */ + zType = "view"; + zType2 = "VIEW"; +#endif + } + + /* If this is a CREATE TABLE xx AS SELECT ..., execute the SELECT + ** statement to populate the new table. The root-page number for the + ** new table is on the top of the vdbe stack. + ** + ** Once the SELECT has been coded by sqlite3Select(), it is in a + ** suitable state to query for the column names and types to be used + ** by the new table. + ** + ** A shared-cache write-lock is not required to write to the new table, + ** as a schema-lock must have already been obtained to create it. Since + ** a schema-lock excludes all other database users, the write-lock would + ** be redundant. 
+ */ + if( pSelect ){ + SelectDest dest; + Table *pSelTab; + + sqlite3VdbeAddOp3(v, OP_OpenWrite, 1, pParse->regRoot, iDb); + sqlite3VdbeChangeP5(v, 1); + pParse->nTab = 2; + sqlite3SelectDestInit(&dest, SRT_Table, 1); + sqlite3Select(pParse, pSelect, &dest, 0, 0, 0, 0); + sqlite3VdbeAddOp1(v, OP_Close, 1); + if( pParse->nErr==0 ){ + pSelTab = sqlite3ResultSetOfSelect(pParse, 0, pSelect); + if( pSelTab==0 ) return; + assert( p->aCol==0 ); + p->nCol = pSelTab->nCol; + p->aCol = pSelTab->aCol; + pSelTab->nCol = 0; + pSelTab->aCol = 0; + sqlite3DeleteTable(pSelTab); + } + } + + /* Compute the complete text of the CREATE statement */ + if( pSelect ){ + zStmt = createTableStmt(db, p, p->pSchema==db->aDb[1].pSchema); + }else{ + n = pEnd->z - pParse->sNameToken.z + 1; + zStmt = sqlite3MPrintf(db, + "CREATE %s %.*s", zType2, n, pParse->sNameToken.z + ); + } + + /* A slot for the record has already been allocated in the + ** SQLITE_MASTER table. We just need to update that slot with all + ** the information we've collected. The rowid for the preallocated + ** slot is the 2nd item on the stack. The top of the stack is the + ** root page for the new table (or a 0 if this is a view). + */ + sqlite3NestedParse(pParse, + "UPDATE %Q.%s " + "SET type='%s', name=%Q, tbl_name=%Q, rootpage=#%d, sql=%Q " + "WHERE rowid=#%d", + db->aDb[iDb].zName, SCHEMA_TABLE(iDb), + zType, + p->zName, + p->zName, + pParse->regRoot, + zStmt, + pParse->regRowid + ); + sqlite3_free(zStmt); + sqlite3ChangeCookie(pParse, iDb); + +#ifndef SQLITE_OMIT_AUTOINCREMENT + /* Check to see if we need to create an sqlite_sequence table for + ** keeping track of autoincrement keys. + */ + if( p->autoInc ){ + Db *pDb = &db->aDb[iDb]; + if( pDb->pSchema->pSeqTab==0 ){ + sqlite3NestedParse(pParse, + "CREATE TABLE %Q.sqlite_sequence(name,seq)", + pDb->zName + ); + } + } +#endif + + /* Reparse everything to update our internal data structures */ + sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, + sqlite3MPrintf(db, "tbl_name='%q'",p->zName), P4_DYNAMIC); + } + + + /* Add the table to the in-memory representation of the database. 
+ */ + if( db->init.busy && pParse->nErr==0 ){ + Table *pOld; + FKey *pFKey; + Schema *pSchema = p->pSchema; + pOld = sqlite3HashInsert(&pSchema->tblHash, p->zName, strlen(p->zName)+1,p); + if( pOld ){ + assert( p==pOld ); /* Malloc must have failed inside HashInsert() */ + db->mallocFailed = 1; + return; + } +#ifndef SQLITE_OMIT_FOREIGN_KEY + for(pFKey=p->pFKey; pFKey; pFKey=pFKey->pNextFrom){ + void *data; + int nTo = strlen(pFKey->zTo) + 1; + pFKey->pNextTo = sqlite3HashFind(&pSchema->aFKey, pFKey->zTo, nTo); + data = sqlite3HashInsert(&pSchema->aFKey, pFKey->zTo, nTo, pFKey); + if( data==(void *)pFKey ){ + db->mallocFailed = 1; + } + } +#endif + pParse->pNewTable = 0; + db->nTable++; + db->flags |= SQLITE_InternChanges; + +#ifndef SQLITE_OMIT_ALTERTABLE + if( !p->pSelect ){ + const char *zName = (const char *)pParse->sNameToken.z; + int nName; + assert( !pSelect && pCons && pEnd ); + if( pCons->z==0 ){ + pCons = pEnd; + } + nName = (const char *)pCons->z - zName; + p->addColOffset = 13 + sqlite3Utf8CharLen(zName, nName); + } +#endif + } +} + +#ifndef SQLITE_OMIT_VIEW +/* +** The parser calls this routine in order to create a new VIEW +*/ +void sqlite3CreateView( + Parse *pParse, /* The parsing context */ + Token *pBegin, /* The CREATE token that begins the statement */ + Token *pName1, /* The token that holds the name of the view */ + Token *pName2, /* The token that holds the name of the view */ + Select *pSelect, /* A SELECT statement that will become the new view */ + int isTemp, /* TRUE for a TEMPORARY view */ + int noErr /* Suppress error messages if VIEW already exists */ +){ + Table *p; + int n; + const unsigned char *z; + Token sEnd; + DbFixer sFix; + Token *pName; + int iDb; + sqlite3 *db = pParse->db; + + if( pParse->nVar>0 ){ + sqlite3ErrorMsg(pParse, "parameters are not allowed in views"); + sqlite3SelectDelete(pSelect); + return; + } + sqlite3StartTable(pParse, pName1, pName2, isTemp, 1, 0, noErr); + p = pParse->pNewTable; + if( p==0 || pParse->nErr ){ + sqlite3SelectDelete(pSelect); + return; + } + sqlite3TwoPartName(pParse, pName1, pName2, &pName); + iDb = sqlite3SchemaToIndex(db, p->pSchema); + if( sqlite3FixInit(&sFix, pParse, iDb, "view", pName) + && sqlite3FixSelect(&sFix, pSelect) + ){ + sqlite3SelectDelete(pSelect); + return; + } + + /* Make a copy of the entire SELECT statement that defines the view. + ** This will force all the Expr.token.z values to be dynamically + ** allocated rather than point to the input string - which means that + ** they will persist after the current sqlite3_exec() call returns. + */ + p->pSelect = sqlite3SelectDup(db, pSelect); + sqlite3SelectDelete(pSelect); + if( db->mallocFailed ){ + return; + } + if( !db->init.busy ){ + sqlite3ViewGetColumnNames(pParse, p); + } + + /* Locate the end of the CREATE VIEW statement. Make sEnd point to + ** the end. + */ + sEnd = pParse->sLastToken; + if( sEnd.z[0]!=0 && sEnd.z[0]!=';' ){ + sEnd.z += sEnd.n; + } + sEnd.n = 0; + n = sEnd.z - pBegin->z; + z = (const unsigned char*)pBegin->z; + while( n>0 && (z[n-1]==';' || isspace(z[n-1])) ){ n--; } + sEnd.z = &z[n-1]; + sEnd.n = 1; + + /* Use sqlite3EndTable() to add the view to the SQLITE_MASTER table */ + sqlite3EndTable(pParse, 0, &sEnd, 0); + return; +} +#endif /* SQLITE_OMIT_VIEW */ + +#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) +/* +** The Table structure pTable is really a VIEW. Fill in the names of +** the columns of the view in the pTable structure. Return the number +** of errors. 
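[Editorial illustration, not part of the checked-in file: a short example of the column-name rule described here; the view and table names are invented.]

/* Sketch only: for a view such as
**     CREATE VIEW v1 AS SELECT a, b+c AS total FROM t1;
** no column list is stored with the view; the names ("a", "total") are
** recomputed from the SELECT's result set by sqlite3ViewGetColumnNames().
*/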
If an error is seen leave an error message in pParse->zErrMsg. +*/ +int sqlite3ViewGetColumnNames(Parse *pParse, Table *pTable){ + Table *pSelTab; /* A fake table from which we get the result set */ + Select *pSel; /* Copy of the SELECT that implements the view */ + int nErr = 0; /* Number of errors encountered */ + int n; /* Temporarily holds the number of cursors assigned */ + sqlite3 *db = pParse->db; /* Database connection for malloc errors */ + int (*xAuth)(void*,int,const char*,const char*,const char*,const char*); + + assert( pTable ); + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( sqlite3VtabCallConnect(pParse, pTable) ){ + return SQLITE_ERROR; + } + if( IsVirtual(pTable) ) return 0; +#endif + +#ifndef SQLITE_OMIT_VIEW + /* A positive nCol means the columns names for this view are + ** already known. + */ + if( pTable->nCol>0 ) return 0; + + /* A negative nCol is a special marker meaning that we are currently + ** trying to compute the column names. If we enter this routine with + ** a negative nCol, it means two or more views form a loop, like this: + ** + ** CREATE VIEW one AS SELECT * FROM two; + ** CREATE VIEW two AS SELECT * FROM one; + ** + ** Actually, this error is caught previously and so the following test + ** should always fail. But we will leave it in place just to be safe. + */ + if( pTable->nCol<0 ){ + sqlite3ErrorMsg(pParse, "view %s is circularly defined", pTable->zName); + return 1; + } + assert( pTable->nCol>=0 ); + + /* If we get this far, it means we need to compute the table names. + ** Note that the call to sqlite3ResultSetOfSelect() will expand any + ** "*" elements in the results set of the view and will assign cursors + ** to the elements of the FROM clause. But we do not want these changes + ** to be permanent. So the computation is done on a copy of the SELECT + ** statement that defines the view. + */ + assert( pTable->pSelect ); + pSel = sqlite3SelectDup(db, pTable->pSelect); + if( pSel ){ + n = pParse->nTab; + sqlite3SrcListAssignCursors(pParse, pSel->pSrc); + pTable->nCol = -1; +#ifndef SQLITE_OMIT_AUTHORIZATION + xAuth = db->xAuth; + db->xAuth = 0; + pSelTab = sqlite3ResultSetOfSelect(pParse, 0, pSel); + db->xAuth = xAuth; +#else + pSelTab = sqlite3ResultSetOfSelect(pParse, 0, pSel); +#endif + pParse->nTab = n; + if( pSelTab ){ + assert( pTable->aCol==0 ); + pTable->nCol = pSelTab->nCol; + pTable->aCol = pSelTab->aCol; + pSelTab->nCol = 0; + pSelTab->aCol = 0; + sqlite3DeleteTable(pSelTab); + pTable->pSchema->flags |= DB_UnresetViews; + }else{ + pTable->nCol = 0; + nErr++; + } + sqlite3SelectDelete(pSel); + } else { + nErr++; + } +#endif /* SQLITE_OMIT_VIEW */ + return nErr; +} +#endif /* !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) */ + +#ifndef SQLITE_OMIT_VIEW +/* +** Clear the column names from every VIEW in database idx. +*/ +static void sqliteViewResetAll(sqlite3 *db, int idx){ + HashElem *i; + if( !DbHasProperty(db, idx, DB_UnresetViews) ) return; + for(i=sqliteHashFirst(&db->aDb[idx].pSchema->tblHash); i;i=sqliteHashNext(i)){ + Table *pTab = sqliteHashData(i); + if( pTab->pSelect ){ + sqliteResetColumnNames(pTab); + } + } + DbClearProperty(db, idx, DB_UnresetViews); +} +#else +# define sqliteViewResetAll(A,B) +#endif /* SQLITE_OMIT_VIEW */ + +/* +** This function is called by the VDBE to adjust the internal schema +** used by SQLite when the btree layer moves a table root page. The +** root-page of a table or index in database iDb has changed from iFrom +** to iTo. 
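[Editorial sketch, not part of the checked-in file: a summary of the Table.nCol convention that the loop-detection logic above relies on; viewColumnNamesKnown() is an invented helper name used only for illustration.]

/* Sketch only:
**   nCol > 0   column names already computed
**   nCol == 0  not computed yet (or a previous attempt failed)
**   nCol < 0   computation in progress, i.e. a circular view definition  */
static int viewColumnNamesKnown(const Table *pView){
  return pView->nCol > 0;
}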
+** +** Ticket #1728: The symbol table might still contain information +** on tables and/or indices that are the process of being deleted. +** If you are unlucky, one of those deleted indices or tables might +** have the same rootpage number as the real table or index that is +** being moved. So we cannot stop searching after the first match +** because the first match might be for one of the deleted indices +** or tables and not the table/index that is actually being moved. +** We must continue looping until all tables and indices with +** rootpage==iFrom have been converted to have a rootpage of iTo +** in order to be certain that we got the right one. +*/ +#ifndef SQLITE_OMIT_AUTOVACUUM +void sqlite3RootPageMoved(Db *pDb, int iFrom, int iTo){ + HashElem *pElem; + Hash *pHash; + + pHash = &pDb->pSchema->tblHash; + for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){ + Table *pTab = sqliteHashData(pElem); + if( pTab->tnum==iFrom ){ + pTab->tnum = iTo; + } + } + pHash = &pDb->pSchema->idxHash; + for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){ + Index *pIdx = sqliteHashData(pElem); + if( pIdx->tnum==iFrom ){ + pIdx->tnum = iTo; + } + } +} +#endif + +/* +** Write code to erase the table with root-page iTable from database iDb. +** Also write code to modify the sqlite_master table and internal schema +** if a root-page of another table is moved by the btree-layer whilst +** erasing iTable (this can happen with an auto-vacuum database). +*/ +static void destroyRootPage(Parse *pParse, int iTable, int iDb){ + Vdbe *v = sqlite3GetVdbe(pParse); + int r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_Destroy, iTable, r1, iDb); +#ifndef SQLITE_OMIT_AUTOVACUUM + /* OP_Destroy stores an in integer r1. If this integer + ** is non-zero, then it is the root page number of a table moved to + ** location iTable. The following code modifies the sqlite_master table to + ** reflect this. + ** + ** The "#%d" in the SQL is a special constant that means whatever value + ** is on the top of the stack. See sqlite3RegisterExpr(). + */ + sqlite3NestedParse(pParse, + "UPDATE %Q.%s SET rootpage=%d WHERE #%d AND rootpage=#%d", + pParse->db->aDb[iDb].zName, SCHEMA_TABLE(iDb), iTable, r1, r1); +#endif + sqlite3ReleaseTempReg(pParse, r1); +} + +/* +** Write VDBE code to erase table pTab and all associated indices on disk. +** Code to update the sqlite_master tables and internal schema definitions +** in case a root-page belonging to another table is moved by the btree layer +** is also added (this can happen with an auto-vacuum database). +*/ +static void destroyTable(Parse *pParse, Table *pTab){ +#ifdef SQLITE_OMIT_AUTOVACUUM + Index *pIdx; + int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + destroyRootPage(pParse, pTab->tnum, iDb); + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + destroyRootPage(pParse, pIdx->tnum, iDb); + } +#else + /* If the database may be auto-vacuum capable (if SQLITE_OMIT_AUTOVACUUM + ** is not defined), then it is important to call OP_Destroy on the + ** table and index root-pages in order, starting with the numerically + ** largest root-page number. This guarantees that none of the root-pages + ** to be destroyed is relocated by an earlier OP_Destroy. i.e. if the + ** following were coded: + ** + ** OP_Destroy 4 0 + ** ... + ** OP_Destroy 5 0 + ** + ** and root page 5 happened to be the largest root-page number in the + ** database, then root page 5 would be moved to page 4 by the + ** "OP_Destroy 4 0" opcode. 
The subsequent "OP_Destroy 5 0" would hit + ** a free-list page. + */ + int iTab = pTab->tnum; + int iDestroyed = 0; + + while( 1 ){ + Index *pIdx; + int iLargest = 0; + + if( iDestroyed==0 || iTabpIndex; pIdx; pIdx=pIdx->pNext){ + int iIdx = pIdx->tnum; + assert( pIdx->pSchema==pTab->pSchema ); + if( (iDestroyed==0 || (iIdxiLargest ){ + iLargest = iIdx; + } + } + if( iLargest==0 ){ + return; + }else{ + int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + destroyRootPage(pParse, iLargest, iDb); + iDestroyed = iLargest; + } + } +#endif +} + +/* +** This routine is called to do the work of a DROP TABLE statement. +** pName is the name of the table to be dropped. +*/ +void sqlite3DropTable(Parse *pParse, SrcList *pName, int isView, int noErr){ + Table *pTab; + Vdbe *v; + sqlite3 *db = pParse->db; + int iDb; + + if( pParse->nErr || db->mallocFailed ){ + goto exit_drop_table; + } + assert( pName->nSrc==1 ); + pTab = sqlite3LocateTable(pParse, isView, + pName->a[0].zName, pName->a[0].zDatabase); + + if( pTab==0 ){ + if( noErr ){ + sqlite3ErrorClear(pParse); + } + goto exit_drop_table; + } + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + assert( iDb>=0 && iDbnDb ); + + /* If pTab is a virtual table, call ViewGetColumnNames() to ensure + ** it is initialized. + */ + if( IsVirtual(pTab) && sqlite3ViewGetColumnNames(pParse, pTab) ){ + goto exit_drop_table; + } +#ifndef SQLITE_OMIT_AUTHORIZATION + { + int code; + const char *zTab = SCHEMA_TABLE(iDb); + const char *zDb = db->aDb[iDb].zName; + const char *zArg2 = 0; + if( sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb)){ + goto exit_drop_table; + } + if( isView ){ + if( !OMIT_TEMPDB && iDb==1 ){ + code = SQLITE_DROP_TEMP_VIEW; + }else{ + code = SQLITE_DROP_VIEW; + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + }else if( IsVirtual(pTab) ){ + code = SQLITE_DROP_VTABLE; + zArg2 = pTab->pMod->zName; +#endif + }else{ + if( !OMIT_TEMPDB && iDb==1 ){ + code = SQLITE_DROP_TEMP_TABLE; + }else{ + code = SQLITE_DROP_TABLE; + } + } + if( sqlite3AuthCheck(pParse, code, pTab->zName, zArg2, zDb) ){ + goto exit_drop_table; + } + if( sqlite3AuthCheck(pParse, SQLITE_DELETE, pTab->zName, 0, zDb) ){ + goto exit_drop_table; + } + } +#endif + if( pTab->readOnly || pTab==db->aDb[iDb].pSchema->pSeqTab ){ + sqlite3ErrorMsg(pParse, "table %s may not be dropped", pTab->zName); + goto exit_drop_table; + } + +#ifndef SQLITE_OMIT_VIEW + /* Ensure DROP TABLE is not used on a view, and DROP VIEW is not used + ** on a table. + */ + if( isView && pTab->pSelect==0 ){ + sqlite3ErrorMsg(pParse, "use DROP TABLE to delete table %s", pTab->zName); + goto exit_drop_table; + } + if( !isView && pTab->pSelect ){ + sqlite3ErrorMsg(pParse, "use DROP VIEW to delete view %s", pTab->zName); + goto exit_drop_table; + } +#endif + + /* Generate code to remove the table from the master table + ** on disk. + */ + v = sqlite3GetVdbe(pParse); + if( v ){ + Trigger *pTrigger; + Db *pDb = &db->aDb[iDb]; + sqlite3BeginWriteOperation(pParse, 1, iDb); + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pTab) ){ + Vdbe *v = sqlite3GetVdbe(pParse); + if( v ){ + sqlite3VdbeAddOp0(v, OP_VBegin); + } + } +#endif + + /* Drop all triggers associated with the table being dropped. Code + ** is generated to remove entries from sqlite_master and/or + ** sqlite_temp_master if required. 
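The ordering argument in destroyTable() above (destroy the numerically largest root page first, so an auto-vacuum relocation can never move a page that is still waiting to be destroyed) can be seen in isolation in the sketch below. The page numbers and the destroyRoot() stub are invented; only the selection loop mirrors the diff.

    /* Sketch: emit OP_Destroy for the largest not-yet-destroyed root first. */
    #include <stdio.h>

    static void destroyRoot(int pgno){ printf("OP_Destroy %d\n", pgno); }

    int main(void){
      int aRoot[] = { 2, 5, 3 };      /* table root plus two index roots */
      int nRoot = 3, iPrev = 0;
      while( 1 ){
        int i, iLargest = 0;
        for(i=0; i<nRoot; i++){       /* largest root not destroyed yet */
          if( (iPrev==0 || aRoot[i]<iPrev) && aRoot[i]>iLargest ){
            iLargest = aRoot[i];
          }
        }
        if( iLargest==0 ) break;
        destroyRoot(iLargest);        /* emits 5, then 3, then 2 */
        iPrev = iLargest;
      }
      return 0;
    }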
+ */ + pTrigger = pTab->pTrigger; + while( pTrigger ){ + assert( pTrigger->pSchema==pTab->pSchema || + pTrigger->pSchema==db->aDb[1].pSchema ); + sqlite3DropTriggerPtr(pParse, pTrigger); + pTrigger = pTrigger->pNext; + } + +#ifndef SQLITE_OMIT_AUTOINCREMENT + /* Remove any entries of the sqlite_sequence table associated with + ** the table being dropped. This is done before the table is dropped + ** at the btree level, in case the sqlite_sequence table needs to + ** move as a result of the drop (can happen in auto-vacuum mode). + */ + if( pTab->autoInc ){ + sqlite3NestedParse(pParse, + "DELETE FROM %s.sqlite_sequence WHERE name=%Q", + pDb->zName, pTab->zName + ); + } +#endif + + /* Drop all SQLITE_MASTER table and index entries that refer to the + ** table. The program name loops through the master table and deletes + ** every row that refers to a table of the same name as the one being + ** dropped. Triggers are handled seperately because a trigger can be + ** created in the temp database that refers to a table in another + ** database. + */ + sqlite3NestedParse(pParse, + "DELETE FROM %Q.%s WHERE tbl_name=%Q and type!='trigger'", + pDb->zName, SCHEMA_TABLE(iDb), pTab->zName); + if( !isView && !IsVirtual(pTab) ){ + destroyTable(pParse, pTab); + } + + /* Remove the table entry from SQLite's internal schema and modify + ** the schema cookie. + */ + if( IsVirtual(pTab) ){ + sqlite3VdbeAddOp4(v, OP_VDestroy, iDb, 0, 0, pTab->zName, 0); + } + sqlite3VdbeAddOp4(v, OP_DropTable, iDb, 0, 0, pTab->zName, 0); + sqlite3ChangeCookie(pParse, iDb); + } + sqliteViewResetAll(db, iDb); + +exit_drop_table: + sqlite3SrcListDelete(pName); +} + +/* +** This routine is called to create a new foreign key on the table +** currently under construction. pFromCol determines which columns +** in the current table point to the foreign key. If pFromCol==0 then +** connect the key to the last column inserted. pTo is the name of +** the table referred to. pToCol is a list of tables in the other +** pTo table that the foreign key points to. flags contains all +** information about the conflict resolution algorithms specified +** in the ON DELETE, ON UPDATE and ON INSERT clauses. +** +** An FKey structure is created and added to the table currently +** under construction in the pParse->pNewTable field. The new FKey +** is not linked into db->aFKey at this point - that does not happen +** until sqlite3EndTable(). +** +** The foreign key is set for IMMEDIATE processing. A subsequent call +** to sqlite3DeferForeignKey() might change this to DEFERRED. +*/ +void sqlite3CreateForeignKey( + Parse *pParse, /* Parsing context */ + ExprList *pFromCol, /* Columns in this table that point to other table */ + Token *pTo, /* Name of the other table */ + ExprList *pToCol, /* Columns in the other table */ + int flags /* Conflict resolution algorithms. 
*/ +){ +#ifndef SQLITE_OMIT_FOREIGN_KEY + FKey *pFKey = 0; + Table *p = pParse->pNewTable; + int nByte; + int i; + int nCol; + char *z; + + assert( pTo!=0 ); + if( p==0 || pParse->nErr || IN_DECLARE_VTAB ) goto fk_end; + if( pFromCol==0 ){ + int iCol = p->nCol-1; + if( iCol<0 ) goto fk_end; + if( pToCol && pToCol->nExpr!=1 ){ + sqlite3ErrorMsg(pParse, "foreign key on %s" + " should reference only one column of table %T", + p->aCol[iCol].zName, pTo); + goto fk_end; + } + nCol = 1; + }else if( pToCol && pToCol->nExpr!=pFromCol->nExpr ){ + sqlite3ErrorMsg(pParse, + "number of columns in foreign key does not match the number of " + "columns in the referenced table"); + goto fk_end; + }else{ + nCol = pFromCol->nExpr; + } + nByte = sizeof(*pFKey) + nCol*sizeof(pFKey->aCol[0]) + pTo->n + 1; + if( pToCol ){ + for(i=0; inExpr; i++){ + nByte += strlen(pToCol->a[i].zName) + 1; + } + } + pFKey = sqlite3DbMallocZero(pParse->db, nByte ); + if( pFKey==0 ){ + goto fk_end; + } + pFKey->pFrom = p; + pFKey->pNextFrom = p->pFKey; + z = (char*)&pFKey[1]; + pFKey->aCol = (struct sColMap*)z; + z += sizeof(struct sColMap)*nCol; + pFKey->zTo = z; + memcpy(z, pTo->z, pTo->n); + z[pTo->n] = 0; + z += pTo->n+1; + pFKey->pNextTo = 0; + pFKey->nCol = nCol; + if( pFromCol==0 ){ + pFKey->aCol[0].iFrom = p->nCol-1; + }else{ + for(i=0; inCol; j++){ + if( sqlite3StrICmp(p->aCol[j].zName, pFromCol->a[i].zName)==0 ){ + pFKey->aCol[i].iFrom = j; + break; + } + } + if( j>=p->nCol ){ + sqlite3ErrorMsg(pParse, + "unknown column \"%s\" in foreign key definition", + pFromCol->a[i].zName); + goto fk_end; + } + } + } + if( pToCol ){ + for(i=0; ia[i].zName); + pFKey->aCol[i].zCol = z; + memcpy(z, pToCol->a[i].zName, n); + z[n] = 0; + z += n+1; + } + } + pFKey->isDeferred = 0; + pFKey->deleteConf = flags & 0xff; + pFKey->updateConf = (flags >> 8 ) & 0xff; + pFKey->insertConf = (flags >> 16 ) & 0xff; + + /* Link the foreign key to the table as the last step. + */ + p->pFKey = pFKey; + pFKey = 0; + +fk_end: + sqlite3_free(pFKey); +#endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */ + sqlite3ExprListDelete(pFromCol); + sqlite3ExprListDelete(pToCol); +} + +/* +** This routine is called when an INITIALLY IMMEDIATE or INITIALLY DEFERRED +** clause is seen as part of a foreign key definition. The isDeferred +** parameter is 1 for INITIALLY DEFERRED and 0 for INITIALLY IMMEDIATE. +** The behavior of the most recently created foreign key is adjusted +** accordingly. +*/ +void sqlite3DeferForeignKey(Parse *pParse, int isDeferred){ +#ifndef SQLITE_OMIT_FOREIGN_KEY + Table *pTab; + FKey *pFKey; + if( (pTab = pParse->pNewTable)==0 || (pFKey = pTab->pFKey)==0 ) return; + pFKey->isDeferred = isDeferred; +#endif +} + +/* +** Generate code that will erase and refill index *pIdx. This is +** used to initialize a newly created index or to recompute the +** content of an index in response to a REINDEX command. +** +** if memRootPage is not negative, it means that the index is newly +** created. The register specified by memRootPage contains the +** root page number of the index. If memRootPage is negative, then +** the index already exists and must be cleared before being refilled and +** the root page number of the index is taken from pIndex->tnum. 
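sqlite3CreateForeignKey() above sizes a single allocation for the FKey struct, its column-map array and the referenced-name strings, then carves pointers out of that one block so a single free() releases everything. A reduced sketch of the same layout technique follows; the Toy struct and its contents are hypothetical, only the carve-up mirrors the diff.

    /* Sketch: one allocation holding a struct, a trailing array and a
    ** trailing NUL-terminated string. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Toy {
      int *aCol;          /* points just past the struct */
      char *zTo;          /* points just past the array */
      int nCol;
    } Toy;

    int main(void){
      const char *zTo = "parent";
      int nCol = 2;
      size_t nByte = sizeof(Toy) + nCol*sizeof(int) + strlen(zTo) + 1;
      Toy *p = calloc(1, nByte);
      if( p==0 ) return 1;
      p->aCol = (int*)&p[1];
      p->zTo  = (char*)&p->aCol[nCol];
      memcpy(p->zTo, zTo, strlen(zTo)+1);
      p->nCol = nCol;
      printf("%s (%d columns); one free() releases everything\n",
             p->zTo, p->nCol);
      free(p);
      return 0;
    }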
+*/ +static void sqlite3RefillIndex(Parse *pParse, Index *pIndex, int memRootPage){ + Table *pTab = pIndex->pTable; /* The table that is indexed */ + int iTab = pParse->nTab; /* Btree cursor used for pTab */ + int iIdx = pParse->nTab+1; /* Btree cursor used for pIndex */ + int addr1; /* Address of top of loop */ + int tnum; /* Root page of index */ + Vdbe *v; /* Generate code into this virtual machine */ + KeyInfo *pKey; /* KeyInfo for index */ + int regIdxKey; /* Registers containing the index key */ + int regRecord; /* Register holding assemblied index record */ + sqlite3 *db = pParse->db; /* The database connection */ + int iDb = sqlite3SchemaToIndex(db, pIndex->pSchema); + +#ifndef SQLITE_OMIT_AUTHORIZATION + if( sqlite3AuthCheck(pParse, SQLITE_REINDEX, pIndex->zName, 0, + db->aDb[iDb].zName ) ){ + return; + } +#endif + + /* Require a write-lock on the table to perform this operation */ + sqlite3TableLock(pParse, iDb, pTab->tnum, 1, pTab->zName); + + v = sqlite3GetVdbe(pParse); + if( v==0 ) return; + if( memRootPage>=0 ){ + tnum = memRootPage; + }else{ + tnum = pIndex->tnum; + sqlite3VdbeAddOp2(v, OP_Clear, tnum, iDb); + } + pKey = sqlite3IndexKeyinfo(pParse, pIndex); + sqlite3VdbeAddOp4(v, OP_OpenWrite, iIdx, tnum, iDb, + (char *)pKey, P4_KEYINFO_HANDOFF); + if( memRootPage>=0 ){ + sqlite3VdbeChangeP5(v, 1); + } + sqlite3OpenTable(pParse, iTab, iDb, pTab, OP_OpenRead); + addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iTab, 0); + regRecord = sqlite3GetTempReg(pParse); + regIdxKey = sqlite3GenerateIndexKey(pParse, pIndex, iTab, regRecord); + if( pIndex->onError!=OE_None ){ + int j1, j2; + int regRowid; + + regRowid = regIdxKey + pIndex->nColumn; + j1 = sqlite3VdbeAddOp3(v, OP_IsNull, regIdxKey, 0, pIndex->nColumn); + j2 = sqlite3VdbeAddOp4(v, OP_IsUnique, iIdx, + 0, regRowid, (char*)(sqlite3_intptr_t)regRecord, P4_INT32); + sqlite3VdbeAddOp4(v, OP_Halt, SQLITE_CONSTRAINT, OE_Abort, 0, + "indexed columns are not unique", P4_STATIC); + sqlite3VdbeJumpHere(v, j1); + sqlite3VdbeJumpHere(v, j2); + } + sqlite3VdbeAddOp2(v, OP_IdxInsert, iIdx, regRecord); + sqlite3ReleaseTempReg(pParse, regRecord); + sqlite3VdbeAddOp2(v, OP_Next, iTab, addr1+1); + sqlite3VdbeJumpHere(v, addr1); + sqlite3VdbeAddOp1(v, OP_Close, iTab); + sqlite3VdbeAddOp1(v, OP_Close, iIdx); +} + +/* +** Create a new index for an SQL table. pName1.pName2 is the name of the index +** and pTblList is the name of the table that is to be indexed. Both will +** be NULL for a primary key or an index that is created to satisfy a +** UNIQUE constraint. If pTable and pIndex are NULL, use pParse->pNewTable +** as the table to be indexed. pParse->pNewTable is a table that is +** currently being constructed by a CREATE TABLE statement. +** +** pList is a list of columns to be indexed. pList will be NULL if this +** is a primary key or unique-constraint on the most recent column added +** to the table currently under construction. +*/ +void sqlite3CreateIndex( + Parse *pParse, /* All information about this parse */ + Token *pName1, /* First part of index name. May be NULL */ + Token *pName2, /* Second part of index name. May be NULL */ + SrcList *pTblName, /* Table to index. 
Use pParse->pNewTable if 0 */ + ExprList *pList, /* A list of columns to be indexed */ + int onError, /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */ + Token *pStart, /* The CREATE token that begins this statement */ + Token *pEnd, /* The ")" that closes the CREATE INDEX statement */ + int sortOrder, /* Sort order of primary key when pList==NULL */ + int ifNotExist /* Omit error if index already exists */ +){ + Table *pTab = 0; /* Table to be indexed */ + Index *pIndex = 0; /* The index to be created */ + char *zName = 0; /* Name of the index */ + int nName; /* Number of characters in zName */ + int i, j; + Token nullId; /* Fake token for an empty ID list */ + DbFixer sFix; /* For assigning database names to pTable */ + int sortOrderMask; /* 1 to honor DESC in index. 0 to ignore. */ + sqlite3 *db = pParse->db; + Db *pDb; /* The specific table containing the indexed database */ + int iDb; /* Index of the database that is being written */ + Token *pName = 0; /* Unqualified name of the index to create */ + struct ExprList_item *pListItem; /* For looping over pList */ + int nCol; + int nExtra = 0; + char *zExtra; + + if( pParse->nErr || db->mallocFailed || IN_DECLARE_VTAB ){ + goto exit_create_index; + } + + /* + ** Find the table that is to be indexed. Return early if not found. + */ + if( pTblName!=0 ){ + + /* Use the two-part index name to determine the database + ** to search for the table. 'Fix' the table name to this db + ** before looking up the table. + */ + assert( pName1 && pName2 ); + iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName); + if( iDb<0 ) goto exit_create_index; + +#ifndef SQLITE_OMIT_TEMPDB + /* If the index name was unqualified, check if the the table + ** is a temp table. If so, set the database to 1. Do not do this + ** if initialising a database schema. + */ + if( !db->init.busy ){ + pTab = sqlite3SrcListLookup(pParse, pTblName); + if( pName2 && pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){ + iDb = 1; + } + } +#endif + + if( sqlite3FixInit(&sFix, pParse, iDb, "index", pName) && + sqlite3FixSrcList(&sFix, pTblName) + ){ + /* Because the parser constructs pTblName from a single identifier, + ** sqlite3FixSrcList can never fail. */ + assert(0); + } + pTab = sqlite3LocateTable(pParse, 0, pTblName->a[0].zName, + pTblName->a[0].zDatabase); + if( !pTab ) goto exit_create_index; + assert( db->aDb[iDb].pSchema==pTab->pSchema ); + }else{ + assert( pName==0 ); + pTab = pParse->pNewTable; + if( !pTab ) goto exit_create_index; + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + } + pDb = &db->aDb[iDb]; + + if( pTab==0 || pParse->nErr ) goto exit_create_index; + if( pTab->readOnly ){ + sqlite3ErrorMsg(pParse, "table %s may not be indexed", pTab->zName); + goto exit_create_index; + } +#ifndef SQLITE_OMIT_VIEW + if( pTab->pSelect ){ + sqlite3ErrorMsg(pParse, "views may not be indexed"); + goto exit_create_index; + } +#endif +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pTab) ){ + sqlite3ErrorMsg(pParse, "virtual tables may not be indexed"); + goto exit_create_index; + } +#endif + + /* + ** Find the name of the index. Make sure there is not already another + ** index or table with the same name. + ** + ** Exception: If we are reading the names of permanent indices from the + ** sqlite_master table (because some other process changed the schema) and + ** one of the index names collides with the name of a temporary table or + ** index, then we will continue to process this index. 
+ ** + ** If pName==0 it means that we are + ** dealing with a primary key or UNIQUE constraint. We have to invent our + ** own name. + */ + if( pName ){ + zName = sqlite3NameFromToken(db, pName); + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ) goto exit_create_index; + if( zName==0 ) goto exit_create_index; + if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ + goto exit_create_index; + } + if( !db->init.busy ){ + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ) goto exit_create_index; + if( sqlite3FindTable(db, zName, 0)!=0 ){ + sqlite3ErrorMsg(pParse, "there is already a table named %s", zName); + goto exit_create_index; + } + } + if( sqlite3FindIndex(db, zName, pDb->zName)!=0 ){ + if( !ifNotExist ){ + sqlite3ErrorMsg(pParse, "index %s already exists", zName); + } + goto exit_create_index; + } + }else{ + char zBuf[30]; + int n; + Index *pLoop; + for(pLoop=pTab->pIndex, n=1; pLoop; pLoop=pLoop->pNext, n++){} + sqlite3_snprintf(sizeof(zBuf),zBuf,"_%d",n); + zName = 0; + sqlite3SetString(&zName, "sqlite_autoindex_", pTab->zName, zBuf, (char*)0); + if( zName==0 ){ + db->mallocFailed = 1; + goto exit_create_index; + } + } + + /* Check for authorization to create an index. + */ +#ifndef SQLITE_OMIT_AUTHORIZATION + { + const char *zDb = pDb->zName; + if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(iDb), 0, zDb) ){ + goto exit_create_index; + } + i = SQLITE_CREATE_INDEX; + if( !OMIT_TEMPDB && iDb==1 ) i = SQLITE_CREATE_TEMP_INDEX; + if( sqlite3AuthCheck(pParse, i, zName, pTab->zName, zDb) ){ + goto exit_create_index; + } + } +#endif + + /* If pList==0, it means this routine was called to make a primary + ** key out of the last column added to the table under construction. + ** So create a fake list to simulate this. + */ + if( pList==0 ){ + nullId.z = (u8*)pTab->aCol[pTab->nCol-1].zName; + nullId.n = strlen((char*)nullId.z); + pList = sqlite3ExprListAppend(pParse, 0, 0, &nullId); + if( pList==0 ) goto exit_create_index; + pList->a[0].sortOrder = sortOrder; + } + + /* Figure out how many bytes of space are required to store explicitly + ** specified collation sequence names. + */ + for(i=0; inExpr; i++){ + Expr *pExpr = pList->a[i].pExpr; + if( pExpr ){ + nExtra += (1 + strlen(pExpr->pColl->zName)); + } + } + + /* + ** Allocate the index structure. 
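When a PRIMARY KEY or UNIQUE constraint has no user-supplied index name, the code above synthesizes one from the table name and a running count of existing indices. A small sketch of that naming scheme; the table name and index count are invented.

    /* Sketch of the automatic index name formed above. */
    #include <stdio.h>

    int main(void){
      const char *zTab = "t1";
      int nExisting = 2;                 /* indices already on t1 */
      char zName[64];
      snprintf(zName, sizeof(zName), "sqlite_autoindex_%s_%d",
               zTab, nExisting+1);
      printf("%s\n", zName);             /* sqlite_autoindex_t1_3 */
      return 0;
    }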
+ */ + nName = strlen(zName); + nCol = pList->nExpr; + pIndex = sqlite3DbMallocZero(db, + sizeof(Index) + /* Index structure */ + sizeof(int)*nCol + /* Index.aiColumn */ + sizeof(int)*(nCol+1) + /* Index.aiRowEst */ + sizeof(char *)*nCol + /* Index.azColl */ + sizeof(u8)*nCol + /* Index.aSortOrder */ + nName + 1 + /* Index.zName */ + nExtra /* Collation sequence names */ + ); + if( db->mallocFailed ){ + goto exit_create_index; + } + pIndex->azColl = (char**)(&pIndex[1]); + pIndex->aiColumn = (int *)(&pIndex->azColl[nCol]); + pIndex->aiRowEst = (unsigned *)(&pIndex->aiColumn[nCol]); + pIndex->aSortOrder = (u8 *)(&pIndex->aiRowEst[nCol+1]); + pIndex->zName = (char *)(&pIndex->aSortOrder[nCol]); + zExtra = (char *)(&pIndex->zName[nName+1]); + memcpy(pIndex->zName, zName, nName+1); + pIndex->pTable = pTab; + pIndex->nColumn = pList->nExpr; + pIndex->onError = onError; + pIndex->autoIndex = pName==0; + pIndex->pSchema = db->aDb[iDb].pSchema; + + /* Check to see if we should honor DESC requests on index columns + */ + if( pDb->pSchema->file_format>=4 ){ + sortOrderMask = -1; /* Honor DESC */ + }else{ + sortOrderMask = 0; /* Ignore DESC */ + } + + /* Scan the names of the columns of the table to be indexed and + ** load the column indices into the Index structure. Report an error + ** if any column is not found. + */ + for(i=0, pListItem=pList->a; inExpr; i++, pListItem++){ + const char *zColName = pListItem->zName; + Column *pTabCol; + int requestedSortOrder; + char *zColl; /* Collation sequence name */ + + for(j=0, pTabCol=pTab->aCol; jnCol; j++, pTabCol++){ + if( sqlite3StrICmp(zColName, pTabCol->zName)==0 ) break; + } + if( j>=pTab->nCol ){ + sqlite3ErrorMsg(pParse, "table %s has no column named %s", + pTab->zName, zColName); + goto exit_create_index; + } + /* TODO: Add a test to make sure that the same column is not named + ** more than once within the same index. Only the first instance of + ** the column will ever be used by the optimizer. Note that using the + ** same column more than once cannot be an error because that would + ** break backwards compatibility - it needs to be a warning. + */ + pIndex->aiColumn[i] = j; + if( pListItem->pExpr ){ + assert( pListItem->pExpr->pColl ); + zColl = zExtra; + sqlite3_snprintf(nExtra, zExtra, "%s", pListItem->pExpr->pColl->zName); + zExtra += (strlen(zColl) + 1); + }else{ + zColl = pTab->aCol[j].zColl; + if( !zColl ){ + zColl = db->pDfltColl->zName; + } + } + if( !db->init.busy && !sqlite3LocateCollSeq(pParse, zColl, -1) ){ + goto exit_create_index; + } + pIndex->azColl[i] = zColl; + requestedSortOrder = pListItem->sortOrder & sortOrderMask; + pIndex->aSortOrder[i] = requestedSortOrder; + } + sqlite3DefaultRowEst(pIndex); + + if( pTab==pParse->pNewTable ){ + /* This routine has been called to create an automatic index as a + ** result of a PRIMARY KEY or UNIQUE clause on a column definition, or + ** a PRIMARY KEY or UNIQUE clause following the column definitions. + ** i.e. one of: + ** + ** CREATE TABLE t(x PRIMARY KEY, y); + ** CREATE TABLE t(x, y, UNIQUE(x, y)); + ** + ** Either way, check to see if the table already has such an index. If + ** so, don't bother creating this one. This only applies to + ** automatically created indices. Users can do as they wish with + ** explicit indices. 
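The sortOrderMask logic above means that on file_format versions below 4 a DESC keyword on an index column is parsed but stored as ASC. A sketch of that masking with invented test values; only the mask arithmetic follows the diff.

    /* Sketch: honor or ignore DESC depending on the file format. */
    #include <stdio.h>

    #define SORT_ASC  0
    #define SORT_DESC 1

    int main(void){
      int file_format;
      for(file_format=3; file_format<=4; file_format++){
        int sortOrderMask = (file_format>=4) ? -1 : 0;  /* honor or ignore */
        int requested = SORT_DESC & sortOrderMask;
        printf("file_format %d: DESC stored as %d\n", file_format, requested);
      }
      return 0;                          /* 3 -> 0 (ASC), 4 -> 1 (DESC) */
    }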
+ */ + Index *pIdx; + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + int k; + assert( pIdx->onError!=OE_None ); + assert( pIdx->autoIndex ); + assert( pIndex->onError!=OE_None ); + + if( pIdx->nColumn!=pIndex->nColumn ) continue; + for(k=0; knColumn; k++){ + const char *z1 = pIdx->azColl[k]; + const char *z2 = pIndex->azColl[k]; + if( pIdx->aiColumn[k]!=pIndex->aiColumn[k] ) break; + if( pIdx->aSortOrder[k]!=pIndex->aSortOrder[k] ) break; + if( z1!=z2 && sqlite3StrICmp(z1, z2) ) break; + } + if( k==pIdx->nColumn ){ + if( pIdx->onError!=pIndex->onError ){ + /* This constraint creates the same index as a previous + ** constraint specified somewhere in the CREATE TABLE statement. + ** However the ON CONFLICT clauses are different. If both this + ** constraint and the previous equivalent constraint have explicit + ** ON CONFLICT clauses this is an error. Otherwise, use the + ** explicitly specified behaviour for the index. + */ + if( !(pIdx->onError==OE_Default || pIndex->onError==OE_Default) ){ + sqlite3ErrorMsg(pParse, + "conflicting ON CONFLICT clauses specified", 0); + } + if( pIdx->onError==OE_Default ){ + pIdx->onError = pIndex->onError; + } + } + goto exit_create_index; + } + } + } + + /* Link the new Index structure to its table and to the other + ** in-memory database structures. + */ + if( db->init.busy ){ + Index *p; + p = sqlite3HashInsert(&pIndex->pSchema->idxHash, + pIndex->zName, strlen(pIndex->zName)+1, pIndex); + if( p ){ + assert( p==pIndex ); /* Malloc must have failed */ + db->mallocFailed = 1; + goto exit_create_index; + } + db->flags |= SQLITE_InternChanges; + if( pTblName!=0 ){ + pIndex->tnum = db->init.newTnum; + } + } + + /* If the db->init.busy is 0 then create the index on disk. This + ** involves writing the index into the master table and filling in the + ** index with the current table contents. + ** + ** The db->init.busy is 0 when the user first enters a CREATE INDEX + ** command. db->init.busy is 1 when a database is opened and + ** CREATE INDEX statements are read out of the master table. In + ** the latter case the index already exists on disk, which is why + ** we don't want to recreate it. + ** + ** If pTblName==0 it means this index is generated as a primary key + ** or UNIQUE constraint of a CREATE TABLE statement. Since the table + ** has just been created, it contains no data and the index initialization + ** step can be skipped. + */ + else if( db->init.busy==0 ){ + Vdbe *v; + char *zStmt; + int iMem = ++pParse->nMem; + + v = sqlite3GetVdbe(pParse); + if( v==0 ) goto exit_create_index; + + + /* Create the rootpage for the index + */ + sqlite3BeginWriteOperation(pParse, 1, iDb); + sqlite3VdbeAddOp2(v, OP_CreateIndex, iDb, iMem); + + /* Gather the complete text of the CREATE INDEX statement into + ** the zStmt variable + */ + if( pStart && pEnd ){ + /* A named index with an explicit CREATE INDEX statement */ + zStmt = sqlite3MPrintf(db, "CREATE%s INDEX %.*s", + onError==OE_None ? "" : " UNIQUE", + pEnd->z - pName->z + 1, + pName->z); + }else{ + /* An automatic index created by a PRIMARY KEY or UNIQUE constraint */ + /* zStmt = sqlite3MPrintf(""); */ + zStmt = 0; + } + + /* Add an entry in sqlite_master for this index + */ + sqlite3NestedParse(pParse, + "INSERT INTO %Q.%s VALUES('index',%Q,%Q,#%d,%Q);", + db->aDb[iDb].zName, SCHEMA_TABLE(iDb), + pIndex->zName, + pTab->zName, + iMem, + zStmt + ); + sqlite3_free(zStmt); + + /* Fill the index with data and reparse the schema. Code an OP_Expire + ** to invalidate all pre-compiled statements. 
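The duplicate-index check above compares column numbers, sort orders and collation names so that a redundant automatic index is never created. Below is a reduced sketch of that comparison; the IndexDesc struct and sample data are invented, and strcasecmp() (assumed POSIX) stands in for sqlite3StrICmp().

    /* Sketch: two index descriptors are equivalent if they index the same
    ** columns, in order, with the same sort order and collation. */
    #include <stdio.h>
    #include <strings.h>

    typedef struct IndexDesc {
      int nColumn;
      const int *aiColumn;
      const char **azColl;
      const unsigned char *aSortOrder;
    } IndexDesc;

    static int sameIndex(const IndexDesc *a, const IndexDesc *b){
      int k;
      if( a->nColumn!=b->nColumn ) return 0;
      for(k=0; k<a->nColumn; k++){
        if( a->aiColumn[k]!=b->aiColumn[k] ) return 0;
        if( a->aSortOrder[k]!=b->aSortOrder[k] ) return 0;
        if( strcasecmp(a->azColl[k], b->azColl[k])!=0 ) return 0;
      }
      return 1;
    }

    int main(void){
      int cols[] = {0, 1};
      const char *coll[] = {"BINARY", "NOCASE"};
      unsigned char so[] = {0, 0};
      IndexDesc a = {2, cols, coll, so}, b = a;
      printf("duplicate: %d\n", sameIndex(&a, &b));   /* prints 1 */
      return 0;
    }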
+ */ + if( pTblName ){ + sqlite3RefillIndex(pParse, pIndex, iMem); + sqlite3ChangeCookie(pParse, iDb); + sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, + sqlite3MPrintf(db, "name='%q'", pIndex->zName), P4_DYNAMIC); + sqlite3VdbeAddOp1(v, OP_Expire, 0); + } + } + + /* When adding an index to the list of indices for a table, make + ** sure all indices labeled OE_Replace come after all those labeled + ** OE_Ignore. This is necessary for the correct operation of UPDATE + ** and INSERT. + */ + if( db->init.busy || pTblName==0 ){ + if( onError!=OE_Replace || pTab->pIndex==0 + || pTab->pIndex->onError==OE_Replace){ + pIndex->pNext = pTab->pIndex; + pTab->pIndex = pIndex; + }else{ + Index *pOther = pTab->pIndex; + while( pOther->pNext && pOther->pNext->onError!=OE_Replace ){ + pOther = pOther->pNext; + } + pIndex->pNext = pOther->pNext; + pOther->pNext = pIndex; + } + pIndex = 0; + } + + /* Clean up before exiting */ +exit_create_index: + if( pIndex ){ + freeIndex(pIndex); + } + sqlite3ExprListDelete(pList); + sqlite3SrcListDelete(pTblName); + sqlite3_free(zName); + return; +} + +/* +** Generate code to make sure the file format number is at least minFormat. +** The generated code will increase the file format number if necessary. +*/ +void sqlite3MinimumFileFormat(Parse *pParse, int iDb, int minFormat){ + Vdbe *v; + v = sqlite3GetVdbe(pParse); + if( v ){ + int r1 = sqlite3GetTempReg(pParse); + int r2 = sqlite3GetTempReg(pParse); + int j1; + sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, r1, 1); + sqlite3VdbeUsesBtree(v, iDb); + sqlite3VdbeAddOp2(v, OP_Integer, minFormat, r2); + j1 = sqlite3VdbeAddOp3(v, OP_Ge, r2, 0, r1); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, 1, r2); + sqlite3VdbeJumpHere(v, j1); + sqlite3ReleaseTempReg(pParse, r1); + sqlite3ReleaseTempReg(pParse, r2); + } +} + +/* +** Fill the Index.aiRowEst[] array with default information - information +** to be used when we have not run the ANALYZE command. +** +** aiRowEst[0] is suppose to contain the number of elements in the index. +** Since we do not know, guess 1 million. aiRowEst[1] is an estimate of the +** number of rows in the table that match any particular value of the +** first column of the index. aiRowEst[2] is an estimate of the number +** of rows that match any particular combiniation of the first 2 columns +** of the index. And so forth. It must always be the case that +* +** aiRowEst[N]<=aiRowEst[N-1] +** aiRowEst[N]>=1 +** +** Apart from that, we have little to go on besides intuition as to +** how aiRowEst[] should be initialized. The numbers generated here +** are based on typical values found in actual indices. +*/ +void sqlite3DefaultRowEst(Index *pIdx){ + unsigned *a = pIdx->aiRowEst; + int i; + assert( a!=0 ); + a[0] = 1000000; + for(i=pIdx->nColumn; i>=5; i--){ + a[i] = 5; + } + while( i>=1 ){ + a[i] = 11 - i; + i--; + } + if( pIdx->onError!=OE_None ){ + a[pIdx->nColumn] = 1; + } +} + +/* +** This routine will drop an existing named index. This routine +** implements the DROP INDEX statement. 
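The sqlite3DefaultRowEst() rules above are easiest to see with concrete numbers. The sketch below reproduces the arithmetic for a hypothetical 3-column UNIQUE index; only the arithmetic follows the diff, everything else is invented.

    /* Sketch: default aiRowEst[] values for a 3-column unique index. */
    #include <stdio.h>

    int main(void){
      unsigned a[8];
      int nColumn = 3, isUnique = 1, i;
      a[0] = 1000000;                   /* guessed number of index entries */
      for(i=nColumn; i>=5; i--) a[i] = 5;
      while( i>=1 ){ a[i] = 11 - i; i--; }
      if( isUnique ) a[nColumn] = 1;    /* a full key selects a single row */
      for(i=0; i<=nColumn; i++) printf("aiRowEst[%d] = %u\n", i, a[i]);
      return 0;                         /* 1000000, 10, 9, 1 */
    }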
+*/ +void sqlite3DropIndex(Parse *pParse, SrcList *pName, int ifExists){ + Index *pIndex; + Vdbe *v; + sqlite3 *db = pParse->db; + int iDb; + + if( pParse->nErr || db->mallocFailed ){ + goto exit_drop_index; + } + assert( pName->nSrc==1 ); + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ + goto exit_drop_index; + } + pIndex = sqlite3FindIndex(db, pName->a[0].zName, pName->a[0].zDatabase); + if( pIndex==0 ){ + if( !ifExists ){ + sqlite3ErrorMsg(pParse, "no such index: %S", pName, 0); + } + pParse->checkSchema = 1; + goto exit_drop_index; + } + if( pIndex->autoIndex ){ + sqlite3ErrorMsg(pParse, "index associated with UNIQUE " + "or PRIMARY KEY constraint cannot be dropped", 0); + goto exit_drop_index; + } + iDb = sqlite3SchemaToIndex(db, pIndex->pSchema); +#ifndef SQLITE_OMIT_AUTHORIZATION + { + int code = SQLITE_DROP_INDEX; + Table *pTab = pIndex->pTable; + const char *zDb = db->aDb[iDb].zName; + const char *zTab = SCHEMA_TABLE(iDb); + if( sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb) ){ + goto exit_drop_index; + } + if( !OMIT_TEMPDB && iDb ) code = SQLITE_DROP_TEMP_INDEX; + if( sqlite3AuthCheck(pParse, code, pIndex->zName, pTab->zName, zDb) ){ + goto exit_drop_index; + } + } +#endif + + /* Generate code to remove the index and from the master table */ + v = sqlite3GetVdbe(pParse); + if( v ){ + sqlite3BeginWriteOperation(pParse, 1, iDb); + sqlite3NestedParse(pParse, + "DELETE FROM %Q.%s WHERE name=%Q", + db->aDb[iDb].zName, SCHEMA_TABLE(iDb), + pIndex->zName + ); + sqlite3ChangeCookie(pParse, iDb); + destroyRootPage(pParse, pIndex->tnum, iDb); + sqlite3VdbeAddOp4(v, OP_DropIndex, iDb, 0, 0, pIndex->zName, 0); + } + +exit_drop_index: + sqlite3SrcListDelete(pName); +} + +/* +** pArray is a pointer to an array of objects. Each object in the +** array is szEntry bytes in size. This routine allocates a new +** object on the end of the array. +** +** *pnEntry is the number of entries already in use. *pnAlloc is +** the previously allocated size of the array. initSize is the +** suggested initial array size allocation. +** +** The index of the new entry is returned in *pIdx. +** +** This routine returns a pointer to the array of objects. This +** might be the same as the pArray parameter or it might be a different +** pointer if the array was resized. +*/ +void *sqlite3ArrayAllocate( + sqlite3 *db, /* Connection to notify of malloc failures */ + void *pArray, /* Array of objects. Might be reallocated */ + int szEntry, /* Size of each object in the array */ + int initSize, /* Suggested initial allocation, in elements */ + int *pnEntry, /* Number of objects currently in use */ + int *pnAlloc, /* Current size of the allocation, in elements */ + int *pIdx /* Write the index of a new slot here */ +){ + char *z; + if( *pnEntry >= *pnAlloc ){ + void *pNew; + int newSize; + newSize = (*pnAlloc)*2 + initSize; + pNew = sqlite3DbRealloc(db, pArray, newSize*szEntry); + if( pNew==0 ){ + *pIdx = -1; + return pArray; + } + *pnAlloc = newSize; + pArray = pNew; + } + z = (char*)pArray; + memset(&z[*pnEntry * szEntry], 0, szEntry); + *pIdx = *pnEntry; + ++*pnEntry; + return pArray; +} + +/* +** Append a new element to the given IdList. Create a new IdList if +** need be. +** +** A new IdList is returned, or NULL if malloc() fails. 
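sqlite3ArrayAllocate() above grows its array with newSize = oldAlloc*2 + initSize. The sketch below shows the allocation sizes that rule produces for a repeatedly appended array; no real allocation is performed and the entry counts are invented.

    /* Sketch: growth schedule of sqlite3ArrayAllocate()'s rule. */
    #include <stdio.h>

    int main(void){
      int nAlloc = 0, nEntry, initSize = 5;
      for(nEntry=0; nEntry<40; nEntry++){
        if( nEntry>=nAlloc ){
          nAlloc = nAlloc*2 + initSize;        /* 5, 15, 35, 75, ... */
          printf("grow to %d slots at entry %d\n", nAlloc, nEntry);
        }
      }
      return 0;
    }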
+*/ +IdList *sqlite3IdListAppend(sqlite3 *db, IdList *pList, Token *pToken){ + int i; + if( pList==0 ){ + pList = sqlite3DbMallocZero(db, sizeof(IdList) ); + if( pList==0 ) return 0; + pList->nAlloc = 0; + } + pList->a = sqlite3ArrayAllocate( + db, + pList->a, + sizeof(pList->a[0]), + 5, + &pList->nId, + &pList->nAlloc, + &i + ); + if( i<0 ){ + sqlite3IdListDelete(pList); + return 0; + } + pList->a[i].zName = sqlite3NameFromToken(db, pToken); + return pList; +} + +/* +** Delete an IdList. +*/ +void sqlite3IdListDelete(IdList *pList){ + int i; + if( pList==0 ) return; + for(i=0; inId; i++){ + sqlite3_free(pList->a[i].zName); + } + sqlite3_free(pList->a); + sqlite3_free(pList); +} + +/* +** Return the index in pList of the identifier named zId. Return -1 +** if not found. +*/ +int sqlite3IdListIndex(IdList *pList, const char *zName){ + int i; + if( pList==0 ) return -1; + for(i=0; inId; i++){ + if( sqlite3StrICmp(pList->a[i].zName, zName)==0 ) return i; + } + return -1; +} + +/* +** Append a new table name to the given SrcList. Create a new SrcList if +** need be. A new entry is created in the SrcList even if pToken is NULL. +** +** A new SrcList is returned, or NULL if malloc() fails. +** +** If pDatabase is not null, it means that the table has an optional +** database name prefix. Like this: "database.table". The pDatabase +** points to the table name and the pTable points to the database name. +** The SrcList.a[].zName field is filled with the table name which might +** come from pTable (if pDatabase is NULL) or from pDatabase. +** SrcList.a[].zDatabase is filled with the database name from pTable, +** or with NULL if no database is specified. +** +** In other words, if call like this: +** +** sqlite3SrcListAppend(D,A,B,0); +** +** Then B is a table name and the database name is unspecified. If called +** like this: +** +** sqlite3SrcListAppend(D,A,B,C); +** +** Then C is the table name and B is the database name. +*/ +SrcList *sqlite3SrcListAppend( + sqlite3 *db, /* Connection to notify of malloc failures */ + SrcList *pList, /* Append to this SrcList. 
NULL creates a new SrcList */ + Token *pTable, /* Table to append */ + Token *pDatabase /* Database of the table */ +){ + struct SrcList_item *pItem; + if( pList==0 ){ + pList = sqlite3DbMallocZero(db, sizeof(SrcList) ); + if( pList==0 ) return 0; + pList->nAlloc = 1; + } + if( pList->nSrc>=pList->nAlloc ){ + SrcList *pNew; + pList->nAlloc *= 2; + pNew = sqlite3DbRealloc(db, pList, + sizeof(*pList) + (pList->nAlloc-1)*sizeof(pList->a[0]) ); + if( pNew==0 ){ + sqlite3SrcListDelete(pList); + return 0; + } + pList = pNew; + } + pItem = &pList->a[pList->nSrc]; + memset(pItem, 0, sizeof(pList->a[0])); + if( pDatabase && pDatabase->z==0 ){ + pDatabase = 0; + } + if( pDatabase && pTable ){ + Token *pTemp = pDatabase; + pDatabase = pTable; + pTable = pTemp; + } + pItem->zName = sqlite3NameFromToken(db, pTable); + pItem->zDatabase = sqlite3NameFromToken(db, pDatabase); + pItem->iCursor = -1; + pItem->isPopulated = 0; + pList->nSrc++; + return pList; +} + +/* +** Assign cursors to all tables in a SrcList +*/ +void sqlite3SrcListAssignCursors(Parse *pParse, SrcList *pList){ + int i; + struct SrcList_item *pItem; + assert(pList || pParse->db->mallocFailed ); + if( pList ){ + for(i=0, pItem=pList->a; inSrc; i++, pItem++){ + if( pItem->iCursor>=0 ) break; + pItem->iCursor = pParse->nTab++; + if( pItem->pSelect ){ + sqlite3SrcListAssignCursors(pParse, pItem->pSelect->pSrc); + } + } + } +} + +/* +** Delete an entire SrcList including all its substructure. +*/ +void sqlite3SrcListDelete(SrcList *pList){ + int i; + struct SrcList_item *pItem; + if( pList==0 ) return; + for(pItem=pList->a, i=0; inSrc; i++, pItem++){ + sqlite3_free(pItem->zDatabase); + sqlite3_free(pItem->zName); + sqlite3_free(pItem->zAlias); + sqlite3DeleteTable(pItem->pTab); + sqlite3SelectDelete(pItem->pSelect); + sqlite3ExprDelete(pItem->pOn); + sqlite3IdListDelete(pItem->pUsing); + } + sqlite3_free(pList); +} + +/* +** This routine is called by the parser to add a new term to the +** end of a growing FROM clause. The "p" parameter is the part of +** the FROM clause that has already been constructed. "p" is NULL +** if this is the first term of the FROM clause. pTable and pDatabase +** are the name of the table and database named in the FROM clause term. +** pDatabase is NULL if the database name qualifier is missing - the +** usual case. If the term has a alias, then pAlias points to the +** alias token. If the term is a subquery, then pSubquery is the +** SELECT statement that the subquery encodes. The pTable and +** pDatabase parameters are NULL for subqueries. The pOn and pUsing +** parameters are the content of the ON and USING clauses. +** +** Return a new SrcList which encodes is the FROM with the new +** term added. 
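The calling convention documented above for sqlite3SrcListAppend() is easy to misread: for a two-part name the pTable parameter actually carries the database name and the pDatabase parameter the table name, and the routine swaps them internally. A sketch of just that swap; the Token struct, the wrapper function and the names are stand-ins, not the library API.

    /* Sketch: role swap for "database.table" style arguments. */
    #include <stdio.h>

    typedef struct Token { const char *z; } Token;

    static void appendFrom(Token *pTable, Token *pDatabase){
      if( pDatabase && pDatabase->z && pTable ){   /* two-part name: swap */
        Token *pTemp = pDatabase;
        pDatabase = pTable;
        pTable = pTemp;
      }
      printf("table=%s database=%s\n",
             pTable->z,
             (pDatabase && pDatabase->z) ? pDatabase->z : "(default)");
    }

    int main(void){
      Token tT1 = {"t1"}, tDb = {"aux"};
      appendFrom(&tT1, 0);     /* FROM t1     -> table=t1 database=(default) */
      appendFrom(&tDb, &tT1);  /* FROM aux.t1 -> table=t1 database=aux       */
      return 0;
    }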
+*/ +SrcList *sqlite3SrcListAppendFromTerm( + Parse *pParse, /* Parsing context */ + SrcList *p, /* The left part of the FROM clause already seen */ + Token *pTable, /* Name of the table to add to the FROM clause */ + Token *pDatabase, /* Name of the database containing pTable */ + Token *pAlias, /* The right-hand side of the AS subexpression */ + Select *pSubquery, /* A subquery used in place of a table name */ + Expr *pOn, /* The ON clause of a join */ + IdList *pUsing /* The USING clause of a join */ +){ + struct SrcList_item *pItem; + sqlite3 *db = pParse->db; + p = sqlite3SrcListAppend(db, p, pTable, pDatabase); + if( p==0 || p->nSrc==0 ){ + sqlite3ExprDelete(pOn); + sqlite3IdListDelete(pUsing); + sqlite3SelectDelete(pSubquery); + return p; + } + pItem = &p->a[p->nSrc-1]; + if( pAlias && pAlias->n ){ + pItem->zAlias = sqlite3NameFromToken(db, pAlias); + } + pItem->pSelect = pSubquery; + pItem->pOn = pOn; + pItem->pUsing = pUsing; + return p; +} + +/* +** When building up a FROM clause in the parser, the join operator +** is initially attached to the left operand. But the code generator +** expects the join operator to be on the right operand. This routine +** Shifts all join operators from left to right for an entire FROM +** clause. +** +** Example: Suppose the join is like this: +** +** A natural cross join B +** +** The operator is "natural cross join". The A and B operands are stored +** in p->a[0] and p->a[1], respectively. The parser initially stores the +** operator with A. This routine shifts that operator over to B. +*/ +void sqlite3SrcListShiftJoinType(SrcList *p){ + if( p && p->a ){ + int i; + for(i=p->nSrc-1; i>0; i--){ + p->a[i].jointype = p->a[i-1].jointype; + } + p->a[0].jointype = 0; + } +} + +/* +** Begin a transaction +*/ +void sqlite3BeginTransaction(Parse *pParse, int type){ + sqlite3 *db; + Vdbe *v; + int i; + + if( pParse==0 || (db=pParse->db)==0 || db->aDb[0].pBt==0 ) return; + if( pParse->nErr || db->mallocFailed ) return; + if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "BEGIN", 0, 0) ) return; + + v = sqlite3GetVdbe(pParse); + if( !v ) return; + if( type!=TK_DEFERRED ){ + for(i=0; inDb; i++){ + sqlite3VdbeAddOp2(v, OP_Transaction, i, (type==TK_EXCLUSIVE)+1); + sqlite3VdbeUsesBtree(v, i); + } + } + sqlite3VdbeAddOp2(v, OP_AutoCommit, 0, 0); +} + +/* +** Commit a transaction +*/ +void sqlite3CommitTransaction(Parse *pParse){ + sqlite3 *db; + Vdbe *v; + + if( pParse==0 || (db=pParse->db)==0 || db->aDb[0].pBt==0 ) return; + if( pParse->nErr || db->mallocFailed ) return; + if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "COMMIT", 0, 0) ) return; + + v = sqlite3GetVdbe(pParse); + if( v ){ + sqlite3VdbeAddOp2(v, OP_AutoCommit, 1, 0); + } +} + +/* +** Rollback a transaction +*/ +void sqlite3RollbackTransaction(Parse *pParse){ + sqlite3 *db; + Vdbe *v; + + if( pParse==0 || (db=pParse->db)==0 || db->aDb[0].pBt==0 ) return; + if( pParse->nErr || db->mallocFailed ) return; + if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "ROLLBACK", 0, 0) ) return; + + v = sqlite3GetVdbe(pParse); + if( v ){ + sqlite3VdbeAddOp2(v, OP_AutoCommit, 1, 1); + } +} + +/* +** Make sure the TEMP database is open and available for use. Return +** the number of errors. Leave any error messages in the pParse structure. 
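sqlite3SrcListShiftJoinType() above moves each join operator from the left operand, where the parser attaches it, to the right operand, where the code generator expects it. The shift in isolation, with invented numeric jointype values:

    /* Sketch: shift join operators one slot to the right, clear slot 0. */
    #include <stdio.h>

    int main(void){
      int jointype[3] = { 5, 2, 0 };   /* operators as parsed, attached left */
      int n = 3, i;
      for(i=n-1; i>0; i--) jointype[i] = jointype[i-1];
      jointype[0] = 0;
      for(i=0; i<n; i++) printf("%d ", jointype[i]);   /* 0 5 2 */
      printf("\n");
      return 0;
    }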
+*/ +int sqlite3OpenTempDatabase(Parse *pParse){ + sqlite3 *db = pParse->db; + if( db->aDb[1].pBt==0 && !pParse->explain ){ + int rc; + static const int flags = + SQLITE_OPEN_READWRITE | + SQLITE_OPEN_CREATE | + SQLITE_OPEN_EXCLUSIVE | + SQLITE_OPEN_DELETEONCLOSE | + SQLITE_OPEN_TEMP_DB; + + rc = sqlite3BtreeFactory(db, 0, 0, SQLITE_DEFAULT_CACHE_SIZE, flags, + &db->aDb[1].pBt); + if( rc!=SQLITE_OK ){ + sqlite3ErrorMsg(pParse, "unable to open a temporary database " + "file for storing temporary tables"); + pParse->rc = rc; + return 1; + } + assert( (db->flags & SQLITE_InTrans)==0 || db->autoCommit ); + assert( db->aDb[1].pSchema ); + } + return 0; +} + +/* +** Generate VDBE code that will verify the schema cookie and start +** a read-transaction for all named database files. +** +** It is important that all schema cookies be verified and all +** read transactions be started before anything else happens in +** the VDBE program. But this routine can be called after much other +** code has been generated. So here is what we do: +** +** The first time this routine is called, we code an OP_Goto that +** will jump to a subroutine at the end of the program. Then we +** record every database that needs its schema verified in the +** pParse->cookieMask field. Later, after all other code has been +** generated, the subroutine that does the cookie verifications and +** starts the transactions will be coded and the OP_Goto P2 value +** will be made to point to that subroutine. The generation of the +** cookie verification subroutine code happens in sqlite3FinishCoding(). +** +** If iDb<0 then code the OP_Goto only - don't set flag to verify the +** schema on any databases. This can be used to position the OP_Goto +** early in the code, before we know if any database tables will be used. +*/ +void sqlite3CodeVerifySchema(Parse *pParse, int iDb){ + sqlite3 *db; + Vdbe *v; + int mask; + + v = sqlite3GetVdbe(pParse); + if( v==0 ) return; /* This only happens if there was a prior error */ + db = pParse->db; + if( pParse->cookieGoto==0 ){ + pParse->cookieGoto = sqlite3VdbeAddOp2(v, OP_Goto, 0, 0)+1; + } + if( iDb>=0 ){ + assert( iDbnDb ); + assert( db->aDb[iDb].pBt!=0 || iDb==1 ); + assert( iDbcookieMask & mask)==0 ){ + pParse->cookieMask |= mask; + pParse->cookieValue[iDb] = db->aDb[iDb].pSchema->schema_cookie; + if( !OMIT_TEMPDB && iDb==1 ){ + sqlite3OpenTempDatabase(pParse); + } + } + } +} + +/* +** Generate VDBE code that prepares for doing an operation that +** might change the database. +** +** This routine starts a new transaction if we are not already within +** a transaction. If we are already within a transaction, then a checkpoint +** is set if the setStatement parameter is true. A checkpoint should +** be set for operations that might fail (due to a constraint) part of +** the way through and which will need to undo some writes without having to +** rollback the whole transaction. For operations where all constraints +** can be checked before any changes are made to the database, it is never +** necessary to undo a write and the checkpoint should not be set. +** +** Only database iDb and the temp database are made writable by this call. +** If iDb==0, then the main and temp databases are made writable. If +** iDb==1 then only the temp database is made writable. If iDb>1 then the +** specified auxiliary database and the temp database are made writable. 
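sqlite3CodeVerifySchema() above records each database in a bitmask so that its schema cookie is verified only once per statement, no matter how many times the database is referenced. A sketch of that bookkeeping, with invented database indices:

    /* Sketch: one bit per attached database; verify on first use only. */
    #include <stdio.h>

    int main(void){
      unsigned cookieMask = 0;
      int aUse[] = { 0, 1, 0, 2 };        /* databases touched by a statement */
      int i;
      for(i=0; i<4; i++){
        unsigned mask = 1u << aUse[i];
        if( (cookieMask & mask)==0 ){
          cookieMask |= mask;
          printf("verify schema cookie of database %d\n", aUse[i]);
        }                                  /* database 0 is verified only once */
      }
      return 0;
    }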
+*/ +void sqlite3BeginWriteOperation(Parse *pParse, int setStatement, int iDb){ + Vdbe *v = sqlite3GetVdbe(pParse); + if( v==0 ) return; + sqlite3CodeVerifySchema(pParse, iDb); + pParse->writeMask |= 1<nested==0 ){ + sqlite3VdbeAddOp1(v, OP_Statement, iDb); + } + if( (OMIT_TEMPDB || iDb!=1) && pParse->db->aDb[1].pBt!=0 ){ + sqlite3BeginWriteOperation(pParse, setStatement, 1); + } +} + +/* +** Check to see if pIndex uses the collating sequence pColl. Return +** true if it does and false if it does not. +*/ +#ifndef SQLITE_OMIT_REINDEX +static int collationMatch(const char *zColl, Index *pIndex){ + int i; + for(i=0; inColumn; i++){ + const char *z = pIndex->azColl[i]; + if( z==zColl || (z && zColl && 0==sqlite3StrICmp(z, zColl)) ){ + return 1; + } + } + return 0; +} +#endif + +/* +** Recompute all indices of pTab that use the collating sequence pColl. +** If pColl==0 then recompute all indices of pTab. +*/ +#ifndef SQLITE_OMIT_REINDEX +static void reindexTable(Parse *pParse, Table *pTab, char const *zColl){ + Index *pIndex; /* An index associated with pTab */ + + for(pIndex=pTab->pIndex; pIndex; pIndex=pIndex->pNext){ + if( zColl==0 || collationMatch(zColl, pIndex) ){ + int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + sqlite3BeginWriteOperation(pParse, 0, iDb); + sqlite3RefillIndex(pParse, pIndex, -1); + } + } +} +#endif + +/* +** Recompute all indices of all tables in all databases where the +** indices use the collating sequence pColl. If pColl==0 then recompute +** all indices everywhere. +*/ +#ifndef SQLITE_OMIT_REINDEX +static void reindexDatabases(Parse *pParse, char const *zColl){ + Db *pDb; /* A single database */ + int iDb; /* The database index number */ + sqlite3 *db = pParse->db; /* The database connection */ + HashElem *k; /* For looping over tables in pDb */ + Table *pTab; /* A table in the database */ + + for(iDb=0, pDb=db->aDb; iDbnDb; iDb++, pDb++){ + assert( pDb!=0 ); + for(k=sqliteHashFirst(&pDb->pSchema->tblHash); k; k=sqliteHashNext(k)){ + pTab = (Table*)sqliteHashData(k); + reindexTable(pParse, pTab, zColl); + } + } +} +#endif + +/* +** Generate code for the REINDEX command. +** +** REINDEX -- 1 +** REINDEX -- 2 +** REINDEX ?.? -- 3 +** REINDEX ?.? -- 4 +** +** Form 1 causes all indices in all attached databases to be rebuilt. +** Form 2 rebuilds all indices in all databases that use the named +** collating function. Forms 3 and 4 rebuild the named index or all +** indices associated with the named table. +*/ +#ifndef SQLITE_OMIT_REINDEX +void sqlite3Reindex(Parse *pParse, Token *pName1, Token *pName2){ + CollSeq *pColl; /* Collating sequence to be reindexed, or NULL */ + char *z; /* Name of a table or index */ + const char *zDb; /* Name of the database */ + Table *pTab; /* A table in the database */ + Index *pIndex; /* An index associated with pTab */ + int iDb; /* The database index number */ + sqlite3 *db = pParse->db; /* The database connection */ + Token *pObjName; /* Name of the table or index to be reindexed */ + + /* Read the database schema. If an error occurs, leave an error message + ** and code in pParse and return NULL. 
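collationMatch() above is what limits a REINDEX on a collation name to the indices that actually use that collation in one of their columns. A sketch of the same case-insensitive scan; strcasecmp() stands in for sqlite3StrICmp() and the sample collation list is invented.

    /* Sketch: does any column of the index use the named collation? */
    #include <stdio.h>
    #include <strings.h>

    static int collationMatch(const char *zColl, const char **azColl, int nCol){
      int i;
      for(i=0; i<nCol; i++){
        if( azColl[i] && strcasecmp(azColl[i], zColl)==0 ) return 1;
      }
      return 0;
    }

    int main(void){
      const char *az[] = { "BINARY", "NOCASE" };
      printf("%d %d\n", collationMatch("nocase", az, 2),
                        collationMatch("RTRIM", az, 2));   /* 1 0 */
      return 0;
    }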
*/ + if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ + return; + } + + if( pName1==0 || pName1->z==0 ){ + reindexDatabases(pParse, 0); + return; + }else if( pName2==0 || pName2->z==0 ){ + char *zColl; + assert( pName1->z ); + zColl = sqlite3NameFromToken(pParse->db, pName1); + if( !zColl ) return; + pColl = sqlite3FindCollSeq(db, ENC(db), zColl, -1, 0); + if( pColl ){ + if( zColl ){ + reindexDatabases(pParse, zColl); + sqlite3_free(zColl); + } + return; + } + sqlite3_free(zColl); + } + iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pObjName); + if( iDb<0 ) return; + z = sqlite3NameFromToken(db, pObjName); + if( z==0 ) return; + zDb = db->aDb[iDb].zName; + pTab = sqlite3FindTable(db, z, zDb); + if( pTab ){ + reindexTable(pParse, pTab, 0); + sqlite3_free(z); + return; + } + pIndex = sqlite3FindIndex(db, z, zDb); + sqlite3_free(z); + if( pIndex ){ + sqlite3BeginWriteOperation(pParse, 0, iDb); + sqlite3RefillIndex(pParse, pIndex, -1); + return; + } + sqlite3ErrorMsg(pParse, "unable to identify the object to be reindexed"); +} +#endif + +/* +** Return a dynamicly allocated KeyInfo structure that can be used +** with OP_OpenRead or OP_OpenWrite to access database index pIdx. +** +** If successful, a pointer to the new structure is returned. In this case +** the caller is responsible for calling sqlite3_free() on the returned +** pointer. If an error occurs (out of memory or missing collation +** sequence), NULL is returned and the state of pParse updated to reflect +** the error. +*/ +KeyInfo *sqlite3IndexKeyinfo(Parse *pParse, Index *pIdx){ + int i; + int nCol = pIdx->nColumn; + int nBytes = sizeof(KeyInfo) + (nCol-1)*sizeof(CollSeq*) + nCol; + KeyInfo *pKey = (KeyInfo *)sqlite3DbMallocZero(pParse->db, nBytes); + + if( pKey ){ + pKey->db = pParse->db; + pKey->aSortOrder = (u8 *)&(pKey->aColl[nCol]); + assert( &pKey->aSortOrder[nCol]==&(((u8 *)pKey)[nBytes]) ); + for(i=0; iazColl[i]; + assert( zColl ); + pKey->aColl[i] = sqlite3LocateCollSeq(pParse, zColl, -1); + pKey->aSortOrder[i] = pIdx->aSortOrder[i]; + } + pKey->nField = nCol; + } + + if( pParse->nErr ){ + sqlite3_free(pKey); + pKey = 0; + } + return pKey; +} Added: external/sqlite-source-3.5.7.x/callback.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/callback.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,378 @@ +/* +** 2005 May 23 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** This file contains functions used to access the internal hash tables +** of user defined functions and collation sequences. +** +** $Id: callback.c,v 1.23 2007/08/29 12:31:26 danielk1977 Exp $ +*/ + +#include "sqliteInt.h" + +/* +** Invoke the 'collation needed' callback to request a collation sequence +** in the database text encoding of name zName, length nName. 
+** If the collation sequence +*/ +static void callCollNeeded(sqlite3 *db, const char *zName, int nName){ + assert( !db->xCollNeeded || !db->xCollNeeded16 ); + if( nName<0 ) nName = strlen(zName); + if( db->xCollNeeded ){ + char *zExternal = sqlite3DbStrNDup(db, zName, nName); + if( !zExternal ) return; + db->xCollNeeded(db->pCollNeededArg, db, (int)ENC(db), zExternal); + sqlite3_free(zExternal); + } +#ifndef SQLITE_OMIT_UTF16 + if( db->xCollNeeded16 ){ + char const *zExternal; + sqlite3_value *pTmp = sqlite3ValueNew(db); + sqlite3ValueSetStr(pTmp, nName, zName, SQLITE_UTF8, SQLITE_STATIC); + zExternal = sqlite3ValueText(pTmp, SQLITE_UTF16NATIVE); + if( zExternal ){ + db->xCollNeeded16(db->pCollNeededArg, db, (int)ENC(db), zExternal); + } + sqlite3ValueFree(pTmp); + } +#endif +} + +/* +** This routine is called if the collation factory fails to deliver a +** collation function in the best encoding but there may be other versions +** of this collation function (for other text encodings) available. Use one +** of these instead if they exist. Avoid a UTF-8 <-> UTF-16 conversion if +** possible. +*/ +static int synthCollSeq(sqlite3 *db, CollSeq *pColl){ + CollSeq *pColl2; + char *z = pColl->zName; + int n = strlen(z); + int i; + static const u8 aEnc[] = { SQLITE_UTF16BE, SQLITE_UTF16LE, SQLITE_UTF8 }; + for(i=0; i<3; i++){ + pColl2 = sqlite3FindCollSeq(db, aEnc[i], z, n, 0); + if( pColl2->xCmp!=0 ){ + memcpy(pColl, pColl2, sizeof(CollSeq)); + pColl->xDel = 0; /* Do not copy the destructor */ + return SQLITE_OK; + } + } + return SQLITE_ERROR; +} + +/* +** This function is responsible for invoking the collation factory callback +** or substituting a collation sequence of a different encoding when the +** requested collation sequence is not available in the database native +** encoding. +** +** If it is not NULL, then pColl must point to the database native encoding +** collation sequence with name zName, length nName. +** +** The return value is either the collation sequence to be used in database +** db for collation type name zName, length nName, or NULL, if no collation +** sequence can be found. +*/ +CollSeq *sqlite3GetCollSeq( + sqlite3* db, + CollSeq *pColl, + const char *zName, + int nName +){ + CollSeq *p; + + p = pColl; + if( !p ){ + p = sqlite3FindCollSeq(db, ENC(db), zName, nName, 0); + } + if( !p || !p->xCmp ){ + /* No collation sequence of this type for this encoding is registered. + ** Call the collation factory to see if it can supply us with one. + */ + callCollNeeded(db, zName, nName); + p = sqlite3FindCollSeq(db, ENC(db), zName, nName, 0); + } + if( p && !p->xCmp && synthCollSeq(db, p) ){ + p = 0; + } + assert( !p || p->xCmp ); + return p; +} + +/* +** This routine is called on a collation sequence before it is used to +** check that it is defined. An undefined collation sequence exists when +** a database is loaded that contains references to collation sequences +** that have not been defined by sqlite3_create_collation() etc. +** +** If required, this routine calls the 'collation needed' callback to +** request a definition of the collating sequence. If this doesn't work, +** an equivalent collating sequence that uses a text encoding different +** from the main database is substituted, if one is available. 
+*/ +int sqlite3CheckCollSeq(Parse *pParse, CollSeq *pColl){ + if( pColl ){ + const char *zName = pColl->zName; + CollSeq *p = sqlite3GetCollSeq(pParse->db, pColl, zName, -1); + if( !p ){ + if( pParse->nErr==0 ){ + sqlite3ErrorMsg(pParse, "no such collation sequence: %s", zName); + } + pParse->nErr++; + return SQLITE_ERROR; + } + assert( p==pColl ); + } + return SQLITE_OK; +} + + + +/* +** Locate and return an entry from the db.aCollSeq hash table. If the entry +** specified by zName and nName is not found and parameter 'create' is +** true, then create a new entry. Otherwise return NULL. +** +** Each pointer stored in the sqlite3.aCollSeq hash table contains an +** array of three CollSeq structures. The first is the collation sequence +** prefferred for UTF-8, the second UTF-16le, and the third UTF-16be. +** +** Stored immediately after the three collation sequences is a copy of +** the collation sequence name. A pointer to this string is stored in +** each collation sequence structure. +*/ +static CollSeq *findCollSeqEntry( + sqlite3 *db, + const char *zName, + int nName, + int create +){ + CollSeq *pColl; + if( nName<0 ) nName = strlen(zName); + pColl = sqlite3HashFind(&db->aCollSeq, zName, nName); + + if( 0==pColl && create ){ + pColl = sqlite3DbMallocZero(db, 3*sizeof(*pColl) + nName + 1 ); + if( pColl ){ + CollSeq *pDel = 0; + pColl[0].zName = (char*)&pColl[3]; + pColl[0].enc = SQLITE_UTF8; + pColl[1].zName = (char*)&pColl[3]; + pColl[1].enc = SQLITE_UTF16LE; + pColl[2].zName = (char*)&pColl[3]; + pColl[2].enc = SQLITE_UTF16BE; + memcpy(pColl[0].zName, zName, nName); + pColl[0].zName[nName] = 0; + pDel = sqlite3HashInsert(&db->aCollSeq, pColl[0].zName, nName, pColl); + + /* If a malloc() failure occured in sqlite3HashInsert(), it will + ** return the pColl pointer to be deleted (because it wasn't added + ** to the hash table). + */ + assert( pDel==0 || pDel==pColl ); + if( pDel!=0 ){ + db->mallocFailed = 1; + sqlite3_free(pDel); + pColl = 0; + } + } + } + return pColl; +} + +/* +** Parameter zName points to a UTF-8 encoded string nName bytes long. +** Return the CollSeq* pointer for the collation sequence named zName +** for the encoding 'enc' from the database 'db'. +** +** If the entry specified is not found and 'create' is true, then create a +** new entry. Otherwise return NULL. +** +** A separate function sqlite3LocateCollSeq() is a wrapper around +** this routine. sqlite3LocateCollSeq() invokes the collation factory +** if necessary and generates an error message if the collating sequence +** cannot be found. +*/ +CollSeq *sqlite3FindCollSeq( + sqlite3 *db, + u8 enc, + const char *zName, + int nName, + int create +){ + CollSeq *pColl; + if( zName ){ + pColl = findCollSeqEntry(db, zName, nName, create); + }else{ + pColl = db->pDfltColl; + } + assert( SQLITE_UTF8==1 && SQLITE_UTF16LE==2 && SQLITE_UTF16BE==3 ); + assert( enc>=SQLITE_UTF8 && enc<=SQLITE_UTF16BE ); + if( pColl ) pColl += enc-1; + return pColl; +} + +/* +** Locate a user function given a name, a number of arguments and a flag +** indicating whether the function prefers UTF-16 over UTF-8. Return a +** pointer to the FuncDef structure that defines that function, or return +** NULL if the function does not exist. +** +** If the createFlag argument is true, then a new (blank) FuncDef +** structure is created and liked into the "db" structure if a +** no matching function previously existed. 
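findCollSeqEntry() above stores three per-encoding variants plus the shared name in a single allocation, and sqlite3FindCollSeq() picks a variant by simply adding enc-1 to the base pointer. A reduced sketch of that layout; the Coll struct and the encoding constants are stand-ins for CollSeq and the SQLITE_UTF* values.

    /* Sketch: three variants plus shared name in one block, indexed by enc. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    enum { UTF8 = 1, UTF16LE = 2, UTF16BE = 3 };
    typedef struct Coll { char *zName; int enc; } Coll;

    int main(void){
      const char *zName = "NOCASE";
      Coll *p = calloc(1, 3*sizeof(Coll) + strlen(zName) + 1);
      if( p==0 ) return 1;
      p[0].zName = p[1].zName = p[2].zName = (char*)&p[3];
      memcpy(p[0].zName, zName, strlen(zName)+1);
      p[0].enc = UTF8; p[1].enc = UTF16LE; p[2].enc = UTF16BE;
      {
        int enc = UTF16BE;
        Coll *pHit = p + (enc-1);          /* same trick as pColl += enc-1 */
        printf("%s / enc %d\n", pHit->zName, pHit->enc);
      }
      free(p);
      return 0;
    }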
When createFlag is true +** and the nArg parameter is -1, then only a function that accepts +** any number of arguments will be returned. +** +** If createFlag is false and nArg is -1, then the first valid +** function found is returned. A function is valid if either xFunc +** or xStep is non-zero. +** +** If createFlag is false, then a function with the required name and +** number of arguments may be returned even if the eTextRep flag does not +** match that requested. +*/ +FuncDef *sqlite3FindFunction( + sqlite3 *db, /* An open database */ + const char *zName, /* Name of the function. Not null-terminated */ + int nName, /* Number of characters in the name */ + int nArg, /* Number of arguments. -1 means any number */ + u8 enc, /* Preferred text encoding */ + int createFlag /* Create new entry if true and does not otherwise exist */ +){ + FuncDef *p; /* Iterator variable */ + FuncDef *pFirst; /* First function with this name */ + FuncDef *pBest = 0; /* Best match found so far */ + int bestmatch = 0; + + + assert( enc==SQLITE_UTF8 || enc==SQLITE_UTF16LE || enc==SQLITE_UTF16BE ); + if( nArg<-1 ) nArg = -1; + + pFirst = (FuncDef*)sqlite3HashFind(&db->aFunc, zName, nName); + for(p=pFirst; p; p=p->pNext){ + /* During the search for the best function definition, bestmatch is set + ** as follows to indicate the quality of the match with the definition + ** pointed to by pBest: + ** + ** 0: pBest is NULL. No match has been found. + ** 1: A variable arguments function that prefers UTF-8 when a UTF-16 + ** encoding is requested, or vice versa. + ** 2: A variable arguments function that uses UTF-16BE when UTF-16LE is + ** requested, or vice versa. + ** 3: A variable arguments function using the same text encoding. + ** 4: A function with the exact number of arguments requested that + ** prefers UTF-8 when a UTF-16 encoding is requested, or vice versa. + ** 5: A function with the exact number of arguments requested that + ** prefers UTF-16LE when UTF-16BE is requested, or vice versa. + ** 6: An exact match. + ** + ** A larger value of 'matchqual' indicates a more desirable match. + */ + if( p->nArg==-1 || p->nArg==nArg || nArg==-1 ){ + int match = 1; /* Quality of this match */ + if( p->nArg==nArg || nArg==-1 ){ + match = 4; + } + if( enc==p->iPrefEnc ){ + match += 2; + } + else if( (enc==SQLITE_UTF16LE && p->iPrefEnc==SQLITE_UTF16BE) || + (enc==SQLITE_UTF16BE && p->iPrefEnc==SQLITE_UTF16LE) ){ + match += 1; + } + + if( match>bestmatch ){ + pBest = p; + bestmatch = match; + } + } + } + + /* If the createFlag parameter is true, and the seach did not reveal an + ** exact match for the name, number of arguments and encoding, then add a + ** new entry to the hash table and return it. + */ + if( createFlag && bestmatch<6 && + (pBest = sqlite3DbMallocZero(db, sizeof(*pBest)+nName))!=0 ){ + pBest->nArg = nArg; + pBest->pNext = pFirst; + pBest->iPrefEnc = enc; + memcpy(pBest->zName, zName, nName); + pBest->zName[nName] = 0; + if( pBest==sqlite3HashInsert(&db->aFunc,pBest->zName,nName,(void*)pBest) ){ + db->mallocFailed = 1; + sqlite3_free(pBest); + return 0; + } + } + + if( pBest && (pBest->xStep || pBest->xFunc || createFlag) ){ + return pBest; + } + return 0; +} + +/* +** Free all resources held by the schema structure. The void* argument points +** at a Schema struct. This function does not call sqlite3_free() on the +** pointer itself, it just cleans up subsiduary resources (i.e. the contents +** of the schema hash tables). 
+*/ +void sqlite3SchemaFree(void *p){ + Hash temp1; + Hash temp2; + HashElem *pElem; + Schema *pSchema = (Schema *)p; + + temp1 = pSchema->tblHash; + temp2 = pSchema->trigHash; + sqlite3HashInit(&pSchema->trigHash, SQLITE_HASH_STRING, 0); + sqlite3HashClear(&pSchema->aFKey); + sqlite3HashClear(&pSchema->idxHash); + for(pElem=sqliteHashFirst(&temp2); pElem; pElem=sqliteHashNext(pElem)){ + sqlite3DeleteTrigger((Trigger*)sqliteHashData(pElem)); + } + sqlite3HashClear(&temp2); + sqlite3HashInit(&pSchema->tblHash, SQLITE_HASH_STRING, 0); + for(pElem=sqliteHashFirst(&temp1); pElem; pElem=sqliteHashNext(pElem)){ + Table *pTab = sqliteHashData(pElem); + sqlite3DeleteTable(pTab); + } + sqlite3HashClear(&temp1); + pSchema->pSeqTab = 0; + pSchema->flags &= ~DB_SchemaLoaded; +} + +/* +** Find and return the schema associated with a BTree. Create +** a new one if necessary. +*/ +Schema *sqlite3SchemaGet(sqlite3 *db, Btree *pBt){ + Schema * p; + if( pBt ){ + p = (Schema *)sqlite3BtreeSchema(pBt, sizeof(Schema), sqlite3SchemaFree); + }else{ + p = (Schema *)sqlite3MallocZero(sizeof(Schema)); + } + if( !p ){ + db->mallocFailed = 1; + }else if ( 0==p->file_format ){ + sqlite3HashInit(&p->tblHash, SQLITE_HASH_STRING, 0); + sqlite3HashInit(&p->idxHash, SQLITE_HASH_STRING, 0); + sqlite3HashInit(&p->trigHash, SQLITE_HASH_STRING, 0); + sqlite3HashInit(&p->aFKey, SQLITE_HASH_STRING, 1); + p->enc = SQLITE_UTF8; + } + return p; +} Added: external/sqlite-source-3.5.7.x/complete.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/complete.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,271 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** An tokenizer for SQL +** +** This file contains C code that implements the sqlite3_complete() API. +** This code used to be part of the tokenizer.c source file. But by +** separating it out, the code will be automatically omitted from +** static links that do not use it. +** +** $Id: complete.c,v 1.6 2007/08/27 23:26:59 drh Exp $ +*/ +#include "sqliteInt.h" +#ifndef SQLITE_OMIT_COMPLETE + +/* +** This is defined in tokenize.c. We just have to import the definition. +*/ +#ifndef SQLITE_AMALGAMATION +#ifdef SQLITE_ASCII +extern const char sqlite3IsAsciiIdChar[]; +#define IdChar(C) (((c=C)&0x80)!=0 || (c>0x1f && sqlite3IsAsciiIdChar[c-0x20])) +#endif +#ifdef SQLITE_EBCDIC +extern const char sqlite3IsEbcdicIdChar[]; +#define IdChar(C) (((c=C)>=0x42 && sqlite3IsEbcdicIdChar[c-0x40])) +#endif +#endif /* SQLITE_AMALGAMATION */ + + +/* +** Token types used by the sqlite3_complete() routine. See the header +** comments on that procedure for additional information. +*/ +#define tkSEMI 0 +#define tkWS 1 +#define tkOTHER 2 +#define tkEXPLAIN 3 +#define tkCREATE 4 +#define tkTEMP 5 +#define tkTRIGGER 6 +#define tkEND 7 + +/* +** Return TRUE if the given SQL string ends in a semicolon. +** +** Special handling is require for CREATE TRIGGER statements. +** Whenever the CREATE TRIGGER keywords are seen, the statement +** must end with ";END;". +** +** This implementation uses a state machine with 7 states: +** +** (0) START At the beginning or end of an SQL statement. 
This routine +** returns 1 if it ends in the START state and 0 if it ends +** in any other state. +** +** (1) NORMAL We are in the middle of statement which ends with a single +** semicolon. +** +** (2) EXPLAIN The keyword EXPLAIN has been seen at the beginning of +** a statement. +** +** (3) CREATE The keyword CREATE has been seen at the beginning of a +** statement, possibly preceeded by EXPLAIN and/or followed by +** TEMP or TEMPORARY +** +** (4) TRIGGER We are in the middle of a trigger definition that must be +** ended by a semicolon, the keyword END, and another semicolon. +** +** (5) SEMI We've seen the first semicolon in the ";END;" that occurs at +** the end of a trigger definition. +** +** (6) END We've seen the ";END" of the ";END;" that occurs at the end +** of a trigger difinition. +** +** Transitions between states above are determined by tokens extracted +** from the input. The following tokens are significant: +** +** (0) tkSEMI A semicolon. +** (1) tkWS Whitespace +** (2) tkOTHER Any other SQL token. +** (3) tkEXPLAIN The "explain" keyword. +** (4) tkCREATE The "create" keyword. +** (5) tkTEMP The "temp" or "temporary" keyword. +** (6) tkTRIGGER The "trigger" keyword. +** (7) tkEND The "end" keyword. +** +** Whitespace never causes a state transition and is always ignored. +** +** If we compile with SQLITE_OMIT_TRIGGER, all of the computation needed +** to recognize the end of a trigger can be omitted. All we have to do +** is look for a semicolon that is not part of an string or comment. +*/ +int sqlite3_complete(const char *zSql){ + u8 state = 0; /* Current state, using numbers defined in header comment */ + u8 token; /* Value of the next token */ + +#ifndef SQLITE_OMIT_TRIGGER + /* A complex statement machine used to detect the end of a CREATE TRIGGER + ** statement. This is the normal case. + */ + static const u8 trans[7][8] = { + /* Token: */ + /* State: ** SEMI WS OTHER EXPLAIN CREATE TEMP TRIGGER END */ + /* 0 START: */ { 0, 0, 1, 2, 3, 1, 1, 1, }, + /* 1 NORMAL: */ { 0, 1, 1, 1, 1, 1, 1, 1, }, + /* 2 EXPLAIN: */ { 0, 2, 1, 1, 3, 1, 1, 1, }, + /* 3 CREATE: */ { 0, 3, 1, 1, 1, 3, 4, 1, }, + /* 4 TRIGGER: */ { 5, 4, 4, 4, 4, 4, 4, 4, }, + /* 5 SEMI: */ { 5, 5, 4, 4, 4, 4, 4, 6, }, + /* 6 END: */ { 0, 6, 4, 4, 4, 4, 4, 4, }, + }; +#else + /* If triggers are not suppored by this compile then the statement machine + ** used to detect the end of a statement is much simplier + */ + static const u8 trans[2][3] = { + /* Token: */ + /* State: ** SEMI WS OTHER */ + /* 0 START: */ { 0, 0, 1, }, + /* 1 NORMAL: */ { 0, 1, 1, }, + }; +#endif /* SQLITE_OMIT_TRIGGER */ + + while( *zSql ){ + switch( *zSql ){ + case ';': { /* A semicolon */ + token = tkSEMI; + break; + } + case ' ': + case '\r': + case '\t': + case '\n': + case '\f': { /* White space is ignored */ + token = tkWS; + break; + } + case '/': { /* C-style comments */ + if( zSql[1]!='*' ){ + token = tkOTHER; + break; + } + zSql += 2; + while( zSql[0] && (zSql[0]!='*' || zSql[1]!='/') ){ zSql++; } + if( zSql[0]==0 ) return 0; + zSql++; + token = tkWS; + break; + } + case '-': { /* SQL-style comments from "--" to end of line */ + if( zSql[1]!='-' ){ + token = tkOTHER; + break; + } + while( *zSql && *zSql!='\n' ){ zSql++; } + if( *zSql==0 ) return state==0; + token = tkWS; + break; + } + case '[': { /* Microsoft-style identifiers in [...] 
*/ + zSql++; + while( *zSql && *zSql!=']' ){ zSql++; } + if( *zSql==0 ) return 0; + token = tkOTHER; + break; + } + case '`': /* Grave-accent quoted symbols used by MySQL */ + case '"': /* single- and double-quoted strings */ + case '\'': { + int c = *zSql; + zSql++; + while( *zSql && *zSql!=c ){ zSql++; } + if( *zSql==0 ) return 0; + token = tkOTHER; + break; + } + default: { + int c; + if( IdChar((u8)*zSql) ){ + /* Keywords and unquoted identifiers */ + int nId; + for(nId=1; IdChar(zSql[nId]); nId++){} +#ifdef SQLITE_OMIT_TRIGGER + token = tkOTHER; +#else + switch( *zSql ){ + case 'c': case 'C': { + if( nId==6 && sqlite3StrNICmp(zSql, "create", 6)==0 ){ + token = tkCREATE; + }else{ + token = tkOTHER; + } + break; + } + case 't': case 'T': { + if( nId==7 && sqlite3StrNICmp(zSql, "trigger", 7)==0 ){ + token = tkTRIGGER; + }else if( nId==4 && sqlite3StrNICmp(zSql, "temp", 4)==0 ){ + token = tkTEMP; + }else if( nId==9 && sqlite3StrNICmp(zSql, "temporary", 9)==0 ){ + token = tkTEMP; + }else{ + token = tkOTHER; + } + break; + } + case 'e': case 'E': { + if( nId==3 && sqlite3StrNICmp(zSql, "end", 3)==0 ){ + token = tkEND; + }else +#ifndef SQLITE_OMIT_EXPLAIN + if( nId==7 && sqlite3StrNICmp(zSql, "explain", 7)==0 ){ + token = tkEXPLAIN; + }else +#endif + { + token = tkOTHER; + } + break; + } + default: { + token = tkOTHER; + break; + } + } +#endif /* SQLITE_OMIT_TRIGGER */ + zSql += nId-1; + }else{ + /* Operators and special symbols */ + token = tkOTHER; + } + break; + } + } + state = trans[state][token]; + zSql++; + } + return state==0; +} + +#ifndef SQLITE_OMIT_UTF16 +/* +** This routine is the same as the sqlite3_complete() routine described +** above, except that the parameter is required to be UTF-16 encoded, not +** UTF-8. +*/ +int sqlite3_complete16(const void *zSql){ + sqlite3_value *pVal; + char const *zSql8; + int rc = SQLITE_NOMEM; + + pVal = sqlite3ValueNew(0); + sqlite3ValueSetStr(pVal, -1, zSql, SQLITE_UTF16NATIVE, SQLITE_STATIC); + zSql8 = sqlite3ValueText(pVal, SQLITE_UTF8); + if( zSql8 ){ + rc = sqlite3_complete(zSql8); + } + sqlite3ValueFree(pVal); + return sqlite3ApiExit(0, rc); +} +#endif /* SQLITE_OMIT_UTF16 */ +#endif /* SQLITE_OMIT_COMPLETE */ Added: external/sqlite-source-3.5.7.x/config.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/config.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,21 @@ +/* +** 2008 March 6 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Default configuration header in case the 'configure' script is not used +** +** @(#) $Id: config.h,v 1.1 2008/03/06 07:36:18 mlcreech Exp $ +*/ +#ifndef _CONFIG_H_ +#define _CONFIG_H_ + +/* We do nothing here, since no assumptions are made by default */ + +#endif Added: external/sqlite-source-3.5.7.x/date.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/date.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,1048 @@ +/* +** 2003 October 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. 
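A minimal usage sketch for the sqlite3_complete() routine added above, assuming a caller that accumulates input a line at a time (this is roughly how a line-oriented shell decides whether to keep prompting):

/* Sketch only: collect lines until sqlite3_complete() reports that the
** buffer ends in a complete statement (trailing semicolon, or ";END;"
** for CREATE TRIGGER), then hand it to the database.  Fixed-size buffer
** and missing overflow/error handling are simplifications. */
#include <stdio.h>
#include <string.h>
#include "sqlite3.h"

static void repl(sqlite3 *db){
  char zSql[4096];
  char zLine[512];
  zSql[0] = 0;
  while( fgets(zLine, sizeof(zLine), stdin) ){
    strncat(zSql, zLine, sizeof(zSql)-strlen(zSql)-1);
    if( sqlite3_complete(zSql) ){
      sqlite3_exec(db, zSql, 0, 0, 0);   /* error handling omitted */
      zSql[0] = 0;                       /* start the next statement */
    }
  }
}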
+** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement date and time +** functions for SQLite. +** +** There is only one exported symbol in this file - the function +** sqlite3RegisterDateTimeFunctions() found at the bottom of the file. +** All other code has file scope. +** +** $Id: date.c,v 1.76 2008/02/21 20:40:44 drh Exp $ +** +** SQLite processes all times and dates as Julian Day numbers. The +** dates and times are stored as the number of days since noon +** in Greenwich on November 24, 4714 B.C. according to the Gregorian +** calendar system. +** +** 1970-01-01 00:00:00 is JD 2440587.5 +** 2000-01-01 00:00:00 is JD 2451544.5 +** +** This implementation requires years to be expressed as a 4-digit number +** which means that only dates between 0000-01-01 and 9999-12-31 can +** be represented, even though julian day numbers allow a much wider +** range of dates. +** +** The Gregorian calendar system is used for all dates and times, +** even those that predate the Gregorian calendar. Historians usually +** use the Julian calendar for dates prior to 1582-10-15 and for some +** dates afterwards, depending on locale. Beware of this difference. +** +** The conversion algorithms are implemented based on descriptions +** in the following text: +** +** Jean Meeus +** Astronomical Algorithms, 2nd Edition, 1998 +** ISBN 0-943396-61-1 +** Willmann-Bell, Inc +** Richmond, Virginia (USA) +*/ +#include "sqliteInt.h" +#include <ctype.h> +#include <stdlib.h> +#include <assert.h> +#include <time.h> + +#ifndef SQLITE_OMIT_DATETIME_FUNCS + +/* +** A structure for holding a single date and time. +*/ +typedef struct DateTime DateTime; +struct DateTime { + double rJD; /* The julian day number */ + int Y, M, D; /* Year, month, and day */ + int h, m; /* Hour and minutes */ + int tz; /* Timezone offset in minutes */ + double s; /* Seconds */ + char validYMD; /* True if Y,M,D are valid */ + char validHMS; /* True if h,m,s are valid */ + char validJD; /* True if rJD is valid */ + char validTZ; /* True if tz is valid */ +}; + + +/* +** Convert zDate into one or more integers. Additional arguments +** come in groups of 5 as follows: +** +** N number of digits in the integer +** min minimum allowed value of the integer +** max maximum allowed value of the integer +** nextC first character after the integer +** pVal where to write the integer's value. +** +** Conversions continue until one with nextC==0 is encountered. +** The function returns the number of successful conversions. +*/ +static int getDigits(const char *zDate, ...){ + va_list ap; + int val; + int N; + int min; + int max; + int nextC; + int *pVal; + int cnt = 0; + va_start(ap, zDate); + do{ + N = va_arg(ap, int); + min = va_arg(ap, int); + max = va_arg(ap, int); + nextC = va_arg(ap, int); + pVal = va_arg(ap, int*); + val = 0; + while( N-- ){ + if( !isdigit(*(u8*)zDate) ){ + goto end_getDigits; + } + val = val*10 + *zDate - '0'; + zDate++; + } + if( val<min || val>max || (nextC!=0 && nextC!=*zDate) ){ + goto end_getDigits; + } + *pVal = val; + zDate++; + cnt++; + }while( nextC ); +end_getDigits: + va_end(ap); + return cnt; +} + +/* +** Read text from z[] and convert into a floating point number. Return +** the number of digits converted. +*/ +#define getValue sqlite3AtoF + +/* +** Parse a timezone extension on the end of a date-time.
+** The extension is of the form: +** +** (+/-)HH:MM +** +** Or the "zulu" notation: +** +** Z +** +** If the parse is successful, write the number of minutes +** of change in p->tz and return 0. If a parser error occurs, +** return non-zero. +** +** A missing specifier is not considered an error. +*/ +static int parseTimezone(const char *zDate, DateTime *p){ + int sgn = 0; + int nHr, nMn; + int c; + while( isspace(*(u8*)zDate) ){ zDate++; } + p->tz = 0; + c = *zDate; + if( c=='-' ){ + sgn = -1; + }else if( c=='+' ){ + sgn = +1; + }else if( c=='Z' || c=='z' ){ + zDate++; + goto zulu_time; + }else{ + return c!=0; + } + zDate++; + if( getDigits(zDate, 2, 0, 14, ':', &nHr, 2, 0, 59, 0, &nMn)!=2 ){ + return 1; + } + zDate += 5; + p->tz = sgn*(nMn + nHr*60); +zulu_time: + while( isspace(*(u8*)zDate) ){ zDate++; } + return *zDate!=0; +} + +/* +** Parse times of the form HH:MM or HH:MM:SS or HH:MM:SS.FFFF. +** The HH, MM, and SS must each be exactly 2 digits. The +** fractional seconds FFFF can be one or more digits. +** +** Return 1 if there is a parsing error and 0 on success. +*/ +static int parseHhMmSs(const char *zDate, DateTime *p){ + int h, m, s; + double ms = 0.0; + if( getDigits(zDate, 2, 0, 24, ':', &h, 2, 0, 59, 0, &m)!=2 ){ + return 1; + } + zDate += 5; + if( *zDate==':' ){ + zDate++; + if( getDigits(zDate, 2, 0, 59, 0, &s)!=1 ){ + return 1; + } + zDate += 2; + if( *zDate=='.' && isdigit((u8)zDate[1]) ){ + double rScale = 1.0; + zDate++; + while( isdigit(*(u8*)zDate) ){ + ms = ms*10.0 + *zDate - '0'; + rScale *= 10.0; + zDate++; + } + ms /= rScale; + } + }else{ + s = 0; + } + p->validJD = 0; + p->validHMS = 1; + p->h = h; + p->m = m; + p->s = s + ms; + if( parseTimezone(zDate, p) ) return 1; + p->validTZ = p->tz!=0; + return 0; +} + +/* +** Convert from YYYY-MM-DD HH:MM:SS to julian day. We always assume +** that the YYYY-MM-DD is according to the Gregorian calendar. +** +** Reference: Meeus page 61 +*/ +static void computeJD(DateTime *p){ + int Y, M, D, A, B, X1, X2; + + if( p->validJD ) return; + if( p->validYMD ){ + Y = p->Y; + M = p->M; + D = p->D; + }else{ + Y = 2000; /* If no YMD specified, assume 2000-Jan-01 */ + M = 1; + D = 1; + } + if( M<=2 ){ + Y--; + M += 12; + } + A = Y/100; + B = 2 - A + (A/4); + X1 = 365.25*(Y+4716); + X2 = 30.6001*(M+1); + p->rJD = X1 + X2 + D + B - 1524.5; + p->validJD = 1; + if( p->validHMS ){ + p->rJD += (p->h*3600.0 + p->m*60.0 + p->s)/86400.0; + if( p->validTZ ){ + p->rJD -= p->tz*60/86400.0; + p->validYMD = 0; + p->validHMS = 0; + p->validTZ = 0; + } + } +} + +/* +** Parse dates of the form +** +** YYYY-MM-DD HH:MM:SS.FFF +** YYYY-MM-DD HH:MM:SS +** YYYY-MM-DD HH:MM +** YYYY-MM-DD +** +** Write the result into the DateTime structure and return 0 +** on success and 1 if the input string is not a well-formed +** date. +*/ +static int parseYyyyMmDd(const char *zDate, DateTime *p){ + int Y, M, D, neg; + + if( zDate[0]=='-' ){ + zDate++; + neg = 1; + }else{ + neg = 0; + } + if( getDigits(zDate,4,0,9999,'-',&Y,2,1,12,'-',&M,2,1,31,0,&D)!=3 ){ + return 1; + } + zDate += 10; + while( isspace(*(u8*)zDate) || 'T'==*(u8*)zDate ){ zDate++; } + if( parseHhMmSs(zDate, p)==0 ){ + /* We got the time */ + }else if( *zDate==0 ){ + p->validHMS = 0; + }else{ + return 1; + } + p->validJD = 0; + p->validYMD = 1; + p->Y = neg ? -Y : Y; + p->M = M; + p->D = D; + if( p->validTZ ){ + computeJD(p); + } + return 0; +} + +/* +** Attempt to parse the given string into a Julian Day Number. Return +** the number of errors. 
+** +** The following are acceptable forms for the input string: +** +** YYYY-MM-DD HH:MM:SS.FFF +/-HH:MM +** DDDD.DD +** now +** +** In the first form, the +/-HH:MM is always optional. The fractional +** seconds extension (the ".FFF") is optional. The seconds portion +** (":SS.FFF") is option. The year and date can be omitted as long +** as there is a time string. The time string can be omitted as long +** as there is a year and date. +*/ +static int parseDateOrTime( + sqlite3_context *context, + const char *zDate, + DateTime *p +){ + memset(p, 0, sizeof(*p)); + if( parseYyyyMmDd(zDate,p)==0 ){ + return 0; + }else if( parseHhMmSs(zDate, p)==0 ){ + return 0; + }else if( sqlite3StrICmp(zDate,"now")==0){ + double r; + sqlite3OsCurrentTime((sqlite3_vfs *)sqlite3_user_data(context), &r); + p->rJD = r; + p->validJD = 1; + return 0; + }else if( sqlite3IsNumber(zDate, 0, SQLITE_UTF8) ){ + getValue(zDate, &p->rJD); + p->validJD = 1; + return 0; + } + return 1; +} + +/* +** Compute the Year, Month, and Day from the julian day number. +*/ +static void computeYMD(DateTime *p){ + int Z, A, B, C, D, E, X1; + if( p->validYMD ) return; + if( !p->validJD ){ + p->Y = 2000; + p->M = 1; + p->D = 1; + }else{ + Z = p->rJD + 0.5; + A = (Z - 1867216.25)/36524.25; + A = Z + 1 + A - (A/4); + B = A + 1524; + C = (B - 122.1)/365.25; + D = 365.25*C; + E = (B-D)/30.6001; + X1 = 30.6001*E; + p->D = B - D - X1; + p->M = E<14 ? E-1 : E-13; + p->Y = p->M>2 ? C - 4716 : C - 4715; + } + p->validYMD = 1; +} + +/* +** Compute the Hour, Minute, and Seconds from the julian day number. +*/ +static void computeHMS(DateTime *p){ + int Z, s; + if( p->validHMS ) return; + computeJD(p); + Z = p->rJD + 0.5; + s = (p->rJD + 0.5 - Z)*86400000.0 + 0.5; + p->s = 0.001*s; + s = p->s; + p->s -= s; + p->h = s/3600; + s -= p->h*3600; + p->m = s/60; + p->s += s - p->m*60; + p->validHMS = 1; +} + +/* +** Compute both YMD and HMS +*/ +static void computeYMD_HMS(DateTime *p){ + computeYMD(p); + computeHMS(p); +} + +/* +** Clear the YMD and HMS and the TZ +*/ +static void clearYMD_HMS_TZ(DateTime *p){ + p->validYMD = 0; + p->validHMS = 0; + p->validTZ = 0; +} + +/* +** Compute the difference (in days) between localtime and UTC (a.k.a. GMT) +** for the time value p where p is in UTC. +*/ +static double localtimeOffset(DateTime *p){ + DateTime x, y; + time_t t; + x = *p; + computeYMD_HMS(&x); + if( x.Y<1971 || x.Y>=2038 ){ + x.Y = 2000; + x.M = 1; + x.D = 1; + x.h = 0; + x.m = 0; + x.s = 0.0; + } else { + int s = x.s + 0.5; + x.s = s; + } + x.tz = 0; + x.validJD = 0; + computeJD(&x); + t = (x.rJD-2440587.5)*86400.0 + 0.5; +#ifdef HAVE_LOCALTIME_R + { + struct tm sLocal; + localtime_r(&t, &sLocal); + y.Y = sLocal.tm_year + 1900; + y.M = sLocal.tm_mon + 1; + y.D = sLocal.tm_mday; + y.h = sLocal.tm_hour; + y.m = sLocal.tm_min; + y.s = sLocal.tm_sec; + } +#else + { + struct tm *pTm; + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)); + pTm = localtime(&t); + y.Y = pTm->tm_year + 1900; + y.M = pTm->tm_mon + 1; + y.D = pTm->tm_mday; + y.h = pTm->tm_hour; + y.m = pTm->tm_min; + y.s = pTm->tm_sec; + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)); + } +#endif + y.validYMD = 1; + y.validHMS = 1; + y.validJD = 0; + y.validTZ = 0; + computeJD(&y); + return y.rJD - x.rJD; +} + +/* +** Process a modifier to a date-time stamp. 
The modifiers are +** as follows: +** +** NNN days +** NNN hours +** NNN minutes +** NNN.NNNN seconds +** NNN months +** NNN years +** start of month +** start of year +** start of week +** start of day +** weekday N +** unixepoch +** localtime +** utc +** +** Return 0 on success and 1 if there is any kind of error. +*/ +static int parseModifier(const char *zMod, DateTime *p){ + int rc = 1; + int n; + double r; + char *z, zBuf[30]; + z = zBuf; + for(n=0; nrJD += localtimeOffset(p); + clearYMD_HMS_TZ(p); + rc = 0; + } + break; + } + case 'u': { + /* + ** unixepoch + ** + ** Treat the current value of p->rJD as the number of + ** seconds since 1970. Convert to a real julian day number. + */ + if( strcmp(z, "unixepoch")==0 && p->validJD ){ + p->rJD = p->rJD/86400.0 + 2440587.5; + clearYMD_HMS_TZ(p); + rc = 0; + }else if( strcmp(z, "utc")==0 ){ + double c1; + computeJD(p); + c1 = localtimeOffset(p); + p->rJD -= c1; + clearYMD_HMS_TZ(p); + p->rJD += c1 - localtimeOffset(p); + rc = 0; + } + break; + } + case 'w': { + /* + ** weekday N + ** + ** Move the date to the same time on the next occurrence of + ** weekday N where 0==Sunday, 1==Monday, and so forth. If the + ** date is already on the appropriate weekday, this is a no-op. + */ + if( strncmp(z, "weekday ", 8)==0 && getValue(&z[8],&r)>0 + && (n=r)==r && n>=0 && r<7 ){ + int Z; + computeYMD_HMS(p); + p->validTZ = 0; + p->validJD = 0; + computeJD(p); + Z = p->rJD + 1.5; + Z %= 7; + if( Z>n ) Z -= 7; + p->rJD += n - Z; + clearYMD_HMS_TZ(p); + rc = 0; + } + break; + } + case 's': { + /* + ** start of TTTTT + ** + ** Move the date backwards to the beginning of the current day, + ** or month or year. + */ + if( strncmp(z, "start of ", 9)!=0 ) break; + z += 9; + computeYMD(p); + p->validHMS = 1; + p->h = p->m = 0; + p->s = 0.0; + p->validTZ = 0; + p->validJD = 0; + if( strcmp(z,"month")==0 ){ + p->D = 1; + rc = 0; + }else if( strcmp(z,"year")==0 ){ + computeYMD(p); + p->M = 1; + p->D = 1; + rc = 0; + }else if( strcmp(z,"day")==0 ){ + rc = 0; + } + break; + } + case '+': + case '-': + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + case '8': + case '9': { + n = getValue(z, &r); + assert( n>=1 ); + if( z[n]==':' ){ + /* A modifier of the form (+|-)HH:MM:SS.FFF adds (or subtracts) the + ** specified number of hours, minutes, seconds, and fractional seconds + ** to the time. The ".FFF" may be omitted. The ":SS.FFF" may be + ** omitted. + */ + const char *z2 = z; + DateTime tx; + int day; + if( !isdigit(*(u8*)z2) ) z2++; + memset(&tx, 0, sizeof(tx)); + if( parseHhMmSs(z2, &tx) ) break; + computeJD(&tx); + tx.rJD -= 0.5; + day = (int)tx.rJD; + tx.rJD -= day; + if( z[0]=='-' ) tx.rJD = -tx.rJD; + computeJD(p); + clearYMD_HMS_TZ(p); + p->rJD += tx.rJD; + rc = 0; + break; + } + z += n; + while( isspace(*(u8*)z) ) z++; + n = strlen(z); + if( n>10 || n<3 ) break; + if( z[n-1]=='s' ){ z[n-1] = 0; n--; } + computeJD(p); + rc = 0; + if( n==3 && strcmp(z,"day")==0 ){ + p->rJD += r; + }else if( n==4 && strcmp(z,"hour")==0 ){ + p->rJD += r/24.0; + }else if( n==6 && strcmp(z,"minute")==0 ){ + p->rJD += r/(24.0*60.0); + }else if( n==6 && strcmp(z,"second")==0 ){ + p->rJD += r/(24.0*60.0*60.0); + }else if( n==5 && strcmp(z,"month")==0 ){ + int x, y; + computeYMD_HMS(p); + p->M += r; + x = p->M>0 ? 
(p->M-1)/12 : (p->M-12)/12; + p->Y += x; + p->M -= x*12; + p->validJD = 0; + computeJD(p); + y = r; + if( y!=r ){ + p->rJD += (r - y)*30.0; + } + }else if( n==4 && strcmp(z,"year")==0 ){ + computeYMD_HMS(p); + p->Y += r; + p->validJD = 0; + computeJD(p); + }else{ + rc = 1; + } + clearYMD_HMS_TZ(p); + break; + } + default: { + break; + } + } + return rc; +} + +/* +** Process time function arguments. argv[0] is a date-time stamp. +** argv[1] and following are modifiers. Parse them all and write +** the resulting time into the DateTime structure p. Return 0 +** on success and 1 if there are any errors. +** +** If there are zero parameters (if even argv[0] is undefined) +** then assume a default value of "now" for argv[0]. +*/ +static int isDate( + sqlite3_context *context, + int argc, + sqlite3_value **argv, + DateTime *p +){ + int i; + const unsigned char *z; + static const unsigned char zDflt[] = "now"; + if( argc==0 ){ + z = zDflt; + }else{ + z = sqlite3_value_text(argv[0]); + } + if( !z || parseDateOrTime(context, (char*)z, p) ){ + return 1; + } + for(i=1; iSQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + return; + }else{ + z = sqlite3_malloc( n ); + if( z==0 ){ + sqlite3_result_error_nomem(context); + return; + } + } + computeJD(&x); + computeYMD_HMS(&x); + for(i=j=0; zFmt[i]; i++){ + if( zFmt[i]!='%' ){ + z[j++] = zFmt[i]; + }else{ + i++; + switch( zFmt[i] ){ + case 'd': sqlite3_snprintf(3, &z[j],"%02d",x.D); j+=2; break; + case 'f': { + double s = x.s; + if( s>59.999 ) s = 59.999; + sqlite3_snprintf(7, &z[j],"%06.3f", s); + j += strlen(&z[j]); + break; + } + case 'H': sqlite3_snprintf(3, &z[j],"%02d",x.h); j+=2; break; + case 'W': /* Fall thru */ + case 'j': { + int nDay; /* Number of days since 1st day of year */ + DateTime y = x; + y.validJD = 0; + y.M = 1; + y.D = 1; + computeJD(&y); + nDay = x.rJD - y.rJD + 0.5; + if( zFmt[i]=='W' ){ + int wd; /* 0=Monday, 1=Tuesday, ... 6=Sunday */ + wd = ((int)(x.rJD+0.5)) % 7; + sqlite3_snprintf(3, &z[j],"%02d",(nDay+7-wd)/7); + j += 2; + }else{ + sqlite3_snprintf(4, &z[j],"%03d",nDay+1); + j += 3; + } + break; + } + case 'J': { + sqlite3_snprintf(20, &z[j],"%.16g",x.rJD); + j+=strlen(&z[j]); + break; + } + case 'm': sqlite3_snprintf(3, &z[j],"%02d",x.M); j+=2; break; + case 'M': sqlite3_snprintf(3, &z[j],"%02d",x.m); j+=2; break; + case 's': { + sqlite3_snprintf(30,&z[j],"%d", + (int)((x.rJD-2440587.5)*86400.0 + 0.5)); + j += strlen(&z[j]); + break; + } + case 'S': sqlite3_snprintf(3,&z[j],"%02d",(int)x.s); j+=2; break; + case 'w': z[j++] = (((int)(x.rJD+1.5)) % 7) + '0'; break; + case 'Y': sqlite3_snprintf(5,&z[j],"%04d",x.Y); j+=strlen(&z[j]);break; + default: z[j++] = '%'; break; + } + } + } + z[j] = 0; + sqlite3_result_text(context, z, -1, + z==zBuf ? SQLITE_TRANSIENT : sqlite3_free); +} + +/* +** current_time() +** +** This function returns the same value as time('now'). +*/ +static void ctimeFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + timeFunc(context, 0, 0); +} + +/* +** current_date() +** +** This function returns the same value as date('now'). +*/ +static void cdateFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + dateFunc(context, 0, 0); +} + +/* +** current_timestamp() +** +** This function returns the same value as datetime('now'). 
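The modifiers handled by parseModifier() and the strftime() conversions documented above can be exercised directly through the public API. A short sketch, assuming an open handle db; the callback simply prints the first result column:

/* Sketch only: run the date/time functions described above via SQL. */
#include <stdio.h>
#include "sqlite3.h"

static int print0(void *p, int nCol, char **azVal, char **azCol){
  printf("%s\n", azVal[0] ? azVal[0] : "NULL");
  return 0;
}

static void dateDemo(sqlite3 *db){
  /* "start of month", "+1 month", "-1 day" => last day of the current month */
  sqlite3_exec(db,
      "SELECT datetime('now','start of month','+1 month','-1 day');",
      print0, 0, 0);
  /* %j (day of year) and %W (week of year) from the strftime() table above */
  sqlite3_exec(db, "SELECT strftime('%Y day %j week %W','now');", print0, 0, 0);
}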
+*/ +static void ctimestampFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + datetimeFunc(context, 0, 0); +} +#endif /* !defined(SQLITE_OMIT_DATETIME_FUNCS) */ + +#ifdef SQLITE_OMIT_DATETIME_FUNCS +/* +** If the library is compiled to omit the full-scale date and time +** handling (to get a smaller binary), the following minimal version +** of the functions current_time(), current_date() and current_timestamp() +** are included instead. This is to support column declarations that +** include "DEFAULT CURRENT_TIME" etc. +** +** This function uses the C-library functions time(), gmtime() +** and strftime(). The format string to pass to strftime() is supplied +** as the user-data for the function. +*/ +static void currentTimeFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + time_t t; + char *zFormat = (char *)sqlite3_user_data(context); + char zBuf[20]; + + time(&t); +#ifdef SQLITE_TEST + { + extern int sqlite3_current_time; /* See os_XXX.c */ + if( sqlite3_current_time ){ + t = sqlite3_current_time; + } + } +#endif + +#ifdef HAVE_GMTIME_R + { + struct tm sNow; + gmtime_r(&t, &sNow); + strftime(zBuf, 20, zFormat, &sNow); + } +#else + { + struct tm *pTm; + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)); + pTm = gmtime(&t); + strftime(zBuf, 20, zFormat, pTm); + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)); + } +#endif + + sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); +} +#endif + +/* +** This function registered all of the above C functions as SQL +** functions. This should be the only routine in this file with +** external linkage. +*/ +void sqlite3RegisterDateTimeFunctions(sqlite3 *db){ +#ifndef SQLITE_OMIT_DATETIME_FUNCS + static const struct { + char *zName; + int nArg; + void (*xFunc)(sqlite3_context*,int,sqlite3_value**); + } aFuncs[] = { + { "julianday", -1, juliandayFunc }, + { "date", -1, dateFunc }, + { "time", -1, timeFunc }, + { "datetime", -1, datetimeFunc }, + { "strftime", -1, strftimeFunc }, + { "current_time", 0, ctimeFunc }, + { "current_timestamp", 0, ctimestampFunc }, + { "current_date", 0, cdateFunc }, + }; + int i; + + for(i=0; ipVfs), aFuncs[i].xFunc, 0, 0); + } +#else + static const struct { + char *zName; + char *zFormat; + } aFuncs[] = { + { "current_time", "%H:%M:%S" }, + { "current_date", "%Y-%m-%d" }, + { "current_timestamp", "%Y-%m-%d %H:%M:%S" } + }; + int i; + + for(i=0; izErrMsg and return NULL. If all tables +** are found, return a pointer to the last table. +*/ +Table *sqlite3SrcListLookup(Parse *pParse, SrcList *pSrc){ + Table *pTab = 0; + int i; + struct SrcList_item *pItem; + for(i=0, pItem=pSrc->a; inSrc; i++, pItem++){ + pTab = sqlite3LocateTable(pParse, 0, pItem->zName, pItem->zDatabase); + sqlite3DeleteTable(pItem->pTab); + pItem->pTab = pTab; + if( pTab ){ + pTab->nRef++; + } + } + return pTab; +} + +/* +** Check to make sure the given table is writable. If it is not +** writable, generate an error message and return 1. 
If it is +** writable return 0; +*/ +int sqlite3IsReadOnly(Parse *pParse, Table *pTab, int viewOk){ + if( (pTab->readOnly && (pParse->db->flags & SQLITE_WriteSchema)==0 + && pParse->nested==0) +#ifndef SQLITE_OMIT_VIRTUALTABLE + || (pTab->pMod && pTab->pMod->pModule->xUpdate==0) +#endif + ){ + sqlite3ErrorMsg(pParse, "table %s may not be modified", pTab->zName); + return 1; + } +#ifndef SQLITE_OMIT_VIEW + if( !viewOk && pTab->pSelect ){ + sqlite3ErrorMsg(pParse,"cannot modify %s because it is a view",pTab->zName); + return 1; + } +#endif + return 0; +} + +/* +** Generate code that will open a table for reading. +*/ +void sqlite3OpenTable( + Parse *p, /* Generate code into this VDBE */ + int iCur, /* The cursor number of the table */ + int iDb, /* The database index in sqlite3.aDb[] */ + Table *pTab, /* The table to be opened */ + int opcode /* OP_OpenRead or OP_OpenWrite */ +){ + Vdbe *v; + if( IsVirtual(pTab) ) return; + v = sqlite3GetVdbe(p); + assert( opcode==OP_OpenWrite || opcode==OP_OpenRead ); + sqlite3TableLock(p, iDb, pTab->tnum, (opcode==OP_OpenWrite), pTab->zName); + sqlite3VdbeAddOp3(v, opcode, iCur, pTab->tnum, iDb); + VdbeComment((v, "%s", pTab->zName)); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, iCur, pTab->nCol); +} + + +#if !defined(SQLITE_OMIT_VIEW) && !defined(SQLITE_OMIT_TRIGGER) +/* +** Evaluate a view and store its result in an ephemeral table. The +** pWhere argument is an optional WHERE clause that restricts the +** set of rows in the view that are to be added to the ephemeral table. +*/ +void sqlite3MaterializeView( + Parse *pParse, /* Parsing context */ + Select *pView, /* View definition */ + Expr *pWhere, /* Optional WHERE clause to be added */ + u32 col_mask, /* Render only the columns in this mask. */ + int iCur /* Cursor number for ephemerial table */ +){ + SelectDest dest; + Select *pDup; + sqlite3 *db = pParse->db; + + pDup = sqlite3SelectDup(db, pView); + if( pWhere ){ + SrcList *pFrom; + + pWhere = sqlite3ExprDup(db, pWhere); + pFrom = sqlite3SrcListAppendFromTerm(pParse, 0, 0, 0, 0, pDup, 0, 0); + pDup = sqlite3SelectNew(pParse, 0, pFrom, pWhere, 0, 0, 0, 0, 0, 0); + } + sqlite3SelectMask(pParse, pDup, col_mask); + sqlite3SelectDestInit(&dest, SRT_EphemTab, iCur); + sqlite3Select(pParse, pDup, &dest, 0, 0, 0, 0); + sqlite3SelectDelete(pDup); +} +#endif /* !defined(SQLITE_OMIT_VIEW) && !defined(SQLITE_OMIT_TRIGGER) */ + + +/* +** Generate code for a DELETE FROM statement. +** +** DELETE FROM table_wxyz WHERE a<5 AND b NOT NULL; +** \________/ \________________/ +** pTabList pWhere +*/ +void sqlite3DeleteFrom( + Parse *pParse, /* The parser context */ + SrcList *pTabList, /* The table from which we should delete things */ + Expr *pWhere /* The WHERE clause. 
May be null */ +){ + Vdbe *v; /* The virtual database engine */ + Table *pTab; /* The table from which records will be deleted */ + const char *zDb; /* Name of database holding pTab */ + int end, addr = 0; /* A couple addresses of generated code */ + int i; /* Loop counter */ + WhereInfo *pWInfo; /* Information about the WHERE clause */ + Index *pIdx; /* For looping over indices of the table */ + int iCur; /* VDBE Cursor number for pTab */ + sqlite3 *db; /* Main database structure */ + AuthContext sContext; /* Authorization context */ + int oldIdx = -1; /* Cursor for the OLD table of AFTER triggers */ + NameContext sNC; /* Name context to resolve expressions in */ + int iDb; /* Database number */ + int memCnt = 0; /* Memory cell used for change counting */ + +#ifndef SQLITE_OMIT_TRIGGER + int isView; /* True if attempting to delete from a view */ + int triggers_exist = 0; /* True if any triggers exist */ +#endif + int iBeginAfterTrigger; /* Address of after trigger program */ + int iEndAfterTrigger; /* Exit of after trigger program */ + int iBeginBeforeTrigger; /* Address of before trigger program */ + int iEndBeforeTrigger; /* Exit of before trigger program */ + u32 old_col_mask = 0; /* Mask of OLD.* columns in use */ + + sContext.pParse = 0; + db = pParse->db; + if( pParse->nErr || db->mallocFailed ){ + goto delete_from_cleanup; + } + assert( pTabList->nSrc==1 ); + + /* Locate the table which we want to delete. This table has to be + ** put in an SrcList structure because some of the subroutines we + ** will be calling are designed to work with multiple tables and expect + ** an SrcList* parameter instead of just a Table* parameter. + */ + pTab = sqlite3SrcListLookup(pParse, pTabList); + if( pTab==0 ) goto delete_from_cleanup; + + /* Figure out if we have any triggers and if the table being + ** deleted from is a view + */ +#ifndef SQLITE_OMIT_TRIGGER + triggers_exist = sqlite3TriggersExist(pParse, pTab, TK_DELETE, 0); + isView = pTab->pSelect!=0; +#else +# define triggers_exist 0 +# define isView 0 +#endif +#ifdef SQLITE_OMIT_VIEW +# undef isView +# define isView 0 +#endif + + if( sqlite3IsReadOnly(pParse, pTab, triggers_exist) ){ + goto delete_from_cleanup; + } + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + assert( iDbnDb ); + zDb = db->aDb[iDb].zName; + if( sqlite3AuthCheck(pParse, SQLITE_DELETE, pTab->zName, 0, zDb) ){ + goto delete_from_cleanup; + } + + /* If pTab is really a view, make sure it has been initialized. + */ + if( sqlite3ViewGetColumnNames(pParse, pTab) ){ + goto delete_from_cleanup; + } + + /* Allocate a cursor used to store the old.* data for a trigger. + */ + if( triggers_exist ){ + oldIdx = pParse->nTab++; + } + + /* Assign cursor number to the table and all its indices. + */ + assert( pTabList->nSrc==1 ); + iCur = pTabList->a[0].iCursor = pParse->nTab++; + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + pParse->nTab++; + } + + /* Start the view context + */ + if( isView ){ + sqlite3AuthContextPush(pParse, &sContext, pTab->zName); + } + + /* Begin generating code. 
+ */ + v = sqlite3GetVdbe(pParse); + if( v==0 ){ + goto delete_from_cleanup; + } + if( pParse->nested==0 ) sqlite3VdbeCountChanges(v); + sqlite3BeginWriteOperation(pParse, triggers_exist, iDb); + + if( triggers_exist ){ + int orconf = ((pParse->trigStack)?pParse->trigStack->orconf:OE_Default); + int iGoto = sqlite3VdbeAddOp0(v, OP_Goto); + addr = sqlite3VdbeMakeLabel(v); + + iBeginBeforeTrigger = sqlite3VdbeCurrentAddr(v); + (void)sqlite3CodeRowTrigger(pParse, TK_DELETE, 0, TRIGGER_BEFORE, pTab, + -1, oldIdx, orconf, addr, &old_col_mask, 0); + iEndBeforeTrigger = sqlite3VdbeAddOp0(v, OP_Goto); + + iBeginAfterTrigger = sqlite3VdbeCurrentAddr(v); + (void)sqlite3CodeRowTrigger(pParse, TK_DELETE, 0, TRIGGER_AFTER, pTab, -1, + oldIdx, orconf, addr, &old_col_mask, 0); + iEndAfterTrigger = sqlite3VdbeAddOp0(v, OP_Goto); + + sqlite3VdbeJumpHere(v, iGoto); + } + + /* If we are trying to delete from a view, realize that view into + ** a ephemeral table. + */ + if( isView ){ + sqlite3MaterializeView(pParse, pTab->pSelect, pWhere, old_col_mask, iCur); + } + + /* Resolve the column names in the WHERE clause. + */ + memset(&sNC, 0, sizeof(sNC)); + sNC.pParse = pParse; + sNC.pSrcList = pTabList; + if( sqlite3ExprResolveNames(&sNC, pWhere) ){ + goto delete_from_cleanup; + } + + /* Initialize the counter of the number of rows deleted, if + ** we are counting rows. + */ + if( db->flags & SQLITE_CountRows ){ + memCnt = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Integer, 0, memCnt); + } + + /* Special case: A DELETE without a WHERE clause deletes everything. + ** It is easier just to erase the whole table. Note, however, that + ** this means that the row change count will be incorrect. + */ + if( pWhere==0 && !triggers_exist && !IsVirtual(pTab) ){ + if( db->flags & SQLITE_CountRows ){ + /* If counting rows deleted, just count the total number of + ** entries in the table. */ + int addr2; + if( !isView ){ + sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenRead); + } + sqlite3VdbeAddOp2(v, OP_Rewind, iCur, sqlite3VdbeCurrentAddr(v)+2); + addr2 = sqlite3VdbeAddOp2(v, OP_AddImm, memCnt, 1); + sqlite3VdbeAddOp2(v, OP_Next, iCur, addr2); + sqlite3VdbeAddOp1(v, OP_Close, iCur); + } + if( !isView ){ + sqlite3VdbeAddOp2(v, OP_Clear, pTab->tnum, iDb); + if( !pParse->nested ){ + sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_STATIC); + } + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + assert( pIdx->pSchema==pTab->pSchema ); + sqlite3VdbeAddOp2(v, OP_Clear, pIdx->tnum, iDb); + } + } + } + /* The usual case: There is a WHERE clause so we have to scan through + ** the table and pick which records to delete. + */ + else{ + int iRowid = ++pParse->nMem; /* Used for storing rowid values. */ + + /* Begin the database scan + */ + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0, 0); + if( pWInfo==0 ) goto delete_from_cleanup; + + /* Remember the rowid of every item to be deleted. + */ + sqlite3VdbeAddOp2(v, IsVirtual(pTab) ? OP_VRowid : OP_Rowid, iCur, iRowid); + sqlite3VdbeAddOp1(v, OP_FifoWrite, iRowid); + if( db->flags & SQLITE_CountRows ){ + sqlite3VdbeAddOp2(v, OP_AddImm, memCnt, 1); + } + + /* End the database scan loop. + */ + sqlite3WhereEnd(pWInfo); + + /* Open the pseudo-table used to store OLD if there are triggers. + */ + if( triggers_exist ){ + sqlite3VdbeAddOp1(v, OP_OpenPseudo, oldIdx); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, oldIdx, pTab->nCol); + } + + /* Delete every item whose key was written to the list during the + ** database scan. 
We have to delete items after the scan is complete + ** because deleting an item can change the scan order. + */ + end = sqlite3VdbeMakeLabel(v); + + if( !isView ){ + /* Open cursors for the table we are deleting from and + ** all its indices. + */ + sqlite3OpenTableAndIndices(pParse, pTab, iCur, OP_OpenWrite); + } + + /* This is the beginning of the delete loop. If a trigger encounters + ** an IGNORE constraint, it jumps back to here. + */ + if( triggers_exist ){ + sqlite3VdbeResolveLabel(v, addr); + } + addr = sqlite3VdbeAddOp2(v, OP_FifoRead, iRowid, end); + + if( triggers_exist ){ + int iData = ++pParse->nMem; /* For storing row data of OLD table */ + + /* If the record is no longer present in the table, jump to the + ** next iteration of the loop through the contents of the fifo. + */ + sqlite3VdbeAddOp3(v, OP_NotExists, iCur, addr, iRowid); + + /* Populate the OLD.* pseudo-table */ + if( old_col_mask ){ + sqlite3VdbeAddOp2(v, OP_RowData, iCur, iData); + }else{ + sqlite3VdbeAddOp2(v, OP_Null, 0, iData); + } + sqlite3VdbeAddOp3(v, OP_Insert, oldIdx, iData, iRowid); + + /* Jump back and run the BEFORE triggers */ + sqlite3VdbeAddOp2(v, OP_Goto, 0, iBeginBeforeTrigger); + sqlite3VdbeJumpHere(v, iEndBeforeTrigger); + } + + if( !isView ){ + /* Delete the row */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pTab) ){ + const char *pVtab = (const char *)pTab->pVtab; + pParse->pVirtualLock = pTab; + sqlite3VdbeAddOp4(v, OP_VUpdate, 0, 1, iRowid, pVtab, P4_VTAB); + }else +#endif + { + sqlite3GenerateRowDelete(pParse, pTab, iCur, iRowid, pParse->nested==0); + } + } + + /* If there are row triggers, close all cursors then invoke + ** the AFTER triggers + */ + if( triggers_exist ){ + /* Jump back and run the AFTER triggers */ + sqlite3VdbeAddOp2(v, OP_Goto, 0, iBeginAfterTrigger); + sqlite3VdbeJumpHere(v, iEndAfterTrigger); + } + + /* End of the delete loop */ + sqlite3VdbeAddOp2(v, OP_Goto, 0, addr); + sqlite3VdbeResolveLabel(v, end); + + /* Close the cursors after the loop if there are no row triggers */ + if( !isView && !IsVirtual(pTab) ){ + for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){ + sqlite3VdbeAddOp2(v, OP_Close, iCur + i, pIdx->tnum); + } + sqlite3VdbeAddOp1(v, OP_Close, iCur); + } + } + + /* + ** Return the number of rows that were deleted. If this routine is + ** generating code because of a call to sqlite3NestedParse(), do not + ** invoke the callback function. + */ + if( db->flags & SQLITE_CountRows && pParse->nested==0 && !pParse->trigStack ){ + sqlite3VdbeAddOp2(v, OP_ResultRow, memCnt, 1); + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows deleted", P4_STATIC); + } + +delete_from_cleanup: + sqlite3AuthContextPop(&sContext); + sqlite3SrcListDelete(pTabList); + sqlite3ExprDelete(pWhere); + return; +} + +/* +** This routine generates VDBE code that causes a single row of a +** single table to be deleted. +** +** The VDBE must be in a particular state when this routine is called. +** These are the requirements: +** +** 1. A read/write cursor pointing to pTab, the table containing the row +** to be deleted, must be opened as cursor number "base". +** +** 2. Read/write cursors for all indices of pTab must be open as +** cursor number base+i for the i-th index. +** +** 3. The record number of the row to be deleted must be stored in +** memory cell iRowid. +** +** This routine pops the top of the stack to remove the record number +** and then generates code to remove both the table record and all index +** entries that point to that record. 
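The special case noted earlier in sqlite3DeleteFrom() (a DELETE with no WHERE clause clears the whole table with OP_Clear instead of deleting row by row) has a visible consequence at the API level. A small sketch, assuming a table t1 already exists:

/* Sketch only: on the wholesale-clear path the comment above warns that the
** row change count is not maintained, so the value reported afterwards may
** not reflect the number of rows actually removed. */
#include <stdio.h>
#include "sqlite3.h"

static void truncateDemo(sqlite3 *db){
  sqlite3_exec(db, "DELETE FROM t1;", 0, 0, 0);       /* no WHERE clause */
  printf("sqlite3_changes() = %d\n", sqlite3_changes(db));
  /* A trivial WHERE clause, e.g. "DELETE FROM t1 WHERE 1;", forces the
  ** scan-and-delete path and keeps the change counter accurate. */
}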
+*/ +void sqlite3GenerateRowDelete( + Parse *pParse, /* Parsing context */ + Table *pTab, /* Table containing the row to be deleted */ + int iCur, /* Cursor number for the table */ + int iRowid, /* Memory cell that contains the rowid to delete */ + int count /* Increment the row change counter */ +){ + int addr; + Vdbe *v; + + v = pParse->pVdbe; + addr = sqlite3VdbeAddOp3(v, OP_NotExists, iCur, 0, iRowid); + sqlite3GenerateRowIndexDelete(pParse, pTab, iCur, 0); + sqlite3VdbeAddOp2(v, OP_Delete, iCur, (count?OPFLAG_NCHANGE:0)); + if( count ){ + sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_STATIC); + } + sqlite3VdbeJumpHere(v, addr); +} + +/* +** This routine generates VDBE code that causes the deletion of all +** index entries associated with a single row of a single table. +** +** The VDBE must be in a particular state when this routine is called. +** These are the requirements: +** +** 1. A read/write cursor pointing to pTab, the table containing the row +** to be deleted, must be opened as cursor number "iCur". +** +** 2. Read/write cursors for all indices of pTab must be open as +** cursor number iCur+i for the i-th index. +** +** 3. The "iCur" cursor must be pointing to the row that is to be +** deleted. +*/ +void sqlite3GenerateRowIndexDelete( + Parse *pParse, /* Parsing and code generating context */ + Table *pTab, /* Table containing the row to be deleted */ + int iCur, /* Cursor number for the table */ + int *aRegIdx /* Only delete if aRegIdx!=0 && aRegIdx[i]>0 */ +){ + int i; + Index *pIdx; + int r1; + + r1 = sqlite3GetTempReg(pParse); + for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){ + if( aRegIdx!=0 && aRegIdx[i-1]==0 ) continue; + sqlite3GenerateIndexKey(pParse, pIdx, iCur, r1); + sqlite3VdbeAddOp2(pParse->pVdbe, OP_IdxDelete, iCur+i, r1); + } + sqlite3ReleaseTempReg(pParse, r1); +} + +/* +** Generate code that will assemble an index key and put it on the top +** of the tack. The key with be for index pIdx which is an index on pTab. +** iCur is the index of a cursor open on the pTab table and pointing to +** the entry that needs indexing. +** +** Return a register number which is the first in a block of +** registers that holds the elements of the index key. The +** block of registers has already been deallocated by the time +** this routine returns. +*/ +int sqlite3GenerateIndexKey( + Parse *pParse, /* Parsing context */ + Index *pIdx, /* The index for which to generate a key */ + int iCur, /* Cursor number for the pIdx->pTable table */ + int regOut /* Write the new index key to this register */ +){ + Vdbe *v = pParse->pVdbe; + int j; + Table *pTab = pIdx->pTable; + int regBase; + int nCol; + + nCol = pIdx->nColumn; + regBase = sqlite3GetTempRange(pParse, nCol+1); + sqlite3VdbeAddOp2(v, OP_Rowid, iCur, regBase+nCol); + for(j=0; jaiColumn[j]; + if( idx==pTab->iPKey ){ + sqlite3VdbeAddOp2(v, OP_SCopy, regBase+nCol, regBase+j); + }else{ + sqlite3VdbeAddOp3(v, OP_Column, iCur, idx, regBase+j); + sqlite3ColumnDefault(v, pTab, idx); + } + } + sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nCol+1, regOut); + sqlite3IndexAffinityStr(v, pIdx); + sqlite3ReleaseTempRange(pParse, regBase, nCol+1); + return regBase; +} Added: external/sqlite-source-3.5.7.x/expr.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/expr.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,2983 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. 
In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains routines used for analyzing expressions and +** for generating VDBE code that evaluates expressions in SQLite. +** +** $Id: expr.c,v 1.354 2008/03/12 10:39:00 danielk1977 Exp $ +*/ +#include "sqliteInt.h" +#include + +/* +** Return the 'affinity' of the expression pExpr if any. +** +** If pExpr is a column, a reference to a column via an 'AS' alias, +** or a sub-select with a column as the return value, then the +** affinity of that column is returned. Otherwise, 0x00 is returned, +** indicating no affinity for the expression. +** +** i.e. the WHERE clause expresssions in the following statements all +** have an affinity: +** +** CREATE TABLE t1(a); +** SELECT * FROM t1 WHERE a; +** SELECT a AS b FROM t1 WHERE b; +** SELECT * FROM t1 WHERE (select a from t1); +*/ +char sqlite3ExprAffinity(Expr *pExpr){ + int op = pExpr->op; + if( op==TK_SELECT ){ + return sqlite3ExprAffinity(pExpr->pSelect->pEList->a[0].pExpr); + } +#ifndef SQLITE_OMIT_CAST + if( op==TK_CAST ){ + return sqlite3AffinityType(&pExpr->token); + } +#endif + return pExpr->affinity; +} + +/* +** Set the collating sequence for expression pExpr to be the collating +** sequence named by pToken. Return a pointer to the revised expression. +** The collating sequence is marked as "explicit" using the EP_ExpCollate +** flag. An explicit collating sequence will override implicit +** collating sequences. +*/ +Expr *sqlite3ExprSetColl(Parse *pParse, Expr *pExpr, Token *pName){ + char *zColl = 0; /* Dequoted name of collation sequence */ + CollSeq *pColl; + zColl = sqlite3NameFromToken(pParse->db, pName); + if( pExpr && zColl ){ + pColl = sqlite3LocateCollSeq(pParse, zColl, -1); + if( pColl ){ + pExpr->pColl = pColl; + pExpr->flags |= EP_ExpCollate; + } + } + sqlite3_free(zColl); + return pExpr; +} + +/* +** Return the default collation sequence for the expression pExpr. If +** there is no default collation type, return 0. +*/ +CollSeq *sqlite3ExprCollSeq(Parse *pParse, Expr *pExpr){ + CollSeq *pColl = 0; + if( pExpr ){ + int op; + pColl = pExpr->pColl; + op = pExpr->op; + if( (op==TK_CAST || op==TK_UPLUS) && !pColl ){ + return sqlite3ExprCollSeq(pParse, pExpr->pLeft); + } + } + if( sqlite3CheckCollSeq(pParse, pColl) ){ + pColl = 0; + } + return pColl; +} + +/* +** pExpr is an operand of a comparison operator. aff2 is the +** type affinity of the other operand. This routine returns the +** type affinity that should be used for the comparison operator. +*/ +char sqlite3CompareAffinity(Expr *pExpr, char aff2){ + char aff1 = sqlite3ExprAffinity(pExpr); + if( aff1 && aff2 ){ + /* Both sides of the comparison are columns. If one has numeric + ** affinity, use that. Otherwise use no affinity. + */ + if( sqlite3IsNumericAffinity(aff1) || sqlite3IsNumericAffinity(aff2) ){ + return SQLITE_AFF_NUMERIC; + }else{ + return SQLITE_AFF_NONE; + } + }else if( !aff1 && !aff2 ){ + /* Neither side of the comparison is a column. Compare the + ** results directly. + */ + return SQLITE_AFF_NONE; + }else{ + /* One side is a column, the other is not. Use the columns affinity. */ + assert( aff1==0 || aff2==0 ); + return (aff1 + aff2); + } +} + +/* +** pExpr is a comparison operator. 
Return the type affinity that should +** be applied to both operands prior to doing the comparison. +*/ +static char comparisonAffinity(Expr *pExpr){ + char aff; + assert( pExpr->op==TK_EQ || pExpr->op==TK_IN || pExpr->op==TK_LT || + pExpr->op==TK_GT || pExpr->op==TK_GE || pExpr->op==TK_LE || + pExpr->op==TK_NE ); + assert( pExpr->pLeft ); + aff = sqlite3ExprAffinity(pExpr->pLeft); + if( pExpr->pRight ){ + aff = sqlite3CompareAffinity(pExpr->pRight, aff); + } + else if( pExpr->pSelect ){ + aff = sqlite3CompareAffinity(pExpr->pSelect->pEList->a[0].pExpr, aff); + } + else if( !aff ){ + aff = SQLITE_AFF_NONE; + } + return aff; +} + +/* +** pExpr is a comparison expression, eg. '=', '<', IN(...) etc. +** idx_affinity is the affinity of an indexed column. Return true +** if the index with affinity idx_affinity may be used to implement +** the comparison in pExpr. +*/ +int sqlite3IndexAffinityOk(Expr *pExpr, char idx_affinity){ + char aff = comparisonAffinity(pExpr); + switch( aff ){ + case SQLITE_AFF_NONE: + return 1; + case SQLITE_AFF_TEXT: + return idx_affinity==SQLITE_AFF_TEXT; + default: + return sqlite3IsNumericAffinity(idx_affinity); + } +} + +/* +** Return the P5 value that should be used for a binary comparison +** opcode (OP_Eq, OP_Ge etc.) used to compare pExpr1 and pExpr2. +*/ +static u8 binaryCompareP5(Expr *pExpr1, Expr *pExpr2, int jumpIfNull){ + u8 aff = (char)sqlite3ExprAffinity(pExpr2); + aff = sqlite3CompareAffinity(pExpr1, aff) | jumpIfNull; + return aff; +} + +/* +** Return a pointer to the collation sequence that should be used by +** a binary comparison operator comparing pLeft and pRight. +** +** If the left hand expression has a collating sequence type, then it is +** used. Otherwise the collation sequence for the right hand expression +** is used, or the default (BINARY) if neither expression has a collating +** type. +** +** Argument pRight (but not pLeft) may be a null pointer. In this case, +** it is not considered. +*/ +CollSeq *sqlite3BinaryCompareCollSeq( + Parse *pParse, + Expr *pLeft, + Expr *pRight +){ + CollSeq *pColl; + assert( pLeft ); + if( pLeft->flags & EP_ExpCollate ){ + assert( pLeft->pColl ); + pColl = pLeft->pColl; + }else if( pRight && pRight->flags & EP_ExpCollate ){ + assert( pRight->pColl ); + pColl = pRight->pColl; + }else{ + pColl = sqlite3ExprCollSeq(pParse, pLeft); + if( !pColl ){ + pColl = sqlite3ExprCollSeq(pParse, pRight); + } + } + return pColl; +} + +/* +** Generate code for a comparison operator. +*/ +static int codeCompare( + Parse *pParse, /* The parsing (and code generating) context */ + Expr *pLeft, /* The left operand */ + Expr *pRight, /* The right operand */ + int opcode, /* The comparison opcode */ + int in1, int in2, /* Register holding operands */ + int dest, /* Jump here if true. */ + int jumpIfNull /* If true, jump if either operand is NULL */ +){ + int p5; + int addr; + CollSeq *p4; + + p4 = sqlite3BinaryCompareCollSeq(pParse, pLeft, pRight); + p5 = binaryCompareP5(pLeft, pRight, jumpIfNull); + addr = sqlite3VdbeAddOp4(pParse->pVdbe, opcode, in2, dest, in1, + (void*)p4, P4_COLLSEQ); + sqlite3VdbeChangeP5(pParse->pVdbe, p5); + return addr; +} + +/* +** Construct a new expression node and return a pointer to it. Memory +** for this node is obtained from sqlite3_malloc(). The calling function +** is responsible for making sure the node eventually gets freed. 
+*/ +Expr *sqlite3Expr( + sqlite3 *db, /* Handle for sqlite3DbMallocZero() (may be null) */ + int op, /* Expression opcode */ + Expr *pLeft, /* Left operand */ + Expr *pRight, /* Right operand */ + const Token *pToken /* Argument token */ +){ + Expr *pNew; + pNew = sqlite3DbMallocZero(db, sizeof(Expr)); + if( pNew==0 ){ + /* When malloc fails, delete pLeft and pRight. Expressions passed to + ** this function must always be allocated with sqlite3Expr() for this + ** reason. + */ + sqlite3ExprDelete(pLeft); + sqlite3ExprDelete(pRight); + return 0; + } + pNew->op = op; + pNew->pLeft = pLeft; + pNew->pRight = pRight; + pNew->iAgg = -1; + if( pToken ){ + assert( pToken->dyn==0 ); + pNew->span = pNew->token = *pToken; + }else if( pLeft ){ + if( pRight ){ + sqlite3ExprSpan(pNew, &pLeft->span, &pRight->span); + if( pRight->flags & EP_ExpCollate ){ + pNew->flags |= EP_ExpCollate; + pNew->pColl = pRight->pColl; + } + } + if( pLeft->flags & EP_ExpCollate ){ + pNew->flags |= EP_ExpCollate; + pNew->pColl = pLeft->pColl; + } + } + + sqlite3ExprSetHeight(pNew); + return pNew; +} + +/* +** Works like sqlite3Expr() except that it takes an extra Parse* +** argument and notifies the associated connection object if malloc fails. +*/ +Expr *sqlite3PExpr( + Parse *pParse, /* Parsing context */ + int op, /* Expression opcode */ + Expr *pLeft, /* Left operand */ + Expr *pRight, /* Right operand */ + const Token *pToken /* Argument token */ +){ + return sqlite3Expr(pParse->db, op, pLeft, pRight, pToken); +} + +/* +** When doing a nested parse, you can include terms in an expression +** that look like this: #1 #2 ... These terms refer to registers +** in the virtual machine. #N is the N-th register. +** +** This routine is called by the parser to deal with on of those terms. +** It immediately generates code to store the value in a memory location. +** The returns an expression that will code to extract the value from +** that memory location as needed. +*/ +Expr *sqlite3RegisterExpr(Parse *pParse, Token *pToken){ + Vdbe *v = pParse->pVdbe; + Expr *p; + if( pParse->nested==0 ){ + sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", pToken); + return sqlite3PExpr(pParse, TK_NULL, 0, 0, 0); + } + if( v==0 ) return 0; + p = sqlite3PExpr(pParse, TK_REGISTER, 0, 0, pToken); + if( p==0 ){ + return 0; /* Malloc failed */ + } + p->iTable = atoi((char*)&pToken->z[1]); + return p; +} + +/* +** Join two expressions using an AND operator. If either expression is +** NULL, then just return the other expression. +*/ +Expr *sqlite3ExprAnd(sqlite3 *db, Expr *pLeft, Expr *pRight){ + if( pLeft==0 ){ + return pRight; + }else if( pRight==0 ){ + return pLeft; + }else{ + return sqlite3Expr(db, TK_AND, pLeft, pRight, 0); + } +} + +/* +** Set the Expr.span field of the given expression to span all +** text between the two given tokens. +*/ +void sqlite3ExprSpan(Expr *pExpr, Token *pLeft, Token *pRight){ + assert( pRight!=0 ); + assert( pLeft!=0 ); + if( pExpr && pRight->z && pLeft->z ){ + assert( pLeft->dyn==0 || pLeft->z[pLeft->n]==0 ); + if( pLeft->dyn==0 && pRight->dyn==0 ){ + pExpr->span.z = pLeft->z; + pExpr->span.n = pRight->n + (pRight->z - pLeft->z); + }else{ + pExpr->span.z = 0; + } + } +} + +/* +** Construct a new expression node for a function with multiple +** arguments. 
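/*
** A minimal standalone sketch of the NULL-tolerant conjunction rule used
** by sqlite3ExprAnd() above; it is not part of expr.c.  demo_and_node and
** demo_and() are invented names; the point is only that a NULL operand
** yields the other operand unchanged, so callers can accumulate optional
** WHERE terms in a loop without special cases.
*/
#include <stddef.h>
typedef struct demo_and_node {
  int op;                            /* 'A' stands in for TK_AND */
  struct demo_and_node *pLeft;
  struct demo_and_node *pRight;
} demo_and_node;

static demo_and_node *demo_and(
  demo_and_node *pNode,              /* Preallocated node used for the AND */
  demo_and_node *pLeft,
  demo_and_node *pRight
){
  if( pLeft==0 ) return pRight;
  if( pRight==0 ) return pLeft;
  pNode->op = 'A';
  pNode->pLeft = pLeft;
  pNode->pRight = pRight;
  return pNode;
}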
+*/ +Expr *sqlite3ExprFunction(Parse *pParse, ExprList *pList, Token *pToken){ + Expr *pNew; + assert( pToken ); + pNew = sqlite3DbMallocZero(pParse->db, sizeof(Expr) ); + if( pNew==0 ){ + sqlite3ExprListDelete(pList); /* Avoid leaking memory when malloc fails */ + return 0; + } + pNew->op = TK_FUNCTION; + pNew->pList = pList; + assert( pToken->dyn==0 ); + pNew->token = *pToken; + pNew->span = pNew->token; + + sqlite3ExprSetHeight(pNew); + return pNew; +} + +/* +** Assign a variable number to an expression that encodes a wildcard +** in the original SQL statement. +** +** Wildcards consisting of a single "?" are assigned the next sequential +** variable number. +** +** Wildcards of the form "?nnn" are assigned the number "nnn". We make +** sure "nnn" is not too be to avoid a denial of service attack when +** the SQL statement comes from an external source. +** +** Wildcards of the form ":aaa" or "$aaa" are assigned the same number +** as the previous instance of the same wildcard. Or if this is the first +** instance of the wildcard, the next sequenial variable number is +** assigned. +*/ +void sqlite3ExprAssignVarNumber(Parse *pParse, Expr *pExpr){ + Token *pToken; + sqlite3 *db = pParse->db; + + if( pExpr==0 ) return; + pToken = &pExpr->token; + assert( pToken->n>=1 ); + assert( pToken->z!=0 ); + assert( pToken->z[0]!=0 ); + if( pToken->n==1 ){ + /* Wildcard of the form "?". Assign the next variable number */ + pExpr->iTable = ++pParse->nVar; + }else if( pToken->z[0]=='?' ){ + /* Wildcard of the form "?nnn". Convert "nnn" to an integer and + ** use it as the variable number */ + int i; + pExpr->iTable = i = atoi((char*)&pToken->z[1]); + if( i<1 || i>SQLITE_MAX_VARIABLE_NUMBER ){ + sqlite3ErrorMsg(pParse, "variable number must be between ?1 and ?%d", + SQLITE_MAX_VARIABLE_NUMBER); + } + if( i>pParse->nVar ){ + pParse->nVar = i; + } + }else{ + /* Wildcards of the form ":aaa" or "$aaa". Reuse the same variable + ** number as the prior appearance of the same name, or if the name + ** has never appeared before, reuse the same variable number + */ + int i, n; + n = pToken->n; + for(i=0; inVarExpr; i++){ + Expr *pE; + if( (pE = pParse->apVarExpr[i])!=0 + && pE->token.n==n + && memcmp(pE->token.z, pToken->z, n)==0 ){ + pExpr->iTable = pE->iTable; + break; + } + } + if( i>=pParse->nVarExpr ){ + pExpr->iTable = ++pParse->nVar; + if( pParse->nVarExpr>=pParse->nVarExprAlloc-1 ){ + pParse->nVarExprAlloc += pParse->nVarExprAlloc + 10; + pParse->apVarExpr = + sqlite3DbReallocOrFree( + db, + pParse->apVarExpr, + pParse->nVarExprAlloc*sizeof(pParse->apVarExpr[0]) + ); + } + if( !db->mallocFailed ){ + assert( pParse->apVarExpr!=0 ); + pParse->apVarExpr[pParse->nVarExpr++] = pExpr; + } + } + } + if( !pParse->nErr && pParse->nVar>SQLITE_MAX_VARIABLE_NUMBER ){ + sqlite3ErrorMsg(pParse, "too many SQL variables"); + } +} + +/* +** Recursively delete an expression tree. +*/ +void sqlite3ExprDelete(Expr *p){ + if( p==0 ) return; + if( p->span.dyn ) sqlite3_free((char*)p->span.z); + if( p->token.dyn ) sqlite3_free((char*)p->token.z); + sqlite3ExprDelete(p->pLeft); + sqlite3ExprDelete(p->pRight); + sqlite3ExprListDelete(p->pList); + sqlite3SelectDelete(p->pSelect); + sqlite3_free(p); +} + +/* +** The Expr.token field might be a string literal that is quoted. +** If so, remove the quotation marks. 
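/*
** A sketch of how the numbering performed by sqlite3ExprAssignVarNumber()
** above is visible through the public API; it is not part of expr.c.  It
** assumes pStmt was prepared from:
**
**     SELECT * FROM t WHERE a=? AND b=?5 AND c=:name AND d=:name;
**
** "?" takes the next sequential number (1), "?5" is forced to 5, and both
** ":name" occurrences share the next unused number (6).  The expected
** values below follow from that rule; they are not captured output.
*/
#include <sqlite3.h>
static void demo_show_params(sqlite3_stmt *pStmt){
  int nParam = sqlite3_bind_parameter_count(pStmt);          /* expect 6 */
  int iName = sqlite3_bind_parameter_index(pStmt, ":name");  /* expect 6 */
  (void)nParam;
  (void)iName;
}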
+*/ +void sqlite3DequoteExpr(sqlite3 *db, Expr *p){ + if( ExprHasAnyProperty(p, EP_Dequoted) ){ + return; + } + ExprSetProperty(p, EP_Dequoted); + if( p->token.dyn==0 ){ + sqlite3TokenCopy(db, &p->token, &p->token); + } + sqlite3Dequote((char*)p->token.z); +} + + +/* +** The following group of routines make deep copies of expressions, +** expression lists, ID lists, and select statements. The copies can +** be deleted (by being passed to their respective ...Delete() routines) +** without effecting the originals. +** +** The expression list, ID, and source lists return by sqlite3ExprListDup(), +** sqlite3IdListDup(), and sqlite3SrcListDup() can not be further expanded +** by subsequent calls to sqlite*ListAppend() routines. +** +** Any tables that the SrcList might point to are not duplicated. +*/ +Expr *sqlite3ExprDup(sqlite3 *db, Expr *p){ + Expr *pNew; + if( p==0 ) return 0; + pNew = sqlite3DbMallocRaw(db, sizeof(*p) ); + if( pNew==0 ) return 0; + memcpy(pNew, p, sizeof(*pNew)); + if( p->token.z!=0 ){ + pNew->token.z = (u8*)sqlite3DbStrNDup(db, (char*)p->token.z, p->token.n); + pNew->token.dyn = 1; + }else{ + assert( pNew->token.z==0 ); + } + pNew->span.z = 0; + pNew->pLeft = sqlite3ExprDup(db, p->pLeft); + pNew->pRight = sqlite3ExprDup(db, p->pRight); + pNew->pList = sqlite3ExprListDup(db, p->pList); + pNew->pSelect = sqlite3SelectDup(db, p->pSelect); + return pNew; +} +void sqlite3TokenCopy(sqlite3 *db, Token *pTo, Token *pFrom){ + if( pTo->dyn ) sqlite3_free((char*)pTo->z); + if( pFrom->z ){ + pTo->n = pFrom->n; + pTo->z = (u8*)sqlite3DbStrNDup(db, (char*)pFrom->z, pFrom->n); + pTo->dyn = 1; + }else{ + pTo->z = 0; + } +} +ExprList *sqlite3ExprListDup(sqlite3 *db, ExprList *p){ + ExprList *pNew; + struct ExprList_item *pItem, *pOldItem; + int i; + if( p==0 ) return 0; + pNew = sqlite3DbMallocRaw(db, sizeof(*pNew) ); + if( pNew==0 ) return 0; + pNew->iECursor = 0; + pNew->nExpr = pNew->nAlloc = p->nExpr; + pNew->a = pItem = sqlite3DbMallocRaw(db, p->nExpr*sizeof(p->a[0]) ); + if( pItem==0 ){ + sqlite3_free(pNew); + return 0; + } + pOldItem = p->a; + for(i=0; inExpr; i++, pItem++, pOldItem++){ + Expr *pNewExpr, *pOldExpr; + pItem->pExpr = pNewExpr = sqlite3ExprDup(db, pOldExpr = pOldItem->pExpr); + if( pOldExpr->span.z!=0 && pNewExpr ){ + /* Always make a copy of the span for top-level expressions in the + ** expression list. The logic in SELECT processing that determines + ** the names of columns in the result set needs this information */ + sqlite3TokenCopy(db, &pNewExpr->span, &pOldExpr->span); + } + assert( pNewExpr==0 || pNewExpr->span.z!=0 + || pOldExpr->span.z==0 + || db->mallocFailed ); + pItem->zName = sqlite3DbStrDup(db, pOldItem->zName); + pItem->sortOrder = pOldItem->sortOrder; + pItem->isAgg = pOldItem->isAgg; + pItem->done = 0; + } + return pNew; +} + +/* +** If cursors, triggers, views and subqueries are all omitted from +** the build, then none of the following routines, except for +** sqlite3SelectDup(), can be called. sqlite3SelectDup() is sometimes +** called with a NULL argument. +*/ +#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER) \ + || !defined(SQLITE_OMIT_SUBQUERY) +SrcList *sqlite3SrcListDup(sqlite3 *db, SrcList *p){ + SrcList *pNew; + int i; + int nByte; + if( p==0 ) return 0; + nByte = sizeof(*p) + (p->nSrc>0 ? 
sizeof(p->a[0]) * (p->nSrc-1) : 0); + pNew = sqlite3DbMallocRaw(db, nByte ); + if( pNew==0 ) return 0; + pNew->nSrc = pNew->nAlloc = p->nSrc; + for(i=0; inSrc; i++){ + struct SrcList_item *pNewItem = &pNew->a[i]; + struct SrcList_item *pOldItem = &p->a[i]; + Table *pTab; + pNewItem->zDatabase = sqlite3DbStrDup(db, pOldItem->zDatabase); + pNewItem->zName = sqlite3DbStrDup(db, pOldItem->zName); + pNewItem->zAlias = sqlite3DbStrDup(db, pOldItem->zAlias); + pNewItem->jointype = pOldItem->jointype; + pNewItem->iCursor = pOldItem->iCursor; + pNewItem->isPopulated = pOldItem->isPopulated; + pTab = pNewItem->pTab = pOldItem->pTab; + if( pTab ){ + pTab->nRef++; + } + pNewItem->pSelect = sqlite3SelectDup(db, pOldItem->pSelect); + pNewItem->pOn = sqlite3ExprDup(db, pOldItem->pOn); + pNewItem->pUsing = sqlite3IdListDup(db, pOldItem->pUsing); + pNewItem->colUsed = pOldItem->colUsed; + } + return pNew; +} +IdList *sqlite3IdListDup(sqlite3 *db, IdList *p){ + IdList *pNew; + int i; + if( p==0 ) return 0; + pNew = sqlite3DbMallocRaw(db, sizeof(*pNew) ); + if( pNew==0 ) return 0; + pNew->nId = pNew->nAlloc = p->nId; + pNew->a = sqlite3DbMallocRaw(db, p->nId*sizeof(p->a[0]) ); + if( pNew->a==0 ){ + sqlite3_free(pNew); + return 0; + } + for(i=0; inId; i++){ + struct IdList_item *pNewItem = &pNew->a[i]; + struct IdList_item *pOldItem = &p->a[i]; + pNewItem->zName = sqlite3DbStrDup(db, pOldItem->zName); + pNewItem->idx = pOldItem->idx; + } + return pNew; +} +Select *sqlite3SelectDup(sqlite3 *db, Select *p){ + Select *pNew; + if( p==0 ) return 0; + pNew = sqlite3DbMallocRaw(db, sizeof(*p) ); + if( pNew==0 ) return 0; + pNew->isDistinct = p->isDistinct; + pNew->pEList = sqlite3ExprListDup(db, p->pEList); + pNew->pSrc = sqlite3SrcListDup(db, p->pSrc); + pNew->pWhere = sqlite3ExprDup(db, p->pWhere); + pNew->pGroupBy = sqlite3ExprListDup(db, p->pGroupBy); + pNew->pHaving = sqlite3ExprDup(db, p->pHaving); + pNew->pOrderBy = sqlite3ExprListDup(db, p->pOrderBy); + pNew->op = p->op; + pNew->pPrior = sqlite3SelectDup(db, p->pPrior); + pNew->pLimit = sqlite3ExprDup(db, p->pLimit); + pNew->pOffset = sqlite3ExprDup(db, p->pOffset); + pNew->iLimit = -1; + pNew->iOffset = -1; + pNew->isResolved = p->isResolved; + pNew->isAgg = p->isAgg; + pNew->usesEphm = 0; + pNew->disallowOrderBy = 0; + pNew->pRightmost = 0; + pNew->addrOpenEphm[0] = -1; + pNew->addrOpenEphm[1] = -1; + pNew->addrOpenEphm[2] = -1; + return pNew; +} +#else +Select *sqlite3SelectDup(sqlite3 *db, Select *p){ + assert( p==0 ); + return 0; +} +#endif + + +/* +** Add a new element to the end of an expression list. If pList is +** initially NULL, then create a new expression list. +*/ +ExprList *sqlite3ExprListAppend( + Parse *pParse, /* Parsing context */ + ExprList *pList, /* List to which to append. 
Might be NULL */ + Expr *pExpr, /* Expression to be appended */ + Token *pName /* AS keyword for the expression */ +){ + sqlite3 *db = pParse->db; + if( pList==0 ){ + pList = sqlite3DbMallocZero(db, sizeof(ExprList) ); + if( pList==0 ){ + goto no_mem; + } + assert( pList->nAlloc==0 ); + } + if( pList->nAlloc<=pList->nExpr ){ + struct ExprList_item *a; + int n = pList->nAlloc*2 + 4; + a = sqlite3DbRealloc(db, pList->a, n*sizeof(pList->a[0])); + if( a==0 ){ + goto no_mem; + } + pList->a = a; + pList->nAlloc = n; + } + assert( pList->a!=0 ); + if( pExpr || pName ){ + struct ExprList_item *pItem = &pList->a[pList->nExpr++]; + memset(pItem, 0, sizeof(*pItem)); + pItem->zName = sqlite3NameFromToken(db, pName); + pItem->pExpr = pExpr; + } + return pList; + +no_mem: + /* Avoid leaking memory if malloc has failed. */ + sqlite3ExprDelete(pExpr); + sqlite3ExprListDelete(pList); + return 0; +} + +/* +** If the expression list pEList contains more than iLimit elements, +** leave an error message in pParse. +*/ +void sqlite3ExprListCheckLength( + Parse *pParse, + ExprList *pEList, + int iLimit, + const char *zObject +){ + if( pEList && pEList->nExpr>iLimit ){ + sqlite3ErrorMsg(pParse, "too many columns in %s", zObject); + } +} + + +#if defined(SQLITE_TEST) || SQLITE_MAX_EXPR_DEPTH>0 +/* The following three functions, heightOfExpr(), heightOfExprList() +** and heightOfSelect(), are used to determine the maximum height +** of any expression tree referenced by the structure passed as the +** first argument. +** +** If this maximum height is greater than the current value pointed +** to by pnHeight, the second parameter, then set *pnHeight to that +** value. +*/ +static void heightOfExpr(Expr *p, int *pnHeight){ + if( p ){ + if( p->nHeight>*pnHeight ){ + *pnHeight = p->nHeight; + } + } +} +static void heightOfExprList(ExprList *p, int *pnHeight){ + if( p ){ + int i; + for(i=0; inExpr; i++){ + heightOfExpr(p->a[i].pExpr, pnHeight); + } + } +} +static void heightOfSelect(Select *p, int *pnHeight){ + if( p ){ + heightOfExpr(p->pWhere, pnHeight); + heightOfExpr(p->pHaving, pnHeight); + heightOfExpr(p->pLimit, pnHeight); + heightOfExpr(p->pOffset, pnHeight); + heightOfExprList(p->pEList, pnHeight); + heightOfExprList(p->pGroupBy, pnHeight); + heightOfExprList(p->pOrderBy, pnHeight); + heightOfSelect(p->pPrior, pnHeight); + } +} + +/* +** Set the Expr.nHeight variable in the structure passed as an +** argument. An expression with no children, Expr.pList or +** Expr.pSelect member has a height of 1. Any other expression +** has a height equal to the maximum height of any other +** referenced Expr plus one. +*/ +void sqlite3ExprSetHeight(Expr *p){ + int nHeight = 0; + heightOfExpr(p->pLeft, &nHeight); + heightOfExpr(p->pRight, &nHeight); + heightOfExprList(p->pList, &nHeight); + heightOfSelect(p->pSelect, &nHeight); + p->nHeight = nHeight + 1; +} + +/* +** Return the maximum height of any expression tree referenced +** by the select statement passed as an argument. +*/ +int sqlite3SelectExprHeight(Select *p){ + int nHeight = 0; + heightOfSelect(p, &nHeight); + return nHeight; +} +#endif + +/* +** Delete an entire expression list. 
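/*
** A minimal standalone sketch of the growth and cleanup-on-OOM pattern
** used by sqlite3ExprListAppend() above; it is not part of expr.c.  The
** demo_ilist type and plain realloc() stand in for ExprList and the
** sqlite3DbRealloc() allocator.
*/
#include <stdlib.h>
typedef struct demo_ilist { int *a; int nUsed; int nAlloc; } demo_ilist;

static int demo_ilist_append(demo_ilist *p, int v){
  if( p->nAlloc<=p->nUsed ){
    int nNew = p->nAlloc*2 + 4;          /* Same doubling rule as above */
    int *aNew = realloc(p->a, nNew*sizeof(int));
    if( aNew==0 ) return 1;              /* Caller releases the whole list */
    p->a = aNew;
    p->nAlloc = nNew;
  }
  p->a[p->nUsed++] = v;
  return 0;
}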
+*/ +void sqlite3ExprListDelete(ExprList *pList){ + int i; + struct ExprList_item *pItem; + if( pList==0 ) return; + assert( pList->a!=0 || (pList->nExpr==0 && pList->nAlloc==0) ); + assert( pList->nExpr<=pList->nAlloc ); + for(pItem=pList->a, i=0; inExpr; i++, pItem++){ + sqlite3ExprDelete(pItem->pExpr); + sqlite3_free(pItem->zName); + } + sqlite3_free(pList->a); + sqlite3_free(pList); +} + +/* +** Walk an expression tree. Call xFunc for each node visited. +** +** The return value from xFunc determines whether the tree walk continues. +** 0 means continue walking the tree. 1 means do not walk children +** of the current node but continue with siblings. 2 means abandon +** the tree walk completely. +** +** The return value from this routine is 1 to abandon the tree walk +** and 0 to continue. +** +** NOTICE: This routine does *not* descend into subqueries. +*/ +static int walkExprList(ExprList *, int (*)(void *, Expr*), void *); +static int walkExprTree(Expr *pExpr, int (*xFunc)(void*,Expr*), void *pArg){ + int rc; + if( pExpr==0 ) return 0; + rc = (*xFunc)(pArg, pExpr); + if( rc==0 ){ + if( walkExprTree(pExpr->pLeft, xFunc, pArg) ) return 1; + if( walkExprTree(pExpr->pRight, xFunc, pArg) ) return 1; + if( walkExprList(pExpr->pList, xFunc, pArg) ) return 1; + } + return rc>1; +} + +/* +** Call walkExprTree() for every expression in list p. +*/ +static int walkExprList(ExprList *p, int (*xFunc)(void *, Expr*), void *pArg){ + int i; + struct ExprList_item *pItem; + if( !p ) return 0; + for(i=p->nExpr, pItem=p->a; i>0; i--, pItem++){ + if( walkExprTree(pItem->pExpr, xFunc, pArg) ) return 1; + } + return 0; +} + +/* +** Call walkExprTree() for every expression in Select p, not including +** expressions that are part of sub-selects in any FROM clause or the LIMIT +** or OFFSET expressions.. +*/ +static int walkSelectExpr(Select *p, int (*xFunc)(void *, Expr*), void *pArg){ + walkExprList(p->pEList, xFunc, pArg); + walkExprTree(p->pWhere, xFunc, pArg); + walkExprList(p->pGroupBy, xFunc, pArg); + walkExprTree(p->pHaving, xFunc, pArg); + walkExprList(p->pOrderBy, xFunc, pArg); + if( p->pPrior ){ + walkSelectExpr(p->pPrior, xFunc, pArg); + } + return 0; +} + + +/* +** This routine is designed as an xFunc for walkExprTree(). +** +** pArg is really a pointer to an integer. If we can tell by looking +** at pExpr that the expression that contains pExpr is not a constant +** expression, then set *pArg to 0 and return 2 to abandon the tree walk. +** If pExpr does does not disqualify the expression from being a constant +** then do nothing. +** +** After walking the whole tree, if no nodes are found that disqualify +** the expression as constant, then we assume the whole expression +** is constant. See sqlite3ExprIsConstant() for additional information. +*/ +static int exprNodeIsConstant(void *pArg, Expr *pExpr){ + int *pN = (int*)pArg; + + /* If *pArg is 3 then any term of the expression that comes from + ** the ON or USING clauses of a join disqualifies the expression + ** from being considered constant. 
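/*
** A minimal standalone sketch of the 0/1/2 callback protocol that
** walkExprTree() above relies on; it is not part of expr.c.  demo_tnode
** and demo_walk() are invented names.  A callback return of 0 continues
** into the children, 1 skips the children, and 2 abandons the whole walk.
*/
#include <stddef.h>
typedef struct demo_tnode {
  int op;
  struct demo_tnode *pLeft;
  struct demo_tnode *pRight;
} demo_tnode;

static int demo_walk(demo_tnode *p, int (*xFunc)(void*,demo_tnode*), void *pArg){
  int rc;
  if( p==0 ) return 0;
  rc = xFunc(pArg, p);
  if( rc==0 ){
    if( demo_walk(p->pLeft, xFunc, pArg) ) return 1;
    if( demo_walk(p->pRight, xFunc, pArg) ) return 1;
  }
  return rc>1;            /* Only a return of 2 aborts the entire walk */
}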
*/ + if( (*pN)==3 && ExprHasAnyProperty(pExpr, EP_FromJoin) ){ + *pN = 0; + return 2; + } + + switch( pExpr->op ){ + /* Consider functions to be constant if all their arguments are constant + ** and *pArg==2 */ + case TK_FUNCTION: + if( (*pN)==2 ) return 0; + /* Fall through */ + case TK_ID: + case TK_COLUMN: + case TK_DOT: + case TK_AGG_FUNCTION: + case TK_AGG_COLUMN: +#ifndef SQLITE_OMIT_SUBQUERY + case TK_SELECT: + case TK_EXISTS: +#endif + *pN = 0; + return 2; + case TK_IN: + if( pExpr->pSelect ){ + *pN = 0; + return 2; + } + default: + return 0; + } +} + +/* +** Walk an expression tree. Return 1 if the expression is constant +** and 0 if it involves variables or function calls. +** +** For the purposes of this function, a double-quoted string (ex: "abc") +** is considered a variable but a single-quoted string (ex: 'abc') is +** a constant. +*/ +int sqlite3ExprIsConstant(Expr *p){ + int isConst = 1; + walkExprTree(p, exprNodeIsConstant, &isConst); + return isConst; +} + +/* +** Walk an expression tree. Return 1 if the expression is constant +** that does no originate from the ON or USING clauses of a join. +** Return 0 if it involves variables or function calls or terms from +** an ON or USING clause. +*/ +int sqlite3ExprIsConstantNotJoin(Expr *p){ + int isConst = 3; + walkExprTree(p, exprNodeIsConstant, &isConst); + return isConst!=0; +} + +/* +** Walk an expression tree. Return 1 if the expression is constant +** or a function call with constant arguments. Return and 0 if there +** are any variables. +** +** For the purposes of this function, a double-quoted string (ex: "abc") +** is considered a variable but a single-quoted string (ex: 'abc') is +** a constant. +*/ +int sqlite3ExprIsConstantOrFunction(Expr *p){ + int isConst = 2; + walkExprTree(p, exprNodeIsConstant, &isConst); + return isConst!=0; +} + +/* +** If the expression p codes a constant integer that is small enough +** to fit in a 32-bit integer, return 1 and put the value of the integer +** in *pValue. If the expression is not an integer or if it is too big +** to fit in a signed 32-bit integer, return 0 and leave *pValue unchanged. +*/ +int sqlite3ExprIsInteger(Expr *p, int *pValue){ + switch( p->op ){ + case TK_INTEGER: { + if( sqlite3GetInt32((char*)p->token.z, pValue) ){ + return 1; + } + break; + } + case TK_UPLUS: { + return sqlite3ExprIsInteger(p->pLeft, pValue); + } + case TK_UMINUS: { + int v; + if( sqlite3ExprIsInteger(p->pLeft, &v) ){ + *pValue = -v; + return 1; + } + break; + } + default: break; + } + return 0; +} + +/* +** Return TRUE if the given string is a row-id column name. +*/ +int sqlite3IsRowid(const char *z){ + if( sqlite3StrICmp(z, "_ROWID_")==0 ) return 1; + if( sqlite3StrICmp(z, "ROWID")==0 ) return 1; + if( sqlite3StrICmp(z, "OID")==0 ) return 1; + return 0; +} + +/* +** Given the name of a column of the form X.Y.Z or Y.Z or just Z, look up +** that name in the set of source tables in pSrcList and make the pExpr +** expression node refer back to that source column. The following changes +** are made to pExpr: +** +** pExpr->iDb Set the index in db->aDb[] of the database holding +** the table. +** pExpr->iTable Set to the cursor number for the table obtained +** from pSrcList. +** pExpr->iColumn Set to the column number within the table. +** pExpr->op Set to TK_COLUMN. +** pExpr->pLeft Any expression this points to is deleted +** pExpr->pRight Any expression this points to is deleted. +** +** The pDbToken is the name of the database (the "X"). 
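/*
** A minimal standalone sketch of the unary +/- folding performed by
** sqlite3ExprIsInteger() above; it is not part of expr.c.  demo_iexpr and
** demo_is_integer() are invented; the real routine reads the token text
** with sqlite3GetInt32() instead of storing a ready-made value.
*/
typedef struct demo_iexpr {
  int op;                      /* 'i' literal, '+' unary plus, '-' unary minus */
  int value;                   /* Literal value when op=='i' */
  struct demo_iexpr *pLeft;    /* Operand when op is '+' or '-' */
} demo_iexpr;

static int demo_is_integer(const demo_iexpr *p, int *pValue){
  switch( p->op ){
    case 'i':  *pValue = p->value;  return 1;
    case '+':  return demo_is_integer(p->pLeft, pValue);
    case '-': {
      int v;
      if( demo_is_integer(p->pLeft, &v) ){ *pValue = -v; return 1; }
      return 0;
    }
    default:   return 0;        /* Anything else is not a constant integer */
  }
}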
This value may be +** NULL meaning that name is of the form Y.Z or Z. Any available database +** can be used. The pTableToken is the name of the table (the "Y"). This +** value can be NULL if pDbToken is also NULL. If pTableToken is NULL it +** means that the form of the name is Z and that columns from any table +** can be used. +** +** If the name cannot be resolved unambiguously, leave an error message +** in pParse and return non-zero. Return zero on success. +*/ +static int lookupName( + Parse *pParse, /* The parsing context */ + Token *pDbToken, /* Name of the database containing table, or NULL */ + Token *pTableToken, /* Name of table containing column, or NULL */ + Token *pColumnToken, /* Name of the column. */ + NameContext *pNC, /* The name context used to resolve the name */ + Expr *pExpr /* Make this EXPR node point to the selected column */ +){ + char *zDb = 0; /* Name of the database. The "X" in X.Y.Z */ + char *zTab = 0; /* Name of the table. The "Y" in X.Y.Z or Y.Z */ + char *zCol = 0; /* Name of the column. The "Z" */ + int i, j; /* Loop counters */ + int cnt = 0; /* Number of matching column names */ + int cntTab = 0; /* Number of matching table names */ + sqlite3 *db = pParse->db; /* The database */ + struct SrcList_item *pItem; /* Use for looping over pSrcList items */ + struct SrcList_item *pMatch = 0; /* The matching pSrcList item */ + NameContext *pTopNC = pNC; /* First namecontext in the list */ + Schema *pSchema = 0; /* Schema of the expression */ + + assert( pColumnToken && pColumnToken->z ); /* The Z in X.Y.Z cannot be NULL */ + zDb = sqlite3NameFromToken(db, pDbToken); + zTab = sqlite3NameFromToken(db, pTableToken); + zCol = sqlite3NameFromToken(db, pColumnToken); + if( db->mallocFailed ){ + goto lookupname_end; + } + + pExpr->iTable = -1; + while( pNC && cnt==0 ){ + ExprList *pEList; + SrcList *pSrcList = pNC->pSrcList; + + if( pSrcList ){ + for(i=0, pItem=pSrcList->a; inSrc; i++, pItem++){ + Table *pTab; + int iDb; + Column *pCol; + + pTab = pItem->pTab; + assert( pTab!=0 ); + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + assert( pTab->nCol>0 ); + if( zTab ){ + if( pItem->zAlias ){ + char *zTabName = pItem->zAlias; + if( sqlite3StrICmp(zTabName, zTab)!=0 ) continue; + }else{ + char *zTabName = pTab->zName; + if( zTabName==0 || sqlite3StrICmp(zTabName, zTab)!=0 ) continue; + if( zDb!=0 && sqlite3StrICmp(db->aDb[iDb].zName, zDb)!=0 ){ + continue; + } + } + } + if( 0==(cntTab++) ){ + pExpr->iTable = pItem->iCursor; + pSchema = pTab->pSchema; + pMatch = pItem; + } + for(j=0, pCol=pTab->aCol; jnCol; j++, pCol++){ + if( sqlite3StrICmp(pCol->zName, zCol)==0 ){ + const char *zColl = pTab->aCol[j].zColl; + IdList *pUsing; + cnt++; + pExpr->iTable = pItem->iCursor; + pMatch = pItem; + pSchema = pTab->pSchema; + /* Substitute the rowid (column -1) for the INTEGER PRIMARY KEY */ + pExpr->iColumn = j==pTab->iPKey ? -1 : j; + pExpr->affinity = pTab->aCol[j].affinity; + if( (pExpr->flags & EP_ExpCollate)==0 ){ + pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0); + } + if( inSrc-1 ){ + if( pItem[1].jointype & JT_NATURAL ){ + /* If this match occurred in the left table of a natural join, + ** then skip the right table to avoid a duplicate match */ + pItem++; + i++; + }else if( (pUsing = pItem[1].pUsing)!=0 ){ + /* If this match occurs on a column that is in the USING clause + ** of a join, skip the search of the right table of the join + ** to avoid a duplicate match there. 
*/ + int k; + for(k=0; knId; k++){ + if( sqlite3StrICmp(pUsing->a[k].zName, zCol)==0 ){ + pItem++; + i++; + break; + } + } + } + } + break; + } + } + } + } + +#ifndef SQLITE_OMIT_TRIGGER + /* If we have not already resolved the name, then maybe + ** it is a new.* or old.* trigger argument reference + */ + if( zDb==0 && zTab!=0 && cnt==0 && pParse->trigStack!=0 ){ + TriggerStack *pTriggerStack = pParse->trigStack; + Table *pTab = 0; + u32 *piColMask; + if( pTriggerStack->newIdx != -1 && sqlite3StrICmp("new", zTab) == 0 ){ + pExpr->iTable = pTriggerStack->newIdx; + assert( pTriggerStack->pTab ); + pTab = pTriggerStack->pTab; + piColMask = &(pTriggerStack->newColMask); + }else if( pTriggerStack->oldIdx != -1 && sqlite3StrICmp("old", zTab)==0 ){ + pExpr->iTable = pTriggerStack->oldIdx; + assert( pTriggerStack->pTab ); + pTab = pTriggerStack->pTab; + piColMask = &(pTriggerStack->oldColMask); + } + + if( pTab ){ + int iCol; + Column *pCol = pTab->aCol; + + pSchema = pTab->pSchema; + cntTab++; + for(iCol=0; iCol < pTab->nCol; iCol++, pCol++) { + if( sqlite3StrICmp(pCol->zName, zCol)==0 ){ + const char *zColl = pTab->aCol[iCol].zColl; + cnt++; + pExpr->iColumn = iCol==pTab->iPKey ? -1 : iCol; + pExpr->affinity = pTab->aCol[iCol].affinity; + if( (pExpr->flags & EP_ExpCollate)==0 ){ + pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0); + } + pExpr->pTab = pTab; + if( iCol>=0 ){ + *piColMask |= ((u32)1<=32?0xffffffff:0); + } + break; + } + } + } + } +#endif /* !defined(SQLITE_OMIT_TRIGGER) */ + + /* + ** Perhaps the name is a reference to the ROWID + */ + if( cnt==0 && cntTab==1 && sqlite3IsRowid(zCol) ){ + cnt = 1; + pExpr->iColumn = -1; + pExpr->affinity = SQLITE_AFF_INTEGER; + } + + /* + ** If the input is of the form Z (not Y.Z or X.Y.Z) then the name Z + ** might refer to an result-set alias. This happens, for example, when + ** we are resolving names in the WHERE clause of the following command: + ** + ** SELECT a+b AS x FROM table WHERE x<10; + ** + ** In cases like this, replace pExpr with a copy of the expression that + ** forms the result set entry ("a+b" in the example) and return immediately. + ** Note that the expression in the result set should have already been + ** resolved by the time the WHERE clause is resolved. + */ + if( cnt==0 && (pEList = pNC->pEList)!=0 && zTab==0 ){ + for(j=0; jnExpr; j++){ + char *zAs = pEList->a[j].zName; + if( zAs!=0 && sqlite3StrICmp(zAs, zCol)==0 ){ + Expr *pDup, *pOrig; + assert( pExpr->pLeft==0 && pExpr->pRight==0 ); + assert( pExpr->pList==0 ); + assert( pExpr->pSelect==0 ); + pOrig = pEList->a[j].pExpr; + if( !pNC->allowAgg && ExprHasProperty(pOrig, EP_Agg) ){ + sqlite3ErrorMsg(pParse, "misuse of aliased aggregate %s", zAs); + sqlite3_free(zCol); + return 2; + } + pDup = sqlite3ExprDup(db, pOrig); + if( pExpr->flags & EP_ExpCollate ){ + pDup->pColl = pExpr->pColl; + pDup->flags |= EP_ExpCollate; + } + if( pExpr->span.dyn ) sqlite3_free((char*)pExpr->span.z); + if( pExpr->token.dyn ) sqlite3_free((char*)pExpr->token.z); + memcpy(pExpr, pDup, sizeof(*pExpr)); + sqlite3_free(pDup); + cnt = 1; + pMatch = 0; + assert( zTab==0 && zDb==0 ); + goto lookupname_end_2; + } + } + } + + /* Advance to the next name context. The loop will exit when either + ** we have a match (cnt>0) or when we run out of name contexts. 
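/*
** A sketch of the resolution behaviour implemented by lookupName() above,
** observed through the public API; it is not part of expr.c.  The schema
** is invented and the expected outcomes are inferred from the code, not
** captured from a run.
*/
#include <sqlite3.h>
static int demo_name_resolution(void){
  sqlite3 *db;
  int rc = sqlite3_open(":memory:", &db);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_exec(db, "CREATE TABLE t1(a,b); CREATE TABLE t2(b,c);", 0, 0, 0);

  /* "b" matches a column in both t1 and t2, so cnt>1 in lookupName and
  ** the statement is expected to fail with "ambiguous column name: b". */
  rc = sqlite3_exec(db, "SELECT b FROM t1, t2;", 0, 0, 0);

  /* With NATURAL JOIN the duplicate match in the right-hand table is
  ** skipped, so the same reference is expected to resolve cleanly. */
  rc = sqlite3_exec(db, "SELECT b FROM t1 NATURAL JOIN t2;", 0, 0, 0);

  sqlite3_close(db);
  return rc;
}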
+ */ + if( cnt==0 ){ + pNC = pNC->pNext; + } + } + + /* + ** If X and Y are NULL (in other words if only the column name Z is + ** supplied) and the value of Z is enclosed in double-quotes, then + ** Z is a string literal if it doesn't match any column names. In that + ** case, we need to return right away and not make any changes to + ** pExpr. + ** + ** Because no reference was made to outer contexts, the pNC->nRef + ** fields are not changed in any context. + */ + if( cnt==0 && zTab==0 && pColumnToken->z[0]=='"' ){ + sqlite3_free(zCol); + return 0; + } + + /* + ** cnt==0 means there was not match. cnt>1 means there were two or + ** more matches. Either way, we have an error. + */ + if( cnt!=1 ){ + const char *zErr; + zErr = cnt==0 ? "no such column" : "ambiguous column name"; + if( zDb ){ + sqlite3ErrorMsg(pParse, "%s: %s.%s.%s", zErr, zDb, zTab, zCol); + }else if( zTab ){ + sqlite3ErrorMsg(pParse, "%s: %s.%s", zErr, zTab, zCol); + }else{ + sqlite3ErrorMsg(pParse, "%s: %s", zErr, zCol); + } + pTopNC->nErr++; + } + + /* If a column from a table in pSrcList is referenced, then record + ** this fact in the pSrcList.a[].colUsed bitmask. Column 0 causes + ** bit 0 to be set. Column 1 sets bit 1. And so forth. If the + ** column number is greater than the number of bits in the bitmask + ** then set the high-order bit of the bitmask. + */ + if( pExpr->iColumn>=0 && pMatch!=0 ){ + int n = pExpr->iColumn; + if( n>=sizeof(Bitmask)*8 ){ + n = sizeof(Bitmask)*8-1; + } + assert( pMatch->iCursor==pExpr->iTable ); + pMatch->colUsed |= ((Bitmask)1)<pLeft); + pExpr->pLeft = 0; + sqlite3ExprDelete(pExpr->pRight); + pExpr->pRight = 0; + pExpr->op = TK_COLUMN; +lookupname_end_2: + sqlite3_free(zCol); + if( cnt==1 ){ + assert( pNC!=0 ); + sqlite3AuthRead(pParse, pExpr, pSchema, pNC->pSrcList); + if( pMatch && !pMatch->pSelect ){ + pExpr->pTab = pMatch->pTab; + } + /* Increment the nRef value on all name contexts from TopNC up to + ** the point where the name matched. */ + for(;;){ + assert( pTopNC!=0 ); + pTopNC->nRef++; + if( pTopNC==pNC ) break; + pTopNC = pTopNC->pNext; + } + return 0; + } else { + return 1; + } +} + +/* +** This routine is designed as an xFunc for walkExprTree(). +** +** Resolve symbolic names into TK_COLUMN operators for the current +** node in the expression tree. Return 0 to continue the search down +** the tree or 2 to abort the tree walk. +** +** This routine also does error checking and name resolution for +** function names. The operator for aggregate functions is changed +** to TK_AGG_FUNCTION. +*/ +static int nameResolverStep(void *pArg, Expr *pExpr){ + NameContext *pNC = (NameContext*)pArg; + Parse *pParse; + + if( pExpr==0 ) return 1; + assert( pNC!=0 ); + pParse = pNC->pParse; + + if( ExprHasAnyProperty(pExpr, EP_Resolved) ) return 1; + ExprSetProperty(pExpr, EP_Resolved); +#ifndef NDEBUG + if( pNC->pSrcList && pNC->pSrcList->nAlloc>0 ){ + SrcList *pSrcList = pNC->pSrcList; + int i; + for(i=0; ipSrcList->nSrc; i++){ + assert( pSrcList->a[i].iCursor>=0 && pSrcList->a[i].iCursornTab); + } + } +#endif + switch( pExpr->op ){ + /* Double-quoted strings (ex: "abc") are used as identifiers if + ** possible. Otherwise they remain as strings. Single-quoted + ** strings (ex: 'abc') are always string literals. + */ + case TK_STRING: { + if( pExpr->token.z[0]=='\'' ) break; + /* Fall thru into the TK_ID case if this is a double-quoted string */ + } + /* A lone identifier is the name of a column. 
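/*
** A minimal standalone sketch of the colUsed bookkeeping described above;
** it is not part of expr.c.  demo_bitmask stands in for Bitmask: column N
** sets bit N, and any column beyond the width of the mask folds onto the
** highest bit.
*/
typedef unsigned long long demo_bitmask;

static demo_bitmask demo_mark_column_used(demo_bitmask colUsed, int iCol){
  int n = iCol;
  if( n>=(int)(sizeof(demo_bitmask)*8) ){
    n = (int)(sizeof(demo_bitmask)*8) - 1;   /* Saturate to the high bit */
  }
  return colUsed | (((demo_bitmask)1)<<n);
}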
+ */ + case TK_ID: { + lookupName(pParse, 0, 0, &pExpr->token, pNC, pExpr); + return 1; + } + + /* A table name and column name: ID.ID + ** Or a database, table and column: ID.ID.ID + */ + case TK_DOT: { + Token *pColumn; + Token *pTable; + Token *pDb; + Expr *pRight; + + /* if( pSrcList==0 ) break; */ + pRight = pExpr->pRight; + if( pRight->op==TK_ID ){ + pDb = 0; + pTable = &pExpr->pLeft->token; + pColumn = &pRight->token; + }else{ + assert( pRight->op==TK_DOT ); + pDb = &pExpr->pLeft->token; + pTable = &pRight->pLeft->token; + pColumn = &pRight->pRight->token; + } + lookupName(pParse, pDb, pTable, pColumn, pNC, pExpr); + return 1; + } + + /* Resolve function names + */ + case TK_CONST_FUNC: + case TK_FUNCTION: { + ExprList *pList = pExpr->pList; /* The argument list */ + int n = pList ? pList->nExpr : 0; /* Number of arguments */ + int no_such_func = 0; /* True if no such function exists */ + int wrong_num_args = 0; /* True if wrong number of arguments */ + int is_agg = 0; /* True if is an aggregate function */ + int i; + int auth; /* Authorization to use the function */ + int nId; /* Number of characters in function name */ + const char *zId; /* The function name. */ + FuncDef *pDef; /* Information about the function */ + int enc = ENC(pParse->db); /* The database encoding */ + + zId = (char*)pExpr->token.z; + nId = pExpr->token.n; + pDef = sqlite3FindFunction(pParse->db, zId, nId, n, enc, 0); + if( pDef==0 ){ + pDef = sqlite3FindFunction(pParse->db, zId, nId, -1, enc, 0); + if( pDef==0 ){ + no_such_func = 1; + }else{ + wrong_num_args = 1; + } + }else{ + is_agg = pDef->xFunc==0; + } +#ifndef SQLITE_OMIT_AUTHORIZATION + if( pDef ){ + auth = sqlite3AuthCheck(pParse, SQLITE_FUNCTION, 0, pDef->zName, 0); + if( auth!=SQLITE_OK ){ + if( auth==SQLITE_DENY ){ + sqlite3ErrorMsg(pParse, "not authorized to use function: %s", + pDef->zName); + pNC->nErr++; + } + pExpr->op = TK_NULL; + return 1; + } + } +#endif + if( is_agg && !pNC->allowAgg ){ + sqlite3ErrorMsg(pParse, "misuse of aggregate function %.*s()", nId,zId); + pNC->nErr++; + is_agg = 0; + }else if( no_such_func ){ + sqlite3ErrorMsg(pParse, "no such function: %.*s", nId, zId); + pNC->nErr++; + }else if( wrong_num_args ){ + sqlite3ErrorMsg(pParse,"wrong number of arguments to function %.*s()", + nId, zId); + pNC->nErr++; + } + if( is_agg ){ + pExpr->op = TK_AGG_FUNCTION; + pNC->hasAgg = 1; + } + if( is_agg ) pNC->allowAgg = 0; + for(i=0; pNC->nErr==0 && ia[i].pExpr, nameResolverStep, pNC); + } + if( is_agg ) pNC->allowAgg = 1; + /* FIX ME: Compute pExpr->affinity based on the expected return + ** type of the function + */ + return is_agg; + } +#ifndef SQLITE_OMIT_SUBQUERY + case TK_SELECT: + case TK_EXISTS: +#endif + case TK_IN: { + if( pExpr->pSelect ){ + int nRef = pNC->nRef; +#ifndef SQLITE_OMIT_CHECK + if( pNC->isCheck ){ + sqlite3ErrorMsg(pParse,"subqueries prohibited in CHECK constraints"); + } +#endif + sqlite3SelectResolve(pParse, pExpr->pSelect, pNC); + assert( pNC->nRef>=nRef ); + if( nRef!=pNC->nRef ){ + ExprSetProperty(pExpr, EP_VarSelect); + } + } + break; + } +#ifndef SQLITE_OMIT_CHECK + case TK_VARIABLE: { + if( pNC->isCheck ){ + sqlite3ErrorMsg(pParse,"parameters prohibited in CHECK constraints"); + } + break; + } +#endif + } + return 0; +} + +/* +** This routine walks an expression tree and resolves references to +** table columns. Nodes of the form ID.ID or ID resolve into an +** index to the table in the table list and a column offset. The +** Expr.opcode for such nodes is changed to TK_COLUMN. 
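/*
** A sketch of the function-name checks made by nameResolverStep() above,
** provoked through the public API; it is not part of expr.c.  The
** commented messages are taken from the format strings in the code, not
** from captured output, and assume the default built-in functions.
*/
#include <sqlite3.h>
static void demo_function_errors(sqlite3 *db){
  /* Expected: "no such function: nosuchfn" */
  sqlite3_exec(db, "SELECT nosuchfn(1);", 0, 0, 0);

  /* length() exists but not with zero arguments.
  ** Expected: "wrong number of arguments to function length()" */
  sqlite3_exec(db, "SELECT length();", 0, 0, 0);

  /* Aggregates are rejected where allowAgg is clear, e.g. in WHERE.
  ** Expected: "misuse of aggregate function count()" */
  sqlite3_exec(db, "SELECT 1 FROM sqlite_master WHERE count(*)>0;", 0, 0, 0);
}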
The Expr.iTable +** value is changed to the index of the referenced table in pTabList +** plus the "base" value. The base value will ultimately become the +** VDBE cursor number for a cursor that is pointing into the referenced +** table. The Expr.iColumn value is changed to the index of the column +** of the referenced table. The Expr.iColumn value for the special +** ROWID column is -1. Any INTEGER PRIMARY KEY column is tried as an +** alias for ROWID. +** +** Also resolve function names and check the functions for proper +** usage. Make sure all function names are recognized and all functions +** have the correct number of arguments. Leave an error message +** in pParse->zErrMsg if anything is amiss. Return the number of errors. +** +** If the expression contains aggregate functions then set the EP_Agg +** property on the expression. +*/ +int sqlite3ExprResolveNames( + NameContext *pNC, /* Namespace to resolve expressions in. */ + Expr *pExpr /* The expression to be analyzed. */ +){ + int savedHasAgg; + if( pExpr==0 ) return 0; +#if defined(SQLITE_TEST) || SQLITE_MAX_EXPR_DEPTH>0 + if( (pExpr->nHeight+pNC->pParse->nHeight)>SQLITE_MAX_EXPR_DEPTH ){ + sqlite3ErrorMsg(pNC->pParse, + "Expression tree is too large (maximum depth %d)", + SQLITE_MAX_EXPR_DEPTH + ); + return 1; + } + pNC->pParse->nHeight += pExpr->nHeight; +#endif + savedHasAgg = pNC->hasAgg; + pNC->hasAgg = 0; + walkExprTree(pExpr, nameResolverStep, pNC); +#if defined(SQLITE_TEST) || SQLITE_MAX_EXPR_DEPTH>0 + pNC->pParse->nHeight -= pExpr->nHeight; +#endif + if( pNC->nErr>0 ){ + ExprSetProperty(pExpr, EP_Error); + } + if( pNC->hasAgg ){ + ExprSetProperty(pExpr, EP_Agg); + }else if( savedHasAgg ){ + pNC->hasAgg = 1; + } + return ExprHasProperty(pExpr, EP_Error); +} + +/* +** A pointer instance of this structure is used to pass information +** through walkExprTree into codeSubqueryStep(). +*/ +typedef struct QueryCoder QueryCoder; +struct QueryCoder { + Parse *pParse; /* The parsing context */ + NameContext *pNC; /* Namespace of first enclosing query */ +}; + +#ifdef SQLITE_TEST + int sqlite3_enable_in_opt = 1; +#else + #define sqlite3_enable_in_opt 1 +#endif + +/* +** This function is used by the implementation of the IN (...) operator. +** It's job is to find or create a b-tree structure that may be used +** either to test for membership of the (...) set or to iterate through +** its members, skipping duplicates. +** +** The cursor opened on the structure (database table, database index +** or ephermal table) is stored in pX->iTable before this function returns. +** The returned value indicates the structure type, as follows: +** +** IN_INDEX_ROWID - The cursor was opened on a database table. +** IN_INDEX_INDEX - The cursor was opened on a database index. +** IN_INDEX_EPH - The cursor was opened on a specially created and +** populated epheremal table. +** +** An existing structure may only be used if the SELECT is of the simple +** form: +** +** SELECT FROM +** +** If the mustBeUnique parameter is false, the structure will be used +** for fast set membership tests. In this case an epheremal table must +** be used unless is an INTEGER PRIMARY KEY or an index can +** be found with as its left-most column. +** +** If mustBeUnique is true, then the structure will be used to iterate +** through the set members, skipping any duplicates. 
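/*
** A sketch of how the SQLITE_MAX_EXPR_DEPTH check in
** sqlite3ExprResolveNames() above surfaces to an application; it is not
** part of expr.c.  It assumes the default depth limit of 1000 and that a
** left-deep chain of 2000 additions exceeds it.
*/
#include <sqlite3.h>
#include <stdlib.h>
#include <string.h>
static int demo_deep_expression(sqlite3 *db){
  int i, rc;
  int nTerm = 2000;
  char *zSql = malloc(2*(size_t)nTerm + 16);
  char *z = zSql;
  if( zSql==0 ) return SQLITE_NOMEM;
  memcpy(z, "SELECT 1", 8);
  z += 8;
  for(i=0; i<nTerm; i++){ *z++ = '+'; *z++ = '1'; }
  *z = 0;
  /* Expected: "Expression tree is too large (maximum depth 1000)" */
  rc = sqlite3_exec(db, zSql, 0, 0, 0);
  free(zSql);
  return rc;
}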
In this case an +** epheremal table must be used unless the selected is guaranteed +** to be unique - either because it is an INTEGER PRIMARY KEY or it +** is unique by virtue of a constraint or implicit index. +*/ +#ifndef SQLITE_OMIT_SUBQUERY +int sqlite3FindInIndex(Parse *pParse, Expr *pX, int mustBeUnique){ + Select *p; + int eType = 0; + int iTab = pParse->nTab++; + + /* The follwing if(...) expression is true if the SELECT is of the + ** simple form: + ** + ** SELECT FROM
                + ** + ** If this is the case, it may be possible to use an existing table + ** or index instead of generating an epheremal table. + */ + if( sqlite3_enable_in_opt + && (p=pX->pSelect)!=0 && !p->pPrior + && !p->isDistinct && !p->isAgg && !p->pGroupBy + && p->pSrc && p->pSrc->nSrc==1 && !p->pSrc->a[0].pSelect + && p->pSrc->a[0].pTab && !p->pSrc->a[0].pTab->pSelect + && p->pEList->nExpr==1 && p->pEList->a[0].pExpr->op==TK_COLUMN + && !p->pLimit && !p->pOffset && !p->pWhere + ){ + sqlite3 *db = pParse->db; + Index *pIdx; + Expr *pExpr = p->pEList->a[0].pExpr; + int iCol = pExpr->iColumn; + Vdbe *v = sqlite3GetVdbe(pParse); + + /* This function is only called from two places. In both cases the vdbe + ** has already been allocated. So assume sqlite3GetVdbe() is always + ** successful here. + */ + assert(v); + if( iCol<0 ){ + int iMem = ++pParse->nMem; + int iAddr; + Table *pTab = p->pSrc->a[0].pTab; + int iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + sqlite3VdbeUsesBtree(v, iDb); + + iAddr = sqlite3VdbeAddOp1(v, OP_If, iMem); + sqlite3VdbeAddOp2(v, OP_Integer, 1, iMem); + + sqlite3OpenTable(pParse, iTab, iDb, pTab, OP_OpenRead); + eType = IN_INDEX_ROWID; + + sqlite3VdbeJumpHere(v, iAddr); + }else{ + /* The collation sequence used by the comparison. If an index is to + ** be used in place of a temp-table, it must be ordered according + ** to this collation sequence. + */ + CollSeq *pReq = sqlite3BinaryCompareCollSeq(pParse, pX->pLeft, pExpr); + + /* Check that the affinity that will be used to perform the + ** comparison is the same as the affinity of the column. If + ** it is not, it is not possible to use any index. + */ + Table *pTab = p->pSrc->a[0].pTab; + char aff = comparisonAffinity(pX); + int affinity_ok = (pTab->aCol[iCol].affinity==aff||aff==SQLITE_AFF_NONE); + + for(pIdx=pTab->pIndex; pIdx && eType==0 && affinity_ok; pIdx=pIdx->pNext){ + if( (pIdx->aiColumn[0]==iCol) + && (pReq==sqlite3FindCollSeq(db, ENC(db), pIdx->azColl[0], -1, 0)) + && (!mustBeUnique || (pIdx->nColumn==1 && pIdx->onError!=OE_None)) + ){ + int iDb; + int iMem = ++pParse->nMem; + int iAddr; + char *pKey; + + pKey = (char *)sqlite3IndexKeyinfo(pParse, pIdx); + iDb = sqlite3SchemaToIndex(db, pIdx->pSchema); + sqlite3VdbeUsesBtree(v, iDb); + + iAddr = sqlite3VdbeAddOp1(v, OP_If, iMem); + sqlite3VdbeAddOp2(v, OP_Integer, 1, iMem); + + sqlite3VdbeAddOp4(v, OP_OpenRead, iTab, pIdx->tnum, iDb, + pKey,P4_KEYINFO_HANDOFF); + VdbeComment((v, "%s", pIdx->zName)); + eType = IN_INDEX_INDEX; + sqlite3VdbeAddOp2(v, OP_SetNumColumns, iTab, pIdx->nColumn); + + sqlite3VdbeJumpHere(v, iAddr); + } + } + } + } + + if( eType==0 ){ + sqlite3CodeSubselect(pParse, pX); + eType = IN_INDEX_EPH; + }else{ + pX->iTable = iTab; + } + return eType; +} +#endif + +/* +** Generate code for scalar subqueries used as an expression +** and IN operators. Examples: +** +** (SELECT a FROM b) -- subquery +** EXISTS (SELECT a FROM b) -- EXISTS subquery +** x IN (4,5,11) -- IN operator with list on right-hand side +** x IN (SELECT a FROM b) -- IN operator with subquery on the right +** +** The pExpr parameter describes the expression that contains the IN +** operator or subquery. 
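/*
** A sketch of the statement shapes that the classification in
** sqlite3FindInIndex() above distinguishes; it is not part of expr.c.
** The schema is invented and the comments restate the conditions tested
** in the code rather than describing captured query plans.
*/
#include <sqlite3.h>
static void demo_in_forms(sqlite3 *db){
  sqlite3_exec(db,
      "CREATE TABLE t1(x INTEGER PRIMARY KEY, y);"
      "CREATE TABLE t2(a, b);"
      "CREATE INDEX i2a ON t2(a);", 0, 0, 0);

  /* RHS is a bare INTEGER PRIMARY KEY column: the table itself can be
  ** probed by rowid (IN_INDEX_ROWID). */
  sqlite3_exec(db, "SELECT * FROM t2 WHERE b IN (SELECT x FROM t1);", 0, 0, 0);

  /* RHS is a plain indexed column: index i2a may be used (IN_INDEX_INDEX),
  ** provided the affinity and collation requirements above are met. */
  sqlite3_exec(db, "SELECT * FROM t1 WHERE y IN (SELECT a FROM t2);", 0, 0, 0);

  /* A value list is always loaded into an ephemeral table (IN_INDEX_EPH). */
  sqlite3_exec(db, "SELECT * FROM t1 WHERE y IN (1,2,3);", 0, 0, 0);
}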
+*/ +#ifndef SQLITE_OMIT_SUBQUERY +void sqlite3CodeSubselect(Parse *pParse, Expr *pExpr){ + int testAddr = 0; /* One-time test address */ + Vdbe *v = sqlite3GetVdbe(pParse); + if( v==0 ) return; + + + /* This code must be run in its entirety every time it is encountered + ** if any of the following is true: + ** + ** * The right-hand side is a correlated subquery + ** * The right-hand side is an expression list containing variables + ** * We are inside a trigger + ** + ** If all of the above are false, then we can run this code just once + ** save the results, and reuse the same result on subsequent invocations. + */ + if( !ExprHasAnyProperty(pExpr, EP_VarSelect) && !pParse->trigStack ){ + int mem = ++pParse->nMem; + sqlite3VdbeAddOp1(v, OP_If, mem); + testAddr = sqlite3VdbeAddOp2(v, OP_Integer, 1, mem); + assert( testAddr>0 || pParse->db->mallocFailed ); + } + + switch( pExpr->op ){ + case TK_IN: { + char affinity; + KeyInfo keyInfo; + int addr; /* Address of OP_OpenEphemeral instruction */ + + affinity = sqlite3ExprAffinity(pExpr->pLeft); + + /* Whether this is an 'x IN(SELECT...)' or an 'x IN()' + ** expression it is handled the same way. A virtual table is + ** filled with single-field index keys representing the results + ** from the SELECT or the . + ** + ** If the 'x' expression is a column value, or the SELECT... + ** statement returns a column value, then the affinity of that + ** column is used to build the index keys. If both 'x' and the + ** SELECT... statement are columns, then numeric affinity is used + ** if either column has NUMERIC or INTEGER affinity. If neither + ** 'x' nor the SELECT... statement are columns, then numeric affinity + ** is used. + */ + pExpr->iTable = pParse->nTab++; + addr = sqlite3VdbeAddOp1(v, OP_OpenEphemeral, pExpr->iTable); + memset(&keyInfo, 0, sizeof(keyInfo)); + keyInfo.nField = 1; + sqlite3VdbeAddOp2(v, OP_SetNumColumns, pExpr->iTable, 1); + + if( pExpr->pSelect ){ + /* Case 1: expr IN (SELECT ...) + ** + ** Generate code to write the results of the select into the temporary + ** table allocated and opened above. + */ + SelectDest dest; + ExprList *pEList; + + sqlite3SelectDestInit(&dest, SRT_Set, pExpr->iTable); + dest.affinity = (int)affinity; + assert( (pExpr->iTable&0x0000FFFF)==pExpr->iTable ); + if( sqlite3Select(pParse, pExpr->pSelect, &dest, 0, 0, 0, 0) ){ + return; + } + pEList = pExpr->pSelect->pEList; + if( pEList && pEList->nExpr>0 ){ + keyInfo.aColl[0] = sqlite3BinaryCompareCollSeq(pParse, pExpr->pLeft, + pEList->a[0].pExpr); + } + }else if( pExpr->pList ){ + /* Case 2: expr IN (exprlist) + ** + ** For each expression, build an index key from the evaluation and + ** store it in the temporary table. If is a column, then use + ** that columns affinity when building index keys. If is not + ** a column, use numeric affinity. + */ + int i; + ExprList *pList = pExpr->pList; + struct ExprList_item *pItem; + int r1, r2; + + if( !affinity ){ + affinity = SQLITE_AFF_NONE; + } + keyInfo.aColl[0] = pExpr->pLeft->pColl; + + /* Loop through each expression in . */ + r1 = sqlite3GetTempReg(pParse); + r2 = sqlite3GetTempReg(pParse); + for(i=pList->nExpr, pItem=pList->a; i>0; i--, pItem++){ + Expr *pE2 = pItem->pExpr; + + /* If the expression is not constant then we will need to + ** disable the test that was generated above that makes sure + ** this code only executes once. Because for a non-constant + ** expression we need to rerun this code each time. 
+ */ + if( testAddr && !sqlite3ExprIsConstant(pE2) ){ + sqlite3VdbeChangeToNoop(v, testAddr-1, 2); + testAddr = 0; + } + + /* Evaluate the expression and insert it into the temp table */ + sqlite3ExprCode(pParse, pE2, r1); + sqlite3VdbeAddOp4(v, OP_MakeRecord, r1, 1, r2, &affinity, 1); + sqlite3VdbeAddOp2(v, OP_IdxInsert, pExpr->iTable, r2); + } + sqlite3ReleaseTempReg(pParse, r1); + sqlite3ReleaseTempReg(pParse, r2); + } + sqlite3VdbeChangeP4(v, addr, (void *)&keyInfo, P4_KEYINFO); + break; + } + + case TK_EXISTS: + case TK_SELECT: { + /* This has to be a scalar SELECT. Generate code to put the + ** value of this select in a memory cell and record the number + ** of the memory cell in iColumn. + */ + static const Token one = { (u8*)"1", 0, 1 }; + Select *pSel; + SelectDest dest; + + pSel = pExpr->pSelect; + sqlite3SelectDestInit(&dest, 0, ++pParse->nMem); + if( pExpr->op==TK_SELECT ){ + dest.eDest = SRT_Mem; + sqlite3VdbeAddOp2(v, OP_Null, 0, dest.iParm); + VdbeComment((v, "Init subquery result")); + }else{ + dest.eDest = SRT_Exists; + sqlite3VdbeAddOp2(v, OP_Integer, 0, dest.iParm); + VdbeComment((v, "Init EXISTS result")); + } + sqlite3ExprDelete(pSel->pLimit); + pSel->pLimit = sqlite3PExpr(pParse, TK_INTEGER, 0, 0, &one); + if( sqlite3Select(pParse, pSel, &dest, 0, 0, 0, 0) ){ + return; + } + pExpr->iColumn = dest.iParm; + break; + } + } + + if( testAddr ){ + sqlite3VdbeJumpHere(v, testAddr-1); + } + + return; +} +#endif /* SQLITE_OMIT_SUBQUERY */ + +/* +** Duplicate an 8-byte value +*/ +static char *dup8bytes(Vdbe *v, const char *in){ + char *out = sqlite3DbMallocRaw(sqlite3VdbeDb(v), 8); + if( out ){ + memcpy(out, in, 8); + } + return out; +} + +/* +** Generate an instruction that will put the floating point +** value described by z[0..n-1] into register iMem. +** +** The z[] string will probably not be zero-terminated. But the +** z[n] character is guaranteed to be something that does not look +** like the continuation of the number. +*/ +static void codeReal(Vdbe *v, const char *z, int n, int negateFlag, int iMem){ + assert( z || v==0 || sqlite3VdbeDb(v)->mallocFailed ); + if( z ){ + double value; + char *zV; + assert( !isdigit(z[n]) ); + sqlite3AtoF(z, &value); + if( negateFlag ) value = -value; + zV = dup8bytes(v, (char*)&value); + sqlite3VdbeAddOp4(v, OP_Real, 0, iMem, 0, zV, P4_REAL); + } +} + + +/* +** Generate an instruction that will put the integer describe by +** text z[0..n-1] into register iMem. +** +** The z[] string will probably not be zero-terminated. But the +** z[n] character is guaranteed to be something that does not look +** like the continuation of the number. +*/ +static void codeInteger(Vdbe *v, const char *z, int n, int negFlag, int iMem){ + assert( z || v==0 || sqlite3VdbeDb(v)->mallocFailed ); + if( z ){ + int i; + assert( !isdigit(z[n]) ); + if( sqlite3GetInt32(z, &i) ){ + if( negFlag ) i = -i; + sqlite3VdbeAddOp2(v, OP_Integer, i, iMem); + }else if( sqlite3FitsIn64Bits(z, negFlag) ){ + i64 value; + char *zV; + sqlite3Atoi64(z, &value); + if( negFlag ) value = -value; + zV = dup8bytes(v, (char*)&value); + sqlite3VdbeAddOp4(v, OP_Int64, 0, iMem, 0, zV, P4_INT64); + }else{ + codeReal(v, z, n, negFlag, iMem); + } + } +} + + +/* +** Generate code that will extract the iColumn-th column from +** table pTab and store the column value in register iReg. +** There is an open cursor to pTab in +** iTable. If iColumn<0 then code is generated that extracts the rowid. 
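/*
** A minimal standalone sketch of the size tiering done by codeInteger()
** above -- 32-bit integer, then 64-bit integer, then fall back to a real;
** it is not part of expr.c.  Standard C replaces the internal
** sqlite3GetInt32()/sqlite3Atoi64()/sqlite3AtoF() helpers, so the edge
** cases do not match exactly.
*/
#include <errno.h>
#include <limits.h>
#include <stdlib.h>
typedef enum {
  DEMO_USE_INT32,     /* would become OP_Integer */
  DEMO_USE_INT64,     /* would become OP_Int64   */
  DEMO_USE_REAL       /* would become OP_Real    */
} demo_numclass;

static demo_numclass demo_classify_number(const char *z, int negFlag){
  char *zEnd = 0;
  long long v;
  errno = 0;
  v = strtoll(z, &zEnd, 10);
  if( errno==0 && zEnd!=z && *zEnd==0 ){
    if( negFlag ) v = -v;
    if( v>=INT_MIN && v<=INT_MAX ) return DEMO_USE_INT32;
    return DEMO_USE_INT64;
  }
  return DEMO_USE_REAL;
}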
+*/ +void sqlite3ExprCodeGetColumn( + Vdbe *v, /* The VM being created */ + Table *pTab, /* Description of the table we are reading from */ + int iColumn, /* Index of the table column */ + int iTable, /* The cursor pointing to the table */ + int iReg /* Store results here */ +){ + if( iColumn<0 ){ + int op = (pTab && IsVirtual(pTab)) ? OP_VRowid : OP_Rowid; + sqlite3VdbeAddOp2(v, op, iTable, iReg); + }else if( pTab==0 ){ + sqlite3VdbeAddOp3(v, OP_Column, iTable, iColumn, iReg); + }else{ + int op = IsVirtual(pTab) ? OP_VColumn : OP_Column; + sqlite3VdbeAddOp3(v, op, iTable, iColumn, iReg); + sqlite3ColumnDefault(v, pTab, iColumn); +#ifndef SQLITE_OMIT_FLOATING_POINT + if( pTab->aCol[iColumn].affinity==SQLITE_AFF_REAL ){ + sqlite3VdbeAddOp1(v, OP_RealAffinity, iReg); + } +#endif + } +} + +/* +** Generate code into the current Vdbe to evaluate the given +** expression. Attempt to store the results in register "target". +** Return the register where results are stored. +** +** With this routine, there is no guaranteed that results will +** be stored in target. The result might be stored in some other +** register if it is convenient to do so. The calling function +** must check the return code and move the results to the desired +** register. +*/ +static int sqlite3ExprCodeTarget(Parse *pParse, Expr *pExpr, int target){ + Vdbe *v = pParse->pVdbe; /* The VM under construction */ + int op; /* The opcode being coded */ + int inReg = target; /* Results stored in register inReg */ + int regFree1 = 0; /* If non-zero free this temporary register */ + int regFree2 = 0; /* If non-zero free this temporary register */ + int r1, r2, r3; /* Various register numbers */ + + assert( v!=0 || pParse->db->mallocFailed ); + assert( target>0 && target<=pParse->nMem ); + if( v==0 ) return 0; + + if( pExpr==0 ){ + op = TK_NULL; + }else{ + op = pExpr->op; + } + switch( op ){ + case TK_AGG_COLUMN: { + AggInfo *pAggInfo = pExpr->pAggInfo; + struct AggInfo_col *pCol = &pAggInfo->aCol[pExpr->iAgg]; + if( !pAggInfo->directMode ){ + assert( pCol->iMem>0 ); + inReg = pCol->iMem; + break; + }else if( pAggInfo->useSortingIdx ){ + sqlite3VdbeAddOp3(v, OP_Column, pAggInfo->sortingIdx, + pCol->iSorterColumn, target); + break; + } + /* Otherwise, fall thru into the TK_COLUMN case */ + } + case TK_COLUMN: { + if( pExpr->iTable<0 ){ + /* This only happens when coding check constraints */ + assert( pParse->ckBase>0 ); + inReg = pExpr->iColumn + pParse->ckBase; + }else{ + sqlite3ExprCodeGetColumn(v, pExpr->pTab, + pExpr->iColumn, pExpr->iTable, target); + } + break; + } + case TK_INTEGER: { + codeInteger(v, (char*)pExpr->token.z, pExpr->token.n, 0, target); + break; + } + case TK_FLOAT: { + codeReal(v, (char*)pExpr->token.z, pExpr->token.n, 0, target); + break; + } + case TK_STRING: { + sqlite3DequoteExpr(pParse->db, pExpr); + sqlite3VdbeAddOp4(v,OP_String8, 0, target, 0, + (char*)pExpr->token.z, pExpr->token.n); + break; + } + case TK_NULL: { + sqlite3VdbeAddOp2(v, OP_Null, 0, target); + break; + } +#ifndef SQLITE_OMIT_BLOB_LITERAL + case TK_BLOB: { + int n; + const char *z; + char *zBlob; + assert( pExpr->token.n>=3 ); + assert( pExpr->token.z[0]=='x' || pExpr->token.z[0]=='X' ); + assert( pExpr->token.z[1]=='\'' ); + assert( pExpr->token.z[pExpr->token.n-1]=='\'' ); + n = pExpr->token.n - 3; + z = (char*)pExpr->token.z + 2; + zBlob = sqlite3HexToBlob(sqlite3VdbeDb(v), z, n); + sqlite3VdbeAddOp4(v, OP_Blob, n/2, target, 0, zBlob, P4_DYNAMIC); + break; + } +#endif + case TK_VARIABLE: { + sqlite3VdbeAddOp2(v, OP_Variable, 
pExpr->iTable, target); + if( pExpr->token.n>1 ){ + sqlite3VdbeChangeP4(v, -1, (char*)pExpr->token.z, pExpr->token.n); + } + break; + } + case TK_REGISTER: { + inReg = pExpr->iTable; + break; + } +#ifndef SQLITE_OMIT_CAST + case TK_CAST: { + /* Expressions of the form: CAST(pLeft AS token) */ + int aff, to_op; + inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target); + aff = sqlite3AffinityType(&pExpr->token); + to_op = aff - SQLITE_AFF_TEXT + OP_ToText; + assert( to_op==OP_ToText || aff!=SQLITE_AFF_TEXT ); + assert( to_op==OP_ToBlob || aff!=SQLITE_AFF_NONE ); + assert( to_op==OP_ToNumeric || aff!=SQLITE_AFF_NUMERIC ); + assert( to_op==OP_ToInt || aff!=SQLITE_AFF_INTEGER ); + assert( to_op==OP_ToReal || aff!=SQLITE_AFF_REAL ); + sqlite3VdbeAddOp1(v, to_op, inReg); + break; + } +#endif /* SQLITE_OMIT_CAST */ + case TK_LT: + case TK_LE: + case TK_GT: + case TK_GE: + case TK_NE: + case TK_EQ: { + assert( TK_LT==OP_Lt ); + assert( TK_LE==OP_Le ); + assert( TK_GT==OP_Gt ); + assert( TK_GE==OP_Ge ); + assert( TK_EQ==OP_Eq ); + assert( TK_NE==OP_Ne ); + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); + codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, + r1, r2, inReg, SQLITE_STOREP2); + break; + } + case TK_AND: + case TK_OR: + case TK_PLUS: + case TK_STAR: + case TK_MINUS: + case TK_REM: + case TK_BITAND: + case TK_BITOR: + case TK_SLASH: + case TK_LSHIFT: + case TK_RSHIFT: + case TK_CONCAT: { + assert( TK_AND==OP_And ); + assert( TK_OR==OP_Or ); + assert( TK_PLUS==OP_Add ); + assert( TK_MINUS==OP_Subtract ); + assert( TK_REM==OP_Remainder ); + assert( TK_BITAND==OP_BitAnd ); + assert( TK_BITOR==OP_BitOr ); + assert( TK_SLASH==OP_Divide ); + assert( TK_LSHIFT==OP_ShiftLeft ); + assert( TK_RSHIFT==OP_ShiftRight ); + assert( TK_CONCAT==OP_Concat ); + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); + sqlite3VdbeAddOp3(v, op, r2, r1, target); + break; + } + case TK_UMINUS: { + Expr *pLeft = pExpr->pLeft; + assert( pLeft ); + if( pLeft->op==TK_FLOAT || pLeft->op==TK_INTEGER ){ + Token *p = &pLeft->token; + if( pLeft->op==TK_FLOAT ){ + codeReal(v, (char*)p->z, p->n, 1, target); + }else{ + codeInteger(v, (char*)p->z, p->n, 1, target); + } + }else{ + regFree1 = r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp2(v, OP_Integer, 0, r1); + r2 = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target); + sqlite3VdbeAddOp3(v, OP_Subtract, r2, r1, target); + } + inReg = target; + break; + } + case TK_BITNOT: + case TK_NOT: { + assert( TK_BITNOT==OP_BitNot ); + assert( TK_NOT==OP_Not ); + inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target); + sqlite3VdbeAddOp1(v, op, inReg); + break; + } + case TK_ISNULL: + case TK_NOTNULL: { + int addr; + assert( TK_ISNULL==OP_IsNull ); + assert( TK_NOTNULL==OP_NotNull ); + sqlite3VdbeAddOp2(v, OP_Integer, 1, target); + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + addr = sqlite3VdbeAddOp1(v, op, r1); + sqlite3VdbeAddOp2(v, OP_AddImm, target, -1); + sqlite3VdbeJumpHere(v, addr); + break; + } + case TK_AGG_FUNCTION: { + AggInfo *pInfo = pExpr->pAggInfo; + if( pInfo==0 ){ + sqlite3ErrorMsg(pParse, "misuse of aggregate: %T", + &pExpr->span); + }else{ + inReg = pInfo->aFunc[pExpr->iAgg].iMem; + } + break; + } + case TK_CONST_FUNC: + case TK_FUNCTION: { + ExprList *pList = pExpr->pList; + int nExpr = pList ? 
pList->nExpr : 0;
+      FuncDef *pDef;
+      int nId;
+      const char *zId;
+      int constMask = 0;
+      int i;
+      sqlite3 *db = pParse->db;
+      u8 enc = ENC(db);
+      CollSeq *pColl = 0;
+
+      zId = (char*)pExpr->token.z;
+      nId = pExpr->token.n;
+      pDef = sqlite3FindFunction(pParse->db, zId, nId, nExpr, enc, 0);
+      assert( pDef!=0 );
+      if( pList ){
+        nExpr = pList->nExpr;
+        r1 = sqlite3GetTempRange(pParse, nExpr);
+        sqlite3ExprCodeExprList(pParse, pList, r1);
+      }else{
+        nExpr = r1 = 0;
+      }
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+      /* Possibly overload the function if the first argument is
+      ** a virtual table column.
+      **
+      ** For infix functions (LIKE, GLOB, REGEXP, and MATCH) use the
+      ** second argument, not the first, as the argument to test to
+      ** see if it is a column in a virtual table.  This is done because
+      ** the left operand of infix functions (the operand we want to
+      ** control overloading) ends up as the second argument to the
+      ** function.  The expression "A glob B" is equivalent to
+      ** "glob(B,A)".  We want to use the A in "A glob B" to test
+      ** for function overloading.  But we use the B term in "glob(B,A)".
+      */
+      if( nExpr>=2 && (pExpr->flags & EP_InfixFunc) ){
+        pDef = sqlite3VtabOverloadFunction(db, pDef, nExpr, pList->a[1].pExpr);
+      }else if( nExpr>0 ){
+        pDef = sqlite3VtabOverloadFunction(db, pDef, nExpr, pList->a[0].pExpr);
+      }
+#endif
+      for(i=0; i<nExpr && i<32; i++){
+        if( sqlite3ExprIsConstant(pList->a[i].pExpr) ){
+          constMask |= (1<<i);
+        }
+        if( pDef->needCollSeq && !pColl ){
+          pColl = sqlite3ExprCollSeq(pParse, pList->a[i].pExpr);
+        }
+      }
+      if( pDef->needCollSeq ){
+        if( !pColl ) pColl = pParse->db->pDfltColl;
+        sqlite3VdbeAddOp4(v, OP_CollSeq, 0, 0, 0, (char *)pColl, P4_COLLSEQ);
+      }
+      sqlite3VdbeAddOp4(v, OP_Function, constMask, r1, target,
+                        (char*)pDef, P4_FUNCDEF);
+      sqlite3VdbeChangeP5(v, nExpr);
+      if( nExpr ){
+        sqlite3ReleaseTempRange(pParse, r1, nExpr);
+      }
+      break;
+    }
+#ifndef SQLITE_OMIT_SUBQUERY
+    case TK_EXISTS:
+    case TK_SELECT: {
+      if( pExpr->iColumn==0 ){
+        sqlite3CodeSubselect(pParse, pExpr);
+      }
+      inReg = pExpr->iColumn;
+      break;
+    }
+    case TK_IN: {
+      int j1, j2, j3, j4, j5;
+      char affinity;
+      int eType;
+
+      eType = sqlite3FindInIndex(pParse, pExpr, 0);
+
+      /* Figure out the affinity to use to create a key from the results
+      ** of the expression. affinityStr stores a static string suitable for
+      ** P4 of OP_MakeRecord.
+      */
+      affinity = comparisonAffinity(pExpr);
+
+      sqlite3VdbeAddOp2(v, OP_Integer, 1, target);
+
+      /* Code the <expr> from "<expr> IN (...)". The temporary table
+      ** pExpr->iTable contains the values that make up the (...) set.
+      */
+      r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, &regFree1);
+      j1 = sqlite3VdbeAddOp1(v, OP_NotNull, r1);
+      sqlite3VdbeAddOp2(v, OP_Null, 0, target);
+      j2 = sqlite3VdbeAddOp0(v, OP_Goto);
+      sqlite3VdbeJumpHere(v, j1);
+      if( eType==IN_INDEX_ROWID ){
+        j3 = sqlite3VdbeAddOp3(v, OP_MustBeInt, r1, 0, 1);
+        j4 = sqlite3VdbeAddOp3(v, OP_NotExists, pExpr->iTable, 0, r1);
+        j5 = sqlite3VdbeAddOp0(v, OP_Goto);
+        sqlite3VdbeJumpHere(v, j3);
+        sqlite3VdbeJumpHere(v, j4);
+      }else{
+        r2 = regFree2 = sqlite3GetTempReg(pParse);
+        sqlite3VdbeAddOp4(v, OP_MakeRecord, r1, 1, r2, &affinity, 1);
+        j5 = sqlite3VdbeAddOp3(v, OP_Found, pExpr->iTable, 0, r2);
+      }
+      sqlite3VdbeAddOp2(v, OP_AddImm, target, -1);
+      sqlite3VdbeJumpHere(v, j2);
+      sqlite3VdbeJumpHere(v, j5);
+      break;
+    }
+#endif
+    /*
+    ** x BETWEEN y AND z
+    **
+    ** This is equivalent to
+    **
+    **    x>=y AND x<=z
+    **
+    ** X is stored in pExpr->pLeft.
+    ** Y is stored in pExpr->pList->a[0].pExpr.
+    ** Z is stored in pExpr->pList->a[1].pExpr.
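+    **
+    ** For example, a WHERE clause such as "a BETWEEN 10 AND 20" is coded
+    ** below exactly as if it had been written "a>=10 AND a<=20", except
+    ** that the expression "a" is evaluated into a register only once and
+    ** that register is reused by both comparisons.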
+    */
+    case TK_BETWEEN: {
+      Expr *pLeft = pExpr->pLeft;
+      struct ExprList_item *pLItem = pExpr->pList->a;
+      Expr *pRight = pLItem->pExpr;
+
+      r1 = sqlite3ExprCodeTemp(pParse, pLeft, &regFree1);
+      r2 = sqlite3ExprCodeTemp(pParse, pRight, &regFree2);
+      r3 = sqlite3GetTempReg(pParse);
+      codeCompare(pParse, pLeft, pRight, OP_Ge,
+                  r1, r2, r3, SQLITE_STOREP2);
+      pLItem++;
+      pRight = pLItem->pExpr;
+      sqlite3ReleaseTempReg(pParse, regFree2);
+      r2 = sqlite3ExprCodeTemp(pParse, pRight, &regFree2);
+      codeCompare(pParse, pLeft, pRight, OP_Le, r1, r2, r2, SQLITE_STOREP2);
+      sqlite3VdbeAddOp3(v, OP_And, r3, r2, target);
+      sqlite3ReleaseTempReg(pParse, r3);
+      break;
+    }
+    case TK_UPLUS: {
+      inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target);
+      break;
+    }
+
+    /*
+    ** Form A:
+    **   CASE x WHEN e1 THEN r1 WHEN e2 THEN r2 ... WHEN eN THEN rN ELSE y END
+    **
+    ** Form B:
+    **   CASE WHEN e1 THEN r1 WHEN e2 THEN r2 ... WHEN eN THEN rN ELSE y END
+    **
+    ** Form A can be transformed into the equivalent form B as follows:
+    **   CASE WHEN x=e1 THEN r1 WHEN x=e2 THEN r2 ...
+    **        WHEN x=eN THEN rN ELSE y END
+    **
+    ** X (if it exists) is in pExpr->pLeft.
+    ** Y is in pExpr->pRight.  The Y is also optional.  If there is no
+    ** ELSE clause and no other term matches, then the result of the
+    ** expression is NULL.
+    ** Ei is in pExpr->pList->a[i*2] and Ri is pExpr->pList->a[i*2+1].
+    **
+    ** The result of the expression is the Ri for the first matching Ei,
+    ** or if there is no matching Ei, the ELSE term Y, or if there is
+    ** no ELSE term, NULL.
+    */
+    case TK_CASE: {
+      int endLabel;                     /* GOTO label for end of CASE stmt */
+      int nextCase;                     /* GOTO label for next WHEN clause */
+      int nExpr;                        /* 2x number of WHEN terms */
+      int i;                            /* Loop counter */
+      ExprList *pEList;                 /* List of WHEN terms */
+      struct ExprList_item *aListelem;  /* Array of WHEN terms */
+      Expr opCompare;                   /* The X==Ei expression */
+      Expr cacheX;                      /* Cached expression X */
+      Expr *pX;                         /* The X expression */
+      Expr *pTest;                      /* X==Ei (form A) or just Ei (form B) */
+
+      assert(pExpr->pList);
+      assert((pExpr->pList->nExpr % 2) == 0);
+      assert(pExpr->pList->nExpr > 0);
+      pEList = pExpr->pList;
+      aListelem = pEList->a;
+      nExpr = pEList->nExpr;
+      endLabel = sqlite3VdbeMakeLabel(v);
+      if( (pX = pExpr->pLeft)!=0 ){
+        cacheX = *pX;
+        cacheX.iTable = sqlite3ExprCodeTemp(pParse, pX, &regFree1);
+        cacheX.op = TK_REGISTER;
+        opCompare.op = TK_EQ;
+        opCompare.pLeft = &cacheX;
+        pTest = &opCompare;
+      }
+      for(i=0; i<nExpr; i=i+2){
+        if( pX ){
+          opCompare.pRight = aListelem[i].pExpr;
+        }else{
+          pTest = aListelem[i].pExpr;
+        }
+        nextCase = sqlite3VdbeMakeLabel(v);
+        sqlite3ExprIfFalse(pParse, pTest, nextCase, SQLITE_JUMPIFNULL);
+        sqlite3ExprCode(pParse, aListelem[i+1].pExpr, target);
+        sqlite3VdbeAddOp2(v, OP_Goto, 0, endLabel);
+        sqlite3VdbeResolveLabel(v, nextCase);
+      }
+      if( pExpr->pRight ){
+        sqlite3ExprCode(pParse, pExpr->pRight, target);
+      }else{
+        sqlite3VdbeAddOp2(v, OP_Null, 0, target);
+      }
+      sqlite3VdbeResolveLabel(v, endLabel);
+      break;
+    }
+#ifndef SQLITE_OMIT_TRIGGER
+    case TK_RAISE: {
+      if( !pParse->trigStack ){
+        sqlite3ErrorMsg(pParse,
+                       "RAISE() may only be used within a trigger-program");
+        return 0;
+      }
+      if( pExpr->iColumn!=OE_Ignore ){
+         assert( pExpr->iColumn==OE_Rollback ||
+                 pExpr->iColumn == OE_Abort ||
+                 pExpr->iColumn == OE_Fail );
+         sqlite3DequoteExpr(pParse->db, pExpr);
+         sqlite3VdbeAddOp4(v, OP_Halt, SQLITE_CONSTRAINT, pExpr->iColumn, 0,
+                        (char*)pExpr->token.z, pExpr->token.n);
+      } else {
+         assert( pExpr->iColumn == OE_Ignore );
+         sqlite3VdbeAddOp2(v, OP_ContextPop, 0, 0);
+         sqlite3VdbeAddOp2(v, OP_Goto, 0, pParse->trigStack->ignoreJump);
+         VdbeComment((v, "raise(IGNORE)"));
+      }
+      break;
+    }
+#endif
+  }
+  sqlite3ReleaseTempReg(pParse, regFree1);
+  sqlite3ReleaseTempReg(pParse, regFree2);
+  return inReg;
+}
+
+/*
+** Generate code to evaluate an expression and store the results
+** into a register. 
Return the register number where the results +** are stored. +** +** If the register is a temporary register that can be deallocated, +** then write its number into *pReg. If the result register is no +** a temporary, then set *pReg to zero. +*/ +int sqlite3ExprCodeTemp(Parse *pParse, Expr *pExpr, int *pReg){ + int r1 = sqlite3GetTempReg(pParse); + int r2 = sqlite3ExprCodeTarget(pParse, pExpr, r1); + if( r2==r1 ){ + *pReg = r1; + }else{ + sqlite3ReleaseTempReg(pParse, r1); + *pReg = 0; + } + return r2; +} + +/* +** Generate code that will evaluate expression pExpr and store the +** results in register target. The results are guaranteed to appear +** in register target. +*/ +int sqlite3ExprCode(Parse *pParse, Expr *pExpr, int target){ + int inReg; + + assert( target>0 && target<=pParse->nMem ); + inReg = sqlite3ExprCodeTarget(pParse, pExpr, target); + assert( pParse->pVdbe || pParse->db->mallocFailed ); + if( inReg!=target && pParse->pVdbe ){ + sqlite3VdbeAddOp2(pParse->pVdbe, OP_SCopy, inReg, target); + } + return target; +} + +/* +** Generate code that evalutes the given expression and puts the result +** in register target. +** +** Also make a copy of the expression results into another "cache" register +** and modify the expression so that the next time it is evaluated, +** the result is a copy of the cache register. +** +** This routine is used for expressions that are used multiple +** times. They are evaluated once and the results of the expression +** are reused. +*/ +int sqlite3ExprCodeAndCache(Parse *pParse, Expr *pExpr, int target){ + Vdbe *v = pParse->pVdbe; + int inReg; + inReg = sqlite3ExprCode(pParse, pExpr, target); + assert( target>0 ); + if( pExpr->op!=TK_REGISTER ){ + int iMem; + iMem = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Copy, inReg, iMem); + pExpr->iTable = iMem; + pExpr->op = TK_REGISTER; + } + return inReg; +} + + +/* +** Generate code that pushes the value of every element of the given +** expression list into a sequence of registers beginning at target. +** +** Return the number of elements evaluated. +*/ +int sqlite3ExprCodeExprList( + Parse *pParse, /* Parsing context */ + ExprList *pList, /* The expression list to be coded */ + int target /* Where to write results */ +){ + struct ExprList_item *pItem; + int i, n; + assert( pList!=0 || pParse->db->mallocFailed ); + if( pList==0 ){ + return 0; + } + assert( target>0 ); + n = pList->nExpr; + for(pItem=pList->a, i=n; i>0; i--, pItem++){ + sqlite3ExprCode(pParse, pItem->pExpr, target); + target++; + } + return n; +} + +/* +** Generate code for a boolean expression such that a jump is made +** to the label "dest" if the expression is true but execution +** continues straight thru if the expression is false. +** +** If the expression evaluates to NULL (neither true nor false), then +** take the jump if the jumpIfNull flag is SQLITE_JUMPIFNULL. +** +** This code depends on the fact that certain token values (ex: TK_EQ) +** are the same as opcode values (ex: OP_Eq) that implement the corresponding +** operation. Special comments in vdbe.c and the mkopcodeh.awk script in +** the make process cause these values to align. Assert()s in the code +** below verify that the numbers are aligned correctly. 
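+**
+** As a hypothetical illustration of how a caller might use this routine
+** and its companion sqlite3ExprIfFalse() below, a WHERE-clause loop
+** could skip non-matching rows with a single call:
+**
+**     sqlite3ExprIfFalse(pParse, pWhere, addrContinue, SQLITE_JUMPIFNULL);
+**
+** where addrContinue is the label that advances to the next row; passing
+** SQLITE_JUMPIFNULL makes a NULL WHERE result behave like false.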
+*/ +void sqlite3ExprIfTrue(Parse *pParse, Expr *pExpr, int dest, int jumpIfNull){ + Vdbe *v = pParse->pVdbe; + int op = 0; + int regFree1 = 0; + int regFree2 = 0; + int r1, r2; + + assert( jumpIfNull==SQLITE_JUMPIFNULL || jumpIfNull==0 ); + if( v==0 || pExpr==0 ) return; + op = pExpr->op; + switch( op ){ + case TK_AND: { + int d2 = sqlite3VdbeMakeLabel(v); + sqlite3ExprIfFalse(pParse, pExpr->pLeft, d2,jumpIfNull^SQLITE_JUMPIFNULL); + sqlite3ExprIfTrue(pParse, pExpr->pRight, dest, jumpIfNull); + sqlite3VdbeResolveLabel(v, d2); + break; + } + case TK_OR: { + sqlite3ExprIfTrue(pParse, pExpr->pLeft, dest, jumpIfNull); + sqlite3ExprIfTrue(pParse, pExpr->pRight, dest, jumpIfNull); + break; + } + case TK_NOT: { + sqlite3ExprIfFalse(pParse, pExpr->pLeft, dest, jumpIfNull); + break; + } + case TK_LT: + case TK_LE: + case TK_GT: + case TK_GE: + case TK_NE: + case TK_EQ: { + assert( TK_LT==OP_Lt ); + assert( TK_LE==OP_Le ); + assert( TK_GT==OP_Gt ); + assert( TK_GE==OP_Ge ); + assert( TK_EQ==OP_Eq ); + assert( TK_NE==OP_Ne ); + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); + codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, + r1, r2, dest, jumpIfNull); + break; + } + case TK_ISNULL: + case TK_NOTNULL: { + assert( TK_ISNULL==OP_IsNull ); + assert( TK_NOTNULL==OP_NotNull ); + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + sqlite3VdbeAddOp2(v, op, r1, dest); + break; + } + case TK_BETWEEN: { + /* x BETWEEN y AND z + ** + ** Is equivalent to + ** + ** x>=y AND x<=z + ** + ** Code it as such, taking care to do the common subexpression + ** elementation of x. + */ + Expr exprAnd; + Expr compLeft; + Expr compRight; + Expr exprX; + + exprX = *pExpr->pLeft; + exprAnd.op = TK_AND; + exprAnd.pLeft = &compLeft; + exprAnd.pRight = &compRight; + compLeft.op = TK_GE; + compLeft.pLeft = &exprX; + compLeft.pRight = pExpr->pList->a[0].pExpr; + compRight.op = TK_LE; + compRight.pLeft = &exprX; + compRight.pRight = pExpr->pList->a[1].pExpr; + exprX.iTable = sqlite3ExprCodeTemp(pParse, &exprX, ®Free1); + exprX.op = TK_REGISTER; + sqlite3ExprIfTrue(pParse, &exprAnd, dest, jumpIfNull); + break; + } + default: { + r1 = sqlite3ExprCodeTemp(pParse, pExpr, ®Free1); + sqlite3VdbeAddOp3(v, OP_If, r1, dest, jumpIfNull!=0); + break; + } + } + sqlite3ReleaseTempReg(pParse, regFree1); + sqlite3ReleaseTempReg(pParse, regFree2); +} + +/* +** Generate code for a boolean expression such that a jump is made +** to the label "dest" if the expression is false but execution +** continues straight thru if the expression is true. +** +** If the expression evaluates to NULL (neither true nor false) then +** jump if jumpIfNull is SQLITE_JUMPIFNULL or fall through if jumpIfNull +** is 0. +*/ +void sqlite3ExprIfFalse(Parse *pParse, Expr *pExpr, int dest, int jumpIfNull){ + Vdbe *v = pParse->pVdbe; + int op = 0; + int regFree1 = 0; + int regFree2 = 0; + int r1, r2; + + assert( jumpIfNull==SQLITE_JUMPIFNULL || jumpIfNull==0 ); + if( v==0 || pExpr==0 ) return; + + /* The value of pExpr->op and op are related as follows: + ** + ** pExpr->op op + ** --------- ---------- + ** TK_ISNULL OP_NotNull + ** TK_NOTNULL OP_IsNull + ** TK_NE OP_Eq + ** TK_EQ OP_Ne + ** TK_GT OP_Le + ** TK_LE OP_Gt + ** TK_GE OP_Lt + ** TK_LT OP_Ge + ** + ** For other values of pExpr->op, op is undefined and unused. + ** The value of TK_ and OP_ constants are arranged such that we + ** can compute the mapping above using the following expression. 
+ ** Assert()s verify that the computation is correct. + */ + op = ((pExpr->op+(TK_ISNULL&1))^1)-(TK_ISNULL&1); + + /* Verify correct alignment of TK_ and OP_ constants + */ + assert( pExpr->op!=TK_ISNULL || op==OP_NotNull ); + assert( pExpr->op!=TK_NOTNULL || op==OP_IsNull ); + assert( pExpr->op!=TK_NE || op==OP_Eq ); + assert( pExpr->op!=TK_EQ || op==OP_Ne ); + assert( pExpr->op!=TK_LT || op==OP_Ge ); + assert( pExpr->op!=TK_LE || op==OP_Gt ); + assert( pExpr->op!=TK_GT || op==OP_Le ); + assert( pExpr->op!=TK_GE || op==OP_Lt ); + + switch( pExpr->op ){ + case TK_AND: { + sqlite3ExprIfFalse(pParse, pExpr->pLeft, dest, jumpIfNull); + sqlite3ExprIfFalse(pParse, pExpr->pRight, dest, jumpIfNull); + break; + } + case TK_OR: { + int d2 = sqlite3VdbeMakeLabel(v); + sqlite3ExprIfTrue(pParse, pExpr->pLeft, d2, jumpIfNull^SQLITE_JUMPIFNULL); + sqlite3ExprIfFalse(pParse, pExpr->pRight, dest, jumpIfNull); + sqlite3VdbeResolveLabel(v, d2); + break; + } + case TK_NOT: { + sqlite3ExprIfTrue(pParse, pExpr->pLeft, dest, jumpIfNull); + break; + } + case TK_LT: + case TK_LE: + case TK_GT: + case TK_GE: + case TK_NE: + case TK_EQ: { + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); + codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, + r1, r2, dest, jumpIfNull); + break; + } + case TK_ISNULL: + case TK_NOTNULL: { + r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); + sqlite3VdbeAddOp2(v, op, r1, dest); + break; + } + case TK_BETWEEN: { + /* x BETWEEN y AND z + ** + ** Is equivalent to + ** + ** x>=y AND x<=z + ** + ** Code it as such, taking care to do the common subexpression + ** elementation of x. + */ + Expr exprAnd; + Expr compLeft; + Expr compRight; + Expr exprX; + + exprX = *pExpr->pLeft; + exprAnd.op = TK_AND; + exprAnd.pLeft = &compLeft; + exprAnd.pRight = &compRight; + compLeft.op = TK_GE; + compLeft.pLeft = &exprX; + compLeft.pRight = pExpr->pList->a[0].pExpr; + compRight.op = TK_LE; + compRight.pLeft = &exprX; + compRight.pRight = pExpr->pList->a[1].pExpr; + exprX.iTable = sqlite3ExprCodeTemp(pParse, &exprX, ®Free1); + exprX.op = TK_REGISTER; + sqlite3ExprIfFalse(pParse, &exprAnd, dest, jumpIfNull); + break; + } + default: { + r1 = sqlite3ExprCodeTemp(pParse, pExpr, ®Free1); + sqlite3VdbeAddOp3(v, OP_IfNot, r1, dest, jumpIfNull!=0); + break; + } + } + sqlite3ReleaseTempReg(pParse, regFree1); + sqlite3ReleaseTempReg(pParse, regFree2); +} + +/* +** Do a deep comparison of two expression trees. Return TRUE (non-zero) +** if they are identical and return FALSE if they differ in any way. +** +** Sometimes this routine will return FALSE even if the two expressions +** really are equivalent. If we cannot prove that the expressions are +** identical, we return FALSE just to be safe. So if this routine +** returns false, then you do not really know for certain if the two +** expressions are the same. But if you get a TRUE return, then you +** can be sure the expressions are the same. In the places where +** this routine is used, it does not hurt to get an extra FALSE - that +** just might result in some slightly slower code. But returning +** an incorrect TRUE could lead to a malfunction. 
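+**
+** For example, two independently parsed copies of the expression "a+b"
+** compare as identical, while "a+b" and "b+a" compare as different even
+** though they are semantically equivalent; FALSE is the safe answer in
+** that case.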
+*/ +int sqlite3ExprCompare(Expr *pA, Expr *pB){ + int i; + if( pA==0||pB==0 ){ + return pB==pA; + } + if( pA->op!=pB->op ) return 0; + if( (pA->flags & EP_Distinct)!=(pB->flags & EP_Distinct) ) return 0; + if( !sqlite3ExprCompare(pA->pLeft, pB->pLeft) ) return 0; + if( !sqlite3ExprCompare(pA->pRight, pB->pRight) ) return 0; + if( pA->pList ){ + if( pB->pList==0 ) return 0; + if( pA->pList->nExpr!=pB->pList->nExpr ) return 0; + for(i=0; ipList->nExpr; i++){ + if( !sqlite3ExprCompare(pA->pList->a[i].pExpr, pB->pList->a[i].pExpr) ){ + return 0; + } + } + }else if( pB->pList ){ + return 0; + } + if( pA->pSelect || pB->pSelect ) return 0; + if( pA->iTable!=pB->iTable || pA->iColumn!=pB->iColumn ) return 0; + if( pA->op!=TK_COLUMN && pA->token.z ){ + if( pB->token.z==0 ) return 0; + if( pB->token.n!=pA->token.n ) return 0; + if( sqlite3StrNICmp((char*)pA->token.z,(char*)pB->token.z,pB->token.n)!=0 ){ + return 0; + } + } + return 1; +} + + +/* +** Add a new element to the pAggInfo->aCol[] array. Return the index of +** the new element. Return a negative number if malloc fails. +*/ +static int addAggInfoColumn(sqlite3 *db, AggInfo *pInfo){ + int i; + pInfo->aCol = sqlite3ArrayAllocate( + db, + pInfo->aCol, + sizeof(pInfo->aCol[0]), + 3, + &pInfo->nColumn, + &pInfo->nColumnAlloc, + &i + ); + return i; +} + +/* +** Add a new element to the pAggInfo->aFunc[] array. Return the index of +** the new element. Return a negative number if malloc fails. +*/ +static int addAggInfoFunc(sqlite3 *db, AggInfo *pInfo){ + int i; + pInfo->aFunc = sqlite3ArrayAllocate( + db, + pInfo->aFunc, + sizeof(pInfo->aFunc[0]), + 3, + &pInfo->nFunc, + &pInfo->nFuncAlloc, + &i + ); + return i; +} + +/* +** This is an xFunc for walkExprTree() used to implement +** sqlite3ExprAnalyzeAggregates(). See sqlite3ExprAnalyzeAggregates +** for additional information. +** +** This routine analyzes the aggregate function at pExpr. +*/ +static int analyzeAggregate(void *pArg, Expr *pExpr){ + int i; + NameContext *pNC = (NameContext *)pArg; + Parse *pParse = pNC->pParse; + SrcList *pSrcList = pNC->pSrcList; + AggInfo *pAggInfo = pNC->pAggInfo; + + switch( pExpr->op ){ + case TK_AGG_COLUMN: + case TK_COLUMN: { + /* Check to see if the column is in one of the tables in the FROM + ** clause of the aggregate query */ + if( pSrcList ){ + struct SrcList_item *pItem = pSrcList->a; + for(i=0; inSrc; i++, pItem++){ + struct AggInfo_col *pCol; + if( pExpr->iTable==pItem->iCursor ){ + /* If we reach this point, it means that pExpr refers to a table + ** that is in the FROM clause of the aggregate query. + ** + ** Make an entry for the column in pAggInfo->aCol[] if there + ** is not an entry there already. 
+ */ + int k; + pCol = pAggInfo->aCol; + for(k=0; knColumn; k++, pCol++){ + if( pCol->iTable==pExpr->iTable && + pCol->iColumn==pExpr->iColumn ){ + break; + } + } + if( (k>=pAggInfo->nColumn) + && (k = addAggInfoColumn(pParse->db, pAggInfo))>=0 + ){ + pCol = &pAggInfo->aCol[k]; + pCol->pTab = pExpr->pTab; + pCol->iTable = pExpr->iTable; + pCol->iColumn = pExpr->iColumn; + pCol->iMem = ++pParse->nMem; + pCol->iSorterColumn = -1; + pCol->pExpr = pExpr; + if( pAggInfo->pGroupBy ){ + int j, n; + ExprList *pGB = pAggInfo->pGroupBy; + struct ExprList_item *pTerm = pGB->a; + n = pGB->nExpr; + for(j=0; jpExpr; + if( pE->op==TK_COLUMN && pE->iTable==pExpr->iTable && + pE->iColumn==pExpr->iColumn ){ + pCol->iSorterColumn = j; + break; + } + } + } + if( pCol->iSorterColumn<0 ){ + pCol->iSorterColumn = pAggInfo->nSortingColumn++; + } + } + /* There is now an entry for pExpr in pAggInfo->aCol[] (either + ** because it was there before or because we just created it). + ** Convert the pExpr to be a TK_AGG_COLUMN referring to that + ** pAggInfo->aCol[] entry. + */ + pExpr->pAggInfo = pAggInfo; + pExpr->op = TK_AGG_COLUMN; + pExpr->iAgg = k; + break; + } /* endif pExpr->iTable==pItem->iCursor */ + } /* end loop over pSrcList */ + } + return 1; + } + case TK_AGG_FUNCTION: { + /* The pNC->nDepth==0 test causes aggregate functions in subqueries + ** to be ignored */ + if( pNC->nDepth==0 ){ + /* Check to see if pExpr is a duplicate of another aggregate + ** function that is already in the pAggInfo structure + */ + struct AggInfo_func *pItem = pAggInfo->aFunc; + for(i=0; inFunc; i++, pItem++){ + if( sqlite3ExprCompare(pItem->pExpr, pExpr) ){ + break; + } + } + if( i>=pAggInfo->nFunc ){ + /* pExpr is original. Make a new entry in pAggInfo->aFunc[] + */ + u8 enc = ENC(pParse->db); + i = addAggInfoFunc(pParse->db, pAggInfo); + if( i>=0 ){ + pItem = &pAggInfo->aFunc[i]; + pItem->pExpr = pExpr; + pItem->iMem = ++pParse->nMem; + pItem->pFunc = sqlite3FindFunction(pParse->db, + (char*)pExpr->token.z, pExpr->token.n, + pExpr->pList ? pExpr->pList->nExpr : 0, enc, 0); + if( pExpr->flags & EP_Distinct ){ + pItem->iDistinct = pParse->nTab++; + }else{ + pItem->iDistinct = -1; + } + } + } + /* Make pExpr point to the appropriate pAggInfo->aFunc[] entry + */ + pExpr->iAgg = i; + pExpr->pAggInfo = pAggInfo; + return 1; + } + } + } + + /* Recursively walk subqueries looking for TK_COLUMN nodes that need + ** to be changed to TK_AGG_COLUMN. But increment nDepth so that + ** TK_AGG_FUNCTION nodes in subqueries will be unchanged. + */ + if( pExpr->pSelect ){ + pNC->nDepth++; + walkSelectExpr(pExpr->pSelect, analyzeAggregate, pNC); + pNC->nDepth--; + } + return 0; +} + +/* +** Analyze the given expression looking for aggregate functions and +** for variables that need to be added to the pParse->aAgg[] array. +** Make additional entries to the pParse->aAgg[] array as necessary. +** +** This routine should only be called after the expression has been +** analyzed by sqlite3ExprResolveNames(). +*/ +void sqlite3ExprAnalyzeAggregates(NameContext *pNC, Expr *pExpr){ + walkExprTree(pExpr, analyzeAggregate, pNC); +} + +/* +** Call sqlite3ExprAnalyzeAggregates() for every expression in an +** expression list. Return the number of errors. +** +** If an error is found, the analysis is cut short. 
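+**
+** For example, for a hypothetical query such as
+** "SELECT b, max(a)+1 FROM t1 GROUP BY b", the analysis adds an entry in
+** pAggInfo->aCol[] for each column reference (converting it to a
+** TK_AGG_COLUMN node) and an entry in pAggInfo->aFunc[] for the max(a)
+** aggregate.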
+*/ +void sqlite3ExprAnalyzeAggList(NameContext *pNC, ExprList *pList){ + struct ExprList_item *pItem; + int i; + if( pList ){ + for(pItem=pList->a, i=0; inExpr; i++, pItem++){ + sqlite3ExprAnalyzeAggregates(pNC, pItem->pExpr); + } + } +} + +/* +** Allocate or deallocate temporary use registers during code generation. +*/ +int sqlite3GetTempReg(Parse *pParse){ + if( pParse->nTempReg ){ + return pParse->aTempReg[--pParse->nTempReg]; + }else{ + return ++pParse->nMem; + } +} +void sqlite3ReleaseTempReg(Parse *pParse, int iReg){ + if( iReg && pParse->nTempRegaTempReg) ){ + assert( iReg>0 ); + pParse->aTempReg[pParse->nTempReg++] = iReg; + } +} + +/* +** Allocate or deallocate a block of nReg consecutive registers +*/ +int sqlite3GetTempRange(Parse *pParse, int nReg){ + int i; + if( nReg<=pParse->nRangeReg ){ + i = pParse->iRangeReg; + pParse->iRangeReg += nReg; + pParse->nRangeReg -= nReg; + }else{ + i = pParse->nMem+1; + pParse->nMem += nReg; + } + return i; +} +void sqlite3ReleaseTempRange(Parse *pParse, int iReg, int nReg){ + if( nReg>pParse->nRangeReg ){ + pParse->nRangeReg = nReg; + pParse->iRangeReg = iReg; + } +} Added: external/sqlite-source-3.5.7.x/fault.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fault.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,147 @@ +/* +** 2008 Jan 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code to implement a fault-injector used for +** testing and verification of SQLite. +** +** Subsystems within SQLite can call sqlite3FaultStep() to see if +** they should simulate a fault. sqlite3FaultStep() normally returns +** zero but will return non-zero if a fault should be simulated. +** Fault injectors can be used, for example, to simulate memory +** allocation failures or I/O errors. +** +** The fault injector is omitted from the code if SQLite is +** compiled with -DSQLITE_OMIT_FAULTINJECTOR=1. There is a very +** small performance hit for leaving the fault injector in the code. +** Commerical products will probably want to omit the fault injector +** from production builds. But safety-critical systems who work +** under the motto "fly what you test and test what you fly" may +** choose to leave the fault injector enabled even in production. +*/ +#include "sqliteInt.h" + +#ifndef SQLITE_OMIT_FAULTINJECTOR + +/* +** There can be various kinds of faults. For example, there can be +** a memory allocation failure. Or an I/O failure. For each different +** fault type, there is a separate FaultInjector structure to keep track +** of the status of that fault. +*/ +static struct FaultInjector { + int iCountdown; /* Number of pending successes before we hit a failure */ + int nRepeat; /* Number of times to repeat the failure */ + int nBenign; /* Number of benign failures seen since last config */ + int nFail; /* Number of failures seen since last config */ + u8 enable; /* True if enabled */ + u8 benign; /* Ture if next failure will be benign */ +} aFault[SQLITE_FAULTINJECTOR_COUNT]; + +/* +** This routine configures and enables a fault injector. 
After +** calling this routine, aFaultStep() will return false (zero) +** nDelay times, then it will return true nRepeat times, +** then it will again begin returning false. +*/ +void sqlite3FaultConfig(int id, int nDelay, int nRepeat){ + assert( id>=0 && id=0; + aFault[id].benign = 0; +} + +/* +** Return the number of faults (both hard and benign faults) that have +** occurred since the injector was last configured. +*/ +int sqlite3FaultFailures(int id){ + assert( id>=0 && id=0 && id=0 && id=0 && id=0 && id0 ){ + aFault[id].iCountdown--; + return 0; + } + sqlite3Fault(); + aFault[id].nFail++; + if( aFault[id].benign ){ + aFault[id].nBenign++; + } + aFault[id].nRepeat--; + if( aFault[id].nRepeat<=0 ){ + aFault[id].enable = 0; + } + return 1; +} + +#endif /* SQLITE_OMIT_FAULTINJECTOR */ Added: external/sqlite-source-3.5.7.x/fts3.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,6403 @@ +/* +** 2006 Oct 10 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This is an SQLite module implementing full-text search. +*/ + +/* +** The code in this file is only compiled if: +** +** * The FTS3 module is being built as an extension +** (in which case SQLITE_CORE is not defined), or +** +** * The FTS3 module is being built into the core of +** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). +*/ + +/* TODO(shess) Consider exporting this comment to an HTML file or the +** wiki. +*/ +/* The full-text index is stored in a series of b+tree (-like) +** structures called segments which map terms to doclists. The +** structures are like b+trees in layout, but are constructed from the +** bottom up in optimal fashion and are not updatable. Since trees +** are built from the bottom up, things will be described from the +** bottom up. +** +** +**** Varints **** +** The basic unit of encoding is a variable-length integer called a +** varint. We encode variable-length integers in little-endian order +** using seven bits * per byte as follows: +** +** KEY: +** A = 0xxxxxxx 7 bits of data and one flag bit +** B = 1xxxxxxx 7 bits of data and one flag bit +** +** 7 bits - A +** 14 bits - BA +** 21 bits - BBA +** and so on. +** +** This is identical to how sqlite encodes varints (see util.c). +** +** +**** Document lists **** +** A doclist (document list) holds a docid-sorted list of hits for a +** given term. Doclists hold docids, and can optionally associate +** token positions and offsets with docids. +** +** A DL_POSITIONS_OFFSETS doclist is stored like this: +** +** array { +** varint docid; +** array { (position list for column 0) +** varint position; (delta from previous position plus POS_BASE) +** varint startOffset; (delta from previous startOffset) +** varint endOffset; (delta from startOffset) +** } +** array { +** varint POS_COLUMN; (marks start of position list for new column) +** varint column; (index of new column) +** array { +** varint position; (delta from previous position plus POS_BASE) +** varint startOffset;(delta from previous startOffset) +** varint endOffset; (delta from startOffset) +** } +** } +** varint POS_END; (marks end of positions for this document. 
+** } +** +** Here, array { X } means zero or more occurrences of X, adjacent in +** memory. A "position" is an index of a token in the token stream +** generated by the tokenizer, while an "offset" is a byte offset, +** both based at 0. Note that POS_END and POS_COLUMN occur in the +** same logical place as the position element, and act as sentinals +** ending a position list array. +** +** A DL_POSITIONS doclist omits the startOffset and endOffset +** information. A DL_DOCIDS doclist omits both the position and +** offset information, becoming an array of varint-encoded docids. +** +** On-disk data is stored as type DL_DEFAULT, so we don't serialize +** the type. Due to how deletion is implemented in the segmentation +** system, on-disk doclists MUST store at least positions. +** +** +**** Segment leaf nodes **** +** Segment leaf nodes store terms and doclists, ordered by term. Leaf +** nodes are written using LeafWriter, and read using LeafReader (to +** iterate through a single leaf node's data) and LeavesReader (to +** iterate through a segment's entire leaf layer). Leaf nodes have +** the format: +** +** varint iHeight; (height from leaf level, always 0) +** varint nTerm; (length of first term) +** char pTerm[nTerm]; (content of first term) +** varint nDoclist; (length of term's associated doclist) +** char pDoclist[nDoclist]; (content of doclist) +** array { +** (further terms are delta-encoded) +** varint nPrefix; (length of prefix shared with previous term) +** varint nSuffix; (length of unshared suffix) +** char pTermSuffix[nSuffix];(unshared suffix of next term) +** varint nDoclist; (length of term's associated doclist) +** char pDoclist[nDoclist]; (content of doclist) +** } +** +** Here, array { X } means zero or more occurrences of X, adjacent in +** memory. +** +** Leaf nodes are broken into blocks which are stored contiguously in +** the %_segments table in sorted order. This means that when the end +** of a node is reached, the next term is in the node with the next +** greater node id. +** +** New data is spilled to a new leaf node when the current node +** exceeds LEAF_MAX bytes (default 2048). New data which itself is +** larger than STANDALONE_MIN (default 1024) is placed in a standalone +** node (a leaf node with a single term and doclist). The goal of +** these settings is to pack together groups of small doclists while +** making it efficient to directly access large doclists. The +** assumption is that large doclists represent terms which are more +** likely to be query targets. +** +** TODO(shess) It may be useful for blocking decisions to be more +** dynamic. For instance, it may make more sense to have a 2.5k leaf +** node rather than splitting into 2k and .5k nodes. My intuition is +** that this might extend through 2x or 4x the pagesize. +** +** +**** Segment interior nodes **** +** Segment interior nodes store blockids for subtree nodes and terms +** to describe what data is stored by the each subtree. Interior +** nodes are written using InteriorWriter, and read using +** InteriorReader. InteriorWriters are created as needed when +** SegmentWriter creates new leaf nodes, or when an interior node +** itself grows too big and must be split. 
The format of interior +** nodes: +** +** varint iHeight; (height from leaf level, always >0) +** varint iBlockid; (block id of node's leftmost subtree) +** optional { +** varint nTerm; (length of first term) +** char pTerm[nTerm]; (content of first term) +** array { +** (further terms are delta-encoded) +** varint nPrefix; (length of shared prefix with previous term) +** varint nSuffix; (length of unshared suffix) +** char pTermSuffix[nSuffix]; (unshared suffix of next term) +** } +** } +** +** Here, optional { X } means an optional element, while array { X } +** means zero or more occurrences of X, adjacent in memory. +** +** An interior node encodes n terms separating n+1 subtrees. The +** subtree blocks are contiguous, so only the first subtree's blockid +** is encoded. The subtree at iBlockid will contain all terms less +** than the first term encoded (or all terms if no term is encoded). +** Otherwise, for terms greater than or equal to pTerm[i] but less +** than pTerm[i+1], the subtree for that term will be rooted at +** iBlockid+i. Interior nodes only store enough term data to +** distinguish adjacent children (if the rightmost term of the left +** child is "something", and the leftmost term of the right child is +** "wicked", only "w" is stored). +** +** New data is spilled to a new interior node at the same height when +** the current node exceeds INTERIOR_MAX bytes (default 2048). +** INTERIOR_MIN_TERMS (default 7) keeps large terms from monopolizing +** interior nodes and making the tree too skinny. The interior nodes +** at a given height are naturally tracked by interior nodes at +** height+1, and so on. +** +** +**** Segment directory **** +** The segment directory in table %_segdir stores meta-information for +** merging and deleting segments, and also the root node of the +** segment's tree. +** +** The root node is the top node of the segment's tree after encoding +** the entire segment, restricted to ROOT_MAX bytes (default 1024). +** This could be either a leaf node or an interior node. If the top +** node requires more than ROOT_MAX bytes, it is flushed to %_segments +** and a new root interior node is generated (which should always fit +** within ROOT_MAX because it only needs space for 2 varints, the +** height and the blockid of the previous root). +** +** The meta-information in the segment directory is: +** level - segment level (see below) +** idx - index within level +** - (level,idx uniquely identify a segment) +** start_block - first leaf node +** leaves_end_block - last leaf node +** end_block - last block (including interior nodes) +** root - contents of root node +** +** If the root node is a leaf node, then start_block, +** leaves_end_block, and end_block are all 0. +** +** +**** Segment merging **** +** To amortize update costs, segments are groups into levels and +** merged in matches. Each increase in level represents exponentially +** more documents. +** +** New documents (actually, document updates) are tokenized and +** written individually (using LeafWriter) to a level 0 segment, with +** incrementing idx. When idx reaches MERGE_COUNT (default 16), all +** level 0 segments are merged into a single level 1 segment. Level 1 +** is populated like level 0, and eventually MERGE_COUNT level 1 +** segments are merged to a single level 2 segment (representing +** MERGE_COUNT^2 updates), and so on. +** +** A segment merge traverses all segments at a given level in +** parallel, performing a straightforward sorted merge. 
Since segment +** leaf nodes are written in to the %_segments table in order, this +** merge traverses the underlying sqlite disk structures efficiently. +** After the merge, all segment blocks from the merged level are +** deleted. +** +** MERGE_COUNT controls how often we merge segments. 16 seems to be +** somewhat of a sweet spot for insertion performance. 32 and 64 show +** very similar performance numbers to 16 on insertion, though they're +** a tiny bit slower (perhaps due to more overhead in merge-time +** sorting). 8 is about 20% slower than 16, 4 about 50% slower than +** 16, 2 about 66% slower than 16. +** +** At query time, high MERGE_COUNT increases the number of segments +** which need to be scanned and merged. For instance, with 100k docs +** inserted: +** +** MERGE_COUNT segments +** 16 25 +** 8 12 +** 4 10 +** 2 6 +** +** This appears to have only a moderate impact on queries for very +** frequent terms (which are somewhat dominated by segment merge +** costs), and infrequent and non-existent terms still seem to be fast +** even with many segments. +** +** TODO(shess) That said, it would be nice to have a better query-side +** argument for MERGE_COUNT of 16. Also, it is possible/likely that +** optimizations to things like doclist merging will swing the sweet +** spot around. +** +** +** +**** Handling of deletions and updates **** +** Since we're using a segmented structure, with no docid-oriented +** index into the term index, we clearly cannot simply update the term +** index when a document is deleted or updated. For deletions, we +** write an empty doclist (varint(docid) varint(POS_END)), for updates +** we simply write the new doclist. Segment merges overwrite older +** data for a particular docid with newer data, so deletes or updates +** will eventually overtake the earlier data and knock it out. The +** query logic likewise merges doclists so that newer data knocks out +** older data. +** +** TODO(shess) Provide a VACUUM type operation to clear out all +** deletions and duplications. This would basically be a forced merge +** into a single segment. +*/ + +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +#if defined(SQLITE_ENABLE_FTS3) && !defined(SQLITE_CORE) +# define SQLITE_CORE 1 +#endif + +#include +#include +#include +#include +#include + +#include "fts3.h" +#include "fts3_hash.h" +#include "fts3_tokenizer.h" +#ifndef SQLITE_CORE +# include "sqlite3ext.h" + SQLITE_EXTENSION_INIT1 +#endif + + +/* TODO(shess) MAN, this thing needs some refactoring. At minimum, it +** would be nice to order the file better, perhaps something along the +** lines of: +** +** - utility functions +** - table setup functions +** - table update functions +** - table query functions +** +** Put the query functions last because they're likely to reference +** typedefs or functions from the table update section. +*/ + +#if 0 +# define FTSTRACE(A) printf A; fflush(stdout) +#else +# define FTSTRACE(A) +#endif + +/* +** Default span for NEAR operators. +*/ +#define SQLITE_FTS3_DEFAULT_NEAR_PARAM 10 + +/* It is not safe to call isspace(), tolower(), or isalnum() on +** hi-bit-set characters. This is the same solution used in the +** tokenizer. +*/ +/* TODO(shess) The snippet-generation code should be using the +** tokenizer-generated tokens rather than doing its own local +** tokenization. +*/ +/* TODO(shess) Is __isascii() a portable version of (c&0x80)==0? */ +static int safe_isspace(char c){ + return (c&0x80)==0 ? 
isspace(c) : 0; +} +static int safe_tolower(char c){ + return (c&0x80)==0 ? tolower(c) : c; +} +static int safe_isalnum(char c){ + return (c&0x80)==0 ? isalnum(c) : 0; +} + +typedef enum DocListType { + DL_DOCIDS, /* docids only */ + DL_POSITIONS, /* docids + positions */ + DL_POSITIONS_OFFSETS /* docids + positions + offsets */ +} DocListType; + +/* +** By default, only positions and not offsets are stored in the doclists. +** To change this so that offsets are stored too, compile with +** +** -DDL_DEFAULT=DL_POSITIONS_OFFSETS +** +** If DL_DEFAULT is set to DL_DOCIDS, your table can only be inserted +** into (no deletes or updates). +*/ +#ifndef DL_DEFAULT +# define DL_DEFAULT DL_POSITIONS +#endif + +enum { + POS_END = 0, /* end of this position list */ + POS_COLUMN, /* followed by new column number */ + POS_BASE +}; + +/* MERGE_COUNT controls how often we merge segments (see comment at +** top of file). +*/ +#define MERGE_COUNT 16 + +/* utility functions */ + +/* CLEAR() and SCRAMBLE() abstract memset() on a pointer to a single +** record to prevent errors of the form: +** +** my_function(SomeType *b){ +** memset(b, '\0', sizeof(b)); // sizeof(b)!=sizeof(*b) +** } +*/ +/* TODO(shess) Obvious candidates for a header file. */ +#define CLEAR(b) memset(b, '\0', sizeof(*(b))) + +#ifndef NDEBUG +# define SCRAMBLE(b) memset(b, 0x55, sizeof(*(b))) +#else +# define SCRAMBLE(b) +#endif + +/* We may need up to VARINT_MAX bytes to store an encoded 64-bit integer. */ +#define VARINT_MAX 10 + +/* Write a 64-bit variable-length integer to memory starting at p[0]. + * The length of data written will be between 1 and VARINT_MAX bytes. + * The number of bytes written is returned. */ +static int fts3PutVarint(char *p, sqlite_int64 v){ + unsigned char *q = (unsigned char *) p; + sqlite_uint64 vu = v; + do{ + *q++ = (unsigned char) ((vu & 0x7f) | 0x80); + vu >>= 7; + }while( vu!=0 ); + q[-1] &= 0x7f; /* turn off high bit in final byte */ + assert( q - (unsigned char *)p <= VARINT_MAX ); + return (int) (q - (unsigned char *)p); +} + +/* Read a 64-bit variable-length integer from memory starting at p[0]. + * Return the number of bytes read, or 0 on error. + * The value is stored in *v. */ +static int fts3GetVarint(const char *p, sqlite_int64 *v){ + const unsigned char *q = (const unsigned char *) p; + sqlite_uint64 x = 0, y = 1; + while( (*q & 0x80) == 0x80 ){ + x += y * (*q++ & 0x7f); + y <<= 7; + if( q - (unsigned char *)p >= VARINT_MAX ){ /* bad data */ + assert( 0 ); + return 0; + } + } + x += y * (*q++); + *v = (sqlite_int64) x; + return (int) (q - (unsigned char *)p); +} + +static int fts3GetVarint32(const char *p, int *pi){ + sqlite_int64 i; + int ret = fts3GetVarint(p, &i); + *pi = (int) i; + assert( *pi==i ); + return ret; +} + +/*******************************************************************/ +/* DataBuffer is used to collect data into a buffer in piecemeal +** fashion. It implements the usual distinction between amount of +** data currently stored (nData) and buffer capacity (nCapacity). +** +** dataBufferInit - create a buffer with given initial capacity. +** dataBufferReset - forget buffer's data, retaining capacity. +** dataBufferDestroy - free buffer's data. +** dataBufferSwap - swap contents of two buffers. +** dataBufferExpand - expand capacity without adding data. +** dataBufferAppend - append data. +** dataBufferAppend2 - append two pieces of data at once. +** dataBufferReplace - replace buffer's data. +*/ +typedef struct DataBuffer { + char *pData; /* Pointer to malloc'ed buffer. 
*/ + int nCapacity; /* Size of pData buffer. */ + int nData; /* End of data loaded into pData. */ +} DataBuffer; + +static void dataBufferInit(DataBuffer *pBuffer, int nCapacity){ + assert( nCapacity>=0 ); + pBuffer->nData = 0; + pBuffer->nCapacity = nCapacity; + pBuffer->pData = nCapacity==0 ? NULL : sqlite3_malloc(nCapacity); +} +static void dataBufferReset(DataBuffer *pBuffer){ + pBuffer->nData = 0; +} +static void dataBufferDestroy(DataBuffer *pBuffer){ + if( pBuffer->pData!=NULL ) sqlite3_free(pBuffer->pData); + SCRAMBLE(pBuffer); +} +static void dataBufferSwap(DataBuffer *pBuffer1, DataBuffer *pBuffer2){ + DataBuffer tmp = *pBuffer1; + *pBuffer1 = *pBuffer2; + *pBuffer2 = tmp; +} +static void dataBufferExpand(DataBuffer *pBuffer, int nAddCapacity){ + assert( nAddCapacity>0 ); + /* TODO(shess) Consider expanding more aggressively. Note that the + ** underlying malloc implementation may take care of such things for + ** us already. + */ + if( pBuffer->nData+nAddCapacity>pBuffer->nCapacity ){ + pBuffer->nCapacity = pBuffer->nData+nAddCapacity; + pBuffer->pData = sqlite3_realloc(pBuffer->pData, pBuffer->nCapacity); + } +} +static void dataBufferAppend(DataBuffer *pBuffer, + const char *pSource, int nSource){ + assert( nSource>0 && pSource!=NULL ); + dataBufferExpand(pBuffer, nSource); + memcpy(pBuffer->pData+pBuffer->nData, pSource, nSource); + pBuffer->nData += nSource; +} +static void dataBufferAppend2(DataBuffer *pBuffer, + const char *pSource1, int nSource1, + const char *pSource2, int nSource2){ + assert( nSource1>0 && pSource1!=NULL ); + assert( nSource2>0 && pSource2!=NULL ); + dataBufferExpand(pBuffer, nSource1+nSource2); + memcpy(pBuffer->pData+pBuffer->nData, pSource1, nSource1); + memcpy(pBuffer->pData+pBuffer->nData+nSource1, pSource2, nSource2); + pBuffer->nData += nSource1+nSource2; +} +static void dataBufferReplace(DataBuffer *pBuffer, + const char *pSource, int nSource){ + dataBufferReset(pBuffer); + dataBufferAppend(pBuffer, pSource, nSource); +} + +/* StringBuffer is a null-terminated version of DataBuffer. */ +typedef struct StringBuffer { + DataBuffer b; /* Includes null terminator. */ +} StringBuffer; + +static void initStringBuffer(StringBuffer *sb){ + dataBufferInit(&sb->b, 100); + dataBufferReplace(&sb->b, "", 1); +} +static int stringBufferLength(StringBuffer *sb){ + return sb->b.nData-1; +} +static char *stringBufferData(StringBuffer *sb){ + return sb->b.pData; +} +static void stringBufferDestroy(StringBuffer *sb){ + dataBufferDestroy(&sb->b); +} + +static void nappend(StringBuffer *sb, const char *zFrom, int nFrom){ + assert( sb->b.nData>0 ); + if( nFrom>0 ){ + sb->b.nData--; + dataBufferAppend2(&sb->b, zFrom, nFrom, "", 1); + } +} +static void append(StringBuffer *sb, const char *zFrom){ + nappend(sb, zFrom, strlen(zFrom)); +} + +/* Append a list of strings separated by commas. */ +static void appendList(StringBuffer *sb, int nString, char **azString){ + int i; + for(i=0; i0 ) append(sb, ", "); + append(sb, azString[i]); + } +} + +static int endsInWhiteSpace(StringBuffer *p){ + return stringBufferLength(p)>0 && + safe_isspace(stringBufferData(p)[stringBufferLength(p)-1]); +} + +/* If the StringBuffer ends in something other than white space, add a +** single space character to the end. 
+*/ +static void appendWhiteSpace(StringBuffer *p){ + if( stringBufferLength(p)==0 ) return; + if( !endsInWhiteSpace(p) ) append(p, " "); +} + +/* Remove white space from the end of the StringBuffer */ +static void trimWhiteSpace(StringBuffer *p){ + while( endsInWhiteSpace(p) ){ + p->b.pData[--p->b.nData-1] = '\0'; + } +} + +/*******************************************************************/ +/* DLReader is used to read document elements from a doclist. The +** current docid is cached, so dlrDocid() is fast. DLReader does not +** own the doclist buffer. +** +** dlrAtEnd - true if there's no more data to read. +** dlrDocid - docid of current document. +** dlrDocData - doclist data for current document (including docid). +** dlrDocDataBytes - length of same. +** dlrAllDataBytes - length of all remaining data. +** dlrPosData - position data for current document. +** dlrPosDataLen - length of pos data for current document (incl POS_END). +** dlrStep - step to current document. +** dlrInit - initial for doclist of given type against given data. +** dlrDestroy - clean up. +** +** Expected usage is something like: +** +** DLReader reader; +** dlrInit(&reader, pData, nData); +** while( !dlrAtEnd(&reader) ){ +** // calls to dlrDocid() and kin. +** dlrStep(&reader); +** } +** dlrDestroy(&reader); +*/ +typedef struct DLReader { + DocListType iType; + const char *pData; + int nData; + + sqlite_int64 iDocid; + int nElement; +} DLReader; + +static int dlrAtEnd(DLReader *pReader){ + assert( pReader->nData>=0 ); + return pReader->nData==0; +} +static sqlite_int64 dlrDocid(DLReader *pReader){ + assert( !dlrAtEnd(pReader) ); + return pReader->iDocid; +} +static const char *dlrDocData(DLReader *pReader){ + assert( !dlrAtEnd(pReader) ); + return pReader->pData; +} +static int dlrDocDataBytes(DLReader *pReader){ + assert( !dlrAtEnd(pReader) ); + return pReader->nElement; +} +static int dlrAllDataBytes(DLReader *pReader){ + assert( !dlrAtEnd(pReader) ); + return pReader->nData; +} +/* TODO(shess) Consider adding a field to track iDocid varint length +** to make these two functions faster. This might matter (a tiny bit) +** for queries. +*/ +static const char *dlrPosData(DLReader *pReader){ + sqlite_int64 iDummy; + int n = fts3GetVarint(pReader->pData, &iDummy); + assert( !dlrAtEnd(pReader) ); + return pReader->pData+n; +} +static int dlrPosDataLen(DLReader *pReader){ + sqlite_int64 iDummy; + int n = fts3GetVarint(pReader->pData, &iDummy); + assert( !dlrAtEnd(pReader) ); + return pReader->nElement-n; +} +static void dlrStep(DLReader *pReader){ + assert( !dlrAtEnd(pReader) ); + + /* Skip past current doclist element. */ + assert( pReader->nElement<=pReader->nData ); + pReader->pData += pReader->nElement; + pReader->nData -= pReader->nElement; + + /* If there is more data, read the next doclist element. 
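+  ** Docids are delta-encoded, so the varint decoded here is added to
+  ** the previous docid: for example, docids 21, 25 and 30 are stored
+  ** as the varints 21, 4 and 5.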
*/ + if( pReader->nData!=0 ){ + sqlite_int64 iDocidDelta; + int iDummy, n = fts3GetVarint(pReader->pData, &iDocidDelta); + pReader->iDocid += iDocidDelta; + if( pReader->iType>=DL_POSITIONS ){ + assert( nnData ); + while( 1 ){ + n += fts3GetVarint32(pReader->pData+n, &iDummy); + assert( n<=pReader->nData ); + if( iDummy==POS_END ) break; + if( iDummy==POS_COLUMN ){ + n += fts3GetVarint32(pReader->pData+n, &iDummy); + assert( nnData ); + }else if( pReader->iType==DL_POSITIONS_OFFSETS ){ + n += fts3GetVarint32(pReader->pData+n, &iDummy); + n += fts3GetVarint32(pReader->pData+n, &iDummy); + assert( nnData ); + } + } + } + pReader->nElement = n; + assert( pReader->nElement<=pReader->nData ); + } +} +static void dlrInit(DLReader *pReader, DocListType iType, + const char *pData, int nData){ + assert( pData!=NULL && nData!=0 ); + pReader->iType = iType; + pReader->pData = pData; + pReader->nData = nData; + pReader->nElement = 0; + pReader->iDocid = 0; + + /* Load the first element's data. There must be a first element. */ + dlrStep(pReader); +} +static void dlrDestroy(DLReader *pReader){ + SCRAMBLE(pReader); +} + +#ifndef NDEBUG +/* Verify that the doclist can be validly decoded. Also returns the +** last docid found because it is convenient in other assertions for +** DLWriter. +*/ +static void docListValidate(DocListType iType, const char *pData, int nData, + sqlite_int64 *pLastDocid){ + sqlite_int64 iPrevDocid = 0; + assert( nData>0 ); + assert( pData!=0 ); + assert( pData+nData>pData ); + while( nData!=0 ){ + sqlite_int64 iDocidDelta; + int n = fts3GetVarint(pData, &iDocidDelta); + iPrevDocid += iDocidDelta; + if( iType>DL_DOCIDS ){ + int iDummy; + while( 1 ){ + n += fts3GetVarint32(pData+n, &iDummy); + if( iDummy==POS_END ) break; + if( iDummy==POS_COLUMN ){ + n += fts3GetVarint32(pData+n, &iDummy); + }else if( iType>DL_POSITIONS ){ + n += fts3GetVarint32(pData+n, &iDummy); + n += fts3GetVarint32(pData+n, &iDummy); + } + assert( n<=nData ); + } + } + assert( n<=nData ); + pData += n; + nData -= n; + } + if( pLastDocid ) *pLastDocid = iPrevDocid; +} +#define ASSERT_VALID_DOCLIST(i, p, n, o) docListValidate(i, p, n, o) +#else +#define ASSERT_VALID_DOCLIST(i, p, n, o) assert( 1 ) +#endif + +/*******************************************************************/ +/* DLWriter is used to write doclist data to a DataBuffer. DLWriter +** always appends to the buffer and does not own it. +** +** dlwInit - initialize to write a given type doclistto a buffer. +** dlwDestroy - clear the writer's memory. Does not free buffer. +** dlwAppend - append raw doclist data to buffer. +** dlwCopy - copy next doclist from reader to writer. +** dlwAdd - construct doclist element and append to buffer. +** Only apply dlwAdd() to DL_DOCIDS doclists (else use PLWriter). +*/ +typedef struct DLWriter { + DocListType iType; + DataBuffer *b; + sqlite_int64 iPrevDocid; +#ifndef NDEBUG + int has_iPrevDocid; +#endif +} DLWriter; + +static void dlwInit(DLWriter *pWriter, DocListType iType, DataBuffer *b){ + pWriter->b = b; + pWriter->iType = iType; + pWriter->iPrevDocid = 0; +#ifndef NDEBUG + pWriter->has_iPrevDocid = 0; +#endif +} +static void dlwDestroy(DLWriter *pWriter){ + SCRAMBLE(pWriter); +} +/* iFirstDocid is the first docid in the doclist in pData. It is +** needed because pData may point within a larger doclist, in which +** case the first item would be delta-encoded. +** +** iLastDocid is the final docid in the doclist in pData. It is +** needed to create the new iPrevDocid for future delta-encoding. 
The +** code could decode the passed doclist to recreate iLastDocid, but +** the only current user (docListMerge) already has decoded this +** information. +*/ +/* TODO(shess) This has become just a helper for docListMerge. +** Consider a refactor to make this cleaner. +*/ +static void dlwAppend(DLWriter *pWriter, + const char *pData, int nData, + sqlite_int64 iFirstDocid, sqlite_int64 iLastDocid){ + sqlite_int64 iDocid = 0; + char c[VARINT_MAX]; + int nFirstOld, nFirstNew; /* Old and new varint len of first docid. */ +#ifndef NDEBUG + sqlite_int64 iLastDocidDelta; +#endif + + /* Recode the initial docid as delta from iPrevDocid. */ + nFirstOld = fts3GetVarint(pData, &iDocid); + assert( nFirstOldiType==DL_DOCIDS) ); + nFirstNew = fts3PutVarint(c, iFirstDocid-pWriter->iPrevDocid); + + /* Verify that the incoming doclist is valid AND that it ends with + ** the expected docid. This is essential because we'll trust this + ** docid in future delta-encoding. + */ + ASSERT_VALID_DOCLIST(pWriter->iType, pData, nData, &iLastDocidDelta); + assert( iLastDocid==iFirstDocid-iDocid+iLastDocidDelta ); + + /* Append recoded initial docid and everything else. Rest of docids + ** should have been delta-encoded from previous initial docid. + */ + if( nFirstOldb, c, nFirstNew, + pData+nFirstOld, nData-nFirstOld); + }else{ + dataBufferAppend(pWriter->b, c, nFirstNew); + } + pWriter->iPrevDocid = iLastDocid; +} +static void dlwCopy(DLWriter *pWriter, DLReader *pReader){ + dlwAppend(pWriter, dlrDocData(pReader), dlrDocDataBytes(pReader), + dlrDocid(pReader), dlrDocid(pReader)); +} +static void dlwAdd(DLWriter *pWriter, sqlite_int64 iDocid){ + char c[VARINT_MAX]; + int n = fts3PutVarint(c, iDocid-pWriter->iPrevDocid); + + /* Docids must ascend. */ + assert( !pWriter->has_iPrevDocid || iDocid>pWriter->iPrevDocid ); + assert( pWriter->iType==DL_DOCIDS ); + + dataBufferAppend(pWriter->b, c, n); + pWriter->iPrevDocid = iDocid; +#ifndef NDEBUG + pWriter->has_iPrevDocid = 1; +#endif +} + +/*******************************************************************/ +/* PLReader is used to read data from a document's position list. As +** the caller steps through the list, data is cached so that varints +** only need to be decoded once. +** +** plrInit, plrDestroy - create/destroy a reader. +** plrColumn, plrPosition, plrStartOffset, plrEndOffset - accessors +** plrAtEnd - at end of stream, only call plrDestroy once true. +** plrStep - step to the next element. +*/ +typedef struct PLReader { + /* These refer to the next position's data. nData will reach 0 when + ** reading the last position, so plrStep() signals EOF by setting + ** pData to NULL. 
+ */ + const char *pData; + int nData; + + DocListType iType; + int iColumn; /* the last column read */ + int iPosition; /* the last position read */ + int iStartOffset; /* the last start offset read */ + int iEndOffset; /* the last end offset read */ +} PLReader; + +static int plrAtEnd(PLReader *pReader){ + return pReader->pData==NULL; +} +static int plrColumn(PLReader *pReader){ + assert( !plrAtEnd(pReader) ); + return pReader->iColumn; +} +static int plrPosition(PLReader *pReader){ + assert( !plrAtEnd(pReader) ); + return pReader->iPosition; +} +static int plrStartOffset(PLReader *pReader){ + assert( !plrAtEnd(pReader) ); + return pReader->iStartOffset; +} +static int plrEndOffset(PLReader *pReader){ + assert( !plrAtEnd(pReader) ); + return pReader->iEndOffset; +} +static void plrStep(PLReader *pReader){ + int i, n; + + assert( !plrAtEnd(pReader) ); + + if( pReader->nData==0 ){ + pReader->pData = NULL; + return; + } + + n = fts3GetVarint32(pReader->pData, &i); + if( i==POS_COLUMN ){ + n += fts3GetVarint32(pReader->pData+n, &pReader->iColumn); + pReader->iPosition = 0; + pReader->iStartOffset = 0; + n += fts3GetVarint32(pReader->pData+n, &i); + } + /* Should never see adjacent column changes. */ + assert( i!=POS_COLUMN ); + + if( i==POS_END ){ + pReader->nData = 0; + pReader->pData = NULL; + return; + } + + pReader->iPosition += i-POS_BASE; + if( pReader->iType==DL_POSITIONS_OFFSETS ){ + n += fts3GetVarint32(pReader->pData+n, &i); + pReader->iStartOffset += i; + n += fts3GetVarint32(pReader->pData+n, &i); + pReader->iEndOffset = pReader->iStartOffset+i; + } + assert( n<=pReader->nData ); + pReader->pData += n; + pReader->nData -= n; +} + +static void plrInit(PLReader *pReader, DLReader *pDLReader){ + pReader->pData = dlrPosData(pDLReader); + pReader->nData = dlrPosDataLen(pDLReader); + pReader->iType = pDLReader->iType; + pReader->iColumn = 0; + pReader->iPosition = 0; + pReader->iStartOffset = 0; + pReader->iEndOffset = 0; + plrStep(pReader); +} +static void plrDestroy(PLReader *pReader){ + SCRAMBLE(pReader); +} + +/*******************************************************************/ +/* PLWriter is used in constructing a document's position list. As a +** convenience, if iType is DL_DOCIDS, PLWriter becomes a no-op. +** PLWriter writes to the associated DLWriter's buffer. +** +** plwInit - init for writing a document's poslist. +** plwDestroy - clear a writer. +** plwAdd - append position and offset information. +** plwCopy - copy next position's data from reader to writer. +** plwTerminate - add any necessary doclist terminator. +** +** Calling plwAdd() after plwTerminate() may result in a corrupt +** doclist. +*/ +/* TODO(shess) Until we've written the second item, we can cache the +** first item's information. Then we'd have three states: +** +** - initialized with docid, no positions. +** - docid and one position. +** - docid and multiple positions. +** +** Only the last state needs to actually write to dlw->b, which would +** be an improvement in the DLCollector case. +*/ +typedef struct PLWriter { + DLWriter *dlw; + + int iColumn; /* the last column written */ + int iPos; /* the last position written */ + int iOffset; /* the last start offset written */ +} PLWriter; + +/* TODO(shess) In the case where the parent is reading these values +** from a PLReader, we could optimize to a copy if that PLReader has +** the same type as pWriter. 
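+**
+** As an illustration of the encoding that plwAdd() below produces:
+** positions 3, 7 and 11 in column 0 of a DL_POSITIONS doclist are
+** written as the varints POS_BASE+3, POS_BASE+4 and POS_BASE+4, since
+** each position is stored as a delta from the previous position;
+** plwTerminate() then appends the POS_END marker.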
+*/ +static void plwAdd(PLWriter *pWriter, int iColumn, int iPos, + int iStartOffset, int iEndOffset){ + /* Worst-case space for POS_COLUMN, iColumn, iPosDelta, + ** iStartOffsetDelta, and iEndOffsetDelta. + */ + char c[5*VARINT_MAX]; + int n = 0; + + /* Ban plwAdd() after plwTerminate(). */ + assert( pWriter->iPos!=-1 ); + + if( pWriter->dlw->iType==DL_DOCIDS ) return; + + if( iColumn!=pWriter->iColumn ){ + n += fts3PutVarint(c+n, POS_COLUMN); + n += fts3PutVarint(c+n, iColumn); + pWriter->iColumn = iColumn; + pWriter->iPos = 0; + pWriter->iOffset = 0; + } + assert( iPos>=pWriter->iPos ); + n += fts3PutVarint(c+n, POS_BASE+(iPos-pWriter->iPos)); + pWriter->iPos = iPos; + if( pWriter->dlw->iType==DL_POSITIONS_OFFSETS ){ + assert( iStartOffset>=pWriter->iOffset ); + n += fts3PutVarint(c+n, iStartOffset-pWriter->iOffset); + pWriter->iOffset = iStartOffset; + assert( iEndOffset>=iStartOffset ); + n += fts3PutVarint(c+n, iEndOffset-iStartOffset); + } + dataBufferAppend(pWriter->dlw->b, c, n); +} +static void plwCopy(PLWriter *pWriter, PLReader *pReader){ + plwAdd(pWriter, plrColumn(pReader), plrPosition(pReader), + plrStartOffset(pReader), plrEndOffset(pReader)); +} +static void plwInit(PLWriter *pWriter, DLWriter *dlw, sqlite_int64 iDocid){ + char c[VARINT_MAX]; + int n; + + pWriter->dlw = dlw; + + /* Docids must ascend. */ + assert( !pWriter->dlw->has_iPrevDocid || iDocid>pWriter->dlw->iPrevDocid ); + n = fts3PutVarint(c, iDocid-pWriter->dlw->iPrevDocid); + dataBufferAppend(pWriter->dlw->b, c, n); + pWriter->dlw->iPrevDocid = iDocid; +#ifndef NDEBUG + pWriter->dlw->has_iPrevDocid = 1; +#endif + + pWriter->iColumn = 0; + pWriter->iPos = 0; + pWriter->iOffset = 0; +} +/* TODO(shess) Should plwDestroy() also terminate the doclist? But +** then plwDestroy() would no longer be just a destructor, it would +** also be doing work, which isn't consistent with the overall idiom. +** Another option would be for plwAdd() to always append any necessary +** terminator, so that the output is always correct. But that would +** add incremental work to the common case with the only benefit being +** API elegance. Punt for now. +*/ +static void plwTerminate(PLWriter *pWriter){ + if( pWriter->dlw->iType>DL_DOCIDS ){ + char c[VARINT_MAX]; + int n = fts3PutVarint(c, POS_END); + dataBufferAppend(pWriter->dlw->b, c, n); + } +#ifndef NDEBUG + /* Mark as terminated for assert in plwAdd(). */ + pWriter->iPos = -1; +#endif +} +static void plwDestroy(PLWriter *pWriter){ + SCRAMBLE(pWriter); +} + +/*******************************************************************/ +/* DLCollector wraps PLWriter and DLWriter to provide a +** dynamically-allocated doclist area to use during tokenization. +** +** dlcNew - malloc up and initialize a collector. +** dlcDelete - destroy a collector and all contained items. +** dlcAddPos - append position and offset information. +** dlcAddDoclist - add the collected doclist to the given buffer. +** dlcNext - terminate the current document and open another. +*/ +typedef struct DLCollector { + DataBuffer b; + DLWriter dlw; + PLWriter plw; +} DLCollector; + +/* TODO(shess) This could also be done by calling plwTerminate() and +** dataBufferAppend(). I tried that, expecting nominal performance +** differences, but it seemed to pretty reliably be worth 1% to code +** it this way. I suspect it is the incremental malloc overhead (some +** percentage of the plwTerminate() calls will cause a realloc), so +** this might be worth revisiting if the DataBuffer implementation +** changes. 
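+**
+** For orientation, a rough sketch of collecting one document's hits
+** (iDocid and buf stand for the caller's docid and output DataBuffer):
+**
+**   DLCollector *p = dlcNew(iDocid, DL_DEFAULT);
+**   dlcAddPos(p, 0, 0, 0, 5);       /* column 0, position 0 */
+**   dlcAddPos(p, 0, 7, 31, 36);     /* column 0, position 7 */
+**   dlcAddDoclist(p, &buf);         /* append terminated doclist */
+**   dlcDelete(p);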
+*/ +static void dlcAddDoclist(DLCollector *pCollector, DataBuffer *b){ + if( pCollector->dlw.iType>DL_DOCIDS ){ + char c[VARINT_MAX]; + int n = fts3PutVarint(c, POS_END); + dataBufferAppend2(b, pCollector->b.pData, pCollector->b.nData, c, n); + }else{ + dataBufferAppend(b, pCollector->b.pData, pCollector->b.nData); + } +} +static void dlcNext(DLCollector *pCollector, sqlite_int64 iDocid){ + plwTerminate(&pCollector->plw); + plwDestroy(&pCollector->plw); + plwInit(&pCollector->plw, &pCollector->dlw, iDocid); +} +static void dlcAddPos(DLCollector *pCollector, int iColumn, int iPos, + int iStartOffset, int iEndOffset){ + plwAdd(&pCollector->plw, iColumn, iPos, iStartOffset, iEndOffset); +} + +static DLCollector *dlcNew(sqlite_int64 iDocid, DocListType iType){ + DLCollector *pCollector = sqlite3_malloc(sizeof(DLCollector)); + dataBufferInit(&pCollector->b, 0); + dlwInit(&pCollector->dlw, iType, &pCollector->b); + plwInit(&pCollector->plw, &pCollector->dlw, iDocid); + return pCollector; +} +static void dlcDelete(DLCollector *pCollector){ + plwDestroy(&pCollector->plw); + dlwDestroy(&pCollector->dlw); + dataBufferDestroy(&pCollector->b); + SCRAMBLE(pCollector); + sqlite3_free(pCollector); +} + + +/* Copy the doclist data of iType in pData/nData into *out, trimming +** unnecessary data as we go. Only columns matching iColumn are +** copied, all columns copied if iColumn is -1. Elements with no +** matching columns are dropped. The output is an iOutType doclist. +*/ +/* NOTE(shess) This code is only valid after all doclists are merged. +** If this is run before merges, then doclist items which represent +** deletion will be trimmed, and will thus not effect a deletion +** during the merge. +*/ +static void docListTrim(DocListType iType, const char *pData, int nData, + int iColumn, DocListType iOutType, DataBuffer *out){ + DLReader dlReader; + DLWriter dlWriter; + + assert( iOutType<=iType ); + + dlrInit(&dlReader, iType, pData, nData); + dlwInit(&dlWriter, iOutType, out); + + while( !dlrAtEnd(&dlReader) ){ + PLReader plReader; + PLWriter plWriter; + int match = 0; + + plrInit(&plReader, &dlReader); + + while( !plrAtEnd(&plReader) ){ + if( iColumn==-1 || plrColumn(&plReader)==iColumn ){ + if( !match ){ + plwInit(&plWriter, &dlWriter, dlrDocid(&dlReader)); + match = 1; + } + plwAdd(&plWriter, plrColumn(&plReader), plrPosition(&plReader), + plrStartOffset(&plReader), plrEndOffset(&plReader)); + } + plrStep(&plReader); + } + if( match ){ + plwTerminate(&plWriter); + plwDestroy(&plWriter); + } + + plrDestroy(&plReader); + dlrStep(&dlReader); + } + dlwDestroy(&dlWriter); + dlrDestroy(&dlReader); +} + +/* Used by docListMerge() to keep doclists in the ascending order by +** docid, then ascending order by age (so the newest comes first). +*/ +typedef struct OrderedDLReader { + DLReader *pReader; + + /* TODO(shess) If we assume that docListMerge pReaders is ordered by + ** age (which we do), then we could use pReader comparisons to break + ** ties. + */ + int idx; +} OrderedDLReader; + +/* Order eof to end, then by docid asc, idx desc. */ +static int orderedDLReaderCmp(OrderedDLReader *r1, OrderedDLReader *r2){ + if( dlrAtEnd(r1->pReader) ){ + if( dlrAtEnd(r2->pReader) ) return 0; /* Both atEnd(). */ + return 1; /* Only r1 atEnd(). */ + } + if( dlrAtEnd(r2->pReader) ) return -1; /* Only r2 atEnd(). */ + + if( dlrDocid(r1->pReader)pReader) ) return -1; + if( dlrDocid(r1->pReader)>dlrDocid(r2->pReader) ) return 1; + + /* Descending on idx. 
*/ + return r2->idx-r1->idx; +} + +/* Bubble p[0] to appropriate place in p[1..n-1]. Assumes that +** p[1..n-1] is already sorted. +*/ +/* TODO(shess) Is this frequent enough to warrant a binary search? +** Before implementing that, instrument the code to check. In most +** current usage, I expect that p[0] will be less than p[1] a very +** high proportion of the time. +*/ +static void orderedDLReaderReorder(OrderedDLReader *p, int n){ + while( n>1 && orderedDLReaderCmp(p, p+1)>0 ){ + OrderedDLReader tmp = p[0]; + p[0] = p[1]; + p[1] = tmp; + n--; + p++; + } +} + +/* Given an array of doclist readers, merge their doclist elements +** into out in sorted order (by docid), dropping elements from older +** readers when there is a duplicate docid. pReaders is assumed to be +** ordered by age, oldest first. +*/ +/* TODO(shess) nReaders must be <= MERGE_COUNT. This should probably +** be fixed. +*/ +static void docListMerge(DataBuffer *out, + DLReader *pReaders, int nReaders){ + OrderedDLReader readers[MERGE_COUNT]; + DLWriter writer; + int i, n; + const char *pStart = 0; + int nStart = 0; + sqlite_int64 iFirstDocid = 0, iLastDocid = 0; + + assert( nReaders>0 ); + if( nReaders==1 ){ + dataBufferAppend(out, dlrDocData(pReaders), dlrAllDataBytes(pReaders)); + return; + } + + assert( nReaders<=MERGE_COUNT ); + n = 0; + for(i=0; i0 ){ + orderedDLReaderReorder(readers+i, nReaders-i); + } + + dlwInit(&writer, pReaders[0].iType, out); + while( !dlrAtEnd(readers[0].pReader) ){ + sqlite_int64 iDocid = dlrDocid(readers[0].pReader); + + /* If this is a continuation of the current buffer to copy, extend + ** that buffer. memcpy() seems to be more efficient if it has a + ** lots of data to copy. + */ + if( dlrDocData(readers[0].pReader)==pStart+nStart ){ + nStart += dlrDocDataBytes(readers[0].pReader); + }else{ + if( pStart!=0 ){ + dlwAppend(&writer, pStart, nStart, iFirstDocid, iLastDocid); + } + pStart = dlrDocData(readers[0].pReader); + nStart = dlrDocDataBytes(readers[0].pReader); + iFirstDocid = iDocid; + } + iLastDocid = iDocid; + dlrStep(readers[0].pReader); + + /* Drop all of the older elements with the same docid. */ + for(i=1; i0 ){ + orderedDLReaderReorder(readers+i, nReaders-i); + } + } + + /* Copy over any remaining elements. */ + if( nStart>0 ) dlwAppend(&writer, pStart, nStart, iFirstDocid, iLastDocid); + dlwDestroy(&writer); +} + +/* Helper function for posListUnion(). Compares the current position +** between left and right, returning as standard C idiom of <0 if +** left0 if left>right, and 0 if left==right. "End" always +** compares greater. +*/ +static int posListCmp(PLReader *pLeft, PLReader *pRight){ + assert( pLeft->iType==pRight->iType ); + if( pLeft->iType==DL_DOCIDS ) return 0; + + if( plrAtEnd(pLeft) ) return plrAtEnd(pRight) ? 0 : 1; + if( plrAtEnd(pRight) ) return -1; + + if( plrColumn(pLeft)plrColumn(pRight) ) return 1; + + if( plrPosition(pLeft)plrPosition(pRight) ) return 1; + if( pLeft->iType==DL_POSITIONS ) return 0; + + if( plrStartOffset(pLeft)plrStartOffset(pRight) ) return 1; + + if( plrEndOffset(pLeft)plrEndOffset(pRight) ) return 1; + + return 0; +} + +/* Write the union of position lists in pLeft and pRight to pOut. +** "Union" in this case meaning "All unique position tuples". Should +** work with any doclist type, though both inputs and the output +** should be the same type. 
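+**
+** For example, if the left poslist holds positions {5, 10} and the
+** right poslist holds {5, 7} in the same column, the output poslist
+** is {5, 7, 10}; a tuple present in both inputs is written only once.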
+*/ +static void posListUnion(DLReader *pLeft, DLReader *pRight, DLWriter *pOut){ + PLReader left, right; + PLWriter writer; + + assert( dlrDocid(pLeft)==dlrDocid(pRight) ); + assert( pLeft->iType==pRight->iType ); + assert( pLeft->iType==pOut->iType ); + + plrInit(&left, pLeft); + plrInit(&right, pRight); + plwInit(&writer, pOut, dlrDocid(pLeft)); + + while( !plrAtEnd(&left) || !plrAtEnd(&right) ){ + int c = posListCmp(&left, &right); + if( c<0 ){ + plwCopy(&writer, &left); + plrStep(&left); + }else if( c>0 ){ + plwCopy(&writer, &right); + plrStep(&right); + }else{ + plwCopy(&writer, &left); + plrStep(&left); + plrStep(&right); + } + } + + plwTerminate(&writer); + plwDestroy(&writer); + plrDestroy(&left); + plrDestroy(&right); +} + +/* Write the union of doclists in pLeft and pRight to pOut. For +** docids in common between the inputs, the union of the position +** lists is written. Inputs and outputs are always type DL_DEFAULT. +*/ +static void docListUnion( + const char *pLeft, int nLeft, + const char *pRight, int nRight, + DataBuffer *pOut /* Write the combined doclist here */ +){ + DLReader left, right; + DLWriter writer; + + if( nLeft==0 ){ + if( nRight!=0) dataBufferAppend(pOut, pRight, nRight); + return; + } + if( nRight==0 ){ + dataBufferAppend(pOut, pLeft, nLeft); + return; + } + + dlrInit(&left, DL_DEFAULT, pLeft, nLeft); + dlrInit(&right, DL_DEFAULT, pRight, nRight); + dlwInit(&writer, DL_DEFAULT, pOut); + + while( !dlrAtEnd(&left) || !dlrAtEnd(&right) ){ + if( dlrAtEnd(&right) ){ + dlwCopy(&writer, &left); + dlrStep(&left); + }else if( dlrAtEnd(&left) ){ + dlwCopy(&writer, &right); + dlrStep(&right); + }else if( dlrDocid(&left)dlrDocid(&right) ){ + dlwCopy(&writer, &right); + dlrStep(&right); + }else{ + posListUnion(&left, &right, &writer); + dlrStep(&left); + dlrStep(&right); + } + } + + dlrDestroy(&left); + dlrDestroy(&right); + dlwDestroy(&writer); +} + +/* +** This function is used as part of the implementation of phrase and +** NEAR matching. +** +** pLeft and pRight are DLReaders positioned to the same docid in +** lists of type DL_POSITION. This function writes an entry to the +** DLWriter pOut for each position in pRight that is less than +** (nNear+1) greater (but not equal to or smaller) than a position +** in pLeft. For example, if nNear is 0, and the positions contained +** by pLeft and pRight are: +** +** pLeft: 5 10 15 20 +** pRight: 6 9 17 21 +** +** then the docid is added to pOut. If pOut is of type DL_POSITIONS, +** then a positionids "6" and "21" are also added to pOut. +** +** If boolean argument isSaveLeft is true, then positionids are copied +** from pLeft instead of pRight. In the example above, the positions "5" +** and "20" would be added instead of "6" and "21". 
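+**
+** With nNear==2 and the same input positions, pRight position 17
+** also qualifies (17-15 <= nNear+1), so positions "6", "17" and "21"
+** would be written.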
+*/ +static void posListPhraseMerge( + DLReader *pLeft, + DLReader *pRight, + int nNear, + int isSaveLeft, + DLWriter *pOut +){ + PLReader left, right; + PLWriter writer; + int match = 0; + + assert( dlrDocid(pLeft)==dlrDocid(pRight) ); + assert( pOut->iType!=DL_POSITIONS_OFFSETS ); + + plrInit(&left, pLeft); + plrInit(&right, pRight); + + while( !plrAtEnd(&left) && !plrAtEnd(&right) ){ + if( plrColumn(&left)plrColumn(&right) ){ + plrStep(&right); + }else if( plrPosition(&left)>=plrPosition(&right) ){ + plrStep(&right); + }else{ + if( (plrPosition(&right)-plrPosition(&left))<=(nNear+1) ){ + if( !match ){ + plwInit(&writer, pOut, dlrDocid(pLeft)); + match = 1; + } + if( !isSaveLeft ){ + plwAdd(&writer, plrColumn(&right), plrPosition(&right), 0, 0); + }else{ + plwAdd(&writer, plrColumn(&left), plrPosition(&left), 0, 0); + } + plrStep(&right); + }else{ + plrStep(&left); + } + } + } + + if( match ){ + plwTerminate(&writer); + plwDestroy(&writer); + } + + plrDestroy(&left); + plrDestroy(&right); +} + +/* +** Compare the values pointed to by the PLReaders passed as arguments. +** Return -1 if the value pointed to by pLeft is considered less than +** the value pointed to by pRight, +1 if it is considered greater +** than it, or 0 if it is equal. i.e. +** +** (*pLeft - *pRight) +** +** A PLReader that is in the EOF condition is considered greater than +** any other. If neither argument is in EOF state, the return value of +** plrColumn() is used. If the plrColumn() values are equal, the +** comparison is on the basis of plrPosition(). +*/ +static int plrCompare(PLReader *pLeft, PLReader *pRight){ + assert(!plrAtEnd(pLeft) || !plrAtEnd(pRight)); + + if( plrAtEnd(pRight) || plrAtEnd(pLeft) ){ + return (plrAtEnd(pRight) ? -1 : 1); + } + if( plrColumn(pLeft)!=plrColumn(pRight) ){ + return ((plrColumn(pLeft)0) +** and write the results into pOut. +** +** A phrase intersection means that two documents only match +** if pLeft.iPos+1==pRight.iPos. +** +** A NEAR intersection means that two documents only match if +** (abs(pLeft.iPos-pRight.iPos) one AND (two OR three) + * [one OR two three] ==> (one OR two) AND three + * + * A "-" before a term matches all entries that lack that term. + * The "-" must occur immediately before the term with in intervening + * space. This is how the search engines do it. + * + * A NOT term cannot be the right-hand operand of an OR. If this + * occurs in the query string, the NOT is ignored: + * + * [one OR -two] ==> one OR two + * + */ +typedef struct Query { + fulltext_vtab *pFts; /* The full text index */ + int nTerms; /* Number of terms in the query */ + QueryTerm *pTerms; /* Array of terms. Space obtained from malloc() */ + int nextIsOr; /* Set the isOr flag on the next inserted term */ + int nextIsNear; /* Set the isOr flag on the next inserted term */ + int nextColumn; /* Next word parsed must be in this column */ + int dfltColumn; /* The default column */ +} Query; + + +/* +** An instance of the following structure keeps track of generated +** matching-word offset information and snippets. 
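+**
+** The zOffset text is a space-separated list of quadruples, one per
+** retained match, each of the form "iCol iTerm iStart nByte"; for
+** instance "0 0 3 6 1 1 17 8" (hypothetical values) describes two
+** matches, one in column 0 and one in column 1.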
+*/ +typedef struct Snippet { + int nMatch; /* Total number of matches */ + int nAlloc; /* Space allocated for aMatch[] */ + struct snippetMatch { /* One entry for each matching term */ + char snStatus; /* Status flag for use while constructing snippets */ + short int iCol; /* The column that contains the match */ + short int iTerm; /* The index in Query.pTerms[] of the matching term */ + int iToken; /* The index of the matching document token */ + short int nByte; /* Number of bytes in the term */ + int iStart; /* The offset to the first character of the term */ + } *aMatch; /* Points to space obtained from malloc */ + char *zOffset; /* Text rendering of aMatch[] */ + int nOffset; /* strlen(zOffset) */ + char *zSnippet; /* Snippet text */ + int nSnippet; /* strlen(zSnippet) */ +} Snippet; + + +typedef enum QueryType { + QUERY_GENERIC, /* table scan */ + QUERY_DOCID, /* lookup by docid */ + QUERY_FULLTEXT /* QUERY_FULLTEXT + [i] is a full-text search for column i*/ +} QueryType; + +typedef enum fulltext_statement { + CONTENT_INSERT_STMT, + CONTENT_SELECT_STMT, + CONTENT_UPDATE_STMT, + CONTENT_DELETE_STMT, + + BLOCK_INSERT_STMT, + BLOCK_SELECT_STMT, + BLOCK_DELETE_STMT, + + SEGDIR_MAX_INDEX_STMT, + SEGDIR_SET_STMT, + SEGDIR_SELECT_STMT, + SEGDIR_SPAN_STMT, + SEGDIR_DELETE_STMT, + SEGDIR_SELECT_ALL_STMT, + + MAX_STMT /* Always at end! */ +} fulltext_statement; + +/* These must exactly match the enum above. */ +/* TODO(shess): Is there some risk that a statement will be used in two +** cursors at once, e.g. if a query joins a virtual table to itself? +** If so perhaps we should move some of these to the cursor object. +*/ +static const char *const fulltext_zStatement[MAX_STMT] = { + /* CONTENT_INSERT */ NULL, /* generated in contentInsertStatement() */ + /* CONTENT_SELECT */ NULL, /* generated in contentSelectStatement() */ + /* CONTENT_UPDATE */ NULL, /* generated in contentUpdateStatement() */ + /* CONTENT_DELETE */ "delete from %_content where docid = ?", + + /* BLOCK_INSERT */ + "insert into %_segments (blockid, block) values (null, ?)", + /* BLOCK_SELECT */ "select block from %_segments where blockid = ?", + /* BLOCK_DELETE */ "delete from %_segments where blockid between ? and ?", + + /* SEGDIR_MAX_INDEX */ "select max(idx) from %_segdir where level = ?", + /* SEGDIR_SET */ "insert into %_segdir values (?, ?, ?, ?, ?, ?)", + /* SEGDIR_SELECT */ + "select start_block, leaves_end_block, root from %_segdir " + " where level = ? order by idx", + /* SEGDIR_SPAN */ + "select min(start_block), max(end_block) from %_segdir " + " where level = ? and start_block <> 0", + /* SEGDIR_DELETE */ "delete from %_segdir where level = ?", + /* SEGDIR_SELECT_ALL */ + "select root, leaves_end_block from %_segdir order by level desc, idx", +}; + +/* +** A connection to a fulltext index is an instance of the following +** structure. The xCreate and xConnect methods create an instance +** of this structure and xDestroy and xDisconnect free that instance. +** All other methods receive a pointer to the structure as one of their +** arguments. +*/ +struct fulltext_vtab { + sqlite3_vtab base; /* Base class used by SQLite core */ + sqlite3 *db; /* The database connection */ + const char *zDb; /* logical database name */ + const char *zName; /* virtual table name */ + int nColumn; /* number of columns in virtual table */ + char **azColumn; /* column names. 
malloced */ + char **azContentColumn; /* column names in content table; malloced */ + sqlite3_tokenizer *pTokenizer; /* tokenizer for inserts and queries */ + + /* Precompiled statements which we keep as long as the table is + ** open. + */ + sqlite3_stmt *pFulltextStatements[MAX_STMT]; + + /* Precompiled statements used for segment merges. We run a + ** separate select across the leaf level of each tree being merged. + */ + sqlite3_stmt *pLeafSelectStmts[MERGE_COUNT]; + /* The statement used to prepare pLeafSelectStmts. */ +#define LEAF_SELECT \ + "select block from %_segments where blockid between ? and ? order by blockid" + + /* These buffer pending index updates during transactions. + ** nPendingData estimates the memory size of the pending data. It + ** doesn't include the hash-bucket overhead, nor any malloc + ** overhead. When nPendingData exceeds kPendingThreshold, the + ** buffer is flushed even before the transaction closes. + ** pendingTerms stores the data, and is only valid when nPendingData + ** is >=0 (nPendingData<0 means pendingTerms has not been + ** initialized). iPrevDocid is the last docid written, used to make + ** certain we're inserting in sorted order. + */ + int nPendingData; +#define kPendingThreshold (1*1024*1024) + sqlite_int64 iPrevDocid; + fts3Hash pendingTerms; +}; + +/* +** When the core wants to do a query, it create a cursor using a +** call to xOpen. This structure is an instance of a cursor. It +** is destroyed by xClose. +*/ +typedef struct fulltext_cursor { + sqlite3_vtab_cursor base; /* Base class used by SQLite core */ + QueryType iCursorType; /* Copy of sqlite3_index_info.idxNum */ + sqlite3_stmt *pStmt; /* Prepared statement in use by the cursor */ + int eof; /* True if at End Of Results */ + Query q; /* Parsed query string */ + Snippet snippet; /* Cached snippet for the current row */ + int iColumn; /* Column being searched */ + DataBuffer result; /* Doclist results from fulltextQuery */ + DLReader reader; /* Result reader if result not empty */ +} fulltext_cursor; + +static struct fulltext_vtab *cursor_vtab(fulltext_cursor *c){ + return (fulltext_vtab *) c->base.pVtab; +} + +static const sqlite3_module fts3Module; /* forward declaration */ + +/* Return a dynamically generated statement of the form + * insert into %_content (docid, ...) values (?, ...) + */ +static const char *contentInsertStatement(fulltext_vtab *v){ + StringBuffer sb; + int i; + + initStringBuffer(&sb); + append(&sb, "insert into %_content (docid, "); + appendList(&sb, v->nColumn, v->azContentColumn); + append(&sb, ") values (?"); + for(i=0; inColumn; ++i) + append(&sb, ", ?"); + append(&sb, ")"); + return stringBufferData(&sb); +} + +/* Return a dynamically generated statement of the form + * select from %_content where docid = ? + */ +static const char *contentSelectStatement(fulltext_vtab *v){ + StringBuffer sb; + initStringBuffer(&sb); + append(&sb, "SELECT "); + appendList(&sb, v->nColumn, v->azContentColumn); + append(&sb, " FROM %_content WHERE docid = ?"); + return stringBufferData(&sb); +} + +/* Return a dynamically generated statement of the form + * update %_content set [col_0] = ?, [col_1] = ?, ... + * where docid = ? 
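+ *
+ * For instance, with content columns c0subject and c1body the
+ * generated text is:
+ *
+ *   update %_content set c0subject = ?, c1body = ? where docid = ?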
+ */ +static const char *contentUpdateStatement(fulltext_vtab *v){ + StringBuffer sb; + int i; + + initStringBuffer(&sb); + append(&sb, "update %_content set "); + for(i=0; inColumn; ++i) { + if( i>0 ){ + append(&sb, ", "); + } + append(&sb, v->azContentColumn[i]); + append(&sb, " = ?"); + } + append(&sb, " where docid = ?"); + return stringBufferData(&sb); +} + +/* Puts a freshly-prepared statement determined by iStmt in *ppStmt. +** If the indicated statement has never been prepared, it is prepared +** and cached, otherwise the cached version is reset. +*/ +static int sql_get_statement(fulltext_vtab *v, fulltext_statement iStmt, + sqlite3_stmt **ppStmt){ + assert( iStmtpFulltextStatements[iStmt]==NULL ){ + const char *zStmt; + int rc; + switch( iStmt ){ + case CONTENT_INSERT_STMT: + zStmt = contentInsertStatement(v); break; + case CONTENT_SELECT_STMT: + zStmt = contentSelectStatement(v); break; + case CONTENT_UPDATE_STMT: + zStmt = contentUpdateStatement(v); break; + default: + zStmt = fulltext_zStatement[iStmt]; + } + rc = sql_prepare(v->db, v->zDb, v->zName, &v->pFulltextStatements[iStmt], + zStmt); + if( zStmt != fulltext_zStatement[iStmt]) sqlite3_free((void *) zStmt); + if( rc!=SQLITE_OK ) return rc; + } else { + int rc = sqlite3_reset(v->pFulltextStatements[iStmt]); + if( rc!=SQLITE_OK ) return rc; + } + + *ppStmt = v->pFulltextStatements[iStmt]; + return SQLITE_OK; +} + +/* Like sqlite3_step(), but convert SQLITE_DONE to SQLITE_OK and +** SQLITE_ROW to SQLITE_ERROR. Useful for statements like UPDATE, +** where we expect no results. +*/ +static int sql_single_step(sqlite3_stmt *s){ + int rc = sqlite3_step(s); + return (rc==SQLITE_DONE) ? SQLITE_OK : rc; +} + +/* Like sql_get_statement(), but for special replicated LEAF_SELECT +** statements. +*/ +/* TODO(shess) Write version for generic statements and then share +** that between the cached-statement functions. +*/ +static int sql_get_leaf_statement(fulltext_vtab *v, int idx, + sqlite3_stmt **ppStmt){ + assert( idx>=0 && idxpLeafSelectStmts[idx]==NULL ){ + int rc = sql_prepare(v->db, v->zDb, v->zName, &v->pLeafSelectStmts[idx], + LEAF_SELECT); + if( rc!=SQLITE_OK ) return rc; + }else{ + int rc = sqlite3_reset(v->pLeafSelectStmts[idx]); + if( rc!=SQLITE_OK ) return rc; + } + + *ppStmt = v->pLeafSelectStmts[idx]; + return SQLITE_OK; +} + +/* insert into %_content (docid, ...) values ([docid], [pValues]) +** If the docid contains SQL NULL, then a unique docid will be +** generated. +*/ +static int content_insert(fulltext_vtab *v, sqlite3_value *docid, + sqlite3_value **pValues){ + sqlite3_stmt *s; + int i; + int rc = sql_get_statement(v, CONTENT_INSERT_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_value(s, 1, docid); + if( rc!=SQLITE_OK ) return rc; + + for(i=0; inColumn; ++i){ + rc = sqlite3_bind_value(s, 2+i, pValues[i]); + if( rc!=SQLITE_OK ) return rc; + } + + return sql_single_step(s); +} + +/* update %_content set col0 = pValues[0], col1 = pValues[1], ... 
+ * where docid = [iDocid] */ +static int content_update(fulltext_vtab *v, sqlite3_value **pValues, + sqlite_int64 iDocid){ + sqlite3_stmt *s; + int i; + int rc = sql_get_statement(v, CONTENT_UPDATE_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + for(i=0; inColumn; ++i){ + rc = sqlite3_bind_value(s, 1+i, pValues[i]); + if( rc!=SQLITE_OK ) return rc; + } + + rc = sqlite3_bind_int64(s, 1+v->nColumn, iDocid); + if( rc!=SQLITE_OK ) return rc; + + return sql_single_step(s); +} + +static void freeStringArray(int nString, const char **pString){ + int i; + + for (i=0 ; i < nString ; ++i) { + if( pString[i]!=NULL ) sqlite3_free((void *) pString[i]); + } + sqlite3_free((void *) pString); +} + +/* select * from %_content where docid = [iDocid] + * The caller must delete the returned array and all strings in it. + * null fields will be NULL in the returned array. + * + * TODO: Perhaps we should return pointer/length strings here for consistency + * with other code which uses pointer/length. */ +static int content_select(fulltext_vtab *v, sqlite_int64 iDocid, + const char ***pValues){ + sqlite3_stmt *s; + const char **values; + int i; + int rc; + + *pValues = NULL; + + rc = sql_get_statement(v, CONTENT_SELECT_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 1, iDocid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_step(s); + if( rc!=SQLITE_ROW ) return rc; + + values = (const char **) sqlite3_malloc(v->nColumn * sizeof(const char *)); + for(i=0; inColumn; ++i){ + if( sqlite3_column_type(s, i)==SQLITE_NULL ){ + values[i] = NULL; + }else{ + values[i] = string_dup((char*)sqlite3_column_text(s, i)); + } + } + + /* We expect only one row. We must execute another sqlite3_step() + * to complete the iteration; otherwise the table will remain locked. */ + rc = sqlite3_step(s); + if( rc==SQLITE_DONE ){ + *pValues = values; + return SQLITE_OK; + } + + freeStringArray(v->nColumn, values); + return rc; +} + +/* delete from %_content where docid = [iDocid ] */ +static int content_delete(fulltext_vtab *v, sqlite_int64 iDocid){ + sqlite3_stmt *s; + int rc = sql_get_statement(v, CONTENT_DELETE_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 1, iDocid); + if( rc!=SQLITE_OK ) return rc; + + return sql_single_step(s); +} + +/* insert into %_segments values ([pData]) +** returns assigned blockid in *piBlockid +*/ +static int block_insert(fulltext_vtab *v, const char *pData, int nData, + sqlite_int64 *piBlockid){ + sqlite3_stmt *s; + int rc = sql_get_statement(v, BLOCK_INSERT_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_blob(s, 1, pData, nData, SQLITE_STATIC); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_step(s); + if( rc==SQLITE_ROW ) return SQLITE_ERROR; + if( rc!=SQLITE_DONE ) return rc; + + /* blockid column is an alias for rowid. */ + *piBlockid = sqlite3_last_insert_rowid(v->db); + return SQLITE_OK; +} + +/* delete from %_segments +** where blockid between [iStartBlockid] and [iEndBlockid] +** +** Deletes the range of blocks, inclusive, used to delete the blocks +** which form a segment. 
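+**
+** For instance, if a retired segment's blocks happened to span
+** blockids 100 through 164 (hypothetical values), a single call
+** block_delete(v, 100, 164) would remove them all.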
+*/ +static int block_delete(fulltext_vtab *v, + sqlite_int64 iStartBlockid, sqlite_int64 iEndBlockid){ + sqlite3_stmt *s; + int rc = sql_get_statement(v, BLOCK_DELETE_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 1, iStartBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 2, iEndBlockid); + if( rc!=SQLITE_OK ) return rc; + + return sql_single_step(s); +} + +/* Returns SQLITE_ROW with *pidx set to the maximum segment idx found +** at iLevel. Returns SQLITE_DONE if there are no segments at +** iLevel. Otherwise returns an error. +*/ +static int segdir_max_index(fulltext_vtab *v, int iLevel, int *pidx){ + sqlite3_stmt *s; + int rc = sql_get_statement(v, SEGDIR_MAX_INDEX_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int(s, 1, iLevel); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_step(s); + /* Should always get at least one row due to how max() works. */ + if( rc==SQLITE_DONE ) return SQLITE_DONE; + if( rc!=SQLITE_ROW ) return rc; + + /* NULL means that there were no inputs to max(). */ + if( SQLITE_NULL==sqlite3_column_type(s, 0) ){ + rc = sqlite3_step(s); + if( rc==SQLITE_ROW ) return SQLITE_ERROR; + return rc; + } + + *pidx = sqlite3_column_int(s, 0); + + /* We expect only one row. We must execute another sqlite3_step() + * to complete the iteration; otherwise the table will remain locked. */ + rc = sqlite3_step(s); + if( rc==SQLITE_ROW ) return SQLITE_ERROR; + if( rc!=SQLITE_DONE ) return rc; + return SQLITE_ROW; +} + +/* insert into %_segdir values ( +** [iLevel], [idx], +** [iStartBlockid], [iLeavesEndBlockid], [iEndBlockid], +** [pRootData] +** ) +*/ +static int segdir_set(fulltext_vtab *v, int iLevel, int idx, + sqlite_int64 iStartBlockid, + sqlite_int64 iLeavesEndBlockid, + sqlite_int64 iEndBlockid, + const char *pRootData, int nRootData){ + sqlite3_stmt *s; + int rc = sql_get_statement(v, SEGDIR_SET_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int(s, 1, iLevel); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int(s, 2, idx); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 3, iStartBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 4, iLeavesEndBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 5, iEndBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_blob(s, 6, pRootData, nRootData, SQLITE_STATIC); + if( rc!=SQLITE_OK ) return rc; + + return sql_single_step(s); +} + +/* Queries %_segdir for the block span of the segments in level +** iLevel. Returns SQLITE_DONE if there are no blocks for iLevel, +** SQLITE_ROW if there are blocks, else an error. +*/ +static int segdir_span(fulltext_vtab *v, int iLevel, + sqlite_int64 *piStartBlockid, + sqlite_int64 *piEndBlockid){ + sqlite3_stmt *s; + int rc = sql_get_statement(v, SEGDIR_SPAN_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int(s, 1, iLevel); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_step(s); + if( rc==SQLITE_DONE ) return SQLITE_DONE; /* Should never happen */ + if( rc!=SQLITE_ROW ) return rc; + + /* This happens if all segments at this level are entirely inline. */ + if( SQLITE_NULL==sqlite3_column_type(s, 0) ){ + /* We expect only one row. We must execute another sqlite3_step() + * to complete the iteration; otherwise the table will remain locked. 
*/ + int rc2 = sqlite3_step(s); + if( rc2==SQLITE_ROW ) return SQLITE_ERROR; + return rc2; + } + + *piStartBlockid = sqlite3_column_int64(s, 0); + *piEndBlockid = sqlite3_column_int64(s, 1); + + /* We expect only one row. We must execute another sqlite3_step() + * to complete the iteration; otherwise the table will remain locked. */ + rc = sqlite3_step(s); + if( rc==SQLITE_ROW ) return SQLITE_ERROR; + if( rc!=SQLITE_DONE ) return rc; + return SQLITE_ROW; +} + +/* Delete the segment blocks and segment directory records for all +** segments at iLevel. +*/ +static int segdir_delete(fulltext_vtab *v, int iLevel){ + sqlite3_stmt *s; + sqlite_int64 iStartBlockid, iEndBlockid; + int rc = segdir_span(v, iLevel, &iStartBlockid, &iEndBlockid); + if( rc!=SQLITE_ROW && rc!=SQLITE_DONE ) return rc; + + if( rc==SQLITE_ROW ){ + rc = block_delete(v, iStartBlockid, iEndBlockid); + if( rc!=SQLITE_OK ) return rc; + } + + /* Delete the segment directory itself. */ + rc = sql_get_statement(v, SEGDIR_DELETE_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 1, iLevel); + if( rc!=SQLITE_OK ) return rc; + + return sql_single_step(s); +} + +/* TODO(shess) clearPendingTerms() is far down the file because +** writeZeroSegment() is far down the file because LeafWriter is far +** down the file. Consider refactoring the code to move the non-vtab +** code above the vtab code so that we don't need this forward +** reference. +*/ +static int clearPendingTerms(fulltext_vtab *v); + +/* +** Free the memory used to contain a fulltext_vtab structure. +*/ +static void fulltext_vtab_destroy(fulltext_vtab *v){ + int iStmt, i; + + FTSTRACE(("FTS3 Destroy %p\n", v)); + for( iStmt=0; iStmtpFulltextStatements[iStmt]!=NULL ){ + sqlite3_finalize(v->pFulltextStatements[iStmt]); + v->pFulltextStatements[iStmt] = NULL; + } + } + + for( i=0; ipLeafSelectStmts[i]!=NULL ){ + sqlite3_finalize(v->pLeafSelectStmts[i]); + v->pLeafSelectStmts[i] = NULL; + } + } + + if( v->pTokenizer!=NULL ){ + v->pTokenizer->pModule->xDestroy(v->pTokenizer); + v->pTokenizer = NULL; + } + + clearPendingTerms(v); + + sqlite3_free(v->azColumn); + for(i = 0; i < v->nColumn; ++i) { + sqlite3_free(v->azContentColumn[i]); + } + sqlite3_free(v->azContentColumn); + sqlite3_free(v); +} + +/* +** Token types for parsing the arguments to xConnect or xCreate. +*/ +#define TOKEN_EOF 0 /* End of file */ +#define TOKEN_SPACE 1 /* Any kind of whitespace */ +#define TOKEN_ID 2 /* An identifier */ +#define TOKEN_STRING 3 /* A string literal */ +#define TOKEN_PUNCT 4 /* A single punctuation character */ + +/* +** If X is a character that can be used in an identifier then +** ftsIdChar(X) will be true. Otherwise it is false. +** +** For ASCII, any character with the high-order bit set is +** allowed in an identifier. For 7-bit characters, +** isFtsIdChar[X] must be 1. +** +** Ticket #1066. the SQL standard does not allow '$' in the +** middle of identfiers. But many SQL implementations do. +** SQLite will allow '$' in identifiers for compatibility. +** But the feature is undocumented. 
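+**
+** Per the table below, ftsIdChar('a'), ftsIdChar('_') and
+** ftsIdChar('$') are all true, while ftsIdChar('-') and
+** ftsIdChar(' ') are false.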
+*/ +static const char isFtsIdChar[] = { +/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */ + 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 2x */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 4x */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 5x */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 6x */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */ +}; +#define ftsIdChar(C) (((c=C)&0x80)!=0 || (c>0x1f && isFtsIdChar[c-0x20])) + + +/* +** Return the length of the token that begins at z[0]. +** Store the token type in *tokenType before returning. +*/ +static int ftsGetToken(const char *z, int *tokenType){ + int i, c; + switch( *z ){ + case 0: { + *tokenType = TOKEN_EOF; + return 0; + } + case ' ': case '\t': case '\n': case '\f': case '\r': { + for(i=1; safe_isspace(z[i]); i++){} + *tokenType = TOKEN_SPACE; + return i; + } + case '`': + case '\'': + case '"': { + int delim = z[0]; + for(i=1; (c=z[i])!=0; i++){ + if( c==delim ){ + if( z[i+1]==delim ){ + i++; + }else{ + break; + } + } + } + *tokenType = TOKEN_STRING; + return i + (c!=0); + } + case '[': { + for(i=1, c=z[0]; c!=']' && (c=z[i])!=0; i++){} + *tokenType = TOKEN_ID; + return i; + } + default: { + if( !ftsIdChar(*z) ){ + break; + } + for(i=1; ftsIdChar(z[i]); i++){} + *tokenType = TOKEN_ID; + return i; + } + } + *tokenType = TOKEN_PUNCT; + return 1; +} + +/* +** A token extracted from a string is an instance of the following +** structure. +*/ +typedef struct FtsToken { + const char *z; /* Pointer to token text. Not '\000' terminated */ + short int n; /* Length of the token text in bytes. */ +} FtsToken; + +/* +** Given a input string (which is really one of the argv[] parameters +** passed into xConnect or xCreate) split the string up into tokens. +** Return an array of pointers to '\000' terminated strings, one string +** for each non-whitespace token. +** +** The returned array is terminated by a single NULL pointer. +** +** Space to hold the returned array is obtained from a single +** malloc and should be freed by passing the return value to free(). +** The individual strings within the token list are all a part of +** the single memory allocation and will all be freed at once. +*/ +static char **tokenizeString(const char *z, int *pnToken){ + int nToken = 0; + FtsToken *aToken = sqlite3_malloc( strlen(z) * sizeof(aToken[0]) ); + int n = 1; + int e, i; + int totalSize = 0; + char **azToken; + char *zCopy; + while( n>0 ){ + n = ftsGetToken(z, &e); + if( e!=TOKEN_SPACE ){ + aToken[nToken].z = z; + aToken[nToken].n = n; + nToken++; + totalSize += n+1; + } + z += n; + } + azToken = (char**)sqlite3_malloc( nToken*sizeof(char*) + totalSize ); + zCopy = (char*)&azToken[nToken]; + nToken--; + for(i=0; i=0 ){ + azIn[j] = azIn[i]; + } + j++; + } + } + azIn[j] = 0; + } +} + + +/* +** Find the first alphanumeric token in the string zIn. Null-terminate +** this token. Remove any quotation marks. And return a pointer to +** the result. +*/ +static char *firstToken(char *zIn, char **pzTail){ + int n, ttype; + while(1){ + n = ftsGetToken(zIn, &ttype); + if( ttype==TOKEN_SPACE ){ + zIn += n; + }else if( ttype==TOKEN_EOF ){ + *pzTail = zIn; + return 0; + }else{ + zIn[n] = 0; + *pzTail = &zIn[1]; + dequoteString(zIn); + return zIn; + } + } + /*NOTREACHED*/ +} + +/* Return true if... 
+** +** * s begins with the string t, ignoring case +** * s is longer than t +** * The first character of s beyond t is not a alphanumeric +** +** Ignore leading space in *s. +** +** To put it another way, return true if the first token of +** s[] is t[]. +*/ +static int startsWith(const char *s, const char *t){ + while( safe_isspace(*s) ){ s++; } + while( *t ){ + if( safe_tolower(*s++)!=safe_tolower(*t++) ) return 0; + } + return *s!='_' && !safe_isalnum(*s); +} + +/* +** An instance of this structure defines the "spec" of a +** full text index. This structure is populated by parseSpec +** and use by fulltextConnect and fulltextCreate. +*/ +typedef struct TableSpec { + const char *zDb; /* Logical database name */ + const char *zName; /* Name of the full-text index */ + int nColumn; /* Number of columns to be indexed */ + char **azColumn; /* Original names of columns to be indexed */ + char **azContentColumn; /* Column names for %_content */ + char **azTokenizer; /* Name of tokenizer and its arguments */ +} TableSpec; + +/* +** Reclaim all of the memory used by a TableSpec +*/ +static void clearTableSpec(TableSpec *p) { + sqlite3_free(p->azColumn); + sqlite3_free(p->azContentColumn); + sqlite3_free(p->azTokenizer); +} + +/* Parse a CREATE VIRTUAL TABLE statement, which looks like this: + * + * CREATE VIRTUAL TABLE email + * USING fts3(subject, body, tokenize mytokenizer(myarg)) + * + * We return parsed information in a TableSpec structure. + * + */ +static int parseSpec(TableSpec *pSpec, int argc, const char *const*argv, + char**pzErr){ + int i, n; + char *z, *zDummy; + char **azArg; + const char *zTokenizer = 0; /* argv[] entry describing the tokenizer */ + + assert( argc>=3 ); + /* Current interface: + ** argv[0] - module name + ** argv[1] - database name + ** argv[2] - table name + ** argv[3..] - columns, optionally followed by tokenizer specification + ** and snippet delimiters specification. + */ + + /* Make a copy of the complete argv[][] array in a single allocation. + ** The argv[][] array is read-only and transient. We can write to the + ** copy in order to modify things and the copy is persistent. + */ + CLEAR(pSpec); + for(i=n=0; izDb = azArg[1]; + pSpec->zName = azArg[2]; + pSpec->nColumn = 0; + pSpec->azColumn = azArg; + zTokenizer = "tokenize simple"; + for(i=3; inColumn] = firstToken(azArg[i], &zDummy); + pSpec->nColumn++; + } + } + if( pSpec->nColumn==0 ){ + azArg[0] = "content"; + pSpec->nColumn = 1; + } + + /* + ** Construct the list of content column names. + ** + ** Each content column name will be of the form cNNAAAA + ** where NN is the column number and AAAA is the sanitized + ** column name. "sanitized" means that special characters are + ** converted to "_". The cNN prefix guarantees that all column + ** names are unique. + ** + ** The AAAA suffix is not strictly necessary. It is included + ** for the convenience of people who might examine the generated + ** %_content table and wonder what the columns are used for. + */ + pSpec->azContentColumn = sqlite3_malloc( pSpec->nColumn * sizeof(char *) ); + if( pSpec->azContentColumn==0 ){ + clearTableSpec(pSpec); + return SQLITE_NOMEM; + } + for(i=0; inColumn; i++){ + char *p; + pSpec->azContentColumn[i] = sqlite3_mprintf("c%d%s", i, azArg[i]); + for (p = pSpec->azContentColumn[i]; *p ; ++p) { + if( !safe_isalnum(*p) ) *p = '_'; + } + } + + /* + ** Parse the tokenizer specification string. 
+ */ + pSpec->azTokenizer = tokenizeString(zTokenizer, &n); + tokenListToIdList(pSpec->azTokenizer); + + return SQLITE_OK; +} + +/* +** Generate a CREATE TABLE statement that describes the schema of +** the virtual table. Return a pointer to this schema string. +** +** Space is obtained from sqlite3_mprintf() and should be freed +** using sqlite3_free(). +*/ +static char *fulltextSchema( + int nColumn, /* Number of columns */ + const char *const* azColumn, /* List of columns */ + const char *zTableName /* Name of the table */ +){ + int i; + char *zSchema, *zNext; + const char *zSep = "("; + zSchema = sqlite3_mprintf("CREATE TABLE x"); + for(i=0; ibase */ + v->db = db; + v->zDb = spec->zDb; /* Freed when azColumn is freed */ + v->zName = spec->zName; /* Freed when azColumn is freed */ + v->nColumn = spec->nColumn; + v->azContentColumn = spec->azContentColumn; + spec->azContentColumn = 0; + v->azColumn = spec->azColumn; + spec->azColumn = 0; + + if( spec->azTokenizer==0 ){ + return SQLITE_NOMEM; + } + + zTok = spec->azTokenizer[0]; + if( !zTok ){ + zTok = "simple"; + } + nTok = strlen(zTok)+1; + + m = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash, zTok, nTok); + if( !m ){ + *pzErr = sqlite3_mprintf("unknown tokenizer: %s", spec->azTokenizer[0]); + rc = SQLITE_ERROR; + goto err; + } + + for(n=0; spec->azTokenizer[n]; n++){} + if( n ){ + rc = m->xCreate(n-1, (const char*const*)&spec->azTokenizer[1], + &v->pTokenizer); + }else{ + rc = m->xCreate(0, 0, &v->pTokenizer); + } + if( rc!=SQLITE_OK ) goto err; + v->pTokenizer->pModule = m; + + /* TODO: verify the existence of backing tables foo_content, foo_term */ + + schema = fulltextSchema(v->nColumn, (const char*const*)v->azColumn, + spec->zName); + rc = sqlite3_declare_vtab(db, schema); + sqlite3_free(schema); + if( rc!=SQLITE_OK ) goto err; + + memset(v->pFulltextStatements, 0, sizeof(v->pFulltextStatements)); + + /* Indicate that the buffer is not live. */ + v->nPendingData = -1; + + *ppVTab = &v->base; + FTSTRACE(("FTS3 Connect %p\n", v)); + + return rc; + +err: + fulltext_vtab_destroy(v); + return rc; +} + +static int fulltextConnect( + sqlite3 *db, + void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVTab, + char **pzErr +){ + TableSpec spec; + int rc = parseSpec(&spec, argc, argv, pzErr); + if( rc!=SQLITE_OK ) return rc; + + rc = constructVtab(db, (fts3Hash *)pAux, &spec, ppVTab, pzErr); + clearTableSpec(&spec); + return rc; +} + +/* The %_content table holds the text of each document, with +** the docid column exposed as the SQLite rowid for the table. +*/ +/* TODO(shess) This comment needs elaboration to match the updated +** code. Work it into the top-of-file comment at that time. 
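+**
+** For instance, "CREATE VIRTUAL TABLE email USING fts3(subject, body)"
+** results in a backing table created (in fulltextCreate below) as:
+**
+**   CREATE TABLE %_content(docid INTEGER PRIMARY KEY, c0subject, c1body)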
+*/ +static int fulltextCreate(sqlite3 *db, void *pAux, + int argc, const char * const *argv, + sqlite3_vtab **ppVTab, char **pzErr){ + int rc; + TableSpec spec; + StringBuffer schema; + FTSTRACE(("FTS3 Create\n")); + + rc = parseSpec(&spec, argc, argv, pzErr); + if( rc!=SQLITE_OK ) return rc; + + initStringBuffer(&schema); + append(&schema, "CREATE TABLE %_content("); + append(&schema, " docid INTEGER PRIMARY KEY,"); + appendList(&schema, spec.nColumn, spec.azContentColumn); + append(&schema, ")"); + rc = sql_exec(db, spec.zDb, spec.zName, stringBufferData(&schema)); + stringBufferDestroy(&schema); + if( rc!=SQLITE_OK ) goto out; + + rc = sql_exec(db, spec.zDb, spec.zName, + "create table %_segments(" + " blockid INTEGER PRIMARY KEY," + " block blob" + ");" + ); + if( rc!=SQLITE_OK ) goto out; + + rc = sql_exec(db, spec.zDb, spec.zName, + "create table %_segdir(" + " level integer," + " idx integer," + " start_block integer," + " leaves_end_block integer," + " end_block integer," + " root blob," + " primary key(level, idx)" + ");"); + if( rc!=SQLITE_OK ) goto out; + + rc = constructVtab(db, (fts3Hash *)pAux, &spec, ppVTab, pzErr); + +out: + clearTableSpec(&spec); + return rc; +} + +/* Decide how to handle an SQL query. */ +static int fulltextBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){ + fulltext_vtab *v = (fulltext_vtab *)pVTab; + int i; + FTSTRACE(("FTS3 BestIndex\n")); + + for(i=0; inConstraint; ++i){ + const struct sqlite3_index_constraint *pConstraint; + pConstraint = &pInfo->aConstraint[i]; + if( pConstraint->usable ) { + if( (pConstraint->iColumn==-1 || pConstraint->iColumn==v->nColumn+1) && + pConstraint->op==SQLITE_INDEX_CONSTRAINT_EQ ){ + pInfo->idxNum = QUERY_DOCID; /* lookup by docid */ + FTSTRACE(("FTS3 QUERY_DOCID\n")); + } else if( pConstraint->iColumn>=0 && pConstraint->iColumn<=v->nColumn && + pConstraint->op==SQLITE_INDEX_CONSTRAINT_MATCH ){ + /* full-text search */ + pInfo->idxNum = QUERY_FULLTEXT + pConstraint->iColumn; + FTSTRACE(("FTS3 QUERY_FULLTEXT %d\n", pConstraint->iColumn)); + } else continue; + + pInfo->aConstraintUsage[i].argvIndex = 1; + pInfo->aConstraintUsage[i].omit = 1; + + /* An arbitrary value for now. + * TODO: Perhaps docid matches should be considered cheaper than + * full-text searches. 
*/ + pInfo->estimatedCost = 1.0; + + return SQLITE_OK; + } + } + pInfo->idxNum = QUERY_GENERIC; + return SQLITE_OK; +} + +static int fulltextDisconnect(sqlite3_vtab *pVTab){ + FTSTRACE(("FTS3 Disconnect %p\n", pVTab)); + fulltext_vtab_destroy((fulltext_vtab *)pVTab); + return SQLITE_OK; +} + +static int fulltextDestroy(sqlite3_vtab *pVTab){ + fulltext_vtab *v = (fulltext_vtab *)pVTab; + int rc; + + FTSTRACE(("FTS3 Destroy %p\n", pVTab)); + rc = sql_exec(v->db, v->zDb, v->zName, + "drop table if exists %_content;" + "drop table if exists %_segments;" + "drop table if exists %_segdir;" + ); + if( rc!=SQLITE_OK ) return rc; + + fulltext_vtab_destroy((fulltext_vtab *)pVTab); + return SQLITE_OK; +} + +static int fulltextOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){ + fulltext_cursor *c; + + c = (fulltext_cursor *) sqlite3_malloc(sizeof(fulltext_cursor)); + if( c ){ + memset(c, 0, sizeof(fulltext_cursor)); + /* sqlite will initialize c->base */ + *ppCursor = &c->base; + FTSTRACE(("FTS3 Open %p: %p\n", pVTab, c)); + return SQLITE_OK; + }else{ + return SQLITE_NOMEM; + } +} + + +/* Free all of the dynamically allocated memory held by *q +*/ +static void queryClear(Query *q){ + int i; + for(i = 0; i < q->nTerms; ++i){ + sqlite3_free(q->pTerms[i].pTerm); + } + sqlite3_free(q->pTerms); + CLEAR(q); +} + +/* Free all of the dynamically allocated memory held by the +** Snippet +*/ +static void snippetClear(Snippet *p){ + sqlite3_free(p->aMatch); + sqlite3_free(p->zOffset); + sqlite3_free(p->zSnippet); + CLEAR(p); +} +/* +** Append a single entry to the p->aMatch[] log. +*/ +static void snippetAppendMatch( + Snippet *p, /* Append the entry to this snippet */ + int iCol, int iTerm, /* The column and query term */ + int iToken, /* Matching token in document */ + int iStart, int nByte /* Offset and size of the match */ +){ + int i; + struct snippetMatch *pMatch; + if( p->nMatch+1>=p->nAlloc ){ + p->nAlloc = p->nAlloc*2 + 10; + p->aMatch = sqlite3_realloc(p->aMatch, p->nAlloc*sizeof(p->aMatch[0]) ); + if( p->aMatch==0 ){ + p->nMatch = 0; + p->nAlloc = 0; + return; + } + } + i = p->nMatch++; + pMatch = &p->aMatch[i]; + pMatch->iCol = iCol; + pMatch->iTerm = iTerm; + pMatch->iToken = iToken; + pMatch->iStart = iStart; + pMatch->nByte = nByte; +} + +/* +** Sizing information for the circular buffer used in snippetOffsetsOfColumn() +*/ +#define FTS3_ROTOR_SZ (32) +#define FTS3_ROTOR_MASK (FTS3_ROTOR_SZ-1) + +/* +** Add entries to pSnippet->aMatch[] for every match that occurs against +** document zDoc[0..nDoc-1] which is stored in column iColumn. 
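+**
+** For instance, for the two-term query "sqlite database" run against
+** a column containing "... the SQLite database engine ...", one
+** aMatch[] entry is appended per matching document token, recording
+** the byte offset and length of "SQLite" and of "database".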
+*/ +static void snippetOffsetsOfColumn( + Query *pQuery, + Snippet *pSnippet, + int iColumn, + const char *zDoc, + int nDoc +){ + const sqlite3_tokenizer_module *pTModule; /* The tokenizer module */ + sqlite3_tokenizer *pTokenizer; /* The specific tokenizer */ + sqlite3_tokenizer_cursor *pTCursor; /* Tokenizer cursor */ + fulltext_vtab *pVtab; /* The full text index */ + int nColumn; /* Number of columns in the index */ + const QueryTerm *aTerm; /* Query string terms */ + int nTerm; /* Number of query string terms */ + int i, j; /* Loop counters */ + int rc; /* Return code */ + unsigned int match, prevMatch; /* Phrase search bitmasks */ + const char *zToken; /* Next token from the tokenizer */ + int nToken; /* Size of zToken */ + int iBegin, iEnd, iPos; /* Offsets of beginning and end */ + + /* The following variables keep a circular buffer of the last + ** few tokens */ + unsigned int iRotor = 0; /* Index of current token */ + int iRotorBegin[FTS3_ROTOR_SZ]; /* Beginning offset of token */ + int iRotorLen[FTS3_ROTOR_SZ]; /* Length of token */ + + pVtab = pQuery->pFts; + nColumn = pVtab->nColumn; + pTokenizer = pVtab->pTokenizer; + pTModule = pTokenizer->pModule; + rc = pTModule->xOpen(pTokenizer, zDoc, nDoc, &pTCursor); + if( rc ) return; + pTCursor->pTokenizer = pTokenizer; + aTerm = pQuery->pTerms; + nTerm = pQuery->nTerms; + if( nTerm>=FTS3_ROTOR_SZ ){ + nTerm = FTS3_ROTOR_SZ - 1; + } + prevMatch = 0; + while(1){ + rc = pTModule->xNext(pTCursor, &zToken, &nToken, &iBegin, &iEnd, &iPos); + if( rc ) break; + iRotorBegin[iRotor&FTS3_ROTOR_MASK] = iBegin; + iRotorLen[iRotor&FTS3_ROTOR_MASK] = iEnd-iBegin; + match = 0; + for(i=0; i=0 && iColnToken ) continue; + if( !aTerm[i].isPrefix && aTerm[i].nTerm1 && (prevMatch & (1<=0; j--){ + int k = (iRotor-j) & FTS3_ROTOR_MASK; + snippetAppendMatch(pSnippet, iColumn, i-j, iPos-j, + iRotorBegin[k], iRotorLen[k]); + } + } + } + prevMatch = match<<1; + iRotor++; + } + pTModule->xClose(pTCursor); +} + +/* +** Remove entries from the pSnippet structure to account for the NEAR +** operator. When this is called, pSnippet contains the list of token +** offsets produced by treating all NEAR operators as AND operators. +** This function removes any entries that should not be present after +** accounting for the NEAR restriction. For example, if the queried +** document is: +** +** "A B C D E A" +** +** and the query is: +** +** A NEAR/0 E +** +** then when this function is called the Snippet contains token offsets +** 0, 4 and 5. This function removes the "0" entry (because the first A +** is not near enough to an E). 
+*/ +static void trimSnippetOffsetsForNear(Query *pQuery, Snippet *pSnippet){ + int ii; + int iDir = 1; + + while(iDir>-2) { + assert( iDir==1 || iDir==-1 ); + for(ii=0; iinMatch; ii++){ + int jj; + int nNear; + struct snippetMatch *pMatch = &pSnippet->aMatch[ii]; + QueryTerm *pQueryTerm = &pQuery->pTerms[pMatch->iTerm]; + + if( (pMatch->iTerm+iDir)<0 + || (pMatch->iTerm+iDir)>=pQuery->nTerms + ){ + continue; + } + + nNear = pQueryTerm->nNear; + if( iDir<0 ){ + nNear = pQueryTerm[-1].nNear; + } + + if( pMatch->iTerm>=0 && nNear ){ + int isOk = 0; + int iNextTerm = pMatch->iTerm+iDir; + int iPrevTerm = iNextTerm; + + int iEndToken; + int iStartToken; + + if( iDir<0 ){ + int nPhrase = 1; + iStartToken = pMatch->iToken; + while( (pMatch->iTerm+nPhrase)nTerms + && pQuery->pTerms[pMatch->iTerm+nPhrase].iPhrase>1 + ){ + nPhrase++; + } + iEndToken = iStartToken + nPhrase - 1; + }else{ + iEndToken = pMatch->iToken; + iStartToken = pMatch->iToken+1-pQueryTerm->iPhrase; + } + + while( pQuery->pTerms[iNextTerm].iPhrase>1 ){ + iNextTerm--; + } + while( (iPrevTerm+1)nTerms && + pQuery->pTerms[iPrevTerm+1].iPhrase>1 + ){ + iPrevTerm++; + } + + for(jj=0; isOk==0 && jjnMatch; jj++){ + struct snippetMatch *p = &pSnippet->aMatch[jj]; + if( p->iCol==pMatch->iCol && (( + p->iTerm==iNextTerm && + p->iToken>iEndToken && + p->iToken<=iEndToken+nNear + ) || ( + p->iTerm==iPrevTerm && + p->iTokeniToken>=iStartToken-nNear + ))){ + isOk = 1; + } + } + if( !isOk ){ + for(jj=1-pQueryTerm->iPhrase; jj<=0; jj++){ + pMatch[jj].iTerm = -1; + } + ii = -1; + iDir = 1; + } + } + } + iDir -= 2; + } +} + +/* +** Compute all offsets for the current row of the query. +** If the offsets have already been computed, this routine is a no-op. +*/ +static void snippetAllOffsets(fulltext_cursor *p){ + int nColumn; + int iColumn, i; + int iFirst, iLast; + fulltext_vtab *pFts; + + if( p->snippet.nMatch ) return; + if( p->q.nTerms==0 ) return; + pFts = p->q.pFts; + nColumn = pFts->nColumn; + iColumn = (p->iCursorType - QUERY_FULLTEXT); + if( iColumn<0 || iColumn>=nColumn ){ + iFirst = 0; + iLast = nColumn-1; + }else{ + iFirst = iColumn; + iLast = iColumn; + } + for(i=iFirst; i<=iLast; i++){ + const char *zDoc; + int nDoc; + zDoc = (const char*)sqlite3_column_text(p->pStmt, i+1); + nDoc = sqlite3_column_bytes(p->pStmt, i+1); + snippetOffsetsOfColumn(&p->q, &p->snippet, i, zDoc, nDoc); + } + + trimSnippetOffsetsForNear(&p->q, &p->snippet); +} + +/* +** Convert the information in the aMatch[] array of the snippet +** into the string zOffset[0..nOffset-1]. +*/ +static void snippetOffsetText(Snippet *p){ + int i; + int cnt = 0; + StringBuffer sb; + char zBuf[200]; + if( p->zOffset ) return; + initStringBuffer(&sb); + for(i=0; inMatch; i++){ + struct snippetMatch *pMatch = &p->aMatch[i]; + if( pMatch->iTerm>=0 ){ + /* If snippetMatch.iTerm is less than 0, then the match was + ** discarded as part of processing the NEAR operator (see the + ** trimSnippetOffsetsForNear() function for details). Ignore + ** it in this case + */ + zBuf[0] = ' '; + sprintf(&zBuf[cnt>0], "%d %d %d %d", pMatch->iCol, + pMatch->iTerm, pMatch->iStart, pMatch->nByte); + append(&sb, zBuf); + cnt++; + } + } + p->zOffset = stringBufferData(&sb); + p->nOffset = stringBufferLength(&sb); +} + +/* +** zDoc[0..nDoc-1] is phrase of text. aMatch[0..nMatch-1] are a set +** of matching words some of which might be in zDoc. zDoc is column +** number iCol. +** +** iBreak is suggested spot in zDoc where we could begin or end an +** excerpt. 
Return a value similar to iBreak but possibly adjusted +** to be a little left or right so that the break point is better. +*/ +static int wordBoundary( + int iBreak, /* The suggested break point */ + const char *zDoc, /* Document text */ + int nDoc, /* Number of bytes in zDoc[] */ + struct snippetMatch *aMatch, /* Matching words */ + int nMatch, /* Number of entries in aMatch[] */ + int iCol /* The column number for zDoc[] */ +){ + int i; + if( iBreak<=10 ){ + return 0; + } + if( iBreak>=nDoc-10 ){ + return nDoc; + } + for(i=0; i0 && aMatch[i-1].iStart+aMatch[i-1].nByte>=iBreak ){ + return aMatch[i-1].iStart; + } + } + for(i=1; i<=10; i++){ + if( safe_isspace(zDoc[iBreak-i]) ){ + return iBreak - i + 1; + } + if( safe_isspace(zDoc[iBreak+i]) ){ + return iBreak + i + 1; + } + } + return iBreak; +} + + + +/* +** Allowed values for Snippet.aMatch[].snStatus +*/ +#define SNIPPET_IGNORE 0 /* It is ok to omit this match from the snippet */ +#define SNIPPET_DESIRED 1 /* We want to include this match in the snippet */ + +/* +** Generate the text of a snippet. +*/ +static void snippetText( + fulltext_cursor *pCursor, /* The cursor we need the snippet for */ + const char *zStartMark, /* Markup to appear before each match */ + const char *zEndMark, /* Markup to appear after each match */ + const char *zEllipsis /* Ellipsis mark */ +){ + int i, j; + struct snippetMatch *aMatch; + int nMatch; + int nDesired; + StringBuffer sb; + int tailCol; + int tailOffset; + int iCol; + int nDoc; + const char *zDoc; + int iStart, iEnd; + int tailEllipsis = 0; + int iMatch; + + + sqlite3_free(pCursor->snippet.zSnippet); + pCursor->snippet.zSnippet = 0; + aMatch = pCursor->snippet.aMatch; + nMatch = pCursor->snippet.nMatch; + initStringBuffer(&sb); + + for(i=0; iq.nTerms; i++){ + for(j=0; j0; i++){ + if( aMatch[i].snStatus!=SNIPPET_DESIRED ) continue; + nDesired--; + iCol = aMatch[i].iCol; + zDoc = (const char*)sqlite3_column_text(pCursor->pStmt, iCol+1); + nDoc = sqlite3_column_bytes(pCursor->pStmt, iCol+1); + iStart = aMatch[i].iStart - 40; + iStart = wordBoundary(iStart, zDoc, nDoc, aMatch, nMatch, iCol); + if( iStart<=10 ){ + iStart = 0; + } + if( iCol==tailCol && iStart<=tailOffset+20 ){ + iStart = tailOffset; + } + if( (iCol!=tailCol && tailCol>=0) || iStart!=tailOffset ){ + trimWhiteSpace(&sb); + appendWhiteSpace(&sb); + append(&sb, zEllipsis); + appendWhiteSpace(&sb); + } + iEnd = aMatch[i].iStart + aMatch[i].nByte + 40; + iEnd = wordBoundary(iEnd, zDoc, nDoc, aMatch, nMatch, iCol); + if( iEnd>=nDoc-10 ){ + iEnd = nDoc; + tailEllipsis = 0; + }else{ + tailEllipsis = 1; + } + while( iMatchsnippet.zSnippet = stringBufferData(&sb); + pCursor->snippet.nSnippet = stringBufferLength(&sb); +} + + +/* +** Close the cursor. For additional information see the documentation +** on the xClose method of the virtual table interface. +*/ +static int fulltextClose(sqlite3_vtab_cursor *pCursor){ + fulltext_cursor *c = (fulltext_cursor *) pCursor; + FTSTRACE(("FTS3 Close %p\n", c)); + sqlite3_finalize(c->pStmt); + queryClear(&c->q); + snippetClear(&c->snippet); + if( c->result.nData!=0 ) dlrDestroy(&c->reader); + dataBufferDestroy(&c->result); + sqlite3_free(c); + return SQLITE_OK; +} + +static int fulltextNext(sqlite3_vtab_cursor *pCursor){ + fulltext_cursor *c = (fulltext_cursor *) pCursor; + int rc; + + FTSTRACE(("FTS3 Next %p\n", pCursor)); + snippetClear(&c->snippet); + if( c->iCursorType < QUERY_FULLTEXT ){ + /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. 
*/ + rc = sqlite3_step(c->pStmt); + switch( rc ){ + case SQLITE_ROW: + c->eof = 0; + return SQLITE_OK; + case SQLITE_DONE: + c->eof = 1; + return SQLITE_OK; + default: + c->eof = 1; + return rc; + } + } else { /* full-text query */ + rc = sqlite3_reset(c->pStmt); + if( rc!=SQLITE_OK ) return rc; + + if( c->result.nData==0 || dlrAtEnd(&c->reader) ){ + c->eof = 1; + return SQLITE_OK; + } + rc = sqlite3_bind_int64(c->pStmt, 1, dlrDocid(&c->reader)); + dlrStep(&c->reader); + if( rc!=SQLITE_OK ) return rc; + /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */ + rc = sqlite3_step(c->pStmt); + if( rc==SQLITE_ROW ){ /* the case we expect */ + c->eof = 0; + return SQLITE_OK; + } + /* an error occurred; abort */ + return rc==SQLITE_DONE ? SQLITE_ERROR : rc; + } +} + + +/* TODO(shess) If we pushed LeafReader to the top of the file, or to +** another file, term_select() could be pushed above +** docListOfTerm(). +*/ +static int termSelect(fulltext_vtab *v, int iColumn, + const char *pTerm, int nTerm, int isPrefix, + DocListType iType, DataBuffer *out); + +/* Return a DocList corresponding to the query term *pTerm. If *pTerm +** is the first term of a phrase query, go ahead and evaluate the phrase +** query and return the doclist for the entire phrase query. +** +** The resulting DL_DOCIDS doclist is stored in pResult, which is +** overwritten. +*/ +static int docListOfTerm( + fulltext_vtab *v, /* The full text index */ + int iColumn, /* column to restrict to. No restriction if >=nColumn */ + QueryTerm *pQTerm, /* Term we are looking for, or 1st term of a phrase */ + DataBuffer *pResult /* Write the result here */ +){ + DataBuffer left, right, new; + int i, rc; + + /* No phrase search if no position info. */ + assert( pQTerm->nPhrase==0 || DL_DEFAULT!=DL_DOCIDS ); + + /* This code should never be called with buffered updates. */ + assert( v->nPendingData<0 ); + + dataBufferInit(&left, 0); + rc = termSelect(v, iColumn, pQTerm->pTerm, pQTerm->nTerm, pQTerm->isPrefix, + (0nPhrase ? DL_POSITIONS : DL_DOCIDS), &left); + if( rc ) return rc; + for(i=1; i<=pQTerm->nPhrase && left.nData>0; i++){ + /* If this token is connected to the next by a NEAR operator, and + ** the next token is the start of a phrase, then set nPhraseRight + ** to the number of tokens in the phrase. Otherwise leave it at 1. + */ + int nPhraseRight = 1; + while( (i+nPhraseRight)<=pQTerm->nPhrase + && pQTerm[i+nPhraseRight].nNear==0 + ){ + nPhraseRight++; + } + + dataBufferInit(&right, 0); + rc = termSelect(v, iColumn, pQTerm[i].pTerm, pQTerm[i].nTerm, + pQTerm[i].isPrefix, DL_POSITIONS, &right); + if( rc ){ + dataBufferDestroy(&left); + return rc; + } + dataBufferInit(&new, 0); + docListPhraseMerge(left.pData, left.nData, right.pData, right.nData, + pQTerm[i-1].nNear, pQTerm[i-1].iPhrase + nPhraseRight, + ((inPhrase) ? DL_POSITIONS : DL_DOCIDS), + &new); + dataBufferDestroy(&left); + dataBufferDestroy(&right); + left = new; + } + *pResult = left; + return SQLITE_OK; +} + +/* Add a new term pTerm[0..nTerm-1] to the query *q. 
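/* Editor's sketch (hypothetical helper, not the patch's docListPhraseMerge):
** the heart of a phrase merge is intersecting two sorted position lists so
** that the right-hand term appears exactly one token after the left-hand
** term.  aLeft/aRight hold token positions of the two terms within a single
** document; positions where the two-word phrase ends are written to aOut,
** which must have room for nRight entries.
*/
static int phraseMergePositions(const int *aLeft, int nLeft,
                                const int *aRight, int nRight,
                                int *aOut){
  int i = 0, j = 0, nOut = 0;
  while( i<nLeft && j<nRight ){
    if( aRight[j]==aLeft[i]+1 ){
      aOut[nOut++] = aRight[j];      /* phrase match ending at aRight[j] */
      i++;
      j++;
    }else if( aRight[j]<=aLeft[i] ){
      j++;                           /* right-hand term is too early */
    }else{
      i++;                           /* left-hand term is too early */
    }
  }
  return nOut;                       /* number of phrase matches in this doc */
}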
+*/ +static void queryAdd(Query *q, const char *pTerm, int nTerm){ + QueryTerm *t; + ++q->nTerms; + q->pTerms = sqlite3_realloc(q->pTerms, q->nTerms * sizeof(q->pTerms[0])); + if( q->pTerms==0 ){ + q->nTerms = 0; + return; + } + t = &q->pTerms[q->nTerms - 1]; + CLEAR(t); + t->pTerm = sqlite3_malloc(nTerm+1); + memcpy(t->pTerm, pTerm, nTerm); + t->pTerm[nTerm] = 0; + t->nTerm = nTerm; + t->isOr = q->nextIsOr; + t->isPrefix = 0; + q->nextIsOr = 0; + t->iColumn = q->nextColumn; + q->nextColumn = q->dfltColumn; +} + +/* +** Check to see if the string zToken[0...nToken-1] matches any +** column name in the virtual table. If it does, +** return the zero-indexed column number. If not, return -1. +*/ +static int checkColumnSpecifier( + fulltext_vtab *pVtab, /* The virtual table */ + const char *zToken, /* Text of the token */ + int nToken /* Number of characters in the token */ +){ + int i; + for(i=0; inColumn; i++){ + if( memcmp(pVtab->azColumn[i], zToken, nToken)==0 + && pVtab->azColumn[i][nToken]==0 ){ + return i; + } + } + return -1; +} + +/* +** Parse the text at pSegment[0..nSegment-1]. Add additional terms +** to the query being assemblied in pQuery. +** +** inPhrase is true if pSegment[0..nSegement-1] is contained within +** double-quotes. If inPhrase is true, then the first term +** is marked with the number of terms in the phrase less one and +** OR and "-" syntax is ignored. If inPhrase is false, then every +** term found is marked with nPhrase=0 and OR and "-" syntax is significant. +*/ +static int tokenizeSegment( + sqlite3_tokenizer *pTokenizer, /* The tokenizer to use */ + const char *pSegment, int nSegment, /* Query expression being parsed */ + int inPhrase, /* True if within "..." */ + Query *pQuery /* Append results here */ +){ + const sqlite3_tokenizer_module *pModule = pTokenizer->pModule; + sqlite3_tokenizer_cursor *pCursor; + int firstIndex = pQuery->nTerms; + int iCol; + int nTerm = 1; + + int rc = pModule->xOpen(pTokenizer, pSegment, nSegment, &pCursor); + if( rc!=SQLITE_OK ) return rc; + pCursor->pTokenizer = pTokenizer; + + while( 1 ){ + const char *pToken; + int nToken, iBegin, iEnd, iPos; + + rc = pModule->xNext(pCursor, + &pToken, &nToken, + &iBegin, &iEnd, &iPos); + if( rc!=SQLITE_OK ) break; + if( !inPhrase && + pSegment[iEnd]==':' && + (iCol = checkColumnSpecifier(pQuery->pFts, pToken, nToken))>=0 ){ + pQuery->nextColumn = iCol; + continue; + } + if( !inPhrase && pQuery->nTerms>0 && nToken==2 + && pSegment[iBegin+0]=='O' + && pSegment[iBegin+1]=='R' + ){ + pQuery->nextIsOr = 1; + continue; + } + if( !inPhrase && pQuery->nTerms>0 && !pQuery->nextIsOr && nToken==4 + && pSegment[iBegin+0]=='N' + && pSegment[iBegin+1]=='E' + && pSegment[iBegin+2]=='A' + && pSegment[iBegin+3]=='R' + ){ + QueryTerm *pTerm = &pQuery->pTerms[pQuery->nTerms-1]; + if( (iBegin+6)='0' && pSegment[iBegin+5]<='9' + ){ + pTerm->nNear = (pSegment[iBegin+5] - '0'); + nToken += 2; + if( pSegment[iBegin+6]>='0' && pSegment[iBegin+6]<=9 ){ + pTerm->nNear = pTerm->nNear * 10 + (pSegment[iBegin+6] - '0'); + iEnd++; + } + pModule->xNext(pCursor, &pToken, &nToken, &iBegin, &iEnd, &iPos); + } else { + pTerm->nNear = SQLITE_FTS3_DEFAULT_NEAR_PARAM; + } + pTerm->nNear++; + continue; + } + + queryAdd(pQuery, pToken, nToken); + if( !inPhrase && iBegin>0 && pSegment[iBegin-1]=='-' ){ + pQuery->pTerms[pQuery->nTerms-1].isNot = 1; + } + if( iEndpTerms[pQuery->nTerms-1].isPrefix = 1; + } + pQuery->pTerms[pQuery->nTerms-1].iPhrase = nTerm; + if( inPhrase ){ + nTerm++; + } + } + + if( inPhrase && 
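/* Editor's note (not part of the patch): examples of the MATCH syntax that
** tokenizeSegment() above recognises.  The column name "title" is only an
** illustration; any declared column of the FTS3 table works.
*/
static const char *azExampleQueries[] = {
  "\"sqlite database\"",      /* phrase: the two terms must be adjacent        */
  "title:sqlite",             /* restrict the next term to column "title"      */
  "sqlite OR fts3",           /* union of the two doclists                     */
  "database NEAR/2 corrupt",  /* at most 2 intervening tokens; bare NEAR       */
                              /* falls back to the default NEAR distance       */
  "sqlite -mysql",            /* exclude rows containing "mysql"               */
  "data*"                     /* prefix query                                  */
};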
pQuery->nTerms>firstIndex ){ + pQuery->pTerms[firstIndex].nPhrase = pQuery->nTerms - firstIndex - 1; + } + + return pModule->xClose(pCursor); +} + +/* Parse a query string, yielding a Query object pQuery. +** +** The calling function will need to queryClear() to clean up +** the dynamically allocated memory held by pQuery. +*/ +static int parseQuery( + fulltext_vtab *v, /* The fulltext index */ + const char *zInput, /* Input text of the query string */ + int nInput, /* Size of the input text */ + int dfltColumn, /* Default column of the index to match against */ + Query *pQuery /* Write the parse results here. */ +){ + int iInput, inPhrase = 0; + int ii; + QueryTerm *aTerm; + + if( zInput==0 ) nInput = 0; + if( nInput<0 ) nInput = strlen(zInput); + pQuery->nTerms = 0; + pQuery->pTerms = NULL; + pQuery->nextIsOr = 0; + pQuery->nextColumn = dfltColumn; + pQuery->dfltColumn = dfltColumn; + pQuery->pFts = v; + + for(iInput=0; iInputiInput ){ + tokenizeSegment(v->pTokenizer, zInput+iInput, i-iInput, inPhrase, + pQuery); + } + iInput = i; + if( ipTerms; + for(ii=0; iinTerms; ii++){ + if( aTerm[ii].nNear || aTerm[ii].nPhrase ){ + while (aTerm[ii+aTerm[ii].nPhrase].nNear) { + aTerm[ii].nPhrase += (1 + aTerm[ii+aTerm[ii].nPhrase+1].nPhrase); + } + } + } + + return SQLITE_OK; +} + +/* TODO(shess) Refactor the code to remove this forward decl. */ +static int flushPendingTerms(fulltext_vtab *v); + +/* Perform a full-text query using the search expression in +** zInput[0..nInput-1]. Return a list of matching documents +** in pResult. +** +** Queries must match column iColumn. Or if iColumn>=nColumn +** they are allowed to match against any column. +*/ +static int fulltextQuery( + fulltext_vtab *v, /* The full text index */ + int iColumn, /* Match against this column by default */ + const char *zInput, /* The query string */ + int nInput, /* Number of bytes in zInput[] */ + DataBuffer *pResult, /* Write the result doclist here */ + Query *pQuery /* Put parsed query string here */ +){ + int i, iNext, rc; + DataBuffer left, right, or, new; + int nNot = 0; + QueryTerm *aTerm; + + /* TODO(shess) Instead of flushing pendingTerms, we could query for + ** the relevant term and merge the doclist into what we receive from + ** the database. Wait and see if this is a common issue, first. + ** + ** A good reason not to flush is to not generate update-related + ** error codes from here. + */ + + /* Flush any buffered updates before executing the query. */ + rc = flushPendingTerms(v); + if( rc!=SQLITE_OK ) return rc; + + /* TODO(shess) I think that the queryClear() calls below are not + ** necessary, because fulltextClose() already clears the query. + */ + rc = parseQuery(v, zInput, nInput, iColumn, pQuery); + if( rc!=SQLITE_OK ) return rc; + + /* Empty or NULL queries return no results. */ + if( pQuery->nTerms==0 ){ + dataBufferInit(pResult, 0); + return SQLITE_OK; + } + + /* Merge AND terms. */ + /* TODO(shess) I think we can early-exit if( i>nNot && left.nData==0 ). 
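/* Worked example (editor's note, not part of the patch): for the query
**
**     alpha OR beta gamma -delta
**
** the merging loop below proceeds roughly as
**
**     left = docs(alpha) UNION docs(beta)     -- OR terms folded into one
**     left = left INTERSECT docs(gamma)       -- implicit AND
**
** and the separate EXCEPT pass then removes the NOT term:
**
**     left = left EXCEPT docs(delta)
*/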
*/ + aTerm = pQuery->pTerms; + for(i = 0; inTerms; i=iNext){ + if( aTerm[i].isNot ){ + /* Handle all NOT terms in a separate pass */ + nNot++; + iNext = i + aTerm[i].nPhrase+1; + continue; + } + iNext = i + aTerm[i].nPhrase + 1; + rc = docListOfTerm(v, aTerm[i].iColumn, &aTerm[i], &right); + if( rc ){ + if( i!=nNot ) dataBufferDestroy(&left); + queryClear(pQuery); + return rc; + } + while( iNextnTerms && aTerm[iNext].isOr ){ + rc = docListOfTerm(v, aTerm[iNext].iColumn, &aTerm[iNext], &or); + iNext += aTerm[iNext].nPhrase + 1; + if( rc ){ + if( i!=nNot ) dataBufferDestroy(&left); + dataBufferDestroy(&right); + queryClear(pQuery); + return rc; + } + dataBufferInit(&new, 0); + docListOrMerge(right.pData, right.nData, or.pData, or.nData, &new); + dataBufferDestroy(&right); + dataBufferDestroy(&or); + right = new; + } + if( i==nNot ){ /* first term processed. */ + left = right; + }else{ + dataBufferInit(&new, 0); + docListAndMerge(left.pData, left.nData, right.pData, right.nData, &new); + dataBufferDestroy(&right); + dataBufferDestroy(&left); + left = new; + } + } + + if( nNot==pQuery->nTerms ){ + /* We do not yet know how to handle a query of only NOT terms */ + return SQLITE_ERROR; + } + + /* Do the EXCEPT terms */ + for(i=0; inTerms; i += aTerm[i].nPhrase + 1){ + if( !aTerm[i].isNot ) continue; + rc = docListOfTerm(v, aTerm[i].iColumn, &aTerm[i], &right); + if( rc ){ + queryClear(pQuery); + dataBufferDestroy(&left); + return rc; + } + dataBufferInit(&new, 0); + docListExceptMerge(left.pData, left.nData, right.pData, right.nData, &new); + dataBufferDestroy(&right); + dataBufferDestroy(&left); + left = new; + } + + *pResult = left; + return rc; +} + +/* +** This is the xFilter interface for the virtual table. See +** the virtual table xFilter method documentation for additional +** information. +** +** If idxNum==QUERY_GENERIC then do a full table scan against +** the %_content table. +** +** If idxNum==QUERY_DOCID then do a docid lookup for a single entry +** in the %_content table. +** +** If idxNum>=QUERY_FULLTEXT then use the full text index. The +** column on the left-hand side of the MATCH operator is column +** number idxNum-QUERY_FULLTEXT, 0 indexed. argv[0] is the right-hand +** side of the MATCH operator. +*/ +/* TODO(shess) Upgrade the cursor initialization and destruction to +** account for fulltextFilter() being called multiple times on the +** same cursor. The current solution is very fragile. Apply fix to +** fts3 as appropriate. 
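/* Editor's note (not part of the patch): against a hypothetical FTS3 table
** "mail(subject, body)", the three idxNum cases described above correspond
** to statements such as
**
**     SELECT * FROM mail;                          -- QUERY_GENERIC
**     SELECT * FROM mail WHERE docid = 42;         -- QUERY_DOCID
**     SELECT * FROM mail WHERE body MATCH 'fts3';  -- QUERY_FULLTEXT + 1
**
** where the trailing 1 is the 0-indexed column number of "body".
*/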
+*/ +static int fulltextFilter( + sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ + int idxNum, const char *idxStr, /* Which indexing scheme to use */ + int argc, sqlite3_value **argv /* Arguments for the indexing scheme */ +){ + fulltext_cursor *c = (fulltext_cursor *) pCursor; + fulltext_vtab *v = cursor_vtab(c); + int rc; + StringBuffer sb; + + FTSTRACE(("FTS3 Filter %p\n",pCursor)); + + initStringBuffer(&sb); + append(&sb, "SELECT docid, "); + appendList(&sb, v->nColumn, v->azContentColumn); + append(&sb, " FROM %_content"); + if( idxNum!=QUERY_GENERIC ) append(&sb, " WHERE docid = ?"); + sqlite3_finalize(c->pStmt); + rc = sql_prepare(v->db, v->zDb, v->zName, &c->pStmt, stringBufferData(&sb)); + stringBufferDestroy(&sb); + if( rc!=SQLITE_OK ) return rc; + + c->iCursorType = idxNum; + switch( idxNum ){ + case QUERY_GENERIC: + break; + + case QUERY_DOCID: + rc = sqlite3_bind_int64(c->pStmt, 1, sqlite3_value_int64(argv[0])); + if( rc!=SQLITE_OK ) return rc; + break; + + default: /* full-text search */ + { + const char *zQuery = (const char *)sqlite3_value_text(argv[0]); + assert( idxNum<=QUERY_FULLTEXT+v->nColumn); + assert( argc==1 ); + queryClear(&c->q); + if( c->result.nData!=0 ){ + /* This case happens if the same cursor is used repeatedly. */ + dlrDestroy(&c->reader); + dataBufferReset(&c->result); + }else{ + dataBufferInit(&c->result, 0); + } + rc = fulltextQuery(v, idxNum-QUERY_FULLTEXT, zQuery, -1, &c->result, &c->q); + if( rc!=SQLITE_OK ) return rc; + if( c->result.nData!=0 ){ + dlrInit(&c->reader, DL_DOCIDS, c->result.pData, c->result.nData); + } + break; + } + } + + return fulltextNext(pCursor); +} + +/* This is the xEof method of the virtual table. The SQLite core +** calls this routine to find out if it has reached the end of +** a query's results set. +*/ +static int fulltextEof(sqlite3_vtab_cursor *pCursor){ + fulltext_cursor *c = (fulltext_cursor *) pCursor; + return c->eof; +} + +/* This is the xColumn method of the virtual table. The SQLite +** core calls this method during a query when it needs the value +** of a column from the virtual table. This method needs to use +** one of the sqlite3_result_*() routines to store the requested +** value back in the pContext. +*/ +static int fulltextColumn(sqlite3_vtab_cursor *pCursor, + sqlite3_context *pContext, int idxCol){ + fulltext_cursor *c = (fulltext_cursor *) pCursor; + fulltext_vtab *v = cursor_vtab(c); + + if( idxColnColumn ){ + sqlite3_value *pVal = sqlite3_column_value(c->pStmt, idxCol+1); + sqlite3_result_value(pContext, pVal); + }else if( idxCol==v->nColumn ){ + /* The extra column whose name is the same as the table. + ** Return a blob which is a pointer to the cursor + */ + sqlite3_result_blob(pContext, &c, sizeof(c), SQLITE_TRANSIENT); + }else if( idxCol==v->nColumn+1 ){ + /* The docid column, which is an alias for rowid. */ + sqlite3_value *pVal = sqlite3_column_value(c->pStmt, 0); + sqlite3_result_value(pContext, pVal); + } + return SQLITE_OK; +} + +/* This is the xRowid method. The SQLite core calls this routine to +** retrieve the rowid for the current row of the result set. fts3 +** exposes %_content.docid as the rowid for the virtual table. The +** rowid should be written to *pRowid. +*/ +static int fulltextRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ + fulltext_cursor *c = (fulltext_cursor *) pCursor; + + *pRowid = sqlite3_column_int64(c->pStmt, 0); + return SQLITE_OK; +} + +/* Add all terms in [zText] to pendingTerms table. 
If [iColumn] > 0, +** we also store positions and offsets in the hash table using that +** column number. +*/ +static int buildTerms(fulltext_vtab *v, sqlite_int64 iDocid, + const char *zText, int iColumn){ + sqlite3_tokenizer *pTokenizer = v->pTokenizer; + sqlite3_tokenizer_cursor *pCursor; + const char *pToken; + int nTokenBytes; + int iStartOffset, iEndOffset, iPosition; + int rc; + + rc = pTokenizer->pModule->xOpen(pTokenizer, zText, -1, &pCursor); + if( rc!=SQLITE_OK ) return rc; + + pCursor->pTokenizer = pTokenizer; + while( SQLITE_OK==(rc=pTokenizer->pModule->xNext(pCursor, + &pToken, &nTokenBytes, + &iStartOffset, &iEndOffset, + &iPosition)) ){ + DLCollector *p; + int nData; /* Size of doclist before our update. */ + + /* Positions can't be negative; we use -1 as a terminator + * internally. Token can't be NULL or empty. */ + if( iPosition<0 || pToken == NULL || nTokenBytes == 0 ){ + rc = SQLITE_ERROR; + break; + } + + p = fts3HashFind(&v->pendingTerms, pToken, nTokenBytes); + if( p==NULL ){ + nData = 0; + p = dlcNew(iDocid, DL_DEFAULT); + fts3HashInsert(&v->pendingTerms, pToken, nTokenBytes, p); + + /* Overhead for our hash table entry, the key, and the value. */ + v->nPendingData += sizeof(struct fts3HashElem)+sizeof(*p)+nTokenBytes; + }else{ + nData = p->b.nData; + if( p->dlw.iPrevDocid!=iDocid ) dlcNext(p, iDocid); + } + if( iColumn>=0 ){ + dlcAddPos(p, iColumn, iPosition, iStartOffset, iEndOffset); + } + + /* Accumulate data added by dlcNew or dlcNext, and dlcAddPos. */ + v->nPendingData += p->b.nData-nData; + } + + /* TODO(shess) Check return? Should this be able to cause errors at + ** this point? Actually, same question about sqlite3_finalize(), + ** though one could argue that failure there means that the data is + ** not durable. *ponder* + */ + pTokenizer->pModule->xClose(pCursor); + if( SQLITE_DONE == rc ) return SQLITE_OK; + return rc; +} + +/* Add doclists for all terms in [pValues] to pendingTerms table. */ +static int insertTerms(fulltext_vtab *v, sqlite_int64 iDocid, + sqlite3_value **pValues){ + int i; + for(i = 0; i < v->nColumn ; ++i){ + char *zText = (char*)sqlite3_value_text(pValues[i]); + int rc = buildTerms(v, iDocid, zText, i); + if( rc!=SQLITE_OK ) return rc; + } + return SQLITE_OK; +} + +/* Add empty doclists for all terms in the given row's content to +** pendingTerms. +*/ +static int deleteTerms(fulltext_vtab *v, sqlite_int64 iDocid){ + const char **pValues; + int i, rc; + + /* TODO(shess) Should we allow such tables at all? */ + if( DL_DEFAULT==DL_DOCIDS ) return SQLITE_ERROR; + + rc = content_select(v, iDocid, &pValues); + if( rc!=SQLITE_OK ) return rc; + + for(i = 0 ; i < v->nColumn; ++i) { + rc = buildTerms(v, iDocid, pValues[i], -1); + if( rc!=SQLITE_OK ) break; + } + + freeStringArray(v->nColumn, pValues); + return SQLITE_OK; +} + +/* TODO(shess) Refactor the code to remove this forward decl. */ +static int initPendingTerms(fulltext_vtab *v, sqlite_int64 iDocid); + +/* Insert a row into the %_content table; set *piDocid to be the ID of the +** new row. Add doclists for terms to pendingTerms. +*/ +static int index_insert(fulltext_vtab *v, sqlite3_value *pRequestDocid, + sqlite3_value **pValues, sqlite_int64 *piDocid){ + int rc; + + rc = content_insert(v, pRequestDocid, pValues); /* execute an SQL INSERT */ + if( rc!=SQLITE_OK ) return rc; + + /* docid column is an alias for rowid. 
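/* Worked example (editor's note, not part of the patch): for a row with
** docid 7 whose column 0 contains "to be or not to be", buildTerms() above
** feeds the tokenizer output into the pendingTerms hash roughly as
**
**     "to"  -> docid 7, column 0, positions 0 and 4
**     "be"  -> docid 7, column 0, positions 1 and 5
**     "or"  -> docid 7, column 0, position  2
**     "not" -> docid 7, column 0, position  3
**
** along with the byte offsets reported by the tokenizer.
*/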
*/ + *piDocid = sqlite3_last_insert_rowid(v->db); + rc = initPendingTerms(v, *piDocid); + if( rc!=SQLITE_OK ) return rc; + + return insertTerms(v, *piDocid, pValues); +} + +/* Delete a row from the %_content table; add empty doclists for terms +** to pendingTerms. +*/ +static int index_delete(fulltext_vtab *v, sqlite_int64 iRow){ + int rc = initPendingTerms(v, iRow); + if( rc!=SQLITE_OK ) return rc; + + rc = deleteTerms(v, iRow); + if( rc!=SQLITE_OK ) return rc; + + return content_delete(v, iRow); /* execute an SQL DELETE */ +} + +/* Update a row in the %_content table; add delete doclists to +** pendingTerms for old terms not in the new data, add insert doclists +** to pendingTerms for terms in the new data. +*/ +static int index_update(fulltext_vtab *v, sqlite_int64 iRow, + sqlite3_value **pValues){ + int rc = initPendingTerms(v, iRow); + if( rc!=SQLITE_OK ) return rc; + + /* Generate an empty doclist for each term that previously appeared in this + * row. */ + rc = deleteTerms(v, iRow); + if( rc!=SQLITE_OK ) return rc; + + rc = content_update(v, pValues, iRow); /* execute an SQL UPDATE */ + if( rc!=SQLITE_OK ) return rc; + + /* Now add positions for terms which appear in the updated row. */ + return insertTerms(v, iRow, pValues); +} + +/*******************************************************************/ +/* InteriorWriter is used to collect terms and block references into +** interior nodes in %_segments. See commentary at top of file for +** format. +*/ + +/* How large interior nodes can grow. */ +#define INTERIOR_MAX 2048 + +/* Minimum number of terms per interior node (except the root). This +** prevents large terms from making the tree too skinny - must be >0 +** so that the tree always makes progress. Note that the min tree +** fanout will be INTERIOR_MIN_TERMS+1. +*/ +#define INTERIOR_MIN_TERMS 7 +#if INTERIOR_MIN_TERMS<1 +# error INTERIOR_MIN_TERMS must be greater than 0. +#endif + +/* ROOT_MAX controls how much data is stored inline in the segment +** directory. +*/ +/* TODO(shess) Push ROOT_MAX down to whoever is writing things. It's +** only here so that interiorWriterRootInfo() and leafWriterRootInfo() +** can both see it, but if the caller passed it in, we wouldn't even +** need a define. +*/ +#define ROOT_MAX 1024 +#if ROOT_MAXterm, 0); + dataBufferReplace(&block->term, pTerm, nTerm); + + n = fts3PutVarint(c, iHeight); + n += fts3PutVarint(c+n, iChildBlock); + dataBufferInit(&block->data, INTERIOR_MAX); + dataBufferReplace(&block->data, c, n); + } + return block; +} + +#ifndef NDEBUG +/* Verify that the data is readable as an interior node. */ +static void interiorBlockValidate(InteriorBlock *pBlock){ + const char *pData = pBlock->data.pData; + int nData = pBlock->data.nData; + int n, iDummy; + sqlite_int64 iBlockid; + + assert( nData>0 ); + assert( pData!=0 ); + assert( pData+nData>pData ); + + /* Must lead with height of node as a varint(n), n>0 */ + n = fts3GetVarint32(pData, &iDummy); + assert( n>0 ); + assert( iDummy>0 ); + assert( n0 ); + assert( n<=nData ); + pData += n; + nData -= n; + + /* Zero or more terms of positive length */ + if( nData!=0 ){ + /* First term is not delta-encoded. */ + n = fts3GetVarint32(pData, &iDummy); + assert( n>0 ); + assert( iDummy>0 ); + assert( n+iDummy>0); + assert( n+iDummy<=nData ); + pData += n+iDummy; + nData -= n+iDummy; + + /* Following terms delta-encoded. */ + while( nData!=0 ){ + /* Length of shared prefix. 
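/* Editor's sketch (not part of the patch) of the variable-length integer
** format that fts3PutVarint()/fts3GetVarint() appear to use throughout this
** file: 7 bits per byte, least-significant group first, with the 0x80 bit
** set on every byte except the last.  The helpers below are illustrative
** re-implementations, not the patch's own code.
*/
static int sketchPutVarint(unsigned char *p, long long v){
  unsigned char *q = p;
  unsigned long long u = (unsigned long long)v;
  do{
    *q++ = (unsigned char)((u & 0x7f) | 0x80);
    u >>= 7;
  }while( u!=0 );
  q[-1] &= 0x7f;             /* clear the continuation bit on the last byte */
  return (int)(q - p);       /* bytes written, e.g. 2 for the value 300     */
}

static int sketchGetVarint(const unsigned char *p, long long *pVal){
  unsigned long long u = 0;
  int shift = 0, n = 0;
  unsigned char c;
  do{
    c = p[n++];
    u |= (unsigned long long)(c & 0x7f) << shift;
    shift += 7;
  }while( c & 0x80 );
  *pVal = (long long)u;
  return n;                  /* bytes consumed */
}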
*/ + n = fts3GetVarint32(pData, &iDummy); + assert( n>0 ); + assert( iDummy>=0 ); + assert( n0 ); + assert( iDummy>0 ); + assert( n+iDummy>0); + assert( n+iDummy<=nData ); + pData += n+iDummy; + nData -= n+iDummy; + } + } +} +#define ASSERT_VALID_INTERIOR_BLOCK(x) interiorBlockValidate(x) +#else +#define ASSERT_VALID_INTERIOR_BLOCK(x) assert( 1 ) +#endif + +typedef struct InteriorWriter { + int iHeight; /* from 0 at leaves. */ + InteriorBlock *first, *last; + struct InteriorWriter *parentWriter; + + DataBuffer term; /* Last term written to block "last". */ + sqlite_int64 iOpeningChildBlock; /* First child block in block "last". */ +#ifndef NDEBUG + sqlite_int64 iLastChildBlock; /* for consistency checks. */ +#endif +} InteriorWriter; + +/* Initialize an interior node where pTerm[nTerm] marks the leftmost +** term in the tree. iChildBlock is the leftmost child block at the +** next level down the tree. +*/ +static void interiorWriterInit(int iHeight, const char *pTerm, int nTerm, + sqlite_int64 iChildBlock, + InteriorWriter *pWriter){ + InteriorBlock *block; + assert( iHeight>0 ); + CLEAR(pWriter); + + pWriter->iHeight = iHeight; + pWriter->iOpeningChildBlock = iChildBlock; +#ifndef NDEBUG + pWriter->iLastChildBlock = iChildBlock; +#endif + block = interiorBlockNew(iHeight, iChildBlock, pTerm, nTerm); + pWriter->last = pWriter->first = block; + ASSERT_VALID_INTERIOR_BLOCK(pWriter->last); + dataBufferInit(&pWriter->term, 0); +} + +/* Append the child node rooted at iChildBlock to the interior node, +** with pTerm[nTerm] as the leftmost term in iChildBlock's subtree. +*/ +static void interiorWriterAppend(InteriorWriter *pWriter, + const char *pTerm, int nTerm, + sqlite_int64 iChildBlock){ + char c[VARINT_MAX+VARINT_MAX]; + int n, nPrefix = 0; + + ASSERT_VALID_INTERIOR_BLOCK(pWriter->last); + + /* The first term written into an interior node is actually + ** associated with the second child added (the first child was added + ** in interiorWriterInit, or in the if clause at the bottom of this + ** function). That term gets encoded straight up, with nPrefix left + ** at 0. + */ + if( pWriter->term.nData==0 ){ + n = fts3PutVarint(c, nTerm); + }else{ + while( nPrefixterm.nData && + pTerm[nPrefix]==pWriter->term.pData[nPrefix] ){ + nPrefix++; + } + + n = fts3PutVarint(c, nPrefix); + n += fts3PutVarint(c+n, nTerm-nPrefix); + } + +#ifndef NDEBUG + pWriter->iLastChildBlock++; +#endif + assert( pWriter->iLastChildBlock==iChildBlock ); + + /* Overflow to a new block if the new term makes the current block + ** too big, and the current block already has enough terms. + */ + if( pWriter->last->data.nData+n+nTerm-nPrefix>INTERIOR_MAX && + iChildBlock-pWriter->iOpeningChildBlock>INTERIOR_MIN_TERMS ){ + pWriter->last->next = interiorBlockNew(pWriter->iHeight, iChildBlock, + pTerm, nTerm); + pWriter->last = pWriter->last->next; + pWriter->iOpeningChildBlock = iChildBlock; + dataBufferReset(&pWriter->term); + }else{ + dataBufferAppend2(&pWriter->last->data, c, n, + pTerm+nPrefix, nTerm-nPrefix); + dataBufferReplace(&pWriter->term, pTerm, nTerm); + } + ASSERT_VALID_INTERIOR_BLOCK(pWriter->last); +} + +/* Free the space used by pWriter, including the linked-list of +** InteriorBlocks, and parentWriter, if present. 
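/* Worked example (editor's note, not part of the patch): interiorWriterAppend()
** above delta-encodes each term against the previous one.  If the previous
** term is "checkin" and the new term is "checkout", the shared prefix is
** "check" (5 bytes) and the remaining suffix is "out", so the entry becomes
**
**     varint(5) varint(3) 'o' 'u' 't'
**
** The first term written into a node is stored with nPrefix left at 0,
** i.e. in full.
*/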
+*/ +static int interiorWriterDestroy(InteriorWriter *pWriter){ + InteriorBlock *block = pWriter->first; + + while( block!=NULL ){ + InteriorBlock *b = block; + block = block->next; + dataBufferDestroy(&b->term); + dataBufferDestroy(&b->data); + sqlite3_free(b); + } + if( pWriter->parentWriter!=NULL ){ + interiorWriterDestroy(pWriter->parentWriter); + sqlite3_free(pWriter->parentWriter); + } + dataBufferDestroy(&pWriter->term); + SCRAMBLE(pWriter); + return SQLITE_OK; +} + +/* If pWriter can fit entirely in ROOT_MAX, return it as the root info +** directly, leaving *piEndBlockid unchanged. Otherwise, flush +** pWriter to %_segments, building a new layer of interior nodes, and +** recursively ask for their root into. +*/ +static int interiorWriterRootInfo(fulltext_vtab *v, InteriorWriter *pWriter, + char **ppRootInfo, int *pnRootInfo, + sqlite_int64 *piEndBlockid){ + InteriorBlock *block = pWriter->first; + sqlite_int64 iBlockid = 0; + int rc; + + /* If we can fit the segment inline */ + if( block==pWriter->last && block->data.nDatadata.pData; + *pnRootInfo = block->data.nData; + return SQLITE_OK; + } + + /* Flush the first block to %_segments, and create a new level of + ** interior node. + */ + ASSERT_VALID_INTERIOR_BLOCK(block); + rc = block_insert(v, block->data.pData, block->data.nData, &iBlockid); + if( rc!=SQLITE_OK ) return rc; + *piEndBlockid = iBlockid; + + pWriter->parentWriter = sqlite3_malloc(sizeof(*pWriter->parentWriter)); + interiorWriterInit(pWriter->iHeight+1, + block->term.pData, block->term.nData, + iBlockid, pWriter->parentWriter); + + /* Flush additional blocks and append to the higher interior + ** node. + */ + for(block=block->next; block!=NULL; block=block->next){ + ASSERT_VALID_INTERIOR_BLOCK(block); + rc = block_insert(v, block->data.pData, block->data.nData, &iBlockid); + if( rc!=SQLITE_OK ) return rc; + *piEndBlockid = iBlockid; + + interiorWriterAppend(pWriter->parentWriter, + block->term.pData, block->term.nData, iBlockid); + } + + /* Parent node gets the chance to be the root. */ + return interiorWriterRootInfo(v, pWriter->parentWriter, + ppRootInfo, pnRootInfo, piEndBlockid); +} + +/****************************************************************/ +/* InteriorReader is used to read off the data from an interior node +** (see comment at top of file for the format). +*/ +typedef struct InteriorReader { + const char *pData; + int nData; + + DataBuffer term; /* previous term, for decoding term delta. */ + + sqlite_int64 iBlockid; +} InteriorReader; + +static void interiorReaderDestroy(InteriorReader *pReader){ + dataBufferDestroy(&pReader->term); + SCRAMBLE(pReader); +} + +/* TODO(shess) The assertions are great, but what if we're in NDEBUG +** and the blob is empty or otherwise contains suspect data? +*/ +static void interiorReaderInit(const char *pData, int nData, + InteriorReader *pReader){ + int n, nTerm; + + /* Require at least the leading flag byte */ + assert( nData>0 ); + assert( pData[0]!='\0' ); + + CLEAR(pReader); + + /* Decode the base blockid, and set the cursor to the first term. */ + n = fts3GetVarint(pData+1, &pReader->iBlockid); + assert( 1+n<=nData ); + pReader->pData = pData+1+n; + pReader->nData = nData-(1+n); + + /* A single-child interior node (such as when a leaf node was too + ** large for the segment directory) won't have any terms. + ** Otherwise, decode the first term. 
+ */ + if( pReader->nData==0 ){ + dataBufferInit(&pReader->term, 0); + }else{ + n = fts3GetVarint32(pReader->pData, &nTerm); + dataBufferInit(&pReader->term, nTerm); + dataBufferReplace(&pReader->term, pReader->pData+n, nTerm); + assert( n+nTerm<=pReader->nData ); + pReader->pData += n+nTerm; + pReader->nData -= n+nTerm; + } +} + +static int interiorReaderAtEnd(InteriorReader *pReader){ + return pReader->term.nData==0; +} + +static sqlite_int64 interiorReaderCurrentBlockid(InteriorReader *pReader){ + return pReader->iBlockid; +} + +static int interiorReaderTermBytes(InteriorReader *pReader){ + assert( !interiorReaderAtEnd(pReader) ); + return pReader->term.nData; +} +static const char *interiorReaderTerm(InteriorReader *pReader){ + assert( !interiorReaderAtEnd(pReader) ); + return pReader->term.pData; +} + +/* Step forward to the next term in the node. */ +static void interiorReaderStep(InteriorReader *pReader){ + assert( !interiorReaderAtEnd(pReader) ); + + /* If the last term has been read, signal eof, else construct the + ** next term. + */ + if( pReader->nData==0 ){ + dataBufferReset(&pReader->term); + }else{ + int n, nPrefix, nSuffix; + + n = fts3GetVarint32(pReader->pData, &nPrefix); + n += fts3GetVarint32(pReader->pData+n, &nSuffix); + + /* Truncate the current term and append suffix data. */ + pReader->term.nData = nPrefix; + dataBufferAppend(&pReader->term, pReader->pData+n, nSuffix); + + assert( n+nSuffix<=pReader->nData ); + pReader->pData += n+nSuffix; + pReader->nData -= n+nSuffix; + } + pReader->iBlockid++; +} + +/* Compare the current term to pTerm[nTerm], returning strcmp-style +** results. If isPrefix, equality means equal through nTerm bytes. +*/ +static int interiorReaderTermCmp(InteriorReader *pReader, + const char *pTerm, int nTerm, int isPrefix){ + const char *pReaderTerm = interiorReaderTerm(pReader); + int nReaderTerm = interiorReaderTermBytes(pReader); + int c, n = nReaderTerm0 ) return -1; + if( nTerm>0 ) return 1; + return 0; + } + + c = memcmp(pReaderTerm, pTerm, n); + if( c!=0 ) return c; + if( isPrefix && n==nTerm ) return 0; + return nReaderTerm - nTerm; +} + +/****************************************************************/ +/* LeafWriter is used to collect terms and associated doclist data +** into leaf blocks in %_segments (see top of file for format info). +** Expected usage is: +** +** LeafWriter writer; +** leafWriterInit(0, 0, &writer); +** while( sorted_terms_left_to_process ){ +** // data is doclist data for that term. +** rc = leafWriterStep(v, &writer, pTerm, nTerm, pData, nData); +** if( rc!=SQLITE_OK ) goto err; +** } +** rc = leafWriterFinalize(v, &writer); +**err: +** leafWriterDestroy(&writer); +** return rc; +** +** leafWriterStep() may write a collected leaf out to %_segments. +** leafWriterFinalize() finishes writing any buffered data and stores +** a root node in %_segdir. leafWriterDestroy() frees all buffers and +** InteriorWriters allocated as part of writing this segment. +** +** TODO(shess) Document leafWriterStepMerge(). +*/ + +/* Put terms with data this big in their own block. */ +#define STANDALONE_MIN 1024 + +/* Keep leaf blocks below this size. */ +#define LEAF_MAX 2048 + +typedef struct LeafWriter { + int iLevel; + int idx; + sqlite_int64 iStartBlockid; /* needed to create the root info */ + sqlite_int64 iEndBlockid; /* when we're done writing. 
*/ + + DataBuffer term; /* previous encoded term */ + DataBuffer data; /* encoding buffer */ + + /* bytes of first term in the current node which distinguishes that + ** term from the last term of the previous node. + */ + int nTermDistinct; + + InteriorWriter parentWriter; /* if we overflow */ + int has_parent; +} LeafWriter; + +static void leafWriterInit(int iLevel, int idx, LeafWriter *pWriter){ + CLEAR(pWriter); + pWriter->iLevel = iLevel; + pWriter->idx = idx; + + dataBufferInit(&pWriter->term, 32); + + /* Start out with a reasonably sized block, though it can grow. */ + dataBufferInit(&pWriter->data, LEAF_MAX); +} + +#ifndef NDEBUG +/* Verify that the data is readable as a leaf node. */ +static void leafNodeValidate(const char *pData, int nData){ + int n, iDummy; + + if( nData==0 ) return; + assert( nData>0 ); + assert( pData!=0 ); + assert( pData+nData>pData ); + + /* Must lead with a varint(0) */ + n = fts3GetVarint32(pData, &iDummy); + assert( iDummy==0 ); + assert( n>0 ); + assert( n0 ); + assert( iDummy>0 ); + assert( n+iDummy>0 ); + assert( n+iDummy0 ); + assert( iDummy>0 ); + assert( n+iDummy>0 ); + assert( n+iDummy<=nData ); + ASSERT_VALID_DOCLIST(DL_DEFAULT, pData+n, iDummy, NULL); + pData += n+iDummy; + nData -= n+iDummy; + + /* Verify that trailing terms and doclists also are readable. */ + while( nData!=0 ){ + n = fts3GetVarint32(pData, &iDummy); + assert( n>0 ); + assert( iDummy>=0 ); + assert( n0 ); + assert( iDummy>0 ); + assert( n+iDummy>0 ); + assert( n+iDummy0 ); + assert( iDummy>0 ); + assert( n+iDummy>0 ); + assert( n+iDummy<=nData ); + ASSERT_VALID_DOCLIST(DL_DEFAULT, pData+n, iDummy, NULL); + pData += n+iDummy; + nData -= n+iDummy; + } +} +#define ASSERT_VALID_LEAF_NODE(p, n) leafNodeValidate(p, n) +#else +#define ASSERT_VALID_LEAF_NODE(p, n) assert( 1 ) +#endif + +/* Flush the current leaf node to %_segments, and adding the resulting +** blockid and the starting term to the interior node which will +** contain it. +*/ +static int leafWriterInternalFlush(fulltext_vtab *v, LeafWriter *pWriter, + int iData, int nData){ + sqlite_int64 iBlockid = 0; + const char *pStartingTerm; + int nStartingTerm, rc, n; + + /* Must have the leading varint(0) flag, plus at least some + ** valid-looking data. + */ + assert( nData>2 ); + assert( iData>=0 ); + assert( iData+nData<=pWriter->data.nData ); + ASSERT_VALID_LEAF_NODE(pWriter->data.pData+iData, nData); + + rc = block_insert(v, pWriter->data.pData+iData, nData, &iBlockid); + if( rc!=SQLITE_OK ) return rc; + assert( iBlockid!=0 ); + + /* Reconstruct the first term in the leaf for purposes of building + ** the interior node. + */ + n = fts3GetVarint32(pWriter->data.pData+iData+1, &nStartingTerm); + pStartingTerm = pWriter->data.pData+iData+1+n; + assert( pWriter->data.nData>iData+1+n+nStartingTerm ); + assert( pWriter->nTermDistinct>0 ); + assert( pWriter->nTermDistinct<=nStartingTerm ); + nStartingTerm = pWriter->nTermDistinct; + + if( pWriter->has_parent ){ + interiorWriterAppend(&pWriter->parentWriter, + pStartingTerm, nStartingTerm, iBlockid); + }else{ + interiorWriterInit(1, pStartingTerm, nStartingTerm, iBlockid, + &pWriter->parentWriter); + pWriter->has_parent = 1; + } + + /* Track the span of this segment's leaf nodes. 
*/ + if( pWriter->iEndBlockid==0 ){ + pWriter->iEndBlockid = pWriter->iStartBlockid = iBlockid; + }else{ + pWriter->iEndBlockid++; + assert( iBlockid==pWriter->iEndBlockid ); + } + + return SQLITE_OK; +} +static int leafWriterFlush(fulltext_vtab *v, LeafWriter *pWriter){ + int rc = leafWriterInternalFlush(v, pWriter, 0, pWriter->data.nData); + if( rc!=SQLITE_OK ) return rc; + + /* Re-initialize the output buffer. */ + dataBufferReset(&pWriter->data); + + return SQLITE_OK; +} + +/* Fetch the root info for the segment. If the entire leaf fits +** within ROOT_MAX, then it will be returned directly, otherwise it +** will be flushed and the root info will be returned from the +** interior node. *piEndBlockid is set to the blockid of the last +** interior or leaf node written to disk (0 if none are written at +** all). +*/ +static int leafWriterRootInfo(fulltext_vtab *v, LeafWriter *pWriter, + char **ppRootInfo, int *pnRootInfo, + sqlite_int64 *piEndBlockid){ + /* we can fit the segment entirely inline */ + if( !pWriter->has_parent && pWriter->data.nDatadata.pData; + *pnRootInfo = pWriter->data.nData; + *piEndBlockid = 0; + return SQLITE_OK; + } + + /* Flush remaining leaf data. */ + if( pWriter->data.nData>0 ){ + int rc = leafWriterFlush(v, pWriter); + if( rc!=SQLITE_OK ) return rc; + } + + /* We must have flushed a leaf at some point. */ + assert( pWriter->has_parent ); + + /* Tenatively set the end leaf blockid as the end blockid. If the + ** interior node can be returned inline, this will be the final + ** blockid, otherwise it will be overwritten by + ** interiorWriterRootInfo(). + */ + *piEndBlockid = pWriter->iEndBlockid; + + return interiorWriterRootInfo(v, &pWriter->parentWriter, + ppRootInfo, pnRootInfo, piEndBlockid); +} + +/* Collect the rootInfo data and store it into the segment directory. +** This has the effect of flushing the segment's leaf data to +** %_segments, and also flushing any interior nodes to %_segments. +*/ +static int leafWriterFinalize(fulltext_vtab *v, LeafWriter *pWriter){ + sqlite_int64 iEndBlockid; + char *pRootInfo; + int rc, nRootInfo; + + rc = leafWriterRootInfo(v, pWriter, &pRootInfo, &nRootInfo, &iEndBlockid); + if( rc!=SQLITE_OK ) return rc; + + /* Don't bother storing an entirely empty segment. */ + if( iEndBlockid==0 && nRootInfo==0 ) return SQLITE_OK; + + return segdir_set(v, pWriter->iLevel, pWriter->idx, + pWriter->iStartBlockid, pWriter->iEndBlockid, + iEndBlockid, pRootInfo, nRootInfo); +} + +static void leafWriterDestroy(LeafWriter *pWriter){ + if( pWriter->has_parent ) interiorWriterDestroy(&pWriter->parentWriter); + dataBufferDestroy(&pWriter->term); + dataBufferDestroy(&pWriter->data); +} + +/* Encode a term into the leafWriter, delta-encoding as appropriate. +** Returns the length of the new term which distinguishes it from the +** previous term, which can be used to set nTermDistinct when a node +** boundary is crossed. +*/ +static int leafWriterEncodeTerm(LeafWriter *pWriter, + const char *pTerm, int nTerm){ + char c[VARINT_MAX+VARINT_MAX]; + int n, nPrefix = 0; + + assert( nTerm>0 ); + while( nPrefixterm.nData && + pTerm[nPrefix]==pWriter->term.pData[nPrefix] ){ + nPrefix++; + /* Failing this implies that the terms weren't in order. 
*/ + assert( nPrefixdata.nData==0 ){ + /* Encode the node header and leading term as: + ** varint(0) + ** varint(nTerm) + ** char pTerm[nTerm] + */ + n = fts3PutVarint(c, '\0'); + n += fts3PutVarint(c+n, nTerm); + dataBufferAppend2(&pWriter->data, c, n, pTerm, nTerm); + }else{ + /* Delta-encode the term as: + ** varint(nPrefix) + ** varint(nSuffix) + ** char pTermSuffix[nSuffix] + */ + n = fts3PutVarint(c, nPrefix); + n += fts3PutVarint(c+n, nTerm-nPrefix); + dataBufferAppend2(&pWriter->data, c, n, pTerm+nPrefix, nTerm-nPrefix); + } + dataBufferReplace(&pWriter->term, pTerm, nTerm); + + return nPrefix+1; +} + +/* Used to avoid a memmove when a large amount of doclist data is in +** the buffer. This constructs a node and term header before +** iDoclistData and flushes the resulting complete node using +** leafWriterInternalFlush(). +*/ +static int leafWriterInlineFlush(fulltext_vtab *v, LeafWriter *pWriter, + const char *pTerm, int nTerm, + int iDoclistData){ + char c[VARINT_MAX+VARINT_MAX]; + int iData, n = fts3PutVarint(c, 0); + n += fts3PutVarint(c+n, nTerm); + + /* There should always be room for the header. Even if pTerm shared + ** a substantial prefix with the previous term, the entire prefix + ** could be constructed from earlier data in the doclist, so there + ** should be room. + */ + assert( iDoclistData>=n+nTerm ); + + iData = iDoclistData-(n+nTerm); + memcpy(pWriter->data.pData+iData, c, n); + memcpy(pWriter->data.pData+iData+n, pTerm, nTerm); + + return leafWriterInternalFlush(v, pWriter, iData, pWriter->data.nData-iData); +} + +/* Push pTerm[nTerm] along with the doclist data to the leaf layer of +** %_segments. +*/ +static int leafWriterStepMerge(fulltext_vtab *v, LeafWriter *pWriter, + const char *pTerm, int nTerm, + DLReader *pReaders, int nReaders){ + char c[VARINT_MAX+VARINT_MAX]; + int iTermData = pWriter->data.nData, iDoclistData; + int i, nData, n, nActualData, nActual, rc, nTermDistinct; + + ASSERT_VALID_LEAF_NODE(pWriter->data.pData, pWriter->data.nData); + nTermDistinct = leafWriterEncodeTerm(pWriter, pTerm, nTerm); + + /* Remember nTermDistinct if opening a new node. */ + if( iTermData==0 ) pWriter->nTermDistinct = nTermDistinct; + + iDoclistData = pWriter->data.nData; + + /* Estimate the length of the merged doclist so we can leave space + ** to encode it. + */ + for(i=0, nData=0; idata, c, n); + + docListMerge(&pWriter->data, pReaders, nReaders); + ASSERT_VALID_DOCLIST(DL_DEFAULT, + pWriter->data.pData+iDoclistData+n, + pWriter->data.nData-iDoclistData-n, NULL); + + /* The actual amount of doclist data at this point could be smaller + ** than the length we encoded. Additionally, the space required to + ** encode this length could be smaller. For small doclists, this is + ** not a big deal, we can just use memmove() to adjust things. + */ + nActualData = pWriter->data.nData-(iDoclistData+n); + nActual = fts3PutVarint(c, nActualData); + assert( nActualData<=nData ); + assert( nActual<=n ); + + /* If the new doclist is big enough for force a standalone leaf + ** node, we can immediately flush it inline without doing the + ** memmove(). + */ + /* TODO(shess) This test matches leafWriterStep(), which does this + ** test before it knows the cost to varint-encode the term and + ** doclist lengths. At some point, change to + ** pWriter->data.nData-iTermData>STANDALONE_MIN. + */ + if( nTerm+nActualData>STANDALONE_MIN ){ + /* Push leaf node from before this term. 
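/* Editor's summary (not part of the patch) of a complete leaf node, pieced
** together from leafWriterEncodeTerm() and leafNodeValidate() above:
**
**     varint(0)                                   leaf marker
**     varint(nTerm) term                          first term, stored in full
**     varint(nDoclist) doclist                    its doclist (DL_DEFAULT)
**     varint(nPrefix) varint(nSuffix) suffix      each later term, delta-coded
**     varint(nDoclist) doclist                    ... followed by its doclist
*/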
*/ + if( iTermData>0 ){ + rc = leafWriterInternalFlush(v, pWriter, 0, iTermData); + if( rc!=SQLITE_OK ) return rc; + + pWriter->nTermDistinct = nTermDistinct; + } + + /* Fix the encoded doclist length. */ + iDoclistData += n - nActual; + memcpy(pWriter->data.pData+iDoclistData, c, nActual); + + /* Push the standalone leaf node. */ + rc = leafWriterInlineFlush(v, pWriter, pTerm, nTerm, iDoclistData); + if( rc!=SQLITE_OK ) return rc; + + /* Leave the node empty. */ + dataBufferReset(&pWriter->data); + + return rc; + } + + /* At this point, we know that the doclist was small, so do the + ** memmove if indicated. + */ + if( nActualdata.pData+iDoclistData+nActual, + pWriter->data.pData+iDoclistData+n, + pWriter->data.nData-(iDoclistData+n)); + pWriter->data.nData -= n-nActual; + } + + /* Replace written length with actual length. */ + memcpy(pWriter->data.pData+iDoclistData, c, nActual); + + /* If the node is too large, break things up. */ + /* TODO(shess) This test matches leafWriterStep(), which does this + ** test before it knows the cost to varint-encode the term and + ** doclist lengths. At some point, change to + ** pWriter->data.nData>LEAF_MAX. + */ + if( iTermData+nTerm+nActualData>LEAF_MAX ){ + /* Flush out the leading data as a node */ + rc = leafWriterInternalFlush(v, pWriter, 0, iTermData); + if( rc!=SQLITE_OK ) return rc; + + pWriter->nTermDistinct = nTermDistinct; + + /* Rebuild header using the current term */ + n = fts3PutVarint(pWriter->data.pData, 0); + n += fts3PutVarint(pWriter->data.pData+n, nTerm); + memcpy(pWriter->data.pData+n, pTerm, nTerm); + n += nTerm; + + /* There should always be room, because the previous encoding + ** included all data necessary to construct the term. + */ + assert( ndata.nData-iDoclistDatadata.pData+n, + pWriter->data.pData+iDoclistData, + pWriter->data.nData-iDoclistData); + pWriter->data.nData -= iDoclistData-n; + } + ASSERT_VALID_LEAF_NODE(pWriter->data.pData, pWriter->data.nData); + + return SQLITE_OK; +} + +/* Push pTerm[nTerm] along with the doclist data to the leaf layer of +** %_segments. +*/ +/* TODO(shess) Revise writeZeroSegment() so that doclists are +** constructed directly in pWriter->data. +*/ +static int leafWriterStep(fulltext_vtab *v, LeafWriter *pWriter, + const char *pTerm, int nTerm, + const char *pData, int nData){ + int rc; + DLReader reader; + + dlrInit(&reader, DL_DEFAULT, pData, nData); + rc = leafWriterStepMerge(v, pWriter, pTerm, nTerm, &reader, 1); + dlrDestroy(&reader); + + return rc; +} + + +/****************************************************************/ +/* LeafReader is used to iterate over an individual leaf node. */ +typedef struct LeafReader { + DataBuffer term; /* copy of current term. */ + + const char *pData; /* data for current term. */ + int nData; +} LeafReader; + +static void leafReaderDestroy(LeafReader *pReader){ + dataBufferDestroy(&pReader->term); + SCRAMBLE(pReader); +} + +static int leafReaderAtEnd(LeafReader *pReader){ + return pReader->nData<=0; +} + +/* Access the current term. */ +static int leafReaderTermBytes(LeafReader *pReader){ + return pReader->term.nData; +} +static const char *leafReaderTerm(LeafReader *pReader){ + assert( pReader->term.nData>0 ); + return pReader->term.pData; +} + +/* Access the doclist data for the current term. 
*/ +static int leafReaderDataBytes(LeafReader *pReader){ + int nData; + assert( pReader->term.nData>0 ); + fts3GetVarint32(pReader->pData, &nData); + return nData; +} +static const char *leafReaderData(LeafReader *pReader){ + int n, nData; + assert( pReader->term.nData>0 ); + n = fts3GetVarint32(pReader->pData, &nData); + return pReader->pData+n; +} + +static void leafReaderInit(const char *pData, int nData, + LeafReader *pReader){ + int nTerm, n; + + assert( nData>0 ); + assert( pData[0]=='\0' ); + + CLEAR(pReader); + + /* Read the first term, skipping the header byte. */ + n = fts3GetVarint32(pData+1, &nTerm); + dataBufferInit(&pReader->term, nTerm); + dataBufferReplace(&pReader->term, pData+1+n, nTerm); + + /* Position after the first term. */ + assert( 1+n+nTermpData = pData+1+n+nTerm; + pReader->nData = nData-1-n-nTerm; +} + +/* Step the reader forward to the next term. */ +static void leafReaderStep(LeafReader *pReader){ + int n, nData, nPrefix, nSuffix; + assert( !leafReaderAtEnd(pReader) ); + + /* Skip previous entry's data block. */ + n = fts3GetVarint32(pReader->pData, &nData); + assert( n+nData<=pReader->nData ); + pReader->pData += n+nData; + pReader->nData -= n+nData; + + if( !leafReaderAtEnd(pReader) ){ + /* Construct the new term using a prefix from the old term plus a + ** suffix from the leaf data. + */ + n = fts3GetVarint32(pReader->pData, &nPrefix); + n += fts3GetVarint32(pReader->pData+n, &nSuffix); + assert( n+nSuffixnData ); + pReader->term.nData = nPrefix; + dataBufferAppend(&pReader->term, pReader->pData+n, nSuffix); + + pReader->pData += n+nSuffix; + pReader->nData -= n+nSuffix; + } +} + +/* strcmp-style comparison of pReader's current term against pTerm. +** If isPrefix, equality means equal through nTerm bytes. +*/ +static int leafReaderTermCmp(LeafReader *pReader, + const char *pTerm, int nTerm, int isPrefix){ + int c, n = pReader->term.nDataterm.nData : nTerm; + if( n==0 ){ + if( pReader->term.nData>0 ) return -1; + if(nTerm>0 ) return 1; + return 0; + } + + c = memcmp(pReader->term.pData, pTerm, n); + if( c!=0 ) return c; + if( isPrefix && n==nTerm ) return 0; + return pReader->term.nData - nTerm; +} + + +/****************************************************************/ +/* LeavesReader wraps LeafReader to allow iterating over the entire +** leaf layer of the tree. +*/ +typedef struct LeavesReader { + int idx; /* Index within the segment. */ + + sqlite3_stmt *pStmt; /* Statement we're streaming leaves from. */ + int eof; /* we've seen SQLITE_DONE from pStmt. */ + + LeafReader leafReader; /* reader for the current leaf. */ + DataBuffer rootData; /* root data for inline. */ +} LeavesReader; + +/* Access the current term. */ +static int leavesReaderTermBytes(LeavesReader *pReader){ + assert( !pReader->eof ); + return leafReaderTermBytes(&pReader->leafReader); +} +static const char *leavesReaderTerm(LeavesReader *pReader){ + assert( !pReader->eof ); + return leafReaderTerm(&pReader->leafReader); +} + +/* Access the doclist data for the current term. */ +static int leavesReaderDataBytes(LeavesReader *pReader){ + assert( !pReader->eof ); + return leafReaderDataBytes(&pReader->leafReader); +} +static const char *leavesReaderData(LeavesReader *pReader){ + assert( !pReader->eof ); + return leafReaderData(&pReader->leafReader); +} + +static int leavesReaderAtEnd(LeavesReader *pReader){ + return pReader->eof; +} + +/* loadSegmentLeaves() may not read all the way to SQLITE_DONE, thus +** leaving the statement handle open, which locks the table. 
+*/ +/* TODO(shess) This "solution" is not satisfactory. Really, there +** should be check-in function for all statement handles which +** arranges to call sqlite3_reset(). This most likely will require +** modification to control flow all over the place, though, so for now +** just punt. +** +** Note the the current system assumes that segment merges will run to +** completion, which is why this particular probably hasn't arisen in +** this case. Probably a brittle assumption. +*/ +static int leavesReaderReset(LeavesReader *pReader){ + return sqlite3_reset(pReader->pStmt); +} + +static void leavesReaderDestroy(LeavesReader *pReader){ + leafReaderDestroy(&pReader->leafReader); + dataBufferDestroy(&pReader->rootData); + SCRAMBLE(pReader); +} + +/* Initialize pReader with the given root data (if iStartBlockid==0 +** the leaf data was entirely contained in the root), or from the +** stream of blocks between iStartBlockid and iEndBlockid, inclusive. +*/ +static int leavesReaderInit(fulltext_vtab *v, + int idx, + sqlite_int64 iStartBlockid, + sqlite_int64 iEndBlockid, + const char *pRootData, int nRootData, + LeavesReader *pReader){ + CLEAR(pReader); + pReader->idx = idx; + + dataBufferInit(&pReader->rootData, 0); + if( iStartBlockid==0 ){ + /* Entire leaf level fit in root data. */ + dataBufferReplace(&pReader->rootData, pRootData, nRootData); + leafReaderInit(pReader->rootData.pData, pReader->rootData.nData, + &pReader->leafReader); + }else{ + sqlite3_stmt *s; + int rc = sql_get_leaf_statement(v, idx, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 1, iStartBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 2, iEndBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_step(s); + if( rc==SQLITE_DONE ){ + pReader->eof = 1; + return SQLITE_OK; + } + if( rc!=SQLITE_ROW ) return rc; + + pReader->pStmt = s; + leafReaderInit(sqlite3_column_blob(pReader->pStmt, 0), + sqlite3_column_bytes(pReader->pStmt, 0), + &pReader->leafReader); + } + return SQLITE_OK; +} + +/* Step the current leaf forward to the next term. If we reach the +** end of the current leaf, step forward to the next leaf block. +*/ +static int leavesReaderStep(fulltext_vtab *v, LeavesReader *pReader){ + assert( !leavesReaderAtEnd(pReader) ); + leafReaderStep(&pReader->leafReader); + + if( leafReaderAtEnd(&pReader->leafReader) ){ + int rc; + if( pReader->rootData.pData ){ + pReader->eof = 1; + return SQLITE_OK; + } + rc = sqlite3_step(pReader->pStmt); + if( rc!=SQLITE_ROW ){ + pReader->eof = 1; + return rc==SQLITE_DONE ? SQLITE_OK : rc; + } + leafReaderDestroy(&pReader->leafReader); + leafReaderInit(sqlite3_column_blob(pReader->pStmt, 0), + sqlite3_column_bytes(pReader->pStmt, 0), + &pReader->leafReader); + } + return SQLITE_OK; +} + +/* Order LeavesReaders by their term, ignoring idx. Readers at eof +** always sort to the end. +*/ +static int leavesReaderTermCmp(LeavesReader *lr1, LeavesReader *lr2){ + if( leavesReaderAtEnd(lr1) ){ + if( leavesReaderAtEnd(lr2) ) return 0; + return 1; + } + if( leavesReaderAtEnd(lr2) ) return -1; + + return leafReaderTermCmp(&lr1->leafReader, + leavesReaderTerm(lr2), leavesReaderTermBytes(lr2), + 0); +} + +/* Similar to leavesReaderTermCmp(), with additional ordering by idx +** so that older segments sort before newer segments. +*/ +static int leavesReaderCmp(LeavesReader *lr1, LeavesReader *lr2){ + int c = leavesReaderTermCmp(lr1, lr2); + if( c!=0 ) return c; + return lr1->idx-lr2->idx; +} + +/* Assume that pLr[1]..pLr[nLr] are sorted. 
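/* Editor's note (assumption, not stated explicitly in this hunk): the
** MERGE_COUNT-way merge below appears to implement a levelled scheme in
** which new content is flushed as level-0 segments and, once MERGE_COUNT
** segments accumulate at some level N, they are merged oldest-to-newest
** into a single segment at level N+1.  That keeps the number of segments a
** query has to consult roughly logarithmic in the amount of indexed data.
*/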
Bubble pLr[0] into its +** sorted position. +*/ +static void leavesReaderReorder(LeavesReader *pLr, int nLr){ + while( nLr>1 && leavesReaderCmp(pLr, pLr+1)>0 ){ + LeavesReader tmp = pLr[0]; + pLr[0] = pLr[1]; + pLr[1] = tmp; + nLr--; + pLr++; + } +} + +/* Initializes pReaders with the segments from level iLevel, returning +** the number of segments in *piReaders. Leaves pReaders in sorted +** order. +*/ +static int leavesReadersInit(fulltext_vtab *v, int iLevel, + LeavesReader *pReaders, int *piReaders){ + sqlite3_stmt *s; + int i, rc = sql_get_statement(v, SEGDIR_SELECT_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int(s, 1, iLevel); + if( rc!=SQLITE_OK ) return rc; + + i = 0; + while( (rc = sqlite3_step(s))==SQLITE_ROW ){ + sqlite_int64 iStart = sqlite3_column_int64(s, 0); + sqlite_int64 iEnd = sqlite3_column_int64(s, 1); + const char *pRootData = sqlite3_column_blob(s, 2); + int nRootData = sqlite3_column_bytes(s, 2); + + assert( i0 ){ + leavesReaderDestroy(&pReaders[i]); + } + return rc; + } + + *piReaders = i; + + /* Leave our results sorted by term, then age. */ + while( i-- ){ + leavesReaderReorder(pReaders+i, *piReaders-i); + } + return SQLITE_OK; +} + +/* Merge doclists from pReaders[nReaders] into a single doclist, which +** is written to pWriter. Assumes pReaders is ordered oldest to +** newest. +*/ +/* TODO(shess) Consider putting this inline in segmentMerge(). */ +static int leavesReadersMerge(fulltext_vtab *v, + LeavesReader *pReaders, int nReaders, + LeafWriter *pWriter){ + DLReader dlReaders[MERGE_COUNT]; + const char *pTerm = leavesReaderTerm(pReaders); + int i, nTerm = leavesReaderTermBytes(pReaders); + + assert( nReaders<=MERGE_COUNT ); + + for(i=0; i0 ){ + rc = leavesReaderStep(v, lrs+i); + if( rc!=SQLITE_OK ) goto err; + + /* Reorder by term, then by age. */ + leavesReaderReorder(lrs+i, MERGE_COUNT-i); + } + } + + for(i=0; i0 ); + + for(rc=SQLITE_OK; rc==SQLITE_OK && !leavesReaderAtEnd(pReader); + rc=leavesReaderStep(v, pReader)){ + /* TODO(shess) Really want leavesReaderTermCmp(), but that name is + ** already taken to compare the terms of two LeavesReaders. Think + ** on a better name. [Meanwhile, break encapsulation rather than + ** use a confusing name.] + */ + int c = leafReaderTermCmp(&pReader->leafReader, pTerm, nTerm, isPrefix); + if( c>0 ) break; /* Past any possible matches. */ + if( c==0 ){ + const char *pData = leavesReaderData(pReader); + int iBuffer, nData = leavesReaderDataBytes(pReader); + + /* Find the first empty buffer. */ + for(iBuffer=0; iBuffer0 ){ + assert(pBuffers!=NULL); + memcpy(p, pBuffers, nBuffers*sizeof(*pBuffers)); + sqlite3_free(pBuffers); + } + pBuffers = p; + } + dataBufferInit(&(pBuffers[nBuffers]), 0); + nBuffers++; + } + + /* At this point, must have an empty at iBuffer. */ + assert(iBufferpData, p->nData); + + /* dataBufferReset() could allow a large doclist to blow up + ** our memory requirements. + */ + if( p->nCapacity<1024 ){ + dataBufferReset(p); + }else{ + dataBufferDestroy(p); + dataBufferInit(p, 0); + } + } + } + } + } + + /* Union all the doclists together into *out. */ + /* TODO(shess) What if *out is big? Sigh. 
*/ + if( rc==SQLITE_OK && nBuffers>0 ){ + int iBuffer; + for(iBuffer=0; iBuffer0 ){ + if( out->nData==0 ){ + dataBufferSwap(out, &(pBuffers[iBuffer])); + }else{ + docListAccumulateUnion(out, pBuffers[iBuffer].pData, + pBuffers[iBuffer].nData); + } + } + } + } + + while( nBuffers-- ){ + dataBufferDestroy(&(pBuffers[nBuffers])); + } + if( pBuffers!=NULL ) sqlite3_free(pBuffers); + + return rc; +} + +/* Call loadSegmentLeavesInt() with pData/nData as input. */ +static int loadSegmentLeaf(fulltext_vtab *v, const char *pData, int nData, + const char *pTerm, int nTerm, int isPrefix, + DataBuffer *out){ + LeavesReader reader; + int rc; + + assert( nData>1 ); + assert( *pData=='\0' ); + rc = leavesReaderInit(v, 0, 0, 0, pData, nData, &reader); + if( rc!=SQLITE_OK ) return rc; + + rc = loadSegmentLeavesInt(v, &reader, pTerm, nTerm, isPrefix, out); + leavesReaderReset(&reader); + leavesReaderDestroy(&reader); + return rc; +} + +/* Call loadSegmentLeavesInt() with the leaf nodes from iStartLeaf to +** iEndLeaf (inclusive) as input, and merge the resulting doclist into +** out. +*/ +static int loadSegmentLeaves(fulltext_vtab *v, + sqlite_int64 iStartLeaf, sqlite_int64 iEndLeaf, + const char *pTerm, int nTerm, int isPrefix, + DataBuffer *out){ + int rc; + LeavesReader reader; + + assert( iStartLeaf<=iEndLeaf ); + rc = leavesReaderInit(v, 0, iStartLeaf, iEndLeaf, NULL, 0, &reader); + if( rc!=SQLITE_OK ) return rc; + + rc = loadSegmentLeavesInt(v, &reader, pTerm, nTerm, isPrefix, out); + leavesReaderReset(&reader); + leavesReaderDestroy(&reader); + return rc; +} + +/* Taking pData/nData as an interior node, find the sequence of child +** nodes which could include pTerm/nTerm/isPrefix. Note that the +** interior node terms logically come between the blocks, so there is +** one more blockid than there are terms (that block contains terms >= +** the last interior-node term). +*/ +/* TODO(shess) The calling code may already know that the end child is +** not worth calculating, because the end may be in a later sibling +** node. Consider whether breaking symmetry is worthwhile. I suspect +** it is not worthwhile. +*/ +static void getChildrenContaining(const char *pData, int nData, + const char *pTerm, int nTerm, int isPrefix, + sqlite_int64 *piStartChild, + sqlite_int64 *piEndChild){ + InteriorReader reader; + + assert( nData>1 ); + assert( *pData!='\0' ); + interiorReaderInit(pData, nData, &reader); + + /* Scan for the first child which could contain pTerm/nTerm. */ + while( !interiorReaderAtEnd(&reader) ){ + if( interiorReaderTermCmp(&reader, pTerm, nTerm, 0)>0 ) break; + interiorReaderStep(&reader); + } + *piStartChild = interiorReaderCurrentBlockid(&reader); + + /* Keep scanning to find a term greater than our term, using prefix + ** comparison if indicated. If isPrefix is false, this will be the + ** same blockid as the starting block. + */ + while( !interiorReaderAtEnd(&reader) ){ + if( interiorReaderTermCmp(&reader, pTerm, nTerm, isPrefix)>0 ) break; + interiorReaderStep(&reader); + } + *piEndChild = interiorReaderCurrentBlockid(&reader); + + interiorReaderDestroy(&reader); + + /* Children must ascend, and if !prefix, both must be the same. */ + assert( *piEndChild>=*piStartChild ); + assert( isPrefix || *piStartChild==*piEndChild ); +} + +/* Read block at iBlockid and pass it with other params to +** getChildrenContaining(). 
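/* Worked example (editor's note, not part of the patch): suppose an interior
** node holds the separator terms "carrot" and "melon" between children
** B, B+1 and B+2.  getChildrenContaining() above then selects
**
**     query term "apple"  ->  start child B     (sorts before "carrot")
**     query term "grape"  ->  start child B+1   (between the separators)
**     query term "peach"  ->  start child B+2   (sorts after "melon")
**
** For a prefix query such as "ca*" the second scan keeps going with prefix
** comparison, so the start and end child may differ.
*/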
+*/ +static int loadAndGetChildrenContaining( + fulltext_vtab *v, + sqlite_int64 iBlockid, + const char *pTerm, int nTerm, int isPrefix, + sqlite_int64 *piStartChild, sqlite_int64 *piEndChild +){ + sqlite3_stmt *s = NULL; + int rc; + + assert( iBlockid!=0 ); + assert( pTerm!=NULL ); + assert( nTerm!=0 ); /* TODO(shess) Why not allow this? */ + assert( piStartChild!=NULL ); + assert( piEndChild!=NULL ); + + rc = sql_get_statement(v, BLOCK_SELECT_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_bind_int64(s, 1, iBlockid); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3_step(s); + if( rc==SQLITE_DONE ) return SQLITE_ERROR; + if( rc!=SQLITE_ROW ) return rc; + + getChildrenContaining(sqlite3_column_blob(s, 0), sqlite3_column_bytes(s, 0), + pTerm, nTerm, isPrefix, piStartChild, piEndChild); + + /* We expect only one row. We must execute another sqlite3_step() + * to complete the iteration; otherwise the table will remain + * locked. */ + rc = sqlite3_step(s); + if( rc==SQLITE_ROW ) return SQLITE_ERROR; + if( rc!=SQLITE_DONE ) return rc; + + return SQLITE_OK; +} + +/* Traverse the tree represented by pData[nData] looking for +** pTerm[nTerm], placing its doclist into *out. This is internal to +** loadSegment() to make error-handling cleaner. +*/ +static int loadSegmentInt(fulltext_vtab *v, const char *pData, int nData, + sqlite_int64 iLeavesEnd, + const char *pTerm, int nTerm, int isPrefix, + DataBuffer *out){ + /* Special case where root is a leaf. */ + if( *pData=='\0' ){ + return loadSegmentLeaf(v, pData, nData, pTerm, nTerm, isPrefix, out); + }else{ + int rc; + sqlite_int64 iStartChild, iEndChild; + + /* Process pData as an interior node, then loop down the tree + ** until we find the set of leaf nodes to scan for the term. + */ + getChildrenContaining(pData, nData, pTerm, nTerm, isPrefix, + &iStartChild, &iEndChild); + while( iStartChild>iLeavesEnd ){ + sqlite_int64 iNextStart, iNextEnd; + rc = loadAndGetChildrenContaining(v, iStartChild, pTerm, nTerm, isPrefix, + &iNextStart, &iNextEnd); + if( rc!=SQLITE_OK ) return rc; + + /* If we've branched, follow the end branch, too. */ + if( iStartChild!=iEndChild ){ + sqlite_int64 iDummy; + rc = loadAndGetChildrenContaining(v, iEndChild, pTerm, nTerm, isPrefix, + &iDummy, &iNextEnd); + if( rc!=SQLITE_OK ) return rc; + } + + assert( iNextStart<=iNextEnd ); + iStartChild = iNextStart; + iEndChild = iNextEnd; + } + assert( iStartChild<=iLeavesEnd ); + assert( iEndChild<=iLeavesEnd ); + + /* Scan through the leaf segments for doclists. */ + return loadSegmentLeaves(v, iStartChild, iEndChild, + pTerm, nTerm, isPrefix, out); + } +} + +/* Call loadSegmentInt() to collect the doclist for pTerm/nTerm, then +** merge its doclist over *out (any duplicate doclists read from the +** segment rooted at pData will overwrite those in *out). +*/ +/* TODO(shess) Consider changing this to determine the depth of the +** leaves using either the first characters of interior nodes (when +** ==1, we're one level above the leaves), or the first character of +** the root (which will describe the height of the tree directly). +** Either feels somewhat tricky to me. +*/ +/* TODO(shess) The current merge is likely to be slow for large +** doclists (though it should process from newest/smallest to +** oldest/largest, so it may not be that bad). It might be useful to +** modify things to allow for N-way merging. This could either be +** within a segment, with pairwise merges across segments, or across +** all segments at once. 
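**
** [Illustrative sketch, not from the fts3 sources] The overwrite rule used
** by loadSegment() below and by termSelect() further on -- segments are
** visited oldest to newest, and when the same docid appears in both inputs
** the newer entry replaces the older one -- is an ordinary sorted two-way
** merge.  With invented types in place of the real doclist encoding:
**
**   typedef struct Entry { long long docid; int payload; } Entry;
**
**   static int mergeNewerWins(const Entry *old, int nOld,
**                             const Entry *neu, int nNew, Entry *out){
**     int i = 0, j = 0, k = 0;
**     while( i<nOld && j<nNew ){
**       if( old[i].docid<neu[j].docid )      out[k++] = old[i++];
**       else if( old[i].docid>neu[j].docid ) out[k++] = neu[j++];
**       else { out[k++] = neu[j++]; i++; }   // newer wins on a tie
**     }
**     while( i<nOld ) out[k++] = old[i++];
**     while( j<nNew ) out[k++] = neu[j++];
**     return k;                              // number of merged entries
**   }
**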
+*/ +static int loadSegment(fulltext_vtab *v, const char *pData, int nData, + sqlite_int64 iLeavesEnd, + const char *pTerm, int nTerm, int isPrefix, + DataBuffer *out){ + DataBuffer result; + int rc; + + assert( nData>1 ); + + /* This code should never be called with buffered updates. */ + assert( v->nPendingData<0 ); + + dataBufferInit(&result, 0); + rc = loadSegmentInt(v, pData, nData, iLeavesEnd, + pTerm, nTerm, isPrefix, &result); + if( rc==SQLITE_OK && result.nData>0 ){ + if( out->nData==0 ){ + DataBuffer tmp = *out; + *out = result; + result = tmp; + }else{ + DataBuffer merged; + DLReader readers[2]; + + dlrInit(&readers[0], DL_DEFAULT, out->pData, out->nData); + dlrInit(&readers[1], DL_DEFAULT, result.pData, result.nData); + dataBufferInit(&merged, out->nData+result.nData); + docListMerge(&merged, readers, 2); + dataBufferDestroy(out); + *out = merged; + dlrDestroy(&readers[0]); + dlrDestroy(&readers[1]); + } + } + dataBufferDestroy(&result); + return rc; +} + +/* Scan the database and merge together the posting lists for the term +** into *out. +*/ +static int termSelect(fulltext_vtab *v, int iColumn, + const char *pTerm, int nTerm, int isPrefix, + DocListType iType, DataBuffer *out){ + DataBuffer doclist; + sqlite3_stmt *s; + int rc = sql_get_statement(v, SEGDIR_SELECT_ALL_STMT, &s); + if( rc!=SQLITE_OK ) return rc; + + /* This code should never be called with buffered updates. */ + assert( v->nPendingData<0 ); + + dataBufferInit(&doclist, 0); + + /* Traverse the segments from oldest to newest so that newer doclist + ** elements for given docids overwrite older elements. + */ + while( (rc = sqlite3_step(s))==SQLITE_ROW ){ + const char *pData = sqlite3_column_blob(s, 0); + const int nData = sqlite3_column_bytes(s, 0); + const sqlite_int64 iLeavesEnd = sqlite3_column_int64(s, 1); + rc = loadSegment(v, pData, nData, iLeavesEnd, pTerm, nTerm, isPrefix, + &doclist); + if( rc!=SQLITE_OK ) goto err; + } + if( rc==SQLITE_DONE ){ + if( doclist.nData!=0 ){ + /* TODO(shess) The old term_select_all() code applied the column + ** restrict as we merged segments, leading to smaller buffers. + ** This is probably worthwhile to bring back, once the new storage + ** system is checked in. + */ + if( iColumn==v->nColumn) iColumn = -1; + docListTrim(DL_DEFAULT, doclist.pData, doclist.nData, + iColumn, iType, out); + } + rc = SQLITE_OK; + } + + err: + dataBufferDestroy(&doclist); + return rc; +} + +/****************************************************************/ +/* Used to hold hashtable data for sorting. */ +typedef struct TermData { + const char *pTerm; + int nTerm; + DLCollector *pCollector; +} TermData; + +/* Orders TermData elements in strcmp fashion ( <0 for less-than, 0 +** for equal, >0 for greater-than). +*/ +static int termDataCmp(const void *av, const void *bv){ + const TermData *a = (const TermData *)av; + const TermData *b = (const TermData *)bv; + int n = a->nTermnTerm ? a->nTerm : b->nTerm; + int c = memcmp(a->pTerm, b->pTerm, n); + if( c!=0 ) return c; + return a->nTerm-b->nTerm; +} + +/* Order pTerms data by term, then write a new level 0 segment using +** LeafWriter. +*/ +static int writeZeroSegment(fulltext_vtab *v, fts3Hash *pTerms){ + fts3HashElem *e; + int idx, rc, i, n; + TermData *pData; + LeafWriter writer; + DataBuffer dl; + + /* Determine the next index at level 0, merging as necessary. 
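**
** [Illustrative sketch, not from the fts3 sources] The ordering produced by
** termDataCmp() above, and used by the qsort() call below, is plain
** byte-wise comparison of length-counted strings, shorter string first on a
** tie; the comparison first takes the smaller of the two term lengths and
** only then calls memcmp().  A throwaway equivalent with invented names:
**
**   typedef struct T { const char *p; int n; } T;
**
**   static int tCmp(const void *av, const void *bv){
**     const T *a = (const T*)av, *b = (const T*)bv;
**     int n = a->n<b->n ? a->n : b->n;
**     int c = memcmp(a->p, b->p, n);
**     return c ? c : a->n - b->n;
**   }
**
**   // {"beta",4}, {"alphabet",8}, {"alpha",5} sorts to
**   // alpha, alphabet, beta under qsort(t, 3, sizeof(t[0]), tCmp).
**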
*/ + rc = segdirNextIndex(v, 0, &idx); + if( rc!=SQLITE_OK ) return rc; + + n = fts3HashCount(pTerms); + pData = sqlite3_malloc(n*sizeof(TermData)); + + for(i = 0, e = fts3HashFirst(pTerms); e; i++, e = fts3HashNext(e)){ + assert( i1 ) qsort(pData, n, sizeof(*pData), termDataCmp); + + /* TODO(shess) Refactor so that we can write directly to the segment + ** DataBuffer, as happens for segment merges. + */ + leafWriterInit(0, idx, &writer); + dataBufferInit(&dl, 0); + for(i=0; inPendingData>=0 ){ + fts3HashElem *e; + for(e=fts3HashFirst(&v->pendingTerms); e; e=fts3HashNext(e)){ + dlcDelete(fts3HashData(e)); + } + fts3HashClear(&v->pendingTerms); + v->nPendingData = -1; + } + return SQLITE_OK; +} + +/* If pendingTerms has data, flush it to a level-zero segment, and +** free it. +*/ +static int flushPendingTerms(fulltext_vtab *v){ + if( v->nPendingData>=0 ){ + int rc = writeZeroSegment(v, &v->pendingTerms); + if( rc==SQLITE_OK ) clearPendingTerms(v); + return rc; + } + return SQLITE_OK; +} + +/* If pendingTerms is "too big", or docid is out of order, flush it. +** Regardless, be certain that pendingTerms is initialized for use. +*/ +static int initPendingTerms(fulltext_vtab *v, sqlite_int64 iDocid){ + /* TODO(shess) Explore whether partially flushing the buffer on + ** forced-flush would provide better performance. I suspect that if + ** we ordered the doclists by size and flushed the largest until the + ** buffer was half empty, that would let the less frequent terms + ** generate longer doclists. + */ + if( iDocid<=v->iPrevDocid || v->nPendingData>kPendingThreshold ){ + int rc = flushPendingTerms(v); + if( rc!=SQLITE_OK ) return rc; + } + if( v->nPendingData<0 ){ + fts3HashInit(&v->pendingTerms, FTS3_HASH_STRING, 1); + v->nPendingData = 0; + } + v->iPrevDocid = iDocid; + return SQLITE_OK; +} + +/* This function implements the xUpdate callback; it is the top-level entry + * point for inserting, deleting or updating a row in a full-text table. */ +static int fulltextUpdate(sqlite3_vtab *pVtab, int nArg, sqlite3_value **ppArg, + sqlite_int64 *pRowid){ + fulltext_vtab *v = (fulltext_vtab *) pVtab; + int rc; + + FTSTRACE(("FTS3 Update %p\n", pVtab)); + + if( nArg<2 ){ + rc = index_delete(v, sqlite3_value_int64(ppArg[0])); + } else if( sqlite3_value_type(ppArg[0]) != SQLITE_NULL ){ + /* An update: + * ppArg[0] = old rowid + * ppArg[1] = new rowid + * ppArg[2..2+v->nColumn-1] = values + * ppArg[2+v->nColumn] = value for magic column (we ignore this) + * ppArg[2+v->nColumn+1] = value for docid + */ + sqlite_int64 rowid = sqlite3_value_int64(ppArg[0]); + if( sqlite3_value_type(ppArg[1]) != SQLITE_INTEGER || + sqlite3_value_int64(ppArg[1]) != rowid ){ + rc = SQLITE_ERROR; /* we don't allow changing the rowid */ + }else if( sqlite3_value_type(ppArg[2+v->nColumn+1]) != SQLITE_INTEGER || + sqlite3_value_int64(ppArg[2+v->nColumn+1]) != rowid ){ + rc = SQLITE_ERROR; /* we don't allow changing the docid */ + }else{ + assert( nArg==2+v->nColumn+2); + rc = index_update(v, rowid, &ppArg[2]); + } + } else { + /* An insert: + * ppArg[1] = requested rowid + * ppArg[2..2+v->nColumn-1] = values + * ppArg[2+v->nColumn] = value for magic column (we ignore this) + * ppArg[2+v->nColumn+1] = value for docid + */ + sqlite3_value *pRequestDocid = ppArg[2+v->nColumn+1]; + assert( nArg==2+v->nColumn+2); + if( SQLITE_NULL != sqlite3_value_type(pRequestDocid) && + SQLITE_NULL != sqlite3_value_type(ppArg[1]) ){ + /* TODO(shess) Consider allowing this to work if the values are + ** identical. 
I'm inclined to discourage that usage, though, + ** given that both rowid and docid are special columns. Better + ** would be to define one or the other as the default winner, + ** but should it be fts3-centric (docid) or SQLite-centric + ** (rowid)? + */ + rc = SQLITE_ERROR; + }else{ + if( SQLITE_NULL == sqlite3_value_type(pRequestDocid) ){ + pRequestDocid = ppArg[1]; + } + rc = index_insert(v, pRequestDocid, &ppArg[2], pRowid); + } + } + + return rc; +} + +static int fulltextSync(sqlite3_vtab *pVtab){ + FTSTRACE(("FTS3 xSync()\n")); + return flushPendingTerms((fulltext_vtab *)pVtab); +} + +static int fulltextBegin(sqlite3_vtab *pVtab){ + fulltext_vtab *v = (fulltext_vtab *) pVtab; + FTSTRACE(("FTS3 xBegin()\n")); + + /* Any buffered updates should have been cleared by the previous + ** transaction. + */ + assert( v->nPendingData<0 ); + return clearPendingTerms(v); +} + +static int fulltextCommit(sqlite3_vtab *pVtab){ + fulltext_vtab *v = (fulltext_vtab *) pVtab; + FTSTRACE(("FTS3 xCommit()\n")); + + /* Buffered updates should have been cleared by fulltextSync(). */ + assert( v->nPendingData<0 ); + return clearPendingTerms(v); +} + +static int fulltextRollback(sqlite3_vtab *pVtab){ + FTSTRACE(("FTS3 xRollback()\n")); + return clearPendingTerms((fulltext_vtab *)pVtab); +} + +/* +** Implementation of the snippet() function for FTS3 +*/ +static void snippetFunc( + sqlite3_context *pContext, + int argc, + sqlite3_value **argv +){ + fulltext_cursor *pCursor; + if( argc<1 ) return; + if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || + sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ + sqlite3_result_error(pContext, "illegal first argument to html_snippet",-1); + }else{ + const char *zStart = ""; + const char *zEnd = ""; + const char *zEllipsis = "..."; + memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); + if( argc>=2 ){ + zStart = (const char*)sqlite3_value_text(argv[1]); + if( argc>=3 ){ + zEnd = (const char*)sqlite3_value_text(argv[2]); + if( argc>=4 ){ + zEllipsis = (const char*)sqlite3_value_text(argv[3]); + } + } + } + snippetAllOffsets(pCursor); + snippetText(pCursor, zStart, zEnd, zEllipsis); + sqlite3_result_text(pContext, pCursor->snippet.zSnippet, + pCursor->snippet.nSnippet, SQLITE_STATIC); + } +} + +/* +** Implementation of the offsets() function for FTS3 +*/ +static void snippetOffsetsFunc( + sqlite3_context *pContext, + int argc, + sqlite3_value **argv +){ + fulltext_cursor *pCursor; + if( argc<1 ) return; + if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || + sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ + sqlite3_result_error(pContext, "illegal first argument to offsets",-1); + }else{ + memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); + snippetAllOffsets(pCursor); + snippetOffsetText(&pCursor->snippet); + sqlite3_result_text(pContext, + pCursor->snippet.zOffset, pCursor->snippet.nOffset, + SQLITE_STATIC); + } +} + +/* +** This routine implements the xFindFunction method for the FTS3 +** virtual table. +*/ +static int fulltextFindFunction( + sqlite3_vtab *pVtab, + int nArg, + const char *zName, + void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), + void **ppArg +){ + if( strcmp(zName,"snippet")==0 ){ + *pxFunc = snippetFunc; + return 1; + }else if( strcmp(zName,"offsets")==0 ){ + *pxFunc = snippetOffsetsFunc; + return 1; + } + return 0; +} + +/* +** Rename an fts3 table. 
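**
** [Illustrative sketch, not from the fts3 sources] Once the module is
** registered, snippetFunc() and snippetOffsetsFunc() above are reached from
** SQL through the overloaded snippet() and offsets() functions.  (The
** blank-looking default markers in this listing are the HTML bold tags in
** the original file.)  Table and column names below are invented, and
** "callback" is any ordinary sqlite3_exec() callback:
**
**   sqlite3_exec(db,
**     "CREATE VIRTUAL TABLE docs USING fts3(body);"
**     "INSERT INTO docs(body) VALUES('the quick brown fox');"
**     "SELECT snippet(docs, '[', ']', '...'), offsets(docs)"
**     "  FROM docs WHERE body MATCH 'quick';",
**     callback, 0, 0);
**
** snippet() wraps each matching term in the given markers ("the [quick]
** brown fox") and offsets() reports, for every match, the column number,
** the term number, the byte offset and the byte length.
**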
+*/ +static int fulltextRename( + sqlite3_vtab *pVtab, + const char *zName +){ + fulltext_vtab *p = (fulltext_vtab *)pVtab; + int rc = SQLITE_NOMEM; + char *zSql = sqlite3_mprintf( + "ALTER TABLE %Q.'%q_content' RENAME TO '%q_content';" + "ALTER TABLE %Q.'%q_segments' RENAME TO '%q_segments';" + "ALTER TABLE %Q.'%q_segdir' RENAME TO '%q_segdir';" + , p->zDb, p->zName, zName + , p->zDb, p->zName, zName + , p->zDb, p->zName, zName + ); + if( zSql ){ + rc = sqlite3_exec(p->db, zSql, 0, 0, 0); + sqlite3_free(zSql); + } + return rc; +} + +static const sqlite3_module fts3Module = { + /* iVersion */ 0, + /* xCreate */ fulltextCreate, + /* xConnect */ fulltextConnect, + /* xBestIndex */ fulltextBestIndex, + /* xDisconnect */ fulltextDisconnect, + /* xDestroy */ fulltextDestroy, + /* xOpen */ fulltextOpen, + /* xClose */ fulltextClose, + /* xFilter */ fulltextFilter, + /* xNext */ fulltextNext, + /* xEof */ fulltextEof, + /* xColumn */ fulltextColumn, + /* xRowid */ fulltextRowid, + /* xUpdate */ fulltextUpdate, + /* xBegin */ fulltextBegin, + /* xSync */ fulltextSync, + /* xCommit */ fulltextCommit, + /* xRollback */ fulltextRollback, + /* xFindFunction */ fulltextFindFunction, + /* xRename */ fulltextRename, +}; + +static void hashDestroy(void *p){ + fts3Hash *pHash = (fts3Hash *)p; + sqlite3Fts3HashClear(pHash); + sqlite3_free(pHash); +} + +/* +** The fts3 built-in tokenizers - "simple" and "porter" - are implemented +** in files fts3_tokenizer1.c and fts3_porter.c respectively. The following +** two forward declarations are for functions declared in these files +** used to retrieve the respective implementations. +** +** Calling sqlite3Fts3SimpleTokenizerModule() sets the value pointed +** to by the argument to point a the "simple" tokenizer implementation. +** Function ...PorterTokenizerModule() sets *pModule to point to the +** porter tokenizer/stemmer implementation. +*/ +void sqlite3Fts3SimpleTokenizerModule(sqlite3_tokenizer_module const**ppModule); +void sqlite3Fts3PorterTokenizerModule(sqlite3_tokenizer_module const**ppModule); +void sqlite3Fts3IcuTokenizerModule(sqlite3_tokenizer_module const**ppModule); + +int sqlite3Fts3InitHashTable(sqlite3 *, fts3Hash *, const char *); + +/* +** Initialise the fts3 extension. If this extension is built as part +** of the sqlite library, then this function is called directly by +** SQLite. If fts3 is built as a dynamically loadable extension, this +** function is called by the sqlite3_extension_init() entry point. +*/ +int sqlite3Fts3Init(sqlite3 *db){ + int rc = SQLITE_OK; + fts3Hash *pHash = 0; + const sqlite3_tokenizer_module *pSimple = 0; + const sqlite3_tokenizer_module *pPorter = 0; + const sqlite3_tokenizer_module *pIcu = 0; + + sqlite3Fts3SimpleTokenizerModule(&pSimple); + sqlite3Fts3PorterTokenizerModule(&pPorter); +#ifdef SQLITE_ENABLE_ICU + sqlite3Fts3IcuTokenizerModule(&pIcu); +#endif + + /* Allocate and initialise the hash-table used to store tokenizers. */ + pHash = sqlite3_malloc(sizeof(fts3Hash)); + if( !pHash ){ + rc = SQLITE_NOMEM; + }else{ + sqlite3Fts3HashInit(pHash, FTS3_HASH_STRING, 1); + } + + /* Load the built-in tokenizers into the hash table */ + if( rc==SQLITE_OK ){ + if( sqlite3Fts3HashInsert(pHash, "simple", 7, (void *)pSimple) + || sqlite3Fts3HashInsert(pHash, "porter", 7, (void *)pPorter) + || (pIcu && sqlite3Fts3HashInsert(pHash, "icu", 4, (void *)pIcu)) + ){ + rc = SQLITE_NOMEM; + } + } + + /* Create the virtual table wrapper around the hash-table and overload + ** the two scalar functions. 
If this is successful, register the + ** module with sqlite. + */ + if( SQLITE_OK==rc + && SQLITE_OK==(rc = sqlite3Fts3InitHashTable(db, pHash, "fts3_tokenizer")) + && SQLITE_OK==(rc = sqlite3_overload_function(db, "snippet", -1)) + && SQLITE_OK==(rc = sqlite3_overload_function(db, "offsets", -1)) + ){ + return sqlite3_create_module_v2( + db, "fts3", &fts3Module, (void *)pHash, hashDestroy + ); + } + + /* An error has occured. Delete the hash table and return the error code. */ + assert( rc!=SQLITE_OK ); + if( pHash ){ + sqlite3Fts3HashClear(pHash); + sqlite3_free(pHash); + } + return rc; +} + +#if !SQLITE_CORE +int sqlite3_extension_init( + sqlite3 *db, + char **pzErrMsg, + const sqlite3_api_routines *pApi +){ + SQLITE_EXTENSION_INIT2(pApi) + return sqlite3Fts3Init(db); +} +#endif + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ Added: external/sqlite-source-3.5.7.x/fts3.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,26 @@ +/* +** 2006 Oct 10 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This header file is used by programs that want to link against the +** FTS3 library. All it does is declare the sqlite3Fts3Init() interface. +*/ +#include "sqlite3.h" + +#ifdef __cplusplus +extern "C" { +#endif /* __cplusplus */ + +int sqlite3Fts3Init(sqlite3 *db); + +#ifdef __cplusplus +} /* extern "C" */ +#endif /* __cplusplus */ Added: external/sqlite-source-3.5.7.x/fts3_hash.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_hash.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,374 @@ +/* +** 2001 September 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This is the implementation of generic hash-tables used in SQLite. +** We've modified it slightly to serve as a standalone hash table +** implementation for the full-text indexing module. +*/ + +/* +** The code in this file is only compiled if: +** +** * The FTS3 module is being built as an extension +** (in which case SQLITE_CORE is not defined), or +** +** * The FTS3 module is being built into the core of +** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). +*/ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +#include +#include +#include + +#include "sqlite3.h" +#include "fts3_hash.h" + +/* +** Malloc and Free functions +*/ +static void *fts3HashMalloc(int n){ + void *p = sqlite3_malloc(n); + if( p ){ + memset(p, 0, n); + } + return p; +} +static void fts3HashFree(void *p){ + sqlite3_free(p); +} + +/* Turn bulk memory into a hash table object by initializing the +** fields of the Hash structure. +** +** "pNew" is a pointer to the hash table that is to be initialized. +** keyClass is one of the constants +** FTS3_HASH_BINARY or FTS3_HASH_STRING. 
The value of keyClass +** determines what kind of key the hash table will use. "copyKey" is +** true if the hash table should make its own private copy of keys and +** false if it should just use the supplied pointer. +*/ +void sqlite3Fts3HashInit(fts3Hash *pNew, int keyClass, int copyKey){ + assert( pNew!=0 ); + assert( keyClass>=FTS3_HASH_STRING && keyClass<=FTS3_HASH_BINARY ); + pNew->keyClass = keyClass; + pNew->copyKey = copyKey; + pNew->first = 0; + pNew->count = 0; + pNew->htsize = 0; + pNew->ht = 0; +} + +/* Remove all entries from a hash table. Reclaim all memory. +** Call this routine to delete a hash table or to reset a hash table +** to the empty state. +*/ +void sqlite3Fts3HashClear(fts3Hash *pH){ + fts3HashElem *elem; /* For looping over all elements of the table */ + + assert( pH!=0 ); + elem = pH->first; + pH->first = 0; + fts3HashFree(pH->ht); + pH->ht = 0; + pH->htsize = 0; + while( elem ){ + fts3HashElem *next_elem = elem->next; + if( pH->copyKey && elem->pKey ){ + fts3HashFree(elem->pKey); + } + fts3HashFree(elem); + elem = next_elem; + } + pH->count = 0; +} + +/* +** Hash and comparison functions when the mode is FTS3_HASH_STRING +*/ +static int fts3StrHash(const void *pKey, int nKey){ + const char *z = (const char *)pKey; + int h = 0; + if( nKey<=0 ) nKey = (int) strlen(z); + while( nKey > 0 ){ + h = (h<<3) ^ h ^ *z++; + nKey--; + } + return h & 0x7fffffff; +} +static int fts3StrCompare(const void *pKey1, int n1, const void *pKey2, int n2){ + if( n1!=n2 ) return 1; + return strncmp((const char*)pKey1,(const char*)pKey2,n1); +} + +/* +** Hash and comparison functions when the mode is FTS3_HASH_BINARY +*/ +static int fts3BinHash(const void *pKey, int nKey){ + int h = 0; + const char *z = (const char *)pKey; + while( nKey-- > 0 ){ + h = (h<<3) ^ h ^ *(z++); + } + return h & 0x7fffffff; +} +static int fts3BinCompare(const void *pKey1, int n1, const void *pKey2, int n2){ + if( n1!=n2 ) return 1; + return memcmp(pKey1,pKey2,n1); +} + +/* +** Return a pointer to the appropriate hash function given the key class. +** +** The C syntax in this function definition may be unfamilar to some +** programmers, so we provide the following additional explanation: +** +** The name of the function is "ftsHashFunction". The function takes a +** single parameter "keyClass". The return value of ftsHashFunction() +** is a pointer to another function. Specifically, the return value +** of ftsHashFunction() is a pointer to a function that takes two parameters +** with types "const void*" and "int" and returns an "int". +*/ +static int (*ftsHashFunction(int keyClass))(const void*,int){ + if( keyClass==FTS3_HASH_STRING ){ + return &fts3StrHash; + }else{ + assert( keyClass==FTS3_HASH_BINARY ); + return &fts3BinHash; + } +} + +/* +** Return a pointer to the appropriate hash function given the key class. +** +** For help in interpreted the obscure C code in the function definition, +** see the header comment on the previous function. 
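**
** [Illustrative sketch, not from the fts3 sources] The declarator used by
** ftsHashFunction() above (and by ftsCompareFunction() below) reads more
** easily next to its typedef form; both declare a function that takes an
** int and returns a pointer to a function (const void*, int) -> int:
**
**   static int (*pick(int keyClass))(const void*, int);   // as written above
**
**   typedef int (*HashFn)(const void*, int);
**   static HashFn pick(int keyClass);                      // equivalent
**
** Either way, pick(FTS3_HASH_STRING)(pKey, nKey) first selects the hasher
** and then calls it.
**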
+*/ +static int (*ftsCompareFunction(int keyClass))(const void*,int,const void*,int){ + if( keyClass==FTS3_HASH_STRING ){ + return &fts3StrCompare; + }else{ + assert( keyClass==FTS3_HASH_BINARY ); + return &fts3BinCompare; + } +} + +/* Link an element into the hash table +*/ +static void fts3HashInsertElement( + fts3Hash *pH, /* The complete hash table */ + struct _fts3ht *pEntry, /* The entry into which pNew is inserted */ + fts3HashElem *pNew /* The element to be inserted */ +){ + fts3HashElem *pHead; /* First element already in pEntry */ + pHead = pEntry->chain; + if( pHead ){ + pNew->next = pHead; + pNew->prev = pHead->prev; + if( pHead->prev ){ pHead->prev->next = pNew; } + else { pH->first = pNew; } + pHead->prev = pNew; + }else{ + pNew->next = pH->first; + if( pH->first ){ pH->first->prev = pNew; } + pNew->prev = 0; + pH->first = pNew; + } + pEntry->count++; + pEntry->chain = pNew; +} + + +/* Resize the hash table so that it cantains "new_size" buckets. +** "new_size" must be a power of 2. The hash table might fail +** to resize if sqliteMalloc() fails. +*/ +static void fts3Rehash(fts3Hash *pH, int new_size){ + struct _fts3ht *new_ht; /* The new hash table */ + fts3HashElem *elem, *next_elem; /* For looping over existing elements */ + int (*xHash)(const void*,int); /* The hash function */ + + assert( (new_size & (new_size-1))==0 ); + new_ht = (struct _fts3ht *)fts3HashMalloc( new_size*sizeof(struct _fts3ht) ); + if( new_ht==0 ) return; + fts3HashFree(pH->ht); + pH->ht = new_ht; + pH->htsize = new_size; + xHash = ftsHashFunction(pH->keyClass); + for(elem=pH->first, pH->first=0; elem; elem = next_elem){ + int h = (*xHash)(elem->pKey, elem->nKey) & (new_size-1); + next_elem = elem->next; + fts3HashInsertElement(pH, &new_ht[h], elem); + } +} + +/* This function (for internal use only) locates an element in an +** hash table that matches the given key. The hash for this key has +** already been computed and is passed as the 4th parameter. +*/ +static fts3HashElem *fts3FindElementByHash( + const fts3Hash *pH, /* The pH to be searched */ + const void *pKey, /* The key we are searching for */ + int nKey, + int h /* The hash for this key. */ +){ + fts3HashElem *elem; /* Used to loop thru the element list */ + int count; /* Number of elements left to test */ + int (*xCompare)(const void*,int,const void*,int); /* comparison function */ + + if( pH->ht ){ + struct _fts3ht *pEntry = &pH->ht[h]; + elem = pEntry->chain; + count = pEntry->count; + xCompare = ftsCompareFunction(pH->keyClass); + while( count-- && elem ){ + if( (*xCompare)(elem->pKey,elem->nKey,pKey,nKey)==0 ){ + return elem; + } + elem = elem->next; + } + } + return 0; +} + +/* Remove a single entry from the hash table given a pointer to that +** element and a hash on the element's key. 
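**
** [Illustrative sketch, not from the fts3 sources] The hash value handed to
** routines like the one below is already reduced to a bucket index with a
** mask.  fts3Rehash() above insists on power-of-two table sizes (the assert
** on new_size) precisely so that the bucket can be taken without a
** division:
**
**   // for any size that is a power of two, (size & (size-1)) == 0 and
**   //   h & (size-1)  ==  h % size        for non-negative h
**   int bucket = h & (new_size - 1);
**
** which is how fts3FindElementByHash() above and sqlite3Fts3HashInsert()
** below locate the right chain.
**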
+*/ +static void fts3RemoveElementByHash( + fts3Hash *pH, /* The pH containing "elem" */ + fts3HashElem* elem, /* The element to be removed from the pH */ + int h /* Hash value for the element */ +){ + struct _fts3ht *pEntry; + if( elem->prev ){ + elem->prev->next = elem->next; + }else{ + pH->first = elem->next; + } + if( elem->next ){ + elem->next->prev = elem->prev; + } + pEntry = &pH->ht[h]; + if( pEntry->chain==elem ){ + pEntry->chain = elem->next; + } + pEntry->count--; + if( pEntry->count<=0 ){ + pEntry->chain = 0; + } + if( pH->copyKey && elem->pKey ){ + fts3HashFree(elem->pKey); + } + fts3HashFree( elem ); + pH->count--; + if( pH->count<=0 ){ + assert( pH->first==0 ); + assert( pH->count==0 ); + fts3HashClear(pH); + } +} + +/* Attempt to locate an element of the hash table pH with a key +** that matches pKey,nKey. Return the data for this element if it is +** found, or NULL if there is no match. +*/ +void *sqlite3Fts3HashFind(const fts3Hash *pH, const void *pKey, int nKey){ + int h; /* A hash on key */ + fts3HashElem *elem; /* The element that matches key */ + int (*xHash)(const void*,int); /* The hash function */ + + if( pH==0 || pH->ht==0 ) return 0; + xHash = ftsHashFunction(pH->keyClass); + assert( xHash!=0 ); + h = (*xHash)(pKey,nKey); + assert( (pH->htsize & (pH->htsize-1))==0 ); + elem = fts3FindElementByHash(pH,pKey,nKey, h & (pH->htsize-1)); + return elem ? elem->data : 0; +} + +/* Insert an element into the hash table pH. The key is pKey,nKey +** and the data is "data". +** +** If no element exists with a matching key, then a new +** element is created. A copy of the key is made if the copyKey +** flag is set. NULL is returned. +** +** If another element already exists with the same key, then the +** new data replaces the old data and the old data is returned. +** The key is not copied in this instance. If a malloc fails, then +** the new data is returned and the hash table is unchanged. +** +** If the "data" parameter to this function is NULL, then the +** element corresponding to "key" is removed from the hash table. 
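**
** [Illustrative sketch, not from the fts3 sources] Putting the three cases
** just described together, with a string-keyed table as the FTS3 code uses
** (nKey counts the terminating NUL, so strlen()+1 is passed):
**
**   fts3Hash h;
**   static int one = 1, two = 2;
**
**   sqlite3Fts3HashInit(&h, FTS3_HASH_STRING, 1);   // copyKey=1: keys are copied
**   sqlite3Fts3HashInsert(&h, "alpha", 6, &one);    // new key, returns NULL
**   sqlite3Fts3HashInsert(&h, "alpha", 6, &two);    // same key, returns &one
**   sqlite3Fts3HashFind(&h, "alpha", 6);            // returns &two
**   sqlite3Fts3HashInsert(&h, "alpha", 6, 0);       // NULL data deletes the entry
**   sqlite3Fts3HashClear(&h);
**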
+*/ +void *sqlite3Fts3HashInsert( + fts3Hash *pH, /* The hash table to insert into */ + const void *pKey, /* The key */ + int nKey, /* Number of bytes in the key */ + void *data /* The data */ +){ + int hraw; /* Raw hash value of the key */ + int h; /* the hash of the key modulo hash table size */ + fts3HashElem *elem; /* Used to loop thru the element list */ + fts3HashElem *new_elem; /* New element added to the pH */ + int (*xHash)(const void*,int); /* The hash function */ + + assert( pH!=0 ); + xHash = ftsHashFunction(pH->keyClass); + assert( xHash!=0 ); + hraw = (*xHash)(pKey, nKey); + assert( (pH->htsize & (pH->htsize-1))==0 ); + h = hraw & (pH->htsize-1); + elem = fts3FindElementByHash(pH,pKey,nKey,h); + if( elem ){ + void *old_data = elem->data; + if( data==0 ){ + fts3RemoveElementByHash(pH,elem,h); + }else{ + elem->data = data; + } + return old_data; + } + if( data==0 ) return 0; + new_elem = (fts3HashElem*)fts3HashMalloc( sizeof(fts3HashElem) ); + if( new_elem==0 ) return data; + if( pH->copyKey && pKey!=0 ){ + new_elem->pKey = fts3HashMalloc( nKey ); + if( new_elem->pKey==0 ){ + fts3HashFree(new_elem); + return data; + } + memcpy((void*)new_elem->pKey, pKey, nKey); + }else{ + new_elem->pKey = (void*)pKey; + } + new_elem->nKey = nKey; + pH->count++; + if( pH->htsize==0 ){ + fts3Rehash(pH,8); + if( pH->htsize==0 ){ + pH->count = 0; + fts3HashFree(new_elem); + return data; + } + } + if( pH->count > pH->htsize ){ + fts3Rehash(pH,pH->htsize*2); + } + assert( pH->htsize>0 ); + assert( (pH->htsize & (pH->htsize-1))==0 ); + h = hraw & (pH->htsize-1); + fts3HashInsertElement(pH, &pH->ht[h], new_elem); + new_elem->data = data; + return 0; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ Added: external/sqlite-source-3.5.7.x/fts3_hash.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_hash.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,110 @@ +/* +** 2001 September 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This is the header file for the generic hash-table implemenation +** used in SQLite. We've modified it slightly to serve as a standalone +** hash table implementation for the full-text indexing module. +** +*/ +#ifndef _FTS3_HASH_H_ +#define _FTS3_HASH_H_ + +/* Forward declarations of structures. */ +typedef struct fts3Hash fts3Hash; +typedef struct fts3HashElem fts3HashElem; + +/* A complete hash table is an instance of the following structure. +** The internals of this structure are intended to be opaque -- client +** code should not attempt to access or modify the fields of this structure +** directly. Change this structure only by using the routines below. +** However, many of the "procedures" and "functions" for modifying and +** accessing this structure are really macros, so we can't really make +** this structure opaque. 
+*/ +struct fts3Hash { + char keyClass; /* HASH_INT, _POINTER, _STRING, _BINARY */ + char copyKey; /* True if copy of key made on insert */ + int count; /* Number of entries in this table */ + fts3HashElem *first; /* The first element of the array */ + int htsize; /* Number of buckets in the hash table */ + struct _fts3ht { /* the hash table */ + int count; /* Number of entries with this hash */ + fts3HashElem *chain; /* Pointer to first entry with this hash */ + } *ht; +}; + +/* Each element in the hash table is an instance of the following +** structure. All elements are stored on a single doubly-linked list. +** +** Again, this structure is intended to be opaque, but it can't really +** be opaque because it is used by macros. +*/ +struct fts3HashElem { + fts3HashElem *next, *prev; /* Next and previous elements in the table */ + void *data; /* Data associated with this element */ + void *pKey; int nKey; /* Key associated with this element */ +}; + +/* +** There are 2 different modes of operation for a hash table: +** +** FTS3_HASH_STRING pKey points to a string that is nKey bytes long +** (including the null-terminator, if any). Case +** is respected in comparisons. +** +** FTS3_HASH_BINARY pKey points to binary data nKey bytes long. +** memcmp() is used to compare keys. +** +** A copy of the key is made if the copyKey parameter to fts3HashInit is 1. +*/ +#define FTS3_HASH_STRING 1 +#define FTS3_HASH_BINARY 2 + +/* +** Access routines. To delete, insert a NULL pointer. +*/ +void sqlite3Fts3HashInit(fts3Hash*, int keytype, int copyKey); +void *sqlite3Fts3HashInsert(fts3Hash*, const void *pKey, int nKey, void *pData); +void *sqlite3Fts3HashFind(const fts3Hash*, const void *pKey, int nKey); +void sqlite3Fts3HashClear(fts3Hash*); + +/* +** Shorthand for the functions above +*/ +#define fts3HashInit sqlite3Fts3HashInit +#define fts3HashInsert sqlite3Fts3HashInsert +#define fts3HashFind sqlite3Fts3HashFind +#define fts3HashClear sqlite3Fts3HashClear + +/* +** Macros for looping over all elements of a hash table. The idiom is +** like this: +** +** fts3Hash h; +** fts3HashElem *p; +** ... +** for(p=fts3HashFirst(&h); p; p=fts3HashNext(p)){ +** SomeStructure *pData = fts3HashData(p); +** // do something with pData +** } +*/ +#define fts3HashFirst(H) ((H)->first) +#define fts3HashNext(E) ((E)->next) +#define fts3HashData(E) ((E)->data) +#define fts3HashKey(E) ((E)->pKey) +#define fts3HashKeysize(E) ((E)->nKey) + +/* +** Number of entries in a hash table +*/ +#define fts3HashCount(H) ((H)->count) + +#endif /* _FTS3_HASH_H_ */ Added: external/sqlite-source-3.5.7.x/fts3_icu.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_icu.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,258 @@ +/* +** 2007 June 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file implements a tokenizer for fts3 based on the ICU library. 
+** +** $Id: fts3_icu.c,v 1.2 2007/10/24 21:52:37 shess Exp $ +*/ + +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) +#ifdef SQLITE_ENABLE_ICU + +#include +#include +#include "fts3_tokenizer.h" + +#include +#include +#include +#include + +typedef struct IcuTokenizer IcuTokenizer; +typedef struct IcuCursor IcuCursor; + +struct IcuTokenizer { + sqlite3_tokenizer base; + char *zLocale; +}; + +struct IcuCursor { + sqlite3_tokenizer_cursor base; + + UBreakIterator *pIter; /* ICU break-iterator object */ + int nChar; /* Number of UChar elements in pInput */ + UChar *aChar; /* Copy of input using utf-16 encoding */ + int *aOffset; /* Offsets of each character in utf-8 input */ + + int nBuffer; + char *zBuffer; + + int iToken; +}; + +/* +** Create a new tokenizer instance. +*/ +static int icuCreate( + int argc, /* Number of entries in argv[] */ + const char * const *argv, /* Tokenizer creation arguments */ + sqlite3_tokenizer **ppTokenizer /* OUT: Created tokenizer */ +){ + IcuTokenizer *p; + int n = 0; + + if( argc>0 ){ + n = strlen(argv[0])+1; + } + p = (IcuTokenizer *)sqlite3_malloc(sizeof(IcuTokenizer)+n); + if( !p ){ + return SQLITE_NOMEM; + } + memset(p, 0, sizeof(IcuTokenizer)); + + if( n ){ + p->zLocale = (char *)&p[1]; + memcpy(p->zLocale, argv[0], n); + } + + *ppTokenizer = (sqlite3_tokenizer *)p; + + return SQLITE_OK; +} + +/* +** Destroy a tokenizer +*/ +static int icuDestroy(sqlite3_tokenizer *pTokenizer){ + IcuTokenizer *p = (IcuTokenizer *)pTokenizer; + sqlite3_free(p); + return SQLITE_OK; +} + +/* +** Prepare to begin tokenizing a particular string. The input +** string to be tokenized is pInput[0..nBytes-1]. A cursor +** used to incrementally tokenize this string is returned in +** *ppCursor. +*/ +static int icuOpen( + sqlite3_tokenizer *pTokenizer, /* The tokenizer */ + const char *zInput, /* Input string */ + int nInput, /* Length of zInput in bytes */ + sqlite3_tokenizer_cursor **ppCursor /* OUT: Tokenization cursor */ +){ + IcuTokenizer *p = (IcuTokenizer *)pTokenizer; + IcuCursor *pCsr; + + const int32_t opt = U_FOLD_CASE_DEFAULT; + UErrorCode status = U_ZERO_ERROR; + int nChar; + + UChar32 c; + int iInput = 0; + int iOut = 0; + + *ppCursor = 0; + + if( -1 == nInput ) nInput = strlen(nInput); + nChar = nInput+1; + pCsr = (IcuCursor *)sqlite3_malloc( + sizeof(IcuCursor) + /* IcuCursor */ + nChar * sizeof(UChar) + /* IcuCursor.aChar[] */ + (nChar+1) * sizeof(int) /* IcuCursor.aOffset[] */ + ); + if( !pCsr ){ + return SQLITE_NOMEM; + } + memset(pCsr, 0, sizeof(IcuCursor)); + pCsr->aChar = (UChar *)&pCsr[1]; + pCsr->aOffset = (int *)&pCsr->aChar[nChar]; + + pCsr->aOffset[iOut] = iInput; + U8_NEXT(zInput, iInput, nInput, c); + while( c>0 ){ + int isError = 0; + c = u_foldCase(c, opt); + U16_APPEND(pCsr->aChar, iOut, nChar, c, isError); + if( isError ){ + sqlite3_free(pCsr); + return SQLITE_ERROR; + } + pCsr->aOffset[iOut] = iInput; + + if( iInputpIter = ubrk_open(UBRK_WORD, p->zLocale, pCsr->aChar, iOut, &status); + if( !U_SUCCESS(status) ){ + sqlite3_free(pCsr); + return SQLITE_ERROR; + } + pCsr->nChar = iOut; + + ubrk_first(pCsr->pIter); + *ppCursor = (sqlite3_tokenizer_cursor *)pCsr; + return SQLITE_OK; +} + +/* +** Close a tokenization cursor previously opened by a call to icuOpen(). +*/ +static int icuClose(sqlite3_tokenizer_cursor *pCursor){ + IcuCursor *pCsr = (IcuCursor *)pCursor; + ubrk_close(pCsr->pIter); + sqlite3_free(pCsr->zBuffer); + sqlite3_free(pCsr); + return SQLITE_OK; +} + +/* +** Extract the next token from a tokenization cursor. 
+*/ +static int icuNext( + sqlite3_tokenizer_cursor *pCursor, /* Cursor returned by simpleOpen */ + const char **ppToken, /* OUT: *ppToken is the token text */ + int *pnBytes, /* OUT: Number of bytes in token */ + int *piStartOffset, /* OUT: Starting offset of token */ + int *piEndOffset, /* OUT: Ending offset of token */ + int *piPosition /* OUT: Position integer of token */ +){ + IcuCursor *pCsr = (IcuCursor *)pCursor; + + int iStart = 0; + int iEnd = 0; + int nByte = 0; + + while( iStart==iEnd ){ + UChar32 c; + + iStart = ubrk_current(pCsr->pIter); + iEnd = ubrk_next(pCsr->pIter); + if( iEnd==UBRK_DONE ){ + return SQLITE_DONE; + } + + while( iStartaChar, iWhite, pCsr->nChar, c); + if( u_isspace(c) ){ + iStart = iWhite; + }else{ + break; + } + } + assert(iStart<=iEnd); + } + + do { + UErrorCode status = U_ZERO_ERROR; + if( nByte ){ + char *zNew = sqlite3_realloc(pCsr->zBuffer, nByte); + if( !zNew ){ + return SQLITE_NOMEM; + } + pCsr->zBuffer = zNew; + pCsr->nBuffer = nByte; + } + + u_strToUTF8( + pCsr->zBuffer, pCsr->nBuffer, &nByte, /* Output vars */ + &pCsr->aChar[iStart], iEnd-iStart, /* Input vars */ + &status /* Output success/failure */ + ); + } while( nByte>pCsr->nBuffer ); + + *ppToken = pCsr->zBuffer; + *pnBytes = nByte; + *piStartOffset = pCsr->aOffset[iStart]; + *piEndOffset = pCsr->aOffset[iEnd]; + *piPosition = pCsr->iToken++; + + return SQLITE_OK; +} + +/* +** The set of routines that implement the simple tokenizer +*/ +static const sqlite3_tokenizer_module icuTokenizerModule = { + 0, /* iVersion */ + icuCreate, /* xCreate */ + icuDestroy, /* xCreate */ + icuOpen, /* xOpen */ + icuClose, /* xClose */ + icuNext, /* xNext */ +}; + +/* +** Set *ppModule to point at the implementation of the ICU tokenizer. +*/ +void sqlite3Fts3IcuTokenizerModule( + sqlite3_tokenizer_module const**ppModule +){ + *ppModule = &icuTokenizerModule; +} + +#endif /* defined(SQLITE_ENABLE_ICU) */ +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ Added: external/sqlite-source-3.5.7.x/fts3_porter.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_porter.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,642 @@ +/* +** 2006 September 30 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Implementation of the full-text-search tokenizer that implements +** a Porter stemmer. +*/ + +/* +** The code in this file is only compiled if: +** +** * The FTS3 module is being built as an extension +** (in which case SQLITE_CORE is not defined), or +** +** * The FTS3 module is being built into the core of +** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). 
+*/ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + + +#include +#include +#include +#include +#include + +#include "fts3_tokenizer.h" + +/* +** Class derived from sqlite3_tokenizer +*/ +typedef struct porter_tokenizer { + sqlite3_tokenizer base; /* Base class */ +} porter_tokenizer; + +/* +** Class derived from sqlit3_tokenizer_cursor +*/ +typedef struct porter_tokenizer_cursor { + sqlite3_tokenizer_cursor base; + const char *zInput; /* input we are tokenizing */ + int nInput; /* size of the input */ + int iOffset; /* current position in zInput */ + int iToken; /* index of next token to be returned */ + char *zToken; /* storage for current token */ + int nAllocated; /* space allocated to zToken buffer */ +} porter_tokenizer_cursor; + + +/* Forward declaration */ +static const sqlite3_tokenizer_module porterTokenizerModule; + + +/* +** Create a new tokenizer instance. +*/ +static int porterCreate( + int argc, const char * const *argv, + sqlite3_tokenizer **ppTokenizer +){ + porter_tokenizer *t; + t = (porter_tokenizer *) sqlite3_malloc(sizeof(*t)); + if( t==NULL ) return SQLITE_NOMEM; + memset(t, 0, sizeof(*t)); + *ppTokenizer = &t->base; + return SQLITE_OK; +} + +/* +** Destroy a tokenizer +*/ +static int porterDestroy(sqlite3_tokenizer *pTokenizer){ + sqlite3_free(pTokenizer); + return SQLITE_OK; +} + +/* +** Prepare to begin tokenizing a particular string. The input +** string to be tokenized is zInput[0..nInput-1]. A cursor +** used to incrementally tokenize this string is returned in +** *ppCursor. +*/ +static int porterOpen( + sqlite3_tokenizer *pTokenizer, /* The tokenizer */ + const char *zInput, int nInput, /* String to be tokenized */ + sqlite3_tokenizer_cursor **ppCursor /* OUT: Tokenization cursor */ +){ + porter_tokenizer_cursor *c; + + c = (porter_tokenizer_cursor *) sqlite3_malloc(sizeof(*c)); + if( c==NULL ) return SQLITE_NOMEM; + + c->zInput = zInput; + if( zInput==0 ){ + c->nInput = 0; + }else if( nInput<0 ){ + c->nInput = (int)strlen(zInput); + }else{ + c->nInput = nInput; + } + c->iOffset = 0; /* start tokenizing at the beginning */ + c->iToken = 0; + c->zToken = NULL; /* no space allocated, yet. */ + c->nAllocated = 0; + + *ppCursor = &c->base; + return SQLITE_OK; +} + +/* +** Close a tokenization cursor previously opened by a call to +** porterOpen() above. +*/ +static int porterClose(sqlite3_tokenizer_cursor *pCursor){ + porter_tokenizer_cursor *c = (porter_tokenizer_cursor *) pCursor; + sqlite3_free(c->zToken); + sqlite3_free(c); + return SQLITE_OK; +} +/* +** Vowel or consonant +*/ +static const char cType[] = { + 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, + 1, 1, 1, 2, 1 +}; + +/* +** isConsonant() and isVowel() determine if their first character in +** the string they point to is a consonant or a vowel, according +** to Porter ruls. +** +** A consonate is any letter other than 'a', 'e', 'i', 'o', or 'u'. +** 'Y' is a consonant unless it follows another consonant, +** in which case it is a vowel. +** +** In these routine, the letters are in reverse order. So the 'y' rule +** is that 'y' is a consonant unless it is followed by another +** consonent. 
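**
** [Illustrative sketch, not from the fts3 sources] The measure m used by
** m_gt_0(), m_eq_1() and m_gt_1() below counts vowel-consonant pairs: every
** word has the shape [C](VC){m}[V].  Computed on a word in normal (forward)
** order -- the real code works on a reversed copy -- it looks like this,
** with isVowelAt() standing in for the vowel test described above:
**
**   static int porterM(const char *z){
**     int i = 0, m = 0;
**     while( z[i] && !isVowelAt(z, i) ) i++;      // optional leading [C]
**     while( z[i] ){
**       while( z[i] && isVowelAt(z, i) ) i++;     // a run of vowels
**       if( !z[i] ) break;                        // trailing [V], no pair
**       while( z[i] && !isVowelAt(z, i) ) i++;    // consonant run ends a VC pair
**       m++;
**     }
**     return m;
**   }
**
** For example m("tree")=0, m("oats")=1, m("trouble")=1, m("troubles")=2.
** This is why step 1b turns "agreed" into "agree" (the stem "agr" has m>0)
** but leaves "feed" alone (the stem "f" has m=0).
**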
+*/ +static int isVowel(const char*); +static int isConsonant(const char *z){ + int j; + char x = *z; + if( x==0 ) return 0; + assert( x>='a' && x<='z' ); + j = cType[x-'a']; + if( j<2 ) return j; + return z[1]==0 || isVowel(z + 1); +} +static int isVowel(const char *z){ + int j; + char x = *z; + if( x==0 ) return 0; + assert( x>='a' && x<='z' ); + j = cType[x-'a']; + if( j<2 ) return 1-j; + return isConsonant(z + 1); +} + +/* +** Let any sequence of one or more vowels be represented by V and let +** C be sequence of one or more consonants. Then every word can be +** represented as: +** +** [C] (VC){m} [V] +** +** In prose: A word is an optional consonant followed by zero or +** vowel-consonant pairs followed by an optional vowel. "m" is the +** number of vowel consonant pairs. This routine computes the value +** of m for the first i bytes of a word. +** +** Return true if the m-value for z is 1 or more. In other words, +** return true if z contains at least one vowel that is followed +** by a consonant. +** +** In this routine z[] is in reverse order. So we are really looking +** for an instance of of a consonant followed by a vowel. +*/ +static int m_gt_0(const char *z){ + while( isVowel(z) ){ z++; } + if( *z==0 ) return 0; + while( isConsonant(z) ){ z++; } + return *z!=0; +} + +/* Like mgt0 above except we are looking for a value of m which is +** exactly 1 +*/ +static int m_eq_1(const char *z){ + while( isVowel(z) ){ z++; } + if( *z==0 ) return 0; + while( isConsonant(z) ){ z++; } + if( *z==0 ) return 0; + while( isVowel(z) ){ z++; } + if( *z==0 ) return 1; + while( isConsonant(z) ){ z++; } + return *z==0; +} + +/* Like mgt0 above except we are looking for a value of m>1 instead +** or m>0 +*/ +static int m_gt_1(const char *z){ + while( isVowel(z) ){ z++; } + if( *z==0 ) return 0; + while( isConsonant(z) ){ z++; } + if( *z==0 ) return 0; + while( isVowel(z) ){ z++; } + if( *z==0 ) return 0; + while( isConsonant(z) ){ z++; } + return *z!=0; +} + +/* +** Return TRUE if there is a vowel anywhere within z[0..n-1] +*/ +static int hasVowel(const char *z){ + while( isConsonant(z) ){ z++; } + return *z!=0; +} + +/* +** Return TRUE if the word ends in a double consonant. +** +** The text is reversed here. So we are really looking at +** the first two characters of z[]. +*/ +static int doubleConsonant(const char *z){ + return isConsonant(z) && z[0]==z[1] && isConsonant(z+1); +} + +/* +** Return TRUE if the word ends with three letters which +** are consonant-vowel-consonent and where the final consonant +** is not 'w', 'x', or 'y'. +** +** The word is reversed here. So we are really checking the +** first three letters and the first one cannot be in [wxy]. +*/ +static int star_oh(const char *z){ + return + z[0]!=0 && isConsonant(z) && + z[0]!='w' && z[0]!='x' && z[0]!='y' && + z[1]!=0 && isVowel(z+1) && + z[2]!=0 && isConsonant(z+2); +} + +/* +** If the word ends with zFrom and xCond() is true for the stem +** of the word that preceeds the zFrom ending, then change the +** ending to zTo. +** +** The input word *pz and zFrom are both in reverse order. zTo +** is in normal order. +** +** Return TRUE if zFrom matches. Return FALSE if zFrom does not +** match. Not that TRUE is returned even if xCond() fails and +** no substitution occurs. +*/ +static int stem( + char **pz, /* The word being stemmed (Reversed) */ + const char *zFrom, /* If the ending matches this... (Reversed) */ + const char *zTo, /* ... 
change the ending to this (not reversed) */ + int (*xCond)(const char*) /* Condition that must be true */ +){ + char *z = *pz; + while( *zFrom && *zFrom==*z ){ z++; zFrom++; } + if( *zFrom!=0 ) return 0; + if( xCond && !xCond(z) ) return 1; + while( *zTo ){ + *(--z) = *(zTo++); + } + *pz = z; + return 1; +} + +/* +** This is the fallback stemmer used when the porter stemmer is +** inappropriate. The input word is copied into the output with +** US-ASCII case folding. If the input word is too long (more +** than 20 bytes if it contains no digits or more than 6 bytes if +** it contains digits) then word is truncated to 20 or 6 bytes +** by taking 10 or 3 bytes from the beginning and end. +*/ +static void copy_stemmer(const char *zIn, int nIn, char *zOut, int *pnOut){ + int i, mx, j; + int hasDigit = 0; + for(i=0; i='A' && c<='Z' ){ + zOut[i] = c - 'A' + 'a'; + }else{ + if( c>='0' && c<='9' ) hasDigit = 1; + zOut[i] = c; + } + } + mx = hasDigit ? 3 : 10; + if( nIn>mx*2 ){ + for(j=mx, i=nIn-mx; i=sizeof(zReverse)-7 ){ + /* The word is too big or too small for the porter stemmer. + ** Fallback to the copy stemmer */ + copy_stemmer(zIn, nIn, zOut, pnOut); + return; + } + for(i=0, j=sizeof(zReverse)-6; i='A' && c<='Z' ){ + zReverse[j] = c + 'a' - 'A'; + }else if( c>='a' && c<='z' ){ + zReverse[j] = c; + }else{ + /* The use of a character not in [a-zA-Z] means that we fallback + ** to the copy stemmer */ + copy_stemmer(zIn, nIn, zOut, pnOut); + return; + } + } + memset(&zReverse[sizeof(zReverse)-5], 0, 5); + z = &zReverse[j+1]; + + + /* Step 1a */ + if( z[0]=='s' ){ + if( + !stem(&z, "sess", "ss", 0) && + !stem(&z, "sei", "i", 0) && + !stem(&z, "ss", "ss", 0) + ){ + z++; + } + } + + /* Step 1b */ + z2 = z; + if( stem(&z, "dee", "ee", m_gt_0) ){ + /* Do nothing. The work was all in the test */ + }else if( + (stem(&z, "gni", "", hasVowel) || stem(&z, "de", "", hasVowel)) + && z!=z2 + ){ + if( stem(&z, "ta", "ate", 0) || + stem(&z, "lb", "ble", 0) || + stem(&z, "zi", "ize", 0) ){ + /* Do nothing. 
The work was all in the test */ + }else if( doubleConsonant(z) && (*z!='l' && *z!='s' && *z!='z') ){ + z++; + }else if( m_eq_1(z) && star_oh(z) ){ + *(--z) = 'e'; + } + } + + /* Step 1c */ + if( z[0]=='y' && hasVowel(z+1) ){ + z[0] = 'i'; + } + + /* Step 2 */ + switch( z[1] ){ + case 'a': + stem(&z, "lanoita", "ate", m_gt_0) || + stem(&z, "lanoit", "tion", m_gt_0); + break; + case 'c': + stem(&z, "icne", "ence", m_gt_0) || + stem(&z, "icna", "ance", m_gt_0); + break; + case 'e': + stem(&z, "rezi", "ize", m_gt_0); + break; + case 'g': + stem(&z, "igol", "log", m_gt_0); + break; + case 'l': + stem(&z, "ilb", "ble", m_gt_0) || + stem(&z, "illa", "al", m_gt_0) || + stem(&z, "iltne", "ent", m_gt_0) || + stem(&z, "ile", "e", m_gt_0) || + stem(&z, "ilsuo", "ous", m_gt_0); + break; + case 'o': + stem(&z, "noitazi", "ize", m_gt_0) || + stem(&z, "noita", "ate", m_gt_0) || + stem(&z, "rota", "ate", m_gt_0); + break; + case 's': + stem(&z, "msila", "al", m_gt_0) || + stem(&z, "ssenevi", "ive", m_gt_0) || + stem(&z, "ssenluf", "ful", m_gt_0) || + stem(&z, "ssensuo", "ous", m_gt_0); + break; + case 't': + stem(&z, "itila", "al", m_gt_0) || + stem(&z, "itivi", "ive", m_gt_0) || + stem(&z, "itilib", "ble", m_gt_0); + break; + } + + /* Step 3 */ + switch( z[0] ){ + case 'e': + stem(&z, "etaci", "ic", m_gt_0) || + stem(&z, "evita", "", m_gt_0) || + stem(&z, "ezila", "al", m_gt_0); + break; + case 'i': + stem(&z, "itici", "ic", m_gt_0); + break; + case 'l': + stem(&z, "laci", "ic", m_gt_0) || + stem(&z, "luf", "", m_gt_0); + break; + case 's': + stem(&z, "ssen", "", m_gt_0); + break; + } + + /* Step 4 */ + switch( z[1] ){ + case 'a': + if( z[0]=='l' && m_gt_1(z+2) ){ + z += 2; + } + break; + case 'c': + if( z[0]=='e' && z[2]=='n' && (z[3]=='a' || z[3]=='e') && m_gt_1(z+4) ){ + z += 4; + } + break; + case 'e': + if( z[0]=='r' && m_gt_1(z+2) ){ + z += 2; + } + break; + case 'i': + if( z[0]=='c' && m_gt_1(z+2) ){ + z += 2; + } + break; + case 'l': + if( z[0]=='e' && z[2]=='b' && (z[3]=='a' || z[3]=='i') && m_gt_1(z+4) ){ + z += 4; + } + break; + case 'n': + if( z[0]=='t' ){ + if( z[2]=='a' ){ + if( m_gt_1(z+3) ){ + z += 3; + } + }else if( z[2]=='e' ){ + stem(&z, "tneme", "", m_gt_1) || + stem(&z, "tnem", "", m_gt_1) || + stem(&z, "tne", "", m_gt_1); + } + } + break; + case 'o': + if( z[0]=='u' ){ + if( m_gt_1(z+2) ){ + z += 2; + } + }else if( z[3]=='s' || z[3]=='t' ){ + stem(&z, "noi", "", m_gt_1); + } + break; + case 's': + if( z[0]=='m' && z[2]=='i' && m_gt_1(z+3) ){ + z += 3; + } + break; + case 't': + stem(&z, "eta", "", m_gt_1) || + stem(&z, "iti", "", m_gt_1); + break; + case 'u': + if( z[0]=='s' && z[2]=='o' && m_gt_1(z+3) ){ + z += 3; + } + break; + case 'v': + case 'z': + if( z[0]=='e' && z[2]=='i' && m_gt_1(z+3) ){ + z += 3; + } + break; + } + + /* Step 5a */ + if( z[0]=='e' ){ + if( m_gt_1(z+1) ){ + z++; + }else if( m_eq_1(z+1) && !star_oh(z+1) ){ + z++; + } + } + + /* Step 5b */ + if( m_gt_1(z) && z[0]=='l' && z[1]=='l' ){ + z++; + } + + /* z[] is now the stemmed word in reverse order. Flip it back + ** around into forward order and return. + */ + *pnOut = i = strlen(z); + zOut[i] = 0; + while( *z ){ + zOut[--i] = *(z++); + } +} + +/* +** Characters that can be part of a token. We assume any character +** whose value is greater than 0x80 (any UTF character) can be +** part of a token. In other words, delimiters all must have +** values of 0x7f or lower. 
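**
** [Illustrative sketch, not from the fts3 sources] The porterIdChar[] table
** below marks the ASCII letters, digits and '_' as token characters, so the
** isDelim() macro is equivalent to this predicate:
**
**   static int isTokenChar(unsigned char c){
**     return (c & 0x80)!=0          // any non-ASCII byte continues a token
**         || isalnum(c)             // ASCII letters and digits (ctype.h)
**         || c=='_';
**   }
**
** i.e. a delimiter is any ASCII byte that is neither alphanumeric nor an
** underscore.
**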
+*/ +static const char porterIdChar[] = { +/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 4x */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 5x */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 6x */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */ +}; +#define isDelim(C) (((ch=C)&0x80)==0 && (ch<0x30 || !porterIdChar[ch-0x30])) + +/* +** Extract the next token from a tokenization cursor. The cursor must +** have been opened by a prior call to porterOpen(). +*/ +static int porterNext( + sqlite3_tokenizer_cursor *pCursor, /* Cursor returned by porterOpen */ + const char **pzToken, /* OUT: *pzToken is the token text */ + int *pnBytes, /* OUT: Number of bytes in token */ + int *piStartOffset, /* OUT: Starting offset of token */ + int *piEndOffset, /* OUT: Ending offset of token */ + int *piPosition /* OUT: Position integer of token */ +){ + porter_tokenizer_cursor *c = (porter_tokenizer_cursor *) pCursor; + const char *z = c->zInput; + + while( c->iOffsetnInput ){ + int iStartOffset, ch; + + /* Scan past delimiter characters */ + while( c->iOffsetnInput && isDelim(z[c->iOffset]) ){ + c->iOffset++; + } + + /* Count non-delimiter characters. */ + iStartOffset = c->iOffset; + while( c->iOffsetnInput && !isDelim(z[c->iOffset]) ){ + c->iOffset++; + } + + if( c->iOffset>iStartOffset ){ + int n = c->iOffset-iStartOffset; + if( n>c->nAllocated ){ + c->nAllocated = n+20; + c->zToken = sqlite3_realloc(c->zToken, c->nAllocated); + if( c->zToken==NULL ) return SQLITE_NOMEM; + } + porter_stemmer(&z[iStartOffset], n, c->zToken, pnBytes); + *pzToken = c->zToken; + *piStartOffset = iStartOffset; + *piEndOffset = c->iOffset; + *piPosition = c->iToken++; + return SQLITE_OK; + } + } + return SQLITE_DONE; +} + +/* +** The set of routines that implement the porter-stemmer tokenizer +*/ +static const sqlite3_tokenizer_module porterTokenizerModule = { + 0, + porterCreate, + porterDestroy, + porterOpen, + porterClose, + porterNext, +}; + +/* +** Allocate a new porter tokenizer. Return a pointer to the new +** tokenizer in *ppModule +*/ +void sqlite3Fts3PorterTokenizerModule( + sqlite3_tokenizer_module const**ppModule +){ + *ppModule = &porterTokenizerModule; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ Added: external/sqlite-source-3.5.7.x/fts3_tokenizer.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_tokenizer.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,371 @@ +/* +** 2007 June 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This is part of an SQLite module implementing full-text search. +** This particular file implements the generic tokenizer interface. +*/ + +/* +** The code in this file is only compiled if: +** +** * The FTS3 module is being built as an extension +** (in which case SQLITE_CORE is not defined), or +** +** * The FTS3 module is being built into the core of +** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). 
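**
** [Illustrative sketch, not from the fts3 sources] A tokenizer module such
** as porterTokenizerModule above is driven through the same five calls the
** test function later in this file uses: create, open, a loop over xNext()
** until it stops returning SQLITE_OK, then close and destroy.  Error
** handling is omitted; "p" is the module pointer and zText the
** NUL-terminated input:
**
**   sqlite3_tokenizer *pTok;
**   sqlite3_tokenizer_cursor *pCsr;
**   const char *zToken; int nToken, iStart, iEnd, iPos;
**
**   p->xCreate(0, 0, &pTok);
**   pTok->pModule = p;
**   p->xOpen(pTok, zText, -1, &pCsr);
**   pCsr->pTokenizer = pTok;
**   while( p->xNext(pCsr, &zToken, &nToken, &iStart, &iEnd, &iPos)==SQLITE_OK ){
**     // zText[iStart..iEnd-1] produced the nToken-byte token zToken at position iPos
**   }
**   p->xClose(pCsr);
**   p->xDestroy(pTok);
**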
+*/ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +#include "sqlite3ext.h" +#ifndef SQLITE_CORE + SQLITE_EXTENSION_INIT1 +#endif + +#include "fts3_hash.h" +#include "fts3_tokenizer.h" +#include + +/* +** Implementation of the SQL scalar function for accessing the underlying +** hash table. This function may be called as follows: +** +** SELECT (); +** SELECT (, ); +** +** where is the name passed as the second argument +** to the sqlite3Fts3InitHashTable() function (e.g. 'fts3_tokenizer'). +** +** If the argument is specified, it must be a blob value +** containing a pointer to be stored as the hash data corresponding +** to the string . If is not specified, then +** the string must already exist in the has table. Otherwise, +** an error is returned. +** +** Whether or not the argument is specified, the value returned +** is a blob containing the pointer stored as the hash data corresponding +** to string (after the hash-table is updated, if applicable). +*/ +static void scalarFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + fts3Hash *pHash; + void *pPtr = 0; + const unsigned char *zName; + int nName; + + assert( argc==1 || argc==2 ); + + pHash = (fts3Hash *)sqlite3_user_data(context); + + zName = sqlite3_value_text(argv[0]); + nName = sqlite3_value_bytes(argv[0])+1; + + if( argc==2 ){ + void *pOld; + int n = sqlite3_value_bytes(argv[1]); + if( n!=sizeof(pPtr) ){ + sqlite3_result_error(context, "argument type mismatch", -1); + return; + } + pPtr = *(void **)sqlite3_value_blob(argv[1]); + pOld = sqlite3Fts3HashInsert(pHash, (void *)zName, nName, pPtr); + if( pOld==pPtr ){ + sqlite3_result_error(context, "out of memory", -1); + return; + } + }else{ + pPtr = sqlite3Fts3HashFind(pHash, zName, nName); + if( !pPtr ){ + char *zErr = sqlite3_mprintf("unknown tokenizer: %s", zName); + sqlite3_result_error(context, zErr, -1); + sqlite3_free(zErr); + return; + } + } + + sqlite3_result_blob(context, (void *)&pPtr, sizeof(pPtr), SQLITE_TRANSIENT); +} + +#ifdef SQLITE_TEST + +#include +#include + +/* +** Implementation of a special SQL scalar function for testing tokenizers +** designed to be used in concert with the Tcl testing framework. This +** function must be called with two arguments: +** +** SELECT (, ); +** SELECT (, ); +** +** where is the name passed as the second argument +** to the sqlite3Fts3InitHashTable() function (e.g. 'fts3_tokenizer') +** concatenated with the string '_test' (e.g. 'fts3_tokenizer_test'). +** +** The return value is a string that may be interpreted as a Tcl +** list. For each token in the , three elements are +** added to the returned list. The first is the token position, the +** second is the token text (folded, stemmed, etc.) and the third is the +** substring of associated with the token. 
For example, +** using the built-in "simple" tokenizer: +** +** SELECT fts_tokenizer_test('simple', 'I don't see how'); +** +** will return the string: +** +** "{0 i I 1 dont don't 2 see see 3 how how}" +** +*/ +static void testFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + fts3Hash *pHash; + sqlite3_tokenizer_module *p; + sqlite3_tokenizer *pTokenizer = 0; + sqlite3_tokenizer_cursor *pCsr = 0; + + const char *zErr = 0; + + const char *zName; + int nName; + const char *zInput; + int nInput; + + const char *zArg = 0; + + const char *zToken; + int nToken; + int iStart; + int iEnd; + int iPos; + + Tcl_Obj *pRet; + + assert( argc==2 || argc==3 ); + + nName = sqlite3_value_bytes(argv[0]); + zName = (const char *)sqlite3_value_text(argv[0]); + nInput = sqlite3_value_bytes(argv[argc-1]); + zInput = (const char *)sqlite3_value_text(argv[argc-1]); + + if( argc==3 ){ + zArg = (const char *)sqlite3_value_text(argv[1]); + } + + pHash = (fts3Hash *)sqlite3_user_data(context); + p = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash, zName, nName+1); + + if( !p ){ + char *zErr = sqlite3_mprintf("unknown tokenizer: %s", zName); + sqlite3_result_error(context, zErr, -1); + sqlite3_free(zErr); + return; + } + + pRet = Tcl_NewObj(); + Tcl_IncrRefCount(pRet); + + if( SQLITE_OK!=p->xCreate(zArg ? 1 : 0, &zArg, &pTokenizer) ){ + zErr = "error in xCreate()"; + goto finish; + } + pTokenizer->pModule = p; + if( SQLITE_OK!=p->xOpen(pTokenizer, zInput, nInput, &pCsr) ){ + zErr = "error in xOpen()"; + goto finish; + } + pCsr->pTokenizer = pTokenizer; + + while( SQLITE_OK==p->xNext(pCsr, &zToken, &nToken, &iStart, &iEnd, &iPos) ){ + Tcl_ListObjAppendElement(0, pRet, Tcl_NewIntObj(iPos)); + Tcl_ListObjAppendElement(0, pRet, Tcl_NewStringObj(zToken, nToken)); + zToken = &zInput[iStart]; + nToken = iEnd-iStart; + Tcl_ListObjAppendElement(0, pRet, Tcl_NewStringObj(zToken, nToken)); + } + + if( SQLITE_OK!=p->xClose(pCsr) ){ + zErr = "error in xClose()"; + goto finish; + } + if( SQLITE_OK!=p->xDestroy(pTokenizer) ){ + zErr = "error in xDestroy()"; + goto finish; + } + +finish: + if( zErr ){ + sqlite3_result_error(context, zErr, -1); + }else{ + sqlite3_result_text(context, Tcl_GetString(pRet), -1, SQLITE_TRANSIENT); + } + Tcl_DecrRefCount(pRet); +} + +static +int registerTokenizer( + sqlite3 *db, + char *zName, + const sqlite3_tokenizer_module *p +){ + int rc; + sqlite3_stmt *pStmt; + const char zSql[] = "SELECT fts3_tokenizer(?, ?)"; + + rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + + sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_STATIC); + sqlite3_bind_blob(pStmt, 2, &p, sizeof(p), SQLITE_STATIC); + sqlite3_step(pStmt); + + return sqlite3_finalize(pStmt); +} + +static +int queryTokenizer( + sqlite3 *db, + char *zName, + const sqlite3_tokenizer_module **pp +){ + int rc; + sqlite3_stmt *pStmt; + const char zSql[] = "SELECT fts3_tokenizer(?)"; + + *pp = 0; + rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + + sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_STATIC); + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + if( sqlite3_column_type(pStmt, 0)==SQLITE_BLOB ){ + memcpy(pp, sqlite3_column_blob(pStmt, 0), sizeof(*pp)); + } + } + + return sqlite3_finalize(pStmt); +} + +void sqlite3Fts3SimpleTokenizerModule(sqlite3_tokenizer_module const**ppModule); + +/* +** Implementation of the scalar function fts3_tokenizer_internal_test(). 
+** This function is used for testing only, it is not included in the +** build unless SQLITE_TEST is defined. +** +** The purpose of this is to test that the fts3_tokenizer() function +** can be used as designed by the C-code in the queryTokenizer and +** registerTokenizer() functions above. These two functions are repeated +** in the README.tokenizer file as an example, so it is important to +** test them. +** +** To run the tests, evaluate the fts3_tokenizer_internal_test() scalar +** function with no arguments. An assert() will fail if a problem is +** detected. i.e.: +** +** SELECT fts3_tokenizer_internal_test(); +** +*/ +static void intTestFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int rc; + const sqlite3_tokenizer_module *p1; + const sqlite3_tokenizer_module *p2; + sqlite3 *db = (sqlite3 *)sqlite3_user_data(context); + + /* Test the query function */ + sqlite3Fts3SimpleTokenizerModule(&p1); + rc = queryTokenizer(db, "simple", &p2); + assert( rc==SQLITE_OK ); + assert( p1==p2 ); + rc = queryTokenizer(db, "nosuchtokenizer", &p2); + assert( rc==SQLITE_ERROR ); + assert( p2==0 ); + assert( 0==strcmp(sqlite3_errmsg(db), "unknown tokenizer: nosuchtokenizer") ); + + /* Test the storage function */ + rc = registerTokenizer(db, "nosuchtokenizer", p1); + assert( rc==SQLITE_OK ); + rc = queryTokenizer(db, "nosuchtokenizer", &p2); + assert( rc==SQLITE_OK ); + assert( p2==p1 ); + + sqlite3_result_text(context, "ok", -1, SQLITE_STATIC); +} + +#endif + +/* +** Set up SQL objects in database db used to access the contents of +** the hash table pointed to by argument pHash. The hash table must +** been initialised to use string keys, and to take a private copy +** of the key when a value is inserted. i.e. by a call similar to: +** +** sqlite3Fts3HashInit(pHash, FTS3_HASH_STRING, 1); +** +** This function adds a scalar function (see header comment above +** scalarFunc() in this file for details) and, if ENABLE_TABLE is +** defined at compilation time, a temporary virtual table (see header +** comment above struct HashTableVtab) to the database schema. Both +** provide read/write access to the contents of *pHash. +** +** The third argument to this function, zName, is used as the name +** of both the scalar and, if created, the virtual table. 
+*/ +int sqlite3Fts3InitHashTable( + sqlite3 *db, + fts3Hash *pHash, + const char *zName +){ + int rc = SQLITE_OK; + void *p = (void *)pHash; + const int any = SQLITE_ANY; + char *zTest = 0; + char *zTest2 = 0; + +#ifdef SQLITE_TEST + void *pdb = (void *)db; + zTest = sqlite3_mprintf("%s_test", zName); + zTest2 = sqlite3_mprintf("%s_internal_test", zName); + if( !zTest || !zTest2 ){ + rc = SQLITE_NOMEM; + } +#endif + + if( rc!=SQLITE_OK + || (rc = sqlite3_create_function(db, zName, 1, any, p, scalarFunc, 0, 0)) + || (rc = sqlite3_create_function(db, zName, 2, any, p, scalarFunc, 0, 0)) +#ifdef SQLITE_TEST + || (rc = sqlite3_create_function(db, zTest, 2, any, p, testFunc, 0, 0)) + || (rc = sqlite3_create_function(db, zTest, 3, any, p, testFunc, 0, 0)) + || (rc = sqlite3_create_function(db, zTest2, 0, any, pdb, intTestFunc, 0, 0)) +#endif + ); + + sqlite3_free(zTest); + sqlite3_free(zTest2); + return rc; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ Added: external/sqlite-source-3.5.7.x/fts3_tokenizer.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_tokenizer.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,145 @@ +/* +** 2006 July 10 +** +** The author disclaims copyright to this source code. +** +************************************************************************* +** Defines the interface to tokenizers used by fulltext-search. There +** are three basic components: +** +** sqlite3_tokenizer_module is a singleton defining the tokenizer +** interface functions. This is essentially the class structure for +** tokenizers. +** +** sqlite3_tokenizer is used to define a particular tokenizer, perhaps +** including customization information defined at creation time. +** +** sqlite3_tokenizer_cursor is generated by a tokenizer to generate +** tokens from a particular input. +*/ +#ifndef _FTS3_TOKENIZER_H_ +#define _FTS3_TOKENIZER_H_ + +/* TODO(shess) Only used for SQLITE_OK and SQLITE_DONE at this time. +** If tokenizers are to be allowed to call sqlite3_*() functions, then +** we will need a way to register the API consistently. +*/ +#include "sqlite3.h" + +/* +** Structures used by the tokenizer interface. When a new tokenizer +** implementation is registered, the caller provides a pointer to +** an sqlite3_tokenizer_module containing pointers to the callback +** functions that make up an implementation. +** +** When an fts3 table is created, it passes any arguments passed to +** the tokenizer clause of the CREATE VIRTUAL TABLE statement to the +** sqlite3_tokenizer_module.xCreate() function of the requested tokenizer +** implementation. The xCreate() function in turn returns an +** sqlite3_tokenizer structure representing the specific tokenizer to +** be used for the fts3 table (customized by the tokenizer clause arguments). +** +** To tokenize an input buffer, the sqlite3_tokenizer_module.xOpen() +** method is called. It returns an sqlite3_tokenizer_cursor object +** that may be used to tokenize a specific input buffer based on +** the tokenization rules supplied by a specific sqlite3_tokenizer +** object. +*/ +typedef struct sqlite3_tokenizer_module sqlite3_tokenizer_module; +typedef struct sqlite3_tokenizer sqlite3_tokenizer; +typedef struct sqlite3_tokenizer_cursor sqlite3_tokenizer_cursor; + +struct sqlite3_tokenizer_module { + + /* + ** Structure version. Should always be set to 0. + */ + int iVersion; + + /* + ** Create a new tokenizer. 
The values in the argv[] array are the + ** arguments passed to the "tokenizer" clause of the CREATE VIRTUAL + ** TABLE statement that created the fts3 table. For example, if + ** the following SQL is executed: + ** + ** CREATE .. USING fts3( ... , tokenizer arg1 arg2) + ** + ** then argc is set to 2, and the argv[] array contains pointers + ** to the strings "arg1" and "arg2". + ** + ** This method should return either SQLITE_OK (0), or an SQLite error + ** code. If SQLITE_OK is returned, then *ppTokenizer should be set + ** to point at the newly created tokenizer structure. The generic + ** sqlite3_tokenizer.pModule variable should not be initialised by + ** this callback. The caller will do so. + */ + int (*xCreate)( + int argc, /* Size of argv array */ + const char *const*argv, /* Tokenizer argument strings */ + sqlite3_tokenizer **ppTokenizer /* OUT: Created tokenizer */ + ); + + /* + ** Destroy an existing tokenizer. The fts3 module calls this method + ** exactly once for each successful call to xCreate(). + */ + int (*xDestroy)(sqlite3_tokenizer *pTokenizer); + + /* + ** Create a tokenizer cursor to tokenize an input buffer. The caller + ** is responsible for ensuring that the input buffer remains valid + ** until the cursor is closed (using the xClose() method). + */ + int (*xOpen)( + sqlite3_tokenizer *pTokenizer, /* Tokenizer object */ + const char *pInput, int nBytes, /* Input buffer */ + sqlite3_tokenizer_cursor **ppCursor /* OUT: Created tokenizer cursor */ + ); + + /* + ** Destroy an existing tokenizer cursor. The fts3 module calls this + ** method exactly once for each successful call to xOpen(). + */ + int (*xClose)(sqlite3_tokenizer_cursor *pCursor); + + /* + ** Retrieve the next token from the tokenizer cursor pCursor. This + ** method should either return SQLITE_OK and set the values of the + ** "OUT" variables identified below, or SQLITE_DONE to indicate that + ** the end of the buffer has been reached, or an SQLite error code. + ** + ** *ppToken should be set to point at a buffer containing the + ** normalized version of the token (i.e. after any case-folding and/or + ** stemming has been performed). *pnBytes should be set to the length + ** of this buffer in bytes. The input text that generated the token is + ** identified by the byte offsets returned in *piStartOffset and + ** *piEndOffset. + ** + ** The buffer *ppToken is set to point at is managed by the tokenizer + ** implementation. It is only required to be valid until the next call + ** to xNext() or xClose(). + */ + /* TODO(shess) current implementation requires pInput to be + ** nul-terminated. This should either be fixed, or pInput/nBytes + ** should be converted to zInput. + */ + int (*xNext)( + sqlite3_tokenizer_cursor *pCursor, /* Tokenizer cursor */ + const char **ppToken, int *pnBytes, /* OUT: Normalized text for token */ + int *piStartOffset, /* OUT: Byte offset of token in input buffer */ + int *piEndOffset, /* OUT: Byte offset of end of token in input buffer */ + int *piPosition /* OUT: Number of tokens returned before this one */ + ); +}; + +struct sqlite3_tokenizer { + const sqlite3_tokenizer_module *pModule; /* The module for this tokenizer */ + /* Tokenizer implementations will typically add additional fields */ +}; + +struct sqlite3_tokenizer_cursor { + sqlite3_tokenizer *pTokenizer; /* Tokenizer for this cursor. 
*/ + /* Tokenizer implementations will typically add additional fields */ +}; + +#endif /* _FTS3_TOKENIZER_H_ */ Added: external/sqlite-source-3.5.7.x/fts3_tokenizer1.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/fts3_tokenizer1.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,230 @@ +/* +** 2006 Oct 10 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Implementation of the "simple" full-text-search tokenizer. +*/ + +/* +** The code in this file is only compiled if: +** +** * The FTS3 module is being built as an extension +** (in which case SQLITE_CORE is not defined), or +** +** * The FTS3 module is being built into the core of +** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). +*/ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + + +#include +#include +#include +#include +#include + +#include "fts3_tokenizer.h" + +typedef struct simple_tokenizer { + sqlite3_tokenizer base; + char delim[128]; /* flag ASCII delimiters */ +} simple_tokenizer; + +typedef struct simple_tokenizer_cursor { + sqlite3_tokenizer_cursor base; + const char *pInput; /* input we are tokenizing */ + int nBytes; /* size of the input */ + int iOffset; /* current position in pInput */ + int iToken; /* index of next token to be returned */ + char *pToken; /* storage for current token */ + int nTokenAllocated; /* space allocated to zToken buffer */ +} simple_tokenizer_cursor; + + +/* Forward declaration */ +static const sqlite3_tokenizer_module simpleTokenizerModule; + +static int simpleDelim(simple_tokenizer *t, unsigned char c){ + return c<0x80 && t->delim[c]; +} + +/* +** Create a new tokenizer instance. +*/ +static int simpleCreate( + int argc, const char * const *argv, + sqlite3_tokenizer **ppTokenizer +){ + simple_tokenizer *t; + + t = (simple_tokenizer *) sqlite3_malloc(sizeof(*t)); + if( t==NULL ) return SQLITE_NOMEM; + memset(t, 0, sizeof(*t)); + + /* TODO(shess) Delimiters need to remain the same from run to run, + ** else we need to reindex. One solution would be a meta-table to + ** track such information in the database, then we'd only want this + ** information on the initial create. + */ + if( argc>1 ){ + int i, n = strlen(argv[1]); + for(i=0; i=0x80 ){ + sqlite3_free(t); + return SQLITE_ERROR; + } + t->delim[ch] = 1; + } + } else { + /* Mark non-alphanumeric ASCII characters as delimiters */ + int i; + for(i=1; i<0x80; i++){ + t->delim[i] = !isalnum(i); + } + } + + *ppTokenizer = &t->base; + return SQLITE_OK; +} + +/* +** Destroy a tokenizer +*/ +static int simpleDestroy(sqlite3_tokenizer *pTokenizer){ + sqlite3_free(pTokenizer); + return SQLITE_OK; +} + +/* +** Prepare to begin tokenizing a particular string. The input +** string to be tokenized is pInput[0..nBytes-1]. A cursor +** used to incrementally tokenize this string is returned in +** *ppCursor. 
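The xCreate/xOpen/xNext/xClose/xDestroy protocol declared in fts3_tokenizer.h above can be driven directly from C. The sketch below is an illustration only (dumpTokens() is a hypothetical helper, and the module pointer p could come from sqlite3Fts3SimpleTokenizerModule()); as in testFunc() earlier, the caller fills in the pModule and pTokenizer back-pointers, and a negative nBytes is passed for nul-terminated input, which the "simple" tokenizer accepts. Called as dumpTokens(pSimple, "Full-text search!"), it would print the lower-cased tokens full, text and search with their byte offsets.

    #include <stdio.h>
    #include "fts3_tokenizer.h"

    /* Print every token that module p extracts from the nul-terminated zText. */
    static int dumpTokens(const sqlite3_tokenizer_module *p, const char *zText){
      sqlite3_tokenizer *pTok = 0;
      sqlite3_tokenizer_cursor *pCur = 0;
      const char *zToken;
      int nToken, iStart, iEnd, iPos, rc;

      rc = p->xCreate(0, 0, &pTok);            /* no tokenizer arguments */
      if( rc!=SQLITE_OK ) return rc;
      pTok->pModule = p;                       /* caller sets the back-pointer */
      rc = p->xOpen(pTok, zText, -1, &pCur);   /* -1: input is nul-terminated */
      if( rc!=SQLITE_OK ){ p->xDestroy(pTok); return rc; }
      pCur->pTokenizer = pTok;                 /* caller sets the back-pointer */

      while( (rc = p->xNext(pCur, &zToken, &nToken, &iStart, &iEnd, &iPos))==SQLITE_OK ){
        printf("%d: %.*s (bytes %d..%d)\n", iPos, nToken, zToken, iStart, iEnd);
      }
      p->xClose(pCur);
      p->xDestroy(pTok);
      return rc==SQLITE_DONE ? SQLITE_OK : rc;
    }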
+*/ +static int simpleOpen( + sqlite3_tokenizer *pTokenizer, /* The tokenizer */ + const char *pInput, int nBytes, /* String to be tokenized */ + sqlite3_tokenizer_cursor **ppCursor /* OUT: Tokenization cursor */ +){ + simple_tokenizer_cursor *c; + + c = (simple_tokenizer_cursor *) sqlite3_malloc(sizeof(*c)); + if( c==NULL ) return SQLITE_NOMEM; + + c->pInput = pInput; + if( pInput==0 ){ + c->nBytes = 0; + }else if( nBytes<0 ){ + c->nBytes = (int)strlen(pInput); + }else{ + c->nBytes = nBytes; + } + c->iOffset = 0; /* start tokenizing at the beginning */ + c->iToken = 0; + c->pToken = NULL; /* no space allocated, yet. */ + c->nTokenAllocated = 0; + + *ppCursor = &c->base; + return SQLITE_OK; +} + +/* +** Close a tokenization cursor previously opened by a call to +** simpleOpen() above. +*/ +static int simpleClose(sqlite3_tokenizer_cursor *pCursor){ + simple_tokenizer_cursor *c = (simple_tokenizer_cursor *) pCursor; + sqlite3_free(c->pToken); + sqlite3_free(c); + return SQLITE_OK; +} + +/* +** Extract the next token from a tokenization cursor. The cursor must +** have been opened by a prior call to simpleOpen(). +*/ +static int simpleNext( + sqlite3_tokenizer_cursor *pCursor, /* Cursor returned by simpleOpen */ + const char **ppToken, /* OUT: *ppToken is the token text */ + int *pnBytes, /* OUT: Number of bytes in token */ + int *piStartOffset, /* OUT: Starting offset of token */ + int *piEndOffset, /* OUT: Ending offset of token */ + int *piPosition /* OUT: Position integer of token */ +){ + simple_tokenizer_cursor *c = (simple_tokenizer_cursor *) pCursor; + simple_tokenizer *t = (simple_tokenizer *) pCursor->pTokenizer; + unsigned char *p = (unsigned char *)c->pInput; + + while( c->iOffsetnBytes ){ + int iStartOffset; + + /* Scan past delimiter characters */ + while( c->iOffsetnBytes && simpleDelim(t, p[c->iOffset]) ){ + c->iOffset++; + } + + /* Count non-delimiter characters. */ + iStartOffset = c->iOffset; + while( c->iOffsetnBytes && !simpleDelim(t, p[c->iOffset]) ){ + c->iOffset++; + } + + if( c->iOffset>iStartOffset ){ + int i, n = c->iOffset-iStartOffset; + if( n>c->nTokenAllocated ){ + c->nTokenAllocated = n+20; + c->pToken = sqlite3_realloc(c->pToken, c->nTokenAllocated); + if( c->pToken==NULL ) return SQLITE_NOMEM; + } + for(i=0; ipToken[i] = ch<0x80 ? tolower(ch) : ch; + } + *ppToken = c->pToken; + *pnBytes = n; + *piStartOffset = iStartOffset; + *piEndOffset = c->iOffset; + *piPosition = c->iToken++; + + return SQLITE_OK; + } + } + return SQLITE_DONE; +} + +/* +** The set of routines that implement the simple tokenizer +*/ +static const sqlite3_tokenizer_module simpleTokenizerModule = { + 0, + simpleCreate, + simpleDestroy, + simpleOpen, + simpleClose, + simpleNext, +}; + +/* +** Allocate a new simple tokenizer. Return a pointer to the new +** tokenizer in *ppModule +*/ +void sqlite3Fts3SimpleTokenizerModule( + sqlite3_tokenizer_module const**ppModule +){ + *ppModule = &simpleTokenizerModule; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ Added: external/sqlite-source-3.5.7.x/func.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/func.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,1573 @@ +/* +** 2002 February 23 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
+** +************************************************************************* +** This file contains the C functions that implement various SQL +** functions of SQLite. +** +** There is only one exported symbol in this file - the function +** sqliteRegisterBuildinFunctions() found at the bottom of the file. +** All other code has file scope. +** +** $Id: func.c,v 1.186 2008/03/06 09:58:50 mlcreech Exp $ +*/ +#include "sqliteInt.h" +#include +#include +#include +#include "vdbeInt.h" + + +/* +** Return the collating function associated with a function. +*/ +static CollSeq *sqlite3GetFuncCollSeq(sqlite3_context *context){ + return context->pColl; +} + +/* +** Implementation of the non-aggregate min() and max() functions +*/ +static void minmaxFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int i; + int mask; /* 0 for min() or 0xffffffff for max() */ + int iBest; + CollSeq *pColl; + + if( argc==0 ) return; + mask = sqlite3_user_data(context)==0 ? 0 : -1; + pColl = sqlite3GetFuncCollSeq(context); + assert( pColl ); + assert( mask==-1 || mask==0 ); + iBest = 0; + if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return; + for(i=1; i=0 ){ + iBest = i; + } + } + sqlite3_result_value(context, argv[iBest]); +} + +/* +** Return the type of the argument. +*/ +static void typeofFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *z = 0; + switch( sqlite3_value_type(argv[0]) ){ + case SQLITE_NULL: z = "null"; break; + case SQLITE_INTEGER: z = "integer"; break; + case SQLITE_TEXT: z = "text"; break; + case SQLITE_FLOAT: z = "real"; break; + case SQLITE_BLOB: z = "blob"; break; + } + sqlite3_result_text(context, z, -1, SQLITE_STATIC); +} + + +/* +** Implementation of the length() function +*/ +static void lengthFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int len; + + assert( argc==1 ); + switch( sqlite3_value_type(argv[0]) ){ + case SQLITE_BLOB: + case SQLITE_INTEGER: + case SQLITE_FLOAT: { + sqlite3_result_int(context, sqlite3_value_bytes(argv[0])); + break; + } + case SQLITE_TEXT: { + const unsigned char *z = sqlite3_value_text(argv[0]); + if( z==0 ) return; + len = 0; + while( *z ){ + len++; + SQLITE_SKIP_UTF8(z); + } + sqlite3_result_int(context, len); + break; + } + default: { + sqlite3_result_null(context); + break; + } + } +} + +/* +** Implementation of the abs() function +*/ +static void absFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ + assert( argc==1 ); + switch( sqlite3_value_type(argv[0]) ){ + case SQLITE_INTEGER: { + i64 iVal = sqlite3_value_int64(argv[0]); + if( iVal<0 ){ + if( (iVal<<1)==0 ){ + sqlite3_result_error(context, "integer overflow", -1); + return; + } + iVal = -iVal; + } + sqlite3_result_int64(context, iVal); + break; + } + case SQLITE_NULL: { + sqlite3_result_null(context); + break; + } + default: { + double rVal = sqlite3_value_double(argv[0]); + if( rVal<0 ) rVal = -rVal; + sqlite3_result_double(context, rVal); + break; + } + } +} + +/* +** Implementation of the substr() function. +** +** substr(x,p1,p2) returns p2 characters of x[] beginning with p1. +** p1 is 1-indexed. So substr(x,1,1) returns the first character +** of x. If x is text, then we actually count UTF-8 characters. +** If x is a blob, then we count bytes. +** +** If p1 is negative, then we begin abs(p1) from the end of x[]. 
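Application-defined functions use the same (sqlite3_context*, int, sqlite3_value**) shape as the built-ins above and report results through the sqlite3_result_*() routines. The following sketch is illustrative (the name bytelen and the sample literal are made up, and the source file is assumed to be saved as UTF-8): it registers bytelen(X), which counts bytes, next to the character-counting length() implemented above.

    #include <stdio.h>
    #include "sqlite3.h"

    /* bytelen(X): the size of X in bytes, with NULL passed through. */
    static void bytelenFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      (void)argc;
      if( sqlite3_value_type(argv[0])==SQLITE_NULL ){
        sqlite3_result_null(ctx);
      }else{
        sqlite3_result_int(ctx, sqlite3_value_bytes(argv[0]));
      }
    }

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      sqlite3_open(":memory:", &db);
      sqlite3_create_function(db, "bytelen", 1, SQLITE_UTF8, 0, bytelenFunc, 0, 0);
      sqlite3_prepare_v2(db, "SELECT length('héllo'), bytelen('héllo')", -1, &pStmt, 0);
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        /* 5 characters but 6 bytes: 'é' occupies two bytes of UTF-8 */
        printf("chars=%d bytes=%d\n",
               sqlite3_column_int(pStmt, 0), sqlite3_column_int(pStmt, 1));
      }
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }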
+*/ +static void substrFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *z; + const unsigned char *z2; + int len; + int p0type; + i64 p1, p2; + + assert( argc==3 || argc==2 ); + p0type = sqlite3_value_type(argv[0]); + if( p0type==SQLITE_BLOB ){ + len = sqlite3_value_bytes(argv[0]); + z = sqlite3_value_blob(argv[0]); + if( z==0 ) return; + assert( len==sqlite3_value_bytes(argv[0]) ); + }else{ + z = sqlite3_value_text(argv[0]); + if( z==0 ) return; + len = 0; + for(z2=z; *z2; len++){ + SQLITE_SKIP_UTF8(z2); + } + } + p1 = sqlite3_value_int(argv[1]); + if( argc==3 ){ + p2 = sqlite3_value_int(argv[2]); + }else{ + p2 = SQLITE_MAX_LENGTH; + } + if( p1<0 ){ + p1 += len; + if( p1<0 ){ + p2 += p1; + p1 = 0; + } + }else if( p1>0 ){ + p1--; + } + if( p1+p2>len ){ + p2 = len-p1; + } + if( p0type!=SQLITE_BLOB ){ + while( *z && p1 ){ + SQLITE_SKIP_UTF8(z); + p1--; + } + for(z2=z; *z2 && p2; p2--){ + SQLITE_SKIP_UTF8(z2); + } + sqlite3_result_text(context, (char*)z, z2-z, SQLITE_TRANSIENT); + }else{ + if( p2<0 ) p2 = 0; + sqlite3_result_blob(context, (char*)&z[p1], p2, SQLITE_TRANSIENT); + } +} + +/* +** Implementation of the round() function +*/ +static void roundFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ + int n = 0; + double r; + char zBuf[500]; /* larger than the %f representation of the largest double */ + assert( argc==1 || argc==2 ); + if( argc==2 ){ + if( SQLITE_NULL==sqlite3_value_type(argv[1]) ) return; + n = sqlite3_value_int(argv[1]); + if( n>30 ) n = 30; + if( n<0 ) n = 0; + } + if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return; + r = sqlite3_value_double(argv[0]); + sqlite3_snprintf(sizeof(zBuf),zBuf,"%.*f",n,r); + sqlite3AtoF(zBuf, &r); + sqlite3_result_double(context, r); +} + +/* +** Allocate nByte bytes of space using sqlite3_malloc(). If the +** allocation fails, call sqlite3_result_error_nomem() to notify +** the database handle that malloc() has failed. +*/ +static void *contextMalloc(sqlite3_context *context, int nByte){ + char *z = sqlite3_malloc(nByte); + if( !z && nByte>0 ){ + sqlite3_result_error_nomem(context); + } + return z; +} + +/* +** Implementation of the upper() and lower() SQL functions. +*/ +static void upperFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ + char *z1; + const char *z2; + int i, n; + if( argc<1 || SQLITE_NULL==sqlite3_value_type(argv[0]) ) return; + z2 = (char*)sqlite3_value_text(argv[0]); + n = sqlite3_value_bytes(argv[0]); + /* Verify that the call to _bytes() does not invalidate the _text() pointer */ + assert( z2==(char*)sqlite3_value_text(argv[0]) ); + if( z2 ){ + z1 = contextMalloc(context, n+1); + if( z1 ){ + memcpy(z1, z2, n+1); + for(i=0; z1[i]; i++){ + z1[i] = toupper(z1[i]); + } + sqlite3_result_text(context, z1, -1, sqlite3_free); + } + } +} +static void lowerFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ + char *z1; + const char *z2; + int i, n; + if( argc<1 || SQLITE_NULL==sqlite3_value_type(argv[0]) ) return; + z2 = (char*)sqlite3_value_text(argv[0]); + n = sqlite3_value_bytes(argv[0]); + /* Verify that the call to _bytes() does not invalidate the _text() pointer */ + assert( z2==(char*)sqlite3_value_text(argv[0]) ); + if( z2 ){ + z1 = contextMalloc(context, n+1); + if( z1 ){ + memcpy(z1, z2, n+1); + for(i=0; z1[i]; i++){ + z1[i] = tolower(z1[i]); + } + sqlite3_result_text(context, z1, -1, sqlite3_free); + } + } +} + +/* +** Implementation of the IFNULL(), NVL(), and COALESCE() functions. +** All three do the same thing. 
They return the first non-NULL +** argument. +*/ +static void ifnullFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int i; + for(i=0; iSQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + return; + } + p = contextMalloc(context, n); + if( p ){ + sqlite3Randomness(n, p); + sqlite3_result_blob(context, (char*)p, n, sqlite3_free); + } +} + +/* +** Implementation of the last_insert_rowid() SQL function. The return +** value is the same as the sqlite3_last_insert_rowid() API function. +*/ +static void last_insert_rowid( + sqlite3_context *context, + int arg, + sqlite3_value **argv +){ + sqlite3 *db = sqlite3_user_data(context); + sqlite3_result_int64(context, sqlite3_last_insert_rowid(db)); +} + +/* +** Implementation of the changes() SQL function. The return value is the +** same as the sqlite3_changes() API function. +*/ +static void changes( + sqlite3_context *context, + int arg, + sqlite3_value **argv +){ + sqlite3 *db = sqlite3_user_data(context); + sqlite3_result_int(context, sqlite3_changes(db)); +} + +/* +** Implementation of the total_changes() SQL function. The return value is +** the same as the sqlite3_total_changes() API function. +*/ +static void total_changes( + sqlite3_context *context, + int arg, + sqlite3_value **argv +){ + sqlite3 *db = sqlite3_user_data(context); + sqlite3_result_int(context, sqlite3_total_changes(db)); +} + +/* +** A structure defining how to do GLOB-style comparisons. +*/ +struct compareInfo { + u8 matchAll; + u8 matchOne; + u8 matchSet; + u8 noCase; +}; + +/* +** For LIKE and GLOB matching on EBCDIC machines, assume that every +** character is exactly one byte in size. Also, all characters are +** able to participate in upper-case-to-lower-case mappings in EBCDIC +** whereas only characters less than 0x80 do in ASCII. +*/ +#if defined(SQLITE_EBCDIC) +# define sqlite3Utf8Read(A,B,C) (*(A++)) +# define GlogUpperToLower(A) A = sqlite3UpperToLower[A] +#else +# define GlogUpperToLower(A) if( A<0x80 ){ A = sqlite3UpperToLower[A]; } +#endif + +static const struct compareInfo globInfo = { '*', '?', '[', 0 }; +/* The correct SQL-92 behavior is for the LIKE operator to ignore +** case. Thus 'a' LIKE 'A' would be true. */ +static const struct compareInfo likeInfoNorm = { '%', '_', 0, 1 }; +/* If SQLITE_CASE_SENSITIVE_LIKE is defined, then the LIKE operator +** is case sensitive causing 'a' LIKE 'A' to be false */ +static const struct compareInfo likeInfoAlt = { '%', '_', 0, 0 }; + +/* +** Compare two UTF-8 strings for equality where the first string can +** potentially be a "glob" expression. Return true (1) if they +** are the same and false (0) if they are different. +** +** Globbing rules: +** +** '*' Matches any sequence of zero or more characters. +** +** '?' Matches exactly one character. +** +** [...] Matches one character from the enclosed list of +** characters. +** +** [^...] Matches one character not in the enclosed list. +** +** With the [...] and [^...] matching, a ']' character can be included +** in the list by making it the first character after '[' or '^'. A +** range of characters can be specified using '-'. Example: +** "[a-z]" matches any single lower-case letter. To match a '-', make +** it the last character in the list. +** +** This routine is usually quick, but can be N**2 in the worst case. +** +** Hints: to match '*' or '?', put them in "[]". 
Like this: +** +** abc[*]xyz Matches "abc*xyz" only +*/ +static int patternCompare( + const u8 *zPattern, /* The glob pattern */ + const u8 *zString, /* The string to compare against the glob */ + const struct compareInfo *pInfo, /* Information about how to do the compare */ + const int esc /* The escape character */ +){ + int c, c2; + int invert; + int seen; + u8 matchOne = pInfo->matchOne; + u8 matchAll = pInfo->matchAll; + u8 matchSet = pInfo->matchSet; + u8 noCase = pInfo->noCase; + int prevEscape = 0; /* True if the previous character was 'escape' */ + + while( (c = sqlite3Utf8Read(zPattern,0,&zPattern))!=0 ){ + if( !prevEscape && c==matchAll ){ + while( (c=sqlite3Utf8Read(zPattern,0,&zPattern)) == matchAll + || c == matchOne ){ + if( c==matchOne && sqlite3Utf8Read(zString, 0, &zString)==0 ){ + return 0; + } + } + if( c==0 ){ + return 1; + }else if( c==esc ){ + c = sqlite3Utf8Read(zPattern, 0, &zPattern); + if( c==0 ){ + return 0; + } + }else if( c==matchSet ){ + assert( esc==0 ); /* This is GLOB, not LIKE */ + assert( matchSet<0x80 ); /* '[' is a single-byte character */ + while( *zString && patternCompare(&zPattern[-1],zString,pInfo,esc)==0 ){ + SQLITE_SKIP_UTF8(zString); + } + return *zString!=0; + } + while( (c2 = sqlite3Utf8Read(zString,0,&zString))!=0 ){ + if( noCase ){ + GlogUpperToLower(c2); + GlogUpperToLower(c); + while( c2 != 0 && c2 != c ){ + c2 = sqlite3Utf8Read(zString, 0, &zString); + GlogUpperToLower(c2); + } + }else{ + while( c2 != 0 && c2 != c ){ + c2 = sqlite3Utf8Read(zString, 0, &zString); + } + } + if( c2==0 ) return 0; + if( patternCompare(zPattern,zString,pInfo,esc) ) return 1; + } + return 0; + }else if( !prevEscape && c==matchOne ){ + if( sqlite3Utf8Read(zString, 0, &zString)==0 ){ + return 0; + } + }else if( c==matchSet ){ + int prior_c = 0; + assert( esc==0 ); /* This only occurs for GLOB, not LIKE */ + seen = 0; + invert = 0; + c = sqlite3Utf8Read(zString, 0, &zString); + if( c==0 ) return 0; + c2 = sqlite3Utf8Read(zPattern, 0, &zPattern); + if( c2=='^' ){ + invert = 1; + c2 = sqlite3Utf8Read(zPattern, 0, &zPattern); + } + if( c2==']' ){ + if( c==']' ) seen = 1; + c2 = sqlite3Utf8Read(zPattern, 0, &zPattern); + } + while( c2 && c2!=']' ){ + if( c2=='-' && zPattern[0]!=']' && zPattern[0]!=0 && prior_c>0 ){ + c2 = sqlite3Utf8Read(zPattern, 0, &zPattern); + if( c>=prior_c && c<=c2 ) seen = 1; + prior_c = 0; + }else{ + if( c==c2 ){ + seen = 1; + } + prior_c = c2; + } + c2 = sqlite3Utf8Read(zPattern, 0, &zPattern); + } + if( c2==0 || (seen ^ invert)==0 ){ + return 0; + } + }else if( esc==c && !prevEscape ){ + prevEscape = 1; + }else{ + c2 = sqlite3Utf8Read(zString, 0, &zString); + if( noCase ){ + GlogUpperToLower(c); + GlogUpperToLower(c2); + } + if( c!=c2 ){ + return 0; + } + prevEscape = 0; + } + } + return *zString==0; +} + +/* +** Count the number of times that the LIKE operator (or GLOB which is +** just a variation of LIKE) gets called. This is used for testing +** only. +*/ +#ifdef SQLITE_TEST +int sqlite3_like_count = 0; +#endif + + +/* +** Implementation of the like() SQL function. This function implements +** the build-in LIKE operator. The first argument to the function is the +** pattern and the second argument is the string. So, the SQL statements: +** +** A LIKE B +** +** is implemented as like(B,A). +** +** This same function (with a different compareInfo structure) computes +** the GLOB operator. 
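The globbing and LIKE rules above are easy to verify from application code. The sketch below is illustrative only (the column aliases are made up and error handling is omitted); the comments show the expected results: [*] matches a literal '*', GLOB is case sensitive, LIKE is not by default, and an ESCAPE character protects a wildcard.

    #include <stdio.h>
    #include "sqlite3.h"

    static int show(void *pArg, int nCol, char **azVal, char **azCol){
      int i;
      (void)pArg;
      for(i=0; i<nCol; i++) printf("%s = %s\n", azCol[i], azVal[i]);
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "SELECT 'abc*xyz' GLOB 'abc[*]xyz' AS glob_bracket,"          /* 1 */
        "       'FreeBSD' GLOB 'free*'    AS glob_case,"              /* 0 */
        "       'FreeBSD' LIKE 'free%'    AS like_case,"              /* 1 */
        "       '50%'     LIKE '50\\%' ESCAPE '\\' AS like_escape",   /* 1 */
        show, 0, 0);
      sqlite3_close(db);
      return 0;
    }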
+*/ +static void likeFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *zA, *zB; + int escape = 0; + + zB = sqlite3_value_text(argv[0]); + zA = sqlite3_value_text(argv[1]); + + /* Limit the length of the LIKE or GLOB pattern to avoid problems + ** of deep recursion and N*N behavior in patternCompare(). + */ + if( sqlite3_value_bytes(argv[0])>SQLITE_MAX_LIKE_PATTERN_LENGTH ){ + sqlite3_result_error(context, "LIKE or GLOB pattern too complex", -1); + return; + } + assert( zB==sqlite3_value_text(argv[0]) ); /* Encoding did not change */ + + if( argc==3 ){ + /* The escape character string must consist of a single UTF-8 character. + ** Otherwise, return an error. + */ + const unsigned char *zEsc = sqlite3_value_text(argv[2]); + if( zEsc==0 ) return; + if( sqlite3Utf8CharLen((char*)zEsc, -1)!=1 ){ + sqlite3_result_error(context, + "ESCAPE expression must be a single character", -1); + return; + } + escape = sqlite3Utf8Read(zEsc, 0, &zEsc); + } + if( zA && zB ){ + struct compareInfo *pInfo = sqlite3_user_data(context); +#ifdef SQLITE_TEST + sqlite3_like_count++; +#endif + + sqlite3_result_int(context, patternCompare(zB, zA, pInfo, escape)); + } +} + +/* +** Implementation of the NULLIF(x,y) function. The result is the first +** argument if the arguments are different. The result is NULL if the +** arguments are equal to each other. +*/ +static void nullifFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + CollSeq *pColl = sqlite3GetFuncCollSeq(context); + if( sqlite3MemCompare(argv[0], argv[1], pColl)!=0 ){ + sqlite3_result_value(context, argv[0]); + } +} + +/* +** Implementation of the VERSION(*) function. The result is the version +** of the SQLite library that is running. +*/ +static void versionFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + sqlite3_result_text(context, sqlite3_version, -1, SQLITE_STATIC); +} + +/* Array for converting from half-bytes (nybbles) into ASCII hex +** digits. */ +static const char hexdigits[] = { + '0', '1', '2', '3', '4', '5', '6', '7', + '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' +}; + +/* +** EXPERIMENTAL - This is not an official function. The interface may +** change. This function may disappear. Do not write code that depends +** on this function. +** +** Implementation of the QUOTE() function. This function takes a single +** argument. If the argument is numeric, the return value is the same as +** the argument. If the argument is NULL, the return value is the string +** "NULL". Otherwise, the argument is enclosed in single quotes with +** single-quote escapes. 
+*/ +static void quoteFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ + if( argc<1 ) return; + switch( sqlite3_value_type(argv[0]) ){ + case SQLITE_NULL: { + sqlite3_result_text(context, "NULL", 4, SQLITE_STATIC); + break; + } + case SQLITE_INTEGER: + case SQLITE_FLOAT: { + sqlite3_result_value(context, argv[0]); + break; + } + case SQLITE_BLOB: { + char *zText = 0; + char const *zBlob = sqlite3_value_blob(argv[0]); + int nBlob = sqlite3_value_bytes(argv[0]); + assert( zBlob==sqlite3_value_blob(argv[0]) ); /* No encoding change */ + + if( 2*nBlob+4>SQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + return; + } + zText = (char *)contextMalloc(context, (2*nBlob)+4); + if( zText ){ + int i; + for(i=0; i>4)&0x0F]; + zText[(i*2)+3] = hexdigits[(zBlob[i])&0x0F]; + } + zText[(nBlob*2)+2] = '\''; + zText[(nBlob*2)+3] = '\0'; + zText[0] = 'X'; + zText[1] = '\''; + sqlite3_result_text(context, zText, -1, SQLITE_TRANSIENT); + sqlite3_free(zText); + } + break; + } + case SQLITE_TEXT: { + int i,j; + u64 n; + const unsigned char *zArg = sqlite3_value_text(argv[0]); + char *z; + + if( zArg==0 ) return; + for(i=0, n=0; zArg[i]; i++){ if( zArg[i]=='\'' ) n++; } + if( i+n+3>SQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + return; + } + z = contextMalloc(context, i+n+3); + if( z ){ + z[0] = '\''; + for(i=0, j=1; zArg[i]; i++){ + z[j++] = zArg[i]; + if( zArg[i]=='\'' ){ + z[j++] = '\''; + } + } + z[j++] = '\''; + z[j] = 0; + sqlite3_result_text(context, z, j, sqlite3_free); + } + } + } +} + +/* +** The hex() function. Interpret the argument as a blob. Return +** a hexadecimal rendering as text. +*/ +static void hexFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int i, n; + const unsigned char *pBlob; + char *zHex, *z; + assert( argc==1 ); + pBlob = sqlite3_value_blob(argv[0]); + n = sqlite3_value_bytes(argv[0]); + if( n*2+1>SQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + return; + } + assert( pBlob==sqlite3_value_blob(argv[0]) ); /* No encoding change */ + z = zHex = contextMalloc(context, n*2 + 1); + if( zHex ){ + for(i=0; i>4)&0xf]; + *(z++) = hexdigits[c&0xf]; + } + *z = 0; + sqlite3_result_text(context, zHex, n*2, sqlite3_free); + } +} + +/* +** The zeroblob(N) function returns a zero-filled blob of size N bytes. +*/ +static void zeroblobFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + i64 n; + assert( argc==1 ); + n = sqlite3_value_int64(argv[0]); + if( n>SQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + }else{ + sqlite3_result_zeroblob(context, n); + } +} + +/* +** The replace() function. Three arguments are all strings: call +** them A, B, and C. The result is also a string which is derived +** from A by replacing every occurance of B with C. The match +** must be exact. Collating sequences are not used. 
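The quoting and blob helpers just defined, together with the replace() function described above, can be sampled in a single statement. Another short illustrative sketch (aliases made up, expected output in the comments, error handling omitted):

    #include <stdio.h>
    #include "sqlite3.h"

    static int show(void *pArg, int nCol, char **azVal, char **azCol){
      int i;
      (void)pArg;
      for(i=0; i<nCol; i++) printf("%s = %s\n", azCol[i], azVal[i]);
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "SELECT quote('it''s')             AS q_text,"   /* 'it''s'   */
        "       quote(X'00FF')             AS q_blob,"   /* X'00FF'   */
        "       hex(zeroblob(4))           AS zb,"       /* 00000000  */
        "       replace('abcabc','b','XX') AS repl",     /* aXXcaXXc  */
        show, 0, 0);
      sqlite3_close(db);
      return 0;
    }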
+*/ +static void replaceFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *zStr; /* The input string A */ + const unsigned char *zPattern; /* The pattern string B */ + const unsigned char *zRep; /* The replacement string C */ + unsigned char *zOut; /* The output */ + int nStr; /* Size of zStr */ + int nPattern; /* Size of zPattern */ + int nRep; /* Size of zRep */ + i64 nOut; /* Maximum size of zOut */ + int loopLimit; /* Last zStr[] that might match zPattern[] */ + int i, j; /* Loop counters */ + + assert( argc==3 ); + zStr = sqlite3_value_text(argv[0]); + if( zStr==0 ) return; + nStr = sqlite3_value_bytes(argv[0]); + assert( zStr==sqlite3_value_text(argv[0]) ); /* No encoding change */ + zPattern = sqlite3_value_text(argv[1]); + if( zPattern==0 || zPattern[0]==0 ) return; + nPattern = sqlite3_value_bytes(argv[1]); + assert( zPattern==sqlite3_value_text(argv[1]) ); /* No encoding change */ + zRep = sqlite3_value_text(argv[2]); + if( zRep==0 ) return; + nRep = sqlite3_value_bytes(argv[2]); + assert( zRep==sqlite3_value_text(argv[2]) ); + nOut = nStr + 1; + assert( nOut=SQLITE_MAX_LENGTH ){ + sqlite3_result_error_toobig(context); + sqlite3_free(zOut); + return; + } + zOld = zOut; + zOut = sqlite3_realloc(zOut, (int)nOut); + if( zOut==0 ){ + sqlite3_result_error_nomem(context); + sqlite3_free(zOld); + return; + } + memcpy(&zOut[j], zRep, nRep); + j += nRep; + i += nPattern-1; + } + } + assert( j+nStr-i+1==nOut ); + memcpy(&zOut[j], &zStr[i], nStr-i); + j += nStr - i; + assert( j<=nOut ); + zOut[j] = 0; + sqlite3_result_text(context, (char*)zOut, j, sqlite3_free); +} + +/* +** Implementation of the TRIM(), LTRIM(), and RTRIM() functions. +** The userdata is 0x1 for left trim, 0x2 for right trim, 0x3 for both. 
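Selecting behaviour through sqlite3_user_data(), as trimFunc() below does for ltrim(), rtrim() and trim(), is a pattern applications can reuse: one C function is registered under several names with different user-data pointers. An illustrative sketch (whichendFunc() and the wleft()/wright()/wboth() names are made up; the flag values mirror the 0x1/0x2/0x3 convention described above):

    #include <stdio.h>
    #include <stdint.h>
    #include "sqlite3.h"

    /* One implementation registered three times; the user-data pointer carries
    ** a small flag that selects the behaviour, as trimFunc() does. */
    static void whichendFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      int flags = (int)(intptr_t)sqlite3_user_data(ctx);
      (void)argc; (void)argv;
      sqlite3_result_text(ctx,
          flags==1 ? "left" : flags==2 ? "right" : "both", -1, SQLITE_STATIC);
    }

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      sqlite3_open(":memory:", &db);
      sqlite3_create_function(db, "wleft",  0, SQLITE_UTF8, (void*)1, whichendFunc, 0, 0);
      sqlite3_create_function(db, "wright", 0, SQLITE_UTF8, (void*)2, whichendFunc, 0, 0);
      sqlite3_create_function(db, "wboth",  0, SQLITE_UTF8, (void*)3, whichendFunc, 0, 0);
      sqlite3_prepare_v2(db, "SELECT wleft(), wright(), wboth()", -1, &pStmt, 0);
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("%s %s %s\n",
               (const char*)sqlite3_column_text(pStmt, 0),
               (const char*)sqlite3_column_text(pStmt, 1),
               (const char*)sqlite3_column_text(pStmt, 2));   /* left right both */
      }
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }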
+*/ +static void trimFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *zIn; /* Input string */ + const unsigned char *zCharSet; /* Set of characters to trim */ + int nIn; /* Number of bytes in input */ + sqlite3_intptr_t flags; /* 1: trimleft 2: trimright 3: trim */ + int i; /* Loop counter */ + unsigned char *aLen; /* Length of each character in zCharSet */ + unsigned char **azChar; /* Individual characters in zCharSet */ + int nChar; /* Number of characters in zCharSet */ + + if( sqlite3_value_type(argv[0])==SQLITE_NULL ){ + return; + } + zIn = sqlite3_value_text(argv[0]); + if( zIn==0 ) return; + nIn = sqlite3_value_bytes(argv[0]); + assert( zIn==sqlite3_value_text(argv[0]) ); + if( argc==1 ){ + static const unsigned char lenOne[] = { 1 }; + static const unsigned char *azOne[] = { (u8*)" " }; + nChar = 1; + aLen = (u8*)lenOne; + azChar = (unsigned char **)azOne; + zCharSet = 0; + }else if( (zCharSet = sqlite3_value_text(argv[1]))==0 ){ + return; + }else{ + const unsigned char *z; + for(z=zCharSet, nChar=0; *z; nChar++){ + SQLITE_SKIP_UTF8(z); + } + if( nChar>0 ){ + azChar = contextMalloc(context, nChar*(sizeof(char*)+1)); + if( azChar==0 ){ + return; + } + aLen = (unsigned char*)&azChar[nChar]; + for(z=zCharSet, nChar=0; *z; nChar++){ + azChar[nChar] = (unsigned char *)z; + SQLITE_SKIP_UTF8(z); + aLen[nChar] = z - azChar[nChar]; + } + } + } + if( nChar>0 ){ + flags = (sqlite3_intptr_t)sqlite3_user_data(context); + if( flags & 1 ){ + while( nIn>0 ){ + int len; + for(i=0; i=nChar ) break; + zIn += len; + nIn -= len; + } + } + if( flags & 2 ){ + while( nIn>0 ){ + int len; + for(i=0; i=nChar ) break; + nIn -= len; + } + } + if( zCharSet ){ + sqlite3_free(azChar); + } + } + sqlite3_result_text(context, (char*)zIn, nIn, SQLITE_TRANSIENT); +} + +#ifdef SQLITE_SOUNDEX +/* +** Compute the soundex encoding of a word. +*/ +static void soundexFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + char zResult[8]; + const u8 *zIn; + int i, j; + static const unsigned char iCode[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 1, 2, 3, 0, 1, 2, 0, 0, 2, 2, 4, 5, 5, 0, + 1, 2, 6, 2, 3, 0, 1, 0, 2, 0, 2, 0, 0, 0, 0, 0, + 0, 0, 1, 2, 3, 0, 1, 2, 0, 0, 2, 2, 4, 5, 5, 0, + 1, 2, 6, 2, 3, 0, 1, 0, 2, 0, 2, 0, 0, 0, 0, 0, + }; + assert( argc==1 ); + zIn = (u8*)sqlite3_value_text(argv[0]); + if( zIn==0 ) zIn = (u8*)""; + for(i=0; zIn[i] && !isalpha(zIn[i]); i++){} + if( zIn[i] ){ + u8 prevcode = iCode[zIn[i]&0x7f]; + zResult[0] = toupper(zIn[i]); + for(j=1; j<4 && zIn[i]; i++){ + int code = iCode[zIn[i]&0x7f]; + if( code>0 ){ + if( code!=prevcode ){ + prevcode = code; + zResult[j++] = code + '0'; + } + }else{ + prevcode = 0; + } + } + while( j<4 ){ + zResult[j++] = '0'; + } + zResult[j] = 0; + sqlite3_result_text(context, zResult, 4, SQLITE_TRANSIENT); + }else{ + sqlite3_result_text(context, "?000", 4, SQLITE_STATIC); + } +} +#endif + +#ifndef SQLITE_OMIT_LOAD_EXTENSION +/* +** A function that loads a shared-library extension then returns NULL. 
+*/ +static void loadExt(sqlite3_context *context, int argc, sqlite3_value **argv){ + const char *zFile = (const char *)sqlite3_value_text(argv[0]); + const char *zProc; + sqlite3 *db = sqlite3_user_data(context); + char *zErrMsg = 0; + + if( argc==2 ){ + zProc = (const char *)sqlite3_value_text(argv[1]); + }else{ + zProc = 0; + } + if( zFile && sqlite3_load_extension(db, zFile, zProc, &zErrMsg) ){ + sqlite3_result_error(context, zErrMsg, -1); + sqlite3_free(zErrMsg); + } +} +#endif + +#ifdef SQLITE_TEST +/* +** This function generates a string of random characters. Used for +** generating test data. +*/ +static void randStr(sqlite3_context *context, int argc, sqlite3_value **argv){ + static const unsigned char zSrc[] = + "abcdefghijklmnopqrstuvwxyz" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "0123456789" + ".-!,:*^+=_|?/<> "; + int iMin, iMax, n, r, i; + unsigned char zBuf[1000]; + + /* It used to be possible to call randstr() with any number of arguments, + ** but now it is registered with SQLite as requiring exactly 2. + */ + assert(argc==2); + + iMin = sqlite3_value_int(argv[0]); + if( iMin<0 ) iMin = 0; + if( iMin>=sizeof(zBuf) ) iMin = sizeof(zBuf)-1; + iMax = sqlite3_value_int(argv[1]); + if( iMax=sizeof(zBuf) ) iMax = sizeof(zBuf)-1; + n = iMin; + if( iMax>iMin ){ + sqlite3Randomness(sizeof(r), &r); + r &= 0x7fffffff; + n += r%(iMax + 1 - iMin); + } + assert( ncnt++; + if( type==SQLITE_INTEGER ){ + i64 v = sqlite3_value_int64(argv[0]); + p->rSum += v; + if( (p->approx|p->overflow)==0 ){ + i64 iNewSum = p->iSum + v; + int s1 = p->iSum >> (sizeof(i64)*8-1); + int s2 = v >> (sizeof(i64)*8-1); + int s3 = iNewSum >> (sizeof(i64)*8-1); + p->overflow = (s1&s2&~s3) | (~s1&~s2&s3); + p->iSum = iNewSum; + } + }else{ + p->rSum += sqlite3_value_double(argv[0]); + p->approx = 1; + } + } +} +static void sumFinalize(sqlite3_context *context){ + SumCtx *p; + p = sqlite3_aggregate_context(context, 0); + if( p && p->cnt>0 ){ + if( p->overflow ){ + sqlite3_result_error(context,"integer overflow",-1); + }else if( p->approx ){ + sqlite3_result_double(context, p->rSum); + }else{ + sqlite3_result_int64(context, p->iSum); + } + } +} +static void avgFinalize(sqlite3_context *context){ + SumCtx *p; + p = sqlite3_aggregate_context(context, 0); + if( p && p->cnt>0 ){ + sqlite3_result_double(context, p->rSum/(double)p->cnt); + } +} +static void totalFinalize(sqlite3_context *context){ + SumCtx *p; + p = sqlite3_aggregate_context(context, 0); + sqlite3_result_double(context, p ? p->rSum : 0.0); +} + +/* +** The following structure keeps track of state information for the +** count() aggregate function. +*/ +typedef struct CountCtx CountCtx; +struct CountCtx { + i64 n; +}; + +/* +** Routines to implement the count() aggregate function. +*/ +static void countStep(sqlite3_context *context, int argc, sqlite3_value **argv){ + CountCtx *p; + p = sqlite3_aggregate_context(context, sizeof(*p)); + if( (argc==0 || SQLITE_NULL!=sqlite3_value_type(argv[0])) && p ){ + p->n++; + } +} +static void countFinalize(sqlite3_context *context){ + CountCtx *p; + p = sqlite3_aggregate_context(context, 0); + sqlite3_result_int64(context, p ? p->n : 0); +} + +/* +** Routines to implement min() and max() aggregate functions. 
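Application-defined aggregates follow the same xStep/xFinalize split used by sum(), count() and the min()/max() aggregate below, keeping their running state in sqlite3_aggregate_context(). An illustrative sketch (the product() aggregate and the table are made up; overflow handling is omitted):

    #include <stdio.h>
    #include "sqlite3.h"

    typedef struct ProductCtx { double r; int init; } ProductCtx;

    static void productStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      ProductCtx *p = sqlite3_aggregate_context(ctx, sizeof(*p));
      (void)argc;
      if( p==0 || sqlite3_value_type(argv[0])==SQLITE_NULL ) return;
      if( !p->init ){ p->r = 1.0; p->init = 1; }
      p->r *= sqlite3_value_double(argv[0]);
    }

    static void productFinal(sqlite3_context *ctx){
      ProductCtx *p = sqlite3_aggregate_context(ctx, 0);
      if( p && p->init ){
        sqlite3_result_double(ctx, p->r);   /* like sum(), empty input yields NULL */
      }
    }

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      sqlite3_open(":memory:", &db);
      sqlite3_create_function(db, "product", 1, SQLITE_UTF8, 0, 0,
                              productStep, productFinal);
      sqlite3_exec(db,
        "CREATE TABLE t(x);"
        "INSERT INTO t VALUES(2); INSERT INTO t VALUES(3); INSERT INTO t VALUES(4);",
        0, 0, 0);
      sqlite3_prepare_v2(db, "SELECT product(x) FROM t", -1, &pStmt, 0);
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("product = %g\n", sqlite3_column_double(pStmt, 0));   /* 24 */
      }
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }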
+*/ +static void minmaxStep(sqlite3_context *context, int argc, sqlite3_value **argv){ + Mem *pArg = (Mem *)argv[0]; + Mem *pBest; + + if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return; + pBest = (Mem *)sqlite3_aggregate_context(context, sizeof(*pBest)); + if( !pBest ) return; + + if( pBest->flags ){ + int max; + int cmp; + CollSeq *pColl = sqlite3GetFuncCollSeq(context); + /* This step function is used for both the min() and max() aggregates, + ** the only difference between the two being that the sense of the + ** comparison is inverted. For the max() aggregate, the + ** sqlite3_user_data() function returns (void *)-1. For min() it + ** returns (void *)db, where db is the sqlite3* database pointer. + ** Therefore the next statement sets variable 'max' to 1 for the max() + ** aggregate, or 0 for min(). + */ + max = sqlite3_user_data(context)!=0; + cmp = sqlite3MemCompare(pBest, pArg, pColl); + if( (max && cmp<0) || (!max && cmp>0) ){ + sqlite3VdbeMemCopy(pBest, pArg); + } + }else{ + sqlite3VdbeMemCopy(pBest, pArg); + } +} +static void minMaxFinalize(sqlite3_context *context){ + sqlite3_value *pRes; + pRes = (sqlite3_value *)sqlite3_aggregate_context(context, 0); + if( pRes ){ + if( pRes->flags ){ + sqlite3_result_value(context, pRes); + } + sqlite3VdbeMemRelease(pRes); + } +} + +/* +** group_concat(EXPR, ?SEPARATOR?) +*/ +static void groupConcatStep( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zVal; + StrAccum *pAccum; + const char *zSep; + int nVal, nSep; + if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return; + pAccum = (StrAccum*)sqlite3_aggregate_context(context, sizeof(*pAccum)); + + if( pAccum ){ + pAccum->useMalloc = 1; + if( pAccum->nChar ){ + if( argc==2 ){ + zSep = (char*)sqlite3_value_text(argv[1]); + nSep = sqlite3_value_bytes(argv[1]); + }else{ + zSep = ","; + nSep = 1; + } + sqlite3StrAccumAppend(pAccum, zSep, nSep); + } + zVal = (char*)sqlite3_value_text(argv[0]); + nVal = sqlite3_value_bytes(argv[0]); + sqlite3StrAccumAppend(pAccum, zVal, nVal); + } +} +static void groupConcatFinalize(sqlite3_context *context){ + StrAccum *pAccum; + pAccum = sqlite3_aggregate_context(context, 0); + if( pAccum ){ + if( pAccum->tooBig ){ + sqlite3_result_error_toobig(context); + }else if( pAccum->mallocFailed ){ + sqlite3_result_error_nomem(context); + }else{ + sqlite3_result_text(context, sqlite3StrAccumFinish(pAccum), -1, + sqlite3_free); + } + } +} + +/* +** This function registered all of the above C functions as SQL +** functions. This should be the only routine in this file with +** external linkage. +*/ +void sqlite3RegisterBuiltinFunctions(sqlite3 *db){ + static const struct { + char *zName; + signed char nArg; + u8 argType; /* ff: db 1: 0, 2: 1, 3: 2,... N: N-1. */ + u8 eTextRep; /* 1: UTF-16. 
0: UTF-8 */ + u8 needCollSeq; + void (*xFunc)(sqlite3_context*,int,sqlite3_value **); + } aFuncs[] = { + { "min", -1, 0, SQLITE_UTF8, 1, minmaxFunc }, + { "min", 0, 0, SQLITE_UTF8, 1, 0 }, + { "max", -1, 1, SQLITE_UTF8, 1, minmaxFunc }, + { "max", 0, 1, SQLITE_UTF8, 1, 0 }, + { "typeof", 1, 0, SQLITE_UTF8, 0, typeofFunc }, + { "length", 1, 0, SQLITE_UTF8, 0, lengthFunc }, + { "substr", 2, 0, SQLITE_UTF8, 0, substrFunc }, + { "substr", 3, 0, SQLITE_UTF8, 0, substrFunc }, + { "abs", 1, 0, SQLITE_UTF8, 0, absFunc }, + { "round", 1, 0, SQLITE_UTF8, 0, roundFunc }, + { "round", 2, 0, SQLITE_UTF8, 0, roundFunc }, + { "upper", 1, 0, SQLITE_UTF8, 0, upperFunc }, + { "lower", 1, 0, SQLITE_UTF8, 0, lowerFunc }, + { "coalesce", -1, 0, SQLITE_UTF8, 0, ifnullFunc }, + { "coalesce", 0, 0, SQLITE_UTF8, 0, 0 }, + { "coalesce", 1, 0, SQLITE_UTF8, 0, 0 }, + { "hex", 1, 0, SQLITE_UTF8, 0, hexFunc }, + { "ifnull", 2, 0, SQLITE_UTF8, 1, ifnullFunc }, + { "random", -1, 0, SQLITE_UTF8, 0, randomFunc }, + { "randomblob", 1, 0, SQLITE_UTF8, 0, randomBlob }, + { "nullif", 2, 0, SQLITE_UTF8, 1, nullifFunc }, + { "sqlite_version", 0, 0, SQLITE_UTF8, 0, versionFunc}, + { "quote", 1, 0, SQLITE_UTF8, 0, quoteFunc }, + { "last_insert_rowid", 0, 0xff, SQLITE_UTF8, 0, last_insert_rowid }, + { "changes", 0, 0xff, SQLITE_UTF8, 0, changes }, + { "total_changes", 0, 0xff, SQLITE_UTF8, 0, total_changes }, + { "replace", 3, 0, SQLITE_UTF8, 0, replaceFunc }, + { "ltrim", 1, 1, SQLITE_UTF8, 0, trimFunc }, + { "ltrim", 2, 1, SQLITE_UTF8, 0, trimFunc }, + { "rtrim", 1, 2, SQLITE_UTF8, 0, trimFunc }, + { "rtrim", 2, 2, SQLITE_UTF8, 0, trimFunc }, + { "trim", 1, 3, SQLITE_UTF8, 0, trimFunc }, + { "trim", 2, 3, SQLITE_UTF8, 0, trimFunc }, + { "zeroblob", 1, 0, SQLITE_UTF8, 0, zeroblobFunc }, +#ifdef SQLITE_SOUNDEX + { "soundex", 1, 0, SQLITE_UTF8, 0, soundexFunc}, +#endif +#ifndef SQLITE_OMIT_LOAD_EXTENSION + { "load_extension", 1, 0xff, SQLITE_UTF8, 0, loadExt }, + { "load_extension", 2, 0xff, SQLITE_UTF8, 0, loadExt }, +#endif +#ifdef SQLITE_TEST + { "randstr", 2, 0, SQLITE_UTF8, 0, randStr }, + { "test_destructor", 1, 0xff, SQLITE_UTF8, 0, test_destructor}, + { "test_destructor_count", 0, 0, SQLITE_UTF8, 0, test_destructor_count}, + { "test_auxdata", -1, 0, SQLITE_UTF8, 0, test_auxdata}, + { "test_error", 1, 0, SQLITE_UTF8, 0, test_error}, +#endif + }; + static const struct { + char *zName; + signed char nArg; + u8 argType; + u8 needCollSeq; + void (*xStep)(sqlite3_context*,int,sqlite3_value**); + void (*xFinalize)(sqlite3_context*); + } aAggs[] = { + { "min", 1, 0, 1, minmaxStep, minMaxFinalize }, + { "max", 1, 1, 1, minmaxStep, minMaxFinalize }, + { "sum", 1, 0, 0, sumStep, sumFinalize }, + { "total", 1, 0, 0, sumStep, totalFinalize }, + { "avg", 1, 0, 0, sumStep, avgFinalize }, + { "count", 0, 0, 0, countStep, countFinalize }, + { "count", 1, 0, 0, countStep, countFinalize }, + { "group_concat", 1, 0, 0, groupConcatStep, groupConcatFinalize }, + { "group_concat", 2, 0, 0, groupConcatStep, groupConcatFinalize }, + }; + int i; + + for(i=0; ineedCollSeq = 1; + } + } + } +#ifndef SQLITE_OMIT_ALTERTABLE + sqlite3AlterFunctions(db); +#endif +#ifndef SQLITE_OMIT_PARSER + sqlite3AttachFunctions(db); +#endif + for(i=0; ineedCollSeq = 1; + } + } + } + sqlite3RegisterDateTimeFunctions(db); + if( !db->mallocFailed ){ + int rc = sqlite3_overload_function(db, "MATCH", 2); + assert( rc==SQLITE_NOMEM || rc==SQLITE_OK ); + if( rc==SQLITE_NOMEM ){ + db->mallocFailed = 1; + } + } +#ifdef SQLITE_SSE + (void)sqlite3SseFunctions(db); +#endif +#ifdef 
SQLITE_CASE_SENSITIVE_LIKE + sqlite3RegisterLikeFunctions(db, 1); +#else + sqlite3RegisterLikeFunctions(db, 0); +#endif +} + +/* +** Set the LIKEOPT flag on the 2-argument function with the given name. +*/ +static void setLikeOptFlag(sqlite3 *db, const char *zName, int flagVal){ + FuncDef *pDef; + pDef = sqlite3FindFunction(db, zName, strlen(zName), 2, SQLITE_UTF8, 0); + if( pDef ){ + pDef->flags = flagVal; + } +} + +/* +** Register the built-in LIKE and GLOB functions. The caseSensitive +** parameter determines whether or not the LIKE operator is case +** sensitive. GLOB is always case sensitive. +*/ +void sqlite3RegisterLikeFunctions(sqlite3 *db, int caseSensitive){ + struct compareInfo *pInfo; + if( caseSensitive ){ + pInfo = (struct compareInfo*)&likeInfoAlt; + }else{ + pInfo = (struct compareInfo*)&likeInfoNorm; + } + sqlite3CreateFunc(db, "like", 2, SQLITE_UTF8, pInfo, likeFunc, 0, 0); + sqlite3CreateFunc(db, "like", 3, SQLITE_UTF8, pInfo, likeFunc, 0, 0); + sqlite3CreateFunc(db, "glob", 2, SQLITE_UTF8, + (struct compareInfo*)&globInfo, likeFunc, 0,0); + setLikeOptFlag(db, "glob", SQLITE_FUNC_LIKE | SQLITE_FUNC_CASE); + setLikeOptFlag(db, "like", + caseSensitive ? (SQLITE_FUNC_LIKE | SQLITE_FUNC_CASE) : SQLITE_FUNC_LIKE); +} + +/* +** pExpr points to an expression which implements a function. If +** it is appropriate to apply the LIKE optimization to that function +** then set aWc[0] through aWc[2] to the wildcard characters and +** return TRUE. If the function is not a LIKE-style function then +** return FALSE. +*/ +int sqlite3IsLikeFunction(sqlite3 *db, Expr *pExpr, int *pIsNocase, char *aWc){ + FuncDef *pDef; + if( pExpr->op!=TK_FUNCTION || !pExpr->pList ){ + return 0; + } + if( pExpr->pList->nExpr!=2 ){ + return 0; + } + pDef = sqlite3FindFunction(db, (char*)pExpr->token.z, pExpr->token.n, 2, + SQLITE_UTF8, 0); + if( pDef==0 || (pDef->flags & SQLITE_FUNC_LIKE)==0 ){ + return 0; + } + + /* The memcpy() statement assumes that the wildcard characters are + ** the first three statements in the compareInfo structure. The + ** asserts() that follow verify that assumption + */ + memcpy(aWc, pDef->pUserData, 3); + assert( (char*)&likeInfoAlt == (char*)&likeInfoAlt.matchAll ); + assert( &((char*)&likeInfoAlt)[1] == (char*)&likeInfoAlt.matchOne ); + assert( &((char*)&likeInfoAlt)[2] == (char*)&likeInfoAlt.matchSet ); + *pIsNocase = (pDef->flags & SQLITE_FUNC_CASE)==0; + return 1; +} Added: external/sqlite-source-3.5.7.x/hash.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/hash.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,423 @@ +/* +** 2001 September 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This is the implementation of generic hash-tables +** used in SQLite. +** +** $Id: hash.c,v 1.26 2008/02/18 22:24:58 drh Exp $ +*/ +#include "sqliteInt.h" +#include + +/* Turn bulk memory into a hash table object by initializing the +** fields of the Hash structure. +** +** "pNew" is a pointer to the hash table that is to be initialized. +** keyClass is one of the constants SQLITE_HASH_INT, SQLITE_HASH_POINTER, +** SQLITE_HASH_BINARY, or SQLITE_HASH_STRING. 
The value of keyClass +** determines what kind of key the hash table will use. "copyKey" is +** true if the hash table should make its own private copy of keys and +** false if it should just use the supplied pointer. CopyKey only makes +** sense for SQLITE_HASH_STRING and SQLITE_HASH_BINARY and is ignored +** for other key classes. +*/ +void sqlite3HashInit(Hash *pNew, int keyClass, int copyKey){ + assert( pNew!=0 ); + assert( keyClass>=SQLITE_HASH_STRING && keyClass<=SQLITE_HASH_BINARY ); + pNew->keyClass = keyClass; +#if 0 + if( keyClass==SQLITE_HASH_POINTER || keyClass==SQLITE_HASH_INT ) copyKey = 0; +#endif + pNew->copyKey = copyKey; + pNew->first = 0; + pNew->count = 0; + pNew->htsize = 0; + pNew->ht = 0; +} + +/* Remove all entries from a hash table. Reclaim all memory. +** Call this routine to delete a hash table or to reset a hash table +** to the empty state. +*/ +void sqlite3HashClear(Hash *pH){ + HashElem *elem; /* For looping over all elements of the table */ + + assert( pH!=0 ); + elem = pH->first; + pH->first = 0; + if( pH->ht ) sqlite3_free(pH->ht); + pH->ht = 0; + pH->htsize = 0; + while( elem ){ + HashElem *next_elem = elem->next; + if( pH->copyKey && elem->pKey ){ + sqlite3_free(elem->pKey); + } + sqlite3_free(elem); + elem = next_elem; + } + pH->count = 0; +} + +#if 0 /* NOT USED */ +/* +** Hash and comparison functions when the mode is SQLITE_HASH_INT +*/ +static int intHash(const void *pKey, int nKey){ + return nKey ^ (nKey<<8) ^ (nKey>>8); +} +static int intCompare(const void *pKey1, int n1, const void *pKey2, int n2){ + return n2 - n1; +} +#endif + +#if 0 /* NOT USED */ +/* +** Hash and comparison functions when the mode is SQLITE_HASH_POINTER +*/ +static int ptrHash(const void *pKey, int nKey){ + uptr x = Addr(pKey); + return x ^ (x<<8) ^ (x>>8); +} +static int ptrCompare(const void *pKey1, int n1, const void *pKey2, int n2){ + if( pKey1==pKey2 ) return 0; + if( pKey1 0 ){ + h = (h<<3) ^ h ^ sqlite3UpperToLower[(unsigned char)*z++]; + nKey--; + } + return h & 0x7fffffff; +} +static int strCompare(const void *pKey1, int n1, const void *pKey2, int n2){ + if( n1!=n2 ) return 1; + return sqlite3StrNICmp((const char*)pKey1,(const char*)pKey2,n1); +} + +/* +** Hash and comparison functions when the mode is SQLITE_HASH_BINARY +*/ +static int binHash(const void *pKey, int nKey){ + int h = 0; + const char *z = (const char *)pKey; + while( nKey-- > 0 ){ + h = (h<<3) ^ h ^ *(z++); + } + return h & 0x7fffffff; +} +static int binCompare(const void *pKey1, int n1, const void *pKey2, int n2){ + if( n1!=n2 ) return 1; + return memcmp(pKey1,pKey2,n1); +} + +/* +** Return a pointer to the appropriate hash function given the key class. +** +** The C syntax in this function definition may be unfamilar to some +** programmers, so we provide the following additional explanation: +** +** The name of the function is "hashFunction". The function takes a +** single parameter "keyClass". The return value of hashFunction() +** is a pointer to another function. Specifically, the return value +** of hashFunction() is a pointer to a function that takes two parameters +** with types "const void*" and "int" and returns an "int". 
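/*
** Illustrative sketch (not part of this checkin): the "function returning a
** pointer to a function" declaration explained above is equivalent to the
** typedef-based form below, which some readers find easier to parse.  The
** names here are hypothetical and exist only to show the shape of the
** declaration.
*/
typedef int (*demo_hash_fn)(const void *pKey, int nKey);

static int demoHash(const void *pKey, int nKey){
  (void)pKey;                      /* stand-in hash: ignores the key bytes */
  return nKey & 0x7fffffff;
}

static demo_hash_fn demoPickHash(void){
  return &demoHash;                /* same idea as hashFunction(keyClass) */
}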
+*/ +static int (*hashFunction(int keyClass))(const void*,int){ +#if 0 /* HASH_INT and HASH_POINTER are never used */ + switch( keyClass ){ + case SQLITE_HASH_INT: return &intHash; + case SQLITE_HASH_POINTER: return &ptrHash; + case SQLITE_HASH_STRING: return &strHash; + case SQLITE_HASH_BINARY: return &binHash;; + default: break; + } + return 0; +#else + if( keyClass==SQLITE_HASH_STRING ){ + return &strHash; + }else{ + assert( keyClass==SQLITE_HASH_BINARY ); + return &binHash; + } +#endif +} + +/* +** Return a pointer to the appropriate hash function given the key class. +** +** For help in interpreted the obscure C code in the function definition, +** see the header comment on the previous function. +*/ +static int (*compareFunction(int keyClass))(const void*,int,const void*,int){ +#if 0 /* HASH_INT and HASH_POINTER are never used */ + switch( keyClass ){ + case SQLITE_HASH_INT: return &intCompare; + case SQLITE_HASH_POINTER: return &ptrCompare; + case SQLITE_HASH_STRING: return &strCompare; + case SQLITE_HASH_BINARY: return &binCompare; + default: break; + } + return 0; +#else + if( keyClass==SQLITE_HASH_STRING ){ + return &strCompare; + }else{ + assert( keyClass==SQLITE_HASH_BINARY ); + return &binCompare; + } +#endif +} + +/* Link an element into the hash table +*/ +static void insertElement( + Hash *pH, /* The complete hash table */ + struct _ht *pEntry, /* The entry into which pNew is inserted */ + HashElem *pNew /* The element to be inserted */ +){ + HashElem *pHead; /* First element already in pEntry */ + pHead = pEntry->chain; + if( pHead ){ + pNew->next = pHead; + pNew->prev = pHead->prev; + if( pHead->prev ){ pHead->prev->next = pNew; } + else { pH->first = pNew; } + pHead->prev = pNew; + }else{ + pNew->next = pH->first; + if( pH->first ){ pH->first->prev = pNew; } + pNew->prev = 0; + pH->first = pNew; + } + pEntry->count++; + pEntry->chain = pNew; +} + + +/* Resize the hash table so that it cantains "new_size" buckets. +** "new_size" must be a power of 2. The hash table might fail +** to resize if sqlite3_malloc() fails. +*/ +static void rehash(Hash *pH, int new_size){ + struct _ht *new_ht; /* The new hash table */ + HashElem *elem, *next_elem; /* For looping over existing elements */ + int (*xHash)(const void*,int); /* The hash function */ + +#ifdef SQLITE_MALLOC_SOFT_LIMIT + if( new_size*sizeof(struct _ht)>SQLITE_MALLOC_SOFT_LIMIT ){ + new_size = SQLITE_MALLOC_SOFT_LIMIT/sizeof(struct _ht); + } + if( new_size==pH->htsize ) return; +#endif + + /* There is a call to sqlite3_malloc() inside rehash(). If there is + ** already an allocation at pH->ht, then if this malloc() fails it + ** is benign (since failing to resize a hash table is a performance + ** hit only, not a fatal error). + */ + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, pH->htsize>0); + new_ht = (struct _ht *)sqlite3MallocZero( new_size*sizeof(struct _ht) ); + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, 0); + + if( new_ht==0 ) return; + if( pH->ht ) sqlite3_free(pH->ht); + pH->ht = new_ht; + pH->htsize = new_size; + xHash = hashFunction(pH->keyClass); + for(elem=pH->first, pH->first=0; elem; elem = next_elem){ + int h = (*xHash)(elem->pKey, elem->nKey) & (new_size-1); + next_elem = elem->next; + insertElement(pH, &new_ht[h], elem); + } +} + +/* This function (for internal use only) locates an element in an +** hash table that matches the given key. The hash for this key has +** already been computed and is passed as the 4th parameter. 
+*/ +static HashElem *findElementGivenHash( + const Hash *pH, /* The pH to be searched */ + const void *pKey, /* The key we are searching for */ + int nKey, + int h /* The hash for this key. */ +){ + HashElem *elem; /* Used to loop thru the element list */ + int count; /* Number of elements left to test */ + int (*xCompare)(const void*,int,const void*,int); /* comparison function */ + + if( pH->ht ){ + struct _ht *pEntry = &pH->ht[h]; + elem = pEntry->chain; + count = pEntry->count; + xCompare = compareFunction(pH->keyClass); + while( count-- && elem ){ + if( (*xCompare)(elem->pKey,elem->nKey,pKey,nKey)==0 ){ + return elem; + } + elem = elem->next; + } + } + return 0; +} + +/* Remove a single entry from the hash table given a pointer to that +** element and a hash on the element's key. +*/ +static void removeElementGivenHash( + Hash *pH, /* The pH containing "elem" */ + HashElem* elem, /* The element to be removed from the pH */ + int h /* Hash value for the element */ +){ + struct _ht *pEntry; + if( elem->prev ){ + elem->prev->next = elem->next; + }else{ + pH->first = elem->next; + } + if( elem->next ){ + elem->next->prev = elem->prev; + } + pEntry = &pH->ht[h]; + if( pEntry->chain==elem ){ + pEntry->chain = elem->next; + } + pEntry->count--; + if( pEntry->count<=0 ){ + pEntry->chain = 0; + } + if( pH->copyKey ){ + sqlite3_free(elem->pKey); + } + sqlite3_free( elem ); + pH->count--; + if( pH->count<=0 ){ + assert( pH->first==0 ); + assert( pH->count==0 ); + sqlite3HashClear(pH); + } +} + +/* Attempt to locate an element of the hash table pH with a key +** that matches pKey,nKey. Return a pointer to the corresponding +** HashElem structure for this element if it is found, or NULL +** otherwise. +*/ +HashElem *sqlite3HashFindElem(const Hash *pH, const void *pKey, int nKey){ + int h; /* A hash on key */ + HashElem *elem; /* The element that matches key */ + int (*xHash)(const void*,int); /* The hash function */ + + if( pH==0 || pH->ht==0 ) return 0; + xHash = hashFunction(pH->keyClass); + assert( xHash!=0 ); + h = (*xHash)(pKey,nKey); + elem = findElementGivenHash(pH,pKey,nKey, h % pH->htsize); + return elem; +} + +/* Attempt to locate an element of the hash table pH with a key +** that matches pKey,nKey. Return the data for this element if it is +** found, or NULL if there is no match. +*/ +void *sqlite3HashFind(const Hash *pH, const void *pKey, int nKey){ + HashElem *elem; /* The element that matches key */ + elem = sqlite3HashFindElem(pH, pKey, nKey); + return elem ? elem->data : 0; +} + +/* Insert an element into the hash table pH. The key is pKey,nKey +** and the data is "data". +** +** If no element exists with a matching key, then a new +** element is created. A copy of the key is made if the copyKey +** flag is set. NULL is returned. +** +** If another element already exists with the same key, then the +** new data replaces the old data and the old data is returned. +** The key is not copied in this instance. If a malloc fails, then +** the new data is returned and the hash table is unchanged. +** +** If the "data" parameter to this function is NULL, then the +** element corresponding to "key" is removed from the hash table. 
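/*
** Illustrative sketch (not part of this checkin): typical use of the hash
** routines defined in this file from elsewhere inside SQLite.  Assumes
** sqliteInt.h (and therefore hash.h) is in scope; the keys and values are
** hypothetical.
*/
#include "sqliteInt.h"

static void demoHashUsage(void){
  Hash h;
  void *pOld;

  sqlite3HashInit(&h, SQLITE_HASH_STRING, 1);      /* copyKey=1: private key copies */
  sqlite3HashInsert(&h, "alpha", 6, (void*)"one");
  pOld = sqlite3HashInsert(&h, "alpha", 6, (void*)"two");  /* replaces; returns "one" */
  (void)pOld;
  (void)sqlite3HashFind(&h, "ALPHA", 6);           /* string keys compare case-insensitively */
  sqlite3HashInsert(&h, "alpha", 6, 0);            /* NULL data removes the entry */
  sqlite3HashClear(&h);                            /* frees all remaining entries */
}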
+*/ +void *sqlite3HashInsert(Hash *pH, const void *pKey, int nKey, void *data){ + int hraw; /* Raw hash value of the key */ + int h; /* the hash of the key modulo hash table size */ + HashElem *elem; /* Used to loop thru the element list */ + HashElem *new_elem; /* New element added to the pH */ + int (*xHash)(const void*,int); /* The hash function */ + + assert( pH!=0 ); + xHash = hashFunction(pH->keyClass); + assert( xHash!=0 ); + hraw = (*xHash)(pKey, nKey); + if( pH->htsize ){ + h = hraw % pH->htsize; + elem = findElementGivenHash(pH,pKey,nKey,h); + if( elem ){ + void *old_data = elem->data; + if( data==0 ){ + removeElementGivenHash(pH,elem,h); + }else{ + elem->data = data; + if( !pH->copyKey ){ + elem->pKey = (void *)pKey; + } + assert(nKey==elem->nKey); + } + return old_data; + } + } + if( data==0 ) return 0; + new_elem = (HashElem*)sqlite3_malloc( sizeof(HashElem) ); + if( new_elem==0 ) return data; + if( pH->copyKey && pKey!=0 ){ + new_elem->pKey = sqlite3_malloc( nKey ); + if( new_elem->pKey==0 ){ + sqlite3_free(new_elem); + return data; + } + memcpy((void*)new_elem->pKey, pKey, nKey); + }else{ + new_elem->pKey = (void*)pKey; + } + new_elem->nKey = nKey; + pH->count++; + if( pH->htsize==0 ){ + rehash(pH, 128/sizeof(pH->ht[0])); + if( pH->htsize==0 ){ + pH->count = 0; + if( pH->copyKey ){ + sqlite3_free(new_elem->pKey); + } + sqlite3_free(new_elem); + return data; + } + } + if( pH->count > pH->htsize ){ + rehash(pH,pH->htsize*2); + } + assert( pH->htsize>0 ); + h = hraw % pH->htsize; + insertElement(pH, &pH->ht[h], new_elem); + new_elem->data = data; + return 0; +} Added: external/sqlite-source-3.5.7.x/hash.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/hash.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,110 @@ +/* +** 2001 September 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This is the header file for the generic hash-table implemenation +** used in SQLite. +** +** $Id: hash.h,v 1.11 2007/09/04 14:31:47 danielk1977 Exp $ +*/ +#ifndef _SQLITE_HASH_H_ +#define _SQLITE_HASH_H_ + +/* Forward declarations of structures. */ +typedef struct Hash Hash; +typedef struct HashElem HashElem; + +/* A complete hash table is an instance of the following structure. +** The internals of this structure are intended to be opaque -- client +** code should not attempt to access or modify the fields of this structure +** directly. Change this structure only by using the routines below. +** However, many of the "procedures" and "functions" for modifying and +** accessing this structure are really macros, so we can't really make +** this structure opaque. +*/ +struct Hash { + char keyClass; /* SQLITE_HASH_INT, _POINTER, _STRING, _BINARY */ + char copyKey; /* True if copy of key made on insert */ + int count; /* Number of entries in this table */ + int htsize; /* Number of buckets in the hash table */ + HashElem *first; /* The first element of the array */ + struct _ht { /* the hash table */ + int count; /* Number of entries with this hash */ + HashElem *chain; /* Pointer to first entry with this hash */ + } *ht; +}; + +/* Each element in the hash table is an instance of the following +** structure. 
All elements are stored on a single doubly-linked list. +** +** Again, this structure is intended to be opaque, but it can't really +** be opaque because it is used by macros. +*/ +struct HashElem { + HashElem *next, *prev; /* Next and previous elements in the table */ + void *data; /* Data associated with this element */ + void *pKey; int nKey; /* Key associated with this element */ +}; + +/* +** There are 4 different modes of operation for a hash table: +** +** SQLITE_HASH_INT nKey is used as the key and pKey is ignored. +** +** SQLITE_HASH_POINTER pKey is used as the key and nKey is ignored. +** +** SQLITE_HASH_STRING pKey points to a string that is nKey bytes long +** (including the null-terminator, if any). Case +** is ignored in comparisons. +** +** SQLITE_HASH_BINARY pKey points to binary data nKey bytes long. +** memcmp() is used to compare keys. +** +** A copy of the key is made for SQLITE_HASH_STRING and SQLITE_HASH_BINARY +** if the copyKey parameter to HashInit is 1. +*/ +/* #define SQLITE_HASH_INT 1 // NOT USED */ +/* #define SQLITE_HASH_POINTER 2 // NOT USED */ +#define SQLITE_HASH_STRING 3 +#define SQLITE_HASH_BINARY 4 + +/* +** Access routines. To delete, insert a NULL pointer. +*/ +void sqlite3HashInit(Hash*, int keytype, int copyKey); +void *sqlite3HashInsert(Hash*, const void *pKey, int nKey, void *pData); +void *sqlite3HashFind(const Hash*, const void *pKey, int nKey); +HashElem *sqlite3HashFindElem(const Hash*, const void *pKey, int nKey); +void sqlite3HashClear(Hash*); + +/* +** Macros for looping over all elements of a hash table. The idiom is +** like this: +** +** Hash h; +** HashElem *p; +** ... +** for(p=sqliteHashFirst(&h); p; p=sqliteHashNext(p)){ +** SomeStructure *pData = sqliteHashData(p); +** // do something with pData +** } +*/ +#define sqliteHashFirst(H) ((H)->first) +#define sqliteHashNext(E) ((E)->next) +#define sqliteHashData(E) ((E)->data) +#define sqliteHashKey(E) ((E)->pKey) +#define sqliteHashKeysize(E) ((E)->nKey) + +/* +** Number of entries in a hash table +*/ +#define sqliteHashCount(H) ((H)->count) + +#endif /* _SQLITE_HASH_H_ */ Added: external/sqlite-source-3.5.7.x/insert.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/insert.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,1665 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains C code routines that are called by the parser +** to handle INSERT statements in SQLite. +** +** $Id: insert.c,v 1.231 2008/03/06 09:58:50 mlcreech Exp $ +*/ +#include "sqliteInt.h" + +/* +** Set P4 of the most recently inserted opcode to a column affinity +** string for index pIdx. A column affinity string has one character +** for each column in the table, according to the affinity of the column: +** +** Character Column affinity +** ------------------------------ +** 'a' TEXT +** 'b' NONE +** 'c' NUMERIC +** 'd' INTEGER +** 'e' REAL +** +** An extra 'b' is appended to the end of the string to cover the +** rowid that appears as the last column in every index. 
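/*
** Worked example (illustrative, not part of this checkin), assuming the
** declared column types map to the obvious affinities listed above: for a
** hypothetical table
**
**     CREATE TABLE t1(a INTEGER, b TEXT, c REAL);
**
** an index on (b, c) would get the index affinity string "aeb" - 'a' for
** the TEXT column, 'e' for the REAL column, and the trailing 'b' (NONE)
** covering the rowid appended as the last column of every index key.  The
** corresponding table affinity string, one character per table column,
** would be "dae".
*/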
+*/ +void sqlite3IndexAffinityStr(Vdbe *v, Index *pIdx){ + if( !pIdx->zColAff ){ + /* The first time a column affinity string for a particular index is + ** required, it is allocated and populated here. It is then stored as + ** a member of the Index structure for subsequent use. + ** + ** The column affinity string will eventually be deleted by + ** sqliteDeleteIndex() when the Index structure itself is cleaned + ** up. + */ + int n; + Table *pTab = pIdx->pTable; + sqlite3 *db = sqlite3VdbeDb(v); + pIdx->zColAff = (char *)sqlite3DbMallocZero(db, pIdx->nColumn+2); + if( !pIdx->zColAff ){ + return; + } + for(n=0; nnColumn; n++){ + pIdx->zColAff[n] = pTab->aCol[pIdx->aiColumn[n]].affinity; + } + pIdx->zColAff[n++] = SQLITE_AFF_NONE; + pIdx->zColAff[n] = 0; + } + + sqlite3VdbeChangeP4(v, -1, pIdx->zColAff, 0); +} + +/* +** Set P4 of the most recently inserted opcode to a column affinity +** string for table pTab. A column affinity string has one character +** for each column indexed by the index, according to the affinity of the +** column: +** +** Character Column affinity +** ------------------------------ +** 'a' TEXT +** 'b' NONE +** 'c' NUMERIC +** 'd' INTEGER +** 'e' REAL +*/ +void sqlite3TableAffinityStr(Vdbe *v, Table *pTab){ + /* The first time a column affinity string for a particular table + ** is required, it is allocated and populated here. It is then + ** stored as a member of the Table structure for subsequent use. + ** + ** The column affinity string will eventually be deleted by + ** sqlite3DeleteTable() when the Table structure itself is cleaned up. + */ + if( !pTab->zColAff ){ + char *zColAff; + int i; + sqlite3 *db = sqlite3VdbeDb(v); + + zColAff = (char *)sqlite3DbMallocZero(db, pTab->nCol+1); + if( !zColAff ){ + return; + } + + for(i=0; inCol; i++){ + zColAff[i] = pTab->aCol[i].affinity; + } + zColAff[pTab->nCol] = '\0'; + + pTab->zColAff = zColAff; + } + + sqlite3VdbeChangeP4(v, -1, pTab->zColAff, 0); +} + +/* +** Return non-zero if the table pTab in database iDb or any of its indices +** have been opened at any point in the VDBE program beginning at location +** iStartAddr throught the end of the program. This is used to see if +** a statement of the form "INSERT INTO SELECT ..." can +** run without using temporary table for the results of the SELECT. +*/ +static int readsTable(Vdbe *v, int iStartAddr, int iDb, Table *pTab){ + int i; + int iEnd = sqlite3VdbeCurrentAddr(v); + for(i=iStartAddr; iopcode==OP_OpenRead && pOp->p3==iDb ){ + Index *pIndex; + int tnum = pOp->p2; + if( tnum==pTab->tnum ){ + return 1; + } + for(pIndex=pTab->pIndex; pIndex; pIndex=pIndex->pNext){ + if( tnum==pIndex->tnum ){ + return 1; + } + } + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( pOp->opcode==OP_VOpen && pOp->p4.pVtab==pTab->pVtab ){ + assert( pOp->p4.pVtab!=0 ); + assert( pOp->p4type==P4_VTAB ); + return 1; + } +#endif + } + return 0; +} + +#ifndef SQLITE_OMIT_AUTOINCREMENT +/* +** Write out code to initialize the autoincrement logic. This code +** looks up the current autoincrement value in the sqlite_sequence +** table and stores that value in a register. Code generated by +** autoIncStep() will keep that register holding the largest +** rowid value. Code generated by autoIncEnd() will write the new +** largest value of the counter back into the sqlite_sequence table. +** +** This routine returns the index of the mem[] cell that contains +** the maximum rowid counter. +** +** Three consecutive registers are allocated by this routine. 
The +** first two hold the name of the target table and the maximum rowid +** inserted into the target table, respectively. +** The third holds the rowid in sqlite_sequence where we will +** write back the revised maximum rowid. This routine returns the +** index of the second of these three registers. +*/ +static int autoIncBegin( + Parse *pParse, /* Parsing context */ + int iDb, /* Index of the database holding pTab */ + Table *pTab /* The table we are writing to */ +){ + int memId = 0; /* Register holding maximum rowid */ + if( pTab->autoInc ){ + Vdbe *v = pParse->pVdbe; + Db *pDb = &pParse->db->aDb[iDb]; + int iCur = pParse->nTab; + int addr; /* Address of the top of the loop */ + assert( v ); + pParse->nMem++; /* Holds name of table */ + memId = ++pParse->nMem; + pParse->nMem++; + sqlite3OpenTable(pParse, iCur, iDb, pDb->pSchema->pSeqTab, OP_OpenRead); + addr = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp4(v, OP_String8, 0, memId-1, 0, pTab->zName, 0); + sqlite3VdbeAddOp2(v, OP_Rewind, iCur, addr+8); + sqlite3VdbeAddOp3(v, OP_Column, iCur, 0, memId); + sqlite3VdbeAddOp3(v, OP_Ne, memId-1, addr+7, memId); + sqlite3VdbeChangeP5(v, SQLITE_JUMPIFNULL); + sqlite3VdbeAddOp2(v, OP_Rowid, iCur, memId+1); + sqlite3VdbeAddOp3(v, OP_Column, iCur, 1, memId); + sqlite3VdbeAddOp2(v, OP_Goto, 0, addr+8); + sqlite3VdbeAddOp2(v, OP_Next, iCur, addr+2); + sqlite3VdbeAddOp2(v, OP_Close, iCur, 0); + } + return memId; +} + +/* +** Update the maximum rowid for an autoincrement calculation. +** +** This routine should be called when the top of the stack holds a +** new rowid that is about to be inserted. If that new rowid is +** larger than the maximum rowid in the memId memory cell, then the +** memory cell is updated. The stack is unchanged. +*/ +static void autoIncStep(Parse *pParse, int memId, int regRowid){ + if( memId>0 ){ + sqlite3VdbeAddOp2(pParse->pVdbe, OP_MemMax, memId, regRowid); + } +} + +/* +** After doing one or more inserts, the maximum rowid is stored +** in reg[memId]. Generate code to write this value back into the +** the sqlite_sequence table. 
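/*
** Illustrative sketch (not part of this checkin): the sqlite_sequence row
** maintained by the autoIncBegin()/autoIncStep()/autoIncEnd() code above is
** visible through the public API.  Table and column names below are
** hypothetical; error handling is omitted.
*/
#include <stdio.h>
#include "sqlite3.h"

static int demoPrintSeq(void *pArg, int nCol, char **azVal, char **azCol){
  (void)pArg; (void)nCol; (void)azCol;
  printf("table=%s  largest rowid so far=%s\n", azVal[0], azVal[1]);
  return 0;
}

static void demoAutoinc(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db, "CREATE TABLE t1(id INTEGER PRIMARY KEY AUTOINCREMENT, v)", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t1(v) VALUES('a')", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t1(v) VALUES('b')", 0, 0, 0);
  /* the generated autoIncEnd() code has written the largest rowid used (2) back here */
  sqlite3_exec(db, "SELECT name, seq FROM sqlite_sequence", demoPrintSeq, 0, 0);
  sqlite3_close(db);
}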
+*/ +static void autoIncEnd( + Parse *pParse, /* The parsing context */ + int iDb, /* Index of the database holding pTab */ + Table *pTab, /* Table we are inserting into */ + int memId /* Memory cell holding the maximum rowid */ +){ + if( pTab->autoInc ){ + int iCur = pParse->nTab; + Vdbe *v = pParse->pVdbe; + Db *pDb = &pParse->db->aDb[iDb]; + int j1; + int iRec = ++pParse->nMem; /* Memory cell used for record */ + + assert( v ); + sqlite3OpenTable(pParse, iCur, iDb, pDb->pSchema->pSeqTab, OP_OpenWrite); + j1 = sqlite3VdbeAddOp1(v, OP_NotNull, memId+1); + sqlite3VdbeAddOp2(v, OP_NewRowid, iCur, memId+1); + sqlite3VdbeJumpHere(v, j1); + sqlite3VdbeAddOp3(v, OP_MakeRecord, memId-1, 2, iRec); + sqlite3VdbeAddOp3(v, OP_Insert, iCur, iRec, memId+1); + sqlite3VdbeChangeP5(v, OPFLAG_APPEND); + sqlite3VdbeAddOp1(v, OP_Close, iCur); + } +} +#else +/* +** If SQLITE_OMIT_AUTOINCREMENT is defined, then the three routines +** above are all no-ops +*/ +# define autoIncBegin(A,B,C) (0) +# define autoIncStep(A,B,C) +# define autoIncEnd(A,B,C,D) +#endif /* SQLITE_OMIT_AUTOINCREMENT */ + + +/* Forward declaration */ +static int xferOptimization( + Parse *pParse, /* Parser context */ + Table *pDest, /* The table we are inserting into */ + Select *pSelect, /* A SELECT statement to use as the data source */ + int onError, /* How to handle constraint errors */ + int iDbDest /* The database of pDest */ +); + +/* +** This routine is call to handle SQL of the following forms: +** +** insert into TABLE (IDLIST) values(EXPRLIST) +** insert into TABLE (IDLIST) select +** +** The IDLIST following the table name is always optional. If omitted, +** then a list of all columns for the table is substituted. The IDLIST +** appears in the pColumn parameter. pColumn is NULL if IDLIST is omitted. +** +** The pList parameter holds EXPRLIST in the first form of the INSERT +** statement above, and pSelect is NULL. For the second form, pList is +** NULL and pSelect is a pointer to the select statement used to generate +** data for the insert. +** +** The code generated follows one of four templates. For a simple +** select with data coming from a VALUES clause, the code executes +** once straight down through. The template looks like this: +** +** open write cursor to
<table> and its indices +** puts VALUES clause expressions onto the stack +** write the resulting record into <table>
+** cleanup +** +** The three remaining templates assume the statement is of the form +** +** INSERT INTO <table>
SELECT ... +** +** If the SELECT clause is of the restricted form "SELECT * FROM <table2>" - +** in other words if the SELECT pulls all columns from a single table +** and there is no WHERE or LIMIT or GROUP BY or ORDER BY clauses, and +** if <table2> and <table1> are distinct tables but have identical +** schemas, including all the same indices, then a special optimization +** is invoked that copies raw records from <table2> over to <table1>. +** See the xferOptimization() function for the implementation of this +** template. This is the second template. +** +** open a write cursor to <table>
+** open read cursor on <table2> +** transfer all records in <table2> over to <table>
+** close cursors +** foreach index on <table>
+** open a write cursor on the <table>
index +** open a read cursor on the corresponding <table2> index +** transfer all records from the read to the write cursors +** close cursors +** end foreach +** +** The third template is for when the second template does not apply +** and the SELECT clause does not read from <table>
at any time. +** The generated code follows this template: +** +** goto B +** A: setup for the SELECT +** loop over the rows in the SELECT +** gosub C +** end loop +** cleanup after the SELECT +** goto D +** B: open write cursor to <table>
and its indices +** goto A +** C: insert the select result into <table>
+** return +** D: cleanup +** +** The fourth template is used if the insert statement takes its +** values from a SELECT but the data is being inserted into a table +** that is also read as part of the SELECT. In the third form, +** we have to use an intermediate table to store the results of +** the select. The template is like this: +** +** goto B +** A: setup for the SELECT +** loop over the tables in the SELECT +** gosub C +** end loop +** cleanup after the SELECT +** goto D +** C: insert the select result into the intermediate table +** return +** B: open a cursor to an intermediate table +** goto A +** D: open write cursor to <table>
and its indices +** loop over the intermediate table +** transfer values from intermediate table into <table>
                +** end the loop +** cleanup +*/ +void sqlite3Insert( + Parse *pParse, /* Parser context */ + SrcList *pTabList, /* Name of table into which we are inserting */ + ExprList *pList, /* List of values to be inserted */ + Select *pSelect, /* A SELECT statement to use as the data source */ + IdList *pColumn, /* Column names corresponding to IDLIST. */ + int onError /* How to handle constraint errors */ +){ + sqlite3 *db; /* The main database structure */ + Table *pTab; /* The table to insert into. aka TABLE */ + char *zTab; /* Name of the table into which we are inserting */ + const char *zDb; /* Name of the database holding this table */ + int i, j, idx; /* Loop counters */ + Vdbe *v; /* Generate code into this virtual machine */ + Index *pIdx; /* For looping over indices of the table */ + int nColumn; /* Number of columns in the data */ + int nHidden = 0; /* Number of hidden columns if TABLE is virtual */ + int baseCur = 0; /* VDBE Cursor number for pTab */ + int keyColumn = -1; /* Column that is the INTEGER PRIMARY KEY */ + int endOfLoop; /* Label for the end of the insertion loop */ + int useTempTable = 0; /* Store SELECT results in intermediate table */ + int srcTab = 0; /* Data comes from this temporary cursor if >=0 */ + int iCont=0,iBreak=0; /* Beginning and end of the loop over srcTab */ + int iSelectLoop = 0; /* Address of code that implements the SELECT */ + int iCleanup = 0; /* Address of the cleanup code */ + int iInsertBlock = 0; /* Address of the subroutine used to insert data */ + int newIdx = -1; /* Cursor for the NEW pseudo-table */ + int iDb; /* Index of database holding TABLE */ + Db *pDb; /* The database containing table being inserted into */ + int appendFlag = 0; /* True if the insert is likely to be an append */ + + /* Register allocations */ + int regFromSelect; /* Base register for data coming from SELECT */ + int regAutoinc = 0; /* Register holding the AUTOINCREMENT counter */ + int regRowCount = 0; /* Memory cell used for the row counter */ + int regIns; /* Block of regs holding rowid+data being inserted */ + int regRowid; /* registers holding insert rowid */ + int regData; /* register holding first column to insert */ + int regRecord; /* Holds the assemblied row record */ + int *aRegIdx = 0; /* One register allocated to each index */ + + +#ifndef SQLITE_OMIT_TRIGGER + int isView; /* True if attempting to insert into a view */ + int triggers_exist = 0; /* True if there are FOR EACH ROW triggers */ +#endif + + db = pParse->db; + if( pParse->nErr || db->mallocFailed ){ + goto insert_cleanup; + } + + /* Locate the table into which we will be inserting new information. 
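/*
** Illustrative sketch (not part of this checkin): the statement shapes
** handled by sqlite3Insert(), driven through the public API.  Table names
** are hypothetical and error handling is omitted.
*/
#include "sqlite3.h"

static void demoInsertForms(sqlite3 *db){
  /* first template: data comes from a VALUES clause */
  sqlite3_exec(db, "INSERT INTO t1(a,b) VALUES(1,'one')", 0, 0, 0);

  /* remaining templates: data comes from a SELECT; when the SELECT is
  ** exactly "SELECT * FROM t2" and t1/t2 have identical schemas, the
  ** second (transfer-optimization) template applies */
  sqlite3_exec(db, "INSERT INTO t1 SELECT * FROM t2", 0, 0, 0);
}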
+ */ + assert( pTabList->nSrc==1 ); + zTab = pTabList->a[0].zName; + if( zTab==0 ) goto insert_cleanup; + pTab = sqlite3SrcListLookup(pParse, pTabList); + if( pTab==0 ){ + goto insert_cleanup; + } + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + assert( iDbnDb ); + pDb = &db->aDb[iDb]; + zDb = pDb->zName; + if( sqlite3AuthCheck(pParse, SQLITE_INSERT, pTab->zName, 0, zDb) ){ + goto insert_cleanup; + } + + /* Figure out if we have any triggers and if the table being + ** inserted into is a view + */ +#ifndef SQLITE_OMIT_TRIGGER + triggers_exist = sqlite3TriggersExist(pParse, pTab, TK_INSERT, 0); + isView = pTab->pSelect!=0; +#else +# define triggers_exist 0 +# define isView 0 +#endif +#ifdef SQLITE_OMIT_VIEW +# undef isView +# define isView 0 +#endif + + /* Ensure that: + * (a) the table is not read-only, + * (b) that if it is a view then ON INSERT triggers exist + */ + if( sqlite3IsReadOnly(pParse, pTab, triggers_exist) ){ + goto insert_cleanup; + } + assert( pTab!=0 ); + + /* If pTab is really a view, make sure it has been initialized. + ** ViewGetColumnNames() is a no-op if pTab is not a view (or virtual + ** module table). + */ + if( sqlite3ViewGetColumnNames(pParse, pTab) ){ + goto insert_cleanup; + } + + /* Allocate a VDBE + */ + v = sqlite3GetVdbe(pParse); + if( v==0 ) goto insert_cleanup; + if( pParse->nested==0 ) sqlite3VdbeCountChanges(v); + sqlite3BeginWriteOperation(pParse, pSelect || triggers_exist, iDb); + + /* if there are row triggers, allocate a temp table for new.* references. */ + if( triggers_exist ){ + newIdx = pParse->nTab++; + } + +#ifndef SQLITE_OMIT_XFER_OPT + /* If the statement is of the form + ** + ** INSERT INTO SELECT * FROM ; + ** + ** Then special optimizations can be applied that make the transfer + ** very fast and which reduce fragmentation of indices. + */ + if( pColumn==0 && xferOptimization(pParse, pTab, pSelect, onError, iDb) ){ + assert( !triggers_exist ); + assert( pList==0 ); + goto insert_cleanup; + } +#endif /* SQLITE_OMIT_XFER_OPT */ + + /* If this is an AUTOINCREMENT table, look up the sequence number in the + ** sqlite_sequence table and store it in memory cell regAutoinc. + */ + regAutoinc = autoIncBegin(pParse, iDb, pTab); + + /* Figure out how many columns of data are supplied. If the data + ** is coming from a SELECT statement, then this step also generates + ** all the code to implement the SELECT statement and invoke a subroutine + ** to process each row of the result. (Template 2.) If the SELECT + ** statement uses the the table that is being inserted into, then the + ** subroutine is also coded here. That subroutine stores the SELECT + ** results in a temporary table. (Template 3.) + */ + if( pSelect ){ + /* Data is coming from a SELECT. Generate code to implement that SELECT + */ + SelectDest dest; + int rc, iInitCode; + + iInitCode = sqlite3VdbeAddOp2(v, OP_Goto, 0, 0); + iSelectLoop = sqlite3VdbeCurrentAddr(v); + iInsertBlock = sqlite3VdbeMakeLabel(v); + sqlite3SelectDestInit(&dest, SRT_Subroutine, iInsertBlock); + + /* Resolve the expressions in the SELECT statement and execute it. */ + rc = sqlite3Select(pParse, pSelect, &dest, 0, 0, 0, 0); + if( rc || pParse->nErr || db->mallocFailed ){ + goto insert_cleanup; + } + + regFromSelect = dest.iMem; + iCleanup = sqlite3VdbeMakeLabel(v); + sqlite3VdbeAddOp2(v, OP_Goto, 0, iCleanup); + assert( pSelect->pEList ); + nColumn = pSelect->pEList->nExpr; + + /* Set useTempTable to TRUE if the result of the SELECT statement + ** should be written into a temporary table. 
Set to FALSE if each + ** row of the SELECT can be written directly into the result table. + ** + ** A temp table must be used if the table being updated is also one + ** of the tables being read by the SELECT statement. Also use a + ** temp table in the case of row triggers. + */ + if( triggers_exist || readsTable(v, iSelectLoop, iDb, pTab) ){ + useTempTable = 1; + } + + if( useTempTable ){ + /* Generate the subroutine that SELECT calls to process each row of + ** the result. Store the result in a temporary table + */ + int regRec, regRowid; + + srcTab = pParse->nTab++; + regRec = sqlite3GetTempReg(pParse); + regRowid = sqlite3GetTempReg(pParse); + sqlite3VdbeResolveLabel(v, iInsertBlock); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regFromSelect, nColumn, regRec); + sqlite3VdbeAddOp2(v, OP_NewRowid, srcTab, regRowid); + sqlite3VdbeAddOp3(v, OP_Insert, srcTab, regRec, regRowid); + sqlite3VdbeAddOp2(v, OP_Return, 0, 0); + sqlite3ReleaseTempReg(pParse, regRec); + sqlite3ReleaseTempReg(pParse, regRowid); + + /* The following code runs first because the GOTO at the very top + ** of the program jumps to it. Create the temporary table, then jump + ** back up and execute the SELECT code above. + */ + sqlite3VdbeJumpHere(v, iInitCode); + sqlite3VdbeAddOp2(v, OP_OpenEphemeral, srcTab, 0); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, srcTab, nColumn); + sqlite3VdbeAddOp2(v, OP_Goto, 0, iSelectLoop); + sqlite3VdbeResolveLabel(v, iCleanup); + }else{ + sqlite3VdbeJumpHere(v, iInitCode); + } + }else{ + /* This is the case if the data for the INSERT is coming from a VALUES + ** clause + */ + NameContext sNC; + memset(&sNC, 0, sizeof(sNC)); + sNC.pParse = pParse; + srcTab = -1; + assert( useTempTable==0 ); + nColumn = pList ? pList->nExpr : 0; + for(i=0; ia[i].pExpr) ){ + goto insert_cleanup; + } + } + } + + /* Make sure the number of columns in the source data matches the number + ** of columns to be inserted into the table. + */ + if( IsVirtual(pTab) ){ + for(i=0; inCol; i++){ + nHidden += (IsHiddenColumn(&pTab->aCol[i]) ? 1 : 0); + } + } + if( pColumn==0 && nColumn && nColumn!=(pTab->nCol-nHidden) ){ + sqlite3ErrorMsg(pParse, + "table %S has %d columns but %d values were supplied", + pTabList, 0, pTab->nCol, nColumn); + goto insert_cleanup; + } + if( pColumn!=0 && nColumn!=pColumn->nId ){ + sqlite3ErrorMsg(pParse, "%d values for %d columns", nColumn, pColumn->nId); + goto insert_cleanup; + } + + /* If the INSERT statement included an IDLIST term, then make sure + ** all elements of the IDLIST really are columns of the table and + ** remember the column indices. + ** + ** If the table has an INTEGER PRIMARY KEY column and that column + ** is named in the IDLIST, then record in the keyColumn variable + ** the index into IDLIST of the primary key column. keyColumn is + ** the index of the primary key as it appears in IDLIST, not as + ** is appears in the original table. (The index of the primary + ** key in the original table is pTab->iPKey.) 
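/*
** Illustrative example (not part of this checkin): a statement that forces
** useTempTable, because the SELECT reads the very table being written and
** its complete result must therefore be buffered before any row is
** inserted:
**
**     INSERT INTO t1 SELECT a+1, b FROM t1;
**
** By contrast, "INSERT INTO t1(a,b) SELECT a+1, b FROM t2" can feed each
** SELECT row straight into t1, provided t1 has no row triggers.
*/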
+ */ + if( pColumn ){ + for(i=0; inId; i++){ + pColumn->a[i].idx = -1; + } + for(i=0; inId; i++){ + for(j=0; jnCol; j++){ + if( sqlite3StrICmp(pColumn->a[i].zName, pTab->aCol[j].zName)==0 ){ + pColumn->a[i].idx = j; + if( j==pTab->iPKey ){ + keyColumn = i; + } + break; + } + } + if( j>=pTab->nCol ){ + if( sqlite3IsRowid(pColumn->a[i].zName) ){ + keyColumn = i; + }else{ + sqlite3ErrorMsg(pParse, "table %S has no column named %s", + pTabList, 0, pColumn->a[i].zName); + pParse->nErr++; + goto insert_cleanup; + } + } + } + } + + /* If there is no IDLIST term but the table has an integer primary + ** key, the set the keyColumn variable to the primary key column index + ** in the original table definition. + */ + if( pColumn==0 && nColumn>0 ){ + keyColumn = pTab->iPKey; + } + + /* Open the temp table for FOR EACH ROW triggers + */ + if( triggers_exist ){ + sqlite3VdbeAddOp2(v, OP_OpenPseudo, newIdx, 0); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, newIdx, pTab->nCol); + } + + /* Initialize the count of rows to be inserted + */ + if( db->flags & SQLITE_CountRows ){ + regRowCount = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Integer, 0, regRowCount); + } + + /* If this is not a view, open the table and and all indices */ + if( !isView ){ + int nIdx; + int i; + + baseCur = pParse->nTab; + nIdx = sqlite3OpenTableAndIndices(pParse, pTab, baseCur, OP_OpenWrite); + aRegIdx = sqlite3DbMallocZero(db, sizeof(int)*(nIdx+1)); + if( aRegIdx==0 ){ + goto insert_cleanup; + } + for(i=0; inMem; + } + } + + /* If the data source is a temporary table, then we have to create + ** a loop because there might be multiple rows of data. If the data + ** source is a subroutine call from the SELECT statement, then we need + ** to launch the SELECT statement processing. + */ + if( useTempTable ){ + iBreak = sqlite3VdbeMakeLabel(v); + sqlite3VdbeAddOp2(v, OP_Rewind, srcTab, iBreak); + iCont = sqlite3VdbeCurrentAddr(v); + }else if( pSelect ){ + sqlite3VdbeAddOp2(v, OP_Goto, 0, iSelectLoop); + sqlite3VdbeResolveLabel(v, iInsertBlock); + } + + /* Allocate registers for holding the rowid of the new row, + ** the content of the new row, and the assemblied row record. + */ + regRecord = ++pParse->nMem; + regRowid = regIns = pParse->nMem+1; + pParse->nMem += pTab->nCol + 1; + if( IsVirtual(pTab) ){ + regRowid++; + pParse->nMem++; + } + regData = regRowid+1; + + /* Run the BEFORE and INSTEAD OF triggers, if there are any + */ + endOfLoop = sqlite3VdbeMakeLabel(v); + if( triggers_exist & TRIGGER_BEFORE ){ + int regRowid; + int regCols; + int regRec; + + /* build the NEW.* reference row. Note that if there is an INTEGER + ** PRIMARY KEY into which a NULL is being inserted, that NULL will be + ** translated into a unique ID for the row. But on a BEFORE trigger, + ** we do not know what the unique ID will be (because the insert has + ** not happened yet) so we substitute a rowid of -1 + */ + regRowid = sqlite3GetTempReg(pParse); + if( keyColumn<0 ){ + sqlite3VdbeAddOp2(v, OP_Integer, -1, regRowid); + }else if( useTempTable ){ + sqlite3VdbeAddOp3(v, OP_Column, srcTab, keyColumn, regRowid); + }else{ + int j1; + assert( pSelect==0 ); /* Otherwise useTempTable is true */ + sqlite3ExprCode(pParse, pList->a[keyColumn].pExpr, regRowid); + j1 = sqlite3VdbeAddOp1(v, OP_NotNull, regRowid); + sqlite3VdbeAddOp2(v, OP_Integer, -1, regRowid); + sqlite3VdbeJumpHere(v, j1); + sqlite3VdbeAddOp1(v, OP_MustBeInt, regRowid); + } + + /* Cannot have triggers on a virtual table. If it were possible, + ** this block would have to account for hidden column. 
+ */ + assert(!IsVirtual(pTab)); + + /* Create the new column data + */ + regCols = sqlite3GetTempRange(pParse, pTab->nCol); + for(i=0; inCol; i++){ + if( pColumn==0 ){ + j = i; + }else{ + for(j=0; jnId; j++){ + if( pColumn->a[j].idx==i ) break; + } + } + if( pColumn && j>=pColumn->nId ){ + sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, regCols+i); + }else if( useTempTable ){ + sqlite3VdbeAddOp3(v, OP_Column, srcTab, j, regCols+i); + }else{ + assert( pSelect==0 ); /* Otherwise useTempTable is true */ + sqlite3ExprCodeAndCache(pParse, pList->a[j].pExpr, regCols+i); + } + } + regRec = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regCols, pTab->nCol, regRec); + + /* If this is an INSERT on a view with an INSTEAD OF INSERT trigger, + ** do not attempt any conversions before assembling the record. + ** If this is a real table, attempt conversions as required by the + ** table column affinities. + */ + if( !isView ){ + sqlite3TableAffinityStr(v, pTab); + } + sqlite3VdbeAddOp3(v, OP_Insert, newIdx, regRec, regRowid); + sqlite3ReleaseTempReg(pParse, regRec); + sqlite3ReleaseTempReg(pParse, regRowid); + sqlite3ReleaseTempRange(pParse, regCols, pTab->nCol); + + /* Fire BEFORE or INSTEAD OF triggers */ + if( sqlite3CodeRowTrigger(pParse, TK_INSERT, 0, TRIGGER_BEFORE, pTab, + newIdx, -1, onError, endOfLoop, 0, 0) ){ + goto insert_cleanup; + } + } + + /* Push the record number for the new entry onto the stack. The + ** record number is a randomly generate integer created by NewRowid + ** except when the table has an INTEGER PRIMARY KEY column, in which + ** case the record number is the same as that column. + */ + if( !isView ){ + if( IsVirtual(pTab) ){ + /* The row that the VUpdate opcode will delete: none */ + sqlite3VdbeAddOp2(v, OP_Null, 0, regIns); + } + if( keyColumn>=0 ){ + if( useTempTable ){ + sqlite3VdbeAddOp3(v, OP_Column, srcTab, keyColumn, regRowid); + }else if( pSelect ){ + sqlite3VdbeAddOp2(v, OP_SCopy, regFromSelect+keyColumn, regRowid); + }else{ + VdbeOp *pOp; + sqlite3ExprCode(pParse, pList->a[keyColumn].pExpr, regRowid); + pOp = sqlite3VdbeGetOp(v, sqlite3VdbeCurrentAddr(v) - 1); + if( pOp && pOp->opcode==OP_Null ){ + appendFlag = 1; + pOp->opcode = OP_NewRowid; + pOp->p1 = baseCur; + pOp->p2 = regRowid; + pOp->p3 = regAutoinc; + } + } + /* If the PRIMARY KEY expression is NULL, then use OP_NewRowid + ** to generate a unique primary key value. + */ + if( !appendFlag ){ + int j1; + j1 = sqlite3VdbeAddOp1(v, OP_NotNull, regRowid); + sqlite3VdbeAddOp3(v, OP_NewRowid, baseCur, regRowid, regAutoinc); + sqlite3VdbeJumpHere(v, j1); + sqlite3VdbeAddOp1(v, OP_MustBeInt, regRowid); + } + }else if( IsVirtual(pTab) ){ + sqlite3VdbeAddOp2(v, OP_Null, 0, regRowid); + }else{ + sqlite3VdbeAddOp3(v, OP_NewRowid, baseCur, regRowid, regAutoinc); + appendFlag = 1; + } + autoIncStep(pParse, regAutoinc, regRowid); + + /* Push onto the stack, data for all columns of the new entry, beginning + ** with the first column. + */ + nHidden = 0; + for(i=0; inCol; i++){ + int iRegStore = regRowid+1+i; + if( i==pTab->iPKey ){ + /* The value of the INTEGER PRIMARY KEY column is always a NULL. + ** Whenever this column is read, the record number will be substituted + ** in its place. So will fill this column with a NULL to avoid + ** taking up data space with information that will never be used. 
*/ + sqlite3VdbeAddOp2(v, OP_Null, 0, iRegStore); + continue; + } + if( pColumn==0 ){ + if( IsHiddenColumn(&pTab->aCol[i]) ){ + assert( IsVirtual(pTab) ); + j = -1; + nHidden++; + }else{ + j = i - nHidden; + } + }else{ + for(j=0; jnId; j++){ + if( pColumn->a[j].idx==i ) break; + } + } + if( j<0 || nColumn==0 || (pColumn && j>=pColumn->nId) ){ + sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, iRegStore); + }else if( useTempTable ){ + sqlite3VdbeAddOp3(v, OP_Column, srcTab, j, iRegStore); + }else if( pSelect ){ + sqlite3VdbeAddOp2(v, OP_SCopy, regFromSelect+j, iRegStore); + }else{ + sqlite3ExprCode(pParse, pList->a[j].pExpr, iRegStore); + } + } + + /* Generate code to check constraints and generate index keys and + ** do the insertion. + */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pTab) ){ + pParse->pVirtualLock = pTab; + sqlite3VdbeAddOp4(v, OP_VUpdate, 1, pTab->nCol+2, regIns, + (const char*)pTab->pVtab, P4_VTAB); + }else +#endif + { + sqlite3GenerateConstraintChecks( + pParse, + pTab, + baseCur, + regIns, + aRegIdx, + keyColumn>=0, + 0, + onError, + endOfLoop + ); + sqlite3CompleteInsertion( + pParse, + pTab, + baseCur, + regIns, + aRegIdx, + 0, + 0, + (triggers_exist & TRIGGER_AFTER)!=0 ? newIdx : -1, + appendFlag + ); + } + } + + /* Update the count of rows that are inserted + */ + if( (db->flags & SQLITE_CountRows)!=0 ){ + sqlite3VdbeAddOp2(v, OP_AddImm, regRowCount, 1); + } + + if( triggers_exist ){ + /* Code AFTER triggers */ + if( sqlite3CodeRowTrigger(pParse, TK_INSERT, 0, TRIGGER_AFTER, pTab, + newIdx, -1, onError, endOfLoop, 0, 0) ){ + goto insert_cleanup; + } + } + + /* The bottom of the loop, if the data source is a SELECT statement + */ + sqlite3VdbeResolveLabel(v, endOfLoop); + if( useTempTable ){ + sqlite3VdbeAddOp2(v, OP_Next, srcTab, iCont); + sqlite3VdbeResolveLabel(v, iBreak); + sqlite3VdbeAddOp2(v, OP_Close, srcTab, 0); + }else if( pSelect ){ + sqlite3VdbeAddOp2(v, OP_Return, 0, 0); + sqlite3VdbeResolveLabel(v, iCleanup); + } + + if( !IsVirtual(pTab) && !isView ){ + /* Close all tables opened */ + sqlite3VdbeAddOp2(v, OP_Close, baseCur, 0); + for(idx=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, idx++){ + sqlite3VdbeAddOp2(v, OP_Close, idx+baseCur, 0); + } + } + + /* Update the sqlite_sequence table by storing the content of the + ** counter value in memory regAutoinc back into the sqlite_sequence + ** table. + */ + autoIncEnd(pParse, iDb, pTab, regAutoinc); + + /* + ** Return the number of rows inserted. If this routine is + ** generating code because of a call to sqlite3NestedParse(), do not + ** invoke the callback function. + */ + if( db->flags & SQLITE_CountRows && pParse->nested==0 && !pParse->trigStack ){ + sqlite3VdbeAddOp2(v, OP_ResultRow, regRowCount, 1); + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows inserted", P4_STATIC); + } + +insert_cleanup: + sqlite3SrcListDelete(pTabList); + sqlite3ExprListDelete(pList); + sqlite3SelectDelete(pSelect); + sqlite3IdListDelete(pColumn); + sqlite3_free(aRegIdx); +} + +/* +** Generate code to do constraint checks prior to an INSERT or an UPDATE. +** +** The input is a range of consecutive registers as follows: +** +** 1. The rowid of the row to be updated before the update. This +** value is omitted unless we are doing an UPDATE that involves a +** change to the record number or writing to a virtual table. +** +** 2. The rowid of the row after the update. +** +** 3. The data in the first column of the entry after the update. +** +** i. Data from middle columns... +** +** N. 
The data in the last column of the entry after the update. +** +** The regRowid parameter is the index of the register containing (2). +** +** The old rowid shown as entry (1) above is omitted unless both isUpdate +** and rowidChng are 1. isUpdate is true for UPDATEs and false for +** INSERTs. RowidChng means that the new rowid is explicitly specified by +** the update or insert statement. If rowidChng is false, it means that +** the rowid is computed automatically in an insert or that the rowid value +** is not modified by the update. +** +** The code generated by this routine store new index entries into +** registers identified by aRegIdx[]. No index entry is created for +** indices where aRegIdx[i]==0. The order of indices in aRegIdx[] is +** the same as the order of indices on the linked list of indices +** attached to the table. +** +** This routine also generates code to check constraints. NOT NULL, +** CHECK, and UNIQUE constraints are all checked. If a constraint fails, +** then the appropriate action is performed. There are five possible +** actions: ROLLBACK, ABORT, FAIL, REPLACE, and IGNORE. +** +** Constraint type Action What Happens +** --------------- ---------- ---------------------------------------- +** any ROLLBACK The current transaction is rolled back and +** sqlite3_exec() returns immediately with a +** return code of SQLITE_CONSTRAINT. +** +** any ABORT Back out changes from the current command +** only (do not do a complete rollback) then +** cause sqlite3_exec() to return immediately +** with SQLITE_CONSTRAINT. +** +** any FAIL Sqlite_exec() returns immediately with a +** return code of SQLITE_CONSTRAINT. The +** transaction is not rolled back and any +** prior changes are retained. +** +** any IGNORE The record number and data is popped from +** the stack and there is an immediate jump +** to label ignoreDest. +** +** NOT NULL REPLACE The NULL value is replace by the default +** value for that column. If the default value +** is NULL, the action is the same as ABORT. +** +** UNIQUE REPLACE The other row that conflicts with the row +** being inserted is removed. +** +** CHECK REPLACE Illegal. The results in an exception. +** +** Which action to take is determined by the overrideError parameter. +** Or if overrideError==OE_Default, then the pParse->onError parameter +** is used. Or if pParse->onError==OE_Default then the onError value +** for the constraint is used. +** +** The calling routine must open a read/write cursor for pTab with +** cursor number "baseCur". All indices of pTab must also have open +** read/write cursors with cursor number baseCur+i for the i-th cursor. +** Except, if there is no possibility of a REPLACE action then +** cursors do not need to be open for indices where aRegIdx[i]==0. +*/ +void sqlite3GenerateConstraintChecks( + Parse *pParse, /* The parser context */ + Table *pTab, /* the table into which we are inserting */ + int baseCur, /* Index of a read/write cursor pointing at pTab */ + int regRowid, /* Index of the range of input registers */ + int *aRegIdx, /* Register used by each index. 
0 for unused indices */ + int rowidChng, /* True if the rowid might collide with existing entry */ + int isUpdate, /* True for UPDATE, False for INSERT */ + int overrideError, /* Override onError to this if not OE_Default */ + int ignoreDest /* Jump to this label on an OE_Ignore resolution */ +){ + int i; + Vdbe *v; + int nCol; + int onError; + int j1, j2, j3; /* Addresses of jump instructions */ + int regData; /* Register containing first data column */ + int iCur; + Index *pIdx; + int seenReplace = 0; + int hasTwoRowids = (isUpdate && rowidChng); + + v = sqlite3GetVdbe(pParse); + assert( v!=0 ); + assert( pTab->pSelect==0 ); /* This table is not a VIEW */ + nCol = pTab->nCol; + regData = regRowid + 1; + + + /* Test all NOT NULL constraints. + */ + for(i=0; iiPKey ){ + continue; + } + onError = pTab->aCol[i].notNull; + if( onError==OE_None ) continue; + if( overrideError!=OE_Default ){ + onError = overrideError; + }else if( onError==OE_Default ){ + onError = OE_Abort; + } + if( onError==OE_Replace && pTab->aCol[i].pDflt==0 ){ + onError = OE_Abort; + } + j1 = sqlite3VdbeAddOp1(v, OP_NotNull, regData+i); + assert( onError==OE_Rollback || onError==OE_Abort || onError==OE_Fail + || onError==OE_Ignore || onError==OE_Replace ); + switch( onError ){ + case OE_Rollback: + case OE_Abort: + case OE_Fail: { + char *zMsg = 0; + sqlite3VdbeAddOp2(v, OP_Halt, SQLITE_CONSTRAINT, onError); + sqlite3SetString(&zMsg, pTab->zName, ".", pTab->aCol[i].zName, + " may not be NULL", (char*)0); + sqlite3VdbeChangeP4(v, -1, zMsg, P4_DYNAMIC); + break; + } + case OE_Ignore: { + sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); + break; + } + case OE_Replace: { + sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, regData+i); + break; + } + } + sqlite3VdbeJumpHere(v, j1); + } + + /* Test all CHECK constraints + */ +#ifndef SQLITE_OMIT_CHECK + if( pTab->pCheck && (pParse->db->flags & SQLITE_IgnoreChecks)==0 ){ + int allOk = sqlite3VdbeMakeLabel(v); + pParse->ckBase = regData; + sqlite3ExprIfTrue(pParse, pTab->pCheck, allOk, SQLITE_JUMPIFNULL); + onError = overrideError!=OE_Default ? overrideError : OE_Abort; + if( onError==OE_Ignore ){ + sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); + }else{ + sqlite3VdbeAddOp2(v, OP_Halt, SQLITE_CONSTRAINT, onError); + } + sqlite3VdbeResolveLabel(v, allOk); + } +#endif /* !defined(SQLITE_OMIT_CHECK) */ + + /* If we have an INTEGER PRIMARY KEY, make sure the primary key + ** of the new record does not previously exist. Except, if this + ** is an UPDATE and the primary key is not changing, that is OK. 
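/*
** Illustrative sketch (not part of this checkin): the onError values
** resolved above correspond to the conflict clauses of the SQL statement.
** The table is hypothetical (assume t1 has a UNIQUE or INTEGER PRIMARY KEY
** column named id); error handling is omitted.
*/
#include "sqlite3.h"

static void demoConflictClauses(sqlite3 *db){
  sqlite3_exec(db, "INSERT INTO t1(id) VALUES(1)", 0, 0, 0);             /* OE_Abort (the default) */
  sqlite3_exec(db, "INSERT OR IGNORE INTO t1(id) VALUES(1)", 0, 0, 0);   /* OE_Ignore: skip the row */
  sqlite3_exec(db, "INSERT OR REPLACE INTO t1(id) VALUES(1)", 0, 0, 0);  /* OE_Replace: delete the old row */
  sqlite3_exec(db, "INSERT OR ROLLBACK INTO t1(id) VALUES(1)", 0, 0, 0); /* OE_Rollback: roll back the transaction */
  sqlite3_exec(db, "INSERT OR FAIL INTO t1(id) VALUES(1)", 0, 0, 0);     /* OE_Fail: stop, keep prior changes */
}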
+ */ + if( rowidChng ){ + onError = pTab->keyConf; + if( overrideError!=OE_Default ){ + onError = overrideError; + }else if( onError==OE_Default ){ + onError = OE_Abort; + } + + if( onError!=OE_Replace || pTab->pIndex ){ + if( isUpdate ){ + j2 = sqlite3VdbeAddOp3(v, OP_Eq, regRowid, 0, regRowid-1); + } + j3 = sqlite3VdbeAddOp3(v, OP_NotExists, baseCur, 0, regRowid); + switch( onError ){ + default: { + onError = OE_Abort; + /* Fall thru into the next case */ + } + case OE_Rollback: + case OE_Abort: + case OE_Fail: { + sqlite3VdbeAddOp4(v, OP_Halt, SQLITE_CONSTRAINT, onError, 0, + "PRIMARY KEY must be unique", P4_STATIC); + break; + } + case OE_Replace: { + sqlite3GenerateRowIndexDelete(pParse, pTab, baseCur, 0); + seenReplace = 1; + break; + } + case OE_Ignore: { + assert( seenReplace==0 ); + sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); + break; + } + } + sqlite3VdbeJumpHere(v, j3); + if( isUpdate ){ + sqlite3VdbeJumpHere(v, j2); + } + } + } + + /* Test all UNIQUE constraints by creating entries for each UNIQUE + ** index and making sure that duplicate entries do not already exist. + ** Add the new records to the indices as we go. + */ + for(iCur=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, iCur++){ + int regIdx; + int regR; + + if( aRegIdx[iCur]==0 ) continue; /* Skip unused indices */ + + /* Create a key for accessing the index entry */ + regIdx = sqlite3GetTempRange(pParse, pIdx->nColumn+1); + for(i=0; inColumn; i++){ + int idx = pIdx->aiColumn[i]; + if( idx==pTab->iPKey ){ + sqlite3VdbeAddOp2(v, OP_SCopy, regRowid, regIdx+i); + }else{ + sqlite3VdbeAddOp2(v, OP_SCopy, regData+idx, regIdx+i); + } + } + sqlite3VdbeAddOp2(v, OP_SCopy, regRowid, regIdx+i); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regIdx, pIdx->nColumn+1, aRegIdx[iCur]); + sqlite3IndexAffinityStr(v, pIdx); + sqlite3ReleaseTempRange(pParse, regIdx, pIdx->nColumn+1); + + /* Find out what action to take in case there is an indexing conflict */ + onError = pIdx->onError; + if( onError==OE_None ) continue; /* pIdx is not a UNIQUE index */ + if( overrideError!=OE_Default ){ + onError = overrideError; + }else if( onError==OE_Default ){ + onError = OE_Abort; + } + if( seenReplace ){ + if( onError==OE_Ignore ) onError = OE_Replace; + else if( onError==OE_Fail ) onError = OE_Abort; + } + + + /* Check to see if the new index entry will be unique */ + j2 = sqlite3VdbeAddOp3(v, OP_IsNull, regIdx, 0, pIdx->nColumn); + regR = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp2(v, OP_SCopy, regRowid-hasTwoRowids, regR); + j3 = sqlite3VdbeAddOp4(v, OP_IsUnique, baseCur+iCur+1, 0, + regR, (char*)(sqlite3_intptr_t)aRegIdx[iCur], + P4_INT32); + + /* Generate code that executes if the new index entry is not unique */ + assert( onError==OE_Rollback || onError==OE_Abort || onError==OE_Fail + || onError==OE_Ignore || onError==OE_Replace ); + switch( onError ){ + case OE_Rollback: + case OE_Abort: + case OE_Fail: { + int j, n1, n2; + char zErrMsg[200]; + sqlite3_snprintf(sizeof(zErrMsg), zErrMsg, + pIdx->nColumn>1 ? "columns " : "column "); + n1 = strlen(zErrMsg); + for(j=0; jnColumn && n1aCol[pIdx->aiColumn[j]].zName; + n2 = strlen(zCol); + if( j>0 ){ + sqlite3_snprintf(sizeof(zErrMsg)-n1, &zErrMsg[n1], ", "); + n1 += 2; + } + if( n1+n2>sizeof(zErrMsg)-30 ){ + sqlite3_snprintf(sizeof(zErrMsg)-n1, &zErrMsg[n1], "..."); + n1 += 3; + break; + }else{ + sqlite3_snprintf(sizeof(zErrMsg)-n1, &zErrMsg[n1], "%s", zCol); + n1 += n2; + } + } + sqlite3_snprintf(sizeof(zErrMsg)-n1, &zErrMsg[n1], + pIdx->nColumn>1 ? 
" are not unique" : " is not unique"); + sqlite3VdbeAddOp4(v, OP_Halt, SQLITE_CONSTRAINT, onError, 0, zErrMsg,0); + break; + } + case OE_Ignore: { + assert( seenReplace==0 ); + sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); + break; + } + case OE_Replace: { + sqlite3GenerateRowDelete(pParse, pTab, baseCur, regR, 0); + seenReplace = 1; + break; + } + } + sqlite3VdbeJumpHere(v, j2); + sqlite3VdbeJumpHere(v, j3); + sqlite3ReleaseTempReg(pParse, regR); + } +} + +/* +** This routine generates code to finish the INSERT or UPDATE operation +** that was started by a prior call to sqlite3GenerateConstraintChecks. +** A consecutive range of registers starting at regRowid contains the +** rowid and the content to be inserted. +** +** The arguments to this routine should be the same as the first six +** arguments to sqlite3GenerateConstraintChecks. +*/ +void sqlite3CompleteInsertion( + Parse *pParse, /* The parser context */ + Table *pTab, /* the table into which we are inserting */ + int baseCur, /* Index of a read/write cursor pointing at pTab */ + int regRowid, /* Range of content */ + int *aRegIdx, /* Register used by each index. 0 for unused indices */ + int rowidChng, /* True if the record number will change */ + int isUpdate, /* True for UPDATE, False for INSERT */ + int newIdx, /* Index of NEW table for triggers. -1 if none */ + int appendBias /* True if this is likely to be an append */ +){ + int i; + Vdbe *v; + int nIdx; + Index *pIdx; + int pik_flags; + int regData; + int regRec; + + v = sqlite3GetVdbe(pParse); + assert( v!=0 ); + assert( pTab->pSelect==0 ); /* This table is not a VIEW */ + for(nIdx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdx++){} + for(i=nIdx-1; i>=0; i--){ + if( aRegIdx[i]==0 ) continue; + sqlite3VdbeAddOp2(v, OP_IdxInsert, baseCur+i+1, aRegIdx[i]); + } + regData = regRowid + 1; + regRec = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regData, pTab->nCol, regRec); + sqlite3TableAffinityStr(v, pTab); +#ifndef SQLITE_OMIT_TRIGGER + if( newIdx>=0 ){ + sqlite3VdbeAddOp3(v, OP_Insert, newIdx, regRec, regRowid); + } +#endif + if( pParse->nested ){ + pik_flags = 0; + }else{ + pik_flags = OPFLAG_NCHANGE; + pik_flags |= (isUpdate?OPFLAG_ISUPDATE:OPFLAG_LASTROWID); + } + if( appendBias ){ + pik_flags |= OPFLAG_APPEND; + } + sqlite3VdbeAddOp3(v, OP_Insert, baseCur, regRec, regRowid); + if( !pParse->nested ){ + sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_STATIC); + } + sqlite3VdbeChangeP5(v, pik_flags); +} + +/* +** Generate code that will open cursors for a table and for all +** indices of that table. The "baseCur" parameter is the cursor number used +** for the table. Indices are opened on subsequent cursors. +** +** Return the number of indices on the table. 
+*/ +int sqlite3OpenTableAndIndices( + Parse *pParse, /* Parsing context */ + Table *pTab, /* Table to be opened */ + int baseCur, /* Cursor number assigned to the table */ + int op /* OP_OpenRead or OP_OpenWrite */ +){ + int i; + int iDb; + Index *pIdx; + Vdbe *v; + + if( IsVirtual(pTab) ) return 0; + iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + v = sqlite3GetVdbe(pParse); + assert( v!=0 ); + sqlite3OpenTable(pParse, baseCur, iDb, pTab, op); + for(i=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){ + KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx); + assert( pIdx->pSchema==pTab->pSchema ); + sqlite3VdbeAddOp4(v, op, i+baseCur, pIdx->tnum, iDb, + (char*)pKey, P4_KEYINFO_HANDOFF); + VdbeComment((v, "%s", pIdx->zName)); + } + if( pParse->nTab<=baseCur+i ){ + pParse->nTab = baseCur+i; + } + return i-1; +} + + +#ifdef SQLITE_TEST +/* +** The following global variable is incremented whenever the +** transfer optimization is used. This is used for testing +** purposes only - to make sure the transfer optimization really +** is happening when it is suppose to. +*/ +int sqlite3_xferopt_count; +#endif /* SQLITE_TEST */ + + +#ifndef SQLITE_OMIT_XFER_OPT +/* +** Check to collation names to see if they are compatible. +*/ +static int xferCompatibleCollation(const char *z1, const char *z2){ + if( z1==0 ){ + return z2==0; + } + if( z2==0 ){ + return 0; + } + return sqlite3StrICmp(z1, z2)==0; +} + + +/* +** Check to see if index pSrc is compatible as a source of data +** for index pDest in an insert transfer optimization. The rules +** for a compatible index: +** +** * The index is over the same set of columns +** * The same DESC and ASC markings occurs on all columns +** * The same onError processing (OE_Abort, OE_Ignore, etc) +** * The same collating sequence on each column +*/ +static int xferCompatibleIndex(Index *pDest, Index *pSrc){ + int i; + assert( pDest && pSrc ); + assert( pDest->pTable!=pSrc->pTable ); + if( pDest->nColumn!=pSrc->nColumn ){ + return 0; /* Different number of columns */ + } + if( pDest->onError!=pSrc->onError ){ + return 0; /* Different conflict resolution strategies */ + } + for(i=0; inColumn; i++){ + if( pSrc->aiColumn[i]!=pDest->aiColumn[i] ){ + return 0; /* Different columns indexed */ + } + if( pSrc->aSortOrder[i]!=pDest->aSortOrder[i] ){ + return 0; /* Different sort orders */ + } + if( pSrc->azColl[i]!=pDest->azColl[i] ){ + return 0; /* Different collating sequences */ + } + } + + /* If no test above fails then the indices must be compatible */ + return 1; +} + +/* +** Attempt the transfer optimization on INSERTs of the form +** +** INSERT INTO tab1 SELECT * FROM tab2; +** +** This optimization is only attempted if +** +** (1) tab1 and tab2 have identical schemas including all the +** same indices and constraints +** +** (2) tab1 and tab2 are different tables +** +** (3) There must be no triggers on tab1 +** +** (4) The result set of the SELECT statement is "*" +** +** (5) The SELECT statement has no WHERE, HAVING, ORDER BY, GROUP BY, +** or LIMIT clause. +** +** (6) The SELECT statement is a simple (not a compound) select that +** contains only tab2 in its FROM clause +** +** This method for implementing the INSERT transfers raw records from +** tab2 over to tab1. The columns are not decoded. Raw records from +** the indices of tab2 are transfered to tab1 as well. In so doing, +** the resulting tab1 has much less fragmentation. +** +** This routine returns TRUE if the optimization is attempted. 
If any +** of the conditions above fail so that the optimization should not +** be attempted, then this routine returns FALSE. +*/ +static int xferOptimization( + Parse *pParse, /* Parser context */ + Table *pDest, /* The table we are inserting into */ + Select *pSelect, /* A SELECT statement to use as the data source */ + int onError, /* How to handle constraint errors */ + int iDbDest /* The database of pDest */ +){ + ExprList *pEList; /* The result set of the SELECT */ + Table *pSrc; /* The table in the FROM clause of SELECT */ + Index *pSrcIdx, *pDestIdx; /* Source and destination indices */ + struct SrcList_item *pItem; /* An element of pSelect->pSrc */ + int i; /* Loop counter */ + int iDbSrc; /* The database of pSrc */ + int iSrc, iDest; /* Cursors from source and destination */ + int addr1, addr2; /* Loop addresses */ + int emptyDestTest; /* Address of test for empty pDest */ + int emptySrcTest; /* Address of test for empty pSrc */ + Vdbe *v; /* The VDBE we are building */ + KeyInfo *pKey; /* Key information for an index */ + int regAutoinc; /* Memory register used by AUTOINC */ + int destHasUniqueIdx = 0; /* True if pDest has a UNIQUE index */ + int regData, regRowid; /* Registers holding data and rowid */ + + if( pSelect==0 ){ + return 0; /* Must be of the form INSERT INTO ... SELECT ... */ + } + if( pDest->pTrigger ){ + return 0; /* tab1 must not have triggers */ + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( pDest->isVirtual ){ + return 0; /* tab1 must not be a virtual table */ + } +#endif + if( onError==OE_Default ){ + onError = OE_Abort; + } + if( onError!=OE_Abort && onError!=OE_Rollback ){ + return 0; /* Cannot do OR REPLACE or OR IGNORE or OR FAIL */ + } + assert(pSelect->pSrc); /* allocated even if there is no FROM clause */ + if( pSelect->pSrc->nSrc!=1 ){ + return 0; /* FROM clause must have exactly one term */ + } + if( pSelect->pSrc->a[0].pSelect ){ + return 0; /* FROM clause cannot contain a subquery */ + } + if( pSelect->pWhere ){ + return 0; /* SELECT may not have a WHERE clause */ + } + if( pSelect->pOrderBy ){ + return 0; /* SELECT may not have an ORDER BY clause */ + } + /* Do not need to test for a HAVING clause. If HAVING is present but + ** there is no ORDER BY, we will get an error. */ + if( pSelect->pGroupBy ){ + return 0; /* SELECT may not have a GROUP BY clause */ + } + if( pSelect->pLimit ){ + return 0; /* SELECT may not have a LIMIT clause */ + } + assert( pSelect->pOffset==0 ); /* Must be so if pLimit==0 */ + if( pSelect->pPrior ){ + return 0; /* SELECT may not be a compound query */ + } + if( pSelect->isDistinct ){ + return 0; /* SELECT may not be DISTINCT */ + } + pEList = pSelect->pEList; + assert( pEList!=0 ); + if( pEList->nExpr!=1 ){ + return 0; /* The result set must have exactly one column */ + } + assert( pEList->a[0].pExpr ); + if( pEList->a[0].pExpr->op!=TK_ALL ){ + return 0; /* The result set must be the special operator "*" */ + } + + /* At this point we have established that the statement is of the + ** correct syntactic form to participate in this optimization. Now + ** we have to check the semantics. 
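+  ** As a hedged illustration (the example tables are ours, not part of
+  ** this file), the semantic checks below would accept
+  **
+  **     CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT);
+  **     CREATE TABLE t2(a INTEGER PRIMARY KEY, b TEXT);
+  **     INSERT INTO t1 SELECT * FROM t2;
+  **
+  ** but would reject the same INSERT if t2 declared b as "b BLOB"
+  ** (different affinity) or "b TEXT COLLATE NOCASE" (different collating
+  ** sequence), in which case the caller falls back to ordinary
+  ** row-by-row insertion.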
+ */ + pItem = pSelect->pSrc->a; + pSrc = sqlite3LocateTable(pParse, 0, pItem->zName, pItem->zDatabase); + if( pSrc==0 ){ + return 0; /* FROM clause does not contain a real table */ + } + if( pSrc==pDest ){ + return 0; /* tab1 and tab2 may not be the same table */ + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( pSrc->isVirtual ){ + return 0; /* tab2 must not be a virtual table */ + } +#endif + if( pSrc->pSelect ){ + return 0; /* tab2 may not be a view */ + } + if( pDest->nCol!=pSrc->nCol ){ + return 0; /* Number of columns must be the same in tab1 and tab2 */ + } + if( pDest->iPKey!=pSrc->iPKey ){ + return 0; /* Both tables must have the same INTEGER PRIMARY KEY */ + } + for(i=0; inCol; i++){ + if( pDest->aCol[i].affinity!=pSrc->aCol[i].affinity ){ + return 0; /* Affinity must be the same on all columns */ + } + if( !xferCompatibleCollation(pDest->aCol[i].zColl, pSrc->aCol[i].zColl) ){ + return 0; /* Collating sequence must be the same on all columns */ + } + if( pDest->aCol[i].notNull && !pSrc->aCol[i].notNull ){ + return 0; /* tab2 must be NOT NULL if tab1 is */ + } + } + for(pDestIdx=pDest->pIndex; pDestIdx; pDestIdx=pDestIdx->pNext){ + if( pDestIdx->onError!=OE_None ){ + destHasUniqueIdx = 1; + } + for(pSrcIdx=pSrc->pIndex; pSrcIdx; pSrcIdx=pSrcIdx->pNext){ + if( xferCompatibleIndex(pDestIdx, pSrcIdx) ) break; + } + if( pSrcIdx==0 ){ + return 0; /* pDestIdx has no corresponding index in pSrc */ + } + } +#ifndef SQLITE_OMIT_CHECK + if( pDest->pCheck && !sqlite3ExprCompare(pSrc->pCheck, pDest->pCheck) ){ + return 0; /* Tables have different CHECK constraints. Ticket #2252 */ + } +#endif + + /* If we get this far, it means either: + ** + ** * We can always do the transfer if the table contains an + ** an integer primary key + ** + ** * We can conditionally do the transfer if the destination + ** table is empty. + */ +#ifdef SQLITE_TEST + sqlite3_xferopt_count++; +#endif + iDbSrc = sqlite3SchemaToIndex(pParse->db, pSrc->pSchema); + v = sqlite3GetVdbe(pParse); + sqlite3CodeVerifySchema(pParse, iDbSrc); + iSrc = pParse->nTab++; + iDest = pParse->nTab++; + regAutoinc = autoIncBegin(pParse, iDbDest, pDest); + sqlite3OpenTable(pParse, iDest, iDbDest, pDest, OP_OpenWrite); + if( (pDest->iPKey<0 && pDest->pIndex!=0) || destHasUniqueIdx ){ + /* If tables do not have an INTEGER PRIMARY KEY and there + ** are indices to be copied and the destination is not empty, + ** we have to disallow the transfer optimization because the + ** the rowids might change which will mess up indexing. + ** + ** Or if the destination has a UNIQUE index and is not empty, + ** we also disallow the transfer optimization because we cannot + ** insure that all entries in the union of DEST and SRC will be + ** unique. 
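+    ** A hedged illustration (the tables are ours, not from this file):
+    **
+    **     CREATE TABLE t1(a, b, UNIQUE(b));   -- destination, not empty
+    **     CREATE TABLE t2(a, b, UNIQUE(b));   -- source
+    **     INSERT INTO t1 SELECT * FROM t2;
+    **
+    ** If both tables already hold the same value of b, copying raw index
+    ** records would silently create a duplicate key, so the OP_Rewind
+    ** test emitted just below routes a non-empty destination back to the
+    ** ordinary constraint-checking insert code.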
+ */ + addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iDest, 0); + emptyDestTest = sqlite3VdbeAddOp2(v, OP_Goto, 0, 0); + sqlite3VdbeJumpHere(v, addr1); + }else{ + emptyDestTest = 0; + } + sqlite3OpenTable(pParse, iSrc, iDbSrc, pSrc, OP_OpenRead); + emptySrcTest = sqlite3VdbeAddOp2(v, OP_Rewind, iSrc, 0); + regData = sqlite3GetTempReg(pParse); + regRowid = sqlite3GetTempReg(pParse); + if( pDest->iPKey>=0 ){ + addr1 = sqlite3VdbeAddOp2(v, OP_Rowid, iSrc, regRowid); + addr2 = sqlite3VdbeAddOp3(v, OP_NotExists, iDest, 0, regRowid); + sqlite3VdbeAddOp4(v, OP_Halt, SQLITE_CONSTRAINT, onError, 0, + "PRIMARY KEY must be unique", P4_STATIC); + sqlite3VdbeJumpHere(v, addr2); + autoIncStep(pParse, regAutoinc, regRowid); + }else if( pDest->pIndex==0 ){ + addr1 = sqlite3VdbeAddOp2(v, OP_NewRowid, iDest, regRowid); + }else{ + addr1 = sqlite3VdbeAddOp2(v, OP_Rowid, iSrc, regRowid); + assert( pDest->autoInc==0 ); + } + sqlite3VdbeAddOp2(v, OP_RowData, iSrc, regData); + sqlite3VdbeAddOp3(v, OP_Insert, iDest, regData, regRowid); + sqlite3VdbeChangeP5(v, OPFLAG_NCHANGE|OPFLAG_LASTROWID|OPFLAG_APPEND); + sqlite3VdbeChangeP4(v, -1, pDest->zName, 0); + sqlite3VdbeAddOp2(v, OP_Next, iSrc, addr1); + autoIncEnd(pParse, iDbDest, pDest, regAutoinc); + for(pDestIdx=pDest->pIndex; pDestIdx; pDestIdx=pDestIdx->pNext){ + for(pSrcIdx=pSrc->pIndex; pSrcIdx; pSrcIdx=pSrcIdx->pNext){ + if( xferCompatibleIndex(pDestIdx, pSrcIdx) ) break; + } + assert( pSrcIdx ); + sqlite3VdbeAddOp2(v, OP_Close, iSrc, 0); + sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); + pKey = sqlite3IndexKeyinfo(pParse, pSrcIdx); + sqlite3VdbeAddOp4(v, OP_OpenRead, iSrc, pSrcIdx->tnum, iDbSrc, + (char*)pKey, P4_KEYINFO_HANDOFF); + VdbeComment((v, "%s", pSrcIdx->zName)); + pKey = sqlite3IndexKeyinfo(pParse, pDestIdx); + sqlite3VdbeAddOp4(v, OP_OpenWrite, iDest, pDestIdx->tnum, iDbDest, + (char*)pKey, P4_KEYINFO_HANDOFF); + VdbeComment((v, "%s", pDestIdx->zName)); + addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iSrc, 0); + sqlite3VdbeAddOp2(v, OP_RowKey, iSrc, regData); + sqlite3VdbeAddOp3(v, OP_IdxInsert, iDest, regData, 1); + sqlite3VdbeAddOp2(v, OP_Next, iSrc, addr1+1); + sqlite3VdbeJumpHere(v, addr1); + } + sqlite3VdbeJumpHere(v, emptySrcTest); + sqlite3ReleaseTempReg(pParse, regRowid); + sqlite3ReleaseTempReg(pParse, regData); + sqlite3VdbeAddOp2(v, OP_Close, iSrc, 0); + sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); + if( emptyDestTest ){ + sqlite3VdbeAddOp2(v, OP_Halt, SQLITE_OK, 0); + sqlite3VdbeJumpHere(v, emptyDestTest); + sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); + return 0; + }else{ + return 1; + } +} +#endif /* SQLITE_OMIT_XFER_OPT */ Added: external/sqlite-source-3.5.7.x/journal.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/journal.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,238 @@ +/* +** 2007 August 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** @(#) $Id: journal.c,v 1.7 2007/09/06 13:49:37 drh Exp $ +*/ + +#ifdef SQLITE_ENABLE_ATOMIC_WRITE + +/* +** This file implements a special kind of sqlite3_file object used +** by SQLite to create journal files if the atomic-write optimization +** is enabled. 
+** +** The distinctive characteristic of this sqlite3_file is that the +** actual on disk file is created lazily. When the file is created, +** the caller specifies a buffer size for an in-memory buffer to +** be used to service read() and write() requests. The actual file +** on disk is not created or populated until either: +** +** 1) The in-memory representation grows too large for the allocated +** buffer, or +** 2) The xSync() method is called. +*/ + +#include "sqliteInt.h" + + +/* +** A JournalFile object is a subclass of sqlite3_file used by +** as an open file handle for journal files. +*/ +struct JournalFile { + sqlite3_io_methods *pMethod; /* I/O methods on journal files */ + int nBuf; /* Size of zBuf[] in bytes */ + char *zBuf; /* Space to buffer journal writes */ + int iSize; /* Amount of zBuf[] currently used */ + int flags; /* xOpen flags */ + sqlite3_vfs *pVfs; /* The "real" underlying VFS */ + sqlite3_file *pReal; /* The "real" underlying file descriptor */ + const char *zJournal; /* Name of the journal file */ +}; +typedef struct JournalFile JournalFile; + +/* +** If it does not already exists, create and populate the on-disk file +** for JournalFile p. +*/ +static int createFile(JournalFile *p){ + int rc = SQLITE_OK; + if( !p->pReal ){ + sqlite3_file *pReal = (sqlite3_file *)&p[1]; + rc = sqlite3OsOpen(p->pVfs, p->zJournal, pReal, p->flags, 0); + if( rc==SQLITE_OK ){ + p->pReal = pReal; + if( p->iSize>0 ){ + assert(p->iSize<=p->nBuf); + rc = sqlite3OsWrite(p->pReal, p->zBuf, p->iSize, 0); + } + } + } + return rc; +} + +/* +** Close the file. +*/ +static int jrnlClose(sqlite3_file *pJfd){ + JournalFile *p = (JournalFile *)pJfd; + if( p->pReal ){ + sqlite3OsClose(p->pReal); + } + sqlite3_free(p->zBuf); + return SQLITE_OK; +} + +/* +** Read data from the file. +*/ +static int jrnlRead( + sqlite3_file *pJfd, /* The journal file from which to read */ + void *zBuf, /* Put the results here */ + int iAmt, /* Number of bytes to read */ + sqlite_int64 iOfst /* Begin reading at this offset */ +){ + int rc = SQLITE_OK; + JournalFile *p = (JournalFile *)pJfd; + if( p->pReal ){ + rc = sqlite3OsRead(p->pReal, zBuf, iAmt, iOfst); + }else{ + assert( iAmt+iOfst<=p->iSize ); + memcpy(zBuf, &p->zBuf[iOfst], iAmt); + } + return rc; +} + +/* +** Write data to the file. +*/ +static int jrnlWrite( + sqlite3_file *pJfd, /* The journal file into which to write */ + const void *zBuf, /* Take data to be written from here */ + int iAmt, /* Number of bytes to write */ + sqlite_int64 iOfst /* Begin writing at this offset into the file */ +){ + int rc = SQLITE_OK; + JournalFile *p = (JournalFile *)pJfd; + if( !p->pReal && (iOfst+iAmt)>p->nBuf ){ + rc = createFile(p); + } + if( rc==SQLITE_OK ){ + if( p->pReal ){ + rc = sqlite3OsWrite(p->pReal, zBuf, iAmt, iOfst); + }else{ + memcpy(&p->zBuf[iOfst], zBuf, iAmt); + if( p->iSize<(iOfst+iAmt) ){ + p->iSize = (iOfst+iAmt); + } + } + } + return rc; +} + +/* +** Truncate the file. +*/ +static int jrnlTruncate(sqlite3_file *pJfd, sqlite_int64 size){ + int rc = SQLITE_OK; + JournalFile *p = (JournalFile *)pJfd; + if( p->pReal ){ + rc = sqlite3OsTruncate(p->pReal, size); + }else if( sizeiSize ){ + p->iSize = size; + } + return rc; +} + +/* +** Sync the file. +*/ +static int jrnlSync(sqlite3_file *pJfd, int flags){ + int rc; + JournalFile *p = (JournalFile *)pJfd; + rc = createFile(p); + if( rc==SQLITE_OK ){ + rc = sqlite3OsSync(p->pReal, flags); + } + return rc; +} + +/* +** Query the size of the file in bytes. 
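+** Until createFile() has run, the size reported here is simply iSize,
+** the high-water mark of the in-memory buffer.  A hedged worked example
+** (the numbers are ours, not from this file): with nBuf==512, a 100-byte
+** write at offset 0 leaves pReal unset, so xFileSize reports 100 and
+** xRead is served from zBuf; the first xSync, or any write that would
+** extend past byte 512, creates and populates the real journal file.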
+*/ +static int jrnlFileSize(sqlite3_file *pJfd, sqlite_int64 *pSize){ + int rc = SQLITE_OK; + JournalFile *p = (JournalFile *)pJfd; + if( p->pReal ){ + rc = sqlite3OsFileSize(p->pReal, pSize); + }else{ + *pSize = (sqlite_int64) p->iSize; + } + return rc; +} + +/* +** Table of methods for JournalFile sqlite3_file object. +*/ +static struct sqlite3_io_methods JournalFileMethods = { + 1, /* iVersion */ + jrnlClose, /* xClose */ + jrnlRead, /* xRead */ + jrnlWrite, /* xWrite */ + jrnlTruncate, /* xTruncate */ + jrnlSync, /* xSync */ + jrnlFileSize, /* xFileSize */ + 0, /* xLock */ + 0, /* xUnlock */ + 0, /* xCheckReservedLock */ + 0, /* xFileControl */ + 0, /* xSectorSize */ + 0 /* xDeviceCharacteristics */ +}; + +/* +** Open a journal file. +*/ +int sqlite3JournalOpen( + sqlite3_vfs *pVfs, /* The VFS to use for actual file I/O */ + const char *zName, /* Name of the journal file */ + sqlite3_file *pJfd, /* Preallocated, blank file handle */ + int flags, /* Opening flags */ + int nBuf /* Bytes buffered before opening the file */ +){ + JournalFile *p = (JournalFile *)pJfd; + memset(p, 0, sqlite3JournalSize(pVfs)); + if( nBuf>0 ){ + p->zBuf = sqlite3MallocZero(nBuf); + if( !p->zBuf ){ + return SQLITE_NOMEM; + } + }else{ + return sqlite3OsOpen(pVfs, zName, pJfd, flags, 0); + } + p->pMethod = &JournalFileMethods; + p->nBuf = nBuf; + p->flags = flags; + p->zJournal = zName; + p->pVfs = pVfs; + return SQLITE_OK; +} + +/* +** If the argument p points to a JournalFile structure, and the underlying +** file has not yet been created, create it now. +*/ +int sqlite3JournalCreate(sqlite3_file *p){ + if( p->pMethods!=&JournalFileMethods ){ + return SQLITE_OK; + } + return createFile((JournalFile *)p); +} + +/* +** Return the number of bytes required to store a JournalFile that uses vfs +** pVfs to create the underlying on-disk files. +*/ +int sqlite3JournalSize(sqlite3_vfs *pVfs){ + return (pVfs->szOsFile+sizeof(JournalFile)); +} +#endif Added: external/sqlite-source-3.5.7.x/keywordhash.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/keywordhash.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,112 @@ +/***** This file contains automatically generated code ****** +** +** The code in this file has been automatically generated by +** +** $Header: /sqlite/sqlite/tool/mkkeywordhash.c,v 1.31 2007/07/30 18:26:20 rse Exp $ +** +** The code in this file implements a function that determines whether +** or not a given identifier is really an SQL keyword. The same thing +** might be implemented more directly using a hand-written hash table. +** But by using this automatically generated code, the size of the code +** is substantially reduced. This is important for embedded applications +** on platforms with limited memory. 
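+** As a hedged illustration (the identifiers tested are ours): the lookup
+** is case-insensitive and length-bounded, so
+**
+**     keywordCode("select", 6)    evaluates to TK_SELECT
+**     keywordCode("selects", 7)   evaluates to TK_ID  (not a keyword)
+**
+** and identifiers shorter than two characters are returned as TK_ID
+** without consulting the hash table at all.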
+*/ +/* Hash score: 165 */ +static int keywordCode(const char *z, int n){ + /* zText[] encodes 775 bytes of keywords in 526 bytes */ + static const char zText[526] = + "BEFOREIGNOREGEXPLAINSTEADDESCAPEACHECKEYCONSTRAINTERSECTABLEFT" + "HENDATABASELECTRANSACTIONATURALTERAISELSEXCEPTRIGGEREFERENCES" + "UNIQUERYATTACHAVINGROUPDATEMPORARYBEGINNEREINDEXCLUSIVEXISTSBETWEEN" + "OTNULLIKECASCADEFERRABLECASECOLLATECREATECURRENT_DATEDELETEDETACH" + "IMMEDIATEJOINSERTMATCHPLANALYZEPRAGMABORTVALUESVIRTUALIMITWHEN" + "WHERENAMEAFTEREPLACEANDEFAULTAUTOINCREMENTCASTCOLUMNCOMMITCONFLICT" + "CROSSCURRENT_TIMESTAMPRIMARYDEFERREDISTINCTDROPFAILFROMFULLGLOB" + "YIFINTOFFSETISNULLORDERESTRICTOUTERIGHTROLLBACKROWUNIONUSINGVACUUM" + "VIEWINITIALLY"; + static const unsigned char aHash[127] = { + 63, 92, 109, 61, 0, 38, 0, 0, 69, 0, 64, 0, 0, + 102, 4, 65, 7, 0, 108, 72, 103, 99, 0, 22, 0, 0, + 113, 0, 111, 106, 0, 18, 80, 0, 1, 0, 0, 56, 57, + 0, 55, 11, 0, 33, 77, 89, 0, 110, 88, 0, 0, 45, + 0, 90, 54, 0, 20, 0, 114, 34, 19, 0, 10, 97, 28, + 83, 0, 0, 116, 93, 47, 115, 41, 12, 44, 0, 78, 0, + 87, 29, 0, 86, 0, 0, 0, 82, 79, 84, 75, 96, 6, + 14, 95, 0, 68, 0, 21, 76, 98, 27, 0, 112, 67, 104, + 49, 40, 71, 0, 0, 81, 100, 0, 107, 0, 15, 0, 0, + 24, 0, 73, 42, 50, 0, 16, 48, 0, 37, + }; + static const unsigned char aNext[116] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0, + 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, + 17, 0, 0, 0, 36, 39, 0, 0, 25, 0, 0, 31, 0, + 0, 0, 43, 52, 0, 0, 0, 53, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 51, 0, 0, 0, 0, 26, 0, 8, 46, + 2, 0, 0, 0, 0, 0, 0, 0, 3, 58, 66, 0, 13, + 0, 91, 85, 0, 94, 0, 74, 0, 0, 62, 0, 35, 101, + 0, 0, 105, 23, 30, 60, 70, 0, 0, 59, 0, 0, + }; + static const unsigned char aLen[116] = { + 6, 7, 3, 6, 6, 7, 7, 3, 4, 6, 4, 5, 3, + 10, 9, 5, 4, 4, 3, 8, 2, 6, 11, 2, 7, 5, + 5, 4, 6, 7, 10, 6, 5, 6, 6, 5, 6, 4, 9, + 2, 5, 5, 7, 5, 9, 6, 7, 7, 3, 4, 4, 7, + 3, 10, 4, 7, 6, 12, 6, 6, 9, 4, 6, 5, 4, + 7, 6, 5, 6, 7, 5, 4, 5, 6, 5, 7, 3, 7, + 13, 2, 2, 4, 6, 6, 8, 5, 17, 12, 7, 8, 8, + 2, 4, 4, 4, 4, 4, 2, 2, 4, 6, 2, 3, 6, + 5, 8, 5, 5, 8, 3, 5, 5, 6, 4, 9, 3, + }; + static const unsigned short int aOffset[116] = { + 0, 2, 2, 6, 10, 13, 18, 23, 25, 26, 31, 33, 37, + 40, 47, 55, 58, 61, 63, 65, 70, 71, 76, 85, 86, 91, + 95, 99, 102, 107, 113, 123, 126, 131, 136, 141, 144, 148, 148, + 152, 157, 160, 164, 166, 169, 177, 183, 189, 189, 192, 195, 199, + 200, 204, 214, 218, 225, 231, 243, 249, 255, 264, 266, 272, 277, + 279, 286, 291, 296, 302, 308, 313, 317, 320, 326, 330, 337, 339, + 346, 348, 350, 359, 363, 369, 375, 383, 388, 388, 404, 411, 418, + 419, 426, 430, 434, 438, 442, 445, 447, 449, 452, 452, 455, 458, + 464, 468, 476, 480, 485, 493, 496, 501, 506, 512, 516, 521, + }; + static const unsigned char aCode[116] = { + TK_BEFORE, TK_FOREIGN, TK_FOR, TK_IGNORE, TK_LIKE_KW, + TK_EXPLAIN, TK_INSTEAD, TK_ADD, TK_DESC, TK_ESCAPE, + TK_EACH, TK_CHECK, TK_KEY, TK_CONSTRAINT, TK_INTERSECT, + TK_TABLE, TK_JOIN_KW, TK_THEN, TK_END, TK_DATABASE, + TK_AS, TK_SELECT, TK_TRANSACTION,TK_ON, TK_JOIN_KW, + TK_ALTER, TK_RAISE, TK_ELSE, TK_EXCEPT, TK_TRIGGER, + TK_REFERENCES, TK_UNIQUE, TK_QUERY, TK_ATTACH, TK_HAVING, + TK_GROUP, TK_UPDATE, TK_TEMP, TK_TEMP, TK_OR, + TK_BEGIN, TK_JOIN_KW, TK_REINDEX, TK_INDEX, TK_EXCLUSIVE, + TK_EXISTS, TK_BETWEEN, TK_NOTNULL, TK_NOT, TK_NULL, + TK_LIKE_KW, TK_CASCADE, TK_ASC, TK_DEFERRABLE, TK_CASE, + TK_COLLATE, TK_CREATE, TK_CTIME_KW, TK_DELETE, TK_DETACH, + TK_IMMEDIATE, TK_JOIN, TK_INSERT, TK_MATCH, TK_PLAN, + TK_ANALYZE, 
TK_PRAGMA, TK_ABORT, TK_VALUES, TK_VIRTUAL, + TK_LIMIT, TK_WHEN, TK_WHERE, TK_RENAME, TK_AFTER, + TK_REPLACE, TK_AND, TK_DEFAULT, TK_AUTOINCR, TK_TO, + TK_IN, TK_CAST, TK_COLUMNKW, TK_COMMIT, TK_CONFLICT, + TK_JOIN_KW, TK_CTIME_KW, TK_CTIME_KW, TK_PRIMARY, TK_DEFERRED, + TK_DISTINCT, TK_IS, TK_DROP, TK_FAIL, TK_FROM, + TK_JOIN_KW, TK_LIKE_KW, TK_BY, TK_IF, TK_INTO, + TK_OFFSET, TK_OF, TK_SET, TK_ISNULL, TK_ORDER, + TK_RESTRICT, TK_JOIN_KW, TK_JOIN_KW, TK_ROLLBACK, TK_ROW, + TK_UNION, TK_USING, TK_VACUUM, TK_VIEW, TK_INITIALLY, + TK_ALL, + }; + int h, i; + if( n<2 ) return TK_ID; + h = ((charMap(z[0])*4) ^ + (charMap(z[n-1])*3) ^ + n) % 127; + for(i=((int)aHash[h])-1; i>=0; i=((int)aNext[i])-1){ + if( aLen[i]==n && sqlite3StrNICmp(&zText[aOffset[i]],z,n)==0 ){ + return aCode[i]; + } + } + return TK_ID; +} +int sqlite3KeywordCode(const unsigned char *z, int n){ + return keywordCode((char*)z, n); +} Added: external/sqlite-source-3.5.7.x/legacy.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/legacy.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,142 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Main file for the SQLite library. The routines in this file +** implement the programmer interface to the library. Routines in +** other files are for internal use by SQLite and should not be +** accessed by users of the library. +** +** $Id: legacy.c,v 1.23 2008/02/13 18:25:27 danielk1977 Exp $ +*/ + +#include "sqliteInt.h" +#include + +/* +** Execute SQL code. Return one of the SQLITE_ success/failure +** codes. Also write an error message into memory obtained from +** malloc() and make *pzErrMsg point to that message. +** +** If the SQL is a query, then for each row in the query result +** the xCallback() function is called. pArg becomes the first +** argument to xCallback(). If xCallback=NULL then no callback +** is invoked, even for queries. 
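+** A hedged usage sketch (the callback, table name and use of printf are
+** ours, not part of this file; assumes <stdio.h>):
+**
+**     static int show_row(void *pArg, int nCol, char **azVal, char **azCol){
+**       int i;
+**       for(i=0; i<nCol; i++){
+**         printf("%s = %s\n", azCol[i], azVal[i] ? azVal[i] : "NULL");
+**       }
+**       return 0;
+**     }
+**
+**     char *zErr = 0;
+**     int rc = sqlite3_exec(db, "SELECT * FROM t1", show_row, 0, &zErr);
+**     if( rc!=SQLITE_OK ){ fprintf(stderr, "%s\n", zErr); sqlite3_free(zErr); }
+**
+** Returning non-zero from the callback makes sqlite3_exec() stop and
+** return SQLITE_ABORT; the error message, when set, must be released
+** with sqlite3_free().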
+*/ +int sqlite3_exec( + sqlite3 *db, /* The database on which the SQL executes */ + const char *zSql, /* The SQL to be executed */ + sqlite3_callback xCallback, /* Invoke this callback routine */ + void *pArg, /* First argument to xCallback() */ + char **pzErrMsg /* Write error messages here */ +){ + int rc = SQLITE_OK; + const char *zLeftover; + sqlite3_stmt *pStmt = 0; + char **azCols = 0; + + int nRetry = 0; + int nCallback; + + if( zSql==0 ) return SQLITE_OK; + + sqlite3_mutex_enter(db->mutex); + while( (rc==SQLITE_OK || (rc==SQLITE_SCHEMA && (++nRetry)<2)) && zSql[0] ){ + int nCol; + char **azVals = 0; + + pStmt = 0; + rc = sqlite3_prepare(db, zSql, -1, &pStmt, &zLeftover); + assert( rc==SQLITE_OK || pStmt==0 ); + if( rc!=SQLITE_OK ){ + continue; + } + if( !pStmt ){ + /* this happens for a comment or white-space */ + zSql = zLeftover; + continue; + } + + nCallback = 0; + + nCol = sqlite3_column_count(pStmt); + azCols = sqlite3DbMallocZero(db, 2*nCol*sizeof(const char *) + 1); + if( azCols==0 ){ + goto exec_out; + } + + while( 1 ){ + int i; + rc = sqlite3_step(pStmt); + + /* Invoke the callback function if required */ + if( xCallback && (SQLITE_ROW==rc || + (SQLITE_DONE==rc && !nCallback && db->flags&SQLITE_NullCallback)) ){ + if( 0==nCallback ){ + for(i=0; imallocFailed = 1; + goto exec_out; + } + } + nCallback++; + } + if( rc==SQLITE_ROW ){ + azVals = &azCols[nCol]; + for(i=0; imallocFailed = 1; + goto exec_out; + } + } + } + if( xCallback(pArg, nCol, azVals, azCols) ){ + rc = SQLITE_ABORT; + goto exec_out; + } + } + + if( rc!=SQLITE_ROW ){ + rc = sqlite3_finalize(pStmt); + pStmt = 0; + if( rc!=SQLITE_SCHEMA ){ + nRetry = 0; + zSql = zLeftover; + while( isspace((unsigned char)zSql[0]) ) zSql++; + } + break; + } + } + + sqlite3_free(azCols); + azCols = 0; + } + +exec_out: + if( pStmt ) sqlite3_finalize(pStmt); + if( azCols ) sqlite3_free(azCols); + + rc = sqlite3ApiExit(db, rc); + if( rc!=SQLITE_OK && rc==sqlite3_errcode(db) && pzErrMsg ){ + int nErrMsg = 1 + strlen(sqlite3_errmsg(db)); + *pzErrMsg = sqlite3_malloc(nErrMsg); + if( *pzErrMsg ){ + memcpy(*pzErrMsg, sqlite3_errmsg(db), nErrMsg); + } + }else if( pzErrMsg ){ + *pzErrMsg = 0; + } + + assert( (rc&db->errMask)==rc ); + sqlite3_mutex_leave(db->mutex); + return rc; +} Added: external/sqlite-source-3.5.7.x/loadext.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/loadext.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,518 @@ +/* +** 2006 June 7 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code used to dynamically load extensions into +** the SQLite library. +*/ +#ifndef SQLITE_OMIT_LOAD_EXTENSION + +#ifndef SQLITE_CORE + #define SQLITE_CORE 1 /* Disable the API redefinition in sqlite3ext.h */ +#endif +#include "sqlite3ext.h" +#include "sqliteInt.h" +#include +#include + +/* +** Some API routines are omitted when various features are +** excluded from a build of SQLite. Substitute a NULL pointer +** for any missing APIs. 
+*/ +#ifndef SQLITE_ENABLE_COLUMN_METADATA +# define sqlite3_column_database_name 0 +# define sqlite3_column_database_name16 0 +# define sqlite3_column_table_name 0 +# define sqlite3_column_table_name16 0 +# define sqlite3_column_origin_name 0 +# define sqlite3_column_origin_name16 0 +# define sqlite3_table_column_metadata 0 +#endif + +#ifdef SQLITE_OMIT_AUTHORIZATION +# define sqlite3_set_authorizer 0 +#endif + +#ifdef SQLITE_OMIT_UTF16 +# define sqlite3_bind_text16 0 +# define sqlite3_collation_needed16 0 +# define sqlite3_column_decltype16 0 +# define sqlite3_column_name16 0 +# define sqlite3_column_text16 0 +# define sqlite3_complete16 0 +# define sqlite3_create_collation16 0 +# define sqlite3_create_function16 0 +# define sqlite3_errmsg16 0 +# define sqlite3_open16 0 +# define sqlite3_prepare16 0 +# define sqlite3_prepare16_v2 0 +# define sqlite3_result_error16 0 +# define sqlite3_result_text16 0 +# define sqlite3_result_text16be 0 +# define sqlite3_result_text16le 0 +# define sqlite3_value_text16 0 +# define sqlite3_value_text16be 0 +# define sqlite3_value_text16le 0 +# define sqlite3_column_database_name16 0 +# define sqlite3_column_table_name16 0 +# define sqlite3_column_origin_name16 0 +#endif + +#ifdef SQLITE_OMIT_COMPLETE +# define sqlite3_complete 0 +# define sqlite3_complete16 0 +#endif + +#ifdef SQLITE_OMIT_PROGRESS_CALLBACK +# define sqlite3_progress_handler 0 +#endif + +#ifdef SQLITE_OMIT_VIRTUALTABLE +# define sqlite3_create_module 0 +# define sqlite3_create_module_v2 0 +# define sqlite3_declare_vtab 0 +#endif + +#ifdef SQLITE_OMIT_SHARED_CACHE +# define sqlite3_enable_shared_cache 0 +#endif + +#ifdef SQLITE_OMIT_TRACE +# define sqlite3_profile 0 +# define sqlite3_trace 0 +#endif + +#ifdef SQLITE_OMIT_GET_TABLE +# define sqlite3_free_table 0 +# define sqlite3_get_table 0 +#endif + +#ifdef SQLITE_OMIT_INCRBLOB +#define sqlite3_bind_zeroblob 0 +#define sqlite3_blob_bytes 0 +#define sqlite3_blob_close 0 +#define sqlite3_blob_open 0 +#define sqlite3_blob_read 0 +#define sqlite3_blob_write 0 +#endif + +/* +** The following structure contains pointers to all SQLite API routines. +** A pointer to this structure is passed into extensions when they are +** loaded so that the extension can make calls back into the SQLite +** library. +** +** When adding new APIs, add them to the bottom of this structure +** in order to preserve backwards compatibility. +** +** Extensions that use newer APIs should first call the +** sqlite3_libversion_number() to make sure that the API they +** intend to use is supported by the library. Extensions should +** also check to make sure that the pointer to the function is +** not NULL before calling it. 
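+** A hedged sketch of that defensive pattern as seen from an extension
+** (the version threshold and choice of entry point are illustrative,
+** not taken from this file):
+**
+**     if( sqlite3_libversion_number()>=3005000 && sqlite3_prepare_v2!=0 ){
+**       rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
+**     }else{
+**       rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
+**     }
+**
+** Inside a loadable extension these names are macros that resolve through
+** the sqlite3_api_routines pointer, so the NULL test really does probe a
+** slot of the structure defined below.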
+*/ +const sqlite3_api_routines sqlite3Apis = { + sqlite3_aggregate_context, + sqlite3_aggregate_count, + sqlite3_bind_blob, + sqlite3_bind_double, + sqlite3_bind_int, + sqlite3_bind_int64, + sqlite3_bind_null, + sqlite3_bind_parameter_count, + sqlite3_bind_parameter_index, + sqlite3_bind_parameter_name, + sqlite3_bind_text, + sqlite3_bind_text16, + sqlite3_bind_value, + sqlite3_busy_handler, + sqlite3_busy_timeout, + sqlite3_changes, + sqlite3_close, + sqlite3_collation_needed, + sqlite3_collation_needed16, + sqlite3_column_blob, + sqlite3_column_bytes, + sqlite3_column_bytes16, + sqlite3_column_count, + sqlite3_column_database_name, + sqlite3_column_database_name16, + sqlite3_column_decltype, + sqlite3_column_decltype16, + sqlite3_column_double, + sqlite3_column_int, + sqlite3_column_int64, + sqlite3_column_name, + sqlite3_column_name16, + sqlite3_column_origin_name, + sqlite3_column_origin_name16, + sqlite3_column_table_name, + sqlite3_column_table_name16, + sqlite3_column_text, + sqlite3_column_text16, + sqlite3_column_type, + sqlite3_column_value, + sqlite3_commit_hook, + sqlite3_complete, + sqlite3_complete16, + sqlite3_create_collation, + sqlite3_create_collation16, + sqlite3_create_function, + sqlite3_create_function16, + sqlite3_create_module, + sqlite3_data_count, + sqlite3_db_handle, + sqlite3_declare_vtab, + sqlite3_enable_shared_cache, + sqlite3_errcode, + sqlite3_errmsg, + sqlite3_errmsg16, + sqlite3_exec, + sqlite3_expired, + sqlite3_finalize, + sqlite3_free, + sqlite3_free_table, + sqlite3_get_autocommit, + sqlite3_get_auxdata, + sqlite3_get_table, + 0, /* Was sqlite3_global_recover(), but that function is deprecated */ + sqlite3_interrupt, + sqlite3_last_insert_rowid, + sqlite3_libversion, + sqlite3_libversion_number, + sqlite3_malloc, + sqlite3_mprintf, + sqlite3_open, + sqlite3_open16, + sqlite3_prepare, + sqlite3_prepare16, + sqlite3_profile, + sqlite3_progress_handler, + sqlite3_realloc, + sqlite3_reset, + sqlite3_result_blob, + sqlite3_result_double, + sqlite3_result_error, + sqlite3_result_error16, + sqlite3_result_int, + sqlite3_result_int64, + sqlite3_result_null, + sqlite3_result_text, + sqlite3_result_text16, + sqlite3_result_text16be, + sqlite3_result_text16le, + sqlite3_result_value, + sqlite3_rollback_hook, + sqlite3_set_authorizer, + sqlite3_set_auxdata, + sqlite3_snprintf, + sqlite3_step, + sqlite3_table_column_metadata, + sqlite3_thread_cleanup, + sqlite3_total_changes, + sqlite3_trace, + sqlite3_transfer_bindings, + sqlite3_update_hook, + sqlite3_user_data, + sqlite3_value_blob, + sqlite3_value_bytes, + sqlite3_value_bytes16, + sqlite3_value_double, + sqlite3_value_int, + sqlite3_value_int64, + sqlite3_value_numeric_type, + sqlite3_value_text, + sqlite3_value_text16, + sqlite3_value_text16be, + sqlite3_value_text16le, + sqlite3_value_type, + sqlite3_vmprintf, + /* + ** The original API set ends here. All extensions can call any + ** of the APIs above provided that the pointer is not NULL. But + ** before calling APIs that follow, extension should check the + ** sqlite3_libversion_number() to make sure they are dealing with + ** a library that is new enough to support that API. 
+ ************************************************************************* + */ + sqlite3_overload_function, + + /* + ** Added after 3.3.13 + */ + sqlite3_prepare_v2, + sqlite3_prepare16_v2, + sqlite3_clear_bindings, + + /* + ** Added for 3.4.1 + */ + sqlite3_create_module_v2, + + /* + ** Added for 3.5.0 + */ + sqlite3_bind_zeroblob, + sqlite3_blob_bytes, + sqlite3_blob_close, + sqlite3_blob_open, + sqlite3_blob_read, + sqlite3_blob_write, + sqlite3_create_collation_v2, + sqlite3_file_control, + sqlite3_memory_highwater, + sqlite3_memory_used, +#ifdef SQLITE_MUTEX_NOOP + 0, + 0, + 0, + 0, + 0, +#else + sqlite3_mutex_alloc, + sqlite3_mutex_enter, + sqlite3_mutex_free, + sqlite3_mutex_leave, + sqlite3_mutex_try, +#endif + sqlite3_open_v2, + sqlite3_release_memory, + sqlite3_result_error_nomem, + sqlite3_result_error_toobig, + sqlite3_sleep, + sqlite3_soft_heap_limit, + sqlite3_vfs_find, + sqlite3_vfs_register, + sqlite3_vfs_unregister, +}; + +/* +** Attempt to load an SQLite extension library contained in the file +** zFile. The entry point is zProc. zProc may be 0 in which case a +** default entry point name (sqlite3_extension_init) is used. Use +** of the default name is recommended. +** +** Return SQLITE_OK on success and SQLITE_ERROR if something goes wrong. +** +** If an error occurs and pzErrMsg is not 0, then fill *pzErrMsg with +** error message text. The calling function should free this memory +** by calling sqlite3_free(). +*/ +static int sqlite3LoadExtension( + sqlite3 *db, /* Load the extension into this database connection */ + const char *zFile, /* Name of the shared library containing extension */ + const char *zProc, /* Entry point. Use "sqlite3_extension_init" if 0 */ + char **pzErrMsg /* Put error message here if not 0 */ +){ + sqlite3_vfs *pVfs = db->pVfs; + void *handle; + int (*xInit)(sqlite3*,char**,const sqlite3_api_routines*); + char *zErrmsg = 0; + void **aHandle; + + /* Ticket #1863. To avoid a creating security problems for older + ** applications that relink against newer versions of SQLite, the + ** ability to run load_extension is turned off by default. One + ** must call sqlite3_enable_load_extension() to turn on extension + ** loading. Otherwise you get the following error. + */ + if( (db->flags & SQLITE_LoadExtension)==0 ){ + if( pzErrMsg ){ + *pzErrMsg = sqlite3_mprintf("not authorized"); + } + return SQLITE_ERROR; + } + + if( zProc==0 ){ + zProc = "sqlite3_extension_init"; + } + + handle = sqlite3OsDlOpen(pVfs, zFile); + if( handle==0 ){ + if( pzErrMsg ){ + char zErr[256]; + zErr[sizeof(zErr)-1] = '\0'; + sqlite3_snprintf(sizeof(zErr)-1, zErr, + "unable to open shared library [%s]", zFile); + sqlite3OsDlError(pVfs, sizeof(zErr)-1, zErr); + *pzErrMsg = sqlite3DbStrDup(db, zErr); + } + return SQLITE_ERROR; + } + xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*)) + sqlite3OsDlSym(pVfs, handle, zProc); + if( xInit==0 ){ + if( pzErrMsg ){ + char zErr[256]; + zErr[sizeof(zErr)-1] = '\0'; + sqlite3_snprintf(sizeof(zErr)-1, zErr, + "no entry point [%s] in shared library [%s]", zProc,zFile); + sqlite3OsDlError(pVfs, sizeof(zErr)-1, zErr); + *pzErrMsg = sqlite3DbStrDup(db, zErr); + sqlite3OsDlClose(pVfs, handle); + } + return SQLITE_ERROR; + }else if( xInit(db, &zErrmsg, &sqlite3Apis) ){ + if( pzErrMsg ){ + *pzErrMsg = sqlite3_mprintf("error during initialization: %s", zErrmsg); + } + sqlite3_free(zErrmsg); + sqlite3OsDlClose(pVfs, handle); + return SQLITE_ERROR; + } + + /* Append the new shared library handle to the db->aExtension array. 
*/ + db->nExtension++; + aHandle = sqlite3DbMallocZero(db, sizeof(handle)*db->nExtension); + if( aHandle==0 ){ + return SQLITE_NOMEM; + } + if( db->nExtension>0 ){ + memcpy(aHandle, db->aExtension, sizeof(handle)*(db->nExtension-1)); + } + sqlite3_free(db->aExtension); + db->aExtension = aHandle; + + db->aExtension[db->nExtension-1] = handle; + return SQLITE_OK; +} +int sqlite3_load_extension( + sqlite3 *db, /* Load the extension into this database connection */ + const char *zFile, /* Name of the shared library containing extension */ + const char *zProc, /* Entry point. Use "sqlite3_extension_init" if 0 */ + char **pzErrMsg /* Put error message here if not 0 */ +){ + int rc; + sqlite3_mutex_enter(db->mutex); + rc = sqlite3LoadExtension(db, zFile, zProc, pzErrMsg); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +/* +** Call this routine when the database connection is closing in order +** to clean up loaded extensions +*/ +void sqlite3CloseExtensions(sqlite3 *db){ + int i; + assert( sqlite3_mutex_held(db->mutex) ); + for(i=0; inExtension; i++){ + sqlite3OsDlClose(db->pVfs, db->aExtension[i]); + } + sqlite3_free(db->aExtension); +} + +/* +** Enable or disable extension loading. Extension loading is disabled by +** default so as not to open security holes in older applications. +*/ +int sqlite3_enable_load_extension(sqlite3 *db, int onoff){ + sqlite3_mutex_enter(db->mutex); + if( onoff ){ + db->flags |= SQLITE_LoadExtension; + }else{ + db->flags &= ~SQLITE_LoadExtension; + } + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} + +/* +** The following object holds the list of automatically loaded +** extensions. +** +** This list is shared across threads. The SQLITE_MUTEX_STATIC_MASTER +** mutex must be held while accessing this list. +*/ +static struct { + int nExt; /* Number of entries in aExt[] */ + void **aExt; /* Pointers to the extension init functions */ +} autoext = { 0, 0 }; + + +/* +** Register a statically linked extension that is automatically +** loaded by every new database connection. +*/ +int sqlite3_auto_extension(void *xInit){ + int i; + int rc = SQLITE_OK; + sqlite3_mutex *mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); + sqlite3_mutex_enter(mutex); + for(i=0; i=autoext.nExt ){ + xInit = 0; + go = 0; + }else{ + xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*)) + autoext.aExt[i]; + } + sqlite3_mutex_leave(mutex); + if( xInit && xInit(db, &zErrmsg, &sqlite3Apis) ){ + sqlite3Error(db, SQLITE_ERROR, + "automatic extension loading failed: %s", zErrmsg); + go = 0; + rc = SQLITE_ERROR; + sqlite3_free(zErrmsg); + } + } + return rc; +} + +#endif /* SQLITE_OMIT_LOAD_EXTENSION */ Added: external/sqlite-source-3.5.7.x/main.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/main.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,1525 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Main file for the SQLite library. The routines in this file +** implement the programmer interface to the library. Routines in +** other files are for internal use by SQLite and should not be +** accessed by users of the library. 
+** +** $Id: main.c,v 1.421 2008/03/07 21:37:19 drh Exp $ +*/ +#include "sqliteInt.h" +#include +#ifdef SQLITE_ENABLE_FTS3 +# include "fts3.h" +#endif + +/* +** The version of the library +*/ +const char sqlite3_version[] = SQLITE_VERSION; +const char *sqlite3_libversion(void){ return sqlite3_version; } +int sqlite3_libversion_number(void){ return SQLITE_VERSION_NUMBER; } +int sqlite3_threadsafe(void){ return SQLITE_THREADSAFE; } + +/* +** If the following function pointer is not NULL and if +** SQLITE_ENABLE_IOTRACE is enabled, then messages describing +** I/O active are written using this function. These messages +** are intended for debugging activity only. +*/ +void (*sqlite3IoTrace)(const char*, ...) = 0; + +/* +** If the following global variable points to a string which is the +** name of a directory, then that directory will be used to store +** temporary files. +** +** See also the "PRAGMA temp_store_directory" SQL command. +*/ +char *sqlite3_temp_directory = 0; + + +/* +** Return true if the buffer z[0..n-1] contains all spaces. +*/ +static int allSpaces(const char *z, int n){ + while( n>0 && z[--n]==' ' ){} + return n==0; +} + +/* +** This is the default collating function named "BINARY" which is always +** available. +** +** If the padFlag argument is not NULL then space padding at the end +** of strings is ignored. This implements the RTRIM collation. +*/ +static int binCollFunc( + void *padFlag, + int nKey1, const void *pKey1, + int nKey2, const void *pKey2 +){ + int rc, n; + n = nKey1lastRowid; +} + +/* +** Return the number of changes in the most recent call to sqlite3_exec(). +*/ +int sqlite3_changes(sqlite3 *db){ + return db->nChange; +} + +/* +** Return the number of changes since the database handle was opened. +*/ +int sqlite3_total_changes(sqlite3 *db){ + return db->nTotalChange; +} + +/* +** Close an existing SQLite database +*/ +int sqlite3_close(sqlite3 *db){ + HashElem *i; + int j; + + if( !db ){ + return SQLITE_OK; + } + if( !sqlite3SafetyCheckSickOrOk(db) ){ + return SQLITE_MISUSE; + } + sqlite3_mutex_enter(db->mutex); + +#ifdef SQLITE_SSE + { + extern void sqlite3SseCleanup(sqlite3*); + sqlite3SseCleanup(db); + } +#endif + + sqlite3ResetInternalSchema(db, 0); + + /* If a transaction is open, the ResetInternalSchema() call above + ** will not have called the xDisconnect() method on any virtual + ** tables in the db->aVTrans[] array. The following sqlite3VtabRollback() + ** call will do so. We need to do this before the check for active + ** SQL statements below, as the v-table implementation may be storing + ** some prepared statements internally. + */ + sqlite3VtabRollback(db); + + /* If there are any outstanding VMs, return SQLITE_BUSY. 
*/ + if( db->pVdbe ){ + sqlite3Error(db, SQLITE_BUSY, + "Unable to close due to unfinalised statements"); + sqlite3_mutex_leave(db->mutex); + return SQLITE_BUSY; + } + assert( sqlite3SafetyCheckSickOrOk(db) ); + + for(j=0; jnDb; j++){ + struct Db *pDb = &db->aDb[j]; + if( pDb->pBt ){ + sqlite3BtreeClose(pDb->pBt); + pDb->pBt = 0; + if( j!=1 ){ + pDb->pSchema = 0; + } + } + } + sqlite3ResetInternalSchema(db, 0); + assert( db->nDb<=2 ); + assert( db->aDb==db->aDbStatic ); + for(i=sqliteHashFirst(&db->aFunc); i; i=sqliteHashNext(i)){ + FuncDef *pFunc, *pNext; + for(pFunc = (FuncDef*)sqliteHashData(i); pFunc; pFunc=pNext){ + pNext = pFunc->pNext; + sqlite3_free(pFunc); + } + } + + for(i=sqliteHashFirst(&db->aCollSeq); i; i=sqliteHashNext(i)){ + CollSeq *pColl = (CollSeq *)sqliteHashData(i); + /* Invoke any destructors registered for collation sequence user data. */ + for(j=0; j<3; j++){ + if( pColl[j].xDel ){ + pColl[j].xDel(pColl[j].pUser); + } + } + sqlite3_free(pColl); + } + sqlite3HashClear(&db->aCollSeq); +#ifndef SQLITE_OMIT_VIRTUALTABLE + for(i=sqliteHashFirst(&db->aModule); i; i=sqliteHashNext(i)){ + Module *pMod = (Module *)sqliteHashData(i); + if( pMod->xDestroy ){ + pMod->xDestroy(pMod->pAux); + } + sqlite3_free(pMod); + } + sqlite3HashClear(&db->aModule); +#endif + + sqlite3HashClear(&db->aFunc); + sqlite3Error(db, SQLITE_OK, 0); /* Deallocates any cached error strings. */ + if( db->pErr ){ + sqlite3ValueFree(db->pErr); + } + sqlite3CloseExtensions(db); + + db->magic = SQLITE_MAGIC_ERROR; + + /* The temp-database schema is allocated differently from the other schema + ** objects (using sqliteMalloc() directly, instead of sqlite3BtreeSchema()). + ** So it needs to be freed here. Todo: Why not roll the temp schema into + ** the same sqliteMalloc() as the one that allocates the database + ** structure? + */ + sqlite3_free(db->aDb[1].pSchema); + sqlite3_mutex_leave(db->mutex); + db->magic = SQLITE_MAGIC_CLOSED; + sqlite3_mutex_free(db->mutex); + sqlite3_free(db); + return SQLITE_OK; +} + +/* +** Rollback all database files. +*/ +void sqlite3RollbackAll(sqlite3 *db){ + int i; + int inTrans = 0; + assert( sqlite3_mutex_held(db->mutex) ); + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, 1); + for(i=0; inDb; i++){ + if( db->aDb[i].pBt ){ + if( sqlite3BtreeIsInTrans(db->aDb[i].pBt) ){ + inTrans = 1; + } + sqlite3BtreeRollback(db->aDb[i].pBt); + db->aDb[i].inTrans = 0; + } + } + sqlite3VtabRollback(db); + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, 0); + + if( db->flags&SQLITE_InternChanges ){ + sqlite3ExpirePreparedStatements(db); + sqlite3ResetInternalSchema(db, 0); + } + + /* If one has been configured, invoke the rollback-hook callback */ + if( db->xRollbackCallback && (inTrans || !db->autoCommit) ){ + db->xRollbackCallback(db->pRollbackArg); + } +} + +/* +** Return a static string that describes the kind of error specified in the +** argument. 
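+** Because the switch below masks the code with 0xff, extended result
+** codes collapse to the message of their base code.  A hedged
+** illustration (the shifted constant is ours, mirroring how extended
+** I/O error codes are constructed):
+**
+**     sqlite3ErrStr(SQLITE_BUSY)              "database is locked"
+**     sqlite3ErrStr(SQLITE_IOERR | (1<<8))    still "disk I/O error"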
+*/ +const char *sqlite3ErrStr(int rc){ + const char *z; + switch( rc & 0xff ){ + case SQLITE_ROW: + case SQLITE_DONE: + case SQLITE_OK: z = "not an error"; break; + case SQLITE_ERROR: z = "SQL logic error or missing database"; break; + case SQLITE_PERM: z = "access permission denied"; break; + case SQLITE_ABORT: z = "callback requested query abort"; break; + case SQLITE_BUSY: z = "database is locked"; break; + case SQLITE_LOCKED: z = "database table is locked"; break; + case SQLITE_NOMEM: z = "out of memory"; break; + case SQLITE_READONLY: z = "attempt to write a readonly database"; break; + case SQLITE_INTERRUPT: z = "interrupted"; break; + case SQLITE_IOERR: z = "disk I/O error"; break; + case SQLITE_CORRUPT: z = "database disk image is malformed"; break; + case SQLITE_FULL: z = "database or disk is full"; break; + case SQLITE_CANTOPEN: z = "unable to open database file"; break; + case SQLITE_EMPTY: z = "table contains no data"; break; + case SQLITE_SCHEMA: z = "database schema has changed"; break; + case SQLITE_TOOBIG: z = "String or BLOB exceeded size limit"; break; + case SQLITE_CONSTRAINT: z = "constraint failed"; break; + case SQLITE_MISMATCH: z = "datatype mismatch"; break; + case SQLITE_MISUSE: z = "library routine called out of sequence";break; + case SQLITE_NOLFS: z = "kernel lacks large file support"; break; + case SQLITE_AUTH: z = "authorization denied"; break; + case SQLITE_FORMAT: z = "auxiliary database format error"; break; + case SQLITE_RANGE: z = "bind or column index out of range"; break; + case SQLITE_NOTADB: z = "file is encrypted or is not a database";break; + default: z = "unknown error"; break; + } + return z; +} + +/* +** This routine implements a busy callback that sleeps and tries +** again until a timeout value is reached. The timeout value is +** an integer number of milliseconds passed in as the first +** argument. +*/ +static int sqliteDefaultBusyCallback( + void *ptr, /* Database connection */ + int count /* Number of times table has been busy */ +){ +#if OS_WIN || (defined(HAVE_USLEEP) && HAVE_USLEEP) + static const u8 delays[] = + { 1, 2, 5, 10, 15, 20, 25, 25, 25, 50, 50, 100 }; + static const u8 totals[] = + { 0, 1, 3, 8, 18, 33, 53, 78, 103, 128, 178, 228 }; +# define NDELAY (sizeof(delays)/sizeof(delays[0])) + sqlite3 *db = (sqlite3 *)ptr; + int timeout = db->busyTimeout; + int delay, prior; + + assert( count>=0 ); + if( count < NDELAY ){ + delay = delays[count]; + prior = totals[count]; + }else{ + delay = delays[NDELAY-1]; + prior = totals[NDELAY-1] + delay*(count-(NDELAY-1)); + } + if( prior + delay > timeout ){ + delay = timeout - prior; + if( delay<=0 ) return 0; + } + sqlite3OsSleep(db->pVfs, delay*1000); + return 1; +#else + sqlite3 *db = (sqlite3 *)ptr; + int timeout = ((sqlite3 *)ptr)->busyTimeout; + if( (count+1)*1000 > timeout ){ + return 0; + } + sqlite3OsSleep(db->pVfs, 1000000); + return 1; +#endif +} + +/* +** Invoke the given busy handler. +** +** This routine is called when an operation failed with a lock. +** If this routine returns non-zero, the lock is retried. If it +** returns 0, the operation aborts with an SQLITE_BUSY error. +*/ +int sqlite3InvokeBusyHandler(BusyHandler *p){ + int rc; + if( p==0 || p->xFunc==0 || p->nBusy<0 ) return 0; + rc = p->xFunc(p->pArg, p->nBusy); + if( rc==0 ){ + p->nBusy = -1; + }else{ + p->nBusy++; + } + return rc; +} + +/* +** This routine sets the busy callback for an Sqlite database to the +** given callback function with the given argument. 
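+** A hedged usage sketch (the callback name and retry limit are ours,
+** not part of this file):
+**
+**     static int retry_five_times(void *pArg, int nBusy){
+**       return nBusy<5;
+**     }
+**     sqlite3_busy_handler(db, retry_five_times, 0);
+**
+** A non-zero return asks for another attempt at the lock; returning 0
+** lets the pending operation fail with SQLITE_BUSY, as described for
+** sqlite3InvokeBusyHandler() above.  sqlite3_busy_timeout(db, 2000),
+** defined further down, installs the default callback shown above with
+** a two-second budget instead.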
+*/ +int sqlite3_busy_handler( + sqlite3 *db, + int (*xBusy)(void*,int), + void *pArg +){ + sqlite3_mutex_enter(db->mutex); + db->busyHandler.xFunc = xBusy; + db->busyHandler.pArg = pArg; + db->busyHandler.nBusy = 0; + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} + +#ifndef SQLITE_OMIT_PROGRESS_CALLBACK +/* +** This routine sets the progress callback for an Sqlite database to the +** given callback function with the given argument. The progress callback will +** be invoked every nOps opcodes. +*/ +void sqlite3_progress_handler( + sqlite3 *db, + int nOps, + int (*xProgress)(void*), + void *pArg +){ + if( sqlite3SafetyCheckOk(db) ){ + sqlite3_mutex_enter(db->mutex); + if( nOps>0 ){ + db->xProgress = xProgress; + db->nProgressOps = nOps; + db->pProgressArg = pArg; + }else{ + db->xProgress = 0; + db->nProgressOps = 0; + db->pProgressArg = 0; + } + sqlite3_mutex_leave(db->mutex); + } +} +#endif + + +/* +** This routine installs a default busy handler that waits for the +** specified number of milliseconds before returning 0. +*/ +int sqlite3_busy_timeout(sqlite3 *db, int ms){ + if( ms>0 ){ + db->busyTimeout = ms; + sqlite3_busy_handler(db, sqliteDefaultBusyCallback, (void*)db); + }else{ + sqlite3_busy_handler(db, 0, 0); + } + return SQLITE_OK; +} + +/* +** Cause any pending operation to stop at its earliest opportunity. +*/ +void sqlite3_interrupt(sqlite3 *db){ + if( sqlite3SafetyCheckOk(db) ){ + db->u1.isInterrupted = 1; + } +} + + +/* +** This function is exactly the same as sqlite3_create_function(), except +** that it is designed to be called by internal code. The difference is +** that if a malloc() fails in sqlite3_create_function(), an error code +** is returned and the mallocFailed flag cleared. +*/ +int sqlite3CreateFunc( + sqlite3 *db, + const char *zFunctionName, + int nArg, + int enc, + void *pUserData, + void (*xFunc)(sqlite3_context*,int,sqlite3_value **), + void (*xStep)(sqlite3_context*,int,sqlite3_value **), + void (*xFinal)(sqlite3_context*) +){ + FuncDef *p; + int nName; + + assert( sqlite3_mutex_held(db->mutex) ); + if( zFunctionName==0 || + (xFunc && (xFinal || xStep)) || + (!xFunc && (xFinal && !xStep)) || + (!xFunc && (!xFinal && xStep)) || + (nArg<-1 || nArg>127) || + (255<(nName = strlen(zFunctionName))) ){ + sqlite3Error(db, SQLITE_ERROR, "bad parameters"); + return SQLITE_ERROR; + } + +#ifndef SQLITE_OMIT_UTF16 + /* If SQLITE_UTF16 is specified as the encoding type, transform this + ** to one of SQLITE_UTF16LE or SQLITE_UTF16BE using the + ** SQLITE_UTF16NATIVE macro. SQLITE_UTF16 is not used internally. + ** + ** If SQLITE_ANY is specified, add three versions of the function + ** to the hash table. + */ + if( enc==SQLITE_UTF16 ){ + enc = SQLITE_UTF16NATIVE; + }else if( enc==SQLITE_ANY ){ + int rc; + rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF8, + pUserData, xFunc, xStep, xFinal); + if( rc==SQLITE_OK ){ + rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF16LE, + pUserData, xFunc, xStep, xFinal); + } + if( rc!=SQLITE_OK ){ + return rc; + } + enc = SQLITE_UTF16BE; + } +#else + enc = SQLITE_UTF8; +#endif + + /* Check if an existing function is being overridden or deleted. If so, + ** and there are active VMs, then return SQLITE_BUSY. If a function + ** is being overridden/deleted but there are no active VMs, allow the + ** operation to continue but invalidate all precompiled statements. 
+ */ + p = sqlite3FindFunction(db, zFunctionName, nName, nArg, enc, 0); + if( p && p->iPrefEnc==enc && p->nArg==nArg ){ + if( db->activeVdbeCnt ){ + sqlite3Error(db, SQLITE_BUSY, + "Unable to delete/modify user-function due to active statements"); + assert( !db->mallocFailed ); + return SQLITE_BUSY; + }else{ + sqlite3ExpirePreparedStatements(db); + } + } + + p = sqlite3FindFunction(db, zFunctionName, nName, nArg, enc, 1); + assert(p || db->mallocFailed); + if( !p ){ + return SQLITE_NOMEM; + } + p->flags = 0; + p->xFunc = xFunc; + p->xStep = xStep; + p->xFinalize = xFinal; + p->pUserData = pUserData; + p->nArg = nArg; + return SQLITE_OK; +} + +/* +** Create new user functions. +*/ +int sqlite3_create_function( + sqlite3 *db, + const char *zFunctionName, + int nArg, + int enc, + void *p, + void (*xFunc)(sqlite3_context*,int,sqlite3_value **), + void (*xStep)(sqlite3_context*,int,sqlite3_value **), + void (*xFinal)(sqlite3_context*) +){ + int rc; + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + rc = sqlite3CreateFunc(db, zFunctionName, nArg, enc, p, xFunc, xStep, xFinal); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +#ifndef SQLITE_OMIT_UTF16 +int sqlite3_create_function16( + sqlite3 *db, + const void *zFunctionName, + int nArg, + int eTextRep, + void *p, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*) +){ + int rc; + char *zFunc8; + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + zFunc8 = sqlite3Utf16to8(db, zFunctionName, -1); + rc = sqlite3CreateFunc(db, zFunc8, nArg, eTextRep, p, xFunc, xStep, xFinal); + sqlite3_free(zFunc8); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} +#endif + + +/* +** Declare that a function has been overloaded by a virtual table. +** +** If the function already exists as a regular global function, then +** this routine is a no-op. If the function does not exist, then create +** a new one that always throws a run-time error. +** +** When virtual tables intend to provide an overloaded function, they +** should call this routine to make sure the global function exists. +** A global function must exist in order for name resolution to work +** properly. +*/ +int sqlite3_overload_function( + sqlite3 *db, + const char *zName, + int nArg +){ + int nName = strlen(zName); + int rc; + sqlite3_mutex_enter(db->mutex); + if( sqlite3FindFunction(db, zName, nName, nArg, SQLITE_UTF8, 0)==0 ){ + sqlite3CreateFunc(db, zName, nArg, SQLITE_UTF8, + 0, sqlite3InvalidFunction, 0, 0); + } + rc = sqlite3ApiExit(db, SQLITE_OK); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +#ifndef SQLITE_OMIT_TRACE +/* +** Register a trace function. The pArg from the previously registered trace +** is returned. +** +** A NULL trace function means that no tracing is executes. A non-NULL +** trace is a pointer to a function that is invoked at the start of each +** SQL statement. +*/ +void *sqlite3_trace(sqlite3 *db, void (*xTrace)(void*,const char*), void *pArg){ + void *pOld; + sqlite3_mutex_enter(db->mutex); + pOld = db->pTraceArg; + db->xTrace = xTrace; + db->pTraceArg = pArg; + sqlite3_mutex_leave(db->mutex); + return pOld; +} +/* +** Register a profile function. The pArg from the previously registered +** profile function is returned. +** +** A NULL profile function means that no profiling is executes. 
A non-NULL +** profile is a pointer to a function that is invoked at the conclusion of +** each SQL statement that is run. +*/ +void *sqlite3_profile( + sqlite3 *db, + void (*xProfile)(void*,const char*,sqlite_uint64), + void *pArg +){ + void *pOld; + sqlite3_mutex_enter(db->mutex); + pOld = db->pProfileArg; + db->xProfile = xProfile; + db->pProfileArg = pArg; + sqlite3_mutex_leave(db->mutex); + return pOld; +} +#endif /* SQLITE_OMIT_TRACE */ + +/*** EXPERIMENTAL *** +** +** Register a function to be invoked when a transaction comments. +** If the invoked function returns non-zero, then the commit becomes a +** rollback. +*/ +void *sqlite3_commit_hook( + sqlite3 *db, /* Attach the hook to this database */ + int (*xCallback)(void*), /* Function to invoke on each commit */ + void *pArg /* Argument to the function */ +){ + void *pOld; + sqlite3_mutex_enter(db->mutex); + pOld = db->pCommitArg; + db->xCommitCallback = xCallback; + db->pCommitArg = pArg; + sqlite3_mutex_leave(db->mutex); + return pOld; +} + +/* +** Register a callback to be invoked each time a row is updated, +** inserted or deleted using this database connection. +*/ +void *sqlite3_update_hook( + sqlite3 *db, /* Attach the hook to this database */ + void (*xCallback)(void*,int,char const *,char const *,sqlite_int64), + void *pArg /* Argument to the function */ +){ + void *pRet; + sqlite3_mutex_enter(db->mutex); + pRet = db->pUpdateArg; + db->xUpdateCallback = xCallback; + db->pUpdateArg = pArg; + sqlite3_mutex_leave(db->mutex); + return pRet; +} + +/* +** Register a callback to be invoked each time a transaction is rolled +** back by this database connection. +*/ +void *sqlite3_rollback_hook( + sqlite3 *db, /* Attach the hook to this database */ + void (*xCallback)(void*), /* Callback function */ + void *pArg /* Argument to the function */ +){ + void *pRet; + sqlite3_mutex_enter(db->mutex); + pRet = db->pRollbackArg; + db->xRollbackCallback = xCallback; + db->pRollbackArg = pArg; + sqlite3_mutex_leave(db->mutex); + return pRet; +} + +/* +** This routine is called to create a connection to a database BTree +** driver. If zFilename is the name of a file, then that file is +** opened and used. If zFilename is the magic name ":memory:" then +** the database is stored in memory (and is thus forgotten as soon as +** the connection is closed.) If zFilename is NULL then the database +** is a "virtual" database for transient use only and is deleted as +** soon as the connection is closed. 
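[Editorial aside, not part of the checked-in diff: a sketch of the commit and update hooks registered by the functions above. The callback names veto_commit and log_change are hypothetical; per the comment in this hunk, a non-zero return from the commit hook turns the COMMIT into a rollback.]

    #include <sqlite3.h>
    #include <stdio.h>

    /* Returning non-zero here causes the pending COMMIT to be rolled back. */
    static int veto_commit(void *pUnused){
      (void)pUnused;
      return 1;
    }

    /* Invoked once per row INSERTed, UPDATEd or DELETEd on this connection. */
    static void log_change(void *pUnused, int op, const char *zDb,
                           const char *zTbl, sqlite_int64 rowid){
      (void)pUnused;
      printf("%s %s.%s rowid=%lld\n",
             op==SQLITE_INSERT ? "INSERT" :
             op==SQLITE_DELETE ? "DELETE" : "UPDATE",
             zDb, zTbl, (long long)rowid);
    }

    static void install_hooks(sqlite3 *db){
      sqlite3_commit_hook(db, veto_commit, 0);  /* previous pArg is returned, discarded here */
      sqlite3_update_hook(db, log_change, 0);
    }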
+** +** A virtual database can be either a disk file (that is automatically +** deleted when the file is closed) or it an be held entirely in memory, +** depending on the values of the TEMP_STORE compile-time macro and the +** db->temp_store variable, according to the following chart: +** +** TEMP_STORE db->temp_store Location of temporary database +** ---------- -------------- ------------------------------ +** 0 any file +** 1 1 file +** 1 2 memory +** 1 0 file +** 2 1 file +** 2 2 memory +** 2 0 memory +** 3 any memory +*/ +int sqlite3BtreeFactory( + const sqlite3 *db, /* Main database when opening aux otherwise 0 */ + const char *zFilename, /* Name of the file containing the BTree database */ + int omitJournal, /* if TRUE then do not journal this file */ + int nCache, /* How many pages in the page cache */ + int vfsFlags, /* Flags passed through to vfsOpen */ + Btree **ppBtree /* Pointer to new Btree object written here */ +){ + int btFlags = 0; + int rc; + + assert( sqlite3_mutex_held(db->mutex) ); + assert( ppBtree != 0); + if( omitJournal ){ + btFlags |= BTREE_OMIT_JOURNAL; + } + if( db->flags & SQLITE_NoReadlock ){ + btFlags |= BTREE_NO_READLOCK; + } + if( zFilename==0 ){ +#if TEMP_STORE==0 + /* Do nothing */ +#endif +#ifndef SQLITE_OMIT_MEMORYDB +#if TEMP_STORE==1 + if( db->temp_store==2 ) zFilename = ":memory:"; +#endif +#if TEMP_STORE==2 + if( db->temp_store!=1 ) zFilename = ":memory:"; +#endif +#if TEMP_STORE==3 + zFilename = ":memory:"; +#endif +#endif /* SQLITE_OMIT_MEMORYDB */ + } + + if( (vfsFlags & SQLITE_OPEN_MAIN_DB)!=0 && (zFilename==0 || *zFilename==0) ){ + vfsFlags = (vfsFlags & ~SQLITE_OPEN_MAIN_DB) | SQLITE_OPEN_TEMP_DB; + } + rc = sqlite3BtreeOpen(zFilename, (sqlite3 *)db, ppBtree, btFlags, vfsFlags); + if( rc==SQLITE_OK ){ + sqlite3BtreeSetCacheSize(*ppBtree, nCache); + } + return rc; +} + +/* +** Return UTF-8 encoded English language explanation of the most recent +** error. +*/ +const char *sqlite3_errmsg(sqlite3 *db){ + const char *z; + if( !db ){ + return sqlite3ErrStr(SQLITE_NOMEM); + } + if( !sqlite3SafetyCheckSickOrOk(db) || db->errCode==SQLITE_MISUSE ){ + return sqlite3ErrStr(SQLITE_MISUSE); + } + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + z = (char*)sqlite3_value_text(db->pErr); + if( z==0 ){ + z = sqlite3ErrStr(db->errCode); + } + sqlite3_mutex_leave(db->mutex); + return z; +} + +#ifndef SQLITE_OMIT_UTF16 +/* +** Return UTF-16 encoded English language explanation of the most recent +** error. +*/ +const void *sqlite3_errmsg16(sqlite3 *db){ + /* Because all the characters in the string are in the unicode + ** range 0x00-0xFF, if we pad the big-endian string with a + ** zero byte, we can obtain the little-endian string with + ** &big_endian[1]. 
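[Editorial aside, not part of the checked-in diff: sqlite3_errmsg(), defined just above, together with sqlite3_errcode() a little further down, is typically consulted right after a failed call. The SQL text below is only illustrative.]

    #include <sqlite3.h>
    #include <stdio.h>

    static void report_last_error(sqlite3 *db){
      /* The code is masked by db->errMask; the message falls back to a static
      ** English string when no specific message has been set. */
      fprintf(stderr, "sqlite error %d: %s\n",
              sqlite3_errcode(db), sqlite3_errmsg(db));
    }

    static void run(sqlite3 *db){
      if( sqlite3_exec(db, "CREATE TABLE t(x)", 0, 0, 0)!=SQLITE_OK ){
        report_last_error(db);
      }
    }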
+ */ + static const char outOfMemBe[] = { + 0, 'o', 0, 'u', 0, 't', 0, ' ', + 0, 'o', 0, 'f', 0, ' ', + 0, 'm', 0, 'e', 0, 'm', 0, 'o', 0, 'r', 0, 'y', 0, 0, 0 + }; + static const char misuseBe [] = { + 0, 'l', 0, 'i', 0, 'b', 0, 'r', 0, 'a', 0, 'r', 0, 'y', 0, ' ', + 0, 'r', 0, 'o', 0, 'u', 0, 't', 0, 'i', 0, 'n', 0, 'e', 0, ' ', + 0, 'c', 0, 'a', 0, 'l', 0, 'l', 0, 'e', 0, 'd', 0, ' ', + 0, 'o', 0, 'u', 0, 't', 0, ' ', + 0, 'o', 0, 'f', 0, ' ', + 0, 's', 0, 'e', 0, 'q', 0, 'u', 0, 'e', 0, 'n', 0, 'c', 0, 'e', 0, 0, 0 + }; + + const void *z; + if( !db ){ + return (void *)(&outOfMemBe[SQLITE_UTF16NATIVE==SQLITE_UTF16LE?1:0]); + } + if( !sqlite3SafetyCheckSickOrOk(db) || db->errCode==SQLITE_MISUSE ){ + return (void *)(&misuseBe[SQLITE_UTF16NATIVE==SQLITE_UTF16LE?1:0]); + } + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + z = sqlite3_value_text16(db->pErr); + if( z==0 ){ + sqlite3ValueSetStr(db->pErr, -1, sqlite3ErrStr(db->errCode), + SQLITE_UTF8, SQLITE_STATIC); + z = sqlite3_value_text16(db->pErr); + } + sqlite3ApiExit(0, 0); + sqlite3_mutex_leave(db->mutex); + return z; +} +#endif /* SQLITE_OMIT_UTF16 */ + +/* +** Return the most recent error code generated by an SQLite routine. If NULL is +** passed to this function, we assume a malloc() failed during sqlite3_open(). +*/ +int sqlite3_errcode(sqlite3 *db){ + if( db && !sqlite3SafetyCheckSickOrOk(db) ){ + return SQLITE_MISUSE; + } + if( !db || db->mallocFailed ){ + return SQLITE_NOMEM; + } + return db->errCode & db->errMask; +} + +/* +** Create a new collating function for database "db". The name is zName +** and the encoding is enc. +*/ +static int createCollation( + sqlite3* db, + const char *zName, + int enc, + void* pCtx, + int(*xCompare)(void*,int,const void*,int,const void*), + void(*xDel)(void*) +){ + CollSeq *pColl; + int enc2; + + assert( sqlite3_mutex_held(db->mutex) ); + + /* If SQLITE_UTF16 is specified as the encoding type, transform this + ** to one of SQLITE_UTF16LE or SQLITE_UTF16BE using the + ** SQLITE_UTF16NATIVE macro. SQLITE_UTF16 is not used internally. + */ + enc2 = enc & ~SQLITE_UTF16_ALIGNED; + if( enc2==SQLITE_UTF16 ){ + enc2 = SQLITE_UTF16NATIVE; + } + + if( (enc2&~3)!=0 ){ + sqlite3Error(db, SQLITE_ERROR, "unknown encoding"); + return SQLITE_ERROR; + } + + /* Check if this call is removing or replacing an existing collation + ** sequence. If so, and there are active VMs, return busy. If there + ** are no active VMs, invalidate any pre-compiled statements. + */ + pColl = sqlite3FindCollSeq(db, (u8)enc2, zName, strlen(zName), 0); + if( pColl && pColl->xCmp ){ + if( db->activeVdbeCnt ){ + sqlite3Error(db, SQLITE_BUSY, + "Unable to delete/modify collation sequence due to active statements"); + return SQLITE_BUSY; + } + sqlite3ExpirePreparedStatements(db); + + /* If collation sequence pColl was created directly by a call to + ** sqlite3_create_collation, and not generated by synthCollSeq(), + ** then any copies made by synthCollSeq() need to be invalidated. + ** Also, collation destructor - CollSeq.xDel() - function may need + ** to be called. 
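[Editorial aside, not part of the checked-in diff: createCollation() above is the worker behind the public sqlite3_create_collation() entry points that appear later in this file. A minimal sketch of registering one; the collation name "REVERSE" and the comparator rev_collate are illustrative.]

    #include <sqlite3.h>
    #include <string.h>

    /* xCompare contract: return negative, zero or positive, like memcmp().
    ** This comparator simply inverts binary order. */
    static int rev_collate(void *pUnused, int n1, const void *z1,
                           int n2, const void *z2){
      int n = n1<n2 ? n1 : n2;
      int c = memcmp(z1, z2, n);
      (void)pUnused;
      if( c==0 ) c = n1 - n2;
      return -c;
    }

    static int register_reverse(sqlite3 *db){
      /* Afterwards SQL can say: ORDER BY col COLLATE REVERSE */
      return sqlite3_create_collation(db, "REVERSE", SQLITE_UTF8, 0, rev_collate);
    }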
+ */ + if( (pColl->enc & ~SQLITE_UTF16_ALIGNED)==enc2 ){ + CollSeq *aColl = sqlite3HashFind(&db->aCollSeq, zName, strlen(zName)); + int j; + for(j=0; j<3; j++){ + CollSeq *p = &aColl[j]; + if( p->enc==pColl->enc ){ + if( p->xDel ){ + p->xDel(p->pUser); + } + p->xCmp = 0; + } + } + } + } + + pColl = sqlite3FindCollSeq(db, (u8)enc2, zName, strlen(zName), 1); + if( pColl ){ + pColl->xCmp = xCompare; + pColl->pUser = pCtx; + pColl->xDel = xDel; + pColl->enc = enc2 | (enc & SQLITE_UTF16_ALIGNED); + } + sqlite3Error(db, SQLITE_OK, 0); + return SQLITE_OK; +} + + +/* +** This routine does the work of opening a database on behalf of +** sqlite3_open() and sqlite3_open16(). The database filename "zFilename" +** is UTF-8 encoded. +*/ +static int openDatabase( + const char *zFilename, /* Database filename UTF-8 encoded */ + sqlite3 **ppDb, /* OUT: Returned database handle */ + unsigned flags, /* Operational flags */ + const char *zVfs /* Name of the VFS to use */ +){ + sqlite3 *db; + int rc; + CollSeq *pColl; + + /* Remove harmful bits from the flags parameter */ + flags &= ~( SQLITE_OPEN_DELETEONCLOSE | + SQLITE_OPEN_MAIN_DB | + SQLITE_OPEN_TEMP_DB | + SQLITE_OPEN_TRANSIENT_DB | + SQLITE_OPEN_MAIN_JOURNAL | + SQLITE_OPEN_TEMP_JOURNAL | + SQLITE_OPEN_SUBJOURNAL | + SQLITE_OPEN_MASTER_JOURNAL + ); + + /* Allocate the sqlite data structure */ + db = sqlite3MallocZero( sizeof(sqlite3) ); + if( db==0 ) goto opendb_out; + db->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_RECURSIVE); + if( db->mutex==0 ){ + sqlite3_free(db); + db = 0; + goto opendb_out; + } + sqlite3_mutex_enter(db->mutex); + db->errMask = 0xff; + db->priorNewRowid = 0; + db->nDb = 2; + db->magic = SQLITE_MAGIC_BUSY; + db->aDb = db->aDbStatic; + db->autoCommit = 1; + db->nextAutovac = -1; + db->flags |= SQLITE_ShortColNames +#if SQLITE_DEFAULT_FILE_FORMAT<4 + | SQLITE_LegacyFileFmt +#endif +#ifdef SQLITE_ENABLE_LOAD_EXTENSION + | SQLITE_LoadExtension +#endif + ; + sqlite3HashInit(&db->aFunc, SQLITE_HASH_STRING, 0); + sqlite3HashInit(&db->aCollSeq, SQLITE_HASH_STRING, 0); +#ifndef SQLITE_OMIT_VIRTUALTABLE + sqlite3HashInit(&db->aModule, SQLITE_HASH_STRING, 0); +#endif + + db->pVfs = sqlite3_vfs_find(zVfs); + if( !db->pVfs ){ + rc = SQLITE_ERROR; + db->magic = SQLITE_MAGIC_SICK; + sqlite3Error(db, rc, "no such vfs: %s", zVfs); + goto opendb_out; + } + + /* Add the default collation sequence BINARY. BINARY works for both UTF-8 + ** and UTF-16, so add a version for each to avoid any unnecessary + ** conversions. The only error that can occur here is a malloc() failure. + */ + createCollation(db, "BINARY", SQLITE_UTF8, 0, binCollFunc, 0); + createCollation(db, "BINARY", SQLITE_UTF16BE, 0, binCollFunc, 0); + createCollation(db, "BINARY", SQLITE_UTF16LE, 0, binCollFunc, 0); + createCollation(db, "RTRIM", SQLITE_UTF8, (void*)1, binCollFunc, 0); + if( db->mallocFailed || + (db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "BINARY", 6, 0))==0 + ){ + assert( db->mallocFailed ); + db->magic = SQLITE_MAGIC_SICK; + goto opendb_out; + } + + /* Also add a UTF-8 case-insensitive collation sequence. 
*/ + createCollation(db, "NOCASE", SQLITE_UTF8, 0, nocaseCollatingFunc, 0); + + /* Set flags on the built-in collating sequences */ + db->pDfltColl->type = SQLITE_COLL_BINARY; + pColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "NOCASE", 6, 0); + if( pColl ){ + pColl->type = SQLITE_COLL_NOCASE; + } + + /* Open the backend database driver */ + db->openFlags = flags; + rc = sqlite3BtreeFactory(db, zFilename, 0, SQLITE_DEFAULT_CACHE_SIZE, + flags | SQLITE_OPEN_MAIN_DB, + &db->aDb[0].pBt); + if( rc!=SQLITE_OK ){ + sqlite3Error(db, rc, 0); + db->magic = SQLITE_MAGIC_SICK; + goto opendb_out; + } + db->aDb[0].pSchema = sqlite3SchemaGet(db, db->aDb[0].pBt); + db->aDb[1].pSchema = sqlite3SchemaGet(db, 0); + + + /* The default safety_level for the main database is 'full'; for the temp + ** database it is 'NONE'. This matches the pager layer defaults. + */ + db->aDb[0].zName = "main"; + db->aDb[0].safety_level = 3; +#ifndef SQLITE_OMIT_TEMPDB + db->aDb[1].zName = "temp"; + db->aDb[1].safety_level = 1; +#endif + + db->magic = SQLITE_MAGIC_OPEN; + if( db->mallocFailed ){ + goto opendb_out; + } + + /* Register all built-in functions, but do not attempt to read the + ** database schema yet. This is delayed until the first time the database + ** is accessed. + */ + sqlite3Error(db, SQLITE_OK, 0); + sqlite3RegisterBuiltinFunctions(db); + + /* Load automatic extensions - extensions that have been registered + ** using the sqlite3_automatic_extension() API. + */ + (void)sqlite3AutoLoadExtensions(db); + if( sqlite3_errcode(db)!=SQLITE_OK ){ + goto opendb_out; + } + +#ifdef SQLITE_ENABLE_FTS1 + if( !db->mallocFailed ){ + extern int sqlite3Fts1Init(sqlite3*); + rc = sqlite3Fts1Init(db); + } +#endif + +#ifdef SQLITE_ENABLE_FTS2 + if( !db->mallocFailed && rc==SQLITE_OK ){ + extern int sqlite3Fts2Init(sqlite3*); + rc = sqlite3Fts2Init(db); + } +#endif + +#ifdef SQLITE_ENABLE_FTS3 + if( !db->mallocFailed && rc==SQLITE_OK ){ + rc = sqlite3Fts3Init(db); + } +#endif + +#ifdef SQLITE_ENABLE_ICU + if( !db->mallocFailed && rc==SQLITE_OK ){ + extern int sqlite3IcuInit(sqlite3*); + rc = sqlite3IcuInit(db); + } +#endif + sqlite3Error(db, rc, 0); + + /* -DSQLITE_DEFAULT_LOCKING_MODE=1 makes EXCLUSIVE the default locking + ** mode. -DSQLITE_DEFAULT_LOCKING_MODE=0 make NORMAL the default locking + ** mode. Doing nothing at all also makes NORMAL the default. + */ +#ifdef SQLITE_DEFAULT_LOCKING_MODE + db->dfltLockMode = SQLITE_DEFAULT_LOCKING_MODE; + sqlite3PagerLockingMode(sqlite3BtreePager(db->aDb[0].pBt), + SQLITE_DEFAULT_LOCKING_MODE); +#endif + +opendb_out: + if( db ){ + assert( db->mutex!=0 ); + sqlite3_mutex_leave(db->mutex); + } + if( SQLITE_NOMEM==(rc = sqlite3_errcode(db)) ){ + sqlite3_close(db); + db = 0; + } + *ppDb = db; + return sqlite3ApiExit(0, rc); +} + +/* +** Open a new database handle. +*/ +int sqlite3_open( + const char *zFilename, + sqlite3 **ppDb +){ + return openDatabase(zFilename, ppDb, + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0); +} +int sqlite3_open_v2( + const char *filename, /* Database filename (UTF-8) */ + sqlite3 **ppDb, /* OUT: SQLite db handle */ + int flags, /* Flags */ + const char *zVfs /* Name of VFS module to use */ +){ + return openDatabase(filename, ppDb, flags, zVfs); +} + +#ifndef SQLITE_OMIT_UTF16 +/* +** Open a new database handle. 
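[Editorial aside, not part of the checked-in diff: a sketch of calling the sqlite3_open_v2() wrapper defined in this hunk. The helper name is hypothetical; note that openDatabase() above masks off the internal-only open flags, so callers pass only the READONLY/READWRITE/CREATE combinations.]

    #include <sqlite3.h>

    /* Open an existing database file read-only with the default VFS. */
    static int open_readonly(const char *zPath, sqlite3 **ppDb){
      int rc = sqlite3_open_v2(zPath, ppDb, SQLITE_OPEN_READONLY, 0);
      if( rc!=SQLITE_OK ){
        /* On most failures *ppDb is still a handle that must be closed;
        ** sqlite3_close(0) is a harmless no-op for the SQLITE_NOMEM case. */
        sqlite3_close(*ppDb);
        *ppDb = 0;
      }
      return rc;
    }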
+*/ +int sqlite3_open16( + const void *zFilename, + sqlite3 **ppDb +){ + char const *zFilename8; /* zFilename encoded in UTF-8 instead of UTF-16 */ + sqlite3_value *pVal; + int rc = SQLITE_NOMEM; + + assert( zFilename ); + assert( ppDb ); + *ppDb = 0; + pVal = sqlite3ValueNew(0); + sqlite3ValueSetStr(pVal, -1, zFilename, SQLITE_UTF16NATIVE, SQLITE_STATIC); + zFilename8 = sqlite3ValueText(pVal, SQLITE_UTF8); + if( zFilename8 ){ + rc = openDatabase(zFilename8, ppDb, + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0); + assert( *ppDb || rc==SQLITE_NOMEM ); + if( rc==SQLITE_OK ){ + rc = sqlite3_exec(*ppDb, "PRAGMA encoding = 'UTF-16'", 0, 0, 0); + if( rc!=SQLITE_OK ){ + sqlite3_close(*ppDb); + *ppDb = 0; + } + } + } + sqlite3ValueFree(pVal); + + return sqlite3ApiExit(0, rc); +} +#endif /* SQLITE_OMIT_UTF16 */ + +/* +** Register a new collation sequence with the database handle db. +*/ +int sqlite3_create_collation( + sqlite3* db, + const char *zName, + int enc, + void* pCtx, + int(*xCompare)(void*,int,const void*,int,const void*) +){ + int rc; + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + rc = createCollation(db, zName, enc, pCtx, xCompare, 0); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +/* +** Register a new collation sequence with the database handle db. +*/ +int sqlite3_create_collation_v2( + sqlite3* db, + const char *zName, + int enc, + void* pCtx, + int(*xCompare)(void*,int,const void*,int,const void*), + void(*xDel)(void*) +){ + int rc; + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + rc = createCollation(db, zName, enc, pCtx, xCompare, xDel); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +#ifndef SQLITE_OMIT_UTF16 +/* +** Register a new collation sequence with the database handle db. +*/ +int sqlite3_create_collation16( + sqlite3* db, + const char *zName, + int enc, + void* pCtx, + int(*xCompare)(void*,int,const void*,int,const void*) +){ + int rc = SQLITE_OK; + char *zName8; + sqlite3_mutex_enter(db->mutex); + assert( !db->mallocFailed ); + zName8 = sqlite3Utf16to8(db, zName, -1); + if( zName8 ){ + rc = createCollation(db, zName8, enc, pCtx, xCompare, 0); + sqlite3_free(zName8); + } + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} +#endif /* SQLITE_OMIT_UTF16 */ + +/* +** Register a collation sequence factory callback with the database handle +** db. Replace any previously installed collation sequence factory. +*/ +int sqlite3_collation_needed( + sqlite3 *db, + void *pCollNeededArg, + void(*xCollNeeded)(void*,sqlite3*,int eTextRep,const char*) +){ + sqlite3_mutex_enter(db->mutex); + db->xCollNeeded = xCollNeeded; + db->xCollNeeded16 = 0; + db->pCollNeededArg = pCollNeededArg; + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} + +#ifndef SQLITE_OMIT_UTF16 +/* +** Register a collation sequence factory callback with the database handle +** db. Replace any previously installed collation sequence factory. +*/ +int sqlite3_collation_needed16( + sqlite3 *db, + void *pCollNeededArg, + void(*xCollNeeded16)(void*,sqlite3*,int eTextRep,const void*) +){ + sqlite3_mutex_enter(db->mutex); + db->xCollNeeded = 0; + db->xCollNeeded16 = xCollNeeded16; + db->pCollNeededArg = pCollNeededArg; + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} +#endif /* SQLITE_OMIT_UTF16 */ + +#ifndef SQLITE_OMIT_GLOBALRECOVER +/* +** This function is now an anachronism. 
It used to be used to recover from a +** malloc() failure, but SQLite now does this automatically. +*/ +int sqlite3_global_recover(void){ + return SQLITE_OK; +} +#endif + +/* +** Test to see whether or not the database connection is in autocommit +** mode. Return TRUE if it is and FALSE if not. Autocommit mode is on +** by default. Autocommit is disabled by a BEGIN statement and reenabled +** by the next COMMIT or ROLLBACK. +** +******* THIS IS AN EXPERIMENTAL API AND IS SUBJECT TO CHANGE ****** +*/ +int sqlite3_get_autocommit(sqlite3 *db){ + return db->autoCommit; +} + +#ifdef SQLITE_DEBUG +/* +** The following routine is subtituted for constant SQLITE_CORRUPT in +** debugging builds. This provides a way to set a breakpoint for when +** corruption is first detected. +*/ +int sqlite3Corrupt(void){ + return SQLITE_CORRUPT; +} +#endif + +/* +** This is a convenience routine that makes sure that all thread-specific +** data for this thread has been deallocated. +** +** SQLite no longer uses thread-specific data so this routine is now a +** no-op. It is retained for historical compatibility. +*/ +void sqlite3_thread_cleanup(void){ +} + +/* +** Return meta information about a specific column of a database table. +** See comment in sqlite3.h (sqlite.h.in) for details. +*/ +#ifdef SQLITE_ENABLE_COLUMN_METADATA +int sqlite3_table_column_metadata( + sqlite3 *db, /* Connection handle */ + const char *zDbName, /* Database name or NULL */ + const char *zTableName, /* Table name */ + const char *zColumnName, /* Column name */ + char const **pzDataType, /* OUTPUT: Declared data type */ + char const **pzCollSeq, /* OUTPUT: Collation sequence name */ + int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */ + int *pPrimaryKey, /* OUTPUT: True if column part of PK */ + int *pAutoinc /* OUTPUT: True if colums is auto-increment */ +){ + int rc; + char *zErrMsg = 0; + Table *pTab = 0; + Column *pCol = 0; + int iCol; + + char const *zDataType = 0; + char const *zCollSeq = 0; + int notnull = 0; + int primarykey = 0; + int autoinc = 0; + + /* Ensure the database schema has been loaded */ + (void)sqlite3SafetyOn(db); + sqlite3_mutex_enter(db->mutex); + sqlite3BtreeEnterAll(db); + rc = sqlite3Init(db, &zErrMsg); + sqlite3BtreeLeaveAll(db); + if( SQLITE_OK!=rc ){ + goto error_out; + } + + /* Locate the table in question */ + pTab = sqlite3FindTable(db, zTableName, zDbName); + if( !pTab || pTab->pSelect ){ + pTab = 0; + goto error_out; + } + + /* Find the column for which info is requested */ + if( sqlite3IsRowid(zColumnName) ){ + iCol = pTab->iPKey; + if( iCol>=0 ){ + pCol = &pTab->aCol[iCol]; + } + }else{ + for(iCol=0; iColnCol; iCol++){ + pCol = &pTab->aCol[iCol]; + if( 0==sqlite3StrICmp(pCol->zName, zColumnName) ){ + break; + } + } + if( iCol==pTab->nCol ){ + pTab = 0; + goto error_out; + } + } + + /* The following block stores the meta information that will be returned + ** to the caller in local variables zDataType, zCollSeq, notnull, primarykey + ** and autoinc. At this point there are two possibilities: + ** + ** 1. The specified column name was rowid", "oid" or "_rowid_" + ** and there is no explicitly declared IPK column. + ** + ** 2. The table is not a view and the column name identified an + ** explicitly declared column. Copy meta information from *pCol. 
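[Editorial aside, not part of the checked-in diff: a sketch of calling sqlite3_table_column_metadata(), defined in this hunk and compiled only when the library is built with -DSQLITE_ENABLE_COLUMN_METADATA. The table and column names passed in are placeholders.]

    #include <sqlite3.h>
    #include <stdio.h>

    static void describe_column(sqlite3 *db, const char *zTbl, const char *zCol){
      const char *zType = 0, *zColl = 0;
      int notNull = 0, primaryKey = 0, autoInc = 0;
      int rc = sqlite3_table_column_metadata(db, "main", zTbl, zCol,
                                             &zType, &zColl,
                                             &notNull, &primaryKey, &autoInc);
      if( rc==SQLITE_OK ){
        printf("%s.%s: type=%s coll=%s notnull=%d pk=%d autoinc=%d\n",
               zTbl, zCol, zType ? zType : "(none)", zColl,
               notNull, primaryKey, autoInc);
      }else{
        fprintf(stderr, "metadata lookup failed: %s\n", sqlite3_errmsg(db));
      }
    }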
+ */ + if( pCol ){ + zDataType = pCol->zType; + zCollSeq = pCol->zColl; + notnull = (pCol->notNull?1:0); + primarykey = (pCol->isPrimKey?1:0); + autoinc = ((pTab->iPKey==iCol && pTab->autoInc)?1:0); + }else{ + zDataType = "INTEGER"; + primarykey = 1; + } + if( !zCollSeq ){ + zCollSeq = "BINARY"; + } + +error_out: + (void)sqlite3SafetyOff(db); + + /* Whether the function call succeeded or failed, set the output parameters + ** to whatever their local counterparts contain. If an error did occur, + ** this has the effect of zeroing all output parameters. + */ + if( pzDataType ) *pzDataType = zDataType; + if( pzCollSeq ) *pzCollSeq = zCollSeq; + if( pNotNull ) *pNotNull = notnull; + if( pPrimaryKey ) *pPrimaryKey = primarykey; + if( pAutoinc ) *pAutoinc = autoinc; + + if( SQLITE_OK==rc && !pTab ){ + sqlite3SetString(&zErrMsg, "no such table column: ", zTableName, ".", + zColumnName, 0); + rc = SQLITE_ERROR; + } + sqlite3Error(db, rc, (zErrMsg?"%s":0), zErrMsg); + sqlite3_free(zErrMsg); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} +#endif + +/* +** Sleep for a little while. Return the amount of time slept. +*/ +int sqlite3_sleep(int ms){ + sqlite3_vfs *pVfs; + int rc; + pVfs = sqlite3_vfs_find(0); + + /* This function works in milliseconds, but the underlying OsSleep() + ** API uses microseconds. Hence the 1000's. + */ + rc = (sqlite3OsSleep(pVfs, 1000*ms)/1000); + return rc; +} + +/* +** Enable or disable the extended result codes. +*/ +int sqlite3_extended_result_codes(sqlite3 *db, int onoff){ + sqlite3_mutex_enter(db->mutex); + db->errMask = onoff ? 0xffffffff : 0xff; + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} + +/* +** Invoke the xFileControl method on a particular database. +*/ +int sqlite3_file_control(sqlite3 *db, const char *zDbName, int op, void *pArg){ + int rc = SQLITE_ERROR; + int iDb; + sqlite3_mutex_enter(db->mutex); + if( zDbName==0 ){ + iDb = 0; + }else{ + for(iDb=0; iDbnDb; iDb++){ + if( strcmp(db->aDb[iDb].zName, zDbName)==0 ) break; + } + } + if( iDbnDb ){ + Btree *pBtree = db->aDb[iDb].pBt; + if( pBtree ){ + Pager *pPager; + sqlite3_file *fd; + sqlite3BtreeEnter(pBtree); + pPager = sqlite3BtreePager(pBtree); + assert( pPager!=0 ); + fd = sqlite3PagerFile(pPager); + assert( fd!=0 ); + if( fd->pMethods ){ + rc = sqlite3OsFileControl(fd, op, pArg); + } + sqlite3BtreeLeave(pBtree); + } + } + sqlite3_mutex_leave(db->mutex); + return rc; +} + +/* +** Interface to the testing logic. 
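[Editorial aside, not part of the checked-in diff: sqlite3_extended_result_codes() and sqlite3_sleep(), both defined in this hunk, are often combined into a crude retry loop like the sketch below; the five-attempt, 100 ms policy is arbitrary.]

    #include <sqlite3.h>

    static int exec_with_retry(sqlite3 *db, const char *zSql){
      int rc = SQLITE_ERROR;
      int attempt;
      sqlite3_extended_result_codes(db, 1);  /* widen db->errMask so e.g. SQLITE_IOERR_* escape */
      for(attempt=0; attempt<5; attempt++){
        rc = sqlite3_exec(db, zSql, 0, 0, 0);
        if( rc!=SQLITE_BUSY && rc!=SQLITE_LOCKED ) break;
        sqlite3_sleep(100);                  /* back off roughly 100 ms via the VFS sleep */
      }
      return rc;
    }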
+*/ +int sqlite3_test_control(int op, ...){ + va_list ap; + int rc = 0; + va_start(ap, op); + switch( op ){ +#ifndef SQLITE_OMIT_FAULTINJECTOR + case SQLITE_TESTCTRL_FAULT_CONFIG: { + int id = va_arg(ap, int); + int nDelay = va_arg(ap, int); + int nRepeat = va_arg(ap, int); + sqlite3FaultConfig(id, nDelay, nRepeat); + break; + } + case SQLITE_TESTCTRL_FAULT_FAILURES: { + int id = va_arg(ap, int); + rc = sqlite3FaultFailures(id); + break; + } + case SQLITE_TESTCTRL_FAULT_BENIGN_FAILURES: { + int id = va_arg(ap, int); + rc = sqlite3FaultBenignFailures(id); + break; + } + case SQLITE_TESTCTRL_FAULT_PENDING: { + int id = va_arg(ap, int); + rc = sqlite3FaultPending(id); + break; + } +#endif /* SQLITE_OMIT_FAULTINJECTOR */ + } + va_end(ap); + return rc; +} Added: external/sqlite-source-3.5.7.x/malloc.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/malloc.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,239 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Memory allocation functions used throughout sqlite. +** +** +** $Id: malloc.c,v 1.14 2007/10/20 16:36:31 drh Exp $ +*/ +#include "sqliteInt.h" +#include +#include + +/* +** This routine runs when the memory allocator sees that the +** total memory allocation is about to exceed the soft heap +** limit. +*/ +static void softHeapLimitEnforcer( + void *NotUsed, + sqlite3_int64 inUse, + int allocSize +){ + sqlite3_release_memory(allocSize); +} + +/* +** Set the soft heap-size limit for the current thread. Passing a +** zero or negative value indicates no limit. +*/ +void sqlite3_soft_heap_limit(int n){ + sqlite3_uint64 iLimit; + int overage; + if( n<0 ){ + iLimit = 0; + }else{ + iLimit = n; + } + if( iLimit>0 ){ + sqlite3_memory_alarm(softHeapLimitEnforcer, 0, iLimit); + }else{ + sqlite3_memory_alarm(0, 0, 0); + } + overage = sqlite3_memory_used() - n; + if( overage>0 ){ + sqlite3_release_memory(overage); + } +} + +/* +** Release memory held by SQLite instances created by the current thread. +*/ +int sqlite3_release_memory(int n){ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + return sqlite3PagerReleaseMemory(n); +#else + return SQLITE_OK; +#endif +} + + +/* +** Allocate and zero memory. +*/ +void *sqlite3MallocZero(unsigned n){ + void *p = sqlite3_malloc(n); + if( p ){ + memset(p, 0, n); + } + return p; +} + +/* +** Allocate and zero memory. If the allocation fails, make +** the mallocFailed flag in the connection pointer. +*/ +void *sqlite3DbMallocZero(sqlite3 *db, unsigned n){ + void *p = sqlite3DbMallocRaw(db, n); + if( p ){ + memset(p, 0, n); + } + return p; +} + +/* +** Allocate and zero memory. If the allocation fails, make +** the mallocFailed flag in the connection pointer. +*/ +void *sqlite3DbMallocRaw(sqlite3 *db, unsigned n){ + void *p = 0; + if( !db || db->mallocFailed==0 ){ + p = sqlite3_malloc(n); + if( !p && db ){ + db->mallocFailed = 1; + } + } + return p; +} + +/* +** Resize the block of memory pointed to by p to n bytes. If the +** resize fails, set the mallocFailed flag inthe connection object. 
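[Editorial aside, not part of the checked-in diff: sqlite3_soft_heap_limit(), defined at the top of malloc.c above, is advisory. When usage is about to cross the limit, the installed alarm calls sqlite3_release_memory(), which only reclaims pager memory if the library was built with SQLITE_ENABLE_MEMORY_MANAGEMENT. The 8 MB figure below is arbitrary.]

    #include <sqlite3.h>

    static void cap_sqlite_heap(void){
      sqlite3_soft_heap_limit(8*1024*1024);  /* try to stay under ~8 MB */
    }

    /* Passing zero or a negative value removes the limit again. */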
+*/ +void *sqlite3DbRealloc(sqlite3 *db, void *p, int n){ + void *pNew = 0; + if( db->mallocFailed==0 ){ + pNew = sqlite3_realloc(p, n); + if( !pNew ){ + db->mallocFailed = 1; + } + } + return pNew; +} + +/* +** Attempt to reallocate p. If the reallocation fails, then free p +** and set the mallocFailed flag in the database connection. +*/ +void *sqlite3DbReallocOrFree(sqlite3 *db, void *p, int n){ + void *pNew; + pNew = sqlite3DbRealloc(db, p, n); + if( !pNew ){ + sqlite3_free(p); + } + return pNew; +} + +/* +** Make a copy of a string in memory obtained from sqliteMalloc(). These +** functions call sqlite3MallocRaw() directly instead of sqliteMalloc(). This +** is because when memory debugging is turned on, these two functions are +** called via macros that record the current file and line number in the +** ThreadData structure. +*/ +char *sqlite3StrDup(const char *z){ + char *zNew; + int n; + if( z==0 ) return 0; + n = strlen(z)+1; + zNew = sqlite3_malloc(n); + if( zNew ) memcpy(zNew, z, n); + return zNew; +} +char *sqlite3StrNDup(const char *z, int n){ + char *zNew; + if( z==0 ) return 0; + zNew = sqlite3_malloc(n+1); + if( zNew ){ + memcpy(zNew, z, n); + zNew[n] = 0; + } + return zNew; +} + +char *sqlite3DbStrDup(sqlite3 *db, const char *z){ + char *zNew = sqlite3StrDup(z); + if( z && !zNew ){ + db->mallocFailed = 1; + } + return zNew; +} +char *sqlite3DbStrNDup(sqlite3 *db, const char *z, int n){ + char *zNew = sqlite3StrNDup(z, n); + if( z && !zNew ){ + db->mallocFailed = 1; + } + return zNew; +} + +/* +** Create a string from the 2nd and subsequent arguments (up to the +** first NULL argument), store the string in memory obtained from +** sqliteMalloc() and make the pointer indicated by the 1st argument +** point to that string. The 1st argument must either be NULL or +** point to memory obtained from sqliteMalloc(). +*/ +void sqlite3SetString(char **pz, ...){ + va_list ap; + int nByte; + const char *z; + char *zResult; + + assert( pz!=0 ); + nByte = 1; + va_start(ap, pz); + while( (z = va_arg(ap, const char*))!=0 ){ + nByte += strlen(z); + } + va_end(ap); + sqlite3_free(*pz); + *pz = zResult = sqlite3_malloc(nByte); + if( zResult==0 ){ + return; + } + *zResult = 0; + va_start(ap, pz); + while( (z = va_arg(ap, const char*))!=0 ){ + int n = strlen(z); + memcpy(zResult, z, n); + zResult += n; + } + zResult[0] = 0; + va_end(ap); +} + + +/* +** This function must be called before exiting any API function (i.e. +** returning control to the user) that has called sqlite3_malloc or +** sqlite3_realloc. +** +** The returned value is normally a copy of the second argument to this +** function. However, if a malloc() failure has occured since the previous +** invocation SQLITE_NOMEM is returned instead. +** +** If the first argument, db, is not NULL and a malloc() error has occured, +** then the connection error-code (the value returned by sqlite3_errcode()) +** is set to SQLITE_NOMEM. +*/ +int sqlite3ApiExit(sqlite3* db, int rc){ + /* If the db handle is not NULL, then we must hold the connection handle + ** mutex here. Otherwise the read (and possible write) of db->mallocFailed + ** is unsafe, as is the call to sqlite3Error(). + */ + assert( !db || sqlite3_mutex_held(db->mutex) ); + if( db && db->mallocFailed ){ + sqlite3Error(db, SQLITE_NOMEM, 0); + db->mallocFailed = 0; + rc = SQLITE_NOMEM; + } + return rc & (db ? 
db->errMask : 0xff); +} Added: external/sqlite-source-3.5.7.x/mem1.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mem1.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,227 @@ +/* +** 2007 August 14 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement a memory +** allocation subsystem for use by SQLite. +** +** $Id: mem1.c,v 1.16 2008/02/14 23:26:56 drh Exp $ +*/ +#include "sqliteInt.h" + +/* +** This version of the memory allocator is the default. It is +** used when no other memory allocator is specified using compile-time +** macros. +*/ +#ifdef SQLITE_SYSTEM_MALLOC + +/* +** All of the static variables used by this module are collected +** into a single structure named "mem". This is to keep the +** static variables organized and to reduce namespace pollution +** when this module is combined with other in the amalgamation. +*/ +static struct { + /* + ** The alarm callback and its arguments. The mem.mutex lock will + ** be held while the callback is running. Recursive calls into + ** the memory subsystem are allowed, but no new callbacks will be + ** issued. The alarmBusy variable is set to prevent recursive + ** callbacks. + */ + sqlite3_int64 alarmThreshold; + void (*alarmCallback)(void*, sqlite3_int64,int); + void *alarmArg; + int alarmBusy; + + /* + ** Mutex to control access to the memory allocation subsystem. + */ + sqlite3_mutex *mutex; + + /* + ** Current allocation and high-water mark. + */ + sqlite3_int64 nowUsed; + sqlite3_int64 mxUsed; + + +} mem; + +/* +** Enter the mutex mem.mutex. Allocate it if it is not already allocated. +*/ +static void enterMem(void){ + if( mem.mutex==0 ){ + mem.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM); + } + sqlite3_mutex_enter(mem.mutex); +} + +/* +** Return the amount of memory currently checked out. +*/ +sqlite3_int64 sqlite3_memory_used(void){ + sqlite3_int64 n; + enterMem(); + n = mem.nowUsed; + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Return the maximum amount of memory that has ever been +** checked out since either the beginning of this process +** or since the most recent reset. 
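[Editorial aside, not part of the checked-in diff: the sqlite3_memory_used() / sqlite3_memory_highwater() pair implemented in mem1.c in this hunk can be polled from application code; a small sketch, using the reset flag to start a fresh measurement interval.]

    #include <sqlite3.h>
    #include <stdio.h>

    static void report_memory(void){
      printf("sqlite heap in use: %lld bytes\n", (long long)sqlite3_memory_used());
      printf("sqlite heap peak:   %lld bytes\n",
             (long long)sqlite3_memory_highwater(1));  /* 1 = reset the high-water mark */
    }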
+*/ +sqlite3_int64 sqlite3_memory_highwater(int resetFlag){ + sqlite3_int64 n; + enterMem(); + n = mem.mxUsed; + if( resetFlag ){ + mem.mxUsed = mem.nowUsed; + } + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Change the alarm callback +*/ +int sqlite3_memory_alarm( + void(*xCallback)(void *pArg, sqlite3_int64 used,int N), + void *pArg, + sqlite3_int64 iThreshold +){ + enterMem(); + mem.alarmCallback = xCallback; + mem.alarmArg = pArg; + mem.alarmThreshold = iThreshold; + sqlite3_mutex_leave(mem.mutex); + return SQLITE_OK; +} + +/* +** Trigger the alarm +*/ +static void sqlite3MemsysAlarm(int nByte){ + void (*xCallback)(void*,sqlite3_int64,int); + sqlite3_int64 nowUsed; + void *pArg; + if( mem.alarmCallback==0 || mem.alarmBusy ) return; + mem.alarmBusy = 1; + xCallback = mem.alarmCallback; + nowUsed = mem.nowUsed; + pArg = mem.alarmArg; + sqlite3_mutex_leave(mem.mutex); + xCallback(pArg, nowUsed, nByte); + sqlite3_mutex_enter(mem.mutex); + mem.alarmBusy = 0; +} + +/* +** Allocate nBytes of memory +*/ +void *sqlite3_malloc(int nBytes){ + sqlite3_int64 *p = 0; + if( nBytes>0 ){ + enterMem(); + if( mem.alarmCallback!=0 && mem.nowUsed+nBytes>=mem.alarmThreshold ){ + sqlite3MemsysAlarm(nBytes); + } + p = malloc(nBytes+8); + if( p==0 ){ + sqlite3MemsysAlarm(nBytes); + p = malloc(nBytes+8); + } + if( p ){ + p[0] = nBytes; + p++; + mem.nowUsed += nBytes; + if( mem.nowUsed>mem.mxUsed ){ + mem.mxUsed = mem.nowUsed; + } + } + sqlite3_mutex_leave(mem.mutex); + } + return (void*)p; +} + +/* +** Free memory. +*/ +void sqlite3_free(void *pPrior){ + sqlite3_int64 *p; + int nByte; + if( pPrior==0 ){ + return; + } + assert( mem.mutex!=0 ); + p = pPrior; + p--; + nByte = (int)*p; + sqlite3_mutex_enter(mem.mutex); + mem.nowUsed -= nByte; + free(p); + sqlite3_mutex_leave(mem.mutex); +} + +/* +** Return the number of bytes allocated at p. +*/ +int sqlite3MallocSize(void *p){ + sqlite3_int64 *pInt; + if( !p ) return 0; + pInt = p; + return pInt[-1]; +} + +/* +** Change the size of an existing memory allocation +*/ +void *sqlite3_realloc(void *pPrior, int nBytes){ + int nOld; + sqlite3_int64 *p; + if( pPrior==0 ){ + return sqlite3_malloc(nBytes); + } + if( nBytes<=0 ){ + sqlite3_free(pPrior); + return 0; + } + p = pPrior; + p--; + nOld = (int)p[0]; + assert( mem.mutex!=0 ); + sqlite3_mutex_enter(mem.mutex); + if( mem.nowUsed+nBytes-nOld>=mem.alarmThreshold ){ + sqlite3MemsysAlarm(nBytes-nOld); + } + p = realloc(p, nBytes+8); + if( p==0 ){ + sqlite3MemsysAlarm(nBytes); + p = pPrior; + p--; + p = realloc(p, nBytes+8); + } + if( p ){ + p[0] = nBytes; + p++; + mem.nowUsed += nBytes-nOld; + if( mem.nowUsed>mem.mxUsed ){ + mem.mxUsed = mem.nowUsed; + } + } + sqlite3_mutex_leave(mem.mutex); + return (void*)p; +} + +#endif /* SQLITE_SYSTEM_MALLOC */ Added: external/sqlite-source-3.5.7.x/mem2.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mem2.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,458 @@ +/* +** 2007 August 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement a memory +** allocation subsystem for use by SQLite. 
+** +** $Id: mem2.c,v 1.22 2008/02/19 15:15:16 drh Exp $ +*/ +#include "sqliteInt.h" + +/* +** This version of the memory allocator is used only if the +** SQLITE_MEMDEBUG macro is defined +*/ +#ifdef SQLITE_MEMDEBUG + +/* +** The backtrace functionality is only available with GLIBC +*/ +#ifdef __GLIBC__ + extern int backtrace(void**,int); + extern void backtrace_symbols_fd(void*const*,int,int); +#else +# define backtrace(A,B) 0 +# define backtrace_symbols_fd(A,B,C) +#endif +#include + +/* +** Each memory allocation looks like this: +** +** ------------------------------------------------------------------------ +** | Title | backtrace pointers | MemBlockHdr | allocation | EndGuard | +** ------------------------------------------------------------------------ +** +** The application code sees only a pointer to the allocation. We have +** to back up from the allocation pointer to find the MemBlockHdr. The +** MemBlockHdr tells us the size of the allocation and the number of +** backtrace pointers. There is also a guard word at the end of the +** MemBlockHdr. +*/ +struct MemBlockHdr { + struct MemBlockHdr *pNext, *pPrev; /* Linked list of all unfreed memory */ + int iSize; /* Size of this allocation */ + char nBacktrace; /* Number of backtraces on this alloc */ + char nBacktraceSlots; /* Available backtrace slots */ + short nTitle; /* Bytes of title; includes '\0' */ + int iForeGuard; /* Guard word for sanity */ +}; + +/* +** Guard words +*/ +#define FOREGUARD 0x80F5E153 +#define REARGUARD 0xE4676B53 + +/* +** Number of malloc size increments to track. +*/ +#define NCSIZE 1000 + +/* +** All of the static variables used by this module are collected +** into a single structure named "mem". This is to keep the +** static variables organized and to reduce namespace pollution +** when this module is combined with other in the amalgamation. +*/ +static struct { + /* + ** The alarm callback and its arguments. The mem.mutex lock will + ** be held while the callback is running. Recursive calls into + ** the memory subsystem are allowed, but no new callbacks will be + ** issued. The alarmBusy variable is set to prevent recursive + ** callbacks. + */ + sqlite3_int64 alarmThreshold; + void (*alarmCallback)(void*, sqlite3_int64, int); + void *alarmArg; + int alarmBusy; + + /* + ** Mutex to control access to the memory allocation subsystem. + */ + sqlite3_mutex *mutex; + + /* + ** Current allocation and high-water mark. + */ + sqlite3_int64 nowUsed; + sqlite3_int64 mxUsed; + + /* + ** Head and tail of a linked list of all outstanding allocations + */ + struct MemBlockHdr *pFirst; + struct MemBlockHdr *pLast; + + /* + ** The number of levels of backtrace to save in new allocations. + */ + int nBacktrace; + + /* + ** Title text to insert in front of each block + */ + int nTitle; /* Bytes of zTitle to save. Includes '\0' and padding */ + char zTitle[100]; /* The title text */ + + /* + ** sqlite3MallocDisallow() increments the following counter. + ** sqlite3MallocAllow() decrements it. + */ + int disallow; /* Do not allow memory allocation */ + + /* + ** Gather statistics on the sizes of memory allocations. + ** sizeCnt[i] is the number of allocation attempts of i*8 + ** bytes. i==NCSIZE is the number of allocation attempts for + ** sizes more than NCSIZE*8 bytes. + */ + int sizeCnt[NCSIZE]; + +} mem; + + +/* +** Enter the mutex mem.mutex. Allocate it if it is not already allocated. 
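[Editorial aside, not part of the checked-in diff: both mem1.c and the debug allocator above keep their bookkeeping in front of the block and hand the caller a pointer just past it. Below is a miniature, stand-alone version of that layout trick with hypothetical names; the real MemBlockHdr additionally records backtraces, a title and guard words.]

    #include <stdlib.h>

    typedef struct Hdr { size_t size; } Hdr;   /* stand-in for MemBlockHdr */

    static void *demo_malloc(size_t n){
      Hdr *h = malloc(sizeof(Hdr) + n);
      if( h==0 ) return 0;
      h->size = n;
      return h+1;                     /* caller sees only the payload */
    }
    static size_t demo_size(void *p){ /* same idea as sqlite3MallocSize() */
      return p ? ((Hdr*)p - 1)->size : 0;
    }
    static void demo_free(void *p){
      if( p ) free((Hdr*)p - 1);
    }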
+*/ +static void enterMem(void){ + if( mem.mutex==0 ){ + mem.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM); + } + sqlite3_mutex_enter(mem.mutex); +} + +/* +** Return the amount of memory currently checked out. +*/ +sqlite3_int64 sqlite3_memory_used(void){ + sqlite3_int64 n; + enterMem(); + n = mem.nowUsed; + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Return the maximum amount of memory that has ever been +** checked out since either the beginning of this process +** or since the most recent reset. +*/ +sqlite3_int64 sqlite3_memory_highwater(int resetFlag){ + sqlite3_int64 n; + enterMem(); + n = mem.mxUsed; + if( resetFlag ){ + mem.mxUsed = mem.nowUsed; + } + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Change the alarm callback +*/ +int sqlite3_memory_alarm( + void(*xCallback)(void *pArg, sqlite3_int64 used, int N), + void *pArg, + sqlite3_int64 iThreshold +){ + enterMem(); + mem.alarmCallback = xCallback; + mem.alarmArg = pArg; + mem.alarmThreshold = iThreshold; + sqlite3_mutex_leave(mem.mutex); + return SQLITE_OK; +} + +/* +** Trigger the alarm +*/ +static void sqlite3MemsysAlarm(int nByte){ + void (*xCallback)(void*,sqlite3_int64,int); + sqlite3_int64 nowUsed; + void *pArg; + if( mem.alarmCallback==0 || mem.alarmBusy ) return; + mem.alarmBusy = 1; + xCallback = mem.alarmCallback; + nowUsed = mem.nowUsed; + pArg = mem.alarmArg; + sqlite3_mutex_leave(mem.mutex); + xCallback(pArg, nowUsed, nByte); + sqlite3_mutex_enter(mem.mutex); + mem.alarmBusy = 0; +} + +/* +** Given an allocation, find the MemBlockHdr for that allocation. +** +** This routine checks the guards at either end of the allocation and +** if they are incorrect it asserts. +*/ +static struct MemBlockHdr *sqlite3MemsysGetHeader(void *pAllocation){ + struct MemBlockHdr *p; + int *pInt; + + p = (struct MemBlockHdr*)pAllocation; + p--; + assert( p->iForeGuard==FOREGUARD ); + assert( (p->iSize & 3)==0 ); + pInt = (int*)pAllocation; + assert( pInt[p->iSize/sizeof(int)]==REARGUARD ); + return p; +} + +/* +** Return the number of bytes currently allocated at address p. +*/ +int sqlite3MallocSize(void *p){ + struct MemBlockHdr *pHdr; + if( !p ){ + return 0; + } + pHdr = sqlite3MemsysGetHeader(p); + return pHdr->iSize; +} + +/* +** Allocate nByte bytes of memory. 
+*/ +void *sqlite3_malloc(int nByte){ + struct MemBlockHdr *pHdr; + void **pBt; + char *z; + int *pInt; + void *p = 0; + int totalSize; + + if( nByte>0 ){ + enterMem(); + assert( mem.disallow==0 ); + if( mem.alarmCallback!=0 && mem.nowUsed+nByte>=mem.alarmThreshold ){ + sqlite3MemsysAlarm(nByte); + } + nByte = (nByte+3)&~3; + if( nByte/8>NCSIZE-1 ){ + mem.sizeCnt[NCSIZE-1]++; + }else{ + mem.sizeCnt[nByte/8]++; + } + totalSize = nByte + sizeof(*pHdr) + sizeof(int) + + mem.nBacktrace*sizeof(void*) + mem.nTitle; + if( sqlite3FaultStep(SQLITE_FAULTINJECTOR_MALLOC) ){ + p = 0; + }else{ + p = malloc(totalSize); + if( p==0 ){ + sqlite3MemsysAlarm(nByte); + p = malloc(totalSize); + } + } + if( p ){ + z = p; + pBt = (void**)&z[mem.nTitle]; + pHdr = (struct MemBlockHdr*)&pBt[mem.nBacktrace]; + pHdr->pNext = 0; + pHdr->pPrev = mem.pLast; + if( mem.pLast ){ + mem.pLast->pNext = pHdr; + }else{ + mem.pFirst = pHdr; + } + mem.pLast = pHdr; + pHdr->iForeGuard = FOREGUARD; + pHdr->nBacktraceSlots = mem.nBacktrace; + pHdr->nTitle = mem.nTitle; + if( mem.nBacktrace ){ + void *aAddr[40]; + pHdr->nBacktrace = backtrace(aAddr, mem.nBacktrace+1)-1; + memcpy(pBt, &aAddr[1], pHdr->nBacktrace*sizeof(void*)); + }else{ + pHdr->nBacktrace = 0; + } + if( mem.nTitle ){ + memcpy(z, mem.zTitle, mem.nTitle); + } + pHdr->iSize = nByte; + pInt = (int*)&pHdr[1]; + pInt[nByte/sizeof(int)] = REARGUARD; + memset(pInt, 0x65, nByte); + mem.nowUsed += nByte; + if( mem.nowUsed>mem.mxUsed ){ + mem.mxUsed = mem.nowUsed; + } + p = (void*)pInt; + } + sqlite3_mutex_leave(mem.mutex); + } + return p; +} + +/* +** Free memory. +*/ +void sqlite3_free(void *pPrior){ + struct MemBlockHdr *pHdr; + void **pBt; + char *z; + if( pPrior==0 ){ + return; + } + assert( mem.mutex!=0 ); + pHdr = sqlite3MemsysGetHeader(pPrior); + pBt = (void**)pHdr; + pBt -= pHdr->nBacktraceSlots; + sqlite3_mutex_enter(mem.mutex); + mem.nowUsed -= pHdr->iSize; + if( pHdr->pPrev ){ + assert( pHdr->pPrev->pNext==pHdr ); + pHdr->pPrev->pNext = pHdr->pNext; + }else{ + assert( mem.pFirst==pHdr ); + mem.pFirst = pHdr->pNext; + } + if( pHdr->pNext ){ + assert( pHdr->pNext->pPrev==pHdr ); + pHdr->pNext->pPrev = pHdr->pPrev; + }else{ + assert( mem.pLast==pHdr ); + mem.pLast = pHdr->pPrev; + } + z = (char*)pBt; + z -= pHdr->nTitle; + memset(z, 0x2b, sizeof(void*)*pHdr->nBacktraceSlots + sizeof(*pHdr) + + pHdr->iSize + sizeof(int) + pHdr->nTitle); + free(z); + sqlite3_mutex_leave(mem.mutex); +} + +/* +** Change the size of an existing memory allocation. +** +** For this debugging implementation, we *always* make a copy of the +** allocation into a new place in memory. In this way, if the +** higher level code is using pointer to the old allocation, it is +** much more likely to break and we are much more liking to find +** the error. +*/ +void *sqlite3_realloc(void *pPrior, int nByte){ + struct MemBlockHdr *pOldHdr; + void *pNew; + if( pPrior==0 ){ + return sqlite3_malloc(nByte); + } + if( nByte<=0 ){ + sqlite3_free(pPrior); + return 0; + } + assert( mem.disallow==0 ); + pOldHdr = sqlite3MemsysGetHeader(pPrior); + pNew = sqlite3_malloc(nByte); + if( pNew ){ + memcpy(pNew, pPrior, nByteiSize ? nByte : pOldHdr->iSize); + if( nByte>pOldHdr->iSize ){ + memset(&((char*)pNew)[pOldHdr->iSize], 0x2b, nByte - pOldHdr->iSize); + } + sqlite3_free(pPrior); + } + return pNew; +} + +/* +** Set the number of backtrace levels kept for each allocation. +** A value of zero turns of backtracing. The number is always rounded +** up to a multiple of 2. 
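[Editorial aside, not part of the checked-in diff: the sqlite3Memdebug* helpers that follow are internal test hooks, available only in SQLITE_MEMDEBUG builds and normally declared internally rather than in the public header. A hedged sketch of how test code might drive them; the extern declarations and the file name "mem.log" are only for illustration.]

    /* Internal, SQLITE_MEMDEBUG-only interfaces; not part of the public API. */
    extern void sqlite3MemdebugBacktrace(int depth);
    extern void sqlite3MemdebugSettitle(const char *zTitle);
    extern void sqlite3MemdebugDump(const char *zFilename);

    static void snapshot_outstanding_allocations(void){
      sqlite3MemdebugBacktrace(8);              /* keep up to 8 frames per allocation */
      sqlite3MemdebugSettitle("query-phase");   /* tag allocations made from here on */
      /* ... exercise the library ... */
      sqlite3MemdebugDump("mem.log");           /* list every block still unfreed */
    }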
+*/ +void sqlite3MemdebugBacktrace(int depth){ + if( depth<0 ){ depth = 0; } + if( depth>20 ){ depth = 20; } + depth = (depth+1)&0xfe; + mem.nBacktrace = depth; +} + +/* +** Set the title string for subsequent allocations. +*/ +void sqlite3MemdebugSettitle(const char *zTitle){ + int n = strlen(zTitle) + 1; + enterMem(); + if( n>=sizeof(mem.zTitle) ) n = sizeof(mem.zTitle)-1; + memcpy(mem.zTitle, zTitle, n); + mem.zTitle[n] = 0; + mem.nTitle = (n+3)&~3; + sqlite3_mutex_leave(mem.mutex); +} + +/* +** Open the file indicated and write a log of all unfreed memory +** allocations into that log. +*/ +void sqlite3MemdebugDump(const char *zFilename){ + FILE *out; + struct MemBlockHdr *pHdr; + void **pBt; + int i; + out = fopen(zFilename, "w"); + if( out==0 ){ + fprintf(stderr, "** Unable to output memory debug output log: %s **\n", + zFilename); + return; + } + for(pHdr=mem.pFirst; pHdr; pHdr=pHdr->pNext){ + char *z = (char*)pHdr; + z -= pHdr->nBacktraceSlots*sizeof(void*) + pHdr->nTitle; + fprintf(out, "**** %d bytes at %p from %s ****\n", + pHdr->iSize, &pHdr[1], pHdr->nTitle ? z : "???"); + if( pHdr->nBacktrace ){ + fflush(out); + pBt = (void**)pHdr; + pBt -= pHdr->nBacktraceSlots; + backtrace_symbols_fd(pBt, pHdr->nBacktrace, fileno(out)); + fprintf(out, "\n"); + } + } + fprintf(out, "COUNTS:\n"); + for(i=0; i%3d: %d\n", NCSIZE*8, mem.sizeCnt[NCSIZE-1]); + } + fclose(out); +} + +/* +** Return the number of times sqlite3_malloc() has been called. +*/ +int sqlite3MemdebugMallocCount(){ + int i; + int nTotal = 0; + for(i=0; i=1 ); + size = mem.aPool[i-1].u.hdr.size4x/4; + assert( size==mem.aPool[i+size-1].u.hdr.prevSize ); + assert( size>=2 ); + if( size <= MX_SMALL ){ + memsys3UnlinkFromList(i, &mem.aiSmall[size-2]); + }else{ + hash = size % N_HASH; + memsys3UnlinkFromList(i, &mem.aiHash[hash]); + } +} + +/* +** Link the chunk at mem.aPool[i] so that is on the list rooted +** at *pRoot. +*/ +static void memsys3LinkIntoList(u32 i, u32 *pRoot){ + assert( sqlite3_mutex_held(mem.mutex) ); + mem.aPool[i].u.list.next = *pRoot; + mem.aPool[i].u.list.prev = 0; + if( *pRoot ){ + mem.aPool[*pRoot].u.list.prev = i; + } + *pRoot = i; +} + +/* +** Link the chunk at index i into either the appropriate +** small chunk list, or into the large chunk hash table. +*/ +static void memsys3Link(u32 i){ + u32 size, hash; + assert( sqlite3_mutex_held(mem.mutex) ); + assert( i>=1 ); + assert( (mem.aPool[i-1].u.hdr.size4x & 1)==0 ); + size = mem.aPool[i-1].u.hdr.size4x/4; + assert( size==mem.aPool[i+size-1].u.hdr.prevSize ); + assert( size>=2 ); + if( size <= MX_SMALL ){ + memsys3LinkIntoList(i, &mem.aiSmall[size-2]); + }else{ + hash = size % N_HASH; + memsys3LinkIntoList(i, &mem.aiHash[hash]); + } +} + +/* +** Enter the mutex mem.mutex. Allocate it if it is not already allocated. +** +** Also: Initialize the memory allocation subsystem the first time +** this routine is called. +*/ +static void memsys3Enter(void){ + if( mem.mutex==0 ){ + mem.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM); + mem.aPool[0].u.hdr.size4x = SQLITE_MEMORY_SIZE/2 + 2; + mem.aPool[SQLITE_MEMORY_SIZE/8].u.hdr.prevSize = SQLITE_MEMORY_SIZE/8; + mem.aPool[SQLITE_MEMORY_SIZE/8].u.hdr.size4x = 1; + mem.iMaster = 1; + mem.szMaster = SQLITE_MEMORY_SIZE/8; + mem.mnMaster = mem.szMaster; + } + sqlite3_mutex_enter(mem.mutex); +} + +/* +** Return the amount of memory currently checked out. 
+*/ +sqlite3_int64 sqlite3_memory_used(void){ + sqlite3_int64 n; + memsys3Enter(); + n = SQLITE_MEMORY_SIZE - mem.szMaster*8; + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Return the maximum amount of memory that has ever been +** checked out since either the beginning of this process +** or since the most recent reset. +*/ +sqlite3_int64 sqlite3_memory_highwater(int resetFlag){ + sqlite3_int64 n; + memsys3Enter(); + n = SQLITE_MEMORY_SIZE - mem.mnMaster*8; + if( resetFlag ){ + mem.mnMaster = mem.szMaster; + } + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Change the alarm callback. +** +** This is a no-op for the static memory allocator. The purpose +** of the memory alarm is to support sqlite3_soft_heap_limit(). +** But with this memory allocator, the soft_heap_limit is really +** a hard limit that is fixed at SQLITE_MEMORY_SIZE. +*/ +int sqlite3_memory_alarm( + void(*xCallback)(void *pArg, sqlite3_int64 used,int N), + void *pArg, + sqlite3_int64 iThreshold +){ + return SQLITE_OK; +} + +/* +** Called when we are unable to satisfy an allocation of nBytes. +*/ +static void memsys3OutOfMemory(int nByte){ + if( !mem.alarmBusy ){ + mem.alarmBusy = 1; + assert( sqlite3_mutex_held(mem.mutex) ); + sqlite3_mutex_leave(mem.mutex); + sqlite3_release_memory(nByte); + sqlite3_mutex_enter(mem.mutex); + mem.alarmBusy = 0; + } +} + +/* +** Return the size of an outstanding allocation, in bytes. The +** size returned omits the 8-byte header overhead. This only +** works for chunks that are currently checked out. +*/ +int sqlite3MallocSize(void *p){ + int iSize = 0; + if( p ){ + Mem3Block *pBlock = (Mem3Block*)p; + assert( (pBlock[-1].u.hdr.size4x&1)!=0 ); + iSize = (pBlock[-1].u.hdr.size4x&~3)*2 - 4; + } + return iSize; +} + +/* +** Chunk i is a free chunk that has been unlinked. Adjust its +** size parameters for check-out and return a pointer to the +** user portion of the chunk. +*/ +static void *memsys3Checkout(u32 i, int nBlock){ + u32 x; + assert( sqlite3_mutex_held(mem.mutex) ); + assert( i>=1 ); + assert( mem.aPool[i-1].u.hdr.size4x/4==nBlock ); + assert( mem.aPool[i+nBlock-1].u.hdr.prevSize==nBlock ); + x = mem.aPool[i-1].u.hdr.size4x; + mem.aPool[i-1].u.hdr.size4x = nBlock*4 | 1 | (x&2); + mem.aPool[i+nBlock-1].u.hdr.prevSize = nBlock; + mem.aPool[i+nBlock-1].u.hdr.size4x |= 2; + return &mem.aPool[i]; +} + +/* +** Carve a piece off of the end of the mem.iMaster free chunk. +** Return a pointer to the new allocation. Or, if the master chunk +** is not large enough, return 0. +*/ +static void *memsys3FromMaster(int nBlock){ + assert( sqlite3_mutex_held(mem.mutex) ); + assert( mem.szMaster>=nBlock ); + if( nBlock>=mem.szMaster-1 ){ + /* Use the entire master */ + void *p = memsys3Checkout(mem.iMaster, mem.szMaster); + mem.iMaster = 0; + mem.szMaster = 0; + mem.mnMaster = 0; + return p; + }else{ + /* Split the master block. Return the tail. 
*/ + u32 newi, x; + newi = mem.iMaster + mem.szMaster - nBlock; + assert( newi > mem.iMaster+1 ); + mem.aPool[mem.iMaster+mem.szMaster-1].u.hdr.prevSize = nBlock; + mem.aPool[mem.iMaster+mem.szMaster-1].u.hdr.size4x |= 2; + mem.aPool[newi-1].u.hdr.size4x = nBlock*4 + 1; + mem.szMaster -= nBlock; + mem.aPool[newi-1].u.hdr.prevSize = mem.szMaster; + x = mem.aPool[mem.iMaster-1].u.hdr.size4x & 2; + mem.aPool[mem.iMaster-1].u.hdr.size4x = mem.szMaster*4 | x; + if( mem.szMaster < mem.mnMaster ){ + mem.mnMaster = mem.szMaster; + } + return (void*)&mem.aPool[newi]; + } +} + +/* +** *pRoot is the head of a list of free chunks of the same size +** or same size hash. In other words, *pRoot is an entry in either +** mem.aiSmall[] or mem.aiHash[]. +** +** This routine examines all entries on the given list and tries +** to coalesce each entries with adjacent free chunks. +** +** If it sees a chunk that is larger than mem.iMaster, it replaces +** the current mem.iMaster with the new larger chunk. In order for +** this mem.iMaster replacement to work, the master chunk must be +** linked into the hash tables. That is not the normal state of +** affairs, of course. The calling routine must link the master +** chunk before invoking this routine, then must unlink the (possibly +** changed) master chunk once this routine has finished. +*/ +static void memsys3Merge(u32 *pRoot){ + u32 iNext, prev, size, i, x; + + assert( sqlite3_mutex_held(mem.mutex) ); + for(i=*pRoot; i>0; i=iNext){ + iNext = mem.aPool[i].u.list.next; + size = mem.aPool[i-1].u.hdr.size4x; + assert( (size&1)==0 ); + if( (size&2)==0 ){ + memsys3UnlinkFromList(i, pRoot); + assert( i > mem.aPool[i-1].u.hdr.prevSize ); + prev = i - mem.aPool[i-1].u.hdr.prevSize; + if( prev==iNext ){ + iNext = mem.aPool[prev].u.list.next; + } + memsys3Unlink(prev); + size = i + size/4 - prev; + x = mem.aPool[prev-1].u.hdr.size4x & 2; + mem.aPool[prev-1].u.hdr.size4x = size*4 | x; + mem.aPool[prev+size-1].u.hdr.prevSize = size; + memsys3Link(prev); + i = prev; + }else{ + size /= 4; + } + if( size>mem.szMaster ){ + mem.iMaster = i; + mem.szMaster = size; + } + } +} + +/* +** Return a block of memory of at least nBytes in size. +** Return NULL if unable. +*/ +static void *memsys3Malloc(int nByte){ + u32 i; + int nBlock; + int toFree; + + assert( sqlite3_mutex_held(mem.mutex) ); + assert( sizeof(Mem3Block)==8 ); + if( nByte<=12 ){ + nBlock = 2; + }else{ + nBlock = (nByte + 11)/8; + } + assert( nBlock >= 2 ); + + /* STEP 1: + ** Look for an entry of the correct size in either the small + ** chunk table or in the large chunk hash table. This is + ** successful most of the time (about 9 times out of 10). + */ + if( nBlock <= MX_SMALL ){ + i = mem.aiSmall[nBlock-2]; + if( i>0 ){ + memsys3UnlinkFromList(i, &mem.aiSmall[nBlock-2]); + return memsys3Checkout(i, nBlock); + } + }else{ + int hash = nBlock % N_HASH; + for(i=mem.aiHash[hash]; i>0; i=mem.aPool[i].u.list.next){ + if( mem.aPool[i-1].u.hdr.size4x/4==nBlock ){ + memsys3UnlinkFromList(i, &mem.aiHash[hash]); + return memsys3Checkout(i, nBlock); + } + } + } + + /* STEP 2: + ** Try to satisfy the allocation by carving a piece off of the end + ** of the master chunk. This step usually works if step 1 fails. + */ + if( mem.szMaster>=nBlock ){ + return memsys3FromMaster(nBlock); + } + + + /* STEP 3: + ** Loop through the entire memory pool. Coalesce adjacent free + ** chunks. Recompute the master chunk as the largest free chunk. 
+ ** Then try again to satisfy the allocation by carving a piece off + ** of the end of the master chunk. This step happens very + ** rarely (we hope!) + */ + for(toFree=nBlock*16; toFree=nBlock ){ + return memsys3FromMaster(nBlock); + } + } + } + + /* If none of the above worked, then we fail. */ + return 0; +} + +/* +** Free an outstanding memory allocation. +*/ +void memsys3Free(void *pOld){ + Mem3Block *p = (Mem3Block*)pOld; + int i; + u32 size, x; + assert( sqlite3_mutex_held(mem.mutex) ); + assert( p>mem.aPool && p<&mem.aPool[SQLITE_MEMORY_SIZE/8] ); + i = p - mem.aPool; + assert( (mem.aPool[i-1].u.hdr.size4x&1)==1 ); + size = mem.aPool[i-1].u.hdr.size4x/4; + assert( i+size<=SQLITE_MEMORY_SIZE/8+1 ); + mem.aPool[i-1].u.hdr.size4x &= ~1; + mem.aPool[i+size-1].u.hdr.prevSize = size; + mem.aPool[i+size-1].u.hdr.size4x &= ~2; + memsys3Link(i); + + /* Try to expand the master using the newly freed chunk */ + if( mem.iMaster ){ + while( (mem.aPool[mem.iMaster-1].u.hdr.size4x&2)==0 ){ + size = mem.aPool[mem.iMaster-1].u.hdr.prevSize; + mem.iMaster -= size; + mem.szMaster += size; + memsys3Unlink(mem.iMaster); + x = mem.aPool[mem.iMaster-1].u.hdr.size4x & 2; + mem.aPool[mem.iMaster-1].u.hdr.size4x = mem.szMaster*4 | x; + mem.aPool[mem.iMaster+mem.szMaster-1].u.hdr.prevSize = mem.szMaster; + } + x = mem.aPool[mem.iMaster-1].u.hdr.size4x & 2; + while( (mem.aPool[mem.iMaster+mem.szMaster-1].u.hdr.size4x&1)==0 ){ + memsys3Unlink(mem.iMaster+mem.szMaster); + mem.szMaster += mem.aPool[mem.iMaster+mem.szMaster-1].u.hdr.size4x/4; + mem.aPool[mem.iMaster-1].u.hdr.size4x = mem.szMaster*4 | x; + mem.aPool[mem.iMaster+mem.szMaster-1].u.hdr.prevSize = mem.szMaster; + } + } +} + +/* +** Allocate nBytes of memory +*/ +void *sqlite3_malloc(int nBytes){ + sqlite3_int64 *p = 0; + if( nBytes>0 ){ + memsys3Enter(); + p = memsys3Malloc(nBytes); + sqlite3_mutex_leave(mem.mutex); + } + return (void*)p; +} + +/* +** Free memory. +*/ +void sqlite3_free(void *pPrior){ + if( pPrior==0 ){ + return; + } + assert( mem.mutex!=0 ); + sqlite3_mutex_enter(mem.mutex); + memsys3Free(pPrior); + sqlite3_mutex_leave(mem.mutex); +} + +/* +** Change the size of an existing memory allocation +*/ +void *sqlite3_realloc(void *pPrior, int nBytes){ + int nOld; + void *p; + if( pPrior==0 ){ + return sqlite3_malloc(nBytes); + } + if( nBytes<=0 ){ + sqlite3_free(pPrior); + return 0; + } + assert( mem.mutex!=0 ); + nOld = sqlite3MallocSize(pPrior); + if( nBytes<=nOld && nBytes>=nOld-128 ){ + return pPrior; + } + sqlite3_mutex_enter(mem.mutex); + p = memsys3Malloc(nBytes); + if( p ){ + if( nOld>1)!=(size&1) ){ + fprintf(out, "%p tail checkout bit is incorrect\n", &mem.aPool[i]); + assert( 0 ); + break; + } + if( size&1 ){ + fprintf(out, "%p %6d bytes checked out\n", &mem.aPool[i], (size/4)*8-8); + }else{ + fprintf(out, "%p %6d bytes free%s\n", &mem.aPool[i], (size/4)*8-8, + i==mem.iMaster ? 
" **master**" : ""); + } + } + for(i=0; i0; j=mem.aPool[j].u.list.next){ + fprintf(out, " %p(%d)", &mem.aPool[j], + (mem.aPool[j-1].u.hdr.size4x/4)*8-8); + } + fprintf(out, "\n"); + } + for(i=0; i0; j=mem.aPool[j].u.list.next){ + fprintf(out, " %p(%d)", &mem.aPool[j], + (mem.aPool[j-1].u.hdr.size4x/4)*8-8); + } + fprintf(out, "\n"); + } + fprintf(out, "master=%d\n", mem.iMaster); + fprintf(out, "nowUsed=%d\n", SQLITE_MEMORY_SIZE - mem.szMaster*8); + fprintf(out, "mxUsed=%d\n", SQLITE_MEMORY_SIZE - mem.mnMaster*8); + sqlite3_mutex_leave(mem.mutex); + if( out==stdout ){ + fflush(stdout); + }else{ + fclose(out); + } +#endif +} + + +#endif /* !SQLITE_MEMORY_SIZE */ Added: external/sqlite-source-3.5.7.x/mem4.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mem4.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,393 @@ +/* +** 2007 August 14 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement a memory +** allocation subsystem for use by SQLite. +** +** $Id: mem4.c,v 1.2 2008/02/14 23:26:56 drh Exp $ +*/ +#include "sqliteInt.h" + +/* +** This version of the memory allocator attempts to obtain memory +** from mmap() if the size of the allocation is close to the size +** of a virtual memory page. If the size of the allocation is different +** from the virtual memory page size, then ordinary malloc() is used. +** Ordinary malloc is also used if space allocated to mmap() is +** exhausted. +** +** Enable this memory allocation by compiling with -DSQLITE_MMAP_HEAP_SIZE=nnn +** where nnn is the maximum number of bytes of mmap-ed memory you want +** to support. This module may choose to use less memory than requested. +** +*/ +#ifdef SQLITE_MMAP_HEAP_SIZE + +/* +** This is a test version of the memory allocator that attempts to +** use mmap() and madvise() for allocations and frees of approximately +** the virtual memory page size. +*/ +#include +#include +#include +#include + + +/* +** All of the static variables used by this module are collected +** into a single structure named "mem". This is to keep the +** static variables organized and to reduce namespace pollution +** when this module is combined with other in the amalgamation. +*/ +static struct { + /* + ** The alarm callback and its arguments. The mem.mutex lock will + ** be held while the callback is running. Recursive calls into + ** the memory subsystem are allowed, but no new callbacks will be + ** issued. The alarmBusy variable is set to prevent recursive + ** callbacks. + */ + sqlite3_int64 alarmThreshold; + void (*alarmCallback)(void*, sqlite3_int64,int); + void *alarmArg; + int alarmBusy; + + /* + ** Mutex to control access to the memory allocation subsystem. + */ + sqlite3_mutex *mutex; + + /* + ** Current allocation and high-water mark. + */ + sqlite3_int64 nowUsed; + sqlite3_int64 mxUsed; + + /* + ** Current allocation and high-water marks for mmap allocated memory. + */ + sqlite3_int64 nowUsedMMap; + sqlite3_int64 mxUsedMMap; + + /* + ** Size of a single mmap page. Obtained from sysconf(). + */ + int szPage; + int mnPage; + + /* + ** The number of available mmap pages. 
+ */ + int nPage; + + /* + ** Index of the first free page. 0 means no pages have been freed. + */ + int firstFree; + + /* First unused page on the top of the heap. + */ + int firstUnused; + + /* + ** Bulk memory obtained from from mmap(). + */ + char *mmapHeap; /* first byte of the heap */ + +} mem; + + +/* +** Enter the mutex mem.mutex. Allocate it if it is not already allocated. +** The mmap() region is initialized the first time this routine is called. +*/ +static void memsys4Enter(void){ + if( mem.mutex==0 ){ + mem.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM); + } + sqlite3_mutex_enter(mem.mutex); +} + +/* +** Attempt to free memory to the mmap heap. This only works if +** the pointer p is within the range of memory addresses that +** comprise the mmap heap. Return 1 if the memory was freed +** successfully. Return 0 if the pointer is out of range. +*/ +static int mmapFree(void *p){ + char *z; + int idx, *a; + if( mem.mmapHeap==MAP_FAILED || mem.nPage==0 ){ + return 0; + } + z = (char*)p; + idx = (z - mem.mmapHeap)/mem.szPage; + if( idx<1 || idx>=mem.nPage ){ + return 0; + } + a = (int*)mem.mmapHeap; + a[idx] = a[mem.firstFree]; + mem.firstFree = idx; + mem.nowUsedMMap -= mem.szPage; + madvise(p, mem.szPage, MADV_DONTNEED); + return 1; +} + +/* +** Attempt to allocate nBytes from the mmap heap. Return a pointer +** to the allocated page. Or, return NULL if the allocation fails. +** +** The allocation will fail if nBytes is not the right size. +** Or, the allocation will fail if the mmap heap has been exhausted. +*/ +static void *mmapAlloc(int nBytes){ + int idx = 0; + if( nBytes>mem.szPage || nBytes mem.szPage ){ + mem.nPage = mem.szPage/sizeof(int); + } + mem.mmapHeap = mmap(0, mem.szPage*mem.nPage, PROT_WRITE|PROT_READ, + MAP_ANONYMOUS|MAP_SHARED, -1, 0); + if( mem.mmapHeap==MAP_FAILED ){ + mem.firstUnused = errno; + }else{ + mem.firstUnused = 1; + mem.nowUsedMMap = mem.szPage; + } + } + if( mem.mmapHeap==MAP_FAILED ){ + return 0; + } + if( mem.firstFree ){ + int idx = mem.firstFree; + int *a = (int*)mem.mmapHeap; + mem.firstFree = a[idx]; + }else if( mem.firstUnusedmem.mxUsedMMap ){ + mem.mxUsedMMap = mem.nowUsedMMap; + } + return (void*)&mem.mmapHeap[idx*mem.szPage]; + }else{ + return 0; + } +} + +/* +** Release the mmap-ed memory region if it is currently allocated and +** is not in use. +*/ +static void mmapUnmap(void){ + if( mem.mmapHeap==MAP_FAILED ) return; + if( mem.nPage==0 ) return; + if( mem.nowUsedMMap>mem.szPage ) return; + munmap(mem.mmapHeap, mem.nPage*mem.szPage); + mem.nowUsedMMap = 0; + mem.nPage = 0; +} + + +/* +** Return the amount of memory currently checked out. +*/ +sqlite3_int64 sqlite3_memory_used(void){ + sqlite3_int64 n; + memsys4Enter(); + n = mem.nowUsed + mem.nowUsedMMap; + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Return the maximum amount of memory that has ever been +** checked out since either the beginning of this process +** or since the most recent reset. 
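/* [Editor's note: not part of this commit.  A simplified standalone sketch of
** the mem4.c scheme described above: reserve a fixed region with mmap(), hand
** out whole pages, and keep a free list of page indexes in the first page of
** the region.  The names, the pool size, and the use of MAP_PRIVATE are
** illustrative only, and MAP_ANONYMOUS may need a feature-test macro on some
** systems.] */
#include <unistd.h>
#include <sys/mman.h>

#define SKETCH_NPAGE 64                 /* assumed size of the page pool */

static char *sketchHeap;                /* base of the mmap()ed region */
static long  sketchPgsz;                /* page size from sysconf() */
static int   sketchFirstFree;           /* head of freed-page index list, 0 = none */
static int   sketchFirstUnused = 1;     /* next never-used page index (page 0 holds the list) */

static void *sketchPageAlloc(void){
  int idx;
  int *a;
  if( sketchHeap==0 ){
    void *m;
    sketchPgsz = sysconf(_SC_PAGESIZE);
    m = mmap(0, sketchPgsz*SKETCH_NPAGE, PROT_READ|PROT_WRITE,
             MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
    if( m==MAP_FAILED ) return 0;
    sketchHeap = m;
  }
  a = (int*)sketchHeap;                 /* free-list indexes live in page 0 */
  if( sketchFirstFree ){
    idx = sketchFirstFree;              /* reuse a previously freed page */
    sketchFirstFree = a[idx];
  }else if( sketchFirstUnused<SKETCH_NPAGE ){
    idx = sketchFirstUnused++;          /* carve a fresh page off the top */
  }else{
    return 0;                           /* pool exhausted */
  }
  return (void*)&sketchHeap[idx*sketchPgsz];
}

static void sketchPageFree(void *p){
  int *a = (int*)sketchHeap;
  int idx = (int)(((char*)p - sketchHeap)/sketchPgsz);
  a[idx] = sketchFirstFree;             /* push this page's index onto the list */
  sketchFirstFree = idx;
}
/* [End of editor's sketch.] */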
+*/ +sqlite3_int64 sqlite3_memory_highwater(int resetFlag){ + sqlite3_int64 n; + memsys4Enter(); + n = mem.mxUsed + mem.mxUsedMMap; + if( resetFlag ){ + mem.mxUsed = mem.nowUsed; + mem.mxUsedMMap = mem.nowUsedMMap; + } + sqlite3_mutex_leave(mem.mutex); + return n; +} + +/* +** Change the alarm callback +*/ +int sqlite3_memory_alarm( + void(*xCallback)(void *pArg, sqlite3_int64 used,int N), + void *pArg, + sqlite3_int64 iThreshold +){ + memsys4Enter(); + mem.alarmCallback = xCallback; + mem.alarmArg = pArg; + mem.alarmThreshold = iThreshold; + sqlite3_mutex_leave(mem.mutex); + return SQLITE_OK; +} + +/* +** Trigger the alarm +*/ +static void sqlite3MemsysAlarm(int nByte){ + void (*xCallback)(void*,sqlite3_int64,int); + sqlite3_int64 nowUsed; + void *pArg; + if( mem.alarmCallback==0 || mem.alarmBusy ) return; + mem.alarmBusy = 1; + xCallback = mem.alarmCallback; + nowUsed = mem.nowUsed; + pArg = mem.alarmArg; + sqlite3_mutex_leave(mem.mutex); + xCallback(pArg, nowUsed, nByte); + sqlite3_mutex_enter(mem.mutex); + mem.alarmBusy = 0; +} + +/* +** Allocate nBytes of memory +*/ +static void *memsys4Malloc(int nBytes){ + sqlite3_int64 *p = 0; + if( mem.alarmCallback!=0 + && mem.nowUsed+mem.nowUsedMMap+nBytes>=mem.alarmThreshold ){ + sqlite3MemsysAlarm(nBytes); + } + if( (p = mmapAlloc(nBytes))==0 ){ + p = malloc(nBytes+8); + if( p==0 ){ + sqlite3MemsysAlarm(nBytes); + p = malloc(nBytes+8); + } + if( p ){ + p[0] = nBytes; + p++; + mem.nowUsed += nBytes; + if( mem.nowUsed>mem.mxUsed ){ + mem.mxUsed = mem.nowUsed; + } + } + } + return (void*)p; +} + +/* +** Return the size of a memory allocation +*/ +static int memsys4Size(void *pPrior){ + char *z = (char*)pPrior; + int idx = mem.nPage ? (z - mem.mmapHeap)/mem.szPage : 0; + int nByte; + if( idx>=1 && idx0 ){ + memsys4Enter(); + p = memsys4Malloc(nBytes); + sqlite3_mutex_leave(mem.mutex); + } + return (void*)p; +} + +/* +** Free memory. +*/ +void sqlite3_free(void *pPrior){ + if( pPrior==0 ){ + return; + } + assert( mem.mutex!=0 ); + sqlite3_mutex_enter(mem.mutex); + memsys4Free(pPrior); + sqlite3_mutex_leave(mem.mutex); +} + + + +/* +** Change the size of an existing memory allocation +*/ +void *sqlite3_realloc(void *pPrior, int nBytes){ + int nOld; + sqlite3_int64 *p; + if( pPrior==0 ){ + return sqlite3_malloc(nBytes); + } + if( nBytes<=0 ){ + sqlite3_free(pPrior); + return 0; + } + nOld = memsys4Size(pPrior); + if( nBytes<=nOld && nBytes>=nOld-128 ){ + return pPrior; + } + assert( mem.mutex!=0 ); + sqlite3_mutex_enter(mem.mutex); + p = memsys4Malloc(nBytes); + if( p ){ + if( nOld=0 && i=0 && iLogsize=0 ){ + mem.aPool[next].u.list.prev = prev; + } +} + +/* +** Link the chunk at mem.aPool[i] so that is on the iLogsize +** free list. +*/ +static void memsys5Link(int i, int iLogsize){ + int x; + assert( sqlite3_mutex_held(mem.mutex) ); + assert( i>=0 && i=0 && iLogsize=0 ){ + assert( x=POW2_MAX ); + mem.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM); + sqlite3_mutex_enter(mem.mutex); + for(i=0; i=0 && i=0 && iLogsize=0 ); + while( i>0 ){ + if( imem.maxRequest ){ + mem.maxRequest = nByte; + } + + /* Simulate a memory allocation fault */ + if( sqlite3FaultStep(SQLITE_FAULTINJECTOR_MALLOC) ) return 0; + + /* Round nByte up to the next valid power of two */ + if( nByte>POW2_MAX ) return 0; + for(iFullSz=POW2_MIN, iLogsize=0; iFullSz=mem.alarmThreshold ){ + memsys5Alarm(iFullSz); + } + + /* Make sure mem.aiFreelist[iLogsize] contains at least one free + ** block. 
If not, then split a block of the next larger power of + ** two in order to create a new free block of size iLogsize. + */ + for(iBin=iLogsize; mem.aiFreelist[iBin]<0 && iBin=NSIZE ) return 0; + i = memsys5UnlinkFirst(iBin); + while( iBin>iLogsize ){ + int newSize; + + iBin--; + newSize = 1 << iBin; + mem.aCtrl[i+newSize] = CTRL_FREE | iBin; + memsys5Link(i+newSize, iBin); + } + mem.aCtrl[i] = iLogsize; + + /* Update allocator performance statistics. */ + mem.nAlloc++; + mem.totalAlloc += iFullSz; + mem.totalExcess += iFullSz - nByte; + mem.currentCount++; + mem.currentOut += iFullSz; + if( mem.maxCount=0 && i0 ); + assert( mem.currentOut>=0 ); + mem.currentCount--; + mem.currentOut -= size*POW2_MIN; + assert( mem.currentOut>0 || mem.currentCount==0 ); + assert( mem.currentCount>0 || mem.currentOut==0 ); + + mem.aCtrl[i] = CTRL_FREE | iLogsize; + while( iLogsize>iLogsize) & 1 ){ + iBuddy = i - size; + }else{ + iBuddy = i + size; + } + assert( iBuddy>=0 && iBuddy0 ){ + memsys5Enter(); + p = memsys5Malloc(nBytes); + sqlite3_mutex_leave(mem.mutex); + } + return (void*)p; +} + +/* +** Free memory. +*/ +void sqlite3_free(void *pPrior){ + if( pPrior==0 ){ + return; + } + assert( mem.mutex!=0 ); + sqlite3_mutex_enter(mem.mutex); + memsys5Free(pPrior); + sqlite3_mutex_leave(mem.mutex); +} + +/* +** Change the size of an existing memory allocation +*/ +void *sqlite3_realloc(void *pPrior, int nBytes){ + int nOld; + void *p; + if( pPrior==0 ){ + return sqlite3_malloc(nBytes); + } + if( nBytes<=0 ){ + sqlite3_free(pPrior); + return 0; + } + assert( mem.mutex!=0 ); + nOld = sqlite3MallocSize(pPrior); + if( nBytes<=nOld ){ + return pPrior; + } + sqlite3_mutex_enter(mem.mutex); + p = memsys5Malloc(nBytes); + if( p ){ + memcpy(p, pPrior, nOld); + memsys5Free(pPrior); + } + sqlite3_mutex_leave(mem.mutex); + return p; +} + +/* +** Open the file indicated and write a log of all unfreed memory +** allocations into that log. +*/ +void sqlite3MemdebugDump(const char *zFilename){ +#ifdef SQLITE_DEBUG + FILE *out; + int i, j, n; + + if( zFilename==0 || zFilename[0]==0 ){ + out = stdout; + }else{ + out = fopen(zFilename, "w"); + if( out==0 ){ + fprintf(stderr, "** Unable to output memory debug output log: %s **\n", + zFilename); + return; + } + } + memsys5Enter(); + for(i=0; i=0; j = mem.aPool[j].u.list.next, n++){} + fprintf(out, "freelist items of size %d: %d\n", POW2_MIN << i, n); + } + fprintf(out, "mem.nAlloc = %llu\n", mem.nAlloc); + fprintf(out, "mem.totalAlloc = %llu\n", mem.totalAlloc); + fprintf(out, "mem.totalExcess = %llu\n", mem.totalExcess); + fprintf(out, "mem.currentOut = %u\n", mem.currentOut); + fprintf(out, "mem.currentCount = %u\n", mem.currentCount); + fprintf(out, "mem.maxOut = %u\n", mem.maxOut); + fprintf(out, "mem.maxCount = %u\n", mem.maxCount); + fprintf(out, "mem.maxRequest = %u\n", mem.maxRequest); + sqlite3_mutex_leave(mem.mutex); + if( out==stdout ){ + fflush(stdout); + }else{ + fclose(out); + } +#endif +} + + +#endif /* !SQLITE_POW2_MEMORY_SIZE */ Added: external/sqlite-source-3.5.7.x/mutex.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mutex.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,126 @@ +/* +** 2007 August 14 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
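/* [Editor's note: not part of this commit.  The memsys5 allocator above
** rounds each request up to a power of two and, when no free block of that
** size exists, splits the next larger block into two "buddies".  Below is a
** minimal standalone sketch of just those two calculations, assuming a
** minimum block size of 8 bytes (the real POW2_MIN may differ); the names
** sketchRoundLog2() and sketchBuddyOf() are illustrative.] */
#include <assert.h>

#define SKETCH_MIN 8    /* assumed minimum block size, in bytes */

/* log2 of the smallest power-of-two block that can hold nByte bytes */
static int sketchRoundLog2(int nByte){
  int iLog = 0;
  int sz = SKETCH_MIN;
  while( sz<nByte ){ sz += sz; iLog++; }
  return iLog;
}

/* Index of the buddy of the block at index i (counted in SKETCH_MIN-byte
** units) whose size is 2^iLog minimum blocks.  When a block is freed and its
** buddy is also free, the two can be merged into one block of twice the size. */
static int sketchBuddyOf(int i, int iLog){
  int size = 1<<iLog;
  assert( (i & (size-1))==0 );   /* blocks are aligned to their own size */
  return ((i>>iLog) & 1) ? i-size : i+size;
}
/* The power-of-two rounding wastes up to half of each allocation, but it makes
** coalescing O(1): a block's buddy is a pure function of its index and size. */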
+** +************************************************************************* +** This file contains the C functions that implement mutexes. +** +** The implementation in this file does not provide any mutual +** exclusion and is thus suitable for use only in applications +** that use SQLite in a single thread. But this implementation +** does do a lot of error checking on mutexes to make sure they +** are called correctly and at appropriate times. Hence, this +** implementation is suitable for testing. +** debugging purposes +** +** $Id: mutex.c,v 1.16 2007/09/10 16:13:00 danielk1977 Exp $ +*/ +#include "sqliteInt.h" + +#ifdef SQLITE_MUTEX_NOOP_DEBUG +/* +** In this implementation, mutexes do not provide any mutual exclusion. +** But the error checking is provided. This implementation is useful +** for test purposes. +*/ + +/* +** The mutex object +*/ +struct sqlite3_mutex { + int id; /* The mutex type */ + int cnt; /* Number of entries without a matching leave */ +}; + +/* +** The sqlite3_mutex_alloc() routine allocates a new +** mutex and returns a pointer to it. If it returns NULL +** that means that a mutex could not be allocated. +*/ +sqlite3_mutex *sqlite3_mutex_alloc(int id){ + static sqlite3_mutex aStatic[5]; + sqlite3_mutex *pNew = 0; + switch( id ){ + case SQLITE_MUTEX_FAST: + case SQLITE_MUTEX_RECURSIVE: { + pNew = sqlite3_malloc(sizeof(*pNew)); + if( pNew ){ + pNew->id = id; + pNew->cnt = 0; + } + break; + } + default: { + assert( id-2 >= 0 ); + assert( id-2 < sizeof(aStatic)/sizeof(aStatic[0]) ); + pNew = &aStatic[id-2]; + pNew->id = id; + break; + } + } + return pNew; +} + +/* +** This routine deallocates a previously allocated mutex. +*/ +void sqlite3_mutex_free(sqlite3_mutex *p){ + assert( p ); + assert( p->cnt==0 ); + assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); + sqlite3_free(p); +} + +/* +** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt +** to enter a mutex. If another thread is already within the mutex, +** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return +** SQLITE_BUSY. The sqlite3_mutex_try() interface returns SQLITE_OK +** upon successful entry. Mutexes created using SQLITE_MUTEX_RECURSIVE can +** be entered multiple times by the same thread. In such cases the, +** mutex must be exited an equal number of times before another thread +** can enter. If the same thread tries to enter any other kind of mutex +** more than once, the behavior is undefined. +*/ +void sqlite3_mutex_enter(sqlite3_mutex *p){ + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + p->cnt++; +} +int sqlite3_mutex_try(sqlite3_mutex *p){ + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + p->cnt++; + return SQLITE_OK; +} + +/* +** The sqlite3_mutex_leave() routine exits a mutex that was +** previously entered by the same thread. The behavior +** is undefined if the mutex is not currently entered or +** is not currently allocated. SQLite will never do either. +*/ +void sqlite3_mutex_leave(sqlite3_mutex *p){ + assert( p ); + assert( sqlite3_mutex_held(p) ); + p->cnt--; + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); +} + +/* +** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are +** intended for use inside assert() statements. 
+*/ +int sqlite3_mutex_held(sqlite3_mutex *p){ + return p==0 || p->cnt>0; +} +int sqlite3_mutex_notheld(sqlite3_mutex *p){ + return p==0 || p->cnt==0; +} +#endif /* SQLITE_MUTEX_NOOP_DEBUG */ Added: external/sqlite-source-3.5.7.x/mutex.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mutex.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,82 @@ +/* +** 2007 August 28 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** This file contains the common header for all mutex implementations. +** The sqliteInt.h header #includes this file so that it is available +** to all source files. We break it out in an effort to keep the code +** better organized. +** +** NOTE: source files should *not* #include this header file directly. +** Source files should #include the sqliteInt.h file and let that file +** include this one indirectly. +** +** $Id: mutex.h,v 1.2 2007/08/30 14:10:30 drh Exp $ +*/ + + +#ifdef SQLITE_MUTEX_APPDEF +/* +** If SQLITE_MUTEX_APPDEF is defined, then this whole module is +** omitted and equivalent functionality must be provided by the +** application that links against the SQLite library. +*/ +#else +/* +** Figure out what version of the code to use. The choices are +** +** SQLITE_MUTEX_NOOP For single-threaded applications that +** do not desire error checking. +** +** SQLITE_MUTEX_NOOP_DEBUG For single-threaded applications with +** error checking to help verify that mutexes +** are being used correctly even though they +** are not needed. Used when SQLITE_DEBUG is +** defined on single-threaded builds. +** +** SQLITE_MUTEX_PTHREADS For multi-threaded applications on Unix. +** +** SQLITE_MUTEX_W32 For multi-threaded applications on Win32. +** +** SQLITE_MUTEX_OS2 For multi-threaded applications on OS/2. +*/ +#define SQLITE_MUTEX_NOOP 1 /* The default */ +#if defined(SQLITE_DEBUG) && !SQLITE_THREADSAFE +# undef SQLITE_MUTEX_NOOP +# define SQLITE_MUTEX_NOOP_DEBUG +#endif +#if defined(SQLITE_MUTEX_NOOP) && SQLITE_THREADSAFE && OS_UNIX +# undef SQLITE_MUTEX_NOOP +# define SQLITE_MUTEX_PTHREADS +#endif +#if defined(SQLITE_MUTEX_NOOP) && SQLITE_THREADSAFE && OS_WIN +# undef SQLITE_MUTEX_NOOP +# define SQLITE_MUTEX_W32 +#endif +#if defined(SQLITE_MUTEX_NOOP) && SQLITE_THREADSAFE && OS_OS2 +# undef SQLITE_MUTEX_NOOP +# define SQLITE_MUTEX_OS2 +#endif + +#ifdef SQLITE_MUTEX_NOOP +/* +** If this is a no-op implementation, implement everything as macros. +*/ +#define sqlite3_mutex_alloc(X) ((sqlite3_mutex*)8) +#define sqlite3_mutex_free(X) +#define sqlite3_mutex_enter(X) +#define sqlite3_mutex_try(X) SQLITE_OK +#define sqlite3_mutex_leave(X) +#define sqlite3_mutex_held(X) 1 +#define sqlite3_mutex_notheld(X) 1 +#endif + +#endif /* SQLITE_MUTEX_APPDEF */ Added: external/sqlite-source-3.5.7.x/mutex_os2.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mutex_os2.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,245 @@ +/* +** 2007 August 28 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. 
+** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement mutexes for OS/2 +** +** $Id: mutex_os2.c,v 1.5 2008/02/01 19:42:38 pweilbacher Exp $ +*/ +#include "sqliteInt.h" + +/* +** The code in this file is only used if SQLITE_MUTEX_OS2 is defined. +** See the mutex.h file for details. +*/ +#ifdef SQLITE_MUTEX_OS2 + +/********************** OS/2 Mutex Implementation ********************** +** +** This implementation of mutexes is built using the OS/2 API. +*/ + +/* +** The mutex object +** Each recursive mutex is an instance of the following structure. +*/ +struct sqlite3_mutex { + HMTX mutex; /* Mutex controlling the lock */ + int id; /* Mutex type */ + int nRef; /* Number of references */ + TID owner; /* Thread holding this mutex */ +}; + +#define OS2_MUTEX_INITIALIZER 0,0,0,0 + +/* +** The sqlite3_mutex_alloc() routine allocates a new +** mutex and returns a pointer to it. If it returns NULL +** that means that a mutex could not be allocated. +** SQLite will unwind its stack and return an error. The argument +** to sqlite3_mutex_alloc() is one of these integer constants: +** +**
+**     SQLITE_MUTEX_FAST               0
+**     SQLITE_MUTEX_RECURSIVE          1
+**     SQLITE_MUTEX_STATIC_MASTER      2
+**     SQLITE_MUTEX_STATIC_MEM         3
+**     SQLITE_MUTEX_STATIC_PRNG        4
                +** +** The first two constants cause sqlite3_mutex_alloc() to create +** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE +** is used but not necessarily so when SQLITE_MUTEX_FAST is used. +** The mutex implementation does not need to make a distinction +** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does +** not want to. But SQLite will only request a recursive mutex in +** cases where it really needs one. If a faster non-recursive mutex +** implementation is available on the host platform, the mutex subsystem +** might return such a mutex in response to SQLITE_MUTEX_FAST. +** +** The other allowed parameters to sqlite3_mutex_alloc() each return +** a pointer to a static preexisting mutex. Three static mutexes are +** used by the current version of SQLite. Future versions of SQLite +** may add additional static mutexes. Static mutexes are for internal +** use by SQLite only. Applications that use SQLite mutexes should +** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or +** SQLITE_MUTEX_RECURSIVE. +** +** Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST +** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() +** returns a different mutex on every call. But for the static +** mutex types, the same mutex is returned on every call that has +** the same type number. +*/ +sqlite3_mutex *sqlite3_mutex_alloc(int iType){ + sqlite3_mutex *p = NULL; + switch( iType ){ + case SQLITE_MUTEX_FAST: + case SQLITE_MUTEX_RECURSIVE: { + p = sqlite3MallocZero( sizeof(*p) ); + if( p ){ + p->id = iType; + if( DosCreateMutexSem( 0, &p->mutex, 0, FALSE ) != NO_ERROR ){ + sqlite3_free( p ); + p = NULL; + } + } + break; + } + default: { + static volatile int isInit = 0; + static sqlite3_mutex staticMutexes[] = { + { OS2_MUTEX_INITIALIZER, }, + { OS2_MUTEX_INITIALIZER, }, + { OS2_MUTEX_INITIALIZER, }, + { OS2_MUTEX_INITIALIZER, }, + { OS2_MUTEX_INITIALIZER, }, + }; + if ( !isInit ){ + APIRET rc; + PTIB ptib; + PPIB ppib; + HMTX mutex; + char name[32]; + DosGetInfoBlocks( &ptib, &ppib ); + sqlite3_snprintf( sizeof(name), name, "\\SEM32\\SQLITE%04x", + ppib->pib_ulpid ); + while( !isInit ){ + mutex = 0; + rc = DosCreateMutexSem( name, &mutex, 0, FALSE); + if( rc == NO_ERROR ){ + int i; + if( !isInit ){ + for( i = 0; i < sizeof(staticMutexes)/sizeof(staticMutexes[0]); i++ ){ + DosCreateMutexSem( 0, &staticMutexes[i].mutex, 0, FALSE ); + } + isInit = 1; + } + DosCloseMutexSem( mutex ); + }else if( rc == ERROR_DUPLICATE_NAME ){ + DosSleep( 1 ); + }else{ + return p; + } + } + } + assert( iType-2 >= 0 ); + assert( iType-2 < sizeof(staticMutexes)/sizeof(staticMutexes[0]) ); + p = &staticMutexes[iType-2]; + p->id = iType; + break; + } + } + return p; +} + + +/* +** This routine deallocates a previously allocated mutex. +** SQLite is careful to deallocate every mutex that it allocates. +*/ +void sqlite3_mutex_free(sqlite3_mutex *p){ + assert( p ); + assert( p->nRef==0 ); + assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); + DosCloseMutexSem( p->mutex ); + sqlite3_free( p ); +} + +/* +** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt +** to enter a mutex. If another thread is already within the mutex, +** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return +** SQLITE_BUSY. The sqlite3_mutex_try() interface returns SQLITE_OK +** upon successful entry. Mutexes created using SQLITE_MUTEX_RECURSIVE can +** be entered multiple times by the same thread. 
In such cases the, +** mutex must be exited an equal number of times before another thread +** can enter. If the same thread tries to enter any other kind of mutex +** more than once, the behavior is undefined. +*/ +void sqlite3_mutex_enter(sqlite3_mutex *p){ + TID tid; + PID holder1; + ULONG holder2; + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + DosRequestMutexSem(p->mutex, SEM_INDEFINITE_WAIT); + DosQueryMutexSem(p->mutex, &holder1, &tid, &holder2); + p->owner = tid; + p->nRef++; +} +int sqlite3_mutex_try(sqlite3_mutex *p){ + int rc; + TID tid; + PID holder1; + ULONG holder2; + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + if( DosRequestMutexSem(p->mutex, SEM_IMMEDIATE_RETURN) == NO_ERROR) { + DosQueryMutexSem(p->mutex, &holder1, &tid, &holder2); + p->owner = tid; + p->nRef++; + rc = SQLITE_OK; + } else { + rc = SQLITE_BUSY; + } + + return rc; +} + +/* +** The sqlite3_mutex_leave() routine exits a mutex that was +** previously entered by the same thread. The behavior +** is undefined if the mutex is not currently entered or +** is not currently allocated. SQLite will never do either. +*/ +void sqlite3_mutex_leave(sqlite3_mutex *p){ + TID tid; + PID holder1; + ULONG holder2; + assert( p->nRef>0 ); + DosQueryMutexSem(p->mutex, &holder1, &tid, &holder2); + assert( p->owner==tid ); + p->nRef--; + assert( p->nRef==0 || p->id==SQLITE_MUTEX_RECURSIVE ); + DosReleaseMutexSem(p->mutex); +} + +/* +** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are +** intended for use inside assert() statements. +*/ +int sqlite3_mutex_held(sqlite3_mutex *p){ + TID tid; + PID pid; + ULONG ulCount; + PTIB ptib; + if( p!=0 ) { + DosQueryMutexSem(p->mutex, &pid, &tid, &ulCount); + } else { + DosGetInfoBlocks(&ptib, NULL); + tid = ptib->tib_ptib2->tib2_ultid; + } + return p==0 || (p->nRef!=0 && p->owner==tid); +} +int sqlite3_mutex_notheld(sqlite3_mutex *p){ + TID tid; + PID pid; + ULONG ulCount; + PTIB ptib; + if( p!= 0 ) { + DosQueryMutexSem(p->mutex, &pid, &tid, &ulCount); + } else { + DosGetInfoBlocks(&ptib, NULL); + tid = ptib->tib_ptib2->tib2_ultid; + } + return p==0 || p->nRef==0 || p->owner!=tid; +} +#endif /* SQLITE_MUTEX_OS2 */ Added: external/sqlite-source-3.5.7.x/mutex_unix.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mutex_unix.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,298 @@ +/* +** 2007 August 28 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement mutexes for pthreads +** +** $Id: mutex_unix.c,v 1.5 2007/11/28 14:04:57 drh Exp $ +*/ +#include "sqliteInt.h" + +/* +** The code in this file is only used if we are compiling threadsafe +** under unix with pthreads. +** +** Note that this implementation requires a version of pthreads that +** supports recursive mutexes. +*/ +#ifdef SQLITE_MUTEX_PTHREADS + +#include + + +/* +** Each recursive mutex is an instance of the following structure. 
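/* [Editor's note: not part of this commit.  A hypothetical caller of the
** mutex API documented above, showing the non-blocking sqlite3_mutex_try()
** falling back to the blocking sqlite3_mutex_enter().  It assumes only the
** public declarations in sqlite3.h; the function name is illustrative.] */
#include "sqlite3.h"

static void sketch_locked_work(sqlite3_mutex *m){
  if( sqlite3_mutex_try(m)==SQLITE_OK ){
    /* Fast path: the mutex was free and has now been entered without blocking. */
  }else{
    sqlite3_mutex_enter(m);   /* Blocks until the current holder leaves. */
  }
  /* ... work protected by the mutex happens here ... */
  sqlite3_mutex_leave(m);
}
/* [Note that some implementations (see the Win32 file later in this commit)
** may have sqlite3_mutex_try() always return SQLITE_BUSY, so callers must
** always be prepared to fall back to sqlite3_mutex_enter().] */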
+*/ +struct sqlite3_mutex { + pthread_mutex_t mutex; /* Mutex controlling the lock */ + int id; /* Mutex type */ + int nRef; /* Number of entrances */ + pthread_t owner; /* Thread that is within this mutex */ +#ifdef SQLITE_DEBUG + int trace; /* True to trace changes */ +#endif +}; + +/* +** The sqlite3_mutex_alloc() routine allocates a new +** mutex and returns a pointer to it. If it returns NULL +** that means that a mutex could not be allocated. SQLite +** will unwind its stack and return an error. The argument +** to sqlite3_mutex_alloc() is one of these integer constants: +** +**
+**     SQLITE_MUTEX_FAST
+**     SQLITE_MUTEX_RECURSIVE
+**     SQLITE_MUTEX_STATIC_MASTER
+**     SQLITE_MUTEX_STATIC_MEM
+**     SQLITE_MUTEX_STATIC_MEM2
+**     SQLITE_MUTEX_STATIC_PRNG
+**     SQLITE_MUTEX_STATIC_LRU
                +** +** The first two constants cause sqlite3_mutex_alloc() to create +** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE +** is used but not necessarily so when SQLITE_MUTEX_FAST is used. +** The mutex implementation does not need to make a distinction +** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does +** not want to. But SQLite will only request a recursive mutex in +** cases where it really needs one. If a faster non-recursive mutex +** implementation is available on the host platform, the mutex subsystem +** might return such a mutex in response to SQLITE_MUTEX_FAST. +** +** The other allowed parameters to sqlite3_mutex_alloc() each return +** a pointer to a static preexisting mutex. Three static mutexes are +** used by the current version of SQLite. Future versions of SQLite +** may add additional static mutexes. Static mutexes are for internal +** use by SQLite only. Applications that use SQLite mutexes should +** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or +** SQLITE_MUTEX_RECURSIVE. +** +** Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST +** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() +** returns a different mutex on every call. But for the static +** mutex types, the same mutex is returned on every call that has +** the same type number. +*/ +sqlite3_mutex *sqlite3_mutex_alloc(int iType){ + static sqlite3_mutex staticMutexes[] = { + { PTHREAD_MUTEX_INITIALIZER, }, + { PTHREAD_MUTEX_INITIALIZER, }, + { PTHREAD_MUTEX_INITIALIZER, }, + { PTHREAD_MUTEX_INITIALIZER, }, + { PTHREAD_MUTEX_INITIALIZER, }, + }; + sqlite3_mutex *p; + switch( iType ){ + case SQLITE_MUTEX_RECURSIVE: { + p = sqlite3MallocZero( sizeof(*p) ); + if( p ){ +#ifdef SQLITE_HOMEGROWN_RECURSIVE_MUTEX + /* If recursive mutexes are not available, we will have to + ** build our own. See below. */ + pthread_mutex_init(&p->mutex, 0); +#else + /* Use a recursive mutex if it is available */ + pthread_mutexattr_t recursiveAttr; + pthread_mutexattr_init(&recursiveAttr); + pthread_mutexattr_settype(&recursiveAttr, PTHREAD_MUTEX_RECURSIVE); + pthread_mutex_init(&p->mutex, &recursiveAttr); + pthread_mutexattr_destroy(&recursiveAttr); +#endif + p->id = iType; + } + break; + } + case SQLITE_MUTEX_FAST: { + p = sqlite3MallocZero( sizeof(*p) ); + if( p ){ + p->id = iType; + pthread_mutex_init(&p->mutex, 0); + } + break; + } + default: { + assert( iType-2 >= 0 ); + assert( iType-2 < sizeof(staticMutexes)/sizeof(staticMutexes[0]) ); + p = &staticMutexes[iType-2]; + p->id = iType; + break; + } + } + return p; +} + + +/* +** This routine deallocates a previously +** allocated mutex. SQLite is careful to deallocate every +** mutex that it allocates. +*/ +void sqlite3_mutex_free(sqlite3_mutex *p){ + assert( p ); + assert( p->nRef==0 ); + assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); + pthread_mutex_destroy(&p->mutex); + sqlite3_free(p); +} + +/* +** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt +** to enter a mutex. If another thread is already within the mutex, +** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return +** SQLITE_BUSY. The sqlite3_mutex_try() interface returns SQLITE_OK +** upon successful entry. Mutexes created using SQLITE_MUTEX_RECURSIVE can +** be entered multiple times by the same thread. In such cases the, +** mutex must be exited an equal number of times before another thread +** can enter. 
If the same thread tries to enter any other kind of mutex +** more than once, the behavior is undefined. +*/ +void sqlite3_mutex_enter(sqlite3_mutex *p){ + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + +#ifdef SQLITE_HOMEGROWN_RECURSIVE_MUTEX + /* If recursive mutexes are not available, then we have to grow + ** our own. This implementation assumes that pthread_equal() + ** is atomic - that it cannot be deceived into thinking self + ** and p->owner are equal if p->owner changes between two values + ** that are not equal to self while the comparison is taking place. + ** This implementation also assumes a coherent cache - that + ** separate processes cannot read different values from the same + ** address at the same time. If either of these two conditions + ** are not met, then the mutexes will fail and problems will result. + */ + { + pthread_t self = pthread_self(); + if( p->nRef>0 && pthread_equal(p->owner, self) ){ + p->nRef++; + }else{ + pthread_mutex_lock(&p->mutex); + assert( p->nRef==0 ); + p->owner = self; + p->nRef = 1; + } + } +#else + /* Use the built-in recursive mutexes if they are available. + */ + pthread_mutex_lock(&p->mutex); + p->owner = pthread_self(); + p->nRef++; +#endif + +#ifdef SQLITE_DEBUG + if( p->trace ){ + printf("enter mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); + } +#endif +} +int sqlite3_mutex_try(sqlite3_mutex *p){ + int rc; + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + +#ifdef SQLITE_HOMEGROWN_RECURSIVE_MUTEX + /* If recursive mutexes are not available, then we have to grow + ** our own. This implementation assumes that pthread_equal() + ** is atomic - that it cannot be deceived into thinking self + ** and p->owner are equal if p->owner changes between two values + ** that are not equal to self while the comparison is taking place. + ** This implementation also assumes a coherent cache - that + ** separate processes cannot read different values from the same + ** address at the same time. If either of these two conditions + ** are not met, then the mutexes will fail and problems will result. + */ + { + pthread_t self = pthread_self(); + if( p->nRef>0 && pthread_equal(p->owner, self) ){ + p->nRef++; + rc = SQLITE_OK; + }else if( pthread_mutex_lock(&p->mutex)==0 ){ + assert( p->nRef==0 ); + p->owner = self; + p->nRef = 1; + rc = SQLITE_OK; + }else{ + rc = SQLITE_BUSY; + } + } +#else + /* Use the built-in recursive mutexes if they are available. + */ + if( pthread_mutex_trylock(&p->mutex)==0 ){ + p->owner = pthread_self(); + p->nRef++; + rc = SQLITE_OK; + }else{ + rc = SQLITE_BUSY; + } +#endif + +#ifdef SQLITE_DEBUG + if( rc==SQLITE_OK && p->trace ){ + printf("enter mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); + } +#endif + return rc; +} + +/* +** The sqlite3_mutex_leave() routine exits a mutex that was +** previously entered by the same thread. The behavior +** is undefined if the mutex is not currently entered or +** is not currently allocated. SQLite will never do either. 
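/* [Editor's note: not part of this commit.  A minimal standalone sketch of
** the "homegrown" recursive mutex technique described above -- an ordinary
** non-recursive pthread mutex plus an owner thread id and an entry count.
** Type and function names are illustrative; this is not the SQLite
** implementation itself, and it relies on the same pthread_equal() atomicity
** assumption stated in the comments above.] */
#include <pthread.h>

typedef struct {
  pthread_mutex_t mu;     /* plain, non-recursive mutex */
  pthread_t owner;        /* thread currently inside, valid when nRef>0 */
  int nRef;               /* number of nested entries by owner */
} rec_mutex_sketch;

static void rec_enter(rec_mutex_sketch *p){
  pthread_t self = pthread_self();
  if( p->nRef>0 && pthread_equal(p->owner, self) ){
    p->nRef++;                      /* re-entry by the owning thread */
  }else{
    pthread_mutex_lock(&p->mu);     /* first entry: take the real lock */
    p->owner = self;
    p->nRef = 1;
  }
}

static void rec_leave(rec_mutex_sketch *p){
  if( --p->nRef==0 ){
    pthread_mutex_unlock(&p->mu);   /* last matching leave releases the lock */
  }
}
/* [End of editor's sketch.] */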
+*/ +void sqlite3_mutex_leave(sqlite3_mutex *p){ + assert( p ); + assert( sqlite3_mutex_held(p) ); + p->nRef--; + assert( p->nRef==0 || p->id==SQLITE_MUTEX_RECURSIVE ); + +#ifdef SQLITE_HOMEGROWN_RECURSIVE_MUTEX + if( p->nRef==0 ){ + pthread_mutex_unlock(&p->mutex); + } +#else + pthread_mutex_unlock(&p->mutex); +#endif + +#ifdef SQLITE_DEBUG + if( p->trace ){ + printf("leave mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); + } +#endif +} + +/* +** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are +** intended for use only inside assert() statements. On some platforms, +** there might be race conditions that can cause these routines to +** deliver incorrect results. In particular, if pthread_equal() is +** not an atomic operation, then these routines might delivery +** incorrect results. On most platforms, pthread_equal() is a +** comparison of two integers and is therefore atomic. But we are +** told that HPUX is not such a platform. If so, then these routines +** will not always work correctly on HPUX. +** +** On those platforms where pthread_equal() is not atomic, SQLite +** should be compiled without -DSQLITE_DEBUG and with -DNDEBUG to +** make sure no assert() statements are evaluated and hence these +** routines are never called. +*/ +#ifndef NDEBUG +int sqlite3_mutex_held(sqlite3_mutex *p){ + return p==0 || (p->nRef!=0 && pthread_equal(p->owner, pthread_self())); +} +int sqlite3_mutex_notheld(sqlite3_mutex *p){ + return p==0 || p->nRef==0 || pthread_equal(p->owner, pthread_self())==0; +} +#endif +#endif /* SQLITE_MUTEX_PTHREAD */ Added: external/sqlite-source-3.5.7.x/mutex_w32.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/mutex_w32.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,219 @@ +/* +** 2007 August 14 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the C functions that implement mutexes for win32 +** +** $Id: mutex_w32.c,v 1.5 2007/10/05 15:08:01 drh Exp $ +*/ +#include "sqliteInt.h" + +/* +** The code in this file is only used if we are compiling multithreaded +** on a win32 system. +*/ +#ifdef SQLITE_MUTEX_W32 + +/* +** Each recursive mutex is an instance of the following structure. +*/ +struct sqlite3_mutex { + CRITICAL_SECTION mutex; /* Mutex controlling the lock */ + int id; /* Mutex type */ + int nRef; /* Number of enterances */ + DWORD owner; /* Thread holding this mutex */ +}; + +/* +** Return true (non-zero) if we are running under WinNT, Win2K, WinXP, +** or WinCE. Return false (zero) for Win95, Win98, or WinME. +** +** Here is an interesting observation: Win95, Win98, and WinME lack +** the LockFileEx() API. But we can still statically link against that +** API as long as we don't call it win running Win95/98/ME. A call to +** this routine is used to determine if the host is Win95/98/ME or +** WinNT/2K/XP so that we will know whether or not we can safely call +** the LockFileEx() API. +*/ +#if OS_WINCE +# define mutexIsNT() (1) +#else + static int mutexIsNT(void){ + static int osType = 0; + if( osType==0 ){ + OSVERSIONINFO sInfo; + sInfo.dwOSVersionInfoSize = sizeof(sInfo); + GetVersionEx(&sInfo); + osType = sInfo.dwPlatformId==VER_PLATFORM_WIN32_NT ? 
2 : 1; + } + return osType==2; + } +#endif /* OS_WINCE */ + + +/* +** The sqlite3_mutex_alloc() routine allocates a new +** mutex and returns a pointer to it. If it returns NULL +** that means that a mutex could not be allocated. SQLite +** will unwind its stack and return an error. The argument +** to sqlite3_mutex_alloc() is one of these integer constants: +** +**
+**     SQLITE_MUTEX_FAST               0
+**     SQLITE_MUTEX_RECURSIVE          1
+**     SQLITE_MUTEX_STATIC_MASTER      2
+**     SQLITE_MUTEX_STATIC_MEM         3
+**     SQLITE_MUTEX_STATIC_PRNG        4
                +** +** The first two constants cause sqlite3_mutex_alloc() to create +** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE +** is used but not necessarily so when SQLITE_MUTEX_FAST is used. +** The mutex implementation does not need to make a distinction +** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does +** not want to. But SQLite will only request a recursive mutex in +** cases where it really needs one. If a faster non-recursive mutex +** implementation is available on the host platform, the mutex subsystem +** might return such a mutex in response to SQLITE_MUTEX_FAST. +** +** The other allowed parameters to sqlite3_mutex_alloc() each return +** a pointer to a static preexisting mutex. Three static mutexes are +** used by the current version of SQLite. Future versions of SQLite +** may add additional static mutexes. Static mutexes are for internal +** use by SQLite only. Applications that use SQLite mutexes should +** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or +** SQLITE_MUTEX_RECURSIVE. +** +** Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST +** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() +** returns a different mutex on every call. But for the static +** mutex types, the same mutex is returned on every call that has +** the same type number. +*/ +sqlite3_mutex *sqlite3_mutex_alloc(int iType){ + sqlite3_mutex *p; + + switch( iType ){ + case SQLITE_MUTEX_FAST: + case SQLITE_MUTEX_RECURSIVE: { + p = sqlite3MallocZero( sizeof(*p) ); + if( p ){ + p->id = iType; + InitializeCriticalSection(&p->mutex); + } + break; + } + default: { + static sqlite3_mutex staticMutexes[5]; + static int isInit = 0; + while( !isInit ){ + static long lock = 0; + if( InterlockedIncrement(&lock)==1 ){ + int i; + for(i=0; i= 0 ); + assert( iType-2 < sizeof(staticMutexes)/sizeof(staticMutexes[0]) ); + p = &staticMutexes[iType-2]; + p->id = iType; + break; + } + } + return p; +} + + +/* +** This routine deallocates a previously +** allocated mutex. SQLite is careful to deallocate every +** mutex that it allocates. +*/ +void sqlite3_mutex_free(sqlite3_mutex *p){ + assert( p ); + assert( p->nRef==0 ); + assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); + DeleteCriticalSection(&p->mutex); + sqlite3_free(p); +} + +/* +** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt +** to enter a mutex. If another thread is already within the mutex, +** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return +** SQLITE_BUSY. The sqlite3_mutex_try() interface returns SQLITE_OK +** upon successful entry. Mutexes created using SQLITE_MUTEX_RECURSIVE can +** be entered multiple times by the same thread. In such cases the, +** mutex must be exited an equal number of times before another thread +** can enter. If the same thread tries to enter any other kind of mutex +** more than once, the behavior is undefined. +*/ +void sqlite3_mutex_enter(sqlite3_mutex *p){ + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + EnterCriticalSection(&p->mutex); + p->owner = GetCurrentThreadId(); + p->nRef++; +} +int sqlite3_mutex_try(sqlite3_mutex *p){ + int rc = SQLITE_BUSY; + assert( p ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || sqlite3_mutex_notheld(p) ); + /* + ** The sqlite3_mutex_try() routine is very rarely used, and when it + ** is used it is merely an optimization. So it is OK for it to always + ** fail. 
+ ** + ** The TryEnterCriticalSection() interface is only available on WinNT. + ** And some windows compilers complain if you try to use it without + ** first doing some #defines that prevent SQLite from building on Win98. + ** For that reason, we will omit this optimization for now. See + ** ticket #2685. + */ +#if 0 + if( mutexIsNT() && TryEnterCriticalSection(&p->mutex) ){ + p->owner = GetCurrentThreadId(); + p->nRef++; + rc = SQLITE_OK; + } +#endif + return rc; +} + +/* +** The sqlite3_mutex_leave() routine exits a mutex that was +** previously entered by the same thread. The behavior +** is undefined if the mutex is not currently entered or +** is not currently allocated. SQLite will never do either. +*/ +void sqlite3_mutex_leave(sqlite3_mutex *p){ + assert( p->nRef>0 ); + assert( p->owner==GetCurrentThreadId() ); + p->nRef--; + assert( p->nRef==0 || p->id==SQLITE_MUTEX_RECURSIVE ); + LeaveCriticalSection(&p->mutex); +} + +/* +** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are +** intended for use only inside assert() statements. +*/ +int sqlite3_mutex_held(sqlite3_mutex *p){ + return p==0 || (p->nRef!=0 && p->owner==GetCurrentThreadId()); +} +int sqlite3_mutex_notheld(sqlite3_mutex *p){ + return p==0 || p->nRef==0 || p->owner!=GetCurrentThreadId(); +} +#endif /* SQLITE_MUTEX_W32 */ Added: external/sqlite-source-3.5.7.x/opcodes.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/opcodes.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,151 @@ +/* Automatically generated. Do not edit */ +/* See the mkopcodec.awk script for details. */ +#if !defined(SQLITE_OMIT_EXPLAIN) || !defined(NDEBUG) || defined(VDBE_PROFILE) || defined(SQLITE_DEBUG) +const char *sqlite3OpcodeName(int i){ + static const char *const azName[] = { "?", + /* 1 */ "VNext", + /* 2 */ "Column", + /* 3 */ "SetCookie", + /* 4 */ "Sequence", + /* 5 */ "MoveGt", + /* 6 */ "RowKey", + /* 7 */ "SCopy", + /* 8 */ "OpenWrite", + /* 9 */ "If", + /* 10 */ "VRowid", + /* 11 */ "CollSeq", + /* 12 */ "OpenRead", + /* 13 */ "Expire", + /* 14 */ "AutoCommit", + /* 15 */ "IntegrityCk", + /* 16 */ "Not", + /* 17 */ "Sort", + /* 18 */ "Copy", + /* 19 */ "Trace", + /* 20 */ "Function", + /* 21 */ "IfNeg", + /* 22 */ "Noop", + /* 23 */ "Return", + /* 24 */ "NewRowid", + /* 25 */ "Variable", + /* 26 */ "String", + /* 27 */ "RealAffinity", + /* 28 */ "VRename", + /* 29 */ "ParseSchema", + /* 30 */ "VOpen", + /* 31 */ "Close", + /* 32 */ "CreateIndex", + /* 33 */ "IsUnique", + /* 34 */ "NotFound", + /* 35 */ "Int64", + /* 36 */ "MustBeInt", + /* 37 */ "Halt", + /* 38 */ "Rowid", + /* 39 */ "IdxLT", + /* 40 */ "AddImm", + /* 41 */ "Statement", + /* 42 */ "RowData", + /* 43 */ "MemMax", + /* 44 */ "NotExists", + /* 45 */ "Gosub", + /* 46 */ "Integer", + /* 47 */ "Prev", + /* 48 */ "VColumn", + /* 49 */ "CreateTable", + /* 50 */ "Last", + /* 51 */ "IncrVacuum", + /* 52 */ "IdxRowid", + /* 53 */ "ResetCount", + /* 54 */ "FifoWrite", + /* 55 */ "ContextPush", + /* 56 */ "DropTrigger", + /* 57 */ "DropIndex", + /* 58 */ "IdxGE", + /* 59 */ "IdxDelete", + /* 60 */ "Or", + /* 61 */ "And", + /* 62 */ "Vacuum", + /* 63 */ "MoveLe", + /* 64 */ "IfNot", + /* 65 */ "IsNull", + /* 66 */ "NotNull", + /* 67 */ "Ne", + /* 68 */ "Eq", + /* 69 */ "Gt", + /* 70 */ "Le", + /* 71 */ "Lt", + /* 72 */ "Ge", + /* 73 */ "DropTable", + /* 74 */ "BitAnd", + /* 75 */ "BitOr", + /* 76 */ "ShiftLeft", + /* 77 */ "ShiftRight", + /* 78 */ "Add", + /* 79 */ "Subtract", + /* 80 */ "Multiply", + 
/* 81 */ "Divide", + /* 82 */ "Remainder", + /* 83 */ "Concat", + /* 84 */ "MakeRecord", + /* 85 */ "ResultRow", + /* 86 */ "Delete", + /* 87 */ "BitNot", + /* 88 */ "String8", + /* 89 */ "AggFinal", + /* 90 */ "Goto", + /* 91 */ "TableLock", + /* 92 */ "FifoRead", + /* 93 */ "Clear", + /* 94 */ "MoveLt", + /* 95 */ "VerifyCookie", + /* 96 */ "AggStep", + /* 97 */ "SetNumColumns", + /* 98 */ "Transaction", + /* 99 */ "VFilter", + /* 100 */ "VDestroy", + /* 101 */ "ContextPop", + /* 102 */ "Next", + /* 103 */ "IdxInsert", + /* 104 */ "Insert", + /* 105 */ "Destroy", + /* 106 */ "ReadCookie", + /* 107 */ "ForceInt", + /* 108 */ "LoadAnalysis", + /* 109 */ "Explain", + /* 110 */ "OpenPseudo", + /* 111 */ "OpenEphemeral", + /* 112 */ "Null", + /* 113 */ "Move", + /* 114 */ "Blob", + /* 115 */ "Rewind", + /* 116 */ "MoveGe", + /* 117 */ "VBegin", + /* 118 */ "VUpdate", + /* 119 */ "IfZero", + /* 120 */ "VCreate", + /* 121 */ "Found", + /* 122 */ "IfPos", + /* 123 */ "NullRow", + /* 124 */ "NotUsed_124", + /* 125 */ "Real", + /* 126 */ "NotUsed_126", + /* 127 */ "NotUsed_127", + /* 128 */ "NotUsed_128", + /* 129 */ "NotUsed_129", + /* 130 */ "NotUsed_130", + /* 131 */ "NotUsed_131", + /* 132 */ "NotUsed_132", + /* 133 */ "NotUsed_133", + /* 134 */ "NotUsed_134", + /* 135 */ "NotUsed_135", + /* 136 */ "NotUsed_136", + /* 137 */ "NotUsed_137", + /* 138 */ "ToText", + /* 139 */ "ToBlob", + /* 140 */ "ToNumeric", + /* 141 */ "ToInt", + /* 142 */ "ToReal", + }; + return azName[i]; +} +#endif Added: external/sqlite-source-3.5.7.x/opcodes.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/opcodes.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,177 @@ +/* Automatically generated. Do not edit */ +/* See the mkopcodeh.awk script for details */ +#define OP_VNext 1 +#define OP_Column 2 +#define OP_SetCookie 3 +#define OP_Real 125 /* same as TK_FLOAT */ +#define OP_Sequence 4 +#define OP_MoveGt 5 +#define OP_Ge 72 /* same as TK_GE */ +#define OP_RowKey 6 +#define OP_SCopy 7 +#define OP_Eq 68 /* same as TK_EQ */ +#define OP_OpenWrite 8 +#define OP_NotNull 66 /* same as TK_NOTNULL */ +#define OP_If 9 +#define OP_ToInt 141 /* same as TK_TO_INT */ +#define OP_String8 88 /* same as TK_STRING */ +#define OP_VRowid 10 +#define OP_CollSeq 11 +#define OP_OpenRead 12 +#define OP_Expire 13 +#define OP_AutoCommit 14 +#define OP_Gt 69 /* same as TK_GT */ +#define OP_IntegrityCk 15 +#define OP_Sort 17 +#define OP_Copy 18 +#define OP_Trace 19 +#define OP_Function 20 +#define OP_IfNeg 21 +#define OP_And 61 /* same as TK_AND */ +#define OP_Subtract 79 /* same as TK_MINUS */ +#define OP_Noop 22 +#define OP_Return 23 +#define OP_Remainder 82 /* same as TK_REM */ +#define OP_NewRowid 24 +#define OP_Multiply 80 /* same as TK_STAR */ +#define OP_Variable 25 +#define OP_String 26 +#define OP_RealAffinity 27 +#define OP_VRename 28 +#define OP_ParseSchema 29 +#define OP_VOpen 30 +#define OP_Close 31 +#define OP_CreateIndex 32 +#define OP_IsUnique 33 +#define OP_NotFound 34 +#define OP_Int64 35 +#define OP_MustBeInt 36 +#define OP_Halt 37 +#define OP_Rowid 38 +#define OP_IdxLT 39 +#define OP_AddImm 40 +#define OP_Statement 41 +#define OP_RowData 42 +#define OP_MemMax 43 +#define OP_Or 60 /* same as TK_OR */ +#define OP_NotExists 44 +#define OP_Gosub 45 +#define OP_Divide 81 /* same as TK_SLASH */ +#define OP_Integer 46 +#define OP_ToNumeric 140 /* same as TK_TO_NUMERIC*/ +#define OP_Prev 47 +#define OP_Concat 83 /* same as TK_CONCAT */ +#define OP_BitAnd 74 /* 
same as TK_BITAND */ +#define OP_VColumn 48 +#define OP_CreateTable 49 +#define OP_Last 50 +#define OP_IsNull 65 /* same as TK_ISNULL */ +#define OP_IncrVacuum 51 +#define OP_IdxRowid 52 +#define OP_ShiftRight 77 /* same as TK_RSHIFT */ +#define OP_ResetCount 53 +#define OP_FifoWrite 54 +#define OP_ContextPush 55 +#define OP_DropTrigger 56 +#define OP_DropIndex 57 +#define OP_IdxGE 58 +#define OP_IdxDelete 59 +#define OP_Vacuum 62 +#define OP_MoveLe 63 +#define OP_IfNot 64 +#define OP_DropTable 73 +#define OP_MakeRecord 84 +#define OP_ToBlob 139 /* same as TK_TO_BLOB */ +#define OP_ResultRow 85 +#define OP_Delete 86 +#define OP_AggFinal 89 +#define OP_ShiftLeft 76 /* same as TK_LSHIFT */ +#define OP_Goto 90 +#define OP_TableLock 91 +#define OP_FifoRead 92 +#define OP_Clear 93 +#define OP_MoveLt 94 +#define OP_Le 70 /* same as TK_LE */ +#define OP_VerifyCookie 95 +#define OP_AggStep 96 +#define OP_ToText 138 /* same as TK_TO_TEXT */ +#define OP_Not 16 /* same as TK_NOT */ +#define OP_ToReal 142 /* same as TK_TO_REAL */ +#define OP_SetNumColumns 97 +#define OP_Transaction 98 +#define OP_VFilter 99 +#define OP_Ne 67 /* same as TK_NE */ +#define OP_VDestroy 100 +#define OP_ContextPop 101 +#define OP_BitOr 75 /* same as TK_BITOR */ +#define OP_Next 102 +#define OP_IdxInsert 103 +#define OP_Lt 71 /* same as TK_LT */ +#define OP_Insert 104 +#define OP_Destroy 105 +#define OP_ReadCookie 106 +#define OP_ForceInt 107 +#define OP_LoadAnalysis 108 +#define OP_Explain 109 +#define OP_OpenPseudo 110 +#define OP_OpenEphemeral 111 +#define OP_Null 112 +#define OP_Move 113 +#define OP_Blob 114 +#define OP_Add 78 /* same as TK_PLUS */ +#define OP_Rewind 115 +#define OP_MoveGe 116 +#define OP_VBegin 117 +#define OP_VUpdate 118 +#define OP_IfZero 119 +#define OP_BitNot 87 /* same as TK_BITNOT */ +#define OP_VCreate 120 +#define OP_Found 121 +#define OP_IfPos 122 +#define OP_NullRow 123 + +/* The following opcode values are never used */ +#define OP_NotUsed_124 124 +#define OP_NotUsed_126 126 +#define OP_NotUsed_127 127 +#define OP_NotUsed_128 128 +#define OP_NotUsed_129 129 +#define OP_NotUsed_130 130 +#define OP_NotUsed_131 131 +#define OP_NotUsed_132 132 +#define OP_NotUsed_133 133 +#define OP_NotUsed_134 134 +#define OP_NotUsed_135 135 +#define OP_NotUsed_136 136 +#define OP_NotUsed_137 137 + + +/* Properties such as "out2" or "jump" that are specified in +** comments following the "case" for each opcode in the vdbe.c +** are encoded into bitvectors as follows: +*/ +#define OPFLG_JUMP 0x0001 /* jump: P2 holds jmp target */ +#define OPFLG_OUT2_PRERELEASE 0x0002 /* out2-prerelease: */ +#define OPFLG_IN1 0x0004 /* in1: P1 is an input */ +#define OPFLG_IN2 0x0008 /* in2: P2 is an input */ +#define OPFLG_IN3 0x0010 /* in3: P3 is an input */ +#define OPFLG_OUT3 0x0020 /* out3: P3 is an output */ +#define OPFLG_INITIALIZER {\ +/* 0 */ 0x00, 0x01, 0x00, 0x10, 0x02, 0x11, 0x00, 0x00,\ +/* 8 */ 0x00, 0x05, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,\ +/* 16 */ 0x04, 0x01, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00,\ +/* 24 */ 0x02, 0x02, 0x02, 0x04, 0x00, 0x00, 0x00, 0x00,\ +/* 32 */ 0x02, 0x11, 0x11, 0x02, 0x05, 0x00, 0x02, 0x11,\ +/* 40 */ 0x04, 0x00, 0x00, 0x0c, 0x11, 0x01, 0x02, 0x01,\ +/* 48 */ 0x00, 0x02, 0x01, 0x01, 0x02, 0x00, 0x04, 0x00,\ +/* 56 */ 0x00, 0x00, 0x11, 0x08, 0x2c, 0x2c, 0x00, 0x11,\ +/* 64 */ 0x05, 0x05, 0x05, 0x15, 0x15, 0x15, 0x15, 0x15,\ +/* 72 */ 0x15, 0x00, 0x2c, 0x2c, 0x2c, 0x2c, 0x2c, 0x2c,\ +/* 80 */ 0x2c, 0x2c, 0x2c, 0x2c, 0x00, 0x00, 0x00, 0x04,\ +/* 88 */ 0x02, 0x00, 0x01, 0x00, 0x01, 0x00, 0x11, 
0x00,\ +/* 96 */ 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x01, 0x08,\ +/* 104 */ 0x00, 0x02, 0x02, 0x05, 0x00, 0x00, 0x00, 0x00,\ +/* 112 */ 0x02, 0x00, 0x02, 0x01, 0x11, 0x00, 0x00, 0x05,\ +/* 120 */ 0x00, 0x11, 0x05, 0x00, 0x00, 0x02, 0x00, 0x00,\ +/* 128 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,\ +/* 136 */ 0x00, 0x00, 0x04, 0x04, 0x04, 0x04, 0x04,} Added: external/sqlite-source-3.5.7.x/os.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/os.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,267 @@ +/* +** 2005 November 29 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains OS interface code that is common to all +** architectures. +*/ +#define _SQLITE_OS_C_ 1 +#include "sqliteInt.h" +#undef _SQLITE_OS_C_ + +/* +** The default SQLite sqlite3_vfs implementations do not allocate +** memory (actually, os_unix.c allocates a small amount of memory +** from within OsOpen()), but some third-party implementations may. +** So we test the effects of a malloc() failing and the sqlite3OsXXX() +** function returning SQLITE_IOERR_NOMEM using the DO_OS_MALLOC_TEST macro. +** +** The following functions are instrumented for malloc() failure +** testing: +** +** sqlite3OsOpen() +** sqlite3OsRead() +** sqlite3OsWrite() +** sqlite3OsSync() +** sqlite3OsLock() +** +*/ +#ifdef SQLITE_TEST + #define DO_OS_MALLOC_TEST if (1) { \ + void *pTstAlloc = sqlite3_malloc(10); \ + if (!pTstAlloc) return SQLITE_IOERR_NOMEM; \ + sqlite3_free(pTstAlloc); \ + } +#else + #define DO_OS_MALLOC_TEST +#endif + +/* +** The following routines are convenience wrappers around methods +** of the sqlite3_file object. This is mostly just syntactic sugar. All +** of this would be completely automatic if SQLite were coded using +** C++ instead of plain old C. +*/ +int sqlite3OsClose(sqlite3_file *pId){ + int rc = SQLITE_OK; + if( pId->pMethods ){ + rc = pId->pMethods->xClose(pId); + pId->pMethods = 0; + } + return rc; +} +int sqlite3OsRead(sqlite3_file *id, void *pBuf, int amt, i64 offset){ + DO_OS_MALLOC_TEST; + return id->pMethods->xRead(id, pBuf, amt, offset); +} +int sqlite3OsWrite(sqlite3_file *id, const void *pBuf, int amt, i64 offset){ + DO_OS_MALLOC_TEST; + return id->pMethods->xWrite(id, pBuf, amt, offset); +} +int sqlite3OsTruncate(sqlite3_file *id, i64 size){ + return id->pMethods->xTruncate(id, size); +} +int sqlite3OsSync(sqlite3_file *id, int flags){ + DO_OS_MALLOC_TEST; + return id->pMethods->xSync(id, flags); +} +int sqlite3OsFileSize(sqlite3_file *id, i64 *pSize){ + return id->pMethods->xFileSize(id, pSize); +} +int sqlite3OsLock(sqlite3_file *id, int lockType){ + DO_OS_MALLOC_TEST; + return id->pMethods->xLock(id, lockType); +} +int sqlite3OsUnlock(sqlite3_file *id, int lockType){ + return id->pMethods->xUnlock(id, lockType); +} +int sqlite3OsCheckReservedLock(sqlite3_file *id){ + return id->pMethods->xCheckReservedLock(id); +} +int sqlite3OsFileControl(sqlite3_file *id, int op, void *pArg){ + return id->pMethods->xFileControl(id,op,pArg); +} +int sqlite3OsSectorSize(sqlite3_file *id){ + int (*xSectorSize)(sqlite3_file*) = id->pMethods->xSectorSize; + return (xSectorSize ? 
xSectorSize(id) : SQLITE_DEFAULT_SECTOR_SIZE); +} +int sqlite3OsDeviceCharacteristics(sqlite3_file *id){ + return id->pMethods->xDeviceCharacteristics(id); +} + +/* +** The next group of routines are convenience wrappers around the +** VFS methods. +*/ +int sqlite3OsOpen( + sqlite3_vfs *pVfs, + const char *zPath, + sqlite3_file *pFile, + int flags, + int *pFlagsOut +){ + DO_OS_MALLOC_TEST; + return pVfs->xOpen(pVfs, zPath, pFile, flags, pFlagsOut); +} +int sqlite3OsDelete(sqlite3_vfs *pVfs, const char *zPath, int dirSync){ + return pVfs->xDelete(pVfs, zPath, dirSync); +} +int sqlite3OsAccess(sqlite3_vfs *pVfs, const char *zPath, int flags){ + return pVfs->xAccess(pVfs, zPath, flags); +} +int sqlite3OsGetTempname(sqlite3_vfs *pVfs, int nBufOut, char *zBufOut){ + return pVfs->xGetTempname(pVfs, nBufOut, zBufOut); +} +int sqlite3OsFullPathname( + sqlite3_vfs *pVfs, + const char *zPath, + int nPathOut, + char *zPathOut +){ + return pVfs->xFullPathname(pVfs, zPath, nPathOut, zPathOut); +} +void *sqlite3OsDlOpen(sqlite3_vfs *pVfs, const char *zPath){ + return pVfs->xDlOpen(pVfs, zPath); +} +void sqlite3OsDlError(sqlite3_vfs *pVfs, int nByte, char *zBufOut){ + pVfs->xDlError(pVfs, nByte, zBufOut); +} +void *sqlite3OsDlSym(sqlite3_vfs *pVfs, void *pHandle, const char *zSymbol){ + return pVfs->xDlSym(pVfs, pHandle, zSymbol); +} +void sqlite3OsDlClose(sqlite3_vfs *pVfs, void *pHandle){ + pVfs->xDlClose(pVfs, pHandle); +} +int sqlite3OsRandomness(sqlite3_vfs *pVfs, int nByte, char *zBufOut){ + return pVfs->xRandomness(pVfs, nByte, zBufOut); +} +int sqlite3OsSleep(sqlite3_vfs *pVfs, int nMicro){ + return pVfs->xSleep(pVfs, nMicro); +} +int sqlite3OsCurrentTime(sqlite3_vfs *pVfs, double *pTimeOut){ + return pVfs->xCurrentTime(pVfs, pTimeOut); +} + +int sqlite3OsOpenMalloc( + sqlite3_vfs *pVfs, + const char *zFile, + sqlite3_file **ppFile, + int flags, + int *pOutFlags +){ + int rc = SQLITE_NOMEM; + sqlite3_file *pFile; + pFile = (sqlite3_file *)sqlite3_malloc(pVfs->szOsFile); + if( pFile ){ + rc = sqlite3OsOpen(pVfs, zFile, pFile, flags, pOutFlags); + if( rc!=SQLITE_OK ){ + sqlite3_free(pFile); + }else{ + *ppFile = pFile; + } + } + return rc; +} +int sqlite3OsCloseFree(sqlite3_file *pFile){ + int rc = SQLITE_OK; + if( pFile ){ + rc = sqlite3OsClose(pFile); + sqlite3_free(pFile); + } + return rc; +} + +/* +** The list of all registered VFS implementations. This list is +** initialized to the single VFS returned by sqlite3OsDefaultVfs() +** upon the first call to sqlite3_vfs_find(). +*/ +static sqlite3_vfs *vfsList = 0; + +/* +** Locate a VFS by name. If no name is given, simply return the +** first VFS on the list. 
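**
** A brief caller-side sketch (illustrative only; the "os2" name matches
** the VFS defined in os_os2.c later in this checkin and is present only
** on an OS/2 build):
**
**       sqlite3_vfs *pDflt = sqlite3_vfs_find(0);
**       sqlite3_vfs *pOs2  = sqlite3_vfs_find("os2");
**
** Passing a NULL name returns the head of the list (the default VFS);
** passing a name returns the matching VFS or 0 if none is registered.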
+*/ +sqlite3_vfs *sqlite3_vfs_find(const char *zVfs){ +#ifndef SQLITE_MUTEX_NOOP + sqlite3_mutex *mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); +#endif + sqlite3_vfs *pVfs = 0; + static int isInit = 0; + sqlite3_mutex_enter(mutex); + if( !isInit ){ + vfsList = sqlite3OsDefaultVfs(); + isInit = 1; + } + for(pVfs = vfsList; pVfs; pVfs=pVfs->pNext){ + if( zVfs==0 ) break; + if( strcmp(zVfs, pVfs->zName)==0 ) break; + } + sqlite3_mutex_leave(mutex); + return pVfs; +} + +/* +** Unlink a VFS from the linked list +*/ +static void vfsUnlink(sqlite3_vfs *pVfs){ + assert( sqlite3_mutex_held(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)) ); + if( pVfs==0 ){ + /* No-op */ + }else if( vfsList==pVfs ){ + vfsList = pVfs->pNext; + }else if( vfsList ){ + sqlite3_vfs *p = vfsList; + while( p->pNext && p->pNext!=pVfs ){ + p = p->pNext; + } + if( p->pNext==pVfs ){ + p->pNext = pVfs->pNext; + } + } +} + +/* +** Register a VFS with the system. It is harmless to register the same +** VFS multiple times. The new VFS becomes the default if makeDflt is +** true. +*/ +int sqlite3_vfs_register(sqlite3_vfs *pVfs, int makeDflt){ +#ifndef SQLITE_MUTEX_NOOP + sqlite3_mutex *mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); +#endif + sqlite3_vfs_find(0); /* Make sure we are initialized */ + sqlite3_mutex_enter(mutex); + vfsUnlink(pVfs); + if( makeDflt || vfsList==0 ){ + pVfs->pNext = vfsList; + vfsList = pVfs; + }else{ + pVfs->pNext = vfsList->pNext; + vfsList->pNext = pVfs; + } + assert(vfsList); + sqlite3_mutex_leave(mutex); + return SQLITE_OK; +} + +/* +** Unregister a VFS so that it is no longer accessible. +*/ +int sqlite3_vfs_unregister(sqlite3_vfs *pVfs){ +#ifndef SQLITE_MUTEX_NOOP + sqlite3_mutex *mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); +#endif + sqlite3_mutex_enter(mutex); + vfsUnlink(pVfs); + sqlite3_mutex_leave(mutex); + return SQLITE_OK; +} Added: external/sqlite-source-3.5.7.x/os.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/os.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,275 @@ +/* +** 2001 September 16 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This header file (together with is companion C source-code file +** "os.c") attempt to abstract the underlying operating system so that +** the SQLite library will work on both POSIX and windows systems. +** +** This header file is #include-ed by sqliteInt.h and thus ends up +** being included by every source file. +*/ +#ifndef _SQLITE_OS_H_ +#define _SQLITE_OS_H_ + +/* +** Figure out if we are dealing with Unix, Windows, or some other +** operating system. After the following block of preprocess macros, +** all of OS_UNIX, OS_WIN, OS_OS2, and OS_OTHER will defined to either +** 1 or 0. One of the four will be 1. The other three will be 0. 
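**
** As a minimal sketch of the intended use (consistent with how the
** os_*.c files elsewhere in this checkin test these macros), platform
** specific code is then selected like so:
**
**       #if OS_UNIX
**         .. unix-only code, e.g. all of os_unix.c ..
**       #elif OS_WIN
**         .. win32-only code ..
**       #elif OS_OS2
**         .. OS/2-only code ..
**       #endif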
+*/ +#if defined(OS_OTHER) +# if OS_OTHER==1 +# undef OS_UNIX +# define OS_UNIX 0 +# undef OS_WIN +# define OS_WIN 0 +# undef OS_OS2 +# define OS_OS2 0 +# else +# undef OS_OTHER +# endif +#endif +#if !defined(OS_UNIX) && !defined(OS_OTHER) +# define OS_OTHER 0 +# ifndef OS_WIN +# if defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(__BORLANDC__) +# define OS_WIN 1 +# define OS_UNIX 0 +# define OS_OS2 0 +# elif defined(__EMX__) || defined(_OS2) || defined(OS2) || defined(_OS2_) || defined(__OS2__) +# define OS_WIN 0 +# define OS_UNIX 0 +# define OS_OS2 1 +# else +# define OS_WIN 0 +# define OS_UNIX 1 +# define OS_OS2 0 +# endif +# else +# define OS_UNIX 0 +# define OS_OS2 0 +# endif +#else +# ifndef OS_WIN +# define OS_WIN 0 +# endif +#endif + + + +/* +** Define the maximum size of a temporary filename +*/ +#if OS_WIN +# include +# define SQLITE_TEMPNAME_SIZE (MAX_PATH+50) +#elif OS_OS2 +# if (__GNUC__ > 3 || __GNUC__ == 3 && __GNUC_MINOR__ >= 3) && defined(OS2_HIGH_MEMORY) +# include /* has to be included before os2.h for linking to work */ +# endif +# define INCL_DOSDATETIME +# define INCL_DOSFILEMGR +# define INCL_DOSERRORS +# define INCL_DOSMISC +# define INCL_DOSPROCESS +# define INCL_DOSMODULEMGR +# define INCL_DOSSEMAPHORES +# include +# define SQLITE_TEMPNAME_SIZE (CCHMAXPATHCOMP) +#else +# define SQLITE_TEMPNAME_SIZE 200 +#endif + +/* If the SET_FULLSYNC macro is not defined above, then make it +** a no-op +*/ +#ifndef SET_FULLSYNC +# define SET_FULLSYNC(x,y) +#endif + +/* +** The default size of a disk sector +*/ +#ifndef SQLITE_DEFAULT_SECTOR_SIZE +# define SQLITE_DEFAULT_SECTOR_SIZE 512 +#endif + +/* +** Temporary files are named starting with this prefix followed by 16 random +** alphanumeric characters, and no file extension. They are stored in the +** OS's standard temporary file directory, and are deleted prior to exit. +** If sqlite is being embedded in another program, you may wish to change the +** prefix to reflect your program's name, so that if your program exits +** prematurely, old temporary files can be easily identified. This can be done +** using -DSQLITE_TEMP_FILE_PREFIX=myprefix_ on the compiler command line. +** +** 2006-10-31: The default prefix used to be "sqlite_". But then +** Mcafee started using SQLite in their anti-virus product and it +** started putting files with the "sqlite" name in the c:/temp folder. +** This annoyed many windows users. Those users would then do a +** Google search for "sqlite", find the telephone numbers of the +** developers and call to wake them up at night and complain. +** For this reason, the default name prefix is changed to be "sqlite" +** spelled backwards. So the temp files are still identified, but +** anybody smart enough to figure out the code is also likely smart +** enough to know that calling the developer will not help get rid +** of the file. +*/ +#ifndef SQLITE_TEMP_FILE_PREFIX +# define SQLITE_TEMP_FILE_PREFIX "etilqs_" +#endif + +/* +** The following values may be passed as the second argument to +** sqlite3OsLock(). The various locks exhibit the following semantics: +** +** SHARED: Any number of processes may hold a SHARED lock simultaneously. +** RESERVED: A single process may hold a RESERVED lock on a file at +** any time. Other processes may hold and obtain new SHARED locks. +** PENDING: A single process may hold a PENDING lock on a file at +** any one time. Existing SHARED locks may persist, but no new +** SHARED locks may be obtained by other processes. 
+** EXCLUSIVE: An EXCLUSIVE lock precludes all other locks. +** +** PENDING_LOCK may not be passed directly to sqlite3OsLock(). Instead, a +** process that requests an EXCLUSIVE lock may actually obtain a PENDING +** lock. This can be upgraded to an EXCLUSIVE lock by a subsequent call to +** sqlite3OsLock(). +*/ +#define NO_LOCK 0 +#define SHARED_LOCK 1 +#define RESERVED_LOCK 2 +#define PENDING_LOCK 3 +#define EXCLUSIVE_LOCK 4 + +/* +** File Locking Notes: (Mostly about windows but also some info for Unix) +** +** We cannot use LockFileEx() or UnlockFileEx() on Win95/98/ME because +** those functions are not available. So we use only LockFile() and +** UnlockFile(). +** +** LockFile() prevents not just writing but also reading by other processes. +** A SHARED_LOCK is obtained by locking a single randomly-chosen +** byte out of a specific range of bytes. The lock byte is obtained at +** random so two separate readers can probably access the file at the +** same time, unless they are unlucky and choose the same lock byte. +** An EXCLUSIVE_LOCK is obtained by locking all bytes in the range. +** There can only be one writer. A RESERVED_LOCK is obtained by locking +** a single byte of the file that is designated as the reserved lock byte. +** A PENDING_LOCK is obtained by locking a designated byte different from +** the RESERVED_LOCK byte. +** +** On WinNT/2K/XP systems, LockFileEx() and UnlockFileEx() are available, +** which means we can use reader/writer locks. When reader/writer locks +** are used, the lock is placed on the same range of bytes that is used +** for probabilistic locking in Win95/98/ME. Hence, the locking scheme +** will support two or more Win95 readers or two or more WinNT readers. +** But a single Win95 reader will lock out all WinNT readers and a single +** WinNT reader will lock out all other Win95 readers. +** +** The following #defines specify the range of bytes used for locking. +** SHARED_SIZE is the number of bytes available in the pool from which +** a random byte is selected for a shared lock. The pool of bytes for +** shared locks begins at SHARED_FIRST. +** +** These #defines are available in sqlite_aux.h so that adaptors for +** connecting SQLite to other operating systems can use the same byte +** ranges for locking. In particular, the same locking strategy and +** byte ranges are used for Unix. This leaves open the possiblity of having +** clients on win95, winNT, and unix all talking to the same shared file +** and all locking correctly. To do so would require that samba (or whatever +** tool is being used for file sharing) implements locks correctly between +** windows and unix. I'm guessing that isn't likely to happen, but by +** using the same locking range we are at least open to the possibility. +** +** Locking in windows is manditory. For this reason, we cannot store +** actual data in the bytes used for locking. The pager never allocates +** the pages involved in locking therefore. SHARED_SIZE is selected so +** that all locks will fit on a single page even at the minimum page size. +** PENDING_BYTE defines the beginning of the locks. By default PENDING_BYTE +** is set high so that we don't have to allocate an unused page except +** for very large databases. But one should test the page skipping logic +** by setting PENDING_BYTE low and running the entire regression suite. +** +** Changing the value of PENDING_BYTE results in a subtly incompatible +** file format. 
Depending on how it is changed, you might not notice +** the incompatibility right away, even running a full regression test. +** The default location of PENDING_BYTE is the first byte past the +** 1GB boundary. +** +*/ +#ifndef SQLITE_TEST +#define PENDING_BYTE 0x40000000 /* First byte past the 1GB boundary */ +#else +extern unsigned int sqlite3_pending_byte; +#define PENDING_BYTE sqlite3_pending_byte +#endif + +#define RESERVED_BYTE (PENDING_BYTE+1) +#define SHARED_FIRST (PENDING_BYTE+2) +#define SHARED_SIZE 510 + +/* +** Functions for accessing sqlite3_file methods +*/ +int sqlite3OsClose(sqlite3_file*); +int sqlite3OsRead(sqlite3_file*, void*, int amt, i64 offset); +int sqlite3OsWrite(sqlite3_file*, const void*, int amt, i64 offset); +int sqlite3OsTruncate(sqlite3_file*, i64 size); +int sqlite3OsSync(sqlite3_file*, int); +int sqlite3OsFileSize(sqlite3_file*, i64 *pSize); +int sqlite3OsLock(sqlite3_file*, int); +int sqlite3OsUnlock(sqlite3_file*, int); +int sqlite3OsCheckReservedLock(sqlite3_file *id); +int sqlite3OsFileControl(sqlite3_file*,int,void*); +int sqlite3OsSectorSize(sqlite3_file *id); +int sqlite3OsDeviceCharacteristics(sqlite3_file *id); + +/* +** Functions for accessing sqlite3_vfs methods +*/ +int sqlite3OsOpen(sqlite3_vfs *, const char *, sqlite3_file*, int, int *); +int sqlite3OsDelete(sqlite3_vfs *, const char *, int); +int sqlite3OsAccess(sqlite3_vfs *, const char *, int); +int sqlite3OsGetTempname(sqlite3_vfs *, int, char *); +int sqlite3OsFullPathname(sqlite3_vfs *, const char *, int, char *); +void *sqlite3OsDlOpen(sqlite3_vfs *, const char *); +void sqlite3OsDlError(sqlite3_vfs *, int, char *); +void *sqlite3OsDlSym(sqlite3_vfs *, void *, const char *); +void sqlite3OsDlClose(sqlite3_vfs *, void *); +int sqlite3OsRandomness(sqlite3_vfs *, int, char *); +int sqlite3OsSleep(sqlite3_vfs *, int); +int sqlite3OsCurrentTime(sqlite3_vfs *, double*); + +/* +** Convenience functions for opening and closing files using +** sqlite3_malloc() to obtain space for the file-handle structure. +*/ +int sqlite3OsOpenMalloc(sqlite3_vfs *, const char *, sqlite3_file **, int,int*); +int sqlite3OsCloseFree(sqlite3_file *); + +/* +** Each OS-specific backend defines an instance of the following +** structure for returning a pointer to its sqlite3_vfs. If OS_OTHER +** is defined (meaning that the application-defined OS interface layer +** is used) then there is no default VFS. The application must +** register one or more VFS structures using sqlite3_vfs_register() +** before attempting to use SQLite. +*/ +#if OS_UNIX || OS_WIN || OS_OS2 +sqlite3_vfs *sqlite3OsDefaultVfs(void); +#else +# define sqlite3OsDefaultVfs(X) 0 +#endif + +#endif /* _SQLITE_OS_H_ */ Added: external/sqlite-source-3.5.7.x/os_common.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/os_common.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,131 @@ +/* +** 2004 May 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains macros and a little bit of code that is common to +** all of the platform-specific files (os_*.c) and is #included into those +** files. +** +** This file should be #included by the os_*.c files only. 
It is not a +** general purpose header file. +*/ + +/* +** At least two bugs have slipped in because we changed the MEMORY_DEBUG +** macro to SQLITE_DEBUG and some older makefiles have not yet made the +** switch. The following code should catch this problem at compile-time. +*/ +#ifdef MEMORY_DEBUG +# error "The MEMORY_DEBUG macro is obsolete. Use SQLITE_DEBUG instead." +#endif + + +/* + * When testing, this global variable stores the location of the + * pending-byte in the database file. + */ +#ifdef SQLITE_TEST +unsigned int sqlite3_pending_byte = 0x40000000; +#endif + +#ifdef SQLITE_DEBUG +int sqlite3OSTrace = 0; +#define OSTRACE1(X) if( sqlite3OSTrace ) sqlite3DebugPrintf(X) +#define OSTRACE2(X,Y) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y) +#define OSTRACE3(X,Y,Z) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z) +#define OSTRACE4(X,Y,Z,A) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A) +#define OSTRACE5(X,Y,Z,A,B) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A,B) +#define OSTRACE6(X,Y,Z,A,B,C) \ + if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C) +#define OSTRACE7(X,Y,Z,A,B,C,D) \ + if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C,D) +#else +#define OSTRACE1(X) +#define OSTRACE2(X,Y) +#define OSTRACE3(X,Y,Z) +#define OSTRACE4(X,Y,Z,A) +#define OSTRACE5(X,Y,Z,A,B) +#define OSTRACE6(X,Y,Z,A,B,C) +#define OSTRACE7(X,Y,Z,A,B,C,D) +#endif + +/* +** Macros for performance tracing. Normally turned off. Only works +** on i486 hardware. +*/ +#ifdef SQLITE_PERFORMANCE_TRACE +__inline__ unsigned long long int hwtime(void){ + unsigned long long int x; + __asm__("rdtsc\n\t" + "mov %%edx, %%ecx\n\t" + :"=A" (x)); + return x; +} +static unsigned long long int g_start; +static unsigned int elapse; +#define TIMER_START g_start=hwtime() +#define TIMER_END elapse=hwtime()-g_start +#define TIMER_ELAPSED elapse +#else +#define TIMER_START +#define TIMER_END +#define TIMER_ELAPSED 0 +#endif + +/* +** If we compile with the SQLITE_TEST macro set, then the following block +** of code will give us the ability to simulate a disk I/O error. This +** is used for testing the I/O recovery logic. +*/ +#ifdef SQLITE_TEST +int sqlite3_io_error_hit = 0; /* Total number of I/O Errors */ +int sqlite3_io_error_hardhit = 0; /* Number of non-benign errors */ +int sqlite3_io_error_pending = 0; /* Count down to first I/O error */ +int sqlite3_io_error_persist = 0; /* True if I/O errors persist */ +int sqlite3_io_error_benign = 0; /* True if errors are benign */ +int sqlite3_diskfull_pending = 0; +int sqlite3_diskfull = 0; +#define SimulateIOErrorBenign(X) sqlite3_io_error_benign=(X) +#define SimulateIOError(CODE) \ + if( (sqlite3_io_error_persist && sqlite3_io_error_hit) \ + || sqlite3_io_error_pending-- == 1 ) \ + { local_ioerr(); CODE; } +static void local_ioerr(){ + IOTRACE(("IOERR\n")); + sqlite3_io_error_hit++; + if( !sqlite3_io_error_benign ) sqlite3_io_error_hardhit++; +} +#define SimulateDiskfullError(CODE) \ + if( sqlite3_diskfull_pending ){ \ + if( sqlite3_diskfull_pending == 1 ){ \ + local_ioerr(); \ + sqlite3_diskfull = 1; \ + sqlite3_io_error_hit = 1; \ + CODE; \ + }else{ \ + sqlite3_diskfull_pending--; \ + } \ + } +#else +#define SimulateIOErrorBenign(X) +#define SimulateIOError(A) +#define SimulateDiskfullError(A) +#endif + +/* +** When testing, keep a count of the number of open files. 
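**
** A sketch of the intended usage (the assert is a hypothetical
** test-harness check, not something done in this file): OpenCounter(+1)
** is invoked from a successful xOpen, OpenCounter(-1) from xClose, and a
** test run can then finish with
**
**       assert( sqlite3_open_file_count==0 );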
+*/ +#ifdef SQLITE_TEST +int sqlite3_open_file_count = 0; +#define OpenCounter(X) sqlite3_open_file_count+=(X) +#else +#define OpenCounter(X) +#endif Added: external/sqlite-source-3.5.7.x/os_os2.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/os_os2.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,984 @@ +/* +** 2006 Feb 14 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains code that is specific to OS/2. +*/ + +#include "sqliteInt.h" + +#if OS_OS2 + +/* +** A Note About Memory Allocation: +** +** This driver uses malloc()/free() directly rather than going through +** the SQLite-wrappers sqlite3_malloc()/sqlite3_free(). Those wrappers +** are designed for use on embedded systems where memory is scarce and +** malloc failures happen frequently. OS/2 does not typically run on +** embedded systems, and when it does the developers normally have bigger +** problems to worry about than running out of memory. So there is not +** a compelling need to use the wrappers. +** +** But there is a good reason to not use the wrappers. If we use the +** wrappers then we will get simulated malloc() failures within this +** driver. And that causes all kinds of problems for our tests. We +** could enhance SQLite to deal with simulated malloc failures within +** the OS driver, but the code to deal with those failure would not +** be exercised on Linux (which does not need to malloc() in the driver) +** and so we would have difficulty writing coverage tests for that +** code. Better to leave the code out, we think. +** +** The point of this discussion is as follows: When creating a new +** OS layer for an embedded system, if you use this file as an example, +** avoid the use of malloc()/free(). Those routines work ok on OS/2 +** desktops but not so well in embedded systems. +*/ + +/* +** Macros used to determine whether or not to use threads. +*/ +#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE +# define SQLITE_OS2_THREADS 1 +#endif + +/* +** Include code that is common to all os_*.c files +*/ +#include "os_common.h" + +/* +** The os2File structure is subclass of sqlite3_file specific for the OS/2 +** protability layer. +*/ +typedef struct os2File os2File; +struct os2File { + const sqlite3_io_methods *pMethod; /* Always the first entry */ + HFILE h; /* Handle for accessing the file */ + int delOnClose; /* True if file is to be deleted on close */ + char* pathToDel; /* Name of file to delete on close */ + unsigned char locktype; /* Type of lock currently held on this file */ +}; + +/***************************************************************************** +** The next group of routines implement the I/O methods specified +** by the sqlite3_io_methods object. +******************************************************************************/ + +/* +** Close a file. 
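**
** This routine is normally reached through the generic wrapper in os.c
** rather than called directly; roughly:
**
**       rc = sqlite3OsClose(id);
**
** which dispatches to id->pMethods->xClose(id), i.e. to os2Close() for a
** file opened through this VFS.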
+*/ +int os2Close( sqlite3_file *id ){ + APIRET rc = NO_ERROR; + os2File *pFile; + if( id && (pFile = (os2File*)id) != 0 ){ + OSTRACE2( "CLOSE %d\n", pFile->h ); + rc = DosClose( pFile->h ); + pFile->locktype = NO_LOCK; + if( pFile->delOnClose != 0 ){ + rc = DosForceDelete( (PSZ)pFile->pathToDel ); + } + if( pFile->pathToDel ){ + free( pFile->pathToDel ); + } + id = 0; + OpenCounter( -1 ); + } + + return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR; +} + +/* +** Read data from a file into a buffer. Return SQLITE_OK if all +** bytes were read successfully and SQLITE_IOERR if anything goes +** wrong. +*/ +int os2Read( + sqlite3_file *id, /* File to read from */ + void *pBuf, /* Write content into this buffer */ + int amt, /* Number of bytes to read */ + sqlite3_int64 offset /* Begin reading at this offset */ +){ + ULONG fileLocation = 0L; + ULONG got; + os2File *pFile = (os2File*)id; + assert( id!=0 ); + SimulateIOError( return SQLITE_IOERR_READ ); + OSTRACE3( "READ %d lock=%d\n", pFile->h, pFile->locktype ); + if( DosSetFilePtr(pFile->h, offset, FILE_BEGIN, &fileLocation) != NO_ERROR ){ + return SQLITE_IOERR; + } + if( DosRead( pFile->h, pBuf, amt, &got ) != NO_ERROR ){ + return SQLITE_IOERR_READ; + } + if( got == (ULONG)amt ) + return SQLITE_OK; + else { + memset(&((char*)pBuf)[got], 0, amt-got); + return SQLITE_IOERR_SHORT_READ; + } +} + +/* +** Write data from a buffer into a file. Return SQLITE_OK on success +** or some other error code on failure. +*/ +int os2Write( + sqlite3_file *id, /* File to write into */ + const void *pBuf, /* The bytes to be written */ + int amt, /* Number of bytes to write */ + sqlite3_int64 offset /* Offset into the file to begin writing at */ +){ + ULONG fileLocation = 0L; + APIRET rc = NO_ERROR; + ULONG wrote; + os2File *pFile = (os2File*)id; + assert( id!=0 ); + SimulateIOError( return SQLITE_IOERR_WRITE ); + SimulateDiskfullError( return SQLITE_FULL ); + OSTRACE3( "WRITE %d lock=%d\n", pFile->h, pFile->locktype ); + if( DosSetFilePtr(pFile->h, offset, FILE_BEGIN, &fileLocation) != NO_ERROR ){ + return SQLITE_IOERR; + } + assert( amt>0 ); + while( amt > 0 && + (rc = DosWrite( pFile->h, (PVOID)pBuf, amt, &wrote )) && + wrote > 0 + ){ + amt -= wrote; + pBuf = &((char*)pBuf)[wrote]; + } + + return ( rc != NO_ERROR || amt > (int)wrote ) ? SQLITE_FULL : SQLITE_OK; +} + +/* +** Truncate an open file to a specified size +*/ +int os2Truncate( sqlite3_file *id, i64 nByte ){ + APIRET rc = NO_ERROR; + os2File *pFile = (os2File*)id; + OSTRACE3( "TRUNCATE %d %lld\n", pFile->h, nByte ); + SimulateIOError( return SQLITE_IOERR_TRUNCATE ); + rc = DosSetFileSize( pFile->h, nByte ); + return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR; +} + +#ifdef SQLITE_TEST +/* +** Count the number of fullsyncs and normal syncs. This is used to test +** that syncs and fullsyncs are occuring at the right times. +*/ +int sqlite3_sync_count = 0; +int sqlite3_fullsync_count = 0; +#endif + +/* +** Make sure all writes to a particular file are committed to disk. +*/ +int os2Sync( sqlite3_file *id, int flags ){ + os2File *pFile = (os2File*)id; + OSTRACE3( "SYNC %d lock=%d\n", pFile->h, pFile->locktype ); +#ifdef SQLITE_TEST + if( flags & SQLITE_SYNC_FULL){ + sqlite3_fullsync_count++; + } + sqlite3_sync_count++; +#endif + return DosResetBuffer( pFile->h ) == NO_ERROR ? 
SQLITE_OK : SQLITE_IOERR; +} + +/* +** Determine the current size of a file in bytes +*/ +int os2FileSize( sqlite3_file *id, sqlite3_int64 *pSize ){ + APIRET rc = NO_ERROR; + FILESTATUS3 fsts3FileInfo; + memset(&fsts3FileInfo, 0, sizeof(fsts3FileInfo)); + assert( id!=0 ); + SimulateIOError( return SQLITE_IOERR ); + rc = DosQueryFileInfo( ((os2File*)id)->h, FIL_STANDARD, &fsts3FileInfo, sizeof(FILESTATUS3) ); + if( rc == NO_ERROR ){ + *pSize = fsts3FileInfo.cbFile; + return SQLITE_OK; + }else{ + return SQLITE_IOERR; + } +} + +/* +** Acquire a reader lock. +*/ +static int getReadLock( os2File *pFile ){ + FILELOCK LockArea, + UnlockArea; + APIRET res; + memset(&LockArea, 0, sizeof(LockArea)); + memset(&UnlockArea, 0, sizeof(UnlockArea)); + LockArea.lOffset = SHARED_FIRST; + LockArea.lRange = SHARED_SIZE; + UnlockArea.lOffset = 0L; + UnlockArea.lRange = 0L; + res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "GETREADLOCK %d res=%d\n", pFile->h, res ); + return res; +} + +/* +** Undo a readlock +*/ +static int unlockReadLock( os2File *id ){ + FILELOCK LockArea, + UnlockArea; + APIRET res; + memset(&LockArea, 0, sizeof(LockArea)); + memset(&UnlockArea, 0, sizeof(UnlockArea)); + LockArea.lOffset = 0L; + LockArea.lRange = 0L; + UnlockArea.lOffset = SHARED_FIRST; + UnlockArea.lRange = SHARED_SIZE; + res = DosSetFileLocks( id->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "UNLOCK-READLOCK file handle=%d res=%d?\n", id->h, res ); + return res; +} + +/* +** Lock the file with the lock specified by parameter locktype - one +** of the following: +** +** (1) SHARED_LOCK +** (2) RESERVED_LOCK +** (3) PENDING_LOCK +** (4) EXCLUSIVE_LOCK +** +** Sometimes when requesting one lock state, additional lock states +** are inserted in between. The locking might fail on one of the later +** transitions leaving the lock state different from what it started but +** still short of its goal. The following chart shows the allowed +** transitions and the inserted intermediate states: +** +** UNLOCKED -> SHARED +** SHARED -> RESERVED +** SHARED -> (PENDING) -> EXCLUSIVE +** RESERVED -> (PENDING) -> EXCLUSIVE +** PENDING -> EXCLUSIVE +** +** This routine will only increase a lock. The os2Unlock() routine +** erases all locks at once and returns us immediately to locking level 0. +** It is not possible to lower the locking level one step at a time. You +** must go straight to locking level 0. +*/ +int os2Lock( sqlite3_file *id, int locktype ){ + int rc = SQLITE_OK; /* Return code from subroutines */ + APIRET res = NO_ERROR; /* Result of an OS/2 lock call */ + int newLocktype; /* Set pFile->locktype to this value before exiting */ + int gotPendingLock = 0;/* True if we acquired a PENDING lock this time */ + FILELOCK LockArea, + UnlockArea; + os2File *pFile = (os2File*)id; + memset(&LockArea, 0, sizeof(LockArea)); + memset(&UnlockArea, 0, sizeof(UnlockArea)); + assert( pFile!=0 ); + OSTRACE4( "LOCK %d %d was %d\n", pFile->h, locktype, pFile->locktype ); + + /* If there is already a lock of this type or more restrictive on the + ** os2File, do nothing. Don't use the end_lock: exit path, as + ** sqlite3OsEnterMutex() hasn't been called yet. 
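  **
  ** A hypothetical caller-side sequence, for illustration (the third call
  ** returns immediately at the check below, because SHARED is not more
  ** restrictive than the RESERVED lock already held):
  **
  **       os2Lock(id, SHARED_LOCK);
  **       os2Lock(id, RESERVED_LOCK);
  **       os2Lock(id, SHARED_LOCK);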
+ */ + if( pFile->locktype>=locktype ){ + OSTRACE3( "LOCK %d %d ok (already held)\n", pFile->h, locktype ); + return SQLITE_OK; + } + + /* Make sure the locking sequence is correct + */ + assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); + assert( locktype!=PENDING_LOCK ); + assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); + + /* Lock the PENDING_LOCK byte if we need to acquire a PENDING lock or + ** a SHARED lock. If we are acquiring a SHARED lock, the acquisition of + ** the PENDING_LOCK byte is temporary. + */ + newLocktype = pFile->locktype; + if( pFile->locktype==NO_LOCK + || (locktype==EXCLUSIVE_LOCK && pFile->locktype==RESERVED_LOCK) + ){ + int cnt = 3; + + LockArea.lOffset = PENDING_BYTE; + LockArea.lRange = 1L; + UnlockArea.lOffset = 0L; + UnlockArea.lRange = 0L; + + while( cnt-->0 && ( res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L) ) + != NO_ERROR + ){ + /* Try 3 times to get the pending lock. The pending lock might be + ** held by another reader process who will release it momentarily. + */ + OSTRACE2( "LOCK could not get a PENDING lock. cnt=%d\n", cnt ); + DosSleep(1); + } + if( res == NO_ERROR){ + gotPendingLock = 1; + OSTRACE3( "LOCK %d pending lock boolean set. res=%d\n", pFile->h, res ); + } + } + + /* Acquire a shared lock + */ + if( locktype==SHARED_LOCK && res == NO_ERROR ){ + assert( pFile->locktype==NO_LOCK ); + res = getReadLock(pFile); + if( res == NO_ERROR ){ + newLocktype = SHARED_LOCK; + } + OSTRACE3( "LOCK %d acquire shared lock. res=%d\n", pFile->h, res ); + } + + /* Acquire a RESERVED lock + */ + if( locktype==RESERVED_LOCK && res == NO_ERROR ){ + assert( pFile->locktype==SHARED_LOCK ); + LockArea.lOffset = RESERVED_BYTE; + LockArea.lRange = 1L; + UnlockArea.lOffset = 0L; + UnlockArea.lRange = 0L; + res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + if( res == NO_ERROR ){ + newLocktype = RESERVED_LOCK; + } + OSTRACE3( "LOCK %d acquire reserved lock. res=%d\n", pFile->h, res ); + } + + /* Acquire a PENDING lock + */ + if( locktype==EXCLUSIVE_LOCK && res == NO_ERROR ){ + newLocktype = PENDING_LOCK; + gotPendingLock = 0; + OSTRACE2( "LOCK %d acquire pending lock. pending lock boolean unset.\n", pFile->h ); + } + + /* Acquire an EXCLUSIVE lock + */ + if( locktype==EXCLUSIVE_LOCK && res == NO_ERROR ){ + assert( pFile->locktype>=SHARED_LOCK ); + res = unlockReadLock(pFile); + OSTRACE2( "unreadlock = %d\n", res ); + LockArea.lOffset = SHARED_FIRST; + LockArea.lRange = SHARED_SIZE; + UnlockArea.lOffset = 0L; + UnlockArea.lRange = 0L; + res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + if( res == NO_ERROR ){ + newLocktype = EXCLUSIVE_LOCK; + }else{ + OSTRACE2( "OS/2 error-code = %d\n", res ); + getReadLock(pFile); + } + OSTRACE3( "LOCK %d acquire exclusive lock. res=%d\n", pFile->h, res ); + } + + /* If we are holding a PENDING lock that ought to be released, then + ** release it now. + */ + if( gotPendingLock && locktype==SHARED_LOCK ){ + int r; + LockArea.lOffset = 0L; + LockArea.lRange = 0L; + UnlockArea.lOffset = PENDING_BYTE; + UnlockArea.lRange = 1L; + r = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "LOCK %d unlocking pending/is shared. r=%d\n", pFile->h, r ); + } + + /* Update the state of the lock has held in the file descriptor then + ** return the appropriate result code. 
+ */ + if( res == NO_ERROR ){ + rc = SQLITE_OK; + }else{ + OSTRACE4( "LOCK FAILED %d trying for %d but got %d\n", pFile->h, + locktype, newLocktype ); + rc = SQLITE_BUSY; + } + pFile->locktype = newLocktype; + OSTRACE3( "LOCK %d now %d\n", pFile->h, pFile->locktype ); + return rc; +} + +/* +** This routine checks if there is a RESERVED lock held on the specified +** file by this or any other process. If such a lock is held, return +** non-zero, otherwise zero. +*/ +int os2CheckReservedLock( sqlite3_file *id ){ + int r = 0; + os2File *pFile = (os2File*)id; + assert( pFile!=0 ); + if( pFile->locktype>=RESERVED_LOCK ){ + r = 1; + OSTRACE3( "TEST WR-LOCK %d %d (local)\n", pFile->h, r ); + }else{ + FILELOCK LockArea, + UnlockArea; + APIRET rc = NO_ERROR; + memset(&LockArea, 0, sizeof(LockArea)); + memset(&UnlockArea, 0, sizeof(UnlockArea)); + LockArea.lOffset = RESERVED_BYTE; + LockArea.lRange = 1L; + UnlockArea.lOffset = 0L; + UnlockArea.lRange = 0L; + rc = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "TEST WR-LOCK %d lock reserved byte rc=%d\n", pFile->h, rc ); + if( rc == NO_ERROR ){ + APIRET rcu = NO_ERROR; /* return code for unlocking */ + LockArea.lOffset = 0L; + LockArea.lRange = 0L; + UnlockArea.lOffset = RESERVED_BYTE; + UnlockArea.lRange = 1L; + rcu = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "TEST WR-LOCK %d unlock reserved byte r=%d\n", pFile->h, rcu ); + } + r = !(rc == NO_ERROR); + OSTRACE3( "TEST WR-LOCK %d %d (remote)\n", pFile->h, r ); + } + return r; +} + +/* +** Lower the locking level on file descriptor id to locktype. locktype +** must be either NO_LOCK or SHARED_LOCK. +** +** If the locking level of the file descriptor is already at or below +** the requested locking level, this routine is a no-op. +** +** It is not possible for this routine to fail if the second argument +** is NO_LOCK. If the second argument is SHARED_LOCK then this routine +** might return SQLITE_IOERR; +*/ +int os2Unlock( sqlite3_file *id, int locktype ){ + int type; + os2File *pFile = (os2File*)id; + APIRET rc = SQLITE_OK; + APIRET res = NO_ERROR; + FILELOCK LockArea, + UnlockArea; + memset(&LockArea, 0, sizeof(LockArea)); + memset(&UnlockArea, 0, sizeof(UnlockArea)); + assert( pFile!=0 ); + assert( locktype<=SHARED_LOCK ); + OSTRACE4( "UNLOCK %d to %d was %d\n", pFile->h, locktype, pFile->locktype ); + type = pFile->locktype; + if( type>=EXCLUSIVE_LOCK ){ + LockArea.lOffset = 0L; + LockArea.lRange = 0L; + UnlockArea.lOffset = SHARED_FIRST; + UnlockArea.lRange = SHARED_SIZE; + res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "UNLOCK %d exclusive lock res=%d\n", pFile->h, res ); + if( locktype==SHARED_LOCK && getReadLock(pFile) != NO_ERROR ){ + /* This should never happen. 
We should always be able to + ** reacquire the read lock */ + OSTRACE3( "UNLOCK %d to %d getReadLock() failed\n", pFile->h, locktype ); + rc = SQLITE_IOERR_UNLOCK; + } + } + if( type>=RESERVED_LOCK ){ + LockArea.lOffset = 0L; + LockArea.lRange = 0L; + UnlockArea.lOffset = RESERVED_BYTE; + UnlockArea.lRange = 1L; + res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "UNLOCK %d reserved res=%d\n", pFile->h, res ); + } + if( locktype==NO_LOCK && type>=SHARED_LOCK ){ + res = unlockReadLock(pFile); + OSTRACE5( "UNLOCK %d is %d want %d res=%d\n", pFile->h, type, locktype, res ); + } + if( type>=PENDING_LOCK ){ + LockArea.lOffset = 0L; + LockArea.lRange = 0L; + UnlockArea.lOffset = PENDING_BYTE; + UnlockArea.lRange = 1L; + res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L ); + OSTRACE3( "UNLOCK %d pending res=%d\n", pFile->h, res ); + } + pFile->locktype = locktype; + OSTRACE3( "UNLOCK %d now %d\n", pFile->h, pFile->locktype ); + return rc; +} + +/* +** Control and query of the open file handle. +*/ +static int os2FileControl(sqlite3_file *id, int op, void *pArg){ + switch( op ){ + case SQLITE_FCNTL_LOCKSTATE: { + *(int*)pArg = ((os2File*)id)->locktype; + OSTRACE3( "FCNTL_LOCKSTATE %d lock=%d\n", ((os2File*)id)->h, ((os2File*)id)->locktype ); + return SQLITE_OK; + } + } + return SQLITE_ERROR; +} + +/* +** Return the sector size in bytes of the underlying block device for +** the specified file. This is almost always 512 bytes, but may be +** larger for some devices. +** +** SQLite code assumes this function cannot fail. It also assumes that +** if two files are created in the same file-system directory (i.e. +** a database and its journal file) that the sector size will be the +** same for both. +*/ +static int os2SectorSize(sqlite3_file *id){ + return SQLITE_DEFAULT_SECTOR_SIZE; +} + +/* +** Return a vector of device characteristics. +*/ +static int os2DeviceCharacteristics(sqlite3_file *id){ + return 0; +} + +/* +** This vector defines all the methods that can operate on an +** sqlite3_file for os2. +*/ +static const sqlite3_io_methods os2IoMethod = { + 1, /* iVersion */ + os2Close, + os2Read, + os2Write, + os2Truncate, + os2Sync, + os2FileSize, + os2Lock, + os2Unlock, + os2CheckReservedLock, + os2FileControl, + os2SectorSize, + os2DeviceCharacteristics +}; + +/*************************************************************************** +** Here ends the I/O methods that form the sqlite3_io_methods object. +** +** The next block of code implements the VFS methods. +****************************************************************************/ + +/* +** Open a file. +*/ +static int os2Open( + sqlite3_vfs *pVfs, /* Not used */ + const char *zName, /* Name of the file */ + sqlite3_file *id, /* Write the SQLite file handle here */ + int flags, /* Open mode flags */ + int *pOutFlags /* Status return flags */ +){ + HFILE h; + ULONG ulFileAttribute = 0; + ULONG ulOpenFlags = 0; + ULONG ulOpenMode = 0; + os2File *pFile = (os2File*)id; + APIRET rc = NO_ERROR; + ULONG ulAction; + + memset(pFile, 0, sizeof(*pFile)); + + OSTRACE2( "OPEN want %d\n", flags ); + + //ulOpenMode = flags & SQLITE_OPEN_READWRITE ? OPEN_ACCESS_READWRITE : OPEN_ACCESS_READONLY; + if( flags & SQLITE_OPEN_READWRITE ){ + ulOpenMode |= OPEN_ACCESS_READWRITE; + OSTRACE1( "OPEN read/write\n" ); + }else{ + ulOpenMode |= OPEN_ACCESS_READONLY; + OSTRACE1( "OPEN read only\n" ); + } + + //ulOpenFlags = flags & SQLITE_OPEN_CREATE ? 
OPEN_ACTION_CREATE_IF_NEW : OPEN_ACTION_FAIL_IF_NEW; + if( flags & SQLITE_OPEN_CREATE ){ + ulOpenFlags |= OPEN_ACTION_OPEN_IF_EXISTS | OPEN_ACTION_CREATE_IF_NEW; + OSTRACE1( "OPEN open new/create\n" ); + }else{ + ulOpenFlags |= OPEN_ACTION_OPEN_IF_EXISTS | OPEN_ACTION_FAIL_IF_NEW; + OSTRACE1( "OPEN open existing\n" ); + } + + //ulOpenMode |= flags & SQLITE_OPEN_MAIN_DB ? OPEN_SHARE_DENYNONE : OPEN_SHARE_DENYWRITE; + if( flags & SQLITE_OPEN_MAIN_DB ){ + ulOpenMode |= OPEN_SHARE_DENYNONE; + OSTRACE1( "OPEN share read/write\n" ); + }else{ + ulOpenMode |= OPEN_SHARE_DENYWRITE; + OSTRACE1( "OPEN share read only\n" ); + } + + if( flags & (SQLITE_OPEN_TEMP_DB | SQLITE_OPEN_TEMP_JOURNAL + | SQLITE_OPEN_SUBJOURNAL) ){ + //ulFileAttribute = FILE_HIDDEN; //for debugging, we want to make sure it is deleted + ulFileAttribute = FILE_NORMAL; + pFile->delOnClose = 1; + pFile->pathToDel = (char*)malloc(sizeof(char) * pVfs->mxPathname); + sqlite3OsFullPathname(pVfs, zName, pVfs->mxPathname, pFile->pathToDel); + OSTRACE1( "OPEN hidden/delete on close file attributes\n" ); + }else{ + ulFileAttribute = FILE_ARCHIVED | FILE_NORMAL; + pFile->delOnClose = 0; + pFile->pathToDel = NULL; + OSTRACE1( "OPEN normal file attribute\n" ); + } + + /* always open in random access mode for possibly better speed */ + ulOpenMode |= OPEN_FLAGS_RANDOM; + ulOpenMode |= OPEN_FLAGS_FAIL_ON_ERROR; + + rc = DosOpen( (PSZ)zName, + &h, + &ulAction, + 0L, + ulFileAttribute, + ulOpenFlags, + ulOpenMode, + (PEAOP2)NULL ); + if( rc != NO_ERROR ){ + OSTRACE7( "OPEN Invalid handle rc=%d: zName=%s, ulAction=%#lx, ulAttr=%#lx, ulFlags=%#lx, ulMode=%#lx\n", + rc, zName, ulAction, ulFileAttribute, ulOpenFlags, ulOpenMode ); + if( flags & SQLITE_OPEN_READWRITE ){ + OSTRACE2( "OPEN %d Invalid handle\n", ((flags | SQLITE_OPEN_READONLY) & ~SQLITE_OPEN_READWRITE) ); + return os2Open( 0, zName, id, + ((flags | SQLITE_OPEN_READONLY) & ~SQLITE_OPEN_READWRITE), + pOutFlags ); + }else{ + return SQLITE_CANTOPEN; + } + } + + if( pOutFlags ){ + *pOutFlags = flags & SQLITE_OPEN_READWRITE ? SQLITE_OPEN_READWRITE : SQLITE_OPEN_READONLY; + } + + pFile->pMethod = &os2IoMethod; + pFile->h = h; + OpenCounter(+1); + OSTRACE3( "OPEN %d pOutFlags=%d\n", pFile->h, pOutFlags ); + return SQLITE_OK; +} + +/* +** Delete the named file. +*/ +int os2Delete( + sqlite3_vfs *pVfs, /* Not used on os2 */ + const char *zFilename, /* Name of file to delete */ + int syncDir /* Not used on os2 */ +){ + APIRET rc = NO_ERROR; + SimulateIOError(return SQLITE_IOERR_DELETE); + rc = DosDelete( (PSZ)zFilename ); + OSTRACE2( "DELETE \"%s\"\n", zFilename ); + return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR; +} + +/* +** Check the existance and status of a file. 
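**
** In this interface the result is the answer itself (non-zero means yes),
** so a caller-side sketch looks like this, where zName is whatever path
** the caller is probing:
**
**       if( sqlite3OsAccess(pVfs, zName, SQLITE_ACCESS_EXISTS) ){
**         ..the file exists..
**       }
**       int canWrite = sqlite3OsAccess(pVfs, zName, SQLITE_ACCESS_READWRITE);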
+*/ +static int os2Access( + sqlite3_vfs *pVfs, /* Not used on os2 */ + const char *zFilename, /* Name of file to check */ + int flags /* Type of test to make on this file */ +){ + FILESTATUS3 fsts3ConfigInfo; + APIRET rc = NO_ERROR; + + memset(&fsts3ConfigInfo, 0, sizeof(fsts3ConfigInfo)); + rc = DosQueryPathInfo( (PSZ)zFilename, FIL_STANDARD, + &fsts3ConfigInfo, sizeof(FILESTATUS3) ); + OSTRACE4( "ACCESS fsts3ConfigInfo.attrFile=%d flags=%d rc=%d\n", + fsts3ConfigInfo.attrFile, flags, rc ); + switch( flags ){ + case SQLITE_ACCESS_READ: + case SQLITE_ACCESS_EXISTS: + rc = (rc == NO_ERROR); + OSTRACE3( "ACCESS %s access of read and exists rc=%d\n", zFilename, rc ); + break; + case SQLITE_ACCESS_READWRITE: + rc = (fsts3ConfigInfo.attrFile & FILE_READONLY) == 0; + OSTRACE3( "ACCESS %s access of read/write rc=%d\n", zFilename, rc ); + break; + default: + assert( !"Invalid flags argument" ); + } + return rc; +} + + +/* +** Create a temporary file name in zBuf. zBuf must be big enough to +** hold at pVfs->mxPathname characters. +*/ +static int os2GetTempname( sqlite3_vfs *pVfs, int nBuf, char *zBuf ){ + static const unsigned char zChars[] = + "abcdefghijklmnopqrstuvwxyz" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "0123456789"; + int i, j; + char zTempPathBuf[3]; + PSZ zTempPath = (PSZ)&zTempPathBuf; + if( DosScanEnv( (PSZ)"TEMP", &zTempPath ) ){ + if( DosScanEnv( (PSZ)"TMP", &zTempPath ) ){ + if( DosScanEnv( (PSZ)"TMPDIR", &zTempPath ) ){ + ULONG ulDriveNum = 0, ulDriveMap = 0; + DosQueryCurrentDisk( &ulDriveNum, &ulDriveMap ); + sprintf( (char*)zTempPath, "%c:", (char)( 'A' + ulDriveNum - 1 ) ); + } + } + } + /* strip off a trailing slashes or backslashes, otherwise we would get * + * multiple (back)slashes which causes DosOpen() to fail */ + j = strlen(zTempPath); + while( j > 0 && ( zTempPath[j-1] == '\\' || zTempPath[j-1] == '/' ) ){ + j--; + } + zTempPath[j] = '\0'; + sqlite3_snprintf( nBuf-30, zBuf, + "%s\\"SQLITE_TEMP_FILE_PREFIX, zTempPath ); + j = strlen( zBuf ); + sqlite3Randomness( 20, &zBuf[j] ); + for( i = 0; i < 20; i++, j++ ){ + zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; + } + zBuf[j] = 0; + OSTRACE2( "TEMP FILENAME: %s\n", zBuf ); + return SQLITE_OK; +} + + +/* +** Turn a relative pathname into a full pathname. Write the full +** pathname into zFull[]. zFull[] will be at least pVfs->mxPathname +** bytes in size. +*/ +static int os2FullPathname( + sqlite3_vfs *pVfs, /* Pointer to vfs object */ + const char *zRelative, /* Possibly relative input path */ + int nFull, /* Size of output buffer in bytes */ + char *zFull /* Output buffer */ +){ + APIRET rc = DosQueryPathInfo( zRelative, FIL_QUERYFULLNAME, zFull, nFull ); + return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR; +} + +#ifndef SQLITE_OMIT_LOAD_EXTENSION +/* +** Interfaces for opening a shared library, finding entry points +** within the shared library, and closing the shared library. +*/ +/* +** Interfaces for opening a shared library, finding entry points +** within the shared library, and closing the shared library. +*/ +static void *os2DlOpen(sqlite3_vfs *pVfs, const char *zFilename){ + UCHAR loadErr[256]; + HMODULE hmod; + APIRET rc; + rc = DosLoadModule((PSZ)loadErr, sizeof(loadErr), zFilename, &hmod); + return rc != NO_ERROR ? 0 : (void*)hmod; +} +/* +** A no-op since the error code is returned on the DosLoadModule call. +** os2Dlopen returns zero if DosLoadModule is not successful. 
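**
** For illustration, a caller-side sketch using the generic wrappers from
** os.c ("some.dll" and "entrypoint" are placeholder names, not anything
** defined in this checkin):
**
**       void *pMod = sqlite3OsDlOpen(pVfs, "some.dll");
**       if( pMod ){
**         void (*xEntry)(void);
**         xEntry = (void(*)(void))sqlite3OsDlSym(pVfs, pMod, "entrypoint");
**         if( xEntry ) xEntry();
**         sqlite3OsDlClose(pVfs, pMod);
**       }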
+*/ +static void os2DlError(sqlite3_vfs *pVfs, int nBuf, char *zBufOut){ +/* no-op */ +} +void *os2DlSym(sqlite3_vfs *pVfs, void *pHandle, const char *zSymbol){ + PFN pfn; + APIRET rc; + rc = DosQueryProcAddr((HMODULE)pHandle, 0L, zSymbol, &pfn); + if( rc != NO_ERROR ){ + /* if the symbol itself was not found, search again for the same + * symbol with an extra underscore, that might be needed depending + * on the calling convention */ + char _zSymbol[256] = "_"; + strncat(_zSymbol, zSymbol, 255); + rc = DosQueryProcAddr((HMODULE)pHandle, 0L, _zSymbol, &pfn); + } + return rc != NO_ERROR ? 0 : (void*)pfn; +} +void os2DlClose(sqlite3_vfs *pVfs, void *pHandle){ + DosFreeModule((HMODULE)pHandle); +} +#else /* if SQLITE_OMIT_LOAD_EXTENSION is defined: */ + #define os2DlOpen 0 + #define os2DlError 0 + #define os2DlSym 0 + #define os2DlClose 0 +#endif + + +/* +** Write up to nBuf bytes of randomness into zBuf. +*/ +static int os2Randomness(sqlite3_vfs *pVfs, int nBuf, char *zBuf ){ + ULONG sizeofULong = sizeof(ULONG); + int n = 0; + if( sizeof(DATETIME) <= nBuf - n ){ + DATETIME x; + DosGetDateTime(&x); + memcpy(&zBuf[n], &x, sizeof(x)); + n += sizeof(x); + } + + if( sizeofULong <= nBuf - n ){ + PPIB ppib; + DosGetInfoBlocks(NULL, &ppib); + memcpy(&zBuf[n], &ppib->pib_ulpid, sizeofULong); + n += sizeofULong; + } + + if( sizeofULong <= nBuf - n ){ + PTIB ptib; + DosGetInfoBlocks(&ptib, NULL); + memcpy(&zBuf[n], &ptib->tib_ptib2->tib2_ultid, sizeofULong); + n += sizeofULong; + } + + /* if we still haven't filled the buffer yet the following will */ + /* grab everything once instead of making several calls for a single item */ + if( sizeofULong <= nBuf - n ){ + ULONG ulSysInfo[QSV_MAX]; + DosQuerySysInfo(1L, QSV_MAX, ulSysInfo, sizeofULong * QSV_MAX); + + memcpy(&zBuf[n], &ulSysInfo[QSV_MS_COUNT - 1], sizeofULong); + n += sizeofULong; + + if( sizeofULong <= nBuf - n ){ + memcpy(&zBuf[n], &ulSysInfo[QSV_TIMER_INTERVAL - 1], sizeofULong); + n += sizeofULong; + } + if( sizeofULong <= nBuf - n ){ + memcpy(&zBuf[n], &ulSysInfo[QSV_TIME_LOW - 1], sizeofULong); + n += sizeofULong; + } + if( sizeofULong <= nBuf - n ){ + memcpy(&zBuf[n], &ulSysInfo[QSV_TIME_HIGH - 1], sizeofULong); + n += sizeofULong; + } + if( sizeofULong <= nBuf - n ){ + memcpy(&zBuf[n], &ulSysInfo[QSV_TOTAVAILMEM - 1], sizeofULong); + n += sizeofULong; + } + } + + return n; +} + +/* +** Sleep for a little while. Return the amount of time slept. +** The argument is the number of microseconds we want to sleep. +** The return value is the number of microseconds of sleep actually +** requested from the underlying operating system, a number which +** might be greater than or equal to the argument, but not less +** than the argument. +*/ +static int os2Sleep( sqlite3_vfs *pVfs, int microsec ){ + DosSleep( (microsec/1000) ); + return microsec; +} + +/* +** The following variable, if set to a non-zero value, becomes the result +** returned from sqlite3OsCurrentTime(). This is used for testing. +*/ +#ifdef SQLITE_TEST +int sqlite3_current_time = 0; +#endif + +/* +** Find the current time (in Universal Coordinated Time). Write the +** current time and date as a Julian Day number into *prNow and +** return 0. Return 1 if the time and date cannot be found. 
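**
** For reference, 1970-01-01 00:00:00 UTC is Julian Day 2440587.5 (the
** same constant used in the SQLITE_TEST branch below), so a caller could
** convert the result back to Unix time with, roughly:
**
**       double jd, unixSeconds;
**       os2CurrentTime(pVfs, &jd);
**       unixSeconds = (jd - 2440587.5)*86400.0;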
+*/ +int os2CurrentTime( sqlite3_vfs *pVfs, double *prNow ){ + double now; + SHORT minute; /* needs to be able to cope with negative timezone offset */ + USHORT second, hour, + day, month, year; + DATETIME dt; + DosGetDateTime( &dt ); + second = (USHORT)dt.seconds; + minute = (SHORT)dt.minutes + dt.timezone; + hour = (USHORT)dt.hours; + day = (USHORT)dt.day; + month = (USHORT)dt.month; + year = (USHORT)dt.year; + + /* Calculations from http://www.astro.keele.ac.uk/~rno/Astronomy/hjd.html + http://www.astro.keele.ac.uk/~rno/Astronomy/hjd-0.1.c */ + /* Calculate the Julian days */ + now = day - 32076 + + 1461*(year + 4800 + (month - 14)/12)/4 + + 367*(month - 2 - (month - 14)/12*12)/12 - + 3*((year + 4900 + (month - 14)/12)/100)/4; + + /* Add the fractional hours, mins and seconds */ + now += (hour + 12.0)/24.0; + now += minute/1440.0; + now += second/86400.0; + *prNow = now; +#ifdef SQLITE_TEST + if( sqlite3_current_time ){ + *prNow = sqlite3_current_time/86400.0 + 2440587.5; + } +#endif + return 0; +} + +/* +** Return a pointer to the sqlite3DefaultVfs structure. We use +** a function rather than give the structure global scope because +** some compilers (MSVC) do not allow forward declarations of +** initialized structures. +*/ +sqlite3_vfs *sqlite3OsDefaultVfs(void){ + static sqlite3_vfs os2Vfs = { + 1, /* iVersion */ + sizeof(os2File), /* szOsFile */ + CCHMAXPATH, /* mxPathname */ + 0, /* pNext */ + "os2", /* zName */ + 0, /* pAppData */ + + os2Open, /* xOpen */ + os2Delete, /* xDelete */ + os2Access, /* xAccess */ + os2GetTempname, /* xGetTempname */ + os2FullPathname, /* xFullPathname */ + os2DlOpen, /* xDlOpen */ + os2DlError, /* xDlError */ + os2DlSym, /* xDlSym */ + os2DlClose, /* xDlClose */ + os2Randomness, /* xRandomness */ + os2Sleep, /* xSleep */ + os2CurrentTime /* xCurrentTime */ + }; + + return &os2Vfs; +} + +#endif /* OS_OS2 */ Added: external/sqlite-source-3.5.7.x/os_unix.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/os_unix.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,2766 @@ +/* +** 2004 May 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains code that is specific to Unix systems. +*/ +#include "sqliteInt.h" +#if OS_UNIX /* This file is used on unix only */ + +/* #define SQLITE_ENABLE_LOCKING_STYLE 0 */ + +/* +** These #defines should enable >2GB file support on Posix if the +** underlying operating system supports it. If the OS lacks +** large file support, these should be no-ops. +** +** Large file support can be disabled using the -DSQLITE_DISABLE_LFS switch +** on the compiler command line. This is necessary if you are compiling +** on a recent machine (ex: RedHat 7.2) but you want your code to work +** on an older machine (ex: RedHat 6.0). If you compile on RedHat 7.2 +** without this option, LFS is enable. But LFS does not exist in the kernel +** in RedHat 6.0, so the code won't work. Hence, for maximum binary +** portability you should omit LFS. +*/ +#ifndef SQLITE_DISABLE_LFS +# define _LARGE_FILE 1 +# ifndef _FILE_OFFSET_BITS +# define _FILE_OFFSET_BITS 64 +# endif +# define _LARGEFILE_SOURCE 1 +#endif + +/* +** standard include files. 
+*/ +#include +#include +#include +#include +#include +#include +#include +#ifdef SQLITE_ENABLE_LOCKING_STYLE +#include +#include +#include +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + +/* +** If we are to be thread-safe, include the pthreads header and define +** the SQLITE_UNIX_THREADS macro. +*/ +#if SQLITE_THREADSAFE +# include +# define SQLITE_UNIX_THREADS 1 +#endif + +/* +** Default permissions when creating a new file +*/ +#ifndef SQLITE_DEFAULT_FILE_PERMISSIONS +# define SQLITE_DEFAULT_FILE_PERMISSIONS 0644 +#endif + +/* +** Maximum supported path-length. +*/ +#define MAX_PATHNAME 512 + + +/* +** The unixFile structure is subclass of sqlite3_file specific for the unix +** protability layer. +*/ +typedef struct unixFile unixFile; +struct unixFile { + sqlite3_io_methods const *pMethod; /* Always the first entry */ +#ifdef SQLITE_TEST + /* In test mode, increase the size of this structure a bit so that + ** it is larger than the struct CrashFile defined in test6.c. + */ + char aPadding[32]; +#endif + struct openCnt *pOpen; /* Info about all open fd's on this inode */ + struct lockInfo *pLock; /* Info about locks on this inode */ +#ifdef SQLITE_ENABLE_LOCKING_STYLE + void *lockingContext; /* Locking style specific state */ +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + int h; /* The file descriptor */ + unsigned char locktype; /* The type of lock held on this fd */ + int dirfd; /* File descriptor for the directory */ +#if SQLITE_THREADSAFE + pthread_t tid; /* The thread that "owns" this unixFile */ +#endif +}; + +/* +** Include code that is common to all os_*.c files +*/ +#include "os_common.h" + +/* +** Define various macros that are missing from some systems. +*/ +#ifndef O_LARGEFILE +# define O_LARGEFILE 0 +#endif +#ifdef SQLITE_DISABLE_LFS +# undef O_LARGEFILE +# define O_LARGEFILE 0 +#endif +#ifndef O_NOFOLLOW +# define O_NOFOLLOW 0 +#endif +#ifndef O_BINARY +# define O_BINARY 0 +#endif + +/* +** The DJGPP compiler environment looks mostly like Unix, but it +** lacks the fcntl() system call. So redefine fcntl() to be something +** that always succeeds. This means that locking does not occur under +** DJGPP. But it is DOS - what did you expect? +*/ +#ifdef __DJGPP__ +# define fcntl(A,B,C) 0 +#endif + +/* +** The threadid macro resolves to the thread-id or to 0. Used for +** testing and debugging only. +*/ +#if SQLITE_THREADSAFE +#define threadid pthread_self() +#else +#define threadid 0 +#endif + +/* +** Set or check the unixFile.tid field. This field is set when an unixFile +** is first opened. All subsequent uses of the unixFile verify that the +** same thread is operating on the unixFile. Some operating systems do +** not allow locks to be overridden by other threads and that restriction +** means that sqlite3* database handles cannot be moved from one thread +** to another. This logic makes sure a user does not try to do that +** by mistake. +** +** Version 3.3.1 (2006-01-15): unixFile can be moved from one thread to +** another as long as we are running on a system that supports threads +** overriding each others locks (which now the most common behavior) +** or if no locks are held. But the unixFile.pLock field needs to be +** recomputed because its key includes the thread-id. 
See the +** transferOwnership() function below for additional information +*/ +#if SQLITE_THREADSAFE +# define SET_THREADID(X) (X)->tid = pthread_self() +# define CHECK_THREADID(X) (threadsOverrideEachOthersLocks==0 && \ + !pthread_equal((X)->tid, pthread_self())) +#else +# define SET_THREADID(X) +# define CHECK_THREADID(X) 0 +#endif + +/* +** Here is the dirt on POSIX advisory locks: ANSI STD 1003.1 (1996) +** section 6.5.2.2 lines 483 through 490 specify that when a process +** sets or clears a lock, that operation overrides any prior locks set +** by the same process. It does not explicitly say so, but this implies +** that it overrides locks set by the same process using a different +** file descriptor. Consider this test case: +** +** int fd1 = open("./file1", O_RDWR|O_CREAT, 0644); +** int fd2 = open("./file2", O_RDWR|O_CREAT, 0644); +** +** Suppose ./file1 and ./file2 are really the same file (because +** one is a hard or symbolic link to the other) then if you set +** an exclusive lock on fd1, then try to get an exclusive lock +** on fd2, it works. I would have expected the second lock to +** fail since there was already a lock on the file due to fd1. +** But not so. Since both locks came from the same process, the +** second overrides the first, even though they were on different +** file descriptors opened on different file names. +** +** Bummer. If you ask me, this is broken. Badly broken. It means +** that we cannot use POSIX locks to synchronize file access among +** competing threads of the same process. POSIX locks will work fine +** to synchronize access for threads in separate processes, but not +** threads within the same process. +** +** To work around the problem, SQLite has to manage file locks internally +** on its own. Whenever a new database is opened, we have to find the +** specific inode of the database file (the inode is determined by the +** st_dev and st_ino fields of the stat structure that fstat() fills in) +** and check for locks already existing on that inode. When locks are +** created or removed, we have to look at our own internal record of the +** locks to see if another thread has previously set a lock on that same +** inode. +** +** The sqlite3_file structure for POSIX is no longer just an integer file +** descriptor. It is now a structure that holds the integer file +** descriptor and a pointer to a structure that describes the internal +** locks on the corresponding inode. There is one locking structure +** per inode, so if the same inode is opened twice, both unixFile structures +** point to the same locking structure. The locking structure keeps +** a reference count (so we will know when to delete it) and a "cnt" +** field that tells us its internal lock status. cnt==0 means the +** file is unlocked. cnt==-1 means the file has an exclusive lock. +** cnt>0 means there are cnt shared locks on the file. +** +** Any attempt to lock or unlock a file first checks the locking +** structure. The fcntl() system call is only invoked to set a +** POSIX lock if the internal lock structure transitions between +** a locked and an unlocked state. +** +** 2004-Jan-11: +** More recent discoveries about POSIX advisory locks. (The more +** I discover, the more I realize the a POSIX advisory locks are +** an abomination.) +** +** If you close a file descriptor that points to a file that has locks, +** all locks on that file that are owned by the current process are +** released. 
To work around this problem, each unixFile structure contains +** a pointer to an openCnt structure. There is one openCnt structure +** per open inode, which means that multiple unixFile can point to a single +** openCnt. When an attempt is made to close an unixFile, if there are +** other unixFile open on the same inode that are holding locks, the call +** to close() the file descriptor is deferred until all of the locks clear. +** The openCnt structure keeps a list of file descriptors that need to +** be closed and that list is walked (and cleared) when the last lock +** clears. +** +** First, under Linux threads, because each thread has a separate +** process ID, lock operations in one thread do not override locks +** to the same file in other threads. Linux threads behave like +** separate processes in this respect. But, if you close a file +** descriptor in linux threads, all locks are cleared, even locks +** on other threads and even though the other threads have different +** process IDs. Linux threads is inconsistent in this respect. +** (I'm beginning to think that linux threads is an abomination too.) +** The consequence of this all is that the hash table for the lockInfo +** structure has to include the process id as part of its key because +** locks in different threads are treated as distinct. But the +** openCnt structure should not include the process id in its +** key because close() clears lock on all threads, not just the current +** thread. Were it not for this goofiness in linux threads, we could +** combine the lockInfo and openCnt structures into a single structure. +** +** 2004-Jun-28: +** On some versions of linux, threads can override each others locks. +** On others not. Sometimes you can change the behavior on the same +** system by setting the LD_ASSUME_KERNEL environment variable. The +** POSIX standard is silent as to which behavior is correct, as far +** as I can tell, so other versions of unix might show the same +** inconsistency. There is no little doubt in my mind that posix +** advisory locks and linux threads are profoundly broken. +** +** To work around the inconsistencies, we have to test at runtime +** whether or not threads can override each others locks. This test +** is run once, the first time any lock is attempted. A static +** variable is set to record the results of this test for future +** use. +*/ + +/* +** An instance of the following structure serves as the key used +** to locate a particular lockInfo structure given its inode. +** +** If threads cannot override each others locks, then we set the +** lockKey.tid field to the thread ID. If threads can override +** each others locks then tid is always set to zero. tid is omitted +** if we compile without threading support. +*/ +struct lockKey { + dev_t dev; /* Device number */ + ino_t ino; /* Inode number */ +#if SQLITE_THREADSAFE + pthread_t tid; /* Thread ID or zero if threads can override each other */ +#endif +}; + +/* +** An instance of the following structure is allocated for each open +** inode on each thread with a different process ID. (Threads have +** different process IDs on linux, but not on most other unixes.) +** +** A single inode can have multiple file descriptors, so each unixFile +** structure contains a pointer to an instance of this object and this +** object keeps a count of the number of unixFile pointing to it. 
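
[Editorial illustration] The lockInfo and openCnt structures described above are keyed by the inode, i.e. by the (st_dev, st_ino) pair returned by fstat(). A minimal sketch of how such a key is built follows; the struct and function names are hypothetical stand-ins for lockKey/openKey, not the names used in the file.

    /* Illustrative only: derive a per-inode key from an open descriptor.
    ** Two descriptors for the same file (dup(), a second open(), or a
    ** hard link) produce equal keys.
    */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct inodeKey {            /* hypothetical stand-in for lockKey/openKey */
      dev_t dev;
      ino_t ino;
    };

    static int makeInodeKey(int fd, struct inodeKey *pKey){
      struct stat st;
      if( fstat(fd, &st) ) return -1;
      memset(pKey, 0, sizeof(*pKey));   /* zero padding so memcmp() is safe */
      pKey->dev = st.st_dev;
      pKey->ino = st.st_ino;
      return 0;
    }

    int main(void){
      struct inodeKey a, b;
      int fd1 = open("demo.db", O_RDWR|O_CREAT, 0644);
      int fd2 = dup(fd1);
      if( fd1<0 || fd2<0 ){ perror("open"); return 1; }
      makeInodeKey(fd1, &a);
      makeInodeKey(fd2, &b);
      printf("same inode key: %d\n", memcmp(&a, &b, sizeof(a))==0);
      close(fd1); close(fd2);
      return 0;
    }
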
+*/ +struct lockInfo { + struct lockKey key; /* The lookup key */ + int cnt; /* Number of SHARED locks held */ + int locktype; /* One of SHARED_LOCK, RESERVED_LOCK etc. */ + int nRef; /* Number of pointers to this structure */ +}; + +/* +** An instance of the following structure serves as the key used +** to locate a particular openCnt structure given its inode. This +** is the same as the lockKey except that the thread ID is omitted. +*/ +struct openKey { + dev_t dev; /* Device number */ + ino_t ino; /* Inode number */ +}; + +/* +** An instance of the following structure is allocated for each open +** inode. This structure keeps track of the number of locks on that +** inode. If a close is attempted against an inode that is holding +** locks, the close is deferred until all locks clear by adding the +** file descriptor to be closed to the pending list. +*/ +struct openCnt { + struct openKey key; /* The lookup key */ + int nRef; /* Number of pointers to this structure */ + int nLock; /* Number of outstanding locks */ + int nPending; /* Number of pending close() operations */ + int *aPending; /* Malloced space holding fd's awaiting a close() */ +}; + +/* +** These hash tables map inodes and file descriptors (really, lockKey and +** openKey structures) into lockInfo and openCnt structures. Access to +** these hash tables must be protected by a mutex. +*/ +static Hash lockHash = {SQLITE_HASH_BINARY, 0, 0, 0, 0, 0}; +static Hash openHash = {SQLITE_HASH_BINARY, 0, 0, 0, 0, 0}; + +#ifdef SQLITE_ENABLE_LOCKING_STYLE +/* +** The locking styles are associated with the different file locking +** capabilities supported by different file systems. +** +** POSIX locking style fully supports shared and exclusive byte-range locks +** ADP locking only supports exclusive byte-range locks +** FLOCK only supports a single file-global exclusive lock +** DOTLOCK isn't a true locking style, it refers to the use of a special +** file named the same as the database file with a '.lock' extension, this +** can be used on file systems that do not offer any reliable file locking +** NO locking means that no locking will be attempted, this is only used for +** read-only file systems currently +** UNSUPPORTED means that no locking will be attempted, this is only used for +** file systems that are known to be unsupported +*/ +typedef enum { + posixLockingStyle = 0, /* standard posix-advisory locks */ + afpLockingStyle, /* use afp locks */ + flockLockingStyle, /* use flock() */ + dotlockLockingStyle, /* use .lock files */ + noLockingStyle, /* useful for read-only file system */ + unsupportedLockingStyle /* indicates unsupported file system */ +} sqlite3LockingStyle; +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + +/* +** Helper functions to obtain and relinquish the global mutex. +*/ +static void enterMutex(){ + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)); +} +static void leaveMutex(){ + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER)); +} + +#if SQLITE_THREADSAFE +/* +** This variable records whether or not threads can override each others +** locks. +** +** 0: No. Threads cannot override each others locks. +** 1: Yes. Threads can override each others locks. +** -1: We don't know yet. +** +** On some systems, we know at compile-time if threads can override each +** others locks. On those systems, the SQLITE_THREAD_OVERRIDE_LOCK macro +** will be set appropriately. On other systems, we have to check at +** runtime. 
On these latter systems, SQLTIE_THREAD_OVERRIDE_LOCK is +** undefined. +** +** This variable normally has file scope only. But during testing, we make +** it a global so that the test code can change its value in order to verify +** that the right stuff happens in either case. +*/ +#ifndef SQLITE_THREAD_OVERRIDE_LOCK +# define SQLITE_THREAD_OVERRIDE_LOCK -1 +#endif +#ifdef SQLITE_TEST +int threadsOverrideEachOthersLocks = SQLITE_THREAD_OVERRIDE_LOCK; +#else +static int threadsOverrideEachOthersLocks = SQLITE_THREAD_OVERRIDE_LOCK; +#endif + +/* +** This structure holds information passed into individual test +** threads by the testThreadLockingBehavior() routine. +*/ +struct threadTestData { + int fd; /* File to be locked */ + struct flock lock; /* The locking operation */ + int result; /* Result of the locking operation */ +}; + +#ifdef SQLITE_LOCK_TRACE +/* +** Print out information about all locking operations. +** +** This routine is used for troubleshooting locks on multithreaded +** platforms. Enable by compiling with the -DSQLITE_LOCK_TRACE +** command-line option on the compiler. This code is normally +** turned off. +*/ +static int lockTrace(int fd, int op, struct flock *p){ + char *zOpName, *zType; + int s; + int savedErrno; + if( op==F_GETLK ){ + zOpName = "GETLK"; + }else if( op==F_SETLK ){ + zOpName = "SETLK"; + }else{ + s = fcntl(fd, op, p); + sqlite3DebugPrintf("fcntl unknown %d %d %d\n", fd, op, s); + return s; + } + if( p->l_type==F_RDLCK ){ + zType = "RDLCK"; + }else if( p->l_type==F_WRLCK ){ + zType = "WRLCK"; + }else if( p->l_type==F_UNLCK ){ + zType = "UNLCK"; + }else{ + assert( 0 ); + } + assert( p->l_whence==SEEK_SET ); + s = fcntl(fd, op, p); + savedErrno = errno; + sqlite3DebugPrintf("fcntl %d %d %s %s %d %d %d %d\n", + threadid, fd, zOpName, zType, (int)p->l_start, (int)p->l_len, + (int)p->l_pid, s); + if( s==(-1) && op==F_SETLK && (p->l_type==F_RDLCK || p->l_type==F_WRLCK) ){ + struct flock l2; + l2 = *p; + fcntl(fd, F_GETLK, &l2); + if( l2.l_type==F_RDLCK ){ + zType = "RDLCK"; + }else if( l2.l_type==F_WRLCK ){ + zType = "WRLCK"; + }else if( l2.l_type==F_UNLCK ){ + zType = "UNLCK"; + }else{ + assert( 0 ); + } + sqlite3DebugPrintf("fcntl-failure-reason: %s %d %d %d\n", + zType, (int)l2.l_start, (int)l2.l_len, (int)l2.l_pid); + } + errno = savedErrno; + return s; +} +#define fcntl lockTrace +#endif /* SQLITE_LOCK_TRACE */ + +/* +** The testThreadLockingBehavior() routine launches two separate +** threads on this routine. This routine attempts to lock a file +** descriptor then returns. The success or failure of that attempt +** allows the testThreadLockingBehavior() procedure to determine +** whether or not threads can override each others locks. +*/ +static void *threadLockingTest(void *pArg){ + struct threadTestData *pData = (struct threadTestData*)pArg; + pData->result = fcntl(pData->fd, F_SETLK, &pData->lock); + return pArg; +} + +/* +** This procedure attempts to determine whether or not threads +** can override each others locks then sets the +** threadsOverrideEachOthersLocks variable appropriately. 
+*/ +static void testThreadLockingBehavior(int fd_orig){ + int fd; + struct threadTestData d[2]; + pthread_t t[2]; + + fd = dup(fd_orig); + if( fd<0 ) return; + memset(d, 0, sizeof(d)); + d[0].fd = fd; + d[0].lock.l_type = F_RDLCK; + d[0].lock.l_len = 1; + d[0].lock.l_start = 0; + d[0].lock.l_whence = SEEK_SET; + d[1] = d[0]; + d[1].lock.l_type = F_WRLCK; + pthread_create(&t[0], 0, threadLockingTest, &d[0]); + pthread_create(&t[1], 0, threadLockingTest, &d[1]); + pthread_join(t[0], 0); + pthread_join(t[1], 0); + close(fd); + threadsOverrideEachOthersLocks = d[0].result==0 && d[1].result==0; +} +#endif /* SQLITE_THREADSAFE */ + +/* +** Release a lockInfo structure previously allocated by findLockInfo(). +*/ +static void releaseLockInfo(struct lockInfo *pLock){ + if (pLock == NULL) + return; + pLock->nRef--; + if( pLock->nRef==0 ){ + sqlite3HashInsert(&lockHash, &pLock->key, sizeof(pLock->key), 0); + sqlite3_free(pLock); + } +} + +/* +** Release a openCnt structure previously allocated by findLockInfo(). +*/ +static void releaseOpenCnt(struct openCnt *pOpen){ + if (pOpen == NULL) + return; + pOpen->nRef--; + if( pOpen->nRef==0 ){ + sqlite3HashInsert(&openHash, &pOpen->key, sizeof(pOpen->key), 0); + free(pOpen->aPending); + sqlite3_free(pOpen); + } +} + +#ifdef SQLITE_ENABLE_LOCKING_STYLE +/* +** Tests a byte-range locking query to see if byte range locks are +** supported, if not we fall back to dotlockLockingStyle. +*/ +static sqlite3LockingStyle sqlite3TestLockingStyle( + const char *filePath, + int fd +){ + /* test byte-range lock using fcntl */ + struct flock lockInfo; + + lockInfo.l_len = 1; + lockInfo.l_start = 0; + lockInfo.l_whence = SEEK_SET; + lockInfo.l_type = F_RDLCK; + + if( fcntl(fd, F_GETLK, &lockInfo)!=-1 ) { + return posixLockingStyle; + } + + /* testing for flock can give false positives. So if if the above test + ** fails, then we fall back to using dot-lock style locking. + */ + return dotlockLockingStyle; +} + +/* +** Examines the f_fstypename entry in the statfs structure as returned by +** stat() for the file system hosting the database file, assigns the +** appropriate locking style based on its value. These values and +** assignments are based on Darwin/OSX behavior and have not been tested on +** other systems. +*/ +static sqlite3LockingStyle sqlite3DetectLockingStyle( + const char *filePath, + int fd +){ + +#ifdef SQLITE_FIXED_LOCKING_STYLE + return (sqlite3LockingStyle)SQLITE_FIXED_LOCKING_STYLE; +#else + struct statfs fsInfo; + + if( statfs(filePath, &fsInfo) == -1 ){ + return sqlite3TestLockingStyle(filePath, fd); + } + if( fsInfo.f_flags & MNT_RDONLY ){ + return noLockingStyle; + } + if( strcmp(fsInfo.f_fstypename, "hfs")==0 || + strcmp(fsInfo.f_fstypename, "ufs")==0 ){ + return posixLockingStyle; + } + if( strcmp(fsInfo.f_fstypename, "afpfs")==0 ){ + return afpLockingStyle; + } + if( strcmp(fsInfo.f_fstypename, "nfs")==0 ){ + return sqlite3TestLockingStyle(filePath, fd); + } + if( strcmp(fsInfo.f_fstypename, "smbfs")==0 ){ + return flockLockingStyle; + } + if( strcmp(fsInfo.f_fstypename, "msdos")==0 ){ + return dotlockLockingStyle; + } + if( strcmp(fsInfo.f_fstypename, "webdav")==0 ){ + return unsupportedLockingStyle; + } + return sqlite3TestLockingStyle(filePath, fd); +#endif /* SQLITE_FIXED_LOCKING_STYLE */ +} + +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + +/* +** Given a file descriptor, locate lockInfo and openCnt structures that +** describes that file descriptor. Create new ones if necessary. 
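
[Editorial illustration] sqlite3DetectLockingStyle() above keys its decision off the f_fstypename field filled in by statfs(), which the comment notes is Darwin/OS X specific behaviour. A minimal Darwin-only probe of the same two pieces of information (file system type name and the read-only mount flag) is sketched below; it is not part of the SQLite sources and will not build unchanged on systems whose struct statfs lacks f_fstypename.

    /* Darwin/OS X only: print the fields the locking-style detection
    ** above inspects.  Illustrative sketch.
    */
    #include <sys/param.h>
    #include <sys/mount.h>
    #include <stdio.h>

    int main(int argc, char **argv){
      struct statfs fsInfo;
      const char *zPath = argc>1 ? argv[1] : ".";
      if( statfs(zPath, &fsInfo)==-1 ){
        perror("statfs");
        return 1;
      }
      printf("%s: fstype=%s readonly=%d\n", zPath, fsInfo.f_fstypename,
             (fsInfo.f_flags & MNT_RDONLY)!=0);
      return 0;
    }
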
The +** return values might be uninitialized if an error occurs. +** +** Return the number of errors. +*/ +static int findLockInfo( + int fd, /* The file descriptor used in the key */ + struct lockInfo **ppLock, /* Return the lockInfo structure here */ + struct openCnt **ppOpen /* Return the openCnt structure here */ +){ + int rc; + struct lockKey key1; + struct openKey key2; + struct stat statbuf; + struct lockInfo *pLock; + struct openCnt *pOpen; + rc = fstat(fd, &statbuf); + if( rc!=0 ) return 1; + + memset(&key1, 0, sizeof(key1)); + key1.dev = statbuf.st_dev; + key1.ino = statbuf.st_ino; +#if SQLITE_THREADSAFE + if( threadsOverrideEachOthersLocks<0 ){ + testThreadLockingBehavior(fd); + } + key1.tid = threadsOverrideEachOthersLocks ? 0 : pthread_self(); +#endif + memset(&key2, 0, sizeof(key2)); + key2.dev = statbuf.st_dev; + key2.ino = statbuf.st_ino; + pLock = (struct lockInfo*)sqlite3HashFind(&lockHash, &key1, sizeof(key1)); + if( pLock==0 ){ + struct lockInfo *pOld; + pLock = sqlite3_malloc( sizeof(*pLock) ); + if( pLock==0 ){ + rc = 1; + goto exit_findlockinfo; + } + pLock->key = key1; + pLock->nRef = 1; + pLock->cnt = 0; + pLock->locktype = 0; + pOld = sqlite3HashInsert(&lockHash, &pLock->key, sizeof(key1), pLock); + if( pOld!=0 ){ + assert( pOld==pLock ); + sqlite3_free(pLock); + rc = 1; + goto exit_findlockinfo; + } + }else{ + pLock->nRef++; + } + *ppLock = pLock; + if( ppOpen!=0 ){ + pOpen = (struct openCnt*)sqlite3HashFind(&openHash, &key2, sizeof(key2)); + if( pOpen==0 ){ + struct openCnt *pOld; + pOpen = sqlite3_malloc( sizeof(*pOpen) ); + if( pOpen==0 ){ + releaseLockInfo(pLock); + rc = 1; + goto exit_findlockinfo; + } + pOpen->key = key2; + pOpen->nRef = 1; + pOpen->nLock = 0; + pOpen->nPending = 0; + pOpen->aPending = 0; + pOld = sqlite3HashInsert(&openHash, &pOpen->key, sizeof(key2), pOpen); + if( pOld!=0 ){ + assert( pOld==pOpen ); + sqlite3_free(pOpen); + releaseLockInfo(pLock); + rc = 1; + goto exit_findlockinfo; + } + }else{ + pOpen->nRef++; + } + *ppOpen = pOpen; + } + +exit_findlockinfo: + return rc; +} + +#ifdef SQLITE_DEBUG +/* +** Helper function for printing out trace information from debugging +** binaries. This returns the string represetation of the supplied +** integer lock-type. +*/ +static const char *locktypeName(int locktype){ + switch( locktype ){ + case NO_LOCK: return "NONE"; + case SHARED_LOCK: return "SHARED"; + case RESERVED_LOCK: return "RESERVED"; + case PENDING_LOCK: return "PENDING"; + case EXCLUSIVE_LOCK: return "EXCLUSIVE"; + } + return "ERROR"; +} +#endif + +/* +** If we are currently in a different thread than the thread that the +** unixFile argument belongs to, then transfer ownership of the unixFile +** over to the current thread. +** +** A unixFile is only owned by a thread on systems where one thread is +** unable to override locks created by a different thread. RedHat9 is +** an example of such a system. +** +** Ownership transfer is only allowed if the unixFile is currently unlocked. +** If the unixFile is locked and an ownership is wrong, then return +** SQLITE_MISUSE. SQLITE_OK is returned if everything works. 
+*/ +#if SQLITE_THREADSAFE +static int transferOwnership(unixFile *pFile){ + int rc; + pthread_t hSelf; + if( threadsOverrideEachOthersLocks ){ + /* Ownership transfers not needed on this system */ + return SQLITE_OK; + } + hSelf = pthread_self(); + if( pthread_equal(pFile->tid, hSelf) ){ + /* We are still in the same thread */ + OSTRACE1("No-transfer, same thread\n"); + return SQLITE_OK; + } + if( pFile->locktype!=NO_LOCK ){ + /* We cannot change ownership while we are holding a lock! */ + return SQLITE_MISUSE; + } + OSTRACE4("Transfer ownership of %d from %d to %d\n", + pFile->h, pFile->tid, hSelf); + pFile->tid = hSelf; + if (pFile->pLock != NULL) { + releaseLockInfo(pFile->pLock); + rc = findLockInfo(pFile->h, &pFile->pLock, 0); + OSTRACE5("LOCK %d is now %s(%s,%d)\n", pFile->h, + locktypeName(pFile->locktype), + locktypeName(pFile->pLock->locktype), pFile->pLock->cnt); + return rc; + } else { + return SQLITE_OK; + } +} +#else + /* On single-threaded builds, ownership transfer is a no-op */ +# define transferOwnership(X) SQLITE_OK +#endif + +/* +** Seek to the offset passed as the second argument, then read cnt +** bytes into pBuf. Return the number of bytes actually read. +** +** NB: If you define USE_PREAD or USE_PREAD64, then it might also +** be necessary to define _XOPEN_SOURCE to be 500. This varies from +** one system to another. Since SQLite does not define USE_PREAD +** any any form by default, we will not attempt to define _XOPEN_SOURCE. +** See tickets #2741 and #2681. +*/ +static int seekAndRead(unixFile *id, sqlite3_int64 offset, void *pBuf, int cnt){ + int got; + i64 newOffset; + TIMER_START; +#if defined(USE_PREAD) + got = pread(id->h, pBuf, cnt, offset); + SimulateIOError( got = -1 ); +#elif defined(USE_PREAD64) + got = pread64(id->h, pBuf, cnt, offset); + SimulateIOError( got = -1 ); +#else + newOffset = lseek(id->h, offset, SEEK_SET); + SimulateIOError( newOffset-- ); + if( newOffset!=offset ){ + return -1; + } + got = read(id->h, pBuf, cnt); +#endif + TIMER_END; + OSTRACE5("READ %-3d %5d %7lld %d\n", id->h, got, offset, TIMER_ELAPSED); + return got; +} + +/* +** Read data from a file into a buffer. Return SQLITE_OK if all +** bytes were read successfully and SQLITE_IOERR if anything goes +** wrong. +*/ +static int unixRead( + sqlite3_file *id, + void *pBuf, + int amt, + sqlite3_int64 offset +){ + int got; + assert( id ); + got = seekAndRead((unixFile*)id, offset, pBuf, amt); + if( got==amt ){ + return SQLITE_OK; + }else if( got<0 ){ + return SQLITE_IOERR_READ; + }else{ + memset(&((char*)pBuf)[got], 0, amt-got); + return SQLITE_IOERR_SHORT_READ; + } +} + +/* +** Seek to the offset in id->offset then read cnt bytes into pBuf. +** Return the number of bytes actually read. Update the offset. +*/ +static int seekAndWrite(unixFile *id, i64 offset, const void *pBuf, int cnt){ + int got; + i64 newOffset; + TIMER_START; +#if defined(USE_PREAD) + got = pwrite(id->h, pBuf, cnt, offset); +#elif defined(USE_PREAD64) + got = pwrite64(id->h, pBuf, cnt, offset); +#else + newOffset = lseek(id->h, offset, SEEK_SET); + if( newOffset!=offset ){ + return -1; + } + got = write(id->h, pBuf, cnt); +#endif + TIMER_END; + OSTRACE5("WRITE %-3d %5d %7lld %d\n", id->h, got, offset, TIMER_ELAPSED); + return got; +} + + +/* +** Write data from a buffer into a file. Return SQLITE_OK on success +** or some other error code on failure. 
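
[Editorial illustration] The USE_PREAD/USE_PREAD64 branches of seekAndRead() and seekAndWrite() above rely on the fact that pread() reads at an absolute offset without disturbing the descriptor's file offset, which is why those branches can skip the lseek(). A small sketch of that difference follows; as the comment notes, some systems require _XOPEN_SOURCE to be defined for pread() to be declared, and the file name here is arbitrary.

    /* Demo: pread() leaves the file offset alone, read() advances it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void){
      char buf[4];
      int fd = open("demo.bin", O_RDWR|O_CREAT|O_TRUNC, 0644);
      if( fd<0 ){ perror("open"); return 1; }
      write(fd, "abcdefgh", 8);
      lseek(fd, 0, SEEK_SET);

      pread(fd, buf, 4, 4);                 /* reads "efgh" at offset 4 */
      printf("offset after pread: %lld\n",
             (long long)lseek(fd, 0, SEEK_CUR));   /* still 0 */

      read(fd, buf, 4);                     /* reads "abcd", moves offset */
      printf("offset after read:  %lld\n",
             (long long)lseek(fd, 0, SEEK_CUR));   /* now 4 */

      close(fd);
      return 0;
    }
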
+*/ +static int unixWrite( + sqlite3_file *id, + const void *pBuf, + int amt, + sqlite3_int64 offset +){ + int wrote = 0; + assert( id ); + assert( amt>0 ); + while( amt>0 && (wrote = seekAndWrite((unixFile*)id, offset, pBuf, amt))>0 ){ + amt -= wrote; + offset += wrote; + pBuf = &((char*)pBuf)[wrote]; + } + SimulateIOError(( wrote=(-1), amt=1 )); + SimulateDiskfullError(( wrote=0, amt=1 )); + if( amt>0 ){ + if( wrote<0 ){ + return SQLITE_IOERR_WRITE; + }else{ + return SQLITE_FULL; + } + } + return SQLITE_OK; +} + +#ifdef SQLITE_TEST +/* +** Count the number of fullsyncs and normal syncs. This is used to test +** that syncs and fullsyncs are occuring at the right times. +*/ +int sqlite3_sync_count = 0; +int sqlite3_fullsync_count = 0; +#endif + +/* +** Use the fdatasync() API only if the HAVE_FDATASYNC macro is defined. +** Otherwise use fsync() in its place. +*/ +#ifndef HAVE_FDATASYNC +# define fdatasync fsync +#endif + +/* +** Define HAVE_FULLFSYNC to 0 or 1 depending on whether or not +** the F_FULLFSYNC macro is defined. F_FULLFSYNC is currently +** only available on Mac OS X. But that could change. +*/ +#ifdef F_FULLFSYNC +# define HAVE_FULLFSYNC 1 +#else +# define HAVE_FULLFSYNC 0 +#endif + + +/* +** The fsync() system call does not work as advertised on many +** unix systems. The following procedure is an attempt to make +** it work better. +** +** The SQLITE_NO_SYNC macro disables all fsync()s. This is useful +** for testing when we want to run through the test suite quickly. +** You are strongly advised *not* to deploy with SQLITE_NO_SYNC +** enabled, however, since with SQLITE_NO_SYNC enabled, an OS crash +** or power failure will likely corrupt the database file. +*/ +static int full_fsync(int fd, int fullSync, int dataOnly){ + int rc; + + /* Record the number of times that we do a normal fsync() and + ** FULLSYNC. This is used during testing to verify that this procedure + ** gets called with the correct arguments. + */ +#ifdef SQLITE_TEST + if( fullSync ) sqlite3_fullsync_count++; + sqlite3_sync_count++; +#endif + + /* If we compiled with the SQLITE_NO_SYNC flag, then syncing is a + ** no-op + */ +#ifdef SQLITE_NO_SYNC + rc = SQLITE_OK; +#else + +#if HAVE_FULLFSYNC + if( fullSync ){ + rc = fcntl(fd, F_FULLFSYNC, 0); + }else{ + rc = 1; + } + /* If the FULLFSYNC failed, fall back to attempting an fsync(). + * It shouldn't be possible for fullfsync to fail on the local + * file system (on OSX), so failure indicates that FULLFSYNC + * isn't supported for this file system. So, attempt an fsync + * and (for now) ignore the overhead of a superfluous fcntl call. + * It'd be better to detect fullfsync support once and avoid + * the fcntl call every time sync is called. + */ + if( rc ) rc = fsync(fd); + +#else + if( dataOnly ){ + rc = fdatasync(fd); + }else{ + rc = fsync(fd); + } +#endif /* HAVE_FULLFSYNC */ +#endif /* defined(SQLITE_NO_SYNC) */ + + return rc; +} + +/* +** Make sure all writes to a particular file are committed to disk. +** +** If dataOnly==0 then both the file itself and its metadata (file +** size, access time, etc) are synced. If dataOnly!=0 then only the +** file data is synced. +** +** Under Unix, also make sure that the directory entry for the file +** has been created by fsync-ing the directory that contains the file. +** If we do not do this and we encounter a power failure, the directory +** entry for the journal might not exist after we reboot. 
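
[Editorial illustration] The "fsync the directory too" requirement explained in the comment above can be shown in isolation: after creating and syncing a new file, the containing directory is opened read-only and fsync()ed so that the new directory entry itself is durable. This is a sketch with placeholder paths, not the SQLite code; as noted further down, a failed fsync() on a directory is tolerated on some file systems.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void){
      int fd = open("/tmp/demo-journal", O_RDWR|O_CREAT|O_TRUNC, 0644);
      int dirfd;
      if( fd<0 ){ perror("open file"); return 1; }
      write(fd, "hello", 5);
      if( fsync(fd) ) perror("fsync file");     /* flush the file contents */
      close(fd);

      dirfd = open("/tmp", O_RDONLY);           /* now sync the directory */
      if( dirfd<0 ){ perror("open dir"); return 1; }
      if( fsync(dirfd) ) perror("fsync dir");   /* may fail on some file systems */
      close(dirfd);
      return 0;
    }
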
The next +** SQLite to access the file will not know that the journal exists (because +** the directory entry for the journal was never created) and the transaction +** will not roll back - possibly leading to database corruption. +*/ +static int unixSync(sqlite3_file *id, int flags){ + int rc; + unixFile *pFile = (unixFile*)id; + + int isDataOnly = (flags&SQLITE_SYNC_DATAONLY); + int isFullsync = (flags&0x0F)==SQLITE_SYNC_FULL; + + /* Check that one of SQLITE_SYNC_NORMAL or FULL was passed */ + assert((flags&0x0F)==SQLITE_SYNC_NORMAL + || (flags&0x0F)==SQLITE_SYNC_FULL + ); + + assert( pFile ); + OSTRACE2("SYNC %-3d\n", pFile->h); + rc = full_fsync(pFile->h, isFullsync, isDataOnly); + SimulateIOError( rc=1 ); + if( rc ){ + return SQLITE_IOERR_FSYNC; + } + if( pFile->dirfd>=0 ){ + OSTRACE4("DIRSYNC %-3d (have_fullfsync=%d fullsync=%d)\n", pFile->dirfd, + HAVE_FULLFSYNC, isFullsync); +#ifndef SQLITE_DISABLE_DIRSYNC + /* The directory sync is only attempted if full_fsync is + ** turned off or unavailable. If a full_fsync occurred above, + ** then the directory sync is superfluous. + */ + if( (!HAVE_FULLFSYNC || !isFullsync) && full_fsync(pFile->dirfd,0,0) ){ + /* + ** We have received multiple reports of fsync() returning + ** errors when applied to directories on certain file systems. + ** A failed directory sync is not a big deal. So it seems + ** better to ignore the error. Ticket #1657 + */ + /* return SQLITE_IOERR; */ + } +#endif + close(pFile->dirfd); /* Only need to sync once, so close the directory */ + pFile->dirfd = -1; /* when we are done. */ + } + return SQLITE_OK; +} + +/* +** Truncate an open file to a specified size +*/ +static int unixTruncate(sqlite3_file *id, i64 nByte){ + int rc; + assert( id ); + SimulateIOError( return SQLITE_IOERR_TRUNCATE ); + rc = ftruncate(((unixFile*)id)->h, (off_t)nByte); + if( rc ){ + return SQLITE_IOERR_TRUNCATE; + }else{ + return SQLITE_OK; + } +} + +/* +** Determine the current size of a file in bytes +*/ +static int unixFileSize(sqlite3_file *id, i64 *pSize){ + int rc; + struct stat buf; + assert( id ); + rc = fstat(((unixFile*)id)->h, &buf); + SimulateIOError( rc=1 ); + if( rc!=0 ){ + return SQLITE_IOERR_FSTAT; + } + *pSize = buf.st_size; + return SQLITE_OK; +} + +/* +** This routine checks if there is a RESERVED lock held on the specified +** file by this or any other process. If such a lock is held, return +** non-zero. If the file is unlocked or holds only SHARED locks, then +** return zero. +*/ +static int unixCheckReservedLock(sqlite3_file *id){ + int r = 0; + unixFile *pFile = (unixFile*)id; + + assert( pFile ); + enterMutex(); /* Because pFile->pLock is shared across threads */ + + /* Check if a thread in this process holds such a lock */ + if( pFile->pLock->locktype>SHARED_LOCK ){ + r = 1; + } + + /* Otherwise see if some other process holds it. + */ + if( !r ){ + struct flock lock; + lock.l_whence = SEEK_SET; + lock.l_start = RESERVED_BYTE; + lock.l_len = 1; + lock.l_type = F_WRLCK; + fcntl(pFile->h, F_GETLK, &lock); + if( lock.l_type!=F_UNLCK ){ + r = 1; + } + } + + leaveMutex(); + OSTRACE3("TEST WR-LOCK %d %d\n", pFile->h, r); + + return r; +} + +/* +** Lock the file with the lock specified by parameter locktype - one +** of the following: +** +** (1) SHARED_LOCK +** (2) RESERVED_LOCK +** (3) PENDING_LOCK +** (4) EXCLUSIVE_LOCK +** +** Sometimes when requesting one lock state, additional lock states +** are inserted in between. 
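
[Editorial illustration] unixCheckReservedLock() above uses fcntl(F_GETLK) to ask whether some other process could block a write lock on the 'reserved byte'. The same query in isolation looks like this; the offset 100 is only a placeholder, the real code uses RESERVED_BYTE from os.h.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Return 1 if another process holds a write lock covering iByte. */
    static int otherProcessHoldsWriteLock(int fd, off_t iByte){
      struct flock lk;
      lk.l_whence = SEEK_SET;
      lk.l_start = iByte;
      lk.l_len = 1;
      lk.l_type = F_WRLCK;            /* "could I take a write lock here?" */
      if( fcntl(fd, F_GETLK, &lk)==-1 ) return 0;  /* query failed: assume no */
      return lk.l_type!=F_UNLCK;      /* F_UNLCK means nothing blocks us */
    }

    int main(void){
      int fd = open("demo.db", O_RDWR|O_CREAT, 0644);
      if( fd<0 ){ perror("open"); return 1; }
      printf("byte locked by another process: %d\n",
             otherProcessHoldsWriteLock(fd, 100));  /* placeholder offset */
      close(fd);
      return 0;
    }
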
The locking might fail on one of the later +** transitions leaving the lock state different from what it started but +** still short of its goal. The following chart shows the allowed +** transitions and the inserted intermediate states: +** +** UNLOCKED -> SHARED +** SHARED -> RESERVED +** SHARED -> (PENDING) -> EXCLUSIVE +** RESERVED -> (PENDING) -> EXCLUSIVE +** PENDING -> EXCLUSIVE +** +** This routine will only increase a lock. Use the sqlite3OsUnlock() +** routine to lower a locking level. +*/ +static int unixLock(sqlite3_file *id, int locktype){ + /* The following describes the implementation of the various locks and + ** lock transitions in terms of the POSIX advisory shared and exclusive + ** lock primitives (called read-locks and write-locks below, to avoid + ** confusion with SQLite lock names). The algorithms are complicated + ** slightly in order to be compatible with windows systems simultaneously + ** accessing the same database file, in case that is ever required. + ** + ** Symbols defined in os.h indentify the 'pending byte' and the 'reserved + ** byte', each single bytes at well known offsets, and the 'shared byte + ** range', a range of 510 bytes at a well known offset. + ** + ** To obtain a SHARED lock, a read-lock is obtained on the 'pending + ** byte'. If this is successful, a random byte from the 'shared byte + ** range' is read-locked and the lock on the 'pending byte' released. + ** + ** A process may only obtain a RESERVED lock after it has a SHARED lock. + ** A RESERVED lock is implemented by grabbing a write-lock on the + ** 'reserved byte'. + ** + ** A process may only obtain a PENDING lock after it has obtained a + ** SHARED lock. A PENDING lock is implemented by obtaining a write-lock + ** on the 'pending byte'. This ensures that no new SHARED locks can be + ** obtained, but existing SHARED locks are allowed to persist. A process + ** does not have to obtain a RESERVED lock on the way to a PENDING lock. + ** This property is used by the algorithm for rolling back a journal file + ** after a crash. + ** + ** An EXCLUSIVE lock, obtained after a PENDING lock is held, is + ** implemented by obtaining a write-lock on the entire 'shared byte + ** range'. Since all other locks require a read-lock on one of the bytes + ** within this range, this ensures that no other locks are held on the + ** database. + ** + ** The reason a single byte cannot be used instead of the 'shared byte + ** range' is that some versions of windows do not support read-locks. By + ** locking a random byte from a range, concurrent SHARED locks may exist + ** even if the locking primitive used is always a write-lock. + */ + int rc = SQLITE_OK; + unixFile *pFile = (unixFile*)id; + struct lockInfo *pLock = pFile->pLock; + struct flock lock; + int s; + + assert( pFile ); + OSTRACE7("LOCK %d %s was %s(%s,%d) pid=%d\n", pFile->h, + locktypeName(locktype), locktypeName(pFile->locktype), + locktypeName(pLock->locktype), pLock->cnt , getpid()); + + /* If there is already a lock of this type or more restrictive on the + ** unixFile, do nothing. Don't use the end_lock: exit path, as + ** enterMutex() hasn't been called yet. 
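
[Editorial illustration] The SHARED-lock sequence described in the long comment above (read-lock the 'pending byte', read-lock inside the 'shared byte range', release the pending byte) can be sketched on its own. The offsets below are placeholders for PENDING_BYTE, SHARED_FIRST and SHARED_SIZE from os.h; note that the comment describes locking a random shared byte for Windows compatibility, while the fcntl() implementation that follows read-locks the whole shared range, which POSIX read locks allow.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define DEMO_PENDING_BYTE  1000   /* placeholder for PENDING_BYTE */
    #define DEMO_SHARED_FIRST  1002   /* placeholder for SHARED_FIRST */
    #define DEMO_SHARED_SIZE   510    /* placeholder for SHARED_SIZE  */

    static int setLock(int fd, short type, off_t start, off_t len){
      struct flock lk;
      lk.l_type = type;
      lk.l_whence = SEEK_SET;
      lk.l_start = start;
      lk.l_len = len;
      return fcntl(fd, F_SETLK, &lk);
    }

    int main(void){
      int fd = open("demo.db", O_RDWR|O_CREAT, 0644);
      if( fd<0 ){ perror("open"); return 1; }

      /* 1. read-lock the pending byte (fails if a writer is pending) */
      if( setLock(fd, F_RDLCK, DEMO_PENDING_BYTE, 1) ){ perror("pending"); return 1; }

      /* 2. read-lock one byte inside the shared range */
      if( setLock(fd, F_RDLCK, DEMO_SHARED_FIRST + rand()%DEMO_SHARED_SIZE, 1) ){
        perror("shared"); return 1;
      }

      /* 3. drop the temporary pending-byte lock; the shared lock remains */
      setLock(fd, F_UNLCK, DEMO_PENDING_BYTE, 1);

      printf("SHARED acquired\n");
      close(fd);
      return 0;
    }
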
+ */ + if( pFile->locktype>=locktype ){ + OSTRACE3("LOCK %d %s ok (already held)\n", pFile->h, + locktypeName(locktype)); + return SQLITE_OK; + } + + /* Make sure the locking sequence is correct + */ + assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); + assert( locktype!=PENDING_LOCK ); + assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); + + /* This mutex is needed because pFile->pLock is shared across threads + */ + enterMutex(); + + /* Make sure the current thread owns the pFile. + */ + rc = transferOwnership(pFile); + if( rc!=SQLITE_OK ){ + leaveMutex(); + return rc; + } + pLock = pFile->pLock; + + /* If some thread using this PID has a lock via a different unixFile* + ** handle that precludes the requested lock, return BUSY. + */ + if( (pFile->locktype!=pLock->locktype && + (pLock->locktype>=PENDING_LOCK || locktype>SHARED_LOCK)) + ){ + rc = SQLITE_BUSY; + goto end_lock; + } + + /* If a SHARED lock is requested, and some thread using this PID already + ** has a SHARED or RESERVED lock, then increment reference counts and + ** return SQLITE_OK. + */ + if( locktype==SHARED_LOCK && + (pLock->locktype==SHARED_LOCK || pLock->locktype==RESERVED_LOCK) ){ + assert( locktype==SHARED_LOCK ); + assert( pFile->locktype==0 ); + assert( pLock->cnt>0 ); + pFile->locktype = SHARED_LOCK; + pLock->cnt++; + pFile->pOpen->nLock++; + goto end_lock; + } + + lock.l_len = 1L; + + lock.l_whence = SEEK_SET; + + /* A PENDING lock is needed before acquiring a SHARED lock and before + ** acquiring an EXCLUSIVE lock. For the SHARED lock, the PENDING will + ** be released. + */ + if( locktype==SHARED_LOCK + || (locktype==EXCLUSIVE_LOCK && pFile->locktypeh, F_SETLK, &lock); + if( s==(-1) ){ + rc = (errno==EINVAL) ? SQLITE_NOLFS : SQLITE_BUSY; + goto end_lock; + } + } + + + /* If control gets to this point, then actually go ahead and make + ** operating system calls for the specified lock. + */ + if( locktype==SHARED_LOCK ){ + assert( pLock->cnt==0 ); + assert( pLock->locktype==0 ); + + /* Now get the read-lock */ + lock.l_start = SHARED_FIRST; + lock.l_len = SHARED_SIZE; + s = fcntl(pFile->h, F_SETLK, &lock); + + /* Drop the temporary PENDING lock */ + lock.l_start = PENDING_BYTE; + lock.l_len = 1L; + lock.l_type = F_UNLCK; + if( fcntl(pFile->h, F_SETLK, &lock)!=0 ){ + rc = SQLITE_IOERR_UNLOCK; /* This should never happen */ + goto end_lock; + } + if( s==(-1) ){ + rc = (errno==EINVAL) ? SQLITE_NOLFS : SQLITE_BUSY; + }else{ + pFile->locktype = SHARED_LOCK; + pFile->pOpen->nLock++; + pLock->cnt = 1; + } + }else if( locktype==EXCLUSIVE_LOCK && pLock->cnt>1 ){ + /* We are trying for an exclusive lock but another thread in this + ** same process is still holding a shared lock. */ + rc = SQLITE_BUSY; + }else{ + /* The request was for a RESERVED or EXCLUSIVE lock. It is + ** assumed that there is a SHARED or greater lock on the file + ** already. + */ + assert( 0!=pFile->locktype ); + lock.l_type = F_WRLCK; + switch( locktype ){ + case RESERVED_LOCK: + lock.l_start = RESERVED_BYTE; + break; + case EXCLUSIVE_LOCK: + lock.l_start = SHARED_FIRST; + lock.l_len = SHARED_SIZE; + break; + default: + assert(0); + } + s = fcntl(pFile->h, F_SETLK, &lock); + if( s==(-1) ){ + rc = (errno==EINVAL) ? 
SQLITE_NOLFS : SQLITE_BUSY; + } + } + + if( rc==SQLITE_OK ){ + pFile->locktype = locktype; + pLock->locktype = locktype; + }else if( locktype==EXCLUSIVE_LOCK ){ + pFile->locktype = PENDING_LOCK; + pLock->locktype = PENDING_LOCK; + } + +end_lock: + leaveMutex(); + OSTRACE4("LOCK %d %s %s\n", pFile->h, locktypeName(locktype), + rc==SQLITE_OK ? "ok" : "failed"); + return rc; +} + +/* +** Lower the locking level on file descriptor pFile to locktype. locktype +** must be either NO_LOCK or SHARED_LOCK. +** +** If the locking level of the file descriptor is already at or below +** the requested locking level, this routine is a no-op. +*/ +static int unixUnlock(sqlite3_file *id, int locktype){ + struct lockInfo *pLock; + struct flock lock; + int rc = SQLITE_OK; + unixFile *pFile = (unixFile*)id; + int h; + + assert( pFile ); + OSTRACE7("UNLOCK %d %d was %d(%d,%d) pid=%d\n", pFile->h, locktype, + pFile->locktype, pFile->pLock->locktype, pFile->pLock->cnt, getpid()); + + assert( locktype<=SHARED_LOCK ); + if( pFile->locktype<=locktype ){ + return SQLITE_OK; + } + if( CHECK_THREADID(pFile) ){ + return SQLITE_MISUSE; + } + enterMutex(); + h = pFile->h; + pLock = pFile->pLock; + assert( pLock->cnt!=0 ); + if( pFile->locktype>SHARED_LOCK ){ + assert( pLock->locktype==pFile->locktype ); + SimulateIOErrorBenign(1); + SimulateIOError( h=(-1) ) + SimulateIOErrorBenign(0); + if( locktype==SHARED_LOCK ){ + lock.l_type = F_RDLCK; + lock.l_whence = SEEK_SET; + lock.l_start = SHARED_FIRST; + lock.l_len = SHARED_SIZE; + if( fcntl(h, F_SETLK, &lock)==(-1) ){ + rc = SQLITE_IOERR_RDLOCK; + } + } + lock.l_type = F_UNLCK; + lock.l_whence = SEEK_SET; + lock.l_start = PENDING_BYTE; + lock.l_len = 2L; assert( PENDING_BYTE+1==RESERVED_BYTE ); + if( fcntl(h, F_SETLK, &lock)!=(-1) ){ + pLock->locktype = SHARED_LOCK; + }else{ + rc = SQLITE_IOERR_UNLOCK; + } + } + if( locktype==NO_LOCK ){ + struct openCnt *pOpen; + + /* Decrement the shared lock counter. Release the lock using an + ** OS call only when all threads in this same process have released + ** the lock. + */ + pLock->cnt--; + if( pLock->cnt==0 ){ + lock.l_type = F_UNLCK; + lock.l_whence = SEEK_SET; + lock.l_start = lock.l_len = 0L; + SimulateIOErrorBenign(1); + SimulateIOError( h=(-1) ) + SimulateIOErrorBenign(0); + if( fcntl(h, F_SETLK, &lock)!=(-1) ){ + pLock->locktype = NO_LOCK; + }else{ + rc = SQLITE_IOERR_UNLOCK; + pLock->cnt = 1; + } + } + + /* Decrement the count of locks against this same file. When the + ** count reaches zero, close any other file descriptors whose close + ** was deferred because of outstanding locks. + */ + if( rc==SQLITE_OK ){ + pOpen = pFile->pOpen; + pOpen->nLock--; + assert( pOpen->nLock>=0 ); + if( pOpen->nLock==0 && pOpen->nPending>0 ){ + int i; + for(i=0; inPending; i++){ + close(pOpen->aPending[i]); + } + free(pOpen->aPending); + pOpen->nPending = 0; + pOpen->aPending = 0; + } + } + } + leaveMutex(); + if( rc==SQLITE_OK ) pFile->locktype = locktype; + return rc; +} + +/* +** Close a file. +*/ +static int unixClose(sqlite3_file *id){ + unixFile *pFile = (unixFile *)id; + if( !pFile ) return SQLITE_OK; + unixUnlock(id, NO_LOCK); + if( pFile->dirfd>=0 ) close(pFile->dirfd); + pFile->dirfd = -1; + enterMutex(); + + if( pFile->pOpen->nLock ){ + /* If there are outstanding locks, do not actually close the file just + ** yet because that would clear those locks. Instead, add the file + ** descriptor to pOpen->aPending. It will be automatically closed when + ** the last lock is cleared. 
+ */ + int *aNew; + struct openCnt *pOpen = pFile->pOpen; + aNew = realloc( pOpen->aPending, (pOpen->nPending+1)*sizeof(int) ); + if( aNew==0 ){ + /* If a malloc fails, just leak the file descriptor */ + }else{ + pOpen->aPending = aNew; + pOpen->aPending[pOpen->nPending] = pFile->h; + pOpen->nPending++; + } + }else{ + /* There are no outstanding locks so we can close the file immediately */ + close(pFile->h); + } + releaseLockInfo(pFile->pLock); + releaseOpenCnt(pFile->pOpen); + + leaveMutex(); + OSTRACE2("CLOSE %-3d\n", pFile->h); + OpenCounter(-1); + memset(pFile, 0, sizeof(unixFile)); + return SQLITE_OK; +} + + +#ifdef SQLITE_ENABLE_LOCKING_STYLE +#pragma mark AFP Support + +/* + ** The afpLockingContext structure contains all afp lock specific state + */ +typedef struct afpLockingContext afpLockingContext; +struct afpLockingContext { + unsigned long long sharedLockByte; + const char *filePath; +}; + +struct ByteRangeLockPB2 +{ + unsigned long long offset; /* offset to first byte to lock */ + unsigned long long length; /* nbr of bytes to lock */ + unsigned long long retRangeStart; /* nbr of 1st byte locked if successful */ + unsigned char unLockFlag; /* 1 = unlock, 0 = lock */ + unsigned char startEndFlag; /* 1=rel to end of fork, 0=rel to start */ + int fd; /* file desc to assoc this lock with */ +}; + +#define afpfsByteRangeLock2FSCTL _IOWR('z', 23, struct ByteRangeLockPB2) + +/* +** Return 0 on success, 1 on failure. To match the behavior of the +** normal posix file locking (used in unixLock for example), we should +** provide 'richer' return codes - specifically to differentiate between +** 'file busy' and 'file system error' results. +*/ +static int _AFPFSSetLock( + const char *path, + int fd, + unsigned long long offset, + unsigned long long length, + int setLockFlag +){ + struct ByteRangeLockPB2 pb; + int err; + + pb.unLockFlag = setLockFlag ? 0 : 1; + pb.startEndFlag = 0; + pb.offset = offset; + pb.length = length; + pb.fd = fd; + OSTRACE5("AFPLOCK setting lock %s for %d in range %llx:%llx\n", + (setLockFlag?"ON":"OFF"), fd, offset, length); + err = fsctl(path, afpfsByteRangeLock2FSCTL, &pb, 0); + if ( err==-1 ) { + OSTRACE4("AFPLOCK failed to fsctl() '%s' %d %s\n", path, errno, + strerror(errno)); + return 1; /* error */ + } else { + return 0; + } +} + +/* + ** This routine checks if there is a RESERVED lock held on the specified + ** file by this or any other process. If such a lock is held, return + ** non-zero. If the file is unlocked or holds only SHARED locks, then + ** return zero. + */ +static int afpUnixCheckReservedLock(sqlite3_file *id){ + int r = 0; + unixFile *pFile = (unixFile*)id; + + assert( pFile ); + afpLockingContext *context = (afpLockingContext *) pFile->lockingContext; + + /* Check if a thread in this process holds such a lock */ + if( pFile->locktype>SHARED_LOCK ){ + r = 1; + } + + /* Otherwise see if some other process holds it. + */ + if ( !r ) { + /* lock the byte */ + int failed = _AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1,1); + if (failed) { + /* if we failed to get the lock then someone else must have it */ + r = 1; + } else { + /* if we succeeded in taking the reserved lock, unlock it to restore + ** the original state */ + _AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1, 0); + } + } + OSTRACE3("TEST WR-LOCK %d %d\n", pFile->h, r); + + return r; +} + +/* AFP-style locking following the behavior of unixLock, see the unixLock +** function comments for details of lock management. 
*/ +static int afpUnixLock(sqlite3_file *id, int locktype){ + int rc = SQLITE_OK; + unixFile *pFile = (unixFile*)id; + afpLockingContext *context = (afpLockingContext *) pFile->lockingContext; + int gotPendingLock = 0; + + assert( pFile ); + OSTRACE5("LOCK %d %s was %s pid=%d\n", pFile->h, + locktypeName(locktype), locktypeName(pFile->locktype), getpid()); + + /* If there is already a lock of this type or more restrictive on the + ** unixFile, do nothing. Don't use the afp_end_lock: exit path, as + ** enterMutex() hasn't been called yet. + */ + if( pFile->locktype>=locktype ){ + OSTRACE3("LOCK %d %s ok (already held)\n", pFile->h, + locktypeName(locktype)); + return SQLITE_OK; + } + + /* Make sure the locking sequence is correct + */ + assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); + assert( locktype!=PENDING_LOCK ); + assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); + + /* This mutex is needed because pFile->pLock is shared across threads + */ + enterMutex(); + + /* Make sure the current thread owns the pFile. + */ + rc = transferOwnership(pFile); + if( rc!=SQLITE_OK ){ + leaveMutex(); + return rc; + } + + /* A PENDING lock is needed before acquiring a SHARED lock and before + ** acquiring an EXCLUSIVE lock. For the SHARED lock, the PENDING will + ** be released. + */ + if( locktype==SHARED_LOCK + || (locktype==EXCLUSIVE_LOCK && pFile->locktypefilePath, pFile->h, PENDING_BYTE, 1, 1); + if (failed) { + rc = SQLITE_BUSY; + goto afp_end_lock; + } + } + + /* If control gets to this point, then actually go ahead and make + ** operating system calls for the specified lock. + */ + if( locktype==SHARED_LOCK ){ + int lk, failed; + int tries = 0; + + /* Now get the read-lock */ + /* note that the quality of the randomness doesn't matter that much */ + lk = random(); + context->sharedLockByte = (lk & 0x7fffffff)%(SHARED_SIZE - 1); + failed = _AFPFSSetLock(context->filePath, pFile->h, + SHARED_FIRST+context->sharedLockByte, 1, 1); + + /* Drop the temporary PENDING lock */ + if (_AFPFSSetLock(context->filePath, pFile->h, PENDING_BYTE, 1, 0)) { + rc = SQLITE_IOERR_UNLOCK; /* This should never happen */ + goto afp_end_lock; + } + + if( failed ){ + rc = SQLITE_BUSY; + } else { + pFile->locktype = SHARED_LOCK; + } + }else{ + /* The request was for a RESERVED or EXCLUSIVE lock. It is + ** assumed that there is a SHARED or greater lock on the file + ** already. + */ + int failed = 0; + assert( 0!=pFile->locktype ); + if (locktype >= RESERVED_LOCK && pFile->locktype < RESERVED_LOCK) { + /* Acquire a RESERVED lock */ + failed = _AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1,1); + } + if (!failed && locktype == EXCLUSIVE_LOCK) { + /* Acquire an EXCLUSIVE lock */ + + /* Remove the shared lock before trying the range. 
we'll need to + ** reestablish the shared lock if we can't get the afpUnixUnlock + */ + if (!_AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST + + context->sharedLockByte, 1, 0)) { + /* now attemmpt to get the exclusive lock range */ + failed = _AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST, + SHARED_SIZE, 1); + if (failed && _AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST + + context->sharedLockByte, 1, 1)) { + rc = SQLITE_IOERR_RDLOCK; /* this should never happen */ + } + } else { + /* */ + rc = SQLITE_IOERR_UNLOCK; /* this should never happen */ + } + } + if( failed && rc == SQLITE_OK){ + rc = SQLITE_BUSY; + } + } + + if( rc==SQLITE_OK ){ + pFile->locktype = locktype; + }else if( locktype==EXCLUSIVE_LOCK ){ + pFile->locktype = PENDING_LOCK; + } + +afp_end_lock: + leaveMutex(); + OSTRACE4("LOCK %d %s %s\n", pFile->h, locktypeName(locktype), + rc==SQLITE_OK ? "ok" : "failed"); + return rc; +} + +/* +** Lower the locking level on file descriptor pFile to locktype. locktype +** must be either NO_LOCK or SHARED_LOCK. +** +** If the locking level of the file descriptor is already at or below +** the requested locking level, this routine is a no-op. +*/ +static int afpUnixUnlock(sqlite3_file *id, int locktype) { + struct flock lock; + int rc = SQLITE_OK; + unixFile *pFile = (unixFile*)id; + afpLockingContext *context = (afpLockingContext *) pFile->lockingContext; + + assert( pFile ); + OSTRACE5("UNLOCK %d %d was %d pid=%d\n", pFile->h, locktype, + pFile->locktype, getpid()); + + assert( locktype<=SHARED_LOCK ); + if( pFile->locktype<=locktype ){ + return SQLITE_OK; + } + if( CHECK_THREADID(pFile) ){ + return SQLITE_MISUSE; + } + enterMutex(); + if( pFile->locktype>SHARED_LOCK ){ + if( locktype==SHARED_LOCK ){ + int failed = 0; + + /* unlock the exclusive range - then re-establish the shared lock */ + if (pFile->locktype==EXCLUSIVE_LOCK) { + failed = _AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST, + SHARED_SIZE, 0); + if (!failed) { + /* successfully removed the exclusive lock */ + if (_AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST+ + context->sharedLockByte, 1, 1)) { + /* failed to re-establish our shared lock */ + rc = SQLITE_IOERR_RDLOCK; /* This should never happen */ + } + } else { + /* This should never happen - failed to unlock the exclusive range */ + rc = SQLITE_IOERR_UNLOCK; + } + } + } + if (rc == SQLITE_OK && pFile->locktype>=PENDING_LOCK) { + if (_AFPFSSetLock(context->filePath, pFile->h, PENDING_BYTE, 1, 0)){ + /* failed to release the pending lock */ + rc = SQLITE_IOERR_UNLOCK; /* This should never happen */ + } + } + if (rc == SQLITE_OK && pFile->locktype>=RESERVED_LOCK) { + if (_AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1, 0)) { + /* failed to release the reserved lock */ + rc = SQLITE_IOERR_UNLOCK; /* This should never happen */ + } + } + } + if( locktype==NO_LOCK ){ + int failed = _AFPFSSetLock(context->filePath, pFile->h, + SHARED_FIRST + context->sharedLockByte, 1, 0); + if (failed) { + rc = SQLITE_IOERR_UNLOCK; /* This should never happen */ + } + } + if (rc == SQLITE_OK) + pFile->locktype = locktype; + leaveMutex(); + return rc; +} + +/* +** Close a file & cleanup AFP specific locking context +*/ +static int afpUnixClose(sqlite3_file *id) { + unixFile *pFile = (unixFile*)id; + + if( !pFile ) return SQLITE_OK; + afpUnixUnlock(id, NO_LOCK); + sqlite3_free(pFile->lockingContext); + if( pFile->dirfd>=0 ) close(pFile->dirfd); + pFile->dirfd = -1; + enterMutex(); + close(pFile->h); + leaveMutex(); + OSTRACE2("CLOSE 
%-3d\n", pFile->h); + OpenCounter(-1); + memset(pFile, 0, sizeof(unixFile)); + return SQLITE_OK; +} + + +#pragma mark flock() style locking + +/* +** The flockLockingContext is not used +*/ +typedef void flockLockingContext; + +static int flockUnixCheckReservedLock(sqlite3_file *id){ + unixFile *pFile = (unixFile*)id; + + if (pFile->locktype == RESERVED_LOCK) { + return 1; /* already have a reserved lock */ + } else { + /* attempt to get the lock */ + int rc = flock(pFile->h, LOCK_EX | LOCK_NB); + if (!rc) { + /* got the lock, unlock it */ + flock(pFile->h, LOCK_UN); + return 0; /* no one has it reserved */ + } + return 1; /* someone else might have it reserved */ + } +} + +static int flockUnixLock(sqlite3_file *id, int locktype) { + unixFile *pFile = (unixFile*)id; + + /* if we already have a lock, it is exclusive. + ** Just adjust level and punt on outta here. */ + if (pFile->locktype > NO_LOCK) { + pFile->locktype = locktype; + return SQLITE_OK; + } + + /* grab an exclusive lock */ + int rc = flock(pFile->h, LOCK_EX | LOCK_NB); + if (rc) { + /* didn't get, must be busy */ + return SQLITE_BUSY; + } else { + /* got it, set the type and return ok */ + pFile->locktype = locktype; + return SQLITE_OK; + } +} + +static int flockUnixUnlock(sqlite3_file *id, int locktype) { + unixFile *pFile = (unixFile*)id; + + assert( locktype<=SHARED_LOCK ); + + /* no-op if possible */ + if( pFile->locktype==locktype ){ + return SQLITE_OK; + } + + /* shared can just be set because we always have an exclusive */ + if (locktype==SHARED_LOCK) { + pFile->locktype = locktype; + return SQLITE_OK; + } + + /* no, really, unlock. */ + int rc = flock(pFile->h, LOCK_UN); + if (rc) + return SQLITE_IOERR_UNLOCK; + else { + pFile->locktype = NO_LOCK; + return SQLITE_OK; + } +} + +/* +** Close a file. +*/ +static int flockUnixClose(sqlite3_file *id) { + unixFile *pFile = (unixFile*)id; + + if( !pFile ) return SQLITE_OK; + flockUnixUnlock(id, NO_LOCK); + + if( pFile->dirfd>=0 ) close(pFile->dirfd); + pFile->dirfd = -1; + + enterMutex(); + close(pFile->h); + leaveMutex(); + OSTRACE2("CLOSE %-3d\n", pFile->h); + OpenCounter(-1); + memset(pFile, 0, sizeof(unixFile)); + return SQLITE_OK; +} + +#pragma mark Old-School .lock file based locking + +/* +** The dotlockLockingContext structure contains all dotlock (.lock) lock +** specific state +*/ +typedef struct dotlockLockingContext dotlockLockingContext; +struct dotlockLockingContext { + char *lockPath; +}; + + +static int dotlockUnixCheckReservedLock(sqlite3_file *id) { + unixFile *pFile = (unixFile*)id; + dotlockLockingContext *context; + + context = (dotlockLockingContext*)pFile->lockingContext; + if (pFile->locktype == RESERVED_LOCK) { + return 1; /* already have a reserved lock */ + } else { + struct stat statBuf; + if (lstat(context->lockPath,&statBuf) == 0){ + /* file exists, someone else has the lock */ + return 1; + }else{ + /* file does not exist, we could have it if we want it */ + return 0; + } + } +} + +static int dotlockUnixLock(sqlite3_file *id, int locktype) { + unixFile *pFile = (unixFile*)id; + dotlockLockingContext *context; + int fd; + + context = (dotlockLockingContext*)pFile->lockingContext; + + /* if we already have a lock, it is exclusive. + ** Just adjust level and punt on outta here. 
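
[Editorial illustration] The flock()-style reserved-lock check above boils down to "try a non-blocking exclusive flock(); if it succeeds, nothing else holds the file, so release it again." A standalone sketch, with a placeholder file name:

    #include <sys/file.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void){
      int fd = open("demo.db", O_RDWR|O_CREAT, 0644);
      if( fd<0 ){ perror("open"); return 1; }
      if( flock(fd, LOCK_EX|LOCK_NB)==0 ){
        printf("no one else holds it\n");
        flock(fd, LOCK_UN);              /* give it back immediately */
      }else{
        printf("someone else might have it reserved\n");
      }
      close(fd);
      return 0;
    }
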
*/ + if (pFile->locktype > NO_LOCK) { + pFile->locktype = locktype; + + /* Always update the timestamp on the old file */ + utimes(context->lockPath,NULL); + return SQLITE_OK; + } + + /* check to see if lock file already exists */ + struct stat statBuf; + if (lstat(context->lockPath,&statBuf) == 0){ + return SQLITE_BUSY; /* it does, busy */ + } + + /* grab an exclusive lock */ + fd = open(context->lockPath,O_RDONLY|O_CREAT|O_EXCL,0600); + if( fd<0 ){ + /* failed to open/create the file, someone else may have stolen the lock */ + return SQLITE_BUSY; + } + close(fd); + + /* got it, set the type and return ok */ + pFile->locktype = locktype; + return SQLITE_OK; +} + +static int dotlockUnixUnlock(sqlite3_file *id, int locktype) { + unixFile *pFile = (unixFile*)id; + dotlockLockingContext *context; + + context = (dotlockLockingContext*)pFile->lockingContext; + + assert( locktype<=SHARED_LOCK ); + + /* no-op if possible */ + if( pFile->locktype==locktype ){ + return SQLITE_OK; + } + + /* shared can just be set because we always have an exclusive */ + if (locktype==SHARED_LOCK) { + pFile->locktype = locktype; + return SQLITE_OK; + } + + /* no, really, unlock. */ + unlink(context->lockPath); + pFile->locktype = NO_LOCK; + return SQLITE_OK; +} + +/* + ** Close a file. + */ +static int dotlockUnixClose(sqlite3_file *id) { + unixFile *pFile = (unixFile*)id; + + if( !pFile ) return SQLITE_OK; + dotlockUnixUnlock(id, NO_LOCK); + sqlite3_free(pFile->lockingContext); + if( pFile->dirfd>=0 ) close(pFile->dirfd); + pFile->dirfd = -1; + enterMutex(); + close(pFile->h); + leaveMutex(); + OSTRACE2("CLOSE %-3d\n", pFile->h); + OpenCounter(-1); + memset(pFile, 0, sizeof(unixFile)); + return SQLITE_OK; +} + + +#pragma mark No locking + +/* +** The nolockLockingContext is void +*/ +typedef void nolockLockingContext; + +static int nolockUnixCheckReservedLock(sqlite3_file *id) { + return 0; +} + +static int nolockUnixLock(sqlite3_file *id, int locktype) { + return SQLITE_OK; +} + +static int nolockUnixUnlock(sqlite3_file *id, int locktype) { + return SQLITE_OK; +} + +/* +** Close a file. +*/ +static int nolockUnixClose(sqlite3_file *id) { + unixFile *pFile = (unixFile*)id; + + if( !pFile ) return SQLITE_OK; + if( pFile->dirfd>=0 ) close(pFile->dirfd); + pFile->dirfd = -1; + enterMutex(); + close(pFile->h); + leaveMutex(); + OSTRACE2("CLOSE %-3d\n", pFile->h); + OpenCounter(-1); + memset(pFile, 0, sizeof(unixFile)); + return SQLITE_OK; +} + +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + + +/* +** Information and control of an open file handle. +*/ +static int unixFileControl(sqlite3_file *id, int op, void *pArg){ + switch( op ){ + case SQLITE_FCNTL_LOCKSTATE: { + *(int*)pArg = ((unixFile*)id)->locktype; + return SQLITE_OK; + } + } + return SQLITE_ERROR; +} + +/* +** Return the sector size in bytes of the underlying block device for +** the specified file. This is almost always 512 bytes, but may be +** larger for some devices. +** +** SQLite code assumes this function cannot fail. It also assumes that +** if two files are created in the same file-system directory (i.e. +** a database and its journal file) that the sector size will be the +** same for both. +*/ +static int unixSectorSize(sqlite3_file *id){ + return SQLITE_DEFAULT_SECTOR_SIZE; +} + +/* +** Return the device characteristics for the file. This is always 0. +*/ +static int unixDeviceCharacteristics(sqlite3_file *id){ + return 0; +} + +/* +** This vector defines all the methods that can operate on an sqlite3_file +** for unix. 
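
[Editorial illustration] The dot-file locking style above works because open() with O_CREAT|O_EXCL is atomic: whoever creates the ".lock" file owns the lock, and unlink() releases it. A sketch with placeholder paths:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void){
      const char *zLockPath = "demo.db.lock";
      int fd = open(zLockPath, O_RDONLY|O_CREAT|O_EXCL, 0600);
      if( fd<0 ){
        printf("busy: lock file already exists\n");
        return 1;
      }
      close(fd);                 /* the file's existence is the lock */
      printf("lock acquired\n");
      /* ... do work ... */
      unlink(zLockPath);         /* release */
      return 0;
    }
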
+*/ +static const sqlite3_io_methods sqlite3UnixIoMethod = { + 1, /* iVersion */ + unixClose, + unixRead, + unixWrite, + unixTruncate, + unixSync, + unixFileSize, + unixLock, + unixUnlock, + unixCheckReservedLock, + unixFileControl, + unixSectorSize, + unixDeviceCharacteristics +}; + +#ifdef SQLITE_ENABLE_LOCKING_STYLE +/* +** This vector defines all the methods that can operate on an sqlite3_file +** for unix with AFP style file locking. +*/ +static const sqlite3_io_methods sqlite3AFPLockingUnixIoMethod = { + 1, /* iVersion */ + afpUnixClose, + unixRead, + unixWrite, + unixTruncate, + unixSync, + unixFileSize, + afpUnixLock, + afpUnixUnlock, + afpUnixCheckReservedLock, + unixFileControl, + unixSectorSize, + unixDeviceCharacteristics +}; + +/* +** This vector defines all the methods that can operate on an sqlite3_file +** for unix with flock() style file locking. +*/ +static const sqlite3_io_methods sqlite3FlockLockingUnixIoMethod = { + 1, /* iVersion */ + flockUnixClose, + unixRead, + unixWrite, + unixTruncate, + unixSync, + unixFileSize, + flockUnixLock, + flockUnixUnlock, + flockUnixCheckReservedLock, + unixFileControl, + unixSectorSize, + unixDeviceCharacteristics +}; + +/* +** This vector defines all the methods that can operate on an sqlite3_file +** for unix with dotlock style file locking. +*/ +static const sqlite3_io_methods sqlite3DotlockLockingUnixIoMethod = { + 1, /* iVersion */ + dotlockUnixClose, + unixRead, + unixWrite, + unixTruncate, + unixSync, + unixFileSize, + dotlockUnixLock, + dotlockUnixUnlock, + dotlockUnixCheckReservedLock, + unixFileControl, + unixSectorSize, + unixDeviceCharacteristics +}; + +/* +** This vector defines all the methods that can operate on an sqlite3_file +** for unix with nolock style file locking. +*/ +static const sqlite3_io_methods sqlite3NolockLockingUnixIoMethod = { + 1, /* iVersion */ + nolockUnixClose, + unixRead, + unixWrite, + unixTruncate, + unixSync, + unixFileSize, + nolockUnixLock, + nolockUnixUnlock, + nolockUnixCheckReservedLock, + unixFileControl, + unixSectorSize, + unixDeviceCharacteristics +}; + +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + +/* +** Allocate memory for a new unixFile and initialize that unixFile. +** Write a pointer to the new unixFile into *pId. +** If we run out of memory, close the file and return an error. +*/ +#ifdef SQLITE_ENABLE_LOCKING_STYLE +/* +** When locking extensions are enabled, the filepath and locking style +** are needed to determine the unixFile pMethod to use for locking operations. +** The locking-style specific lockingContext data structure is created +** and assigned here also. 
+*/ +static int fillInUnixFile( + int h, /* Open file descriptor of file being opened */ + int dirfd, /* Directory file descriptor */ + sqlite3_file *pId, /* Write to the unixFile structure here */ + const char *zFilename /* Name of the file being opened */ +){ + sqlite3LockingStyle lockingStyle; + unixFile *pNew = (unixFile *)pId; + int rc; + +#ifdef FD_CLOEXEC + fcntl(h, F_SETFD, fcntl(h, F_GETFD, 0) | FD_CLOEXEC); +#endif + + lockingStyle = sqlite3DetectLockingStyle(zFilename, h); + if ( lockingStyle==posixLockingStyle ){ + enterMutex(); + rc = findLockInfo(h, &pNew->pLock, &pNew->pOpen); + leaveMutex(); + if( rc ){ + if( dirfd>=0 ) close(dirfd); + close(h); + return SQLITE_NOMEM; + } + } else { + /* pLock and pOpen are only used for posix advisory locking */ + pNew->pLock = NULL; + pNew->pOpen = NULL; + } + + OSTRACE3("OPEN %-3d %s\n", h, zFilename); + pNew->dirfd = -1; + pNew->h = h; + pNew->dirfd = dirfd; + SET_THREADID(pNew); + + switch(lockingStyle) { + case afpLockingStyle: { + /* afp locking uses the file path so it needs to be included in + ** the afpLockingContext */ + afpLockingContext *context; + pNew->pMethod = &sqlite3AFPLockingUnixIoMethod; + pNew->lockingContext = context = sqlite3_malloc( sizeof(*context) ); + if( context==0 ){ + close(h); + if( dirfd>=0 ) close(dirfd); + return SQLITE_NOMEM; + } + + /* NB: zFilename exists and remains valid until the file is closed + ** according to requirement F11141. So we do not need to make a + ** copy of the filename. */ + context->filePath = zFilename; + srandomdev(); + break; + } + case flockLockingStyle: + /* flock locking doesn't need additional lockingContext information */ + pNew->pMethod = &sqlite3FlockLockingUnixIoMethod; + break; + case dotlockLockingStyle: { + /* dotlock locking uses the file path so it needs to be included in + ** the dotlockLockingContext */ + dotlockLockingContext *context; + int nFilename; + nFilename = strlen(zFilename); + pNew->pMethod = &sqlite3DotlockLockingUnixIoMethod; + pNew->lockingContext = context = + sqlite3_malloc( sizeof(*context) + nFilename + 6 ); + if( context==0 ){ + close(h); + if( dirfd>=0 ) close(dirfd); + return SQLITE_NOMEM; + } + context->lockPath = (char*)&context[1]; + sqlite3_snprintf(nFilename, context->lockPath, + "%s.lock", zFilename); + break; + } + case posixLockingStyle: + /* posix locking doesn't need additional lockingContext information */ + pNew->pMethod = &sqlite3UnixIoMethod; + break; + case noLockingStyle: + case unsupportedLockingStyle: + default: + pNew->pMethod = &sqlite3NolockLockingUnixIoMethod; + } + OpenCounter(+1); + return SQLITE_OK; +} +#else /* SQLITE_ENABLE_LOCKING_STYLE */ +static int fillInUnixFile( + int h, /* Open file descriptor on file being opened */ + int dirfd, + sqlite3_file *pId, /* Write to the unixFile structure here */ + const char *zFilename /* Name of the file being opened */ +){ + unixFile *pNew = (unixFile *)pId; + int rc; + +#ifdef FD_CLOEXEC + fcntl(h, F_SETFD, fcntl(h, F_GETFD, 0) | FD_CLOEXEC); +#endif + + enterMutex(); + rc = findLockInfo(h, &pNew->pLock, &pNew->pOpen); + leaveMutex(); + if( rc ){ + if( dirfd>=0 ) close(dirfd); + close(h); + return SQLITE_NOMEM; + } + + OSTRACE3("OPEN %-3d %s\n", h, zFilename); + pNew->dirfd = -1; + pNew->h = h; + pNew->dirfd = dirfd; + SET_THREADID(pNew); + + pNew->pMethod = &sqlite3UnixIoMethod; + OpenCounter(+1); + return SQLITE_OK; +} +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + +/* +** Open a file descriptor to the directory containing file zFilename. 
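
[Editorial illustration] Both variants of fillInUnixFile() above mark the descriptor close-on-exec by reading the current descriptor flags, OR-ing in FD_CLOEXEC, and writing them back, so the handle is not inherited across exec(). In isolation:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void){
      int fd = open("/dev/null", O_RDONLY);
      if( fd<0 ){ perror("open"); return 1; }
      /* The real code guards this with #ifdef FD_CLOEXEC; it is applied
      ** unconditionally here since every modern unix defines it. */
      fcntl(fd, F_SETFD, fcntl(fd, F_GETFD, 0) | FD_CLOEXEC);
      printf("close-on-exec set: %d\n",
             (fcntl(fd, F_GETFD, 0) & FD_CLOEXEC)!=0);
      close(fd);
      return 0;
    }
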
+** If successful, *pFd is set to the opened file descriptor and +** SQLITE_OK is returned. If an error occurs, either SQLITE_NOMEM +** or SQLITE_CANTOPEN is returned and *pFd is set to an undefined +** value. +** +** If SQLITE_OK is returned, the caller is responsible for closing +** the file descriptor *pFd using close(). +*/ +static int openDirectory(const char *zFilename, int *pFd){ + int ii; + int fd = -1; + char zDirname[MAX_PATHNAME+1]; + + sqlite3_snprintf(MAX_PATHNAME, zDirname, "%s", zFilename); + for(ii=strlen(zDirname); ii>=0 && zDirname[ii]!='/'; ii--); + if( ii>0 ){ + zDirname[ii] = '\0'; + fd = open(zDirname, O_RDONLY|O_BINARY, 0); + if( fd>=0 ){ +#ifdef FD_CLOEXEC + fcntl(fd, F_SETFD, fcntl(fd, F_GETFD, 0) | FD_CLOEXEC); +#endif + OSTRACE3("OPENDIR %-3d %s\n", fd, zDirname); + } + } + *pFd = fd; + return (fd>=0?SQLITE_OK:SQLITE_CANTOPEN); +} + +/* +** Open the file zPath. +** +** Previously, the SQLite OS layer used three functions in place of this +** one: +** +** sqlite3OsOpenReadWrite(); +** sqlite3OsOpenReadOnly(); +** sqlite3OsOpenExclusive(); +** +** These calls correspond to the following combinations of flags: +** +** ReadWrite() -> (READWRITE | CREATE) +** ReadOnly() -> (READONLY) +** OpenExclusive() -> (READWRITE | CREATE | EXCLUSIVE) +** +** The old OpenExclusive() accepted a boolean argument - "delFlag". If +** true, the file was configured to be automatically deleted when the +** file handle closed. To achieve the same effect using this new +** interface, add the DELETEONCLOSE flag to those specified above for +** OpenExclusive(). +*/ +static int unixOpen( + sqlite3_vfs *pVfs, + const char *zPath, + sqlite3_file *pFile, + int flags, + int *pOutFlags +){ + int fd = 0; /* File descriptor returned by open() */ + int dirfd = -1; /* Directory file descriptor */ + int oflags = 0; /* Flags to pass to open() */ + int eType = flags&0xFFFFFF00; /* Type of file to open */ + + int isExclusive = (flags & SQLITE_OPEN_EXCLUSIVE); + int isDelete = (flags & SQLITE_OPEN_DELETEONCLOSE); + int isCreate = (flags & SQLITE_OPEN_CREATE); + int isReadonly = (flags & SQLITE_OPEN_READONLY); + int isReadWrite = (flags & SQLITE_OPEN_READWRITE); + + /* If creating a master or main-file journal, this function will open + ** a file-descriptor on the directory too. The first time unixSync() + ** is called the directory file descriptor will be fsync()ed and close()d. + */ + int isOpenDirectory = (isCreate && + (eType==SQLITE_OPEN_MASTER_JOURNAL || eType==SQLITE_OPEN_MAIN_JOURNAL) + ); + + /* Check the following statements are true: + ** + ** (a) Exactly one of the READWRITE and READONLY flags must be set, and + ** (b) if CREATE is set, then READWRITE must also be set, and + ** (c) if EXCLUSIVE is set, then CREATE must also be set. + ** (d) if DELETEONCLOSE is set, then CREATE must also be set. + */ + assert((isReadonly==0 || isReadWrite==0) && (isReadWrite || isReadonly)); + assert(isCreate==0 || isReadWrite); + assert(isExclusive==0 || isCreate); + assert(isDelete==0 || isCreate); + + + /* The main DB, main journal, and master journal are never automatically + ** deleted + */ + assert( eType!=SQLITE_OPEN_MAIN_DB || !isDelete ); + assert( eType!=SQLITE_OPEN_MAIN_JOURNAL || !isDelete ); + assert( eType!=SQLITE_OPEN_MASTER_JOURNAL || !isDelete ); + + /* Assert that the upper layer has set one of the "file-type" flags. 
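The four assertions above, labelled (a) through (d), spell out which SQLITE_OPEN_* flag combinations the upper layer is allowed to pass in. A small stand-alone check of those same rules; the bit values below are placeholders for illustration, not the real constants from sqlite3.h:

    #include <assert.h>

    /* Illustrative bit values; the real constants come from sqlite3.h. */
    #define OPEN_READONLY      0x0001
    #define OPEN_READWRITE     0x0002
    #define OPEN_CREATE        0x0004
    #define OPEN_DELETEONCLOSE 0x0008
    #define OPEN_EXCLUSIVE     0x0010

    /* Return 1 if the flag combination satisfies rules (a)-(d) above. */
    static int flagsOk(int flags){
      int isReadonly  = (flags & OPEN_READONLY)!=0;
      int isReadWrite = (flags & OPEN_READWRITE)!=0;
      int isCreate    = (flags & OPEN_CREATE)!=0;
      int isDelete    = (flags & OPEN_DELETEONCLOSE)!=0;
      int isExclusive = (flags & OPEN_EXCLUSIVE)!=0;
      if( isReadonly==isReadWrite ) return 0;   /* (a) exactly one of RO/RW    */
      if( isCreate && !isReadWrite ) return 0;  /* (b) CREATE needs READWRITE  */
      if( isExclusive && !isCreate ) return 0;  /* (c) EXCLUSIVE needs CREATE  */
      if( isDelete && !isCreate ) return 0;     /* (d) DELETEONCLOSE needs CREATE */
      return 1;
    }

    int main(void){
      assert(  flagsOk(OPEN_READWRITE|OPEN_CREATE) );
      assert(  flagsOk(OPEN_READONLY) );
      assert( !flagsOk(OPEN_READONLY|OPEN_CREATE) );     /* violates (b) */
      assert( !flagsOk(OPEN_READWRITE|OPEN_EXCLUSIVE) ); /* violates (c) */
      return 0;
    }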
*/ + assert( eType==SQLITE_OPEN_MAIN_DB || eType==SQLITE_OPEN_TEMP_DB + || eType==SQLITE_OPEN_MAIN_JOURNAL || eType==SQLITE_OPEN_TEMP_JOURNAL + || eType==SQLITE_OPEN_SUBJOURNAL || eType==SQLITE_OPEN_MASTER_JOURNAL + || eType==SQLITE_OPEN_TRANSIENT_DB + ); + + if( isReadonly ) oflags |= O_RDONLY; + if( isReadWrite ) oflags |= O_RDWR; + if( isCreate ) oflags |= O_CREAT; + if( isExclusive ) oflags |= (O_EXCL|O_NOFOLLOW); + oflags |= (O_LARGEFILE|O_BINARY); + + memset(pFile, 0, sizeof(unixFile)); + fd = open(zPath, oflags, isDelete?0600:SQLITE_DEFAULT_FILE_PERMISSIONS); + if( fd<0 && errno!=EISDIR && isReadWrite && !isExclusive ){ + /* Failed to open the file for read/write access. Try read-only. */ + flags &= ~(SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE); + flags |= SQLITE_OPEN_READONLY; + return unixOpen(pVfs, zPath, pFile, flags, pOutFlags); + } + if( fd<0 ){ + return SQLITE_CANTOPEN; + } + if( isDelete ){ + unlink(zPath); + } + if( pOutFlags ){ + *pOutFlags = flags; + } + + assert(fd!=0); + if( isOpenDirectory ){ + int rc = openDirectory(zPath, &dirfd); + if( rc!=SQLITE_OK ){ + close(fd); + return rc; + } + } + return fillInUnixFile(fd, dirfd, pFile, zPath); +} + +/* +** Delete the file at zPath. If the dirSync argument is true, fsync() +** the directory after deleting the file. +*/ +static int unixDelete(sqlite3_vfs *pVfs, const char *zPath, int dirSync){ + int rc = SQLITE_OK; + SimulateIOError(return SQLITE_IOERR_DELETE); + unlink(zPath); + if( dirSync ){ + int fd; + rc = openDirectory(zPath, &fd); + if( rc==SQLITE_OK ){ + if( fsync(fd) ){ + rc = SQLITE_IOERR_DIR_FSYNC; + } + close(fd); + } + } + return rc; +} + +/* +** Test the existance of or access permissions of file zPath. The +** test performed depends on the value of flags: +** +** SQLITE_ACCESS_EXISTS: Return 1 if the file exists +** SQLITE_ACCESS_READWRITE: Return 1 if the file is read and writable. +** SQLITE_ACCESS_READONLY: Return 1 if the file is readable. +** +** Otherwise return 0. +*/ +static int unixAccess(sqlite3_vfs *pVfs, const char *zPath, int flags){ + int amode = 0; + switch( flags ){ + case SQLITE_ACCESS_EXISTS: + amode = F_OK; + break; + case SQLITE_ACCESS_READWRITE: + amode = W_OK|R_OK; + break; + case SQLITE_ACCESS_READ: + amode = R_OK; + break; + + default: + assert(!"Invalid flags argument"); + } + return (access(zPath, amode)==0); +} + +/* +** Create a temporary file name in zBuf. zBuf must be allocated +** by the calling process and must be big enough to hold at least +** pVfs->mxPathname bytes. +*/ +static int unixGetTempname(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ + static const char *azDirs[] = { + 0, + "/var/tmp", + "/usr/tmp", + "/tmp", + ".", + }; + static const unsigned char zChars[] = + "abcdefghijklmnopqrstuvwxyz" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "0123456789"; + int i, j; + struct stat buf; + const char *zDir = "."; + + /* It's odd to simulate an io-error here, but really this is just + ** using the io-error infrastructure to test that SQLite handles this + ** function failing. + */ + SimulateIOError( return SQLITE_ERROR ); + + azDirs[0] = sqlite3_temp_directory; + for(i=0; imxPathname==MAX_PATHNAME ); + sqlite3_snprintf(nBuf-17, zBuf, "%s/"SQLITE_TEMP_FILE_PREFIX, zDir); + j = strlen(zBuf); + sqlite3Randomness(15, &zBuf[j]); + for(i=0; i<15; i++, j++){ + zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; + } + zBuf[j] = 0; + }while( access(zBuf,0)==0 ); + return SQLITE_OK; +} + + +/* +** Turn a relative pathname into a full pathname. 
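unixDelete() with dirSync set, together with openDirectory(), is the usual durable-unlink idiom on POSIX: remove the file, then fsync() the containing directory so the removal itself survives a power failure. A reduced, free-standing sketch of that idiom (not SQLite's code; buffer size and error handling are simplified, and the path in main() is just an example):

    #include <fcntl.h>
    #include <libgen.h>
    #include <string.h>
    #include <unistd.h>

    /* Unlink zPath and, if dirSync is non-zero, fsync() its directory so
    ** the removal of the directory entry is on stable storage. */
    static int durableUnlink(const char *zPath, int dirSync){
      char zDir[4096];
      if( unlink(zPath) ) return -1;
      if( dirSync ){
        int fd;
        strncpy(zDir, zPath, sizeof(zDir)-1);
        zDir[sizeof(zDir)-1] = 0;
        fd = open(dirname(zDir), O_RDONLY);  /* directory fd, as in openDirectory() */
        if( fd<0 ) return -1;
        if( fsync(fd) ){ close(fd); return -1; }
        close(fd);
      }
      return 0;
    }

    int main(void){
      /* Demonstration only: fails harmlessly if the file does not exist. */
      return durableUnlink("/tmp/example-journal", 1) ? 1 : 0;
    }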
The relative path +** is stored as a nul-terminated string in the buffer pointed to by +** zPath. +** +** zOut points to a buffer of at least sqlite3_vfs.mxPathname bytes +** (in this case, MAX_PATHNAME bytes). The full-path is written to +** this buffer before returning. +*/ +static int unixFullPathname( + sqlite3_vfs *pVfs, /* Pointer to vfs object */ + const char *zPath, /* Possibly relative input path */ + int nOut, /* Size of output buffer in bytes */ + char *zOut /* Output buffer */ +){ + + /* It's odd to simulate an io-error here, but really this is just + ** using the io-error infrastructure to test that SQLite handles this + ** function failing. This function could fail if, for example, the + ** current working directly has been unlinked. + */ + SimulateIOError( return SQLITE_ERROR ); + + assert( pVfs->mxPathname==MAX_PATHNAME ); + zOut[nOut-1] = '\0'; + if( zPath[0]=='/' ){ + sqlite3_snprintf(nOut, zOut, "%s", zPath); + }else{ + int nCwd; + if( getcwd(zOut, nOut-1)==0 ){ + return SQLITE_CANTOPEN; + } + nCwd = strlen(zOut); + sqlite3_snprintf(nOut-nCwd, &zOut[nCwd], "/%s", zPath); + } + return SQLITE_OK; + +#if 0 + /* + ** Remove "/./" path elements and convert "/A/./" path elements + ** to just "/". + */ + if( zFull ){ + int i, j; + for(i=j=0; zFull[i]; i++){ + if( zFull[i]=='/' ){ + if( zFull[i+1]=='/' ) continue; + if( zFull[i+1]=='.' && zFull[i+2]=='/' ){ + i += 1; + continue; + } + if( zFull[i+1]=='.' && zFull[i+2]=='.' && zFull[i+3]=='/' ){ + while( j>0 && zFull[j-1]!='/' ){ j--; } + i += 3; + continue; + } + } + zFull[j++] = zFull[i]; + } + zFull[j] = 0; + } +#endif +} + + +#ifndef SQLITE_OMIT_LOAD_EXTENSION +/* +** Interfaces for opening a shared library, finding entry points +** within the shared library, and closing the shared library. +*/ +#include +static void *unixDlOpen(sqlite3_vfs *pVfs, const char *zFilename){ + return dlopen(zFilename, RTLD_NOW | RTLD_GLOBAL); +} + +/* +** SQLite calls this function immediately after a call to unixDlSym() or +** unixDlOpen() fails (returns a null pointer). If a more detailed error +** message is available, it is written to zBufOut. If no error message +** is available, zBufOut is left unmodified and SQLite uses a default +** error message. +*/ +static void unixDlError(sqlite3_vfs *pVfs, int nBuf, char *zBufOut){ + char *zErr; + enterMutex(); + zErr = dlerror(); + if( zErr ){ + sqlite3_snprintf(nBuf, zBufOut, "%s", zErr); + } + leaveMutex(); +} +static void *unixDlSym(sqlite3_vfs *pVfs, void *pHandle, const char *zSymbol){ + return dlsym(pHandle, zSymbol); +} +static void unixDlClose(sqlite3_vfs *pVfs, void *pHandle){ + dlclose(pHandle); +} +#else /* if SQLITE_OMIT_LOAD_EXTENSION is defined: */ + #define unixDlOpen 0 + #define unixDlError 0 + #define unixDlSym 0 + #define unixDlClose 0 +#endif + +/* +** Write nBuf bytes of random data to the supplied buffer zBuf. +*/ +static int unixRandomness(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ + + assert(nBuf>=(sizeof(time_t)+sizeof(int))); + + /* We have to initialize zBuf to prevent valgrind from reporting + ** errors. The reports issued by valgrind are incorrect - we would + ** prefer that the randomness be increased by making use of the + ** uninitialized space in zBuf - but valgrind errors tend to worry + ** some users. Rather than argue, it seems easier just to initialize + ** the whole array and silence valgrind, even if that means less randomness + ** in the random seed. + ** + ** When testing, initializing zBuf[] to zero is all we do. 
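The unixDlOpen()/unixDlSym()/unixDlError()/unixDlClose() wrappers above are thin covers over the standard POSIX dynamic loader. A minimal stand-alone use of the same calls, resolving cos() out of the math library; the library name is only an example and varies by platform:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void){
      void *h = dlopen("libm.so.6", RTLD_NOW | RTLD_GLOBAL); /* same flags as unixDlOpen */
      double (*pCos)(double);
      if( h==0 ){
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
      }
      pCos = (double(*)(double))dlsym(h, "cos");
      if( pCos==0 ){
        fprintf(stderr, "dlsym: %s\n", dlerror());
      }else{
        printf("cos(0) = %f\n", pCos(0.0));
      }
      dlclose(h);
      return 0;
    }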
That means + ** that we always use the same random number sequence. This makes the + ** tests repeatable. + */ + memset(zBuf, 0, nBuf); +#if !defined(SQLITE_TEST) + { + int pid, fd; + fd = open("/dev/urandom", O_RDONLY); + if( fd<0 ){ + time_t t; + time(&t); + memcpy(zBuf, &t, sizeof(t)); + pid = getpid(); + memcpy(&zBuf[sizeof(t)], &pid, sizeof(pid)); + }else{ + read(fd, zBuf, nBuf); + close(fd); + } + } +#endif + return SQLITE_OK; +} + + +/* +** Sleep for a little while. Return the amount of time slept. +** The argument is the number of microseconds we want to sleep. +** The return value is the number of microseconds of sleep actually +** requested from the underlying operating system, a number which +** might be greater than or equal to the argument, but not less +** than the argument. +*/ +static int unixSleep(sqlite3_vfs *pVfs, int microseconds){ +#if defined(HAVE_USLEEP) && HAVE_USLEEP + usleep(microseconds); + return microseconds; +#else + int seconds = (microseconds+999999)/1000000; + sleep(seconds); + return seconds*1000000; +#endif +} + +/* +** The following variable, if set to a non-zero value, becomes the result +** returned from sqlite3OsCurrentTime(). This is used for testing. +*/ +#ifdef SQLITE_TEST +int sqlite3_current_time = 0; +#endif + +/* +** Find the current time (in Universal Coordinated Time). Write the +** current time and date as a Julian Day number into *prNow and +** return 0. Return 1 if the time and date cannot be found. +*/ +static int unixCurrentTime(sqlite3_vfs *pVfs, double *prNow){ +#ifdef NO_GETTOD + time_t t; + time(&t); + *prNow = t/86400.0 + 2440587.5; +#else + struct timeval sNow; + gettimeofday(&sNow, 0); + *prNow = 2440587.5 + sNow.tv_sec/86400.0 + sNow.tv_usec/86400000000.0; +#endif +#ifdef SQLITE_TEST + if( sqlite3_current_time ){ + *prNow = sqlite3_current_time/86400.0 + 2440587.5; + } +#endif + return 0; +} + +/* +** Return a pointer to the sqlite3DefaultVfs structure. We use +** a function rather than give the structure global scope because +** some compilers (MSVC) do not allow forward declarations of +** initialized structures. +*/ +sqlite3_vfs *sqlite3OsDefaultVfs(void){ + static sqlite3_vfs unixVfs = { + 1, /* iVersion */ + sizeof(unixFile), /* szOsFile */ + MAX_PATHNAME, /* mxPathname */ + 0, /* pNext */ + "unix", /* zName */ + 0, /* pAppData */ + + unixOpen, /* xOpen */ + unixDelete, /* xDelete */ + unixAccess, /* xAccess */ + unixGetTempname, /* xGetTempName */ + unixFullPathname, /* xFullPathname */ + unixDlOpen, /* xDlOpen */ + unixDlError, /* xDlError */ + unixDlSym, /* xDlSym */ + unixDlClose, /* xDlClose */ + unixRandomness, /* xRandomness */ + unixSleep, /* xSleep */ + unixCurrentTime /* xCurrentTime */ + }; + + return &unixVfs; +} + +#endif /* OS_UNIX */ Added: external/sqlite-source-3.5.7.x/os_win.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/os_win.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,1565 @@ +/* +** 2004 May 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains code that is specific to windows. 
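unixCurrentTime() converts the gettimeofday() result to a Julian Day number by dividing by 86400 seconds per day and adding 2440587.5, the Julian Day of the Unix epoch (1970-01-01 00:00 UTC). A small independent check of that arithmetic:

    #include <stdio.h>
    #include <sys/time.h>

    int main(void){
      struct timeval sNow;
      double jd;
      gettimeofday(&sNow, 0);
      /* Seconds and microseconds since the epoch, expressed in days,
      ** offset by the Julian Day number of 1970-01-01 00:00 UTC. */
      jd = 2440587.5 + sNow.tv_sec/86400.0 + sNow.tv_usec/86400000000.0;
      printf("current Julian Day: %.6f\n", jd);
      /* Sanity check: the epoch itself maps exactly to 2440587.5. */
      printf("epoch maps to: %.1f\n", 2440587.5 + 0/86400.0);
      return 0;
    }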
+*/ +#include "sqliteInt.h" +#if OS_WIN /* This file is used for windows only */ + + +/* +** A Note About Memory Allocation: +** +** This driver uses malloc()/free() directly rather than going through +** the SQLite-wrappers sqlite3_malloc()/sqlite3_free(). Those wrappers +** are designed for use on embedded systems where memory is scarce and +** malloc failures happen frequently. Win32 does not typically run on +** embedded systems, and when it does the developers normally have bigger +** problems to worry about than running out of memory. So there is not +** a compelling need to use the wrappers. +** +** But there is a good reason to not use the wrappers. If we use the +** wrappers then we will get simulated malloc() failures within this +** driver. And that causes all kinds of problems for our tests. We +** could enhance SQLite to deal with simulated malloc failures within +** the OS driver, but the code to deal with those failure would not +** be exercised on Linux (which does not need to malloc() in the driver) +** and so we would have difficulty writing coverage tests for that +** code. Better to leave the code out, we think. +** +** The point of this discussion is as follows: When creating a new +** OS layer for an embedded system, if you use this file as an example, +** avoid the use of malloc()/free(). Those routines work ok on windows +** desktops but not so well in embedded systems. +*/ + +#include + +#ifdef __CYGWIN__ +# include +#endif + +/* +** Macros used to determine whether or not to use threads. +*/ +#if defined(THREADSAFE) && THREADSAFE +# define SQLITE_W32_THREADS 1 +#endif + +/* +** Include code that is common to all os_*.c files +*/ +#include "os_common.h" + +/* +** Determine if we are dealing with WindowsCE - which has a much +** reduced API. +*/ +#if defined(_WIN32_WCE) +# define OS_WINCE 1 +# define AreFileApisANSI() 1 +#else +# define OS_WINCE 0 +#endif + +/* +** WinCE lacks native support for file locking so we have to fake it +** with some code of our own. +*/ +#if OS_WINCE +typedef struct winceLock { + int nReaders; /* Number of reader locks obtained */ + BOOL bPending; /* Indicates a pending lock has been obtained */ + BOOL bReserved; /* Indicates a reserved lock has been obtained */ + BOOL bExclusive; /* Indicates an exclusive lock has been obtained */ +} winceLock; +#endif + +/* +** The winFile structure is a subclass of sqlite3_file* specific to the win32 +** portability layer. +*/ +typedef struct winFile winFile; +struct winFile { + const sqlite3_io_methods *pMethod;/* Must be first */ + HANDLE h; /* Handle for accessing the file */ + unsigned char locktype; /* Type of lock currently held on this file */ + short sharedLockByte; /* Randomly chosen byte used as a shared lock */ +#if OS_WINCE + WCHAR *zDeleteOnClose; /* Name of file to delete when closing */ + HANDLE hMutex; /* Mutex used to control access to shared lock */ + HANDLE hShared; /* Shared memory segment used for locking */ + winceLock local; /* Locks obtained by this instance of winFile */ + winceLock *shared; /* Global shared lock memory for the file */ +#endif +}; + + +/* +** The following variable is (normally) set once and never changes +** thereafter. It records whether the operating system is Win95 +** or WinNT. +** +** 0: Operating system unknown. +** 1: Operating system is Win95. +** 2: Operating system is WinNT. +** +** In order to facilitate testing on a WinNT system, the test fixture +** can manually set this value to 1 to emulate Win98 behavior. 
+*/ +#ifdef SQLITE_TEST +int sqlite3_os_type = 0; +#else +static int sqlite3_os_type = 0; +#endif + +/* +** Return true (non-zero) if we are running under WinNT, Win2K, WinXP, +** or WinCE. Return false (zero) for Win95, Win98, or WinME. +** +** Here is an interesting observation: Win95, Win98, and WinME lack +** the LockFileEx() API. But we can still statically link against that +** API as long as we don't call it win running Win95/98/ME. A call to +** this routine is used to determine if the host is Win95/98/ME or +** WinNT/2K/XP so that we will know whether or not we can safely call +** the LockFileEx() API. +*/ +#if OS_WINCE +# define isNT() (1) +#else + static int isNT(void){ + if( sqlite3_os_type==0 ){ + OSVERSIONINFO sInfo; + sInfo.dwOSVersionInfoSize = sizeof(sInfo); + GetVersionEx(&sInfo); + sqlite3_os_type = sInfo.dwPlatformId==VER_PLATFORM_WIN32_NT ? 2 : 1; + } + return sqlite3_os_type==2; + } +#endif /* OS_WINCE */ + +/* +** Convert a UTF-8 string to microsoft unicode (UTF-16?). +** +** Space to hold the returned string is obtained from malloc. +*/ +static WCHAR *utf8ToUnicode(const char *zFilename){ + int nChar; + WCHAR *zWideFilename; + + nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, NULL, 0); + zWideFilename = malloc( nChar*sizeof(zWideFilename[0]) ); + if( zWideFilename==0 ){ + return 0; + } + nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, zWideFilename, nChar); + if( nChar==0 ){ + free(zWideFilename); + zWideFilename = 0; + } + return zWideFilename; +} + +/* +** Convert microsoft unicode to UTF-8. Space to hold the returned string is +** obtained from malloc(). +*/ +static char *unicodeToUtf8(const WCHAR *zWideFilename){ + int nByte; + char *zFilename; + + nByte = WideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, 0, 0, 0, 0); + zFilename = malloc( nByte ); + if( zFilename==0 ){ + return 0; + } + nByte = WideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, zFilename, nByte, + 0, 0); + if( nByte == 0 ){ + free(zFilename); + zFilename = 0; + } + return zFilename; +} + +/* +** Convert an ansi string to microsoft unicode, based on the +** current codepage settings for file apis. +** +** Space to hold the returned string is obtained +** from malloc. +*/ +static WCHAR *mbcsToUnicode(const char *zFilename){ + int nByte; + WCHAR *zMbcsFilename; + int codepage = AreFileApisANSI() ? CP_ACP : CP_OEMCP; + + nByte = MultiByteToWideChar(codepage, 0, zFilename, -1, NULL,0)*sizeof(WCHAR); + zMbcsFilename = malloc( nByte*sizeof(zMbcsFilename[0]) ); + if( zMbcsFilename==0 ){ + return 0; + } + nByte = MultiByteToWideChar(codepage, 0, zFilename, -1, zMbcsFilename, nByte); + if( nByte==0 ){ + free(zMbcsFilename); + zMbcsFilename = 0; + } + return zMbcsFilename; +} + +/* +** Convert microsoft unicode to multibyte character string, based on the +** user's Ansi codepage. +** +** Space to hold the returned string is obtained from +** malloc(). +*/ +static char *unicodeToMbcs(const WCHAR *zWideFilename){ + int nByte; + char *zFilename; + int codepage = AreFileApisANSI() ? CP_ACP : CP_OEMCP; + + nByte = WideCharToMultiByte(codepage, 0, zWideFilename, -1, 0, 0, 0, 0); + zFilename = malloc( nByte ); + if( zFilename==0 ){ + return 0; + } + nByte = WideCharToMultiByte(codepage, 0, zWideFilename, -1, zFilename, nByte, + 0, 0); + if( nByte == 0 ){ + free(zFilename); + zFilename = 0; + } + return zFilename; +} + +/* +** Convert multibyte character string to UTF-8. Space to hold the +** returned string is obtained from malloc(). 
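utf8ToUnicode() and unicodeToUtf8() above follow the standard two-pass Win32 pattern: call the conversion once with a zero-sized output buffer to learn the required length, allocate, then convert for real. A compact Win32-only sketch of the same round trip (not SQLite's helpers; the input string is just an example):

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void){
      const char *zUtf8 = "example.db";
      int nWide, nBack;
      WCHAR *zWide;
      char *zBack;

      /* First call sizes the buffer (cchWideChar==0), second call converts. */
      nWide = MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, NULL, 0);
      zWide = malloc(nWide * sizeof(WCHAR));
      if( zWide==0 ) return 1;
      MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, zWide, nWide);

      /* And back again to UTF-8. */
      nBack = WideCharToMultiByte(CP_UTF8, 0, zWide, -1, NULL, 0, NULL, NULL);
      zBack = malloc(nBack);
      if( zBack ){
        WideCharToMultiByte(CP_UTF8, 0, zWide, -1, zBack, nBack, NULL, NULL);
        printf("round trip: %s\n", zBack);
        free(zBack);
      }
      free(zWide);
      return 0;
    }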
+*/ +static char *mbcsToUtf8(const char *zFilename){ + char *zFilenameUtf8; + WCHAR *zTmpWide; + + zTmpWide = mbcsToUnicode(zFilename); + if( zTmpWide==0 ){ + return 0; + } + zFilenameUtf8 = unicodeToUtf8(zTmpWide); + free(zTmpWide); + return zFilenameUtf8; +} + +/* +** Convert UTF-8 to multibyte character string. Space to hold the +** returned string is obtained from malloc(). +*/ +static char *utf8ToMbcs(const char *zFilename){ + char *zFilenameMbcs; + WCHAR *zTmpWide; + + zTmpWide = utf8ToUnicode(zFilename); + if( zTmpWide==0 ){ + return 0; + } + zFilenameMbcs = unicodeToMbcs(zTmpWide); + free(zTmpWide); + return zFilenameMbcs; +} + +#if OS_WINCE +/************************************************************************* +** This section contains code for WinCE only. +*/ +/* +** WindowsCE does not have a localtime() function. So create a +** substitute. +*/ +#include +struct tm *__cdecl localtime(const time_t *t) +{ + static struct tm y; + FILETIME uTm, lTm; + SYSTEMTIME pTm; + sqlite3_int64 t64; + t64 = *t; + t64 = (t64 + 11644473600)*10000000; + uTm.dwLowDateTime = t64 & 0xFFFFFFFF; + uTm.dwHighDateTime= t64 >> 32; + FileTimeToLocalFileTime(&uTm,&lTm); + FileTimeToSystemTime(&lTm,&pTm); + y.tm_year = pTm.wYear - 1900; + y.tm_mon = pTm.wMonth - 1; + y.tm_wday = pTm.wDayOfWeek; + y.tm_mday = pTm.wDay; + y.tm_hour = pTm.wHour; + y.tm_min = pTm.wMinute; + y.tm_sec = pTm.wSecond; + return &y; +} + +/* This will never be called, but defined to make the code compile */ +#define GetTempPathA(a,b) + +#define LockFile(a,b,c,d,e) winceLockFile(&a, b, c, d, e) +#define UnlockFile(a,b,c,d,e) winceUnlockFile(&a, b, c, d, e) +#define LockFileEx(a,b,c,d,e,f) winceLockFileEx(&a, b, c, d, e, f) + +#define HANDLE_TO_WINFILE(a) (winFile*)&((char*)a)[-offsetof(winFile,h)] + +/* +** Acquire a lock on the handle h +*/ +static void winceMutexAcquire(HANDLE h){ + DWORD dwErr; + do { + dwErr = WaitForSingleObject(h, INFINITE); + } while (dwErr != WAIT_OBJECT_0 && dwErr != WAIT_ABANDONED); +} +/* +** Release a lock acquired by winceMutexAcquire() +*/ +#define winceMutexRelease(h) ReleaseMutex(h) + +/* +** Create the mutex and shared memory used for locking in the file +** descriptor pFile +*/ +static BOOL winceCreateLock(const char *zFilename, winFile *pFile){ + WCHAR *zTok; + WCHAR *zName = utf8ToUnicode(zFilename); + BOOL bInit = TRUE; + + /* Initialize the local lockdata */ + ZeroMemory(&pFile->local, sizeof(pFile->local)); + + /* Replace the backslashes from the filename and lowercase it + ** to derive a mutex name. */ + zTok = CharLowerW(zName); + for (;*zTok;zTok++){ + if (*zTok == '\\') *zTok = '_'; + } + + /* Create/open the named mutex */ + pFile->hMutex = CreateMutexW(NULL, FALSE, zName); + if (!pFile->hMutex){ + free(zName); + return FALSE; + } + + /* Acquire the mutex before continuing */ + winceMutexAcquire(pFile->hMutex); + + /* Since the names of named mutexes, semaphores, file mappings etc are + ** case-sensitive, take advantage of that by uppercasing the mutex name + ** and using that as the shared filemapping name. + */ + CharUpperW(zName); + pFile->hShared = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, + PAGE_READWRITE, 0, sizeof(winceLock), + zName); + + /* Set a flag that indicates we're the first to create the memory so it + ** must be zero-initialized */ + if (GetLastError() == ERROR_ALREADY_EXISTS){ + bInit = FALSE; + } + + free(zName); + + /* If we succeeded in making the shared memory handle, map it. 
*/ + if (pFile->hShared){ + pFile->shared = (winceLock*)MapViewOfFile(pFile->hShared, + FILE_MAP_READ|FILE_MAP_WRITE, 0, 0, sizeof(winceLock)); + /* If mapping failed, close the shared memory handle and erase it */ + if (!pFile->shared){ + CloseHandle(pFile->hShared); + pFile->hShared = NULL; + } + } + + /* If shared memory could not be created, then close the mutex and fail */ + if (pFile->hShared == NULL){ + winceMutexRelease(pFile->hMutex); + CloseHandle(pFile->hMutex); + pFile->hMutex = NULL; + return FALSE; + } + + /* Initialize the shared memory if we're supposed to */ + if (bInit) { + ZeroMemory(pFile->shared, sizeof(winceLock)); + } + + winceMutexRelease(pFile->hMutex); + return TRUE; +} + +/* +** Destroy the part of winFile that deals with wince locks +*/ +static void winceDestroyLock(winFile *pFile){ + if (pFile->hMutex){ + /* Acquire the mutex */ + winceMutexAcquire(pFile->hMutex); + + /* The following blocks should probably assert in debug mode, but they + are to cleanup in case any locks remained open */ + if (pFile->local.nReaders){ + pFile->shared->nReaders --; + } + if (pFile->local.bReserved){ + pFile->shared->bReserved = FALSE; + } + if (pFile->local.bPending){ + pFile->shared->bPending = FALSE; + } + if (pFile->local.bExclusive){ + pFile->shared->bExclusive = FALSE; + } + + /* De-reference and close our copy of the shared memory handle */ + UnmapViewOfFile(pFile->shared); + CloseHandle(pFile->hShared); + + /* Done with the mutex */ + winceMutexRelease(pFile->hMutex); + CloseHandle(pFile->hMutex); + pFile->hMutex = NULL; + } +} + +/* +** An implementation of the LockFile() API of windows for wince +*/ +static BOOL winceLockFile( + HANDLE *phFile, + DWORD dwFileOffsetLow, + DWORD dwFileOffsetHigh, + DWORD nNumberOfBytesToLockLow, + DWORD nNumberOfBytesToLockHigh +){ + winFile *pFile = HANDLE_TO_WINFILE(phFile); + BOOL bReturn = FALSE; + + if (!pFile->hMutex) return TRUE; + winceMutexAcquire(pFile->hMutex); + + /* Wanting an exclusive lock? */ + if (dwFileOffsetLow == SHARED_FIRST + && nNumberOfBytesToLockLow == SHARED_SIZE){ + if (pFile->shared->nReaders == 0 && pFile->shared->bExclusive == 0){ + pFile->shared->bExclusive = TRUE; + pFile->local.bExclusive = TRUE; + bReturn = TRUE; + } + } + + /* Want a read-only lock? */ + else if ((dwFileOffsetLow >= SHARED_FIRST && + dwFileOffsetLow < SHARED_FIRST + SHARED_SIZE) && + nNumberOfBytesToLockLow == 1){ + if (pFile->shared->bExclusive == 0){ + pFile->local.nReaders ++; + if (pFile->local.nReaders == 1){ + pFile->shared->nReaders ++; + } + bReturn = TRUE; + } + } + + /* Want a pending lock? */ + else if (dwFileOffsetLow == PENDING_BYTE && nNumberOfBytesToLockLow == 1){ + /* If no pending lock has been acquired, then acquire it */ + if (pFile->shared->bPending == 0) { + pFile->shared->bPending = TRUE; + pFile->local.bPending = TRUE; + bReturn = TRUE; + } + } + /* Want a reserved lock? 
*/ + else if (dwFileOffsetLow == RESERVED_BYTE && nNumberOfBytesToLockLow == 1){ + if (pFile->shared->bReserved == 0) { + pFile->shared->bReserved = TRUE; + pFile->local.bReserved = TRUE; + bReturn = TRUE; + } + } + + winceMutexRelease(pFile->hMutex); + return bReturn; +} + +/* +** An implementation of the UnlockFile API of windows for wince +*/ +static BOOL winceUnlockFile( + HANDLE *phFile, + DWORD dwFileOffsetLow, + DWORD dwFileOffsetHigh, + DWORD nNumberOfBytesToUnlockLow, + DWORD nNumberOfBytesToUnlockHigh +){ + winFile *pFile = HANDLE_TO_WINFILE(phFile); + BOOL bReturn = FALSE; + + if (!pFile->hMutex) return TRUE; + winceMutexAcquire(pFile->hMutex); + + /* Releasing a reader lock or an exclusive lock */ + if (dwFileOffsetLow >= SHARED_FIRST && + dwFileOffsetLow < SHARED_FIRST + SHARED_SIZE){ + /* Did we have an exclusive lock? */ + if (pFile->local.bExclusive){ + pFile->local.bExclusive = FALSE; + pFile->shared->bExclusive = FALSE; + bReturn = TRUE; + } + + /* Did we just have a reader lock? */ + else if (pFile->local.nReaders){ + pFile->local.nReaders --; + if (pFile->local.nReaders == 0) + { + pFile->shared->nReaders --; + } + bReturn = TRUE; + } + } + + /* Releasing a pending lock */ + else if (dwFileOffsetLow == PENDING_BYTE && nNumberOfBytesToUnlockLow == 1){ + if (pFile->local.bPending){ + pFile->local.bPending = FALSE; + pFile->shared->bPending = FALSE; + bReturn = TRUE; + } + } + /* Releasing a reserved lock */ + else if (dwFileOffsetLow == RESERVED_BYTE && nNumberOfBytesToUnlockLow == 1){ + if (pFile->local.bReserved) { + pFile->local.bReserved = FALSE; + pFile->shared->bReserved = FALSE; + bReturn = TRUE; + } + } + + winceMutexRelease(pFile->hMutex); + return bReturn; +} + +/* +** An implementation of the LockFileEx() API of windows for wince +*/ +static BOOL winceLockFileEx( + HANDLE *phFile, + DWORD dwFlags, + DWORD dwReserved, + DWORD nNumberOfBytesToLockLow, + DWORD nNumberOfBytesToLockHigh, + LPOVERLAPPED lpOverlapped +){ + /* If the caller wants a shared read lock, forward this call + ** to winceLockFile */ + if (lpOverlapped->Offset == SHARED_FIRST && + dwFlags == 1 && + nNumberOfBytesToLockLow == SHARED_SIZE){ + return winceLockFile(phFile, SHARED_FIRST, 0, 1, 0); + } + return FALSE; +} +/* +** End of the special code for wince +*****************************************************************************/ +#endif /* OS_WINCE */ + +/***************************************************************************** +** The next group of routines implement the I/O methods specified +** by the sqlite3_io_methods object. +******************************************************************************/ + +/* +** Close a file. +** +** It is reported that an attempt to close a handle might sometimes +** fail. This is a very unreasonable result, but windows is notorious +** for being unreasonable so I do not doubt that it might happen. If +** the close fails, we pause for 100 milliseconds and try again. As +** many as MX_CLOSE_ATTEMPT attempts to close the handle are made before +** giving up and returning an error. 
+*/ +#define MX_CLOSE_ATTEMPT 3 +static int winClose(sqlite3_file *id){ + int rc, cnt = 0; + winFile *pFile = (winFile*)id; + OSTRACE2("CLOSE %d\n", pFile->h); + do{ + rc = CloseHandle(pFile->h); + }while( rc==0 && cnt++ < MX_CLOSE_ATTEMPT && (Sleep(100), 1) ); +#if OS_WINCE +#define WINCE_DELETION_ATTEMPTS 3 + winceDestroyLock(pFile); + if( pFile->zDeleteOnClose ){ + int cnt = 0; + while( + DeleteFileW(pFile->zDeleteOnClose)==0 + && GetFileAttributesW(pFile->zDeleteOnClose)!=0xffffffff + && cnt++ < WINCE_DELETION_ATTEMPTS + ){ + Sleep(100); /* Wait a little before trying again */ + } + free(pFile->zDeleteOnClose); + } +#endif + OpenCounter(-1); + return rc ? SQLITE_OK : SQLITE_IOERR; +} + +/* +** Some microsoft compilers lack this definition. +*/ +#ifndef INVALID_SET_FILE_POINTER +# define INVALID_SET_FILE_POINTER ((DWORD)-1) +#endif + +/* +** Read data from a file into a buffer. Return SQLITE_OK if all +** bytes were read successfully and SQLITE_IOERR if anything goes +** wrong. +*/ +static int winRead( + sqlite3_file *id, /* File to read from */ + void *pBuf, /* Write content into this buffer */ + int amt, /* Number of bytes to read */ + sqlite3_int64 offset /* Begin reading at this offset */ +){ + LONG upperBits = (offset>>32) & 0x7fffffff; + LONG lowerBits = offset & 0xffffffff; + DWORD rc; + DWORD got; + winFile *pFile = (winFile*)id; + assert( id!=0 ); + SimulateIOError(return SQLITE_IOERR_READ); + OSTRACE3("READ %d lock=%d\n", pFile->h, pFile->locktype); + rc = SetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); + if( rc==INVALID_SET_FILE_POINTER && GetLastError()!=NO_ERROR ){ + return SQLITE_FULL; + } + if( !ReadFile(pFile->h, pBuf, amt, &got, 0) ){ + return SQLITE_IOERR_READ; + } + if( got==(DWORD)amt ){ + return SQLITE_OK; + }else{ + memset(&((char*)pBuf)[got], 0, amt-got); + return SQLITE_IOERR_SHORT_READ; + } +} + +/* +** Write data from a buffer into a file. Return SQLITE_OK on success +** or some other error code on failure. +*/ +static int winWrite( + sqlite3_file *id, /* File to write into */ + const void *pBuf, /* The bytes to be written */ + int amt, /* Number of bytes to write */ + sqlite3_int64 offset /* Offset into the file to begin writing at */ +){ + LONG upperBits = (offset>>32) & 0x7fffffff; + LONG lowerBits = offset & 0xffffffff; + DWORD rc; + DWORD wrote; + winFile *pFile = (winFile*)id; + assert( id!=0 ); + SimulateIOError(return SQLITE_IOERR_WRITE); + SimulateDiskfullError(return SQLITE_FULL); + OSTRACE3("WRITE %d lock=%d\n", pFile->h, pFile->locktype); + rc = SetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); + if( rc==INVALID_SET_FILE_POINTER && GetLastError()!=NO_ERROR ){ + return SQLITE_FULL; + } + assert( amt>0 ); + while( + amt>0 + && (rc = WriteFile(pFile->h, pBuf, amt, &wrote, 0))!=0 + && wrote>0 + ){ + amt -= wrote; + pBuf = &((char*)pBuf)[wrote]; + } + if( !rc || amt>(int)wrote ){ + return SQLITE_FULL; + } + return SQLITE_OK; +} + +/* +** Truncate an open file to a specified size +*/ +static int winTruncate(sqlite3_file *id, sqlite3_int64 nByte){ + LONG upperBits = (nByte>>32) & 0x7fffffff; + LONG lowerBits = nByte & 0xffffffff; + winFile *pFile = (winFile*)id; + OSTRACE3("TRUNCATE %d %lld\n", pFile->h, nByte); + SimulateIOError(return SQLITE_IOERR_TRUNCATE); + SetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); + SetEndOfFile(pFile->h); + return SQLITE_OK; +} + +#ifdef SQLITE_TEST +/* +** Count the number of fullsyncs and normal syncs. This is used to test +** that syncs and fullsyncs are occuring at the right times. 
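winRead() and winWrite() split the 64-bit file offset into the lower and upper 32 bits that SetFilePointer() expects. A tiny portable check of that split-and-recombine arithmetic, with no Windows headers involved:

    #include <assert.h>
    #include <stdint.h>

    int main(void){
      int64_t offset = ((int64_t)7<<32) + 12345;          /* an offset above 4GB */
      int32_t lowerBits = (int32_t)(offset & 0xffffffff);
      int32_t upperBits = (int32_t)((offset>>32) & 0x7fffffff);
      /* Recombining the two halves gives back the original offset. */
      assert( (((int64_t)upperBits)<<32) + (uint32_t)lowerBits == offset );
      return 0;
    }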
+*/ +int sqlite3_sync_count = 0; +int sqlite3_fullsync_count = 0; +#endif + +/* +** Make sure all writes to a particular file are committed to disk. +*/ +static int winSync(sqlite3_file *id, int flags){ + winFile *pFile = (winFile*)id; + OSTRACE3("SYNC %d lock=%d\n", pFile->h, pFile->locktype); +#ifdef SQLITE_TEST + if( flags & SQLITE_SYNC_FULL ){ + sqlite3_fullsync_count++; + } + sqlite3_sync_count++; +#endif + if( FlushFileBuffers(pFile->h) ){ + return SQLITE_OK; + }else{ + return SQLITE_IOERR; + } +} + +/* +** Determine the current size of a file in bytes +*/ +static int winFileSize(sqlite3_file *id, sqlite3_int64 *pSize){ + winFile *pFile = (winFile*)id; + DWORD upperBits, lowerBits; + SimulateIOError(return SQLITE_IOERR_FSTAT); + lowerBits = GetFileSize(pFile->h, &upperBits); + *pSize = (((sqlite3_int64)upperBits)<<32) + lowerBits; + return SQLITE_OK; +} + +/* +** LOCKFILE_FAIL_IMMEDIATELY is undefined on some Windows systems. +*/ +#ifndef LOCKFILE_FAIL_IMMEDIATELY +# define LOCKFILE_FAIL_IMMEDIATELY 1 +#endif + +/* +** Acquire a reader lock. +** Different API routines are called depending on whether or not this +** is Win95 or WinNT. +*/ +static int getReadLock(winFile *pFile){ + int res; + if( isNT() ){ + OVERLAPPED ovlp; + ovlp.Offset = SHARED_FIRST; + ovlp.OffsetHigh = 0; + ovlp.hEvent = 0; + res = LockFileEx(pFile->h, LOCKFILE_FAIL_IMMEDIATELY, + 0, SHARED_SIZE, 0, &ovlp); + }else{ + int lk; + sqlite3Randomness(sizeof(lk), &lk); + pFile->sharedLockByte = (lk & 0x7fffffff)%(SHARED_SIZE - 1); + res = LockFile(pFile->h, SHARED_FIRST+pFile->sharedLockByte, 0, 1, 0); + } + return res; +} + +/* +** Undo a readlock +*/ +static int unlockReadLock(winFile *pFile){ + int res; + if( isNT() ){ + res = UnlockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); + }else{ + res = UnlockFile(pFile->h, SHARED_FIRST + pFile->sharedLockByte, 0, 1, 0); + } + return res; +} + +/* +** Lock the file with the lock specified by parameter locktype - one +** of the following: +** +** (1) SHARED_LOCK +** (2) RESERVED_LOCK +** (3) PENDING_LOCK +** (4) EXCLUSIVE_LOCK +** +** Sometimes when requesting one lock state, additional lock states +** are inserted in between. The locking might fail on one of the later +** transitions leaving the lock state different from what it started but +** still short of its goal. The following chart shows the allowed +** transitions and the inserted intermediate states: +** +** UNLOCKED -> SHARED +** SHARED -> RESERVED +** SHARED -> (PENDING) -> EXCLUSIVE +** RESERVED -> (PENDING) -> EXCLUSIVE +** PENDING -> EXCLUSIVE +** +** This routine will only increase a lock. The winUnlock() routine +** erases all locks at once and returns us immediately to locking level 0. +** It is not possible to lower the locking level one step at a time. You +** must go straight to locking level 0. +*/ +static int winLock(sqlite3_file *id, int locktype){ + int rc = SQLITE_OK; /* Return code from subroutines */ + int res = 1; /* Result of a windows lock call */ + int newLocktype; /* Set pFile->locktype to this value before exiting */ + int gotPendingLock = 0;/* True if we acquired a PENDING lock this time */ + winFile *pFile = (winFile*)id; + + assert( pFile!=0 ); + OSTRACE5("LOCK %d %d was %d(%d)\n", + pFile->h, locktype, pFile->locktype, pFile->sharedLockByte); + + /* If there is already a lock of this type or more restrictive on the + ** OsFile, do nothing. Don't use the end_lock: exit path, as + ** sqlite3OsEnterMutex() hasn't been called yet. 
+ */ + if( pFile->locktype>=locktype ){ + return SQLITE_OK; + } + + /* Make sure the locking sequence is correct + */ + assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); + assert( locktype!=PENDING_LOCK ); + assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); + + /* Lock the PENDING_LOCK byte if we need to acquire a PENDING lock or + ** a SHARED lock. If we are acquiring a SHARED lock, the acquisition of + ** the PENDING_LOCK byte is temporary. + */ + newLocktype = pFile->locktype; + if( pFile->locktype==NO_LOCK + || (locktype==EXCLUSIVE_LOCK && pFile->locktype==RESERVED_LOCK) + ){ + int cnt = 3; + while( cnt-->0 && (res = LockFile(pFile->h, PENDING_BYTE, 0, 1, 0))==0 ){ + /* Try 3 times to get the pending lock. The pending lock might be + ** held by another reader process who will release it momentarily. + */ + OSTRACE2("could not get a PENDING lock. cnt=%d\n", cnt); + Sleep(1); + } + gotPendingLock = res; + } + + /* Acquire a shared lock + */ + if( locktype==SHARED_LOCK && res ){ + assert( pFile->locktype==NO_LOCK ); + res = getReadLock(pFile); + if( res ){ + newLocktype = SHARED_LOCK; + } + } + + /* Acquire a RESERVED lock + */ + if( locktype==RESERVED_LOCK && res ){ + assert( pFile->locktype==SHARED_LOCK ); + res = LockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); + if( res ){ + newLocktype = RESERVED_LOCK; + } + } + + /* Acquire a PENDING lock + */ + if( locktype==EXCLUSIVE_LOCK && res ){ + newLocktype = PENDING_LOCK; + gotPendingLock = 0; + } + + /* Acquire an EXCLUSIVE lock + */ + if( locktype==EXCLUSIVE_LOCK && res ){ + assert( pFile->locktype>=SHARED_LOCK ); + res = unlockReadLock(pFile); + OSTRACE2("unreadlock = %d\n", res); + res = LockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); + if( res ){ + newLocktype = EXCLUSIVE_LOCK; + }else{ + OSTRACE2("error-code = %d\n", GetLastError()); + getReadLock(pFile); + } + } + + /* If we are holding a PENDING lock that ought to be released, then + ** release it now. + */ + if( gotPendingLock && locktype==SHARED_LOCK ){ + UnlockFile(pFile->h, PENDING_BYTE, 0, 1, 0); + } + + /* Update the state of the lock has held in the file descriptor then + ** return the appropriate result code. + */ + if( res ){ + rc = SQLITE_OK; + }else{ + OSTRACE4("LOCK FAILED %d trying for %d but got %d\n", pFile->h, + locktype, newLocktype); + rc = SQLITE_BUSY; + } + pFile->locktype = newLocktype; + return rc; +} + +/* +** This routine checks if there is a RESERVED lock held on the specified +** file by this or any other process. If such a lock is held, return +** non-zero, otherwise zero. +*/ +static int winCheckReservedLock(sqlite3_file *id){ + int rc; + winFile *pFile = (winFile*)id; + assert( pFile!=0 ); + if( pFile->locktype>=RESERVED_LOCK ){ + rc = 1; + OSTRACE3("TEST WR-LOCK %d %d (local)\n", pFile->h, rc); + }else{ + rc = LockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); + if( rc ){ + UnlockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); + } + rc = !rc; + OSTRACE3("TEST WR-LOCK %d %d (remote)\n", pFile->h, rc); + } + return rc; +} + +/* +** Lower the locking level on file descriptor id to locktype. locktype +** must be either NO_LOCK or SHARED_LOCK. +** +** If the locking level of the file descriptor is already at or below +** the requested locking level, this routine is a no-op. +** +** It is not possible for this routine to fail if the second argument +** is NO_LOCK. 
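The transition chart in winLock() above only ever escalates a lock: UNLOCKED to SHARED, SHARED to RESERVED, and SHARED or RESERVED to EXCLUSIVE via an intermediate PENDING that is never requested directly. A stand-alone sketch of that rule as a small predicate; the lock-level numbering mirrors the (1)..(4) ordering in the chart but is redefined locally for the sketch:

    #include <assert.h>

    enum { NO_LOCK=0, SHARED_LOCK=1, RESERVED_LOCK=2, PENDING_LOCK=3, EXCLUSIVE_LOCK=4 };

    /* Return 1 if winLock()-style code would accept a single request to
    ** move from lock level "from" to lock level "to". */
    static int escalationOk(int from, int to){
      if( to<=from ) return 1;                      /* already there: no-op   */
      if( to==PENDING_LOCK ) return 0;              /* never requested directly */
      if( to==SHARED_LOCK )    return from==NO_LOCK;
      if( to==RESERVED_LOCK )  return from==SHARED_LOCK;
      if( to==EXCLUSIVE_LOCK ) return from>=SHARED_LOCK;
      return 0;
    }

    int main(void){
      assert(  escalationOk(NO_LOCK, SHARED_LOCK) );
      assert(  escalationOk(SHARED_LOCK, RESERVED_LOCK) );
      assert(  escalationOk(RESERVED_LOCK, EXCLUSIVE_LOCK) );  /* via PENDING */
      assert( !escalationOk(NO_LOCK, RESERVED_LOCK) );
      assert( !escalationOk(SHARED_LOCK, PENDING_LOCK) );
      return 0;
    }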
If the second argument is SHARED_LOCK then this routine +** might return SQLITE_IOERR; +*/ +static int winUnlock(sqlite3_file *id, int locktype){ + int type; + winFile *pFile = (winFile*)id; + int rc = SQLITE_OK; + assert( pFile!=0 ); + assert( locktype<=SHARED_LOCK ); + OSTRACE5("UNLOCK %d to %d was %d(%d)\n", pFile->h, locktype, + pFile->locktype, pFile->sharedLockByte); + type = pFile->locktype; + if( type>=EXCLUSIVE_LOCK ){ + UnlockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); + if( locktype==SHARED_LOCK && !getReadLock(pFile) ){ + /* This should never happen. We should always be able to + ** reacquire the read lock */ + rc = SQLITE_IOERR_UNLOCK; + } + } + if( type>=RESERVED_LOCK ){ + UnlockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); + } + if( locktype==NO_LOCK && type>=SHARED_LOCK ){ + unlockReadLock(pFile); + } + if( type>=PENDING_LOCK ){ + UnlockFile(pFile->h, PENDING_BYTE, 0, 1, 0); + } + pFile->locktype = locktype; + return rc; +} + +/* +** Control and query of the open file handle. +*/ +static int winFileControl(sqlite3_file *id, int op, void *pArg){ + switch( op ){ + case SQLITE_FCNTL_LOCKSTATE: { + *(int*)pArg = ((winFile*)id)->locktype; + return SQLITE_OK; + } + } + return SQLITE_ERROR; +} + +/* +** Return the sector size in bytes of the underlying block device for +** the specified file. This is almost always 512 bytes, but may be +** larger for some devices. +** +** SQLite code assumes this function cannot fail. It also assumes that +** if two files are created in the same file-system directory (i.e. +** a database and its journal file) that the sector size will be the +** same for both. +*/ +static int winSectorSize(sqlite3_file *id){ + return SQLITE_DEFAULT_SECTOR_SIZE; +} + +/* +** Return a vector of device characteristics. +*/ +static int winDeviceCharacteristics(sqlite3_file *id){ + return 0; +} + +/* +** This vector defines all the methods that can operate on an +** sqlite3_file for win32. +*/ +static const sqlite3_io_methods winIoMethod = { + 1, /* iVersion */ + winClose, + winRead, + winWrite, + winTruncate, + winSync, + winFileSize, + winLock, + winUnlock, + winCheckReservedLock, + winFileControl, + winSectorSize, + winDeviceCharacteristics +}; + +/*************************************************************************** +** Here ends the I/O methods that form the sqlite3_io_methods object. +** +** The next block of code implements the VFS methods. +****************************************************************************/ + +/* +** Convert a UTF-8 filename into whatever form the underlying +** operating system wants filenames in. Space to hold the result +** is obtained from malloc and must be freed by the calling +** function. +*/ +static void *convertUtf8Filename(const char *zFilename){ + void *zConverted = 0; + if( isNT() ){ + zConverted = utf8ToUnicode(zFilename); + }else{ + zConverted = utf8ToMbcs(zFilename); + } + /* caller will handle out of memory */ + return zConverted; +} + +/* +** Open a file. 
+*/ +static int winOpen( + sqlite3_vfs *pVfs, /* Not used */ + const char *zName, /* Name of the file (UTF-8) */ + sqlite3_file *id, /* Write the SQLite file handle here */ + int flags, /* Open mode flags */ + int *pOutFlags /* Status return flags */ +){ + HANDLE h; + DWORD dwDesiredAccess; + DWORD dwShareMode; + DWORD dwCreationDisposition; + DWORD dwFlagsAndAttributes = 0; + int isTemp; + winFile *pFile = (winFile*)id; + void *zConverted = convertUtf8Filename(zName); + if( zConverted==0 ){ + return SQLITE_NOMEM; + } + + if( flags & SQLITE_OPEN_READWRITE ){ + dwDesiredAccess = GENERIC_READ | GENERIC_WRITE; + }else{ + dwDesiredAccess = GENERIC_READ; + } + if( flags & SQLITE_OPEN_CREATE ){ + dwCreationDisposition = OPEN_ALWAYS; + }else{ + dwCreationDisposition = OPEN_EXISTING; + } + if( flags & SQLITE_OPEN_MAIN_DB ){ + dwShareMode = FILE_SHARE_READ | FILE_SHARE_WRITE; + }else{ + dwShareMode = 0; + } + if( flags & SQLITE_OPEN_DELETEONCLOSE ){ +#if OS_WINCE + dwFlagsAndAttributes = FILE_ATTRIBUTE_HIDDEN; +#else + dwFlagsAndAttributes = FILE_ATTRIBUTE_TEMPORARY + | FILE_ATTRIBUTE_HIDDEN + | FILE_FLAG_DELETE_ON_CLOSE; +#endif + isTemp = 1; + }else{ + dwFlagsAndAttributes = FILE_ATTRIBUTE_NORMAL; + isTemp = 0; + } + /* Reports from the internet are that performance is always + ** better if FILE_FLAG_RANDOM_ACCESS is used. Ticket #2699. */ + dwFlagsAndAttributes |= FILE_FLAG_RANDOM_ACCESS; + if( isNT() ){ + h = CreateFileW((WCHAR*)zConverted, + dwDesiredAccess, + dwShareMode, + NULL, + dwCreationDisposition, + dwFlagsAndAttributes, + NULL + ); + }else{ +#if OS_WINCE + return SQLITE_NOMEM; +#else + h = CreateFileA((char*)zConverted, + dwDesiredAccess, + dwShareMode, + NULL, + dwCreationDisposition, + dwFlagsAndAttributes, + NULL + ); +#endif + } + if( h==INVALID_HANDLE_VALUE ){ + free(zConverted); + if( flags & SQLITE_OPEN_READWRITE ){ + return winOpen(0, zName, id, + ((flags|SQLITE_OPEN_READONLY)&~SQLITE_OPEN_READWRITE), pOutFlags); + }else{ + return SQLITE_CANTOPEN; + } + } + if( pOutFlags ){ + if( flags & SQLITE_OPEN_READWRITE ){ + *pOutFlags = SQLITE_OPEN_READWRITE; + }else{ + *pOutFlags = SQLITE_OPEN_READONLY; + } + } + memset(pFile, 0, sizeof(*pFile)); + pFile->pMethod = &winIoMethod; + pFile->h = h; +#if OS_WINCE + if( (flags & (SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_DB)) == + (SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_DB) + && !winceCreateLock(zName, pFile) + ){ + CloseHandle(h); + free(zConverted); + return SQLITE_CANTOPEN; + } + if( isTemp ){ + pFile->zDeleteOnClose = zConverted; + }else +#endif + { + free(zConverted); + } + OpenCounter(+1); + return SQLITE_OK; +} + +/* +** Delete the named file. +** +** Note that windows does not allow a file to be deleted if some other +** process has it open. Sometimes a virus scanner or indexing program +** will open a journal file shortly after it is created in order to do +** whatever does. While this other process is holding the +** file open, we will be unable to delete it. To work around this +** problem, we delay 100 milliseconds and try to delete again. Up +** to MX_DELETION_ATTEMPTs deletion attempts are run before giving +** up and returning an error. 
+*/ +#define MX_DELETION_ATTEMPTS 5 +static int winDelete( + sqlite3_vfs *pVfs, /* Not used on win32 */ + const char *zFilename, /* Name of file to delete */ + int syncDir /* Not used on win32 */ +){ + int cnt = 0; + int rc; + void *zConverted = convertUtf8Filename(zFilename); + if( zConverted==0 ){ + return SQLITE_NOMEM; + } + SimulateIOError(return SQLITE_IOERR_DELETE); + if( isNT() ){ + do{ + DeleteFileW(zConverted); + }while( (rc = GetFileAttributesW(zConverted))!=0xffffffff + && cnt++ < MX_DELETION_ATTEMPTS && (Sleep(100), 1) ); + }else{ +#if OS_WINCE + return SQLITE_NOMEM; +#else + do{ + DeleteFileA(zConverted); + }while( (rc = GetFileAttributesA(zConverted))!=0xffffffff + && cnt++ < MX_DELETION_ATTEMPTS && (Sleep(100), 1) ); +#endif + } + free(zConverted); + OSTRACE2("DELETE \"%s\"\n", zFilename); + return rc==0xffffffff ? SQLITE_OK : SQLITE_IOERR_DELETE; +} + +/* +** Check the existance and status of a file. +*/ +static int winAccess( + sqlite3_vfs *pVfs, /* Not used on win32 */ + const char *zFilename, /* Name of file to check */ + int flags /* Type of test to make on this file */ +){ + DWORD attr; + int rc; + void *zConverted = convertUtf8Filename(zFilename); + if( zConverted==0 ){ + return SQLITE_NOMEM; + } + if( isNT() ){ + attr = GetFileAttributesW((WCHAR*)zConverted); + }else{ +#if OS_WINCE + return SQLITE_NOMEM; +#else + attr = GetFileAttributesA((char*)zConverted); +#endif + } + free(zConverted); + switch( flags ){ + case SQLITE_ACCESS_READ: + case SQLITE_ACCESS_EXISTS: + rc = attr!=0xffffffff; + break; + case SQLITE_ACCESS_READWRITE: + rc = (attr & FILE_ATTRIBUTE_READONLY)==0; + break; + default: + assert(!"Invalid flags argument"); + } + return rc; +} + + +/* +** Create a temporary file name in zBuf. zBuf must be big enough to +** hold at pVfs->mxPathname characters. +*/ +static int winGetTempname(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ + static char zChars[] = + "abcdefghijklmnopqrstuvwxyz" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "0123456789"; + int i, j; + char zTempPath[MAX_PATH+1]; + if( sqlite3_temp_directory ){ + sqlite3_snprintf(MAX_PATH-30, zTempPath, "%s", sqlite3_temp_directory); + }else if( isNT() ){ + char *zMulti; + WCHAR zWidePath[MAX_PATH]; + GetTempPathW(MAX_PATH-30, zWidePath); + zMulti = unicodeToUtf8(zWidePath); + if( zMulti ){ + sqlite3_snprintf(MAX_PATH-30, zTempPath, "%s", zMulti); + free(zMulti); + }else{ + return SQLITE_NOMEM; + } + }else{ + char *zUtf8; + char zMbcsPath[MAX_PATH]; + GetTempPathA(MAX_PATH-30, zMbcsPath); + zUtf8 = mbcsToUtf8(zMbcsPath); + if( zUtf8 ){ + sqlite3_snprintf(MAX_PATH-30, zTempPath, "%s", zUtf8); + free(zUtf8); + }else{ + return SQLITE_NOMEM; + } + } + for(i=strlen(zTempPath); i>0 && zTempPath[i-1]=='\\'; i--){} + zTempPath[i] = 0; + sqlite3_snprintf(nBuf-30, zBuf, + "%s\\"SQLITE_TEMP_FILE_PREFIX, zTempPath); + j = strlen(zBuf); + sqlite3Randomness(20, &zBuf[j]); + for(i=0; i<20; i++, j++){ + zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; + } + zBuf[j] = 0; + OSTRACE2("TEMP FILENAME: %s\n", zBuf); + return SQLITE_OK; +} + +/* +** Turn a relative pathname into a full pathname. Write the full +** pathname into zOut[]. zOut[] will be at least pVfs->mxPathname +** bytes in size. 
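winDelete() retries because a virus scanner or indexer may briefly hold a freshly created journal open, which on Windows blocks deletion. A reduced Win32 sketch of the same retry loop, keeping the 100 millisecond pause and the attempt limit from the code above (the file name is only an example):

    #include <windows.h>

    #define MX_DELETION_ATTEMPTS 5

    /* Try to delete zPath, retrying while some other process still holds it.
    ** Returns 0 once the file is gone, non-zero if it still exists. */
    static int deleteWithRetry(const WCHAR *zPath){
      int cnt = 0;
      DWORD attr;
      do{
        DeleteFileW(zPath);
        attr = GetFileAttributesW(zPath);
        if( attr==INVALID_FILE_ATTRIBUTES ) return 0;  /* gone (or never existed) */
        Sleep(100);                                    /* give the other process time */
      }while( ++cnt < MX_DELETION_ATTEMPTS );
      return 1;
    }

    int main(void){
      return deleteWithRetry(L"example-journal.tmp");
    }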
+*/ +static int winFullPathname( + sqlite3_vfs *pVfs, /* Pointer to vfs object */ + const char *zRelative, /* Possibly relative input path */ + int nFull, /* Size of output buffer in bytes */ + char *zFull /* Output buffer */ +){ + +#if defined(__CYGWIN__) + cygwin_conv_to_full_win32_path(zRelative, zFull); + return SQLITE_OK; +#endif + +#if OS_WINCE + /* WinCE has no concept of a relative pathname, or so I am told. */ + sqlite3_snprintf(pVfs->mxPathname, zFull, "%s", zRelative); + return SQLITE_OK; +#endif + +#if !OS_WINCE && !defined(__CYGWIN__) + int nByte; + void *zConverted; + char *zOut; + zConverted = convertUtf8Filename(zRelative); + if( isNT() ){ + WCHAR *zTemp; + nByte = GetFullPathNameW((WCHAR*)zConverted, 0, 0, 0) + 3; + zTemp = malloc( nByte*sizeof(zTemp[0]) ); + if( zTemp==0 ){ + free(zConverted); + return SQLITE_NOMEM; + } + GetFullPathNameW((WCHAR*)zConverted, nByte, zTemp, 0); + free(zConverted); + zOut = unicodeToUtf8(zTemp); + free(zTemp); + }else{ + char *zTemp; + nByte = GetFullPathNameA((char*)zConverted, 0, 0, 0) + 3; + zTemp = malloc( nByte*sizeof(zTemp[0]) ); + if( zTemp==0 ){ + free(zConverted); + return SQLITE_NOMEM; + } + GetFullPathNameA((char*)zConverted, nByte, zTemp, 0); + free(zConverted); + zOut = mbcsToUtf8(zTemp); + free(zTemp); + } + if( zOut ){ + sqlite3_snprintf(pVfs->mxPathname, zFull, "%s", zOut); + free(zOut); + return SQLITE_OK; + }else{ + return SQLITE_NOMEM; + } +#endif +} + +#ifndef SQLITE_OMIT_LOAD_EXTENSION +/* +** Interfaces for opening a shared library, finding entry points +** within the shared library, and closing the shared library. +*/ +/* +** Interfaces for opening a shared library, finding entry points +** within the shared library, and closing the shared library. +*/ +static void *winDlOpen(sqlite3_vfs *pVfs, const char *zFilename){ + HANDLE h; + void *zConverted = convertUtf8Filename(zFilename); + if( zConverted==0 ){ + return 0; + } + if( isNT() ){ + h = LoadLibraryW((WCHAR*)zConverted); + }else{ +#if OS_WINCE + return 0; +#else + h = LoadLibraryA((char*)zConverted); +#endif + } + free(zConverted); + return (void*)h; +} +static void winDlError(sqlite3_vfs *pVfs, int nBuf, char *zBufOut){ +#if OS_WINCE + int error = GetLastError(); + if( error>0x7FFFFFF ){ + sqlite3_snprintf(nBuf, zBufOut, "OsError 0x%x", error); + }else{ + sqlite3_snprintf(nBuf, zBufOut, "OsError %d", error); + } +#else + FormatMessageA( + FORMAT_MESSAGE_FROM_SYSTEM, + NULL, + GetLastError(), + 0, + zBufOut, + nBuf-1, + 0 + ); +#endif +} +void *winDlSym(sqlite3_vfs *pVfs, void *pHandle, const char *zSymbol){ +#if OS_WINCE + /* The GetProcAddressA() routine is only available on wince. */ + return GetProcAddressA((HANDLE)pHandle, zSymbol); +#else + /* All other windows platforms expect GetProcAddress() to take + ** an Ansi string regardless of the _UNICODE setting */ + return GetProcAddress((HANDLE)pHandle, zSymbol); +#endif +} +void winDlClose(sqlite3_vfs *pVfs, void *pHandle){ + FreeLibrary((HANDLE)pHandle); +} +#else /* if SQLITE_OMIT_LOAD_EXTENSION is defined: */ + #define winDlOpen 0 + #define winDlError 0 + #define winDlSym 0 + #define winDlClose 0 +#endif + + +/* +** Write up to nBuf bytes of randomness into zBuf. 
+*/ +static int winRandomness(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ + int n = 0; + if( sizeof(SYSTEMTIME)<=nBuf-n ){ + SYSTEMTIME x; + GetSystemTime(&x); + memcpy(&zBuf[n], &x, sizeof(x)); + n += sizeof(x); + } + if( sizeof(DWORD)<=nBuf-n ){ + DWORD pid = GetCurrentProcessId(); + memcpy(&zBuf[n], &pid, sizeof(pid)); + n += sizeof(pid); + } + if( sizeof(DWORD)<=nBuf-n ){ + DWORD cnt = GetTickCount(); + memcpy(&zBuf[n], &cnt, sizeof(cnt)); + n += sizeof(cnt); + } + if( sizeof(LARGE_INTEGER)<=nBuf-n ){ + LARGE_INTEGER i; + QueryPerformanceCounter(&i); + memcpy(&zBuf[n], &i, sizeof(i)); + n += sizeof(i); + } + return n; +} + + +/* +** Sleep for a little while. Return the amount of time slept. +*/ +static int winSleep(sqlite3_vfs *pVfs, int microsec){ + Sleep((microsec+999)/1000); + return ((microsec+999)/1000)*1000; +} + +/* +** The following variable, if set to a non-zero value, becomes the result +** returned from sqlite3OsCurrentTime(). This is used for testing. +*/ +#ifdef SQLITE_TEST +int sqlite3_current_time = 0; +#endif + +/* +** Find the current time (in Universal Coordinated Time). Write the +** current time and date as a Julian Day number into *prNow and +** return 0. Return 1 if the time and date cannot be found. +*/ +int winCurrentTime(sqlite3_vfs *pVfs, double *prNow){ + FILETIME ft; + /* FILETIME structure is a 64-bit value representing the number of + 100-nanosecond intervals since January 1, 1601 (= JD 2305813.5). + */ + double now; +#if OS_WINCE + SYSTEMTIME time; + GetSystemTime(&time); + SystemTimeToFileTime(&time,&ft); +#else + GetSystemTimeAsFileTime( &ft ); +#endif + now = ((double)ft.dwHighDateTime) * 4294967296.0; + *prNow = (now + ft.dwLowDateTime)/864000000000.0 + 2305813.5; +#ifdef SQLITE_TEST + if( sqlite3_current_time ){ + *prNow = sqlite3_current_time/86400.0 + 2440587.5; + } +#endif + return 0; +} + + +/* +** Return a pointer to the sqlite3DefaultVfs structure. We use +** a function rather than give the structure global scope because +** some compilers (MSVC) do not allow forward declarations of +** initialized structures. +*/ +sqlite3_vfs *sqlite3OsDefaultVfs(void){ + static sqlite3_vfs winVfs = { + 1, /* iVersion */ + sizeof(winFile), /* szOsFile */ + MAX_PATH, /* mxPathname */ + 0, /* pNext */ + "win32", /* zName */ + 0, /* pAppData */ + + winOpen, /* xOpen */ + winDelete, /* xDelete */ + winAccess, /* xAccess */ + winGetTempname, /* xGetTempName */ + winFullPathname, /* xFullPathname */ + winDlOpen, /* xDlOpen */ + winDlError, /* xDlError */ + winDlSym, /* xDlSym */ + winDlClose, /* xDlClose */ + winRandomness, /* xRandomness */ + winSleep, /* xSleep */ + winCurrentTime /* xCurrentTime */ + }; + + return &winVfs; +} + +#endif /* OS_WIN */ Added: external/sqlite-source-3.5.7.x/pager.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/pager.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,5173 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This is the implementation of the page cache subsystem or "pager". +** +** The pager is used to access a database disk file. 
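winCurrentTime() maps a FILETIME (100-nanosecond ticks since 1601-01-01, Julian Day 2305813.5) to a Julian Day number, while the WinCE localtime() shim earlier uses 11644473600 seconds between 1601 and the Unix epoch. The two constants agree: (2440587.5 - 2305813.5) days times 86400 is exactly 11644473600. A small check, no Windows headers required:

    #include <assert.h>
    #include <stdio.h>

    int main(void){
      /* Days between 1601-01-01 (JD 2305813.5) and 1970-01-01 (JD 2440587.5). */
      double days = 2440587.5 - 2305813.5;
      long long secs = (long long)(days*86400.0);
      assert( secs==11644473600LL );            /* constant used by the WinCE shim */

      /* A FILETIME of that many seconds, in 100ns ticks, is the Unix epoch. */
      double ft = 11644473600.0 * 10000000.0;   /* 100-nanosecond intervals */
      double jd = ft/864000000000.0 + 2305813.5;
      printf("Unix epoch as Julian Day: %.1f\n", jd);  /* prints 2440587.5 */
      return 0;
    }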
It implements +** atomic commit and rollback through the use of a journal file that +** is separate from the database file. The pager also implements file +** locking to prevent two processes from writing the same database +** file simultaneously, or one process from reading the database while +** another is writing. +** +** @(#) $Id: pager.c,v 1.417 2008/03/17 13:50:58 drh Exp $ +*/ +#ifndef SQLITE_OMIT_DISKIO +#include "sqliteInt.h" +#include +#include + +/* +** Macros for troubleshooting. Normally turned off +*/ +#if 0 +#define sqlite3DebugPrintf printf +#define PAGERTRACE1(X) sqlite3DebugPrintf(X) +#define PAGERTRACE2(X,Y) sqlite3DebugPrintf(X,Y) +#define PAGERTRACE3(X,Y,Z) sqlite3DebugPrintf(X,Y,Z) +#define PAGERTRACE4(X,Y,Z,W) sqlite3DebugPrintf(X,Y,Z,W) +#define PAGERTRACE5(X,Y,Z,W,V) sqlite3DebugPrintf(X,Y,Z,W,V) +#else +#define PAGERTRACE1(X) +#define PAGERTRACE2(X,Y) +#define PAGERTRACE3(X,Y,Z) +#define PAGERTRACE4(X,Y,Z,W) +#define PAGERTRACE5(X,Y,Z,W,V) +#endif + +/* +** The following two macros are used within the PAGERTRACEX() macros above +** to print out file-descriptors. +** +** PAGERID() takes a pointer to a Pager struct as its argument. The +** associated file-descriptor is returned. FILEHANDLEID() takes an sqlite3_file +** struct as its argument. +*/ +#define PAGERID(p) ((int)(p->fd)) +#define FILEHANDLEID(fd) ((int)fd) + +/* +** The page cache as a whole is always in one of the following +** states: +** +** PAGER_UNLOCK The page cache is not currently reading or +** writing the database file. There is no +** data held in memory. This is the initial +** state. +** +** PAGER_SHARED The page cache is reading the database. +** Writing is not permitted. There can be +** multiple readers accessing the same database +** file at the same time. +** +** PAGER_RESERVED This process has reserved the database for writing +** but has not yet made any changes. Only one process +** at a time can reserve the database. The original +** database file has not been modified so other +** processes may still be reading the on-disk +** database file. +** +** PAGER_EXCLUSIVE The page cache is writing the database. +** Access is exclusive. No other processes or +** threads can be reading or writing while one +** process is writing. +** +** PAGER_SYNCED The pager moves to this state from PAGER_EXCLUSIVE +** after all dirty pages have been written to the +** database file and the file has been synced to +** disk. All that remains to do is to remove or +** truncate the journal file and the transaction +** will be committed. +** +** The page cache comes up in PAGER_UNLOCK. The first time a +** sqlite3PagerGet() occurs, the state transitions to PAGER_SHARED. +** After all pages have been released using sqlite_page_unref(), +** the state transitions back to PAGER_UNLOCK. The first time +** that sqlite3PagerWrite() is called, the state transitions to +** PAGER_RESERVED. (Note that sqlite3PagerWrite() can only be +** called on an outstanding page which means that the pager must +** be in PAGER_SHARED before it transitions to PAGER_RESERVED.) +** PAGER_RESERVED means that there is an open rollback journal. +** The transition to PAGER_EXCLUSIVE occurs before any changes +** are made to the database file, though writes to the rollback +** journal occurs with just PAGER_RESERVED. After an sqlite3PagerRollback() +** or sqlite3PagerCommitPhaseTwo(), the state can go back to PAGER_SHARED, +** or it can stay at PAGER_EXCLUSIVE if we are in exclusive access mode. 
+*/ +#define PAGER_UNLOCK 0 +#define PAGER_SHARED 1 /* same as SHARED_LOCK */ +#define PAGER_RESERVED 2 /* same as RESERVED_LOCK */ +#define PAGER_EXCLUSIVE 4 /* same as EXCLUSIVE_LOCK */ +#define PAGER_SYNCED 5 + +/* +** If the SQLITE_BUSY_RESERVED_LOCK macro is set to true at compile-time, +** then failed attempts to get a reserved lock will invoke the busy callback. +** This is off by default. To see why, consider the following scenario: +** +** Suppose thread A already has a shared lock and wants a reserved lock. +** Thread B already has a reserved lock and wants an exclusive lock. If +** both threads are using their busy callbacks, it might be a long time +** be for one of the threads give up and allows the other to proceed. +** But if the thread trying to get the reserved lock gives up quickly +** (if it never invokes its busy callback) then the contention will be +** resolved quickly. +*/ +#ifndef SQLITE_BUSY_RESERVED_LOCK +# define SQLITE_BUSY_RESERVED_LOCK 0 +#endif + +/* +** This macro rounds values up so that if the value is an address it +** is guaranteed to be an address that is aligned to an 8-byte boundary. +*/ +#define FORCE_ALIGNMENT(X) (((X)+7)&~7) + +typedef struct PgHdr PgHdr; + +/* +** Each pager stores all currently unreferenced pages in a list sorted +** in least-recently-used (LRU) order (i.e. the first item on the list has +** not been referenced in a long time, the last item has been recently +** used). An instance of this structure is included as part of each +** pager structure for this purpose (variable Pager.lru). +** +** Additionally, if memory-management is enabled, all unreferenced pages +** are stored in a global LRU list (global variable sqlite3LruPageList). +** +** In both cases, the PagerLruList.pFirstSynced variable points to +** the first page in the corresponding list that does not require an +** fsync() operation before its memory can be reclaimed. If no such +** page exists, PagerLruList.pFirstSynced is set to NULL. +*/ +typedef struct PagerLruList PagerLruList; +struct PagerLruList { + PgHdr *pFirst; /* First page in LRU list */ + PgHdr *pLast; /* Last page in LRU list (the most recently used) */ + PgHdr *pFirstSynced; /* First page in list with PgHdr.needSync==0 */ +}; + +/* +** The following structure contains the next and previous pointers used +** to link a PgHdr structure into a PagerLruList linked list. +*/ +typedef struct PagerLruLink PagerLruLink; +struct PagerLruLink { + PgHdr *pNext; + PgHdr *pPrev; +}; + +/* +** Each in-memory image of a page begins with the following header. +** This header is only visible to this pager module. The client +** code that calls pager sees only the data that follows the header. +** +** Client code should call sqlite3PagerWrite() on a page prior to making +** any modifications to that page. The first time sqlite3PagerWrite() +** is called, the original page contents are written into the rollback +** journal and PgHdr.inJournal and PgHdr.needSync are set. Later, once +** the journal page has made it onto the disk surface, PgHdr.needSync +** is cleared. The modified page cannot be written back into the original +** database file until the journal pages has been synced to disk and the +** PgHdr.needSync has been cleared. +** +** The PgHdr.dirty flag is set when sqlite3PagerWrite() is called and +** is cleared again when the page content is written back to the original +** database file. 
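Editorial aside on the FORCE_ALIGNMENT macro defined a few lines up: adding 7 and masking off the low three bits rounds any size or address up to the next multiple of 8. A minimal self-contained check (not part of the commit):

#include <assert.h>

#define FORCE_ALIGNMENT(X) (((X)+7)&~7)

int main(void){
  assert( FORCE_ALIGNMENT(0)==0 );
  assert( FORCE_ALIGNMENT(1)==8 );
  assert( FORCE_ALIGNMENT(8)==8 );
  assert( FORCE_ALIGNMENT(9)==16 );
  assert( FORCE_ALIGNMENT(1021)==1024 );
  return 0;
}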
+** +** Details of important structure elements: +** +** needSync +** +** If this is true, this means that it is not safe to write the page +** content to the database because the original content needed +** for rollback has not by synced to the main rollback journal. +** The original content may have been written to the rollback journal +** but it has not yet been synced. So we cannot write to the database +** file because power failure might cause the page in the journal file +** to never reach the disk. It is as if the write to the journal file +** does not occur until the journal file is synced. +** +** This flag is false if the page content exactly matches what +** currently exists in the database file. The needSync flag is also +** false if the original content has been written to the main rollback +** journal and synced. If the page represents a new page that has +** been added onto the end of the database during the current +** transaction, the needSync flag is true until the original database +** size in the journal header has been synced to disk. +** +** inJournal +** +** This is true if the original page has been written into the main +** rollback journal. This is always false for new pages added to +** the end of the database file during the current transaction. +** And this flag says nothing about whether or not the journal +** has been synced to disk. For pages that are in the original +** database file, the following expression should always be true: +** +** inJournal = sqlite3BitvecTest(pPager->pInJournal, pgno) +** +** The pPager->pInJournal object is only valid for the original +** pages of the database, not new pages that are added to the end +** of the database, so obviously the above expression cannot be +** valid for new pages. For new pages inJournal is always 0. +** +** dirty +** +** When true, this means that the content of the page has been +** modified and needs to be written back to the database file. +** If false, it means that either the content of the page is +** unchanged or else the content is unimportant and we do not +** care whether or not it is preserved. +** +** alwaysRollback +** +** This means that the sqlite3PagerDontRollback() API should be +** ignored for this page. The DontRollback() API attempts to say +** that the content of the page on disk is unimportant (it is an +** unused page on the freelist) so that it is unnecessary to +** rollback changes to this page because the content of the page +** can change without changing the meaning of the database. This +** flag overrides any DontRollback() attempt. This flag is set +** when a page that originally contained valid data is added to +** the freelist. Later in the same transaction, this page might +** be pulled from the freelist and reused for something different +** and at that point the DontRollback() API will be called because +** pages taken from the freelist do not need to be protected by +** the rollback journal. But this flag says that the page was +** not originally part of the freelist so that it still needs to +** be rolled back in spite of any subsequent DontRollback() calls. +** +** needRead +** +** This flag means (when true) that the content of the page has +** not yet been loaded from disk. The in-memory content is just +** garbage. (Actually, we zero the content, but you should not +** make any assumptions about the content nevertheless.) If the +** content is needed in the future, it should be read from the +** original database file. 
+*/ +struct PgHdr { + Pager *pPager; /* The pager to which this page belongs */ + Pgno pgno; /* The page number for this page */ + PgHdr *pNextHash, *pPrevHash; /* Hash collision chain for PgHdr.pgno */ + PagerLruLink free; /* Next and previous free pages */ + PgHdr *pNextAll; /* A list of all pages */ + u8 inJournal; /* TRUE if has been written to journal */ + u8 dirty; /* TRUE if we need to write back changes */ + u8 needSync; /* Sync journal before writing this page */ + u8 alwaysRollback; /* Disable DontRollback() for this page */ + u8 needRead; /* Read content if PagerWrite() is called */ + short int nRef; /* Number of users of this page */ + PgHdr *pDirty, *pPrevDirty; /* Dirty pages */ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + PagerLruLink gfree; /* Global list of nRef==0 pages */ +#endif +#ifdef SQLITE_CHECK_PAGES + u32 pageHash; +#endif + void *pData; /* Page data */ + /* Pager.nExtra bytes of local data appended to this header */ +}; + +/* +** For an in-memory only database, some extra information is recorded about +** each page so that changes can be rolled back. (Journal files are not +** used for in-memory databases.) The following information is added to +** the end of every EXTRA block for in-memory databases. +** +** This information could have been added directly to the PgHdr structure. +** But then it would take up an extra 8 bytes of storage on every PgHdr +** even for disk-based databases. Splitting it out saves 8 bytes. This +** is only a savings of 0.8% but those percentages add up. +*/ +typedef struct PgHistory PgHistory; +struct PgHistory { + u8 *pOrig; /* Original page text. Restore to this on a full rollback */ + u8 *pStmt; /* Text as it was at the beginning of the current statement */ + PgHdr *pNextStmt, *pPrevStmt; /* List of pages in the statement journal */ + u8 inStmt; /* TRUE if in the statement subjournal */ +}; + +/* +** A macro used for invoking the codec if there is one +*/ +#ifdef SQLITE_HAS_CODEC +# define CODEC1(P,D,N,X) if( P->xCodec!=0 ){ P->xCodec(P->pCodecArg,D,N,X); } +# define CODEC2(P,D,N,X) ((char*)(P->xCodec!=0?P->xCodec(P->pCodecArg,D,N,X):D)) +#else +# define CODEC1(P,D,N,X) /* NO-OP */ +# define CODEC2(P,D,N,X) ((char*)D) +#endif + +/* +** Convert a pointer to a PgHdr into a pointer to its data +** and back again. +*/ +#define PGHDR_TO_DATA(P) ((P)->pData) +#define PGHDR_TO_EXTRA(G,P) ((void*)&((G)[1])) +#define PGHDR_TO_HIST(P,PGR) \ + ((PgHistory*)&((char*)(&(P)[1]))[(PGR)->nExtra]) + +/* +** A open page cache is an instance of the following structure. +** +** Pager.errCode may be set to SQLITE_IOERR, SQLITE_CORRUPT, or +** or SQLITE_FULL. Once one of the first three errors occurs, it persists +** and is returned as the result of every major pager API call. The +** SQLITE_FULL return code is slightly different. It persists only until the +** next successful rollback is performed on the pager cache. Also, +** SQLITE_FULL does not affect the sqlite3PagerGet() and sqlite3PagerLookup() +** APIs, they may still be used successfully. 
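Editorial sketch of the memory layout implied by the PGHDR_TO_EXTRA/PGHDR_TO_HIST macros above: each header allocation is a PgHdr, followed by Pager.nExtra bytes of client data, followed (for in-memory databases only) by a PgHistory, while the page image itself is a separate allocation reached through PgHdr.pData. The stand-in structs below are deliberately simplified and are not the real PgHdr/PgHistory definitions.

#include <stddef.h>
#include <stdio.h>

typedef struct HdrStandIn  { void *pData; unsigned int pgno; short nRef; } HdrStandIn;
typedef struct HistStandIn { void *pOrig; void *pStmt; } HistStandIn;

static size_t headerAllocSize(int nExtra, int isMemDb){
  return sizeof(HdrStandIn) + (size_t)nExtra
       + (isMemDb ? sizeof(HistStandIn) : 0);
}

int main(void){
  printf("disk-backed header allocation: %zu bytes\n", headerAllocSize(64, 0));
  printf("in-memory   header allocation: %zu bytes\n", headerAllocSize(64, 1));
  return 0;
}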
+*/ +struct Pager { + sqlite3_vfs *pVfs; /* OS functions to use for IO */ + u8 journalOpen; /* True if journal file descriptors is valid */ + u8 journalStarted; /* True if header of journal is synced */ + u8 useJournal; /* Use a rollback journal on this file */ + u8 noReadlock; /* Do not bother to obtain readlocks */ + u8 stmtOpen; /* True if the statement subjournal is open */ + u8 stmtInUse; /* True we are in a statement subtransaction */ + u8 stmtAutoopen; /* Open stmt journal when main journal is opened*/ + u8 noSync; /* Do not sync the journal if true */ + u8 fullSync; /* Do extra syncs of the journal for robustness */ + u8 sync_flags; /* One of SYNC_NORMAL or SYNC_FULL */ + u8 state; /* PAGER_UNLOCK, _SHARED, _RESERVED, etc. */ + u8 tempFile; /* zFilename is a temporary file */ + u8 readOnly; /* True for a read-only database */ + u8 needSync; /* True if an fsync() is needed on the journal */ + u8 dirtyCache; /* True if cached pages have changed */ + u8 alwaysRollback; /* Disable DontRollback() for all pages */ + u8 memDb; /* True to inhibit all file I/O */ + u8 setMaster; /* True if a m-j name has been written to jrnl */ + u8 doNotSync; /* Boolean. While true, do not spill the cache */ + u8 exclusiveMode; /* Boolean. True if locking_mode==EXCLUSIVE */ + u8 changeCountDone; /* Set after incrementing the change-counter */ + u32 vfsFlags; /* Flags for sqlite3_vfs.xOpen() */ + int errCode; /* One of several kinds of errors */ + int dbSize; /* Number of pages in the file */ + int origDbSize; /* dbSize before the current change */ + int stmtSize; /* Size of database (in pages) at stmt_begin() */ + int nRec; /* Number of pages written to the journal */ + u32 cksumInit; /* Quasi-random value added to every checksum */ + int stmtNRec; /* Number of records in stmt subjournal */ + int nExtra; /* Add this many bytes to each in-memory page */ + int pageSize; /* Number of bytes in a page */ + int nPage; /* Total number of in-memory pages */ + int nRef; /* Number of in-memory pages with PgHdr.nRef>0 */ + int mxPage; /* Maximum number of pages to hold in cache */ + Pgno mxPgno; /* Maximum allowed size of the database */ + Bitvec *pInJournal; /* One bit for each page in the database file */ + Bitvec *pInStmt; /* One bit for each page in the database */ + char *zFilename; /* Name of the database file */ + char *zJournal; /* Name of the journal file */ + char *zDirectory; /* Directory hold database and journal files */ + char *zStmtJrnl; /* Name of the statement journal file */ + sqlite3_file *fd, *jfd; /* File descriptors for database and journal */ + sqlite3_file *stfd; /* File descriptor for the statement subjournal*/ + BusyHandler *pBusyHandler; /* Pointer to sqlite.busyHandler */ + PagerLruList lru; /* LRU list of free pages */ + PgHdr *pAll; /* List of all pages */ + PgHdr *pStmt; /* List of pages in the statement subjournal */ + PgHdr *pDirty; /* List of all dirty pages */ + i64 journalOff; /* Current byte offset in the journal file */ + i64 journalHdr; /* Byte offset to previous journal header */ + i64 stmtHdrOff; /* First journal header written this statement */ + i64 stmtCksum; /* cksumInit when statement was started */ + i64 stmtJSize; /* Size of journal at stmt_begin() */ + int sectorSize; /* Assumed sector size during rollback */ +#ifdef SQLITE_TEST + int nHit, nMiss; /* Cache hits and missing */ + int nRead, nWrite; /* Database pages read/written */ +#endif + void (*xDestructor)(DbPage*,int); /* Call this routine when freeing pages */ + void (*xReiniter)(DbPage*,int); /* Call this routine 
when reloading pages */ +#ifdef SQLITE_HAS_CODEC + void *(*xCodec)(void*,void*,Pgno,int); /* Routine for en/decoding data */ + void *pCodecArg; /* First argument to xCodec() */ +#endif + int nHash; /* Size of the pager hash table */ + PgHdr **aHash; /* Hash table to map page number to PgHdr */ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + Pager *pNext; /* Doubly linked list of pagers on which */ + Pager *pPrev; /* sqlite3_release_memory() will work */ + int iInUseMM; /* Non-zero if unavailable to MM */ + int iInUseDB; /* Non-zero if in sqlite3_release_memory() */ +#endif + char *pTmpSpace; /* Pager.pageSize bytes of space for tmp use */ + char dbFileVers[16]; /* Changes whenever database file changes */ +}; + +/* +** The following global variables hold counters used for +** testing purposes only. These variables do not exist in +** a non-testing build. These variables are not thread-safe. +*/ +#ifdef SQLITE_TEST +int sqlite3_pager_readdb_count = 0; /* Number of full pages read from DB */ +int sqlite3_pager_writedb_count = 0; /* Number of full pages written to DB */ +int sqlite3_pager_writej_count = 0; /* Number of pages written to journal */ +int sqlite3_pager_pgfree_count = 0; /* Number of cache pages freed */ +# define PAGER_INCR(v) v++ +#else +# define PAGER_INCR(v) +#endif + +/* +** The following variable points to the head of a double-linked list +** of all pagers that are eligible for page stealing by the +** sqlite3_release_memory() interface. Access to this list is +** protected by the SQLITE_MUTEX_STATIC_MEM2 mutex. +*/ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT +static Pager *sqlite3PagerList = 0; +static PagerLruList sqlite3LruPageList = {0, 0, 0}; +#endif + + +/* +** Journal files begin with the following magic string. The data +** was obtained from /dev/random. It is used only as a sanity check. +** +** Since version 2.8.0, the journal format contains additional sanity +** checking information. If the power fails while the journal is begin +** written, semi-random garbage data might appear in the journal +** file after power is restored. If an attempt is then made +** to roll the journal back, the database could be corrupted. The additional +** sanity checking data is an attempt to discover the garbage in the +** journal and ignore it. +** +** The sanity checking information for the new journal format consists +** of a 32-bit checksum on each page of data. The checksum covers both +** the page number and the pPager->pageSize bytes of data for the page. +** This cksum is initialized to a 32-bit random value that appears in the +** journal file right after the header. The random initializer is important, +** because garbage data that appears at the end of a journal is likely +** data that was once in other files that have now been deleted. If the +** garbage data came from an obsolete journal file, the checksums might +** be correct. But by initializing the checksum to random value which +** is different for every journal, we minimize that risk. +*/ +static const unsigned char aJournalMagic[] = { + 0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, 0xd7, +}; + +/* +** The size of the header and of each page in the journal is determined +** by the following macros. +*/ +#define JOURNAL_PG_SZ(pPager) ((pPager->pageSize) + 8) + +/* +** The journal header size for this pager. In the future, this could be +** set to some value read from the disk controller. The important +** characteristic is that it is the same size as a disk sector. 
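Editorial arithmetic for the JOURNAL_PG_SZ macro above: each journal record is the page image plus 8 bytes of framing (a 4-byte page number in front and a 4-byte checksum behind, as the playback code later in this file reads back), and each journal header is padded out to a full sector. Assuming the common 1024-byte page and 512-byte sector, the journal for a single changed page comes to 1544 bytes, the same quantity jrnlBufferSize() computes later in this file for the atomic-write case.

#include <stdio.h>

int main(void){
  int pageSize   = 1024;                 /* assumed page size   */
  int sectorSize = 512;                  /* assumed sector size */
  int journalPgSz  = pageSize + 8;       /* 4-byte pgno + page + 4-byte cksum */
  int journalHdrSz = sectorSize;         /* header padded to a full sector    */
  printf("one-page journal: %d bytes\n", journalHdrSz + journalPgSz);
  return 0;
}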
+*/ +#define JOURNAL_HDR_SZ(pPager) (pPager->sectorSize) + +/* +** The macro MEMDB is true if we are dealing with an in-memory database. +** We do this as a macro so that if the SQLITE_OMIT_MEMORYDB macro is set, +** the value of MEMDB will be a constant and the compiler will optimize +** out code that would never execute. +*/ +#ifdef SQLITE_OMIT_MEMORYDB +# define MEMDB 0 +#else +# define MEMDB pPager->memDb +#endif + +/* +** Page number PAGER_MJ_PGNO is never used in an SQLite database (it is +** reserved for working around a windows/posix incompatibility). It is +** used in the journal to signify that the remainder of the journal file +** is devoted to storing a master journal name - there are no more pages to +** roll back. See comments for function writeMasterJournal() for details. +*/ +/* #define PAGER_MJ_PGNO(x) (PENDING_BYTE/((x)->pageSize)) */ +#define PAGER_MJ_PGNO(x) ((PENDING_BYTE/((x)->pageSize))+1) + +/* +** The maximum legal page number is (2^31 - 1). +*/ +#define PAGER_MAX_PGNO 2147483647 + +/* +** The pagerEnter() and pagerLeave() routines acquire and release +** a mutex on each pager. The mutex is recursive. +** +** This is a special-purpose mutex. It only provides mutual exclusion +** between the Btree and the Memory Management sqlite3_release_memory() +** function. It does not prevent, for example, two Btrees from accessing +** the same pager at the same time. Other general-purpose mutexes in +** the btree layer handle that chore. +*/ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + static void pagerEnter(Pager *p){ + p->iInUseDB++; + if( p->iInUseMM && p->iInUseDB==1 ){ + sqlite3_mutex *mutex; + mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM2); + p->iInUseDB = 0; + sqlite3_mutex_enter(mutex); + p->iInUseDB = 1; + sqlite3_mutex_leave(mutex); + } + assert( p->iInUseMM==0 ); + } + static void pagerLeave(Pager *p){ + p->iInUseDB--; + assert( p->iInUseDB>=0 ); + } +#else +# define pagerEnter(X) +# define pagerLeave(X) +#endif + +/* +** Add page pPg to the end of the linked list managed by structure +** pList (pPg becomes the last entry in the list - the most recently +** used). Argument pLink should point to either pPg->free or pPg->gfree, +** depending on whether pPg is being added to the pager-specific or +** global LRU list. +*/ +static void listAdd(PagerLruList *pList, PagerLruLink *pLink, PgHdr *pPg){ + pLink->pNext = 0; + pLink->pPrev = pList->pLast; + +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + assert(pLink==&pPg->free || pLink==&pPg->gfree); + assert(pLink==&pPg->gfree || pList!=&sqlite3LruPageList); +#endif + + if( pList->pLast ){ + int iOff = (char *)pLink - (char *)pPg; + PagerLruLink *pLastLink = (PagerLruLink *)(&((u8 *)pList->pLast)[iOff]); + pLastLink->pNext = pPg; + }else{ + assert(!pList->pFirst); + pList->pFirst = pPg; + } + + pList->pLast = pPg; + if( !pList->pFirstSynced && pPg->needSync==0 ){ + pList->pFirstSynced = pPg; + } +} + +/* +** Remove pPg from the list managed by the structure pointed to by pList. +** +** Argument pLink should point to either pPg->free or pPg->gfree, depending +** on whether pPg is being added to the pager-specific or global LRU list. 
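Editorial sketch of the pointer arithmetic used by listAdd() above (and listRemove() below): one routine can walk either the per-pager list threaded through PgHdr.free or the global list threaded through PgHdr.gfree because it records the byte offset of the link it was handed and re-applies that offset to neighbouring nodes, a hand-rolled variant of the offsetof/container_of idiom. The Node/Link names below are illustrative only.

#include <assert.h>
#include <stddef.h>

typedef struct Node Node;
typedef struct Link { Node *pNext; Node *pPrev; } Link;
struct Node { int id; Link free; Link gfree; };

static Link *linkAt(Node *pNode, size_t iOff){
  return (Link *)((char *)pNode + iOff);
}

int main(void){
  Node a = {1, {0,0}, {0,0}}, b = {2, {0,0}, {0,0}};
  size_t iOff = (char *)&a.gfree - (char *)&a;   /* offset of the chosen link */
  linkAt(&a, iOff)->pNext = &b;                  /* a.gfree.pNext = &b        */
  linkAt(&b, iOff)->pPrev = &a;                  /* b.gfree.pPrev = &a        */
  assert( a.gfree.pNext==&b && b.gfree.pPrev==&a );
  assert( a.free.pNext==0 );                     /* the other list untouched  */
  return 0;
}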
+*/ +static void listRemove(PagerLruList *pList, PagerLruLink *pLink, PgHdr *pPg){ + int iOff = (char *)pLink - (char *)pPg; + +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + assert(pLink==&pPg->free || pLink==&pPg->gfree); + assert(pLink==&pPg->gfree || pList!=&sqlite3LruPageList); +#endif + + if( pPg==pList->pFirst ){ + pList->pFirst = pLink->pNext; + } + if( pPg==pList->pLast ){ + pList->pLast = pLink->pPrev; + } + if( pLink->pPrev ){ + PagerLruLink *pPrevLink = (PagerLruLink *)(&((u8 *)pLink->pPrev)[iOff]); + pPrevLink->pNext = pLink->pNext; + } + if( pLink->pNext ){ + PagerLruLink *pNextLink = (PagerLruLink *)(&((u8 *)pLink->pNext)[iOff]); + pNextLink->pPrev = pLink->pPrev; + } + if( pPg==pList->pFirstSynced ){ + PgHdr *p = pLink->pNext; + while( p && p->needSync ){ + PagerLruLink *pL = (PagerLruLink *)(&((u8 *)p)[iOff]); + p = pL->pNext; + } + pList->pFirstSynced = p; + } + + pLink->pNext = pLink->pPrev = 0; +} + +/* +** Add page pPg to the list of free pages for the pager. If +** memory-management is enabled, also add the page to the global +** list of free pages. +*/ +static void lruListAdd(PgHdr *pPg){ + listAdd(&pPg->pPager->lru, &pPg->free, pPg); +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + if( !pPg->pPager->memDb ){ + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + listAdd(&sqlite3LruPageList, &pPg->gfree, pPg); + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + } +#endif +} + +/* +** Remove page pPg from the list of free pages for the associated pager. +** If memory-management is enabled, also remove pPg from the global list +** of free pages. +*/ +static void lruListRemove(PgHdr *pPg){ + listRemove(&pPg->pPager->lru, &pPg->free, pPg); +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + if( !pPg->pPager->memDb ){ + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + listRemove(&sqlite3LruPageList, &pPg->gfree, pPg); + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + } +#endif +} + +/* +** This function is called just after the needSync flag has been cleared +** from all pages managed by pPager (usually because the journal file +** has just been synced). It updates the pPager->lru.pFirstSynced variable +** and, if memory-management is enabled, the sqlite3LruPageList.pFirstSynced +** variable also. +*/ +static void lruListSetFirstSynced(Pager *pPager){ + pPager->lru.pFirstSynced = pPager->lru.pFirst; +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + if( !pPager->memDb ){ + PgHdr *p; + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + for(p=sqlite3LruPageList.pFirst; p && p->needSync; p=p->gfree.pNext); + assert(p==pPager->lru.pFirstSynced || p==sqlite3LruPageList.pFirstSynced); + sqlite3LruPageList.pFirstSynced = p; + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + } +#endif +} + +/* +** Return true if page *pPg has already been written to the statement +** journal (or statement snapshot has been created, if *pPg is part +** of an in-memory database). +*/ +static int pageInStatement(PgHdr *pPg){ + Pager *pPager = pPg->pPager; + if( MEMDB ){ + return PGHDR_TO_HIST(pPg, pPager)->inStmt; + }else{ + return sqlite3BitvecTest(pPager->pInStmt, pPg->pgno); + } +} + +/* +** Change the size of the pager hash table to N. N must be a power +** of two. 
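Editorial note on the power-of-two requirement stated above: because the hash table size N is a power of two, pager_lookup() later in the file can select a bucket with "pgno & (N-1)" instead of a modulo. A quick self-contained check of that equivalence:

#include <assert.h>

int main(void){
  unsigned int N = 256;                  /* must be a power of two */
  unsigned int pgno;
  for(pgno=1; pgno<=5000; pgno++){
    assert( (pgno & (N-1)) == (pgno % N) );
  }
  return 0;
}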
+*/ +static void pager_resize_hash_table(Pager *pPager, int N){ + PgHdr **aHash, *pPg; + assert( N>0 && (N&(N-1))==0 ); +#ifdef SQLITE_MALLOC_SOFT_LIMIT + if( N*sizeof(aHash[0])>SQLITE_MALLOC_SOFT_LIMIT ){ + N = SQLITE_MALLOC_SOFT_LIMIT/sizeof(aHash[0]); + } + if( N==pPager->nHash ) return; +#endif + pagerLeave(pPager); + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, pPager->aHash!=0); + aHash = sqlite3MallocZero( sizeof(aHash[0])*N ); + sqlite3FaultBenign(SQLITE_FAULTINJECTOR_MALLOC, 0); + pagerEnter(pPager); + if( aHash==0 ){ + /* Failure to rehash is not an error. It is only a performance hit. */ + return; + } + sqlite3_free(pPager->aHash); + pPager->nHash = N; + pPager->aHash = aHash; + for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){ + int h; + if( pPg->pgno==0 ){ + assert( pPg->pNextHash==0 && pPg->pPrevHash==0 ); + continue; + } + h = pPg->pgno & (N-1); + pPg->pNextHash = aHash[h]; + if( aHash[h] ){ + aHash[h]->pPrevHash = pPg; + } + aHash[h] = pPg; + pPg->pPrevHash = 0; + } +} + +/* +** Read a 32-bit integer from the given file descriptor. Store the integer +** that is read in *pRes. Return SQLITE_OK if everything worked, or an +** error code is something goes wrong. +** +** All values are stored on disk as big-endian. +*/ +static int read32bits(sqlite3_file *fd, i64 offset, u32 *pRes){ + unsigned char ac[4]; + int rc = sqlite3OsRead(fd, ac, sizeof(ac), offset); + if( rc==SQLITE_OK ){ + *pRes = sqlite3Get4byte(ac); + } + return rc; +} + +/* +** Write a 32-bit integer into a string buffer in big-endian byte order. +*/ +#define put32bits(A,B) sqlite3Put4byte((u8*)A,B) + +/* +** Write a 32-bit integer into the given file descriptor. Return SQLITE_OK +** on success or an error code is something goes wrong. +*/ +static int write32bits(sqlite3_file *fd, i64 offset, u32 val){ + char ac[4]; + put32bits(ac, val); + return sqlite3OsWrite(fd, ac, 4, offset); +} + +/* +** If file pFd is open, call sqlite3OsUnlock() on it. +*/ +static int osUnlock(sqlite3_file *pFd, int eLock){ + if( !pFd->pMethods ){ + return SQLITE_OK; + } + return sqlite3OsUnlock(pFd, eLock); +} + +/* +** This function determines whether or not the atomic-write optimization +** can be used with this pager. The optimization can be used if: +** +** (a) the value returned by OsDeviceCharacteristics() indicates that +** a database page may be written atomically, and +** (b) the value returned by OsSectorSize() is less than or equal +** to the page size. +** +** If the optimization cannot be used, 0 is returned. If it can be used, +** then the value returned is the size of the journal file when it +** contains rollback data for exactly one page. +*/ +#ifdef SQLITE_ENABLE_ATOMIC_WRITE +static int jrnlBufferSize(Pager *pPager){ + int dc; /* Device characteristics */ + int nSector; /* Sector size */ + int nPage; /* Page size */ + sqlite3_file *fd = pPager->fd; + + if( fd->pMethods ){ + dc = sqlite3OsDeviceCharacteristics(fd); + nSector = sqlite3OsSectorSize(fd); + nPage = pPager->pageSize; + } + + assert(SQLITE_IOCAP_ATOMIC512==(512>>8)); + assert(SQLITE_IOCAP_ATOMIC64K==(65536>>8)); + + if( !fd->pMethods || (dc&(SQLITE_IOCAP_ATOMIC|(nPage>>8))&&nSector<=nPage) ){ + return JOURNAL_HDR_SZ(pPager) + JOURNAL_PG_SZ(pPager); + } + return 0; +} +#endif + +/* +** This function should be called when an error occurs within the pager +** code. The first argument is a pointer to the pager structure, the +** second the error-code about to be returned by a pager API function. 
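Editorial sketch of the byte order used by read32bits()/put32bits() above: every multi-byte integer in the journal is stored big-endian. The helpers below are local stand-ins, not the library's sqlite3Get4byte()/sqlite3Put4byte():

#include <assert.h>

static void be32put(unsigned char *p, unsigned int v){
  p[0] = (unsigned char)(v>>24);
  p[1] = (unsigned char)(v>>16);
  p[2] = (unsigned char)(v>>8);
  p[3] = (unsigned char)(v);
}
static unsigned int be32get(const unsigned char *p){
  return ((unsigned int)p[0]<<24)|((unsigned int)p[1]<<16)
       | ((unsigned int)p[2]<<8) | (unsigned int)p[3];
}

int main(void){
  unsigned char buf[4];
  be32put(buf, 0x01020304u);
  assert( buf[0]==0x01 && buf[3]==0x04 );    /* most significant byte first */
  assert( be32get(buf)==0x01020304u );
  return 0;
}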
+** The value returned is a copy of the second argument to this function. +** +** If the second argument is SQLITE_IOERR, SQLITE_CORRUPT, or SQLITE_FULL +** the error becomes persistent. Until the persisten error is cleared, +** subsequent API calls on this Pager will immediately return the same +** error code. +** +** A persistent error indicates that the contents of the pager-cache +** cannot be trusted. This state can be cleared by completely discarding +** the contents of the pager-cache. If a transaction was active when +** the persistent error occured, then the rollback journal may need +** to be replayed. +*/ +static void pager_unlock(Pager *pPager); +static int pager_error(Pager *pPager, int rc){ + int rc2 = rc & 0xff; + assert( + pPager->errCode==SQLITE_FULL || + pPager->errCode==SQLITE_OK || + (pPager->errCode & 0xff)==SQLITE_IOERR + ); + if( + rc2==SQLITE_FULL || + rc2==SQLITE_IOERR || + rc2==SQLITE_CORRUPT + ){ + pPager->errCode = rc; + if( pPager->state==PAGER_UNLOCK && pPager->nRef==0 ){ + /* If the pager is already unlocked, call pager_unlock() now to + ** clear the error state and ensure that the pager-cache is + ** completely empty. + */ + pager_unlock(pPager); + } + } + return rc; +} + +/* +** If SQLITE_CHECK_PAGES is defined then we do some sanity checking +** on the cache using a hash function. This is used for testing +** and debugging only. +*/ +#ifdef SQLITE_CHECK_PAGES +/* +** Return a 32-bit hash of the page data for pPage. +*/ +static u32 pager_datahash(int nByte, unsigned char *pData){ + u32 hash = 0; + int i; + for(i=0; ipPager->pageSize, + (unsigned char *)PGHDR_TO_DATA(pPage)); +} + +/* +** The CHECK_PAGE macro takes a PgHdr* as an argument. If SQLITE_CHECK_PAGES +** is defined, and NDEBUG is not defined, an assert() statement checks +** that the page is either dirty or still matches the calculated page-hash. +*/ +#define CHECK_PAGE(x) checkPage(x) +static void checkPage(PgHdr *pPg){ + Pager *pPager = pPg->pPager; + assert( !pPg->pageHash || pPager->errCode || MEMDB || pPg->dirty || + pPg->pageHash==pager_pagehash(pPg) ); +} + +#else +#define pager_datahash(X,Y) 0 +#define pager_pagehash(X) 0 +#define CHECK_PAGE(x) +#endif + +/* +** When this is called the journal file for pager pPager must be open. +** The master journal file name is read from the end of the file and +** written into memory supplied by the caller. +** +** zMaster must point to a buffer of at least nMaster bytes allocated by +** the caller. This should be sqlite3_vfs.mxPathname+1 (to ensure there is +** enough space to write the master journal name). If the master journal +** name in the journal is longer than nMaster bytes (including a +** nul-terminator), then this is handled as if no master journal name +** were present in the journal. +** +** If no master journal file name is present zMaster[0] is set to 0 and +** SQLITE_OK returned. 
+*/ +static int readMasterJournal(sqlite3_file *pJrnl, char *zMaster, int nMaster){ + int rc; + u32 len; + i64 szJ; + u32 cksum; + int i; + unsigned char aMagic[8]; /* A buffer to hold the magic header */ + + zMaster[0] = '\0'; + + rc = sqlite3OsFileSize(pJrnl, &szJ); + if( rc!=SQLITE_OK || szJ<16 ) return rc; + + rc = read32bits(pJrnl, szJ-16, &len); + if( rc!=SQLITE_OK ) return rc; + + if( len>=nMaster ){ + return SQLITE_OK; + } + + rc = read32bits(pJrnl, szJ-12, &cksum); + if( rc!=SQLITE_OK ) return rc; + + rc = sqlite3OsRead(pJrnl, aMagic, 8, szJ-8); + if( rc!=SQLITE_OK || memcmp(aMagic, aJournalMagic, 8) ) return rc; + + rc = sqlite3OsRead(pJrnl, zMaster, len, szJ-16-len); + if( rc!=SQLITE_OK ){ + return rc; + } + zMaster[len] = '\0'; + + /* See if the checksum matches the master journal name */ + for(i=0; ijournalOff; + if( c ){ + offset = ((c-1)/JOURNAL_HDR_SZ(pPager) + 1) * JOURNAL_HDR_SZ(pPager); + } + assert( offset%JOURNAL_HDR_SZ(pPager)==0 ); + assert( offset>=c ); + assert( (offset-c)journalOff = offset; +} + +/* +** The journal file must be open when this routine is called. A journal +** header (JOURNAL_HDR_SZ bytes) is written into the journal file at the +** current location. +** +** The format for the journal header is as follows: +** - 8 bytes: Magic identifying journal format. +** - 4 bytes: Number of records in journal, or -1 no-sync mode is on. +** - 4 bytes: Random number used for page hash. +** - 4 bytes: Initial database page count. +** - 4 bytes: Sector size used by the process that wrote this journal. +** +** Followed by (JOURNAL_HDR_SZ - 24) bytes of unused space. +*/ +static int writeJournalHdr(Pager *pPager){ + char zHeader[sizeof(aJournalMagic)+16]; + int rc; + + if( pPager->stmtHdrOff==0 ){ + pPager->stmtHdrOff = pPager->journalOff; + } + + seekJournalHdr(pPager); + pPager->journalHdr = pPager->journalOff; + + memcpy(zHeader, aJournalMagic, sizeof(aJournalMagic)); + + /* + ** Write the nRec Field - the number of page records that follow this + ** journal header. Normally, zero is written to this value at this time. + ** After the records are added to the journal (and the journal synced, + ** if in full-sync mode), the zero is overwritten with the true number + ** of records (see syncJournal()). + ** + ** A faster alternative is to write 0xFFFFFFFF to the nRec field. When + ** reading the journal this value tells SQLite to assume that the + ** rest of the journal file contains valid page records. This assumption + ** is dangerous, as if a failure occured whilst writing to the journal + ** file it may contain some garbage data. There are two scenarios + ** where this risk can be ignored: + ** + ** * When the pager is in no-sync mode. Corruption can follow a + ** power failure in this case anyway. + ** + ** * When the SQLITE_IOCAP_SAFE_APPEND flag is set. This guarantees + ** that garbage data is never appended to the journal file. 
+ */ + assert(pPager->fd->pMethods||pPager->noSync); + if( (pPager->noSync) + || (sqlite3OsDeviceCharacteristics(pPager->fd)&SQLITE_IOCAP_SAFE_APPEND) + ){ + put32bits(&zHeader[sizeof(aJournalMagic)], 0xffffffff); + }else{ + put32bits(&zHeader[sizeof(aJournalMagic)], 0); + } + + /* The random check-hash initialiser */ + sqlite3Randomness(sizeof(pPager->cksumInit), &pPager->cksumInit); + put32bits(&zHeader[sizeof(aJournalMagic)+4], pPager->cksumInit); + /* The initial database size */ + put32bits(&zHeader[sizeof(aJournalMagic)+8], pPager->dbSize); + /* The assumed sector size for this process */ + put32bits(&zHeader[sizeof(aJournalMagic)+12], pPager->sectorSize); + IOTRACE(("JHDR %p %lld %d\n", pPager, pPager->journalHdr, sizeof(zHeader))) + rc = sqlite3OsWrite(pPager->jfd, zHeader, sizeof(zHeader),pPager->journalOff); + pPager->journalOff += JOURNAL_HDR_SZ(pPager); + + /* The journal header has been written successfully. Seek the journal + ** file descriptor to the end of the journal header sector. + */ + if( rc==SQLITE_OK ){ + IOTRACE(("JTAIL %p %lld\n", pPager, pPager->journalOff-1)) + rc = sqlite3OsWrite(pPager->jfd, "\000", 1, pPager->journalOff-1); + } + return rc; +} + +/* +** The journal file must be open when this is called. A journal header file +** (JOURNAL_HDR_SZ bytes) is read from the current location in the journal +** file. See comments above function writeJournalHdr() for a description of +** the journal header format. +** +** If the header is read successfully, *nRec is set to the number of +** page records following this header and *dbSize is set to the size of the +** database before the transaction began, in pages. Also, pPager->cksumInit +** is set to the value read from the journal header. SQLITE_OK is returned +** in this case. +** +** If the journal header file appears to be corrupted, SQLITE_DONE is +** returned and *nRec and *dbSize are not set. If JOURNAL_HDR_SZ bytes +** cannot be read from the journal file an error code is returned. +*/ +static int readJournalHdr( + Pager *pPager, + i64 journalSize, + u32 *pNRec, + u32 *pDbSize +){ + int rc; + unsigned char aMagic[8]; /* A buffer to hold the magic header */ + i64 jrnlOff; + + seekJournalHdr(pPager); + if( pPager->journalOff+JOURNAL_HDR_SZ(pPager) > journalSize ){ + return SQLITE_DONE; + } + jrnlOff = pPager->journalOff; + + rc = sqlite3OsRead(pPager->jfd, aMagic, sizeof(aMagic), jrnlOff); + if( rc ) return rc; + jrnlOff += sizeof(aMagic); + + if( memcmp(aMagic, aJournalMagic, sizeof(aMagic))!=0 ){ + return SQLITE_DONE; + } + + rc = read32bits(pPager->jfd, jrnlOff, pNRec); + if( rc ) return rc; + + rc = read32bits(pPager->jfd, jrnlOff+4, &pPager->cksumInit); + if( rc ) return rc; + + rc = read32bits(pPager->jfd, jrnlOff+8, pDbSize); + if( rc ) return rc; + + /* Update the assumed sector-size to match the value used by + ** the process that created this journal. If this journal was + ** created by a process other than this one, then this routine + ** is being called from within pager_playback(). The local value + ** of Pager.sectorSize is restored at the end of that routine. + */ + rc = read32bits(pPager->jfd, jrnlOff+12, (u32 *)&pPager->sectorSize); + if( rc ) return rc; + + pPager->journalOff += JOURNAL_HDR_SZ(pPager); + return SQLITE_OK; +} + + +/* +** Write the supplied master journal name into the journal file for pager +** pPager at the current location. The master journal name must be the last +** thing written to a journal file. 
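Editorial sketch of the journal header layout documented above writeJournalHdr(): 8 bytes of magic, then four big-endian 32-bit fields (record count or 0xffffffff, random checksum initializer, original database size in pages, writer's sector size), with the rest of the sector left unused. All field values below are invented; only the magic bytes come from aJournalMagic[] earlier in this file.

#include <stdio.h>
#include <string.h>

static void be32(unsigned char *p, unsigned int v){
  p[0]=(unsigned char)(v>>24); p[1]=(unsigned char)(v>>16);
  p[2]=(unsigned char)(v>>8);  p[3]=(unsigned char)v;
}

int main(void){
  static const unsigned char aMagic[8] =
      { 0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, 0xd7 };
  unsigned char hdr[512];                  /* assumed 512-byte sector        */
  memset(hdr, 0, sizeof(hdr));             /* unused tail stays zeroed       */
  memcpy(&hdr[0], aMagic, 8);              /* magic                          */
  be32(&hdr[8],  0xffffffffu);             /* nRec: "trust rest of the file" */
  be32(&hdr[12], 0x9e3779b9u);             /* random checksum initializer    */
  be32(&hdr[16], 100);                     /* database size in pages         */
  be32(&hdr[20], 512);                     /* sector size of the writer      */
  printf("header uses %d of %zu bytes\n", 24, sizeof(hdr));
  return 0;
}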
If the pager is in full-sync mode, the +** journal file descriptor is advanced to the next sector boundary before +** anything is written. The format is: +** +** + 4 bytes: PAGER_MJ_PGNO. +** + N bytes: length of master journal name. +** + 4 bytes: N +** + 4 bytes: Master journal name checksum. +** + 8 bytes: aJournalMagic[]. +** +** The master journal page checksum is the sum of the bytes in the master +** journal name. +** +** If zMaster is a NULL pointer (occurs for a single database transaction), +** this call is a no-op. +*/ +static int writeMasterJournal(Pager *pPager, const char *zMaster){ + int rc; + int len; + int i; + i64 jrnlOff; + u32 cksum = 0; + char zBuf[sizeof(aJournalMagic)+2*4]; + + if( !zMaster || pPager->setMaster) return SQLITE_OK; + pPager->setMaster = 1; + + len = strlen(zMaster); + for(i=0; ifullSync ){ + seekJournalHdr(pPager); + } + jrnlOff = pPager->journalOff; + pPager->journalOff += (len+20); + + rc = write32bits(pPager->jfd, jrnlOff, PAGER_MJ_PGNO(pPager)); + if( rc!=SQLITE_OK ) return rc; + jrnlOff += 4; + + rc = sqlite3OsWrite(pPager->jfd, zMaster, len, jrnlOff); + if( rc!=SQLITE_OK ) return rc; + jrnlOff += len; + + put32bits(zBuf, len); + put32bits(&zBuf[4], cksum); + memcpy(&zBuf[8], aJournalMagic, sizeof(aJournalMagic)); + rc = sqlite3OsWrite(pPager->jfd, zBuf, 8+sizeof(aJournalMagic), jrnlOff); + pPager->needSync = !pPager->noSync; + return rc; +} + +/* +** Add or remove a page from the list of all pages that are in the +** statement journal. +** +** The Pager keeps a separate list of pages that are currently in +** the statement journal. This helps the sqlite3PagerStmtCommit() +** routine run MUCH faster for the common case where there are many +** pages in memory but only a few are in the statement journal. +*/ +static void page_add_to_stmt_list(PgHdr *pPg){ + Pager *pPager = pPg->pPager; + PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager); + assert( MEMDB ); + if( !pHist->inStmt ){ + assert( pHist->pPrevStmt==0 && pHist->pNextStmt==0 ); + if( pPager->pStmt ){ + PGHDR_TO_HIST(pPager->pStmt, pPager)->pPrevStmt = pPg; + } + pHist->pNextStmt = pPager->pStmt; + pPager->pStmt = pPg; + pHist->inStmt = 1; + } +} + +/* +** Find a page in the hash table given its page number. Return +** a pointer to the page or NULL if not found. +*/ +static PgHdr *pager_lookup(Pager *pPager, Pgno pgno){ + PgHdr *p; + if( pPager->aHash==0 ) return 0; + p = pPager->aHash[pgno & (pPager->nHash-1)]; + while( p && p->pgno!=pgno ){ + p = p->pNextHash; + } + return p; +} + +/* +** Clear the in-memory cache. This routine +** sets the state of the pager back to what it was when it was first +** opened. Any outstanding pages are invalidated and subsequent attempts +** to access those pages will likely result in a coredump. +*/ +static void pager_reset(Pager *pPager){ + PgHdr *pPg, *pNext; + if( pPager->errCode ) return; + for(pPg=pPager->pAll; pPg; pPg=pNext){ + IOTRACE(("PGFREE %p %d\n", pPager, pPg->pgno)); + PAGER_INCR(sqlite3_pager_pgfree_count); + pNext = pPg->pNextAll; + lruListRemove(pPg); + sqlite3_free(pPg->pData); + sqlite3_free(pPg); + } + assert(pPager->lru.pFirst==0); + assert(pPager->lru.pFirstSynced==0); + assert(pPager->lru.pLast==0); + pPager->pStmt = 0; + pPager->pAll = 0; + pPager->pDirty = 0; + pPager->nHash = 0; + sqlite3_free(pPager->aHash); + pPager->nPage = 0; + pPager->aHash = 0; + pPager->nRef = 0; +} + +/* +** Unlock the database file. 
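Editorial arithmetic for the writeMasterJournal() record above: the appended trailer is 4 bytes of PAGER_MJ_PGNO, the name itself, a 4-byte length, a 4-byte checksum that is simply the byte sum of the name, and the 8-byte magic, i.e. strlen(zMaster)+20 bytes in total, matching the journalOff adjustment in that function. The file name below is an invented example:

#include <stdio.h>
#include <string.h>

int main(void){
  const char *zMaster = "test.db-mjXXXX";   /* illustrative name only */
  unsigned int cksum = 0;
  size_t i, len = strlen(zMaster);
  for(i=0; i<len; i++) cksum += (unsigned char)zMaster[i];
  /* 4 (PAGER_MJ_PGNO) + len (name) + 4 (len) + 4 (cksum) + 8 (magic) */
  printf("record size = %zu bytes, name checksum = %u\n", len+20, cksum);
  return 0;
}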
+** +** If the pager is currently in error state, discard the contents of +** the cache and reset the Pager structure internal state. If there is +** an open journal-file, then the next time a shared-lock is obtained +** on the pager file (by this or any other process), it will be +** treated as a hot-journal and rolled back. +*/ +static void pager_unlock(Pager *pPager){ + if( !pPager->exclusiveMode ){ + if( !MEMDB ){ + int rc = osUnlock(pPager->fd, NO_LOCK); + if( rc ) pPager->errCode = rc; + pPager->dbSize = -1; + IOTRACE(("UNLOCK %p\n", pPager)) + + /* If Pager.errCode is set, the contents of the pager cache cannot be + ** trusted. Now that the pager file is unlocked, the contents of the + ** cache can be discarded and the error code safely cleared. + */ + if( pPager->errCode ){ + if( rc==SQLITE_OK ) pPager->errCode = SQLITE_OK; + pager_reset(pPager); + if( pPager->stmtOpen ){ + sqlite3OsClose(pPager->stfd); + sqlite3BitvecDestroy(pPager->pInStmt); + pPager->pInStmt = 0; + } + if( pPager->journalOpen ){ + sqlite3OsClose(pPager->jfd); + pPager->journalOpen = 0; + sqlite3BitvecDestroy(pPager->pInJournal); + pPager->pInJournal = 0; + } + pPager->stmtOpen = 0; + pPager->stmtInUse = 0; + pPager->journalOff = 0; + pPager->journalStarted = 0; + pPager->stmtAutoopen = 0; + pPager->origDbSize = 0; + } + } + + if( !MEMDB || pPager->errCode==SQLITE_OK ){ + pPager->state = PAGER_UNLOCK; + pPager->changeCountDone = 0; + } + } +} + +/* +** Execute a rollback if a transaction is active and unlock the +** database file. If the pager has already entered the error state, +** do not attempt the rollback. +*/ +static void pagerUnlockAndRollback(Pager *p){ + assert( p->state>=PAGER_RESERVED || p->journalOpen==0 ); + if( p->errCode==SQLITE_OK && p->state>=PAGER_RESERVED ){ + sqlite3PagerRollback(p); + } + pager_unlock(p); + assert( p->errCode || !p->journalOpen || (p->exclusiveMode&&!p->journalOff) ); + assert( p->errCode || !p->stmtOpen || p->exclusiveMode ); +} + +/* +** This routine ends a transaction. A transaction is ended by either +** a COMMIT or a ROLLBACK. +** +** When this routine is called, the pager has the journal file open and +** a RESERVED or EXCLUSIVE lock on the database. This routine will release +** the database lock and acquires a SHARED lock in its place if that is +** the appropriate thing to do. Release locks usually is appropriate, +** unless we are in exclusive access mode or unless this is a +** COMMIT AND BEGIN or ROLLBACK AND BEGIN operation. +** +** The journal file is either deleted or truncated. +** +** TODO: Consider keeping the journal file open for temporary databases. +** This might give a performance improvement on windows where opening +** a file is an expensive operation. 
+*/ +static int pager_end_transaction(Pager *pPager){ + PgHdr *pPg; + int rc = SQLITE_OK; + int rc2 = SQLITE_OK; + assert( !MEMDB ); + if( pPager->statestmtOpen && !pPager->exclusiveMode ){ + sqlite3OsClose(pPager->stfd); + pPager->stmtOpen = 0; + } + if( pPager->journalOpen ){ + if( pPager->exclusiveMode + && (rc = sqlite3OsTruncate(pPager->jfd, 0))==SQLITE_OK ){; + pPager->journalOff = 0; + pPager->journalStarted = 0; + }else{ + sqlite3OsClose(pPager->jfd); + pPager->journalOpen = 0; + if( rc==SQLITE_OK ){ + rc = sqlite3OsDelete(pPager->pVfs, pPager->zJournal, 0); + } + } + sqlite3BitvecDestroy(pPager->pInJournal); + pPager->pInJournal = 0; + for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){ + pPg->inJournal = 0; + pPg->dirty = 0; + pPg->needSync = 0; + pPg->alwaysRollback = 0; +#ifdef SQLITE_CHECK_PAGES + pPg->pageHash = pager_pagehash(pPg); +#endif + } + pPager->pDirty = 0; + pPager->dirtyCache = 0; + pPager->nRec = 0; + }else{ + assert( pPager->pInJournal==0 ); + assert( pPager->dirtyCache==0 || pPager->useJournal==0 ); + } + + if( !pPager->exclusiveMode ){ + rc2 = osUnlock(pPager->fd, SHARED_LOCK); + pPager->state = PAGER_SHARED; + }else if( pPager->state==PAGER_SYNCED ){ + pPager->state = PAGER_EXCLUSIVE; + } + pPager->origDbSize = 0; + pPager->setMaster = 0; + pPager->needSync = 0; + lruListSetFirstSynced(pPager); + pPager->dbSize = -1; + + return (rc==SQLITE_OK?rc2:rc); +} + +/* +** Compute and return a checksum for the page of data. +** +** This is not a real checksum. It is really just the sum of the +** random initial value and the page number. We experimented with +** a checksum of the entire data, but that was found to be too slow. +** +** Note that the page number is stored at the beginning of data and +** the checksum is stored at the end. This is important. If journal +** corruption occurs due to a power failure, the most likely scenario +** is that one end or the other of the record will be changed. It is +** much less likely that the two ends of the journal record will be +** correct and the middle be corrupt. Thus, this "checksum" scheme, +** though fast and simple, catches the mostly likely kind of corruption. +** +** FIX ME: Consider adding every 200th (or so) byte of the data to the +** checksum. That way if a single page spans 3 or more disk sectors and +** only the middle sector is corrupt, we will still have a reasonable +** chance of failing the checksum and thus detecting the problem. +*/ +static u32 pager_cksum(Pager *pPager, const u8 *aData){ + u32 cksum = pPager->cksumInit; + int i = pPager->pageSize-200; + while( i>0 ){ + cksum += aData[i]; + i -= 200; + } + return cksum; +} + +/* Forward declaration */ +static void makeClean(PgHdr*); + +/* +** Read a single page from the journal file opened on file descriptor +** jfd. Playback this one page. +** +** If useCksum==0 it means this journal does not use checksums. Checksums +** are not used in statement journals because statement journals do not +** need to survive power failures. +*/ +static int pager_playback_one_page( + Pager *pPager, + sqlite3_file *jfd, + i64 offset, + int useCksum +){ + int rc; + PgHdr *pPg; /* An existing page in the cache */ + Pgno pgno; /* The page number of a page in journal */ + u32 cksum; /* Checksum used for sanity checking */ + u8 *aData = (u8 *)pPager->pTmpSpace; /* Temp storage for a page */ + + /* useCksum should be true for the main journal and false for + ** statement journals. Verify that this is always the case + */ + assert( jfd == (useCksum ? 
pPager->jfd : pPager->stfd) ); + assert( aData ); + + rc = read32bits(jfd, offset, &pgno); + if( rc!=SQLITE_OK ) return rc; + rc = sqlite3OsRead(jfd, aData, pPager->pageSize, offset+4); + if( rc!=SQLITE_OK ) return rc; + pPager->journalOff += pPager->pageSize + 4; + + /* Sanity checking on the page. This is more important that I originally + ** thought. If a power failure occurs while the journal is being written, + ** it could cause invalid data to be written into the journal. We need to + ** detect this invalid data (with high probability) and ignore it. + */ + if( pgno==0 || pgno==PAGER_MJ_PGNO(pPager) ){ + return SQLITE_DONE; + } + if( pgno>(unsigned)pPager->dbSize ){ + return SQLITE_OK; + } + if( useCksum ){ + rc = read32bits(jfd, offset+pPager->pageSize+4, &cksum); + if( rc ) return rc; + pPager->journalOff += 4; + if( pager_cksum(pPager, aData)!=cksum ){ + return SQLITE_DONE; + } + } + + assert( pPager->state==PAGER_RESERVED || pPager->state>=PAGER_EXCLUSIVE ); + + /* If the pager is in RESERVED state, then there must be a copy of this + ** page in the pager cache. In this case just update the pager cache, + ** not the database file. The page is left marked dirty in this case. + ** + ** An exception to the above rule: If the database is in no-sync mode + ** and a page is moved during an incremental vacuum then the page may + ** not be in the pager cache. Later: if a malloc() or IO error occurs + ** during a Movepage() call, then the page may not be in the cache + ** either. So the condition described in the above paragraph is not + ** assert()able. + ** + ** If in EXCLUSIVE state, then we update the pager cache if it exists + ** and the main file. The page is then marked not dirty. + ** + ** Ticket #1171: The statement journal might contain page content that is + ** different from the page content at the start of the transaction. + ** This occurs when a page is changed prior to the start of a statement + ** then changed again within the statement. When rolling back such a + ** statement we must not write to the original database unless we know + ** for certain that original page contents are synced into the main rollback + ** journal. Otherwise, a power loss might leave modified data in the + ** database file without an entry in the rollback journal that can + ** restore the database to its original form. Two conditions must be + ** met before writing to the database files. (1) the database must be + ** locked. (2) we know that the original page content is fully synced + ** in the main journal either because the page is not in cache or else + ** the page is marked as needSync==0. + */ + pPg = pager_lookup(pPager, pgno); + PAGERTRACE4("PLAYBACK %d page %d hash(%08x)\n", + PAGERID(pPager), pgno, pager_datahash(pPager->pageSize, aData)); + if( pPager->state>=PAGER_EXCLUSIVE && (pPg==0 || pPg->needSync==0) ){ + i64 offset = (pgno-1)*(i64)pPager->pageSize; + rc = sqlite3OsWrite(pPager->fd, aData, pPager->pageSize, offset); + if( pPg ){ + makeClean(pPg); + } + } + if( pPg ){ + /* No page should ever be explicitly rolled back that is in use, except + ** for page 1 which is held in use in order to keep the lock on the + ** database active. However such a page may be rolled back as a result + ** of an internal error resulting in an automatic call to + ** sqlite3PagerRollback(). 
+ */ + void *pData; + /* assert( pPg->nRef==0 || pPg->pgno==1 ); */ + pData = PGHDR_TO_DATA(pPg); + memcpy(pData, aData, pPager->pageSize); + if( pPager->xReiniter ){ + pPager->xReiniter(pPg, pPager->pageSize); + } +#ifdef SQLITE_CHECK_PAGES + pPg->pageHash = pager_pagehash(pPg); +#endif + /* If this was page 1, then restore the value of Pager.dbFileVers. + ** Do this before any decoding. */ + if( pgno==1 ){ + memcpy(&pPager->dbFileVers, &((u8*)pData)[24],sizeof(pPager->dbFileVers)); + } + + /* Decode the page just read from disk */ + CODEC1(pPager, pData, pPg->pgno, 3); + } + return rc; +} + +/* +** Parameter zMaster is the name of a master journal file. A single journal +** file that referred to the master journal file has just been rolled back. +** This routine checks if it is possible to delete the master journal file, +** and does so if it is. +** +** Argument zMaster may point to Pager.pTmpSpace. So that buffer is not +** available for use within this function. +** +** +** The master journal file contains the names of all child journals. +** To tell if a master journal can be deleted, check to each of the +** children. If all children are either missing or do not refer to +** a different master journal, then this master journal can be deleted. +*/ +static int pager_delmaster(Pager *pPager, const char *zMaster){ + sqlite3_vfs *pVfs = pPager->pVfs; + int rc; + int master_open = 0; + sqlite3_file *pMaster; + sqlite3_file *pJournal; + char *zMasterJournal = 0; /* Contents of master journal file */ + i64 nMasterJournal; /* Size of master journal file */ + + /* Open the master journal file exclusively in case some other process + ** is running this routine also. Not that it makes too much difference. + */ + pMaster = (sqlite3_file *)sqlite3_malloc(pVfs->szOsFile * 2); + pJournal = (sqlite3_file *)(((u8 *)pMaster) + pVfs->szOsFile); + if( !pMaster ){ + rc = SQLITE_NOMEM; + }else{ + int flags = (SQLITE_OPEN_READONLY|SQLITE_OPEN_MASTER_JOURNAL); + rc = sqlite3OsOpen(pVfs, zMaster, pMaster, flags, 0); + } + if( rc!=SQLITE_OK ) goto delmaster_out; + master_open = 1; + + rc = sqlite3OsFileSize(pMaster, &nMasterJournal); + if( rc!=SQLITE_OK ) goto delmaster_out; + + if( nMasterJournal>0 ){ + char *zJournal; + char *zMasterPtr = 0; + int nMasterPtr = pPager->pVfs->mxPathname+1; + + /* Load the entire master journal file into space obtained from + ** sqlite3_malloc() and pointed to by zMasterJournal. + */ + zMasterJournal = (char *)sqlite3_malloc(nMasterJournal + nMasterPtr); + if( !zMasterJournal ){ + rc = SQLITE_NOMEM; + goto delmaster_out; + } + zMasterPtr = &zMasterJournal[nMasterJournal]; + rc = sqlite3OsRead(pMaster, zMasterJournal, nMasterJournal, 0); + if( rc!=SQLITE_OK ) goto delmaster_out; + + zJournal = zMasterJournal; + while( (zJournal-zMasterJournal)state>=PAGER_EXCLUSIVE && pPager->fd->pMethods ){ + i64 currentSize, newSize; + rc = sqlite3OsFileSize(pPager->fd, ¤tSize); + newSize = pPager->pageSize*(i64)nPage; + if( rc==SQLITE_OK && currentSize>newSize ){ + rc = sqlite3OsTruncate(pPager->fd, newSize); + } + } + if( rc==SQLITE_OK ){ + pPager->dbSize = nPage; + pager_truncate_cache(pPager); + } + return rc; +} + +/* +** Set the sectorSize for the given pager. +** +** The sector size is the larger of the sector size reported +** by sqlite3OsSectorSize() and the pageSize. +*/ +static void setSectorSize(Pager *pPager){ + assert(pPager->fd->pMethods||pPager->tempFile); + if( !pPager->tempFile ){ + /* Sector size doesn't matter for temporary files. 
Also, the file + ** may not have been opened yet, in whcih case the OsSectorSize() + ** call will segfault. + */ + pPager->sectorSize = sqlite3OsSectorSize(pPager->fd); + } + if( pPager->sectorSizepageSize ){ + pPager->sectorSize = pPager->pageSize; + } +} + +/* +** Playback the journal and thus restore the database file to +** the state it was in before we started making changes. +** +** The journal file format is as follows: +** +** (1) 8 byte prefix. A copy of aJournalMagic[]. +** (2) 4 byte big-endian integer which is the number of valid page records +** in the journal. If this value is 0xffffffff, then compute the +** number of page records from the journal size. +** (3) 4 byte big-endian integer which is the initial value for the +** sanity checksum. +** (4) 4 byte integer which is the number of pages to truncate the +** database to during a rollback. +** (5) 4 byte integer which is the number of bytes in the master journal +** name. The value may be zero (indicate that there is no master +** journal.) +** (6) N bytes of the master journal name. The name will be nul-terminated +** and might be shorter than the value read from (5). If the first byte +** of the name is \000 then there is no master journal. The master +** journal name is stored in UTF-8. +** (7) Zero or more pages instances, each as follows: +** + 4 byte page number. +** + pPager->pageSize bytes of data. +** + 4 byte checksum +** +** When we speak of the journal header, we mean the first 6 items above. +** Each entry in the journal is an instance of the 7th item. +** +** Call the value from the second bullet "nRec". nRec is the number of +** valid page entries in the journal. In most cases, you can compute the +** value of nRec from the size of the journal file. But if a power +** failure occurred while the journal was being written, it could be the +** case that the size of the journal file had already been increased but +** the extra entries had not yet made it safely to disk. In such a case, +** the value of nRec computed from the file size would be too large. For +** that reason, we always use the nRec value in the header. +** +** If the nRec value is 0xffffffff it means that nRec should be computed +** from the file size. This value is used when the user selects the +** no-sync option for the journal. A power failure could lead to corruption +** in this case. But for things like temporary table (which will be +** deleted when the power is restored) we don't care. +** +** If the file opened as the journal file is not a well-formed +** journal file then all pages up to the first corrupted page are rolled +** back (or no pages if the journal header is corrupted). The journal file +** is then deleted and SQLITE_OK returned, just as if no corruption had +** been encountered. +** +** If an I/O or malloc() error occurs, the journal-file is not deleted +** and an error code is returned. +*/ +static int pager_playback(Pager *pPager, int isHot){ + sqlite3_vfs *pVfs = pPager->pVfs; + i64 szJ; /* Size of the journal file in bytes */ + u32 nRec; /* Number of Records in the journal */ + int i; /* Loop counter */ + Pgno mxPg = 0; /* Size of the original file in pages */ + int rc; /* Result code of a subroutine */ + char *zMaster = 0; /* Name of master journal file if any */ + + /* Figure out how many records are in the journal. Abort early if + ** the journal is empty. 
+ */ + assert( pPager->journalOpen ); + rc = sqlite3OsFileSize(pPager->jfd, &szJ); + if( rc!=SQLITE_OK || szJ==0 ){ + goto end_playback; + } + + /* Read the master journal name from the journal, if it is present. + ** If a master journal file name is specified, but the file is not + ** present on disk, then the journal is not hot and does not need to be + ** played back. + */ + zMaster = pPager->pTmpSpace; + rc = readMasterJournal(pPager->jfd, zMaster, pPager->pVfs->mxPathname+1); + assert( rc!=SQLITE_DONE ); + if( rc!=SQLITE_OK + || (zMaster[0] && !sqlite3OsAccess(pVfs, zMaster, SQLITE_ACCESS_EXISTS)) + ){ + zMaster = 0; + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + goto end_playback; + } + pPager->journalOff = 0; + zMaster = 0; + + /* This loop terminates either when the readJournalHdr() call returns + ** SQLITE_DONE or an IO error occurs. */ + while( 1 ){ + + /* Read the next journal header from the journal file. If there are + ** not enough bytes left in the journal file for a complete header, or + ** it is corrupted, then a process must of failed while writing it. + ** This indicates nothing more needs to be rolled back. + */ + rc = readJournalHdr(pPager, szJ, &nRec, &mxPg); + if( rc!=SQLITE_OK ){ + if( rc==SQLITE_DONE ){ + rc = SQLITE_OK; + } + goto end_playback; + } + + /* If nRec is 0xffffffff, then this journal was created by a process + ** working in no-sync mode. This means that the rest of the journal + ** file consists of pages, there are no more journal headers. Compute + ** the value of nRec based on this assumption. + */ + if( nRec==0xffffffff ){ + assert( pPager->journalOff==JOURNAL_HDR_SZ(pPager) ); + nRec = (szJ - JOURNAL_HDR_SZ(pPager))/JOURNAL_PG_SZ(pPager); + } + + /* If nRec is 0 and this rollback is of a transaction created by this + ** process and if this is the final header in the journal, then it means + ** that this part of the journal was being filled but has not yet been + ** synced to disk. Compute the number of pages based on the remaining + ** size of the file. + ** + ** The third term of the test was added to fix ticket #2565. + */ + if( nRec==0 && !isHot && + pPager->journalHdr+JOURNAL_HDR_SZ(pPager)==pPager->journalOff ){ + nRec = (szJ - pPager->journalOff) / JOURNAL_PG_SZ(pPager); + } + + /* If this is the first header read from the journal, truncate the + ** database file back to its original size. + */ + if( pPager->journalOff==JOURNAL_HDR_SZ(pPager) ){ + rc = pager_truncate(pPager, mxPg); + if( rc!=SQLITE_OK ){ + goto end_playback; + } + } + + /* Copy original pages out of the journal and back into the database file. + */ + for(i=0; ijfd, pPager->journalOff, 1); + if( rc!=SQLITE_OK ){ + if( rc==SQLITE_DONE ){ + rc = SQLITE_OK; + pPager->journalOff = szJ; + break; + }else{ + goto end_playback; + } + } + } + } + /*NOTREACHED*/ + assert( 0 ); + +end_playback: + if( rc==SQLITE_OK ){ + zMaster = pPager->pTmpSpace; + rc = readMasterJournal(pPager->jfd, zMaster, pPager->pVfs->mxPathname+1); + } + if( rc==SQLITE_OK ){ + rc = pager_end_transaction(pPager); + } + if( rc==SQLITE_OK && zMaster[0] ){ + /* If there was a master journal and this routine will return success, + ** see if it is possible to delete the master journal. + */ + rc = pager_delmaster(pPager, zMaster); + } + + /* The Pager.sectorSize variable may have been updated while rolling + ** back a journal created by a process with a different sector size + ** value. Reset it to the correct value for this process. + */ + setSectorSize(pPager); + return rc; +} + +/* +** Playback the statement journal. 
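Editorial sketch of the per-record check performed by pager_playback_one_page() and pager_cksum() above: a main-journal record is a 4-byte big-endian page number, pageSize bytes of page data, and a 4-byte checksum equal to the random per-journal initializer plus every 200th data byte counted back from the end of the page (when a header's nRec field is 0xffffffff, the loop above instead derives the record count from the journal size). This is a self-contained rendering against an in-memory buffer, not the library's code:

#include <assert.h>
#include <string.h>

static unsigned int be32get(const unsigned char *p){
  return ((unsigned int)p[0]<<24)|((unsigned int)p[1]<<16)
       | ((unsigned int)p[2]<<8) | (unsigned int)p[3];
}
static unsigned int recCksum(unsigned int cksumInit,
                             const unsigned char *aData, int pageSize){
  unsigned int cksum = cksumInit;
  int i;
  for(i=pageSize-200; i>0; i-=200) cksum += aData[i];
  return cksum;
}
/* Return 1 if the record at aRec looks intact, 0 otherwise. */
static int checkRecord(const unsigned char *aRec, int pageSize,
                       unsigned int cksumInit, unsigned int *pPgno){
  *pPgno = be32get(aRec);
  return be32get(&aRec[4+pageSize]) == recCksum(cksumInit, &aRec[4], pageSize);
}

int main(void){
  enum { PGSZ = 1024 };
  unsigned char rec[4 + PGSZ + 4];
  unsigned int pgno, ck, cksumInit = 12345;     /* normally a random value */
  memset(rec, 7, sizeof(rec));
  rec[0]=0; rec[1]=0; rec[2]=0; rec[3]=3;       /* page number 3 */
  ck = recCksum(cksumInit, &rec[4], PGSZ);
  rec[4+PGSZ]   = (unsigned char)(ck>>24);
  rec[4+PGSZ+1] = (unsigned char)(ck>>16);
  rec[4+PGSZ+2] = (unsigned char)(ck>>8);
  rec[4+PGSZ+3] = (unsigned char)ck;
  assert( checkRecord(rec, PGSZ, cksumInit, &pgno) && pgno==3 );
  return 0;
}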
+** +** This is similar to playing back the transaction journal but with +** a few extra twists. +** +** (1) The number of pages in the database file at the start of +** the statement is stored in pPager->stmtSize, not in the +** journal file itself. +** +** (2) In addition to playing back the statement journal, also +** playback all pages of the transaction journal beginning +** at offset pPager->stmtJSize. +*/ +static int pager_stmt_playback(Pager *pPager){ + i64 szJ; /* Size of the full journal */ + i64 hdrOff; + int nRec; /* Number of Records */ + int i; /* Loop counter */ + int rc; + + szJ = pPager->journalOff; +#ifndef NDEBUG + { + i64 os_szJ; + rc = sqlite3OsFileSize(pPager->jfd, &os_szJ); + if( rc!=SQLITE_OK ) return rc; + assert( szJ==os_szJ ); + } +#endif + + /* Set hdrOff to be the offset just after the end of the last journal + ** page written before the first journal-header for this statement + ** transaction was written, or the end of the file if no journal + ** header was written. + */ + hdrOff = pPager->stmtHdrOff; + assert( pPager->fullSync || !hdrOff ); + if( !hdrOff ){ + hdrOff = szJ; + } + + /* Truncate the database back to its original size. + */ + rc = pager_truncate(pPager, pPager->stmtSize); + assert( pPager->state>=PAGER_SHARED ); + + /* Figure out how many records are in the statement journal. + */ + assert( pPager->stmtInUse && pPager->journalOpen ); + nRec = pPager->stmtNRec; + + /* Copy original pages out of the statement journal and back into the + ** database file. Note that the statement journal omits checksums from + ** each record since power-failure recovery is not important to statement + ** journals. + */ + for(i=0; i<nRec; i++){ + i64 offset = i*(4+pPager->pageSize); + rc = pager_playback_one_page(pPager, pPager->stfd, offset, 0); + assert( rc!=SQLITE_DONE ); + if( rc!=SQLITE_OK ) goto end_stmt_playback; + } + + /* Now roll some pages back from the transaction journal. Pager.stmtJSize + ** was the size of the journal file when this statement was started, so + ** everything after that needs to be rolled back, either into the + ** database, the memory cache, or both. + ** + ** If it is not zero, then Pager.stmtHdrOff is the offset to the start + ** of the first journal header written during this statement transaction. + */ + pPager->journalOff = pPager->stmtJSize; + pPager->cksumInit = pPager->stmtCksum; + while( pPager->journalOff < hdrOff ){ + rc = pager_playback_one_page(pPager, pPager->jfd, pPager->journalOff, 1); + assert( rc!=SQLITE_DONE ); + if( rc!=SQLITE_OK ) goto end_stmt_playback; + } + + while( pPager->journalOff < szJ ){ + u32 nJRec; /* Number of Journal Records */ + u32 dummy; + rc = readJournalHdr(pPager, szJ, &nJRec, &dummy); + if( rc!=SQLITE_OK ){ + assert( rc!=SQLITE_DONE ); + goto end_stmt_playback; + } + if( nJRec==0 ){ + nJRec = (szJ - pPager->journalOff) / (pPager->pageSize+8); + } + for(i=nJRec-1; i>=0 && pPager->journalOff < szJ; i--){ + rc = pager_playback_one_page(pPager, pPager->jfd, pPager->journalOff, 1); + assert( rc!=SQLITE_DONE ); + if( rc!=SQLITE_OK ) goto end_stmt_playback; + } + } + + pPager->journalOff = szJ; + +end_stmt_playback: + if( rc==SQLITE_OK) { + pPager->journalOff = szJ; + /* pager_reload_cache(pPager); */ + } + return rc; +} + +/* +** Change the maximum number of in-memory pages that are allowed.
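As a rough illustration of the record-size arithmetic used above (a compilable sketch with made-up values, not SQLite code): a statement-journal record is a 4-byte page number followed by the page data with no checksum, while a main-journal record also carries a trailing 4-byte checksum, which is why the code divides by (pPager->pageSize+8) when counting main-journal records.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Record sizes implied by the playback code above. */
    static int64_t stmtRecordSize(int pageSize){ return 4 + pageSize; }      /* pgno + data */
    static int64_t mainRecordSize(int pageSize){ return 4 + pageSize + 4; }  /* pgno + data + checksum */

    int main(void){
      int pageSize = 1024;
      int64_t journalBytes = 10 * mainRecordSize(pageSize);  /* pretend 10 records follow a header */
      int64_t nJRec = journalBytes / mainRecordSize(pageSize);   /* cf. (szJ - journalOff)/(pageSize+8) */
      int64_t offset3 = 3 * stmtRecordSize(pageSize);            /* cf. offset = i*(4+pageSize) */
      assert( nJRec==10 );
      printf("nJRec=%lld, offset of statement record 3=%lld\n",
             (long long)nJRec, (long long)offset3);
      return 0;
    }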
+*/ +void sqlite3PagerSetCachesize(Pager *pPager, int mxPage){ + if( mxPage>10 ){ + pPager->mxPage = mxPage; + }else{ + pPager->mxPage = 10; + } +} + +/* +** Adjust the robustness of the database to damage due to OS crashes +** or power failures by changing the number of syncs()s when writing +** the rollback journal. There are three levels: +** +** OFF sqlite3OsSync() is never called. This is the default +** for temporary and transient files. +** +** NORMAL The journal is synced once before writes begin on the +** database. This is normally adequate protection, but +** it is theoretically possible, though very unlikely, +** that an inopertune power failure could leave the journal +** in a state which would cause damage to the database +** when it is rolled back. +** +** FULL The journal is synced twice before writes begin on the +** database (with some additional information - the nRec field +** of the journal header - being written in between the two +** syncs). If we assume that writing a +** single disk sector is atomic, then this mode provides +** assurance that the journal will not be corrupted to the +** point of causing damage to the database during rollback. +** +** Numeric values associated with these states are OFF==1, NORMAL=2, +** and FULL=3. +*/ +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +void sqlite3PagerSetSafetyLevel(Pager *pPager, int level, int full_fsync){ + pPager->noSync = level==1 || pPager->tempFile; + pPager->fullSync = level==3 && !pPager->tempFile; + pPager->sync_flags = (full_fsync?SQLITE_SYNC_FULL:SQLITE_SYNC_NORMAL); + if( pPager->noSync ) pPager->needSync = 0; +} +#endif + +/* +** The following global variable is incremented whenever the library +** attempts to open a temporary file. This information is used for +** testing and analysis only. +*/ +#ifdef SQLITE_TEST +int sqlite3_opentemp_count = 0; +#endif + +/* +** Open a temporary file. +** +** Write the file descriptor into *fd. Return SQLITE_OK on success or some +** other error code if we fail. The OS will automatically delete the temporary +** file when it is closed. +*/ +static int sqlite3PagerOpentemp( + sqlite3_vfs *pVfs, /* The virtual file system layer */ + sqlite3_file *pFile, /* Write the file descriptor here */ + char *zFilename, /* Name of the file. Might be NULL */ + int vfsFlags /* Flags passed through to the VFS */ +){ + int rc; + assert( zFilename!=0 ); + +#ifdef SQLITE_TEST + sqlite3_opentemp_count++; /* Used for testing and analysis only */ +#endif + + vfsFlags |= SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | + SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_DELETEONCLOSE; + rc = sqlite3OsOpen(pVfs, zFilename, pFile, vfsFlags, 0); + assert( rc!=SQLITE_OK || pFile->pMethods ); + return rc; +} + +/* +** Create a new page cache and put a pointer to the page cache in *ppPager. +** The file to be cached need not exist. The file is not locked until +** the first call to sqlite3PagerGet() and is only held open until the +** last page is released using sqlite3PagerUnref(). +** +** If zFilename is NULL then a randomly-named temporary file is created +** and used as the file to be cached. The file will be deleted +** automatically when it is closed. +** +** If zFilename is ":memory:" then all information is held in cache. +** It is never written to disk. This can be used to implement an +** in-memory database. 
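The mapping from the three safety levels described above onto the pager's two internal flags is small enough to restate as a table. The sketch below mirrors the two assignments made in sqlite3PagerSetSafetyLevel() (noSync for OFF or temporary files, fullSync only for FULL on a persistent file); the enum and struct names are illustrative stand-ins, not SQLite types.

    #include <stdio.h>

    enum { LEVEL_OFF = 1, LEVEL_NORMAL = 2, LEVEL_FULL = 3 };

    struct SyncFlags { int noSync; int fullSync; };

    /* Mirrors the two flag assignments in sqlite3PagerSetSafetyLevel(). */
    static struct SyncFlags safetyFlags(int level, int isTempFile){
      struct SyncFlags f;
      f.noSync   = (level==LEVEL_OFF) || isTempFile;
      f.fullSync = (level==LEVEL_FULL) && !isTempFile;
      return f;
    }

    int main(void){
      for(int level=LEVEL_OFF; level<=LEVEL_FULL; level++){
        struct SyncFlags f = safetyFlags(level, 0);
        printf("level=%d noSync=%d fullSync=%d\n", level, f.noSync, f.fullSync);
      }
      return 0;
    }

For NORMAL on a persistent file both flags end up clear: the journal is still synced once before the database is written, but no extra header sync is performed.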
+*/ +int sqlite3PagerOpen( + sqlite3_vfs *pVfs, /* The virtual file system to use */ + Pager **ppPager, /* Return the Pager structure here */ + const char *zFilename, /* Name of the database file to open */ + int nExtra, /* Extra bytes append to each in-memory page */ + int flags, /* flags controlling this file */ + int vfsFlags /* flags passed through to sqlite3_vfs.xOpen() */ +){ + u8 *pPtr; + Pager *pPager = 0; + int rc = SQLITE_OK; + int i; + int tempFile = 0; + int memDb = 0; + int readOnly = 0; + int useJournal = (flags & PAGER_OMIT_JOURNAL)==0; + int noReadlock = (flags & PAGER_NO_READLOCK)!=0; + int journalFileSize = sqlite3JournalSize(pVfs); + int nDefaultPage = SQLITE_DEFAULT_PAGE_SIZE; + char *zPathname; + int nPathname; + char *zStmtJrnl; + int nStmtJrnl; + + /* The default return is a NULL pointer */ + *ppPager = 0; + + /* Compute the full pathname */ + nPathname = pVfs->mxPathname+1; + zPathname = sqlite3_malloc(nPathname*2); + if( zPathname==0 ){ + return SQLITE_NOMEM; + } + if( zFilename && zFilename[0] ){ +#ifndef SQLITE_OMIT_MEMORYDB + if( strcmp(zFilename,":memory:")==0 ){ + memDb = 1; + zPathname[0] = 0; + }else +#endif + { + rc = sqlite3OsFullPathname(pVfs, zFilename, nPathname, zPathname); + } + }else{ + rc = sqlite3OsGetTempname(pVfs, nPathname, zPathname); + } + if( rc!=SQLITE_OK ){ + sqlite3_free(zPathname); + return rc; + } + nPathname = strlen(zPathname); + + /* Put the statement journal in temporary disk space since this is + ** sometimes RAM disk or other optimized storage. Unlikely the main + ** main journal file, the statement journal does not need to be + ** colocated with the database nor does it need to be persistent. + */ + zStmtJrnl = &zPathname[nPathname+1]; + rc = sqlite3OsGetTempname(pVfs, pVfs->mxPathname+1, zStmtJrnl); + if( rc!=SQLITE_OK ){ + sqlite3_free(zPathname); + return rc; + } + nStmtJrnl = strlen(zStmtJrnl); + + /* Allocate memory for the pager structure */ + pPager = sqlite3MallocZero( + sizeof(*pPager) + /* Pager structure */ + journalFileSize + /* The journal file structure */ + pVfs->szOsFile * 3 + /* The main db and two journal files */ + 3*nPathname + 40 + /* zFilename, zDirectory, zJournal */ + nStmtJrnl /* zStmtJrnl */ + ); + if( !pPager ){ + sqlite3_free(zPathname); + return SQLITE_NOMEM; + } + pPtr = (u8 *)&pPager[1]; + pPager->vfsFlags = vfsFlags; + pPager->fd = (sqlite3_file*)&pPtr[pVfs->szOsFile*0]; + pPager->stfd = (sqlite3_file*)&pPtr[pVfs->szOsFile*1]; + pPager->jfd = (sqlite3_file*)&pPtr[pVfs->szOsFile*2]; + pPager->zFilename = (char*)&pPtr[pVfs->szOsFile*2+journalFileSize]; + pPager->zDirectory = &pPager->zFilename[nPathname+1]; + pPager->zJournal = &pPager->zDirectory[nPathname+1]; + pPager->zStmtJrnl = &pPager->zJournal[nPathname+10]; + pPager->pVfs = pVfs; + memcpy(pPager->zFilename, zPathname, nPathname+1); + memcpy(pPager->zStmtJrnl, zStmtJrnl, nStmtJrnl+1); + sqlite3_free(zPathname); + + /* Open the pager file. + */ + if( zFilename && zFilename[0] && !memDb ){ + if( nPathname>(pVfs->mxPathname - sizeof("-journal")) ){ + rc = SQLITE_CANTOPEN; + }else{ + int fout = 0; + rc = sqlite3OsOpen(pVfs, pPager->zFilename, pPager->fd, + pPager->vfsFlags, &fout); + readOnly = (fout&SQLITE_OPEN_READONLY); + + /* If the file was successfully opened for read/write access, + ** choose a default page size in case we have to create the + ** database file. 
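The allocation a few lines above is a single sqlite3MallocZero() whose tail is then carved into the three sqlite3_file slots and the filename/directory/journal strings. Below is a stripped-down sketch of that carving pattern, with made-up sizes and names standing in for sizeof(*pPager), pVfs->szOsFile and the real paths.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy stand-ins; the real code uses sizeof(Pager), pVfs->szOsFile, etc. */
    #define SZ_HEADER  64
    #define SZ_OSFILE  32
    #define N_OSFILES   3

    int main(void){
      const char *zPath = "/tmp/example.db";
      size_t nPath = strlen(zPath);
      /* One allocation: header, three file slots, then the path string. */
      unsigned char *p = calloc(1, SZ_HEADER + N_OSFILES*SZ_OSFILE + nPath + 1);
      if( p==0 ) return 1;
      unsigned char *aFile0 = &p[SZ_HEADER + 0*SZ_OSFILE];  /* like pPager->fd   */
      unsigned char *aFile1 = &p[SZ_HEADER + 1*SZ_OSFILE];  /* like pPager->stfd */
      unsigned char *aFile2 = &p[SZ_HEADER + 2*SZ_OSFILE];  /* like pPager->jfd  */
      char *zFilename = (char*)&p[SZ_HEADER + N_OSFILES*SZ_OSFILE];
      memcpy(zFilename, zPath, nPath+1);
      printf("header=%p files=%p,%p,%p name=%s\n",
             (void*)p, (void*)aFile0, (void*)aFile1, (void*)aFile2, zFilename);
      free(p);   /* one free releases every sub-buffer at once */
      return 0;
    }

A single free then releases every sub-buffer at once, which is why sqlite3PagerClose() later frees only the Pager pointer itself plus the separately allocated pieces such as aHash and pTmpSpace.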
The default page size is the maximum of: + ** + ** + SQLITE_DEFAULT_PAGE_SIZE, + ** + The value returned by sqlite3OsSectorSize() + ** + The largest page size that can be written atomically. + */ + if( rc==SQLITE_OK && !readOnly ){ + int iSectorSize = sqlite3OsSectorSize(pPager->fd); + if( nDefaultPage<iSectorSize ){ + nDefaultPage = iSectorSize; + } +#ifdef SQLITE_ENABLE_ATOMIC_WRITE + { + int iDc = sqlite3OsDeviceCharacteristics(pPager->fd); + int ii; + assert(SQLITE_IOCAP_ATOMIC512==(512>>8)); + assert(SQLITE_IOCAP_ATOMIC64K==(65536>>8)); + assert(SQLITE_MAX_DEFAULT_PAGE_SIZE<=65536); + for(ii=nDefaultPage; ii<=SQLITE_MAX_DEFAULT_PAGE_SIZE; ii=ii*2){ + if( iDc&(SQLITE_IOCAP_ATOMIC|(ii>>8)) ) nDefaultPage = ii; + } + } +#endif + if( nDefaultPage>SQLITE_MAX_DEFAULT_PAGE_SIZE ){ + nDefaultPage = SQLITE_MAX_DEFAULT_PAGE_SIZE; + } + } + } + }else if( !memDb ){ + /* If a temporary file is requested, it is not opened immediately. + ** In this case we accept the default page size and delay actually + ** opening the file until the first call to OsWrite(). + */ + tempFile = 1; + pPager->state = PAGER_EXCLUSIVE; + } + + if( pPager && rc==SQLITE_OK ){ + pPager->pTmpSpace = (char *)sqlite3_malloc(nDefaultPage); + } + + /* If an error occurred in either of the blocks above, free the Pager + ** structure and close the file. + ** Since the pager is not allocated there is no need to set + ** any Pager.errMask variables. + */ + if( !pPager || !pPager->pTmpSpace ){ + sqlite3OsClose(pPager->fd); + sqlite3_free(pPager); + return ((rc==SQLITE_OK)?SQLITE_NOMEM:rc); + } + + PAGERTRACE3("OPEN %d %s\n", FILEHANDLEID(pPager->fd), pPager->zFilename); + IOTRACE(("OPEN %p %s\n", pPager, pPager->zFilename)) + + /* Fill in Pager.zDirectory[] */ + memcpy(pPager->zDirectory, pPager->zFilename, nPathname+1); + for(i=strlen(pPager->zDirectory); i>0 && pPager->zDirectory[i-1]!='/'; i--){} + if( i>0 ) pPager->zDirectory[i-1] = 0; + + /* Fill in Pager.zJournal[] */ + memcpy(pPager->zJournal, pPager->zFilename, nPathname); + memcpy(&pPager->zJournal[nPathname], "-journal", 9); + + /* pPager->journalOpen = 0; */ + pPager->useJournal = useJournal && !memDb; + pPager->noReadlock = noReadlock && readOnly; + /* pPager->stmtOpen = 0; */ + /* pPager->stmtInUse = 0; */ + /* pPager->nRef = 0; */ + pPager->dbSize = memDb-1; + pPager->pageSize = nDefaultPage; + /* pPager->stmtSize = 0; */ + /* pPager->stmtJSize = 0; */ + /* pPager->nPage = 0; */ + pPager->mxPage = 100; + pPager->mxPgno = SQLITE_MAX_PAGE_COUNT; + /* pPager->state = PAGER_UNLOCK; */ + assert( pPager->state == (tempFile ?
PAGER_EXCLUSIVE : PAGER_UNLOCK) ); + /* pPager->errMask = 0; */ + pPager->tempFile = tempFile; + assert( tempFile==PAGER_LOCKINGMODE_NORMAL + || tempFile==PAGER_LOCKINGMODE_EXCLUSIVE ); + assert( PAGER_LOCKINGMODE_EXCLUSIVE==1 ); + pPager->exclusiveMode = tempFile; + pPager->memDb = memDb; + pPager->readOnly = readOnly; + /* pPager->needSync = 0; */ + pPager->noSync = pPager->tempFile || !useJournal; + pPager->fullSync = (pPager->noSync?0:1); + pPager->sync_flags = SQLITE_SYNC_NORMAL; + /* pPager->pFirst = 0; */ + /* pPager->pFirstSynced = 0; */ + /* pPager->pLast = 0; */ + pPager->nExtra = FORCE_ALIGNMENT(nExtra); + assert(pPager->fd->pMethods||memDb||tempFile); + if( !memDb ){ + setSectorSize(pPager); + } + /* pPager->pBusyHandler = 0; */ + /* memset(pPager->aHash, 0, sizeof(pPager->aHash)); */ + *ppPager = pPager; +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + pPager->iInUseMM = 0; + pPager->iInUseDB = 0; + if( !memDb ){ + sqlite3_mutex *mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM2); + sqlite3_mutex_enter(mutex); + pPager->pNext = sqlite3PagerList; + if( sqlite3PagerList ){ + assert( sqlite3PagerList->pPrev==0 ); + sqlite3PagerList->pPrev = pPager; + } + pPager->pPrev = 0; + sqlite3PagerList = pPager; + sqlite3_mutex_leave(mutex); + } +#endif + return SQLITE_OK; +} + +/* +** Set the busy handler function. +*/ +void sqlite3PagerSetBusyhandler(Pager *pPager, BusyHandler *pBusyHandler){ + pPager->pBusyHandler = pBusyHandler; +} + +/* +** Set the destructor for this pager. If not NULL, the destructor is called +** when the reference count on each page reaches zero. The destructor can +** be used to clean up information in the extra segment appended to each page. +** +** The destructor is not called as a result sqlite3PagerClose(). +** Destructors are only called by sqlite3PagerUnref(). +*/ +void sqlite3PagerSetDestructor(Pager *pPager, void (*xDesc)(DbPage*,int)){ + pPager->xDestructor = xDesc; +} + +/* +** Set the reinitializer for this pager. If not NULL, the reinitializer +** is called when the content of a page in cache is restored to its original +** value as a result of a rollback. The callback gives higher-level code +** an opportunity to restore the EXTRA section to agree with the restored +** page data. +*/ +void sqlite3PagerSetReiniter(Pager *pPager, void (*xReinit)(DbPage*,int)){ + pPager->xReiniter = xReinit; +} + +/* +** Set the page size to *pPageSize. If the suggest new page size is +** inappropriate, then an alternative page size is set to that +** value before returning. +*/ +int sqlite3PagerSetPagesize(Pager *pPager, u16 *pPageSize){ + int rc = SQLITE_OK; + u16 pageSize = *pPageSize; + assert( pageSize==0 || (pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE) ); + if( pageSize && pageSize!=pPager->pageSize + && !pPager->memDb && pPager->nRef==0 + ){ + char *pNew = (char *)sqlite3_malloc(pageSize); + if( !pNew ){ + rc = SQLITE_NOMEM; + }else{ + pagerEnter(pPager); + pager_reset(pPager); + pPager->pageSize = pageSize; + setSectorSize(pPager); + sqlite3_free(pPager->pTmpSpace); + pPager->pTmpSpace = pNew; + pagerLeave(pPager); + } + } + *pPageSize = pPager->pageSize; + return rc; +} + +/* +** Return a pointer to the "temporary page" buffer held internally +** by the pager. This is a buffer that is big enough to hold the +** entire content of a database page. This buffer is used internally +** during rollback and will be overwritten whenever a rollback +** occurs. But other modules are free to use it too, as long as +** no rollbacks are happening. 
+*/ +void *sqlite3PagerTempSpace(Pager *pPager){ + return pPager->pTmpSpace; +} + +/* +** Attempt to set the maximum database page count if mxPage is positive. +** Make no changes if mxPage is zero or negative. And never reduce the +** maximum page count below the current size of the database. +** +** Regardless of mxPage, return the current maximum page count. +*/ +int sqlite3PagerMaxPageCount(Pager *pPager, int mxPage){ + if( mxPage>0 ){ + pPager->mxPgno = mxPage; + } + sqlite3PagerPagecount(pPager); + return pPager->mxPgno; +} + +/* +** The following set of routines are used to disable the simulated +** I/O error mechanism. These routines are used to avoid simulated +** errors in places where we do not care about errors. +** +** Unless -DSQLITE_TEST=1 is used, these routines are all no-ops +** and generate no code. +*/ +#ifdef SQLITE_TEST +extern int sqlite3_io_error_pending; +extern int sqlite3_io_error_hit; +static int saved_cnt; +void disable_simulated_io_errors(void){ + saved_cnt = sqlite3_io_error_pending; + sqlite3_io_error_pending = -1; +} +void enable_simulated_io_errors(void){ + sqlite3_io_error_pending = saved_cnt; +} +#else +# define disable_simulated_io_errors() +# define enable_simulated_io_errors() +#endif + +/* +** Read the first N bytes from the beginning of the file into memory +** that pDest points to. +** +** No error checking is done. The rationale for this is that this function +** may be called even if the file does not exist or contain a header. In +** these cases sqlite3OsRead() will return an error, to which the correct +** response is to zero the memory at pDest and continue. A real IO error +** will presumably recur and be picked up later (Todo: Think about this). +*/ +int sqlite3PagerReadFileheader(Pager *pPager, int N, unsigned char *pDest){ + int rc = SQLITE_OK; + memset(pDest, 0, N); + assert(MEMDB||pPager->fd->pMethods||pPager->tempFile); + if( pPager->fd->pMethods ){ + IOTRACE(("DBHDR %p 0 %d\n", pPager, N)) + rc = sqlite3OsRead(pPager->fd, pDest, N, 0); + if( rc==SQLITE_IOERR_SHORT_READ ){ + rc = SQLITE_OK; + } + } + return rc; +} + +/* +** Return the total number of pages in the disk file associated with +** pPager. +** +** If the PENDING_BYTE lies on the page directly after the end of the +** file, then consider this page part of the file too. For example, if +** PENDING_BYTE is byte 4096 (the first byte of page 5) and the size of the +** file is 4096 bytes, 5 is returned instead of 4. +*/ +int sqlite3PagerPagecount(Pager *pPager){ + i64 n = 0; + int rc; + assert( pPager!=0 ); + if( pPager->errCode ){ + return -1; + } + if( pPager->dbSize>=0 ){ + n = pPager->dbSize; + } else { + assert(pPager->fd->pMethods||pPager->tempFile); + if( (pPager->fd->pMethods) + && (rc = sqlite3OsFileSize(pPager->fd, &n))!=SQLITE_OK ){ + pPager->nRef++; + pager_error(pPager, rc); + pPager->nRef--; + return -1; + } + if( n>0 && n<pPager->pageSize ){ + n = 1; + }else{ + n /= pPager->pageSize; + } + if( pPager->state!=PAGER_UNLOCK ){ + pPager->dbSize = n; + } + } + if( n==(PENDING_BYTE/pPager->pageSize) ){ + n++; + } + if( n>pPager->mxPgno ){ + pPager->mxPgno = n; + } + return n; +} + + +#ifndef SQLITE_OMIT_MEMORYDB +/* +** Clear a PgHistory block +*/ +static void clearHistory(PgHistory *pHist){ + sqlite3_free(pHist->pOrig); + sqlite3_free(pHist->pStmt); + pHist->pOrig = 0; + pHist->pStmt = 0; +} +#else +#define clearHistory(x) +#endif + +/* +** Forward declaration +*/ +static int syncJournal(Pager*); + +/* +** Unlink pPg from its hash chain.
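The PENDING_BYTE adjustment above is easiest to see with concrete numbers; the sketch below restates just that arithmetic with hypothetical page and lock-byte values (it is an illustration, not the SQLite routine).

    #include <assert.h>
    #include <stdint.h>

    /* Sketch: derive a page count from a file size the way the code above does. */
    static int64_t pageCountSketch(int64_t fileSize, int pageSize, int64_t pendingByte){
      int64_t n;
      if( fileSize>0 && fileSize<pageSize ){
        n = 1;                     /* a short but non-empty file counts as one page */
      }else{
        n = fileSize / pageSize;   /* whole pages only */
      }
      if( n==pendingByte/pageSize ){
        n++;                       /* count the lock page sitting just past EOF too */
      }
      return n;
    }

    int main(void){
      /* Hypothetical 1024-byte pages with the pending byte at 0x40000000. */
      assert( pageCountSketch(4096, 1024, 0x40000000)==4 );
      assert( pageCountSketch(100, 1024, 0x40000000)==1 );
      /* A file ending exactly at the pending-byte page gets one extra page. */
      assert( pageCountSketch(0x40000000, 1024, 0x40000000)==0x40000000/1024 + 1 );
      return 0;
    }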
Also set the page number to 0 to indicate +** that the page is not part of any hash chain. This is required because the +** sqlite3PagerMovepage() routine can leave a page in the +** pNextFree/pPrevFree list that is not a part of any hash-chain. +*/ +static void unlinkHashChain(Pager *pPager, PgHdr *pPg){ + if( pPg->pgno==0 ){ + assert( pPg->pNextHash==0 && pPg->pPrevHash==0 ); + return; + } + if( pPg->pNextHash ){ + pPg->pNextHash->pPrevHash = pPg->pPrevHash; + } + if( pPg->pPrevHash ){ + assert( pPager->aHash[pPg->pgno & (pPager->nHash-1)]!=pPg ); + pPg->pPrevHash->pNextHash = pPg->pNextHash; + }else{ + int h = pPg->pgno & (pPager->nHash-1); + pPager->aHash[h] = pPg->pNextHash; + } + if( MEMDB ){ + clearHistory(PGHDR_TO_HIST(pPg, pPager)); + } + pPg->pgno = 0; + pPg->pNextHash = pPg->pPrevHash = 0; +} + +/* +** Unlink a page from the free list (the list of all pages where nRef==0) +** and from its hash collision chain. +*/ +static void unlinkPage(PgHdr *pPg){ + Pager *pPager = pPg->pPager; + + /* Unlink from free page list */ + lruListRemove(pPg); + + /* Unlink from the pgno hash table */ + unlinkHashChain(pPager, pPg); +} + +/* +** This routine is used to truncate the cache when a database +** is truncated. Drop from the cache all pages whose pgno is +** larger than pPager->dbSize and is unreferenced. +** +** Referenced pages larger than pPager->dbSize are zeroed. +** +** Actually, at the point this routine is called, it would be +** an error to have a referenced page. But rather than delete +** that page and guarantee a subsequent segfault, it seems better +** to zero it and hope that we error out sanely. +*/ +static void pager_truncate_cache(Pager *pPager){ + PgHdr *pPg; + PgHdr **ppPg; + int dbSize = pPager->dbSize; + + ppPg = &pPager->pAll; + while( (pPg = *ppPg)!=0 ){ + if( pPg->pgno<=dbSize ){ + ppPg = &pPg->pNextAll; + }else if( pPg->nRef>0 ){ + memset(PGHDR_TO_DATA(pPg), 0, pPager->pageSize); + ppPg = &pPg->pNextAll; + }else{ + *ppPg = pPg->pNextAll; + IOTRACE(("PGFREE %p %d\n", pPager, pPg->pgno)); + PAGER_INCR(sqlite3_pager_pgfree_count); + unlinkPage(pPg); + makeClean(pPg); + sqlite3_free(pPg->pData); + sqlite3_free(pPg); + pPager->nPage--; + } + } +} + +/* +** Try to obtain a lock on a file. Invoke the busy callback if the lock +** is currently not available. Repeat until the busy callback returns +** false or until the lock succeeds. +** +** Return SQLITE_OK on success and an error code if we cannot obtain +** the lock. +*/ +static int pager_wait_on_lock(Pager *pPager, int locktype){ + int rc; + + /* The OS lock values must be the same as the Pager lock values */ + assert( PAGER_SHARED==SHARED_LOCK ); + assert( PAGER_RESERVED==RESERVED_LOCK ); + assert( PAGER_EXCLUSIVE==EXCLUSIVE_LOCK ); + + /* If the file is currently unlocked then the size must be unknown */ + assert( pPager->state>=PAGER_SHARED || pPager->dbSize<0 || MEMDB ); + + if( pPager->state>=locktype ){ + rc = SQLITE_OK; + }else{ + if( pPager->pBusyHandler ) pPager->pBusyHandler->nBusy = 0; + do { + rc = sqlite3OsLock(pPager->fd, locktype); + }while( rc==SQLITE_BUSY && sqlite3InvokeBusyHandler(pPager->pBusyHandler) ); + if( rc==SQLITE_OK ){ + pPager->state = locktype; + IOTRACE(("LOCK %p %d\n", pPager, locktype)) + } + } + return rc; +} + +/* +** Truncate the file to the number of pages specified. 
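The retry loop in pager_wait_on_lock() above is a generic pattern: attempt the lock, and keep retrying for as long as the lock is busy and the registered busy callback asks for another attempt. Here is a toy, self-contained version of the same loop shape, with placeholder functions standing in for sqlite3OsLock() and the busy handler.

    #include <stdio.h>

    #define OK    0
    #define BUSY  5   /* toy stand-ins for SQLITE_OK / SQLITE_BUSY */

    static int attempts = 0;

    /* Toy lock: pretend the lock is contended for the first two attempts. */
    static int tryLock(int locktype){
      (void)locktype;
      return (++attempts < 3) ? BUSY : OK;
    }

    /* Toy busy handler: ask for up to five retries, then give up. */
    static int busyHandler(void *pArg){
      int *pCount = (int*)pArg;
      return (*pCount)++ < 5;
    }

    int main(void){
      int rc, nBusy = 0;
      do{
        rc = tryLock(1);
      }while( rc==BUSY && busyHandler(&nBusy) );
      printf("rc=%d after %d attempts\n", rc, attempts);
      return rc==OK ? 0 : 1;
    }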
+*/ +int sqlite3PagerTruncate(Pager *pPager, Pgno nPage){ + int rc; + assert( pPager->state>=PAGER_SHARED || MEMDB ); + sqlite3PagerPagecount(pPager); + if( pPager->errCode ){ + rc = pPager->errCode; + return rc; + } + if( nPage>=(unsigned)pPager->dbSize ){ + return SQLITE_OK; + } + if( MEMDB ){ + pPager->dbSize = nPage; + pager_truncate_cache(pPager); + return SQLITE_OK; + } + pagerEnter(pPager); + rc = syncJournal(pPager); + pagerLeave(pPager); + if( rc!=SQLITE_OK ){ + return rc; + } + + /* Get an exclusive lock on the database before truncating. */ + pagerEnter(pPager); + rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK); + pagerLeave(pPager); + if( rc!=SQLITE_OK ){ + return rc; + } + + rc = pager_truncate(pPager, nPage); + return rc; +} + +/* +** Shutdown the page cache. Free all memory and close all files. +** +** If a transaction was in progress when this routine is called, that +** transaction is rolled back. All outstanding pages are invalidated +** and their memory is freed. Any attempt to use a page associated +** with this page cache after this function returns will likely +** result in a coredump. +** +** This function always succeeds. If a transaction is active an attempt +** is made to roll it back. If an error occurs during the rollback +** a hot journal may be left in the filesystem but no error is returned +** to the caller. +*/ +int sqlite3PagerClose(Pager *pPager){ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + if( !MEMDB ){ + sqlite3_mutex *mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM2); + sqlite3_mutex_enter(mutex); + if( pPager->pPrev ){ + pPager->pPrev->pNext = pPager->pNext; + }else{ + sqlite3PagerList = pPager->pNext; + } + if( pPager->pNext ){ + pPager->pNext->pPrev = pPager->pPrev; + } + sqlite3_mutex_leave(mutex); + } +#endif + + disable_simulated_io_errors(); + pPager->errCode = 0; + pPager->exclusiveMode = 0; + pager_reset(pPager); + pagerUnlockAndRollback(pPager); + enable_simulated_io_errors(); + PAGERTRACE2("CLOSE %d\n", PAGERID(pPager)); + IOTRACE(("CLOSE %p\n", pPager)) + assert( pPager->errCode || (pPager->journalOpen==0 && pPager->stmtOpen==0) ); + if( pPager->journalOpen ){ + sqlite3OsClose(pPager->jfd); + } + sqlite3BitvecDestroy(pPager->pInJournal); + if( pPager->stmtOpen ){ + sqlite3OsClose(pPager->stfd); + } + sqlite3OsClose(pPager->fd); + /* Temp files are automatically deleted by the OS + ** if( pPager->tempFile ){ + ** sqlite3OsDelete(pPager->zFilename); + ** } + */ + + sqlite3_free(pPager->aHash); + sqlite3_free(pPager->pTmpSpace); + sqlite3_free(pPager); + return SQLITE_OK; +} + +#if !defined(NDEBUG) || defined(SQLITE_TEST) +/* +** Return the page number for the given page data. +*/ +Pgno sqlite3PagerPagenumber(DbPage *p){ + return p->pgno; +} +#endif + +/* +** The page_ref() function increments the reference count for a page. +** If the page is currently on the freelist (the reference count is zero) then +** remove it from the freelist. +** +** For non-test systems, page_ref() is a macro that calls _page_ref() +** online of the reference count is zero. For test systems, page_ref() +** is a real function so that we can set breakpoints and trace it. +*/ +static void _page_ref(PgHdr *pPg){ + if( pPg->nRef==0 ){ + /* The page is currently on the freelist. Remove it. 
*/ + lruListRemove(pPg); + pPg->pPager->nRef++; + } + pPg->nRef++; +} +#ifdef SQLITE_DEBUG + static void page_ref(PgHdr *pPg){ + if( pPg->nRef==0 ){ + _page_ref(pPg); + }else{ + pPg->nRef++; + } + } +#else +# define page_ref(P) ((P)->nRef==0?_page_ref(P):(void)(P)->nRef++) +#endif + +/* +** Increment the reference count for a page. The input pointer is +** a reference to the page data. +*/ +int sqlite3PagerRef(DbPage *pPg){ + pagerEnter(pPg->pPager); + page_ref(pPg); + pagerLeave(pPg->pPager); + return SQLITE_OK; +} + +/* +** Sync the journal. In other words, make sure all the pages that have +** been written to the journal have actually reached the surface of the +** disk. It is not safe to modify the original database file until after +** the journal has been synced. If the original database is modified before +** the journal is synced and a power failure occurs, the unsynced journal +** data would be lost and we would be unable to completely rollback the +** database changes. Database corruption would occur. +** +** This routine also updates the nRec field in the header of the journal. +** (See comments on the pager_playback() routine for additional information.) +** If the sync mode is FULL, two syncs will occur. First the whole journal +** is synced, then the nRec field is updated, then a second sync occurs. +** +** For temporary databases, we do not care if we are able to rollback +** after a power failure, so no sync occurs. +** +** If the IOCAP_SEQUENTIAL flag is set for the persistent media on which +** the database is stored, then OsSync() is never called on the journal +** file. In this case all that is required is to update the nRec field in +** the journal header. +** +** This routine clears the needSync field of every page current held in +** memory. +*/ +static int syncJournal(Pager *pPager){ + PgHdr *pPg; + int rc = SQLITE_OK; + + + /* Sync the journal before modifying the main database + ** (assuming there is a journal and it needs to be synced.) + */ + if( pPager->needSync ){ + if( !pPager->tempFile ){ + int iDc = sqlite3OsDeviceCharacteristics(pPager->fd); + assert( pPager->journalOpen ); + + /* assert( !pPager->noSync ); // noSync might be set if synchronous + ** was turned off after the transaction was started. Ticket #615 */ +#ifndef NDEBUG + { + /* Make sure the pPager->nRec counter we are keeping agrees + ** with the nRec computed from the size of the journal file. + */ + i64 jSz; + rc = sqlite3OsFileSize(pPager->jfd, &jSz); + if( rc!=0 ) return rc; + assert( pPager->journalOff==jSz ); + } +#endif + if( 0==(iDc&SQLITE_IOCAP_SAFE_APPEND) ){ + /* Write the nRec value into the journal file header. If in + ** full-synchronous mode, sync the journal first. This ensures that + ** all data has really hit the disk before nRec is updated to mark + ** it as a candidate for rollback. + ** + ** This is not required if the persistent media supports the + ** SAFE_APPEND property. Because in this case it is not possible + ** for garbage data to be appended to the file, the nRec field + ** is populated with 0xFFFFFFFF when the journal header is written + ** and never needs to be updated. 
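The comment above pins down an ordering rather than an algorithm, so a sketch that simply encodes that ordering may be the clearest restatement: sync the journal pages, then patch nRec into the header, sync again, and only then start writing the database file. The functions below are placeholders that just log the steps; they are not SQLite's I/O layer.

    #include <stdio.h>

    /* Placeholder I/O steps; each logs what a real implementation would do. */
    static void syncJournalFile(void)      { puts("fsync(journal)  /* page images durable */"); }
    static void writeNRecIntoHeader(int n) { printf("write nRec=%d into journal header\n", n); }
    static void writeDatabasePages(void)   { puts("write modified pages to the database file"); }

    /* Full-sync ordering described in the comment above. */
    static void commitJournalFullSync(int nRec){
      syncJournalFile();          /* 1: make sure all journal records are on disk */
      writeNRecIntoHeader(nRec);  /* 2: only now mark them as valid for rollback  */
      syncJournalFile();          /* 3: make the header update itself durable     */
      writeDatabasePages();       /* 4: now it is safe to modify the database     */
    }

    int main(void){
      commitJournalFullSync(42);
      return 0;
    }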
+ */ + i64 jrnlOff; + if( pPager->fullSync && 0==(iDc&SQLITE_IOCAP_SEQUENTIAL) ){ + PAGERTRACE2("SYNC journal of %d\n", PAGERID(pPager)); + IOTRACE(("JSYNC %p\n", pPager)) + rc = sqlite3OsSync(pPager->jfd, pPager->sync_flags); + if( rc!=0 ) return rc; + } + + jrnlOff = pPager->journalHdr + sizeof(aJournalMagic); + IOTRACE(("JHDR %p %lld %d\n", pPager, jrnlOff, 4)); + rc = write32bits(pPager->jfd, jrnlOff, pPager->nRec); + if( rc ) return rc; + } + if( 0==(iDc&SQLITE_IOCAP_SEQUENTIAL) ){ + PAGERTRACE2("SYNC journal of %d\n", PAGERID(pPager)); + IOTRACE(("JSYNC %p\n", pPager)) + rc = sqlite3OsSync(pPager->jfd, pPager->sync_flags| + (pPager->sync_flags==SQLITE_SYNC_FULL?SQLITE_SYNC_DATAONLY:0) + ); + if( rc!=0 ) return rc; + } + pPager->journalStarted = 1; + } + pPager->needSync = 0; + + /* Erase the needSync flag from every page. + */ + for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){ + pPg->needSync = 0; + } + lruListSetFirstSynced(pPager); + } + +#ifndef NDEBUG + /* If the Pager.needSync flag is clear then the PgHdr.needSync + ** flag must also be clear for all pages. Verify that this + ** invariant is true. + */ + else{ + for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){ + assert( pPg->needSync==0 ); + } + assert( pPager->lru.pFirstSynced==pPager->lru.pFirst ); + } +#endif + + return rc; +} + +/* +** Merge two lists of pages connected by pDirty and in pgno order. +** Do not bother fixing the pPrevDirty pointers. +*/ +static PgHdr *merge_pagelist(PgHdr *pA, PgHdr *pB){ + PgHdr result, *pTail; + pTail = &result; + while( pA && pB ){ + if( pA->pgno<pB->pgno ){ + pTail->pDirty = pA; + pTail = pA; + pA = pA->pDirty; + }else{ + pTail->pDirty = pB; + pTail = pB; + pB = pB->pDirty; + } + } + if( pA ){ + pTail->pDirty = pA; + }else if( pB ){ + pTail->pDirty = pB; + }else{ + pTail->pDirty = 0; + } + return result.pDirty; +} + +/* +** Sort the list of pages in ascending order by pgno. Pages are +** connected by pDirty pointers. The pPrevDirty pointers are +** corrupted by this sort. +*/ +#define N_SORT_BUCKET_ALLOC 25 +#define N_SORT_BUCKET 25 +#ifdef SQLITE_TEST + int sqlite3_pager_n_sort_bucket = 0; + #undef N_SORT_BUCKET + #define N_SORT_BUCKET \ + (sqlite3_pager_n_sort_bucket?sqlite3_pager_n_sort_bucket:N_SORT_BUCKET_ALLOC) +#endif +static PgHdr *sort_pagelist(PgHdr *pIn){ + PgHdr *a[N_SORT_BUCKET_ALLOC], *p; + int i; + memset(a, 0, sizeof(a)); + while( pIn ){ + p = pIn; + pIn = p->pDirty; + p->pDirty = 0; + for(i=0; i<N_SORT_BUCKET-1; i++){ + if( a[i]==0 ){ + a[i] = p; + break; + }else{ + p = merge_pagelist(a[i], p); + a[i] = 0; + } + } + if( i==N_SORT_BUCKET-1 ){ + /* To get here there need to be 2^(N_SORT_BUCKET) elements in the + ** input list, which is possible but impractical. + */ + a[i] = merge_pagelist(a[i], p); + } + } + p = a[0]; + for(i=1; i<N_SORT_BUCKET; i++){ + p = merge_pagelist(p, a[i]); + } + return p; +} + +/* +** Given a list of pages (connected by the PgHdr.pDirty pointer) write +** every one of those pages out to the database file and mark them all +** as clean. +*/ +static int pager_write_pagelist(PgHdr *pList){ + Pager *pPager; + PgHdr *p; + int rc; + + if( pList==0 ) return SQLITE_OK; + pPager = pList->pPager; + + /* At this point there may be either a RESERVED or EXCLUSIVE lock on the + ** database file. If there is already an EXCLUSIVE lock, the following + ** calls to sqlite3OsLock() are no-ops. + ** + ** Moving the lock from RESERVED to EXCLUSIVE actually involves going + ** through an intermediate state PENDING. A PENDING lock prevents new + ** readers from attaching to the database but is insufficient for us to + ** write. The idea of a PENDING lock is to prevent new readers from + ** coming in while we wait for existing readers to clear. + ** + ** While the pager is in the RESERVED state, the original database file + ** is unchanged and we can rollback without having to playback the + ** journal into the original database file. Once we transition to + ** EXCLUSIVE, it means the database file has been changed and any rollback + ** will require a journal playback.
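merge_pagelist() and sort_pagelist() above are a bucket-based merge sort over a singly linked list keyed by page number. The two-list merge step is the heart of it; here it is as a stand-alone sketch with a simplified node type (illustrative only).

    #include <stdio.h>

    typedef struct Node { unsigned pgno; struct Node *pDirty; } Node;

    /* Merge two lists already sorted by pgno, smallest first (cf. merge_pagelist). */
    static Node *mergeSorted(Node *pA, Node *pB){
      Node head, *pTail = &head;
      while( pA && pB ){
        if( pA->pgno < pB->pgno ){ pTail->pDirty = pA; pTail = pA; pA = pA->pDirty; }
        else                     { pTail->pDirty = pB; pTail = pB; pB = pB->pDirty; }
      }
      pTail->pDirty = pA ? pA : pB;
      return head.pDirty;
    }

    int main(void){
      Node a3={3,0}, a1={1,&a3};          /* list A: 1 -> 3 */
      Node b4={4,0}, b2={2,&b4};          /* list B: 2 -> 4 */
      for(Node *p=mergeSorted(&a1,&b2); p; p=p->pDirty) printf("%u ", p->pgno);
      putchar('\n');                      /* prints: 1 2 3 4 */
      return 0;
    }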
+ */ + rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK); + if( rc!=SQLITE_OK ){ + return rc; + } + + pList = sort_pagelist(pList); + for(p=pList; p; p=p->pDirty){ + assert( p->dirty ); + p->dirty = 0; + } + while( pList ){ + + /* If the file has not yet been opened, open it now. */ + if( !pPager->fd->pMethods ){ + assert(pPager->tempFile); + rc = sqlite3PagerOpentemp(pPager->pVfs, pPager->fd, pPager->zFilename, + pPager->vfsFlags); + if( rc ) return rc; + } + + /* If there are dirty pages in the page cache with page numbers greater + ** than Pager.dbSize, this means sqlite3PagerTruncate() was called to + ** make the file smaller (presumably by auto-vacuum code). Do not write + ** any such pages to the file. + */ + if( pList->pgno<=pPager->dbSize ){ + i64 offset = (pList->pgno-1)*(i64)pPager->pageSize; + char *pData = CODEC2(pPager, PGHDR_TO_DATA(pList), pList->pgno, 6); + PAGERTRACE4("STORE %d page %d hash(%08x)\n", + PAGERID(pPager), pList->pgno, pager_pagehash(pList)); + IOTRACE(("PGOUT %p %d\n", pPager, pList->pgno)); + rc = sqlite3OsWrite(pPager->fd, pData, pPager->pageSize, offset); + PAGER_INCR(sqlite3_pager_writedb_count); + PAGER_INCR(pPager->nWrite); + if( pList->pgno==1 ){ + memcpy(&pPager->dbFileVers, &pData[24], sizeof(pPager->dbFileVers)); + } + } +#ifndef NDEBUG + else{ + PAGERTRACE3("NOSTORE %d page %d\n", PAGERID(pPager), pList->pgno); + } +#endif + if( rc ) return rc; +#ifdef SQLITE_CHECK_PAGES + pList->pageHash = pager_pagehash(pList); +#endif + pList = pList->pDirty; + } + return SQLITE_OK; +} + +/* +** Collect every dirty page into a dirty list and +** return a pointer to the head of that list. All pages are +** collected even if they are still in use. +*/ +static PgHdr *pager_get_all_dirty_pages(Pager *pPager){ + +#ifndef NDEBUG + /* Verify the sanity of the dirty list when we are running + ** in debugging mode. This is expensive, so do not + ** do this on a normal build. */ + int n1 = 0; + int n2 = 0; + PgHdr *p; + for(p=pPager->pAll; p; p=p->pNextAll){ if( p->dirty ) n1++; } + for(p=pPager->pDirty; p; p=p->pDirty){ n2++; } + assert( n1==n2 ); +#endif + + return pPager->pDirty; +} + +/* +** Return TRUE if there is a hot journal on the given pager. +** A hot journal is one that needs to be played back. +** +** If the current size of the database file is 0 but a journal file +** exists, that is probably an old journal left over from a prior +** database with the same name. Just delete the journal. +*/ +static int hasHotJournal(Pager *pPager){ + sqlite3_vfs *pVfs = pPager->pVfs; + if( !pPager->useJournal ) return 0; + if( !pPager->fd->pMethods ) return 0; + if( !sqlite3OsAccess(pVfs, pPager->zJournal, SQLITE_ACCESS_EXISTS) ){ + return 0; + } + if( sqlite3OsCheckReservedLock(pPager->fd) ){ + return 0; + } + if( sqlite3PagerPagecount(pPager)==0 ){ + sqlite3OsDelete(pVfs, pPager->zJournal, 0); + return 0; + }else{ + return 1; + } +} + +/* +** Try to find a page in the cache that can be recycled. +** +** This routine may return SQLITE_IOERR, SQLITE_FULL or SQLITE_OK. It +** does not set the pPager->errCode variable. +*/ +static int pager_recycle(Pager *pPager, PgHdr **ppPg){ + PgHdr *pPg; + *ppPg = 0; + + /* It is illegal to call this function unless the pager object + ** pointed to by pPager has at least one free page (page with nRef==0). + */ + assert(!MEMDB); + assert(pPager->lru.pFirst); + + /* Find a page to recycle. Try to locate a page that does not + ** require us to do an fsync() on the journal. 
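hasHotJournal() above reduces to a short chain of predicates evaluated in order; the restatement below passes the individual facts in as flags so the order of the tests is explicit. It mirrors the decision only; the real routine also deletes a leftover journal when the database is empty.

    #include <assert.h>

    /* Restatement of the checks in hasHotJournal(), in the same order. */
    static int isHotJournal(int useJournal,     /* journaling enabled for this pager */
                            int fileIsOpen,     /* main database file is open */
                            int journalExists,  /* a *-journal file exists on disk */
                            int reservedLock,   /* some process holds RESERVED or higher */
                            int pageCount){     /* current size of the database in pages */
      if( !useJournal )    return 0;
      if( !fileIsOpen )    return 0;
      if( !journalExists ) return 0;
      if( reservedLock )   return 0;   /* a live writer owns the journal; not hot */
      if( pageCount==0 )   return 0;   /* leftover journal from a deleted database */
      return 1;
    }

    int main(void){
      assert( isHotJournal(1,1,1,0,10)==1 );   /* orphaned journal: must be played back */
      assert( isHotJournal(1,1,1,1,10)==0 );   /* writer still active */
      assert( isHotJournal(1,1,0,0,10)==0 );   /* no journal file at all */
      return 0;
    }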
+ */ + pPg = pPager->lru.pFirstSynced; + + /* If we could not find a page that does not require an fsync() + ** on the journal file then fsync the journal file. This is a + ** very slow operation, so we work hard to avoid it. But sometimes + ** it can't be helped. + */ + if( pPg==0 && pPager->lru.pFirst){ + int iDc = sqlite3OsDeviceCharacteristics(pPager->fd); + int rc = syncJournal(pPager); + if( rc!=0 ){ + return rc; + } + if( pPager->fullSync && 0==(iDc&SQLITE_IOCAP_SAFE_APPEND) ){ + /* If in full-sync mode, write a new journal header into the + ** journal file. This is done to avoid ever modifying a journal + ** header that is involved in the rollback of pages that have + ** already been written to the database (in case the header is + ** trashed when the nRec field is updated). + */ + pPager->nRec = 0; + assert( pPager->journalOff > 0 ); + assert( pPager->doNotSync==0 ); + rc = writeJournalHdr(pPager); + if( rc!=0 ){ + return rc; + } + } + pPg = pPager->lru.pFirst; + } + + assert( pPg->nRef==0 ); + + /* Write the page to the database file if it is dirty. + */ + if( pPg->dirty ){ + int rc; + assert( pPg->needSync==0 ); + makeClean(pPg); + pPg->dirty = 1; + pPg->pDirty = 0; + rc = pager_write_pagelist( pPg ); + pPg->dirty = 0; + if( rc!=SQLITE_OK ){ + return rc; + } + } + assert( pPg->dirty==0 ); + + /* If the page we are recycling is marked as alwaysRollback, then + ** set the global alwaysRollback flag, thus disabling the + ** sqlite3PagerDontRollback() optimization for the rest of this transaction. + ** It is necessary to do this because the page marked alwaysRollback + ** might be reloaded at a later time but at that point we won't remember + ** that it was marked alwaysRollback. This means that all pages must + ** be marked as alwaysRollback from here on out. + */ + if( pPg->alwaysRollback ){ + IOTRACE(("ALWAYS_ROLLBACK %p\n", pPager)) + pPager->alwaysRollback = 1; + } + + /* Unlink the old page from the free list and the hash table + */ + unlinkPage(pPg); + assert( pPg->pgno==0 ); + + *ppPg = pPg; + return SQLITE_OK; +} + +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT +/* +** This function is called to free superfluous dynamically allocated memory +** held by the pager system. Memory in use by any SQLite pager allocated +** by the current thread may be sqlite3_free()ed. +** +** nReq is the number of bytes of memory required. Once this much has +** been released, the function returns. The return value is the total number +** of bytes of memory released. +*/ +int sqlite3PagerReleaseMemory(int nReq){ + int nReleased = 0; /* Bytes of memory released so far */ + sqlite3_mutex *mutex; /* The MEM2 mutex */ + Pager *pPager; /* For looping over pagers */ + BusyHandler *savedBusy; /* Saved copy of the busy handler */ + int rc = SQLITE_OK; + + /* Acquire the memory-management mutex + */ + mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM2); + sqlite3_mutex_enter(mutex); + + /* Signal all database connections that memory management wants + ** to have access to the pagers. + */ + for(pPager=sqlite3PagerList; pPager; pPager=pPager->pNext){ + pPager->iInUseMM = 1; + } + + while( rc==SQLITE_OK && (nReq<0 || nReleased<nReq) ){ + PgHdr *pPg; + PgHdr *pRecycled; + + sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + pPg = sqlite3LruPageList.pFirstSynced; + while( pPg && (pPg->needSync || pPg->pPager->iInUseDB) ){ + pPg = pPg->gfree.pNext; + } + if( !pPg ){ + pPg = sqlite3LruPageList.pFirst; + while( pPg && pPg->pPager->iInUseDB ){ + pPg = pPg->gfree.pNext; + } + } + sqlite3_mutex_leave(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU)); + + /* If pPg==0, then the block above has failed to find a page to + ** recycle.
In this case return early - no further memory will + ** be released. + */ + if( !pPg ) break; + + pPager = pPg->pPager; + assert(!pPg->needSync || pPg==pPager->lru.pFirst); + assert(pPg->needSync || pPg==pPager->lru.pFirstSynced); + + savedBusy = pPager->pBusyHandler; + pPager->pBusyHandler = 0; + rc = pager_recycle(pPager, &pRecycled); + pPager->pBusyHandler = savedBusy; + assert(pRecycled==pPg || rc!=SQLITE_OK); + if( rc==SQLITE_OK ){ + /* We've found a page to free. At this point the page has been + ** removed from the page hash-table, free-list and synced-list + ** (pFirstSynced). It is still in the all pages (pAll) list. + ** Remove it from this list before freeing. + ** + ** Todo: Check the Pager.pStmt list to make sure this is Ok. It + ** probably is though. + */ + PgHdr *pTmp; + assert( pPg ); + if( pPg==pPager->pAll ){ + pPager->pAll = pPg->pNextAll; + }else{ + for( pTmp=pPager->pAll; pTmp->pNextAll!=pPg; pTmp=pTmp->pNextAll ){} + pTmp->pNextAll = pPg->pNextAll; + } + nReleased += ( + sizeof(*pPg) + pPager->pageSize + + sizeof(u32) + pPager->nExtra + + MEMDB*sizeof(PgHistory) + ); + IOTRACE(("PGFREE %p %d *\n", pPager, pPg->pgno)); + PAGER_INCR(sqlite3_pager_pgfree_count); + sqlite3_free(pPg->pData); + sqlite3_free(pPg); + pPager->nPage--; + }else{ + /* An error occured whilst writing to the database file or + ** journal in pager_recycle(). The error is not returned to the + ** caller of this function. Instead, set the Pager.errCode variable. + ** The error will be returned to the user (or users, in the case + ** of a shared pager cache) of the pager for which the error occured. + */ + assert( + (rc&0xff)==SQLITE_IOERR || + rc==SQLITE_FULL || + rc==SQLITE_BUSY + ); + assert( pPager->state>=PAGER_RESERVED ); + pager_error(pPager, rc); + } + } + + /* Clear the memory management flags and release the mutex + */ + for(pPager=sqlite3PagerList; pPager; pPager=pPager->pNext){ + pPager->iInUseMM = 0; + } + sqlite3_mutex_leave(mutex); + + /* Return the number of bytes released + */ + return nReleased; +} +#endif /* SQLITE_ENABLE_MEMORY_MANAGEMENT */ + +/* +** Read the content of page pPg out of the database file. +*/ +static int readDbPage(Pager *pPager, PgHdr *pPg, Pgno pgno){ + int rc; + i64 offset; + assert( MEMDB==0 ); + assert(pPager->fd->pMethods||pPager->tempFile); + if( !pPager->fd->pMethods ){ + return SQLITE_IOERR_SHORT_READ; + } + offset = (pgno-1)*(i64)pPager->pageSize; + rc = sqlite3OsRead(pPager->fd, PGHDR_TO_DATA(pPg), pPager->pageSize, offset); + PAGER_INCR(sqlite3_pager_readdb_count); + PAGER_INCR(pPager->nRead); + IOTRACE(("PGIN %p %d\n", pPager, pgno)); + if( pgno==1 ){ + memcpy(&pPager->dbFileVers, &((u8*)PGHDR_TO_DATA(pPg))[24], + sizeof(pPager->dbFileVers)); + } + CODEC1(pPager, PGHDR_TO_DATA(pPg), pPg->pgno, 3); + PAGERTRACE4("FETCH %d page %d hash(%08x)\n", + PAGERID(pPager), pPg->pgno, pager_pagehash(pPg)); + return rc; +} + + +/* +** This function is called to obtain the shared lock required before +** data may be read from the pager cache. If the shared lock has already +** been obtained, this function is a no-op. +** +** Immediately after obtaining the shared lock (if required), this function +** checks for a hot-journal file. If one is found, an emergency rollback +** is performed immediately. +*/ +static int pagerSharedLock(Pager *pPager){ + int rc = SQLITE_OK; + int isHot = 0; + + /* If this database is opened for exclusive access, has no outstanding + ** page references and is in an error-state, now is the chance to clear + ** the error. 
Discard the contents of the pager-cache and treat any + ** open journal file as a hot-journal. + */ + if( !MEMDB && pPager->exclusiveMode && pPager->nRef==0 && pPager->errCode ){ + if( pPager->journalOpen ){ + isHot = 1; + } + pager_reset(pPager); + pPager->errCode = SQLITE_OK; + } + + /* If the pager is still in an error state, do not proceed. The error + ** state will be cleared at some point in the future when all page + ** references are dropped and the cache can be discarded. + */ + if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){ + return pPager->errCode; + } + + if( pPager->state==PAGER_UNLOCK || isHot ){ + sqlite3_vfs *pVfs = pPager->pVfs; + if( !MEMDB ){ + assert( pPager->nRef==0 ); + if( !pPager->noReadlock ){ + rc = pager_wait_on_lock(pPager, SHARED_LOCK); + if( rc!=SQLITE_OK ){ + return pager_error(pPager, rc); + } + assert( pPager->state>=SHARED_LOCK ); + } + + /* If a journal file exists, and there is no RESERVED lock on the + ** database file, then it either needs to be played back or deleted. + */ + if( hasHotJournal(pPager) || isHot ){ + /* Get an EXCLUSIVE lock on the database file. At this point it is + ** important that a RESERVED lock is not obtained on the way to the + ** EXCLUSIVE lock. If it were, another process might open the + ** database file, detect the RESERVED lock, and conclude that the + ** database is safe to read while this process is still rolling it + ** back. + ** + ** Because the intermediate RESERVED lock is not requested, the + ** second process will get to this point in the code and fail to + ** obtain its own EXCLUSIVE lock on the database file. + */ + if( pPager->state<EXCLUSIVE_LOCK ){ + rc = sqlite3OsLock(pPager->fd, EXCLUSIVE_LOCK); + if( rc!=SQLITE_OK ){ + pager_unlock(pPager); + return pager_error(pPager, rc); + } + pPager->state = PAGER_EXCLUSIVE; + } + + /* Open the journal for reading only. Return SQLITE_BUSY if + ** we are unable to open the journal file. + ** + ** The journal file does not need to be locked itself. The + ** journal file is never open unless the main database file holds + ** a write lock, so there is never any chance of two or more + ** processes opening the journal at the same time. + ** + ** Open the journal for read/write access. This is because in + ** exclusive-access mode the file descriptor will be kept open and + ** possibly used for a transaction later on. On some systems, the + ** OsTruncate() call used in exclusive-access mode also requires + ** a read/write file handle. + */ + if( !isHot ){ + rc = SQLITE_BUSY; + if( sqlite3OsAccess(pVfs, pPager->zJournal, SQLITE_ACCESS_EXISTS) ){ + int fout = 0; + int f = SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_JOURNAL; + assert( !pPager->tempFile ); + rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, f, &fout); + assert( rc!=SQLITE_OK || pPager->jfd->pMethods ); + if( fout&SQLITE_OPEN_READONLY ){ + rc = SQLITE_BUSY; + sqlite3OsClose(pPager->jfd); + } + } + } + if( rc!=SQLITE_OK ){ + pager_unlock(pPager); + switch( rc ){ + case SQLITE_NOMEM: + case SQLITE_IOERR_UNLOCK: + case SQLITE_IOERR_NOMEM: + return rc; + default: + return SQLITE_BUSY; + } + } + pPager->journalOpen = 1; + pPager->journalStarted = 0; + pPager->journalOff = 0; + pPager->setMaster = 0; + pPager->journalHdr = 0; + + /* Playback and delete the journal. Drop the database write + ** lock and reacquire the read lock.
+ */ + rc = pager_playback(pPager, 1); + if( rc!=SQLITE_OK ){ + return pager_error(pPager, rc); + } + assert(pPager->state==PAGER_SHARED || + (pPager->exclusiveMode && pPager->state>PAGER_SHARED) + ); + } + + if( pPager->pAll ){ + /* The shared-lock has just been acquired on the database file + ** and there are already pages in the cache (from a previous + ** read or write transaction). Check to see if the database + ** has been modified. If the database has changed, flush the + ** cache. + ** + ** Database changes are detected by looking at 16 bytes beginning + ** at offset 24 into the file. The first 4 of these 16 bytes are + ** a 32-bit counter that is incremented with each change. The + ** other bytes change randomly with each file change when + ** a codec is in use. + ** + ** There is a vanishingly small chance that a change will not be + ** detected. The chance of an undetected change is so small that + ** it can be neglected. + */ + char dbFileVers[sizeof(pPager->dbFileVers)]; + sqlite3PagerPagecount(pPager); + + if( pPager->errCode ){ + return pPager->errCode; + } + + if( pPager->dbSize>0 ){ + IOTRACE(("CKVERS %p %d\n", pPager, sizeof(dbFileVers))); + rc = sqlite3OsRead(pPager->fd, &dbFileVers, sizeof(dbFileVers), 24); + if( rc!=SQLITE_OK ){ + return rc; + } + }else{ + memset(dbFileVers, 0, sizeof(dbFileVers)); + } + + if( memcmp(pPager->dbFileVers, dbFileVers, sizeof(dbFileVers))!=0 ){ + pager_reset(pPager); + } + } + } + assert( pPager->exclusiveMode || pPager->state<=PAGER_SHARED ); + if( pPager->state==PAGER_UNLOCK ){ + pPager->state = PAGER_SHARED; + } + } + + return rc; +} + +/* +** Allocate a PgHdr object. Either create a new one or reuse +** an existing one that is not otherwise in use. +** +** A new PgHdr structure is created if any of the following are +** true: +** +** (1) We have not exceeded our maximum allocated cache size +** as set by the "PRAGMA cache_size" command. +** +** (2) There are no unused PgHdr objects available at this time. +** +** (3) This is an in-memory database. +** +** (4) There are no PgHdr objects that do not require a journal +** file sync and a sync of the journal file is currently +** prohibited. +** +** Otherwise, reuse an existing PgHdr. In other words, reuse an +** existing PgHdr if all of the following are true: +** +** (1) We have reached or exceeded the maximum cache size +** allowed by "PRAGMA cache_size". +** +** (2) There is a PgHdr available with PgHdr->nRef==0 +** +** (3) We are not in an in-memory database +** +** (4) Either there is an available PgHdr that does not need +** to be synced to disk or else disk syncing is currently +** allowed. +*/ +static int pagerAllocatePage(Pager *pPager, PgHdr **ppPg){ + int rc = SQLITE_OK; + PgHdr *pPg; + int nByteHdr; + + /* Create a new PgHdr if any of the four conditions defined + ** above are met: */ + if( pPager->nPage<pPager->mxPage + || pPager->lru.pFirst==0 + || MEMDB + || (pPager->lru.pFirstSynced==0 && pPager->doNotSync) + ){ + void *pData; + if( pPager->nPage>=pPager->nHash ){ + pager_resize_hash_table(pPager, + pPager->nHash<256 ?
256 : pPager->nHash*2); + if( pPager->nHash==0 ){ + rc = SQLITE_NOMEM; + goto pager_allocate_out; + } + } + pagerLeave(pPager); + nByteHdr = sizeof(*pPg) + sizeof(u32) + pPager->nExtra + + MEMDB*sizeof(PgHistory); + pPg = sqlite3_malloc( nByteHdr ); + if( pPg ){ + pData = sqlite3_malloc( pPager->pageSize ); + if( pData==0 ){ + sqlite3_free(pPg); + pPg = 0; + } + } + pagerEnter(pPager); + if( pPg==0 ){ + rc = SQLITE_NOMEM; + goto pager_allocate_out; + } + memset(pPg, 0, nByteHdr); + pPg->pData = pData; + pPg->pPager = pPager; + pPg->pNextAll = pPager->pAll; + pPager->pAll = pPg; + pPager->nPage++; + }else{ + /* Recycle an existing page with a zero ref-count. */ + rc = pager_recycle(pPager, &pPg); + if( rc==SQLITE_BUSY ){ + rc = SQLITE_IOERR_BLOCKED; + } + if( rc!=SQLITE_OK ){ + goto pager_allocate_out; + } + assert( pPager->state>=SHARED_LOCK ); + assert(pPg); + } + *ppPg = pPg; + +pager_allocate_out: + return rc; +} + +/* +** Make sure we have the content for a page. If the page was +** previously acquired with noContent==1, then the content was +** just initialized to zeros instead of being read from disk. +** But now we need the real data off of disk. So make sure we +** have it. Read it in if we do not have it already. +*/ +static int pager_get_content(PgHdr *pPg){ + if( pPg->needRead ){ + int rc = readDbPage(pPg->pPager, pPg, pPg->pgno); + if( rc==SQLITE_OK ){ + pPg->needRead = 0; + }else{ + return rc; + } + } + return SQLITE_OK; +} + +/* +** Acquire a page. +** +** A read lock on the disk file is obtained when the first page is acquired. +** This read lock is dropped when the last page is released. +** +** This routine works for any page number greater than 0. If the database +** file is smaller than the requested page, then no actual disk +** read occurs and the memory image of the page is initialized to +** all zeros. The extra data appended to a page is always initialized +** to zeros the first time a page is loaded into memory. +** +** The acquisition might fail for several reasons. In all cases, +** an appropriate error code is returned and *ppPage is set to NULL. +** +** See also sqlite3PagerLookup(). Both this routine and Lookup() attempt +** to find a page in the in-memory cache first. If the page is not already +** in memory, this routine goes to disk to read it in whereas Lookup() +** just returns 0. This routine acquires a read-lock the first time it +** has to go to disk, and could also playback an old journal if necessary. +** Since Lookup() never goes to disk, it never has to deal with locks +** or journal files. +** +** If noContent is false, the page contents are actually read from disk. +** If noContent is true, it means that we do not care about the contents +** of the page at this time, so do not do a disk read. Just fill in the +** page content with zeros. But mark the fact that we have not read the +** content by setting the PgHdr.needRead flag. Later on, if +** sqlite3PagerWrite() is called on this page or if this routine is +** called again with noContent==0, that means that the content is needed +** and the disk read should occur at that point. +*/ +static int pagerAcquire( + Pager *pPager, /* The pager open on the database file */ + Pgno pgno, /* Page number to fetch */ + DbPage **ppPage, /* Write a pointer to the page here */ + int noContent /* Do not bother reading content from disk if true */ +){ + PgHdr *pPg; + int rc; + + assert( pPager->state==PAGER_UNLOCK || pPager->nRef>0 || pgno==1 ); + + /* The maximum page number is 2^31. 
Return SQLITE_CORRUPT if a page + ** number greater than this, or zero, is requested. + */ + if( pgno>PAGER_MAX_PGNO || pgno==0 || pgno==PAGER_MJ_PGNO(pPager) ){ + return SQLITE_CORRUPT_BKPT; + } + + /* Make sure we have not hit any critical errors. + */ + assert( pPager!=0 ); + *ppPage = 0; + + /* If this is the first page accessed, then get a SHARED lock + ** on the database file. pagerSharedLock() is a no-op if + ** a database lock is already held. + */ + rc = pagerSharedLock(pPager); + if( rc!=SQLITE_OK ){ + return rc; + } + assert( pPager->state!=PAGER_UNLOCK ); + + pPg = pager_lookup(pPager, pgno); + if( pPg==0 ){ + /* The requested page is not in the page cache. */ + int nMax; + int h; + PAGER_INCR(pPager->nMiss); + rc = pagerAllocatePage(pPager, &pPg); + if( rc!=SQLITE_OK ){ + return rc; + } + + pPg->pgno = pgno; + assert( !MEMDB || pgno>pPager->stmtSize ); + pPg->inJournal = sqlite3BitvecTest(pPager->pInJournal, pgno); + pPg->needSync = 0; + + makeClean(pPg); + pPg->nRef = 1; + + pPager->nRef++; + if( pPager->nExtra>0 ){ + memset(PGHDR_TO_EXTRA(pPg, pPager), 0, pPager->nExtra); + } + nMax = sqlite3PagerPagecount(pPager); + if( pPager->errCode ){ + rc = pPager->errCode; + sqlite3PagerUnref(pPg); + return rc; + } + + /* Populate the page with data, either by reading from the database + ** file, or by setting the entire page to zero. + */ + if( nMax<(int)pgno || MEMDB || (noContent && !pPager->alwaysRollback) ){ + if( pgno>pPager->mxPgno ){ + sqlite3PagerUnref(pPg); + return SQLITE_FULL; + } + memset(PGHDR_TO_DATA(pPg), 0, pPager->pageSize); + pPg->needRead = noContent && !pPager->alwaysRollback; + IOTRACE(("ZERO %p %d\n", pPager, pgno)); + }else{ + rc = readDbPage(pPager, pPg, pgno); + if( rc!=SQLITE_OK && rc!=SQLITE_IOERR_SHORT_READ ){ + pPg->pgno = 0; + sqlite3PagerUnref(pPg); + return rc; + } + pPg->needRead = 0; + } + + /* Link the page into the page hash table */ + h = pgno & (pPager->nHash-1); + assert( pgno!=0 ); + pPg->pNextHash = pPager->aHash[h]; + pPager->aHash[h] = pPg; + if( pPg->pNextHash ){ + assert( pPg->pNextHash->pPrevHash==0 ); + pPg->pNextHash->pPrevHash = pPg; + } + +#ifdef SQLITE_CHECK_PAGES + pPg->pageHash = pager_pagehash(pPg); +#endif + }else{ + /* The requested page is in the page cache. */ + assert(pPager->nRef>0 || pgno==1); + PAGER_INCR(pPager->nHit); + if( !noContent ){ + rc = pager_get_content(pPg); + if( rc ){ + return rc; + } + } + page_ref(pPg); + } + *ppPage = pPg; + return SQLITE_OK; +} +int sqlite3PagerAcquire( + Pager *pPager, /* The pager open on the database file */ + Pgno pgno, /* Page number to fetch */ + DbPage **ppPage, /* Write a pointer to the page here */ + int noContent /* Do not bother reading content from disk if true */ +){ + int rc; + pagerEnter(pPager); + rc = pagerAcquire(pPager, pgno, ppPage, noContent); + pagerLeave(pPager); + return rc; +} + + +/* +** Acquire a page if it is already in the in-memory cache. Do +** not read the page from disk. Return a pointer to the page, +** or 0 if the page is not in cache. +** +** See also sqlite3PagerGet(). The difference between this routine +** and sqlite3PagerGet() is that _get() will go to the disk and read +** in the page if the page is not already in cache. This routine +** returns NULL if the page is not in cache or if a disk I/O error +** has ever happened. 
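The cache index manipulated above hashes a page number into a power-of-two table with a mask (h = pgno & (nHash-1)) and chains collisions through the pNextHash pointers; the real code also maintains pPrevHash for O(1) unlinking. Here is a compact stand-alone sketch of that insert-and-lookup pattern, with a simplified header type.

    #include <assert.h>
    #include <stdlib.h>

    #define N_HASH 256                      /* must be a power of two for the mask trick */

    typedef struct Hdr { unsigned pgno; struct Hdr *pNextHash; } Hdr;
    static Hdr *aHash[N_HASH];

    static void hashInsert(Hdr *p){
      unsigned h = p->pgno & (N_HASH-1);    /* same masking as the code above */
      p->pNextHash = aHash[h];
      aHash[h] = p;
    }

    static Hdr *hashLookup(unsigned pgno){
      Hdr *p = aHash[pgno & (N_HASH-1)];
      while( p && p->pgno!=pgno ) p = p->pNextHash;
      return p;
    }

    int main(void){
      Hdr a = {7, 0}, b = {7+N_HASH, 0};    /* deliberately collide in one bucket */
      hashInsert(&a);
      hashInsert(&b);
      assert( hashLookup(7)==&a );
      assert( hashLookup(7+N_HASH)==&b );
      assert( hashLookup(8)==NULL );
      return 0;
    }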
+*/ +DbPage *sqlite3PagerLookup(Pager *pPager, Pgno pgno){ + PgHdr *pPg = 0; + + assert( pPager!=0 ); + assert( pgno!=0 ); + + pagerEnter(pPager); + if( pPager->state==PAGER_UNLOCK ){ + assert( !pPager->pAll || pPager->exclusiveMode ); + }else if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){ + /* Do nothing */ + }else if( (pPg = pager_lookup(pPager, pgno))!=0 ){ + page_ref(pPg); + } + pagerLeave(pPager); + return pPg; +} + +/* +** Release a page. +** +** If the number of references to the page drop to zero, then the +** page is added to the LRU list. When all references to all pages +** are released, a rollback occurs and the lock on the database is +** removed. +*/ +int sqlite3PagerUnref(DbPage *pPg){ + Pager *pPager = pPg->pPager; + + /* Decrement the reference count for this page + */ + assert( pPg->nRef>0 ); + pagerEnter(pPg->pPager); + pPg->nRef--; + + CHECK_PAGE(pPg); + + /* When the number of references to a page reach 0, call the + ** destructor and add the page to the freelist. + */ + if( pPg->nRef==0 ){ + + lruListAdd(pPg); + if( pPager->xDestructor ){ + pPager->xDestructor(pPg, pPager->pageSize); + } + + /* When all pages reach the freelist, drop the read lock from + ** the database file. + */ + pPager->nRef--; + assert( pPager->nRef>=0 ); + if( pPager->nRef==0 && (!pPager->exclusiveMode || pPager->journalOff>0) ){ + pagerUnlockAndRollback(pPager); + } + } + pagerLeave(pPager); + return SQLITE_OK; +} + +/* +** Create a journal file for pPager. There should already be a RESERVED +** or EXCLUSIVE lock on the database file when this routine is called. +** +** Return SQLITE_OK if everything. Return an error code and release the +** write lock if anything goes wrong. +*/ +static int pager_open_journal(Pager *pPager){ + sqlite3_vfs *pVfs = pPager->pVfs; + int flags = (SQLITE_OPEN_READWRITE|SQLITE_OPEN_EXCLUSIVE|SQLITE_OPEN_CREATE); + + int rc; + assert( !MEMDB ); + assert( pPager->state>=PAGER_RESERVED ); + assert( pPager->journalOpen==0 ); + assert( pPager->useJournal ); + assert( pPager->pInJournal==0 ); + sqlite3PagerPagecount(pPager); + pagerLeave(pPager); + pPager->pInJournal = sqlite3BitvecCreate(pPager->dbSize); + pagerEnter(pPager); + if( pPager->pInJournal==0 ){ + rc = SQLITE_NOMEM; + goto failed_to_open_journal; + } + + if( pPager->tempFile ){ + flags |= (SQLITE_OPEN_DELETEONCLOSE|SQLITE_OPEN_TEMP_JOURNAL); + }else{ + flags |= (SQLITE_OPEN_MAIN_JOURNAL); + } +#ifdef SQLITE_ENABLE_ATOMIC_WRITE + rc = sqlite3JournalOpen( + pVfs, pPager->zJournal, pPager->jfd, flags, jrnlBufferSize(pPager) + ); +#else + rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, flags, 0); +#endif + assert( rc!=SQLITE_OK || pPager->jfd->pMethods ); + pPager->journalOff = 0; + pPager->setMaster = 0; + pPager->journalHdr = 0; + if( rc!=SQLITE_OK ){ + if( rc==SQLITE_NOMEM ){ + sqlite3OsDelete(pVfs, pPager->zJournal, 0); + } + goto failed_to_open_journal; + } + pPager->journalOpen = 1; + pPager->journalStarted = 0; + pPager->needSync = 0; + pPager->alwaysRollback = 0; + pPager->nRec = 0; + if( pPager->errCode ){ + rc = pPager->errCode; + goto failed_to_open_journal; + } + pPager->origDbSize = pPager->dbSize; + + rc = writeJournalHdr(pPager); + + if( pPager->stmtAutoopen && rc==SQLITE_OK ){ + rc = sqlite3PagerStmtBegin(pPager); + } + if( rc!=SQLITE_OK && rc!=SQLITE_NOMEM && rc!=SQLITE_IOERR_NOMEM ){ + rc = pager_end_transaction(pPager); + if( rc==SQLITE_OK ){ + rc = SQLITE_FULL; + } + } + return rc; + +failed_to_open_journal: + sqlite3BitvecDestroy(pPager->pInJournal); + pPager->pInJournal = 0; + 
return rc; +} + +/* +** Acquire a write-lock on the database. The lock is removed when +** the any of the following happen: +** +** * sqlite3PagerCommitPhaseTwo() is called. +** * sqlite3PagerRollback() is called. +** * sqlite3PagerClose() is called. +** * sqlite3PagerUnref() is called to on every outstanding page. +** +** The first parameter to this routine is a pointer to any open page of the +** database file. Nothing changes about the page - it is used merely to +** acquire a pointer to the Pager structure and as proof that there is +** already a read-lock on the database. +** +** The second parameter indicates how much space in bytes to reserve for a +** master journal file-name at the start of the journal when it is created. +** +** A journal file is opened if this is not a temporary file. For temporary +** files, the opening of the journal file is deferred until there is an +** actual need to write to the journal. +** +** If the database is already reserved for writing, this routine is a no-op. +** +** If exFlag is true, go ahead and get an EXCLUSIVE lock on the file +** immediately instead of waiting until we try to flush the cache. The +** exFlag is ignored if a transaction is already active. +*/ +int sqlite3PagerBegin(DbPage *pPg, int exFlag){ + Pager *pPager = pPg->pPager; + int rc = SQLITE_OK; + pagerEnter(pPager); + assert( pPg->nRef>0 ); + assert( pPager->state!=PAGER_UNLOCK ); + if( pPager->state==PAGER_SHARED ){ + assert( pPager->pInJournal==0 ); + if( MEMDB ){ + pPager->state = PAGER_EXCLUSIVE; + pPager->origDbSize = pPager->dbSize; + }else{ + rc = sqlite3OsLock(pPager->fd, RESERVED_LOCK); + if( rc==SQLITE_OK ){ + pPager->state = PAGER_RESERVED; + if( exFlag ){ + rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK); + } + } + if( rc!=SQLITE_OK ){ + pagerLeave(pPager); + return rc; + } + pPager->dirtyCache = 0; + PAGERTRACE2("TRANSACTION %d\n", PAGERID(pPager)); + if( pPager->useJournal && !pPager->tempFile ){ + rc = pager_open_journal(pPager); + } + } + }else if( pPager->journalOpen && pPager->journalOff==0 ){ + /* This happens when the pager was in exclusive-access mode last + ** time a (read or write) transaction was successfully concluded + ** by this connection. Instead of deleting the journal file it was + ** kept open and truncated to 0 bytes. + */ + assert( pPager->nRec==0 ); + assert( pPager->origDbSize==0 ); + assert( pPager->pInJournal==0 ); + sqlite3PagerPagecount(pPager); + pagerLeave(pPager); + pPager->pInJournal = sqlite3BitvecCreate( pPager->dbSize ); + pagerEnter(pPager); + if( !pPager->pInJournal ){ + rc = SQLITE_NOMEM; + }else{ + pPager->origDbSize = pPager->dbSize; + rc = writeJournalHdr(pPager); + } + } + assert( !pPager->journalOpen || pPager->journalOff>0 || rc!=SQLITE_OK ); + pagerLeave(pPager); + return rc; +} + +/* +** Make a page dirty. Set its dirty flag and add it to the dirty +** page list. +*/ +static void makeDirty(PgHdr *pPg){ + if( pPg->dirty==0 ){ + Pager *pPager = pPg->pPager; + pPg->dirty = 1; + pPg->pDirty = pPager->pDirty; + if( pPager->pDirty ){ + pPager->pDirty->pPrevDirty = pPg; + } + pPg->pPrevDirty = 0; + pPager->pDirty = pPg; + } +} + +/* +** Make a page clean. Clear its dirty bit and remove it from the +** dirty page list. 
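**
** A minimal stand-alone sketch of the same pattern (the Node and List names
** below are hypothetical, invented for this illustration; they are not part
** of the pager): makeDirty() above and makeClean() below keep every modified
** page on an intrusive doubly-linked list headed at Pager.pDirty, so that a
** commit only has to walk the dirty pages rather than the whole cache.
**
**   typedef struct Node Node;
**   struct Node { int dirty; Node *pNext; Node *pPrev; };
**   typedef struct List { Node *pHead; } List;
**
**   static void listAdd(List *pL, Node *p){
**     if( p->dirty ) return;
**     p->dirty = 1;
**     p->pNext = pL->pHead;
**     p->pPrev = 0;
**     if( pL->pHead ) pL->pHead->pPrev = p;
**     pL->pHead = p;
**   }
**
**   static void listRemove(List *pL, Node *p){
**     if( !p->dirty ) return;
**     p->dirty = 0;
**     if( p->pNext ) p->pNext->pPrev = p->pPrev;
**     if( p->pPrev ){
**       p->pPrev->pNext = p->pNext;
**     }else{
**       pL->pHead = p->pNext;
**     }
**   }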
+*/ +static void makeClean(PgHdr *pPg){ + if( pPg->dirty ){ + pPg->dirty = 0; + if( pPg->pDirty ){ + assert( pPg->pDirty->pPrevDirty==pPg ); + pPg->pDirty->pPrevDirty = pPg->pPrevDirty; + } + if( pPg->pPrevDirty ){ + assert( pPg->pPrevDirty->pDirty==pPg ); + pPg->pPrevDirty->pDirty = pPg->pDirty; + }else{ + assert( pPg->pPager->pDirty==pPg ); + pPg->pPager->pDirty = pPg->pDirty; + } + } +} + + +/* +** Mark a data page as writeable. The page is written into the journal +** if it is not there already. This routine must be called before making +** changes to a page. +** +** The first time this routine is called, the pager creates a new +** journal and acquires a RESERVED lock on the database. If the RESERVED +** lock could not be acquired, this routine returns SQLITE_BUSY. The +** calling routine must check for that return value and be careful not to +** change any page data until this routine returns SQLITE_OK. +** +** If the journal file could not be written because the disk is full, +** then this routine returns SQLITE_FULL and does an immediate rollback. +** All subsequent write attempts also return SQLITE_FULL until there +** is a call to sqlite3PagerCommit() or sqlite3PagerRollback() to +** reset. +*/ +static int pager_write(PgHdr *pPg){ + void *pData = PGHDR_TO_DATA(pPg); + Pager *pPager = pPg->pPager; + int rc = SQLITE_OK; + + /* Check for errors + */ + if( pPager->errCode ){ + return pPager->errCode; + } + if( pPager->readOnly ){ + return SQLITE_PERM; + } + + assert( !pPager->setMaster ); + + CHECK_PAGE(pPg); + + /* If this page was previously acquired with noContent==1, that means + ** we didn't really read in the content of the page. This can happen + ** (for example) when the page is being moved to the freelist. But + ** now we are (perhaps) moving the page off of the freelist for + ** reuse and we need to know its original content so that content + ** can be stored in the rollback journal. So do the read at this + ** time. + */ + rc = pager_get_content(pPg); + if( rc ){ + return rc; + } + + /* Mark the page as dirty. If the page has already been written + ** to the journal then we can return right away. + */ + makeDirty(pPg); + if( pPg->inJournal && (pageInStatement(pPg) || pPager->stmtInUse==0) ){ + pPager->dirtyCache = 1; + }else{ + + /* If we get this far, it means that the page needs to be + ** written to the transaction journal or the ckeckpoint journal + ** or both. + ** + ** First check to see that the transaction journal exists and + ** create it if it does not. + */ + assert( pPager->state!=PAGER_UNLOCK ); + rc = sqlite3PagerBegin(pPg, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + assert( pPager->state>=PAGER_RESERVED ); + if( !pPager->journalOpen && pPager->useJournal ){ + rc = pager_open_journal(pPager); + if( rc!=SQLITE_OK ) return rc; + } + assert( pPager->journalOpen || !pPager->useJournal ); + pPager->dirtyCache = 1; + + /* The transaction journal now exists and we have a RESERVED or an + ** EXCLUSIVE lock on the main database file. Write the current page to + ** the transaction journal if it is not there already. 
+ */ + if( !pPg->inJournal && (pPager->useJournal || MEMDB) ){ + if( (int)pPg->pgno <= pPager->origDbSize ){ + if( MEMDB ){ + PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager); + PAGERTRACE3("JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno); + assert( pHist->pOrig==0 ); + pHist->pOrig = sqlite3_malloc( pPager->pageSize ); + if( !pHist->pOrig ){ + return SQLITE_NOMEM; + } + memcpy(pHist->pOrig, PGHDR_TO_DATA(pPg), pPager->pageSize); + }else{ + u32 cksum; + char *pData2; + + /* We should never write to the journal file the page that + ** contains the database locks. The following assert verifies + ** that we do not. */ + assert( pPg->pgno!=PAGER_MJ_PGNO(pPager) ); + pData2 = CODEC2(pPager, pData, pPg->pgno, 7); + cksum = pager_cksum(pPager, (u8*)pData2); + rc = write32bits(pPager->jfd, pPager->journalOff, pPg->pgno); + if( rc==SQLITE_OK ){ + rc = sqlite3OsWrite(pPager->jfd, pData2, pPager->pageSize, + pPager->journalOff + 4); + pPager->journalOff += pPager->pageSize+4; + } + if( rc==SQLITE_OK ){ + rc = write32bits(pPager->jfd, pPager->journalOff, cksum); + pPager->journalOff += 4; + } + IOTRACE(("JOUT %p %d %lld %d\n", pPager, pPg->pgno, + pPager->journalOff, pPager->pageSize)); + PAGER_INCR(sqlite3_pager_writej_count); + PAGERTRACE5("JOURNAL %d page %d needSync=%d hash(%08x)\n", + PAGERID(pPager), pPg->pgno, pPg->needSync, pager_pagehash(pPg)); + + /* An error has occured writing to the journal file. The + ** transaction will be rolled back by the layer above. + */ + if( rc!=SQLITE_OK ){ + return rc; + } + + pPager->nRec++; + assert( pPager->pInJournal!=0 ); + sqlite3BitvecSet(pPager->pInJournal, pPg->pgno); + pPg->needSync = !pPager->noSync; + if( pPager->stmtInUse ){ + sqlite3BitvecSet(pPager->pInStmt, pPg->pgno); + } + } + }else{ + pPg->needSync = !pPager->journalStarted && !pPager->noSync; + PAGERTRACE4("APPEND %d page %d needSync=%d\n", + PAGERID(pPager), pPg->pgno, pPg->needSync); + } + if( pPg->needSync ){ + pPager->needSync = 1; + } + pPg->inJournal = 1; + } + + /* If the statement journal is open and the page is not in it, + ** then write the current page to the statement journal. Note that + ** the statement journal format differs from the standard journal format + ** in that it omits the checksums and the header. + */ + if( pPager->stmtInUse + && !pageInStatement(pPg) + && (int)pPg->pgno<=pPager->stmtSize + ){ + assert( pPg->inJournal || (int)pPg->pgno>pPager->origDbSize ); + if( MEMDB ){ + PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager); + assert( pHist->pStmt==0 ); + pHist->pStmt = sqlite3_malloc( pPager->pageSize ); + if( pHist->pStmt ){ + memcpy(pHist->pStmt, PGHDR_TO_DATA(pPg), pPager->pageSize); + } + PAGERTRACE3("STMT-JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno); + page_add_to_stmt_list(pPg); + }else{ + i64 offset = pPager->stmtNRec*(4+pPager->pageSize); + char *pData2 = CODEC2(pPager, pData, pPg->pgno, 7); + rc = write32bits(pPager->stfd, offset, pPg->pgno); + if( rc==SQLITE_OK ){ + rc = sqlite3OsWrite(pPager->stfd, pData2, pPager->pageSize, offset+4); + } + PAGERTRACE3("STMT-JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno); + if( rc!=SQLITE_OK ){ + return rc; + } + pPager->stmtNRec++; + assert( pPager->pInStmt!=0 ); + sqlite3BitvecSet(pPager->pInStmt, pPg->pgno); + } + } + } + + /* Update the database size and return. 
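**
** A minimal stand-alone sketch of the main-journal record layout written by
** pager_write() above (recordSize and recordOffset are hypothetical helper
** names, and hdrOff stands for wherever the current journal header region
** ends): each record is a 4-byte big-endian page number, then the page
** image, then a 4-byte checksum, so one record occupies pageSize+8 bytes.
**
**   static long recordSize(long pageSize){
**     return 4 + pageSize + 4;
**   }
**
**   static long recordOffset(long hdrOff, long pageSize, int iRecord){
**     return hdrOff + (long)iRecord*recordSize(pageSize);
**   }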
+ */
+  assert( pPager->state>=PAGER_SHARED );
+  if( pPager->dbSize<(int)pPg->pgno ){
+    pPager->dbSize = pPg->pgno;
+    if( !MEMDB && pPager->dbSize==PENDING_BYTE/pPager->pageSize ){
+      pPager->dbSize++;
+    }
+  }
+  return rc;
+}
+
+/*
+** This function is used to mark a data-page as writable. It uses
+** pager_write() to open a journal file (if it is not already open)
+** and write the page *pData to the journal.
+**
+** The difference between this function and pager_write() is that this
+** function also deals with the special case where 2 or more pages
+** fit on a single disk sector. In this case all co-resident pages
+** must have been written to the journal file before returning.
+*/
+int sqlite3PagerWrite(DbPage *pDbPage){
+  int rc = SQLITE_OK;
+
+  PgHdr *pPg = pDbPage;
+  Pager *pPager = pPg->pPager;
+  Pgno nPagePerSector = (pPager->sectorSize/pPager->pageSize);
+
+  pagerEnter(pPager);
+  if( !MEMDB && nPagePerSector>1 ){
+    Pgno nPageCount;          /* Total number of pages in database file */
+    Pgno pg1;                 /* First page of the sector pPg is located on. */
+    int nPage;                /* Number of pages starting at pg1 to journal */
+    int ii;
+    int needSync = 0;
+
+    /* Set the doNotSync flag to 1. This is because we cannot allow a journal
+    ** header to be written between the pages journaled by this function.
+    */
+    assert( pPager->doNotSync==0 );
+    pPager->doNotSync = 1;
+
+    /* This trick assumes that both the page-size and sector-size are
+    ** an integer power of 2. It sets variable pg1 to the identifier
+    ** of the first page of the sector pPg is located on.
+    */
+    pg1 = ((pPg->pgno-1) & ~(nPagePerSector-1)) + 1;
+
+    nPageCount = sqlite3PagerPagecount(pPager);
+    if( pPg->pgno>nPageCount ){
+      nPage = (pPg->pgno - pg1)+1;
+    }else if( (pg1+nPagePerSector-1)>nPageCount ){
+      nPage = nPageCount+1-pg1;
+    }else{
+      nPage = nPagePerSector;
+    }
+    assert(nPage>0);
+    assert(pg1<=pPg->pgno);
+    assert((pg1+nPage)>pPg->pgno);
+
+    for(ii=0; ii<nPage; ii++){
+      Pgno pg = pg1+ii;
+      PgHdr *pPage;
+      if( pg==pPg->pgno || !sqlite3BitvecTest(pPager->pInJournal, pg) ){
+        if( pg!=PAGER_MJ_PGNO(pPager) ){
+          rc = sqlite3PagerGet(pPager, pg, &pPage);
+          if( rc==SQLITE_OK ){
+            rc = pager_write(pPage);
+            if( pPage->needSync ){
+              needSync = 1;
+            }
+            sqlite3PagerUnref(pPage);
+          }
+        }
+      }else if( (pPage = pager_lookup(pPager, pg))!=0 ){
+        if( pPage->needSync ){
+          needSync = 1;
+        }
+      }
+    }
+
+    /* If the PgHdr.needSync flag is set for any of the nPage pages
+    ** starting at pg1, then it needs to be set for all of them. Because
+    ** writing to any of these nPage pages may damage the others, the
+    ** journal file must contain sync()ed copies of all of them
+    ** before any of them can be written out to the database file.
+    */
+    if( needSync ){
+      for(ii=0; ii<nPage; ii++){
+        PgHdr *pPage = pager_lookup(pPager, pg1+ii);
+        if( pPage ) pPage->needSync = 1;
+      }
+      assert(pPager->needSync);
+    }
+
+    assert( pPager->doNotSync==1 );
+    pPager->doNotSync = 0;
+  }else{
+    rc = pager_write(pDbPage);
+  }
+  pagerLeave(pPager);
+  return rc;
+}
+
+/*
+** Return TRUE if the page given in the argument was previously passed
+** to sqlite3PagerWrite(). In other words, return TRUE if it is ok
+** to change the content of the page.
+*/
+#ifndef NDEBUG
+int sqlite3PagerIswriteable(DbPage *pPg){
+  return pPg->dirty;
+}
+#endif
+
+#ifndef SQLITE_OMIT_VACUUM
+/*
+** Replace the content of a single page with the information in the third
+** argument.
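**
** A minimal stand-alone sketch of the power-of-two rounding used by
** sqlite3PagerWrite() above to find the first page of a sector
** (firstPageOfSector is a hypothetical name; nPagePerSector must be a
** power of two, as the original comment assumes):
**
**   static unsigned int firstPageOfSector(
**     unsigned int pgno,
**     unsigned int nPagePerSector
**   ){
**     return ((pgno-1) & ~(nPagePerSector-1)) + 1;
**   }
**
** With nPagePerSector==4, for example, pages 1..4 map to 1 and pages 5..8
** map to 5.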
+*/
+int sqlite3PagerOverwrite(Pager *pPager, Pgno pgno, void *pData){
+  PgHdr *pPg;
+  int rc;
+
+  pagerEnter(pPager);
+  rc = sqlite3PagerGet(pPager, pgno, &pPg);
+  if( rc==SQLITE_OK ){
+    rc = sqlite3PagerWrite(pPg);
+    if( rc==SQLITE_OK ){
+      memcpy(sqlite3PagerGetData(pPg), pData, pPager->pageSize);
+    }
+    sqlite3PagerUnref(pPg);
+  }
+  pagerLeave(pPager);
+  return rc;
+}
+#endif
+
+/*
+** A call to this routine tells the pager that it is not necessary to
+** write the information on page pPg back to the disk, even though
+** that page might be marked as dirty.
+**
+** The overlying software layer calls this routine when all of the data
+** on the given page is unused. The pager marks the page as clean so
+** that it does not get written to disk.
+**
+** Tests show that this optimization, together with the
+** sqlite3PagerDontRollback() below, more than double the speed
+** of large INSERT operations and quadruple the speed of large DELETEs.
+**
+** When this routine is called, set the alwaysRollback flag to true.
+** Subsequent calls to sqlite3PagerDontRollback() for the same page
+** will thereafter be ignored. This is necessary to avoid a problem
+** where a page with data is added to the freelist during one part of
+** a transaction then removed from the freelist during a later part
+** of the same transaction and reused for some other purpose. When it
+** is first added to the freelist, this routine is called. When reused,
+** the sqlite3PagerDontRollback() routine is called. But because the
+** page contains critical data, we still need to be sure it gets
+** rolled back in spite of the sqlite3PagerDontRollback() call.
+*/
+void sqlite3PagerDontWrite(DbPage *pDbPage){
+  PgHdr *pPg = pDbPage;
+  Pager *pPager = pPg->pPager;
+
+  if( MEMDB ) return;
+  pagerEnter(pPager);
+  pPg->alwaysRollback = 1;
+  if( pPg->dirty && !pPager->stmtInUse ){
+    assert( pPager->state>=PAGER_SHARED );
+    if( pPager->dbSize==(int)pPg->pgno && pPager->origDbSize<pPager->dbSize ){
+      /* If this page is the last page in the file and the file has grown
+      ** during the current transaction, then do NOT mark the page as clean.
+      ** When the database file grows, we must make sure that the last page
+      ** gets written at least once so that the disk file will be the correct
+      ** size. If you do not write this page and the size of the file
+      ** on the disk ends up being too small, that can lead to database
+      ** corruption during the next transaction.
+      */
+    }else{
+      PAGERTRACE3("DONT_WRITE page %d of %d\n", pPg->pgno, PAGERID(pPager));
+      IOTRACE(("CLEAN %p %d\n", pPager, pPg->pgno))
+      makeClean(pPg);
+#ifdef SQLITE_CHECK_PAGES
+      pPg->pageHash = pager_pagehash(pPg);
+#endif
+    }
+  }
+  pagerLeave(pPager);
+}
+
+/*
+** A call to this routine tells the pager that if a rollback occurs,
+** it is not necessary to restore the data on the given page. This
+** means that the pager does not have to record the given page in the
+** rollback journal.
+**
+** If we have not yet actually read the content of this page (if
+** the PgHdr.needRead flag is set) then this routine acts as a promise
+** that we will never need to read the page content in the future,
+** so the needRead flag can be cleared at this point.
+**
+** This routine is only called from a single place in the sqlite btree
+** code (when a leaf is removed from the free-list). This allows the
+** following assumptions to be made about pPg:
+**
+** 1. PagerDontWrite() has been called on the page, OR
+** PagerWrite() has not yet been called on the page.
+**
+** 2.
The page existed when the transaction was started. +** +** Details: DontRollback() (this routine) is only called when a leaf is +** removed from the free list. DontWrite() is called whenever a page +** becomes a free-list leaf. +*/ +void sqlite3PagerDontRollback(DbPage *pPg){ + Pager *pPager = pPg->pPager; + + pagerEnter(pPager); + assert( pPager->state>=PAGER_RESERVED ); + + /* If the journal file is not open, or DontWrite() has been called on + ** this page (DontWrite() sets the alwaysRollback flag), then this + ** function is a no-op. + */ + if( pPager->journalOpen==0 || pPg->alwaysRollback || pPager->alwaysRollback ){ + pagerLeave(pPager); + return; + } + assert( !MEMDB ); /* For a memdb, pPager->journalOpen is always 0 */ + + /* Check that PagerWrite() has not yet been called on this page, and + ** that the page existed when the transaction started. + */ + assert( !pPg->inJournal && (int)pPg->pgno <= pPager->origDbSize ); + + assert( pPager->pInJournal!=0 ); + sqlite3BitvecSet(pPager->pInJournal, pPg->pgno); + pPg->inJournal = 1; + pPg->needRead = 0; + if( pPager->stmtInUse ){ + assert( pPager->stmtSize <= pPager->origDbSize ); + sqlite3BitvecSet(pPager->pInStmt, pPg->pgno); + } + PAGERTRACE3("DONT_ROLLBACK page %d of %d\n", pPg->pgno, PAGERID(pPager)); + IOTRACE(("GARBAGE %p %d\n", pPager, pPg->pgno)) + pagerLeave(pPager); +} + + +/* +** This routine is called to increment the database file change-counter, +** stored at byte 24 of the pager file. +*/ +static int pager_incr_changecounter(Pager *pPager, int isDirect){ + PgHdr *pPgHdr; + u32 change_counter; + int rc = SQLITE_OK; + + if( !pPager->changeCountDone ){ + /* Open page 1 of the file for writing. */ + rc = sqlite3PagerGet(pPager, 1, &pPgHdr); + if( rc!=SQLITE_OK ) return rc; + + if( !isDirect ){ + rc = sqlite3PagerWrite(pPgHdr); + if( rc!=SQLITE_OK ){ + sqlite3PagerUnref(pPgHdr); + return rc; + } + } + + /* Increment the value just read and write it back to byte 24. */ + change_counter = sqlite3Get4byte((u8*)pPager->dbFileVers); + change_counter++; + put32bits(((char*)PGHDR_TO_DATA(pPgHdr))+24, change_counter); + + if( isDirect && pPager->fd->pMethods ){ + const void *zBuf = PGHDR_TO_DATA(pPgHdr); + rc = sqlite3OsWrite(pPager->fd, zBuf, pPager->pageSize, 0); + } + + /* Release the page reference. */ + sqlite3PagerUnref(pPgHdr); + pPager->changeCountDone = 1; + } + return rc; +} + +/* +** Sync the database file for the pager pPager. zMaster points to the name +** of a master journal file that should be written into the individual +** journal file. zMaster may be NULL, which is interpreted as no master +** journal (a single database transaction). +** +** This routine ensures that the journal is synced, all dirty pages written +** to the database file and the database file synced. The only thing that +** remains to commit the transaction is to delete the journal file (or +** master journal file if specified). +** +** Note that if zMaster==NULL, this does not overwrite a previous value +** passed to an sqlite3PagerCommitPhaseOne() call. +** +** If parameter nTrunc is non-zero, then the pager file is truncated to +** nTrunc pages (this is used by auto-vacuum databases). +*/ +int sqlite3PagerCommitPhaseOne(Pager *pPager, const char *zMaster, Pgno nTrunc){ + int rc = SQLITE_OK; + + PAGERTRACE4("DATABASE SYNC: File=%s zMaster=%s nTrunc=%d\n", + pPager->zFilename, zMaster, nTrunc); + pagerEnter(pPager); + + /* If this is an in-memory db, or no pages have been written to, or this + ** function has already been called, it is a no-op. 
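**
** A minimal stand-alone sketch of the 4-byte big-endian encoding that the
** change-counter code above depends on (get4 and put4 are hypothetical
** names standing in for sqlite3Get4byte() and put32bits()); the counter
** itself lives at byte offset 24 of page 1:
**
**   static unsigned int get4(const unsigned char *p){
**     return ((unsigned int)p[0]<<24) | ((unsigned int)p[1]<<16)
**          | ((unsigned int)p[2]<<8)  | (unsigned int)p[3];
**   }
**
**   static void put4(unsigned char *p, unsigned int v){
**     p[0] = (unsigned char)(v>>24);
**     p[1] = (unsigned char)(v>>16);
**     p[2] = (unsigned char)(v>>8);
**     p[3] = (unsigned char)(v);
**   }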
+ */ + if( pPager->state!=PAGER_SYNCED && !MEMDB && pPager->dirtyCache ){ + PgHdr *pPg; + +#ifdef SQLITE_ENABLE_ATOMIC_WRITE + /* The atomic-write optimization can be used if all of the + ** following are true: + ** + ** + The file-system supports the atomic-write property for + ** blocks of size page-size, and + ** + This commit is not part of a multi-file transaction, and + ** + Exactly one page has been modified and store in the journal file. + ** + ** If the optimization can be used, then the journal file will never + ** be created for this transaction. + */ + int useAtomicWrite = ( + !zMaster && + pPager->journalOff==jrnlBufferSize(pPager) && + nTrunc==0 && + (0==pPager->pDirty || 0==pPager->pDirty->pDirty) + ); + if( useAtomicWrite ){ + /* Update the nRec field in the journal file. */ + int offset = pPager->journalHdr + sizeof(aJournalMagic); + assert(pPager->nRec==1); + rc = write32bits(pPager->jfd, offset, pPager->nRec); + + /* Update the db file change counter. The following call will modify + ** the in-memory representation of page 1 to include the updated + ** change counter and then write page 1 directly to the database + ** file. Because of the atomic-write property of the host file-system, + ** this is safe. + */ + if( rc==SQLITE_OK ){ + rc = pager_incr_changecounter(pPager, 1); + } + }else{ + rc = sqlite3JournalCreate(pPager->jfd); + } + + if( !useAtomicWrite && rc==SQLITE_OK ) +#endif + + /* If a master journal file name has already been written to the + ** journal file, then no sync is required. This happens when it is + ** written, then the process fails to upgrade from a RESERVED to an + ** EXCLUSIVE lock. The next time the process tries to commit the + ** transaction the m-j name will have already been written. + */ + if( !pPager->setMaster ){ + assert( pPager->journalOpen ); + rc = pager_incr_changecounter(pPager, 0); + if( rc!=SQLITE_OK ) goto sync_exit; +#ifndef SQLITE_OMIT_AUTOVACUUM + if( nTrunc!=0 ){ + /* If this transaction has made the database smaller, then all pages + ** being discarded by the truncation must be written to the journal + ** file. + */ + Pgno i; + int iSkip = PAGER_MJ_PGNO(pPager); + for( i=nTrunc+1; i<=pPager->origDbSize; i++ ){ + if( !sqlite3BitvecTest(pPager->pInJournal, i) && i!=iSkip ){ + rc = sqlite3PagerGet(pPager, i, &pPg); + if( rc!=SQLITE_OK ) goto sync_exit; + rc = sqlite3PagerWrite(pPg); + sqlite3PagerUnref(pPg); + if( rc!=SQLITE_OK ) goto sync_exit; + } + } + } +#endif + rc = writeMasterJournal(pPager, zMaster); + if( rc!=SQLITE_OK ) goto sync_exit; + rc = syncJournal(pPager); + } + if( rc!=SQLITE_OK ) goto sync_exit; + +#ifndef SQLITE_OMIT_AUTOVACUUM + if( nTrunc!=0 ){ + rc = sqlite3PagerTruncate(pPager, nTrunc); + if( rc!=SQLITE_OK ) goto sync_exit; + } +#endif + + /* Write all dirty pages to the database file */ + pPg = pager_get_all_dirty_pages(pPager); + rc = pager_write_pagelist(pPg); + if( rc!=SQLITE_OK ){ + assert( rc!=SQLITE_IOERR_BLOCKED ); + /* The error might have left the dirty list all fouled up here, + ** but that does not matter because if the if the dirty list did + ** get corrupted, then the transaction will roll back and + ** discard the dirty list. There is an assert in + ** pager_get_all_dirty_pages() that verifies that no attempt + ** is made to use an invalid dirty list. + */ + goto sync_exit; + } + pPager->pDirty = 0; + + /* Sync the database file. 
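**
** A minimal stand-alone sketch of the "at most one dirty page" test in the
** atomic-write branch above (Node is a hypothetical list-node type): a
** singly-linked list holds zero or one element exactly when its head is
** NULL or the head's next pointer is NULL, which is what the expression
** (0==pPager->pDirty || 0==pPager->pDirty->pDirty) checks.
**
**   typedef struct Node Node;
**   struct Node { Node *pNext; };
**
**   static int hasAtMostOneElement(const Node *pHead){
**     return pHead==0 || pHead->pNext==0;
**   }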
 */
+    if( !pPager->noSync ){
+      rc = sqlite3OsSync(pPager->fd, pPager->sync_flags);
+    }
+    IOTRACE(("DBSYNC %p\n", pPager))
+
+    pPager->state = PAGER_SYNCED;
+  }else if( MEMDB && nTrunc!=0 ){
+    rc = sqlite3PagerTruncate(pPager, nTrunc);
+  }
+
+sync_exit:
+  if( rc==SQLITE_IOERR_BLOCKED ){
+    /* pager_incr_changecounter() may attempt to obtain an exclusive
+    ** lock to spill the cache and return IOERR_BLOCKED. But since
+    ** there is no chance the cache is inconsistent, it is
+    ** better to return SQLITE_BUSY.
+    */
+    rc = SQLITE_BUSY;
+  }
+  pagerLeave(pPager);
+  return rc;
+}
+
+
+/*
+** Commit all changes to the database and release the write lock.
+**
+** If the commit fails for any reason, a rollback attempt is made
+** and an error code is returned. If the commit worked, SQLITE_OK
+** is returned.
+*/
+int sqlite3PagerCommitPhaseTwo(Pager *pPager){
+  int rc;
+  PgHdr *pPg;
+
+  if( pPager->errCode ){
+    return pPager->errCode;
+  }
+  if( pPager->state<PAGER_RESERVED ){
+    return SQLITE_ERROR;
+  }
+  pagerEnter(pPager);
+  PAGERTRACE2("COMMIT %d\n", PAGERID(pPager));
+  if( MEMDB ){
+    pPg = pager_get_all_dirty_pages(pPager);
+    while( pPg ){
+      PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+      clearHistory(pHist);
+      pPg->dirty = 0;
+      pPg->inJournal = 0;
+      pHist->inStmt = 0;
+      pPg->needSync = 0;
+      pHist->pPrevStmt = pHist->pNextStmt = 0;
+      pPg = pPg->pDirty;
+    }
+    pPager->pDirty = 0;
+#ifndef NDEBUG
+    for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+      PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+      assert( !pPg->alwaysRollback );
+      assert( !pHist->pOrig );
+      assert( !pHist->pStmt );
+    }
+#endif
+    pPager->pStmt = 0;
+    pPager->state = PAGER_SHARED;
+    pagerLeave(pPager);
+    return SQLITE_OK;
+  }
+  assert( pPager->journalOpen || !pPager->dirtyCache );
+  assert( pPager->state==PAGER_SYNCED || !pPager->dirtyCache );
+  rc = pager_end_transaction(pPager);
+  rc = pager_error(pPager, rc);
+  pagerLeave(pPager);
+  return rc;
+}
+
+/*
+** Rollback all changes. The database falls back to PAGER_SHARED mode.
+** All in-memory cache pages revert to their original data contents.
+** The journal is deleted.
+**
+** This routine cannot fail unless some other process is not following
+** the correct locking protocol or unless some other
+** process is writing trash into the journal file (SQLITE_CORRUPT) or
+** unless a prior malloc() failed (SQLITE_NOMEM). Appropriate error
+** codes are returned for all these occasions. Otherwise,
+** SQLITE_OK is returned.
+*/ +int sqlite3PagerRollback(Pager *pPager){ + int rc; + PAGERTRACE2("ROLLBACK %d\n", PAGERID(pPager)); + if( MEMDB ){ + PgHdr *p; + for(p=pPager->pAll; p; p=p->pNextAll){ + PgHistory *pHist; + assert( !p->alwaysRollback ); + if( !p->dirty ){ + assert( !((PgHistory *)PGHDR_TO_HIST(p, pPager))->pOrig ); + assert( !((PgHistory *)PGHDR_TO_HIST(p, pPager))->pStmt ); + continue; + } + + pHist = PGHDR_TO_HIST(p, pPager); + if( pHist->pOrig ){ + memcpy(PGHDR_TO_DATA(p), pHist->pOrig, pPager->pageSize); + PAGERTRACE3("ROLLBACK-PAGE %d of %d\n", p->pgno, PAGERID(pPager)); + }else{ + PAGERTRACE3("PAGE %d is clean on %d\n", p->pgno, PAGERID(pPager)); + } + clearHistory(pHist); + p->dirty = 0; + p->inJournal = 0; + pHist->inStmt = 0; + pHist->pPrevStmt = pHist->pNextStmt = 0; + if( pPager->xReiniter ){ + pPager->xReiniter(p, pPager->pageSize); + } + } + pPager->pDirty = 0; + pPager->pStmt = 0; + pPager->dbSize = pPager->origDbSize; + pager_truncate_cache(pPager); + pPager->stmtInUse = 0; + pPager->state = PAGER_SHARED; + return SQLITE_OK; + } + + pagerEnter(pPager); + if( !pPager->dirtyCache || !pPager->journalOpen ){ + rc = pager_end_transaction(pPager); + pagerLeave(pPager); + return rc; + } + + if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){ + if( pPager->state>=PAGER_EXCLUSIVE ){ + pager_playback(pPager, 0); + } + pagerLeave(pPager); + return pPager->errCode; + } + if( pPager->state==PAGER_RESERVED ){ + int rc2; + rc = pager_playback(pPager, 0); + rc2 = pager_end_transaction(pPager); + if( rc==SQLITE_OK ){ + rc = rc2; + } + }else{ + rc = pager_playback(pPager, 0); + } + /* pager_reset(pPager); */ + pPager->dbSize = -1; + + /* If an error occurs during a ROLLBACK, we can no longer trust the pager + ** cache. So call pager_error() on the way out to make any error + ** persistent. + */ + rc = pager_error(pPager, rc); + pagerLeave(pPager); + return rc; +} + +/* +** Return TRUE if the database file is opened read-only. Return FALSE +** if the database is (in theory) writable. +*/ +int sqlite3PagerIsreadonly(Pager *pPager){ + return pPager->readOnly; +} + +/* +** Return the number of references to the pager. +*/ +int sqlite3PagerRefcount(Pager *pPager){ + return pPager->nRef; +} + +#ifdef SQLITE_TEST +/* +** This routine is used for testing and analysis only. +*/ +int *sqlite3PagerStats(Pager *pPager){ + static int a[11]; + a[0] = pPager->nRef; + a[1] = pPager->nPage; + a[2] = pPager->mxPage; + a[3] = pPager->dbSize; + a[4] = pPager->state; + a[5] = pPager->errCode; + a[6] = pPager->nHit; + a[7] = pPager->nMiss; + a[8] = 0; /* Used to be pPager->nOvfl */ + a[9] = pPager->nRead; + a[10] = pPager->nWrite; + return a; +} +#endif + +/* +** Set the statement rollback point. +** +** This routine should be called with the transaction journal already +** open. A new statement journal is created that can be used to rollback +** changes of a single SQL command within a larger transaction. 
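**
** A minimal stand-alone sketch of the snapshot save/restore pattern used by
** the MEMDB branch of sqlite3PagerRollback() above (saveSnapshot and
** restoreSnapshot are hypothetical names; the pager keeps the saved copy in
** PgHistory.pOrig):
**
**   #include <stdlib.h>
**   #include <string.h>
**
**   static void *saveSnapshot(const void *pData, size_t n){
**     void *pCopy = malloc(n);
**     if( pCopy ) memcpy(pCopy, pData, n);
**     return pCopy;
**   }
**
**   static void restoreSnapshot(void *pData, void **ppCopy, size_t n){
**     if( *ppCopy ){
**       memcpy(pData, *ppCopy, n);
**       free(*ppCopy);
**       *ppCopy = 0;
**     }
**   }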
+*/ +static int pagerStmtBegin(Pager *pPager){ + int rc; + assert( !pPager->stmtInUse ); + assert( pPager->state>=PAGER_SHARED ); + assert( pPager->dbSize>=0 ); + PAGERTRACE2("STMT-BEGIN %d\n", PAGERID(pPager)); + if( MEMDB ){ + pPager->stmtInUse = 1; + pPager->stmtSize = pPager->dbSize; + return SQLITE_OK; + } + if( !pPager->journalOpen ){ + pPager->stmtAutoopen = 1; + return SQLITE_OK; + } + assert( pPager->journalOpen ); + pagerLeave(pPager); + assert( pPager->pInStmt==0 ); + pPager->pInStmt = sqlite3BitvecCreate(pPager->dbSize); + pagerEnter(pPager); + if( pPager->pInStmt==0 ){ + /* sqlite3OsLock(pPager->fd, SHARED_LOCK); */ + return SQLITE_NOMEM; + } +#ifndef NDEBUG + rc = sqlite3OsFileSize(pPager->jfd, &pPager->stmtJSize); + if( rc ) goto stmt_begin_failed; + assert( pPager->stmtJSize == pPager->journalOff ); +#endif + pPager->stmtJSize = pPager->journalOff; + pPager->stmtSize = pPager->dbSize; + pPager->stmtHdrOff = 0; + pPager->stmtCksum = pPager->cksumInit; + if( !pPager->stmtOpen ){ + rc = sqlite3PagerOpentemp(pPager->pVfs, pPager->stfd, pPager->zStmtJrnl, + SQLITE_OPEN_SUBJOURNAL); + if( rc ){ + goto stmt_begin_failed; + } + pPager->stmtOpen = 1; + pPager->stmtNRec = 0; + } + pPager->stmtInUse = 1; + return SQLITE_OK; + +stmt_begin_failed: + if( pPager->pInStmt ){ + sqlite3BitvecDestroy(pPager->pInStmt); + pPager->pInStmt = 0; + } + return rc; +} +int sqlite3PagerStmtBegin(Pager *pPager){ + int rc; + pagerEnter(pPager); + rc = pagerStmtBegin(pPager); + pagerLeave(pPager); + return rc; +} + +/* +** Commit a statement. +*/ +int sqlite3PagerStmtCommit(Pager *pPager){ + pagerEnter(pPager); + if( pPager->stmtInUse ){ + PgHdr *pPg, *pNext; + PAGERTRACE2("STMT-COMMIT %d\n", PAGERID(pPager)); + if( !MEMDB ){ + /* sqlite3OsTruncate(pPager->stfd, 0); */ + sqlite3BitvecDestroy(pPager->pInStmt); + pPager->pInStmt = 0; + }else{ + for(pPg=pPager->pStmt; pPg; pPg=pNext){ + PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager); + pNext = pHist->pNextStmt; + assert( pHist->inStmt ); + pHist->inStmt = 0; + pHist->pPrevStmt = pHist->pNextStmt = 0; + sqlite3_free(pHist->pStmt); + pHist->pStmt = 0; + } + } + pPager->stmtNRec = 0; + pPager->stmtInUse = 0; + pPager->pStmt = 0; + } + pPager->stmtAutoopen = 0; + pagerLeave(pPager); + return SQLITE_OK; +} + +/* +** Rollback a statement. +*/ +int sqlite3PagerStmtRollback(Pager *pPager){ + int rc; + pagerEnter(pPager); + if( pPager->stmtInUse ){ + PAGERTRACE2("STMT-ROLLBACK %d\n", PAGERID(pPager)); + if( MEMDB ){ + PgHdr *pPg; + PgHistory *pHist; + for(pPg=pPager->pStmt; pPg; pPg=pHist->pNextStmt){ + pHist = PGHDR_TO_HIST(pPg, pPager); + if( pHist->pStmt ){ + memcpy(PGHDR_TO_DATA(pPg), pHist->pStmt, pPager->pageSize); + sqlite3_free(pHist->pStmt); + pHist->pStmt = 0; + } + } + pPager->dbSize = pPager->stmtSize; + pager_truncate_cache(pPager); + rc = SQLITE_OK; + }else{ + rc = pager_stmt_playback(pPager); + } + sqlite3PagerStmtCommit(pPager); + }else{ + rc = SQLITE_OK; + } + pPager->stmtAutoopen = 0; + pagerLeave(pPager); + return rc; +} + +/* +** Return the full pathname of the database file. +*/ +const char *sqlite3PagerFilename(Pager *pPager){ + return pPager->zFilename; +} + +/* +** Return the VFS structure for the pager. +*/ +const sqlite3_vfs *sqlite3PagerVfs(Pager *pPager){ + return pPager->pVfs; +} + +/* +** Return the file handle for the database file associated +** with the pager. This might return NULL if the file has +** not yet been opened. 
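**
** A minimal stand-alone sketch of the statement-journal record layout used
** by the statement routines above (stmtRecordOffset is a hypothetical
** name): unlike the main journal, the statement journal has no header and
** its records carry no checksum, so record i starts at i*(4+pageSize) and
** holds a 4-byte page number followed by the page image.
**
**   static long stmtRecordOffset(int iRecord, long pageSize){
**     return (long)iRecord * (4 + pageSize);
**   }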
+*/ +sqlite3_file *sqlite3PagerFile(Pager *pPager){ + return pPager->fd; +} + +/* +** Return the directory of the database file. +*/ +const char *sqlite3PagerDirname(Pager *pPager){ + return pPager->zDirectory; +} + +/* +** Return the full pathname of the journal file. +*/ +const char *sqlite3PagerJournalname(Pager *pPager){ + return pPager->zJournal; +} + +/* +** Return true if fsync() calls are disabled for this pager. Return FALSE +** if fsync()s are executed normally. +*/ +int sqlite3PagerNosync(Pager *pPager){ + return pPager->noSync; +} + +#ifdef SQLITE_HAS_CODEC +/* +** Set the codec for this pager +*/ +void sqlite3PagerSetCodec( + Pager *pPager, + void *(*xCodec)(void*,void*,Pgno,int), + void *pCodecArg +){ + pPager->xCodec = xCodec; + pPager->pCodecArg = pCodecArg; +} +#endif + +#ifndef SQLITE_OMIT_AUTOVACUUM +/* +** Move the page pPg to location pgno in the file. +** +** There must be no references to the page previously located at +** pgno (which we call pPgOld) though that page is allowed to be +** in cache. If the page previous located at pgno is not already +** in the rollback journal, it is not put there by by this routine. +** +** References to the page pPg remain valid. Updating any +** meta-data associated with pPg (i.e. data stored in the nExtra bytes +** allocated along with the page) is the responsibility of the caller. +** +** A transaction must be active when this routine is called. It used to be +** required that a statement transaction was not active, but this restriction +** has been removed (CREATE INDEX needs to move a page when a statement +** transaction is active). +*/ +int sqlite3PagerMovepage(Pager *pPager, DbPage *pPg, Pgno pgno){ + PgHdr *pPgOld; /* The page being overwritten. */ + int h; + Pgno needSyncPgno = 0; + + pagerEnter(pPager); + assert( pPg->nRef>0 ); + + PAGERTRACE5("MOVE %d page %d (needSync=%d) moves to %d\n", + PAGERID(pPager), pPg->pgno, pPg->needSync, pgno); + IOTRACE(("MOVE %p %d %d\n", pPager, pPg->pgno, pgno)) + + pager_get_content(pPg); + if( pPg->needSync ){ + needSyncPgno = pPg->pgno; + assert( pPg->inJournal || (int)pgno>pPager->origDbSize ); + assert( pPg->dirty ); + assert( pPager->needSync ); + } + + /* Unlink pPg from its hash-chain */ + unlinkHashChain(pPager, pPg); + + /* If the cache contains a page with page-number pgno, remove it + ** from its hash chain. Also, if the PgHdr.needSync was set for + ** page pgno before the 'move' operation, it needs to be retained + ** for the page moved there. + */ + pPg->needSync = 0; + pPgOld = pager_lookup(pPager, pgno); + if( pPgOld ){ + assert( pPgOld->nRef==0 ); + unlinkHashChain(pPager, pPgOld); + makeClean(pPgOld); + pPg->needSync = pPgOld->needSync; + }else{ + pPg->needSync = 0; + } + pPg->inJournal = sqlite3BitvecTest(pPager->pInJournal, pgno); + + /* Change the page number for pPg and insert it into the new hash-chain. */ + assert( pgno!=0 ); + pPg->pgno = pgno; + h = pgno & (pPager->nHash-1); + if( pPager->aHash[h] ){ + assert( pPager->aHash[h]->pPrevHash==0 ); + pPager->aHash[h]->pPrevHash = pPg; + } + pPg->pNextHash = pPager->aHash[h]; + pPager->aHash[h] = pPg; + pPg->pPrevHash = 0; + + makeDirty(pPg); + pPager->dirtyCache = 1; + + if( needSyncPgno ){ + /* If needSyncPgno is non-zero, then the journal file needs to be + ** sync()ed before any data is written to database file page needSyncPgno. + ** Currently, no such page exists in the page-cache and the + ** Pager.pInJournal bit has been set. 
This needs to be remedied by loading + ** the page into the pager-cache and setting the PgHdr.needSync flag. + ** + ** If the attempt to load the page into the page-cache fails, (due + ** to a malloc() or IO failure), clear the bit in the pInJournal[] + ** array. Otherwise, if the page is loaded and written again in + ** this transaction, it may be written to the database file before + ** it is synced into the journal file. This way, it may end up in + ** the journal file twice, but that is not a problem. + ** + ** The sqlite3PagerGet() call may cause the journal to sync. So make + ** sure the Pager.needSync flag is set too. + */ + int rc; + PgHdr *pPgHdr; + assert( pPager->needSync ); + rc = sqlite3PagerGet(pPager, needSyncPgno, &pPgHdr); + if( rc!=SQLITE_OK ){ + if( pPager->pInJournal && (int)needSyncPgno<=pPager->origDbSize ){ + sqlite3BitvecClear(pPager->pInJournal, needSyncPgno); + } + pagerLeave(pPager); + return rc; + } + pPager->needSync = 1; + pPgHdr->needSync = 1; + pPgHdr->inJournal = 1; + makeDirty(pPgHdr); + sqlite3PagerUnref(pPgHdr); + } + + pagerLeave(pPager); + return SQLITE_OK; +} +#endif + +/* +** Return a pointer to the data for the specified page. +*/ +void *sqlite3PagerGetData(DbPage *pPg){ + return PGHDR_TO_DATA(pPg); +} + +/* +** Return a pointer to the Pager.nExtra bytes of "extra" space +** allocated along with the specified page. +*/ +void *sqlite3PagerGetExtra(DbPage *pPg){ + Pager *pPager = pPg->pPager; + return (pPager?PGHDR_TO_EXTRA(pPg, pPager):0); +} + +/* +** Get/set the locking-mode for this pager. Parameter eMode must be one +** of PAGER_LOCKINGMODE_QUERY, PAGER_LOCKINGMODE_NORMAL or +** PAGER_LOCKINGMODE_EXCLUSIVE. If the parameter is not _QUERY, then +** the locking-mode is set to the value specified. +** +** The returned value is either PAGER_LOCKINGMODE_NORMAL or +** PAGER_LOCKINGMODE_EXCLUSIVE, indicating the current (possibly updated) +** locking-mode. +*/ +int sqlite3PagerLockingMode(Pager *pPager, int eMode){ + assert( eMode==PAGER_LOCKINGMODE_QUERY + || eMode==PAGER_LOCKINGMODE_NORMAL + || eMode==PAGER_LOCKINGMODE_EXCLUSIVE ); + assert( PAGER_LOCKINGMODE_QUERY<0 ); + assert( PAGER_LOCKINGMODE_NORMAL>=0 && PAGER_LOCKINGMODE_EXCLUSIVE>=0 ); + if( eMode>=0 && !pPager->tempFile ){ + pPager->exclusiveMode = eMode; + } + return (int)pPager->exclusiveMode; +} + +#ifdef SQLITE_TEST +/* +** Print a listing of all referenced pages and their ref count. +*/ +void sqlite3PagerRefdump(Pager *pPager){ + PgHdr *pPg; + for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){ + if( pPg->nRef<=0 ) continue; + sqlite3DebugPrintf("PAGE %3d addr=%p nRef=%d\n", + pPg->pgno, PGHDR_TO_DATA(pPg), pPg->nRef); + } +} +#endif + +#endif /* SQLITE_OMIT_DISKIO */ Added: external/sqlite-source-3.5.7.x/pager.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/pager.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,125 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This header file defines the interface that the sqlite page cache +** subsystem. The page cache subsystem reads and writes a file a page +** at a time and provides a journal for rollback. 
+** +** @(#) $Id: pager.h,v 1.69 2008/02/02 20:47:38 drh Exp $ +*/ + +#ifndef _PAGER_H_ +#define _PAGER_H_ + +/* +** The type used to represent a page number. The first page in a file +** is called page 1. 0 is used to represent "not a page". +*/ +typedef unsigned int Pgno; + +/* +** Each open file is managed by a separate instance of the "Pager" structure. +*/ +typedef struct Pager Pager; + +/* +** Handle type for pages. +*/ +typedef struct PgHdr DbPage; + +/* +** Allowed values for the flags parameter to sqlite3PagerOpen(). +** +** NOTE: This values must match the corresponding BTREE_ values in btree.h. +*/ +#define PAGER_OMIT_JOURNAL 0x0001 /* Do not use a rollback journal */ +#define PAGER_NO_READLOCK 0x0002 /* Omit readlocks on readonly files */ + +/* +** Valid values for the second argument to sqlite3PagerLockingMode(). +*/ +#define PAGER_LOCKINGMODE_QUERY -1 +#define PAGER_LOCKINGMODE_NORMAL 0 +#define PAGER_LOCKINGMODE_EXCLUSIVE 1 + +/* +** See source code comments for a detailed description of the following +** routines: +*/ +int sqlite3PagerOpen(sqlite3_vfs *, Pager **ppPager, const char*, int,int,int); +void sqlite3PagerSetBusyhandler(Pager*, BusyHandler *pBusyHandler); +void sqlite3PagerSetDestructor(Pager*, void(*)(DbPage*,int)); +void sqlite3PagerSetReiniter(Pager*, void(*)(DbPage*,int)); +int sqlite3PagerSetPagesize(Pager*, u16*); +int sqlite3PagerMaxPageCount(Pager*, int); +int sqlite3PagerReadFileheader(Pager*, int, unsigned char*); +void sqlite3PagerSetCachesize(Pager*, int); +int sqlite3PagerClose(Pager *pPager); +int sqlite3PagerAcquire(Pager *pPager, Pgno pgno, DbPage **ppPage, int clrFlag); +#define sqlite3PagerGet(A,B,C) sqlite3PagerAcquire(A,B,C,0) +DbPage *sqlite3PagerLookup(Pager *pPager, Pgno pgno); +int sqlite3PagerRef(DbPage*); +int sqlite3PagerUnref(DbPage*); +int sqlite3PagerWrite(DbPage*); +int sqlite3PagerOverwrite(Pager *pPager, Pgno pgno, void*); +int sqlite3PagerPagecount(Pager*); +int sqlite3PagerTruncate(Pager*,Pgno); +int sqlite3PagerBegin(DbPage*, int exFlag); +int sqlite3PagerCommitPhaseOne(Pager*,const char *zMaster, Pgno); +int sqlite3PagerCommitPhaseTwo(Pager*); +int sqlite3PagerRollback(Pager*); +int sqlite3PagerIsreadonly(Pager*); +int sqlite3PagerStmtBegin(Pager*); +int sqlite3PagerStmtCommit(Pager*); +int sqlite3PagerStmtRollback(Pager*); +void sqlite3PagerDontRollback(DbPage*); +void sqlite3PagerDontWrite(DbPage*); +int sqlite3PagerRefcount(Pager*); +void sqlite3PagerSetSafetyLevel(Pager*,int,int); +const char *sqlite3PagerFilename(Pager*); +const sqlite3_vfs *sqlite3PagerVfs(Pager*); +sqlite3_file *sqlite3PagerFile(Pager*); +const char *sqlite3PagerDirname(Pager*); +const char *sqlite3PagerJournalname(Pager*); +int sqlite3PagerNosync(Pager*); +int sqlite3PagerMovepage(Pager*,DbPage*,Pgno); +void *sqlite3PagerGetData(DbPage *); +void *sqlite3PagerGetExtra(DbPage *); +int sqlite3PagerLockingMode(Pager *, int); +void *sqlite3PagerTempSpace(Pager*); + +#if defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) && !defined(SQLITE_OMIT_DISKIO) + int sqlite3PagerReleaseMemory(int); +#endif + +#ifdef SQLITE_HAS_CODEC + void sqlite3PagerSetCodec(Pager*,void*(*)(void*,void*,Pgno,int),void*); +#endif + +#if !defined(NDEBUG) || defined(SQLITE_TEST) + Pgno sqlite3PagerPagenumber(DbPage*); + int sqlite3PagerIswriteable(DbPage*); +#endif + +#ifdef SQLITE_TEST + int *sqlite3PagerStats(Pager*); + void sqlite3PagerRefdump(Pager*); +#endif + +#ifdef SQLITE_TEST +void disable_simulated_io_errors(void); +void enable_simulated_io_errors(void); +#else +# define 
disable_simulated_io_errors()
+# define enable_simulated_io_errors()
+#endif
+
+#endif /* _PAGER_H_ */

Added: external/sqlite-source-3.5.7.x/parse.c
==============================================================================
--- (empty file)
+++ external/sqlite-source-3.5.7.x/parse.c	Wed Mar 19 03:00:27 2008
@@ -0,0 +1,3480 @@
+/* Driver template for the LEMON parser generator.
+** The author disclaims copyright to this source code.
+*/
+/* First off, code is include which follows the "include" declaration
+** in the input file. */
+#include <stdio.h>
+#line 51 "parse.y"
+
+#include "sqliteInt.h"
+
+/*
+** An instance of this structure holds information about the
+** LIMIT clause of a SELECT statement.
+*/
+struct LimitVal {
+  Expr *pLimit;    /* The LIMIT expression. NULL if there is no limit */
+  Expr *pOffset;   /* The OFFSET expression. NULL if there is none */
+};
+
+/*
+** An instance of this structure is used to store the LIKE,
+** GLOB, NOT LIKE, and NOT GLOB operators.
+*/
+struct LikeOp {
+  Token eOperator;  /* "like" or "glob" or "regexp" */
+  int not;          /* True if the NOT keyword is present */
+};
+
+/*
+** An instance of the following structure describes the event of a
+** TRIGGER. "a" is the event type, one of TK_UPDATE, TK_INSERT,
+** TK_DELETE, or TK_INSTEAD. If the event is of the form
+**
+**      UPDATE ON (a,b,c)
+**
+** Then the "b" IdList records the list "a,b,c".
+*/
+struct TrigEvent { int a; IdList * b; };
+
+/*
+** An instance of this structure holds the ATTACH key and the key type.
+*/
+struct AttachKey { int type; Token key; };
+
+#line 47 "parse.c"
+/* Next is all token values, in a form suitable for use by makeheaders.
+** This section will be null unless lemon is run with the -m switch.
+*/
+/*
+** These constants (all generated automatically by the parser generator)
+** specify the various kinds of tokens (terminals) that the parser
+** understands.
+**
+** Each symbol here is a terminal symbol in the grammar.
+*/
+/* Make sure the INTERFACE macro is defined.
+*/
+#ifndef INTERFACE
+# define INTERFACE 1
+#endif
+/* The next thing included is series of defines which control
+** various aspects of the generated parser.
+**    YYCODETYPE         is the data type used for storing terminal
+**                       and nonterminal numbers. "unsigned char" is
+**                       used if there are fewer than 250 terminals
+**                       and nonterminals. "int" is used otherwise.
+**    YYNOCODE           is a number of type YYCODETYPE which corresponds
+**                       to no legal terminal or nonterminal number. This
+**                       number is used to fill in empty slots of the hash
+**                       table.
+**    YYFALLBACK         If defined, this indicates that one or more tokens
+**                       have fall-back values which should be used if the
+**                       original value of the token will not parse.
+**    YYACTIONTYPE       is the data type used for storing terminal
+**                       and nonterminal numbers. "unsigned char" is
+**                       used if there are fewer than 250 rules and
+**                       states combined. "int" is used otherwise.
+**    sqlite3ParserTOKENTYPE     is the data type used for minor tokens given
+**                       directly to the parser from the tokenizer.
+**    YYMINORTYPE        is the data type used for all minor tokens.
+**                       This is typically a union of many types, one of
+**                       which is sqlite3ParserTOKENTYPE. The entry in the union
+**                       for base tokens is called "yy0".
+**    YYSTACKDEPTH       is the maximum depth of the parser's stack.
If +** zero the stack is dynamically sized using realloc() +** sqlite3ParserARG_SDECL A static variable declaration for the %extra_argument +** sqlite3ParserARG_PDECL A parameter declaration for the %extra_argument +** sqlite3ParserARG_STORE Code to store %extra_argument into yypParser +** sqlite3ParserARG_FETCH Code to extract %extra_argument from yypParser +** YYNSTATE the combined number of states. +** YYNRULE the number of rules in the grammar +** YYERRORSYMBOL is the code number of the error symbol. If not +** defined, then do no error processing. +*/ +#define YYCODETYPE unsigned char +#define YYNOCODE 248 +#define YYACTIONTYPE unsigned short int +#define YYWILDCARD 59 +#define sqlite3ParserTOKENTYPE Token +typedef union { + sqlite3ParserTOKENTYPE yy0; + int yy46; + struct LikeOp yy72; + Expr* yy172; + ExprList* yy174; + Select* yy219; + struct LimitVal yy234; + TriggerStep* yy243; + struct TrigEvent yy370; + SrcList* yy373; + struct {int value; int mask;} yy405; + Token yy410; + IdList* yy432; +} YYMINORTYPE; +#ifndef YYSTACKDEPTH +#define YYSTACKDEPTH 100 +#endif +#define sqlite3ParserARG_SDECL Parse *pParse; +#define sqlite3ParserARG_PDECL ,Parse *pParse +#define sqlite3ParserARG_FETCH Parse *pParse = yypParser->pParse +#define sqlite3ParserARG_STORE yypParser->pParse = pParse +#define YYNSTATE 588 +#define YYNRULE 312 +#define YYFALLBACK 1 +#define YY_NO_ACTION (YYNSTATE+YYNRULE+2) +#define YY_ACCEPT_ACTION (YYNSTATE+YYNRULE+1) +#define YY_ERROR_ACTION (YYNSTATE+YYNRULE) + +/* Next are that tables used to determine what action to take based on the +** current state and lookahead token. These tables are used to implement +** functions that take a state number and lookahead value and return an +** action integer. +** +** Suppose the action integer is N. Then the action is determined as +** follows +** +** 0 <= N < YYNSTATE Shift N. That is, push the lookahead +** token onto the stack and goto state N. +** +** YYNSTATE <= N < YYNSTATE+YYNRULE Reduce by rule N-YYNSTATE. +** +** N == YYNSTATE+YYNRULE A syntax error has occurred. +** +** N == YYNSTATE+YYNRULE+1 The parser accepts its input. +** +** N == YYNSTATE+YYNRULE+2 No such action. Denotes unused +** slots in the yy_action[] table. +** +** The action table is constructed as a single large table named yy_action[]. +** Given state S and lookahead X, the action is computed as +** +** yy_action[ yy_shift_ofst[S] + X ] +** +** If the index value yy_shift_ofst[S]+X is out of range or if the value +** yy_lookahead[yy_shift_ofst[S]+X] is not equal to X or if yy_shift_ofst[S] +** is equal to YY_SHIFT_USE_DFLT, it means that the action is not in the table +** and that yy_default[S] should be used instead. +** +** The formula above is for computing the action when the lookahead is +** a terminal symbol. If the lookahead is a non-terminal (as occurs after +** a reduce action) then the yy_reduce_ofst[] array is used in place of +** the yy_shift_ofst[] array and YY_REDUCE_USE_DFLT is used in place of +** YY_SHIFT_USE_DFLT. +** +** The following are the tables generated in this section: +** +** yy_action[] A single table containing all actions. +** yy_lookahead[] A table containing the lookahead for each entry in +** yy_action. Used to detect hash collisions. +** yy_shift_ofst[] For each state, the offset into yy_action for +** shifting terminals. +** yy_reduce_ofst[] For each state, the offset into yy_action for +** shifting non-terminals after a reduce. +** yy_default[] Default action for each state. 
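**
** A minimal stand-alone sketch of the shift-table lookup described above,
** written as a helper that takes the generated tables as parameters
** (findShiftAction, nAction and shiftUseDflt are hypothetical names; the
** fallback handling and the reduce-table case described above are not
** covered by this sketch):
**
**   static int findShiftAction(
**     const unsigned short *aAction,
**     const unsigned char *aLookahead,
**     int nAction,
**     const short *aShiftOfst,
**     const unsigned short *aDefault,
**     int shiftUseDflt,
**     int state,
**     int lookahead
**   ){
**     int i = aShiftOfst[state];
**     if( i==shiftUseDflt ) return aDefault[state];
**     i += lookahead;
**     if( i<0 || i>=nAction || aLookahead[i]!=lookahead ){
**       return aDefault[state];
**     }
**     return aAction[i];
**   }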
+*/ +static const YYACTIONTYPE yy_action[] = { + /* 0 */ 292, 901, 124, 587, 409, 172, 2, 418, 61, 61, + /* 10 */ 61, 61, 519, 63, 63, 63, 63, 64, 64, 65, + /* 20 */ 65, 65, 66, 210, 447, 212, 425, 431, 68, 63, + /* 30 */ 63, 63, 63, 64, 64, 65, 65, 65, 66, 210, + /* 40 */ 391, 388, 396, 451, 60, 59, 297, 435, 436, 432, + /* 50 */ 432, 62, 62, 61, 61, 61, 61, 263, 63, 63, + /* 60 */ 63, 63, 64, 64, 65, 65, 65, 66, 210, 292, + /* 70 */ 493, 494, 418, 489, 208, 82, 67, 420, 69, 154, + /* 80 */ 63, 63, 63, 63, 64, 64, 65, 65, 65, 66, + /* 90 */ 210, 67, 462, 69, 154, 425, 431, 573, 264, 58, + /* 100 */ 64, 64, 65, 65, 65, 66, 210, 397, 398, 422, + /* 110 */ 422, 422, 292, 60, 59, 297, 435, 436, 432, 432, + /* 120 */ 62, 62, 61, 61, 61, 61, 317, 63, 63, 63, + /* 130 */ 63, 64, 64, 65, 65, 65, 66, 210, 425, 431, + /* 140 */ 94, 65, 65, 65, 66, 210, 396, 210, 414, 34, + /* 150 */ 56, 298, 442, 443, 410, 488, 60, 59, 297, 435, + /* 160 */ 436, 432, 432, 62, 62, 61, 61, 61, 61, 490, + /* 170 */ 63, 63, 63, 63, 64, 64, 65, 65, 65, 66, + /* 180 */ 210, 292, 257, 524, 295, 571, 113, 408, 522, 451, + /* 190 */ 331, 317, 407, 20, 418, 340, 519, 396, 532, 531, + /* 200 */ 505, 447, 212, 570, 569, 208, 530, 425, 431, 149, + /* 210 */ 150, 397, 398, 414, 41, 211, 151, 533, 372, 489, + /* 220 */ 261, 568, 259, 420, 292, 60, 59, 297, 435, 436, + /* 230 */ 432, 432, 62, 62, 61, 61, 61, 61, 317, 63, + /* 240 */ 63, 63, 63, 64, 64, 65, 65, 65, 66, 210, + /* 250 */ 425, 431, 447, 333, 215, 422, 422, 422, 363, 418, + /* 260 */ 414, 41, 397, 398, 366, 567, 211, 292, 60, 59, + /* 270 */ 297, 435, 436, 432, 432, 62, 62, 61, 61, 61, + /* 280 */ 61, 396, 63, 63, 63, 63, 64, 64, 65, 65, + /* 290 */ 65, 66, 210, 425, 431, 491, 300, 524, 474, 66, + /* 300 */ 210, 214, 474, 229, 411, 286, 534, 20, 449, 523, + /* 310 */ 168, 60, 59, 297, 435, 436, 432, 432, 62, 62, + /* 320 */ 61, 61, 61, 61, 474, 63, 63, 63, 63, 64, + /* 330 */ 64, 65, 65, 65, 66, 210, 209, 480, 317, 77, + /* 340 */ 292, 239, 300, 55, 484, 230, 397, 398, 181, 547, + /* 350 */ 494, 345, 348, 349, 67, 152, 69, 154, 339, 524, + /* 360 */ 414, 35, 350, 241, 221, 370, 425, 431, 578, 20, + /* 370 */ 164, 118, 243, 343, 248, 344, 176, 322, 442, 443, + /* 380 */ 414, 3, 80, 252, 60, 59, 297, 435, 436, 432, + /* 390 */ 432, 62, 62, 61, 61, 61, 61, 174, 63, 63, + /* 400 */ 63, 63, 64, 64, 65, 65, 65, 66, 210, 292, + /* 410 */ 221, 550, 236, 487, 510, 353, 317, 118, 243, 343, + /* 420 */ 248, 344, 176, 181, 317, 525, 345, 348, 349, 252, + /* 430 */ 223, 415, 155, 464, 511, 425, 431, 350, 414, 34, + /* 440 */ 465, 211, 177, 175, 160, 237, 414, 34, 338, 549, + /* 450 */ 449, 323, 168, 60, 59, 297, 435, 436, 432, 432, + /* 460 */ 62, 62, 61, 61, 61, 61, 415, 63, 63, 63, + /* 470 */ 63, 64, 64, 65, 65, 65, 66, 210, 292, 542, + /* 480 */ 335, 517, 504, 541, 456, 571, 302, 19, 331, 144, + /* 490 */ 317, 390, 317, 330, 2, 362, 457, 294, 483, 373, + /* 500 */ 269, 268, 252, 570, 425, 431, 588, 391, 388, 458, + /* 510 */ 208, 495, 414, 49, 414, 49, 303, 585, 892, 159, + /* 520 */ 892, 496, 60, 59, 297, 435, 436, 432, 432, 62, + /* 530 */ 62, 61, 61, 61, 61, 201, 63, 63, 63, 63, + /* 540 */ 64, 64, 65, 65, 65, 66, 210, 292, 317, 181, + /* 550 */ 439, 255, 345, 348, 349, 370, 153, 582, 308, 251, + /* 560 */ 309, 452, 76, 350, 78, 382, 211, 426, 427, 415, + /* 570 */ 414, 27, 319, 425, 431, 440, 1, 22, 585, 891, + /* 580 */ 396, 891, 544, 478, 320, 263, 438, 438, 429, 430, + /* 590 */ 415, 60, 59, 297, 435, 436, 432, 432, 62, 62, + /* 600 */ 61, 61, 61, 61, 328, 63, 63, 63, 
63, 64, + /* 610 */ 64, 65, 65, 65, 66, 210, 292, 428, 582, 374, + /* 620 */ 224, 93, 517, 9, 336, 396, 557, 396, 456, 67, + /* 630 */ 396, 69, 154, 399, 400, 401, 320, 238, 438, 438, + /* 640 */ 457, 318, 425, 431, 299, 397, 398, 320, 433, 438, + /* 650 */ 438, 581, 291, 458, 225, 327, 5, 222, 546, 292, + /* 660 */ 60, 59, 297, 435, 436, 432, 432, 62, 62, 61, + /* 670 */ 61, 61, 61, 395, 63, 63, 63, 63, 64, 64, + /* 680 */ 65, 65, 65, 66, 210, 425, 431, 482, 313, 392, + /* 690 */ 397, 398, 397, 398, 207, 397, 398, 824, 273, 517, + /* 700 */ 251, 200, 292, 60, 59, 297, 435, 436, 432, 432, + /* 710 */ 62, 62, 61, 61, 61, 61, 470, 63, 63, 63, + /* 720 */ 63, 64, 64, 65, 65, 65, 66, 210, 425, 431, + /* 730 */ 171, 160, 263, 263, 304, 415, 276, 119, 274, 263, + /* 740 */ 517, 517, 263, 517, 192, 292, 60, 70, 297, 435, + /* 750 */ 436, 432, 432, 62, 62, 61, 61, 61, 61, 379, + /* 760 */ 63, 63, 63, 63, 64, 64, 65, 65, 65, 66, + /* 770 */ 210, 425, 431, 384, 559, 305, 306, 251, 415, 320, + /* 780 */ 560, 438, 438, 561, 540, 360, 540, 387, 292, 196, + /* 790 */ 59, 297, 435, 436, 432, 432, 62, 62, 61, 61, + /* 800 */ 61, 61, 371, 63, 63, 63, 63, 64, 64, 65, + /* 810 */ 65, 65, 66, 210, 425, 431, 396, 275, 251, 251, + /* 820 */ 172, 250, 418, 415, 386, 367, 178, 179, 180, 469, + /* 830 */ 311, 123, 156, 128, 297, 435, 436, 432, 432, 62, + /* 840 */ 62, 61, 61, 61, 61, 317, 63, 63, 63, 63, + /* 850 */ 64, 64, 65, 65, 65, 66, 210, 72, 324, 177, + /* 860 */ 4, 317, 263, 317, 296, 263, 415, 414, 28, 317, + /* 870 */ 263, 317, 321, 72, 324, 317, 4, 421, 445, 445, + /* 880 */ 296, 397, 398, 414, 23, 414, 32, 418, 321, 326, + /* 890 */ 329, 414, 53, 414, 52, 317, 158, 414, 98, 451, + /* 900 */ 317, 194, 317, 277, 317, 326, 378, 471, 502, 317, + /* 910 */ 478, 279, 478, 165, 294, 451, 317, 414, 96, 75, + /* 920 */ 74, 469, 414, 101, 414, 102, 414, 112, 73, 315, + /* 930 */ 316, 414, 114, 420, 448, 75, 74, 481, 414, 16, + /* 940 */ 381, 317, 183, 467, 73, 315, 316, 72, 324, 420, + /* 950 */ 4, 208, 317, 186, 296, 317, 499, 500, 476, 208, + /* 960 */ 173, 341, 321, 414, 99, 422, 422, 422, 423, 424, + /* 970 */ 11, 361, 380, 307, 414, 33, 415, 414, 97, 326, + /* 980 */ 460, 422, 422, 422, 423, 424, 11, 415, 413, 451, + /* 990 */ 413, 162, 412, 317, 412, 468, 226, 227, 228, 104, + /* 1000 */ 84, 473, 317, 509, 508, 317, 622, 477, 317, 75, + /* 1010 */ 74, 249, 205, 21, 281, 414, 24, 418, 73, 315, + /* 1020 */ 316, 282, 317, 420, 414, 54, 507, 414, 115, 317, + /* 1030 */ 414, 116, 506, 203, 147, 549, 244, 512, 526, 202, + /* 1040 */ 317, 513, 204, 317, 414, 117, 317, 245, 317, 18, + /* 1050 */ 317, 414, 25, 317, 256, 422, 422, 422, 423, 424, + /* 1060 */ 11, 258, 414, 36, 260, 414, 37, 317, 414, 26, + /* 1070 */ 414, 38, 414, 39, 262, 414, 40, 317, 514, 317, + /* 1080 */ 128, 317, 418, 317, 189, 377, 278, 268, 267, 414, + /* 1090 */ 42, 293, 317, 254, 317, 128, 208, 365, 8, 414, + /* 1100 */ 43, 414, 44, 414, 29, 414, 30, 352, 368, 128, + /* 1110 */ 317, 545, 317, 128, 414, 45, 414, 46, 317, 583, + /* 1120 */ 383, 553, 317, 173, 554, 317, 91, 317, 564, 369, + /* 1130 */ 91, 357, 414, 47, 414, 48, 580, 270, 290, 271, + /* 1140 */ 414, 31, 272, 556, 414, 10, 566, 414, 50, 414, + /* 1150 */ 51, 280, 283, 284, 577, 146, 463, 405, 584, 231, + /* 1160 */ 325, 419, 444, 466, 446, 246, 505, 552, 563, 515, + /* 1170 */ 516, 520, 163, 518, 394, 347, 7, 402, 403, 404, + /* 1180 */ 314, 84, 232, 334, 332, 83, 79, 416, 170, 57, + /* 1190 */ 213, 461, 125, 85, 337, 342, 492, 301, 233, 498, + /* 1200 */ 497, 105, 502, 219, 
354, 247, 521, 234, 501, 235, + /* 1210 */ 287, 417, 503, 218, 527, 528, 529, 358, 240, 535, + /* 1220 */ 475, 242, 288, 479, 356, 184, 185, 121, 187, 132, + /* 1230 */ 188, 548, 537, 88, 190, 193, 364, 142, 375, 376, + /* 1240 */ 555, 133, 220, 562, 134, 310, 135, 138, 136, 574, + /* 1250 */ 575, 141, 576, 265, 579, 100, 538, 217, 393, 92, + /* 1260 */ 103, 95, 406, 623, 624, 166, 434, 167, 437, 71, + /* 1270 */ 453, 441, 450, 17, 143, 157, 169, 6, 111, 13, + /* 1280 */ 454, 455, 459, 472, 126, 81, 12, 127, 161, 485, + /* 1290 */ 486, 216, 86, 122, 106, 182, 253, 346, 312, 107, + /* 1300 */ 120, 87, 351, 108, 245, 355, 145, 536, 359, 129, + /* 1310 */ 173, 266, 191, 109, 289, 551, 130, 539, 195, 543, + /* 1320 */ 131, 14, 197, 199, 198, 558, 137, 139, 140, 110, + /* 1330 */ 15, 285, 572, 206, 389, 565, 385, 148, 586, 902, + /* 1340 */ 902, 902, 902, 902, 902, 89, 90, +}; +static const YYCODETYPE yy_lookahead[] = { + /* 0 */ 16, 139, 140, 141, 168, 21, 144, 23, 69, 70, + /* 10 */ 71, 72, 176, 74, 75, 76, 77, 78, 79, 80, + /* 20 */ 81, 82, 83, 84, 78, 79, 42, 43, 73, 74, + /* 30 */ 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, + /* 40 */ 1, 2, 23, 58, 60, 61, 62, 63, 64, 65, + /* 50 */ 66, 67, 68, 69, 70, 71, 72, 147, 74, 75, + /* 60 */ 76, 77, 78, 79, 80, 81, 82, 83, 84, 16, + /* 70 */ 185, 186, 88, 88, 110, 22, 217, 92, 219, 220, + /* 80 */ 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, + /* 90 */ 84, 217, 218, 219, 220, 42, 43, 238, 188, 46, + /* 100 */ 78, 79, 80, 81, 82, 83, 84, 88, 89, 124, + /* 110 */ 125, 126, 16, 60, 61, 62, 63, 64, 65, 66, + /* 120 */ 67, 68, 69, 70, 71, 72, 147, 74, 75, 76, + /* 130 */ 77, 78, 79, 80, 81, 82, 83, 84, 42, 43, + /* 140 */ 44, 80, 81, 82, 83, 84, 23, 84, 169, 170, + /* 150 */ 19, 164, 165, 166, 23, 169, 60, 61, 62, 63, + /* 160 */ 64, 65, 66, 67, 68, 69, 70, 71, 72, 169, + /* 170 */ 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, + /* 180 */ 84, 16, 14, 147, 150, 147, 21, 167, 168, 58, + /* 190 */ 211, 147, 156, 157, 23, 216, 176, 23, 181, 176, + /* 200 */ 177, 78, 79, 165, 166, 110, 183, 42, 43, 78, + /* 210 */ 79, 88, 89, 169, 170, 228, 180, 181, 123, 88, + /* 220 */ 52, 98, 54, 92, 16, 60, 61, 62, 63, 64, + /* 230 */ 65, 66, 67, 68, 69, 70, 71, 72, 147, 74, + /* 240 */ 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, + /* 250 */ 42, 43, 78, 209, 210, 124, 125, 126, 224, 88, + /* 260 */ 169, 170, 88, 89, 230, 227, 228, 16, 60, 61, + /* 270 */ 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, + /* 280 */ 72, 23, 74, 75, 76, 77, 78, 79, 80, 81, + /* 290 */ 82, 83, 84, 42, 43, 160, 16, 147, 161, 83, + /* 300 */ 84, 210, 161, 153, 169, 158, 156, 157, 161, 162, + /* 310 */ 163, 60, 61, 62, 63, 64, 65, 66, 67, 68, + /* 320 */ 69, 70, 71, 72, 161, 74, 75, 76, 77, 78, + /* 330 */ 79, 80, 81, 82, 83, 84, 192, 200, 147, 131, + /* 340 */ 16, 200, 16, 199, 20, 190, 88, 89, 90, 185, + /* 350 */ 186, 93, 94, 95, 217, 22, 219, 220, 147, 147, + /* 360 */ 169, 170, 104, 200, 84, 147, 42, 43, 156, 157, + /* 370 */ 90, 91, 92, 93, 94, 95, 96, 164, 165, 166, + /* 380 */ 169, 170, 131, 103, 60, 61, 62, 63, 64, 65, + /* 390 */ 66, 67, 68, 69, 70, 71, 72, 155, 74, 75, + /* 400 */ 76, 77, 78, 79, 80, 81, 82, 83, 84, 16, + /* 410 */ 84, 11, 221, 20, 30, 16, 147, 91, 92, 93, + /* 420 */ 94, 95, 96, 90, 147, 181, 93, 94, 95, 103, + /* 430 */ 212, 189, 155, 27, 50, 42, 43, 104, 169, 170, + /* 440 */ 34, 228, 43, 201, 202, 147, 169, 170, 206, 49, + /* 450 */ 161, 162, 163, 60, 61, 62, 63, 64, 65, 66, + /* 460 */ 67, 68, 69, 70, 71, 72, 189, 74, 75, 76, + /* 470 */ 77, 78, 79, 80, 81, 82, 83, 84, 16, 25, + /* 480 */ 
211, 147, 20, 29, 12, 147, 102, 19, 211, 21, + /* 490 */ 147, 141, 147, 216, 144, 41, 24, 98, 20, 99, + /* 500 */ 100, 101, 103, 165, 42, 43, 0, 1, 2, 37, + /* 510 */ 110, 39, 169, 170, 169, 170, 182, 19, 20, 147, + /* 520 */ 22, 49, 60, 61, 62, 63, 64, 65, 66, 67, + /* 530 */ 68, 69, 70, 71, 72, 155, 74, 75, 76, 77, + /* 540 */ 78, 79, 80, 81, 82, 83, 84, 16, 147, 90, + /* 550 */ 20, 20, 93, 94, 95, 147, 155, 59, 215, 225, + /* 560 */ 215, 20, 130, 104, 132, 227, 228, 42, 43, 189, + /* 570 */ 169, 170, 16, 42, 43, 20, 19, 22, 19, 20, + /* 580 */ 23, 22, 18, 147, 106, 147, 108, 109, 63, 64, + /* 590 */ 189, 60, 61, 62, 63, 64, 65, 66, 67, 68, + /* 600 */ 69, 70, 71, 72, 186, 74, 75, 76, 77, 78, + /* 610 */ 79, 80, 81, 82, 83, 84, 16, 92, 59, 55, + /* 620 */ 212, 21, 147, 19, 147, 23, 188, 23, 12, 217, + /* 630 */ 23, 219, 220, 7, 8, 9, 106, 147, 108, 109, + /* 640 */ 24, 147, 42, 43, 208, 88, 89, 106, 92, 108, + /* 650 */ 109, 244, 245, 37, 145, 39, 191, 182, 94, 16, + /* 660 */ 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, + /* 670 */ 70, 71, 72, 147, 74, 75, 76, 77, 78, 79, + /* 680 */ 80, 81, 82, 83, 84, 42, 43, 80, 142, 143, + /* 690 */ 88, 89, 88, 89, 148, 88, 89, 133, 14, 147, + /* 700 */ 225, 155, 16, 60, 61, 62, 63, 64, 65, 66, + /* 710 */ 67, 68, 69, 70, 71, 72, 114, 74, 75, 76, + /* 720 */ 77, 78, 79, 80, 81, 82, 83, 84, 42, 43, + /* 730 */ 201, 202, 147, 147, 182, 189, 52, 147, 54, 147, + /* 740 */ 147, 147, 147, 147, 155, 16, 60, 61, 62, 63, + /* 750 */ 64, 65, 66, 67, 68, 69, 70, 71, 72, 213, + /* 760 */ 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, + /* 770 */ 84, 42, 43, 188, 188, 182, 182, 225, 189, 106, + /* 780 */ 188, 108, 109, 188, 99, 100, 101, 241, 16, 155, + /* 790 */ 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, + /* 800 */ 71, 72, 213, 74, 75, 76, 77, 78, 79, 80, + /* 810 */ 81, 82, 83, 84, 42, 43, 23, 133, 225, 225, + /* 820 */ 21, 225, 23, 189, 239, 236, 99, 100, 101, 22, + /* 830 */ 242, 243, 155, 22, 62, 63, 64, 65, 66, 67, + /* 840 */ 68, 69, 70, 71, 72, 147, 74, 75, 76, 77, + /* 850 */ 78, 79, 80, 81, 82, 83, 84, 16, 17, 43, + /* 860 */ 19, 147, 147, 147, 23, 147, 189, 169, 170, 147, + /* 870 */ 147, 147, 31, 16, 17, 147, 19, 147, 124, 125, + /* 880 */ 23, 88, 89, 169, 170, 169, 170, 88, 31, 48, + /* 890 */ 147, 169, 170, 169, 170, 147, 89, 169, 170, 58, + /* 900 */ 147, 22, 147, 188, 147, 48, 188, 114, 97, 147, + /* 910 */ 147, 188, 147, 19, 98, 58, 147, 169, 170, 78, + /* 920 */ 79, 114, 169, 170, 169, 170, 169, 170, 87, 88, + /* 930 */ 89, 169, 170, 92, 161, 78, 79, 80, 169, 170, + /* 940 */ 91, 147, 155, 22, 87, 88, 89, 16, 17, 92, + /* 950 */ 19, 110, 147, 155, 23, 147, 7, 8, 20, 110, + /* 960 */ 22, 80, 31, 169, 170, 124, 125, 126, 127, 128, + /* 970 */ 129, 208, 123, 208, 169, 170, 189, 169, 170, 48, + /* 980 */ 147, 124, 125, 126, 127, 128, 129, 189, 107, 58, + /* 990 */ 107, 5, 111, 147, 111, 203, 10, 11, 12, 13, + /* 1000 */ 121, 147, 147, 91, 92, 147, 112, 147, 147, 78, + /* 1010 */ 79, 147, 26, 19, 28, 169, 170, 23, 87, 88, + /* 1020 */ 89, 35, 147, 92, 169, 170, 178, 169, 170, 147, + /* 1030 */ 169, 170, 147, 47, 113, 49, 92, 178, 147, 53, + /* 1040 */ 147, 178, 56, 147, 169, 170, 147, 103, 147, 19, + /* 1050 */ 147, 169, 170, 147, 147, 124, 125, 126, 127, 128, + /* 1060 */ 129, 147, 169, 170, 147, 169, 170, 147, 169, 170, + /* 1070 */ 169, 170, 169, 170, 147, 169, 170, 147, 20, 147, + /* 1080 */ 22, 147, 88, 147, 232, 99, 100, 101, 147, 169, + /* 1090 */ 170, 105, 147, 20, 147, 22, 110, 147, 68, 169, + /* 1100 */ 170, 169, 170, 169, 170, 169, 170, 20, 147, 22, + 
/* 1110 */ 147, 20, 147, 22, 169, 170, 169, 170, 147, 20, + /* 1120 */ 134, 20, 147, 22, 20, 147, 22, 147, 20, 147, + /* 1130 */ 22, 233, 169, 170, 169, 170, 20, 147, 22, 147, + /* 1140 */ 169, 170, 147, 147, 169, 170, 147, 169, 170, 169, + /* 1150 */ 170, 147, 147, 147, 147, 191, 172, 149, 59, 193, + /* 1160 */ 223, 161, 229, 172, 229, 172, 177, 194, 194, 172, + /* 1170 */ 161, 161, 6, 172, 146, 173, 22, 146, 146, 146, + /* 1180 */ 154, 121, 194, 118, 116, 119, 130, 189, 112, 120, + /* 1190 */ 222, 152, 152, 98, 115, 98, 171, 40, 195, 179, + /* 1200 */ 171, 19, 97, 84, 15, 171, 179, 196, 173, 197, + /* 1210 */ 174, 198, 171, 226, 171, 171, 171, 38, 204, 152, + /* 1220 */ 205, 204, 174, 205, 152, 151, 151, 60, 151, 19, + /* 1230 */ 152, 184, 152, 130, 151, 184, 152, 214, 152, 15, + /* 1240 */ 194, 187, 226, 194, 187, 152, 187, 184, 187, 33, + /* 1250 */ 152, 214, 152, 234, 137, 159, 235, 175, 1, 237, + /* 1260 */ 175, 237, 20, 112, 112, 112, 92, 112, 107, 19, + /* 1270 */ 11, 20, 20, 231, 19, 19, 22, 117, 240, 117, + /* 1280 */ 20, 20, 20, 114, 19, 22, 22, 20, 112, 20, + /* 1290 */ 20, 44, 19, 243, 19, 96, 20, 44, 246, 19, + /* 1300 */ 32, 19, 44, 19, 103, 16, 21, 17, 36, 98, + /* 1310 */ 22, 133, 98, 19, 5, 1, 45, 51, 122, 45, + /* 1320 */ 102, 19, 113, 115, 14, 17, 113, 102, 122, 14, + /* 1330 */ 19, 136, 20, 135, 3, 123, 57, 19, 4, 247, + /* 1340 */ 247, 247, 247, 247, 247, 68, 68, +}; +#define YY_SHIFT_USE_DFLT (-62) +#define YY_SHIFT_MAX 389 +static const short yy_shift_ofst[] = { + /* 0 */ 39, 841, 986, -16, 841, 931, 931, 258, 123, -36, + /* 10 */ 96, 931, 931, 931, 931, 931, -45, 400, 174, 19, + /* 20 */ 171, -54, -54, 53, 165, 208, 251, 324, 393, 462, + /* 30 */ 531, 600, 643, 686, 643, 643, 643, 643, 643, 643, + /* 40 */ 643, 643, 643, 643, 643, 643, 643, 643, 643, 643, + /* 50 */ 643, 643, 729, 772, 772, 857, 931, 931, 931, 931, + /* 60 */ 931, 931, 931, 931, 931, 931, 931, 931, 931, 931, + /* 70 */ 931, 931, 931, 931, 931, 931, 931, 931, 931, 931, + /* 80 */ 931, 931, 931, 931, 931, 931, 931, 931, 931, 931, + /* 90 */ 931, 931, 931, 931, 931, 931, -61, -61, 6, 6, + /* 100 */ 280, 22, 61, 399, 564, 19, 19, 19, 19, 19, + /* 110 */ 19, 19, 216, 171, 63, -62, -62, -62, 131, 326, + /* 120 */ 472, 472, 498, 559, 506, 799, 19, 799, 19, 19, + /* 130 */ 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, + /* 140 */ 19, 849, 95, -36, -36, -36, -62, -62, -62, -15, + /* 150 */ -15, 333, 459, 478, 557, 530, 541, 616, 602, 793, + /* 160 */ 604, 607, 626, 19, 19, 881, 19, 19, 994, 19, + /* 170 */ 19, 807, 19, 19, 673, 807, 19, 19, 384, 384, + /* 180 */ 384, 19, 19, 673, 19, 19, 673, 19, 454, 685, + /* 190 */ 19, 19, 673, 19, 19, 19, 673, 19, 19, 19, + /* 200 */ 673, 673, 19, 19, 19, 19, 19, 468, 883, 921, + /* 210 */ 171, 754, 754, 432, 406, 406, 406, 816, 406, 171, + /* 220 */ 406, 171, 811, 879, 879, 1166, 1166, 1166, 1166, 1154, + /* 230 */ -36, 1060, 1065, 1066, 1068, 1069, 1056, 1076, 1076, 1095, + /* 240 */ 1079, 1095, 1079, 1097, 1097, 1157, 1097, 1105, 1097, 1182, + /* 250 */ 1119, 1119, 1157, 1097, 1097, 1097, 1182, 1189, 1076, 1189, + /* 260 */ 1076, 1189, 1076, 1076, 1179, 1103, 1189, 1076, 1167, 1167, + /* 270 */ 1210, 1060, 1076, 1224, 1224, 1224, 1224, 1060, 1167, 1210, + /* 280 */ 1076, 1216, 1216, 1076, 1076, 1117, -62, -62, -62, -62, + /* 290 */ -62, -62, 525, 684, 727, 168, 894, 556, 555, 938, + /* 300 */ 944, 949, 912, 1058, 1073, 1087, 1091, 1101, 1104, 1108, + /* 310 */ 1030, 1116, 1099, 1257, 1242, 1151, 1152, 1153, 1155, 1174, + /* 320 */ 1161, 1250, 1251, 1252, 1255, 1259, 1256, 
1260, 1254, 1261, + /* 330 */ 1262, 1263, 1160, 1264, 1162, 1263, 1169, 1265, 1267, 1176, + /* 340 */ 1269, 1270, 1268, 1247, 1273, 1253, 1275, 1276, 1280, 1282, + /* 350 */ 1258, 1284, 1199, 1201, 1289, 1290, 1285, 1211, 1272, 1266, + /* 360 */ 1271, 1288, 1274, 1178, 1214, 1294, 1309, 1314, 1218, 1277, + /* 370 */ 1278, 1196, 1302, 1209, 1310, 1208, 1308, 1213, 1225, 1206, + /* 380 */ 1311, 1212, 1312, 1315, 1279, 1198, 1195, 1318, 1331, 1334, +}; +#define YY_REDUCE_USE_DFLT (-165) +#define YY_REDUCE_MAX 291 +static const short yy_reduce_ofst[] = { + /* 0 */ -138, 277, 546, 137, 401, -21, 44, 36, 38, 242, + /* 10 */ -141, 191, 91, 269, 343, 345, -126, 589, 338, 150, + /* 20 */ 147, -13, 213, 412, 412, 412, 412, 412, 412, 412, + /* 30 */ 412, 412, 412, 412, 412, 412, 412, 412, 412, 412, + /* 40 */ 412, 412, 412, 412, 412, 412, 412, 412, 412, 412, + /* 50 */ 412, 412, 412, 412, 412, 211, 698, 714, 716, 722, + /* 60 */ 724, 728, 748, 753, 755, 757, 762, 769, 794, 805, + /* 70 */ 808, 846, 855, 858, 861, 875, 882, 893, 896, 899, + /* 80 */ 901, 903, 906, 920, 930, 932, 934, 936, 945, 947, + /* 90 */ 963, 965, 971, 975, 978, 980, 412, 412, 412, 412, + /* 100 */ 20, 412, 412, 23, 34, 334, 475, 552, 593, 594, + /* 110 */ 585, 212, 412, 289, 412, 412, 412, 412, 135, -164, + /* 120 */ -115, 164, 407, 407, 350, 141, 436, 163, 596, -90, + /* 130 */ 763, 218, 765, 438, 586, 592, 595, 715, 718, 408, + /* 140 */ 723, 380, 634, 677, 787, 798, 144, 529, 588, -14, + /* 150 */ 0, 17, 244, 155, 298, 155, 155, 418, 372, 477, + /* 160 */ 490, 494, 509, 526, 590, 465, 494, 730, 773, 743, + /* 170 */ 833, 792, 854, 860, 155, 792, 864, 885, 848, 859, + /* 180 */ 863, 891, 907, 155, 914, 917, 155, 927, 852, 898, + /* 190 */ 941, 950, 155, 961, 982, 990, 155, 992, 995, 996, + /* 200 */ 155, 155, 999, 1004, 1005, 1006, 1007, 1008, 964, 966, + /* 210 */ 1000, 933, 935, 937, 984, 991, 993, 989, 997, 1009, + /* 220 */ 1001, 1010, 1002, 973, 974, 1028, 1031, 1032, 1033, 1026, + /* 230 */ 998, 988, 1003, 1011, 1012, 1013, 968, 1039, 1040, 1014, + /* 240 */ 1015, 1017, 1018, 1025, 1029, 1020, 1034, 1035, 1041, 1036, + /* 250 */ 987, 1016, 1027, 1043, 1044, 1045, 1048, 1074, 1067, 1075, + /* 260 */ 1072, 1077, 1078, 1080, 1019, 1021, 1083, 1084, 1047, 1051, + /* 270 */ 1023, 1046, 1086, 1054, 1057, 1059, 1061, 1049, 1063, 1037, + /* 280 */ 1093, 1022, 1024, 1098, 1100, 1038, 1096, 1082, 1085, 1042, + /* 290 */ 1050, 1052, +}; +static const YYACTIONTYPE yy_default[] = { + /* 0 */ 594, 819, 900, 709, 900, 819, 900, 900, 846, 713, + /* 10 */ 875, 817, 900, 900, 900, 900, 791, 900, 846, 900, + /* 20 */ 625, 846, 846, 742, 900, 900, 900, 900, 900, 900, + /* 30 */ 900, 900, 743, 900, 821, 816, 812, 814, 813, 820, + /* 40 */ 744, 733, 740, 747, 725, 859, 749, 750, 756, 757, + /* 50 */ 876, 874, 779, 778, 797, 900, 900, 900, 900, 900, + /* 60 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 70 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 80 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 90 */ 900, 900, 900, 900, 900, 900, 781, 803, 780, 790, + /* 100 */ 618, 782, 783, 678, 613, 900, 900, 900, 900, 900, + /* 110 */ 900, 900, 784, 900, 785, 798, 799, 800, 900, 900, + /* 120 */ 900, 900, 900, 900, 594, 709, 900, 709, 900, 900, + /* 130 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 140 */ 900, 900, 900, 900, 900, 900, 703, 713, 893, 900, + /* 150 */ 900, 669, 900, 900, 900, 900, 900, 900, 900, 900, + /* 160 */ 900, 900, 601, 599, 900, 701, 900, 900, 627, 900, + /* 170 */ 900, 711, 900, 
900, 716, 717, 900, 900, 900, 900, + /* 180 */ 900, 900, 900, 615, 900, 900, 690, 900, 852, 900, + /* 190 */ 900, 900, 866, 900, 900, 900, 864, 900, 900, 900, + /* 200 */ 692, 752, 833, 900, 879, 881, 900, 900, 701, 710, + /* 210 */ 900, 900, 900, 815, 736, 736, 736, 648, 736, 900, + /* 220 */ 736, 900, 651, 746, 746, 598, 598, 598, 598, 668, + /* 230 */ 900, 746, 737, 739, 729, 741, 900, 718, 718, 726, + /* 240 */ 728, 726, 728, 680, 680, 665, 680, 651, 680, 825, + /* 250 */ 830, 830, 665, 680, 680, 680, 825, 610, 718, 610, + /* 260 */ 718, 610, 718, 718, 856, 858, 610, 718, 682, 682, + /* 270 */ 758, 746, 718, 689, 689, 689, 689, 746, 682, 758, + /* 280 */ 718, 878, 878, 718, 718, 886, 635, 653, 653, 861, + /* 290 */ 893, 898, 900, 900, 900, 900, 765, 900, 900, 900, + /* 300 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 310 */ 839, 900, 900, 900, 900, 770, 766, 900, 767, 900, + /* 320 */ 695, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 330 */ 900, 818, 900, 730, 900, 738, 900, 900, 900, 900, + /* 340 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 350 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 360 */ 854, 855, 900, 900, 900, 900, 900, 900, 900, 900, + /* 370 */ 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, + /* 380 */ 900, 900, 900, 900, 885, 900, 900, 888, 595, 900, + /* 390 */ 589, 592, 591, 593, 597, 600, 622, 623, 624, 602, + /* 400 */ 603, 604, 605, 606, 607, 608, 614, 616, 634, 636, + /* 410 */ 620, 638, 699, 700, 762, 693, 694, 698, 621, 773, + /* 420 */ 764, 768, 769, 771, 772, 786, 787, 789, 795, 802, + /* 430 */ 805, 788, 793, 794, 796, 801, 804, 696, 697, 808, + /* 440 */ 628, 629, 632, 633, 842, 844, 843, 845, 631, 630, + /* 450 */ 774, 777, 810, 811, 867, 868, 869, 870, 871, 806, + /* 460 */ 719, 809, 792, 731, 734, 735, 732, 702, 712, 721, + /* 470 */ 722, 723, 724, 707, 708, 714, 727, 760, 761, 715, + /* 480 */ 704, 705, 706, 807, 763, 775, 776, 639, 640, 770, + /* 490 */ 641, 642, 643, 681, 684, 685, 686, 644, 663, 666, + /* 500 */ 667, 645, 652, 646, 647, 654, 655, 656, 659, 660, + /* 510 */ 661, 662, 657, 658, 826, 827, 831, 829, 828, 649, + /* 520 */ 650, 664, 637, 626, 619, 670, 673, 674, 675, 676, + /* 530 */ 677, 679, 671, 672, 617, 609, 611, 720, 848, 857, + /* 540 */ 853, 849, 850, 851, 612, 822, 823, 683, 754, 755, + /* 550 */ 847, 860, 862, 759, 863, 865, 890, 687, 688, 691, + /* 560 */ 832, 872, 745, 748, 751, 753, 834, 835, 836, 837, + /* 570 */ 840, 841, 838, 873, 877, 880, 882, 883, 884, 887, + /* 580 */ 889, 894, 895, 896, 899, 897, 596, 590, +}; +#define YY_SZ_ACTTAB (int)(sizeof(yy_action)/sizeof(yy_action[0])) + +/* The next table maps tokens into fallback tokens. If a construct +** like the following: +** +** %fallback ID X Y Z. +** +** appears in the grammer, then ID becomes a fallback token for X, Y, +** and Z. Whenever one of the tokens X, Y, or Z is input to the parser +** but it does not parse, the type of the token is changed to ID and +** the parse is retried before an error is thrown. 
+*/ +#ifdef YYFALLBACK +static const YYCODETYPE yyFallback[] = { + 0, /* $ => nothing */ + 0, /* SEMI => nothing */ + 23, /* EXPLAIN => ID */ + 23, /* QUERY => ID */ + 23, /* PLAN => ID */ + 23, /* BEGIN => ID */ + 0, /* TRANSACTION => nothing */ + 23, /* DEFERRED => ID */ + 23, /* IMMEDIATE => ID */ + 23, /* EXCLUSIVE => ID */ + 0, /* COMMIT => nothing */ + 23, /* END => ID */ + 0, /* ROLLBACK => nothing */ + 0, /* CREATE => nothing */ + 0, /* TABLE => nothing */ + 23, /* IF => ID */ + 0, /* NOT => nothing */ + 0, /* EXISTS => nothing */ + 23, /* TEMP => ID */ + 0, /* LP => nothing */ + 0, /* RP => nothing */ + 0, /* AS => nothing */ + 0, /* COMMA => nothing */ + 0, /* ID => nothing */ + 23, /* ABORT => ID */ + 23, /* AFTER => ID */ + 23, /* ANALYZE => ID */ + 23, /* ASC => ID */ + 23, /* ATTACH => ID */ + 23, /* BEFORE => ID */ + 23, /* CASCADE => ID */ + 23, /* CAST => ID */ + 23, /* CONFLICT => ID */ + 23, /* DATABASE => ID */ + 23, /* DESC => ID */ + 23, /* DETACH => ID */ + 23, /* EACH => ID */ + 23, /* FAIL => ID */ + 23, /* FOR => ID */ + 23, /* IGNORE => ID */ + 23, /* INITIALLY => ID */ + 23, /* INSTEAD => ID */ + 23, /* LIKE_KW => ID */ + 23, /* MATCH => ID */ + 23, /* KEY => ID */ + 23, /* OF => ID */ + 23, /* OFFSET => ID */ + 23, /* PRAGMA => ID */ + 23, /* RAISE => ID */ + 23, /* REPLACE => ID */ + 23, /* RESTRICT => ID */ + 23, /* ROW => ID */ + 23, /* TRIGGER => ID */ + 23, /* VACUUM => ID */ + 23, /* VIEW => ID */ + 23, /* VIRTUAL => ID */ + 23, /* REINDEX => ID */ + 23, /* RENAME => ID */ + 23, /* CTIME_KW => ID */ + 0, /* ANY => nothing */ + 0, /* OR => nothing */ + 0, /* AND => nothing */ + 0, /* IS => nothing */ + 0, /* BETWEEN => nothing */ + 0, /* IN => nothing */ + 0, /* ISNULL => nothing */ + 0, /* NOTNULL => nothing */ + 0, /* NE => nothing */ + 0, /* EQ => nothing */ + 0, /* GT => nothing */ + 0, /* LE => nothing */ + 0, /* LT => nothing */ + 0, /* GE => nothing */ + 0, /* ESCAPE => nothing */ + 0, /* BITAND => nothing */ + 0, /* BITOR => nothing */ + 0, /* LSHIFT => nothing */ + 0, /* RSHIFT => nothing */ + 0, /* PLUS => nothing */ + 0, /* MINUS => nothing */ + 0, /* STAR => nothing */ + 0, /* SLASH => nothing */ + 0, /* REM => nothing */ + 0, /* CONCAT => nothing */ + 0, /* COLLATE => nothing */ + 0, /* UMINUS => nothing */ + 0, /* UPLUS => nothing */ + 0, /* BITNOT => nothing */ + 0, /* STRING => nothing */ + 0, /* JOIN_KW => nothing */ + 0, /* CONSTRAINT => nothing */ + 0, /* DEFAULT => nothing */ + 0, /* NULL => nothing */ + 0, /* PRIMARY => nothing */ + 0, /* UNIQUE => nothing */ + 0, /* CHECK => nothing */ + 0, /* REFERENCES => nothing */ + 0, /* AUTOINCR => nothing */ + 0, /* ON => nothing */ + 0, /* DELETE => nothing */ + 0, /* UPDATE => nothing */ + 0, /* INSERT => nothing */ + 0, /* SET => nothing */ + 0, /* DEFERRABLE => nothing */ + 0, /* FOREIGN => nothing */ + 0, /* DROP => nothing */ + 0, /* UNION => nothing */ + 0, /* ALL => nothing */ + 0, /* EXCEPT => nothing */ + 0, /* INTERSECT => nothing */ + 0, /* SELECT => nothing */ + 0, /* DISTINCT => nothing */ + 0, /* DOT => nothing */ + 0, /* FROM => nothing */ + 0, /* JOIN => nothing */ + 0, /* USING => nothing */ + 0, /* ORDER => nothing */ + 0, /* BY => nothing */ + 0, /* GROUP => nothing */ + 0, /* HAVING => nothing */ + 0, /* LIMIT => nothing */ + 0, /* WHERE => nothing */ + 0, /* INTO => nothing */ + 0, /* VALUES => nothing */ + 0, /* INTEGER => nothing */ + 0, /* FLOAT => nothing */ + 0, /* BLOB => nothing */ + 0, /* REGISTER => nothing */ + 0, /* VARIABLE => nothing */ + 0, /* CASE => nothing 
*/
+    0,  /* WHEN => nothing */
+    0,  /* THEN => nothing */
+    0,  /* ELSE => nothing */
+    0,  /* INDEX => nothing */
+    0,  /* ALTER => nothing */
+    0,  /* TO => nothing */
+    0,  /* ADD => nothing */
+    0,  /* COLUMNKW => nothing */
+};
+#endif /* YYFALLBACK */
+
+/* The following structure represents a single element of the
+** parser's stack.  Information stored includes:
+**
+**   +  The state number for the parser at this level of the stack.
+**
+**   +  The value of the token stored at this level of the stack.
+**      (In other words, the "major" token.)
+**
+**   +  The semantic value stored at this level of the stack.  This is
+**      the information used by the action routines in the grammar.
+**      It is sometimes called the "minor" token.
+*/
+struct yyStackEntry {
+  int stateno;       /* The state-number */
+  int major;         /* The major token value.  This is the code
+                     ** number for the token at this stack level */
+  YYMINORTYPE minor; /* The user-supplied minor token value.  This
+                     ** is the value of the token  */
+};
+typedef struct yyStackEntry yyStackEntry;
+
+/* The state of the parser is completely contained in an instance of
+** the following structure */
+struct yyParser {
+  int yyidx;                    /* Index of top element in stack */
+  int yyerrcnt;                 /* Shifts left before out of the error */
+  sqlite3ParserARG_SDECL        /* A place to hold %extra_argument */
+#if YYSTACKDEPTH<=0
+  int yystksz;                  /* Current side of the stack */
+  yyStackEntry *yystack;        /* The parser's stack */
+#else
+  yyStackEntry yystack[YYSTACKDEPTH];  /* The parser's stack */
+#endif
+};
+typedef struct yyParser yyParser;
+
+#ifndef NDEBUG
+#include <stdio.h>
+static FILE *yyTraceFILE = 0;
+static char *yyTracePrompt = 0;
+#endif /* NDEBUG */
+
+#ifndef NDEBUG
+/*
+** Turn parser tracing on by giving a stream to which to write the trace
+** and a prompt to preface each trace message.  Tracing is turned off
+** by making either argument NULL
+**
+** Inputs:
+**
+** <ul>
+** <li> A FILE* to which trace output should be written.
+**      If NULL, then tracing is turned off.
+** <li> A prefix string written at the beginning of every
+**      line of trace output.  If NULL, then tracing is
+**      turned off.
+** </ul>
                +** +** Outputs: +** None. +*/ +void sqlite3ParserTrace(FILE *TraceFILE, char *zTracePrompt){ + yyTraceFILE = TraceFILE; + yyTracePrompt = zTracePrompt; + if( yyTraceFILE==0 ) yyTracePrompt = 0; + else if( yyTracePrompt==0 ) yyTraceFILE = 0; +} +#endif /* NDEBUG */ + +#ifndef NDEBUG +/* For tracing shifts, the names of all terminals and nonterminals +** are required. The following table supplies these names */ +static const char *const yyTokenName[] = { + "$", "SEMI", "EXPLAIN", "QUERY", + "PLAN", "BEGIN", "TRANSACTION", "DEFERRED", + "IMMEDIATE", "EXCLUSIVE", "COMMIT", "END", + "ROLLBACK", "CREATE", "TABLE", "IF", + "NOT", "EXISTS", "TEMP", "LP", + "RP", "AS", "COMMA", "ID", + "ABORT", "AFTER", "ANALYZE", "ASC", + "ATTACH", "BEFORE", "CASCADE", "CAST", + "CONFLICT", "DATABASE", "DESC", "DETACH", + "EACH", "FAIL", "FOR", "IGNORE", + "INITIALLY", "INSTEAD", "LIKE_KW", "MATCH", + "KEY", "OF", "OFFSET", "PRAGMA", + "RAISE", "REPLACE", "RESTRICT", "ROW", + "TRIGGER", "VACUUM", "VIEW", "VIRTUAL", + "REINDEX", "RENAME", "CTIME_KW", "ANY", + "OR", "AND", "IS", "BETWEEN", + "IN", "ISNULL", "NOTNULL", "NE", + "EQ", "GT", "LE", "LT", + "GE", "ESCAPE", "BITAND", "BITOR", + "LSHIFT", "RSHIFT", "PLUS", "MINUS", + "STAR", "SLASH", "REM", "CONCAT", + "COLLATE", "UMINUS", "UPLUS", "BITNOT", + "STRING", "JOIN_KW", "CONSTRAINT", "DEFAULT", + "NULL", "PRIMARY", "UNIQUE", "CHECK", + "REFERENCES", "AUTOINCR", "ON", "DELETE", + "UPDATE", "INSERT", "SET", "DEFERRABLE", + "FOREIGN", "DROP", "UNION", "ALL", + "EXCEPT", "INTERSECT", "SELECT", "DISTINCT", + "DOT", "FROM", "JOIN", "USING", + "ORDER", "BY", "GROUP", "HAVING", + "LIMIT", "WHERE", "INTO", "VALUES", + "INTEGER", "FLOAT", "BLOB", "REGISTER", + "VARIABLE", "CASE", "WHEN", "THEN", + "ELSE", "INDEX", "ALTER", "TO", + "ADD", "COLUMNKW", "error", "input", + "cmdlist", "ecmd", "cmdx", "cmd", + "explain", "transtype", "trans_opt", "nm", + "create_table", "create_table_args", "temp", "ifnotexists", + "dbnm", "columnlist", "conslist_opt", "select", + "column", "columnid", "type", "carglist", + "id", "ids", "typetoken", "typename", + "signed", "plus_num", "minus_num", "carg", + "ccons", "term", "expr", "onconf", + "sortorder", "autoinc", "idxlist_opt", "refargs", + "defer_subclause", "refarg", "refact", "init_deferred_pred_opt", + "conslist", "tcons", "idxlist", "defer_subclause_opt", + "orconf", "resolvetype", "raisetype", "ifexists", + "fullname", "oneselect", "multiselect_op", "distinct", + "selcollist", "from", "where_opt", "groupby_opt", + "having_opt", "orderby_opt", "limit_opt", "sclp", + "as", "seltablist", "stl_prefix", "joinop", + "on_opt", "using_opt", "seltablist_paren", "joinop2", + "inscollist", "sortlist", "sortitem", "nexprlist", + "setlist", "insert_cmd", "inscollist_opt", "itemlist", + "exprlist", "likeop", "escape", "between_op", + "in_op", "case_operand", "case_exprlist", "case_else", + "uniqueflag", "idxitem", "collate", "nmnum", + "plus_opt", "number", "trigger_decl", "trigger_cmd_list", + "trigger_time", "trigger_event", "foreach_clause", "when_clause", + "trigger_cmd", "database_kw_opt", "key_opt", "add_column_fullname", + "kwcolumn_opt", "create_vtab", "vtabarglist", "vtabarg", + "vtabargtoken", "lp", "anylist", +}; +#endif /* NDEBUG */ + +#ifndef NDEBUG +/* For tracing reduce actions, the names of all rules are required. 
+*/ +static const char *const yyRuleName[] = { + /* 0 */ "input ::= cmdlist", + /* 1 */ "cmdlist ::= cmdlist ecmd", + /* 2 */ "cmdlist ::= ecmd", + /* 3 */ "cmdx ::= cmd", + /* 4 */ "ecmd ::= SEMI", + /* 5 */ "ecmd ::= explain cmdx SEMI", + /* 6 */ "explain ::=", + /* 7 */ "explain ::= EXPLAIN", + /* 8 */ "explain ::= EXPLAIN QUERY PLAN", + /* 9 */ "cmd ::= BEGIN transtype trans_opt", + /* 10 */ "trans_opt ::=", + /* 11 */ "trans_opt ::= TRANSACTION", + /* 12 */ "trans_opt ::= TRANSACTION nm", + /* 13 */ "transtype ::=", + /* 14 */ "transtype ::= DEFERRED", + /* 15 */ "transtype ::= IMMEDIATE", + /* 16 */ "transtype ::= EXCLUSIVE", + /* 17 */ "cmd ::= COMMIT trans_opt", + /* 18 */ "cmd ::= END trans_opt", + /* 19 */ "cmd ::= ROLLBACK trans_opt", + /* 20 */ "cmd ::= create_table create_table_args", + /* 21 */ "create_table ::= CREATE temp TABLE ifnotexists nm dbnm", + /* 22 */ "ifnotexists ::=", + /* 23 */ "ifnotexists ::= IF NOT EXISTS", + /* 24 */ "temp ::= TEMP", + /* 25 */ "temp ::=", + /* 26 */ "create_table_args ::= LP columnlist conslist_opt RP", + /* 27 */ "create_table_args ::= AS select", + /* 28 */ "columnlist ::= columnlist COMMA column", + /* 29 */ "columnlist ::= column", + /* 30 */ "column ::= columnid type carglist", + /* 31 */ "columnid ::= nm", + /* 32 */ "id ::= ID", + /* 33 */ "ids ::= ID|STRING", + /* 34 */ "nm ::= ID", + /* 35 */ "nm ::= STRING", + /* 36 */ "nm ::= JOIN_KW", + /* 37 */ "type ::=", + /* 38 */ "type ::= typetoken", + /* 39 */ "typetoken ::= typename", + /* 40 */ "typetoken ::= typename LP signed RP", + /* 41 */ "typetoken ::= typename LP signed COMMA signed RP", + /* 42 */ "typename ::= ids", + /* 43 */ "typename ::= typename ids", + /* 44 */ "signed ::= plus_num", + /* 45 */ "signed ::= minus_num", + /* 46 */ "carglist ::= carglist carg", + /* 47 */ "carglist ::=", + /* 48 */ "carg ::= CONSTRAINT nm ccons", + /* 49 */ "carg ::= ccons", + /* 50 */ "ccons ::= DEFAULT term", + /* 51 */ "ccons ::= DEFAULT LP expr RP", + /* 52 */ "ccons ::= DEFAULT PLUS term", + /* 53 */ "ccons ::= DEFAULT MINUS term", + /* 54 */ "ccons ::= DEFAULT id", + /* 55 */ "ccons ::= NULL onconf", + /* 56 */ "ccons ::= NOT NULL onconf", + /* 57 */ "ccons ::= PRIMARY KEY sortorder onconf autoinc", + /* 58 */ "ccons ::= UNIQUE onconf", + /* 59 */ "ccons ::= CHECK LP expr RP", + /* 60 */ "ccons ::= REFERENCES nm idxlist_opt refargs", + /* 61 */ "ccons ::= defer_subclause", + /* 62 */ "ccons ::= COLLATE ids", + /* 63 */ "autoinc ::=", + /* 64 */ "autoinc ::= AUTOINCR", + /* 65 */ "refargs ::=", + /* 66 */ "refargs ::= refargs refarg", + /* 67 */ "refarg ::= MATCH nm", + /* 68 */ "refarg ::= ON DELETE refact", + /* 69 */ "refarg ::= ON UPDATE refact", + /* 70 */ "refarg ::= ON INSERT refact", + /* 71 */ "refact ::= SET NULL", + /* 72 */ "refact ::= SET DEFAULT", + /* 73 */ "refact ::= CASCADE", + /* 74 */ "refact ::= RESTRICT", + /* 75 */ "defer_subclause ::= NOT DEFERRABLE init_deferred_pred_opt", + /* 76 */ "defer_subclause ::= DEFERRABLE init_deferred_pred_opt", + /* 77 */ "init_deferred_pred_opt ::=", + /* 78 */ "init_deferred_pred_opt ::= INITIALLY DEFERRED", + /* 79 */ "init_deferred_pred_opt ::= INITIALLY IMMEDIATE", + /* 80 */ "conslist_opt ::=", + /* 81 */ "conslist_opt ::= COMMA conslist", + /* 82 */ "conslist ::= conslist COMMA tcons", + /* 83 */ "conslist ::= conslist tcons", + /* 84 */ "conslist ::= tcons", + /* 85 */ "tcons ::= CONSTRAINT nm", + /* 86 */ "tcons ::= PRIMARY KEY LP idxlist autoinc RP onconf", + /* 87 */ "tcons ::= UNIQUE LP idxlist RP onconf", + /* 88 */ 
"tcons ::= CHECK LP expr RP onconf", + /* 89 */ "tcons ::= FOREIGN KEY LP idxlist RP REFERENCES nm idxlist_opt refargs defer_subclause_opt", + /* 90 */ "defer_subclause_opt ::=", + /* 91 */ "defer_subclause_opt ::= defer_subclause", + /* 92 */ "onconf ::=", + /* 93 */ "onconf ::= ON CONFLICT resolvetype", + /* 94 */ "orconf ::=", + /* 95 */ "orconf ::= OR resolvetype", + /* 96 */ "resolvetype ::= raisetype", + /* 97 */ "resolvetype ::= IGNORE", + /* 98 */ "resolvetype ::= REPLACE", + /* 99 */ "cmd ::= DROP TABLE ifexists fullname", + /* 100 */ "ifexists ::= IF EXISTS", + /* 101 */ "ifexists ::=", + /* 102 */ "cmd ::= CREATE temp VIEW ifnotexists nm dbnm AS select", + /* 103 */ "cmd ::= DROP VIEW ifexists fullname", + /* 104 */ "cmd ::= select", + /* 105 */ "select ::= oneselect", + /* 106 */ "select ::= select multiselect_op oneselect", + /* 107 */ "multiselect_op ::= UNION", + /* 108 */ "multiselect_op ::= UNION ALL", + /* 109 */ "multiselect_op ::= EXCEPT|INTERSECT", + /* 110 */ "oneselect ::= SELECT distinct selcollist from where_opt groupby_opt having_opt orderby_opt limit_opt", + /* 111 */ "distinct ::= DISTINCT", + /* 112 */ "distinct ::= ALL", + /* 113 */ "distinct ::=", + /* 114 */ "sclp ::= selcollist COMMA", + /* 115 */ "sclp ::=", + /* 116 */ "selcollist ::= sclp expr as", + /* 117 */ "selcollist ::= sclp STAR", + /* 118 */ "selcollist ::= sclp nm DOT STAR", + /* 119 */ "as ::= AS nm", + /* 120 */ "as ::= ids", + /* 121 */ "as ::=", + /* 122 */ "from ::=", + /* 123 */ "from ::= FROM seltablist", + /* 124 */ "stl_prefix ::= seltablist joinop", + /* 125 */ "stl_prefix ::=", + /* 126 */ "seltablist ::= stl_prefix nm dbnm as on_opt using_opt", + /* 127 */ "seltablist ::= stl_prefix LP seltablist_paren RP as on_opt using_opt", + /* 128 */ "seltablist_paren ::= select", + /* 129 */ "seltablist_paren ::= seltablist", + /* 130 */ "dbnm ::=", + /* 131 */ "dbnm ::= DOT nm", + /* 132 */ "fullname ::= nm dbnm", + /* 133 */ "joinop ::= COMMA|JOIN", + /* 134 */ "joinop ::= JOIN_KW JOIN", + /* 135 */ "joinop ::= JOIN_KW nm JOIN", + /* 136 */ "joinop ::= JOIN_KW nm nm JOIN", + /* 137 */ "on_opt ::= ON expr", + /* 138 */ "on_opt ::=", + /* 139 */ "using_opt ::= USING LP inscollist RP", + /* 140 */ "using_opt ::=", + /* 141 */ "orderby_opt ::=", + /* 142 */ "orderby_opt ::= ORDER BY sortlist", + /* 143 */ "sortlist ::= sortlist COMMA sortitem sortorder", + /* 144 */ "sortlist ::= sortitem sortorder", + /* 145 */ "sortitem ::= expr", + /* 146 */ "sortorder ::= ASC", + /* 147 */ "sortorder ::= DESC", + /* 148 */ "sortorder ::=", + /* 149 */ "groupby_opt ::=", + /* 150 */ "groupby_opt ::= GROUP BY nexprlist", + /* 151 */ "having_opt ::=", + /* 152 */ "having_opt ::= HAVING expr", + /* 153 */ "limit_opt ::=", + /* 154 */ "limit_opt ::= LIMIT expr", + /* 155 */ "limit_opt ::= LIMIT expr OFFSET expr", + /* 156 */ "limit_opt ::= LIMIT expr COMMA expr", + /* 157 */ "cmd ::= DELETE FROM fullname where_opt", + /* 158 */ "where_opt ::=", + /* 159 */ "where_opt ::= WHERE expr", + /* 160 */ "cmd ::= UPDATE orconf fullname SET setlist where_opt", + /* 161 */ "setlist ::= setlist COMMA nm EQ expr", + /* 162 */ "setlist ::= nm EQ expr", + /* 163 */ "cmd ::= insert_cmd INTO fullname inscollist_opt VALUES LP itemlist RP", + /* 164 */ "cmd ::= insert_cmd INTO fullname inscollist_opt select", + /* 165 */ "cmd ::= insert_cmd INTO fullname inscollist_opt DEFAULT VALUES", + /* 166 */ "insert_cmd ::= INSERT orconf", + /* 167 */ "insert_cmd ::= REPLACE", + /* 168 */ "itemlist ::= itemlist COMMA expr", + /* 169 */ 
"itemlist ::= expr", + /* 170 */ "inscollist_opt ::=", + /* 171 */ "inscollist_opt ::= LP inscollist RP", + /* 172 */ "inscollist ::= inscollist COMMA nm", + /* 173 */ "inscollist ::= nm", + /* 174 */ "expr ::= term", + /* 175 */ "expr ::= LP expr RP", + /* 176 */ "term ::= NULL", + /* 177 */ "expr ::= ID", + /* 178 */ "expr ::= JOIN_KW", + /* 179 */ "expr ::= nm DOT nm", + /* 180 */ "expr ::= nm DOT nm DOT nm", + /* 181 */ "term ::= INTEGER|FLOAT|BLOB", + /* 182 */ "term ::= STRING", + /* 183 */ "expr ::= REGISTER", + /* 184 */ "expr ::= VARIABLE", + /* 185 */ "expr ::= expr COLLATE ids", + /* 186 */ "expr ::= CAST LP expr AS typetoken RP", + /* 187 */ "expr ::= ID LP distinct exprlist RP", + /* 188 */ "expr ::= ID LP STAR RP", + /* 189 */ "term ::= CTIME_KW", + /* 190 */ "expr ::= expr AND expr", + /* 191 */ "expr ::= expr OR expr", + /* 192 */ "expr ::= expr LT|GT|GE|LE expr", + /* 193 */ "expr ::= expr EQ|NE expr", + /* 194 */ "expr ::= expr BITAND|BITOR|LSHIFT|RSHIFT expr", + /* 195 */ "expr ::= expr PLUS|MINUS expr", + /* 196 */ "expr ::= expr STAR|SLASH|REM expr", + /* 197 */ "expr ::= expr CONCAT expr", + /* 198 */ "likeop ::= LIKE_KW", + /* 199 */ "likeop ::= NOT LIKE_KW", + /* 200 */ "likeop ::= MATCH", + /* 201 */ "likeop ::= NOT MATCH", + /* 202 */ "escape ::= ESCAPE expr", + /* 203 */ "escape ::=", + /* 204 */ "expr ::= expr likeop expr escape", + /* 205 */ "expr ::= expr ISNULL|NOTNULL", + /* 206 */ "expr ::= expr IS NULL", + /* 207 */ "expr ::= expr NOT NULL", + /* 208 */ "expr ::= expr IS NOT NULL", + /* 209 */ "expr ::= NOT expr", + /* 210 */ "expr ::= BITNOT expr", + /* 211 */ "expr ::= MINUS expr", + /* 212 */ "expr ::= PLUS expr", + /* 213 */ "between_op ::= BETWEEN", + /* 214 */ "between_op ::= NOT BETWEEN", + /* 215 */ "expr ::= expr between_op expr AND expr", + /* 216 */ "in_op ::= IN", + /* 217 */ "in_op ::= NOT IN", + /* 218 */ "expr ::= expr in_op LP exprlist RP", + /* 219 */ "expr ::= LP select RP", + /* 220 */ "expr ::= expr in_op LP select RP", + /* 221 */ "expr ::= expr in_op nm dbnm", + /* 222 */ "expr ::= EXISTS LP select RP", + /* 223 */ "expr ::= CASE case_operand case_exprlist case_else END", + /* 224 */ "case_exprlist ::= case_exprlist WHEN expr THEN expr", + /* 225 */ "case_exprlist ::= WHEN expr THEN expr", + /* 226 */ "case_else ::= ELSE expr", + /* 227 */ "case_else ::=", + /* 228 */ "case_operand ::= expr", + /* 229 */ "case_operand ::=", + /* 230 */ "exprlist ::= nexprlist", + /* 231 */ "exprlist ::=", + /* 232 */ "nexprlist ::= nexprlist COMMA expr", + /* 233 */ "nexprlist ::= expr", + /* 234 */ "cmd ::= CREATE uniqueflag INDEX ifnotexists nm dbnm ON nm LP idxlist RP", + /* 235 */ "uniqueflag ::= UNIQUE", + /* 236 */ "uniqueflag ::=", + /* 237 */ "idxlist_opt ::=", + /* 238 */ "idxlist_opt ::= LP idxlist RP", + /* 239 */ "idxlist ::= idxlist COMMA idxitem collate sortorder", + /* 240 */ "idxlist ::= idxitem collate sortorder", + /* 241 */ "idxitem ::= nm", + /* 242 */ "collate ::=", + /* 243 */ "collate ::= COLLATE ids", + /* 244 */ "cmd ::= DROP INDEX ifexists fullname", + /* 245 */ "cmd ::= VACUUM", + /* 246 */ "cmd ::= VACUUM nm", + /* 247 */ "cmd ::= PRAGMA nm dbnm EQ nmnum", + /* 248 */ "cmd ::= PRAGMA nm dbnm EQ ON", + /* 249 */ "cmd ::= PRAGMA nm dbnm EQ minus_num", + /* 250 */ "cmd ::= PRAGMA nm dbnm LP nmnum RP", + /* 251 */ "cmd ::= PRAGMA nm dbnm", + /* 252 */ "nmnum ::= plus_num", + /* 253 */ "nmnum ::= nm", + /* 254 */ "plus_num ::= plus_opt number", + /* 255 */ "minus_num ::= MINUS number", + /* 256 */ "number ::= INTEGER|FLOAT", + 
/* 257 */ "plus_opt ::= PLUS", + /* 258 */ "plus_opt ::=", + /* 259 */ "cmd ::= CREATE trigger_decl BEGIN trigger_cmd_list END", + /* 260 */ "trigger_decl ::= temp TRIGGER ifnotexists nm dbnm trigger_time trigger_event ON fullname foreach_clause when_clause", + /* 261 */ "trigger_time ::= BEFORE", + /* 262 */ "trigger_time ::= AFTER", + /* 263 */ "trigger_time ::= INSTEAD OF", + /* 264 */ "trigger_time ::=", + /* 265 */ "trigger_event ::= DELETE|INSERT", + /* 266 */ "trigger_event ::= UPDATE", + /* 267 */ "trigger_event ::= UPDATE OF inscollist", + /* 268 */ "foreach_clause ::=", + /* 269 */ "foreach_clause ::= FOR EACH ROW", + /* 270 */ "when_clause ::=", + /* 271 */ "when_clause ::= WHEN expr", + /* 272 */ "trigger_cmd_list ::= trigger_cmd_list trigger_cmd SEMI", + /* 273 */ "trigger_cmd_list ::=", + /* 274 */ "trigger_cmd ::= UPDATE orconf nm SET setlist where_opt", + /* 275 */ "trigger_cmd ::= insert_cmd INTO nm inscollist_opt VALUES LP itemlist RP", + /* 276 */ "trigger_cmd ::= insert_cmd INTO nm inscollist_opt select", + /* 277 */ "trigger_cmd ::= DELETE FROM nm where_opt", + /* 278 */ "trigger_cmd ::= select", + /* 279 */ "expr ::= RAISE LP IGNORE RP", + /* 280 */ "expr ::= RAISE LP raisetype COMMA nm RP", + /* 281 */ "raisetype ::= ROLLBACK", + /* 282 */ "raisetype ::= ABORT", + /* 283 */ "raisetype ::= FAIL", + /* 284 */ "cmd ::= DROP TRIGGER ifexists fullname", + /* 285 */ "cmd ::= ATTACH database_kw_opt expr AS expr key_opt", + /* 286 */ "cmd ::= DETACH database_kw_opt expr", + /* 287 */ "key_opt ::=", + /* 288 */ "key_opt ::= KEY expr", + /* 289 */ "database_kw_opt ::= DATABASE", + /* 290 */ "database_kw_opt ::=", + /* 291 */ "cmd ::= REINDEX", + /* 292 */ "cmd ::= REINDEX nm dbnm", + /* 293 */ "cmd ::= ANALYZE", + /* 294 */ "cmd ::= ANALYZE nm dbnm", + /* 295 */ "cmd ::= ALTER TABLE fullname RENAME TO nm", + /* 296 */ "cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column", + /* 297 */ "add_column_fullname ::= fullname", + /* 298 */ "kwcolumn_opt ::=", + /* 299 */ "kwcolumn_opt ::= COLUMNKW", + /* 300 */ "cmd ::= create_vtab", + /* 301 */ "cmd ::= create_vtab LP vtabarglist RP", + /* 302 */ "create_vtab ::= CREATE VIRTUAL TABLE nm dbnm USING nm", + /* 303 */ "vtabarglist ::= vtabarg", + /* 304 */ "vtabarglist ::= vtabarglist COMMA vtabarg", + /* 305 */ "vtabarg ::=", + /* 306 */ "vtabarg ::= vtabarg vtabargtoken", + /* 307 */ "vtabargtoken ::= ANY", + /* 308 */ "vtabargtoken ::= lp anylist RP", + /* 309 */ "lp ::= LP", + /* 310 */ "anylist ::=", + /* 311 */ "anylist ::= anylist ANY", +}; +#endif /* NDEBUG */ + + +#if YYSTACKDEPTH<=0 +/* +** Try to increase the size of the parser stack. +*/ +static void yyGrowStack(yyParser *p){ + int newSize; + yyStackEntry *pNew; + + newSize = p->yystksz*2 + 100; + pNew = realloc(p->yystack, newSize*sizeof(pNew[0])); + if( pNew ){ + p->yystack = pNew; + p->yystksz = newSize; +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sStack grows to %d entries!\n", + yyTracePrompt, p->yystksz); + } +#endif + } +} +#endif + +/* +** This function allocates a new parser. +** The only argument is a pointer to a function which works like +** malloc. +** +** Inputs: +** A pointer to the function used to allocate memory. +** +** Outputs: +** A pointer to a parser. This pointer is used in subsequent calls +** to sqlite3Parser and sqlite3ParserFree. 
+*/ +void *sqlite3ParserAlloc(void *(*mallocProc)(size_t)){ + yyParser *pParser; + pParser = (yyParser*)(*mallocProc)( (size_t)sizeof(yyParser) ); + if( pParser ){ + pParser->yyidx = -1; +#if YYSTACKDEPTH<=0 + yyGrowStack(pParser); +#endif + } + return pParser; +} + +/* The following function deletes the value associated with a +** symbol. The symbol can be either a terminal or nonterminal. +** "yymajor" is the symbol code, and "yypminor" is a pointer to +** the value. +*/ +static void yy_destructor(YYCODETYPE yymajor, YYMINORTYPE *yypminor){ + switch( yymajor ){ + /* Here is inserted the actions which take place when a + ** terminal or non-terminal is destroyed. This can happen + ** when the symbol is popped from the stack during a + ** reduce or during error processing or when a parser is + ** being destroyed before it is finished parsing. + ** + ** Note: during a reduce, the only symbols destroyed are those + ** which appear on the RHS of the rule, but which are not used + ** inside the C code. + */ + case 155: /* select */ + case 189: /* oneselect */ + case 206: /* seltablist_paren */ +#line 369 "parse.y" +{sqlite3SelectDelete((yypminor->yy219));} +#line 1271 "parse.c" + break; + case 169: /* term */ + case 170: /* expr */ + case 194: /* where_opt */ + case 196: /* having_opt */ + case 204: /* on_opt */ + case 210: /* sortitem */ + case 218: /* escape */ + case 221: /* case_operand */ + case 223: /* case_else */ + case 235: /* when_clause */ + case 238: /* key_opt */ +#line 629 "parse.y" +{sqlite3ExprDelete((yypminor->yy172));} +#line 1286 "parse.c" + break; + case 174: /* idxlist_opt */ + case 182: /* idxlist */ + case 192: /* selcollist */ + case 195: /* groupby_opt */ + case 197: /* orderby_opt */ + case 199: /* sclp */ + case 209: /* sortlist */ + case 211: /* nexprlist */ + case 212: /* setlist */ + case 215: /* itemlist */ + case 216: /* exprlist */ + case 222: /* case_exprlist */ +#line 887 "parse.y" +{sqlite3ExprListDelete((yypminor->yy174));} +#line 1302 "parse.c" + break; + case 188: /* fullname */ + case 193: /* from */ + case 201: /* seltablist */ + case 202: /* stl_prefix */ +#line 486 "parse.y" +{sqlite3SrcListDelete((yypminor->yy373));} +#line 1310 "parse.c" + break; + case 205: /* using_opt */ + case 208: /* inscollist */ + case 214: /* inscollist_opt */ +#line 503 "parse.y" +{sqlite3IdListDelete((yypminor->yy432));} +#line 1317 "parse.c" + break; + case 231: /* trigger_cmd_list */ + case 236: /* trigger_cmd */ +#line 990 "parse.y" +{sqlite3DeleteTriggerStep((yypminor->yy243));} +#line 1323 "parse.c" + break; + case 233: /* trigger_event */ +#line 976 "parse.y" +{sqlite3IdListDelete((yypminor->yy370).b);} +#line 1328 "parse.c" + break; + default: break; /* If no destructor action specified: do nothing */ + } +} + +/* +** Pop the parser's stack once. +** +** If there is a destructor routine associated with the token which +** is popped from the stack, then call it. +** +** Return the major token number for the symbol popped. +*/ +static int yy_pop_parser_stack(yyParser *pParser){ + YYCODETYPE yymajor; + yyStackEntry *yytos = &pParser->yystack[pParser->yyidx]; + + if( pParser->yyidx<0 ) return 0; +#ifndef NDEBUG + if( yyTraceFILE && pParser->yyidx>=0 ){ + fprintf(yyTraceFILE,"%sPopping %s\n", + yyTracePrompt, + yyTokenName[yytos->major]); + } +#endif + yymajor = yytos->major; + yy_destructor( yymajor, &yytos->minor); + pParser->yyidx--; + return yymajor; +} + +/* +** Deallocate and destroy a parser. 
Destructors are all called for +** all stack elements before shutting the parser down. +** +** Inputs: +**
+** <ul>
+** <li>  A pointer to the parser.  This should be a pointer
+**       obtained from sqlite3ParserAlloc.
+** <li>  A pointer to a function used to reclaim memory obtained
+**       from malloc.
+** </ul>
+*/
+void sqlite3ParserFree(
+  void *p,                    /* The parser to be deleted */
+  void (*freeProc)(void*)     /* Function used to reclaim memory */
+){
+  yyParser *pParser = (yyParser*)p;
+  if( pParser==0 ) return;
+  while( pParser->yyidx>=0 ) yy_pop_parser_stack(pParser);
+#if YYSTACKDEPTH<=0
+  free(pParser->yystack);
+#endif
+  (*freeProc)((void*)pParser);
+}
+
+/*
+** Find the appropriate action for a parser given the terminal
+** look-ahead token iLookAhead.
+**
+** If the look-ahead token is YYNOCODE, then check to see if the action is
+** independent of the look-ahead.  If it is, return the action, otherwise
+** return YY_NO_ACTION.
+*/
+static int yy_find_shift_action(
+  yyParser *pParser,        /* The parser */
+  YYCODETYPE iLookAhead     /* The look-ahead token */
+){
+  int i;
+  int stateno = pParser->yystack[pParser->yyidx].stateno;
+ 
+  if( stateno>YY_SHIFT_MAX || (i = yy_shift_ofst[stateno])==YY_SHIFT_USE_DFLT ){
+    return yy_default[stateno];
+  }
+  assert( iLookAhead!=YYNOCODE );
+  i += iLookAhead;
+  if( i<0 || i>=YY_SZ_ACTTAB || yy_lookahead[i]!=iLookAhead ){
+    if( iLookAhead>0 ){
+#ifdef YYFALLBACK
+      int iFallback;            /* Fallback token */
+      if( iLookAhead<sizeof(yyFallback)/sizeof(yyFallback[0])
+             && (iFallback = yyFallback[iLookAhead])!=0 ){
+#ifndef NDEBUG
+        if( yyTraceFILE ){
+          fprintf(yyTraceFILE, "%sFALLBACK %s => %s\n",
+             yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[iFallback]);
+        }
+#endif
+        return yy_find_shift_action(pParser, iFallback);
+      }
+#endif
+#ifdef YYWILDCARD
+      {
+        int j = i - iLookAhead + YYWILDCARD;
+        if( j>=0 && j<YY_SZ_ACTTAB && yy_lookahead[j]==YYWILDCARD ){
+#ifndef NDEBUG
+          if( yyTraceFILE ){
+            fprintf(yyTraceFILE, "%sWILDCARD %s => %s\n",
+               yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[YYWILDCARD]);
+          }
+#endif /* NDEBUG */
+          return yy_action[j];
+        }
+      }
+#endif /* YYWILDCARD */
+    }
+    return yy_default[stateno];
+  }else{
+    return yy_action[i];
+  }
+}
+
+/*
+** Find the appropriate action for a parser given the non-terminal
+** look-ahead token iLookAhead.
+**
+** If the look-ahead token is YYNOCODE, then check to see if the action is
+** independent of the look-ahead.  If it is, return the action, otherwise
+** return YY_NO_ACTION.
+*/
+static int yy_find_reduce_action(
+  int stateno,              /* Current state number */
+  YYCODETYPE iLookAhead     /* The look-ahead token */
+){
+  int i;
+  assert( stateno<=YY_REDUCE_MAX );
+  i = yy_reduce_ofst[stateno];
+  assert( i!=YY_REDUCE_USE_DFLT );
+  assert( iLookAhead!=YYNOCODE );
+  i += iLookAhead;
+  assert( i>=0 && i<YY_SZ_ACTTAB );
+  assert( yy_lookahead[i]==iLookAhead );
+  return yy_action[i];
+}
+
+/*
+** The following routine is called if the stack overflows.
+*/
+static void yyStackOverflow(yyParser *yypParser, YYMINORTYPE *yypMinor){
+   sqlite3ParserARG_FETCH;
+   yypParser->yyidx--;
+#ifndef NDEBUG
+  if( yyTraceFILE ){
+    fprintf(yyTraceFILE,"%sStack Overflow!\n",yyTracePrompt);
+  }
+#endif
+  while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser);
+  /* Here code is inserted which will execute if the parser
+  ** stack every overflows */
+#line 39 "parse.y"
+
+  sqlite3ErrorMsg(pParse, "parser stack overflow");
+  pParse->parseError = 1;
+#line 1483 "parse.c"
+  sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument var */
+}
+
+/*
+** Perform a shift action.
+*/ +static void yy_shift( + yyParser *yypParser, /* The parser to be shifted */ + int yyNewState, /* The new state to shift in */ + int yyMajor, /* The major token to shift in */ + YYMINORTYPE *yypMinor /* Pointer ot the minor token to shift in */ +){ + yyStackEntry *yytos; + yypParser->yyidx++; +#if YYSTACKDEPTH>0 + if( yypParser->yyidx>=YYSTACKDEPTH ){ + yyStackOverflow(yypParser, yypMinor); + return; + } +#else + if( yypParser->yyidx>=yypParser->yystksz ){ + yyGrowStack(yypParser); + if( yypParser->yyidx>=yypParser->yystksz ){ + yyStackOverflow(yypParser, yypMinor); + return; + } + } +#endif + yytos = &yypParser->yystack[yypParser->yyidx]; + yytos->stateno = yyNewState; + yytos->major = yyMajor; + yytos->minor = *yypMinor; +#ifndef NDEBUG + if( yyTraceFILE && yypParser->yyidx>0 ){ + int i; + fprintf(yyTraceFILE,"%sShift %d\n",yyTracePrompt,yyNewState); + fprintf(yyTraceFILE,"%sStack:",yyTracePrompt); + for(i=1; i<=yypParser->yyidx; i++) + fprintf(yyTraceFILE," %s",yyTokenName[yypParser->yystack[i].major]); + fprintf(yyTraceFILE,"\n"); + } +#endif +} + +/* The following table contains information about every rule that +** is used during the reduce. +*/ +static const struct { + YYCODETYPE lhs; /* Symbol on the left-hand side of the rule */ + unsigned char nrhs; /* Number of right-hand side symbols in the rule */ +} yyRuleInfo[] = { + { 139, 1 }, + { 140, 2 }, + { 140, 1 }, + { 142, 1 }, + { 141, 1 }, + { 141, 3 }, + { 144, 0 }, + { 144, 1 }, + { 144, 3 }, + { 143, 3 }, + { 146, 0 }, + { 146, 1 }, + { 146, 2 }, + { 145, 0 }, + { 145, 1 }, + { 145, 1 }, + { 145, 1 }, + { 143, 2 }, + { 143, 2 }, + { 143, 2 }, + { 143, 2 }, + { 148, 6 }, + { 151, 0 }, + { 151, 3 }, + { 150, 1 }, + { 150, 0 }, + { 149, 4 }, + { 149, 2 }, + { 153, 3 }, + { 153, 1 }, + { 156, 3 }, + { 157, 1 }, + { 160, 1 }, + { 161, 1 }, + { 147, 1 }, + { 147, 1 }, + { 147, 1 }, + { 158, 0 }, + { 158, 1 }, + { 162, 1 }, + { 162, 4 }, + { 162, 6 }, + { 163, 1 }, + { 163, 2 }, + { 164, 1 }, + { 164, 1 }, + { 159, 2 }, + { 159, 0 }, + { 167, 3 }, + { 167, 1 }, + { 168, 2 }, + { 168, 4 }, + { 168, 3 }, + { 168, 3 }, + { 168, 2 }, + { 168, 2 }, + { 168, 3 }, + { 168, 5 }, + { 168, 2 }, + { 168, 4 }, + { 168, 4 }, + { 168, 1 }, + { 168, 2 }, + { 173, 0 }, + { 173, 1 }, + { 175, 0 }, + { 175, 2 }, + { 177, 2 }, + { 177, 3 }, + { 177, 3 }, + { 177, 3 }, + { 178, 2 }, + { 178, 2 }, + { 178, 1 }, + { 178, 1 }, + { 176, 3 }, + { 176, 2 }, + { 179, 0 }, + { 179, 2 }, + { 179, 2 }, + { 154, 0 }, + { 154, 2 }, + { 180, 3 }, + { 180, 2 }, + { 180, 1 }, + { 181, 2 }, + { 181, 7 }, + { 181, 5 }, + { 181, 5 }, + { 181, 10 }, + { 183, 0 }, + { 183, 1 }, + { 171, 0 }, + { 171, 3 }, + { 184, 0 }, + { 184, 2 }, + { 185, 1 }, + { 185, 1 }, + { 185, 1 }, + { 143, 4 }, + { 187, 2 }, + { 187, 0 }, + { 143, 8 }, + { 143, 4 }, + { 143, 1 }, + { 155, 1 }, + { 155, 3 }, + { 190, 1 }, + { 190, 2 }, + { 190, 1 }, + { 189, 9 }, + { 191, 1 }, + { 191, 1 }, + { 191, 0 }, + { 199, 2 }, + { 199, 0 }, + { 192, 3 }, + { 192, 2 }, + { 192, 4 }, + { 200, 2 }, + { 200, 1 }, + { 200, 0 }, + { 193, 0 }, + { 193, 2 }, + { 202, 2 }, + { 202, 0 }, + { 201, 6 }, + { 201, 7 }, + { 206, 1 }, + { 206, 1 }, + { 152, 0 }, + { 152, 2 }, + { 188, 2 }, + { 203, 1 }, + { 203, 2 }, + { 203, 3 }, + { 203, 4 }, + { 204, 2 }, + { 204, 0 }, + { 205, 4 }, + { 205, 0 }, + { 197, 0 }, + { 197, 3 }, + { 209, 4 }, + { 209, 2 }, + { 210, 1 }, + { 172, 1 }, + { 172, 1 }, + { 172, 0 }, + { 195, 0 }, + { 195, 3 }, + { 196, 0 }, + { 196, 2 }, + { 198, 0 }, + { 198, 2 }, + { 198, 4 }, + { 198, 4 }, 
+ { 143, 4 }, + { 194, 0 }, + { 194, 2 }, + { 143, 6 }, + { 212, 5 }, + { 212, 3 }, + { 143, 8 }, + { 143, 5 }, + { 143, 6 }, + { 213, 2 }, + { 213, 1 }, + { 215, 3 }, + { 215, 1 }, + { 214, 0 }, + { 214, 3 }, + { 208, 3 }, + { 208, 1 }, + { 170, 1 }, + { 170, 3 }, + { 169, 1 }, + { 170, 1 }, + { 170, 1 }, + { 170, 3 }, + { 170, 5 }, + { 169, 1 }, + { 169, 1 }, + { 170, 1 }, + { 170, 1 }, + { 170, 3 }, + { 170, 6 }, + { 170, 5 }, + { 170, 4 }, + { 169, 1 }, + { 170, 3 }, + { 170, 3 }, + { 170, 3 }, + { 170, 3 }, + { 170, 3 }, + { 170, 3 }, + { 170, 3 }, + { 170, 3 }, + { 217, 1 }, + { 217, 2 }, + { 217, 1 }, + { 217, 2 }, + { 218, 2 }, + { 218, 0 }, + { 170, 4 }, + { 170, 2 }, + { 170, 3 }, + { 170, 3 }, + { 170, 4 }, + { 170, 2 }, + { 170, 2 }, + { 170, 2 }, + { 170, 2 }, + { 219, 1 }, + { 219, 2 }, + { 170, 5 }, + { 220, 1 }, + { 220, 2 }, + { 170, 5 }, + { 170, 3 }, + { 170, 5 }, + { 170, 4 }, + { 170, 4 }, + { 170, 5 }, + { 222, 5 }, + { 222, 4 }, + { 223, 2 }, + { 223, 0 }, + { 221, 1 }, + { 221, 0 }, + { 216, 1 }, + { 216, 0 }, + { 211, 3 }, + { 211, 1 }, + { 143, 11 }, + { 224, 1 }, + { 224, 0 }, + { 174, 0 }, + { 174, 3 }, + { 182, 5 }, + { 182, 3 }, + { 225, 1 }, + { 226, 0 }, + { 226, 2 }, + { 143, 4 }, + { 143, 1 }, + { 143, 2 }, + { 143, 5 }, + { 143, 5 }, + { 143, 5 }, + { 143, 6 }, + { 143, 3 }, + { 227, 1 }, + { 227, 1 }, + { 165, 2 }, + { 166, 2 }, + { 229, 1 }, + { 228, 1 }, + { 228, 0 }, + { 143, 5 }, + { 230, 11 }, + { 232, 1 }, + { 232, 1 }, + { 232, 2 }, + { 232, 0 }, + { 233, 1 }, + { 233, 1 }, + { 233, 3 }, + { 234, 0 }, + { 234, 3 }, + { 235, 0 }, + { 235, 2 }, + { 231, 3 }, + { 231, 0 }, + { 236, 6 }, + { 236, 8 }, + { 236, 5 }, + { 236, 4 }, + { 236, 1 }, + { 170, 4 }, + { 170, 6 }, + { 186, 1 }, + { 186, 1 }, + { 186, 1 }, + { 143, 4 }, + { 143, 6 }, + { 143, 3 }, + { 238, 0 }, + { 238, 2 }, + { 237, 1 }, + { 237, 0 }, + { 143, 1 }, + { 143, 3 }, + { 143, 1 }, + { 143, 3 }, + { 143, 6 }, + { 143, 6 }, + { 239, 1 }, + { 240, 0 }, + { 240, 1 }, + { 143, 1 }, + { 143, 4 }, + { 241, 7 }, + { 242, 1 }, + { 242, 3 }, + { 243, 0 }, + { 243, 2 }, + { 244, 1 }, + { 244, 3 }, + { 245, 1 }, + { 246, 0 }, + { 246, 2 }, +}; + +static void yy_accept(yyParser*); /* Forward Declaration */ + +/* +** Perform a reduce action and the shift that must immediately +** follow the reduce. +*/ +static void yy_reduce( + yyParser *yypParser, /* The parser */ + int yyruleno /* Number of the rule by which to reduce */ +){ + int yygoto; /* The next state */ + int yyact; /* The next action */ + YYMINORTYPE yygotominor; /* The LHS of the rule reduced */ + yyStackEntry *yymsp; /* The top of the parser's stack */ + int yysize; /* Amount to pop the stack */ + sqlite3ParserARG_FETCH; + yymsp = &yypParser->yystack[yypParser->yyidx]; +#ifndef NDEBUG + if( yyTraceFILE && yyruleno>=0 + && yyruleno<(int)(sizeof(yyRuleName)/sizeof(yyRuleName[0])) ){ + fprintf(yyTraceFILE, "%sReduce [%s].\n", yyTracePrompt, + yyRuleName[yyruleno]); + } +#endif /* NDEBUG */ + + /* Silence complaints from purify about yygotominor being uninitialized + ** in some cases when it is copied into the stack after the following + ** switch. yygotominor is uninitialized when a rule reduces that does + ** not set the value of its left-hand side nonterminal. Leaving the + ** value of the nonterminal uninitialized is utterly harmless as long + ** as the value is never used. So really the only thing this code + ** accomplishes is to quieten purify. 
+ ** + ** 2007-01-16: The wireshark project (www.wireshark.org) reports that + ** without this code, their parser segfaults. I'm not sure what there + ** parser is doing to make this happen. This is the second bug report + ** from wireshark this week. Clearly they are stressing Lemon in ways + ** that it has not been previously stressed... (SQLite ticket #2172) + */ + memset(&yygotominor, 0, sizeof(yygotominor)); + + + switch( yyruleno ){ + /* Beginning here are the reduction cases. A typical example + ** follows: + ** case 0: + ** #line + ** { ... } // User supplied code + ** #line + ** break; + */ + case 0: /* input ::= cmdlist */ + case 1: /* cmdlist ::= cmdlist ecmd */ + case 2: /* cmdlist ::= ecmd */ + case 4: /* ecmd ::= SEMI */ + case 5: /* ecmd ::= explain cmdx SEMI */ + case 10: /* trans_opt ::= */ + case 11: /* trans_opt ::= TRANSACTION */ + case 12: /* trans_opt ::= TRANSACTION nm */ + case 20: /* cmd ::= create_table create_table_args */ + case 28: /* columnlist ::= columnlist COMMA column */ + case 29: /* columnlist ::= column */ + case 37: /* type ::= */ + case 44: /* signed ::= plus_num */ + case 45: /* signed ::= minus_num */ + case 46: /* carglist ::= carglist carg */ + case 47: /* carglist ::= */ + case 48: /* carg ::= CONSTRAINT nm ccons */ + case 49: /* carg ::= ccons */ + case 55: /* ccons ::= NULL onconf */ + case 82: /* conslist ::= conslist COMMA tcons */ + case 83: /* conslist ::= conslist tcons */ + case 84: /* conslist ::= tcons */ + case 85: /* tcons ::= CONSTRAINT nm */ + case 257: /* plus_opt ::= PLUS */ + case 258: /* plus_opt ::= */ + case 268: /* foreach_clause ::= */ + case 269: /* foreach_clause ::= FOR EACH ROW */ + case 289: /* database_kw_opt ::= DATABASE */ + case 290: /* database_kw_opt ::= */ + case 298: /* kwcolumn_opt ::= */ + case 299: /* kwcolumn_opt ::= COLUMNKW */ + case 303: /* vtabarglist ::= vtabarg */ + case 304: /* vtabarglist ::= vtabarglist COMMA vtabarg */ + case 306: /* vtabarg ::= vtabarg vtabargtoken */ + case 310: /* anylist ::= */ +#line 91 "parse.y" +{ +} +#line 1938 "parse.c" + break; + case 3: /* cmdx ::= cmd */ +#line 94 "parse.y" +{ sqlite3FinishCoding(pParse); } +#line 1943 "parse.c" + break; + case 6: /* explain ::= */ +#line 97 "parse.y" +{ sqlite3BeginParse(pParse, 0); } +#line 1948 "parse.c" + break; + case 7: /* explain ::= EXPLAIN */ +#line 99 "parse.y" +{ sqlite3BeginParse(pParse, 1); } +#line 1953 "parse.c" + break; + case 8: /* explain ::= EXPLAIN QUERY PLAN */ +#line 100 "parse.y" +{ sqlite3BeginParse(pParse, 2); } +#line 1958 "parse.c" + break; + case 9: /* cmd ::= BEGIN transtype trans_opt */ +#line 106 "parse.y" +{sqlite3BeginTransaction(pParse, yymsp[-1].minor.yy46);} +#line 1963 "parse.c" + break; + case 13: /* transtype ::= */ +#line 111 "parse.y" +{yygotominor.yy46 = TK_DEFERRED;} +#line 1968 "parse.c" + break; + case 14: /* transtype ::= DEFERRED */ + case 15: /* transtype ::= IMMEDIATE */ + case 16: /* transtype ::= EXCLUSIVE */ + case 107: /* multiselect_op ::= UNION */ + case 109: /* multiselect_op ::= EXCEPT|INTERSECT */ +#line 112 "parse.y" +{yygotominor.yy46 = yymsp[0].major;} +#line 1977 "parse.c" + break; + case 17: /* cmd ::= COMMIT trans_opt */ + case 18: /* cmd ::= END trans_opt */ +#line 115 "parse.y" +{sqlite3CommitTransaction(pParse);} +#line 1983 "parse.c" + break; + case 19: /* cmd ::= ROLLBACK trans_opt */ +#line 117 "parse.y" +{sqlite3RollbackTransaction(pParse);} +#line 1988 "parse.c" + break; + case 21: /* create_table ::= CREATE temp TABLE ifnotexists nm dbnm */ +#line 122 "parse.y" +{ 
+ sqlite3StartTable(pParse,&yymsp[-1].minor.yy410,&yymsp[0].minor.yy410,yymsp[-4].minor.yy46,0,0,yymsp[-2].minor.yy46); +} +#line 1995 "parse.c" + break; + case 22: /* ifnotexists ::= */ + case 25: /* temp ::= */ + case 63: /* autoinc ::= */ + case 77: /* init_deferred_pred_opt ::= */ + case 79: /* init_deferred_pred_opt ::= INITIALLY IMMEDIATE */ + case 90: /* defer_subclause_opt ::= */ + case 101: /* ifexists ::= */ + case 112: /* distinct ::= ALL */ + case 113: /* distinct ::= */ + case 213: /* between_op ::= BETWEEN */ + case 216: /* in_op ::= IN */ +#line 126 "parse.y" +{yygotominor.yy46 = 0;} +#line 2010 "parse.c" + break; + case 23: /* ifnotexists ::= IF NOT EXISTS */ + case 24: /* temp ::= TEMP */ + case 64: /* autoinc ::= AUTOINCR */ + case 78: /* init_deferred_pred_opt ::= INITIALLY DEFERRED */ + case 100: /* ifexists ::= IF EXISTS */ + case 111: /* distinct ::= DISTINCT */ + case 214: /* between_op ::= NOT BETWEEN */ + case 217: /* in_op ::= NOT IN */ +#line 127 "parse.y" +{yygotominor.yy46 = 1;} +#line 2022 "parse.c" + break; + case 26: /* create_table_args ::= LP columnlist conslist_opt RP */ +#line 133 "parse.y" +{ + sqlite3EndTable(pParse,&yymsp[-1].minor.yy410,&yymsp[0].minor.yy0,0); +} +#line 2029 "parse.c" + break; + case 27: /* create_table_args ::= AS select */ +#line 136 "parse.y" +{ + sqlite3EndTable(pParse,0,0,yymsp[0].minor.yy219); + sqlite3SelectDelete(yymsp[0].minor.yy219); +} +#line 2037 "parse.c" + break; + case 30: /* column ::= columnid type carglist */ +#line 148 "parse.y" +{ + yygotominor.yy410.z = yymsp[-2].minor.yy410.z; + yygotominor.yy410.n = (pParse->sLastToken.z-yymsp[-2].minor.yy410.z) + pParse->sLastToken.n; +} +#line 2045 "parse.c" + break; + case 31: /* columnid ::= nm */ +#line 152 "parse.y" +{ + sqlite3AddColumn(pParse,&yymsp[0].minor.yy410); + yygotominor.yy410 = yymsp[0].minor.yy410; +} +#line 2053 "parse.c" + break; + case 32: /* id ::= ID */ + case 33: /* ids ::= ID|STRING */ + case 34: /* nm ::= ID */ + case 35: /* nm ::= STRING */ + case 36: /* nm ::= JOIN_KW */ + case 256: /* number ::= INTEGER|FLOAT */ +#line 162 "parse.y" +{yygotominor.yy410 = yymsp[0].minor.yy0;} +#line 2063 "parse.c" + break; + case 38: /* type ::= typetoken */ +#line 223 "parse.y" +{sqlite3AddColumnType(pParse,&yymsp[0].minor.yy410);} +#line 2068 "parse.c" + break; + case 39: /* typetoken ::= typename */ + case 42: /* typename ::= ids */ + case 119: /* as ::= AS nm */ + case 120: /* as ::= ids */ + case 131: /* dbnm ::= DOT nm */ + case 241: /* idxitem ::= nm */ + case 243: /* collate ::= COLLATE ids */ + case 252: /* nmnum ::= plus_num */ + case 253: /* nmnum ::= nm */ + case 254: /* plus_num ::= plus_opt number */ + case 255: /* minus_num ::= MINUS number */ +#line 224 "parse.y" +{yygotominor.yy410 = yymsp[0].minor.yy410;} +#line 2083 "parse.c" + break; + case 40: /* typetoken ::= typename LP signed RP */ +#line 225 "parse.y" +{ + yygotominor.yy410.z = yymsp[-3].minor.yy410.z; + yygotominor.yy410.n = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] - yymsp[-3].minor.yy410.z; +} +#line 2091 "parse.c" + break; + case 41: /* typetoken ::= typename LP signed COMMA signed RP */ +#line 229 "parse.y" +{ + yygotominor.yy410.z = yymsp[-5].minor.yy410.z; + yygotominor.yy410.n = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] - yymsp[-5].minor.yy410.z; +} +#line 2099 "parse.c" + break; + case 43: /* typename ::= typename ids */ +#line 235 "parse.y" +{yygotominor.yy410.z=yymsp[-1].minor.yy410.z; 
yygotominor.yy410.n=yymsp[0].minor.yy410.n+(yymsp[0].minor.yy410.z-yymsp[-1].minor.yy410.z);} +#line 2104 "parse.c" + break; + case 50: /* ccons ::= DEFAULT term */ + case 52: /* ccons ::= DEFAULT PLUS term */ +#line 246 "parse.y" +{sqlite3AddDefaultValue(pParse,yymsp[0].minor.yy172);} +#line 2110 "parse.c" + break; + case 51: /* ccons ::= DEFAULT LP expr RP */ +#line 247 "parse.y" +{sqlite3AddDefaultValue(pParse,yymsp[-1].minor.yy172);} +#line 2115 "parse.c" + break; + case 53: /* ccons ::= DEFAULT MINUS term */ +#line 249 "parse.y" +{ + Expr *p = sqlite3PExpr(pParse, TK_UMINUS, yymsp[0].minor.yy172, 0, 0); + sqlite3AddDefaultValue(pParse,p); +} +#line 2123 "parse.c" + break; + case 54: /* ccons ::= DEFAULT id */ +#line 253 "parse.y" +{ + Expr *p = sqlite3PExpr(pParse, TK_STRING, 0, 0, &yymsp[0].minor.yy410); + sqlite3AddDefaultValue(pParse,p); +} +#line 2131 "parse.c" + break; + case 56: /* ccons ::= NOT NULL onconf */ +#line 262 "parse.y" +{sqlite3AddNotNull(pParse, yymsp[0].minor.yy46);} +#line 2136 "parse.c" + break; + case 57: /* ccons ::= PRIMARY KEY sortorder onconf autoinc */ +#line 264 "parse.y" +{sqlite3AddPrimaryKey(pParse,0,yymsp[-1].minor.yy46,yymsp[0].minor.yy46,yymsp[-2].minor.yy46);} +#line 2141 "parse.c" + break; + case 58: /* ccons ::= UNIQUE onconf */ +#line 265 "parse.y" +{sqlite3CreateIndex(pParse,0,0,0,0,yymsp[0].minor.yy46,0,0,0,0);} +#line 2146 "parse.c" + break; + case 59: /* ccons ::= CHECK LP expr RP */ +#line 266 "parse.y" +{sqlite3AddCheckConstraint(pParse,yymsp[-1].minor.yy172);} +#line 2151 "parse.c" + break; + case 60: /* ccons ::= REFERENCES nm idxlist_opt refargs */ +#line 268 "parse.y" +{sqlite3CreateForeignKey(pParse,0,&yymsp[-2].minor.yy410,yymsp[-1].minor.yy174,yymsp[0].minor.yy46);} +#line 2156 "parse.c" + break; + case 61: /* ccons ::= defer_subclause */ +#line 269 "parse.y" +{sqlite3DeferForeignKey(pParse,yymsp[0].minor.yy46);} +#line 2161 "parse.c" + break; + case 62: /* ccons ::= COLLATE ids */ +#line 270 "parse.y" +{sqlite3AddCollateType(pParse, &yymsp[0].minor.yy410);} +#line 2166 "parse.c" + break; + case 65: /* refargs ::= */ +#line 283 "parse.y" +{ yygotominor.yy46 = OE_Restrict * 0x010101; } +#line 2171 "parse.c" + break; + case 66: /* refargs ::= refargs refarg */ +#line 284 "parse.y" +{ yygotominor.yy46 = (yymsp[-1].minor.yy46 & yymsp[0].minor.yy405.mask) | yymsp[0].minor.yy405.value; } +#line 2176 "parse.c" + break; + case 67: /* refarg ::= MATCH nm */ +#line 286 "parse.y" +{ yygotominor.yy405.value = 0; yygotominor.yy405.mask = 0x000000; } +#line 2181 "parse.c" + break; + case 68: /* refarg ::= ON DELETE refact */ +#line 287 "parse.y" +{ yygotominor.yy405.value = yymsp[0].minor.yy46; yygotominor.yy405.mask = 0x0000ff; } +#line 2186 "parse.c" + break; + case 69: /* refarg ::= ON UPDATE refact */ +#line 288 "parse.y" +{ yygotominor.yy405.value = yymsp[0].minor.yy46<<8; yygotominor.yy405.mask = 0x00ff00; } +#line 2191 "parse.c" + break; + case 70: /* refarg ::= ON INSERT refact */ +#line 289 "parse.y" +{ yygotominor.yy405.value = yymsp[0].minor.yy46<<16; yygotominor.yy405.mask = 0xff0000; } +#line 2196 "parse.c" + break; + case 71: /* refact ::= SET NULL */ +#line 291 "parse.y" +{ yygotominor.yy46 = OE_SetNull; } +#line 2201 "parse.c" + break; + case 72: /* refact ::= SET DEFAULT */ +#line 292 "parse.y" +{ yygotominor.yy46 = OE_SetDflt; } +#line 2206 "parse.c" + break; + case 73: /* refact ::= CASCADE */ +#line 293 "parse.y" +{ yygotominor.yy46 = OE_Cascade; } +#line 2211 "parse.c" + break; + case 74: /* refact ::= RESTRICT */ +#line 294 
"parse.y" +{ yygotominor.yy46 = OE_Restrict; } +#line 2216 "parse.c" + break; + case 75: /* defer_subclause ::= NOT DEFERRABLE init_deferred_pred_opt */ + case 76: /* defer_subclause ::= DEFERRABLE init_deferred_pred_opt */ + case 91: /* defer_subclause_opt ::= defer_subclause */ + case 93: /* onconf ::= ON CONFLICT resolvetype */ + case 95: /* orconf ::= OR resolvetype */ + case 96: /* resolvetype ::= raisetype */ + case 166: /* insert_cmd ::= INSERT orconf */ +#line 296 "parse.y" +{yygotominor.yy46 = yymsp[0].minor.yy46;} +#line 2227 "parse.c" + break; + case 80: /* conslist_opt ::= */ +#line 306 "parse.y" +{yygotominor.yy410.n = 0; yygotominor.yy410.z = 0;} +#line 2232 "parse.c" + break; + case 81: /* conslist_opt ::= COMMA conslist */ +#line 307 "parse.y" +{yygotominor.yy410 = yymsp[-1].minor.yy0;} +#line 2237 "parse.c" + break; + case 86: /* tcons ::= PRIMARY KEY LP idxlist autoinc RP onconf */ +#line 313 "parse.y" +{sqlite3AddPrimaryKey(pParse,yymsp[-3].minor.yy174,yymsp[0].minor.yy46,yymsp[-2].minor.yy46,0);} +#line 2242 "parse.c" + break; + case 87: /* tcons ::= UNIQUE LP idxlist RP onconf */ +#line 315 "parse.y" +{sqlite3CreateIndex(pParse,0,0,0,yymsp[-2].minor.yy174,yymsp[0].minor.yy46,0,0,0,0);} +#line 2247 "parse.c" + break; + case 88: /* tcons ::= CHECK LP expr RP onconf */ +#line 316 "parse.y" +{sqlite3AddCheckConstraint(pParse,yymsp[-2].minor.yy172);} +#line 2252 "parse.c" + break; + case 89: /* tcons ::= FOREIGN KEY LP idxlist RP REFERENCES nm idxlist_opt refargs defer_subclause_opt */ +#line 318 "parse.y" +{ + sqlite3CreateForeignKey(pParse, yymsp[-6].minor.yy174, &yymsp[-3].minor.yy410, yymsp[-2].minor.yy174, yymsp[-1].minor.yy46); + sqlite3DeferForeignKey(pParse, yymsp[0].minor.yy46); +} +#line 2260 "parse.c" + break; + case 92: /* onconf ::= */ + case 94: /* orconf ::= */ +#line 332 "parse.y" +{yygotominor.yy46 = OE_Default;} +#line 2266 "parse.c" + break; + case 97: /* resolvetype ::= IGNORE */ +#line 337 "parse.y" +{yygotominor.yy46 = OE_Ignore;} +#line 2271 "parse.c" + break; + case 98: /* resolvetype ::= REPLACE */ + case 167: /* insert_cmd ::= REPLACE */ +#line 338 "parse.y" +{yygotominor.yy46 = OE_Replace;} +#line 2277 "parse.c" + break; + case 99: /* cmd ::= DROP TABLE ifexists fullname */ +#line 342 "parse.y" +{ + sqlite3DropTable(pParse, yymsp[0].minor.yy373, 0, yymsp[-1].minor.yy46); +} +#line 2284 "parse.c" + break; + case 102: /* cmd ::= CREATE temp VIEW ifnotexists nm dbnm AS select */ +#line 352 "parse.y" +{ + sqlite3CreateView(pParse, &yymsp[-7].minor.yy0, &yymsp[-3].minor.yy410, &yymsp[-2].minor.yy410, yymsp[0].minor.yy219, yymsp[-6].minor.yy46, yymsp[-4].minor.yy46); +} +#line 2291 "parse.c" + break; + case 103: /* cmd ::= DROP VIEW ifexists fullname */ +#line 355 "parse.y" +{ + sqlite3DropTable(pParse, yymsp[0].minor.yy373, 1, yymsp[-1].minor.yy46); +} +#line 2298 "parse.c" + break; + case 104: /* cmd ::= select */ +#line 362 "parse.y" +{ + SelectDest dest = {SRT_Callback, 0, 0}; + sqlite3Select(pParse, yymsp[0].minor.yy219, &dest, 0, 0, 0, 0); + sqlite3SelectDelete(yymsp[0].minor.yy219); +} +#line 2307 "parse.c" + break; + case 105: /* select ::= oneselect */ + case 128: /* seltablist_paren ::= select */ +#line 373 "parse.y" +{yygotominor.yy219 = yymsp[0].minor.yy219;} +#line 2313 "parse.c" + break; + case 106: /* select ::= select multiselect_op oneselect */ +#line 375 "parse.y" +{ + if( yymsp[0].minor.yy219 ){ + yymsp[0].minor.yy219->op = yymsp[-1].minor.yy46; + yymsp[0].minor.yy219->pPrior = yymsp[-2].minor.yy219; + }else{ + 
sqlite3SelectDelete(yymsp[-2].minor.yy219); + } + yygotominor.yy219 = yymsp[0].minor.yy219; +} +#line 2326 "parse.c" + break; + case 108: /* multiselect_op ::= UNION ALL */ +#line 386 "parse.y" +{yygotominor.yy46 = TK_ALL;} +#line 2331 "parse.c" + break; + case 110: /* oneselect ::= SELECT distinct selcollist from where_opt groupby_opt having_opt orderby_opt limit_opt */ +#line 390 "parse.y" +{ + yygotominor.yy219 = sqlite3SelectNew(pParse,yymsp[-6].minor.yy174,yymsp[-5].minor.yy373,yymsp[-4].minor.yy172,yymsp[-3].minor.yy174,yymsp[-2].minor.yy172,yymsp[-1].minor.yy174,yymsp[-7].minor.yy46,yymsp[0].minor.yy234.pLimit,yymsp[0].minor.yy234.pOffset); +} +#line 2338 "parse.c" + break; + case 114: /* sclp ::= selcollist COMMA */ + case 238: /* idxlist_opt ::= LP idxlist RP */ +#line 411 "parse.y" +{yygotominor.yy174 = yymsp[-1].minor.yy174;} +#line 2344 "parse.c" + break; + case 115: /* sclp ::= */ + case 141: /* orderby_opt ::= */ + case 149: /* groupby_opt ::= */ + case 231: /* exprlist ::= */ + case 237: /* idxlist_opt ::= */ +#line 412 "parse.y" +{yygotominor.yy174 = 0;} +#line 2353 "parse.c" + break; + case 116: /* selcollist ::= sclp expr as */ +#line 413 "parse.y" +{ + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy174,yymsp[-1].minor.yy172,yymsp[0].minor.yy410.n?&yymsp[0].minor.yy410:0); +} +#line 2360 "parse.c" + break; + case 117: /* selcollist ::= sclp STAR */ +#line 416 "parse.y" +{ + Expr *p = sqlite3PExpr(pParse, TK_ALL, 0, 0, 0); + yygotominor.yy174 = sqlite3ExprListAppend(pParse, yymsp[-1].minor.yy174, p, 0); +} +#line 2368 "parse.c" + break; + case 118: /* selcollist ::= sclp nm DOT STAR */ +#line 420 "parse.y" +{ + Expr *pRight = sqlite3PExpr(pParse, TK_ALL, 0, 0, 0); + Expr *pLeft = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-2].minor.yy410); + Expr *pDot = sqlite3PExpr(pParse, TK_DOT, pLeft, pRight, 0); + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-3].minor.yy174, pDot, 0); +} +#line 2378 "parse.c" + break; + case 121: /* as ::= */ +#line 433 "parse.y" +{yygotominor.yy410.n = 0;} +#line 2383 "parse.c" + break; + case 122: /* from ::= */ +#line 445 "parse.y" +{yygotominor.yy373 = sqlite3DbMallocZero(pParse->db, sizeof(*yygotominor.yy373));} +#line 2388 "parse.c" + break; + case 123: /* from ::= FROM seltablist */ +#line 446 "parse.y" +{ + yygotominor.yy373 = yymsp[0].minor.yy373; + sqlite3SrcListShiftJoinType(yygotominor.yy373); +} +#line 2396 "parse.c" + break; + case 124: /* stl_prefix ::= seltablist joinop */ +#line 454 "parse.y" +{ + yygotominor.yy373 = yymsp[-1].minor.yy373; + if( yygotominor.yy373 && yygotominor.yy373->nSrc>0 ) yygotominor.yy373->a[yygotominor.yy373->nSrc-1].jointype = yymsp[0].minor.yy46; +} +#line 2404 "parse.c" + break; + case 125: /* stl_prefix ::= */ +#line 458 "parse.y" +{yygotominor.yy373 = 0;} +#line 2409 "parse.c" + break; + case 126: /* seltablist ::= stl_prefix nm dbnm as on_opt using_opt */ +#line 459 "parse.y" +{ + yygotominor.yy373 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-5].minor.yy373,&yymsp[-4].minor.yy410,&yymsp[-3].minor.yy410,&yymsp[-2].minor.yy410,0,yymsp[-1].minor.yy172,yymsp[0].minor.yy432); +} +#line 2416 "parse.c" + break; + case 127: /* seltablist ::= stl_prefix LP seltablist_paren RP as on_opt using_opt */ +#line 464 "parse.y" +{ + yygotominor.yy373 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy373,0,0,&yymsp[-2].minor.yy410,yymsp[-4].minor.yy219,yymsp[-1].minor.yy172,yymsp[0].minor.yy432); + } +#line 2423 "parse.c" + break; + case 129: /* seltablist_paren ::= seltablist */ +#line 
475 "parse.y" +{ + sqlite3SrcListShiftJoinType(yymsp[0].minor.yy373); + yygotominor.yy219 = sqlite3SelectNew(pParse,0,yymsp[0].minor.yy373,0,0,0,0,0,0,0); + } +#line 2431 "parse.c" + break; + case 130: /* dbnm ::= */ +#line 482 "parse.y" +{yygotominor.yy410.z=0; yygotominor.yy410.n=0;} +#line 2436 "parse.c" + break; + case 132: /* fullname ::= nm dbnm */ +#line 487 "parse.y" +{yygotominor.yy373 = sqlite3SrcListAppend(pParse->db,0,&yymsp[-1].minor.yy410,&yymsp[0].minor.yy410);} +#line 2441 "parse.c" + break; + case 133: /* joinop ::= COMMA|JOIN */ +#line 491 "parse.y" +{ yygotominor.yy46 = JT_INNER; } +#line 2446 "parse.c" + break; + case 134: /* joinop ::= JOIN_KW JOIN */ +#line 492 "parse.y" +{ yygotominor.yy46 = sqlite3JoinType(pParse,&yymsp[-1].minor.yy0,0,0); } +#line 2451 "parse.c" + break; + case 135: /* joinop ::= JOIN_KW nm JOIN */ +#line 493 "parse.y" +{ yygotominor.yy46 = sqlite3JoinType(pParse,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy410,0); } +#line 2456 "parse.c" + break; + case 136: /* joinop ::= JOIN_KW nm nm JOIN */ +#line 495 "parse.y" +{ yygotominor.yy46 = sqlite3JoinType(pParse,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy410,&yymsp[-1].minor.yy410); } +#line 2461 "parse.c" + break; + case 137: /* on_opt ::= ON expr */ + case 145: /* sortitem ::= expr */ + case 152: /* having_opt ::= HAVING expr */ + case 159: /* where_opt ::= WHERE expr */ + case 174: /* expr ::= term */ + case 202: /* escape ::= ESCAPE expr */ + case 226: /* case_else ::= ELSE expr */ + case 228: /* case_operand ::= expr */ +#line 499 "parse.y" +{yygotominor.yy172 = yymsp[0].minor.yy172;} +#line 2473 "parse.c" + break; + case 138: /* on_opt ::= */ + case 151: /* having_opt ::= */ + case 158: /* where_opt ::= */ + case 203: /* escape ::= */ + case 227: /* case_else ::= */ + case 229: /* case_operand ::= */ +#line 500 "parse.y" +{yygotominor.yy172 = 0;} +#line 2483 "parse.c" + break; + case 139: /* using_opt ::= USING LP inscollist RP */ + case 171: /* inscollist_opt ::= LP inscollist RP */ +#line 504 "parse.y" +{yygotominor.yy432 = yymsp[-1].minor.yy432;} +#line 2489 "parse.c" + break; + case 140: /* using_opt ::= */ + case 170: /* inscollist_opt ::= */ +#line 505 "parse.y" +{yygotominor.yy432 = 0;} +#line 2495 "parse.c" + break; + case 142: /* orderby_opt ::= ORDER BY sortlist */ + case 150: /* groupby_opt ::= GROUP BY nexprlist */ + case 230: /* exprlist ::= nexprlist */ +#line 516 "parse.y" +{yygotominor.yy174 = yymsp[0].minor.yy174;} +#line 2502 "parse.c" + break; + case 143: /* sortlist ::= sortlist COMMA sortitem sortorder */ +#line 517 "parse.y" +{ + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-3].minor.yy174,yymsp[-1].minor.yy172,0); + if( yygotominor.yy174 ) yygotominor.yy174->a[yygotominor.yy174->nExpr-1].sortOrder = yymsp[0].minor.yy46; +} +#line 2510 "parse.c" + break; + case 144: /* sortlist ::= sortitem sortorder */ +#line 521 "parse.y" +{ + yygotominor.yy174 = sqlite3ExprListAppend(pParse,0,yymsp[-1].minor.yy172,0); + if( yygotominor.yy174 && yygotominor.yy174->a ) yygotominor.yy174->a[0].sortOrder = yymsp[0].minor.yy46; +} +#line 2518 "parse.c" + break; + case 146: /* sortorder ::= ASC */ + case 148: /* sortorder ::= */ +#line 529 "parse.y" +{yygotominor.yy46 = SQLITE_SO_ASC;} +#line 2524 "parse.c" + break; + case 147: /* sortorder ::= DESC */ +#line 530 "parse.y" +{yygotominor.yy46 = SQLITE_SO_DESC;} +#line 2529 "parse.c" + break; + case 153: /* limit_opt ::= */ +#line 556 "parse.y" +{yygotominor.yy234.pLimit = 0; yygotominor.yy234.pOffset = 0;} +#line 2534 "parse.c" + break; + 
case 154: /* limit_opt ::= LIMIT expr */ +#line 557 "parse.y" +{yygotominor.yy234.pLimit = yymsp[0].minor.yy172; yygotominor.yy234.pOffset = 0;} +#line 2539 "parse.c" + break; + case 155: /* limit_opt ::= LIMIT expr OFFSET expr */ +#line 559 "parse.y" +{yygotominor.yy234.pLimit = yymsp[-2].minor.yy172; yygotominor.yy234.pOffset = yymsp[0].minor.yy172;} +#line 2544 "parse.c" + break; + case 156: /* limit_opt ::= LIMIT expr COMMA expr */ +#line 561 "parse.y" +{yygotominor.yy234.pOffset = yymsp[-2].minor.yy172; yygotominor.yy234.pLimit = yymsp[0].minor.yy172;} +#line 2549 "parse.c" + break; + case 157: /* cmd ::= DELETE FROM fullname where_opt */ +#line 565 "parse.y" +{sqlite3DeleteFrom(pParse,yymsp[-1].minor.yy373,yymsp[0].minor.yy172);} +#line 2554 "parse.c" + break; + case 160: /* cmd ::= UPDATE orconf fullname SET setlist where_opt */ +#line 575 "parse.y" +{ + sqlite3ExprListCheckLength(pParse,yymsp[-1].minor.yy174,SQLITE_MAX_COLUMN,"set list"); + sqlite3Update(pParse,yymsp[-3].minor.yy373,yymsp[-1].minor.yy174,yymsp[0].minor.yy172,yymsp[-4].minor.yy46); +} +#line 2562 "parse.c" + break; + case 161: /* setlist ::= setlist COMMA nm EQ expr */ +#line 584 "parse.y" +{yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-4].minor.yy174,yymsp[0].minor.yy172,&yymsp[-2].minor.yy410);} +#line 2567 "parse.c" + break; + case 162: /* setlist ::= nm EQ expr */ +#line 586 "parse.y" +{yygotominor.yy174 = sqlite3ExprListAppend(pParse,0,yymsp[0].minor.yy172,&yymsp[-2].minor.yy410);} +#line 2572 "parse.c" + break; + case 163: /* cmd ::= insert_cmd INTO fullname inscollist_opt VALUES LP itemlist RP */ +#line 592 "parse.y" +{sqlite3Insert(pParse, yymsp[-5].minor.yy373, yymsp[-1].minor.yy174, 0, yymsp[-4].minor.yy432, yymsp[-7].minor.yy46);} +#line 2577 "parse.c" + break; + case 164: /* cmd ::= insert_cmd INTO fullname inscollist_opt select */ +#line 594 "parse.y" +{sqlite3Insert(pParse, yymsp[-2].minor.yy373, 0, yymsp[0].minor.yy219, yymsp[-1].minor.yy432, yymsp[-4].minor.yy46);} +#line 2582 "parse.c" + break; + case 165: /* cmd ::= insert_cmd INTO fullname inscollist_opt DEFAULT VALUES */ +#line 596 "parse.y" +{sqlite3Insert(pParse, yymsp[-3].minor.yy373, 0, 0, yymsp[-2].minor.yy432, yymsp[-5].minor.yy46);} +#line 2587 "parse.c" + break; + case 168: /* itemlist ::= itemlist COMMA expr */ + case 232: /* nexprlist ::= nexprlist COMMA expr */ +#line 607 "parse.y" +{yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy174,yymsp[0].minor.yy172,0);} +#line 2593 "parse.c" + break; + case 169: /* itemlist ::= expr */ + case 233: /* nexprlist ::= expr */ +#line 609 "parse.y" +{yygotominor.yy174 = sqlite3ExprListAppend(pParse,0,yymsp[0].minor.yy172,0);} +#line 2599 "parse.c" + break; + case 172: /* inscollist ::= inscollist COMMA nm */ +#line 619 "parse.y" +{yygotominor.yy432 = sqlite3IdListAppend(pParse->db,yymsp[-2].minor.yy432,&yymsp[0].minor.yy410);} +#line 2604 "parse.c" + break; + case 173: /* inscollist ::= nm */ +#line 621 "parse.y" +{yygotominor.yy432 = sqlite3IdListAppend(pParse->db,0,&yymsp[0].minor.yy410);} +#line 2609 "parse.c" + break; + case 175: /* expr ::= LP expr RP */ +#line 632 "parse.y" +{yygotominor.yy172 = yymsp[-1].minor.yy172; sqlite3ExprSpan(yygotominor.yy172,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0); } +#line 2614 "parse.c" + break; + case 176: /* term ::= NULL */ + case 181: /* term ::= INTEGER|FLOAT|BLOB */ + case 182: /* term ::= STRING */ +#line 633 "parse.y" +{yygotominor.yy172 = sqlite3PExpr(pParse, yymsp[0].major, 0, 0, &yymsp[0].minor.yy0);} +#line 2621 "parse.c" + 
break; + case 177: /* expr ::= ID */ + case 178: /* expr ::= JOIN_KW */ +#line 634 "parse.y" +{yygotominor.yy172 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[0].minor.yy0);} +#line 2627 "parse.c" + break; + case 179: /* expr ::= nm DOT nm */ +#line 636 "parse.y" +{ + Expr *temp1 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-2].minor.yy410); + Expr *temp2 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[0].minor.yy410); + yygotominor.yy172 = sqlite3PExpr(pParse, TK_DOT, temp1, temp2, 0); +} +#line 2636 "parse.c" + break; + case 180: /* expr ::= nm DOT nm DOT nm */ +#line 641 "parse.y" +{ + Expr *temp1 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-4].minor.yy410); + Expr *temp2 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-2].minor.yy410); + Expr *temp3 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[0].minor.yy410); + Expr *temp4 = sqlite3PExpr(pParse, TK_DOT, temp2, temp3, 0); + yygotominor.yy172 = sqlite3PExpr(pParse, TK_DOT, temp1, temp4, 0); +} +#line 2647 "parse.c" + break; + case 183: /* expr ::= REGISTER */ +#line 650 "parse.y" +{yygotominor.yy172 = sqlite3RegisterExpr(pParse, &yymsp[0].minor.yy0);} +#line 2652 "parse.c" + break; + case 184: /* expr ::= VARIABLE */ +#line 651 "parse.y" +{ + Token *pToken = &yymsp[0].minor.yy0; + Expr *pExpr = yygotominor.yy172 = sqlite3PExpr(pParse, TK_VARIABLE, 0, 0, pToken); + sqlite3ExprAssignVarNumber(pParse, pExpr); +} +#line 2661 "parse.c" + break; + case 185: /* expr ::= expr COLLATE ids */ +#line 656 "parse.y" +{ + yygotominor.yy172 = sqlite3ExprSetColl(pParse, yymsp[-2].minor.yy172, &yymsp[0].minor.yy410); +} +#line 2668 "parse.c" + break; + case 186: /* expr ::= CAST LP expr AS typetoken RP */ +#line 660 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_CAST, yymsp[-3].minor.yy172, 0, &yymsp[-1].minor.yy410); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-5].minor.yy0,&yymsp[0].minor.yy0); +} +#line 2676 "parse.c" + break; + case 187: /* expr ::= ID LP distinct exprlist RP */ +#line 665 "parse.y" +{ + if( yymsp[-1].minor.yy174 && yymsp[-1].minor.yy174->nExpr>SQLITE_MAX_FUNCTION_ARG ){ + sqlite3ErrorMsg(pParse, "too many arguments on function %T", &yymsp[-4].minor.yy0); + } + yygotominor.yy172 = sqlite3ExprFunction(pParse, yymsp[-1].minor.yy174, &yymsp[-4].minor.yy0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-4].minor.yy0,&yymsp[0].minor.yy0); + if( yymsp[-2].minor.yy46 && yygotominor.yy172 ){ + yygotominor.yy172->flags |= EP_Distinct; + } +} +#line 2690 "parse.c" + break; + case 188: /* expr ::= ID LP STAR RP */ +#line 675 "parse.y" +{ + yygotominor.yy172 = sqlite3ExprFunction(pParse, 0, &yymsp[-3].minor.yy0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-3].minor.yy0,&yymsp[0].minor.yy0); +} +#line 2698 "parse.c" + break; + case 189: /* term ::= CTIME_KW */ +#line 679 "parse.y" +{ + /* The CURRENT_TIME, CURRENT_DATE, and CURRENT_TIMESTAMP values are + ** treated as functions that return constants */ + yygotominor.yy172 = sqlite3ExprFunction(pParse, 0,&yymsp[0].minor.yy0); + if( yygotominor.yy172 ){ + yygotominor.yy172->op = TK_CONST_FUNC; + yygotominor.yy172->span = yymsp[0].minor.yy0; + } +} +#line 2711 "parse.c" + break; + case 190: /* expr ::= expr AND expr */ + case 191: /* expr ::= expr OR expr */ + case 192: /* expr ::= expr LT|GT|GE|LE expr */ + case 193: /* expr ::= expr EQ|NE expr */ + case 194: /* expr ::= expr BITAND|BITOR|LSHIFT|RSHIFT expr */ + case 195: /* expr ::= expr PLUS|MINUS expr */ + case 196: /* expr ::= expr STAR|SLASH|REM expr */ + case 197: /* expr ::= expr CONCAT expr */ +#line 688 "parse.y" +{yygotominor.yy172 = 
sqlite3PExpr(pParse,yymsp[-1].major,yymsp[-2].minor.yy172,yymsp[0].minor.yy172,0);} +#line 2723 "parse.c" + break; + case 198: /* likeop ::= LIKE_KW */ + case 200: /* likeop ::= MATCH */ +#line 700 "parse.y" +{yygotominor.yy72.eOperator = yymsp[0].minor.yy0; yygotominor.yy72.not = 0;} +#line 2729 "parse.c" + break; + case 199: /* likeop ::= NOT LIKE_KW */ + case 201: /* likeop ::= NOT MATCH */ +#line 701 "parse.y" +{yygotominor.yy72.eOperator = yymsp[0].minor.yy0; yygotominor.yy72.not = 1;} +#line 2735 "parse.c" + break; + case 204: /* expr ::= expr likeop expr escape */ +#line 708 "parse.y" +{ + ExprList *pList; + pList = sqlite3ExprListAppend(pParse,0, yymsp[-1].minor.yy172, 0); + pList = sqlite3ExprListAppend(pParse,pList, yymsp[-3].minor.yy172, 0); + if( yymsp[0].minor.yy172 ){ + pList = sqlite3ExprListAppend(pParse,pList, yymsp[0].minor.yy172, 0); + } + yygotominor.yy172 = sqlite3ExprFunction(pParse, pList, &yymsp[-2].minor.yy72.eOperator); + if( yymsp[-2].minor.yy72.not ) yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172, &yymsp[-3].minor.yy172->span, &yymsp[-1].minor.yy172->span); + if( yygotominor.yy172 ) yygotominor.yy172->flags |= EP_InfixFunc; +} +#line 2751 "parse.c" + break; + case 205: /* expr ::= expr ISNULL|NOTNULL */ +#line 721 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, yymsp[0].major, yymsp[-1].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-1].minor.yy172->span,&yymsp[0].minor.yy0); +} +#line 2759 "parse.c" + break; + case 206: /* expr ::= expr IS NULL */ +#line 725 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_ISNULL, yymsp[-2].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-2].minor.yy172->span,&yymsp[0].minor.yy0); +} +#line 2767 "parse.c" + break; + case 207: /* expr ::= expr NOT NULL */ +#line 729 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOTNULL, yymsp[-2].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-2].minor.yy172->span,&yymsp[0].minor.yy0); +} +#line 2775 "parse.c" + break; + case 208: /* expr ::= expr IS NOT NULL */ +#line 733 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOTNULL, yymsp[-3].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-3].minor.yy172->span,&yymsp[0].minor.yy0); +} +#line 2783 "parse.c" + break; + case 209: /* expr ::= NOT expr */ + case 210: /* expr ::= BITNOT expr */ +#line 737 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, yymsp[-1].major, yymsp[0].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy172->span); +} +#line 2792 "parse.c" + break; + case 211: /* expr ::= MINUS expr */ +#line 745 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_UMINUS, yymsp[0].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy172->span); +} +#line 2800 "parse.c" + break; + case 212: /* expr ::= PLUS expr */ +#line 749 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_UPLUS, yymsp[0].minor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy172->span); +} +#line 2808 "parse.c" + break; + case 215: /* expr ::= expr between_op expr AND expr */ +#line 756 "parse.y" +{ + ExprList *pList = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy172, 0); + pList = sqlite3ExprListAppend(pParse,pList, yymsp[0].minor.yy172, 0); + yygotominor.yy172 = sqlite3PExpr(pParse, TK_BETWEEN, yymsp[-4].minor.yy172, 0, 0); + if( yygotominor.yy172 ){ + 
yygotominor.yy172->pList = pList; + }else{ + sqlite3ExprListDelete(pList); + } + if( yymsp[-3].minor.yy46 ) yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-4].minor.yy172->span,&yymsp[0].minor.yy172->span); +} +#line 2824 "parse.c" + break; + case 218: /* expr ::= expr in_op LP exprlist RP */ +#line 772 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy172, 0, 0); + if( yygotominor.yy172 ){ + yygotominor.yy172->pList = yymsp[-1].minor.yy174; + sqlite3ExprSetHeight(yygotominor.yy172); + }else{ + sqlite3ExprListDelete(yymsp[-1].minor.yy174); + } + if( yymsp[-3].minor.yy46 ) yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-4].minor.yy172->span,&yymsp[0].minor.yy0); + } +#line 2839 "parse.c" + break; + case 219: /* expr ::= LP select RP */ +#line 783 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_SELECT, 0, 0, 0); + if( yygotominor.yy172 ){ + yygotominor.yy172->pSelect = yymsp[-1].minor.yy219; + sqlite3ExprSetHeight(yygotominor.yy172); + }else{ + sqlite3SelectDelete(yymsp[-1].minor.yy219); + } + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0); + } +#line 2853 "parse.c" + break; + case 220: /* expr ::= expr in_op LP select RP */ +#line 793 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy172, 0, 0); + if( yygotominor.yy172 ){ + yygotominor.yy172->pSelect = yymsp[-1].minor.yy219; + sqlite3ExprSetHeight(yygotominor.yy172); + }else{ + sqlite3SelectDelete(yymsp[-1].minor.yy219); + } + if( yymsp[-3].minor.yy46 ) yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-4].minor.yy172->span,&yymsp[0].minor.yy0); + } +#line 2868 "parse.c" + break; + case 221: /* expr ::= expr in_op nm dbnm */ +#line 804 "parse.y" +{ + SrcList *pSrc = sqlite3SrcListAppend(pParse->db, 0,&yymsp[-1].minor.yy410,&yymsp[0].minor.yy410); + yygotominor.yy172 = sqlite3PExpr(pParse, TK_IN, yymsp[-3].minor.yy172, 0, 0); + if( yygotominor.yy172 ){ + yygotominor.yy172->pSelect = sqlite3SelectNew(pParse, 0,pSrc,0,0,0,0,0,0,0); + sqlite3ExprSetHeight(yygotominor.yy172); + }else{ + sqlite3SrcListDelete(pSrc); + } + if( yymsp[-2].minor.yy46 ) yygotominor.yy172 = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy172, 0, 0); + sqlite3ExprSpan(yygotominor.yy172,&yymsp[-3].minor.yy172->span,yymsp[0].minor.yy410.z?&yymsp[0].minor.yy410:&yymsp[-1].minor.yy410); + } +#line 2884 "parse.c" + break; + case 222: /* expr ::= EXISTS LP select RP */ +#line 816 "parse.y" +{ + Expr *p = yygotominor.yy172 = sqlite3PExpr(pParse, TK_EXISTS, 0, 0, 0); + if( p ){ + p->pSelect = yymsp[-1].minor.yy219; + sqlite3ExprSpan(p,&yymsp[-3].minor.yy0,&yymsp[0].minor.yy0); + sqlite3ExprSetHeight(yygotominor.yy172); + }else{ + sqlite3SelectDelete(yymsp[-1].minor.yy219); + } + } +#line 2898 "parse.c" + break; + case 223: /* expr ::= CASE case_operand case_exprlist case_else END */ +#line 829 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_CASE, yymsp[-3].minor.yy172, yymsp[-1].minor.yy172, 0); + if( yygotominor.yy172 ){ + yygotominor.yy172->pList = yymsp[-2].minor.yy174; + sqlite3ExprSetHeight(yygotominor.yy172); + }else{ + sqlite3ExprListDelete(yymsp[-2].minor.yy174); + } + sqlite3ExprSpan(yygotominor.yy172, &yymsp[-4].minor.yy0, &yymsp[0].minor.yy0); +} +#line 2912 "parse.c" + break; + case 224: /* case_exprlist ::= case_exprlist WHEN expr THEN expr */ +#line 841 
"parse.y" +{ + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-4].minor.yy174, yymsp[-2].minor.yy172, 0); + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yygotominor.yy174, yymsp[0].minor.yy172, 0); +} +#line 2920 "parse.c" + break; + case 225: /* case_exprlist ::= WHEN expr THEN expr */ +#line 845 "parse.y" +{ + yygotominor.yy174 = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy172, 0); + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yygotominor.yy174, yymsp[0].minor.yy172, 0); +} +#line 2928 "parse.c" + break; + case 234: /* cmd ::= CREATE uniqueflag INDEX ifnotexists nm dbnm ON nm LP idxlist RP */ +#line 874 "parse.y" +{ + sqlite3CreateIndex(pParse, &yymsp[-6].minor.yy410, &yymsp[-5].minor.yy410, + sqlite3SrcListAppend(pParse->db,0,&yymsp[-3].minor.yy410,0), yymsp[-1].minor.yy174, yymsp[-9].minor.yy46, + &yymsp[-10].minor.yy0, &yymsp[0].minor.yy0, SQLITE_SO_ASC, yymsp[-7].minor.yy46); +} +#line 2937 "parse.c" + break; + case 235: /* uniqueflag ::= UNIQUE */ + case 282: /* raisetype ::= ABORT */ +#line 881 "parse.y" +{yygotominor.yy46 = OE_Abort;} +#line 2943 "parse.c" + break; + case 236: /* uniqueflag ::= */ +#line 882 "parse.y" +{yygotominor.yy46 = OE_None;} +#line 2948 "parse.c" + break; + case 239: /* idxlist ::= idxlist COMMA idxitem collate sortorder */ +#line 892 "parse.y" +{ + Expr *p = 0; + if( yymsp[-1].minor.yy410.n>0 ){ + p = sqlite3PExpr(pParse, TK_COLUMN, 0, 0, 0); + sqlite3ExprSetColl(pParse, p, &yymsp[-1].minor.yy410); + } + yygotominor.yy174 = sqlite3ExprListAppend(pParse,yymsp[-4].minor.yy174, p, &yymsp[-2].minor.yy410); + sqlite3ExprListCheckLength(pParse, yygotominor.yy174, SQLITE_MAX_COLUMN, "index"); + if( yygotominor.yy174 ) yygotominor.yy174->a[yygotominor.yy174->nExpr-1].sortOrder = yymsp[0].minor.yy46; +} +#line 2962 "parse.c" + break; + case 240: /* idxlist ::= idxitem collate sortorder */ +#line 902 "parse.y" +{ + Expr *p = 0; + if( yymsp[-1].minor.yy410.n>0 ){ + p = sqlite3PExpr(pParse, TK_COLUMN, 0, 0, 0); + sqlite3ExprSetColl(pParse, p, &yymsp[-1].minor.yy410); + } + yygotominor.yy174 = sqlite3ExprListAppend(pParse,0, p, &yymsp[-2].minor.yy410); + sqlite3ExprListCheckLength(pParse, yygotominor.yy174, SQLITE_MAX_COLUMN, "index"); + if( yygotominor.yy174 ) yygotominor.yy174->a[yygotominor.yy174->nExpr-1].sortOrder = yymsp[0].minor.yy46; +} +#line 2976 "parse.c" + break; + case 242: /* collate ::= */ +#line 915 "parse.y" +{yygotominor.yy410.z = 0; yygotominor.yy410.n = 0;} +#line 2981 "parse.c" + break; + case 244: /* cmd ::= DROP INDEX ifexists fullname */ +#line 921 "parse.y" +{sqlite3DropIndex(pParse, yymsp[0].minor.yy373, yymsp[-1].minor.yy46);} +#line 2986 "parse.c" + break; + case 245: /* cmd ::= VACUUM */ + case 246: /* cmd ::= VACUUM nm */ +#line 927 "parse.y" +{sqlite3Vacuum(pParse);} +#line 2992 "parse.c" + break; + case 247: /* cmd ::= PRAGMA nm dbnm EQ nmnum */ +#line 935 "parse.y" +{sqlite3Pragma(pParse,&yymsp[-3].minor.yy410,&yymsp[-2].minor.yy410,&yymsp[0].minor.yy410,0);} +#line 2997 "parse.c" + break; + case 248: /* cmd ::= PRAGMA nm dbnm EQ ON */ +#line 936 "parse.y" +{sqlite3Pragma(pParse,&yymsp[-3].minor.yy410,&yymsp[-2].minor.yy410,&yymsp[0].minor.yy0,0);} +#line 3002 "parse.c" + break; + case 249: /* cmd ::= PRAGMA nm dbnm EQ minus_num */ +#line 937 "parse.y" +{ + sqlite3Pragma(pParse,&yymsp[-3].minor.yy410,&yymsp[-2].minor.yy410,&yymsp[0].minor.yy410,1); +} +#line 3009 "parse.c" + break; + case 250: /* cmd ::= PRAGMA nm dbnm LP nmnum RP */ +#line 940 "parse.y" 
+{sqlite3Pragma(pParse,&yymsp[-4].minor.yy410,&yymsp[-3].minor.yy410,&yymsp[-1].minor.yy410,0);} +#line 3014 "parse.c" + break; + case 251: /* cmd ::= PRAGMA nm dbnm */ +#line 941 "parse.y" +{sqlite3Pragma(pParse,&yymsp[-1].minor.yy410,&yymsp[0].minor.yy410,0,0);} +#line 3019 "parse.c" + break; + case 259: /* cmd ::= CREATE trigger_decl BEGIN trigger_cmd_list END */ +#line 955 "parse.y" +{ + Token all; + all.z = yymsp[-3].minor.yy410.z; + all.n = (yymsp[0].minor.yy0.z - yymsp[-3].minor.yy410.z) + yymsp[0].minor.yy0.n; + sqlite3FinishTrigger(pParse, yymsp[-1].minor.yy243, &all); +} +#line 3029 "parse.c" + break; + case 260: /* trigger_decl ::= temp TRIGGER ifnotexists nm dbnm trigger_time trigger_event ON fullname foreach_clause when_clause */ +#line 964 "parse.y" +{ + sqlite3BeginTrigger(pParse, &yymsp[-7].minor.yy410, &yymsp[-6].minor.yy410, yymsp[-5].minor.yy46, yymsp[-4].minor.yy370.a, yymsp[-4].minor.yy370.b, yymsp[-2].minor.yy373, yymsp[0].minor.yy172, yymsp[-10].minor.yy46, yymsp[-8].minor.yy46); + yygotominor.yy410 = (yymsp[-6].minor.yy410.n==0?yymsp[-7].minor.yy410:yymsp[-6].minor.yy410); +} +#line 3037 "parse.c" + break; + case 261: /* trigger_time ::= BEFORE */ + case 264: /* trigger_time ::= */ +#line 970 "parse.y" +{ yygotominor.yy46 = TK_BEFORE; } +#line 3043 "parse.c" + break; + case 262: /* trigger_time ::= AFTER */ +#line 971 "parse.y" +{ yygotominor.yy46 = TK_AFTER; } +#line 3048 "parse.c" + break; + case 263: /* trigger_time ::= INSTEAD OF */ +#line 972 "parse.y" +{ yygotominor.yy46 = TK_INSTEAD;} +#line 3053 "parse.c" + break; + case 265: /* trigger_event ::= DELETE|INSERT */ + case 266: /* trigger_event ::= UPDATE */ +#line 977 "parse.y" +{yygotominor.yy370.a = yymsp[0].major; yygotominor.yy370.b = 0;} +#line 3059 "parse.c" + break; + case 267: /* trigger_event ::= UPDATE OF inscollist */ +#line 979 "parse.y" +{yygotominor.yy370.a = TK_UPDATE; yygotominor.yy370.b = yymsp[0].minor.yy432;} +#line 3064 "parse.c" + break; + case 270: /* when_clause ::= */ + case 287: /* key_opt ::= */ +#line 986 "parse.y" +{ yygotominor.yy172 = 0; } +#line 3070 "parse.c" + break; + case 271: /* when_clause ::= WHEN expr */ + case 288: /* key_opt ::= KEY expr */ +#line 987 "parse.y" +{ yygotominor.yy172 = yymsp[0].minor.yy172; } +#line 3076 "parse.c" + break; + case 272: /* trigger_cmd_list ::= trigger_cmd_list trigger_cmd SEMI */ +#line 991 "parse.y" +{ + if( yymsp[-2].minor.yy243 ){ + yymsp[-2].minor.yy243->pLast->pNext = yymsp[-1].minor.yy243; + }else{ + yymsp[-2].minor.yy243 = yymsp[-1].minor.yy243; + } + yymsp[-2].minor.yy243->pLast = yymsp[-1].minor.yy243; + yygotominor.yy243 = yymsp[-2].minor.yy243; +} +#line 3089 "parse.c" + break; + case 273: /* trigger_cmd_list ::= */ +#line 1000 "parse.y" +{ yygotominor.yy243 = 0; } +#line 3094 "parse.c" + break; + case 274: /* trigger_cmd ::= UPDATE orconf nm SET setlist where_opt */ +#line 1006 "parse.y" +{ yygotominor.yy243 = sqlite3TriggerUpdateStep(pParse->db, &yymsp[-3].minor.yy410, yymsp[-1].minor.yy174, yymsp[0].minor.yy172, yymsp[-4].minor.yy46); } +#line 3099 "parse.c" + break; + case 275: /* trigger_cmd ::= insert_cmd INTO nm inscollist_opt VALUES LP itemlist RP */ +#line 1011 "parse.y" +{yygotominor.yy243 = sqlite3TriggerInsertStep(pParse->db, &yymsp[-5].minor.yy410, yymsp[-4].minor.yy432, yymsp[-1].minor.yy174, 0, yymsp[-7].minor.yy46);} +#line 3104 "parse.c" + break; + case 276: /* trigger_cmd ::= insert_cmd INTO nm inscollist_opt select */ +#line 1014 "parse.y" +{yygotominor.yy243 = sqlite3TriggerInsertStep(pParse->db, 
&yymsp[-2].minor.yy410, yymsp[-1].minor.yy432, 0, yymsp[0].minor.yy219, yymsp[-4].minor.yy46);} +#line 3109 "parse.c" + break; + case 277: /* trigger_cmd ::= DELETE FROM nm where_opt */ +#line 1018 "parse.y" +{yygotominor.yy243 = sqlite3TriggerDeleteStep(pParse->db, &yymsp[-1].minor.yy410, yymsp[0].minor.yy172);} +#line 3114 "parse.c" + break; + case 278: /* trigger_cmd ::= select */ +#line 1021 "parse.y" +{yygotominor.yy243 = sqlite3TriggerSelectStep(pParse->db, yymsp[0].minor.yy219); } +#line 3119 "parse.c" + break; + case 279: /* expr ::= RAISE LP IGNORE RP */ +#line 1024 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_RAISE, 0, 0, 0); + if( yygotominor.yy172 ){ + yygotominor.yy172->iColumn = OE_Ignore; + sqlite3ExprSpan(yygotominor.yy172, &yymsp[-3].minor.yy0, &yymsp[0].minor.yy0); + } +} +#line 3130 "parse.c" + break; + case 280: /* expr ::= RAISE LP raisetype COMMA nm RP */ +#line 1031 "parse.y" +{ + yygotominor.yy172 = sqlite3PExpr(pParse, TK_RAISE, 0, 0, &yymsp[-1].minor.yy410); + if( yygotominor.yy172 ) { + yygotominor.yy172->iColumn = yymsp[-3].minor.yy46; + sqlite3ExprSpan(yygotominor.yy172, &yymsp[-5].minor.yy0, &yymsp[0].minor.yy0); + } +} +#line 3141 "parse.c" + break; + case 281: /* raisetype ::= ROLLBACK */ +#line 1041 "parse.y" +{yygotominor.yy46 = OE_Rollback;} +#line 3146 "parse.c" + break; + case 283: /* raisetype ::= FAIL */ +#line 1043 "parse.y" +{yygotominor.yy46 = OE_Fail;} +#line 3151 "parse.c" + break; + case 284: /* cmd ::= DROP TRIGGER ifexists fullname */ +#line 1048 "parse.y" +{ + sqlite3DropTrigger(pParse,yymsp[0].minor.yy373,yymsp[-1].minor.yy46); +} +#line 3158 "parse.c" + break; + case 285: /* cmd ::= ATTACH database_kw_opt expr AS expr key_opt */ +#line 1055 "parse.y" +{ + sqlite3Attach(pParse, yymsp[-3].minor.yy172, yymsp[-1].minor.yy172, yymsp[0].minor.yy172); +} +#line 3165 "parse.c" + break; + case 286: /* cmd ::= DETACH database_kw_opt expr */ +#line 1058 "parse.y" +{ + sqlite3Detach(pParse, yymsp[0].minor.yy172); +} +#line 3172 "parse.c" + break; + case 291: /* cmd ::= REINDEX */ +#line 1073 "parse.y" +{sqlite3Reindex(pParse, 0, 0);} +#line 3177 "parse.c" + break; + case 292: /* cmd ::= REINDEX nm dbnm */ +#line 1074 "parse.y" +{sqlite3Reindex(pParse, &yymsp[-1].minor.yy410, &yymsp[0].minor.yy410);} +#line 3182 "parse.c" + break; + case 293: /* cmd ::= ANALYZE */ +#line 1079 "parse.y" +{sqlite3Analyze(pParse, 0, 0);} +#line 3187 "parse.c" + break; + case 294: /* cmd ::= ANALYZE nm dbnm */ +#line 1080 "parse.y" +{sqlite3Analyze(pParse, &yymsp[-1].minor.yy410, &yymsp[0].minor.yy410);} +#line 3192 "parse.c" + break; + case 295: /* cmd ::= ALTER TABLE fullname RENAME TO nm */ +#line 1085 "parse.y" +{ + sqlite3AlterRenameTable(pParse,yymsp[-3].minor.yy373,&yymsp[0].minor.yy410); +} +#line 3199 "parse.c" + break; + case 296: /* cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column */ +#line 1088 "parse.y" +{ + sqlite3AlterFinishAddColumn(pParse, &yymsp[0].minor.yy410); +} +#line 3206 "parse.c" + break; + case 297: /* add_column_fullname ::= fullname */ +#line 1091 "parse.y" +{ + sqlite3AlterBeginAddColumn(pParse, yymsp[0].minor.yy373); +} +#line 3213 "parse.c" + break; + case 300: /* cmd ::= create_vtab */ +#line 1100 "parse.y" +{sqlite3VtabFinishParse(pParse,0);} +#line 3218 "parse.c" + break; + case 301: /* cmd ::= create_vtab LP vtabarglist RP */ +#line 1101 "parse.y" +{sqlite3VtabFinishParse(pParse,&yymsp[0].minor.yy0);} +#line 3223 "parse.c" + break; + case 302: /* create_vtab ::= CREATE VIRTUAL TABLE nm dbnm USING nm */ +#line 
1102 "parse.y" +{ + sqlite3VtabBeginParse(pParse, &yymsp[-3].minor.yy410, &yymsp[-2].minor.yy410, &yymsp[0].minor.yy410); +} +#line 3230 "parse.c" + break; + case 305: /* vtabarg ::= */ +#line 1107 "parse.y" +{sqlite3VtabArgInit(pParse);} +#line 3235 "parse.c" + break; + case 307: /* vtabargtoken ::= ANY */ + case 308: /* vtabargtoken ::= lp anylist RP */ + case 309: /* lp ::= LP */ + case 311: /* anylist ::= anylist ANY */ +#line 1109 "parse.y" +{sqlite3VtabArgExtend(pParse,&yymsp[0].minor.yy0);} +#line 3243 "parse.c" + break; + }; + yygoto = yyRuleInfo[yyruleno].lhs; + yysize = yyRuleInfo[yyruleno].nrhs; + yypParser->yyidx -= yysize; + yyact = yy_find_reduce_action(yymsp[-yysize].stateno,yygoto); + if( yyact < YYNSTATE ){ +#ifdef NDEBUG + /* If we are not debugging and the reduce action popped at least + ** one element off the stack, then we can push the new element back + ** onto the stack here, and skip the stack overflow test in yy_shift(). + ** That gives a significant speed improvement. */ + if( yysize ){ + yypParser->yyidx++; + yymsp -= yysize-1; + yymsp->stateno = yyact; + yymsp->major = yygoto; + yymsp->minor = yygotominor; + }else +#endif + { + yy_shift(yypParser,yyact,yygoto,&yygotominor); + } + }else{ + assert( yyact == YYNSTATE + YYNRULE + 1 ); + yy_accept(yypParser); + } +} + +/* +** The following code executes when the parse fails +*/ +static void yy_parse_failed( + yyParser *yypParser /* The parser */ +){ + sqlite3ParserARG_FETCH; +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sFail!\n",yyTracePrompt); + } +#endif + while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser); + /* Here code is inserted which will be executed whenever the + ** parser fails */ + sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ +} + +/* +** The following code executes when a syntax error first occurs. +*/ +static void yy_syntax_error( + yyParser *yypParser, /* The parser */ + int yymajor, /* The major type of the error token */ + YYMINORTYPE yyminor /* The minor type of the error token */ +){ + sqlite3ParserARG_FETCH; +#define TOKEN (yyminor.yy0) +#line 34 "parse.y" + + assert( TOKEN.z[0] ); /* The tokenizer always gives us a token */ + sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", &TOKEN); + pParse->parseError = 1; +#line 3307 "parse.c" + sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ +} + +/* +** The following is executed when the parser accepts +*/ +static void yy_accept( + yyParser *yypParser /* The parser */ +){ + sqlite3ParserARG_FETCH; +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sAccept!\n",yyTracePrompt); + } +#endif + while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser); + /* Here code is inserted which will be executed whenever the + ** parser accepts */ + sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ +} + +/* The main parser program. +** The first argument is a pointer to a structure obtained from +** "sqlite3ParserAlloc" which describes the current state of the parser. +** The second argument is the major token number. The third is +** the minor token. The fourth optional argument is whatever the +** user wants (and specified in the grammar) and is available for +** use by the action routines. +** +** Inputs: +**
                  +**
+** <ul>
+** <li> A pointer to the parser (an opaque structure.)
+** <li> The major token number.
+** <li> The minor token number.
+** <li> An option argument of a grammar-specified type.
+** </ul>
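**
** A minimal calling sketch (illustration only; "nextToken" is a hypothetical
** stand-in for the tokenizer, and pParse is the Parse context supplied
** through %extra_argument):
**
**    Token minor;  int major;  const char *zSql = "SELECT 1;";
**    void *pEngine = sqlite3ParserAlloc( malloc );
**    while( (major = nextToken(&zSql, &minor))!=0 ){
**      sqlite3Parser(pEngine, major, minor, pParse);
**    }
**    sqlite3Parser(pEngine, 0, minor, pParse);     // major==0 marks end of input
**    sqlite3ParserFree(pEngine, free);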
                +** +** Outputs: +** None. +*/ +void sqlite3Parser( + void *yyp, /* The parser */ + int yymajor, /* The major token code number */ + sqlite3ParserTOKENTYPE yyminor /* The value for the token */ + sqlite3ParserARG_PDECL /* Optional %extra_argument parameter */ +){ + YYMINORTYPE yyminorunion; + int yyact; /* The parser action. */ + int yyendofinput; /* True if we are at the end of input */ +#ifdef YYERRORSYMBOL + int yyerrorhit = 0; /* True if yymajor has invoked an error */ +#endif + yyParser *yypParser; /* The parser */ + + /* (re)initialize the parser, if necessary */ + yypParser = (yyParser*)yyp; + if( yypParser->yyidx<0 ){ +#if YYSTACKDEPTH<=0 + if( yypParser->yystksz <=0 ){ + memset(&yyminorunion, 0, sizeof(yyminorunion)); + yyStackOverflow(yypParser, &yyminorunion); + return; + } +#endif + yypParser->yyidx = 0; + yypParser->yyerrcnt = -1; + yypParser->yystack[0].stateno = 0; + yypParser->yystack[0].major = 0; + } + yyminorunion.yy0 = yyminor; + yyendofinput = (yymajor==0); + sqlite3ParserARG_STORE; + +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sInput %s\n",yyTracePrompt,yyTokenName[yymajor]); + } +#endif + + do{ + yyact = yy_find_shift_action(yypParser,yymajor); + if( yyactyyerrcnt--; + yymajor = YYNOCODE; + }else if( yyact < YYNSTATE + YYNRULE ){ + yy_reduce(yypParser,yyact-YYNSTATE); + }else{ + assert( yyact == YY_ERROR_ACTION ); +#ifdef YYERRORSYMBOL + int yymx; +#endif +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sSyntax Error!\n",yyTracePrompt); + } +#endif +#ifdef YYERRORSYMBOL + /* A syntax error has occurred. + ** The response to an error depends upon whether or not the + ** grammar defines an error token "ERROR". + ** + ** This is what we do if the grammar does define ERROR: + ** + ** * Call the %syntax_error function. + ** + ** * Begin popping the stack until we enter a state where + ** it is legal to shift the error symbol, then shift + ** the error symbol. + ** + ** * Set the error count to three. + ** + ** * Begin accepting and shifting new tokens. No new error + ** processing will occur until three tokens have been + ** shifted successfully. + ** + */ + if( yypParser->yyerrcnt<0 ){ + yy_syntax_error(yypParser,yymajor,yyminorunion); + } + yymx = yypParser->yystack[yypParser->yyidx].major; + if( yymx==YYERRORSYMBOL || yyerrorhit ){ +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sDiscard input token %s\n", + yyTracePrompt,yyTokenName[yymajor]); + } +#endif + yy_destructor(yymajor,&yyminorunion); + yymajor = YYNOCODE; + }else{ + while( + yypParser->yyidx >= 0 && + yymx != YYERRORSYMBOL && + (yyact = yy_find_reduce_action( + yypParser->yystack[yypParser->yyidx].stateno, + YYERRORSYMBOL)) >= YYNSTATE + ){ + yy_pop_parser_stack(yypParser); + } + if( yypParser->yyidx < 0 || yymajor==0 ){ + yy_destructor(yymajor,&yyminorunion); + yy_parse_failed(yypParser); + yymajor = YYNOCODE; + }else if( yymx!=YYERRORSYMBOL ){ + YYMINORTYPE u2; + u2.YYERRSYMDT = 0; + yy_shift(yypParser,yyact,YYERRORSYMBOL,&u2); + } + } + yypParser->yyerrcnt = 3; + yyerrorhit = 1; +#else /* YYERRORSYMBOL is not defined */ + /* This is what we do if the grammar does not define ERROR: + ** + ** * Report an error message, and throw away the input token. + ** + ** * If the input token is $, then fail the parse. + ** + ** As before, subsequent error messages are suppressed until + ** three input tokens have been successfully shifted. 
+ */ + if( yypParser->yyerrcnt<=0 ){ + yy_syntax_error(yypParser,yymajor,yyminorunion); + } + yypParser->yyerrcnt = 3; + yy_destructor(yymajor,&yyminorunion); + if( yyendofinput ){ + yy_parse_failed(yypParser); + } + yymajor = YYNOCODE; +#endif + } + }while( yymajor!=YYNOCODE && yypParser->yyidx>=0 ); + return; +} Added: external/sqlite-source-3.5.7.x/parse.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/parse.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,152 @@ +#define TK_SEMI 1 +#define TK_EXPLAIN 2 +#define TK_QUERY 3 +#define TK_PLAN 4 +#define TK_BEGIN 5 +#define TK_TRANSACTION 6 +#define TK_DEFERRED 7 +#define TK_IMMEDIATE 8 +#define TK_EXCLUSIVE 9 +#define TK_COMMIT 10 +#define TK_END 11 +#define TK_ROLLBACK 12 +#define TK_CREATE 13 +#define TK_TABLE 14 +#define TK_IF 15 +#define TK_NOT 16 +#define TK_EXISTS 17 +#define TK_TEMP 18 +#define TK_LP 19 +#define TK_RP 20 +#define TK_AS 21 +#define TK_COMMA 22 +#define TK_ID 23 +#define TK_ABORT 24 +#define TK_AFTER 25 +#define TK_ANALYZE 26 +#define TK_ASC 27 +#define TK_ATTACH 28 +#define TK_BEFORE 29 +#define TK_CASCADE 30 +#define TK_CAST 31 +#define TK_CONFLICT 32 +#define TK_DATABASE 33 +#define TK_DESC 34 +#define TK_DETACH 35 +#define TK_EACH 36 +#define TK_FAIL 37 +#define TK_FOR 38 +#define TK_IGNORE 39 +#define TK_INITIALLY 40 +#define TK_INSTEAD 41 +#define TK_LIKE_KW 42 +#define TK_MATCH 43 +#define TK_KEY 44 +#define TK_OF 45 +#define TK_OFFSET 46 +#define TK_PRAGMA 47 +#define TK_RAISE 48 +#define TK_REPLACE 49 +#define TK_RESTRICT 50 +#define TK_ROW 51 +#define TK_TRIGGER 52 +#define TK_VACUUM 53 +#define TK_VIEW 54 +#define TK_VIRTUAL 55 +#define TK_REINDEX 56 +#define TK_RENAME 57 +#define TK_CTIME_KW 58 +#define TK_ANY 59 +#define TK_OR 60 +#define TK_AND 61 +#define TK_IS 62 +#define TK_BETWEEN 63 +#define TK_IN 64 +#define TK_ISNULL 65 +#define TK_NOTNULL 66 +#define TK_NE 67 +#define TK_EQ 68 +#define TK_GT 69 +#define TK_LE 70 +#define TK_LT 71 +#define TK_GE 72 +#define TK_ESCAPE 73 +#define TK_BITAND 74 +#define TK_BITOR 75 +#define TK_LSHIFT 76 +#define TK_RSHIFT 77 +#define TK_PLUS 78 +#define TK_MINUS 79 +#define TK_STAR 80 +#define TK_SLASH 81 +#define TK_REM 82 +#define TK_CONCAT 83 +#define TK_COLLATE 84 +#define TK_UMINUS 85 +#define TK_UPLUS 86 +#define TK_BITNOT 87 +#define TK_STRING 88 +#define TK_JOIN_KW 89 +#define TK_CONSTRAINT 90 +#define TK_DEFAULT 91 +#define TK_NULL 92 +#define TK_PRIMARY 93 +#define TK_UNIQUE 94 +#define TK_CHECK 95 +#define TK_REFERENCES 96 +#define TK_AUTOINCR 97 +#define TK_ON 98 +#define TK_DELETE 99 +#define TK_UPDATE 100 +#define TK_INSERT 101 +#define TK_SET 102 +#define TK_DEFERRABLE 103 +#define TK_FOREIGN 104 +#define TK_DROP 105 +#define TK_UNION 106 +#define TK_ALL 107 +#define TK_EXCEPT 108 +#define TK_INTERSECT 109 +#define TK_SELECT 110 +#define TK_DISTINCT 111 +#define TK_DOT 112 +#define TK_FROM 113 +#define TK_JOIN 114 +#define TK_USING 115 +#define TK_ORDER 116 +#define TK_BY 117 +#define TK_GROUP 118 +#define TK_HAVING 119 +#define TK_LIMIT 120 +#define TK_WHERE 121 +#define TK_INTO 122 +#define TK_VALUES 123 +#define TK_INTEGER 124 +#define TK_FLOAT 125 +#define TK_BLOB 126 +#define TK_REGISTER 127 +#define TK_VARIABLE 128 +#define TK_CASE 129 +#define TK_WHEN 130 +#define TK_THEN 131 +#define TK_ELSE 132 +#define TK_INDEX 133 +#define TK_ALTER 134 +#define TK_TO 135 +#define TK_ADD 136 +#define TK_COLUMNKW 137 +#define TK_TO_TEXT 138 +#define TK_TO_BLOB 139 +#define TK_TO_NUMERIC 
140 +#define TK_TO_INT 141 +#define TK_TO_REAL 142 +#define TK_END_OF_FILE 143 +#define TK_ILLEGAL 144 +#define TK_SPACE 145 +#define TK_UNCLOSED_STRING 146 +#define TK_COMMENT 147 +#define TK_FUNCTION 148 +#define TK_COLUMN 149 +#define TK_AGG_FUNCTION 150 +#define TK_AGG_COLUMN 151 +#define TK_CONST_FUNC 152 Added: external/sqlite-source-3.5.7.x/pragma.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/pragma.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,1228 @@ +/* +** 2003 April 6 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code used to implement the PRAGMA command. +** +** $Id: pragma.c,v 1.170 2008/02/13 18:25:27 danielk1977 Exp $ +*/ +#include "sqliteInt.h" +#include + +/* Ignore this whole file if pragmas are disabled +*/ +#if !defined(SQLITE_OMIT_PRAGMA) && !defined(SQLITE_OMIT_PARSER) + +/* +** Interpret the given string as a safety level. Return 0 for OFF, +** 1 for ON or NORMAL and 2 for FULL. Return 1 for an empty or +** unrecognized string argument. +** +** Note that the values returned are one less that the values that +** should be passed into sqlite3BtreeSetSafetyLevel(). The is done +** to support legacy SQL code. The safety level used to be boolean +** and older scripts may have used numbers 0 for OFF and 1 for ON. +*/ +static int getSafetyLevel(const char *z){ + /* 123456789 123456789 */ + static const char zText[] = "onoffalseyestruefull"; + static const u8 iOffset[] = {0, 1, 2, 4, 9, 12, 16}; + static const u8 iLength[] = {2, 2, 3, 5, 3, 4, 4}; + static const u8 iValue[] = {1, 0, 0, 0, 1, 1, 2}; + int i, n; + if( isdigit(*z) ){ + return atoi(z); + } + n = strlen(z); + for(i=0; i=0&&i<=2)?i:0); +} +#endif /* ifndef SQLITE_OMIT_AUTOVACUUM */ + +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +/* +** Interpret the given string as a temp db location. Return 1 for file +** backed temporary databases, 2 for the Red-Black tree in memory database +** and 0 to use the compile-time default. +*/ +static int getTempStore(const char *z){ + if( z[0]>='0' && z[0]<='2' ){ + return z[0] - '0'; + }else if( sqlite3StrICmp(z, "file")==0 ){ + return 1; + }else if( sqlite3StrICmp(z, "memory")==0 ){ + return 2; + }else{ + return 0; + } +} +#endif /* SQLITE_PAGER_PRAGMAS */ + +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +/* +** Invalidate temp storage, either when the temp storage is changed +** from default, or when 'file' and the temp_store_directory has changed +*/ +static int invalidateTempStorage(Parse *pParse){ + sqlite3 *db = pParse->db; + if( db->aDb[1].pBt!=0 ){ + if( !db->autoCommit ){ + sqlite3ErrorMsg(pParse, "temporary storage cannot be changed " + "from within a transaction"); + return SQLITE_ERROR; + } + sqlite3BtreeClose(db->aDb[1].pBt); + db->aDb[1].pBt = 0; + sqlite3ResetInternalSchema(db, 0); + } + return SQLITE_OK; +} +#endif /* SQLITE_PAGER_PRAGMAS */ + +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +/* +** If the TEMP database is open, close it and mark the database schema +** as needing reloading. This must be done when using the TEMP_STORE +** or DEFAULT_TEMP_STORE pragmas. 
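**
** For illustration, the settings accepted by getTempStore() above map as
** follows (a sketch of typical usage, not an exhaustive list):
**
**    PRAGMA temp_store = 0;         -- use the compile-time default
**    PRAGMA temp_store = file;      -- same as 1: file-backed temporary tables
**    PRAGMA temp_store = memory;    -- same as 2: in-memory temporary tables
**
** Any TEMP database that is already open is closed first, so the change
** takes effect for subsequently created temporary tables.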
+*/ +static int changeTempStorage(Parse *pParse, const char *zStorageType){ + int ts = getTempStore(zStorageType); + sqlite3 *db = pParse->db; + if( db->temp_store==ts ) return SQLITE_OK; + if( invalidateTempStorage( pParse ) != SQLITE_OK ){ + return SQLITE_ERROR; + } + db->temp_store = ts; + return SQLITE_OK; +} +#endif /* SQLITE_PAGER_PRAGMAS */ + +/* +** Generate code to return a single integer value. +*/ +static void returnSingleInt(Parse *pParse, const char *zLabel, int value){ + Vdbe *v = sqlite3GetVdbe(pParse); + int mem = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Integer, value, mem); + if( pParse->explain==0 ){ + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLabel, P4_STATIC); + } + sqlite3VdbeAddOp2(v, OP_ResultRow, mem, 1); +} + +#ifndef SQLITE_OMIT_FLAG_PRAGMAS +/* +** Check to see if zRight and zLeft refer to a pragma that queries +** or changes one of the flags in db->flags. Return 1 if so and 0 if not. +** Also, implement the pragma. +*/ +static int flagPragma(Parse *pParse, const char *zLeft, const char *zRight){ + static const struct sPragmaType { + const char *zName; /* Name of the pragma */ + int mask; /* Mask for the db->flags value */ + } aPragma[] = { + { "full_column_names", SQLITE_FullColNames }, + { "short_column_names", SQLITE_ShortColNames }, + { "count_changes", SQLITE_CountRows }, + { "empty_result_callbacks", SQLITE_NullCallback }, + { "legacy_file_format", SQLITE_LegacyFileFmt }, + { "fullfsync", SQLITE_FullFSync }, +#ifdef SQLITE_DEBUG + { "sql_trace", SQLITE_SqlTrace }, + { "vdbe_listing", SQLITE_VdbeListing }, + { "vdbe_trace", SQLITE_VdbeTrace }, +#endif +#ifndef SQLITE_OMIT_CHECK + { "ignore_check_constraints", SQLITE_IgnoreChecks }, +#endif + /* The following is VERY experimental */ + { "writable_schema", SQLITE_WriteSchema|SQLITE_RecoveryMode }, + { "omit_readlock", SQLITE_NoReadlock }, + + /* TODO: Maybe it shouldn't be possible to change the ReadUncommitted + ** flag if there are any active statements. */ + { "read_uncommitted", SQLITE_ReadUncommitted }, + }; + int i; + const struct sPragmaType *p; + for(i=0, p=aPragma; izName)==0 ){ + sqlite3 *db = pParse->db; + Vdbe *v; + v = sqlite3GetVdbe(pParse); + if( v ){ + if( zRight==0 ){ + returnSingleInt(pParse, p->zName, (db->flags & p->mask)!=0 ); + }else{ + if( getBoolean(zRight) ){ + db->flags |= p->mask; + }else{ + db->flags &= ~p->mask; + } + + /* Many of the flag-pragmas modify the code generated by the SQL + ** compiler (eg. count_changes). So add an opcode to expire all + ** compiled SQL statements after modifying a pragma value. + */ + sqlite3VdbeAddOp2(v, OP_Expire, 0, 0); + } + } + + return 1; + } + } + return 0; +} +#endif /* SQLITE_OMIT_FLAG_PRAGMAS */ + +/* +** Process a pragma statement. +** +** Pragmas are of this form: +** +** PRAGMA [database.]id [= value] +** +** The identifier might also be a string. The value is a string, and +** identifier, or a number. If minusFlag is true, then the value is +** a number that was preceded by a minus sign. +** +** If the left side is "database.id" then pId1 is the database name +** and pId2 is the id. If the left side is just "id" then pId1 is the +** id and pId2 is any empty string. 
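**
** For example (illustrative statements only, showing how the arguments map):
**
**    PRAGMA cache_size = 2000;
**        pId1 = "cache_size", pId2 = empty, pValue = "2000", minusFlag = 0
**
**    PRAGMA main.cache_size = -2000;
**        pId1 = "main", pId2 = "cache_size", pValue = "2000", minusFlag = 1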
+*/ +void sqlite3Pragma( + Parse *pParse, + Token *pId1, /* First part of [database.]id field */ + Token *pId2, /* Second part of [database.]id field, or NULL */ + Token *pValue, /* Token for , or NULL */ + int minusFlag /* True if a '-' sign preceded */ +){ + char *zLeft = 0; /* Nul-terminated UTF-8 string */ + char *zRight = 0; /* Nul-terminated UTF-8 string , or NULL */ + const char *zDb = 0; /* The database name */ + Token *pId; /* Pointer to token */ + int iDb; /* Database index for */ + sqlite3 *db = pParse->db; + Db *pDb; + Vdbe *v = pParse->pVdbe = sqlite3VdbeCreate(db); + if( v==0 ) return; + pParse->nMem = 2; + + /* Interpret the [database.] part of the pragma statement. iDb is the + ** index of the database this pragma is being applied to in db.aDb[]. */ + iDb = sqlite3TwoPartName(pParse, pId1, pId2, &pId); + if( iDb<0 ) return; + pDb = &db->aDb[iDb]; + + /* If the temp database has been explicitly named as part of the + ** pragma, make sure it is open. + */ + if( iDb==1 && sqlite3OpenTempDatabase(pParse) ){ + return; + } + + zLeft = sqlite3NameFromToken(db, pId); + if( !zLeft ) return; + if( minusFlag ){ + zRight = sqlite3MPrintf(db, "-%T", pValue); + }else{ + zRight = sqlite3NameFromToken(db, pValue); + } + + zDb = ((iDb>0)?pDb->zName:0); + if( sqlite3AuthCheck(pParse, SQLITE_PRAGMA, zLeft, zRight, zDb) ){ + goto pragma_out; + } + +#ifndef SQLITE_OMIT_PAGER_PRAGMAS + /* + ** PRAGMA [database.]default_cache_size + ** PRAGMA [database.]default_cache_size=N + ** + ** The first form reports the current persistent setting for the + ** page cache size. The value returned is the maximum number of + ** pages in the page cache. The second form sets both the current + ** page cache size value and the persistent page cache size value + ** stored in the database file. + ** + ** The default cache size is stored in meta-value 2 of page 1 of the + ** database file. The cache size is actually the absolute value of + ** this memory location. The sign of meta-value 2 determines the + ** synchronous setting. A negative value means synchronous is off + ** and a positive value means synchronous is on. + */ + if( sqlite3StrICmp(zLeft,"default_cache_size")==0 ){ + static const VdbeOpList getCacheSize[] = { + { OP_ReadCookie, 0, 1, 2}, /* 0 */ + { OP_IfPos, 1, 6, 0}, + { OP_Integer, 0, 2, 0}, + { OP_Subtract, 1, 2, 1}, + { OP_IfPos, 1, 6, 0}, + { OP_Integer, 0, 1, 0}, /* 5 */ + { OP_ResultRow, 1, 1, 0}, + }; + int addr; + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + sqlite3VdbeUsesBtree(v, iDb); + if( !zRight ){ + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "cache_size", P4_STATIC); + pParse->nMem += 2; + addr = sqlite3VdbeAddOpList(v, ArraySize(getCacheSize), getCacheSize); + sqlite3VdbeChangeP1(v, addr, iDb); + sqlite3VdbeChangeP1(v, addr+5, SQLITE_DEFAULT_CACHE_SIZE); + }else{ + int size = atoi(zRight); + if( size<0 ) size = -size; + sqlite3BeginWriteOperation(pParse, 0, iDb); + sqlite3VdbeAddOp2(v, OP_Integer, size, 1); + sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, 2, 2); + addr = sqlite3VdbeAddOp2(v, OP_IfPos, 2, 0); + sqlite3VdbeAddOp2(v, OP_Integer, -size, 1); + sqlite3VdbeJumpHere(v, addr); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, 2, 1); + pDb->pSchema->cache_size = size; + sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size); + } + }else + + /* + ** PRAGMA [database.]page_size + ** PRAGMA [database.]page_size=N + ** + ** The first form reports the current setting for the + ** database page size in bytes. 
The second form sets the + ** database page size value. The value can only be set if + ** the database has not yet been created. + */ + if( sqlite3StrICmp(zLeft,"page_size")==0 ){ + Btree *pBt = pDb->pBt; + if( !zRight ){ + int size = pBt ? sqlite3BtreeGetPageSize(pBt) : 0; + returnSingleInt(pParse, "page_size", size); + }else{ + /* Malloc may fail when setting the page-size, as there is an internal + ** buffer that the pager module resizes using sqlite3_realloc(). + */ + if( SQLITE_NOMEM==sqlite3BtreeSetPageSize(pBt, atoi(zRight), -1) ){ + db->mallocFailed = 1; + } + } + }else + + /* + ** PRAGMA [database.]max_page_count + ** PRAGMA [database.]max_page_count=N + ** + ** The first form reports the current setting for the + ** maximum number of pages in the database file. The + ** second form attempts to change this setting. Both + ** forms return the current setting. + */ + if( sqlite3StrICmp(zLeft,"max_page_count")==0 ){ + Btree *pBt = pDb->pBt; + int newMax = 0; + if( zRight ){ + newMax = atoi(zRight); + } + if( pBt ){ + newMax = sqlite3BtreeMaxPageCount(pBt, newMax); + } + returnSingleInt(pParse, "max_page_count", newMax); + }else + + /* + ** PRAGMA [database.]locking_mode + ** PRAGMA [database.]locking_mode = (normal|exclusive) + */ + if( sqlite3StrICmp(zLeft,"locking_mode")==0 ){ + const char *zRet = "normal"; + int eMode = getLockingMode(zRight); + + if( pId2->n==0 && eMode==PAGER_LOCKINGMODE_QUERY ){ + /* Simple "PRAGMA locking_mode;" statement. This is a query for + ** the current default locking mode (which may be different to + ** the locking-mode of the main database). + */ + eMode = db->dfltLockMode; + }else{ + Pager *pPager; + if( pId2->n==0 ){ + /* This indicates that no database name was specified as part + ** of the PRAGMA command. In this case the locking-mode must be + ** set on all attached databases, as well as the main db file. + ** + ** Also, the sqlite3.dfltLockMode variable is set so that + ** any subsequently attached databases also use the specified + ** locking mode. + */ + int ii; + assert(pDb==&db->aDb[0]); + for(ii=2; iinDb; ii++){ + pPager = sqlite3BtreePager(db->aDb[ii].pBt); + sqlite3PagerLockingMode(pPager, eMode); + } + db->dfltLockMode = eMode; + } + pPager = sqlite3BtreePager(pDb->pBt); + eMode = sqlite3PagerLockingMode(pPager, eMode); + } + + assert(eMode==PAGER_LOCKINGMODE_NORMAL||eMode==PAGER_LOCKINGMODE_EXCLUSIVE); + if( eMode==PAGER_LOCKINGMODE_EXCLUSIVE ){ + zRet = "exclusive"; + } + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "locking_mode", P4_STATIC); + sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, zRet, 0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); + }else +#endif /* SQLITE_OMIT_PAGER_PRAGMAS */ + + /* + ** PRAGMA [database.]auto_vacuum + ** PRAGMA [database.]auto_vacuum=N + ** + ** Get or set the (boolean) value of the database 'auto-vacuum' parameter. + */ +#ifndef SQLITE_OMIT_AUTOVACUUM + if( sqlite3StrICmp(zLeft,"auto_vacuum")==0 ){ + Btree *pBt = pDb->pBt; + if( sqlite3ReadSchema(pParse) ){ + goto pragma_out; + } + if( !zRight ){ + int auto_vacuum = + pBt ? sqlite3BtreeGetAutoVacuum(pBt) : SQLITE_DEFAULT_AUTOVACUUM; + returnSingleInt(pParse, "auto_vacuum", auto_vacuum); + }else{ + int eAuto = getAutoVacuum(zRight); + db->nextAutovac = eAuto; + if( eAuto>=0 ){ + /* Call SetAutoVacuum() to set initialize the internal auto and + ** incr-vacuum flags. This is required in case this connection + ** creates the database file. It is important that it is created + ** as an auto-vacuum capable db. 
+ */ + int rc = sqlite3BtreeSetAutoVacuum(pBt, eAuto); + if( rc==SQLITE_OK && (eAuto==1 || eAuto==2) ){ + /* When setting the auto_vacuum mode to either "full" or + ** "incremental", write the value of meta[6] in the database + ** file. Before writing to meta[6], check that meta[3] indicates + ** that this really is an auto-vacuum capable database. + */ + static const VdbeOpList setMeta6[] = { + { OP_Transaction, 0, 1, 0}, /* 0 */ + { OP_ReadCookie, 0, 1, 3}, /* 1 */ + { OP_If, 1, 0, 0}, /* 2 */ + { OP_Halt, SQLITE_OK, OE_Abort, 0}, /* 3 */ + { OP_Integer, 0, 1, 0}, /* 4 */ + { OP_SetCookie, 0, 6, 1}, /* 5 */ + }; + int iAddr; + iAddr = sqlite3VdbeAddOpList(v, ArraySize(setMeta6), setMeta6); + sqlite3VdbeChangeP1(v, iAddr, iDb); + sqlite3VdbeChangeP1(v, iAddr+1, iDb); + sqlite3VdbeChangeP2(v, iAddr+2, iAddr+4); + sqlite3VdbeChangeP1(v, iAddr+4, eAuto-1); + sqlite3VdbeChangeP1(v, iAddr+5, iDb); + sqlite3VdbeUsesBtree(v, iDb); + } + } + } + }else +#endif + + /* + ** PRAGMA [database.]incremental_vacuum(N) + ** + ** Do N steps of incremental vacuuming on a database. + */ +#ifndef SQLITE_OMIT_AUTOVACUUM + if( sqlite3StrICmp(zLeft,"incremental_vacuum")==0 ){ + int iLimit, addr; + if( sqlite3ReadSchema(pParse) ){ + goto pragma_out; + } + if( zRight==0 || !sqlite3GetInt32(zRight, &iLimit) || iLimit<=0 ){ + iLimit = 0x7fffffff; + } + sqlite3BeginWriteOperation(pParse, 0, iDb); + sqlite3VdbeAddOp2(v, OP_Integer, iLimit, 1); + addr = sqlite3VdbeAddOp1(v, OP_IncrVacuum, iDb); + sqlite3VdbeAddOp1(v, OP_ResultRow, 1); + sqlite3VdbeAddOp2(v, OP_AddImm, 1, -1); + sqlite3VdbeAddOp2(v, OP_IfPos, 1, addr); + sqlite3VdbeJumpHere(v, addr); + }else +#endif + +#ifndef SQLITE_OMIT_PAGER_PRAGMAS + /* + ** PRAGMA [database.]cache_size + ** PRAGMA [database.]cache_size=N + ** + ** The first form reports the current local setting for the + ** page cache size. The local setting can be different from + ** the persistent cache size value that is stored in the database + ** file itself. The value returned is the maximum number of + ** pages in the page cache. The second form sets the local + ** page cache size value. It does not change the persistent + ** cache size stored on the disk so the cache size will revert + ** to its default value when the database is closed and reopened. + ** N should be a positive integer. + */ + if( sqlite3StrICmp(zLeft,"cache_size")==0 ){ + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + if( !zRight ){ + returnSingleInt(pParse, "cache_size", pDb->pSchema->cache_size); + }else{ + int size = atoi(zRight); + if( size<0 ) size = -size; + pDb->pSchema->cache_size = size; + sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size); + } + }else + + /* + ** PRAGMA temp_store + ** PRAGMA temp_store = "default"|"memory"|"file" + ** + ** Return or set the local value of the temp_store flag. Changing + ** the local value does not make changes to the disk file and the default + ** value will be restored the next time the database is opened. + ** + ** Note that it is possible for the library compile-time options to + ** override this setting + */ + if( sqlite3StrICmp(zLeft, "temp_store")==0 ){ + if( !zRight ){ + returnSingleInt(pParse, "temp_store", db->temp_store); + }else{ + changeTempStorage(pParse, zRight); + } + }else + + /* + ** PRAGMA temp_store_directory + ** PRAGMA temp_store_directory = ""|"directory_name" + ** + ** Return or set the local value of the temp_store_directory flag. Changing + ** the value sets a specific directory to be used for temporary files. 
+ ** Setting to a null string reverts to the default temporary directory search. + ** If temporary directory is changed, then invalidateTempStorage. + ** + */ + if( sqlite3StrICmp(zLeft, "temp_store_directory")==0 ){ + if( !zRight ){ + if( sqlite3_temp_directory ){ + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, + "temp_store_directory", P4_STATIC); + sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, sqlite3_temp_directory, 0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); + } + }else{ + if( zRight[0] + && !sqlite3OsAccess(db->pVfs, zRight, SQLITE_ACCESS_READWRITE) + ){ + sqlite3ErrorMsg(pParse, "not a writable directory"); + goto pragma_out; + } + if( TEMP_STORE==0 + || (TEMP_STORE==1 && db->temp_store<=1) + || (TEMP_STORE==2 && db->temp_store==1) + ){ + invalidateTempStorage(pParse); + } + sqlite3_free(sqlite3_temp_directory); + if( zRight[0] ){ + sqlite3_temp_directory = zRight; + zRight = 0; + }else{ + sqlite3_temp_directory = 0; + } + } + }else + + /* + ** PRAGMA [database.]synchronous + ** PRAGMA [database.]synchronous=OFF|ON|NORMAL|FULL + ** + ** Return or set the local value of the synchronous flag. Changing + ** the local value does not make changes to the disk file and the + ** default value will be restored the next time the database is + ** opened. + */ + if( sqlite3StrICmp(zLeft,"synchronous")==0 ){ + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + if( !zRight ){ + returnSingleInt(pParse, "synchronous", pDb->safety_level-1); + }else{ + if( !db->autoCommit ){ + sqlite3ErrorMsg(pParse, + "Safety level may not be changed inside a transaction"); + }else{ + pDb->safety_level = getSafetyLevel(zRight)+1; + } + } + }else +#endif /* SQLITE_OMIT_PAGER_PRAGMAS */ + +#ifndef SQLITE_OMIT_FLAG_PRAGMAS + if( flagPragma(pParse, zLeft, zRight) ){ + /* The flagPragma() subroutine also generates any necessary code + ** there is nothing more to do here */ + }else +#endif /* SQLITE_OMIT_FLAG_PRAGMAS */ + +#ifndef SQLITE_OMIT_SCHEMA_PRAGMAS + /* + ** PRAGMA table_info(
                ) + ** + ** Return a single row for each column of the named table. The columns of + ** the returned data set are: + ** + ** cid: Column id (numbered from left to right, starting at 0) + ** name: Column name + ** type: Column declaration type. + ** notnull: True if 'NOT NULL' is part of column declaration + ** dflt_value: The default value for the column, if any. + */ + if( sqlite3StrICmp(zLeft, "table_info")==0 && zRight ){ + Table *pTab; + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + pTab = sqlite3FindTable(db, zRight, zDb); + if( pTab ){ + int i; + int nHidden = 0; + Column *pCol; + sqlite3VdbeSetNumCols(v, 6); + pParse->nMem = 6; + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "cid", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P4_STATIC); + sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "type", P4_STATIC); + sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "notnull", P4_STATIC); + sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "dflt_value", P4_STATIC); + sqlite3VdbeSetColName(v, 5, COLNAME_NAME, "pk", P4_STATIC); + sqlite3ViewGetColumnNames(pParse, pTab); + for(i=0, pCol=pTab->aCol; inCol; i++, pCol++){ + const Token *pDflt; + if( IsHiddenColumn(pCol) ){ + nHidden++; + continue; + } + sqlite3VdbeAddOp2(v, OP_Integer, i-nHidden, 1); + sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pCol->zName, 0); + sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, + pCol->zType ? pCol->zType : "", 0); + sqlite3VdbeAddOp2(v, OP_Integer, pCol->notNull, 4); + if( pCol->pDflt && (pDflt = &pCol->pDflt->span)->z ){ + sqlite3VdbeAddOp4(v, OP_String8, 0, 5, 0, (char*)pDflt->z, pDflt->n); + }else{ + sqlite3VdbeAddOp2(v, OP_Null, 0, 5); + } + sqlite3VdbeAddOp2(v, OP_Integer, pCol->isPrimKey, 6); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 6); + } + } + }else + + if( sqlite3StrICmp(zLeft, "index_info")==0 && zRight ){ + Index *pIdx; + Table *pTab; + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + pIdx = sqlite3FindIndex(db, zRight, zDb); + if( pIdx ){ + int i; + pTab = pIdx->pTable; + sqlite3VdbeSetNumCols(v, 3); + pParse->nMem = 3; + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seqno", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "cid", P4_STATIC); + sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "name", P4_STATIC); + for(i=0; inColumn; i++){ + int cnum = pIdx->aiColumn[i]; + sqlite3VdbeAddOp2(v, OP_Integer, i, 1); + sqlite3VdbeAddOp2(v, OP_Integer, cnum, 2); + assert( pTab->nCol>cnum ); + sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, pTab->aCol[cnum].zName, 0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); + } + } + }else + + if( sqlite3StrICmp(zLeft, "index_list")==0 && zRight ){ + Index *pIdx; + Table *pTab; + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + pTab = sqlite3FindTable(db, zRight, zDb); + if( pTab ){ + v = sqlite3GetVdbe(pParse); + pIdx = pTab->pIndex; + if( pIdx ){ + int i = 0; + sqlite3VdbeSetNumCols(v, 3); + pParse->nMem = 3; + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P4_STATIC); + sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "unique", P4_STATIC); + while(pIdx){ + sqlite3VdbeAddOp2(v, OP_Integer, i, 1); + sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pIdx->zName, 0); + sqlite3VdbeAddOp2(v, OP_Integer, pIdx->onError!=OE_None, 3); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); + ++i; + pIdx = pIdx->pNext; + } + } + } + }else + + if( sqlite3StrICmp(zLeft, "database_list")==0 ){ + int i; + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + sqlite3VdbeSetNumCols(v, 3); + pParse->nMem = 3; + sqlite3VdbeSetColName(v, 0, 
COLNAME_NAME, "seq", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P4_STATIC); + sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "file", P4_STATIC); + for(i=0; inDb; i++){ + if( db->aDb[i].pBt==0 ) continue; + assert( db->aDb[i].zName!=0 ); + sqlite3VdbeAddOp2(v, OP_Integer, i, 1); + sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, db->aDb[i].zName, 0); + sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, + sqlite3BtreeGetFilename(db->aDb[i].pBt), 0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); + } + }else + + if( sqlite3StrICmp(zLeft, "collation_list")==0 ){ + int i = 0; + HashElem *p; + sqlite3VdbeSetNumCols(v, 2); + pParse->nMem = 2; + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P4_STATIC); + for(p=sqliteHashFirst(&db->aCollSeq); p; p=sqliteHashNext(p)){ + CollSeq *pColl = (CollSeq *)sqliteHashData(p); + sqlite3VdbeAddOp2(v, OP_Integer, i++, 1); + sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pColl->zName, 0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 2); + } + }else +#endif /* SQLITE_OMIT_SCHEMA_PRAGMAS */ + +#ifndef SQLITE_OMIT_FOREIGN_KEY + if( sqlite3StrICmp(zLeft, "foreign_key_list")==0 && zRight ){ + FKey *pFK; + Table *pTab; + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + pTab = sqlite3FindTable(db, zRight, zDb); + if( pTab ){ + v = sqlite3GetVdbe(pParse); + pFK = pTab->pFKey; + if( pFK ){ + int i = 0; + sqlite3VdbeSetNumCols(v, 5); + pParse->nMem = 5; + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "id", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "seq", P4_STATIC); + sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "table", P4_STATIC); + sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "from", P4_STATIC); + sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "to", P4_STATIC); + while(pFK){ + int j; + for(j=0; jnCol; j++){ + char *zCol = pFK->aCol[j].zCol; + sqlite3VdbeAddOp2(v, OP_Integer, i, 1); + sqlite3VdbeAddOp2(v, OP_Integer, j, 2); + sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, pFK->zTo, 0); + sqlite3VdbeAddOp4(v, OP_String8, 0, 4, 0, + pTab->aCol[pFK->aCol[j].iFrom].zName, 0); + sqlite3VdbeAddOp4(v, zCol ? OP_String8 : OP_Null, 0, 5, 0, zCol, 0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 5); + } + ++i; + pFK = pFK->pNextFrom; + } + } + } + }else +#endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */ + +#ifndef NDEBUG + if( sqlite3StrICmp(zLeft, "parser_trace")==0 ){ + if( zRight ){ + if( getBoolean(zRight) ){ + sqlite3ParserTrace(stderr, "parser: "); + }else{ + sqlite3ParserTrace(0, 0); + } + } + }else +#endif + + /* Reinstall the LIKE and GLOB functions. The variant of LIKE + ** used will be case sensitive or not depending on the RHS. + */ + if( sqlite3StrICmp(zLeft, "case_sensitive_like")==0 ){ + if( zRight ){ + sqlite3RegisterLikeFunctions(db, getBoolean(zRight)); + } + }else + +#ifndef SQLITE_INTEGRITY_CHECK_ERROR_MAX +# define SQLITE_INTEGRITY_CHECK_ERROR_MAX 100 +#endif + +#ifndef SQLITE_OMIT_INTEGRITY_CHECK + /* Pragma "quick_check" is an experimental reduced version of + ** integrity_check designed to detect most database corruption + ** without most of the overhead of a full integrity-check. + */ + if( sqlite3StrICmp(zLeft, "integrity_check")==0 + || sqlite3StrICmp(zLeft, "quick_check")==0 + ){ + int i, j, addr, mxErr; + + /* Code that appears at the end of the integrity check. If no error + ** messages have been generated, output OK. 
Otherwise output the + ** error message + */ + static const VdbeOpList endCode[] = { + { OP_AddImm, 1, 0, 0}, /* 0 */ + { OP_IfNeg, 1, 0, 0}, /* 1 */ + { OP_String8, 0, 3, 0}, /* 2 */ + { OP_ResultRow, 3, 1, 0}, + }; + + int isQuick = (zLeft[0]=='q'); + + /* Initialize the VDBE program */ + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + pParse->nMem = 6; + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "integrity_check", P4_STATIC); + + /* Set the maximum error count */ + mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX; + if( zRight ){ + mxErr = atoi(zRight); + if( mxErr<=0 ){ + mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX; + } + } + sqlite3VdbeAddOp2(v, OP_Integer, mxErr, 1); /* reg[1] holds errors left */ + + /* Do an integrity check on each database file */ + for(i=0; inDb; i++){ + HashElem *x; + Hash *pTbls; + int cnt = 0; + + if( OMIT_TEMPDB && i==1 ) continue; + + sqlite3CodeVerifySchema(pParse, i); + addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); /* Halt if out of errors */ + sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); + sqlite3VdbeJumpHere(v, addr); + + /* Do an integrity check of the B-Tree + ** + ** Begin by filling registers 2, 3, ... with the root pages numbers + ** for all tables and indices in the database. + */ + pTbls = &db->aDb[i].pSchema->tblHash; + for(x=sqliteHashFirst(pTbls); x; x=sqliteHashNext(x)){ + Table *pTab = sqliteHashData(x); + Index *pIdx; + sqlite3VdbeAddOp2(v, OP_Integer, pTab->tnum, 2+cnt); + cnt++; + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + sqlite3VdbeAddOp2(v, OP_Integer, pIdx->tnum, 2+cnt); + cnt++; + } + } + if( cnt==0 ) continue; + + /* Make sure sufficient number of registers have been allocated */ + if( pParse->nMem < cnt+4 ){ + pParse->nMem = cnt+4; + } + + /* Do the b-tree integrity checks */ + sqlite3VdbeAddOp3(v, OP_IntegrityCk, 2, cnt, 1); + sqlite3VdbeChangeP5(v, i); + addr = sqlite3VdbeAddOp1(v, OP_IsNull, 2); + sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, + sqlite3MPrintf(db, "*** in database %s ***\n", db->aDb[i].zName), + P4_DYNAMIC); + sqlite3VdbeAddOp2(v, OP_Move, 2, 4); + sqlite3VdbeAddOp3(v, OP_Concat, 4, 3, 2); + sqlite3VdbeAddOp2(v, OP_ResultRow, 2, 1); + sqlite3VdbeJumpHere(v, addr); + + /* Make sure all the indices are constructed correctly. 
+ */ + for(x=sqliteHashFirst(pTbls); x && !isQuick; x=sqliteHashNext(x)){ + Table *pTab = sqliteHashData(x); + Index *pIdx; + int loopTop; + + if( pTab->pIndex==0 ) continue; + addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); /* Stop if out of errors */ + sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); + sqlite3VdbeJumpHere(v, addr); + sqlite3OpenTableAndIndices(pParse, pTab, 1, OP_OpenRead); + sqlite3VdbeAddOp2(v, OP_Integer, 0, 2); /* reg(2) will count entries */ + loopTop = sqlite3VdbeAddOp2(v, OP_Rewind, 1, 0); + sqlite3VdbeAddOp2(v, OP_AddImm, 2, 1); /* increment entry count */ + for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ + int jmp2; + static const VdbeOpList idxErr[] = { + { OP_AddImm, 1, -1, 0}, + { OP_String8, 0, 3, 0}, /* 1 */ + { OP_Rowid, 1, 4, 0}, + { OP_String8, 0, 5, 0}, /* 3 */ + { OP_String8, 0, 6, 0}, /* 4 */ + { OP_Concat, 4, 3, 3}, + { OP_Concat, 5, 3, 3}, + { OP_Concat, 6, 3, 3}, + { OP_ResultRow, 3, 1, 0}, + }; + sqlite3GenerateIndexKey(pParse, pIdx, 1, 3); + jmp2 = sqlite3VdbeAddOp3(v, OP_Found, j+2, 0, 3); + addr = sqlite3VdbeAddOpList(v, ArraySize(idxErr), idxErr); + sqlite3VdbeChangeP4(v, addr+1, "rowid ", P4_STATIC); + sqlite3VdbeChangeP4(v, addr+3, " missing from index ", P4_STATIC); + sqlite3VdbeChangeP4(v, addr+4, pIdx->zName, P4_STATIC); + sqlite3VdbeJumpHere(v, jmp2); + } + sqlite3VdbeAddOp2(v, OP_Next, 1, loopTop+1); + sqlite3VdbeJumpHere(v, loopTop); + for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ + static const VdbeOpList cntIdx[] = { + { OP_Integer, 0, 3, 0}, + { OP_Rewind, 0, 0, 0}, /* 1 */ + { OP_AddImm, 3, 1, 0}, + { OP_Next, 0, 0, 0}, /* 3 */ + { OP_Eq, 2, 0, 3}, /* 4 */ + { OP_AddImm, 1, -1, 0}, + { OP_String8, 0, 2, 0}, /* 6 */ + { OP_String8, 0, 3, 0}, /* 7 */ + { OP_Concat, 3, 2, 2}, + { OP_ResultRow, 2, 1, 0}, + }; + if( pIdx->tnum==0 ) continue; + addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); + sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); + sqlite3VdbeJumpHere(v, addr); + addr = sqlite3VdbeAddOpList(v, ArraySize(cntIdx), cntIdx); + sqlite3VdbeChangeP1(v, addr+1, j+2); + sqlite3VdbeChangeP2(v, addr+1, addr+4); + sqlite3VdbeChangeP1(v, addr+3, j+2); + sqlite3VdbeChangeP2(v, addr+3, addr+2); + sqlite3VdbeJumpHere(v, addr+4); + sqlite3VdbeChangeP4(v, addr+6, + "wrong # of entries in index ", P4_STATIC); + sqlite3VdbeChangeP4(v, addr+7, pIdx->zName, P4_STATIC); + } + } + } + addr = sqlite3VdbeAddOpList(v, ArraySize(endCode), endCode); + sqlite3VdbeChangeP2(v, addr, -mxErr); + sqlite3VdbeJumpHere(v, addr+1); + sqlite3VdbeChangeP4(v, addr+2, "ok", P4_STATIC); + }else +#endif /* SQLITE_OMIT_INTEGRITY_CHECK */ + +#ifndef SQLITE_OMIT_UTF16 + /* + ** PRAGMA encoding + ** PRAGMA encoding = "utf-8"|"utf-16"|"utf-16le"|"utf-16be" + ** + ** In its first form, this pragma returns the encoding of the main + ** database. If the database is not initialized, it is initialized now. + ** + ** The second form of this pragma is a no-op if the main database file + ** has not already been initialized. In this case it sets the default + ** encoding that will be used for the main database file if a new file + ** is created. If an existing main database file is opened, then the + ** default text encoding for the existing database is used. + ** + ** In all cases new databases created using the ATTACH command are + ** created to use the same default text encoding as the main database. If + ** the main database has not been initialized and/or created when ATTACH + ** is executed, this is done before the ATTACH operation. 
+ ** + ** In the second form this pragma sets the text encoding to be used in + ** new database files created using this database handle. It is only + ** useful if invoked immediately after the main database i + */ + if( sqlite3StrICmp(zLeft, "encoding")==0 ){ + static const struct EncName { + char *zName; + u8 enc; + } encnames[] = { + { "UTF-8", SQLITE_UTF8 }, + { "UTF8", SQLITE_UTF8 }, + { "UTF-16le", SQLITE_UTF16LE }, + { "UTF16le", SQLITE_UTF16LE }, + { "UTF-16be", SQLITE_UTF16BE }, + { "UTF16be", SQLITE_UTF16BE }, + { "UTF-16", 0 }, /* SQLITE_UTF16NATIVE */ + { "UTF16", 0 }, /* SQLITE_UTF16NATIVE */ + { 0, 0 } + }; + const struct EncName *pEnc; + if( !zRight ){ /* "PRAGMA encoding" */ + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "encoding", P4_STATIC); + sqlite3VdbeAddOp2(v, OP_String8, 0, 1); + for(pEnc=&encnames[0]; pEnc->zName; pEnc++){ + if( pEnc->enc==ENC(pParse->db) ){ + sqlite3VdbeChangeP4(v, -1, pEnc->zName, P4_STATIC); + break; + } + } + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); + }else{ /* "PRAGMA encoding = XXX" */ + /* Only change the value of sqlite.enc if the database handle is not + ** initialized. If the main database exists, the new sqlite.enc value + ** will be overwritten when the schema is next loaded. If it does not + ** already exists, it will be created to use the new encoding value. + */ + if( + !(DbHasProperty(db, 0, DB_SchemaLoaded)) || + DbHasProperty(db, 0, DB_Empty) + ){ + for(pEnc=&encnames[0]; pEnc->zName; pEnc++){ + if( 0==sqlite3StrICmp(zRight, pEnc->zName) ){ + ENC(pParse->db) = pEnc->enc ? pEnc->enc : SQLITE_UTF16NATIVE; + break; + } + } + if( !pEnc->zName ){ + sqlite3ErrorMsg(pParse, "unsupported encoding: %s", zRight); + } + } + } + }else +#endif /* SQLITE_OMIT_UTF16 */ + +#ifndef SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS + /* + ** PRAGMA [database.]schema_version + ** PRAGMA [database.]schema_version = + ** + ** PRAGMA [database.]user_version + ** PRAGMA [database.]user_version = + ** + ** The pragma's schema_version and user_version are used to set or get + ** the value of the schema-version and user-version, respectively. Both + ** the schema-version and the user-version are 32-bit signed integers + ** stored in the database header. + ** + ** The schema-cookie is usually only manipulated internally by SQLite. It + ** is incremented by SQLite whenever the database schema is modified (by + ** creating or dropping a table or index). The schema version is used by + ** SQLite each time a query is executed to ensure that the internal cache + ** of the schema used when compiling the SQL query matches the schema of + ** the database against which the compiled query is actually executed. + ** Subverting this mechanism by using "PRAGMA schema_version" to modify + ** the schema-version is potentially dangerous and may lead to program + ** crashes or database corruption. Use with caution! + ** + ** The user-version is not used internally by SQLite. It may be used by + ** applications for any purpose. + */ + if( sqlite3StrICmp(zLeft, "schema_version")==0 + || sqlite3StrICmp(zLeft, "user_version")==0 + || sqlite3StrICmp(zLeft, "freelist_count")==0 + ){ + + int iCookie; /* Cookie index. 0 for schema-cookie, 6 for user-cookie. 
*/ + sqlite3VdbeUsesBtree(v, iDb); + switch( zLeft[0] ){ + case 's': case 'S': + iCookie = 0; + break; + case 'f': case 'F': + iCookie = 1; + iDb = (-1*(iDb+1)); + assert(iDb<=0); + break; + default: + iCookie = 5; + break; + } + + if( zRight && iDb>=0 ){ + /* Write the specified cookie value */ + static const VdbeOpList setCookie[] = { + { OP_Transaction, 0, 1, 0}, /* 0 */ + { OP_Integer, 0, 1, 0}, /* 1 */ + { OP_SetCookie, 0, 0, 1}, /* 2 */ + }; + int addr = sqlite3VdbeAddOpList(v, ArraySize(setCookie), setCookie); + sqlite3VdbeChangeP1(v, addr, iDb); + sqlite3VdbeChangeP1(v, addr+1, atoi(zRight)); + sqlite3VdbeChangeP1(v, addr+2, iDb); + sqlite3VdbeChangeP2(v, addr+2, iCookie); + }else{ + /* Read the specified cookie value */ + static const VdbeOpList readCookie[] = { + { OP_ReadCookie, 0, 1, 0}, /* 0 */ + { OP_ResultRow, 1, 1, 0} + }; + int addr = sqlite3VdbeAddOpList(v, ArraySize(readCookie), readCookie); + sqlite3VdbeChangeP1(v, addr, iDb); + sqlite3VdbeChangeP3(v, addr, iCookie); + sqlite3VdbeSetNumCols(v, 1); + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLeft, P4_TRANSIENT); + } + }else +#endif /* SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS */ + +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) + /* + ** Report the current state of file logs for all databases + */ + if( sqlite3StrICmp(zLeft, "lock_status")==0 ){ + static const char *const azLockName[] = { + "unlocked", "shared", "reserved", "pending", "exclusive" + }; + int i; + Vdbe *v = sqlite3GetVdbe(pParse); + sqlite3VdbeSetNumCols(v, 2); + pParse->nMem = 2; + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "database", P4_STATIC); + sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "status", P4_STATIC); + for(i=0; inDb; i++){ + Btree *pBt; + Pager *pPager; + const char *zState = "unknown"; + int j; + if( db->aDb[i].zName==0 ) continue; + sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, db->aDb[i].zName, P4_STATIC); + pBt = db->aDb[i].pBt; + if( pBt==0 || (pPager = sqlite3BtreePager(pBt))==0 ){ + zState = "closed"; + }else if( sqlite3_file_control(db, i ? db->aDb[i].zName : 0, + SQLITE_FCNTL_LOCKSTATE, &j)==SQLITE_OK ){ + zState = azLockName[j]; + } + sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, zState, P4_STATIC); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 2); + } + }else +#endif + +#ifdef SQLITE_SSE + /* + ** Check to see if the sqlite_statements table exists. Create it + ** if it does not. + */ + if( sqlite3StrICmp(zLeft, "create_sqlite_statement_table")==0 ){ + extern int sqlite3CreateStatementsTable(Parse*); + sqlite3CreateStatementsTable(pParse); + }else +#endif + +#if SQLITE_HAS_CODEC + if( sqlite3StrICmp(zLeft, "key")==0 ){ + sqlite3_key(db, zRight, strlen(zRight)); + }else +#endif +#if SQLITE_HAS_CODEC || defined(SQLITE_ENABLE_CEROD) + if( sqlite3StrICmp(zLeft, "activate_extensions")==0 ){ +#if SQLITE_HAS_CODEC + if( sqlite3StrNICmp(zRight, "see-", 4)==0 ){ + extern void sqlite3_activate_see(const char*); + sqlite3_activate_see(&zRight[4]); + } +#endif +#ifdef SQLITE_ENABLE_CEROD + if( sqlite3StrNICmp(zRight, "cerod-", 6)==0 ){ + extern void sqlite3_activate_cerod(const char*); + sqlite3_activate_cerod(&zRight[6]); + } +#endif + } +#endif + + {} + + if( v ){ + /* Code an OP_Expire at the end of each PRAGMA program to cause + ** the VDBE implementing the pragma to expire. Most (all?) pragmas + ** are only valid for a single execution. + */ + sqlite3VdbeAddOp2(v, OP_Expire, 1, 0); + + /* + ** Reset the safety level, in case the fullfsync flag or synchronous + ** setting changed. 
+ */ +#ifndef SQLITE_OMIT_PAGER_PRAGMAS + if( db->autoCommit ){ + sqlite3BtreeSetSafetyLevel(pDb->pBt, pDb->safety_level, + (db->flags&SQLITE_FullFSync)!=0); + } +#endif + } +pragma_out: + sqlite3_free(zLeft); + sqlite3_free(zRight); +} + +#endif /* SQLITE_OMIT_PRAGMA || SQLITE_OMIT_PARSER */ Added: external/sqlite-source-3.5.7.x/prepare.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/prepare.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,780 @@ +/* +** 2005 May 25 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains the implementation of the sqlite3_prepare() +** interface, and routines that contribute to loading the database schema +** from disk. +** +** $Id: prepare.c,v 1.78 2008/03/08 12:23:31 drh Exp $ +*/ +#include "sqliteInt.h" +#include + +/* +** Fill the InitData structure with an error message that indicates +** that the database is corrupt. +*/ +static void corruptSchema(InitData *pData, const char *zExtra){ + if( !pData->db->mallocFailed ){ + sqlite3SetString(pData->pzErrMsg, "malformed database schema", + zExtra!=0 && zExtra[0]!=0 ? " - " : (char*)0, zExtra, (char*)0); + } + pData->rc = SQLITE_CORRUPT; +} + +/* +** This is the callback routine for the code that initializes the +** database. See sqlite3Init() below for additional information. +** This routine is also called from the OP_ParseSchema opcode of the VDBE. +** +** Each callback contains the following information: +** +** argv[0] = name of thing being created +** argv[1] = root page number for table or index. 0 for trigger or view. +** argv[2] = SQL text for the CREATE statement. +** +*/ +int sqlite3InitCallback(void *pInit, int argc, char **argv, char **azColName){ + InitData *pData = (InitData*)pInit; + sqlite3 *db = pData->db; + int iDb = pData->iDb; + + assert( sqlite3_mutex_held(db->mutex) ); + pData->rc = SQLITE_OK; + DbClearProperty(db, iDb, DB_Empty); + if( db->mallocFailed ){ + corruptSchema(pData, 0); + return SQLITE_NOMEM; + } + + assert( argc==3 ); + if( argv==0 ) return 0; /* Might happen if EMPTY_RESULT_CALLBACKS are on */ + if( argv[1]==0 ){ + corruptSchema(pData, 0); + return 1; + } + assert( iDb>=0 && iDbnDb ); + if( argv[2] && argv[2][0] ){ + /* Call the parser to process a CREATE TABLE, INDEX or VIEW. + ** But because db->init.busy is set to 1, no VDBE code is generated + ** or executed. All the parser does is build the internal data + ** structures that describe the table, index, or view. + */ + char *zErr; + int rc; + assert( db->init.busy ); + db->init.iDb = iDb; + db->init.newTnum = atoi(argv[1]); + rc = sqlite3_exec(db, argv[2], 0, 0, &zErr); + db->init.iDb = 0; + assert( rc!=SQLITE_OK || zErr==0 ); + if( SQLITE_OK!=rc ){ + pData->rc = rc; + if( rc==SQLITE_NOMEM ){ + db->mallocFailed = 1; + }else if( rc!=SQLITE_INTERRUPT ){ + corruptSchema(pData, zErr); + } + sqlite3_free(zErr); + return 1; + } + }else if( argv[0]==0 ){ + corruptSchema(pData, 0); + }else{ + /* If the SQL column is blank it means this is an index that + ** was created to be the PRIMARY KEY or to fulfill a UNIQUE + ** constraint for a CREATE TABLE. The index should have already + ** been created when we processed the CREATE TABLE. 
All we have + ** to do here is record the root page number for that index. + */ + Index *pIndex; + pIndex = sqlite3FindIndex(db, argv[0], db->aDb[iDb].zName); + if( pIndex==0 || pIndex->tnum!=0 ){ + /* This can occur if there exists an index on a TEMP table which + ** has the same name as another index on a permanent index. Since + ** the permanent table is hidden by the TEMP table, we can also + ** safely ignore the index on the permanent table. + */ + /* Do Nothing */; + }else{ + pIndex->tnum = atoi(argv[1]); + } + } + return 0; +} + +/* +** Attempt to read the database schema and initialize internal +** data structures for a single database file. The index of the +** database file is given by iDb. iDb==0 is used for the main +** database. iDb==1 should never be used. iDb>=2 is used for +** auxiliary databases. Return one of the SQLITE_ error codes to +** indicate success or failure. +*/ +static int sqlite3InitOne(sqlite3 *db, int iDb, char **pzErrMsg){ + int rc; + BtCursor *curMain; + int size; + Table *pTab; + Db *pDb; + char const *azArg[4]; + int meta[10]; + InitData initData; + char const *zMasterSchema; + char const *zMasterName = SCHEMA_TABLE(iDb); + + /* + ** The master database table has a structure like this + */ + static const char master_schema[] = + "CREATE TABLE sqlite_master(\n" + " type text,\n" + " name text,\n" + " tbl_name text,\n" + " rootpage integer,\n" + " sql text\n" + ")" + ; +#ifndef SQLITE_OMIT_TEMPDB + static const char temp_master_schema[] = + "CREATE TEMP TABLE sqlite_temp_master(\n" + " type text,\n" + " name text,\n" + " tbl_name text,\n" + " rootpage integer,\n" + " sql text\n" + ")" + ; +#else + #define temp_master_schema 0 +#endif + + assert( iDb>=0 && iDbnDb ); + assert( db->aDb[iDb].pSchema ); + assert( sqlite3_mutex_held(db->mutex) ); + assert( iDb==1 || sqlite3BtreeHoldsMutex(db->aDb[iDb].pBt) ); + + /* zMasterSchema and zInitScript are set to point at the master schema + ** and initialisation script appropriate for the database being + ** initialised. zMasterName is the name of the master table. + */ + if( !OMIT_TEMPDB && iDb==1 ){ + zMasterSchema = temp_master_schema; + }else{ + zMasterSchema = master_schema; + } + zMasterName = SCHEMA_TABLE(iDb); + + /* Construct the schema tables. */ + azArg[0] = zMasterName; + azArg[1] = "1"; + azArg[2] = zMasterSchema; + azArg[3] = 0; + initData.db = db; + initData.iDb = iDb; + initData.pzErrMsg = pzErrMsg; + (void)sqlite3SafetyOff(db); + rc = sqlite3InitCallback(&initData, 3, (char **)azArg, 0); + (void)sqlite3SafetyOn(db); + if( rc ){ + rc = initData.rc; + goto error_out; + } + pTab = sqlite3FindTable(db, zMasterName, db->aDb[iDb].zName); + if( pTab ){ + pTab->readOnly = 1; + } + + /* Create a cursor to hold the database open + */ + pDb = &db->aDb[iDb]; + if( pDb->pBt==0 ){ + if( !OMIT_TEMPDB && iDb==1 ){ + DbSetProperty(db, 1, DB_SchemaLoaded); + } + return SQLITE_OK; + } + sqlite3BtreeEnter(pDb->pBt); + rc = sqlite3BtreeCursor(pDb->pBt, MASTER_ROOT, 0, 0, 0, &curMain); + if( rc!=SQLITE_OK && rc!=SQLITE_EMPTY ){ + sqlite3SetString(pzErrMsg, sqlite3ErrStr(rc), (char*)0); + sqlite3BtreeLeave(pDb->pBt); + goto error_out; + } + + /* Get the database meta information. + ** + ** Meta values are as follows: + ** meta[0] Schema cookie. Changes with each schema change. + ** meta[1] File format of schema layer. + ** meta[2] Size of the page cache. + ** meta[3] Use freelist if 0. Autovacuum if greater than zero. + ** meta[4] Db text encoding. 1:UTF-8 2:UTF-16LE 3:UTF-16BE + ** meta[5] The user cookie. 
Used by the application.
+  **     meta[6]   Incremental-vacuum flag.
+  **     meta[7]
+  **     meta[8]
+  **     meta[9]
+  **
+  ** Note: The #defined SQLITE_UTF* symbols in sqliteInt.h correspond to
+  ** the possible values of meta[4].
+  */
+  if( rc==SQLITE_OK ){
+    int i;
+    for(i=0; rc==SQLITE_OK && i<sizeof(meta)/sizeof(meta[0]); i++){
+      rc = sqlite3BtreeGetMeta(pDb->pBt, i+1, (u32 *)&meta[i]);
+    }
+    if( rc ){
+      sqlite3SetString(pzErrMsg, sqlite3ErrStr(rc), (char*)0);
+      sqlite3BtreeCloseCursor(curMain);
+      sqlite3BtreeLeave(pDb->pBt);
+      goto error_out;
+    }
+  }else{
+    memset(meta, 0, sizeof(meta));
+  }
+  pDb->pSchema->schema_cookie = meta[0];
+
+  /* If opening a non-empty database, check the text encoding. For the
+  ** main database, set sqlite3.enc to the encoding of the main database.
+  ** For an attached db, it is an error if the encoding is not the same
+  ** as sqlite3.enc.
+  */
+  if( meta[4] ){  /* text encoding */
+    if( iDb==0 ){
+      /* If opening the main database, set ENC(db). */
+      ENC(db) = (u8)meta[4];
+      db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "BINARY", 6, 0);
+    }else{
+      /* If opening an attached database, the encoding much match ENC(db) */
+      if( meta[4]!=ENC(db) ){
+        sqlite3BtreeCloseCursor(curMain);
+        sqlite3SetString(pzErrMsg, "attached databases must use the same"
+            " text encoding as main database", (char*)0);
+        sqlite3BtreeLeave(pDb->pBt);
+        return SQLITE_ERROR;
+      }
+    }
+  }else{
+    DbSetProperty(db, iDb, DB_Empty);
+  }
+  pDb->pSchema->enc = ENC(db);
+
+  size = meta[2];
+  if( size==0 ){ size = SQLITE_DEFAULT_CACHE_SIZE; }
+  if( size<0 ) size = -size;
+  pDb->pSchema->cache_size = size;
+  sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size);
+
+  /*
+  ** file_format==1    Version 3.0.0.
+  ** file_format==2    Version 3.1.3.  // ALTER TABLE ADD COLUMN
+  ** file_format==3    Version 3.1.4.  // ditto but with non-NULL defaults
+  ** file_format==4    Version 3.3.0.  // DESC indices.  Boolean constants
+  */
+  pDb->pSchema->file_format = meta[1];
+  if( pDb->pSchema->file_format==0 ){
+    pDb->pSchema->file_format = 1;
+  }
+  if( pDb->pSchema->file_format>SQLITE_MAX_FILE_FORMAT ){
+    sqlite3BtreeCloseCursor(curMain);
+    sqlite3SetString(pzErrMsg, "unsupported file format", (char*)0);
+    sqlite3BtreeLeave(pDb->pBt);
+    return SQLITE_ERROR;
+  }
+
+  /* Ticket #2804:  When we open a database in the newer file format,
+  ** clear the legacy_file_format pragma flag so that a VACUUM will
+  ** not downgrade the database and thus invalidate any descending
+  ** indices that the user might have created.
+ */ + if( iDb==0 && meta[1]>=4 ){ + db->flags &= ~SQLITE_LegacyFileFmt; + } + + /* Read the schema information out of the schema tables + */ + assert( db->init.busy ); + if( rc==SQLITE_EMPTY ){ + /* For an empty database, there is nothing to read */ + rc = SQLITE_OK; + }else{ + char *zSql; + zSql = sqlite3MPrintf(db, + "SELECT name, rootpage, sql FROM '%q'.%s", + db->aDb[iDb].zName, zMasterName); + (void)sqlite3SafetyOff(db); +#ifndef SQLITE_OMIT_AUTHORIZATION + { + int (*xAuth)(void*,int,const char*,const char*,const char*,const char*); + xAuth = db->xAuth; + db->xAuth = 0; +#endif + rc = sqlite3_exec(db, zSql, sqlite3InitCallback, &initData, 0); +#ifndef SQLITE_OMIT_AUTHORIZATION + db->xAuth = xAuth; + } +#endif + if( rc==SQLITE_ABORT ) rc = initData.rc; + (void)sqlite3SafetyOn(db); + sqlite3_free(zSql); +#ifndef SQLITE_OMIT_ANALYZE + if( rc==SQLITE_OK ){ + sqlite3AnalysisLoad(db, iDb); + } +#endif + sqlite3BtreeCloseCursor(curMain); + } + if( db->mallocFailed ){ + /* sqlite3SetString(pzErrMsg, "out of memory", (char*)0); */ + rc = SQLITE_NOMEM; + sqlite3ResetInternalSchema(db, 0); + } + if( rc==SQLITE_OK || (db->flags&SQLITE_RecoveryMode)){ + /* Black magic: If the SQLITE_RecoveryMode flag is set, then consider + ** the schema loaded, even if errors occured. In this situation the + ** current sqlite3_prepare() operation will fail, but the following one + ** will attempt to compile the supplied statement against whatever subset + ** of the schema was loaded before the error occured. The primary + ** purpose of this is to allow access to the sqlite_master table + ** even when its contents have been corrupted. + */ + DbSetProperty(db, iDb, DB_SchemaLoaded); + rc = SQLITE_OK; + } + sqlite3BtreeLeave(pDb->pBt); + +error_out: + if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ){ + db->mallocFailed = 1; + } + return rc; +} + +/* +** Initialize all database files - the main database file, the file +** used to store temporary tables, and any additional database files +** created using ATTACH statements. Return a success code. If an +** error occurs, write an error message into *pzErrMsg. +** +** After a database is initialized, the DB_SchemaLoaded bit is set +** bit is set in the flags field of the Db structure. If the database +** file was of zero-length, then the DB_Empty flag is also set. +*/ +int sqlite3Init(sqlite3 *db, char **pzErrMsg){ + int i, rc; + int commit_internal = !(db->flags&SQLITE_InternChanges); + + assert( sqlite3_mutex_held(db->mutex) ); + if( db->init.busy ) return SQLITE_OK; + rc = SQLITE_OK; + db->init.busy = 1; + for(i=0; rc==SQLITE_OK && inDb; i++){ + if( DbHasProperty(db, i, DB_SchemaLoaded) || i==1 ) continue; + rc = sqlite3InitOne(db, i, pzErrMsg); + if( rc ){ + sqlite3ResetInternalSchema(db, i); + } + } + + /* Once all the other databases have been initialised, load the schema + ** for the TEMP database. This is loaded last, as the TEMP database + ** schema may contain references to objects in other databases. + */ +#ifndef SQLITE_OMIT_TEMPDB + if( rc==SQLITE_OK && db->nDb>1 && !DbHasProperty(db, 1, DB_SchemaLoaded) ){ + rc = sqlite3InitOne(db, 1, pzErrMsg); + if( rc ){ + sqlite3ResetInternalSchema(db, 1); + } + } +#endif + + db->init.busy = 0; + if( rc==SQLITE_OK && commit_internal ){ + sqlite3CommitInternalChanges(db); + } + + return rc; +} + +/* +** This routine is a no-op if the database schema is already initialised. +** Otherwise, the schema is loaded. An error code is returned. 
+*/
+int sqlite3ReadSchema(Parse *pParse){
+  int rc = SQLITE_OK;
+  sqlite3 *db = pParse->db;
+  assert( sqlite3_mutex_held(db->mutex) );
+  if( !db->init.busy ){
+    rc = sqlite3Init(db, &pParse->zErrMsg);
+  }
+  if( rc!=SQLITE_OK ){
+    pParse->rc = rc;
+    pParse->nErr++;
+  }
+  return rc;
+}
+
+
+/*
+** Check schema cookies in all databases.  If any cookie is out
+** of date, return 0.  If all schema cookies are current, return 1.
+*/
+static int schemaIsValid(sqlite3 *db){
+  int iDb;
+  int rc;
+  BtCursor *curTemp;
+  int cookie;
+  int allOk = 1;
+
+  assert( sqlite3_mutex_held(db->mutex) );
+  for(iDb=0; allOk && iDb<db->nDb; iDb++){
+    Btree *pBt;
+    pBt = db->aDb[iDb].pBt;
+    if( pBt==0 ) continue;
+    rc = sqlite3BtreeCursor(pBt, MASTER_ROOT, 0, 0, 0, &curTemp);
+    if( rc==SQLITE_OK ){
+      rc = sqlite3BtreeGetMeta(pBt, 1, (u32 *)&cookie);
+      if( rc==SQLITE_OK && cookie!=db->aDb[iDb].pSchema->schema_cookie ){
+        allOk = 0;
+      }
+      sqlite3BtreeCloseCursor(curTemp);
+    }
+    if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ){
+      db->mallocFailed = 1;
+    }
+  }
+  return allOk;
+}
+
+/*
+** Convert a schema pointer into the iDb index that indicates
+** which database file in db->aDb[] the schema refers to.
+**
+** If the same database is attached more than once, the first
+** attached database is returned.
+*/
+int sqlite3SchemaToIndex(sqlite3 *db, Schema *pSchema){
+  int i = -1000000;
+
+  /* If pSchema is NULL, then return -1000000. This happens when code in
+  ** expr.c is trying to resolve a reference to a transient table (i.e. one
+  ** created by a sub-select).  In this case the return value of this
+  ** function should never be used.
+  **
+  ** We return -1000000 instead of the more usual -1 simply because using
+  ** -1000000 as incorrectly using -1000000 index into db->aDb[] is much
+  ** more likely to cause a segfault than -1 (of course there are assert()
+  ** statements too, but it never hurts to play the odds).
+  */
+  assert( sqlite3_mutex_held(db->mutex) );
+  if( pSchema ){
+    for(i=0; i<db->nDb; i++){
+      if( db->aDb[i].pSchema==pSchema ){
+        break;
+      }
+    }
+    assert( i>=0 &&i>=0 && i<db->nDb );
+  }
+  return i;
+}
+
+/*
+** Compile the UTF-8 encoded SQL statement zSql into a statement handle.
+*/
+static int sqlite3Prepare(
+  sqlite3 *db,              /* Database handle. */
+  const char *zSql,         /* UTF-8 encoded SQL statement. */
+  int nBytes,               /* Length of zSql in bytes. */
+  int saveSqlFlag,          /* True to copy SQL text into the sqlite3_stmt */
+  sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
+  const char **pzTail       /* OUT: End of parsed string */
+){
+  Parse sParse;
+  char *zErrMsg = 0;
+  int rc = SQLITE_OK;
+  int i;
+
+  assert( ppStmt );
+  *ppStmt = 0;
+  if( sqlite3SafetyOn(db) ){
+    return SQLITE_MISUSE;
+  }
+  assert( !db->mallocFailed );
+  assert( sqlite3_mutex_held(db->mutex) );
+
+  /* If any attached database schemas are locked, do not proceed with
+  ** compilation. Instead return SQLITE_LOCKED immediately.
+ */ + for(i=0; inDb; i++) { + Btree *pBt = db->aDb[i].pBt; + if( pBt ){ + int rc; + rc = sqlite3BtreeSchemaLocked(pBt); + if( rc ){ + const char *zDb = db->aDb[i].zName; + sqlite3Error(db, SQLITE_LOCKED, "database schema is locked: %s", zDb); + (void)sqlite3SafetyOff(db); + return SQLITE_LOCKED; + } + } + } + + memset(&sParse, 0, sizeof(sParse)); + sParse.db = db; + if( nBytes>=0 && zSql[nBytes]!=0 ){ + char *zSqlCopy; + if( SQLITE_MAX_SQL_LENGTH>0 && nBytes>SQLITE_MAX_SQL_LENGTH ){ + sqlite3Error(db, SQLITE_TOOBIG, "statement too long"); + (void)sqlite3SafetyOff(db); + return SQLITE_TOOBIG; + } + zSqlCopy = sqlite3DbStrNDup(db, zSql, nBytes); + if( zSqlCopy ){ + sqlite3RunParser(&sParse, zSqlCopy, &zErrMsg); + sqlite3_free(zSqlCopy); + } + sParse.zTail = &zSql[nBytes]; + }else{ + sqlite3RunParser(&sParse, zSql, &zErrMsg); + } + + if( db->mallocFailed ){ + sParse.rc = SQLITE_NOMEM; + } + if( sParse.rc==SQLITE_DONE ) sParse.rc = SQLITE_OK; + if( sParse.checkSchema && !schemaIsValid(db) ){ + sParse.rc = SQLITE_SCHEMA; + } + if( sParse.rc==SQLITE_SCHEMA ){ + sqlite3ResetInternalSchema(db, 0); + } + if( db->mallocFailed ){ + sParse.rc = SQLITE_NOMEM; + } + if( pzTail ){ + *pzTail = sParse.zTail; + } + rc = sParse.rc; + +#ifndef SQLITE_OMIT_EXPLAIN + if( rc==SQLITE_OK && sParse.pVdbe && sParse.explain ){ + if( sParse.explain==2 ){ + sqlite3VdbeSetNumCols(sParse.pVdbe, 3); + sqlite3VdbeSetColName(sParse.pVdbe, 0, COLNAME_NAME, "order", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 1, COLNAME_NAME, "from", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 2, COLNAME_NAME, "detail", P4_STATIC); + }else{ + sqlite3VdbeSetNumCols(sParse.pVdbe, 8); + sqlite3VdbeSetColName(sParse.pVdbe, 0, COLNAME_NAME, "addr", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 1, COLNAME_NAME, "opcode", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 2, COLNAME_NAME, "p1", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 3, COLNAME_NAME, "p2", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 4, COLNAME_NAME, "p3", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 5, COLNAME_NAME, "p4", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 6, COLNAME_NAME, "p5", P4_STATIC); + sqlite3VdbeSetColName(sParse.pVdbe, 7, COLNAME_NAME, "comment",P4_STATIC); + } + } +#endif + + if( sqlite3SafetyOff(db) ){ + rc = SQLITE_MISUSE; + } + + if( saveSqlFlag ){ + sqlite3VdbeSetSql(sParse.pVdbe, zSql, sParse.zTail - zSql); + } + if( rc!=SQLITE_OK || db->mallocFailed ){ + sqlite3_finalize((sqlite3_stmt*)sParse.pVdbe); + assert(!(*ppStmt)); + }else{ + *ppStmt = (sqlite3_stmt*)sParse.pVdbe; + } + + if( zErrMsg ){ + sqlite3Error(db, rc, "%s", zErrMsg); + sqlite3_free(zErrMsg); + }else{ + sqlite3Error(db, rc, 0); + } + + rc = sqlite3ApiExit(db, rc); + assert( (rc&db->errMask)==rc ); + return rc; +} +static int sqlite3LockAndPrepare( + sqlite3 *db, /* Database handle. */ + const char *zSql, /* UTF-8 encoded SQL statement. */ + int nBytes, /* Length of zSql in bytes. */ + int saveSqlFlag, /* True to copy SQL text into the sqlite3_stmt */ + sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ + const char **pzTail /* OUT: End of parsed string */ +){ + int rc; + if( !sqlite3SafetyCheckOk(db) ){ + return SQLITE_MISUSE; + } + sqlite3_mutex_enter(db->mutex); + sqlite3BtreeEnterAll(db); + rc = sqlite3Prepare(db, zSql, nBytes, saveSqlFlag, ppStmt, pzTail); + sqlite3BtreeLeaveAll(db); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +/* +** Rerun the compilation of a statement after a schema change. 
+** Return true if the statement was recompiled successfully. +** Return false if there is an error of some kind. +*/ +int sqlite3Reprepare(Vdbe *p){ + int rc; + sqlite3_stmt *pNew; + const char *zSql; + sqlite3 *db; + + assert( sqlite3_mutex_held(sqlite3VdbeDb(p)->mutex) ); + zSql = sqlite3_sql((sqlite3_stmt *)p); + assert( zSql!=0 ); /* Reprepare only called for prepare_v2() statements */ + db = sqlite3VdbeDb(p); + assert( sqlite3_mutex_held(db->mutex) ); + rc = sqlite3LockAndPrepare(db, zSql, -1, 0, &pNew, 0); + if( rc ){ + if( rc==SQLITE_NOMEM ){ + db->mallocFailed = 1; + } + assert( pNew==0 ); + return 0; + }else{ + assert( pNew!=0 ); + } + sqlite3VdbeSwap((Vdbe*)pNew, p); + sqlite3_transfer_bindings(pNew, (sqlite3_stmt*)p); + sqlite3VdbeResetStepResult((Vdbe*)pNew); + sqlite3VdbeFinalize((Vdbe*)pNew); + return 1; +} + + +/* +** Two versions of the official API. Legacy and new use. In the legacy +** version, the original SQL text is not saved in the prepared statement +** and so if a schema change occurs, SQLITE_SCHEMA is returned by +** sqlite3_step(). In the new version, the original SQL text is retained +** and the statement is automatically recompiled if an schema change +** occurs. +*/ +int sqlite3_prepare( + sqlite3 *db, /* Database handle. */ + const char *zSql, /* UTF-8 encoded SQL statement. */ + int nBytes, /* Length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ + const char **pzTail /* OUT: End of parsed string */ +){ + int rc; + rc = sqlite3LockAndPrepare(db,zSql,nBytes,0,ppStmt,pzTail); + assert( rc==SQLITE_OK || ppStmt==0 || *ppStmt==0 ); /* VERIFY: F13021 */ + return rc; +} +int sqlite3_prepare_v2( + sqlite3 *db, /* Database handle. */ + const char *zSql, /* UTF-8 encoded SQL statement. */ + int nBytes, /* Length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ + const char **pzTail /* OUT: End of parsed string */ +){ + int rc; + rc = sqlite3LockAndPrepare(db,zSql,nBytes,1,ppStmt,pzTail); + assert( rc==SQLITE_OK || ppStmt==0 || *ppStmt==0 ); /* VERIFY: F13021 */ + return rc; +} + + +#ifndef SQLITE_OMIT_UTF16 +/* +** Compile the UTF-16 encoded SQL statement zSql into a statement handle. +*/ +static int sqlite3Prepare16( + sqlite3 *db, /* Database handle. */ + const void *zSql, /* UTF-8 encoded SQL statement. */ + int nBytes, /* Length of zSql in bytes. */ + int saveSqlFlag, /* True to save SQL text into the sqlite3_stmt */ + sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ + const void **pzTail /* OUT: End of parsed string */ +){ + /* This function currently works by first transforming the UTF-16 + ** encoded string to UTF-8, then invoking sqlite3_prepare(). The + ** tricky bit is figuring out the pointer to return in *pzTail. + */ + char *zSql8; + const char *zTail8 = 0; + int rc = SQLITE_OK; + + if( !sqlite3SafetyCheckOk(db) ){ + return SQLITE_MISUSE; + } + sqlite3_mutex_enter(db->mutex); + zSql8 = sqlite3Utf16to8(db, zSql, nBytes); + if( zSql8 ){ + rc = sqlite3LockAndPrepare(db, zSql8, -1, saveSqlFlag, ppStmt, &zTail8); + } + + if( zTail8 && pzTail ){ + /* If sqlite3_prepare returns a tail pointer, we calculate the + ** equivalent pointer into the UTF-16 string by counting the unicode + ** characters between zSql8 and zTail8, and then returning a pointer + ** the same number of characters into the UTF-16 string. 
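To make the legacy/v2 distinction described above concrete, here is a minimal caller sketch using only public API calls (the query, the function name, and the output parameter are hypothetical): with sqlite3_prepare_v2() the SQL text is retained, so sqlite3_step() can recompile transparently after a schema change, whereas a statement from the legacy sqlite3_prepare() would instead report SQLITE_SCHEMA.

#include "sqlite3.h"

/* Sketch only: prepare with the "v2" interface, read one row, finalize. */
static int count_master_rows(sqlite3 *db, int *pnRow){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, "SELECT count(*) FROM sqlite_master",
                              -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    *pnRow = sqlite3_column_int(pStmt, 0);  /* first (and only) column */
  }
  return sqlite3_finalize(pStmt);           /* releases the statement */
}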
+ */ + int chars_parsed = sqlite3Utf8CharLen(zSql8, zTail8-zSql8); + *pzTail = (u8 *)zSql + sqlite3Utf16ByteLen(zSql, chars_parsed); + } + sqlite3_free(zSql8); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} + +/* +** Two versions of the official API. Legacy and new use. In the legacy +** version, the original SQL text is not saved in the prepared statement +** and so if a schema change occurs, SQLITE_SCHEMA is returned by +** sqlite3_step(). In the new version, the original SQL text is retained +** and the statement is automatically recompiled if an schema change +** occurs. +*/ +int sqlite3_prepare16( + sqlite3 *db, /* Database handle. */ + const void *zSql, /* UTF-8 encoded SQL statement. */ + int nBytes, /* Length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ + const void **pzTail /* OUT: End of parsed string */ +){ + int rc; + rc = sqlite3Prepare16(db,zSql,nBytes,0,ppStmt,pzTail); + assert( rc==SQLITE_OK || ppStmt==0 || *ppStmt==0 ); /* VERIFY: F13021 */ + return rc; +} +int sqlite3_prepare16_v2( + sqlite3 *db, /* Database handle. */ + const void *zSql, /* UTF-8 encoded SQL statement. */ + int nBytes, /* Length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ + const void **pzTail /* OUT: End of parsed string */ +){ + int rc; + rc = sqlite3Prepare16(db,zSql,nBytes,1,ppStmt,pzTail); + assert( rc==SQLITE_OK || ppStmt==0 || *ppStmt==0 ); /* VERIFY: F13021 */ + return rc; +} + +#endif /* SQLITE_OMIT_UTF16 */ Added: external/sqlite-source-3.5.7.x/printf.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/printf.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,904 @@ +/* +** The "printf" code that follows dates from the 1980's. It is in +** the public domain. The original comments are included here for +** completeness. They are very out-of-date but might be useful as +** an historical reference. Most of the "enhancements" have been backed +** out so that the functionality is now the same as standard printf(). +** +************************************************************************** +** +** The following modules is an enhanced replacement for the "printf" subroutines +** found in the standard C library. The following enhancements are +** supported: +** +** + Additional functions. The standard set of "printf" functions +** includes printf, fprintf, sprintf, vprintf, vfprintf, and +** vsprintf. This module adds the following: +** +** * snprintf -- Works like sprintf, but has an extra argument +** which is the size of the buffer written to. +** +** * mprintf -- Similar to sprintf. Writes output to memory +** obtained from malloc. +** +** * xprintf -- Calls a function to dispose of output. +** +** * nprintf -- No output, but returns the number of characters +** that would have been output by printf. +** +** * A v- version (ex: vsnprintf) of every function is also +** supplied. +** +** + A few extensions to the formatting notation are supported: +** +** * The "=" flag (similar to "-") causes the output to be +** be centered in the appropriately sized field. +** +** * The %b field outputs an integer in binary notation. +** +** * The %c field now accepts a precision. The character output +** is repeated by the number of times the precision specifies. +** +** * The %' field works like %c, but takes as its character the +** next character of the format string, instead of the next +** argument. 
For example, printf("%.78'-") prints 78 minus +** signs, the same as printf("%.78c",'-'). +** +** + When compiled using GCC on a SPARC, this version of printf is +** faster than the library printf for SUN OS 4.1. +** +** + All functions are fully reentrant. +** +*/ +#include "sqliteInt.h" + +/* +** Conversion types fall into various categories as defined by the +** following enumeration. +*/ +#define etRADIX 1 /* Integer types. %d, %x, %o, and so forth */ +#define etFLOAT 2 /* Floating point. %f */ +#define etEXP 3 /* Exponentional notation. %e and %E */ +#define etGENERIC 4 /* Floating or exponential, depending on exponent. %g */ +#define etSIZE 5 /* Return number of characters processed so far. %n */ +#define etSTRING 6 /* Strings. %s */ +#define etDYNSTRING 7 /* Dynamically allocated strings. %z */ +#define etPERCENT 8 /* Percent symbol. %% */ +#define etCHARX 9 /* Characters. %c */ +/* The rest are extensions, not normally found in printf() */ +#define etCHARLIT 10 /* Literal characters. %' */ +#define etSQLESCAPE 11 /* Strings with '\'' doubled. %q */ +#define etSQLESCAPE2 12 /* Strings with '\'' doubled and enclosed in '', + NULL pointers replaced by SQL NULL. %Q */ +#define etTOKEN 13 /* a pointer to a Token structure */ +#define etSRCLIST 14 /* a pointer to a SrcList */ +#define etPOINTER 15 /* The %p conversion */ +#define etSQLESCAPE3 16 /* %w -> Strings with '\"' doubled */ +#define etORDINAL 17 /* %r -> 1st, 2nd, 3rd, 4th, etc. English only */ + + +/* +** An "etByte" is an 8-bit unsigned value. +*/ +typedef unsigned char etByte; + +/* +** Each builtin conversion character (ex: the 'd' in "%d") is described +** by an instance of the following structure +*/ +typedef struct et_info { /* Information about each format field */ + char fmttype; /* The format field code letter */ + etByte base; /* The base for radix conversion */ + etByte flags; /* One or more of FLAG_ constants below */ + etByte type; /* Conversion paradigm */ + etByte charset; /* Offset into aDigits[] of the digits string */ + etByte prefix; /* Offset into aPrefix[] of the prefix string */ +} et_info; + +/* +** Allowed values for et_info.flags +*/ +#define FLAG_SIGNED 1 /* True if the value to convert is signed */ +#define FLAG_INTERN 2 /* True if for internal use only */ +#define FLAG_STRING 4 /* Allow infinity precision */ + + +/* +** The following table is searched linearly, so it is good to put the +** most frequently used conversion types first. 
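For a sense of how the SQL-quoting conversions listed above (%q, %Q, %w, %z) are used in practice, the following hypothetical sketch goes through the public sqlite3_mprintf() wrapper built on this routine; the table and column names are examples only. %q doubles embedded single quotes, and %Q additionally supplies the surrounding quotes or prints NULL for a NULL pointer.

#include "sqlite3.h"

/* Sketch only: build an INSERT statement with a safely quoted string.
** The returned buffer comes from the SQLite allocator and must be
** released with sqlite3_free(). */
static char *make_insert(const char *zName){
  /* If zName is "O'Malley" this yields:
  **   INSERT INTO t1(name) VALUES('O''Malley')
  ** and if zName is a NULL pointer, %Q emits the keyword NULL. */
  return sqlite3_mprintf("INSERT INTO t1(name) VALUES(%Q)", zName);
}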
+*/ +static const char aDigits[] = "0123456789ABCDEF0123456789abcdef"; +static const char aPrefix[] = "-x0\000X0"; +static const et_info fmtinfo[] = { + { 'd', 10, 1, etRADIX, 0, 0 }, + { 's', 0, 4, etSTRING, 0, 0 }, + { 'g', 0, 1, etGENERIC, 30, 0 }, + { 'z', 0, 4, etDYNSTRING, 0, 0 }, + { 'q', 0, 4, etSQLESCAPE, 0, 0 }, + { 'Q', 0, 4, etSQLESCAPE2, 0, 0 }, + { 'w', 0, 4, etSQLESCAPE3, 0, 0 }, + { 'c', 0, 0, etCHARX, 0, 0 }, + { 'o', 8, 0, etRADIX, 0, 2 }, + { 'u', 10, 0, etRADIX, 0, 0 }, + { 'x', 16, 0, etRADIX, 16, 1 }, + { 'X', 16, 0, etRADIX, 0, 4 }, +#ifndef SQLITE_OMIT_FLOATING_POINT + { 'f', 0, 1, etFLOAT, 0, 0 }, + { 'e', 0, 1, etEXP, 30, 0 }, + { 'E', 0, 1, etEXP, 14, 0 }, + { 'G', 0, 1, etGENERIC, 14, 0 }, +#endif + { 'i', 10, 1, etRADIX, 0, 0 }, + { 'n', 0, 0, etSIZE, 0, 0 }, + { '%', 0, 0, etPERCENT, 0, 0 }, + { 'p', 16, 0, etPOINTER, 0, 1 }, + { 'T', 0, 2, etTOKEN, 0, 0 }, + { 'S', 0, 2, etSRCLIST, 0, 0 }, + { 'r', 10, 3, etORDINAL, 0, 0 }, +}; +#define etNINFO (sizeof(fmtinfo)/sizeof(fmtinfo[0])) + +/* +** If SQLITE_OMIT_FLOATING_POINT is defined, then none of the floating point +** conversions will work. +*/ +#ifndef SQLITE_OMIT_FLOATING_POINT +/* +** "*val" is a double such that 0.1 <= *val < 10.0 +** Return the ascii code for the leading digit of *val, then +** multiply "*val" by 10.0 to renormalize. +** +** Example: +** input: *val = 3.14159 +** output: *val = 1.4159 function return = '3' +** +** The counter *cnt is incremented each time. After counter exceeds +** 16 (the number of significant digits in a 64-bit float) '0' is +** always returned. +*/ +static int et_getdigit(LONGDOUBLE_TYPE *val, int *cnt){ + int digit; + LONGDOUBLE_TYPE d; + if( (*cnt)++ >= 16 ) return '0'; + digit = (int)*val; + d = digit; + digit += '0'; + *val = (*val - d)*10.0; + return digit; +} +#endif /* SQLITE_OMIT_FLOATING_POINT */ + +/* +** Append N space characters to the given string buffer. +*/ +static void appendSpace(StrAccum *pAccum, int N){ + static const char zSpaces[] = " "; + while( N>=sizeof(zSpaces)-1 ){ + sqlite3StrAccumAppend(pAccum, zSpaces, sizeof(zSpaces)-1); + N -= sizeof(zSpaces)-1; + } + if( N>0 ){ + sqlite3StrAccumAppend(pAccum, zSpaces, N); + } +} + +/* +** On machines with a small stack size, you can redefine the +** SQLITE_PRINT_BUF_SIZE to be less than 350. But beware - for +** smaller values some %f conversions may go into an infinite loop. +*/ +#ifndef SQLITE_PRINT_BUF_SIZE +# define SQLITE_PRINT_BUF_SIZE 350 +#endif +#define etBUFSIZE SQLITE_PRINT_BUF_SIZE /* Size of the output buffer */ + +/* +** The root program. All variations call this core. +** +** INPUTS: +** func This is a pointer to a function taking three arguments +** 1. A pointer to anything. Same as the "arg" parameter. +** 2. A pointer to the list of characters to be output +** (Note, this list is NOT null terminated.) +** 3. An integer number of characters to be output. +** (Note: This number might be zero.) +** +** arg This is the pointer to anything which will be passed as the +** first argument to "func". Use it for whatever you like. +** +** fmt This is the format string, as in the usual print. +** +** ap This is a pointer to a list of arguments. Same as in +** vfprint. +** +** OUTPUTS: +** The return value is the total number of characters sent to +** the function "func". Returns -1 on a error. +** +** Note that the order in which automatic variables are declared below +** seems to make a big difference in determining how fast this beast +** will run. 
+*/ +static void vxprintf( + StrAccum *pAccum, /* Accumulate results here */ + int useExtended, /* Allow extended %-conversions */ + const char *fmt, /* Format string */ + va_list ap /* arguments */ +){ + int c; /* Next character in the format string */ + char *bufpt; /* Pointer to the conversion buffer */ + int precision; /* Precision of the current field */ + int length; /* Length of the field */ + int idx; /* A general purpose loop counter */ + int width; /* Width of the current field */ + etByte flag_leftjustify; /* True if "-" flag is present */ + etByte flag_plussign; /* True if "+" flag is present */ + etByte flag_blanksign; /* True if " " flag is present */ + etByte flag_alternateform; /* True if "#" flag is present */ + etByte flag_altform2; /* True if "!" flag is present */ + etByte flag_zeropad; /* True if field width constant starts with zero */ + etByte flag_long; /* True if "l" flag is present */ + etByte flag_longlong; /* True if the "ll" flag is present */ + etByte done; /* Loop termination flag */ + sqlite_uint64 longvalue; /* Value for integer types */ + LONGDOUBLE_TYPE realvalue; /* Value for real types */ + const et_info *infop; /* Pointer to the appropriate info structure */ + char buf[etBUFSIZE]; /* Conversion buffer */ + char prefix; /* Prefix character. "+" or "-" or " " or '\0'. */ + etByte errorflag = 0; /* True if an error is encountered */ + etByte xtype; /* Conversion paradigm */ + char *zExtra; /* Extra memory used for etTCLESCAPE conversions */ +#ifndef SQLITE_OMIT_FLOATING_POINT + int exp, e2; /* exponent of real numbers */ + double rounder; /* Used for rounding floating point values */ + etByte flag_dp; /* True if decimal point should be shown */ + etByte flag_rtz; /* True if trailing zeros should be removed */ + etByte flag_exp; /* True to force display of the exponent */ + int nsd; /* Number of significant digits returned */ +#endif + + length = 0; + bufpt = 0; + for(; (c=(*fmt))!=0; ++fmt){ + if( c!='%' ){ + int amt; + bufpt = (char *)fmt; + amt = 1; + while( (c=(*++fmt))!='%' && c!=0 ) amt++; + sqlite3StrAccumAppend(pAccum, bufpt, amt); + if( c==0 ) break; + } + if( (c=(*++fmt))==0 ){ + errorflag = 1; + sqlite3StrAccumAppend(pAccum, "%", 1); + break; + } + /* Find out what flags are present */ + flag_leftjustify = flag_plussign = flag_blanksign = + flag_alternateform = flag_altform2 = flag_zeropad = 0; + done = 0; + do{ + switch( c ){ + case '-': flag_leftjustify = 1; break; + case '+': flag_plussign = 1; break; + case ' ': flag_blanksign = 1; break; + case '#': flag_alternateform = 1; break; + case '!': flag_altform2 = 1; break; + case '0': flag_zeropad = 1; break; + default: done = 1; break; + } + }while( !done && (c=(*++fmt))!=0 ); + /* Get the field width */ + width = 0; + if( c=='*' ){ + width = va_arg(ap,int); + if( width<0 ){ + flag_leftjustify = 1; + width = -width; + } + c = *++fmt; + }else{ + while( c>='0' && c<='9' ){ + width = width*10 + c - '0'; + c = *++fmt; + } + } + if( width > etBUFSIZE-10 ){ + width = etBUFSIZE-10; + } + /* Get the precision */ + if( c=='.' 
){ + precision = 0; + c = *++fmt; + if( c=='*' ){ + precision = va_arg(ap,int); + if( precision<0 ) precision = -precision; + c = *++fmt; + }else{ + while( c>='0' && c<='9' ){ + precision = precision*10 + c - '0'; + c = *++fmt; + } + } + }else{ + precision = -1; + } + /* Get the conversion type modifier */ + if( c=='l' ){ + flag_long = 1; + c = *++fmt; + if( c=='l' ){ + flag_longlong = 1; + c = *++fmt; + }else{ + flag_longlong = 0; + } + }else{ + flag_long = flag_longlong = 0; + } + /* Fetch the info entry for the field */ + infop = 0; + for(idx=0; idxflags & FLAG_INTERN)==0 ){ + xtype = infop->type; + }else{ + return; + } + break; + } + } + zExtra = 0; + if( infop==0 ){ + return; + } + + + /* Limit the precision to prevent overflowing buf[] during conversion */ + if( precision>etBUFSIZE-40 && (infop->flags & FLAG_STRING)==0 ){ + precision = etBUFSIZE-40; + } + + /* + ** At this point, variables are initialized as follows: + ** + ** flag_alternateform TRUE if a '#' is present. + ** flag_altform2 TRUE if a '!' is present. + ** flag_plussign TRUE if a '+' is present. + ** flag_leftjustify TRUE if a '-' is present or if the + ** field width was negative. + ** flag_zeropad TRUE if the width began with 0. + ** flag_long TRUE if the letter 'l' (ell) prefixed + ** the conversion character. + ** flag_longlong TRUE if the letter 'll' (ell ell) prefixed + ** the conversion character. + ** flag_blanksign TRUE if a ' ' is present. + ** width The specified field width. This is + ** always non-negative. Zero is the default. + ** precision The specified precision. The default + ** is -1. + ** xtype The class of the conversion. + ** infop Pointer to the appropriate info struct. + */ + switch( xtype ){ + case etPOINTER: + flag_longlong = sizeof(char*)==sizeof(i64); + flag_long = sizeof(char*)==sizeof(long int); + /* Fall through into the next case */ + case etORDINAL: + case etRADIX: + if( infop->flags & FLAG_SIGNED ){ + i64 v; + if( flag_longlong ) v = va_arg(ap,i64); + else if( flag_long ) v = va_arg(ap,long int); + else v = va_arg(ap,int); + if( v<0 ){ + longvalue = -v; + prefix = '-'; + }else{ + longvalue = v; + if( flag_plussign ) prefix = '+'; + else if( flag_blanksign ) prefix = ' '; + else prefix = 0; + } + }else{ + if( flag_longlong ) longvalue = va_arg(ap,u64); + else if( flag_long ) longvalue = va_arg(ap,unsigned long int); + else longvalue = va_arg(ap,unsigned int); + prefix = 0; + } + if( longvalue==0 ) flag_alternateform = 0; + if( flag_zeropad && precision=4 || (longvalue/10)%10==1 ){ + x = 0; + } + buf[etBUFSIZE-3] = zOrd[x*2]; + buf[etBUFSIZE-2] = zOrd[x*2+1]; + bufpt -= 2; + } + { + register const char *cset; /* Use registers for speed */ + register int base; + cset = &aDigits[infop->charset]; + base = infop->base; + do{ /* Convert to ascii */ + *(--bufpt) = cset[longvalue%base]; + longvalue = longvalue/base; + }while( longvalue>0 ); + } + length = &buf[etBUFSIZE-1]-bufpt; + for(idx=precision-length; idx>0; idx--){ + *(--bufpt) = '0'; /* Zero pad */ + } + if( prefix ) *(--bufpt) = prefix; /* Add sign */ + if( flag_alternateform && infop->prefix ){ /* Add "0" or "0x" */ + const char *pre; + char x; + pre = &aPrefix[infop->prefix]; + if( *bufpt!=pre[0] ){ + for(; (x=(*pre))!=0; pre++) *(--bufpt) = x; + } + } + length = &buf[etBUFSIZE-1]-bufpt; + break; + case etFLOAT: + case etEXP: + case etGENERIC: + realvalue = va_arg(ap,double); +#ifndef SQLITE_OMIT_FLOATING_POINT + if( precision<0 ) precision = 6; /* Set default precision */ + if( precision>etBUFSIZE/2-10 ) precision = etBUFSIZE/2-10; + 
if( realvalue<0.0 ){ + realvalue = -realvalue; + prefix = '-'; + }else{ + if( flag_plussign ) prefix = '+'; + else if( flag_blanksign ) prefix = ' '; + else prefix = 0; + } + if( xtype==etGENERIC && precision>0 ) precision--; +#if 0 + /* Rounding works like BSD when the constant 0.4999 is used. Wierd! */ + for(idx=precision, rounder=0.4999; idx>0; idx--, rounder*=0.1); +#else + /* It makes more sense to use 0.5 */ + for(idx=precision, rounder=0.5; idx>0; idx--, rounder*=0.1){} +#endif + if( xtype==etFLOAT ) realvalue += rounder; + /* Normalize realvalue to within 10.0 > realvalue >= 1.0 */ + exp = 0; + if( sqlite3_isnan(realvalue) ){ + bufpt = "NaN"; + length = 3; + break; + } + if( realvalue>0.0 ){ + while( realvalue>=1e32 && exp<=350 ){ realvalue *= 1e-32; exp+=32; } + while( realvalue>=1e8 && exp<=350 ){ realvalue *= 1e-8; exp+=8; } + while( realvalue>=10.0 && exp<=350 ){ realvalue *= 0.1; exp++; } + while( realvalue<1e-8 && exp>=-350 ){ realvalue *= 1e8; exp-=8; } + while( realvalue<1.0 && exp>=-350 ){ realvalue *= 10.0; exp--; } + if( exp>350 || exp<-350 ){ + if( prefix=='-' ){ + bufpt = "-Inf"; + }else if( prefix=='+' ){ + bufpt = "+Inf"; + }else{ + bufpt = "Inf"; + } + length = strlen(bufpt); + break; + } + } + bufpt = buf; + /* + ** If the field type is etGENERIC, then convert to either etEXP + ** or etFLOAT, as appropriate. + */ + flag_exp = xtype==etEXP; + if( xtype!=etFLOAT ){ + realvalue += rounder; + if( realvalue>=10.0 ){ realvalue *= 0.1; exp++; } + } + if( xtype==etGENERIC ){ + flag_rtz = !flag_alternateform; + if( exp<-4 || exp>precision ){ + xtype = etEXP; + }else{ + precision = precision - exp; + xtype = etFLOAT; + } + }else{ + flag_rtz = 0; + } + if( xtype==etEXP ){ + e2 = 0; + }else{ + e2 = exp; + } + nsd = 0; + flag_dp = (precision>0) | flag_alternateform | flag_altform2; + /* The sign in front of the number */ + if( prefix ){ + *(bufpt++) = prefix; + } + /* Digits prior to the decimal point */ + if( e2<0 ){ + *(bufpt++) = '0'; + }else{ + for(; e2>=0; e2--){ + *(bufpt++) = et_getdigit(&realvalue,&nsd); + } + } + /* The decimal point */ + if( flag_dp ){ + *(bufpt++) = '.'; + } + /* "0" digits after the decimal point but before the first + ** significant digit of the number */ + for(e2++; e2<0 && precision>0; precision--, e2++){ + *(bufpt++) = '0'; + } + /* Significant digits after the decimal point */ + while( (precision--)>0 ){ + *(bufpt++) = et_getdigit(&realvalue,&nsd); + } + /* Remove trailing zeros and the "." if no digits follow the "." */ + if( flag_rtz && flag_dp ){ + while( bufpt[-1]=='0' ) *(--bufpt) = 0; + assert( bufpt>buf ); + if( bufpt[-1]=='.' ){ + if( flag_altform2 ){ + *(bufpt++) = '0'; + }else{ + *(--bufpt) = 0; + } + } + } + /* Add the "eNNN" suffix */ + if( flag_exp || (xtype==etEXP && exp) ){ + *(bufpt++) = aDigits[infop->charset]; + if( exp<0 ){ + *(bufpt++) = '-'; exp = -exp; + }else{ + *(bufpt++) = '+'; + } + if( exp>=100 ){ + *(bufpt++) = (exp/100)+'0'; /* 100's digit */ + exp %= 100; + } + *(bufpt++) = exp/10+'0'; /* 10's digit */ + *(bufpt++) = exp%10+'0'; /* 1's digit */ + } + *bufpt = 0; + + /* The converted number is in buf[] and zero terminated. Output it. + ** Note that the number is in the usual order, not reversed as with + ** integer conversions. 
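(A hedged aside on the floating-point branch just shown: the value is normalized into the range [1.0, 10.0) while a decimal exponent is tracked, and digits are then peeled off one at a time by truncating to an integer and multiplying the remainder by ten, which is all et_getdigit does. Below is a minimal standalone sketch of that digit-at-a-time idea, independent of SQLite's buffers and rounding flags; the names are illustrative only.)

    #include <stdio.h>

    /* Return the leading decimal digit of *val (assumed in [1.0, 10.0))
    ** and advance *val so the next call yields the following digit. */
    static int next_digit(double *val){
      int d = (int)*val;          /* leading digit */
      *val = (*val - d) * 10.0;   /* shift the remainder up one place */
      return d + '0';
    }

    int main(void){
      double v = 3.14159;         /* must be > 0 for this sketch */
      int exp = 0;
      int i;
      /* Normalize into [1.0, 10.0), tracking the decimal exponent. */
      while( v>=10.0 ){ v *= 0.1; exp++; }
      while( v<1.0 ){ v *= 10.0; exp--; }
      printf("digits: ");
      for(i=0; i<6; i++) putchar(next_digit(&v));
      printf("  exponent: %d\n", exp);
      return 0;
    }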
*/ + length = bufpt-buf; + bufpt = buf; + + /* Special case: Add leading zeros if the flag_zeropad flag is + ** set and we are not left justified */ + if( flag_zeropad && !flag_leftjustify && length < width){ + int i; + int nPad = width - length; + for(i=width; i>=nPad; i--){ + bufpt[i] = bufpt[i-nPad]; + } + i = prefix!=0; + while( nPad-- ) bufpt[i++] = '0'; + length = width; + } +#endif + break; + case etSIZE: + *(va_arg(ap,int*)) = pAccum->nChar; + length = width = 0; + break; + case etPERCENT: + buf[0] = '%'; + bufpt = buf; + length = 1; + break; + case etCHARLIT: + case etCHARX: + c = buf[0] = (xtype==etCHARX ? va_arg(ap,int) : *++fmt); + if( precision>=0 ){ + for(idx=1; idx=0 && precisionetBUFSIZE ){ + bufpt = zExtra = sqlite3_malloc( n ); + if( bufpt==0 ) return; + }else{ + bufpt = buf; + } + j = 0; + if( needQuote ) bufpt[j++] = q; + for(i=0; (ch=escarg[i])!=0; i++){ + bufpt[j++] = ch; + if( ch==q ) bufpt[j++] = ch; + } + if( needQuote ) bufpt[j++] = q; + bufpt[j] = 0; + length = j; + /* The precision is ignored on %q and %Q */ + /* if( precision>=0 && precisionz ){ + sqlite3StrAccumAppend(pAccum, (const char*)pToken->z, pToken->n); + } + length = width = 0; + break; + } + case etSRCLIST: { + SrcList *pSrc = va_arg(ap, SrcList*); + int k = va_arg(ap, int); + struct SrcList_item *pItem = &pSrc->a[k]; + assert( k>=0 && knSrc ); + if( pItem->zDatabase && pItem->zDatabase[0] ){ + sqlite3StrAccumAppend(pAccum, pItem->zDatabase, -1); + sqlite3StrAccumAppend(pAccum, ".", 1); + } + sqlite3StrAccumAppend(pAccum, pItem->zName, -1); + length = width = 0; + break; + } + }/* End switch over the format type */ + /* + ** The text of the conversion is pointed to by "bufpt" and is + ** "length" characters long. The field width is "width". Do + ** the output. + */ + if( !flag_leftjustify ){ + register int nspace; + nspace = width-length; + if( nspace>0 ){ + appendSpace(pAccum, nspace); + } + } + if( length>0 ){ + sqlite3StrAccumAppend(pAccum, bufpt, length); + } + if( flag_leftjustify ){ + register int nspace; + nspace = width-length; + if( nspace>0 ){ + appendSpace(pAccum, nspace); + } + } + if( zExtra ){ + sqlite3_free(zExtra); + } + }/* End for loop over the format string */ +} /* End of function */ + +/* +** Append N bytes of text from z to the StrAccum object. +*/ +void sqlite3StrAccumAppend(StrAccum *p, const char *z, int N){ + if( p->tooBig | p->mallocFailed ){ + return; + } + if( N<0 ){ + N = strlen(z); + } + if( N==0 ){ + return; + } + if( p->nChar+N >= p->nAlloc ){ + char *zNew; + if( !p->useMalloc ){ + p->tooBig = 1; + N = p->nAlloc - p->nChar - 1; + if( N<=0 ){ + return; + } + }else{ + p->nAlloc += p->nAlloc + N + 1; + if( p->nAlloc > SQLITE_MAX_LENGTH ){ + p->nAlloc = SQLITE_MAX_LENGTH; + if( p->nChar+N >= p->nAlloc ){ + sqlite3StrAccumReset(p); + p->tooBig = 1; + return; + } + } + zNew = sqlite3_malloc( p->nAlloc ); + if( zNew ){ + memcpy(zNew, p->zText, p->nChar); + sqlite3StrAccumReset(p); + p->zText = zNew; + }else{ + p->mallocFailed = 1; + sqlite3StrAccumReset(p); + return; + } + } + } + memcpy(&p->zText[p->nChar], z, N); + p->nChar += N; +} + +/* +** Finish off a string by making sure it is zero-terminated. +** Return a pointer to the resulting string. Return a NULL +** pointer if any kind of error was encountered. 
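(A hedged aside on sqlite3StrAccumAppend above: it grows its buffer roughly geometrically, nAlloc += nAlloc + N + 1, and falls back to a caller-supplied fixed buffer when useMalloc is off. The following is a minimal, self-contained accumulator in the same spirit; it is not SQLite's API and the names are invented for illustration.)

    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    typedef struct {
      char *z;        /* buffer, NUL-terminated after every append */
      size_t n;       /* characters used, excluding the NUL */
      size_t alloc;   /* bytes allocated */
    } Accum;

    /* Append N bytes of z to p, growing geometrically as StrAccum does. */
    static int accum_append(Accum *p, const char *z, size_t N){
      if( p->n + N + 1 > p->alloc ){
        size_t newAlloc = p->alloc + p->alloc + N + 1;  /* same growth rule */
        char *zNew = realloc(p->z, newAlloc);
        if( zNew==0 ) return -1;                        /* allocation failed */
        p->z = zNew;
        p->alloc = newAlloc;
      }
      memcpy(&p->z[p->n], z, N);
      p->n += N;
      p->z[p->n] = 0;
      return 0;
    }

    int main(void){
      Accum a = {0, 0, 0};
      accum_append(&a, "SELECT ", 7);
      accum_append(&a, "1;", 2);
      printf("%s\n", a.z);   /* prints: SELECT 1; */
      free(a.z);
      return 0;
    }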
+*/ +char *sqlite3StrAccumFinish(StrAccum *p){ + if( p->zText ){ + p->zText[p->nChar] = 0; + if( p->useMalloc && p->zText==p->zBase ){ + p->zText = sqlite3_malloc( p->nChar+1 ); + if( p->zText ){ + memcpy(p->zText, p->zBase, p->nChar+1); + }else{ + p->mallocFailed = 1; + } + } + } + return p->zText; +} + +/* +** Reset an StrAccum string. Reclaim all malloced memory. +*/ +void sqlite3StrAccumReset(StrAccum *p){ + if( p->zText!=p->zBase ){ + sqlite3_free(p->zText); + p->zText = 0; + } +} + +/* +** Initialize a string accumulator +*/ +static void sqlite3StrAccumInit(StrAccum *p, char *zBase, int n){ + p->zText = p->zBase = zBase; + p->nChar = 0; + p->nAlloc = n; + p->useMalloc = 1; + p->tooBig = 0; + p->mallocFailed = 0; +} + +/* +** Print into memory obtained from sqliteMalloc(). Use the internal +** %-conversion extensions. +*/ +char *sqlite3VMPrintf(sqlite3 *db, const char *zFormat, va_list ap){ + char *z; + char zBase[SQLITE_PRINT_BUF_SIZE]; + StrAccum acc; + sqlite3StrAccumInit(&acc, zBase, sizeof(zBase)); + vxprintf(&acc, 1, zFormat, ap); + z = sqlite3StrAccumFinish(&acc); + if( acc.mallocFailed && db ){ + db->mallocFailed = 1; + } + return z; +} + +/* +** Print into memory obtained from sqliteMalloc(). Use the internal +** %-conversion extensions. +*/ +char *sqlite3MPrintf(sqlite3 *db, const char *zFormat, ...){ + va_list ap; + char *z; + va_start(ap, zFormat); + z = sqlite3VMPrintf(db, zFormat, ap); + va_end(ap); + return z; +} + +/* +** Print into memory obtained from sqlite3_malloc(). Omit the internal +** %-conversion extensions. +*/ +char *sqlite3_vmprintf(const char *zFormat, va_list ap){ + char *z; + char zBase[SQLITE_PRINT_BUF_SIZE]; + StrAccum acc; + sqlite3StrAccumInit(&acc, zBase, sizeof(zBase)); + vxprintf(&acc, 0, zFormat, ap); + z = sqlite3StrAccumFinish(&acc); + return z; +} + +/* +** Print into memory obtained from sqlite3_malloc()(). Omit the internal +** %-conversion extensions. +*/ +char *sqlite3_mprintf(const char *zFormat, ...){ + va_list ap; + char *z; + va_start(ap, zFormat); + z = sqlite3_vmprintf(zFormat, ap); + va_end(ap); + return z; +} + +/* +** sqlite3_snprintf() works like snprintf() except that it ignores the +** current locale settings. This is important for SQLite because we +** are not able to use a "," as the decimal point in place of "." as +** specified by some locales. +*/ +char *sqlite3_snprintf(int n, char *zBuf, const char *zFormat, ...){ + char *z; + va_list ap; + StrAccum acc; + + if( n<=0 ){ + return zBuf; + } + sqlite3StrAccumInit(&acc, zBuf, n); + acc.useMalloc = 0; + va_start(ap,zFormat); + vxprintf(&acc, 0, zFormat, ap); + va_end(ap); + z = sqlite3StrAccumFinish(&acc); + return z; +} + +#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG) || defined(SQLITE_MEMDEBUG) +/* +** A version of printf() that understands %lld. Used for debugging. +** The printf() built into some versions of windows does not understand %lld +** and segfaults if you give it a long long int. 
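(A short usage sketch of the public entry points just shown. sqlite3_mprintf() allocates its result, which must be released with sqlite3_free(); %q doubles embedded single quotes, %Q additionally supplies the surrounding quotes and renders a NULL pointer as the SQL keyword NULL; sqlite3_snprintf() takes the buffer size as its first argument, always zero-terminates, and ignores the locale. The table and values below are made up for illustration.)

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      const char *zName = "O'Reilly";
      char *zSql1, *zSql2;
      char zBuf[32];

      /* %q doubles the single quote; the caller writes the outer quotes. */
      zSql1 = sqlite3_mprintf("INSERT INTO t(name) VALUES('%q')", zName);

      /* %Q adds the quotes itself and turns a NULL pointer into NULL. */
      zSql2 = sqlite3_mprintf("INSERT INTO t(name) VALUES(%Q)", (const char*)0);

      printf("%s\n%s\n", zSql1, zSql2);
      sqlite3_free(zSql1);
      sqlite3_free(zSql2);

      /* sqlite3_snprintf: size first, result always zero-terminated,
      ** and the decimal point is "." regardless of the current locale. */
      sqlite3_snprintf(sizeof(zBuf), zBuf, "pi ~ %.3f", 3.14159);
      printf("%s\n", zBuf);
      return 0;
    }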
+*/ +void sqlite3DebugPrintf(const char *zFormat, ...){ + va_list ap; + StrAccum acc; + char zBuf[500]; + sqlite3StrAccumInit(&acc, zBuf, sizeof(zBuf)); + acc.useMalloc = 0; + va_start(ap,zFormat); + vxprintf(&acc, 0, zFormat, ap); + va_end(ap); + sqlite3StrAccumFinish(&acc); + fprintf(stdout,"%s", zBuf); + fflush(stdout); +} +#endif Added: external/sqlite-source-3.5.7.x/random.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/random.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,121 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code to implement a pseudo-random number +** generator (PRNG) for SQLite. +** +** Random numbers are used by some of the database backends in order +** to generate random integer keys for tables or random filenames. +** +** $Id: random.c,v 1.21 2008/01/16 17:46:38 drh Exp $ +*/ +#include "sqliteInt.h" + + +/* All threads share a single random number generator. +** This structure is the current state of the generator. +*/ +static struct sqlite3PrngType { + unsigned char isInit; /* True if initialized */ + unsigned char i, j; /* State variables */ + unsigned char s[256]; /* State variables */ +} sqlite3Prng; + +/* +** Get a single 8-bit random value from the RC4 PRNG. The Mutex +** must be held while executing this routine. +** +** Why not just use a library random generator like lrand48() for this? +** Because the OP_NewRowid opcode in the VDBE depends on having a very +** good source of random numbers. The lrand48() library function may +** well be good enough. But maybe not. Or maybe lrand48() has some +** subtle problems on some systems that could cause problems. It is hard +** to know. To minimize the risk of problems due to bad lrand48() +** implementations, SQLite uses this random number generator based +** on RC4, which we know works very well. +** +** (Later): Actually, OP_NewRowid does not depend on a good source of +** randomness any more. But we will leave this code in all the same. +*/ +static int randomByte(void){ + unsigned char t; + + + /* Initialize the state of the random number generator once, + ** the first time this routine is called. The seed value does + ** not need to contain a lot of randomness since we are not + ** trying to do secure encryption or anything like that... + ** + ** Nothing in this file or anywhere else in SQLite does any kind of + ** encryption. The RC4 algorithm is being used as a PRNG (pseudo-random + ** number generator) not as an encryption device. 
+ */ + if( !sqlite3Prng.isInit ){ + int i; + char k[256]; + sqlite3Prng.j = 0; + sqlite3Prng.i = 0; + sqlite3OsRandomness(sqlite3_vfs_find(0), 256, k); + for(i=0; i<256; i++){ + sqlite3Prng.s[i] = i; + } + for(i=0; i<256; i++){ + sqlite3Prng.j += sqlite3Prng.s[i] + k[i]; + t = sqlite3Prng.s[sqlite3Prng.j]; + sqlite3Prng.s[sqlite3Prng.j] = sqlite3Prng.s[i]; + sqlite3Prng.s[i] = t; + } + sqlite3Prng.isInit = 1; + } + + /* Generate and return single random byte + */ + sqlite3Prng.i++; + t = sqlite3Prng.s[sqlite3Prng.i]; + sqlite3Prng.j += t; + sqlite3Prng.s[sqlite3Prng.i] = sqlite3Prng.s[sqlite3Prng.j]; + sqlite3Prng.s[sqlite3Prng.j] = t; + t += sqlite3Prng.s[sqlite3Prng.i]; + return sqlite3Prng.s[t]; +} + +/* +** Return N random bytes. +*/ +void sqlite3Randomness(int N, void *pBuf){ + unsigned char *zBuf = pBuf; + static sqlite3_mutex *mutex = 0; + if( mutex==0 ){ + mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_PRNG); + } + sqlite3_mutex_enter(mutex); + while( N-- ){ + *(zBuf++) = randomByte(); + } + sqlite3_mutex_leave(mutex); +} + +#ifdef SQLITE_TEST +/* +** For testing purposes, we sometimes want to preserve the state of +** PRNG and restore the PRNG to its saved state at a later time. +*/ +static struct sqlite3PrngType sqlite3SavedPrng; +void sqlite3SavePrngState(void){ + memcpy(&sqlite3SavedPrng, &sqlite3Prng, sizeof(sqlite3Prng)); +} +void sqlite3RestorePrngState(void){ + memcpy(&sqlite3Prng, &sqlite3SavedPrng, sizeof(sqlite3Prng)); +} +void sqlite3ResetPrngState(void){ + sqlite3Prng.isInit = 0; +} +#endif /* SQLITE_TEST */ Added: external/sqlite-source-3.5.7.x/select.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/select.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,3694 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains C code routines that are called by the parser +** to handle SELECT statements in SQLite. +** +** $Id: select.c,v 1.415 2008/03/04 17:45:01 mlcreech Exp $ +*/ +#include "sqliteInt.h" + + +/* +** Delete all the content of a Select structure but do not deallocate +** the select structure itself. +*/ +static void clearSelect(Select *p){ + sqlite3ExprListDelete(p->pEList); + sqlite3SrcListDelete(p->pSrc); + sqlite3ExprDelete(p->pWhere); + sqlite3ExprListDelete(p->pGroupBy); + sqlite3ExprDelete(p->pHaving); + sqlite3ExprListDelete(p->pOrderBy); + sqlite3SelectDelete(p->pPrior); + sqlite3ExprDelete(p->pLimit); + sqlite3ExprDelete(p->pOffset); +} + +/* +** Initialize a SelectDest structure. +*/ +void sqlite3SelectDestInit(SelectDest *pDest, int eDest, int iParm){ + pDest->eDest = eDest; + pDest->iParm = iParm; + pDest->affinity = 0; + pDest->iMem = 0; +} + + +/* +** Allocate a new Select structure and return a pointer to that +** structure. 
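(A hedged aside on randomByte() above: it is a textbook RC4 keystream generator used purely as a PRNG; a 256-byte permutation is seeded from the VFS entropy source, and each output byte comes from a swap of two state entries followed by an index into the permutation. The sketch below reproduces the same construction in a self-contained program, seeded from a fixed key purely for illustration; it is not SQLite's API and, as the original comment stresses, not an encryption routine.)

    #include <stdio.h>

    static unsigned char s[256];
    static unsigned char si, sj;

    /* Key schedule: initialize the permutation from an arbitrary key. */
    static void rc4_init(const unsigned char *key, int keylen){
      int i;
      unsigned char t;
      si = sj = 0;
      for(i=0; i<256; i++) s[i] = (unsigned char)i;
      for(i=0; i<256; i++){
        sj += s[i] + key[i % keylen];
        t = s[sj]; s[sj] = s[i]; s[i] = t;
      }
      sj = 0;
    }

    /* Produce one pseudo-random byte: the same swap-and-index step
    ** that randomByte() in random.c performs on its shared state. */
    static unsigned char rc4_byte(void){
      unsigned char t;
      si++;
      t = s[si];
      sj += t;
      s[si] = s[sj];
      s[sj] = t;
      t += s[si];
      return s[t];
    }

    int main(void){
      static const unsigned char key[] = "not-really-random-seed";
      int i;
      rc4_init(key, (int)sizeof(key)-1);
      for(i=0; i<8; i++) printf("%02x ", rc4_byte());
      printf("\n");
      return 0;
    }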
+*/ +Select *sqlite3SelectNew( + Parse *pParse, /* Parsing context */ + ExprList *pEList, /* which columns to include in the result */ + SrcList *pSrc, /* the FROM clause -- which tables to scan */ + Expr *pWhere, /* the WHERE clause */ + ExprList *pGroupBy, /* the GROUP BY clause */ + Expr *pHaving, /* the HAVING clause */ + ExprList *pOrderBy, /* the ORDER BY clause */ + int isDistinct, /* true if the DISTINCT keyword is present */ + Expr *pLimit, /* LIMIT value. NULL means not used */ + Expr *pOffset /* OFFSET value. NULL means no offset */ +){ + Select *pNew; + Select standin; + sqlite3 *db = pParse->db; + pNew = sqlite3DbMallocZero(db, sizeof(*pNew) ); + assert( !pOffset || pLimit ); /* Can't have OFFSET without LIMIT. */ + if( pNew==0 ){ + pNew = &standin; + memset(pNew, 0, sizeof(*pNew)); + } + if( pEList==0 ){ + pEList = sqlite3ExprListAppend(pParse, 0, sqlite3Expr(db,TK_ALL,0,0,0), 0); + } + pNew->pEList = pEList; + pNew->pSrc = pSrc; + pNew->pWhere = pWhere; + pNew->pGroupBy = pGroupBy; + pNew->pHaving = pHaving; + pNew->pOrderBy = pOrderBy; + pNew->isDistinct = isDistinct; + pNew->op = TK_SELECT; + assert( pOffset==0 || pLimit!=0 ); + pNew->pLimit = pLimit; + pNew->pOffset = pOffset; + pNew->iLimit = -1; + pNew->iOffset = -1; + pNew->addrOpenEphm[0] = -1; + pNew->addrOpenEphm[1] = -1; + pNew->addrOpenEphm[2] = -1; + if( pNew==&standin) { + clearSelect(pNew); + pNew = 0; + } + return pNew; +} + +/* +** Delete the given Select structure and all of its substructures. +*/ +void sqlite3SelectDelete(Select *p){ + if( p ){ + clearSelect(p); + sqlite3_free(p); + } +} + +/* +** Given 1 to 3 identifiers preceeding the JOIN keyword, determine the +** type of join. Return an integer constant that expresses that type +** in terms of the following bit values: +** +** JT_INNER +** JT_CROSS +** JT_OUTER +** JT_NATURAL +** JT_LEFT +** JT_RIGHT +** +** A full outer join is the combination of JT_LEFT and JT_RIGHT. +** +** If an illegal or unsupported join type is seen, then still return +** a join type, but put an error in the pParse structure. +*/ +int sqlite3JoinType(Parse *pParse, Token *pA, Token *pB, Token *pC){ + int jointype = 0; + Token *apAll[3]; + Token *p; + static const struct { + const char zKeyword[8]; + u8 nChar; + u8 code; + } keywords[] = { + { "natural", 7, JT_NATURAL }, + { "left", 4, JT_LEFT|JT_OUTER }, + { "right", 5, JT_RIGHT|JT_OUTER }, + { "full", 4, JT_LEFT|JT_RIGHT|JT_OUTER }, + { "outer", 5, JT_OUTER }, + { "inner", 5, JT_INNER }, + { "cross", 5, JT_INNER|JT_CROSS }, + }; + int i, j; + apAll[0] = pA; + apAll[1] = pB; + apAll[2] = pC; + for(i=0; i<3 && apAll[i]; i++){ + p = apAll[i]; + for(j=0; jn==keywords[j].nChar + && sqlite3StrNICmp((char*)p->z, keywords[j].zKeyword, p->n)==0 ){ + jointype |= keywords[j].code; + break; + } + } + if( j>=sizeof(keywords)/sizeof(keywords[0]) ){ + jointype |= JT_ERROR; + break; + } + } + if( + (jointype & (JT_INNER|JT_OUTER))==(JT_INNER|JT_OUTER) || + (jointype & JT_ERROR)!=0 + ){ + const char *zSp1 = " "; + const char *zSp2 = " "; + if( pB==0 ){ zSp1++; } + if( pC==0 ){ zSp2++; } + sqlite3ErrorMsg(pParse, "unknown or unsupported join type: " + "%T%s%T%s%T", pA, zSp1, pB, zSp2, pC); + jointype = JT_INNER; + }else if( jointype & JT_RIGHT ){ + sqlite3ErrorMsg(pParse, + "RIGHT and FULL OUTER JOINs are not currently supported"); + jointype = JT_INNER; + } + return jointype; +} + +/* +** Return the index of a column in a table. Return -1 if the column +** is not contained in the table. 
+*/ +static int columnIndex(Table *pTab, const char *zCol){ + int i; + for(i=0; inCol; i++){ + if( sqlite3StrICmp(pTab->aCol[i].zName, zCol)==0 ) return i; + } + return -1; +} + +/* +** Set the value of a token to a '\000'-terminated string. +*/ +static void setToken(Token *p, const char *z){ + p->z = (u8*)z; + p->n = z ? strlen(z) : 0; + p->dyn = 0; +} + +/* +** Set the token to the double-quoted and escaped version of the string pointed +** to by z. For example; +** +** {a"bc} -> {"a""bc"} +*/ +static void setQuotedToken(Parse *pParse, Token *p, const char *z){ + p->z = (u8 *)sqlite3MPrintf(0, "\"%w\"", z); + p->dyn = 1; + if( p->z ){ + p->n = strlen((char *)p->z); + }else{ + pParse->db->mallocFailed = 1; + } +} + +/* +** Create an expression node for an identifier with the name of zName +*/ +Expr *sqlite3CreateIdExpr(Parse *pParse, const char *zName){ + Token dummy; + setToken(&dummy, zName); + return sqlite3PExpr(pParse, TK_ID, 0, 0, &dummy); +} + + +/* +** Add a term to the WHERE expression in *ppExpr that requires the +** zCol column to be equal in the two tables pTab1 and pTab2. +*/ +static void addWhereTerm( + Parse *pParse, /* Parsing context */ + const char *zCol, /* Name of the column */ + const Table *pTab1, /* First table */ + const char *zAlias1, /* Alias for first table. May be NULL */ + const Table *pTab2, /* Second table */ + const char *zAlias2, /* Alias for second table. May be NULL */ + int iRightJoinTable, /* VDBE cursor for the right table */ + Expr **ppExpr /* Add the equality term to this expression */ +){ + Expr *pE1a, *pE1b, *pE1c; + Expr *pE2a, *pE2b, *pE2c; + Expr *pE; + + pE1a = sqlite3CreateIdExpr(pParse, zCol); + pE2a = sqlite3CreateIdExpr(pParse, zCol); + if( zAlias1==0 ){ + zAlias1 = pTab1->zName; + } + pE1b = sqlite3CreateIdExpr(pParse, zAlias1); + if( zAlias2==0 ){ + zAlias2 = pTab2->zName; + } + pE2b = sqlite3CreateIdExpr(pParse, zAlias2); + pE1c = sqlite3PExpr(pParse, TK_DOT, pE1b, pE1a, 0); + pE2c = sqlite3PExpr(pParse, TK_DOT, pE2b, pE2a, 0); + pE = sqlite3PExpr(pParse, TK_EQ, pE1c, pE2c, 0); + if( pE ){ + ExprSetProperty(pE, EP_FromJoin); + pE->iRightJoinTable = iRightJoinTable; + } + *ppExpr = sqlite3ExprAnd(pParse->db,*ppExpr, pE); +} + +/* +** Set the EP_FromJoin property on all terms of the given expression. +** And set the Expr.iRightJoinTable to iTable for every term in the +** expression. +** +** The EP_FromJoin property is used on terms of an expression to tell +** the LEFT OUTER JOIN processing logic that this term is part of the +** join restriction specified in the ON or USING clause and not a part +** of the more general WHERE clause. These terms are moved over to the +** WHERE clause during join processing but we need to remember that they +** originated in the ON or USING clause. +** +** The Expr.iRightJoinTable tells the WHERE clause processing that the +** expression depends on table iRightJoinTable even if that table is not +** explicitly mentioned in the expression. That information is needed +** for cases like this: +** +** SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.b AND t1.x=5 +** +** The where clause needs to defer the handling of the t1.x=5 +** term until after the t2 loop of the join. In that way, a +** NULL t2 row will be inserted whenever t1.x!=5. If we do not +** defer the handling of t1.x=5, it will be processed immediately +** after the t1 loop and rows with t1.x!=5 will never appear in +** the output, which is incorrect. 
+*/ +static void setJoinExpr(Expr *p, int iTable){ + while( p ){ + ExprSetProperty(p, EP_FromJoin); + p->iRightJoinTable = iTable; + setJoinExpr(p->pLeft, iTable); + p = p->pRight; + } +} + +/* +** This routine processes the join information for a SELECT statement. +** ON and USING clauses are converted into extra terms of the WHERE clause. +** NATURAL joins also create extra WHERE clause terms. +** +** The terms of a FROM clause are contained in the Select.pSrc structure. +** The left most table is the first entry in Select.pSrc. The right-most +** table is the last entry. The join operator is held in the entry to +** the left. Thus entry 0 contains the join operator for the join between +** entries 0 and 1. Any ON or USING clauses associated with the join are +** also attached to the left entry. +** +** This routine returns the number of errors encountered. +*/ +static int sqliteProcessJoin(Parse *pParse, Select *p){ + SrcList *pSrc; /* All tables in the FROM clause */ + int i, j; /* Loop counters */ + struct SrcList_item *pLeft; /* Left table being joined */ + struct SrcList_item *pRight; /* Right table being joined */ + + pSrc = p->pSrc; + pLeft = &pSrc->a[0]; + pRight = &pLeft[1]; + for(i=0; inSrc-1; i++, pRight++, pLeft++){ + Table *pLeftTab = pLeft->pTab; + Table *pRightTab = pRight->pTab; + + if( pLeftTab==0 || pRightTab==0 ) continue; + + /* When the NATURAL keyword is present, add WHERE clause terms for + ** every column that the two tables have in common. + */ + if( pRight->jointype & JT_NATURAL ){ + if( pRight->pOn || pRight->pUsing ){ + sqlite3ErrorMsg(pParse, "a NATURAL join may not have " + "an ON or USING clause", 0); + return 1; + } + for(j=0; jnCol; j++){ + char *zName = pLeftTab->aCol[j].zName; + if( columnIndex(pRightTab, zName)>=0 ){ + addWhereTerm(pParse, zName, pLeftTab, pLeft->zAlias, + pRightTab, pRight->zAlias, + pRight->iCursor, &p->pWhere); + + } + } + } + + /* Disallow both ON and USING clauses in the same join + */ + if( pRight->pOn && pRight->pUsing ){ + sqlite3ErrorMsg(pParse, "cannot have both ON and USING " + "clauses in the same join"); + return 1; + } + + /* Add the ON clause to the end of the WHERE clause, connected by + ** an AND operator. + */ + if( pRight->pOn ){ + setJoinExpr(pRight->pOn, pRight->iCursor); + p->pWhere = sqlite3ExprAnd(pParse->db, p->pWhere, pRight->pOn); + pRight->pOn = 0; + } + + /* Create extra terms on the WHERE clause for each column named + ** in the USING clause. Example: If the two tables to be joined are + ** A and B and the USING clause names X, Y, and Z, then add this + ** to the WHERE clause: A.X=B.X AND A.Y=B.Y AND A.Z=B.Z + ** Report an error if any column mentioned in the USING clause is + ** not contained in both tables to be joined. + */ + if( pRight->pUsing ){ + IdList *pList = pRight->pUsing; + for(j=0; jnId; j++){ + char *zName = pList->a[j].zName; + if( columnIndex(pLeftTab, zName)<0 || columnIndex(pRightTab, zName)<0 ){ + sqlite3ErrorMsg(pParse, "cannot join using column %s - column " + "not present in both tables", zName); + return 1; + } + addWhereTerm(pParse, zName, pLeftTab, pLeft->zAlias, + pRightTab, pRight->zAlias, + pRight->iCursor, &p->pWhere); + } + } + } + return 0; +} + +/* +** Insert code into "v" that will push the record on the top of the +** stack into the sorter. 
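(A short observation on sqliteProcessJoin() above: NATURAL and USING joins are folded into ordinary equality terms on the WHERE clause, flagged with EP_FromJoin so LEFT JOIN semantics survive. At the SQL level that means the two spellings below return the same rows; the sketch uses only the public API, and the table names and data are invented.)

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_row(void *unused, int nCol, char **azVal, char **azCol){
      int i;
      (void)unused;
      for(i=0; i<nCol; i++) printf("%s=%s ", azCol[i], azVal[i] ? azVal[i] : "NULL");
      printf("\n");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "CREATE TABLE a(id INTEGER, x TEXT);"
        "CREATE TABLE b(id INTEGER, y TEXT);"
        "INSERT INTO a VALUES(1,'one'); INSERT INTO b VALUES(1,'uno');",
        0, 0, 0);

      /* NATURAL JOIN: joined on the shared column name 'id'... */
      sqlite3_exec(db, "SELECT x, y FROM a NATURAL JOIN b", print_row, 0, 0);

      /* ...which the parser rewrites into an ordinary equality term. */
      sqlite3_exec(db, "SELECT x, y FROM a JOIN b ON a.id=b.id", print_row, 0, 0);

      sqlite3_close(db);
      return 0;
    }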
+*/ +static void pushOntoSorter( + Parse *pParse, /* Parser context */ + ExprList *pOrderBy, /* The ORDER BY clause */ + Select *pSelect, /* The whole SELECT statement */ + int regData /* Register holding data to be sorted */ +){ + Vdbe *v = pParse->pVdbe; + int nExpr = pOrderBy->nExpr; + int regBase = sqlite3GetTempRange(pParse, nExpr+2); + int regRecord = sqlite3GetTempReg(pParse); + sqlite3ExprCodeExprList(pParse, pOrderBy, regBase); + sqlite3VdbeAddOp2(v, OP_Sequence, pOrderBy->iECursor, regBase+nExpr); + sqlite3VdbeAddOp2(v, OP_Move, regData, regBase+nExpr+1); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nExpr + 2, regRecord); + sqlite3VdbeAddOp2(v, OP_IdxInsert, pOrderBy->iECursor, regRecord); + sqlite3ReleaseTempReg(pParse, regRecord); + sqlite3ReleaseTempRange(pParse, regBase, nExpr+2); + if( pSelect->iLimit>=0 ){ + int addr1, addr2; + int iLimit; + if( pSelect->pOffset ){ + iLimit = pSelect->iOffset+1; + }else{ + iLimit = pSelect->iLimit; + } + addr1 = sqlite3VdbeAddOp1(v, OP_IfZero, iLimit); + sqlite3VdbeAddOp2(v, OP_AddImm, iLimit, -1); + addr2 = sqlite3VdbeAddOp0(v, OP_Goto); + sqlite3VdbeJumpHere(v, addr1); + sqlite3VdbeAddOp1(v, OP_Last, pOrderBy->iECursor); + sqlite3VdbeAddOp1(v, OP_Delete, pOrderBy->iECursor); + sqlite3VdbeJumpHere(v, addr2); + pSelect->iLimit = -1; + } +} + +/* +** Add code to implement the OFFSET +*/ +static void codeOffset( + Vdbe *v, /* Generate code into this VM */ + Select *p, /* The SELECT statement being coded */ + int iContinue /* Jump here to skip the current record */ +){ + if( p->iOffset>=0 && iContinue!=0 ){ + int addr; + sqlite3VdbeAddOp2(v, OP_AddImm, p->iOffset, -1); + addr = sqlite3VdbeAddOp1(v, OP_IfNeg, p->iOffset); + sqlite3VdbeAddOp2(v, OP_Goto, 0, iContinue); + VdbeComment((v, "skip OFFSET records")); + sqlite3VdbeJumpHere(v, addr); + } +} + +/* +** Add code that will check to make sure the N registers starting at iMem +** form a distinct entry. iTab is a sorting index that holds previously +** seen combinations of the N values. A new entry is made in iTab +** if the current N values are new. +** +** A jump to addrRepeat is made and the N+1 values are popped from the +** stack if the top N elements are not distinct. +*/ +static void codeDistinct( + Parse *pParse, /* Parsing and code generating context */ + int iTab, /* A sorting index used to test for distinctness */ + int addrRepeat, /* Jump to here if not distinct */ + int N, /* Number of elements */ + int iMem /* First element */ +){ + Vdbe *v; + int r1; + + v = pParse->pVdbe; + r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, iMem, N, r1); + sqlite3VdbeAddOp3(v, OP_Found, iTab, addrRepeat, r1); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iTab, r1); + sqlite3ReleaseTempReg(pParse, r1); +} + +/* +** Generate an error message when a SELECT is used within a subexpression +** (example: "a IN (SELECT * FROM table)") but it has more than 1 result +** column. We do this in a subroutine because the error occurs in multiple +** places. +*/ +static int checkForMultiColumnSelectError( + Parse *pParse, /* Parse context. */ + SelectDest *pDest, /* Destination of SELECT results */ + int nExpr /* Number of result columns returned by SELECT */ +){ + int eDest = pDest->eDest; + if( nExpr>1 && (eDest==SRT_Mem || eDest==SRT_Set) ){ + sqlite3ErrorMsg(pParse, "only a single result allowed for " + "a SELECT that is part of an expression"); + return 1; + }else{ + return 0; + } +} + +/* +** This routine generates the code for the inside of the inner loop +** of a SELECT. 
+** +** If srcTab and nColumn are both zero, then the pEList expressions +** are evaluated in order to get the data for this row. If nColumn>0 +** then data is pulled from srcTab and pEList is used only to get the +** datatypes for each column. +*/ +static void selectInnerLoop( + Parse *pParse, /* The parser context */ + Select *p, /* The complete select statement being coded */ + ExprList *pEList, /* List of values being extracted */ + int srcTab, /* Pull data from this table */ + int nColumn, /* Number of columns in the source table */ + ExprList *pOrderBy, /* If not NULL, sort results using this key */ + int distinct, /* If >=0, make sure results are distinct */ + SelectDest *pDest, /* How to dispose of the results */ + int iContinue, /* Jump here to continue with next row */ + int iBreak, /* Jump here to break out of the inner loop */ + char *aff /* affinity string if eDest is SRT_Union */ +){ + Vdbe *v = pParse->pVdbe; + int i; + int hasDistinct; /* True if the DISTINCT keyword is present */ + int regResult; /* Start of memory holding result set */ + int eDest = pDest->eDest; /* How to dispose of results */ + int iParm = pDest->iParm; /* First argument to disposal method */ + int nResultCol; /* Number of result columns */ + + if( v==0 ) return; + assert( pEList!=0 ); + + /* If there was a LIMIT clause on the SELECT statement, then do the check + ** to see if this row should be output. + */ + hasDistinct = distinct>=0 && pEList->nExpr>0; + if( pOrderBy==0 && !hasDistinct ){ + codeOffset(v, p, iContinue); + } + + /* Pull the requested columns. + */ + if( nColumn>0 ){ + nResultCol = nColumn; + }else{ + nResultCol = pEList->nExpr; + } + if( pDest->iMem==0 ){ + pDest->iMem = sqlite3GetTempRange(pParse, nResultCol); + } + regResult = pDest->iMem; + if( nColumn>0 ){ + for(i=0; ia[i].pExpr, regResult+i); + } + } + nColumn = nResultCol; + + /* If the DISTINCT keyword was present on the SELECT statement + ** and this row has been seen before, then do not make this row + ** part of the result. + */ + if( hasDistinct ){ + assert( pEList!=0 ); + assert( pEList->nExpr==nColumn ); + codeDistinct(pParse, distinct, iContinue, nColumn, regResult); + if( pOrderBy==0 ){ + codeOffset(v, p, iContinue); + } + } + + if( checkForMultiColumnSelectError(pParse, pDest, pEList->nExpr) ){ + return; + } + + switch( eDest ){ + /* In this mode, write each query result to the key of the temporary + ** table iParm. + */ +#ifndef SQLITE_OMIT_COMPOUND_SELECT + case SRT_Union: { + int r1; + r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); + if( aff ){ + sqlite3VdbeChangeP4(v, -1, aff, P4_STATIC); + } + sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, r1); + sqlite3ReleaseTempReg(pParse, r1); + break; + } + + /* Construct a record from the query result, but instead of + ** saving that record, use it as a key to delete elements from + ** the temporary table iParm. + */ + case SRT_Except: { + int r1; + r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); + sqlite3VdbeChangeP4(v, -1, aff, P4_STATIC); + sqlite3VdbeAddOp2(v, OP_IdxDelete, iParm, r1); + sqlite3ReleaseTempReg(pParse, r1); + break; + } +#endif + + /* Store the result as data using a unique key. 
+ */ + case SRT_Table: + case SRT_EphemTab: { + int r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); + if( pOrderBy ){ + pushOntoSorter(pParse, pOrderBy, p, r1); + }else{ + int r2 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp2(v, OP_NewRowid, iParm, r2); + sqlite3VdbeAddOp3(v, OP_Insert, iParm, r1, r2); + sqlite3VdbeChangeP5(v, OPFLAG_APPEND); + sqlite3ReleaseTempReg(pParse, r2); + } + sqlite3ReleaseTempReg(pParse, r1); + break; + } + +#ifndef SQLITE_OMIT_SUBQUERY + /* If we are creating a set for an "expr IN (SELECT ...)" construct, + ** then there should be a single item on the stack. Write this + ** item into the set table with bogus data. + */ + case SRT_Set: { + int addr2; + + assert( nColumn==1 ); + addr2 = sqlite3VdbeAddOp1(v, OP_IsNull, regResult); + p->affinity = sqlite3CompareAffinity(pEList->a[0].pExpr, pDest->affinity); + if( pOrderBy ){ + /* At first glance you would think we could optimize out the + ** ORDER BY in this case since the order of entries in the set + ** does not matter. But there might be a LIMIT clause, in which + ** case the order does matter */ + pushOntoSorter(pParse, pOrderBy, p, regResult); + }else{ + int r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regResult, 1, r1, &p->affinity, 1); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, r1); + sqlite3ReleaseTempReg(pParse, r1); + } + sqlite3VdbeJumpHere(v, addr2); + break; + } + + /* If any row exist in the result set, record that fact and abort. + */ + case SRT_Exists: { + sqlite3VdbeAddOp2(v, OP_Integer, 1, iParm); + /* The LIMIT clause will terminate the loop for us */ + break; + } + + /* If this is a scalar select that is part of an expression, then + ** store the results in the appropriate memory cell and break out + ** of the scan loop. + */ + case SRT_Mem: { + assert( nColumn==1 ); + if( pOrderBy ){ + pushOntoSorter(pParse, pOrderBy, p, regResult); + }else{ + sqlite3VdbeAddOp2(v, OP_Move, regResult, iParm); + /* The LIMIT clause will jump out of the loop for us */ + } + break; + } +#endif /* #ifndef SQLITE_OMIT_SUBQUERY */ + + /* Send the data to the callback function or to a subroutine. In the + ** case of a subroutine, the subroutine itself is responsible for + ** popping the data from the stack. + */ + case SRT_Subroutine: + case SRT_Callback: { + if( pOrderBy ){ + int r1 = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); + pushOntoSorter(pParse, pOrderBy, p, r1); + sqlite3ReleaseTempReg(pParse, r1); + }else if( eDest==SRT_Subroutine ){ + sqlite3VdbeAddOp2(v, OP_Gosub, 0, iParm); + }else{ + sqlite3VdbeAddOp2(v, OP_ResultRow, regResult, nColumn); + } + break; + } + +#if !defined(SQLITE_OMIT_TRIGGER) + /* Discard the results. This is used for SELECT statements inside + ** the body of a TRIGGER. The purpose of such selects is to call + ** user-defined functions that have side effects. We do not care + ** about the actual results of the select. + */ + default: { + assert( eDest==SRT_Discard ); + break; + } +#endif + } + + /* Jump to the end of the loop if the LIMIT is reached. + */ + if( p->iLimit>=0 && pOrderBy==0 ){ + sqlite3VdbeAddOp2(v, OP_AddImm, p->iLimit, -1); + sqlite3VdbeAddOp2(v, OP_IfZero, p->iLimit, iBreak); + } +} + +/* +** Given an expression list, generate a KeyInfo structure that records +** the collating sequence for each expression in that expression list. 
+** +** If the ExprList is an ORDER BY or GROUP BY clause then the resulting +** KeyInfo structure is appropriate for initializing a virtual index to +** implement that clause. If the ExprList is the result set of a SELECT +** then the KeyInfo structure is appropriate for initializing a virtual +** index to implement a DISTINCT test. +** +** Space to hold the KeyInfo structure is obtain from malloc. The calling +** function is responsible for seeing that this structure is eventually +** freed. Add the KeyInfo structure to the P4 field of an opcode using +** P4_KEYINFO_HANDOFF is the usual way of dealing with this. +*/ +static KeyInfo *keyInfoFromExprList(Parse *pParse, ExprList *pList){ + sqlite3 *db = pParse->db; + int nExpr; + KeyInfo *pInfo; + struct ExprList_item *pItem; + int i; + + nExpr = pList->nExpr; + pInfo = sqlite3DbMallocZero(db, sizeof(*pInfo) + nExpr*(sizeof(CollSeq*)+1) ); + if( pInfo ){ + pInfo->aSortOrder = (u8*)&pInfo->aColl[nExpr]; + pInfo->nField = nExpr; + pInfo->enc = ENC(db); + for(i=0, pItem=pList->a; ipExpr); + if( !pColl ){ + pColl = db->pDfltColl; + } + pInfo->aColl[i] = pColl; + pInfo->aSortOrder[i] = pItem->sortOrder; + } + } + return pInfo; +} + + +/* +** If the inner loop was generated using a non-null pOrderBy argument, +** then the results were placed in a sorter. After the loop is terminated +** we need to run the sorter and output the results. The following +** routine generates the code needed to do that. +*/ +static void generateSortTail( + Parse *pParse, /* Parsing context */ + Select *p, /* The SELECT statement */ + Vdbe *v, /* Generate code into this VDBE */ + int nColumn, /* Number of columns of data */ + SelectDest *pDest /* Write the sorted results here */ +){ + int brk = sqlite3VdbeMakeLabel(v); + int cont = sqlite3VdbeMakeLabel(v); + int addr; + int iTab; + int pseudoTab = 0; + ExprList *pOrderBy = p->pOrderBy; + + int eDest = pDest->eDest; + int iParm = pDest->iParm; + + int regRow; + int regRowid; + + iTab = pOrderBy->iECursor; + if( eDest==SRT_Callback || eDest==SRT_Subroutine ){ + pseudoTab = pParse->nTab++; + sqlite3VdbeAddOp2(v, OP_OpenPseudo, pseudoTab, 0); + sqlite3VdbeAddOp2(v, OP_SetNumColumns, pseudoTab, nColumn); + } + addr = 1 + sqlite3VdbeAddOp2(v, OP_Sort, iTab, brk); + codeOffset(v, p, cont); + regRow = sqlite3GetTempReg(pParse); + regRowid = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_Column, iTab, pOrderBy->nExpr + 1, regRow); + switch( eDest ){ + case SRT_Table: + case SRT_EphemTab: { + sqlite3VdbeAddOp2(v, OP_NewRowid, iParm, regRowid); + sqlite3VdbeAddOp3(v, OP_Insert, iParm, regRow, regRowid); + sqlite3VdbeChangeP5(v, OPFLAG_APPEND); + break; + } +#ifndef SQLITE_OMIT_SUBQUERY + case SRT_Set: { + int j1; + assert( nColumn==1 ); + j1 = sqlite3VdbeAddOp1(v, OP_IsNull, regRow); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regRow, 1, regRowid, &p->affinity, 1); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, regRowid); + sqlite3VdbeJumpHere(v, j1); + break; + } + case SRT_Mem: { + assert( nColumn==1 ); + sqlite3VdbeAddOp2(v, OP_Move, regRow, iParm); + /* The LIMIT clause will terminate the loop for us */ + break; + } +#endif + case SRT_Callback: + case SRT_Subroutine: { + int i; + sqlite3VdbeAddOp2(v, OP_Integer, 1, regRowid); + sqlite3VdbeAddOp3(v, OP_Insert, pseudoTab, regRow, regRowid); + for(i=0; iiMem+i); + } + if( eDest==SRT_Callback ){ + sqlite3VdbeAddOp2(v, OP_ResultRow, pDest->iMem, nColumn); + }else{ + sqlite3VdbeAddOp2(v, OP_Gosub, 0, iParm); + } + break; + } + default: { + /* Do nothing */ + break; + } + } + 
sqlite3ReleaseTempReg(pParse, regRow); + sqlite3ReleaseTempReg(pParse, regRowid); + + /* Jump to the end of the loop when the LIMIT is reached + */ + if( p->iLimit>=0 ){ + sqlite3VdbeAddOp2(v, OP_AddImm, p->iLimit, -1); + sqlite3VdbeAddOp2(v, OP_IfZero, p->iLimit, brk); + } + + /* The bottom of the loop + */ + sqlite3VdbeResolveLabel(v, cont); + sqlite3VdbeAddOp2(v, OP_Next, iTab, addr); + sqlite3VdbeResolveLabel(v, brk); + if( eDest==SRT_Callback || eDest==SRT_Subroutine ){ + sqlite3VdbeAddOp2(v, OP_Close, pseudoTab, 0); + } + +} + +/* +** Return a pointer to a string containing the 'declaration type' of the +** expression pExpr. The string may be treated as static by the caller. +** +** The declaration type is the exact datatype definition extracted from the +** original CREATE TABLE statement if the expression is a column. The +** declaration type for a ROWID field is INTEGER. Exactly when an expression +** is considered a column can be complex in the presence of subqueries. The +** result-set expression in all of the following SELECT statements is +** considered a column by this function. +** +** SELECT col FROM tbl; +** SELECT (SELECT col FROM tbl; +** SELECT (SELECT col FROM tbl); +** SELECT abc FROM (SELECT col AS abc FROM tbl); +** +** The declaration type for any expression other than a column is NULL. +*/ +static const char *columnType( + NameContext *pNC, + Expr *pExpr, + const char **pzOriginDb, + const char **pzOriginTab, + const char **pzOriginCol +){ + char const *zType = 0; + char const *zOriginDb = 0; + char const *zOriginTab = 0; + char const *zOriginCol = 0; + int j; + if( pExpr==0 || pNC->pSrcList==0 ) return 0; + + switch( pExpr->op ){ + case TK_AGG_COLUMN: + case TK_COLUMN: { + /* The expression is a column. Locate the table the column is being + ** extracted from in NameContext.pSrcList. This table may be real + ** database table or a subquery. + */ + Table *pTab = 0; /* Table structure column is extracted from */ + Select *pS = 0; /* Select the column is extracted from */ + int iCol = pExpr->iColumn; /* Index of column in pTab */ + while( pNC && !pTab ){ + SrcList *pTabList = pNC->pSrcList; + for(j=0;jnSrc && pTabList->a[j].iCursor!=pExpr->iTable;j++); + if( jnSrc ){ + pTab = pTabList->a[j].pTab; + pS = pTabList->a[j].pSelect; + }else{ + pNC = pNC->pNext; + } + } + + if( pTab==0 ){ + /* FIX ME: + ** This can occurs if you have something like "SELECT new.x;" inside + ** a trigger. In other words, if you reference the special "new" + ** table in the result set of a select. We do not have a good way + ** to find the actual table type, so call it "TEXT". This is really + ** something of a bug, but I do not know how to fix it. + ** + ** This code does not produce the correct answer - it just prevents + ** a segfault. See ticket #1229. + */ + zType = "TEXT"; + break; + } + + assert( pTab ); + if( pS ){ + /* The "table" is actually a sub-select or a view in the FROM clause + ** of the SELECT statement. Return the declaration type and origin + ** data for the result-set column of the sub-select. + */ + if( iCol>=0 && iColpEList->nExpr ){ + /* If iCol is less than zero, then the expression requests the + ** rowid of the sub-select or view. This expression is legal (see + ** test case misc2.2.2) - it always evaluates to NULL. 
+ */ + NameContext sNC; + Expr *p = pS->pEList->a[iCol].pExpr; + sNC.pSrcList = pS->pSrc; + sNC.pNext = 0; + sNC.pParse = pNC->pParse; + zType = columnType(&sNC, p, &zOriginDb, &zOriginTab, &zOriginCol); + } + }else if( pTab->pSchema ){ + /* A real table */ + assert( !pS ); + if( iCol<0 ) iCol = pTab->iPKey; + assert( iCol==-1 || (iCol>=0 && iColnCol) ); + if( iCol<0 ){ + zType = "INTEGER"; + zOriginCol = "rowid"; + }else{ + zType = pTab->aCol[iCol].zType; + zOriginCol = pTab->aCol[iCol].zName; + } + zOriginTab = pTab->zName; + if( pNC->pParse ){ + int iDb = sqlite3SchemaToIndex(pNC->pParse->db, pTab->pSchema); + zOriginDb = pNC->pParse->db->aDb[iDb].zName; + } + } + break; + } +#ifndef SQLITE_OMIT_SUBQUERY + case TK_SELECT: { + /* The expression is a sub-select. Return the declaration type and + ** origin info for the single column in the result set of the SELECT + ** statement. + */ + NameContext sNC; + Select *pS = pExpr->pSelect; + Expr *p = pS->pEList->a[0].pExpr; + sNC.pSrcList = pS->pSrc; + sNC.pNext = pNC; + sNC.pParse = pNC->pParse; + zType = columnType(&sNC, p, &zOriginDb, &zOriginTab, &zOriginCol); + break; + } +#endif + } + + if( pzOriginDb ){ + assert( pzOriginTab && pzOriginCol ); + *pzOriginDb = zOriginDb; + *pzOriginTab = zOriginTab; + *pzOriginCol = zOriginCol; + } + return zType; +} + +/* +** Generate code that will tell the VDBE the declaration types of columns +** in the result set. +*/ +static void generateColumnTypes( + Parse *pParse, /* Parser context */ + SrcList *pTabList, /* List of tables */ + ExprList *pEList /* Expressions defining the result set */ +){ + Vdbe *v = pParse->pVdbe; + int i; + NameContext sNC; + sNC.pSrcList = pTabList; + sNC.pParse = pParse; + for(i=0; inExpr; i++){ + Expr *p = pEList->a[i].pExpr; + const char *zOrigDb = 0; + const char *zOrigTab = 0; + const char *zOrigCol = 0; + const char *zType = columnType(&sNC, p, &zOrigDb, &zOrigTab, &zOrigCol); + + /* The vdbe must make its own copy of the column-type and other + ** column specific strings, in case the schema is reset before this + ** virtual machine is deleted. + */ + sqlite3VdbeSetColName(v, i, COLNAME_DECLTYPE, zType, P4_TRANSIENT); + sqlite3VdbeSetColName(v, i, COLNAME_DATABASE, zOrigDb, P4_TRANSIENT); + sqlite3VdbeSetColName(v, i, COLNAME_TABLE, zOrigTab, P4_TRANSIENT); + sqlite3VdbeSetColName(v, i, COLNAME_COLUMN, zOrigCol, P4_TRANSIENT); + } +} + +/* +** Generate code that will tell the VDBE the names of columns +** in the result set. This information is used to provide the +** azCol[] values in the callback. 
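(A brief usage sketch tied to generateColumnTypes() above, which records the per-column declaration type that the public API later reports. sqlite3_column_decltype() is always available; the origin database/table/column calls need SQLITE_ENABLE_COLUMN_METADATA, so they are left out here. The table is made up for illustration.)

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      int i;

      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b VARCHAR(10))", 0, 0, 0);
      sqlite3_prepare_v2(db, "SELECT a, b, a+1 FROM t", -1, &pStmt, 0);

      /* Declared types come straight from the CREATE TABLE text; an
      ** expression such as a+1 is not a column, so its decltype is NULL. */
      for(i=0; i<sqlite3_column_count(pStmt); i++){
        const char *zDecl = sqlite3_column_decltype(pStmt, i);
        printf("column %d: %s\n", i, zDecl ? zDecl : "(none)");
      }

      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }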
+*/ +static void generateColumnNames( + Parse *pParse, /* Parser context */ + SrcList *pTabList, /* List of tables */ + ExprList *pEList /* Expressions defining the result set */ +){ + Vdbe *v = pParse->pVdbe; + int i, j; + sqlite3 *db = pParse->db; + int fullNames, shortNames; + +#ifndef SQLITE_OMIT_EXPLAIN + /* If this is an EXPLAIN, skip this step */ + if( pParse->explain ){ + return; + } +#endif + + assert( v!=0 ); + if( pParse->colNamesSet || v==0 || db->mallocFailed ) return; + pParse->colNamesSet = 1; + fullNames = (db->flags & SQLITE_FullColNames)!=0; + shortNames = (db->flags & SQLITE_ShortColNames)!=0; + sqlite3VdbeSetNumCols(v, pEList->nExpr); + for(i=0; inExpr; i++){ + Expr *p; + p = pEList->a[i].pExpr; + if( p==0 ) continue; + if( pEList->a[i].zName ){ + char *zName = pEList->a[i].zName; + sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, strlen(zName)); + continue; + } + if( p->op==TK_COLUMN && pTabList ){ + Table *pTab; + char *zCol; + int iCol = p->iColumn; + for(j=0; jnSrc && pTabList->a[j].iCursor!=p->iTable; j++){} + assert( jnSrc ); + pTab = pTabList->a[j].pTab; + if( iCol<0 ) iCol = pTab->iPKey; + assert( iCol==-1 || (iCol>=0 && iColnCol) ); + if( iCol<0 ){ + zCol = "rowid"; + }else{ + zCol = pTab->aCol[iCol].zName; + } + if( !shortNames && !fullNames && p->span.z && p->span.z[0] ){ + sqlite3VdbeSetColName(v, i, COLNAME_NAME, (char*)p->span.z, p->span.n); + }else if( fullNames || (!shortNames && pTabList->nSrc>1) ){ + char *zName = 0; + char *zTab; + + zTab = pTabList->a[j].zAlias; + if( fullNames || zTab==0 ) zTab = pTab->zName; + sqlite3SetString(&zName, zTab, ".", zCol, (char*)0); + sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, P4_DYNAMIC); + }else{ + sqlite3VdbeSetColName(v, i, COLNAME_NAME, zCol, strlen(zCol)); + } + }else if( p->span.z && p->span.z[0] ){ + sqlite3VdbeSetColName(v, i, COLNAME_NAME, (char*)p->span.z, p->span.n); + /* sqlite3VdbeCompressSpace(v, addr); */ + }else{ + char zName[30]; + assert( p->op!=TK_COLUMN || pTabList==0 ); + sqlite3_snprintf(sizeof(zName), zName, "column%d", i+1); + sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, 0); + } + } + generateColumnTypes(pParse, pTabList, pEList); +} + +#ifndef SQLITE_OMIT_COMPOUND_SELECT +/* +** Name of the connection operator, used for error messages. +*/ +static const char *selectOpName(int id){ + char *z; + switch( id ){ + case TK_ALL: z = "UNION ALL"; break; + case TK_INTERSECT: z = "INTERSECT"; break; + case TK_EXCEPT: z = "EXCEPT"; break; + default: z = "UNION"; break; + } + return z; +} +#endif /* SQLITE_OMIT_COMPOUND_SELECT */ + +/* +** Forward declaration +*/ +static int prepSelectStmt(Parse*, Select*); + +/* +** Given a SELECT statement, generate a Table structure that describes +** the result set of that SELECT. +*/ +Table *sqlite3ResultSetOfSelect(Parse *pParse, char *zTabName, Select *pSelect){ + Table *pTab; + int i, j; + ExprList *pEList; + Column *aCol, *pCol; + sqlite3 *db = pParse->db; + + while( pSelect->pPrior ) pSelect = pSelect->pPrior; + if( prepSelectStmt(pParse, pSelect) ){ + return 0; + } + if( sqlite3SelectResolve(pParse, pSelect, 0) ){ + return 0; + } + pTab = sqlite3DbMallocZero(db, sizeof(Table) ); + if( pTab==0 ){ + return 0; + } + pTab->nRef = 1; + pTab->zName = zTabName ? 
sqlite3DbStrDup(db, zTabName) : 0; + pEList = pSelect->pEList; + pTab->nCol = pEList->nExpr; + assert( pTab->nCol>0 ); + pTab->aCol = aCol = sqlite3DbMallocZero(db, sizeof(pTab->aCol[0])*pTab->nCol); + for(i=0, pCol=aCol; inCol; i++, pCol++){ + Expr *p, *pR; + char *zType; + char *zName; + int nName; + CollSeq *pColl; + int cnt; + NameContext sNC; + + /* Get an appropriate name for the column + */ + p = pEList->a[i].pExpr; + assert( p->pRight==0 || p->pRight->token.z==0 || p->pRight->token.z[0]!=0 ); + if( (zName = pEList->a[i].zName)!=0 ){ + /* If the column contains an "AS " phrase, use as the name */ + zName = sqlite3DbStrDup(db, zName); + }else if( p->op==TK_DOT + && (pR=p->pRight)!=0 && pR->token.z && pR->token.z[0] ){ + /* For columns of the from A.B use B as the name */ + zName = sqlite3MPrintf(db, "%T", &pR->token); + }else if( p->span.z && p->span.z[0] ){ + /* Use the original text of the column expression as its name */ + zName = sqlite3MPrintf(db, "%T", &p->span); + }else{ + /* If all else fails, make up a name */ + zName = sqlite3MPrintf(db, "column%d", i+1); + } + if( !zName || db->mallocFailed ){ + db->mallocFailed = 1; + sqlite3_free(zName); + sqlite3DeleteTable(pTab); + return 0; + } + sqlite3Dequote(zName); + + /* Make sure the column name is unique. If the name is not unique, + ** append a integer to the name so that it becomes unique. + */ + nName = strlen(zName); + for(j=cnt=0; jzName = zName; + + /* Get the typename, type affinity, and collating sequence for the + ** column. + */ + memset(&sNC, 0, sizeof(sNC)); + sNC.pSrcList = pSelect->pSrc; + zType = sqlite3DbStrDup(db, columnType(&sNC, p, 0, 0, 0)); + pCol->zType = zType; + pCol->affinity = sqlite3ExprAffinity(p); + pColl = sqlite3ExprCollSeq(pParse, p); + if( pColl ){ + pCol->zColl = sqlite3DbStrDup(db, pColl->zName); + } + } + pTab->iPKey = -1; + return pTab; +} + +/* +** Prepare a SELECT statement for processing by doing the following +** things: +** +** (1) Make sure VDBE cursor numbers have been assigned to every +** element of the FROM clause. +** +** (2) Fill in the pTabList->a[].pTab fields in the SrcList that +** defines FROM clause. When views appear in the FROM clause, +** fill pTabList->a[].pSelect with a copy of the SELECT statement +** that implements the view. A copy is made of the view's SELECT +** statement so that we can freely modify or delete that statement +** without worrying about messing up the presistent representation +** of the view. +** +** (3) Add terms to the WHERE clause to accomodate the NATURAL keyword +** on joins and the ON and USING clause of joins. +** +** (4) Scan the list of columns in the result set (pEList) looking +** for instances of the "*" operator or the TABLE.* operator. +** If found, expand each "*" to be every column in every table +** and TABLE.* to be every column in TABLE. +** +** Return 0 on success. If there are problems, leave an error message +** in pParse and return non-zero. +*/ +static int prepSelectStmt(Parse *pParse, Select *p){ + int i, j, k, rc; + SrcList *pTabList; + ExprList *pEList; + struct SrcList_item *pFrom; + sqlite3 *db = pParse->db; + + if( p==0 || p->pSrc==0 || db->mallocFailed ){ + return 1; + } + pTabList = p->pSrc; + pEList = p->pEList; + + /* Make sure cursor numbers have been assigned to all entries in + ** the FROM clause of the SELECT statement. + */ + sqlite3SrcListAssignCursors(pParse, p->pSrc); + + /* Look up every table named in the FROM clause of the select. 
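
From the outside, the transient Table that sqlite3ResultSetOfSelect() creates for a sub-select in the FROM clause is indistinguishable from an ordinary table: the subquery's result columns resolve just like table columns. A minimal sketch, with a hypothetical table t1(x,y) and made-up rows; error checking is omitted:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(x, y);"
                     "INSERT INTO t1 VALUES(3, 4);"
                     "INSERT INTO t1 VALUES(10, 20);", 0, 0, 0);

    /* The sub-select is scanned as if it were a table named by its alias;
    ** "total" resolves against the subquery's result set. */
    sqlite3_prepare_v2(db,
        "SELECT total FROM (SELECT x+y AS total FROM t1) AS sub WHERE total>5",
        -1, &pStmt, 0);
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("total=%d\n", sqlite3_column_int(pStmt, 0));
    }
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }
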
If + ** an entry of the FROM clause is a subquery instead of a table or view, + ** then create a transient table structure to describe the subquery. + */ + for(i=0, pFrom=pTabList->a; inSrc; i++, pFrom++){ + Table *pTab; + if( pFrom->pTab!=0 ){ + /* This statement has already been prepared. There is no need + ** to go further. */ + assert( i==0 ); + return 0; + } + if( pFrom->zName==0 ){ +#ifndef SQLITE_OMIT_SUBQUERY + /* A sub-query in the FROM clause of a SELECT */ + assert( pFrom->pSelect!=0 ); + if( pFrom->zAlias==0 ){ + pFrom->zAlias = + sqlite3MPrintf(db, "sqlite_subquery_%p_", (void*)pFrom->pSelect); + } + assert( pFrom->pTab==0 ); + pFrom->pTab = pTab = + sqlite3ResultSetOfSelect(pParse, pFrom->zAlias, pFrom->pSelect); + if( pTab==0 ){ + return 1; + } + /* The isEphem flag indicates that the Table structure has been + ** dynamically allocated and may be freed at any time. In other words, + ** pTab is not pointing to a persistent table structure that defines + ** part of the schema. */ + pTab->isEphem = 1; +#endif + }else{ + /* An ordinary table or view name in the FROM clause */ + assert( pFrom->pTab==0 ); + pFrom->pTab = pTab = + sqlite3LocateTable(pParse,0,pFrom->zName,pFrom->zDatabase); + if( pTab==0 ){ + return 1; + } + pTab->nRef++; +#if !defined(SQLITE_OMIT_VIEW) || !defined (SQLITE_OMIT_VIRTUALTABLE) + if( pTab->pSelect || IsVirtual(pTab) ){ + /* We reach here if the named table is a really a view */ + if( sqlite3ViewGetColumnNames(pParse, pTab) ){ + return 1; + } + /* If pFrom->pSelect!=0 it means we are dealing with a + ** view within a view. The SELECT structure has already been + ** copied by the outer view so we can skip the copy step here + ** in the inner view. + */ + if( pFrom->pSelect==0 ){ + pFrom->pSelect = sqlite3SelectDup(db, pTab->pSelect); + } + } +#endif + } + } + + /* Process NATURAL keywords, and ON and USING clauses of joins. + */ + if( sqliteProcessJoin(pParse, p) ) return 1; + + /* For every "*" that occurs in the column list, insert the names of + ** all columns in all tables. And for every TABLE.* insert the names + ** of all columns in TABLE. The parser inserted a special expression + ** with the TK_ALL operator for each "*" that it found in the column list. + ** The following code just has to locate the TK_ALL expressions and expand + ** each one to the list of all columns in all tables. + ** + ** The first loop just checks to see if there are any "*" operators + ** that need expanding. + */ + for(k=0; knExpr; k++){ + Expr *pE = pEList->a[k].pExpr; + if( pE->op==TK_ALL ) break; + if( pE->op==TK_DOT && pE->pRight && pE->pRight->op==TK_ALL + && pE->pLeft && pE->pLeft->op==TK_ID ) break; + } + rc = 0; + if( knExpr ){ + /* + ** If we get here it means the result set contains one or more "*" + ** operators that need to be expanded. Loop through each expression + ** in the result set and expand them one by one. + */ + struct ExprList_item *a = pEList->a; + ExprList *pNew = 0; + int flags = pParse->db->flags; + int longNames = (flags & SQLITE_FullColNames)!=0 && + (flags & SQLITE_ShortColNames)==0; + + for(k=0; knExpr; k++){ + Expr *pE = a[k].pExpr; + if( pE->op!=TK_ALL && + (pE->op!=TK_DOT || pE->pRight==0 || pE->pRight->op!=TK_ALL) ){ + /* This particular expression does not need to be expanded. 
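
The effect of the expansion loop is easiest to observe through the resulting column list: under a NATURAL join (or a USING clause) the shared columns of the right-hand table are dropped from the expanded "*". A short sketch, using hypothetical tables t1(a,b) and t2(b,c); error checking is omitted:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    int i;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(a, b); CREATE TABLE t2(b, c);", 0, 0, 0);

    sqlite3_prepare_v2(db, "SELECT * FROM t1 NATURAL JOIN t2", -1, &pStmt, 0);
    printf("%d result columns:", sqlite3_column_count(pStmt)); /* a, b, c: the second b is omitted */
    for(i=0; i<sqlite3_column_count(pStmt); i++){
      printf(" %s", sqlite3_column_name(pStmt, i));
    }
    printf("\n");
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }
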
+ */ + pNew = sqlite3ExprListAppend(pParse, pNew, a[k].pExpr, 0); + if( pNew ){ + pNew->a[pNew->nExpr-1].zName = a[k].zName; + }else{ + rc = 1; + } + a[k].pExpr = 0; + a[k].zName = 0; + }else{ + /* This expression is a "*" or a "TABLE.*" and needs to be + ** expanded. */ + int tableSeen = 0; /* Set to 1 when TABLE matches */ + char *zTName; /* text of name of TABLE */ + if( pE->op==TK_DOT && pE->pLeft ){ + zTName = sqlite3NameFromToken(db, &pE->pLeft->token); + }else{ + zTName = 0; + } + for(i=0, pFrom=pTabList->a; inSrc; i++, pFrom++){ + Table *pTab = pFrom->pTab; + char *zTabName = pFrom->zAlias; + if( zTabName==0 || zTabName[0]==0 ){ + zTabName = pTab->zName; + } + if( zTName && (zTabName==0 || zTabName[0]==0 || + sqlite3StrICmp(zTName, zTabName)!=0) ){ + continue; + } + tableSeen = 1; + for(j=0; jnCol; j++){ + Expr *pExpr, *pRight; + char *zName = pTab->aCol[j].zName; + + /* If a column is marked as 'hidden' (currently only possible + ** for virtual tables), do not include it in the expanded + ** result-set list. + */ + if( IsHiddenColumn(&pTab->aCol[j]) ){ + assert(IsVirtual(pTab)); + continue; + } + + if( i>0 ){ + struct SrcList_item *pLeft = &pTabList->a[i-1]; + if( (pLeft[1].jointype & JT_NATURAL)!=0 && + columnIndex(pLeft->pTab, zName)>=0 ){ + /* In a NATURAL join, omit the join columns from the + ** table on the right */ + continue; + } + if( sqlite3IdListIndex(pLeft[1].pUsing, zName)>=0 ){ + /* In a join with a USING clause, omit columns in the + ** using clause from the table on the right. */ + continue; + } + } + pRight = sqlite3PExpr(pParse, TK_ID, 0, 0, 0); + if( pRight==0 ) break; + setQuotedToken(pParse, &pRight->token, zName); + if( zTabName && (longNames || pTabList->nSrc>1) ){ + Expr *pLeft = sqlite3PExpr(pParse, TK_ID, 0, 0, 0); + pExpr = sqlite3PExpr(pParse, TK_DOT, pLeft, pRight, 0); + if( pExpr==0 ) break; + setQuotedToken(pParse, &pLeft->token, zTabName); + setToken(&pExpr->span, + sqlite3MPrintf(db, "%s.%s", zTabName, zName)); + pExpr->span.dyn = 1; + pExpr->token.z = 0; + pExpr->token.n = 0; + pExpr->token.dyn = 0; + }else{ + pExpr = pRight; + pExpr->span = pExpr->token; + pExpr->span.dyn = 0; + } + if( longNames ){ + pNew = sqlite3ExprListAppend(pParse, pNew, pExpr, &pExpr->span); + }else{ + pNew = sqlite3ExprListAppend(pParse, pNew, pExpr, &pRight->token); + } + } + } + if( !tableSeen ){ + if( zTName ){ + sqlite3ErrorMsg(pParse, "no such table: %s", zTName); + }else{ + sqlite3ErrorMsg(pParse, "no tables specified"); + } + rc = 1; + } + sqlite3_free(zTName); + } + } + sqlite3ExprListDelete(pEList); + p->pEList = pNew; + } + if( p->pEList && p->pEList->nExpr>SQLITE_MAX_COLUMN ){ + sqlite3ErrorMsg(pParse, "too many columns in result set"); + rc = SQLITE_ERROR; + } + if( db->mallocFailed ){ + rc = SQLITE_NOMEM; + } + return rc; +} + +/* +** pE is a pointer to an expression which is a single term in +** ORDER BY or GROUP BY clause. +** +** If pE evaluates to an integer constant i, then return i. +** This is an indication to the caller that it should sort +** by the i-th column of the result set. +** +** If pE is a well-formed expression and the SELECT statement +** is not compound, then return 0. This indicates to the +** caller that it should sort by the value of the ORDER BY +** expression. +** +** If the SELECT is compound, then attempt to match pE against +** result set columns in the left-most SELECT statement. Return +** the index i of the matching column, as an indication to the +** caller that it should sort by the i-th column. 
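
An ORDER BY term can resolve three ways at prepare time: as a result-column number, as a result-set alias, or as an ordinary expression, and a column number outside the result set is rejected with the out-of-range error built further down. A rough sketch against a hypothetical table t1(a,b); error checking is limited to the failing case:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    int rc;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(a, b)", 0, 0, 0);

    /* ORDER BY 2 sorts on the second result column; ORDER BY total uses
    ** the result-set alias.  Both prepare successfully. */
    sqlite3_prepare_v2(db, "SELECT a, b FROM t1 ORDER BY 2", -1, &pStmt, 0);
    sqlite3_finalize(pStmt);
    sqlite3_prepare_v2(db, "SELECT a+b AS total FROM t1 ORDER BY total", -1, &pStmt, 0);
    sqlite3_finalize(pStmt);

    /* A column number larger than the result set is rejected. */
    rc = sqlite3_prepare_v2(db, "SELECT a FROM t1 ORDER BY 3", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ){
      printf("error: %s\n", sqlite3_errmsg(db));
    }
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }
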
If there is +** no match, return -1 and leave an error message in pParse. +*/ +static int matchOrderByTermToExprList( + Parse *pParse, /* Parsing context for error messages */ + Select *pSelect, /* The SELECT statement with the ORDER BY clause */ + Expr *pE, /* The specific ORDER BY term */ + int idx, /* When ORDER BY term is this */ + int isCompound, /* True if this is a compound SELECT */ + u8 *pHasAgg /* True if expression contains aggregate functions */ +){ + int i; /* Loop counter */ + ExprList *pEList; /* The columns of the result set */ + NameContext nc; /* Name context for resolving pE */ + + + /* If the term is an integer constant, return the value of that + ** constant */ + pEList = pSelect->pEList; + if( sqlite3ExprIsInteger(pE, &i) ){ + if( i<=0 ){ + /* If i is too small, make it too big. That way the calling + ** function still sees a value that is out of range, but does + ** not confuse the column number with 0 or -1 result code. + */ + i = pEList->nExpr+1; + } + return i; + } + + /* If the term is a simple identifier that try to match that identifier + ** against a column name in the result set. + */ + if( pE->op==TK_ID || (pE->op==TK_STRING && pE->token.z[0]!='\'') ){ + sqlite3 *db = pParse->db; + char *zCol = sqlite3NameFromToken(db, &pE->token); + if( zCol==0 ){ + return -1; + } + for(i=0; inExpr; i++){ + char *zAs = pEList->a[i].zName; + if( zAs!=0 && sqlite3StrICmp(zAs, zCol)==0 ){ + sqlite3_free(zCol); + return i+1; + } + } + sqlite3_free(zCol); + } + + /* Resolve all names in the ORDER BY term expression + */ + memset(&nc, 0, sizeof(nc)); + nc.pParse = pParse; + nc.pSrcList = pSelect->pSrc; + nc.pEList = pEList; + nc.allowAgg = 1; + nc.nErr = 0; + if( sqlite3ExprResolveNames(&nc, pE) ){ + if( isCompound ){ + sqlite3ErrorClear(pParse); + return 0; + }else{ + return -1; + } + } + if( nc.hasAgg && pHasAgg ){ + *pHasAgg = 1; + } + + /* For a compound SELECT, we need to try to match the ORDER BY + ** expression against an expression in the result set + */ + if( isCompound ){ + for(i=0; inExpr; i++){ + if( sqlite3ExprCompare(pEList->a[i].pExpr, pE) ){ + return i+1; + } + } + } + return 0; +} + + +/* +** Analyze and ORDER BY or GROUP BY clause in a simple SELECT statement. +** Return the number of errors seen. +** +** Every term of the ORDER BY or GROUP BY clause needs to be an +** expression. If any expression is an integer constant, then +** that expression is replaced by the corresponding +** expression from the result set. +*/ +static int processOrderGroupBy( + Parse *pParse, /* Parsing context. Leave error messages here */ + Select *pSelect, /* The SELECT statement containing the clause */ + ExprList *pOrderBy, /* The ORDER BY or GROUP BY clause to be processed */ + int isOrder, /* 1 for ORDER BY. 0 for GROUP BY */ + u8 *pHasAgg /* Set to TRUE if any term contains an aggregate */ +){ + int i; + sqlite3 *db = pParse->db; + ExprList *pEList; + + if( pOrderBy==0 || pParse->db->mallocFailed ) return 0; + if( pOrderBy->nExpr>SQLITE_MAX_COLUMN ){ + const char *zType = isOrder ? "ORDER" : "GROUP"; + sqlite3ErrorMsg(pParse, "too many terms in %s BY clause", zType); + return 1; + } + pEList = pSelect->pEList; + if( pEList==0 ){ + return 0; + } + for(i=0; inExpr; i++){ + int iCol; + Expr *pE = pOrderBy->a[i].pExpr; + iCol = matchOrderByTermToExprList(pParse, pSelect, pE, i+1, 0, pHasAgg); + if( iCol<0 ){ + return 1; + } + if( iCol>pEList->nExpr ){ + const char *zType = isOrder ? 
"ORDER" : "GROUP"; + sqlite3ErrorMsg(pParse, + "%r %s BY term out of range - should be " + "between 1 and %d", i+1, zType, pEList->nExpr); + return 1; + } + if( iCol>0 ){ + CollSeq *pColl = pE->pColl; + int flags = pE->flags & EP_ExpCollate; + sqlite3ExprDelete(pE); + pE = sqlite3ExprDup(db, pEList->a[iCol-1].pExpr); + pOrderBy->a[i].pExpr = pE; + if( pE && pColl && flags ){ + pE->pColl = pColl; + pE->flags |= flags; + } + } + } + return 0; +} + +/* +** Analyze and ORDER BY or GROUP BY clause in a SELECT statement. Return +** the number of errors seen. +** +** The processing depends on whether the SELECT is simple or compound. +** For a simple SELECT statement, evry term of the ORDER BY or GROUP BY +** clause needs to be an expression. If any expression is an integer +** constant, then that expression is replaced by the corresponding +** expression from the result set. +** +** For compound SELECT statements, every expression needs to be of +** type TK_COLUMN with a iTable value as given in the 4th parameter. +** If any expression is an integer, that becomes the column number. +** Otherwise, match the expression against result set columns from +** the left-most SELECT. +*/ +static int processCompoundOrderBy( + Parse *pParse, /* Parsing context. Leave error messages here */ + Select *pSelect, /* The SELECT statement containing the ORDER BY */ + int iTable /* Output table for compound SELECT statements */ +){ + int i; + ExprList *pOrderBy; + ExprList *pEList; + sqlite3 *db; + int moreToDo = 1; + + pOrderBy = pSelect->pOrderBy; + if( pOrderBy==0 ) return 0; + if( pOrderBy->nExpr>SQLITE_MAX_COLUMN ){ + sqlite3ErrorMsg(pParse, "too many terms in ORDER BY clause"); + return 1; + } + db = pParse->db; + for(i=0; inExpr; i++){ + pOrderBy->a[i].done = 0; + } + while( pSelect->pPrior ){ + pSelect = pSelect->pPrior; + } + while( pSelect && moreToDo ){ + moreToDo = 0; + for(i=0; inExpr; i++){ + int iCol = -1; + Expr *pE, *pDup; + if( pOrderBy->a[i].done ) continue; + pE = pOrderBy->a[i].pExpr; + pDup = sqlite3ExprDup(db, pE); + if( !db->mallocFailed ){ + assert(pDup); + iCol = matchOrderByTermToExprList(pParse, pSelect, pDup, i+1, 1, 0); + } + sqlite3ExprDelete(pDup); + if( iCol<0 ){ + return 1; + } + pEList = pSelect->pEList; + if( pEList==0 ){ + return 1; + } + if( iCol>pEList->nExpr ){ + sqlite3ErrorMsg(pParse, + "%r ORDER BY term out of range - should be " + "between 1 and %d", i+1, pEList->nExpr); + return 1; + } + if( iCol>0 ){ + pE->op = TK_COLUMN; + pE->iTable = iTable; + pE->iAgg = -1; + pE->iColumn = iCol-1; + pE->pTab = 0; + pOrderBy->a[i].done = 1; + }else{ + moreToDo = 1; + } + } + pSelect = pSelect->pNext; + } + for(i=0; inExpr; i++){ + if( pOrderBy->a[i].done==0 ){ + sqlite3ErrorMsg(pParse, "%r ORDER BY term does not match any " + "column in the result set", i+1); + return 1; + } + } + return 0; +} + +/* +** Get a VDBE for the given parser context. Create a new one if necessary. +** If an error occurs, return NULL and leave a message in pParse. +*/ +Vdbe *sqlite3GetVdbe(Parse *pParse){ + Vdbe *v = pParse->pVdbe; + if( v==0 ){ + v = pParse->pVdbe = sqlite3VdbeCreate(pParse->db); +#ifndef SQLITE_OMIT_TRACE + if( v ){ + sqlite3VdbeAddOp0(v, OP_Trace); + } +#endif + } + return v; +} + + +/* +** Compute the iLimit and iOffset fields of the SELECT based on the +** pLimit and pOffset expressions. pLimit and pOffset hold the expressions +** that appear in the original SQL statement after the LIMIT and OFFSET +** keywords. Or NULL if those keywords are omitted. 
iLimit and iOffset +** are the integer memory register numbers for counters used to compute +** the limit and offset. If there is no limit and/or offset, then +** iLimit and iOffset are negative. +** +** This routine changes the values of iLimit and iOffset only if +** a limit or offset is defined by pLimit and pOffset. iLimit and +** iOffset should have been preset to appropriate default values +** (usually but not always -1) prior to calling this routine. +** Only if pLimit!=0 or pOffset!=0 do the limit registers get +** redefined. The UNION ALL operator uses this property to force +** the reuse of the same limit and offset registers across multiple +** SELECT statements. +*/ +static void computeLimitRegisters(Parse *pParse, Select *p, int iBreak){ + Vdbe *v = 0; + int iLimit = 0; + int iOffset; + int addr1; + + /* + ** "LIMIT -1" always shows all rows. There is some + ** contraversy about what the correct behavior should be. + ** The current implementation interprets "LIMIT 0" to mean + ** no rows. + */ + if( p->pLimit ){ + p->iLimit = iLimit = ++pParse->nMem; + v = sqlite3GetVdbe(pParse); + if( v==0 ) return; + sqlite3ExprCode(pParse, p->pLimit, iLimit); + sqlite3VdbeAddOp1(v, OP_MustBeInt, iLimit); + VdbeComment((v, "LIMIT counter")); + sqlite3VdbeAddOp2(v, OP_IfZero, iLimit, iBreak); + } + if( p->pOffset ){ + p->iOffset = iOffset = ++pParse->nMem; + if( p->pLimit ){ + pParse->nMem++; /* Allocate an extra register for limit+offset */ + } + v = sqlite3GetVdbe(pParse); + if( v==0 ) return; + sqlite3ExprCode(pParse, p->pOffset, iOffset); + sqlite3VdbeAddOp1(v, OP_MustBeInt, iOffset); + VdbeComment((v, "OFFSET counter")); + addr1 = sqlite3VdbeAddOp1(v, OP_IfPos, iOffset); + sqlite3VdbeAddOp2(v, OP_Integer, 0, iOffset); + sqlite3VdbeJumpHere(v, addr1); + if( p->pLimit ){ + sqlite3VdbeAddOp3(v, OP_Add, iLimit, iOffset, iOffset+1); + VdbeComment((v, "LIMIT+OFFSET")); + addr1 = sqlite3VdbeAddOp1(v, OP_IfPos, iLimit); + sqlite3VdbeAddOp2(v, OP_Integer, -1, iOffset+1); + sqlite3VdbeJumpHere(v, addr1); + } + } +} + +/* +** Allocate a virtual index to use for sorting. +*/ +static void createSortingIndex(Parse *pParse, Select *p, ExprList *pOrderBy){ + if( pOrderBy ){ + int addr; + assert( pOrderBy->iECursor==0 ); + pOrderBy->iECursor = pParse->nTab++; + addr = sqlite3VdbeAddOp2(pParse->pVdbe, OP_OpenEphemeral, + pOrderBy->iECursor, pOrderBy->nExpr+1); + assert( p->addrOpenEphm[2] == -1 ); + p->addrOpenEphm[2] = addr; + } +} + +#ifndef SQLITE_OMIT_COMPOUND_SELECT +/* +** Return the appropriate collating sequence for the iCol-th column of +** the result set for the compound-select statement "p". Return NULL if +** the column has no default collating sequence. +** +** The collating sequence for the compound select is taken from the +** left-most term of the select that has a collating sequence. +*/ +static CollSeq *multiSelectCollSeq(Parse *pParse, Select *p, int iCol){ + CollSeq *pRet; + if( p->pPrior ){ + pRet = multiSelectCollSeq(pParse, p->pPrior, iCol); + }else{ + pRet = 0; + } + if( pRet==0 ){ + pRet = sqlite3ExprCollSeq(pParse, p->pEList->a[iCol].pExpr); + } + return pRet; +} +#endif /* SQLITE_OMIT_COMPOUND_SELECT */ + +#ifndef SQLITE_OMIT_COMPOUND_SELECT +/* +** This routine is called to process a query that is really the union +** or intersection of two or more separate queries. +** +** "p" points to the right-most of the two queries. the query on the +** left is p->pPrior. The left query could also be a compound query +** in which case this routine will be called recursively. 
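
The limit registers set up by computeLimitRegisters() above implement the user-visible LIMIT/OFFSET rules: a negative LIMIT means all rows, LIMIT 0 means no rows, and OFFSET skips rows before any are returned. A minimal sketch from the public API side; the table t1 and its three rows are hypothetical and error checking is omitted:

  #include <stdio.h>
  #include <sqlite3.h>

  /* Count the rows returned by an arbitrary query. */
  static int count_rows(sqlite3 *db, const char *zSql){
    sqlite3_stmt *pStmt;
    int n = 0;
    sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
    while( sqlite3_step(pStmt)==SQLITE_ROW ) n++;
    sqlite3_finalize(pStmt);
    return n;
  }

  int main(void){
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(x);"
                     "INSERT INTO t1 VALUES(1);"
                     "INSERT INTO t1 VALUES(2);"
                     "INSERT INTO t1 VALUES(3);", 0, 0, 0);

    printf("LIMIT -1         -> %d rows\n", count_rows(db, "SELECT x FROM t1 LIMIT -1"));
    printf("LIMIT 0          -> %d rows\n", count_rows(db, "SELECT x FROM t1 LIMIT 0"));
    printf("LIMIT 2 OFFSET 1 -> %d rows\n", count_rows(db, "SELECT x FROM t1 LIMIT 2 OFFSET 1"));
    sqlite3_close(db);
    return 0;
  }
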
+** +** The results of the total query are to be written into a destination +** of type eDest with parameter iParm. +** +** Example 1: Consider a three-way compound SQL statement. +** +** SELECT a FROM t1 UNION SELECT b FROM t2 UNION SELECT c FROM t3 +** +** This statement is parsed up as follows: +** +** SELECT c FROM t3 +** | +** `-----> SELECT b FROM t2 +** | +** `------> SELECT a FROM t1 +** +** The arrows in the diagram above represent the Select.pPrior pointer. +** So if this routine is called with p equal to the t3 query, then +** pPrior will be the t2 query. p->op will be TK_UNION in this case. +** +** Notice that because of the way SQLite parses compound SELECTs, the +** individual selects always group from left to right. +*/ +static int multiSelect( + Parse *pParse, /* Parsing context */ + Select *p, /* The right-most of SELECTs to be coded */ + SelectDest *pDest, /* What to do with query results */ + char *aff /* If eDest is SRT_Union, the affinity string */ +){ + int rc = SQLITE_OK; /* Success code from a subroutine */ + Select *pPrior; /* Another SELECT immediately to our left */ + Vdbe *v; /* Generate code to this VDBE */ + int nCol; /* Number of columns in the result set */ + ExprList *pOrderBy; /* The ORDER BY clause on p */ + int aSetP2[2]; /* Set P2 value of these op to number of columns */ + int nSetP2 = 0; /* Number of slots in aSetP2[] used */ + SelectDest dest; /* Alternative data destination */ + + dest = *pDest; + + /* Make sure there is no ORDER BY or LIMIT clause on prior SELECTs. Only + ** the last (right-most) SELECT in the series may have an ORDER BY or LIMIT. + */ + if( p==0 || p->pPrior==0 ){ + rc = 1; + goto multi_select_end; + } + pPrior = p->pPrior; + assert( pPrior->pRightmost!=pPrior ); + assert( pPrior->pRightmost==p->pRightmost ); + if( pPrior->pOrderBy ){ + sqlite3ErrorMsg(pParse,"ORDER BY clause should come after %s not before", + selectOpName(p->op)); + rc = 1; + goto multi_select_end; + } + if( pPrior->pLimit ){ + sqlite3ErrorMsg(pParse,"LIMIT clause should come after %s not before", + selectOpName(p->op)); + rc = 1; + goto multi_select_end; + } + + /* Make sure we have a valid query engine. If not, create a new one. + */ + v = sqlite3GetVdbe(pParse); + if( v==0 ){ + rc = 1; + goto multi_select_end; + } + + /* Create the destination temporary table if necessary + */ + if( dest.eDest==SRT_EphemTab ){ + assert( p->pEList ); + assert( nSetP2pOrderBy; + switch( p->op ){ + case TK_ALL: { + if( pOrderBy==0 ){ + int addr = 0; + assert( !pPrior->pLimit ); + pPrior->pLimit = p->pLimit; + pPrior->pOffset = p->pOffset; + rc = sqlite3Select(pParse, pPrior, &dest, 0, 0, 0, aff); + p->pLimit = 0; + p->pOffset = 0; + if( rc ){ + goto multi_select_end; + } + p->pPrior = 0; + p->iLimit = pPrior->iLimit; + p->iOffset = pPrior->iOffset; + if( p->iLimit>=0 ){ + addr = sqlite3VdbeAddOp1(v, OP_IfZero, p->iLimit); + VdbeComment((v, "Jump ahead if LIMIT reached")); + } + rc = sqlite3Select(pParse, p, &dest, 0, 0, 0, aff); + p->pPrior = pPrior; + if( rc ){ + goto multi_select_end; + } + if( addr ){ + sqlite3VdbeJumpHere(v, addr); + } + break; + } + /* For UNION ALL ... 
ORDER BY fall through to the next case */ + } + case TK_EXCEPT: + case TK_UNION: { + int unionTab; /* Cursor number of the temporary table holding result */ + int op = 0; /* One of the SRT_ operations to apply to self */ + int priorOp; /* The SRT_ operation to apply to prior selects */ + Expr *pLimit, *pOffset; /* Saved values of p->nLimit and p->nOffset */ + int addr; + SelectDest uniondest; + + priorOp = p->op==TK_ALL ? SRT_Table : SRT_Union; + if( dest.eDest==priorOp && pOrderBy==0 && !p->pLimit && !p->pOffset ){ + /* We can reuse a temporary table generated by a SELECT to our + ** right. + */ + unionTab = dest.iParm; + }else{ + /* We will need to create our own temporary table to hold the + ** intermediate results. + */ + unionTab = pParse->nTab++; + if( processCompoundOrderBy(pParse, p, unionTab) ){ + rc = 1; + goto multi_select_end; + } + addr = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, unionTab, 0); + if( priorOp==SRT_Table ){ + assert( nSetP2addrOpenEphm[0] == -1 ); + p->addrOpenEphm[0] = addr; + p->pRightmost->usesEphm = 1; + } + createSortingIndex(pParse, p, pOrderBy); + assert( p->pEList ); + } + + /* Code the SELECT statements to our left + */ + assert( !pPrior->pOrderBy ); + sqlite3SelectDestInit(&uniondest, priorOp, unionTab); + rc = sqlite3Select(pParse, pPrior, &uniondest, 0, 0, 0, aff); + if( rc ){ + goto multi_select_end; + } + + /* Code the current SELECT statement + */ + switch( p->op ){ + case TK_EXCEPT: op = SRT_Except; break; + case TK_UNION: op = SRT_Union; break; + case TK_ALL: op = SRT_Table; break; + } + p->pPrior = 0; + p->pOrderBy = 0; + p->disallowOrderBy = pOrderBy!=0; + pLimit = p->pLimit; + p->pLimit = 0; + pOffset = p->pOffset; + p->pOffset = 0; + uniondest.eDest = op; + rc = sqlite3Select(pParse, p, &uniondest, 0, 0, 0, aff); + /* Query flattening in sqlite3Select() might refill p->pOrderBy. + ** Be sure to delete p->pOrderBy, therefore, to avoid a memory leak. */ + sqlite3ExprListDelete(p->pOrderBy); + p->pPrior = pPrior; + p->pOrderBy = pOrderBy; + sqlite3ExprDelete(p->pLimit); + p->pLimit = pLimit; + p->pOffset = pOffset; + p->iLimit = -1; + p->iOffset = -1; + if( rc ){ + goto multi_select_end; + } + + + /* Convert the data in the temporary table into whatever form + ** it is that we currently need. + */ + if( dest.eDest!=priorOp || unionTab!=dest.iParm ){ + int iCont, iBreak, iStart; + assert( p->pEList ); + if( dest.eDest==SRT_Callback ){ + Select *pFirst = p; + while( pFirst->pPrior ) pFirst = pFirst->pPrior; + generateColumnNames(pParse, 0, pFirst->pEList); + } + iBreak = sqlite3VdbeMakeLabel(v); + iCont = sqlite3VdbeMakeLabel(v); + computeLimitRegisters(pParse, p, iBreak); + sqlite3VdbeAddOp2(v, OP_Rewind, unionTab, iBreak); + iStart = sqlite3VdbeCurrentAddr(v); + selectInnerLoop(pParse, p, p->pEList, unionTab, p->pEList->nExpr, + pOrderBy, -1, &dest, iCont, iBreak, 0); + sqlite3VdbeResolveLabel(v, iCont); + sqlite3VdbeAddOp2(v, OP_Next, unionTab, iStart); + sqlite3VdbeResolveLabel(v, iBreak); + sqlite3VdbeAddOp2(v, OP_Close, unionTab, 0); + } + break; + } + case TK_INTERSECT: { + int tab1, tab2; + int iCont, iBreak, iStart; + Expr *pLimit, *pOffset; + int addr; + SelectDest intersectdest; + int r1; + + /* INTERSECT is different from the others since it requires + ** two temporary tables. Hence it has its own case. Begin + ** by allocating the tables we will need. 
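
All four compound operators handled by multiSelect() route through the temporary-table machinery above; only the SRT_ operation applied to each side differs. A small sketch of the observable results, using two hypothetical single-column tables t1 and t2 and an sqlite3_exec() row-printing callback; error checking is omitted:

  #include <stdio.h>
  #include <sqlite3.h>

  /* sqlite3_exec() callback: print one row of the compound result. */
  static int print_row(void *pArg, int nCol, char **azVal, char **azCol){
    printf("  %s = %s\n", azCol[0], azVal[0] ? azVal[0] : "NULL");
    return 0;
  }

  int main(void){
    sqlite3 *db;
    static const char *azOp[] = { "UNION", "UNION ALL", "INTERSECT", "EXCEPT" };
    char zSql[128];
    int i;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(a); CREATE TABLE t2(a);"
                     "INSERT INTO t1 VALUES(1); INSERT INTO t1 VALUES(2);"
                     "INSERT INTO t2 VALUES(2); INSERT INTO t2 VALUES(3);", 0, 0, 0);

    for(i=0; i<4; i++){
      sqlite3_snprintf(sizeof(zSql), zSql,
          "SELECT a FROM t1 %s SELECT a FROM t2 ORDER BY a", azOp[i]);
      printf("%s:\n", azOp[i]);
      /* ORDER BY is attached after the last SELECT, which is the only
      ** placement multiSelect() accepts. */
      sqlite3_exec(db, zSql, print_row, 0, 0);
    }
    sqlite3_close(db);
    return 0;
  }
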
+ */ + tab1 = pParse->nTab++; + tab2 = pParse->nTab++; + if( processCompoundOrderBy(pParse, p, tab1) ){ + rc = 1; + goto multi_select_end; + } + createSortingIndex(pParse, p, pOrderBy); + + addr = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, tab1, 0); + assert( p->addrOpenEphm[0] == -1 ); + p->addrOpenEphm[0] = addr; + p->pRightmost->usesEphm = 1; + assert( p->pEList ); + + /* Code the SELECTs to our left into temporary table "tab1". + */ + sqlite3SelectDestInit(&intersectdest, SRT_Union, tab1); + rc = sqlite3Select(pParse, pPrior, &intersectdest, 0, 0, 0, aff); + if( rc ){ + goto multi_select_end; + } + + /* Code the current SELECT into temporary table "tab2" + */ + addr = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, tab2, 0); + assert( p->addrOpenEphm[1] == -1 ); + p->addrOpenEphm[1] = addr; + p->pPrior = 0; + pLimit = p->pLimit; + p->pLimit = 0; + pOffset = p->pOffset; + p->pOffset = 0; + intersectdest.iParm = tab2; + rc = sqlite3Select(pParse, p, &intersectdest, 0, 0, 0, aff); + p->pPrior = pPrior; + sqlite3ExprDelete(p->pLimit); + p->pLimit = pLimit; + p->pOffset = pOffset; + if( rc ){ + goto multi_select_end; + } + + /* Generate code to take the intersection of the two temporary + ** tables. + */ + assert( p->pEList ); + if( dest.eDest==SRT_Callback ){ + Select *pFirst = p; + while( pFirst->pPrior ) pFirst = pFirst->pPrior; + generateColumnNames(pParse, 0, pFirst->pEList); + } + iBreak = sqlite3VdbeMakeLabel(v); + iCont = sqlite3VdbeMakeLabel(v); + computeLimitRegisters(pParse, p, iBreak); + sqlite3VdbeAddOp2(v, OP_Rewind, tab1, iBreak); + r1 = sqlite3GetTempReg(pParse); + iStart = sqlite3VdbeAddOp2(v, OP_RowKey, tab1, r1); + sqlite3VdbeAddOp3(v, OP_NotFound, tab2, iCont, r1); + sqlite3ReleaseTempReg(pParse, r1); + selectInnerLoop(pParse, p, p->pEList, tab1, p->pEList->nExpr, + pOrderBy, -1, &dest, iCont, iBreak, 0); + sqlite3VdbeResolveLabel(v, iCont); + sqlite3VdbeAddOp2(v, OP_Next, tab1, iStart); + sqlite3VdbeResolveLabel(v, iBreak); + sqlite3VdbeAddOp2(v, OP_Close, tab2, 0); + sqlite3VdbeAddOp2(v, OP_Close, tab1, 0); + break; + } + } + + /* Make sure all SELECTs in the statement have the same number of elements + ** in their result sets. + */ + assert( p->pEList && pPrior->pEList ); + if( p->pEList->nExpr!=pPrior->pEList->nExpr ){ + sqlite3ErrorMsg(pParse, "SELECTs to the left and right of %s" + " do not have the same number of result columns", selectOpName(p->op)); + rc = 1; + goto multi_select_end; + } + + /* Set the number of columns in temporary tables + */ + nCol = p->pEList->nExpr; + while( nSetP2 ){ + sqlite3VdbeChangeP2(v, aSetP2[--nSetP2], nCol); + } + + /* Compute collating sequences used by either the ORDER BY clause or + ** by any temporary tables needed to implement the compound select. + ** Attach the KeyInfo structure to all temporary tables. Invoke the + ** ORDER BY processing if there is an ORDER BY clause. + ** + ** This section is run by the right-most SELECT statement only. + ** SELECT statements to the left always skip this part. The right-most + ** SELECT might also skip this part if it has no ORDER BY clause and + ** no temp tables are required. + */ + if( pOrderBy || p->usesEphm ){ + int i; /* Loop counter */ + KeyInfo *pKeyInfo; /* Collating sequence for the result set */ + Select *pLoop; /* For looping through SELECT statements */ + int nKeyCol; /* Number of entries in pKeyInfo->aCol[] */ + CollSeq **apColl; /* For looping through pKeyInfo->aColl[] */ + CollSeq **aCopy; /* A copy of pKeyInfo->aColl[] */ + + assert( p->pRightmost==p ); + nKeyCol = nCol + (pOrderBy ? 
pOrderBy->nExpr : 0); + pKeyInfo = sqlite3DbMallocZero(pParse->db, + sizeof(*pKeyInfo)+nKeyCol*(sizeof(CollSeq*) + 1)); + if( !pKeyInfo ){ + rc = SQLITE_NOMEM; + goto multi_select_end; + } + + pKeyInfo->enc = ENC(pParse->db); + pKeyInfo->nField = nCol; + + for(i=0, apColl=pKeyInfo->aColl; idb->pDfltColl; + } + } + + for(pLoop=p; pLoop; pLoop=pLoop->pPrior){ + for(i=0; i<2; i++){ + int addr = pLoop->addrOpenEphm[i]; + if( addr<0 ){ + /* If [0] is unused then [1] is also unused. So we can + ** always safely abort as soon as the first unused slot is found */ + assert( pLoop->addrOpenEphm[1]<0 ); + break; + } + sqlite3VdbeChangeP2(v, addr, nCol); + sqlite3VdbeChangeP4(v, addr, (char*)pKeyInfo, P4_KEYINFO); + pLoop->addrOpenEphm[i] = -1; + } + } + + if( pOrderBy ){ + struct ExprList_item *pOTerm = pOrderBy->a; + int nOrderByExpr = pOrderBy->nExpr; + int addr; + u8 *pSortOrder; + + /* Reuse the same pKeyInfo for the ORDER BY as was used above for + ** the compound select statements. Except we have to change out the + ** pKeyInfo->aColl[] values. Some of the aColl[] values will be + ** reused when constructing the pKeyInfo for the ORDER BY, so make + ** a copy. Sufficient space to hold both the nCol entries for + ** the compound select and the nOrderbyExpr entries for the ORDER BY + ** was allocated above. But we need to move the compound select + ** entries out of the way before constructing the ORDER BY entries. + ** Move the compound select entries into aCopy[] where they can be + ** accessed and reused when constructing the ORDER BY entries. + ** Because nCol might be greater than or less than nOrderByExpr + ** we have to use memmove() when doing the copy. + */ + aCopy = &pKeyInfo->aColl[nOrderByExpr]; + pSortOrder = pKeyInfo->aSortOrder = (u8*)&aCopy[nCol]; + memmove(aCopy, pKeyInfo->aColl, nCol*sizeof(CollSeq*)); + + apColl = pKeyInfo->aColl; + for(i=0; ipExpr; + if( (pExpr->flags & EP_ExpCollate) ){ + assert( pExpr->pColl!=0 ); + *apColl = pExpr->pColl; + }else{ + *apColl = aCopy[pExpr->iColumn]; + } + *pSortOrder = pOTerm->sortOrder; + } + assert( p->pRightmost==p ); + assert( p->addrOpenEphm[2]>=0 ); + addr = p->addrOpenEphm[2]; + sqlite3VdbeChangeP2(v, addr, p->pOrderBy->nExpr+2); + pKeyInfo->nField = nOrderByExpr; + sqlite3VdbeChangeP4(v, addr, (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + pKeyInfo = 0; + generateSortTail(pParse, p, v, p->pEList->nExpr, &dest); + } + + sqlite3_free(pKeyInfo); + } + +multi_select_end: + pDest->iMem = dest.iMem; + return rc; +} +#endif /* SQLITE_OMIT_COMPOUND_SELECT */ + +#ifndef SQLITE_OMIT_VIEW +/* Forward Declarations */ +static void substExprList(sqlite3*, ExprList*, int, ExprList*); +static void substSelect(sqlite3*, Select *, int, ExprList *); + +/* +** Scan through the expression pExpr. Replace every reference to +** a column in table number iTable with a copy of the iColumn-th +** entry in pEList. (But leave references to the ROWID column +** unchanged.) +** +** This routine is part of the flattening procedure. A subquery +** whose result set is defined by pEList appears as entry in the +** FROM clause of a SELECT such that the VDBE cursor assigned to that +** FORM clause entry is iTable. This routine make the necessary +** changes to pExpr so that it refers directly to the source table +** of the subquery rather the result set of the subquery. 
+*/ +static void substExpr( + sqlite3 *db, /* Report malloc errors to this connection */ + Expr *pExpr, /* Expr in which substitution occurs */ + int iTable, /* Table to be substituted */ + ExprList *pEList /* Substitute expressions */ +){ + if( pExpr==0 ) return; + if( pExpr->op==TK_COLUMN && pExpr->iTable==iTable ){ + if( pExpr->iColumn<0 ){ + pExpr->op = TK_NULL; + }else{ + Expr *pNew; + assert( pEList!=0 && pExpr->iColumnnExpr ); + assert( pExpr->pLeft==0 && pExpr->pRight==0 && pExpr->pList==0 ); + pNew = pEList->a[pExpr->iColumn].pExpr; + assert( pNew!=0 ); + pExpr->op = pNew->op; + assert( pExpr->pLeft==0 ); + pExpr->pLeft = sqlite3ExprDup(db, pNew->pLeft); + assert( pExpr->pRight==0 ); + pExpr->pRight = sqlite3ExprDup(db, pNew->pRight); + assert( pExpr->pList==0 ); + pExpr->pList = sqlite3ExprListDup(db, pNew->pList); + pExpr->iTable = pNew->iTable; + pExpr->pTab = pNew->pTab; + pExpr->iColumn = pNew->iColumn; + pExpr->iAgg = pNew->iAgg; + sqlite3TokenCopy(db, &pExpr->token, &pNew->token); + sqlite3TokenCopy(db, &pExpr->span, &pNew->span); + pExpr->pSelect = sqlite3SelectDup(db, pNew->pSelect); + pExpr->flags = pNew->flags; + } + }else{ + substExpr(db, pExpr->pLeft, iTable, pEList); + substExpr(db, pExpr->pRight, iTable, pEList); + substSelect(db, pExpr->pSelect, iTable, pEList); + substExprList(db, pExpr->pList, iTable, pEList); + } +} +static void substExprList( + sqlite3 *db, /* Report malloc errors here */ + ExprList *pList, /* List to scan and in which to make substitutes */ + int iTable, /* Table to be substituted */ + ExprList *pEList /* Substitute values */ +){ + int i; + if( pList==0 ) return; + for(i=0; inExpr; i++){ + substExpr(db, pList->a[i].pExpr, iTable, pEList); + } +} +static void substSelect( + sqlite3 *db, /* Report malloc errors here */ + Select *p, /* SELECT statement in which to make substitutions */ + int iTable, /* Table to be replaced */ + ExprList *pEList /* Substitute values */ +){ + if( !p ) return; + substExprList(db, p->pEList, iTable, pEList); + substExprList(db, p->pGroupBy, iTable, pEList); + substExprList(db, p->pOrderBy, iTable, pEList); + substExpr(db, p->pHaving, iTable, pEList); + substExpr(db, p->pWhere, iTable, pEList); + substSelect(db, p->pPrior, iTable, pEList); +} +#endif /* !defined(SQLITE_OMIT_VIEW) */ + +#ifndef SQLITE_OMIT_VIEW +/* +** This routine attempts to flatten subqueries in order to speed +** execution. It returns 1 if it makes changes and 0 if no flattening +** occurs. +** +** To understand the concept of flattening, consider the following +** query: +** +** SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5 +** +** The default way of implementing this query is to execute the +** subquery first and store the results in a temporary table, then +** run the outer query on that temporary table. This requires two +** passes over the data. Furthermore, because the temporary table +** has no indices, the WHERE clause on the outer query cannot be +** optimized. +** +** This routine attempts to rewrite queries such as the above into +** a single flat select, like this: +** +** SELECT x+y AS a FROM t1 WHERE z<100 AND a>5 +** +** The code generated for this simpification gives the same result +** but only has to scan the data once. And because indices might +** exist on the table t1, a complete scan of the data might be +** avoided. +** +** Flattening is only attempted if all of the following are true: +** +** (1) The subquery and the outer query do not both use aggregates. 
+** +** (2) The subquery is not an aggregate or the outer query is not a join. +** +** (3) The subquery is not the right operand of a left outer join, or +** the subquery is not itself a join. (Ticket #306) +** +** (4) The subquery is not DISTINCT or the outer query is not a join. +** +** (5) The subquery is not DISTINCT or the outer query does not use +** aggregates. +** +** (6) The subquery does not use aggregates or the outer query is not +** DISTINCT. +** +** (7) The subquery has a FROM clause. +** +** (8) The subquery does not use LIMIT or the outer query is not a join. +** +** (9) The subquery does not use LIMIT or the outer query does not use +** aggregates. +** +** (10) The subquery does not use aggregates or the outer query does not +** use LIMIT. +** +** (11) The subquery and the outer query do not both have ORDER BY clauses. +** +** (12) The subquery is not the right term of a LEFT OUTER JOIN or the +** subquery has no WHERE clause. (added by ticket #350) +** +** (13) The subquery and outer query do not both use LIMIT +** +** (14) The subquery does not use OFFSET +** +** (15) The outer query is not part of a compound select or the +** subquery does not have both an ORDER BY and a LIMIT clause. +** (See ticket #2339) +** +** (16) The outer query is not an aggregate or the subquery does +** not contain ORDER BY. (Ticket #2942) This used to not matter +** until we introduced the group_concat() function. +** +** In this routine, the "p" parameter is a pointer to the outer query. +** The subquery is p->pSrc->a[iFrom]. isAgg is true if the outer query +** uses aggregates and subqueryIsAgg is true if the subquery uses aggregates. +** +** If flattening is not attempted, this routine is a no-op and returns 0. +** If flattening is attempted this routine returns 1. +** +** All of the expression analysis must occur on both the outer query and +** the subquery before this routine runs. +*/ +static int flattenSubquery( + sqlite3 *db, /* Database connection */ + Select *p, /* The parent or outer SELECT statement */ + int iFrom, /* Index in p->pSrc->a[] of the inner subquery */ + int isAgg, /* True if outer SELECT uses aggregate functions */ + int subqueryIsAgg /* True if the subquery uses aggregate functions */ +){ + Select *pSub; /* The inner query or "subquery" */ + SrcList *pSrc; /* The FROM clause of the outer query */ + SrcList *pSubSrc; /* The FROM clause of the subquery */ + ExprList *pList; /* The result set of the outer query */ + int iParent; /* VDBE cursor number of the pSub result set temp table */ + int i; /* Loop counter */ + Expr *pWhere; /* The WHERE clause */ + struct SrcList_item *pSubitem; /* The subquery */ + + /* Check to see if flattening is permitted. Return 0 if not. + */ + if( p==0 ) return 0; + pSrc = p->pSrc; + assert( pSrc && iFrom>=0 && iFromnSrc ); + pSubitem = &pSrc->a[iFrom]; + pSub = pSubitem->pSelect; + assert( pSub!=0 ); + if( isAgg && subqueryIsAgg ) return 0; /* Restriction (1) */ + if( subqueryIsAgg && pSrc->nSrc>1 ) return 0; /* Restriction (2) */ + pSubSrc = pSub->pSrc; + assert( pSubSrc ); + /* Prior to version 3.1.2, when LIMIT and OFFSET had to be simple constants, + ** not arbitrary expresssions, we allowed some combining of LIMIT and OFFSET + ** because they could be computed at compile-time. But when LIMIT and OFFSET + ** became arbitrary expressions, we were forced to add restrictions (13) + ** and (14). 
*/ + if( pSub->pLimit && p->pLimit ) return 0; /* Restriction (13) */ + if( pSub->pOffset ) return 0; /* Restriction (14) */ + if( p->pRightmost && pSub->pLimit && pSub->pOrderBy ){ + return 0; /* Restriction (15) */ + } + if( pSubSrc->nSrc==0 ) return 0; /* Restriction (7) */ + if( (pSub->isDistinct || pSub->pLimit) + && (pSrc->nSrc>1 || isAgg) ){ /* Restrictions (4)(5)(8)(9) */ + return 0; + } + if( p->isDistinct && subqueryIsAgg ) return 0; /* Restriction (6) */ + if( (p->disallowOrderBy || p->pOrderBy) && pSub->pOrderBy ){ + return 0; /* Restriction (11) */ + } + if( isAgg && pSub->pOrderBy ) return 0; /* Restriction (16) */ + + /* Restriction 3: If the subquery is a join, make sure the subquery is + ** not used as the right operand of an outer join. Examples of why this + ** is not allowed: + ** + ** t1 LEFT OUTER JOIN (t2 JOIN t3) + ** + ** If we flatten the above, we would get + ** + ** (t1 LEFT OUTER JOIN t2) JOIN t3 + ** + ** which is not at all the same thing. + */ + if( pSubSrc->nSrc>1 && (pSubitem->jointype & JT_OUTER)!=0 ){ + return 0; + } + + /* Restriction 12: If the subquery is the right operand of a left outer + ** join, make sure the subquery has no WHERE clause. + ** An examples of why this is not allowed: + ** + ** t1 LEFT OUTER JOIN (SELECT * FROM t2 WHERE t2.x>0) + ** + ** If we flatten the above, we would get + ** + ** (t1 LEFT OUTER JOIN t2) WHERE t2.x>0 + ** + ** But the t2.x>0 test will always fail on a NULL row of t2, which + ** effectively converts the OUTER JOIN into an INNER JOIN. + */ + if( (pSubitem->jointype & JT_OUTER)!=0 && pSub->pWhere!=0 ){ + return 0; + } + + /* If we reach this point, it means flattening is permitted for the + ** iFrom-th entry of the FROM clause in the outer query. + */ + + /* Move all of the FROM elements of the subquery into the + ** the FROM clause of the outer query. Before doing this, remember + ** the cursor number for the original outer query FROM element in + ** iParent. The iParent cursor will never be used. Subsequent code + ** will scan expressions looking for iParent references and replace + ** those references with expressions that resolve to the subquery FROM + ** elements we are now copying in. + */ + iParent = pSubitem->iCursor; + { + int nSubSrc = pSubSrc->nSrc; + int jointype = pSubitem->jointype; + + sqlite3DeleteTable(pSubitem->pTab); + sqlite3_free(pSubitem->zDatabase); + sqlite3_free(pSubitem->zName); + sqlite3_free(pSubitem->zAlias); + pSubitem->pTab = 0; + pSubitem->zDatabase = 0; + pSubitem->zName = 0; + pSubitem->zAlias = 0; + if( nSubSrc>1 ){ + int extra = nSubSrc - 1; + for(i=1; ipSrc = 0; + return 1; + } + } + p->pSrc = pSrc; + for(i=pSrc->nSrc-1; i-extra>=iFrom; i--){ + pSrc->a[i] = pSrc->a[i-extra]; + } + } + for(i=0; ia[i+iFrom] = pSubSrc->a[i]; + memset(&pSubSrc->a[i], 0, sizeof(pSubSrc->a[i])); + } + pSrc->a[iFrom].jointype = jointype; + } + + /* Now begin substituting subquery result set expressions for + ** references to the iParent in the outer query. + ** + ** Example: + ** + ** SELECT a+5, b*10 FROM (SELECT x*3 AS a, y+10 AS b FROM t1) WHERE a>b; + ** \ \_____________ subquery __________/ / + ** \_____________________ outer query ______________________________/ + ** + ** We look at every expression in the outer query and every place we see + ** "a" we substitute "x*3" and every place we see "b" we substitute "y+10". 
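
Whether the flattener actually fires for a given statement is an internal detail, but the rewrite it performs is exactly the substitution illustrated above, so the nested form and a hand-flattened equivalent must return identical rows. A sketch comparing the two, with a hypothetical table t1(x,y) and made-up rows; error checking is omitted:

  #include <stdio.h>
  #include <sqlite3.h>

  static int print_row(void *pArg, int nCol, char **azVal, char **azCol){
    printf("  %s=%s  %s=%s\n", azCol[0], azVal[0], azCol[1], azVal[1]);
    return 0;
  }

  int main(void){
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(x, y);"
                     "INSERT INTO t1 VALUES(10, 1);"
                     "INSERT INTO t1 VALUES(1, 100);", 0, 0, 0);

    printf("nested:\n");
    sqlite3_exec(db,
        "SELECT a+5, b*10 FROM (SELECT x*3 AS a, y+10 AS b FROM t1) WHERE a>b",
        print_row, 0, 0);

    printf("hand-flattened:\n");
    sqlite3_exec(db,
        "SELECT x*3+5, (y+10)*10 FROM t1 WHERE x*3>y+10",
        print_row, 0, 0);

    sqlite3_close(db);
    return 0;
  }
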
+ */ + pList = p->pEList; + for(i=0; inExpr; i++){ + Expr *pExpr; + if( pList->a[i].zName==0 && (pExpr = pList->a[i].pExpr)->span.z!=0 ){ + pList->a[i].zName = + sqlite3DbStrNDup(db, (char*)pExpr->span.z, pExpr->span.n); + } + } + substExprList(db, p->pEList, iParent, pSub->pEList); + if( isAgg ){ + substExprList(db, p->pGroupBy, iParent, pSub->pEList); + substExpr(db, p->pHaving, iParent, pSub->pEList); + } + if( pSub->pOrderBy ){ + assert( p->pOrderBy==0 ); + p->pOrderBy = pSub->pOrderBy; + pSub->pOrderBy = 0; + }else if( p->pOrderBy ){ + substExprList(db, p->pOrderBy, iParent, pSub->pEList); + } + if( pSub->pWhere ){ + pWhere = sqlite3ExprDup(db, pSub->pWhere); + }else{ + pWhere = 0; + } + if( subqueryIsAgg ){ + assert( p->pHaving==0 ); + p->pHaving = p->pWhere; + p->pWhere = pWhere; + substExpr(db, p->pHaving, iParent, pSub->pEList); + p->pHaving = sqlite3ExprAnd(db, p->pHaving, + sqlite3ExprDup(db, pSub->pHaving)); + assert( p->pGroupBy==0 ); + p->pGroupBy = sqlite3ExprListDup(db, pSub->pGroupBy); + }else{ + substExpr(db, p->pWhere, iParent, pSub->pEList); + p->pWhere = sqlite3ExprAnd(db, p->pWhere, pWhere); + } + + /* The flattened query is distinct if either the inner or the + ** outer query is distinct. + */ + p->isDistinct = p->isDistinct || pSub->isDistinct; + + /* + ** SELECT ... FROM (SELECT ... LIMIT a OFFSET b) LIMIT x OFFSET y; + ** + ** One is tempted to try to add a and b to combine the limits. But this + ** does not work if either limit is negative. + */ + if( pSub->pLimit ){ + p->pLimit = pSub->pLimit; + pSub->pLimit = 0; + } + + /* Finially, delete what is left of the subquery and return + ** success. + */ + sqlite3SelectDelete(pSub); + return 1; +} +#endif /* SQLITE_OMIT_VIEW */ + +/* +** Analyze the SELECT statement passed as an argument to see if it +** is a min() or max() query. Return ORDERBY_MIN or ORDERBY_MAX if +** it is, or 0 otherwise. At present, a query is considered to be +** a min()/max() query if: +** +** 1. There is a single object in the FROM clause. +** +** 2. There is a single expression in the result set, and it is +** either min(x) or max(x), where x is a column reference. +*/ +static int minMaxQuery(Parse *pParse, Select *p){ + Expr *pExpr; + ExprList *pEList = p->pEList; + + if( pEList->nExpr!=1 ) return ORDERBY_NORMAL; + pExpr = pEList->a[0].pExpr; + pEList = pExpr->pList; + if( pExpr->op!=TK_AGG_FUNCTION || pEList==0 || pEList->nExpr!=1 ) return 0; + if( pEList->a[0].pExpr->op!=TK_AGG_COLUMN ) return ORDERBY_NORMAL; + if( pExpr->token.n!=3 ) return ORDERBY_NORMAL; + if( sqlite3StrNICmp((char*)pExpr->token.z,"min",3)==0 ){ + return ORDERBY_MIN; + }else if( sqlite3StrNICmp((char*)pExpr->token.z,"max",3)==0 ){ + return ORDERBY_MAX; + } + return ORDERBY_NORMAL; +} + +/* +** This routine resolves any names used in the result set of the +** supplied SELECT statement. If the SELECT statement being resolved +** is a sub-select, then pOuterNC is a pointer to the NameContext +** of the parent SELECT. +*/ +int sqlite3SelectResolve( + Parse *pParse, /* The parser context */ + Select *p, /* The SELECT statement being coded. */ + NameContext *pOuterNC /* The outer name context. May be NULL. */ +){ + ExprList *pEList; /* Result set. */ + int i; /* For-loop variable used in multiple places */ + NameContext sNC; /* Local name-context */ + ExprList *pGroupBy; /* The group by clause */ + + /* If this routine has run before, return immediately. 
*/ + if( p->isResolved ){ + assert( !pOuterNC ); + return SQLITE_OK; + } + p->isResolved = 1; + + /* If there have already been errors, do nothing. */ + if( pParse->nErr>0 ){ + return SQLITE_ERROR; + } + + /* Prepare the select statement. This call will allocate all cursors + ** required to handle the tables and subqueries in the FROM clause. + */ + if( prepSelectStmt(pParse, p) ){ + return SQLITE_ERROR; + } + + /* Resolve the expressions in the LIMIT and OFFSET clauses. These + ** are not allowed to refer to any names, so pass an empty NameContext. + */ + memset(&sNC, 0, sizeof(sNC)); + sNC.pParse = pParse; + if( sqlite3ExprResolveNames(&sNC, p->pLimit) || + sqlite3ExprResolveNames(&sNC, p->pOffset) ){ + return SQLITE_ERROR; + } + + /* Set up the local name-context to pass to ExprResolveNames() to + ** resolve the expression-list. + */ + sNC.allowAgg = 1; + sNC.pSrcList = p->pSrc; + sNC.pNext = pOuterNC; + + /* Resolve names in the result set. */ + pEList = p->pEList; + if( !pEList ) return SQLITE_ERROR; + for(i=0; inExpr; i++){ + Expr *pX = pEList->a[i].pExpr; + if( sqlite3ExprResolveNames(&sNC, pX) ){ + return SQLITE_ERROR; + } + } + + /* If there are no aggregate functions in the result-set, and no GROUP BY + ** expression, do not allow aggregates in any of the other expressions. + */ + assert( !p->isAgg ); + pGroupBy = p->pGroupBy; + if( pGroupBy || sNC.hasAgg ){ + p->isAgg = 1; + }else{ + sNC.allowAgg = 0; + } + + /* If a HAVING clause is present, then there must be a GROUP BY clause. + */ + if( p->pHaving && !pGroupBy ){ + sqlite3ErrorMsg(pParse, "a GROUP BY clause is required before HAVING"); + return SQLITE_ERROR; + } + + /* Add the expression list to the name-context before parsing the + ** other expressions in the SELECT statement. This is so that + ** expressions in the WHERE clause (etc.) can refer to expressions by + ** aliases in the result set. + ** + ** Minor point: If this is the case, then the expression will be + ** re-evaluated for each reference to it. + */ + sNC.pEList = p->pEList; + if( sqlite3ExprResolveNames(&sNC, p->pWhere) || + sqlite3ExprResolveNames(&sNC, p->pHaving) ){ + return SQLITE_ERROR; + } + if( p->pPrior==0 ){ + if( processOrderGroupBy(pParse, p, p->pOrderBy, 1, &sNC.hasAgg) ){ + return SQLITE_ERROR; + } + } + if( processOrderGroupBy(pParse, p, pGroupBy, 0, &sNC.hasAgg) ){ + return SQLITE_ERROR; + } + + if( pParse->db->mallocFailed ){ + return SQLITE_NOMEM; + } + + /* Make sure the GROUP BY clause does not contain aggregate functions. + */ + if( pGroupBy ){ + struct ExprList_item *pItem; + + for(i=0, pItem=pGroupBy->a; inExpr; i++, pItem++){ + if( ExprHasProperty(pItem->pExpr, EP_Agg) ){ + sqlite3ErrorMsg(pParse, "aggregate functions are not allowed in " + "the GROUP BY clause"); + return SQLITE_ERROR; + } + } + } + + /* If this is one SELECT of a compound, be sure to resolve names + ** in the other SELECTs. + */ + if( p->pPrior ){ + return sqlite3SelectResolve(pParse, p->pPrior, pOuterNC); + }else{ + return SQLITE_OK; + } +} + +/* +** Reset the aggregate accumulator. +** +** The aggregate accumulator is a set of memory cells that hold +** intermediate results while calculating an aggregate. This +** routine simply stores NULLs in all of those memory cells. 
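
Two consequences of the resolution order in sqlite3SelectResolve() are easy to demonstrate from the public API: a result-set alias can be referenced from the WHERE clause (the aliased expression is simply re-evaluated there), and a HAVING clause without a GROUP BY is rejected at prepare time. A rough sketch; the table t1 is hypothetical and the exact error text may vary between releases:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    int rc;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(a, b)", 0, 0, 0);

    /* The alias "s" resolves against the result set, so this prepares. */
    rc = sqlite3_prepare_v2(db, "SELECT a+b AS s FROM t1 WHERE s>10", -1, &pStmt, 0);
    printf("alias in WHERE: %s\n", rc==SQLITE_OK ? "ok" : sqlite3_errmsg(db));
    sqlite3_finalize(pStmt);

    /* HAVING without GROUP BY is rejected during name resolution. */
    rc = sqlite3_prepare_v2(db, "SELECT a FROM t1 HAVING a>10", -1, &pStmt, 0);
    printf("HAVING without GROUP BY: %s\n",
           rc==SQLITE_OK ? "ok" : sqlite3_errmsg(db));
    sqlite3_finalize(pStmt);

    sqlite3_close(db);
    return 0;
  }
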
+*/ +static void resetAccumulator(Parse *pParse, AggInfo *pAggInfo){ + Vdbe *v = pParse->pVdbe; + int i; + struct AggInfo_func *pFunc; + if( pAggInfo->nFunc+pAggInfo->nColumn==0 ){ + return; + } + for(i=0; inColumn; i++){ + sqlite3VdbeAddOp2(v, OP_Null, 0, pAggInfo->aCol[i].iMem); + } + for(pFunc=pAggInfo->aFunc, i=0; inFunc; i++, pFunc++){ + sqlite3VdbeAddOp2(v, OP_Null, 0, pFunc->iMem); + if( pFunc->iDistinct>=0 ){ + Expr *pE = pFunc->pExpr; + if( pE->pList==0 || pE->pList->nExpr!=1 ){ + sqlite3ErrorMsg(pParse, "DISTINCT in aggregate must be followed " + "by an expression"); + pFunc->iDistinct = -1; + }else{ + KeyInfo *pKeyInfo = keyInfoFromExprList(pParse, pE->pList); + sqlite3VdbeAddOp4(v, OP_OpenEphemeral, pFunc->iDistinct, 0, 0, + (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + } + } + } +} + +/* +** Invoke the OP_AggFinalize opcode for every aggregate function +** in the AggInfo structure. +*/ +static void finalizeAggFunctions(Parse *pParse, AggInfo *pAggInfo){ + Vdbe *v = pParse->pVdbe; + int i; + struct AggInfo_func *pF; + for(i=0, pF=pAggInfo->aFunc; inFunc; i++, pF++){ + ExprList *pList = pF->pExpr->pList; + sqlite3VdbeAddOp4(v, OP_AggFinal, pF->iMem, pList ? pList->nExpr : 0, 0, + (void*)pF->pFunc, P4_FUNCDEF); + } +} + +/* +** Update the accumulator memory cells for an aggregate based on +** the current cursor position. +*/ +static void updateAccumulator(Parse *pParse, AggInfo *pAggInfo){ + Vdbe *v = pParse->pVdbe; + int i; + struct AggInfo_func *pF; + struct AggInfo_col *pC; + + pAggInfo->directMode = 1; + for(i=0, pF=pAggInfo->aFunc; inFunc; i++, pF++){ + int nArg; + int addrNext = 0; + int regAgg; + ExprList *pList = pF->pExpr->pList; + if( pList ){ + nArg = pList->nExpr; + regAgg = sqlite3GetTempRange(pParse, nArg); + sqlite3ExprCodeExprList(pParse, pList, regAgg); + }else{ + nArg = 0; + regAgg = 0; + } + if( pF->iDistinct>=0 ){ + addrNext = sqlite3VdbeMakeLabel(v); + assert( nArg==1 ); + codeDistinct(pParse, pF->iDistinct, addrNext, 1, regAgg); + } + if( pF->pFunc->needCollSeq ){ + CollSeq *pColl = 0; + struct ExprList_item *pItem; + int j; + assert( pList!=0 ); /* pList!=0 if pF->pFunc->needCollSeq is true */ + for(j=0, pItem=pList->a; !pColl && jpExpr); + } + if( !pColl ){ + pColl = pParse->db->pDfltColl; + } + sqlite3VdbeAddOp4(v, OP_CollSeq, 0, 0, 0, (char *)pColl, P4_COLLSEQ); + } + sqlite3VdbeAddOp4(v, OP_AggStep, 0, regAgg, pF->iMem, + (void*)pF->pFunc, P4_FUNCDEF); + sqlite3VdbeChangeP5(v, nArg); + sqlite3ReleaseTempRange(pParse, regAgg, nArg); + if( addrNext ){ + sqlite3VdbeResolveLabel(v, addrNext); + } + } + for(i=0, pC=pAggInfo->aCol; inAccumulator; i++, pC++){ + sqlite3ExprCode(pParse, pC->pExpr, pC->iMem); + } + pAggInfo->directMode = 0; +} + +#ifndef SQLITE_OMIT_TRIGGER +/* +** This function is used when a SELECT statement is used to create a +** temporary table for iterating through when running an INSTEAD OF +** UPDATE or INSTEAD OF DELETE trigger. +** +** If possible, the SELECT statement is modified so that NULL values +** are stored in the temporary table for all columns for which the +** corresponding bit in argument mask is not set. If mask takes the +** special value 0xffffffff, then all columns are populated. 
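
The iDistinct cursor opened above is what backs aggregates of the form count(DISTINCT x): candidate arguments are filtered through an ephemeral index before OP_AggStep is invoked. A short sketch of the visible effect, with a hypothetical table t1 holding duplicate values; error checking is omitted:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(x);"
                     "INSERT INTO t1 VALUES(1);"
                     "INSERT INTO t1 VALUES(1);"
                     "INSERT INTO t1 VALUES(2);", 0, 0, 0);

    sqlite3_prepare_v2(db,
        "SELECT count(x), count(DISTINCT x) FROM t1", -1, &pStmt, 0);
    if( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("count(x)=%d  count(DISTINCT x)=%d\n",
             sqlite3_column_int(pStmt, 0),   /* counts every row */
             sqlite3_column_int(pStmt, 1));  /* duplicates filtered out */
    }
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }
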
+*/ +void sqlite3SelectMask(Parse *pParse, Select *p, u32 mask){ + if( p && !p->pPrior && !p->isDistinct && mask!=0xffffffff ){ + ExprList *pEList; + int i; + sqlite3SelectResolve(pParse, p, 0); + pEList = p->pEList; + for(i=0; pEList && inExpr && i<32; i++){ + if( !(mask&((u32)1<a[i].pExpr); + pEList->a[i].pExpr = sqlite3Expr(pParse->db, TK_NULL, 0, 0, 0); + } + } + } +} +#endif + +/* +** Generate code for the given SELECT statement. +** +** The results are distributed in various ways depending on the +** contents of the SelectDest structure pointed to by argument pDest +** as follows: +** +** pDest->eDest Result +** ------------ ------------------------------------------- +** SRT_Callback Invoke the callback for each row of the result. +** +** SRT_Mem Store first result in memory cell pDest->iParm +** +** SRT_Set Store non-null results as keys of table pDest->iParm. +** Apply the affinity pDest->affinity before storing them. +** +** SRT_Union Store results as a key in a temporary table pDest->iParm. +** +** SRT_Except Remove results from the temporary table pDest->iParm. +** +** SRT_Table Store results in temporary table pDest->iParm +** +** SRT_EphemTab Create an temporary table pDest->iParm and store +** the result there. The cursor is left open after +** returning. +** +** SRT_Subroutine For each row returned, push the results onto the +** vdbe stack and call the subroutine (via OP_Gosub) +** at address pDest->iParm. +** +** SRT_Exists Store a 1 in memory cell pDest->iParm if the result +** set is not empty. +** +** SRT_Discard Throw the results away. +** +** See the selectInnerLoop() function for a canonical listing of the +** allowed values of eDest and their meanings. +** +** This routine returns the number of errors. If any errors are +** encountered, then an appropriate error message is left in +** pParse->zErrMsg. +** +** This routine does NOT free the Select structure passed in. The +** calling function needs to do that. +** +** The pParent, parentTab, and *pParentAgg fields are filled in if this +** SELECT is a subquery. This routine may try to combine this SELECT +** with its parent to form a single flat query. In so doing, it might +** change the parent query from a non-aggregate to an aggregate query. +** For that reason, the pParentAgg flag is passed as a pointer, so it +** can be changed. +** +** Example 1: The meaning of the pParent parameter. +** +** SELECT * FROM t1 JOIN (SELECT x, count(*) FROM t2) JOIN t3; +** \ \_______ subquery _______/ / +** \ / +** \____________________ outer query ___________________/ +** +** This routine is called for the outer query first. For that call, +** pParent will be NULL. During the processing of the outer query, this +** routine is called recursively to handle the subquery. For the recursive +** call, pParent will point to the outer query. Because the subquery is +** the second element in a three-way join, the parentTab parameter will +** be 1 (the 2nd value of a 0-indexed array.) +*/ +int sqlite3Select( + Parse *pParse, /* The parser context */ + Select *p, /* The SELECT statement being coded. 
*/ + SelectDest *pDest, /* What to do with the query results */ + Select *pParent, /* Another SELECT for which this is a sub-query */ + int parentTab, /* Index in pParent->pSrc of this query */ + int *pParentAgg, /* True if pParent uses aggregate functions */ + char *aff /* If eDest is SRT_Union, the affinity string */ +){ + int i, j; /* Loop counters */ + WhereInfo *pWInfo; /* Return from sqlite3WhereBegin() */ + Vdbe *v; /* The virtual machine under construction */ + int isAgg; /* True for select lists like "count(*)" */ + ExprList *pEList; /* List of columns to extract. */ + SrcList *pTabList; /* List of tables to select from */ + Expr *pWhere; /* The WHERE clause. May be NULL */ + ExprList *pOrderBy; /* The ORDER BY clause. May be NULL */ + ExprList *pGroupBy; /* The GROUP BY clause. May be NULL */ + Expr *pHaving; /* The HAVING clause. May be NULL */ + int isDistinct; /* True if the DISTINCT keyword is present */ + int distinct; /* Table to use for the distinct set */ + int rc = 1; /* Value to return from this function */ + int addrSortIndex; /* Address of an OP_OpenEphemeral instruction */ + AggInfo sAggInfo; /* Information used by aggregate queries */ + int iEnd; /* Address of the end of the query */ + sqlite3 *db; /* The database connection */ + + db = pParse->db; + if( p==0 || db->mallocFailed || pParse->nErr ){ + return 1; + } + if( sqlite3AuthCheck(pParse, SQLITE_SELECT, 0, 0, 0) ) return 1; + memset(&sAggInfo, 0, sizeof(sAggInfo)); + + pOrderBy = p->pOrderBy; + if( IgnorableOrderby(pDest) ){ + p->pOrderBy = 0; + + /* In these cases the DISTINCT operator makes no difference to the + ** results, so remove it if it were specified. + */ + assert(pDest->eDest==SRT_Exists || pDest->eDest==SRT_Union || + pDest->eDest==SRT_Except || pDest->eDest==SRT_Discard); + p->isDistinct = 0; + } + if( sqlite3SelectResolve(pParse, p, 0) ){ + goto select_end; + } + p->pOrderBy = pOrderBy; + +#ifndef SQLITE_OMIT_COMPOUND_SELECT + /* If there is are a sequence of queries, do the earlier ones first. + */ + if( p->pPrior ){ + if( p->pRightmost==0 ){ + Select *pLoop, *pRight = 0; + int cnt = 0; + for(pLoop=p; pLoop; pLoop=pLoop->pPrior, cnt++){ + pLoop->pRightmost = p; + pLoop->pNext = pRight; + pRight = pLoop; + } + if( SQLITE_MAX_COMPOUND_SELECT>0 && cnt>SQLITE_MAX_COMPOUND_SELECT ){ + sqlite3ErrorMsg(pParse, "too many terms in compound SELECT"); + return 1; + } + } + return multiSelect(pParse, p, pDest, aff); + } +#endif + + /* Make local copies of the parameters for this query. + */ + pTabList = p->pSrc; + pWhere = p->pWhere; + pGroupBy = p->pGroupBy; + pHaving = p->pHaving; + isAgg = p->isAgg; + isDistinct = p->isDistinct; + pEList = p->pEList; + if( pEList==0 ) goto select_end; + + /* + ** Do not even attempt to generate any code if we have already seen + ** errors before this routine starts. + */ + if( pParse->nErr>0 ) goto select_end; + + /* If writing to memory or generating a set + ** only a single column may be output. + */ +#ifndef SQLITE_OMIT_SUBQUERY + if( checkForMultiColumnSelectError(pParse, pDest, pEList->nExpr) ){ + goto select_end; + } +#endif + + /* ORDER BY is ignored for some destinations. + */ + if( IgnorableOrderby(pDest) ){ + pOrderBy = 0; + } + + /* Begin generating code. 
+ */ + v = sqlite3GetVdbe(pParse); + if( v==0 ) goto select_end; + + /* Generate code for all sub-queries in the FROM clause + */ +#if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) + for(i=0; inSrc; i++){ + const char *zSavedAuthContext = 0; + int needRestoreContext; + struct SrcList_item *pItem = &pTabList->a[i]; + SelectDest dest; + + if( pItem->pSelect==0 || pItem->isPopulated ) continue; + if( pItem->zName!=0 ){ + zSavedAuthContext = pParse->zAuthContext; + pParse->zAuthContext = pItem->zName; + needRestoreContext = 1; + }else{ + needRestoreContext = 0; + } +#if defined(SQLITE_TEST) || SQLITE_MAX_EXPR_DEPTH>0 + /* Increment Parse.nHeight by the height of the largest expression + ** tree refered to by this, the parent select. The child select + ** may contain expression trees of at most + ** (SQLITE_MAX_EXPR_DEPTH-Parse.nHeight) height. This is a bit + ** more conservative than necessary, but much easier than enforcing + ** an exact limit. + */ + pParse->nHeight += sqlite3SelectExprHeight(p); +#endif + sqlite3SelectDestInit(&dest, SRT_EphemTab, pItem->iCursor); + sqlite3Select(pParse, pItem->pSelect, &dest, p, i, &isAgg, 0); + if( db->mallocFailed ){ + goto select_end; + } +#if defined(SQLITE_TEST) || SQLITE_MAX_EXPR_DEPTH>0 + pParse->nHeight -= sqlite3SelectExprHeight(p); +#endif + if( needRestoreContext ){ + pParse->zAuthContext = zSavedAuthContext; + } + pTabList = p->pSrc; + pWhere = p->pWhere; + if( !IgnorableOrderby(pDest) ){ + pOrderBy = p->pOrderBy; + } + pGroupBy = p->pGroupBy; + pHaving = p->pHaving; + isDistinct = p->isDistinct; + } +#endif + + /* Check to see if this is a subquery that can be "flattened" into its parent. + ** If flattening is a possiblity, do so and return immediately. + */ +#ifndef SQLITE_OMIT_VIEW + if( pParent && pParentAgg && + flattenSubquery(db, pParent, parentTab, *pParentAgg, isAgg) ){ + if( isAgg ) *pParentAgg = 1; + goto select_end; + } +#endif + + /* If possible, rewrite the query to use GROUP BY instead of DISTINCT. + ** GROUP BY may use an index, DISTINCT never does. + */ + if( p->isDistinct && !p->isAgg && !p->pGroupBy ){ + p->pGroupBy = sqlite3ExprListDup(db, p->pEList); + pGroupBy = p->pGroupBy; + p->isDistinct = 0; + isDistinct = 0; + } + + /* If there is an ORDER BY clause, then this sorting + ** index might end up being unused if the data can be + ** extracted in pre-sorted order. If that is the case, then the + ** OP_OpenEphemeral instruction will be changed to an OP_Noop once + ** we figure out that the sorting index is not needed. The addrSortIndex + ** variable is used to facilitate that change. + */ + if( pOrderBy ){ + KeyInfo *pKeyInfo; + pKeyInfo = keyInfoFromExprList(pParse, pOrderBy); + pOrderBy->iECursor = pParse->nTab++; + p->addrOpenEphm[2] = addrSortIndex = + sqlite3VdbeAddOp4(v, OP_OpenEphemeral, + pOrderBy->iECursor, pOrderBy->nExpr+2, 0, + (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + }else{ + addrSortIndex = -1; + } + + /* If the output is destined for a temporary table, open that table. + */ + if( pDest->eDest==SRT_EphemTab ){ + sqlite3VdbeAddOp2(v, OP_OpenEphemeral, pDest->iParm, pEList->nExpr); + } + + /* Set the limiter. + */ + iEnd = sqlite3VdbeMakeLabel(v); + computeLimitRegisters(pParse, p, iEnd); + + /* Open a virtual index to use for the distinct set. 
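/*
** Worked example of the DISTINCT-to-GROUP-BY rewrite above (illustrative
** only):
**
**     SELECT DISTINCT a, b FROM t1;
**
** is coded as if it had been written
**
**     SELECT a, b FROM t1 GROUP BY a, b;
**
** Both forms return the same rows, but the GROUP BY form may be satisfied
** by an index on (a,b), whereas plain DISTINCT is always filtered through
** the ephemeral distinct-set table opened below.
*/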
+ */ + if( isDistinct ){ + KeyInfo *pKeyInfo; + assert( isAgg || pGroupBy ); + distinct = pParse->nTab++; + pKeyInfo = keyInfoFromExprList(pParse, p->pEList); + sqlite3VdbeAddOp4(v, OP_OpenEphemeral, distinct, 0, 0, + (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + }else{ + distinct = -1; + } + + /* Aggregate and non-aggregate queries are handled differently */ + if( !isAgg && pGroupBy==0 ){ + /* This case is for non-aggregate queries + ** Begin the database scan + */ + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pOrderBy, 0); + if( pWInfo==0 ) goto select_end; + + /* If sorting index that was created by a prior OP_OpenEphemeral + ** instruction ended up not being needed, then change the OP_OpenEphemeral + ** into an OP_Noop. + */ + if( addrSortIndex>=0 && pOrderBy==0 ){ + sqlite3VdbeChangeToNoop(v, addrSortIndex, 1); + p->addrOpenEphm[2] = -1; + } + + /* Use the standard inner loop + */ + assert(!isDistinct); + selectInnerLoop(pParse, p, pEList, 0, 0, pOrderBy, -1, pDest, + pWInfo->iContinue, pWInfo->iBreak, aff); + + /* End the database scan loop. + */ + sqlite3WhereEnd(pWInfo); + }else{ + /* This is the processing for aggregate queries */ + NameContext sNC; /* Name context for processing aggregate information */ + int iAMem; /* First Mem address for storing current GROUP BY */ + int iBMem; /* First Mem address for previous GROUP BY */ + int iUseFlag; /* Mem address holding flag indicating that at least + ** one row of the input to the aggregator has been + ** processed */ + int iAbortFlag; /* Mem address which causes query abort if positive */ + int groupBySort; /* Rows come from source in GROUP BY order */ + + + /* The following variables hold addresses or labels for parts of the + ** virtual machine program we are putting together */ + int addrOutputRow; /* Start of subroutine that outputs a result row */ + int addrSetAbort; /* Set the abort flag and return */ + int addrInitializeLoop; /* Start of code that initializes the input loop */ + int addrTopOfLoop; /* Top of the input loop */ + int addrGroupByChange; /* Code that runs when any GROUP BY term changes */ + int addrProcessRow; /* Code to process a single input row */ + int addrEnd; /* End of all processing */ + int addrSortingIdx; /* The OP_OpenEphemeral for the sorting index */ + int addrReset; /* Subroutine for resetting the accumulator */ + + addrEnd = sqlite3VdbeMakeLabel(v); + + /* Convert TK_COLUMN nodes into TK_AGG_COLUMN and make entries in + ** sAggInfo for all TK_AGG_FUNCTION nodes in expressions of the + ** SELECT statement. + */ + memset(&sNC, 0, sizeof(sNC)); + sNC.pParse = pParse; + sNC.pSrcList = pTabList; + sNC.pAggInfo = &sAggInfo; + sAggInfo.nSortingColumn = pGroupBy ? pGroupBy->nExpr+1 : 0; + sAggInfo.pGroupBy = pGroupBy; + sqlite3ExprAnalyzeAggList(&sNC, pEList); + sqlite3ExprAnalyzeAggList(&sNC, pOrderBy); + if( pHaving ){ + sqlite3ExprAnalyzeAggregates(&sNC, pHaving); + } + sAggInfo.nAccumulator = sAggInfo.nColumn; + for(i=0; ipList); + } + if( db->mallocFailed ) goto select_end; + + /* Processing for aggregates with GROUP BY is very different and + ** much more complex than aggregates without a GROUP BY. + */ + if( pGroupBy ){ + KeyInfo *pKeyInfo; /* Keying information for the group by clause */ + + /* Create labels that we will be needing + */ + + addrInitializeLoop = sqlite3VdbeMakeLabel(v); + addrGroupByChange = sqlite3VdbeMakeLabel(v); + addrProcessRow = sqlite3VdbeMakeLabel(v); + + /* If there is a GROUP BY clause we might need a sorting index to + ** implement it. Allocate that sorting index now. 
If it turns out + ** that we do not need it after all, the OpenEphemeral instruction + ** will be converted into a Noop. + */ + sAggInfo.sortingIdx = pParse->nTab++; + pKeyInfo = keyInfoFromExprList(pParse, pGroupBy); + addrSortingIdx = + sqlite3VdbeAddOp4(v, OP_OpenEphemeral, sAggInfo.sortingIdx, + sAggInfo.nSortingColumn, 0, + (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + + /* Initialize memory locations used by GROUP BY aggregate processing + */ + iUseFlag = ++pParse->nMem; + iAbortFlag = ++pParse->nMem; + iAMem = pParse->nMem + 1; + pParse->nMem += pGroupBy->nExpr; + iBMem = pParse->nMem + 1; + pParse->nMem += pGroupBy->nExpr; + sqlite3VdbeAddOp2(v, OP_Integer, 0, iAbortFlag); + VdbeComment((v, "clear abort flag")); + sqlite3VdbeAddOp2(v, OP_Integer, 0, iUseFlag); + VdbeComment((v, "indicate accumulator empty")); + sqlite3VdbeAddOp2(v, OP_Goto, 0, addrInitializeLoop); + + /* Generate a subroutine that outputs a single row of the result + ** set. This subroutine first looks at the iUseFlag. If iUseFlag + ** is less than or equal to zero, the subroutine is a no-op. If + ** the processing calls for the query to abort, this subroutine + ** increments the iAbortFlag memory location before returning in + ** order to signal the caller to abort. + */ + addrSetAbort = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp2(v, OP_Integer, 1, iAbortFlag); + VdbeComment((v, "set abort flag")); + sqlite3VdbeAddOp2(v, OP_Return, 0, 0); + addrOutputRow = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp2(v, OP_IfPos, iUseFlag, addrOutputRow+2); + VdbeComment((v, "Groupby result generator entry point")); + sqlite3VdbeAddOp2(v, OP_Return, 0, 0); + finalizeAggFunctions(pParse, &sAggInfo); + if( pHaving ){ + sqlite3ExprIfFalse(pParse, pHaving, addrOutputRow+1, SQLITE_JUMPIFNULL); + } + selectInnerLoop(pParse, p, p->pEList, 0, 0, pOrderBy, + distinct, pDest, + addrOutputRow+1, addrSetAbort, aff); + sqlite3VdbeAddOp2(v, OP_Return, 0, 0); + VdbeComment((v, "end groupby result generator")); + + /* Generate a subroutine that will reset the group-by accumulator + */ + addrReset = sqlite3VdbeCurrentAddr(v); + resetAccumulator(pParse, &sAggInfo); + sqlite3VdbeAddOp2(v, OP_Return, 0, 0); + + /* Begin a loop that will extract all source rows in GROUP BY order. + ** This might involve two separate loops with an OP_Sort in between, or + ** it might be a single loop that uses an index to extract information + ** in the right order to begin with. + */ + sqlite3VdbeResolveLabel(v, addrInitializeLoop); + sqlite3VdbeAddOp2(v, OP_Gosub, 0, addrReset); + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pGroupBy, 0); + if( pWInfo==0 ) goto select_end; + if( pGroupBy==0 ){ + /* The optimizer is able to deliver rows in group by order so + ** we do not have to sort. The OP_OpenEphemeral table will be + ** cancelled later because we still need to use the pKeyInfo + */ + pGroupBy = p->pGroupBy; + groupBySort = 0; + }else{ + /* Rows are coming out in undetermined order. 
We have to push + ** each row into a sorting index, terminate the first loop, + ** then loop over the sorting index in order to get the output + ** in sorted order + */ + int regBase; + int regRecord; + int nCol; + int nGroupBy; + + groupBySort = 1; + nGroupBy = pGroupBy->nExpr; + nCol = nGroupBy + 1; + j = nGroupBy+1; + for(i=0; i=j ){ + nCol++; + j++; + } + } + regBase = sqlite3GetTempRange(pParse, nCol); + sqlite3ExprCodeExprList(pParse, pGroupBy, regBase); + sqlite3VdbeAddOp2(v, OP_Sequence, sAggInfo.sortingIdx,regBase+nGroupBy); + j = nGroupBy+1; + for(i=0; iiSorterColumn>=j ){ + sqlite3ExprCodeGetColumn(v, pCol->pTab, pCol->iColumn, pCol->iTable, + j + regBase); + j++; + } + } + regRecord = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nCol, regRecord); + sqlite3VdbeAddOp2(v, OP_IdxInsert, sAggInfo.sortingIdx, regRecord); + sqlite3ReleaseTempReg(pParse, regRecord); + sqlite3ReleaseTempRange(pParse, regBase, nCol); + sqlite3WhereEnd(pWInfo); + sqlite3VdbeAddOp2(v, OP_Sort, sAggInfo.sortingIdx, addrEnd); + VdbeComment((v, "GROUP BY sort")); + sAggInfo.useSortingIdx = 1; + } + + /* Evaluate the current GROUP BY terms and store in b0, b1, b2... + ** (b0 is memory location iBMem+0, b1 is iBMem+1, and so forth) + ** Then compare the current GROUP BY terms against the GROUP BY terms + ** from the previous row currently stored in a0, a1, a2... + */ + addrTopOfLoop = sqlite3VdbeCurrentAddr(v); + for(j=0; jnExpr; j++){ + if( groupBySort ){ + sqlite3VdbeAddOp3(v, OP_Column, sAggInfo.sortingIdx, j, iBMem+j); + }else{ + sAggInfo.directMode = 1; + sqlite3ExprCode(pParse, pGroupBy->a[j].pExpr, iBMem+j); + } + } + for(j=pGroupBy->nExpr-1; j>=0; j--){ + if( j==0 ){ + sqlite3VdbeAddOp3(v, OP_Eq, iAMem+j, addrProcessRow, iBMem+j); + }else{ + sqlite3VdbeAddOp3(v, OP_Ne, iAMem+j, addrGroupByChange, iBMem+j); + } + sqlite3VdbeChangeP4(v, -1, (void*)pKeyInfo->aColl[j], P4_COLLSEQ); + sqlite3VdbeChangeP5(v, SQLITE_NULLEQUAL); + } + + /* Generate code that runs whenever the GROUP BY changes. + ** Change in the GROUP BY are detected by the previous code + ** block. If there were no changes, this block is skipped. + ** + ** This code copies current group by terms in b0,b1,b2,... + ** over to a0,a1,a2. It then calls the output subroutine + ** and resets the aggregate accumulator registers in preparation + ** for the next GROUP BY batch. 
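/*
** Rough shape of the program generated for a GROUP BY aggregate by the code
** above and below (illustrative; addresses are symbolic labels):
**
**       clear iAbortFlag; clear iUseFlag; Goto InitializeLoop
**   SetAbort:       iAbortFlag = 1; Return
**   OutputRow:      if iUseFlag<=0 then Return
**                   finalize aggregates; test HAVING; emit one row; Return
**   Reset:          clear the accumulator; Return
**   InitializeLoop: Gosub Reset; begin the scan (directly, or via the
**                   sorting index filled by a first pass plus OP_Sort)
**   TopOfLoop:      load GROUP BY terms into b0,b1,...; compare with a0,a1,...
**   GroupByChange:  (taken when any term differs) copy b->a; Gosub OutputRow;
**                   jump to End if iAbortFlag>0; Gosub Reset
**   ProcessRow:     update the accumulator; iUseFlag = 1; advance -> TopOfLoop
**                   Gosub OutputRow   (emit the final group), then fall to End
*/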
+ */ + sqlite3VdbeResolveLabel(v, addrGroupByChange); + for(j=0; jnExpr; j++){ + sqlite3VdbeAddOp2(v, OP_Move, iBMem+j, iAMem+j); + } + sqlite3VdbeAddOp2(v, OP_Gosub, 0, addrOutputRow); + VdbeComment((v, "output one row")); + sqlite3VdbeAddOp2(v, OP_IfPos, iAbortFlag, addrEnd); + VdbeComment((v, "check abort flag")); + sqlite3VdbeAddOp2(v, OP_Gosub, 0, addrReset); + VdbeComment((v, "reset accumulator")); + + /* Update the aggregate accumulators based on the content of + ** the current row + */ + sqlite3VdbeResolveLabel(v, addrProcessRow); + updateAccumulator(pParse, &sAggInfo); + sqlite3VdbeAddOp2(v, OP_Integer, 1, iUseFlag); + VdbeComment((v, "indicate data in accumulator")); + + /* End of the loop + */ + if( groupBySort ){ + sqlite3VdbeAddOp2(v, OP_Next, sAggInfo.sortingIdx, addrTopOfLoop); + }else{ + sqlite3WhereEnd(pWInfo); + sqlite3VdbeChangeToNoop(v, addrSortingIdx, 1); + } + + /* Output the final row of result + */ + sqlite3VdbeAddOp2(v, OP_Gosub, 0, addrOutputRow); + VdbeComment((v, "output final row")); + + } /* endif pGroupBy */ + else { + ExprList *pMinMax = 0; + ExprList *pDel = 0; + u8 flag; + + /* Check if the query is of one of the following forms: + ** + ** SELECT min(x) FROM ... + ** SELECT max(x) FROM ... + ** + ** If it is, then ask the code in where.c to attempt to sort results + ** as if there was an "ORDER ON x" or "ORDER ON x DESC" clause. + ** If where.c is able to produce results sorted in this order, then + ** add vdbe code to break out of the processing loop after the + ** first iteration (since the first iteration of the loop is + ** guaranteed to operate on the row with the minimum or maximum + ** value of x, the only row required). + ** + ** A special flag must be passed to sqlite3WhereBegin() to slightly + ** modify behaviour as follows: + ** + ** + If the query is a "SELECT min(x)", then the loop coded by + ** where.c should not iterate over any values with a NULL value + ** for x. + ** + ** + The optimizer code in where.c (the thing that decides which + ** index or indices to use) should place a different priority on + ** satisfying the 'ORDER BY' clause than it does in other cases. + ** Refer to code and comments in where.c for details. + */ + flag = minMaxQuery(pParse, p); + if( flag ){ + pDel = pMinMax = sqlite3ExprListDup(db, p->pEList->a[0].pExpr->pList); + if( pMinMax && !db->mallocFailed ){ + pMinMax->a[0].sortOrder = ((flag==ORDERBY_MIN)?0:1); + pMinMax->a[0].pExpr->op = TK_COLUMN; + } + } + + /* This case runs if the aggregate has no GROUP BY clause. The + ** processing is much simpler since there is only a single row + ** of output. + */ + resetAccumulator(pParse, &sAggInfo); + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pMinMax, flag); + if( pWInfo==0 ){ + sqlite3ExprListDelete(pDel); + goto select_end; + } + updateAccumulator(pParse, &sAggInfo); + if( !pMinMax && flag ){ + sqlite3VdbeAddOp2(v, OP_Goto, 0, pWInfo->iBreak); + VdbeComment((v, "%s() by index", (flag==ORDERBY_MIN?"min":"max"))); + } + sqlite3WhereEnd(pWInfo); + finalizeAggFunctions(pParse, &sAggInfo); + pOrderBy = 0; + if( pHaving ){ + sqlite3ExprIfFalse(pParse, pHaving, addrEnd, SQLITE_JUMPIFNULL); + } + selectInnerLoop(pParse, p, p->pEList, 0, 0, 0, -1, + pDest, addrEnd, addrEnd, aff); + + sqlite3ExprListDelete(pDel); + } + sqlite3VdbeResolveLabel(v, addrEnd); + + } /* endif aggregate query */ + + /* If there is an ORDER BY clause, then we need to sort the results + ** and send them to the callback one by one. 
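/*
** Worked example of the min()/max() optimization above (illustrative only):
**
**     SELECT min(x) FROM t1;
**
** is evaluated as if the scan carried "ORDER BY x" (or "ORDER BY x DESC"
** for max): where.c is asked to deliver rows in that order, skipping rows
** where x is NULL, and an OP_Goto to pWInfo->iBreak stops the loop after
** the first row, since that row already holds the minimum (or maximum)
** value of x.
*/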
+ */ + if( pOrderBy ){ + generateSortTail(pParse, p, v, pEList->nExpr, pDest); + } + +#ifndef SQLITE_OMIT_SUBQUERY + /* If this was a subquery, we have now converted the subquery into a + ** temporary table. So set the SrcList_item.isPopulated flag to prevent + ** this subquery from being evaluated again and to force the use of + ** the temporary table. + */ + if( pParent ){ + assert( pParent->pSrc->nSrc>parentTab ); + assert( pParent->pSrc->a[parentTab].pSelect==p ); + pParent->pSrc->a[parentTab].isPopulated = 1; + } +#endif + + /* Jump here to skip this query + */ + sqlite3VdbeResolveLabel(v, iEnd); + + /* The SELECT was successfully coded. Set the return code to 0 + ** to indicate no errors. + */ + rc = 0; + + /* Control jumps to here if an error is encountered above, or upon + ** successful coding of the SELECT. + */ +select_end: + + /* Identify column names if we will be using them in a callback. This + ** step is skipped if the output is going to some other destination. + */ + if( rc==SQLITE_OK && pDest->eDest==SRT_Callback ){ + generateColumnNames(pParse, pTabList, pEList); + } + + sqlite3_free(sAggInfo.aCol); + sqlite3_free(sAggInfo.aFunc); + return rc; +} + +#if defined(SQLITE_DEBUG) +/* +******************************************************************************* +** The following code is used for testing and debugging only. The code +** that follows does not appear in normal builds. +** +** These routines are used to print out the content of all or part of a +** parse structures such as Select or Expr. Such printouts are useful +** for helping to understand what is happening inside the code generator +** during the execution of complex SELECT statements. +** +** These routine are not called anywhere from within the normal +** code base. Then are intended to be called from within the debugger +** or from temporary "printf" statements inserted for debugging. 
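/*
** Illustrative usage only: the kind of temporary call described above,
** dropped into sqlite3Select() while stepping through the code generator
** (p being the Select under construction):
**
**     sqlite3PrintSelect(p, 0);          print the whole parse tree
**     sqlite3PrintExprList(p->pEList);   or just the result column list
*/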
+*/ +static void sqlite3PrintExpr(Expr *p){ + if( p->token.z && p->token.n>0 ){ + sqlite3DebugPrintf("(%.*s", p->token.n, p->token.z); + }else{ + sqlite3DebugPrintf("(%d", p->op); + } + if( p->pLeft ){ + sqlite3DebugPrintf(" "); + sqlite3PrintExpr(p->pLeft); + } + if( p->pRight ){ + sqlite3DebugPrintf(" "); + sqlite3PrintExpr(p->pRight); + } + sqlite3DebugPrintf(")"); +} +static void sqlite3PrintExprList(ExprList *pList){ + int i; + for(i=0; inExpr; i++){ + sqlite3PrintExpr(pList->a[i].pExpr); + if( inExpr-1 ){ + sqlite3DebugPrintf(", "); + } + } +} +static void sqlite3PrintSelect(Select *p, int indent){ + sqlite3DebugPrintf("%*sSELECT(%p) ", indent, "", p); + sqlite3PrintExprList(p->pEList); + sqlite3DebugPrintf("\n"); + if( p->pSrc ){ + char *zPrefix; + int i; + zPrefix = "FROM"; + for(i=0; ipSrc->nSrc; i++){ + struct SrcList_item *pItem = &p->pSrc->a[i]; + sqlite3DebugPrintf("%*s ", indent+6, zPrefix); + zPrefix = ""; + if( pItem->pSelect ){ + sqlite3DebugPrintf("(\n"); + sqlite3PrintSelect(pItem->pSelect, indent+10); + sqlite3DebugPrintf("%*s)", indent+8, ""); + }else if( pItem->zName ){ + sqlite3DebugPrintf("%s", pItem->zName); + } + if( pItem->pTab ){ + sqlite3DebugPrintf("(table: %s)", pItem->pTab->zName); + } + if( pItem->zAlias ){ + sqlite3DebugPrintf(" AS %s", pItem->zAlias); + } + if( ipSrc->nSrc-1 ){ + sqlite3DebugPrintf(","); + } + sqlite3DebugPrintf("\n"); + } + } + if( p->pWhere ){ + sqlite3DebugPrintf("%*s WHERE ", indent, ""); + sqlite3PrintExpr(p->pWhere); + sqlite3DebugPrintf("\n"); + } + if( p->pGroupBy ){ + sqlite3DebugPrintf("%*s GROUP BY ", indent, ""); + sqlite3PrintExprList(p->pGroupBy); + sqlite3DebugPrintf("\n"); + } + if( p->pHaving ){ + sqlite3DebugPrintf("%*s HAVING ", indent, ""); + sqlite3PrintExpr(p->pHaving); + sqlite3DebugPrintf("\n"); + } + if( p->pOrderBy ){ + sqlite3DebugPrintf("%*s ORDER BY ", indent, ""); + sqlite3PrintExprList(p->pOrderBy); + sqlite3DebugPrintf("\n"); + } +} +/* End of the structure debug printing code +*****************************************************************************/ +#endif /* defined(SQLITE_TEST) || defined(SQLITE_DEBUG) */ Added: external/sqlite-source-3.5.7.x/shell.c ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/shell.c Wed Mar 19 03:00:27 2008 @@ -0,0 +1,2087 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code to implement the "sqlite" command line +** utility for accessing SQLite databases. +** +** $Id: shell.c,v 1.176 2008/03/04 17:45:01 mlcreech Exp $ +*/ +#include +#include +#include +#include +#include "sqlite3.h" +#include +#include + +#if !defined(_WIN32) && !defined(WIN32) && !defined(__OS2__) +# include +# include +# include +# include +#endif + +#ifdef __OS2__ +# include +#endif + +#if defined(HAVE_READLINE) && HAVE_READLINE==1 +# include +# include +#else +# define readline(p) local_getline(p,stdin) +# define add_history(X) +# define read_history(X) +# define write_history(X) +# define stifle_history(X) +#endif + +#if defined(_WIN32) || defined(WIN32) +# include +#else +/* Make sure isatty() has a prototype. 
+*/ +extern int isatty(); +#endif + +#if defined(_WIN32_WCE) +/* Windows CE (arm-wince-mingw32ce-gcc) does not provide isatty() + * thus we always assume that we have a console. That can be + * overridden with the -batch command line option. + */ +#define isatty(x) 1 +#endif + +#if !defined(_WIN32) && !defined(WIN32) && !defined(__OS2__) +#include +#include + +/* Saved resource information for the beginning of an operation */ +static struct rusage sBegin; + +/* True if the timer is enabled */ +static int enableTimer = 0; + +/* +** Begin timing an operation +*/ +static void beginTimer(void){ + if( enableTimer ){ + getrusage(RUSAGE_SELF, &sBegin); + } +} + +/* Return the difference of two time_structs in microseconds */ +static int timeDiff(struct timeval *pStart, struct timeval *pEnd){ + return (pEnd->tv_usec - pStart->tv_usec) + + 1000000*(pEnd->tv_sec - pStart->tv_sec); +} + +/* +** Print the timing results. +*/ +static void endTimer(void){ + if( enableTimer ){ + struct rusage sEnd; + getrusage(RUSAGE_SELF, &sEnd); + printf("CPU Time: user %f sys %f\n", + 0.000001*timeDiff(&sBegin.ru_utime, &sEnd.ru_utime), + 0.000001*timeDiff(&sBegin.ru_stime, &sEnd.ru_stime)); + } +} +#define BEGIN_TIMER beginTimer() +#define END_TIMER endTimer() +#define HAS_TIMER 1 +#else +#define BEGIN_TIMER +#define END_TIMER +#define HAS_TIMER 0 +#endif + + +/* +** If the following flag is set, then command execution stops +** at an error if we are not interactive. +*/ +static int bail_on_error = 0; + +/* +** Threat stdin as an interactive input if the following variable +** is true. Otherwise, assume stdin is connected to a file or pipe. +*/ +static int stdin_is_interactive = 1; + +/* +** The following is the open SQLite database. We make a pointer +** to this database a static variable so that it can be accessed +** by the SIGINT handler to interrupt database processing. +*/ +static sqlite3 *db = 0; + +/* +** True if an interrupt (Control-C) has been received. +*/ +static volatile int seenInterrupt = 0; + +/* +** This is the name of our program. It is set in main(), used +** in a number of other places, mostly for error messages. +*/ +static char *Argv0; + +/* +** Prompt strings. Initialized in main. Settable with +** .prompt main continue +*/ +static char mainPrompt[20]; /* First line prompt. default: "sqlite> "*/ +static char continuePrompt[20]; /* Continuation prompt. default: " ...> " */ + +/* +** Write I/O traces to the following stream. +*/ +#ifdef SQLITE_ENABLE_IOTRACE +static FILE *iotrace = 0; +#endif + +/* +** This routine works like printf in that its first argument is a +** format string and subsequent arguments are values to be substituted +** in place of % fields. The result of formatting this string +** is written to iotrace. +*/ +#ifdef SQLITE_ENABLE_IOTRACE +static void iotracePrintf(const char *zFormat, ...){ + va_list ap; + char *z; + if( iotrace==0 ) return; + va_start(ap, zFormat); + z = sqlite3_vmprintf(zFormat, ap); + va_end(ap); + fprintf(iotrace, "%s", z); + sqlite3_free(z); +} +#endif + + +/* +** Determines if a string is a number of not. +*/ +static int isNumber(const char *z, int *realnum){ + if( *z=='-' || *z=='+' ) z++; + if( !isdigit(*z) ){ + return 0; + } + z++; + if( realnum ) *realnum = 0; + while( isdigit(*z) ){ z++; } + if( *z=='.' 
){ + z++; + if( !isdigit(*z) ) return 0; + while( isdigit(*z) ){ z++; } + if( realnum ) *realnum = 1; + } + if( *z=='e' || *z=='E' ){ + z++; + if( *z=='+' || *z=='-' ) z++; + if( !isdigit(*z) ) return 0; + while( isdigit(*z) ){ z++; } + if( realnum ) *realnum = 1; + } + return *z==0; +} + +/* +** A global char* and an SQL function to access its current value +** from within an SQL statement. This program used to use the +** sqlite_exec_printf() API to substitue a string into an SQL statement. +** The correct way to do this with sqlite3 is to use the bind API, but +** since the shell is built around the callback paradigm it would be a lot +** of work. Instead just use this hack, which is quite harmless. +*/ +static const char *zShellStatic = 0; +static void shellstaticFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + assert( 0==argc ); + assert( zShellStatic ); + sqlite3_result_text(context, zShellStatic, -1, SQLITE_STATIC); +} + + +/* +** This routine reads a line of text from FILE in, stores +** the text in memory obtained from malloc() and returns a pointer +** to the text. NULL is returned at end of file, or if malloc() +** fails. +** +** The interface is like "readline" but no command-line editing +** is done. +*/ +static char *local_getline(char *zPrompt, FILE *in){ + char *zLine; + int nLine; + int n; + int eol; + + if( zPrompt && *zPrompt ){ + printf("%s",zPrompt); + fflush(stdout); + } + nLine = 100; + zLine = malloc( nLine ); + if( zLine==0 ) return 0; + n = 0; + eol = 0; + while( !eol ){ + if( n+100>nLine ){ + nLine = nLine*2 + 100; + zLine = realloc(zLine, nLine); + if( zLine==0 ) return 0; + } + if( fgets(&zLine[n], nLine - n, in)==0 ){ + if( n==0 ){ + free(zLine); + return 0; + } + zLine[n] = 0; + eol = 1; + break; + } + while( zLine[n] ){ n++; } + if( n>0 && zLine[n-1]=='\n' ){ + n--; + zLine[n] = 0; + eol = 1; + } + } + zLine = realloc( zLine, n+1 ); + return zLine; +} + +/* +** Retrieve a single line of input text. +** +** zPrior is a string of prior text retrieved. If not the empty +** string, then issue a continuation prompt. +*/ +static char *one_input_line(const char *zPrior, FILE *in){ + char *zPrompt; + char *zResult; + if( in!=0 ){ + return local_getline(0, in); + } + if( zPrior && zPrior[0] ){ + zPrompt = continuePrompt; + }else{ + zPrompt = mainPrompt; + } + zResult = readline(zPrompt); +#if defined(HAVE_READLINE) && HAVE_READLINE==1 + if( zResult && *zResult ) add_history(zResult); +#endif + return zResult; +} + +struct previous_mode_data { + int valid; /* Is there legit data in here? */ + int mode; + int showHeader; + int colWidth[100]; +}; + +/* +** An pointer to an instance of this structure is passed from +** the main program to the callback. This is used to communicate +** state and mode information. 
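/*
** Illustrative sketch of the shellstatic() pattern described above; the
** query text here is made up, but the publish/clear protocol matches the
** dot-command implementations later in this file.  Assumes open_db() has
** already registered the shellstatic() SQL function on db.
*/
static void exampleShellStaticUse(sqlite3 *db, const char *zTable){
  zShellStatic = zTable;      /* make the value visible to shellstatic() */
  sqlite3_exec(db,
     "SELECT name FROM sqlite_master WHERE tbl_name LIKE shellstatic()",
     0, 0, 0);
  zShellStatic = 0;           /* always clear it once the statement is done */
}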
+*/ +struct callback_data { + sqlite3 *db; /* The database */ + int echoOn; /* True to echo input commands */ + int cnt; /* Number of records displayed so far */ + FILE *out; /* Write results here */ + int mode; /* An output mode setting */ + int writableSchema; /* True if PRAGMA writable_schema=ON */ + int showHeader; /* True to show column names in List or Column mode */ + char *zDestTable; /* Name of destination table when MODE_Insert */ + char separator[20]; /* Separator character for MODE_List */ + int colWidth[100]; /* Requested width of each column when in column mode*/ + int actualWidth[100]; /* Actual width of each column */ + char nullvalue[20]; /* The text to print when a NULL comes back from + ** the database */ + struct previous_mode_data explainPrev; + /* Holds the mode information just before + ** .explain ON */ + char outfile[FILENAME_MAX]; /* Filename for *out */ + const char *zDbFilename; /* name of the database file */ +}; + +/* +** These are the allowed modes. +*/ +#define MODE_Line 0 /* One column per line. Blank line between records */ +#define MODE_Column 1 /* One record per line in neat columns */ +#define MODE_List 2 /* One record per line with a separator */ +#define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */ +#define MODE_Html 4 /* Generate an XHTML table */ +#define MODE_Insert 5 /* Generate SQL "insert" statements */ +#define MODE_Tcl 6 /* Generate ANSI-C or TCL quoted elements */ +#define MODE_Csv 7 /* Quote strings, numbers are plain */ +#define MODE_Explain 8 /* Like MODE_Column, but do not truncate data */ + +static const char *modeDescr[] = { + "line", + "column", + "list", + "semi", + "html", + "insert", + "tcl", + "csv", + "explain", +}; + +/* +** Number of elements in an array +*/ +#define ArraySize(X) (sizeof(X)/sizeof(X[0])) + +/* +** Output the given string as a quoted string using SQL quoting conventions. +*/ +static void output_quoted_string(FILE *out, const char *z){ + int i; + int nSingle = 0; + for(i=0; z[i]; i++){ + if( z[i]=='\'' ) nSingle++; + } + if( nSingle==0 ){ + fprintf(out,"'%s'",z); + }else{ + fprintf(out,"'"); + while( *z ){ + for(i=0; z[i] && z[i]!='\''; i++){} + if( i==0 ){ + fprintf(out,"''"); + z++; + }else if( z[i]=='\'' ){ + fprintf(out,"%.*s''",i,z); + z += i+1; + }else{ + fprintf(out,"%s",z); + break; + } + } + fprintf(out,"'"); + } +} + +/* +** Output the given string as a quoted according to C or TCL quoting rules. +*/ +static void output_c_string(FILE *out, const char *z){ + unsigned int c; + fputc('"', out); + while( (c = *(z++))!=0 ){ + if( c=='\\' ){ + fputc(c, out); + fputc(c, out); + }else if( c=='\t' ){ + fputc('\\', out); + fputc('t', out); + }else if( c=='\n' ){ + fputc('\\', out); + fputc('n', out); + }else if( c=='\r' ){ + fputc('\\', out); + fputc('r', out); + }else if( !isprint(c) ){ + fprintf(out, "\\%03o", c&0xff); + }else{ + fputc(c, out); + } + } + fputc('"', out); +} + +/* +** Output the given string with characters that are special to +** HTML escaped. +*/ +static void output_html_string(FILE *out, const char *z){ + int i; + while( *z ){ + for(i=0; z[i] && z[i]!='<' && z[i]!='&'; i++){} + if( i>0 ){ + fprintf(out,"%.*s",i,z); + } + if( z[i]=='<' ){ + fprintf(out,"<"); + }else if( z[i]=='&' ){ + fprintf(out,"&"); + }else{ + break; + } + z += i + 1; + } +} + +/* +** If a field contains any character identified by a 1 in the following +** array, then the string must be quoted for CSV. 
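/*
** Worked example for the CSV quoting below (illustrative): with the default
** "," separator the field
**
**     he said "hi", twice
**
** contains both the separator and double quotes, so it is emitted as
**
**     "he said ""hi"", twice"
**
** that is, wrapped in double quotes with each embedded double quote doubled.
** A plain field such as 12.5 contains no flagged characters and is emitted
** unquoted.
*/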
+*/ +static const char needCsvQuote[] = { + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +}; + +/* +** Output a single term of CSV. Actually, p->separator is used for +** the separator, which may or may not be a comma. p->nullvalue is +** the null value. Strings are quoted using ANSI-C rules. Numbers +** appear outside of quotes. +*/ +static void output_csv(struct callback_data *p, const char *z, int bSep){ + FILE *out = p->out; + if( z==0 ){ + fprintf(out,"%s",p->nullvalue); + }else{ + int i; + int nSep = strlen(p->separator); + for(i=0; z[i]; i++){ + if( needCsvQuote[((unsigned char*)z)[i]] + || (z[i]==p->separator[0] && + (nSep==1 || memcmp(z, p->separator, nSep)==0)) ){ + i = 0; + break; + } + } + if( i==0 ){ + putc('"', out); + for(i=0; z[i]; i++){ + if( z[i]=='"' ) putc('"', out); + putc(z[i], out); + } + putc('"', out); + }else{ + fprintf(out, "%s", z); + } + } + if( bSep ){ + fprintf(p->out, "%s", p->separator); + } +} + +#ifdef SIGINT +/* +** This routine runs when the user presses Ctrl-C +*/ +static void interrupt_handler(int NotUsed){ + seenInterrupt = 1; + if( db ) sqlite3_interrupt(db); +} +#endif + +/* +** This is the callback routine that the SQLite library +** invokes for each row of a query result. +*/ +static int callback(void *pArg, int nArg, char **azArg, char **azCol){ + int i; + struct callback_data *p = (struct callback_data*)pArg; + switch( p->mode ){ + case MODE_Line: { + int w = 5; + if( azArg==0 ) break; + for(i=0; iw ) w = len; + } + if( p->cnt++>0 ) fprintf(p->out,"\n"); + for(i=0; iout,"%*s = %s\n", w, azCol[i], + azArg[i] ? azArg[i] : p->nullvalue); + } + break; + } + case MODE_Explain: + case MODE_Column: { + if( p->cnt++==0 ){ + for(i=0; icolWidth) ){ + w = p->colWidth[i]; + }else{ + w = 0; + } + if( w<=0 ){ + w = strlen(azCol[i] ? azCol[i] : ""); + if( w<10 ) w = 10; + n = strlen(azArg && azArg[i] ? azArg[i] : p->nullvalue); + if( wactualWidth) ){ + p->actualWidth[i] = w; + } + if( p->showHeader ){ + fprintf(p->out,"%-*.*s%s",w,w,azCol[i], i==nArg-1 ? "\n": " "); + } + } + if( p->showHeader ){ + for(i=0; iactualWidth) ){ + w = p->actualWidth[i]; + }else{ + w = 10; + } + fprintf(p->out,"%-*.*s%s",w,w,"-----------------------------------" + "----------------------------------------------------------", + i==nArg-1 ? "\n": " "); + } + } + } + if( azArg==0 ) break; + for(i=0; iactualWidth) ){ + w = p->actualWidth[i]; + }else{ + w = 10; + } + if( p->mode==MODE_Explain && azArg[i] && strlen(azArg[i])>w ){ + w = strlen(azArg[i]); + } + fprintf(p->out,"%-*.*s%s",w,w, + azArg[i] ? azArg[i] : p->nullvalue, i==nArg-1 ? "\n": " "); + } + break; + } + case MODE_Semi: + case MODE_List: { + if( p->cnt++==0 && p->showHeader ){ + for(i=0; iout,"%s%s",azCol[i], i==nArg-1 ? 
"\n" : p->separator); + } + } + if( azArg==0 ) break; + for(i=0; inullvalue; + fprintf(p->out, "%s", z); + if( iout, "%s", p->separator); + }else if( p->mode==MODE_Semi ){ + fprintf(p->out, ";\n"); + }else{ + fprintf(p->out, "\n"); + } + } + break; + } + case MODE_Html: { + if( p->cnt++==0 && p->showHeader ){ + fprintf(p->out,""); + for(i=0; iout,"",azCol[i]); + } + fprintf(p->out,"\n"); + } + if( azArg==0 ) break; + fprintf(p->out,""); + for(i=0; iout,"\n"); + } + fprintf(p->out,"\n"); + break; + } + case MODE_Tcl: { + if( p->cnt++==0 && p->showHeader ){ + for(i=0; iout,azCol[i] ? azCol[i] : ""); + fprintf(p->out, "%s", p->separator); + } + fprintf(p->out,"\n"); + } + if( azArg==0 ) break; + for(i=0; iout, azArg[i] ? azArg[i] : p->nullvalue); + fprintf(p->out, "%s", p->separator); + } + fprintf(p->out,"\n"); + break; + } + case MODE_Csv: { + if( p->cnt++==0 && p->showHeader ){ + for(i=0; iout,"\n"); + } + if( azArg==0 ) break; + for(i=0; iout,"\n"); + break; + } + case MODE_Insert: { + if( azArg==0 ) break; + fprintf(p->out,"INSERT INTO %s VALUES(",p->zDestTable); + for(i=0; i0 ? ",": ""; + if( azArg[i]==0 ){ + fprintf(p->out,"%sNULL",zSep); + }else if( isNumber(azArg[i], 0) ){ + fprintf(p->out,"%s%s",zSep, azArg[i]); + }else{ + if( zSep[0] ) fprintf(p->out,"%s",zSep); + output_quoted_string(p->out, azArg[i]); + } + } + fprintf(p->out,");\n"); + break; + } + } + return 0; +} + +/* +** Set the destination table field of the callback_data structure to +** the name of the table given. Escape any quote characters in the +** table name. +*/ +static void set_table_name(struct callback_data *p, const char *zName){ + int i, n; + int needQuote; + char *z; + + if( p->zDestTable ){ + free(p->zDestTable); + p->zDestTable = 0; + } + if( zName==0 ) return; + needQuote = !isalpha((unsigned char)*zName) && *zName!='_'; + for(i=n=0; zName[i]; i++, n++){ + if( !isalnum((unsigned char)zName[i]) && zName[i]!='_' ){ + needQuote = 1; + if( zName[i]=='\'' ) n++; + } + } + if( needQuote ) n += 2; + z = p->zDestTable = malloc( n+1 ); + if( z==0 ){ + fprintf(stderr,"Out of memory!\n"); + exit(1); + } + n = 0; + if( needQuote ) z[n++] = '\''; + for(i=0; zName[i]; i++){ + z[n++] = zName[i]; + if( zName[i]=='\'' ) z[n++] = '\''; + } + if( needQuote ) z[n++] = '\''; + z[n] = 0; +} + +/* zIn is either a pointer to a NULL-terminated string in memory obtained +** from malloc(), or a NULL pointer. The string pointed to by zAppend is +** added to zIn, and the result returned in memory obtained from malloc(). +** zIn, if it was not NULL, is freed. +** +** If the third argument, quote, is not '\0', then it is used as a +** quote character for zAppend. 
+*/ +static char *appendText(char *zIn, char const *zAppend, char quote){ + int len; + int i; + int nAppend = strlen(zAppend); + int nIn = (zIn?strlen(zIn):0); + + len = nAppend+nIn+1; + if( quote ){ + len += 2; + for(i=0; iout, "DELETE FROM sqlite_sequence;\n"); + }else if( strcmp(zTable, "sqlite_stat1")==0 ){ + fprintf(p->out, "ANALYZE sqlite_master;\n"); + }else if( strncmp(zTable, "sqlite_", 7)==0 ){ + return 0; + }else if( strncmp(zSql, "CREATE VIRTUAL TABLE", 20)==0 ){ + char *zIns; + if( !p->writableSchema ){ + fprintf(p->out, "PRAGMA writable_schema=ON;\n"); + p->writableSchema = 1; + } + zIns = sqlite3_mprintf( + "INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql)" + "VALUES('table','%q','%q',0,'%q');", + zTable, zTable, zSql); + fprintf(p->out, "%s\n", zIns); + sqlite3_free(zIns); + return 0; + }else{ + fprintf(p->out, "%s;\n", zSql); + } + + if( strcmp(zType, "table")==0 ){ + sqlite3_stmt *pTableInfo = 0; + char *zSelect = 0; + char *zTableInfo = 0; + char *zTmp = 0; + + zTableInfo = appendText(zTableInfo, "PRAGMA table_info(", 0); + zTableInfo = appendText(zTableInfo, zTable, '"'); + zTableInfo = appendText(zTableInfo, ");", 0); + + rc = sqlite3_prepare(p->db, zTableInfo, -1, &pTableInfo, 0); + if( zTableInfo ) free(zTableInfo); + if( rc!=SQLITE_OK || !pTableInfo ){ + return 1; + } + + zSelect = appendText(zSelect, "SELECT 'INSERT INTO ' || ", 0); + zTmp = appendText(zTmp, zTable, '"'); + if( zTmp ){ + zSelect = appendText(zSelect, zTmp, '\''); + } + zSelect = appendText(zSelect, " || ' VALUES(' || ", 0); + rc = sqlite3_step(pTableInfo); + while( rc==SQLITE_ROW ){ + const char *zText = (const char *)sqlite3_column_text(pTableInfo, 1); + zSelect = appendText(zSelect, "quote(", 0); + zSelect = appendText(zSelect, zText, '"'); + rc = sqlite3_step(pTableInfo); + if( rc==SQLITE_ROW ){ + zSelect = appendText(zSelect, ") || ',' || ", 0); + }else{ + zSelect = appendText(zSelect, ") ", 0); + } + } + rc = sqlite3_finalize(pTableInfo); + if( rc!=SQLITE_OK ){ + if( zSelect ) free(zSelect); + return 1; + } + zSelect = appendText(zSelect, "|| ')' FROM ", 0); + zSelect = appendText(zSelect, zTable, '"'); + + rc = run_table_dump_query(p->out, p->db, zSelect); + if( rc==SQLITE_CORRUPT ){ + zSelect = appendText(zSelect, " ORDER BY rowid DESC", 0); + rc = run_table_dump_query(p->out, p->db, zSelect); + } + if( zSelect ) free(zSelect); + } + return 0; +} + +/* +** Run zQuery. Use dump_callback() as the callback routine so that +** the contents of the query are output as SQL statements. +** +** If we get a SQLITE_CORRUPT error, rerun the query after appending +** "ORDER BY rowid DESC" to the end. +*/ +static int run_schema_dump_query( + struct callback_data *p, + const char *zQuery, + char **pzErrMsg +){ + int rc; + rc = sqlite3_exec(p->db, zQuery, dump_callback, p, pzErrMsg); + if( rc==SQLITE_CORRUPT ){ + char *zQ2; + int len = strlen(zQuery); + if( pzErrMsg ) sqlite3_free(*pzErrMsg); + zQ2 = malloc( len+100 ); + if( zQ2==0 ) return rc; + sqlite3_snprintf(sizeof(zQ2), zQ2, "%s ORDER BY rowid DESC", zQuery); + rc = sqlite3_exec(p->db, zQ2, dump_callback, p, pzErrMsg); + free(zQ2); + } + return rc; +} + +/* +** Text of a help message +*/ +static char zHelp[] = + ".bail ON|OFF Stop after hitting an error. Default OFF\n" + ".databases List names and files of attached databases\n" + ".dump ?TABLE? ... 
Dump the database in an SQL text format\n" + ".echo ON|OFF Turn command echo on or off\n" + ".exit Exit this program\n" + ".explain ON|OFF Turn output mode suitable for EXPLAIN on or off.\n" + ".header(s) ON|OFF Turn display of headers on or off\n" + ".help Show this message\n" + ".import FILE TABLE Import data from FILE into TABLE\n" + ".indices TABLE Show names of all indices on TABLE\n" +#ifdef SQLITE_ENABLE_IOTRACE + ".iotrace FILE Enable I/O diagnostic logging to FILE\n" +#endif +#ifndef SQLITE_OMIT_LOAD_EXTENSION + ".load FILE ?ENTRY? Load an extension library\n" +#endif + ".mode MODE ?TABLE? Set output mode where MODE is one of:\n" + " csv Comma-separated values\n" + " column Left-aligned columns. (See .width)\n" + " html HTML
                <table>
                code\n" + " insert SQL insert statements for TABLE\n" + " line One value per line\n" + " list Values delimited by .separator string\n" + " tabs Tab-separated values\n" + " tcl TCL list elements\n" + ".nullvalue STRING Print STRING in place of NULL values\n" + ".output FILENAME Send output to FILENAME\n" + ".output stdout Send output to the screen\n" + ".prompt MAIN CONTINUE Replace the standard prompts\n" + ".quit Exit this program\n" + ".read FILENAME Execute SQL in FILENAME\n" + ".schema ?TABLE? Show the CREATE statements\n" + ".separator STRING Change separator used by output mode and .import\n" + ".show Show the current values for various settings\n" + ".tables ?PATTERN? List names of tables matching a LIKE pattern\n" + ".timeout MS Try opening locked tables for MS milliseconds\n" +#if HAS_TIMER + ".timer ON|OFF Turn the CPU timer measurement on or off\n" +#endif + ".width NUM NUM ... Set column widths for \"column\" mode\n" +; + +/* Forward reference */ +static int process_input(struct callback_data *p, FILE *in); + +/* +** Make sure the database is open. If it is not, then open it. If +** the database fails to open, print an error message and exit. +*/ +static void open_db(struct callback_data *p){ + if( p->db==0 ){ + sqlite3_open(p->zDbFilename, &p->db); + db = p->db; + sqlite3_create_function(db, "shellstatic", 0, SQLITE_UTF8, 0, + shellstaticFunc, 0, 0); + if( SQLITE_OK!=sqlite3_errcode(db) ){ + fprintf(stderr,"Unable to open database \"%s\": %s\n", + p->zDbFilename, sqlite3_errmsg(db)); + exit(1); + } +#ifndef SQLITE_OMIT_LOAD_EXTENSION + sqlite3_enable_load_extension(p->db, 1); +#endif + } +} + +/* +** Do C-language style dequoting. +** +** \t -> tab +** \n -> newline +** \r -> carriage return +** \NNN -> ascii character NNN in octal +** \\ -> backslash +*/ +static void resolve_backslashes(char *z){ + int i, j, c; + for(i=j=0; (c = z[i])!=0; i++, j++){ + if( c=='\\' ){ + c = z[++i]; + if( c=='n' ){ + c = '\n'; + }else if( c=='t' ){ + c = '\t'; + }else if( c=='r' ){ + c = '\r'; + }else if( c>='0' && c<='7' ){ + c -= '0'; + if( z[i+1]>='0' && z[i+1]<='7' ){ + i++; + c = (c<<3) + z[i] - '0'; + if( z[i+1]>='0' && z[i+1]<='7' ){ + i++; + c = (c<<3) + z[i] - '0'; + } + } + } + } + z[j] = c; + } + z[j] = 0; +} + +/* +** Interpret zArg as a boolean value. Return either 0 or 1. +*/ +static int booleanValue(char *zArg){ + int val = atoi(zArg); + int j; + for(j=0; zArg[j]; j++){ + zArg[j] = tolower(zArg[j]); + } + if( strcmp(zArg,"on")==0 ){ + val = 1; + }else if( strcmp(zArg,"yes")==0 ){ + val = 1; + } + return val; +} + +/* +** If an input line begins with "." then invoke this routine to +** process that line. +** +** Return 1 on error, 2 to exit, and 0 otherwise. +*/ +static int do_meta_command(char *zLine, struct callback_data *p){ + int i = 1; + int nArg = 0; + int n, c; + int rc = 0; + char *azArg[50]; + + /* Parse the input line into tokens. 
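/*
** Worked example for resolve_backslashes() above (illustrative): a buffer
** holding the two characters '\' and 't' is rewritten in place to a single
** tab character, and a buffer holding "\041" becomes the single character
** '!' (octal 041 is decimal 33).
*/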
+ */ + while( zLine[i] && nArg1 && strncmp(azArg[0], "bail", n)==0 && nArg>1 ){ + bail_on_error = booleanValue(azArg[1]); + }else + + if( c=='d' && n>1 && strncmp(azArg[0], "databases", n)==0 ){ + struct callback_data data; + char *zErrMsg = 0; + open_db(p); + memcpy(&data, p, sizeof(data)); + data.showHeader = 1; + data.mode = MODE_Column; + data.colWidth[0] = 3; + data.colWidth[1] = 15; + data.colWidth[2] = 58; + data.cnt = 0; + sqlite3_exec(p->db, "PRAGMA database_list; ", callback, &data, &zErrMsg); + if( zErrMsg ){ + fprintf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + } + }else + + if( c=='d' && strncmp(azArg[0], "dump", n)==0 ){ + char *zErrMsg = 0; + open_db(p); + fprintf(p->out, "BEGIN TRANSACTION;\n"); + p->writableSchema = 0; + if( nArg==1 ){ + run_schema_dump_query(p, + "SELECT name, type, sql FROM sqlite_master " + "WHERE sql NOT NULL AND type=='table'", 0 + ); + run_table_dump_query(p->out, p->db, + "SELECT sql FROM sqlite_master " + "WHERE sql NOT NULL AND type IN ('index','trigger','view')" + ); + }else{ + int i; + for(i=1; iout, p->db, + "SELECT sql FROM sqlite_master " + "WHERE sql NOT NULL" + " AND type IN ('index','trigger','view')" + " AND tbl_name LIKE shellstatic()" + ); + zShellStatic = 0; + } + } + if( p->writableSchema ){ + fprintf(p->out, "PRAGMA writable_schema=OFF;\n"); + p->writableSchema = 0; + } + if( zErrMsg ){ + fprintf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + }else{ + fprintf(p->out, "COMMIT;\n"); + } + }else + + if( c=='e' && strncmp(azArg[0], "echo", n)==0 && nArg>1 ){ + p->echoOn = booleanValue(azArg[1]); + }else + + if( c=='e' && strncmp(azArg[0], "exit", n)==0 ){ + rc = 2; + }else + + if( c=='e' && strncmp(azArg[0], "explain", n)==0 ){ + int val = nArg>=2 ? booleanValue(azArg[1]) : 1; + if(val == 1) { + if(!p->explainPrev.valid) { + p->explainPrev.valid = 1; + p->explainPrev.mode = p->mode; + p->explainPrev.showHeader = p->showHeader; + memcpy(p->explainPrev.colWidth,p->colWidth,sizeof(p->colWidth)); + } + /* We could put this code under the !p->explainValid + ** condition so that it does not execute if we are already in + ** explain mode. However, always executing it allows us an easy + ** was to reset to explain mode in case the user previously + ** did an .explain followed by a .width, .mode or .header + ** command. 
+ */ + p->mode = MODE_Explain; + p->showHeader = 1; + memset(p->colWidth,0,ArraySize(p->colWidth)); + p->colWidth[0] = 4; /* addr */ + p->colWidth[1] = 13; /* opcode */ + p->colWidth[2] = 4; /* P1 */ + p->colWidth[3] = 4; /* P2 */ + p->colWidth[4] = 4; /* P3 */ + p->colWidth[5] = 13; /* P4 */ + p->colWidth[6] = 2; /* P5 */ + p->colWidth[7] = 13; /* Comment */ + }else if (p->explainPrev.valid) { + p->explainPrev.valid = 0; + p->mode = p->explainPrev.mode; + p->showHeader = p->explainPrev.showHeader; + memcpy(p->colWidth,p->explainPrev.colWidth,sizeof(p->colWidth)); + } + }else + + if( c=='h' && (strncmp(azArg[0], "header", n)==0 || + strncmp(azArg[0], "headers", n)==0 )&& nArg>1 ){ + p->showHeader = booleanValue(azArg[1]); + }else + + if( c=='h' && strncmp(azArg[0], "help", n)==0 ){ + fprintf(stderr,zHelp); + }else + + if( c=='i' && strncmp(azArg[0], "import", n)==0 && nArg>=3 ){ + char *zTable = azArg[2]; /* Insert data into this table */ + char *zFile = azArg[1]; /* The file from which to extract data */ + sqlite3_stmt *pStmt; /* A statement */ + int rc; /* Result code */ + int nCol; /* Number of columns in the table */ + int nByte; /* Number of bytes in an SQL string */ + int i, j; /* Loop counters */ + int nSep; /* Number of bytes in p->separator[] */ + char *zSql; /* An SQL statement */ + char *zLine; /* A single line of input from the file */ + char **azCol; /* zLine[] broken up into columns */ + char *zCommit; /* How to commit changes */ + FILE *in; /* The input file */ + int lineno = 0; /* Line number of input file */ + + open_db(p); + nSep = strlen(p->separator); + if( nSep==0 ){ + fprintf(stderr, "non-null separator required for import\n"); + return 0; + } + zSql = sqlite3_mprintf("SELECT * FROM '%q'", zTable); + if( zSql==0 ) return 0; + nByte = strlen(zSql); + rc = sqlite3_prepare(p->db, zSql, -1, &pStmt, 0); + sqlite3_free(zSql); + if( rc ){ + fprintf(stderr,"Error: %s\n", sqlite3_errmsg(db)); + nCol = 0; + rc = 1; + }else{ + nCol = sqlite3_column_count(pStmt); + } + sqlite3_finalize(pStmt); + if( nCol==0 ) return 0; + zSql = malloc( nByte + 20 + nCol*2 ); + if( zSql==0 ) return 0; + sqlite3_snprintf(nByte+20, zSql, "INSERT INTO '%q' VALUES(?", zTable); + j = strlen(zSql); + for(i=1; idb, zSql, -1, &pStmt, 0); + free(zSql); + if( rc ){ + fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db)); + sqlite3_finalize(pStmt); + return 1; + } + in = fopen(zFile, "rb"); + if( in==0 ){ + fprintf(stderr, "cannot open file: %s\n", zFile); + sqlite3_finalize(pStmt); + return 0; + } + azCol = malloc( sizeof(azCol[0])*(nCol+1) ); + if( azCol==0 ){ + fclose(in); + return 0; + } + sqlite3_exec(p->db, "BEGIN", 0, 0, 0); + zCommit = "COMMIT"; + while( (zLine = local_getline(0, in))!=0 ){ + char *z; + i = 0; + lineno++; + azCol[0] = zLine; + for(i=0, z=zLine; *z && *z!='\n' && *z!='\r'; z++){ + if( *z==p->separator[0] && strncmp(z, p->separator, nSep)==0 ){ + *z = 0; + i++; + if( idb, zCommit, 0, 0, 0); + }else + + if( c=='i' && strncmp(azArg[0], "indices", n)==0 && nArg>1 ){ + struct callback_data data; + char *zErrMsg = 0; + open_db(p); + memcpy(&data, p, sizeof(data)); + data.showHeader = 0; + data.mode = MODE_List; + zShellStatic = azArg[1]; + sqlite3_exec(p->db, + "SELECT name FROM sqlite_master " + "WHERE type='index' AND tbl_name LIKE shellstatic() " + "UNION ALL " + "SELECT name FROM sqlite_temp_master " + "WHERE type='index' AND tbl_name LIKE shellstatic() " + "ORDER BY 1", + callback, &data, &zErrMsg + ); + zShellStatic = 0; + if( zErrMsg ){ + fprintf(stderr,"Error: %s\n", zErrMsg); + 
sqlite3_free(zErrMsg); + } + }else + +#ifdef SQLITE_ENABLE_IOTRACE + if( c=='i' && strncmp(azArg[0], "iotrace", n)==0 ){ + extern void (*sqlite3IoTrace)(const char*, ...); + if( iotrace && iotrace!=stdout ) fclose(iotrace); + iotrace = 0; + if( nArg<2 ){ + sqlite3IoTrace = 0; + }else if( strcmp(azArg[1], "-")==0 ){ + sqlite3IoTrace = iotracePrintf; + iotrace = stdout; + }else{ + iotrace = fopen(azArg[1], "w"); + if( iotrace==0 ){ + fprintf(stderr, "cannot open \"%s\"\n", azArg[1]); + sqlite3IoTrace = 0; + }else{ + sqlite3IoTrace = iotracePrintf; + } + } + }else +#endif + +#ifndef SQLITE_OMIT_LOAD_EXTENSION + if( c=='l' && strncmp(azArg[0], "load", n)==0 && nArg>=2 ){ + const char *zFile, *zProc; + char *zErrMsg = 0; + int rc; + zFile = azArg[1]; + zProc = nArg>=3 ? azArg[2] : 0; + open_db(p); + rc = sqlite3_load_extension(p->db, zFile, zProc, &zErrMsg); + if( rc!=SQLITE_OK ){ + fprintf(stderr, "%s\n", zErrMsg); + sqlite3_free(zErrMsg); + rc = 1; + } + }else +#endif + + if( c=='m' && strncmp(azArg[0], "mode", n)==0 && nArg>=2 ){ + int n2 = strlen(azArg[1]); + if( strncmp(azArg[1],"line",n2)==0 + || + strncmp(azArg[1],"lines",n2)==0 ){ + p->mode = MODE_Line; + }else if( strncmp(azArg[1],"column",n2)==0 + || + strncmp(azArg[1],"columns",n2)==0 ){ + p->mode = MODE_Column; + }else if( strncmp(azArg[1],"list",n2)==0 ){ + p->mode = MODE_List; + }else if( strncmp(azArg[1],"html",n2)==0 ){ + p->mode = MODE_Html; + }else if( strncmp(azArg[1],"tcl",n2)==0 ){ + p->mode = MODE_Tcl; + }else if( strncmp(azArg[1],"csv",n2)==0 ){ + p->mode = MODE_Csv; + sqlite3_snprintf(sizeof(p->separator), p->separator, ","); + }else if( strncmp(azArg[1],"tabs",n2)==0 ){ + p->mode = MODE_List; + sqlite3_snprintf(sizeof(p->separator), p->separator, "\t"); + }else if( strncmp(azArg[1],"insert",n2)==0 ){ + p->mode = MODE_Insert; + if( nArg>=3 ){ + set_table_name(p, azArg[2]); + }else{ + set_table_name(p, "table"); + } + }else { + fprintf(stderr,"mode should be one of: " + "column csv html insert line list tabs tcl\n"); + } + }else + + if( c=='n' && strncmp(azArg[0], "nullvalue", n)==0 && nArg==2 ) { + sqlite3_snprintf(sizeof(p->nullvalue), p->nullvalue, + "%.*s", (int)ArraySize(p->nullvalue)-1, azArg[1]); + }else + + if( c=='o' && strncmp(azArg[0], "output", n)==0 && nArg==2 ){ + if( p->out!=stdout ){ + fclose(p->out); + } + if( strcmp(azArg[1],"stdout")==0 ){ + p->out = stdout; + sqlite3_snprintf(sizeof(p->outfile), p->outfile, "stdout"); + }else{ + p->out = fopen(azArg[1], "wb"); + if( p->out==0 ){ + fprintf(stderr,"can't write to \"%s\"\n", azArg[1]); + p->out = stdout; + } else { + sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", azArg[1]); + } + } + }else + + if( c=='p' && strncmp(azArg[0], "prompt", n)==0 && (nArg==2 || nArg==3)){ + if( nArg >= 2) { + strncpy(mainPrompt,azArg[1],(int)ArraySize(mainPrompt)-1); + } + if( nArg >= 3) { + strncpy(continuePrompt,azArg[2],(int)ArraySize(continuePrompt)-1); + } + }else + + if( c=='q' && strncmp(azArg[0], "quit", n)==0 ){ + rc = 2; + }else + + if( c=='r' && strncmp(azArg[0], "read", n)==0 && nArg==2 ){ + FILE *alt = fopen(azArg[1], "rb"); + if( alt==0 ){ + fprintf(stderr,"can't open \"%s\"\n", azArg[1]); + }else{ + process_input(p, alt); + fclose(alt); + } + }else + + if( c=='s' && strncmp(azArg[0], "schema", n)==0 ){ + struct callback_data data; + char *zErrMsg = 0; + open_db(p); + memcpy(&data, p, sizeof(data)); + data.showHeader = 0; + data.mode = MODE_Semi; + if( nArg>1 ){ + int i; + for(i=0; azArg[1][i]; i++) azArg[1][i] = tolower(azArg[1][i]); + if( 
strcmp(azArg[1],"sqlite_master")==0 ){ + char *new_argv[2], *new_colv[2]; + new_argv[0] = "CREATE TABLE sqlite_master (\n" + " type text,\n" + " name text,\n" + " tbl_name text,\n" + " rootpage integer,\n" + " sql text\n" + ")"; + new_argv[1] = 0; + new_colv[0] = "sql"; + new_colv[1] = 0; + callback(&data, 1, new_argv, new_colv); + }else if( strcmp(azArg[1],"sqlite_temp_master")==0 ){ + char *new_argv[2], *new_colv[2]; + new_argv[0] = "CREATE TEMP TABLE sqlite_temp_master (\n" + " type text,\n" + " name text,\n" + " tbl_name text,\n" + " rootpage integer,\n" + " sql text\n" + ")"; + new_argv[1] = 0; + new_colv[0] = "sql"; + new_colv[1] = 0; + callback(&data, 1, new_argv, new_colv); + }else{ + zShellStatic = azArg[1]; + sqlite3_exec(p->db, + "SELECT sql FROM " + " (SELECT * FROM sqlite_master UNION ALL" + " SELECT * FROM sqlite_temp_master) " + "WHERE tbl_name LIKE shellstatic() AND type!='meta' AND sql NOTNULL " + "ORDER BY substr(type,2,1), name", + callback, &data, &zErrMsg); + zShellStatic = 0; + } + }else{ + sqlite3_exec(p->db, + "SELECT sql FROM " + " (SELECT * FROM sqlite_master UNION ALL" + " SELECT * FROM sqlite_temp_master) " + "WHERE type!='meta' AND sql NOTNULL AND name NOT LIKE 'sqlite_%'" + "ORDER BY substr(type,2,1), name", + callback, &data, &zErrMsg + ); + } + if( zErrMsg ){ + fprintf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + } + }else + + if( c=='s' && strncmp(azArg[0], "separator", n)==0 && nArg==2 ){ + sqlite3_snprintf(sizeof(p->separator), p->separator, + "%.*s", (int)sizeof(p->separator)-1, azArg[1]); + }else + + if( c=='s' && strncmp(azArg[0], "show", n)==0){ + int i; + fprintf(p->out,"%9.9s: %s\n","echo", p->echoOn ? "on" : "off"); + fprintf(p->out,"%9.9s: %s\n","explain", p->explainPrev.valid ? "on" :"off"); + fprintf(p->out,"%9.9s: %s\n","headers", p->showHeader ? "on" : "off"); + fprintf(p->out,"%9.9s: %s\n","mode", modeDescr[p->mode]); + fprintf(p->out,"%9.9s: ", "nullvalue"); + output_c_string(p->out, p->nullvalue); + fprintf(p->out, "\n"); + fprintf(p->out,"%9.9s: %s\n","output", + strlen(p->outfile) ? 
p->outfile : "stdout"); + fprintf(p->out,"%9.9s: ", "separator"); + output_c_string(p->out, p->separator); + fprintf(p->out, "\n"); + fprintf(p->out,"%9.9s: ","width"); + for (i=0;i<(int)ArraySize(p->colWidth) && p->colWidth[i] != 0;i++) { + fprintf(p->out,"%d ",p->colWidth[i]); + } + fprintf(p->out,"\n"); + }else + + if( c=='t' && n>1 && strncmp(azArg[0], "tables", n)==0 ){ + char **azResult; + int nRow, rc; + char *zErrMsg; + open_db(p); + if( nArg==1 ){ + rc = sqlite3_get_table(p->db, + "SELECT name FROM sqlite_master " + "WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'" + "UNION ALL " + "SELECT name FROM sqlite_temp_master " + "WHERE type IN ('table','view') " + "ORDER BY 1", + &azResult, &nRow, 0, &zErrMsg + ); + }else{ + zShellStatic = azArg[1]; + rc = sqlite3_get_table(p->db, + "SELECT name FROM sqlite_master " + "WHERE type IN ('table','view') AND name LIKE '%'||shellstatic()||'%' " + "UNION ALL " + "SELECT name FROM sqlite_temp_master " + "WHERE type IN ('table','view') AND name LIKE '%'||shellstatic()||'%' " + "ORDER BY 1", + &azResult, &nRow, 0, &zErrMsg + ); + zShellStatic = 0; + } + if( zErrMsg ){ + fprintf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + } + if( rc==SQLITE_OK ){ + int len, maxlen = 0; + int i, j; + int nPrintCol, nPrintRow; + for(i=1; i<=nRow; i++){ + if( azResult[i]==0 ) continue; + len = strlen(azResult[i]); + if( len>maxlen ) maxlen = len; + } + nPrintCol = 80/(maxlen+2); + if( nPrintCol<1 ) nPrintCol = 1; + nPrintRow = (nRow + nPrintCol - 1)/nPrintCol; + for(i=0; i4 && strncmp(azArg[0], "timeout", n)==0 && nArg>=2 ){ + open_db(p); + sqlite3_busy_timeout(p->db, atoi(azArg[1])); + }else + +#if HAS_TIMER + if( c=='t' && n>=5 && strncmp(azArg[0], "timer", n)==0 && nArg>1 ){ + enableTimer = booleanValue(azArg[1]); + }else +#endif + + if( c=='w' && strncmp(azArg[0], "width", n)==0 ){ + int j; + assert( nArg<=ArraySize(azArg) ); + for(j=1; jcolWidth); j++){ + p->colWidth[j-1] = atoi(azArg[j]); + } + }else + + + { + fprintf(stderr, "unknown command or invalid arguments: " + " \"%s\". Enter \".help\" for help\n", azArg[0]); + } + + return rc; +} + +/* +** Return TRUE if a semicolon occurs anywhere in the first N characters +** of string z[]. +*/ +static int _contains_semicolon(const char *z, int N){ + int i; + for(i=0; iout); + free(zLine); + zLine = one_input_line(zSql, in); + if( zLine==0 ){ + break; /* We have reached EOF */ + } + if( seenInterrupt ){ + if( in!=0 ) break; + seenInterrupt = 0; + } + lineno++; + if( p->echoOn ) printf("%s\n", zLine); + if( (zSql==0 || zSql[0]==0) && _all_whitespace(zLine) ) continue; + if( zLine && zLine[0]=='.' 
&& nSql==0 ){ + rc = do_meta_command(zLine, p); + if( rc==2 ){ + break; + }else if( rc ){ + errCnt++; + } + continue; + } + if( _is_command_terminator(zLine) ){ + memcpy(zLine,";",2); + } + nSqlPrior = nSql; + if( zSql==0 ){ + int i; + for(i=0; zLine[i] && isspace((unsigned char)zLine[i]); i++){} + if( zLine[i]!=0 ){ + nSql = strlen(zLine); + zSql = malloc( nSql+1 ); + if( zSql==0 ){ + fprintf(stderr, "out of memory\n"); + exit(1); + } + memcpy(zSql, zLine, nSql+1); + startline = lineno; + } + }else{ + int len = strlen(zLine); + zSql = realloc( zSql, nSql + len + 2 ); + if( zSql==0 ){ + fprintf(stderr,"%s: out of memory!\n", Argv0); + exit(1); + } + zSql[nSql++] = '\n'; + memcpy(&zSql[nSql], zLine, len+1); + nSql += len; + } + if( zSql && _contains_semicolon(&zSql[nSqlPrior], nSql-nSqlPrior) + && sqlite3_complete(zSql) ){ + p->cnt = 0; + open_db(p); + BEGIN_TIMER; + rc = sqlite3_exec(p->db, zSql, callback, p, &zErrMsg); + END_TIMER; + if( rc || zErrMsg ){ + char zPrefix[100]; + if( in!=0 || !stdin_is_interactive ){ + sqlite3_snprintf(sizeof(zPrefix), zPrefix, + "SQL error near line %d:", startline); + }else{ + sqlite3_snprintf(sizeof(zPrefix), zPrefix, "SQL error:"); + } + if( zErrMsg!=0 ){ + printf("%s %s\n", zPrefix, zErrMsg); + sqlite3_free(zErrMsg); + zErrMsg = 0; + }else{ + printf("%s %s\n", zPrefix, sqlite3_errmsg(p->db)); + } + errCnt++; + } + free(zSql); + zSql = 0; + nSql = 0; + } + } + if( zSql ){ + if( !_all_whitespace(zSql) ) printf("Incomplete SQL: %s\n", zSql); + free(zSql); + } + free(zLine); + return errCnt; +} + +/* +** Return a pathname which is the user's home directory. A +** 0 return indicates an error of some kind. Space to hold the +** resulting string is obtained from malloc(). The calling +** function should free the result. +*/ +static char *find_home_dir(void){ + char *home_dir = NULL; + +#if !defined(_WIN32) && !defined(WIN32) && !defined(__OS2__) && !defined(_WIN32_WCE) + struct passwd *pwent; + uid_t uid = getuid(); + if( (pwent=getpwuid(uid)) != NULL) { + home_dir = pwent->pw_dir; + } +#endif + +#if defined(_WIN32_WCE) + /* Windows CE (arm-wince-mingw32ce-gcc) does not provide getenv() + */ + home_dir = strdup("/"); +#else + +#if defined(_WIN32) || defined(WIN32) || defined(__OS2__) + if (!home_dir) { + home_dir = getenv("USERPROFILE"); + } +#endif + + if (!home_dir) { + home_dir = getenv("HOME"); + } + +#if defined(_WIN32) || defined(WIN32) || defined(__OS2__) + if (!home_dir) { + char *zDrive, *zPath; + int n; + zDrive = getenv("HOMEDRIVE"); + zPath = getenv("HOMEPATH"); + if( zDrive && zPath ){ + n = strlen(zDrive) + strlen(zPath) + 1; + home_dir = malloc( n ); + if( home_dir==0 ) return 0; + sqlite3_snprintf(n, home_dir, "%s%s", zDrive, zPath); + return home_dir; + } + home_dir = "c:\\"; + } +#endif + +#endif /* !_WIN32_WCE */ + + if( home_dir ){ + int n = strlen(home_dir) + 1; + char *z = malloc( n ); + if( z ) memcpy(z, home_dir, n); + home_dir = z; + } + + return home_dir; +} + +/* +** Read input from the file given by sqliterc_override. Or if that +** parameter is NULL, take input from ~/.sqliterc +*/ +static void process_sqliterc( + struct callback_data *p, /* Configuration data */ + const char *sqliterc_override /* Name of config file. 
NULL to use default */ +){ + char *home_dir = NULL; + const char *sqliterc = sqliterc_override; + char *zBuf = 0; + FILE *in = NULL; + int nBuf; + + if (sqliterc == NULL) { + home_dir = find_home_dir(); + if( home_dir==0 ){ + fprintf(stderr,"%s: cannot locate your home directory!\n", Argv0); + return; + } + nBuf = strlen(home_dir) + 16; + zBuf = malloc( nBuf ); + if( zBuf==0 ){ + fprintf(stderr,"%s: out of memory!\n", Argv0); + exit(1); + } + sqlite3_snprintf(nBuf, zBuf,"%s/.sqliterc",home_dir); + free(home_dir); + sqliterc = (const char*)zBuf; + } + in = fopen(sqliterc,"rb"); + if( in ){ + if( stdin_is_interactive ){ + printf("-- Loading resources from %s\n",sqliterc); + } + process_input(p,in); + fclose(in); + } + free(zBuf); + return; +} + +/* +** Show available command line options +*/ +static const char zOptions[] = + " -init filename read/process named file\n" + " -echo print commands before execution\n" + " -[no]header turn headers on or off\n" + " -bail stop after hitting an error\n" + " -interactive force interactive I/O\n" + " -batch force batch I/O\n" + " -column set output mode to 'column'\n" + " -csv set output mode to 'csv'\n" + " -html set output mode to HTML\n" + " -line set output mode to 'line'\n" + " -list set output mode to 'list'\n" + " -separator 'x' set output field separator (|)\n" + " -nullvalue 'text' set text string for NULL values\n" + " -version show SQLite version\n" +; +static void usage(int showDetail){ + fprintf(stderr, + "Usage: %s [OPTIONS] FILENAME [SQL]\n" + "FILENAME is the name of an SQLite database. A new database is created\n" + "if the file does not previously exist.\n", Argv0); + if( showDetail ){ + fprintf(stderr, "OPTIONS include:\n%s", zOptions); + }else{ + fprintf(stderr, "Use the -help option for additional information\n"); + } + exit(1); +} + +/* +** Initialize the state information in data +*/ +static void main_init(struct callback_data *data) { + memset(data, 0, sizeof(*data)); + data->mode = MODE_List; + memcpy(data->separator,"|", 2); + data->showHeader = 0; + sqlite3_snprintf(sizeof(mainPrompt), mainPrompt,"sqlite> "); + sqlite3_snprintf(sizeof(continuePrompt), continuePrompt," ...> "); +} + +int main(int argc, char **argv){ + char *zErrMsg = 0; + struct callback_data data; + const char *zInitFile = 0; + char *zFirstCmd = 0; + int i; + int rc = 0; + + Argv0 = argv[0]; + main_init(&data); + stdin_is_interactive = isatty(0); + + /* Make sure we have a valid signal handler early, before anything + ** else is done. + */ +#ifdef SIGINT + signal(SIGINT, interrupt_handler); +#endif + + /* Do an initial pass through the command-line argument to locate + ** the name of the database file, the name of the initialization file, + ** and the first command to execute. + */ + for(i=1; i /* Needed for the definition of va_list */ + +/* +** Make sure we can call this stuff from C++. +*/ +#ifdef __cplusplus +extern "C" { +#endif + + +/* +** Add the ability to override 'extern' +*/ +#ifndef SQLITE_EXTERN +# define SQLITE_EXTERN extern +#endif + +/* +** Make sure these symbols where not defined by some previous header +** file. +*/ +#ifdef SQLITE_VERSION +# undef SQLITE_VERSION +#endif +#ifdef SQLITE_VERSION_NUMBER +# undef SQLITE_VERSION_NUMBER +#endif + +/* +** CAPI3REF: Compile-Time Library Version Numbers {F10010} +** +** The SQLITE_VERSION and SQLITE_VERSION_NUMBER #defines in +** the sqlite3.h file specify the version of SQLite with which +** that header file is associated. +** +** The "version" of SQLite is a string of the form "X.Y.Z". 
+** The phrase "alpha" or "beta" might be appended after the Z. +** The X value is major version number always 3 in SQLite3. +** The X value only changes when backwards compatibility is +** broken and we intend to never break +** backwards compatibility. The Y value is the minor version +** number and only changes when +** there are major feature enhancements that are forwards compatible +** but not backwards compatible. The Z value is release number +** and is incremented with +** each release but resets back to 0 when Y is incremented. +** +** See also: [sqlite3_libversion()] and [sqlite3_libversion_number()]. +** +** INVARIANTS: +** +** {F10011} The SQLITE_VERSION #define in the sqlite3.h header file +** evaluates to a string literal that is the SQLite version +** with which the header file is associated. +** +** {F10014} The SQLITE_VERSION_NUMBER #define resolves to an integer +** with the value (X*1000000 + Y*1000 + Z) where X, Y, and +** Z are the major version, minor version, and release number. +*/ +#define SQLITE_VERSION "3.5.7" +#define SQLITE_VERSION_NUMBER 3005007 + +/* +** CAPI3REF: Run-Time Library Version Numbers {F10020} +** KEYWORDS: sqlite3_version +** +** These features provide the same information as the [SQLITE_VERSION] +** and [SQLITE_VERSION_NUMBER] #defines in the header, but are associated +** with the library instead of the header file. Cautious programmers might +** include a check in their application to verify that +** sqlite3_libversion_number() always returns the value +** [SQLITE_VERSION_NUMBER]. +** +** The sqlite3_libversion() function returns the same information as is +** in the sqlite3_version[] string constant. The function is provided +** for use in DLLs since DLL users usually do not have direct access to string +** constants within the DLL. +** +** INVARIANTS: +** +** {F10021} The [sqlite3_libversion_number()] interface returns an integer +** equal to [SQLITE_VERSION_NUMBER]. +** +** {F10022} The [sqlite3_version] string constant contains the text of the +** [SQLITE_VERSION] string. +** +** {F10023} The [sqlite3_libversion()] function returns +** a pointer to the [sqlite3_version] string constant. +*/ +SQLITE_EXTERN const char sqlite3_version[]; +const char *sqlite3_libversion(void); +int sqlite3_libversion_number(void); + +/* +** CAPI3REF: Test To See If The Library Is Threadsafe {F10100} +** +** SQLite can be compiled with or without mutexes. When +** the SQLITE_THREADSAFE C preprocessor macro is true, mutexes +** are enabled and SQLite is threadsafe. When that macro is false, +** the mutexes are omitted. Without the mutexes, it is not safe +** to use SQLite from more than one thread. +** +** There is a measurable performance penalty for enabling mutexes. +** So if speed is of utmost importance, it makes sense to disable +** the mutexes. But for maximum safety, mutexes should be enabled. +** The default behavior is for mutexes to be enabled. +** +** This interface can be used by a program to make sure that the +** version of SQLite that it is linking against was compiled with +** the desired setting of the SQLITE_THREADSAFE macro. +** +** INVARIANTS: +** +** {F10101} The [sqlite3_threadsafe()] function returns nonzero if +** SQLite was compiled with its mutexes enabled or zero +** if SQLite was compiled with mutexes disabled. 
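
To make the preceding guarantees concrete, an application might compare the compile-time and run-time version numbers and report the threading mode at start-up. The function name below is invented for the sketch; everything else is the API documented here.

#include <stdio.h>
#include "sqlite3.h"

/* Report the versions and the threadsafe setting, and warn if the
** header and the linked library disagree. */
static void report_sqlite_build(void){
  printf("header:  %s (%d)\n", SQLITE_VERSION, SQLITE_VERSION_NUMBER);
  printf("library: %s (%d)\n", sqlite3_libversion(),
         sqlite3_libversion_number());
  if( sqlite3_libversion_number()!=SQLITE_VERSION_NUMBER ){
    fprintf(stderr, "warning: header/library version mismatch\n");
  }
  printf("threadsafe: %d\n", sqlite3_threadsafe());
}
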
+*/ +int sqlite3_threadsafe(void); + +/* +** CAPI3REF: Database Connection Handle {F12000} +** KEYWORDS: {database connection} +** +** Each open SQLite database is represented by pointer to an instance of the +** opaque structure named "sqlite3". It is useful to think of an sqlite3 +** pointer as an object. The [sqlite3_open()], [sqlite3_open16()], and +** [sqlite3_open_v2()] interfaces are its constructors +** and [sqlite3_close()] is its destructor. There are many other interfaces +** (such as [sqlite3_prepare_v2()], [sqlite3_create_function()], and +** [sqlite3_busy_timeout()] to name but three) that are methods on this +** object. +*/ +typedef struct sqlite3 sqlite3; + + +/* +** CAPI3REF: 64-Bit Integer Types {F10200} +** KEYWORDS: sqlite_int64 sqlite_uint64 +** +** Because there is no cross-platform way to specify 64-bit integer types +** SQLite includes typedefs for 64-bit signed and unsigned integers. +** +** The sqlite3_int64 and sqlite3_uint64 are the preferred type +** definitions. The sqlite_int64 and sqlite_uint64 types are +** supported for backwards compatibility only. +** +** INVARIANTS: +** +** {F10201} The [sqlite_int64] and [sqlite3_int64] types specify a +** 64-bit signed integer. +** +** {F10202} The [sqlite_uint64] and [sqlite3_uint64] types specify +** a 64-bit unsigned integer. +*/ +#ifdef SQLITE_INT64_TYPE + typedef SQLITE_INT64_TYPE sqlite_int64; + typedef unsigned SQLITE_INT64_TYPE sqlite_uint64; +#elif defined(_MSC_VER) || defined(__BORLANDC__) + typedef __int64 sqlite_int64; + typedef unsigned __int64 sqlite_uint64; +#else + typedef long long int sqlite_int64; + typedef unsigned long long int sqlite_uint64; +#endif +typedef sqlite_int64 sqlite3_int64; +typedef sqlite_uint64 sqlite3_uint64; + +/* +** If compiling for a processor that lacks floating point support, +** substitute integer for floating-point +*/ +#ifdef SQLITE_OMIT_FLOATING_POINT +# define double sqlite3_int64 +#endif + +/* +** CAPI3REF: Closing A Database Connection {F12010} +** +** This routine is the destructor for the [sqlite3] object. +** +** Applications should [sqlite3_finalize | finalize] all +** [prepared statements] and +** [sqlite3_blob_close | close] all [sqlite3_blob | BLOBs] +** associated with the [sqlite3] object prior +** to attempting to close the [sqlite3] object. +** +** What happens to pending transactions? Are they +** rolled back, or abandoned? +** +** INVARIANTS: +** +** {F12011} The [sqlite3_close()] interface destroys an [sqlite3] object +** allocated by a prior call to [sqlite3_open()], +** [sqlite3_open16()], or [sqlite3_open_v2()]. +** +** {F12012} The [sqlite3_close()] function releases all memory used by the +** connection and closes all open files. +** +** {F12013} If the database connection contains +** [prepared statements] that have not been +** finalized by [sqlite3_finalize()], then [sqlite3_close()] +** returns [SQLITE_BUSY] and leaves the connection open. +** +** {F12014} Giving sqlite3_close() a NULL pointer is a harmless no-op. +** +** LIMITATIONS: +** +** {U12015} The parameter to [sqlite3_close()] must be an [sqlite3] object +** pointer previously obtained from [sqlite3_open()] or the +** equivalent, or NULL. +** +** {U12016} The parameter to [sqlite3_close()] must not have been previously +** closed. +*/ +int sqlite3_close(sqlite3 *); + +/* +** The type for a callback function. +** This is legacy and deprecated. It is included for historical +** compatibility and is not documented. 
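
A minimal sketch of the shutdown order described above: finalize any outstanding prepared statement (the sqlite3_stmt type is introduced later in this header) before closing, so that sqlite3_close() does not fail with SQLITE_BUSY. The helper name is invented.

#include "sqlite3.h"

/* Finalize an outstanding statement (if any) before closing, so that
** sqlite3_close() does not return SQLITE_BUSY. */
static int shutdown_db(sqlite3 *db, sqlite3_stmt *pStmt){
  if( pStmt ) sqlite3_finalize(pStmt);  /* release the prepared statement */
  return sqlite3_close(db);             /* destroy the connection */
}
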
+*/ +typedef int (*sqlite3_callback)(void*,int,char**, char**); + +/* +** CAPI3REF: One-Step Query Execution Interface {F12100} +** +** The sqlite3_exec() interface is a convenient way of running +** one or more SQL statements without a lot of C code. The +** SQL statements are passed in as the second parameter to +** sqlite3_exec(). The statements are evaluated one by one +** until either an error or an interrupt is encountered or +** until they are all done. The 3rd parameter is an optional +** callback that is invoked once for each row of any query results +** produced by the SQL statements. The 5th parameter tells where +** to write any error messages. +** +** The sqlite3_exec() interface is implemented in terms of +** [sqlite3_prepare_v2()], [sqlite3_step()], and [sqlite3_finalize()]. +** The sqlite3_exec() routine does nothing that cannot be done +** by [sqlite3_prepare_v2()], [sqlite3_step()], and [sqlite3_finalize()]. +** The sqlite3_exec() is just a convenient wrapper. +** +** INVARIANTS: +** +** {F12101} The [sqlite3_exec()] interface evaluates zero or more UTF-8 +** encoded, semicolon-separated, SQL statements in the +** zero-terminated string of its 2nd parameter within the +** context of the [sqlite3] object given in the 1st parameter. +** +** {F12104} The return value of [sqlite3_exec()] is SQLITE_OK if all +** SQL statements run successfully. +** +** {F12105} The return value of [sqlite3_exec()] is an appropriate +** non-zero error code if any SQL statement fails. +** +** {F12107} If one or more of the SQL statements handed to [sqlite3_exec()] +** return results and the 3rd parameter is not NULL, then +** the callback function specified by the 3rd parameter is +** invoked once for each row of result. +** +** {F12110} If the callback returns a non-zero value then [sqlite3_exec()] +** will aborted the SQL statement it is currently evaluating, +** skip all subsequent SQL statements, and return [SQLITE_ABORT]. +** What happens to *errmsg here? Does the result code for +** sqlite3_errcode() get set? +** +** {F12113} The [sqlite3_exec()] routine will pass its 4th parameter through +** as the 1st parameter of the callback. +** +** {F12116} The [sqlite3_exec()] routine sets the 2nd parameter of its +** callback to be the number of columns in the current row of +** result. +** +** {F12119} The [sqlite3_exec()] routine sets the 3rd parameter of its +** callback to be an array of pointers to strings holding the +** values for each column in the current result set row as +** obtained from [sqlite3_column_text()]. +** +** {F12122} The [sqlite3_exec()] routine sets the 4th parameter of its +** callback to be an array of pointers to strings holding the +** names of result columns as obtained from [sqlite3_column_name()]. +** +** {F12125} If the 3rd parameter to [sqlite3_exec()] is NULL then +** [sqlite3_exec()] never invokes a callback. All query +** results are silently discarded. +** +** {F12128} If an error occurs while parsing or evaluating any of the SQL +** statements handed to [sqlite3_exec()] then [sqlite3_exec()] will +** return an [error code] other than [SQLITE_OK]. +** +** {F12131} If an error occurs while parsing or evaluating any of the SQL +** handed to [sqlite3_exec()] and if the 5th parameter (errmsg) +** to [sqlite3_exec()] is not NULL, then an error message is +** allocated using the equivalent of [sqlite3_mprintf()] and +** *errmsg is made to point to that message. 
+** +** {F12134} The [sqlite3_exec()] routine does not change the value of +** *errmsg if errmsg is NULL or if there are no errors. +** +** {F12137} The [sqlite3_exec()] function sets the error code and message +** accessible via [sqlite3_errcode()], [sqlite3_errmsg()], and +** [sqlite3_errmsg16()]. +** +** LIMITATIONS: +** +** {U12141} The first parameter to [sqlite3_exec()] must be an valid and open +** [database connection]. +** +** {U12142} The database connection must not be closed while +** [sqlite3_exec()] is running. +** +** {U12143} The calling function is should use [sqlite3_free()] to free +** the memory that *errmsg is left pointing at once the error +** message is no longer needed. +** +** {U12145} The SQL statement text in the 2nd parameter to [sqlite3_exec()] +** must remain unchanged while [sqlite3_exec()] is running. +*/ +int sqlite3_exec( + sqlite3*, /* An open database */ + const char *sql, /* SQL to be evaluted */ + int (*callback)(void*,int,char**,char**), /* Callback function */ + void *, /* 1st argument to callback */ + char **errmsg /* Error msg written here */ +); + +/* +** CAPI3REF: Result Codes {F10210} +** KEYWORDS: SQLITE_OK {error code} {error codes} +** +** Many SQLite functions return an integer result code from the set shown +** here in order to indicates success or failure. +** +** See also: [SQLITE_IOERR_READ | extended result codes] +*/ +#define SQLITE_OK 0 /* Successful result */ +/* beginning-of-error-codes */ +#define SQLITE_ERROR 1 /* SQL error or missing database */ +#define SQLITE_INTERNAL 2 /* Internal logic error in SQLite */ +#define SQLITE_PERM 3 /* Access permission denied */ +#define SQLITE_ABORT 4 /* Callback routine requested an abort */ +#define SQLITE_BUSY 5 /* The database file is locked */ +#define SQLITE_LOCKED 6 /* A table in the database is locked */ +#define SQLITE_NOMEM 7 /* A malloc() failed */ +#define SQLITE_READONLY 8 /* Attempt to write a readonly database */ +#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite3_interrupt()*/ +#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */ +#define SQLITE_CORRUPT 11 /* The database disk image is malformed */ +#define SQLITE_NOTFOUND 12 /* NOT USED. Table or record not found */ +#define SQLITE_FULL 13 /* Insertion failed because database is full */ +#define SQLITE_CANTOPEN 14 /* Unable to open the database file */ +#define SQLITE_PROTOCOL 15 /* NOT USED. 
Database lock protocol error */ +#define SQLITE_EMPTY 16 /* Database is empty */ +#define SQLITE_SCHEMA 17 /* The database schema changed */ +#define SQLITE_TOOBIG 18 /* String or BLOB exceeds size limit */ +#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */ +#define SQLITE_MISMATCH 20 /* Data type mismatch */ +#define SQLITE_MISUSE 21 /* Library used incorrectly */ +#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */ +#define SQLITE_AUTH 23 /* Authorization denied */ +#define SQLITE_FORMAT 24 /* Auxiliary database format error */ +#define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */ +#define SQLITE_NOTADB 26 /* File opened that is not a database file */ +#define SQLITE_ROW 100 /* sqlite3_step() has another row ready */ +#define SQLITE_DONE 101 /* sqlite3_step() has finished executing */ +/* end-of-error-codes */ + +/* +** CAPI3REF: Extended Result Codes {F10220} +** KEYWORDS: {extended error code} {extended error codes} +** KEYWORDS: {extended result codes} +** +** In its default configuration, SQLite API routines return one of 26 integer +** [SQLITE_OK | result codes]. However, experience has shown that +** many of these result codes are too course-grained. They do not provide as +** much information about problems as programmers might like. In an effort to +** address this, newer versions of SQLite (version 3.3.8 and later) include +** support for additional result codes that provide more detailed information +** about errors. The extended result codes are enabled or disabled +** for each database connection using the [sqlite3_extended_result_codes()] +** API. +** +** Some of the available extended result codes are listed here. +** One may expect the number of extended result codes will be expand +** over time. Software that uses extended result codes should expect +** to see new result codes in future releases of SQLite. +** +** The SQLITE_OK result code will never be extended. It will always +** be exactly zero. +** +** INVARIANTS: +** +** {F10223} The symbolic name for an extended result code always contains +** a related primary result code as a prefix. +** +** {F10224} Primary result code names contain a single "_" character. +** +** {F10225} Extended result code names contain two or more "_" characters. +** +** {F10226} The numeric value of an extended result code contains the +** numeric value of its corresponding primary result code in +** its least significant 8 bits. +*/ +#define SQLITE_IOERR_READ (SQLITE_IOERR | (1<<8)) +#define SQLITE_IOERR_SHORT_READ (SQLITE_IOERR | (2<<8)) +#define SQLITE_IOERR_WRITE (SQLITE_IOERR | (3<<8)) +#define SQLITE_IOERR_FSYNC (SQLITE_IOERR | (4<<8)) +#define SQLITE_IOERR_DIR_FSYNC (SQLITE_IOERR | (5<<8)) +#define SQLITE_IOERR_TRUNCATE (SQLITE_IOERR | (6<<8)) +#define SQLITE_IOERR_FSTAT (SQLITE_IOERR | (7<<8)) +#define SQLITE_IOERR_UNLOCK (SQLITE_IOERR | (8<<8)) +#define SQLITE_IOERR_RDLOCK (SQLITE_IOERR | (9<<8)) +#define SQLITE_IOERR_DELETE (SQLITE_IOERR | (10<<8)) +#define SQLITE_IOERR_BLOCKED (SQLITE_IOERR | (11<<8)) +#define SQLITE_IOERR_NOMEM (SQLITE_IOERR | (12<<8)) + +/* +** CAPI3REF: Flags For File Open Operations {F10230} +** +** These bit values are intended for use in the +** 3rd parameter to the [sqlite3_open_v2()] interface and +** in the 4th parameter to the xOpen method of the +** [sqlite3_vfs] object. 
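
A quick illustration of how the flag bits defined just below combine in the third argument to sqlite3_open_v2(); the file name, helper name, and error handling are placeholders, and 0 as the last argument selects the default VFS.

#include <stdio.h>
#include "sqlite3.h"

/* Open (creating if necessary) a database for reading and writing.
** "example.db" is a placeholder name. */
static sqlite3 *open_example(void){
  sqlite3 *db = 0;
  int rc = sqlite3_open_v2("example.db", &db,
                           SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "cannot open: %s\n", sqlite3_errmsg(db));
    sqlite3_close(db);
    return 0;
  }
  return db;
}
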
+*/ +#define SQLITE_OPEN_READONLY 0x00000001 +#define SQLITE_OPEN_READWRITE 0x00000002 +#define SQLITE_OPEN_CREATE 0x00000004 +#define SQLITE_OPEN_DELETEONCLOSE 0x00000008 +#define SQLITE_OPEN_EXCLUSIVE 0x00000010 +#define SQLITE_OPEN_MAIN_DB 0x00000100 +#define SQLITE_OPEN_TEMP_DB 0x00000200 +#define SQLITE_OPEN_TRANSIENT_DB 0x00000400 +#define SQLITE_OPEN_MAIN_JOURNAL 0x00000800 +#define SQLITE_OPEN_TEMP_JOURNAL 0x00001000 +#define SQLITE_OPEN_SUBJOURNAL 0x00002000 +#define SQLITE_OPEN_MASTER_JOURNAL 0x00004000 + +/* +** CAPI3REF: Device Characteristics {F10240} +** +** The xDeviceCapabilities method of the [sqlite3_io_methods] +** object returns an integer which is a vector of the these +** bit values expressing I/O characteristics of the mass storage +** device that holds the file that the [sqlite3_io_methods] +** refers to. +** +** The SQLITE_IOCAP_ATOMIC property means that all writes of +** any size are atomic. The SQLITE_IOCAP_ATOMICnnn values +** mean that writes of blocks that are nnn bytes in size and +** are aligned to an address which is an integer multiple of +** nnn are atomic. The SQLITE_IOCAP_SAFE_APPEND value means +** that when data is appended to a file, the data is appended +** first then the size of the file is extended, never the other +** way around. The SQLITE_IOCAP_SEQUENTIAL property means that +** information is written to disk in the same order as calls +** to xWrite(). +*/ +#define SQLITE_IOCAP_ATOMIC 0x00000001 +#define SQLITE_IOCAP_ATOMIC512 0x00000002 +#define SQLITE_IOCAP_ATOMIC1K 0x00000004 +#define SQLITE_IOCAP_ATOMIC2K 0x00000008 +#define SQLITE_IOCAP_ATOMIC4K 0x00000010 +#define SQLITE_IOCAP_ATOMIC8K 0x00000020 +#define SQLITE_IOCAP_ATOMIC16K 0x00000040 +#define SQLITE_IOCAP_ATOMIC32K 0x00000080 +#define SQLITE_IOCAP_ATOMIC64K 0x00000100 +#define SQLITE_IOCAP_SAFE_APPEND 0x00000200 +#define SQLITE_IOCAP_SEQUENTIAL 0x00000400 + +/* +** CAPI3REF: File Locking Levels {F10250} +** +** SQLite uses one of these integer values as the second +** argument to calls it makes to the xLock() and xUnlock() methods +** of an [sqlite3_io_methods] object. +*/ +#define SQLITE_LOCK_NONE 0 +#define SQLITE_LOCK_SHARED 1 +#define SQLITE_LOCK_RESERVED 2 +#define SQLITE_LOCK_PENDING 3 +#define SQLITE_LOCK_EXCLUSIVE 4 + +/* +** CAPI3REF: Synchronization Type Flags {F10260} +** +** When SQLite invokes the xSync() method of an +** [sqlite3_io_methods] object it uses a combination of +** these integer values as the second argument. +** +** When the SQLITE_SYNC_DATAONLY flag is used, it means that the +** sync operation only needs to flush data to mass storage. Inode +** information need not be flushed. The SQLITE_SYNC_NORMAL flag means +** to use normal fsync() semantics. The SQLITE_SYNC_FULL flag means +** to use Mac OS-X style fullsync instead of fsync(). +*/ +#define SQLITE_SYNC_NORMAL 0x00002 +#define SQLITE_SYNC_FULL 0x00003 +#define SQLITE_SYNC_DATAONLY 0x00010 + + +/* +** CAPI3REF: OS Interface Open File Handle {F11110} +** +** An [sqlite3_file] object represents an open file in the OS +** interface layer. Individual OS interface implementations will +** want to subclass this object by appending additional fields +** for their own use. The pMethods entry is a pointer to an +** [sqlite3_io_methods] object that defines methods for performing +** I/O operations on the open file. 
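
A hypothetical VFS might append its own state after the base object along these lines; the DemoFile name and the stdio handle are assumptions made only for the sketch.

#include <stdio.h>
#include "sqlite3.h"

/* A subclass of sqlite3_file for an imaginary stdio-based VFS.  The
** base object must come first so a DemoFile* can be handed to SQLite
** wherever an sqlite3_file* is expected. */
typedef struct DemoFile DemoFile;
struct DemoFile {
  sqlite3_file base;   /* base "class"; its pMethods drives the I/O */
  FILE *fp;            /* the underlying stdio stream for this sketch */
};
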
+*/ +typedef struct sqlite3_file sqlite3_file; +struct sqlite3_file { + const struct sqlite3_io_methods *pMethods; /* Methods for an open file */ +}; + +/* +** CAPI3REF: OS Interface File Virtual Methods Object {F11120} +** +** Every file opened by the [sqlite3_vfs] xOpen method contains a pointer to +** an instance of this object. This object defines the +** methods used to perform various operations against the open file. +** +** The flags argument to xSync may be one of [SQLITE_SYNC_NORMAL] or +** [SQLITE_SYNC_FULL]. The first choice is the normal fsync(). +* The second choice is an +** OS-X style fullsync. The SQLITE_SYNC_DATA flag may be ORed in to +** indicate that only the data of the file and not its inode needs to be +** synced. +** +** The integer values to xLock() and xUnlock() are one of +**
+**    - [SQLITE_LOCK_NONE],
+**    - [SQLITE_LOCK_SHARED],
+**    - [SQLITE_LOCK_RESERVED],
+**    - [SQLITE_LOCK_PENDING], or
+**    - [SQLITE_LOCK_EXCLUSIVE].
                +** xLock() increases the lock. xUnlock() decreases the lock. +** The xCheckReservedLock() method looks +** to see if any database connection, either in this +** process or in some other process, is holding an RESERVED, +** PENDING, or EXCLUSIVE lock on the file. It returns true +** if such a lock exists and false if not. +** +** The xFileControl() method is a generic interface that allows custom +** VFS implementations to directly control an open file using the +** [sqlite3_file_control()] interface. The second "op" argument +** is an integer opcode. The third +** argument is a generic pointer which is intended to be a pointer +** to a structure that may contain arguments or space in which to +** write return values. Potential uses for xFileControl() might be +** functions to enable blocking locks with timeouts, to change the +** locking strategy (for example to use dot-file locks), to inquire +** about the status of a lock, or to break stale locks. The SQLite +** core reserves opcodes less than 100 for its own use. +** A [SQLITE_FCNTL_LOCKSTATE | list of opcodes] less than 100 is available. +** Applications that define a custom xFileControl method should use opcodes +** greater than 100 to avoid conflicts. +** +** The xSectorSize() method returns the sector size of the +** device that underlies the file. The sector size is the +** minimum write that can be performed without disturbing +** other bytes in the file. The xDeviceCharacteristics() +** method returns a bit vector describing behaviors of the +** underlying device: +** +**
+**    - [SQLITE_IOCAP_ATOMIC]
+**    - [SQLITE_IOCAP_ATOMIC512]
+**    - [SQLITE_IOCAP_ATOMIC1K]
+**    - [SQLITE_IOCAP_ATOMIC2K]
+**    - [SQLITE_IOCAP_ATOMIC4K]
+**    - [SQLITE_IOCAP_ATOMIC8K]
+**    - [SQLITE_IOCAP_ATOMIC16K]
+**    - [SQLITE_IOCAP_ATOMIC32K]
+**    - [SQLITE_IOCAP_ATOMIC64K]
+**    - [SQLITE_IOCAP_SAFE_APPEND]
+**    - [SQLITE_IOCAP_SEQUENTIAL]
                +** +** The SQLITE_IOCAP_ATOMIC property means that all writes of +** any size are atomic. The SQLITE_IOCAP_ATOMICnnn values +** mean that writes of blocks that are nnn bytes in size and +** are aligned to an address which is an integer multiple of +** nnn are atomic. The SQLITE_IOCAP_SAFE_APPEND value means +** that when data is appended to a file, the data is appended +** first then the size of the file is extended, never the other +** way around. The SQLITE_IOCAP_SEQUENTIAL property means that +** information is written to disk in the same order as calls +** to xWrite(). +*/ +typedef struct sqlite3_io_methods sqlite3_io_methods; +struct sqlite3_io_methods { + int iVersion; + int (*xClose)(sqlite3_file*); + int (*xRead)(sqlite3_file*, void*, int iAmt, sqlite3_int64 iOfst); + int (*xWrite)(sqlite3_file*, const void*, int iAmt, sqlite3_int64 iOfst); + int (*xTruncate)(sqlite3_file*, sqlite3_int64 size); + int (*xSync)(sqlite3_file*, int flags); + int (*xFileSize)(sqlite3_file*, sqlite3_int64 *pSize); + int (*xLock)(sqlite3_file*, int); + int (*xUnlock)(sqlite3_file*, int); + int (*xCheckReservedLock)(sqlite3_file*); + int (*xFileControl)(sqlite3_file*, int op, void *pArg); + int (*xSectorSize)(sqlite3_file*); + int (*xDeviceCharacteristics)(sqlite3_file*); + /* Additional methods may be added in future releases */ +}; + +/* +** CAPI3REF: Standard File Control Opcodes {F11310} +** +** These integer constants are opcodes for the xFileControl method +** of the [sqlite3_io_methods] object and to the [sqlite3_file_control()] +** interface. +** +** The [SQLITE_FCNTL_LOCKSTATE] opcode is used for debugging. This +** opcode causes the xFileControl method to write the current state of +** the lock (one of [SQLITE_LOCK_NONE], [SQLITE_LOCK_SHARED], +** [SQLITE_LOCK_RESERVED], [SQLITE_LOCK_PENDING], or [SQLITE_LOCK_EXCLUSIVE]) +** into an integer that the pArg argument points to. This capability +** is used during testing and only needs to be supported when SQLITE_TEST +** is defined. +*/ +#define SQLITE_FCNTL_LOCKSTATE 1 + +/* +** CAPI3REF: Mutex Handle {F17110} +** +** The mutex module within SQLite defines [sqlite3_mutex] to be an +** abstract type for a mutex object. The SQLite core never looks +** at the internal representation of an [sqlite3_mutex]. It only +** deals with pointers to the [sqlite3_mutex] object. +** +** Mutexes are created using [sqlite3_mutex_alloc()]. +*/ +typedef struct sqlite3_mutex sqlite3_mutex; + +/* +** CAPI3REF: OS Interface Object {F11140} +** +** An instance of this object defines the interface between the +** SQLite core and the underlying operating system. The "vfs" +** in the name of the object stands for "virtual file system". +** +** The iVersion field is initially 1 but may be larger for future +** versions of SQLite. Additional fields may be appended to this +** object when the iVersion value is increased. +** +** The szOsFile field is the size of the subclassed [sqlite3_file] +** structure used by this VFS. mxPathname is the maximum length of +** a pathname in this VFS. +** +** Registered sqlite3_vfs objects are kept on a linked list formed by +** the pNext pointer. The [sqlite3_vfs_register()] +** and [sqlite3_vfs_unregister()] interfaces manage this list +** in a thread-safe way. The [sqlite3_vfs_find()] interface +** searches the list. +** +** The pNext field is the only field in the sqlite3_vfs +** structure that SQLite will ever modify. SQLite will only access +** or modify this field while holding a particular static mutex. 
+** The application should never modify anything within the sqlite3_vfs +** object once the object has been registered. +** +** The zName field holds the name of the VFS module. The name must +** be unique across all VFS modules. +** +** {F11141} SQLite will guarantee that the zFilename string passed to +** xOpen() is a full pathname as generated by xFullPathname() and +** that the string will be valid and unchanged until xClose() is +** called. {END} So the [sqlite3_file] can store a pointer to the +** filename if it needs to remember the filename for some reason. +** +** {F11142} The flags argument to xOpen() includes all bits set in +** the flags argument to [sqlite3_open_v2()]. Or if [sqlite3_open()] +** or [sqlite3_open16()] is used, then flags includes at least +** [SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]. {END} +** If xOpen() opens a file read-only then it sets *pOutFlags to +** include [SQLITE_OPEN_READONLY]. Other bits in *pOutFlags may be +** set. +** +** {F11143} SQLite will also add one of the following flags to the xOpen() +** call, depending on the object being opened: +** +**
+**    - [SQLITE_OPEN_MAIN_DB]
+**    - [SQLITE_OPEN_MAIN_JOURNAL]
+**    - [SQLITE_OPEN_TEMP_DB]
+**    - [SQLITE_OPEN_TEMP_JOURNAL]
+**    - [SQLITE_OPEN_TRANSIENT_DB]
+**    - [SQLITE_OPEN_SUBJOURNAL]
+**    - [SQLITE_OPEN_MASTER_JOURNAL]
                {END} +** +** The file I/O implementation can use the object type flags to +** changes the way it deals with files. For example, an application +** that does not care about crash recovery or rollback might make +** the open of a journal file a no-op. Writes to this journal would +** also be no-ops, and any attempt to read the journal would return +** SQLITE_IOERR. Or the implementation might recognize that a database +** file will be doing page-aligned sector reads and writes in a random +** order and set up its I/O subsystem accordingly. +** +** SQLite might also add one of the following flags to the xOpen +** method: +** +**
+**    - [SQLITE_OPEN_DELETEONCLOSE]
+**    - [SQLITE_OPEN_EXCLUSIVE]
                +** +** {F11145} The [SQLITE_OPEN_DELETEONCLOSE] flag means the file should be +** deleted when it is closed. {F11146} The [SQLITE_OPEN_DELETEONCLOSE] +** will be set for TEMP databases, journals and for subjournals. +** {F11147} The [SQLITE_OPEN_EXCLUSIVE] flag means the file should be opened +** for exclusive access. This flag is set for all files except +** for the main database file. {END} +** +** {F11148} At least szOsFile bytes of memory are allocated by SQLite +** to hold the [sqlite3_file] structure passed as the third +** argument to xOpen. {END} The xOpen method does not have to +** allocate the structure; it should just fill it in. +** +** {F11149} The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] +** to test for the existance of a file, +** or [SQLITE_ACCESS_READWRITE] to test to see +** if a file is readable and writable, or [SQLITE_ACCESS_READ] +** to test to see if a file is at least readable. {END} The file can be a +** directory. +** +** {F11150} SQLite will always allocate at least mxPathname+1 bytes for +** the output buffers for xGetTempname and xFullPathname. {F11151} The exact +** size of the output buffer is also passed as a parameter to both +** methods. {END} If the output buffer is not large enough, SQLITE_CANTOPEN +** should be returned. As this is handled as a fatal error by SQLite, +** vfs implementations should endeavor to prevent this by setting +** mxPathname to a sufficiently large value. +** +** The xRandomness(), xSleep(), and xCurrentTime() interfaces +** are not strictly a part of the filesystem, but they are +** included in the VFS structure for completeness. +** The xRandomness() function attempts to return nBytes bytes +** of good-quality randomness into zOut. The return value is +** the actual number of bytes of randomness obtained. The +** xSleep() method causes the calling thread to sleep for at +** least the number of microseconds given. The xCurrentTime() +** method returns a Julian Day Number for the current date and +** time. +*/ +typedef struct sqlite3_vfs sqlite3_vfs; +struct sqlite3_vfs { + int iVersion; /* Structure version number */ + int szOsFile; /* Size of subclassed sqlite3_file */ + int mxPathname; /* Maximum file pathname length */ + sqlite3_vfs *pNext; /* Next registered VFS */ + const char *zName; /* Name of this virtual file system */ + void *pAppData; /* Pointer to application-specific data */ + int (*xOpen)(sqlite3_vfs*, const char *zName, sqlite3_file*, + int flags, int *pOutFlags); + int (*xDelete)(sqlite3_vfs*, const char *zName, int syncDir); + int (*xAccess)(sqlite3_vfs*, const char *zName, int flags); + int (*xGetTempname)(sqlite3_vfs*, int nOut, char *zOut); + int (*xFullPathname)(sqlite3_vfs*, const char *zName, int nOut, char *zOut); + void *(*xDlOpen)(sqlite3_vfs*, const char *zFilename); + void (*xDlError)(sqlite3_vfs*, int nByte, char *zErrMsg); + void *(*xDlSym)(sqlite3_vfs*,void*, const char *zSymbol); + void (*xDlClose)(sqlite3_vfs*, void*); + int (*xRandomness)(sqlite3_vfs*, int nByte, char *zOut); + int (*xSleep)(sqlite3_vfs*, int microseconds); + int (*xCurrentTime)(sqlite3_vfs*, double*); + /* New fields may be appended in figure versions. The iVersion + ** value will increment whenever this happens. */ +}; + +/* +** CAPI3REF: Flags for the xAccess VFS method {F11190} +** +** {F11191} These integer constants can be used as the third parameter to +** the xAccess method of an [sqlite3_vfs] object. 
{END} They determine +** what kind of permissions the xAccess method is +** looking for. {F11192} With SQLITE_ACCESS_EXISTS, the xAccess method +** simply checks to see if the file exists. {F11193} With +** SQLITE_ACCESS_READWRITE, the xAccess method checks to see +** if the file is both readable and writable. {F11194} With +** SQLITE_ACCESS_READ the xAccess method +** checks to see if the file is readable. +*/ +#define SQLITE_ACCESS_EXISTS 0 +#define SQLITE_ACCESS_READWRITE 1 +#define SQLITE_ACCESS_READ 2 + +/* +** CAPI3REF: Enable Or Disable Extended Result Codes {F12200} +** +** The sqlite3_extended_result_codes() routine enables or disables the +** [SQLITE_IOERR_READ | extended result codes] feature of SQLite. +** The extended result codes are disabled by default for historical +** compatibility. +** +** INVARIANTS: +** +** {F12201} Each new [database connection] has the +** [extended result codes] feature +** disabled by default. +** +** {F12202} The [sqlite3_extended_result_codes(D,F)] interface will enable +** [extended result codes] for the +** [database connection] D if the F parameter +** is true, or disable them if F is false. +*/ +int sqlite3_extended_result_codes(sqlite3*, int onoff); + +/* +** CAPI3REF: Last Insert Rowid {F12220} +** +** Each entry in an SQLite table has a unique 64-bit signed +** integer key called the "rowid". The rowid is always available +** as an undeclared column named ROWID, OID, or _ROWID_ as long as those +** names are not also used by explicitly declared columns. If +** the table has a column of type INTEGER PRIMARY KEY then that column +** is another alias for the rowid. +** +** This routine returns the rowid of the most recent +** successful INSERT into the database from the database connection +** shown in the first argument. If no successful inserts +** have ever occurred on this database connection, zero is returned. +** +** If an INSERT occurs within a trigger, then the rowid of the +** inserted row is returned by this routine as long as the trigger +** is running. But once the trigger terminates, the value returned +** by this routine reverts to the last value inserted before the +** trigger fired. +** +** An INSERT that fails due to a constraint violation is not a +** successful insert and does not change the value returned by this +** routine. Thus INSERT OR FAIL, INSERT OR IGNORE, INSERT OR ROLLBACK, +** and INSERT OR ABORT make no changes to the return value of this +** routine when their insertion fails. When INSERT OR REPLACE +** encounters a constraint violation, it does not fail. The +** INSERT continues to completion after deleting rows that caused +** the constraint problem so INSERT OR REPLACE will always change +** the return value of this interface. +** +** For the purposes of this routine, an insert is considered to +** be successful even if it is subsequently rolled back. +** +** INVARIANTS: +** +** {F12221} The [sqlite3_last_insert_rowid()] function returns the +** rowid of the most recent successful insert done +** on the same database connection and within the same +** trigger context, or zero if there have +** been no qualifying inserts on that connection. +** +** {F12223} The [sqlite3_last_insert_rowid()] function returns +** same value when called from the same trigger context +** immediately before and after a ROLLBACK. 
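
A minimal sketch of reading back the rowid assigned by an INSERT; the table name, column name, and SQL text are invented for the example.

#include <stdio.h>
#include "sqlite3.h"

/* Insert a row and report the rowid it was given. */
static void insert_and_report(sqlite3 *db){
  char *zErr = 0;
  if( sqlite3_exec(db, "INSERT INTO people(name) VALUES('Alice')",
                   0, 0, &zErr)!=SQLITE_OK ){
    fprintf(stderr, "insert failed: %s\n", zErr);
    sqlite3_free(zErr);
    return;
  }
  printf("new rowid = %lld\n", (long long)sqlite3_last_insert_rowid(db));
}
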
+** +** LIMITATIONS: +** +** {U12232} If a separate thread does a new insert on the same +** database connection while the [sqlite3_last_insert_rowid()] +** function is running and thus changes the last insert rowid, +** then the value returned by [sqlite3_last_insert_rowid()] is +** unpredictable and might not equal either the old or the new +** last insert rowid. +*/ +sqlite3_int64 sqlite3_last_insert_rowid(sqlite3*); + +/* +** CAPI3REF: Count The Number Of Rows Modified {F12240} +** +** This function returns the number of database rows that were changed +** or inserted or deleted by the most recently completed SQL statement +** on the connection specified by the first parameter. Only +** changes that are directly specified by the INSERT, UPDATE, or +** DELETE statement are counted. Auxiliary changes caused by +** triggers are not counted. Use the [sqlite3_total_changes()] function +** to find the total number of changes including changes caused by triggers. +** +** A "row change" is a change to a single row of a single table +** caused by an INSERT, DELETE, or UPDATE statement. Rows that +** are changed as side effects of REPLACE constraint resolution, +** rollback, ABORT processing, DROP TABLE, or by any other +** mechanisms do not count as direct row changes. +** +** A "trigger context" is a scope of execution that begins and +** ends with the script of a trigger. Most SQL statements are +** evaluated outside of any trigger. This is the "top level" +** trigger context. If a trigger fires from the top level, a +** new trigger context is entered for the duration of that one +** trigger. Subtriggers create subcontexts for their duration. +** +** Calling [sqlite3_exec()] or [sqlite3_step()] recursively does +** not create a new trigger context. +** +** This function returns the number of direct row changes in the +** most recent INSERT, UPDATE, or DELETE statement within the same +** trigger context. +** +** So when called from the top level, this function returns the +** number of changes in the most recent INSERT, UPDATE, or DELETE +** that also occurred at the top level. +** Within the body of a trigger, the sqlite3_changes() interface +** can be called to find the number of +** changes in the most recently completed INSERT, UPDATE, or DELETE +** statement within the body of the same trigger. +** However, the number returned does not include in changes +** caused by subtriggers since they have their own context. +** +** SQLite implements the command "DELETE FROM table" without +** a WHERE clause by dropping and recreating the table. (This is much +** faster than going through and deleting individual elements from the +** table.) Because of this optimization, the deletions in +** "DELETE FROM table" are not row changes and will not be counted +** by the sqlite3_changes() or [sqlite3_total_changes()] functions. +** To get an accurate count of the number of rows deleted, use +** "DELETE FROM table WHERE 1" instead. +** +** INVARIANTS: +** +** {F12241} The [sqlite3_changes()] function returns the number of +** row changes caused by the most recent INSERT, UPDATE, +** or DELETE statement on the same database connection and +** within the same trigger context, or zero if there have +** not been any qualifying row changes. +** +** LIMITATIONS: +** +** {U12252} If a separate thread makes changes on the same database connection +** while [sqlite3_changes()] is running then the value returned +** is unpredictable and unmeaningful. 
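
For instance, the direct row-change count after an UPDATE can be read back as follows; the table and SQL text are again invented.

#include <stdio.h>
#include "sqlite3.h"

/* Run an UPDATE and report how many rows it changed directly. */
static void update_and_count(sqlite3 *db){
  char *zErr = 0;
  if( sqlite3_exec(db, "UPDATE people SET age=age+1", 0, 0, &zErr)
        !=SQLITE_OK ){
    fprintf(stderr, "update failed: %s\n", zErr);
    sqlite3_free(zErr);
    return;
  }
  printf("%d row(s) changed\n", sqlite3_changes(db));
}
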
+*/ +int sqlite3_changes(sqlite3*); + +/* +** CAPI3REF: Total Number Of Rows Modified {F12260} +*** +** This function returns the number of row changes caused +** by INSERT, UPDATE or DELETE statements since the database handle +** was opened. The count includes all changes from all trigger +** contexts. But the count does not include changes used to +** implement REPLACE constraints, do rollbacks or ABORT processing, +** or DROP table processing. +** The changes +** are counted as soon as the statement that makes them is completed +** (when the statement handle is passed to [sqlite3_reset()] or +** [sqlite3_finalize()]). +** +** SQLite implements the command "DELETE FROM table" without +** a WHERE clause by dropping and recreating the table. (This is much +** faster than going +** through and deleting individual elements from the table.) Because of +** this optimization, the change count for "DELETE FROM table" will be +** zero regardless of the number of elements that were originally in the +** table. To get an accurate count of the number of rows deleted, use +** "DELETE FROM table WHERE 1" instead. +** +** See also the [sqlite3_changes()] interface. +** +** INVARIANTS: +** +** {F12261} The [sqlite3_total_changes()] returns the total number +** of row changes caused by INSERT, UPDATE, and/or DELETE +** statements on the same [database connection], in any +** trigger context, since the database connection was +** created. +** +** LIMITATIONS: +** +** {U12264} If a separate thread makes changes on the same database connection +** while [sqlite3_total_changes()] is running then the value +** returned is unpredictable and unmeaningful. +*/ +int sqlite3_total_changes(sqlite3*); + +/* +** CAPI3REF: Interrupt A Long-Running Query {F12270} +** +** This function causes any pending database operation to abort and +** return at its earliest opportunity. This routine is typically +** called in response to a user action such as pressing "Cancel" +** or Ctrl-C where the user wants a long query operation to halt +** immediately. +** +** It is safe to call this routine from a thread different from the +** thread that is currently running the database operation. But it +** is not safe to call this routine with a database connection that +** is closed or might close before sqlite3_interrupt() returns. +** +** If an SQL is very nearly finished at the time when sqlite3_interrupt() +** is called, then it might not have an opportunity to be interrupted. +** It might continue to completion. +** An SQL operation that is interrupted will return +** [SQLITE_INTERRUPT]. If the interrupted SQL operation is an +** INSERT, UPDATE, or DELETE that is inside an explicit transaction, +** then the entire transaction will be rolled back automatically. +** A call to sqlite3_interrupt() has no effect on SQL statements +** that are started after sqlite3_interrupt() returns. +** +** INVARIANTS: +** +** {F12271} The [sqlite3_interrupt()] interface will force all running +** SQL statements associated with the same database connection +** to halt after processing at most one additional row of +** data. +** +** {F12272} Any SQL statement that is interrupted by [sqlite3_interrupt()] +** will return [SQLITE_INTERRUPT]. +** +** LIMITATIONS: +** +** {U12279} If the database connection closes while [sqlite3_interrupt()] +** is running then bad things will likely happen. 
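
A common pattern, much like the Ctrl-C handling wired up in the shell code earlier in this diff, is to call sqlite3_interrupt() from a SIGINT handler. The global connection pointer is an assumption of this sketch.

#include <signal.h>
#include "sqlite3.h"

static sqlite3 *g_db = 0;   /* the connection running the long query */

/* On Ctrl-C, ask SQLite to abandon whatever statement is running; the
** interrupted statement returns SQLITE_INTERRUPT to its caller. */
static void on_sigint(int signo){
  (void)signo;
  if( g_db ) sqlite3_interrupt(g_db);
}

/* Installed early, e.g. in main():  signal(SIGINT, on_sigint); */
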
+*/ +void sqlite3_interrupt(sqlite3*); + +/* +** CAPI3REF: Determine If An SQL Statement Is Complete {F10510} +** +** These routines are useful for command-line input to determine if the +** currently entered text seems to form complete a SQL statement or +** if additional input is needed before sending the text into +** SQLite for parsing. These routines return true if the input string +** appears to be a complete SQL statement. A statement is judged to be +** complete if it ends with a semicolon token and is not a fragment of a +** CREATE TRIGGER statement. Semicolons that are embedded within +** string literals or quoted identifier names or comments are not +** independent tokens (they are part of the token in which they are +** embedded) and thus do not count as a statement terminator. +** +** These routines do not parse the SQL and +** so will not detect syntactically incorrect SQL. +** +** INVARIANTS: +** +** {F10511} The sqlite3_complete() and sqlite3_complete16() functions +** return true (non-zero) if and only if the last +** non-whitespace token in their input is a semicolon that +** is not in between the BEGIN and END of a CREATE TRIGGER +** statement. +** +** LIMITATIONS: +** +** {U10512} The input to sqlite3_complete() must be a zero-terminated +** UTF-8 string. +** +** {U10513} The input to sqlite3_complete16() must be a zero-terminated +** UTF-16 string in native byte order. +*/ +int sqlite3_complete(const char *sql); +int sqlite3_complete16(const void *sql); + +/* +** CAPI3REF: Register A Callback To Handle SQLITE_BUSY Errors {F12310} +** +** This routine identifies a callback function that might be +** invoked whenever an attempt is made to open a database table +** that another thread or process has locked. +** If the busy callback is NULL, then [SQLITE_BUSY] +** or [SQLITE_IOERR_BLOCKED] +** is returned immediately upon encountering the lock. +** If the busy callback is not NULL, then the +** callback will be invoked with two arguments. The +** first argument to the handler is a copy of the void* pointer which +** is the third argument to this routine. The second argument to +** the handler is the number of times that the busy handler has +** been invoked for this locking event. If the +** busy callback returns 0, then no additional attempts are made to +** access the database and [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED] is returned. +** If the callback returns non-zero, then another attempt +** is made to open the database for reading and the cycle repeats. +** +** The presence of a busy handler does not guarantee that +** it will be invoked when there is lock contention. +** If SQLite determines that invoking the busy handler could result in +** a deadlock, it will go ahead and return [SQLITE_BUSY] or +** [SQLITE_IOERR_BLOCKED] instead of invoking the +** busy handler. +** Consider a scenario where one process is holding a read lock that +** it is trying to promote to a reserved lock and +** a second process is holding a reserved lock that it is trying +** to promote to an exclusive lock. The first process cannot proceed +** because it is blocked by the second and the second process cannot +** proceed because it is blocked by the first. If both processes +** invoke the busy handlers, neither will make any progress. Therefore, +** SQLite returns [SQLITE_BUSY] for the first process, hoping that this +** will induce the first process to release its read lock and allow +** the second process to proceed. +** +** The default busy callback is NULL. 
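
A busy handler matching the signature declared below might look like the following; the retry limit and the message are arbitrary choices for the sketch.

#include <stdio.h>
#include "sqlite3.h"

/* Retry a locked operation up to ten times, then give up. */
static int demo_busy(void *pArg, int nPrior){
  (void)pArg;                  /* the pointer given at registration time */
  if( nPrior>=10 ) return 0;   /* 0: stop retrying; caller sees SQLITE_BUSY */
  fprintf(stderr, "database is busy, retry #%d\n", nPrior+1);
  return 1;                    /* non-zero: attempt the lock again */
}

/* Registered with:  sqlite3_busy_handler(db, demo_busy, 0); */
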
+** +** The [SQLITE_BUSY] error is converted to [SQLITE_IOERR_BLOCKED] +** when SQLite is in the middle of a large transaction where all the +** changes will not fit into the in-memory cache. SQLite will +** already hold a RESERVED lock on the database file, but it needs +** to promote this lock to EXCLUSIVE so that it can spill cache +** pages into the database file without harm to concurrent +** readers. If it is unable to promote the lock, then the in-memory +** cache will be left in an inconsistent state and so the error +** code is promoted from the relatively benign [SQLITE_BUSY] to +** the more severe [SQLITE_IOERR_BLOCKED]. This error code promotion +** forces an automatic rollback of the changes. See the +** +** CorruptionFollowingBusyError wiki page for a discussion of why +** this is important. +** +** There can only be a single busy handler defined for each database +** connection. Setting a new busy handler clears any previous one. +** Note that calling [sqlite3_busy_timeout()] will also set or clear +** the busy handler. +** +** INVARIANTS: +** +** {F12311} The [sqlite3_busy_handler()] function replaces the busy handler +** callback in the database connection identified by the 1st +** parameter with a new busy handler identified by the 2nd and 3rd +** parameters. +** +** {F12312} The default busy handler for new database connections is NULL. +** +** {F12314} When two or more database connection share a common cache, +** the busy handler for the database connection currently using +** the cache is invoked when the cache encounters a lock. +** +** {F12316} If a busy handler callback returns zero, then the SQLite +** interface that provoked the locking event will return +** [SQLITE_BUSY]. +** +** {F12318} SQLite will invokes the busy handler with two argument which +** are a copy of the pointer supplied by the 3rd parameter to +** [sqlite3_busy_handler()] and a count of the number of prior +** invocations of the busy handler for the same locking event. +** +** LIMITATIONS: +** +** {U12319} A busy handler should not call close the database connection +** or prepared statement that invoked the busy handler. +*/ +int sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*); + +/* +** CAPI3REF: Set A Busy Timeout {F12340} +** +** This routine sets a [sqlite3_busy_handler | busy handler] +** that sleeps for a while when a +** table is locked. The handler will sleep multiple times until +** at least "ms" milliseconds of sleeping have been done. {F12343} After +** "ms" milliseconds of sleeping, the handler returns 0 which +** causes [sqlite3_step()] to return [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED]. +** +** Calling this routine with an argument less than or equal to zero +** turns off all busy handlers. +** +** There can only be a single busy handler for a particular database +** connection. If another busy handler was defined +** (using [sqlite3_busy_handler()]) prior to calling +** this routine, that other busy handler is cleared. +** +** INVARIANTS: +** +** {F12341} The [sqlite3_busy_timeout()] function overrides any prior +** [sqlite3_busy_timeout()] or [sqlite3_busy_handler()] setting +** on the same database connection. +** +** {F12343} If the 2nd parameter to [sqlite3_busy_timeout()] is less than +** or equal to zero, then the busy handler is cleared so that +** all subsequent locking events immediately return [SQLITE_BUSY]. 
+** +** {F12344} If the 2nd parameter to [sqlite3_busy_timeout()] is a positive +** number N, then a busy handler is set that repeatedly calls +** the xSleep() method in the VFS interface until either the +** lock clears or until the cumulative sleep time reported back +** by xSleep() exceeds N milliseconds. +*/ +int sqlite3_busy_timeout(sqlite3*, int ms); + +/* +** CAPI3REF: Convenience Routines For Running Queries {F12370} +** +** Definition: A result table is memory data structure created by the +** [sqlite3_get_table()] interface. A result table records the +** complete query results from one or more queries. +** +** The table conceptually has a number of rows and columns. But +** these numbers are not part of the result table itself. These +** numbers are obtained separately. Let N be the number of rows +** and M be the number of columns. +** +** A result table is an array of pointers to zero-terminated +** UTF-8 strings. There are (N+1)*M elements in the array. +** The first M pointers point to zero-terminated strings that +** contain the names of the columns. +** The remaining entries all point to query results. NULL +** values are give a NULL pointer. All other values are in +** their UTF-8 zero-terminated string representation as returned by +** [sqlite3_column_text()]. +** +** A result table might consists of one or more memory allocations. +** It is not safe to pass a result table directly to [sqlite3_free()]. +** A result table should be deallocated using [sqlite3_free_table()]. +** +** As an example of the result table format, suppose a query result +** is as follows: +** +**
                +**        Name        | Age
                +**        -----------------------
                +**        Alice       | 43
                +**        Bob         | 28
                +**        Cindy       | 21
                +** 
+** +** There are two columns (M==2) and three rows (N==3). Thus the +** result table has 8 entries. Suppose the result table is stored +** in an array named azResult. Then azResult holds this content: +** +**
                +**        azResult[0] = "Name";
                +**        azResult[1] = "Age";
                +**        azResult[2] = "Alice";
                +**        azResult[3] = "43";
                +**        azResult[4] = "Bob";
                +**        azResult[5] = "28";
                +**        azResult[6] = "Cindy";
                +**        azResult[7] = "21";
                +** 
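A minimal sketch of producing and releasing a result table laid out as above, assuming an open connection db and a hypothetical "people" table:

    char **azResult = 0;
    char *zErrMsg = 0;
    int nRow = 0, nCol = 0;
    int rc = sqlite3_get_table(db, "SELECT Name, Age FROM people;",
                               &azResult, &nRow, &nCol, &zErrMsg);
    if( rc==SQLITE_OK ){
      /* azResult[0 .. nCol-1] are the column names; the value of row i,
      ** column j is azResult[(i+1)*nCol + j] (NULLs appear as NULL pointers) */
    }
    sqlite3_free_table(azResult);   /* never sqlite3_free() the table itself */
    sqlite3_free(zErrMsg);          /* harmless no-op when zErrMsg is NULL */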
                +** +** The sqlite3_get_table() function evaluates one or more +** semicolon-separated SQL statements in the zero-terminated UTF-8 +** string of its 2nd parameter. It returns a result table to the +** pointer given in its 3rd parameter. +** +** After the calling function has finished using the result, it should +** pass the pointer to the result table to sqlite3_free_table() in order to +** release the memory that was malloc-ed. Because of the way the +** [sqlite3_malloc()] happens within sqlite3_get_table(), the calling +** function must not try to call [sqlite3_free()] directly. Only +** [sqlite3_free_table()] is able to release the memory properly and safely. +** +** The sqlite3_get_table() interface is implemented as a wrapper around +** [sqlite3_exec()]. The sqlite3_get_table() routine does not have access +** to any internal data structures of SQLite. It uses only the public +** interface defined here. As a consequence, errors that occur in the +** wrapper layer outside of the internal [sqlite3_exec()] call are not +** reflected in subsequent calls to [sqlite3_errcode()] or +** [sqlite3_errmsg()]. +** +** INVARIANTS: +** +** {F12371} If a [sqlite3_get_table()] fails a memory allocation, then +** it frees the result table under construction, aborts the +** query in process, skips any subsequent queries, sets the +** *resultp output pointer to NULL and returns [SQLITE_NOMEM]. +** +** {F12373} If the ncolumn parameter to [sqlite3_get_table()] is not NULL +** then [sqlite3_get_table()] write the number of columns in the +** result set of the query into *ncolumn if the query is +** successful (if the function returns SQLITE_OK). +** +** {F12374} If the nrow parameter to [sqlite3_get_table()] is not NULL +** then [sqlite3_get_table()] write the number of rows in the +** result set of the query into *nrow if the query is +** successful (if the function returns SQLITE_OK). +** +** {F12376} The [sqlite3_get_table()] function sets its *ncolumn value +** to the number of columns in the result set of the query in the +** sql parameter, or to zero if the query in sql has an empty +** result set. +*/ +int sqlite3_get_table( + sqlite3*, /* An open database */ + const char *sql, /* SQL to be evaluated */ + char ***pResult, /* Results of the query */ + int *nrow, /* Number of result rows written here */ + int *ncolumn, /* Number of result columns written here */ + char **errmsg /* Error msg written here */ +); +void sqlite3_free_table(char **result); + +/* +** CAPI3REF: Formatted String Printing Functions {F17400} +** +** These routines are workalikes of the "printf()" family of functions +** from the standard C library. +** +** The sqlite3_mprintf() and sqlite3_vmprintf() routines write their +** results into memory obtained from [sqlite3_malloc()]. +** The strings returned by these two routines should be +** released by [sqlite3_free()]. Both routines return a +** NULL pointer if [sqlite3_malloc()] is unable to allocate enough +** memory to hold the resulting string. +** +** In sqlite3_snprintf() routine is similar to "snprintf()" from +** the standard C library. The result is written into the +** buffer supplied as the second parameter whose size is given by +** the first parameter. Note that the order of the +** first two parameters is reversed from snprintf(). This is an +** historical accident that cannot be fixed without breaking +** backwards compatibility. 
Note also that sqlite3_snprintf() +** returns a pointer to its buffer instead of the number of +** characters actually written into the buffer. We admit that +** the number of characters written would be a more useful return +** value but we cannot change the implementation of sqlite3_snprintf() +** now without breaking compatibility. +** +** As long as the buffer size is greater than zero, sqlite3_snprintf() +** guarantees that the buffer is always zero-terminated. The first +** parameter "n" is the total size of the buffer, including space for +** the zero terminator. So the longest string that can be completely +** written will be n-1 characters. +** +** These routines all implement some additional formatting +** options that are useful for constructing SQL statements. +** All of the usual printf formatting options apply. In addition, there +** are "%q", "%Q", and "%z" options. +** +** The %q option works like %s in that it substitutes a null-terminated +** string from the argument list. But %q also doubles every '\'' character. +** %q is designed for use inside a string literal. By doubling each '\'' +** character it escapes that character and allows it to be inserted into +** the string. +** +** For example, suppose some string variable contains text as follows: +** +**
                +**  char *zText = "It's a happy day!";
                +** 
                +** +** One can use this text in an SQL statement as follows: +** +**
+**  char *zSQL = sqlite3_mprintf("INSERT INTO table1 VALUES('%q')", zText);
                +**  sqlite3_exec(db, zSQL, 0, 0, 0);
                +**  sqlite3_free(zSQL);
                +** 
                +** +** Because the %q format string is used, the '\'' character in zText +** is escaped and the SQL generated is as follows: +** +**
                +**  INSERT INTO table1 VALUES('It''s a happy day!')
                +** 
                +** +** This is correct. Had we used %s instead of %q, the generated SQL +** would have looked like this: +** +**
                +**  INSERT INTO table1 VALUES('It's a happy day!');
                +** 
                +** +** This second example is an SQL syntax error. As a general rule you +** should always use %q instead of %s when inserting text into a string +** literal. +** +** The %Q option works like %q except it also adds single quotes around +** the outside of the total string. Or if the parameter in the argument +** list is a NULL pointer, %Q substitutes the text "NULL" (without single +** quotes) in place of the %Q option. {END} So, for example, one could say: +** +**
+**  char *zSQL = sqlite3_mprintf("INSERT INTO table1 VALUES(%Q)", zText);
                +**  sqlite3_exec(db, zSQL, 0, 0, 0);
                +**  sqlite3_free(zSQL);
                +** 
                +** +** The code above will render a correct SQL statement in the zSQL +** variable even if the zText variable is a NULL pointer. +** +** The "%z" formatting option works exactly like "%s" with the +** addition that after the string has been read and copied into +** the result, [sqlite3_free()] is called on the input string. {END} +** +** INVARIANTS: +** +** {F17403} The [sqlite3_mprintf()] and [sqlite3_vmprintf()] interfaces +** return either pointers to zero-terminated UTF-8 strings held in +** memory obtained from [sqlite3_malloc()] or NULL pointers if +** a call to [sqlite3_malloc()] fails. +** +** {F17406} The [sqlite3_snprintf()] interface writes a zero-terminated +** UTF-8 string into the buffer pointed to by the second parameter +** provided that the first parameter is greater than zero. +** +** {F17407} The [sqlite3_snprintf()] interface does not writes slots of +** its output buffer (the second parameter) outside the range +** of 0 through N-1 (where N is the first parameter) +** regardless of the length of the string +** requested by the format specification. +** +*/ +char *sqlite3_mprintf(const char*,...); +char *sqlite3_vmprintf(const char*, va_list); +char *sqlite3_snprintf(int,char*,const char*, ...); + +/* +** CAPI3REF: Memory Allocation Subsystem {F17300} +** +** The SQLite core uses these three routines for all of its own +** internal memory allocation needs. "Core" in the previous sentence +** does not include operating-system specific VFS implementation. The +** windows VFS uses native malloc and free for some operations. +** +** The sqlite3_malloc() routine returns a pointer to a block +** of memory at least N bytes in length, where N is the parameter. +** If sqlite3_malloc() is unable to obtain sufficient free +** memory, it returns a NULL pointer. If the parameter N to +** sqlite3_malloc() is zero or negative then sqlite3_malloc() returns +** a NULL pointer. +** +** Calling sqlite3_free() with a pointer previously returned +** by sqlite3_malloc() or sqlite3_realloc() releases that memory so +** that it might be reused. The sqlite3_free() routine is +** a no-op if is called with a NULL pointer. Passing a NULL pointer +** to sqlite3_free() is harmless. After being freed, memory +** should neither be read nor written. Even reading previously freed +** memory might result in a segmentation fault or other severe error. +** Memory corruption, a segmentation fault, or other severe error +** might result if sqlite3_free() is called with a non-NULL pointer that +** was not obtained from sqlite3_malloc() or sqlite3_free(). +** +** The sqlite3_realloc() interface attempts to resize a +** prior memory allocation to be at least N bytes, where N is the +** second parameter. The memory allocation to be resized is the first +** parameter. If the first parameter to sqlite3_realloc() +** is a NULL pointer then its behavior is identical to calling +** sqlite3_malloc(N) where N is the second parameter to sqlite3_realloc(). +** If the second parameter to sqlite3_realloc() is zero or +** negative then the behavior is exactly the same as calling +** sqlite3_free(P) where P is the first parameter to sqlite3_realloc(). +** Sqlite3_realloc() returns a pointer to a memory allocation +** of at least N bytes in size or NULL if sufficient memory is unavailable. +** If M is the size of the prior allocation, then min(N,M) bytes +** of the prior allocation are copied into the beginning of buffer returned +** by sqlite3_realloc() and the prior allocation is freed. 
+** If sqlite3_realloc() returns NULL, then the prior allocation +** is not freed. +** +** The memory returned by sqlite3_malloc() and sqlite3_realloc() +** is always aligned to at least an 8 byte boundary. {END} +** +** The default implementation +** of the memory allocation subsystem uses the malloc(), realloc() +** and free() provided by the standard C library. {F17382} However, if +** SQLite is compiled with the following C preprocessor macro +** +**
                SQLITE_MEMORY_SIZE=NNN
                +** +** where NNN is an integer, then SQLite create a static +** array of at least NNN bytes in size and use that array +** for all of its dynamic memory allocation needs. {END} Additional +** memory allocator options may be added in future releases. +** +** In SQLite version 3.5.0 and 3.5.1, it was possible to define +** the SQLITE_OMIT_MEMORY_ALLOCATION which would cause the built-in +** implementation of these routines to be omitted. That capability +** is no longer provided. Only built-in memory allocators can be +** used. +** +** The windows OS interface layer calls +** the system malloc() and free() directly when converting +** filenames between the UTF-8 encoding used by SQLite +** and whatever filename encoding is used by the particular windows +** installation. Memory allocation errors are detected, but +** they are reported back as [SQLITE_CANTOPEN] or +** [SQLITE_IOERR] rather than [SQLITE_NOMEM]. +** +** INVARIANTS: +** +** {F17303} The [sqlite3_malloc(N)] interface returns either a pointer to +** newly checked-out block of at least N bytes of memory +** that is 8-byte aligned, +** or it returns NULL if it is unable to fulfill the request. +** +** {F17304} The [sqlite3_malloc(N)] interface returns a NULL pointer if +** N is less than or equal to zero. +** +** {F17305} The [sqlite3_free(P)] interface releases memory previously +** returned from [sqlite3_malloc()] or [sqlite3_realloc()], +** making it available for reuse. +** +** {F17306} A call to [sqlite3_free(NULL)] is a harmless no-op. +** +** {F17310} A call to [sqlite3_realloc(0,N)] is equivalent to a call +** to [sqlite3_malloc(N)]. +** +** {F17312} A call to [sqlite3_realloc(P,0)] is equivalent to a call +** to [sqlite3_free(P)]. +** +** {F17315} The SQLite core uses [sqlite3_malloc()], [sqlite3_realloc()], +** and [sqlite3_free()] for all of its memory allocation and +** deallocation needs. +** +** {F17318} The [sqlite3_realloc(P,N)] interface returns either a pointer +** to a block of checked-out memory of at least N bytes in size +** that is 8-byte aligned, or a NULL pointer. +** +** {F17321} When [sqlite3_realloc(P,N)] returns a non-NULL pointer, it first +** copies the first K bytes of content from P into the newly allocated +** where K is the lessor of N and the size of the buffer P. +** +** {F17322} When [sqlite3_realloc(P,N)] returns a non-NULL pointer, it first +** releases the buffer P. +** +** {F17323} When [sqlite3_realloc(P,N)] returns NULL, the buffer P is +** not modified or released. +** +** LIMITATIONS: +** +** {U17350} The pointer arguments to [sqlite3_free()] and [sqlite3_realloc()] +** must be either NULL or else a pointer obtained from a prior +** invocation of [sqlite3_malloc()] or [sqlite3_realloc()] that has +** not been released. +** +** {U17351} The application must not read or write any part of +** a block of memory after it has been released using +** [sqlite3_free()] or [sqlite3_realloc()]. +** +*/ +void *sqlite3_malloc(int); +void *sqlite3_realloc(void*, int); +void sqlite3_free(void*); + +/* +** CAPI3REF: Memory Allocator Statistics {F17370} +** +** SQLite provides these two interfaces for reporting on the status +** of the [sqlite3_malloc()], [sqlite3_free()], and [sqlite3_realloc()] +** the memory allocation subsystem included within the SQLite. +** +** INVARIANTS: +** +** {F17371} The [sqlite3_memory_used()] routine returns the +** number of bytes of memory currently outstanding +** (malloced but not freed). 
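A small sketch of the ownership rules described above (the buffer sizes are arbitrary):

    char *p = sqlite3_malloc(100);
    if( p ){
      char *pNew = sqlite3_realloc(p, 200);   /* try to grow the allocation */
      if( pNew ){
        p = pNew;        /* the old allocation has been freed by the realloc */
      }                  /* on NULL, p is left untouched and still owned by us */
      sqlite3_free(p);
    }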
+** +** {F17373} The [sqlite3_memory_highwater()] routine returns the maximum +** value of [sqlite3_memory_used()] +** since the highwater mark was last reset. +** +** {F17374} The values returned by [sqlite3_memory_used()] and +** [sqlite3_memory_highwater()] include any overhead +** added by SQLite in its implementation of [sqlite3_malloc()], +** but not overhead added by the any underlying system library +** routines that [sqlite3_malloc()] may call. +** +** {F17375} The memory highwater mark is reset to the current value of +** [sqlite3_memory_used()] if and only if the parameter to +** [sqlite3_memory_highwater()] is true. The value returned +** by [sqlite3_memory_highwater(1)] is the highwater mark +** prior to the reset. +*/ +sqlite3_int64 sqlite3_memory_used(void); +sqlite3_int64 sqlite3_memory_highwater(int resetFlag); + +/* +** CAPI3REF: Compile-Time Authorization Callbacks {F12500} +** +** This routine registers a authorizer callback with a particular +** database connection, supplied in the first argument. +** The authorizer callback is invoked as SQL statements are being compiled +** by [sqlite3_prepare()] or its variants [sqlite3_prepare_v2()], +** [sqlite3_prepare16()] and [sqlite3_prepare16_v2()]. At various +** points during the compilation process, as logic is being created +** to perform various actions, the authorizer callback is invoked to +** see if those actions are allowed. The authorizer callback should +** return SQLITE_OK to allow the action, [SQLITE_IGNORE] to disallow the +** specific action but allow the SQL statement to continue to be +** compiled, or [SQLITE_DENY] to cause the entire SQL statement to be +** rejected with an error. If the authorizer callback returns +** any value other than [SQLITE_IGNORE], [SQLITE_OK], or [SQLITE_DENY] +** then [sqlite3_prepare_v2()] or equivalent call that triggered +** the authorizer will fail with an error message. +** +** When the callback returns [SQLITE_OK], that means the operation +** requested is ok. When the callback returns [SQLITE_DENY], the +** [sqlite3_prepare_v2()] or equivalent call that triggered the +** authorizer will fail with an error message explaining that +** access is denied. If the authorizer code is [SQLITE_READ] +** and the callback returns [SQLITE_IGNORE] then the prepared +** statement is constructed to insert a NULL value in place of +** the table column that would have +** been read if [SQLITE_OK] had been returned. The [SQLITE_IGNORE] +** return can be used to deny an untrusted user access to individual +** columns of a table. +** +** The first parameter to the authorizer callback is a copy of +** the third parameter to the sqlite3_set_authorizer() interface. +** The second parameter to the callback is an integer +** [SQLITE_COPY | action code] that specifies the particular action +** to be authorized. The third through sixth +** parameters to the callback are zero-terminated strings that contain +** additional details about the action to be authorized. +** +** An authorizer is used when preparing SQL statements from an untrusted +** source, to ensure that the SQL statements do not try to access data +** that they are not allowed to see, or that they do not try to +** execute malicious statements that damage the database. For +** example, an application may allow a user to enter arbitrary +** SQL queries for evaluation by a database. But the application does +** not want the user to be able to make arbitrary changes to the +** database. 
An authorizer could then be put in place while the +** user-entered SQL is being prepared that disallows everything +** except SELECT statements. +** +** Only a single authorizer can be in place on a database connection +** at a time. Each call to sqlite3_set_authorizer overrides the +** previous call. Disable the authorizer by installing a NULL callback. +** The authorizer is disabled by default. +** +** Note that the authorizer callback is invoked only during +** [sqlite3_prepare()] or its variants. Authorization is not +** performed during statement evaluation in [sqlite3_step()]. +** +** INVARIANTS: +** +** {F12501} The [sqlite3_set_authorizer(D,...)] interface registers a +** authorizer callback with database connection D. +** +** {F12502} The authorizer callback is invoked as SQL statements are +** being compiled +** +** {F12503} If the authorizer callback returns any value other than +** [SQLITE_IGNORE], [SQLITE_OK], or [SQLITE_DENY] then +** the [sqlite3_prepare_v2()] or equivalent call that caused +** the authorizer callback to run shall fail with an +** [SQLITE_ERROR] error code and an appropriate error message. +** +** {F12504} When the authorizer callback returns [SQLITE_OK], the operation +** described is coded normally. +** +** {F12505} When the authorizer callback returns [SQLITE_DENY], the +** [sqlite3_prepare_v2()] or equivalent call that caused the +** authorizer callback to run shall fail +** with an [SQLITE_ERROR] error code and an error message +** explaining that access is denied. +** +** {F12506} If the authorizer code (the 2nd parameter to the authorizer +** callback) is [SQLITE_READ] and the authorizer callback returns +** [SQLITE_IGNORE] then the prepared statement is constructed to +** insert a NULL value in place of the table column that would have +** been read if [SQLITE_OK] had been returned. +** +** {F12507} If the authorizer code (the 2nd parameter to the authorizer +** callback) is anything other than [SQLITE_READ], then +** a return of [SQLITE_IGNORE] has the same effect as [SQLITE_DENY]. +** +** {F12510} The first parameter to the authorizer callback is a copy of +** the third parameter to the [sqlite3_set_authorizer()] interface. +** +** {F12511} The second parameter to the callback is an integer +** [SQLITE_COPY | action code] that specifies the particular action +** to be authorized. +** +** {F12512} The third through sixth parameters to the callback are +** zero-terminated strings that contain +** additional details about the action to be authorized. +** +** {F12520} Each call to [sqlite3_set_authorizer()] overrides the +** any previously installed authorizer. +** +** {F12521} A NULL authorizer means that no authorization +** callback is invoked. +** +** {F12522} The default authorizer is NULL. +*/ +int sqlite3_set_authorizer( + sqlite3*, + int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), + void *pUserData +); + +/* +** CAPI3REF: Authorizer Return Codes {F12590} +** +** The [sqlite3_set_authorizer | authorizer callback function] must +** return either [SQLITE_OK] or one of these two constants in order +** to signal SQLite whether or not the action is permitted. See the +** [sqlite3_set_authorizer | authorizer documentation] for additional +** information. 
+*/ +#define SQLITE_DENY 1 /* Abort the SQL statement with an error */ +#define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */ + +/* +** CAPI3REF: Authorizer Action Codes {F12550} +** +** The [sqlite3_set_authorizer()] interface registers a callback function +** that is invoked to authorizer certain SQL statement actions. The +** second parameter to the callback is an integer code that specifies +** what action is being authorized. These are the integer action codes that +** the authorizer callback may be passed. +** +** These action code values signify what kind of operation is to be +** authorized. The 3rd and 4th parameters to the authorization +** callback function will be parameters or NULL depending on which of these +** codes is used as the second parameter. The 5th parameter to the +** authorizer callback is the name of the database ("main", "temp", +** etc.) if applicable. The 6th parameter to the authorizer callback +** is the name of the inner-most trigger or view that is responsible for +** the access attempt or NULL if this access attempt is directly from +** top-level SQL code. +** +** INVARIANTS: +** +** {F12551} The second parameter to an +** [sqlite3_set_authorizer | authorizer callback is always an integer +** [SQLITE_COPY | authorizer code] that specifies what action +** is being authorized. +** +** {F12552} The 3rd and 4th parameters to the +** [sqlite3_set_authorizer | authorization callback function] +** will be parameters or NULL depending on which +** [SQLITE_COPY | authorizer code] is used as the second parameter. +** +** {F12553} The 5th parameter to the +** [sqlite3_set_authorizer | authorizer callback] is the name +** of the database (example: "main", "temp", etc.) if applicable. +** +** {F12554} The 6th parameter to the +** [sqlite3_set_authorizer | authorizer callback] is the name +** of the inner-most trigger or view that is responsible for +** the access attempt or NULL if this access attempt is directly from +** top-level SQL code. 
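As a sketch of the "SELECT only" policy suggested above: a callback (the name is illustrative) that allows reads and SELECT statements and denies everything else, using the action codes defined just below. A fuller policy might also need to allow [SQLITE_FUNCTION].

    static int selectOnlyAuth(void *pUserData, int code,
                              const char *zArg3, const char *zArg4,
                              const char *zDb, const char *zTrigger){
      (void)pUserData; (void)zArg3; (void)zArg4; (void)zDb; (void)zTrigger;
      if( code==SQLITE_SELECT || code==SQLITE_READ ) return SQLITE_OK;
      return SQLITE_DENY;
    }

    sqlite3_set_authorizer(db, selectOnlyAuth, 0);   /* a NULL callback disables it again */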
+*/ +/******************************************* 3rd ************ 4th ***********/ +#define SQLITE_CREATE_INDEX 1 /* Index Name Table Name */ +#define SQLITE_CREATE_TABLE 2 /* Table Name NULL */ +#define SQLITE_CREATE_TEMP_INDEX 3 /* Index Name Table Name */ +#define SQLITE_CREATE_TEMP_TABLE 4 /* Table Name NULL */ +#define SQLITE_CREATE_TEMP_TRIGGER 5 /* Trigger Name Table Name */ +#define SQLITE_CREATE_TEMP_VIEW 6 /* View Name NULL */ +#define SQLITE_CREATE_TRIGGER 7 /* Trigger Name Table Name */ +#define SQLITE_CREATE_VIEW 8 /* View Name NULL */ +#define SQLITE_DELETE 9 /* Table Name NULL */ +#define SQLITE_DROP_INDEX 10 /* Index Name Table Name */ +#define SQLITE_DROP_TABLE 11 /* Table Name NULL */ +#define SQLITE_DROP_TEMP_INDEX 12 /* Index Name Table Name */ +#define SQLITE_DROP_TEMP_TABLE 13 /* Table Name NULL */ +#define SQLITE_DROP_TEMP_TRIGGER 14 /* Trigger Name Table Name */ +#define SQLITE_DROP_TEMP_VIEW 15 /* View Name NULL */ +#define SQLITE_DROP_TRIGGER 16 /* Trigger Name Table Name */ +#define SQLITE_DROP_VIEW 17 /* View Name NULL */ +#define SQLITE_INSERT 18 /* Table Name NULL */ +#define SQLITE_PRAGMA 19 /* Pragma Name 1st arg or NULL */ +#define SQLITE_READ 20 /* Table Name Column Name */ +#define SQLITE_SELECT 21 /* NULL NULL */ +#define SQLITE_TRANSACTION 22 /* NULL NULL */ +#define SQLITE_UPDATE 23 /* Table Name Column Name */ +#define SQLITE_ATTACH 24 /* Filename NULL */ +#define SQLITE_DETACH 25 /* Database Name NULL */ +#define SQLITE_ALTER_TABLE 26 /* Database Name Table Name */ +#define SQLITE_REINDEX 27 /* Index Name NULL */ +#define SQLITE_ANALYZE 28 /* Table Name NULL */ +#define SQLITE_CREATE_VTABLE 29 /* Table Name Module Name */ +#define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ +#define SQLITE_FUNCTION 31 /* Function Name NULL */ +#define SQLITE_COPY 0 /* No longer used */ + +/* +** CAPI3REF: Tracing And Profiling Functions {F12280} +** +** These routines register callback functions that can be used for +** tracing and profiling the execution of SQL statements. +** +** The callback function registered by sqlite3_trace() is invoked at +** various times when an SQL statement is being run by [sqlite3_step()]. +** The callback returns a UTF-8 rendering of the SQL statement text +** as the statement first begins executing. Additional callbacks occur +** as each triggersubprogram is entered. The callbacks for triggers +** contain a UTF-8 SQL comment that identifies the trigger. +** +** The callback function registered by sqlite3_profile() is invoked +** as each SQL statement finishes. The profile callback contains +** the original statement text and an estimate of wall-clock time +** of how long that statement took to run. +** +** The sqlite3_profile() API is currently considered experimental and +** is subject to change or removal in a future release. +** +** The trigger reporting feature of the trace callback is considered +** experimental and is subject to change or removal in future releases. +** Future versions of SQLite might also add new trace callback +** invocations. +** +** INVARIANTS: +** +** {F12281} The callback function registered by [sqlite3_trace()] is +** whenever an SQL statement first begins to execute and +** whenever a trigger subprogram first begins to run. +** +** {F12282} Each call to [sqlite3_trace()] overrides the previously +** registered trace callback. +** +** {F12283} A NULL trace callback disables tracing. 
+** +** {F12284} The first argument to the trace callback is a copy of +** the pointer which was the 3rd argument to [sqlite3_trace()]. +** +** {F12285} The second argument to the trace callback is a +** zero-terminated UTF8 string containing the original text +** of the SQL statement as it was passed into [sqlite3_prepare_v2()] +** or the equivalent, or an SQL comment indicating the beginning +** of a trigger subprogram. +** +** {F12287} The callback function registered by [sqlite3_profile()] is invoked +** as each SQL statement finishes. +** +** {F12288} The first parameter to the profile callback is a copy of +** the 3rd parameter to [sqlite3_profile()]. +** +** {F12289} The second parameter to the profile callback is a +** zero-terminated UTF-8 string that contains the complete text of +** the SQL statement as it was processed by [sqlite3_prepare_v2()] +** or the equivalent. +** +** {F12290} The third parameter to the profile callback is an estimate +** of the number of nanoseconds of wall-clock time required to +** run the SQL statement from start to finish. +*/ +void *sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*); +void *sqlite3_profile(sqlite3*, + void(*xProfile)(void*,const char*,sqlite3_uint64), void*); + +/* +** CAPI3REF: Query Progress Callbacks {F12910} +** +** This routine configures a callback function - the +** progress callback - that is invoked periodically during long +** running calls to [sqlite3_exec()], [sqlite3_step()] and +** [sqlite3_get_table()]. An example use for this +** interface is to keep a GUI updated during a large query. +** +** If the progress callback returns non-zero, the opertion is +** interrupted. This feature can be used to implement a +** "Cancel" button on a GUI dialog box. +** +** INVARIANTS: +** +** {F12911} The callback function registered by [sqlite3_progress_handler()] +** is invoked periodically during long running calls to +** [sqlite3_step()]. +** +** {F12912} The progress callback is invoked once for every N virtual +** machine opcodes, where N is the second argument to +** the [sqlite3_progress_handler()] call that registered +** the callback. What if N is less than 1? +** +** {F12913} The progress callback itself is identified by the third +** argument to [sqlite3_progress_handler()]. +** +** {F12914} The fourth argument [sqlite3_progress_handler()] is a +*** void pointer passed to the progress callback +** function each time it is invoked. +** +** {F12915} If a call to [sqlite3_step()] results in fewer than +** N opcodes being executed, +** then the progress callback is never invoked. {END} +** +** {F12916} Every call to [sqlite3_progress_handler()] +** overwrites any previously registere progress handler. +** +** {F12917} If the progress handler callback is NULL then no progress +** handler is invoked. +** +** {F12918} If the progress callback returns a result other than 0, then +** the behavior is a if [sqlite3_interrupt()] had been called. +*/ +void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); + +/* +** CAPI3REF: Opening A New Database Connection {F12700} +** +** These routines open an SQLite database file whose name +** is given by the filename argument. +** The filename argument is interpreted as UTF-8 +** for [sqlite3_open()] and [sqlite3_open_v2()] and as UTF-16 +** in the native byte order for [sqlite3_open16()]. +** An [sqlite3*] handle is usually returned in *ppDb, even +** if an error occurs. 
The only exception is if SQLite is unable +** to allocate memory to hold the [sqlite3] object, a NULL will +** be written into *ppDb instead of a pointer to the [sqlite3] object. +** If the database is opened (and/or created) +** successfully, then [SQLITE_OK] is returned. Otherwise an +** error code is returned. The +** [sqlite3_errmsg()] or [sqlite3_errmsg16()] routines can be used to obtain +** an English language description of the error. +** +** The default encoding for the database will be UTF-8 if +** [sqlite3_open()] or [sqlite3_open_v2()] is called and +** UTF-16 in the native byte order if [sqlite3_open16()] is used. +** +** Whether or not an error occurs when it is opened, resources +** associated with the [sqlite3*] handle should be released by passing it +** to [sqlite3_close()] when it is no longer required. +** +** The [sqlite3_open_v2()] interface works like [sqlite3_open()] +** except that it acccepts two additional parameters for additional control +** over the new database connection. The flags parameter can be +** one of: +** +**
                  +**
                1. [SQLITE_OPEN_READONLY] +**
                2. [SQLITE_OPEN_READWRITE] +**
                3. [SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE] +**
                +** +** The first value opens the database read-only. +** If the database does not previously exist, an error is returned. +** The second option opens +** the database for reading and writing if possible, or reading only if +** if the file is write protected. In either case the database +** must already exist or an error is returned. The third option +** opens the database for reading and writing and creates it if it does +** not already exist. +** The third options is behavior that is always used for [sqlite3_open()] +** and [sqlite3_open16()]. +** +** If the filename is ":memory:", then an private +** in-memory database is created for the connection. This in-memory +** database will vanish when the database connection is closed. Future +** version of SQLite might make use of additional special filenames +** that begin with the ":" character. It is recommended that +** when a database filename really does begin with +** ":" that you prefix the filename with a pathname like "./" to +** avoid ambiguity. +** +** If the filename is an empty string, then a private temporary +** on-disk database will be created. This private database will be +** automatically deleted as soon as the database connection is closed. +** +** The fourth parameter to sqlite3_open_v2() is the name of the +** [sqlite3_vfs] object that defines the operating system +** interface that the new database connection should use. If the +** fourth parameter is a NULL pointer then the default [sqlite3_vfs] +** object is used. +** +** Note to windows users: The encoding used for the filename argument +** of [sqlite3_open()] and [sqlite3_open_v2()] must be UTF-8, not whatever +** codepage is currently defined. Filenames containing international +** characters must be converted to UTF-8 prior to passing them into +** [sqlite3_open()] or [sqlite3_open_v2()]. +** +** INVARIANTS: +** +** {F12701} The [sqlite3_open()], [sqlite3_open16()], and +** [sqlite3_open_v2()] interfaces create a new +** [database connection] associated with +** the database file given in their first parameter. +** +** {F12702} The filename argument is interpreted as UTF-8 +** for [sqlite3_open()] and [sqlite3_open_v2()] and as UTF-16 +** in the native byte order for [sqlite3_open16()]. +** +** {F12703} A successful invocation of [sqlite3_open()], [sqlite3_open16()], +** or [sqlite3_open_v2()] writes a pointer to a new +** [database connection] into *ppDb. +** +** {F12704} The [sqlite3_open()], [sqlite3_open16()], and +** [sqlite3_open_v2()] interfaces return [SQLITE_OK] upon success, +** or an appropriate [error code] on failure. +** +** {F12706} The default text encoding for a new database created using +** [sqlite3_open()] or [sqlite3_open_v2()] will be UTF-8. +** +** {F12707} The default text encoding for a new database created using +** [sqlite3_open16()] will be UTF-16. +** +** {F12709} The [sqlite3_open(F,D)] interface is equivalent to +** [sqlite3_open_v2(F,D,G,0)] where the G parameter is +** [SQLITE_OPEN_READWRITE]|[SQLITE_OPEN_CREATE]. +** +** {F12711} If the G parameter to [sqlite3_open_v2(F,D,G,V)] contains the +** bit value [SQLITE_OPEN_READONLY] then the database is opened +** for reading only. +** +** {F12712} If the G parameter to [sqlite3_open_v2(F,D,G,V)] contains the +** bit value [SQLITE_OPEN_READWRITE] then the database is opened +** reading and writing if possible, or for reading only if the +** file is write protected by the operating system. 
+** +** {F12713} If the G parameter to [sqlite3_open(v2(F,D,G,V)] omits the +** bit value [SQLITE_OPEN_CREATE] and the database does not +** previously exist, an error is returned. +** +** {F12714} If the G parameter to [sqlite3_open(v2(F,D,G,V)] contains the +** bit value [SQLITE_OPEN_CREATE] and the database does not +** previously exist, then an attempt is made to create and +** initialize the database. +** +** {F12717} If the filename argument to [sqlite3_open()], [sqlite3_open16()], +** or [sqlite3_open_v2()] is ":memory:", then an private, +** ephemeral, in-memory database is created for the connection. +** Is SQLITE_OPEN_CREATE|SQLITE_OPEN_READWRITE required +** in sqlite3_open_v2()? +** +** {F12719} If the filename is NULL or an empty string, then a private, +** ephermeral on-disk database will be created. +** Is SQLITE_OPEN_CREATE|SQLITE_OPEN_READWRITE required +** in sqlite3_open_v2()? +** +** {F12721} The [database connection] created by +** [sqlite3_open_v2(F,D,G,V)] will use the +** [sqlite3_vfs] object identified by the V parameter, or +** the default [sqlite3_vfs] object is V is a NULL pointer. +*/ +int sqlite3_open( + const char *filename, /* Database filename (UTF-8) */ + sqlite3 **ppDb /* OUT: SQLite db handle */ +); +int sqlite3_open16( + const void *filename, /* Database filename (UTF-16) */ + sqlite3 **ppDb /* OUT: SQLite db handle */ +); +int sqlite3_open_v2( + const char *filename, /* Database filename (UTF-8) */ + sqlite3 **ppDb, /* OUT: SQLite db handle */ + int flags, /* Flags */ + const char *zVfs /* Name of VFS module to use */ +); + +/* +** CAPI3REF: Error Codes And Messages {F12800} +** +** The sqlite3_errcode() interface returns the numeric +** [SQLITE_OK | result code] or [SQLITE_IOERR_READ | extended result code] +** for the most recent failed sqlite3_* API call associated +** with [sqlite3] handle 'db'. If a prior API call failed but the +** most recent API call succeeded, the return value from sqlite3_errcode() +** is undefined. +** +** The sqlite3_errmsg() and sqlite3_errmsg16() return English-language +** text that describes the error, as either UTF8 or UTF16 respectively. +** Memory to hold the error message string is managed internally. +** The application does not need to worry with freeing the result. +** However, the error string might be overwritten or deallocated by +** subsequent calls to other SQLite interface functions. +** +** INVARIANTS: +** +** {F12801} The [sqlite3_errcode(D)] interface returns the numeric +** [SQLITE_OK | result code] or +** [SQLITE_IOERR_READ | extended result code] +** for the most recently failed interface call associated +** with [database connection] D. +** +** {F12803} The [sqlite3_errmsg(D)] and [sqlite3_errmsg16(D)] +** interfaces return English-language text that describes +** the error in the mostly recently failed interface call, +** encoded as either UTF8 or UTF16 respectively. +** +** {F12807} The strings returned by [sqlite3_errmsg()] and [sqlite3_errmsg16()] +** are valid until the next SQLite interface call. +** +** {F12808} Calls to API routines that do not return an error code +** (example: [sqlite3_data_count()]) do not +** change the error code or message returned by +** [sqlite3_errcode()], [sqlite3_errmsg()], or [sqlite3_errmsg16()]. 
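A minimal sketch tying the open and error-message interfaces together, with an illustrative filename:

    sqlite3 *db = 0;
    if( sqlite3_open("example.db", &db)!=SQLITE_OK ){
      const char *zMsg = sqlite3_errmsg(db);   /* handle is returned even on failure */
      /* ... report zMsg ... */
    }
    sqlite3_close(db);   /* release resources whether or not the open succeeded */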
+** +** {F12809} Interfaces that are not associated with a specific +** [database connection] (examples: +** [sqlite3_mprintf()] or [sqlite3_enable_shared_cache()] +** do not change the values returned by +** [sqlite3_errcode()], [sqlite3_errmsg()], or [sqlite3_errmsg16()]. +*/ +int sqlite3_errcode(sqlite3 *db); +const char *sqlite3_errmsg(sqlite3*); +const void *sqlite3_errmsg16(sqlite3*); + +/* +** CAPI3REF: SQL Statement Object {F13000} +** KEYWORDS: {prepared statement} {prepared statements} +** +** An instance of this object represent single SQL statements. This +** object is variously known as a "prepared statement" or a +** "compiled SQL statement" or simply as a "statement". +** +** The life of a statement object goes something like this: +** +**
                  +**
                1. Create the object using [sqlite3_prepare_v2()] or a related +** function. +**
                2. Bind values to host parameters using +** [sqlite3_bind_blob | sqlite3_bind_* interfaces]. +**
                3. Run the SQL by calling [sqlite3_step()] one or more times. +**
                4. Reset the statement using [sqlite3_reset()] then go back +** to step 2. Do this zero or more times. +**
                5. Destroy the object using [sqlite3_finalize()]. +**
                +** +** Refer to documentation on individual methods above for additional +** information. +*/ +typedef struct sqlite3_stmt sqlite3_stmt; + +/* +** CAPI3REF: Compiling An SQL Statement {F13010} +** +** To execute an SQL query, it must first be compiled into a byte-code +** program using one of these routines. +** +** The first argument "db" is an [database connection] +** obtained from a prior call to [sqlite3_open()], [sqlite3_open_v2()] +** or [sqlite3_open16()]. +** The second argument "zSql" is the statement to be compiled, encoded +** as either UTF-8 or UTF-16. The sqlite3_prepare() and sqlite3_prepare_v2() +** interfaces uses UTF-8 and sqlite3_prepare16() and sqlite3_prepare16_v2() +** use UTF-16. {END} +** +** If the nByte argument is less +** than zero, then zSql is read up to the first zero terminator. +** If nByte is non-negative, then it is the maximum number of +** bytes read from zSql. When nByte is non-negative, the +** zSql string ends at either the first '\000' or '\u0000' character or +** until the nByte-th byte, whichever comes first. {END} +** +** *pzTail is made to point to the first byte past the end of the +** first SQL statement in zSql. These routines only compiles the first +** statement in zSql, so *pzTail is left pointing to what remains +** uncompiled. +** +** *ppStmt is left pointing to a compiled [prepared statement] that can be +** executed using [sqlite3_step()]. Or if there is an error, *ppStmt is +** set to NULL. If the input text contains no SQL (if the input +** is and empty string or a comment) then *ppStmt is set to NULL. +** {U13018} The calling procedure is responsible for deleting the +** compiled SQL statement +** using [sqlite3_finalize()] after it has finished with it. +** +** On success, [SQLITE_OK] is returned. Otherwise an +** [error code] is returned. +** +** The sqlite3_prepare_v2() and sqlite3_prepare16_v2() interfaces are +** recommended for all new programs. The two older interfaces are retained +** for backwards compatibility, but their use is discouraged. +** In the "v2" interfaces, the prepared statement +** that is returned (the [sqlite3_stmt] object) contains a copy of the +** original SQL text. {END} This causes the [sqlite3_step()] interface to +** behave a differently in two ways: +** +**
                  +**
                1. +** If the database schema changes, instead of returning [SQLITE_SCHEMA] as it +** always used to do, [sqlite3_step()] will automatically recompile the SQL +** statement and try to run it again. If the schema has changed in +** a way that makes the statement no longer valid, [sqlite3_step()] will still +** return [SQLITE_SCHEMA]. But unlike the legacy behavior, +** [SQLITE_SCHEMA] is now a fatal error. Calling +** [sqlite3_prepare_v2()] again will not make the +** error go away. Note: use [sqlite3_errmsg()] to find the text +** of the parsing error that results in an [SQLITE_SCHEMA] return. {END} +**
+** +**
2. +** When an error occurs, +** [sqlite3_step()] will return one of the detailed +** [error codes] or [extended error codes]. +** The legacy behavior was that [sqlite3_step()] would only return a generic +** [SQLITE_ERROR] result code and you would have to make a second call to +** [sqlite3_reset()] in order to find the underlying cause of the problem. +** With the "v2" prepare interfaces, the underlying reason for the error is +** returned immediately. +**
+**
                +** +** INVARIANTS: +** +** {F13011} The [sqlite3_prepare(db,zSql,...)] and +** [sqlite3_prepare_v2(db,zSql,...)] interfaces interpret the +** text in their zSql parameter as UTF-8. +** +** {F13012} The [sqlite3_prepare16(db,zSql,...)] and +** [sqlite3_prepare16_v2(db,zSql,...)] interfaces interpret the +** text in their zSql parameter as UTF-16 in the native byte order. +** +** {F13013} If the nByte argument to [sqlite3_prepare_v2(db,zSql,nByte,...)] +** and its variants is less than zero, then SQL text is +** read from zSql is read up to the first zero terminator. +** +** {F13014} If the nByte argument to [sqlite3_prepare_v2(db,zSql,nByte,...)] +** and its variants is non-negative, then nBytes bytes +** SQL text is read from zSql. +** +** {F13015} In [sqlite3_prepare_v2(db,zSql,N,P,pzTail)] and its variants +** if the zSql input text contains more than one SQL statement +** and pzTail is not NULL, then *pzTail is made to point to the +** first byte past the end of the first SQL statement in zSql. +** What does *pzTail point to if there is one statement? +** +** {F13016} A successful call to [sqlite3_prepare_v2(db,zSql,N,ppStmt,...)] +** or one of its variants writes into *ppStmt a pointer to a new +** [prepared statement] or a pointer to NULL +** if zSql contains nothing other than whitespace or comments. +** +** {F13019} The [sqlite3_prepare_v2()] interface and its variants return +** [SQLITE_OK] or an appropriate [error code] upon failure. +** +** {F13021} Before [sqlite3_prepare(db,zSql,nByte,ppStmt,pzTail)] or its +** variants returns an error (any value other than [SQLITE_OK]) +** it first sets *ppStmt to NULL. +*/ +int sqlite3_prepare( + sqlite3 *db, /* Database handle */ + const char *zSql, /* SQL statement, UTF-8 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const char **pzTail /* OUT: Pointer to unused portion of zSql */ +); +int sqlite3_prepare_v2( + sqlite3 *db, /* Database handle */ + const char *zSql, /* SQL statement, UTF-8 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const char **pzTail /* OUT: Pointer to unused portion of zSql */ +); +int sqlite3_prepare16( + sqlite3 *db, /* Database handle */ + const void *zSql, /* SQL statement, UTF-16 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const void **pzTail /* OUT: Pointer to unused portion of zSql */ +); +int sqlite3_prepare16_v2( + sqlite3 *db, /* Database handle */ + const void *zSql, /* SQL statement, UTF-16 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const void **pzTail /* OUT: Pointer to unused portion of zSql */ +); + +/* +** CAPIREF: Retrieving Statement SQL {F13100} +** +** This intereface can be used to retrieve a saved copy of the original +** SQL text used to create a [prepared statement]. +** +** INVARIANTS: +** +** {F13101} If the [prepared statement] passed as +** the an argument to [sqlite3_sql()] was compiled +** compiled using either [sqlite3_prepare_v2()] or +** [sqlite3_prepare16_v2()], +** then [sqlite3_sql()] function returns a pointer to a +** zero-terminated string containing a UTF-8 rendering +** of the original SQL statement. 
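A minimal sketch of the prepare/step/finalize life cycle described above, assuming an open connection db and a hypothetical "people" table; the bind and column accessor interfaces used here are documented further down:

    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db, "SELECT Name FROM people WHERE Age > ?",
                                -1, &pStmt, 0);                 /* step 1 */
    if( rc==SQLITE_OK ){
      sqlite3_bind_int(pStmt, 1, 21);                           /* step 2 */
      while( sqlite3_step(pStmt)==SQLITE_ROW ){                 /* step 3 */
        const unsigned char *zName = sqlite3_column_text(pStmt, 0);
        /* ... use zName ... */
      }
      sqlite3_finalize(pStmt);                                  /* step 5 */
    }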
+** +** {F13102} If the [prepared statement] passed as +** the an argument to [sqlite3_sql()] was compiled +** compiled using either [sqlite3_prepare()] or +** [sqlite3_prepare16()], +** then [sqlite3_sql()] function returns a NULL pointer. +** +** {F13103} The string returned by [sqlite3_sql(S)] is valid until the +** [prepared statement] S is deleted using [sqlite3_finalize(S)]. +*/ +const char *sqlite3_sql(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Dynamically Typed Value Object {F15000} +** +** SQLite uses the sqlite3_value object to represent all values +** that are or can be stored in a database table. +** SQLite uses dynamic typing for the values it stores. +** Values stored in sqlite3_value objects can be +** be integers, floating point values, strings, BLOBs, or NULL. +*/ +typedef struct Mem sqlite3_value; + +/* +** CAPI3REF: SQL Function Context Object {F16001} +** +** The context in which an SQL function executes is stored in an +** sqlite3_context object. A pointer to an sqlite3_context +** object is always first parameter to application-defined SQL functions. +*/ +typedef struct sqlite3_context sqlite3_context; + +/* +** CAPI3REF: Binding Values To Prepared Statements {F13500} +** +** In the SQL strings input to [sqlite3_prepare_v2()] and its +** variants, literals may be replace by a parameter in one +** of these forms: +** +**
                  +**
                • ? +**
                • ?NNN +**
                • :VVV +**
                • @VVV +**
                • $VVV +**
                +** +** In the parameter forms shown above NNN is an integer literal, +** VVV alpha-numeric parameter name. +** The values of these parameters (also called "host parameter names" +** or "SQL parameters") +** can be set using the sqlite3_bind_*() routines defined here. +** +** The first argument to the sqlite3_bind_*() routines always +** is a pointer to the [sqlite3_stmt] object returned from +** [sqlite3_prepare_v2()] or its variants. The second +** argument is the index of the parameter to be set. The +** first parameter has an index of 1. When the same named +** parameter is used more than once, second and subsequent +** occurrences have the same index as the first occurrence. +** The index for named parameters can be looked up using the +** [sqlite3_bind_parameter_name()] API if desired. The index +** for "?NNN" parameters is the value of NNN. +** The NNN value must be between 1 and the compile-time +** parameter SQLITE_MAX_VARIABLE_NUMBER (default value: 999). +** +** The third argument is the value to bind to the parameter. +** +** In those +** routines that have a fourth argument, its value is the number of bytes +** in the parameter. To be clear: the value is the number of bytes +** in the value, not the number of characters. The number +** of bytes does not include the zero-terminator at the end of strings. +** If the fourth parameter is negative, the length of the string is +** number of bytes up to the first zero terminator. +** +** The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and +** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or +** string after SQLite has finished with it. If the fifth argument is +** the special value [SQLITE_STATIC], then SQLite assumes that the +** information is in static, unmanaged space and does not need to be freed. +** If the fifth argument has the value [SQLITE_TRANSIENT], then +** SQLite makes its own private copy of the data immediately, before +** the sqlite3_bind_*() routine returns. +** +** The sqlite3_bind_zeroblob() routine binds a BLOB of length N that +** is filled with zeros. A zeroblob uses a fixed amount of memory +** (just an integer to hold it size) while it is being processed. +** Zeroblobs are intended to serve as place-holders for BLOBs whose +** content is later written using +** [sqlite3_blob_open | increment BLOB I/O] routines. A negative +** value for the zeroblob results in a zero-length BLOB. +** +** The sqlite3_bind_*() routines must be called after +** [sqlite3_prepare_v2()] (and its variants) or [sqlite3_reset()] and +** before [sqlite3_step()]. +** Bindings are not cleared by the [sqlite3_reset()] routine. +** Unbound parameters are interpreted as NULL. +** +** These routines return [SQLITE_OK] on success or an error code if +** anything goes wrong. [SQLITE_RANGE] is returned if the parameter +** index is out of range. [SQLITE_NOMEM] is returned if malloc fails. +** [SQLITE_MISUSE] might be returned if these routines are called on a +** virtual machine that is the wrong state or which has already been finalized. +** Detection of misuse is unreliable. Applications should not depend +** on SQLITE_MISUSE returns. SQLITE_MISUSE is intended to indicate a +** a logic error in the application. Future versions of SQLite might +** panic rather than return SQLITE_MISUSE. +** +** See also: [sqlite3_bind_parameter_count()], +** [sqlite3_bind_parameter_name()], and +** [sqlite3_bind_parameter_index()]. 
+** +** INVARIANTS: +** +** {F13506} The [sqlite3_prepare | SQL statement compiler] recognizes +** tokens of the forms "?", "?NNN", "$VVV", ":VVV", and "@VVV" +** as SQL parameters, where NNN is any sequence of one or more +** digits and where VVV is any sequence of one or more +** alphanumeric characters or "::" optionally followed by +** a string containing no spaces and contained within parentheses. +** +** {F13509} The initial value of an SQL parameter is NULL. +** +** {F13512} The index of an "?" SQL parameter is one larger than the +** largest index of SQL parameter to the left, or 1 if +** the "?" is the leftmost SQL parameter. +** +** {F13515} The index of an "?NNN" SQL parameter is the integer NNN. +** +** {F13518} The index of an ":VVV", "$VVV", or "@VVV" SQL parameter is +** the same as the index of leftmost occurances of the same +** parameter, or one more than the largest index over all +** parameters to the left if this is the first occurrance +** of this parameter, or 1 if this is the leftmost parameter. +** +** {F13521} The [sqlite3_prepare | SQL statement compiler] fail with +** an [SQLITE_RANGE] error if the index of an SQL parameter +** is less than 1 or greater than SQLITE_MAX_VARIABLE_NUMBER. +** +** {F13524} Calls to [sqlite3_bind_text | sqlite3_bind(S,N,V,...)] +** associate the value V with all SQL parameters having an +** index of N in the [prepared statement] S. +** +** {F13527} Calls to [sqlite3_bind_text | sqlite3_bind(S,N,...)] +** override prior calls with the same values of S and N. +** +** {F13530} Bindings established by [sqlite3_bind_text | sqlite3_bind(S,...)] +** persist across calls to [sqlite3_reset(S)]. +** +** {F13533} In calls to [sqlite3_bind_blob(S,N,V,L,D)], +** [sqlite3_bind_text(S,N,V,L,D)], or +** [sqlite3_bind_text16(S,N,V,L,D)] SQLite binds the first L +** bytes of the blob or string pointed to by V, when L +** is non-negative. +** +** {F13536} In calls to [sqlite3_bind_text(S,N,V,L,D)] or +** [sqlite3_bind_text16(S,N,V,L,D)] SQLite binds characters +** from V through the first zero character when L is negative. +** +** {F13539} In calls to [sqlite3_bind_blob(S,N,V,L,D)], +** [sqlite3_bind_text(S,N,V,L,D)], or +** [sqlite3_bind_text16(S,N,V,L,D)] when D is the special +** constant [SQLITE_STATIC], SQLite assumes that the value V +** is held in static unmanaged space that will not change +** during the lifetime of the binding. +** +** {F13542} In calls to [sqlite3_bind_blob(S,N,V,L,D)], +** [sqlite3_bind_text(S,N,V,L,D)], or +** [sqlite3_bind_text16(S,N,V,L,D)] when D is the special +** constant [SQLITE_TRANSIENT], the routine makes a +** private copy of V value before it returns. +** +** {F13545} In calls to [sqlite3_bind_blob(S,N,V,L,D)], +** [sqlite3_bind_text(S,N,V,L,D)], or +** [sqlite3_bind_text16(S,N,V,L,D)] when D is a pointer to +** a function, SQLite invokes that function to destroy the +** V value after it has finished using the V value. +** +** {F13548} In calls to [sqlite3_bind_zeroblob(S,N,V,L)] the value bound +** is a blob of L bytes, or a zero-length blob if L is negative. 
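A minimal sketch of binding named parameters, assuming an open connection db and the hypothetical "people" table from the earlier sketches; the interfaces used here are declared just below:

    sqlite3_stmt *pStmt = 0;
    if( sqlite3_prepare_v2(db,
            "INSERT INTO people(Name, Age) VALUES(:name, :age)",
            -1, &pStmt, 0)==SQLITE_OK ){
      int iName = sqlite3_bind_parameter_index(pStmt, ":name");
      int iAge  = sqlite3_bind_parameter_index(pStmt, ":age");
      sqlite3_bind_text(pStmt, iName, "Alice", -1, SQLITE_TRANSIENT);
      sqlite3_bind_int(pStmt, iAge, 43);
      sqlite3_step(pStmt);
      sqlite3_finalize(pStmt);
    }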
+*/ +int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*)); +int sqlite3_bind_double(sqlite3_stmt*, int, double); +int sqlite3_bind_int(sqlite3_stmt*, int, int); +int sqlite3_bind_int64(sqlite3_stmt*, int, sqlite3_int64); +int sqlite3_bind_null(sqlite3_stmt*, int); +int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*)); +int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*)); +int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*); +int sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n); + +/* +** CAPI3REF: Number Of SQL Parameters {F13600} +** +** This routine can be used to find the number of SQL parameters +** in a prepared statement. SQL parameters are tokens of the +** form "?", "?NNN", ":AAA", "$AAA", or "@AAA" that serve as +** place-holders for values that are [sqlite3_bind_blob | bound] +** to the parameters at a later time. +** +** This routine actually returns the index of the largest parameter. +** For all forms except ?NNN, this will correspond to the number of +** unique parameters. If parameters of the ?NNN are used, there may +** be gaps in the list. +** +** See also: [sqlite3_bind_blob|sqlite3_bind()], +** [sqlite3_bind_parameter_name()], and +** [sqlite3_bind_parameter_index()]. +** +** INVARIANTS: +** +** {F13601} The [sqlite3_bind_parameter_count(S)] interface returns +** the largest index of all SQL parameters in the +** [prepared statement] S, or 0 if S +** contains no SQL parameters. +*/ +int sqlite3_bind_parameter_count(sqlite3_stmt*); + +/* +** CAPI3REF: Name Of A Host Parameter {F13620} +** +** This routine returns a pointer to the name of the n-th +** SQL parameter in a [prepared statement]. +** SQL parameters of the form ":AAA" or "@AAA" or "$AAA" have a name +** which is the string ":AAA" or "@AAA" or "$VVV". +** In other words, the initial ":" or "$" or "@" +** is included as part of the name. +** Parameters of the form "?" or "?NNN" have no name. +** +** The first host parameter has an index of 1, not 0. +** +** If the value n is out of range or if the n-th parameter is +** nameless, then NULL is returned. The returned string is +** always in the UTF-8 encoding even if the named parameter was +** originally specified as UTF-16 in [sqlite3_prepare16()] or +** [sqlite3_prepare16_v2()]. +** +** See also: [sqlite3_bind_blob|sqlite3_bind()], +** [sqlite3_bind_parameter_count()], and +** [sqlite3_bind_parameter_index()]. +** +** INVARIANTS: +** +** {F13621} The [sqlite3_bind_parameter_name(S,N)] interface returns +** a UTF-8 rendering of the name of the SQL parameter in +** [prepared statement] S having index N, or +** NULL if there is no SQL parameter with index N or if the +** parameter with index N is an anonymous parameter "?" or +** a numbered parameter "?NNN". +*/ +const char *sqlite3_bind_parameter_name(sqlite3_stmt*, int); + +/* +** CAPI3REF: Index Of A Parameter With A Given Name {F13640} +** +** Return the index of an SQL parameter given its name. The +** index value returned is suitable for use as the second +** parameter to [sqlite3_bind_blob|sqlite3_bind()]. A zero +** is returned if no matching parameter is found. The parameter +** name must be given in UTF-8 even if the original statement +** was prepared from UTF-16 text using [sqlite3_prepare16_v2()]. +** +** See also: [sqlite3_bind_blob|sqlite3_bind()], +** [sqlite3_bind_parameter_count()], and +** [sqlite3_bind_parameter_index()]. 
+** +** INVARIANTS: +** +** {F13641} The [sqlite3_bind_parameter_index(S,N)] interface returns +** the index of SQL parameter in [prepared statement] +** S whose name matches the UTF-8 string N, or 0 if there is +** no match. +*/ +int sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName); + +/* +** CAPI3REF: Reset All Bindings On A Prepared Statement {F13660} +** +** Contrary to the intuition of many, [sqlite3_reset()] does not +** reset the [sqlite3_bind_blob | bindings] on a +** [prepared statement]. Use this routine to +** reset all host parameters to NULL. +** +** INVARIANTS: +** +** {F13661} The [sqlite3_clear_bindings(S)] interface resets all +** SQL parameter bindings in [prepared statement] S +** back to NULL. +*/ +int sqlite3_clear_bindings(sqlite3_stmt*); + +/* +** CAPI3REF: Number Of Columns In A Result Set {F13710} +** +** Return the number of columns in the result set returned by the +** [prepared statement]. This routine returns 0 +** if pStmt is an SQL statement that does not return data (for +** example an UPDATE). +** +** INVARIANTS: +** +** {F13711} The [sqlite3_column_count(S)] interface returns the number of +** columns in the result set generated by the +** [prepared statement] S, or 0 if S does not generate +** a result set. +*/ +int sqlite3_column_count(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Column Names In A Result Set {F13720} +** +** These routines return the name assigned to a particular column +** in the result set of a SELECT statement. The sqlite3_column_name() +** interface returns a pointer to a zero-terminated UTF8 string +** and sqlite3_column_name16() returns a pointer to a zero-terminated +** UTF16 string. The first parameter is the +** [prepared statement] that implements the SELECT statement. +** The second parameter is the column number. The left-most column is +** number 0. +** +** The returned string pointer is valid until either the +** [prepared statement] is destroyed by [sqlite3_finalize()] +** or until the next call sqlite3_column_name() or sqlite3_column_name16() +** on the same column. +** +** If sqlite3_malloc() fails during the processing of either routine +** (for example during a conversion from UTF-8 to UTF-16) then a +** NULL pointer is returned. +** +** The name of a result column is the value of the "AS" clause for +** that column, if there is an AS clause. If there is no AS clause +** then the name of the column is unspecified and may change from +** one release of SQLite to the next. +** +** INVARIANTS: +** +** {F13721} A successful invocation of the [sqlite3_column_name(S,N)] +** interface returns the name +** of the Nth column (where 0 is the left-most column) for the +** result set of [prepared statement] S as a +** zero-terminated UTF-8 string. +** +** {F13723} A successful invocation of the [sqlite3_column_name16(S,N)] +** interface returns the name +** of the Nth column (where 0 is the left-most column) for the +** result set of [prepared statement] S as a +** zero-terminated UTF-16 string in the native byte order. +** +** {F13724} The [sqlite3_column_name()] and [sqlite3_column_name16()] +** interfaces return a NULL pointer if they are unable to +** allocate memory memory to hold there normal return strings. +** +** {F13725} If the N parameter to [sqlite3_column_name(S,N)] or +** [sqlite3_column_name16(S,N)] is out of range, then the +** interfaces returns a NULL pointer. 
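A small sketch of inspecting a result-set shape with sqlite3_column_count() and sqlite3_column_name(); the query text and the open db handle are assumed inputs:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print the number and names of the columns a statement would return. */
    static int describe_query(sqlite3 *db, const char *zSql){
      sqlite3_stmt *pStmt;
      int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      int nCol = sqlite3_column_count(pStmt);  /* 0 for non-query statements */
      for(int i=0; i<nCol; i++){
        const char *zName = sqlite3_column_name(pStmt, i);
        printf("column %d: %s\n", i, zName ? zName : "?");  /* NULL only on malloc failure */
      }
      sqlite3_finalize(pStmt);
      return SQLITE_OK;
    }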
+** +** {F13726} The strings returned by [sqlite3_column_name(S,N)] and +** [sqlite3_column_name16(S,N)] are valid until the next +** call to either routine with the same S and N parameters +** or until [sqlite3_finalize(S)] is called. +** +** {F13727} When a result column of a [SELECT] statement contains +** an AS clause, the name of that column is the indentifier +** to the right of the AS keyword. +*/ +const char *sqlite3_column_name(sqlite3_stmt*, int N); +const void *sqlite3_column_name16(sqlite3_stmt*, int N); + +/* +** CAPI3REF: Source Of Data In A Query Result {F13740} +** +** These routines provide a means to determine what column of what +** table in which database a result of a SELECT statement comes from. +** The name of the database or table or column can be returned as +** either a UTF8 or UTF16 string. The _database_ routines return +** the database name, the _table_ routines return the table name, and +** the origin_ routines return the column name. +** The returned string is valid until +** the [prepared statement] is destroyed using +** [sqlite3_finalize()] or until the same information is requested +** again in a different encoding. +** +** The names returned are the original un-aliased names of the +** database, table, and column. +** +** The first argument to the following calls is a [prepared statement]. +** These functions return information about the Nth column returned by +** the statement, where N is the second function argument. +** +** If the Nth column returned by the statement is an expression +** or subquery and is not a column value, then all of these functions +** return NULL. These routine might also return NULL if a memory +** allocation error occurs. Otherwise, they return the +** name of the attached database, table and column that query result +** column was extracted from. +** +** As with all other SQLite APIs, those postfixed with "16" return +** UTF-16 encoded strings, the other functions return UTF-8. {END} +** +** These APIs are only available if the library was compiled with the +** SQLITE_ENABLE_COLUMN_METADATA preprocessor symbol defined. +** +** {U13751} +** If two or more threads call one or more of these routines against the same +** prepared statement and column at the same time then the results are +** undefined. +** +** INVARIANTS: +** +** {F13741} The [sqlite3_column_database_name(S,N)] interface returns either +** the UTF-8 zero-terminated name of the database from which the +** Nth result column of [prepared statement] S +** is extracted, or NULL if the the Nth column of S is a +** general expression or if unable to allocate memory +** to store the name. +** +** {F13742} The [sqlite3_column_database_name16(S,N)] interface returns either +** the UTF-16 native byte order +** zero-terminated name of the database from which the +** Nth result column of [prepared statement] S +** is extracted, or NULL if the the Nth column of S is a +** general expression or if unable to allocate memory +** to store the name. +** +** {F13743} The [sqlite3_column_table_name(S,N)] interface returns either +** the UTF-8 zero-terminated name of the table from which the +** Nth result column of [prepared statement] S +** is extracted, or NULL if the the Nth column of S is a +** general expression or if unable to allocate memory +** to store the name. 
+** +** {F13744} The [sqlite3_column_table_name16(S,N)] interface returns either +** the UTF-16 native byte order +** zero-terminated name of the table from which the +** Nth result column of [prepared statement] S +** is extracted, or NULL if the the Nth column of S is a +** general expression or if unable to allocate memory +** to store the name. +** +** {F13745} The [sqlite3_column_origin_name(S,N)] interface returns either +** the UTF-8 zero-terminated name of the table column from which the +** Nth result column of [prepared statement] S +** is extracted, or NULL if the the Nth column of S is a +** general expression or if unable to allocate memory +** to store the name. +** +** {F13746} The [sqlite3_column_origin_name16(S,N)] interface returns either +** the UTF-16 native byte order +** zero-terminated name of the table column from which the +** Nth result column of [prepared statement] S +** is extracted, or NULL if the the Nth column of S is a +** general expression or if unable to allocate memory +** to store the name. +** +** {F13748} The return values from +** [sqlite3_column_database_name|column metadata interfaces] +** are valid +** for the lifetime of the [prepared statement] +** or until the encoding is changed by another metadata +** interface call for the same prepared statement and column. +** +** LIMITATIONS: +** +** {U13751} If two or more threads call one or more +** [sqlite3_column_database_name|column metadata interfaces] +** the same [prepared statement] and result column +** at the same time then the results are undefined. +*/ +const char *sqlite3_column_database_name(sqlite3_stmt*,int); +const void *sqlite3_column_database_name16(sqlite3_stmt*,int); +const char *sqlite3_column_table_name(sqlite3_stmt*,int); +const void *sqlite3_column_table_name16(sqlite3_stmt*,int); +const char *sqlite3_column_origin_name(sqlite3_stmt*,int); +const void *sqlite3_column_origin_name16(sqlite3_stmt*,int); + +/* +** CAPI3REF: Declared Datatype Of A Query Result {F13760} +** +** The first parameter is a [prepared statement]. +** If this statement is a SELECT statement and the Nth column of the +** returned result set of that SELECT is a table column (not an +** expression or subquery) then the declared type of the table +** column is returned. If the Nth column of the result set is an +** expression or subquery, then a NULL pointer is returned. +** The returned string is always UTF-8 encoded. {END} +** For example, in the database schema: +** +** CREATE TABLE t1(c1 VARIANT); +** +** And the following statement compiled: +** +** SELECT c1 + 1, c1 FROM t1; +** +** Then this routine would return the string "VARIANT" for the second +** result column (i==1), and a NULL pointer for the first result column +** (i==0). +** +** SQLite uses dynamic run-time typing. So just because a column +** is declared to contain a particular type does not mean that the +** data stored in that column is of the declared type. SQLite is +** strongly typed, but the typing is dynamic not static. Type +** is associated with individual values, not with the containers +** used to hold those values. +** +** INVARIANTS: +** +** {F13761} A successful call to [sqlite3_column_decltype(S,N)] +** returns a zero-terminated UTF-8 string containing the +** the declared datatype of the table column that appears +** as the Nth column (numbered from 0) of the result set to the +** [prepared statement] S. 
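A brief sketch of reading declared column types with sqlite3_column_decltype(); as noted above, expression and subquery columns have no declared type and yield a NULL pointer (the db handle and SQL text are assumed):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Show the declared type of each result column of a query. */
    static void show_decltypes(sqlite3 *db, const char *zSql){
      sqlite3_stmt *pStmt;
      if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)!=SQLITE_OK ) return;
      for(int i=0; i<sqlite3_column_count(pStmt); i++){
        const char *zType = sqlite3_column_decltype(pStmt, i);
        printf("column %d: %s\n", i, zType ? zType : "(no declared type)");
      }
      sqlite3_finalize(pStmt);
    }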
+** +** {F13762} A successful call to [sqlite3_column_decltype16(S,N)] +** returns a zero-terminated UTF-16 native byte order string +** containing the declared datatype of the table column that appears +** as the Nth column (numbered from 0) of the result set to the +** [prepared statement] S. +** +** {F13763} If N is less than 0 or N is greater than or equal to +** the number of columns in [prepared statement] S +** or if the Nth column of S is an expression or subquery rather +** than a table column or if a memory allocation failure +** occurs during encoding conversions, then +** calls to [sqlite3_column_decltype(S,N)] or +** [sqlite3_column_decltype16(S,N)] return NULL. +*/ +const char *sqlite3_column_decltype(sqlite3_stmt*,int); +const void *sqlite3_column_decltype16(sqlite3_stmt*,int); + +/* +** CAPI3REF: Evaluate An SQL Statement {F13200} +** +** After an [prepared statement] has been prepared with a call +** to either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] or to one of +** the legacy interfaces [sqlite3_prepare()] or [sqlite3_prepare16()], +** then this function must be called one or more times to evaluate the +** statement. +** +** The details of the behavior of this sqlite3_step() interface depend +** on whether the statement was prepared using the newer "v2" interface +** [sqlite3_prepare_v2()] and [sqlite3_prepare16_v2()] or the older legacy +** interface [sqlite3_prepare()] and [sqlite3_prepare16()]. The use of the +** new "v2" interface is recommended for new applications but the legacy +** interface will continue to be supported. +** +** In the lagacy interface, the return value will be either [SQLITE_BUSY], +** [SQLITE_DONE], [SQLITE_ROW], [SQLITE_ERROR], or [SQLITE_MISUSE]. +** With the "v2" interface, any of the other [SQLITE_OK | result code] +** or [SQLITE_IOERR_READ | extended result code] might be returned as +** well. +** +** [SQLITE_BUSY] means that the database engine was unable to acquire the +** database locks it needs to do its job. If the statement is a COMMIT +** or occurs outside of an explicit transaction, then you can retry the +** statement. If the statement is not a COMMIT and occurs within a +** explicit transaction then you should rollback the transaction before +** continuing. +** +** [SQLITE_DONE] means that the statement has finished executing +** successfully. sqlite3_step() should not be called again on this virtual +** machine without first calling [sqlite3_reset()] to reset the virtual +** machine back to its initial state. +** +** If the SQL statement being executed returns any data, then +** [SQLITE_ROW] is returned each time a new row of data is ready +** for processing by the caller. The values may be accessed using +** the [sqlite3_column_int | column access functions]. +** sqlite3_step() is called again to retrieve the next row of data. +** +** [SQLITE_ERROR] means that a run-time error (such as a constraint +** violation) has occurred. sqlite3_step() should not be called again on +** the VM. More information may be found by calling [sqlite3_errmsg()]. +** With the legacy interface, a more specific error code (example: +** [SQLITE_INTERRUPT], [SQLITE_SCHEMA], [SQLITE_CORRUPT], and so forth) +** can be obtained by calling [sqlite3_reset()] on the +** [prepared statement]. In the "v2" interface, +** the more specific error code is returned directly by sqlite3_step(). +** +** [SQLITE_MISUSE] means that the this routine was called inappropriately. 
+** Perhaps it was called on a [prepared statement] that has +** already been [sqlite3_finalize | finalized] or on one that had +** previously returned [SQLITE_ERROR] or [SQLITE_DONE]. Or it could +** be the case that the same database connection is being used by two or +** more threads at the same moment in time. +** +** Goofy Interface Alert: +** In the legacy interface, +** the sqlite3_step() API always returns a generic error code, +** [SQLITE_ERROR], following any error other than [SQLITE_BUSY] +** and [SQLITE_MISUSE]. You must call [sqlite3_reset()] or +** [sqlite3_finalize()] in order to find one of the specific +** [error codes] that better describes the error. +** We admit that this is a goofy design. The problem has been fixed +** with the "v2" interface. If you prepare all of your SQL statements +** using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] instead +** of the legacy [sqlite3_prepare()] and [sqlite3_prepare16()], then the +** more specific [error codes] are returned directly +** by sqlite3_step(). The use of the "v2" interface is recommended. +** +** INVARIANTS: +** +** {F13202} If [prepared statement] S is ready to be +** run, then [sqlite3_step(S)] advances that prepared statement +** until to completion or until it is ready to return another +** row of the result set or an interrupt or run-time error occurs. +** +** {F15304} When a call to [sqlite3_step(S)] causes the +** [prepared statement] S to run to completion, +** the function returns [SQLITE_DONE]. +** +** {F15306} When a call to [sqlite3_step(S)] stops because it is ready +** to return another row of the result set, it returns +** [SQLITE_ROW]. +** +** {F15308} If a call to [sqlite3_step(S)] encounters an +** [sqlite3_interrupt|interrupt] or a run-time error, +** it returns an appropraite error code that is not one of +** [SQLITE_OK], [SQLITE_ROW], or [SQLITE_DONE]. +** +** {F15310} If an [sqlite3_interrupt|interrupt] or run-time error +** occurs during a call to [sqlite3_step(S)] +** for a [prepared statement] S created using +** legacy interfaces [sqlite3_prepare()] or +** [sqlite3_prepare16()] then the function returns either +** [SQLITE_ERROR], [SQLITE_BUSY], or [SQLITE_MISUSE]. +*/ +int sqlite3_step(sqlite3_stmt*); + +/* +** CAPI3REF: Number of columns in a result set {F13770} +** +** Return the number of values in the current row of the result set. +** +** INVARIANTS: +** +** {F13771} After a call to [sqlite3_step(S)] that returns +** [SQLITE_ROW], the [sqlite3_data_count(S)] routine +** will return the same value as the +** [sqlite3_column_count(S)] function. +** +** {F13772} After [sqlite3_step(S)] has returned any value other than +** [SQLITE_ROW] or before [sqlite3_step(S)] has been +** called on the [prepared statement] for +** the first time since it was [sqlite3_prepare|prepared] +** or [sqlite3_reset|reset], the [sqlite3_data_count(S)] +** routine returns zero. +*/ +int sqlite3_data_count(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Fundamental Datatypes {F10265} +** KEYWORDS: SQLITE_TEXT +** +** {F10266}Every value in SQLite has one of five fundamental datatypes: +** +**
                  +**
+**     - 64-bit signed integer
+**     - 64-bit IEEE floating point number
+**     - string
+**     - BLOB
+**     - NULL
                {END} +** +** These constants are codes for each of those types. +** +** Note that the SQLITE_TEXT constant was also used in SQLite version 2 +** for a completely different meaning. Software that links against both +** SQLite version 2 and SQLite version 3 should use SQLITE3_TEXT not +** SQLITE_TEXT. +*/ +#define SQLITE_INTEGER 1 +#define SQLITE_FLOAT 2 +#define SQLITE_BLOB 4 +#define SQLITE_NULL 5 +#ifdef SQLITE_TEXT +# undef SQLITE_TEXT +#else +# define SQLITE_TEXT 3 +#endif +#define SQLITE3_TEXT 3 + +/* +** CAPI3REF: Results Values From A Query {F13800} +** +** These routines form the "result set query" interface. +** +** These routines return information about +** a single column of the current result row of a query. In every +** case the first argument is a pointer to the +** [prepared statement] that is being +** evaluated (the [sqlite3_stmt*] that was returned from +** [sqlite3_prepare_v2()] or one of its variants) and +** the second argument is the index of the column for which information +** should be returned. The left-most column of the result set +** has an index of 0. +** +** If the SQL statement is not currently point to a valid row, or if the +** the column index is out of range, the result is undefined. +** These routines may only be called when the most recent call to +** [sqlite3_step()] has returned [SQLITE_ROW] and neither +** [sqlite3_reset()] nor [sqlite3_finalize()] has been call subsequently. +** If any of these routines are called after [sqlite3_reset()] or +** [sqlite3_finalize()] or after [sqlite3_step()] has returned +** something other than [SQLITE_ROW], the results are undefined. +** If [sqlite3_step()] or [sqlite3_reset()] or [sqlite3_finalize()] +** are called from a different thread while any of these routines +** are pending, then the results are undefined. +** +** The sqlite3_column_type() routine returns +** [SQLITE_INTEGER | datatype code] for the initial data type +** of the result column. The returned value is one of [SQLITE_INTEGER], +** [SQLITE_FLOAT], [SQLITE_TEXT], [SQLITE_BLOB], or [SQLITE_NULL]. The value +** returned by sqlite3_column_type() is only meaningful if no type +** conversions have occurred as described below. After a type conversion, +** the value returned by sqlite3_column_type() is undefined. Future +** versions of SQLite may change the behavior of sqlite3_column_type() +** following a type conversion. +** +** If the result is a BLOB or UTF-8 string then the sqlite3_column_bytes() +** routine returns the number of bytes in that BLOB or string. +** If the result is a UTF-16 string, then sqlite3_column_bytes() converts +** the string to UTF-8 and then returns the number of bytes. +** If the result is a numeric value then sqlite3_column_bytes() uses +** [sqlite3_snprintf()] to convert that value to a UTF-8 string and returns +** the number of bytes in that string. +** The value returned does not include the zero terminator at the end +** of the string. For clarity: the value returned is the number of +** bytes in the string, not the number of characters. +** +** Strings returned by sqlite3_column_text() and sqlite3_column_text16(), +** even empty strings, are always zero terminated. The return +** value from sqlite3_column_blob() for a zero-length blob is an arbitrary +** pointer, possibly even a NULL pointer. +** +** The sqlite3_column_bytes16() routine is similar to sqlite3_column_bytes() +** but leaves the result in UTF-16 in native byte order instead of UTF-8. +** The zero terminator is not included in this count. 
+** +** These routines attempt to convert the value where appropriate. For +** example, if the internal representation is FLOAT and a text result +** is requested, [sqlite3_snprintf()] is used internally to do the conversion +** automatically. The following table details the conversions that +** are applied: +** +**
                +**
                +**
+**     Internal Type    Requested Type    Conversion
+**     -------------    --------------    --------------------------------
+**     NULL             INTEGER           Result is 0
+**     NULL             FLOAT             Result is 0.0
+**     NULL             TEXT              Result is NULL pointer
+**     NULL             BLOB              Result is NULL pointer
+**     INTEGER          FLOAT             Convert from integer to float
+**     INTEGER          TEXT              ASCII rendering of the integer
+**     INTEGER          BLOB              Same as INTEGER->TEXT
+**     FLOAT            INTEGER           Convert from float to integer
+**     FLOAT            TEXT              ASCII rendering of the float
+**     FLOAT            BLOB              Same as FLOAT->TEXT
+**     TEXT             INTEGER           Use atoi()
+**     TEXT             FLOAT             Use atof()
+**     TEXT             BLOB              No change
+**     BLOB             INTEGER           Convert to TEXT then use atoi()
+**     BLOB             FLOAT             Convert to TEXT then use atof()
+**     BLOB             TEXT              Add a zero terminator if needed
+**
+** The table above makes reference to standard C library functions atoi()
+** and atof().  SQLite does not really use these functions.  It has its
+** own equivalent internal routines.  The atoi() and atof() names are
+** used in the table for brevity and because they are familiar to most
+** C programmers.
+**
+** Note that when type conversions occur, pointers returned by prior
+** calls to sqlite3_column_blob(), sqlite3_column_text(), and/or
+** sqlite3_column_text16() may be invalidated.
+** Type conversions and pointer invalidations might occur
+** in the following cases:
+**
                  +**
+**     - The initial content is a BLOB and sqlite3_column_text()
+**       or sqlite3_column_text16() is called.  A zero-terminator might
+**       need to be added to the string.
+**
+**     - The initial content is UTF-8 text and sqlite3_column_bytes16() or
+**       sqlite3_column_text16() is called.  The content must be converted
+**       to UTF-16.
+**
+**     - The initial content is UTF-16 text and sqlite3_column_bytes() or
+**       sqlite3_column_text() is called.  The content must be converted
+**       to UTF-8.
+**
+** Conversions between UTF-16be and UTF-16le are always done in place and do
+** not invalidate a prior pointer, though of course the content of the buffer
+** that the prior pointer points to will have been modified.  Other kinds
+** of conversion are done in place when possible, but sometimes it is
+** not possible and in those cases prior pointers are invalidated.
+**
+** The safest and easiest-to-remember policy is to invoke these routines
+** in one of the following ways:
+**
                  +**
+**     - sqlite3_column_text() followed by sqlite3_column_bytes()
+**     - sqlite3_column_blob() followed by sqlite3_column_bytes()
+**     - sqlite3_column_text16() followed by sqlite3_column_bytes16()
                +** +** In other words, you should call sqlite3_column_text(), sqlite3_column_blob(), +** or sqlite3_column_text16() first to force the result into the desired +** format, then invoke sqlite3_column_bytes() or sqlite3_column_bytes16() to +** find the size of the result. Do not mix call to sqlite3_column_text() or +** sqlite3_column_blob() with calls to sqlite3_column_bytes16(). And do not +** mix calls to sqlite3_column_text16() with calls to sqlite3_column_bytes(). +** +** The pointers returned are valid until a type conversion occurs as +** described above, or until [sqlite3_step()] or [sqlite3_reset()] or +** [sqlite3_finalize()] is called. The memory space used to hold strings +** and blobs is freed automatically. Do not pass the pointers returned +** [sqlite3_column_blob()], [sqlite3_column_text()], etc. into +** [sqlite3_free()]. +** +** If a memory allocation error occurs during the evaluation of any +** of these routines, a default value is returned. The default value +** is either the integer 0, the floating point number 0.0, or a NULL +** pointer. Subsequent calls to [sqlite3_errcode()] will return +** [SQLITE_NOMEM]. +** +** INVARIANTS: +** +** {F13803} The [sqlite3_column_blob(S,N)] interface converts the +** Nth column in the current row of the result set for +** [prepared statement] S into a blob and then returns a +** pointer to the converted value. +** +** {F13806} The [sqlite3_column_bytes(S,N)] interface returns the +** number of bytes in the blob or string (exclusive of the +** zero terminator on the string) that was returned by the +** most recent call to [sqlite3_column_blob(S,N)] or +** [sqlite3_column_text(S,N)]. +** +** {F13809} The [sqlite3_column_bytes16(S,N)] interface returns the +** number of bytes in the string (exclusive of the +** zero terminator on the string) that was returned by the +** most recent call to [sqlite3_column_text16(S,N)]. +** +** {F13812} The [sqlite3_column_double(S,N)] interface converts the +** Nth column in the current row of the result set for +** [prepared statement] S into a floating point value and +** returns a copy of that value. +** +** {F13815} The [sqlite3_column_int(S,N)] interface converts the +** Nth column in the current row of the result set for +** [prepared statement] S into a 64-bit signed integer and +** returns the lower 32 bits of that integer. +** +** {F13818} The [sqlite3_column_int64(S,N)] interface converts the +** Nth column in the current row of the result set for +** [prepared statement] S into a 64-bit signed integer and +** returns a copy of that integer. +** +** {F13821} The [sqlite3_column_text(S,N)] interface converts the +** Nth column in the current row of the result set for +** [prepared statement] S into a zero-terminated UTF-8 +** string and returns a pointer to that string. +** +** {F13824} The [sqlite3_column_text16(S,N)] interface converts the +** Nth column in the current row of the result set for +** [prepared statement] S into a zero-terminated 2-byte +** aligned UTF-16 native byte order +** string and returns a pointer to that string. +** +** {F13827} The [sqlite3_column_type(S,N)] interface returns +** one of [SQLITE_NULL], [SQLITE_INTEGER], [SQLITE_FLOAT], +** [SQLITE_TEXT], or [SQLITE_BLOB] as appropriate for +** the Nth column in the current row of the result set for +** [prepared statement] S. 
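A sketch of the recommended calling pattern, stepping through a query and fetching each value with sqlite3_column_text() before sqlite3_column_bytes(); the db handle and SQL text are assumed, and the output format is arbitrary:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Step through a query and print each row, pipe-separated. */
    static int dump_query(sqlite3 *db, const char *zSql){
      sqlite3_stmt *pStmt;
      int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){
        int nCol = sqlite3_data_count(pStmt);
        for(int i=0; i<nCol; i++){
          const unsigned char *zVal = sqlite3_column_text(pStmt, i);
          if( zVal==0 ){
            printf("NULL");
          }else{
            /* Bytes are queried only after the text conversion above. */
            printf("%.*s", sqlite3_column_bytes(pStmt, i), (const char*)zVal);
          }
          fputs(i==nCol-1 ? "\n" : "|", stdout);
        }
      }
      sqlite3_finalize(pStmt);   /* reports any error left by sqlite3_step() */
      return rc==SQLITE_DONE ? SQLITE_OK : rc;
    }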
+** +** {F13830} The [sqlite3_column_value(S,N)] interface returns a +** pointer to the [sqlite3_value] object that for the +** Nth column in the current row of the result set for +** [prepared statement] S. +*/ +const void *sqlite3_column_blob(sqlite3_stmt*, int iCol); +int sqlite3_column_bytes(sqlite3_stmt*, int iCol); +int sqlite3_column_bytes16(sqlite3_stmt*, int iCol); +double sqlite3_column_double(sqlite3_stmt*, int iCol); +int sqlite3_column_int(sqlite3_stmt*, int iCol); +sqlite3_int64 sqlite3_column_int64(sqlite3_stmt*, int iCol); +const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol); +const void *sqlite3_column_text16(sqlite3_stmt*, int iCol); +int sqlite3_column_type(sqlite3_stmt*, int iCol); +sqlite3_value *sqlite3_column_value(sqlite3_stmt*, int iCol); + +/* +** CAPI3REF: Destroy A Prepared Statement Object {F13300} +** +** The sqlite3_finalize() function is called to delete a +** [prepared statement]. If the statement was +** executed successfully, or not executed at all, then SQLITE_OK is returned. +** If execution of the statement failed then an +** [error code] or [extended error code] +** is returned. +** +** This routine can be called at any point during the execution of the +** [prepared statement]. If the virtual machine has not +** completed execution when this routine is called, that is like +** encountering an error or an interrupt. (See [sqlite3_interrupt()].) +** Incomplete updates may be rolled back and transactions cancelled, +** depending on the circumstances, and the +** [error code] returned will be [SQLITE_ABORT]. +** +** INVARIANTS: +** +** {F11302} The [sqlite3_finalize(S)] interface destroys the +** [prepared statement] S and releases all +** memory and file resources held by that object. +** +** {F11304} If the most recent call to [sqlite3_step(S)] for the +** [prepared statement] S returned an error, +** then [sqlite3_finalize(S)] returns that same error. +*/ +int sqlite3_finalize(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Reset A Prepared Statement Object {F13330} +** +** The sqlite3_reset() function is called to reset a +** [prepared statement] object. +** back to its initial state, ready to be re-executed. +** Any SQL statement variables that had values bound to them using +** the [sqlite3_bind_blob | sqlite3_bind_*() API] retain their values. +** Use [sqlite3_clear_bindings()] to reset the bindings. +** +** {F11332} The [sqlite3_reset(S)] interface resets the [prepared statement] S +** back to the beginning of its program. +** +** {F11334} If the most recent call to [sqlite3_step(S)] for +** [prepared statement] S returned [SQLITE_ROW] or [SQLITE_DONE], +** or if [sqlite3_step(S)] has never before been called on S, +** then [sqlite3_reset(S)] returns [SQLITE_OK]. +** +** {F11336} If the most recent call to [sqlite3_step(S)] for +** [prepared statement] S indicated an error, then +** [sqlite3_reset(S)] returns an appropriate [error code]. +** +** {F11338} The [sqlite3_reset(S)] interface does not change the values +** of any [sqlite3_bind_blob|bindings] on [prepared statement] S. +*/ +int sqlite3_reset(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Create Or Redefine SQL Functions {F16100} +** KEYWORDS: {function creation routines} +** +** These two functions (collectively known as +** "function creation routines") are used to add SQL functions or aggregates +** or to redefine the behavior of existing SQL functions or aggregates. 
The
+** only difference between the two is that the second parameter, the
+** name of the (scalar) function or aggregate, is encoded in UTF-8 for
+** sqlite3_create_function() and UTF-16 for sqlite3_create_function16().
+**
+** The first parameter is the [database connection] to which the SQL
+** function is to be added.  If a single
+** program uses more than one [database connection] internally, then SQL
+** functions must be added individually to each [database connection].
+**
+** The second parameter is the name of the SQL function to be created
+** or redefined.
+** The length of the name is limited to 255 bytes, exclusive of the
+** zero-terminator.  Note that the name length limit is in bytes, not
+** characters.  Any attempt to create a function with a longer name
+** will result in an SQLITE_ERROR error.
+**
+** The third parameter is the number of arguments that the SQL function or
+** aggregate takes.  If this parameter is negative, then the SQL function or
+** aggregate may take any number of arguments.
+**
+** The fourth parameter, eTextRep, specifies what
+** [SQLITE_UTF8 | text encoding] this SQL function prefers for
+** its parameters.  Any SQL function implementation should be able to
+** work with UTF-8, UTF-16le, or UTF-16be.  But some implementations may be
+** more efficient with one encoding than another.  It is allowed to
+** invoke sqlite3_create_function() or sqlite3_create_function16() multiple
+** times with the same function but with different values of eTextRep.
+** When multiple implementations of the same function are available, SQLite
+** will pick the one that involves the least amount of data conversion.
+** If there is only a single implementation which does not care what
+** text encoding is used, then the fourth argument should be
+** [SQLITE_ANY].
+**
+** The fifth parameter is an arbitrary pointer.  The implementation
+** of the function can gain access to this pointer using
+** [sqlite3_user_data()].
+**
+** The sixth, seventh, and eighth parameters, xFunc, xStep and xFinal, are
+** pointers to C-language functions that implement the SQL
+** function or aggregate.  A scalar SQL function requires an implementation of
+** the xFunc callback only; NULL pointers should be passed as the xStep
+** and xFinal parameters.  An aggregate SQL function requires an implementation
+** of xStep and xFinal and NULL should be passed for xFunc.  To delete an
+** existing SQL function or aggregate, pass NULL for all three function
+** callbacks.
+**
+** It is permitted to register multiple implementations of the same
+** function with the same name but with either differing numbers of
+** arguments or differing preferred text encodings.  SQLite will use
+** the implementation that most closely matches the way in which the
+** SQL function is used.
+**
+** INVARIANTS:
+**
+** {F16103} The [sqlite3_create_function16()] interface behaves exactly
+** like [sqlite3_create_function()] in every way except that it
+** interprets the zFunctionName argument as
+** zero-terminated UTF-16 native byte order instead of as a
+** zero-terminated UTF-8.
+**
+** {F16106} A successful invocation of
+** the [sqlite3_create_function(D,X,N,E,...)] interface registers
+** or replaces callback functions in [database connection] D
+** used to implement the SQL function named X with N parameters
+** and having a preferred text encoding of E.
+** +** {F16109} A successful call to [sqlite3_create_function(D,X,N,E,P,F,S,L)] +** replaces the P, F, S, and L values from any prior calls with +** the same D, X, N, and E values. +** +** {F16112} The [sqlite3_create_function(D,X,...)] interface fails with +** a return code of [SQLITE_ERROR] if the SQL function name X is +** longer than 255 bytes exclusive of the zero terminator. +** +** {F16118} Either F must be NULL and S and L are non-NULL or else F +** is non-NULL and S and L are NULL, otherwise +** [sqlite3_create_function(D,X,N,E,P,F,S,L)] returns [SQLITE_ERROR]. +** +** {F16121} The [sqlite3_create_function(D,...)] interface fails with an +** error code of [SQLITE_BUSY] if there exist [prepared statements] +** associated with the [database connection] D. +** +** {F16124} The [sqlite3_create_function(D,X,N,...)] interface fails with an +** error code of [SQLITE_ERROR] if parameter N (specifying the number +** of arguments to the SQL function being registered) is less +** than -1 or greater than 127. +** +** {F16127} When N is non-negative, the [sqlite3_create_function(D,X,N,...)] +** interface causes callbacks to be invoked for the SQL function +** named X when the number of arguments to the SQL function is +** exactly N. +** +** {F16130} When N is -1, the [sqlite3_create_function(D,X,N,...)] +** interface causes callbacks to be invoked for the SQL function +** named X with any number of arguments. +** +** {F16133} When calls to [sqlite3_create_function(D,X,N,...)] +** specify multiple implementations of the same function X +** and when one implementation has N>=0 and the other has N=(-1) +** the implementation with a non-zero N is preferred. +** +** {F16136} When calls to [sqlite3_create_function(D,X,N,E,...)] +** specify multiple implementations of the same function X with +** the same number of arguments N but with different +** encodings E, then the implementation where E matches the +** database encoding is preferred. +** +** {F16139} For an aggregate SQL function created using +** [sqlite3_create_function(D,X,N,E,P,0,S,L)] the finializer +** function L will always be invoked exactly once if the +** step function S is called one or more times. +*/ +int sqlite3_create_function( + sqlite3 *db, + const char *zFunctionName, + int nArg, + int eTextRep, + void *pApp, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*) +); +int sqlite3_create_function16( + sqlite3 *db, + const void *zFunctionName, + int nArg, + int eTextRep, + void *pApp, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*) +); + +/* +** CAPI3REF: Text Encodings {F10267} +** +** These constant define integer codes that represent the various +** text encodings supported by SQLite. +*/ +#define SQLITE_UTF8 1 +#define SQLITE_UTF16LE 2 +#define SQLITE_UTF16BE 3 +#define SQLITE_UTF16 4 /* Use native byte order */ +#define SQLITE_ANY 5 /* sqlite3_create_function only */ +#define SQLITE_UTF16_ALIGNED 8 /* sqlite3_create_collation only */ + +/* +** CAPI3REF: Obsolete Functions +** +** These functions are all now obsolete. In order to maintain +** backwards compatibility with older code, we continue to support +** these functions. However, new development projects should avoid +** the use of these functions. To help encourage people to avoid +** using these functions, we are not going to tell you want they do. 
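A sketch of registering a scalar SQL function with sqlite3_create_function() as described above; the function name reverse and its byte-wise behaviour (ASCII text assumed) are illustrative choices, not part of the library:

    #include <sqlite3.h>

    /* reverse(X): return the text of X with its bytes reversed. */
    static void reverseFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      const unsigned char *zIn = sqlite3_value_text(argv[0]);
      int n = sqlite3_value_bytes(argv[0]);
      if( zIn==0 ){ sqlite3_result_null(ctx); return; }
      if( n==0 ){ sqlite3_result_text(ctx, "", 0, SQLITE_STATIC); return; }
      char *zOut = sqlite3_malloc(n);
      if( zOut==0 ){ sqlite3_result_error_nomem(ctx); return; }
      for(int i=0; i<n; i++) zOut[i] = zIn[n-1-i];
      sqlite3_result_text(ctx, zOut, n, sqlite3_free);  /* freed when done */
    }

    /* Scalar function: xFunc only, xStep and xFinal are NULL. */
    static int register_reverse(sqlite3 *db){
      return sqlite3_create_function(db, "reverse", 1, SQLITE_UTF8, 0,
                                     reverseFunc, 0, 0);
    }

After registration, a query such as SELECT reverse(name) FROM people (a hypothetical table) would invoke reverseFunc once per row.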
+*/ +int sqlite3_aggregate_count(sqlite3_context*); +int sqlite3_expired(sqlite3_stmt*); +int sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*); +int sqlite3_global_recover(void); +void sqlite3_thread_cleanup(void); +int sqlite3_memory_alarm(void(*)(void*,sqlite3_int64,int),void*,sqlite3_int64); + +/* +** CAPI3REF: Obtaining SQL Function Parameter Values {F15100} +** +** The C-language implementation of SQL functions and aggregates uses +** this set of interface routines to access the parameter values on +** the function or aggregate. +** +** The xFunc (for scalar functions) or xStep (for aggregates) parameters +** to [sqlite3_create_function()] and [sqlite3_create_function16()] +** define callbacks that implement the SQL functions and aggregates. +** The 4th parameter to these callbacks is an array of pointers to +** [sqlite3_value] objects. There is one [sqlite3_value] object for +** each parameter to the SQL function. These routines are used to +** extract values from the [sqlite3_value] objects. +** +** These routines work just like the corresponding +** [sqlite3_column_blob | sqlite3_column_* routines] except that +** these routines take a single [sqlite3_value*] pointer instead +** of an [sqlite3_stmt*] pointer and an integer column number. +** +** The sqlite3_value_text16() interface extracts a UTF16 string +** in the native byte-order of the host machine. The +** sqlite3_value_text16be() and sqlite3_value_text16le() interfaces +** extract UTF16 strings as big-endian and little-endian respectively. +** +** The sqlite3_value_numeric_type() interface attempts to apply +** numeric affinity to the value. This means that an attempt is +** made to convert the value to an integer or floating point. If +** such a conversion is possible without loss of information (in other +** words if the value is a string that looks like a number) +** then the conversion is done. Otherwise no conversion occurs. The +** [SQLITE_INTEGER | datatype] after conversion is returned. +** +** Please pay particular attention to the fact that the pointer that +** is returned from [sqlite3_value_blob()], [sqlite3_value_text()], or +** [sqlite3_value_text16()] can be invalidated by a subsequent call to +** [sqlite3_value_bytes()], [sqlite3_value_bytes16()], [sqlite3_value_text()], +** or [sqlite3_value_text16()]. +** +** These routines must be called from the same thread as +** the SQL function that supplied the sqlite3_value* parameters. +** Or, if the sqlite3_value* argument comes from the [sqlite3_column_value()] +** interface, then these routines should be called from the same thread +** that ran [sqlite3_column_value()]. +** +** +** INVARIANTS: +** +** {F15103} The [sqlite3_value_blob(V)] interface converts the +** [sqlite3_value] object V into a blob and then returns a +** pointer to the converted value. +** +** {F15106} The [sqlite3_value_bytes(V)] interface returns the +** number of bytes in the blob or string (exclusive of the +** zero terminator on the string) that was returned by the +** most recent call to [sqlite3_value_blob(V)] or +** [sqlite3_value_text(V)]. +** +** {F15109} The [sqlite3_value_bytes16(V)] interface returns the +** number of bytes in the string (exclusive of the +** zero terminator on the string) that was returned by the +** most recent call to [sqlite3_value_text16(V)], +** [sqlite3_value_text16be(V)], or [sqlite3_value_text16le(V)]. 
+** +** {F15112} The [sqlite3_value_double(V)] interface converts the +** [sqlite3_value] object V into a floating point value and +** returns a copy of that value. +** +** {F15115} The [sqlite3_value_int(V)] interface converts the +** [sqlite3_value] object V into a 64-bit signed integer and +** returns the lower 32 bits of that integer. +** +** {F15118} The [sqlite3_value_int64(V)] interface converts the +** [sqlite3_value] object V into a 64-bit signed integer and +** returns a copy of that integer. +** +** {F15121} The [sqlite3_value_text(V)] interface converts the +** [sqlite3_value] object V into a zero-terminated UTF-8 +** string and returns a pointer to that string. +** +** {F15124} The [sqlite3_value_text16(V)] interface converts the +** [sqlite3_value] object V into a zero-terminated 2-byte +** aligned UTF-16 native byte order +** string and returns a pointer to that string. +** +** {F15127} The [sqlite3_value_text16be(V)] interface converts the +** [sqlite3_value] object V into a zero-terminated 2-byte +** aligned UTF-16 big-endian +** string and returns a pointer to that string. +** +** {F15130} The [sqlite3_value_text16le(V)] interface converts the +** [sqlite3_value] object V into a zero-terminated 2-byte +** aligned UTF-16 little-endian +** string and returns a pointer to that string. +** +** {F15133} The [sqlite3_value_type(V)] interface returns +** one of [SQLITE_NULL], [SQLITE_INTEGER], [SQLITE_FLOAT], +** [SQLITE_TEXT], or [SQLITE_BLOB] as appropriate for +** the [sqlite3_value] object V. +** +** {F15136} The [sqlite3_value_numeric_type(V)] interface converts +** the [sqlite3_value] object V into either an integer or +** a floating point value if it can do so without loss of +** information, and returns one of [SQLITE_NULL], +** [SQLITE_INTEGER], [SQLITE_FLOAT], [SQLITE_TEXT], or +** [SQLITE_BLOB] as appropriate for +** the [sqlite3_value] object V after the conversion attempt. +*/ +const void *sqlite3_value_blob(sqlite3_value*); +int sqlite3_value_bytes(sqlite3_value*); +int sqlite3_value_bytes16(sqlite3_value*); +double sqlite3_value_double(sqlite3_value*); +int sqlite3_value_int(sqlite3_value*); +sqlite3_int64 sqlite3_value_int64(sqlite3_value*); +const unsigned char *sqlite3_value_text(sqlite3_value*); +const void *sqlite3_value_text16(sqlite3_value*); +const void *sqlite3_value_text16le(sqlite3_value*); +const void *sqlite3_value_text16be(sqlite3_value*); +int sqlite3_value_type(sqlite3_value*); +int sqlite3_value_numeric_type(sqlite3_value*); + +/* +** CAPI3REF: Obtain Aggregate Function Context {F16210} +** +** The implementation of aggregate SQL functions use this routine to allocate +** a structure for storing their state. +** The first time the sqlite3_aggregate_context() routine is +** is called for a particular aggregate, SQLite allocates nBytes of memory +** zeros that memory, and returns a pointer to it. +** On second and subsequent calls to sqlite3_aggregate_context() +** for the same aggregate function index, the same buffer is returned. +** The implementation +** of the aggregate can use the returned buffer to accumulate data. +** +** SQLite automatically frees the allocated buffer when the aggregate +** query concludes. +** +** The first parameter should be a copy of the +** [sqlite3_context | SQL function context] that is the first +** parameter to the callback routine that implements the aggregate +** function. +** +** This routine must be called from the same thread in which +** the aggregate SQL function is running. 
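A sketch of an aggregate implementation built on sqlite3_aggregate_context() and the sqlite3_value_* accessors above; the aggregate name sum64 is an illustrative choice:

    #include <sqlite3.h>

    typedef struct SumCtx { sqlite3_int64 iSum; } SumCtx;

    /* sum64(X): xStep accumulates into state allocated (and zeroed) by
    ** sqlite3_aggregate_context(); xFinal reports the total. */
    static void sum64Step(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      SumCtx *p = (SumCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
      if( p && sqlite3_value_type(argv[0])!=SQLITE_NULL ){
        p->iSum += sqlite3_value_int64(argv[0]);
      }
    }
    static void sum64Final(sqlite3_context *ctx){
      SumCtx *p = (SumCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
      sqlite3_result_int64(ctx, p ? p->iSum : 0);
    }

    /* Aggregate: xFunc is NULL, xStep and xFinal are supplied. */
    static int register_sum64(sqlite3 *db){
      return sqlite3_create_function(db, "sum64", 1, SQLITE_ANY, 0,
                                     0, sum64Step, sum64Final);
    }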
+** +** INVARIANTS: +** +** {F16211} The first invocation of [sqlite3_aggregate_context(C,N)] for +** a particular instance of an aggregate function (for a particular +** context C) causes SQLite to allocation N bytes of memory, +** zero that memory, and return a pointer to the allocationed +** memory. +** +** {F16213} If a memory allocation error occurs during +** [sqlite3_aggregate_context(C,N)] then the function returns 0. +** +** {F16215} Second and subsequent invocations of +** [sqlite3_aggregate_context(C,N)] for the same context pointer C +** ignore the N parameter and return a pointer to the same +** block of memory returned by the first invocation. +** +** {F16217} The memory allocated by [sqlite3_aggregate_context(C,N)] is +** automatically freed on the next call to [sqlite3_reset()] +** or [sqlite3_finalize()] for the [prepared statement] containing +** the aggregate function associated with context C. +*/ +void *sqlite3_aggregate_context(sqlite3_context*, int nBytes); + +/* +** CAPI3REF: User Data For Functions {F16240} +** +** The sqlite3_user_data() interface returns a copy of +** the pointer that was the pUserData parameter (the 5th parameter) +** of the the [sqlite3_create_function()] +** and [sqlite3_create_function16()] routines that originally +** registered the application defined function. {END} +** +** This routine must be called from the same thread in which +** the application-defined function is running. +** +** INVARIANTS: +** +** {F16243} The [sqlite3_user_data(C)] interface returns a copy of the +** P pointer from the [sqlite3_create_function(D,X,N,E,P,F,S,L)] +** or [sqlite3_create_function16(D,X,N,E,P,F,S,L)] call that +** registered the SQL function associated with +** [sqlite3_context] C. +*/ +void *sqlite3_user_data(sqlite3_context*); + +/* +** CAPI3REF: Function Auxiliary Data {F16270} +** +** The following two functions may be used by scalar SQL functions to +** associate meta-data with argument values. If the same value is passed to +** multiple invocations of the same SQL function during query execution, under +** some circumstances the associated meta-data may be preserved. This may +** be used, for example, to add a regular-expression matching scalar +** function. The compiled version of the regular expression is stored as +** meta-data associated with the SQL value passed as the regular expression +** pattern. The compiled regular expression can be reused on multiple +** invocations of the same function so that the original pattern string +** does not need to be recompiled on each invocation. +** +** The sqlite3_get_auxdata() interface returns a pointer to the meta-data +** associated by the sqlite3_set_auxdata() function with the Nth argument +** value to the application-defined function. +** If no meta-data has been ever been set for the Nth +** argument of the function, or if the cooresponding function parameter +** has changed since the meta-data was set, then sqlite3_get_auxdata() +** returns a NULL pointer. +** +** The sqlite3_set_auxdata() interface saves the meta-data +** pointed to by its 3rd parameter as the meta-data for the N-th +** argument of the application-defined function. Subsequent +** calls to sqlite3_get_auxdata() might return this data, if it has +** not been destroyed. +** If it is not NULL, SQLite will invoke the destructor +** function given by the 4th parameter to sqlite3_set_auxdata() on +** the meta-data when the corresponding function parameter changes +** or when the SQL statement completes, whichever comes first. 
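A sketch of the meta-data caching idea described above; pattern_compile(), pattern_match(), and pattern_free() are hypothetical helpers standing in for any expensive per-pattern setup such as regular-expression compilation:

    #include <sqlite3.h>

    /* Hypothetical pattern compiler, matcher, and destructor. */
    typedef struct Pattern Pattern;
    Pattern *pattern_compile(const char *zPattern);
    int pattern_match(Pattern *pPat, const char *zText);
    void pattern_free(void *pPat);

    /* matches(P,T): compile P at most once per statement, caching the
    ** compiled form on argument 0 with sqlite3_set_auxdata(). */
    static void matchesFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      Pattern *pPat = (Pattern*)sqlite3_get_auxdata(ctx, 0);
      if( pPat==0 ){
        const char *zPattern = (const char*)sqlite3_value_text(argv[0]);
        pPat = zPattern ? pattern_compile(zPattern) : 0;
        if( pPat==0 ){ sqlite3_result_null(ctx); return; }
        sqlite3_set_auxdata(ctx, 0, pPat, pattern_free);
      }
      const char *zText = (const char*)sqlite3_value_text(argv[1]);
      sqlite3_result_int(ctx, zText ? pattern_match(pPat, zText) : 0);
    }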
+** +** SQLite is free to call the destructor and drop meta-data on +** any parameter of any function at any time. The only guarantee +** is that the destructor will be called before the metadata is +** dropped. +** +** In practice, meta-data is preserved between function calls for +** expressions that are constant at compile time. This includes literal +** values and SQL variables. +** +** These routines must be called from the same thread in which +** the SQL function is running. +** +** INVARIANTS: +** +** {F16272} The [sqlite3_get_auxdata(C,N)] interface returns a pointer +** to metadata associated with the Nth parameter of the SQL function +** whose context is C, or NULL if there is no metadata associated +** with that parameter. +** +** {F16274} The [sqlite3_set_auxdata(C,N,P,D)] interface assigns a metadata +** pointer P to the Nth parameter of the SQL function with context +** C. +** +** {F16276} SQLite will invoke the destructor D with a single argument +** which is the metadata pointer P following a call to +** [sqlite3_set_auxdata(C,N,P,D)] when SQLite ceases to hold +** the metadata. +** +** {F16277} SQLite ceases to hold metadata for an SQL function parameter +** when the value of that parameter changes. +** +** {F16278} When [sqlite3_set_auxdata(C,N,P,D)] is invoked, the destructor +** is called for any prior metadata associated with the same function +** context C and parameter N. +** +** {F16279} SQLite will call destructors for any metadata it is holding +** in a particular [prepared statement] S when either +** [sqlite3_reset(S)] or [sqlite3_finalize(S)] is called. +*/ +void *sqlite3_get_auxdata(sqlite3_context*, int N); +void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*)); + + +/* +** CAPI3REF: Constants Defining Special Destructor Behavior {F10280} +** +** These are special value for the destructor that is passed in as the +** final argument to routines like [sqlite3_result_blob()]. If the destructor +** argument is SQLITE_STATIC, it means that the content pointer is constant +** and will never change. It does not need to be destroyed. The +** SQLITE_TRANSIENT value means that the content will likely change in +** the near future and that SQLite should make its own private copy of +** the content before returning. +** +** The typedef is necessary to work around problems in certain +** C++ compilers. See ticket #2191. +*/ +typedef void (*sqlite3_destructor_type)(void*); +#define SQLITE_STATIC ((sqlite3_destructor_type)0) +#define SQLITE_TRANSIENT ((sqlite3_destructor_type)-1) + +/* +** CAPI3REF: Setting The Result Of An SQL Function {F16400} +** +** These routines are used by the xFunc or xFinal callbacks that +** implement SQL functions and aggregates. See +** [sqlite3_create_function()] and [sqlite3_create_function16()] +** for additional information. +** +** These functions work very much like the +** [sqlite3_bind_blob | sqlite3_bind_*] family of functions used +** to bind values to host parameters in prepared statements. +** Refer to the +** [sqlite3_bind_blob | sqlite3_bind_* documentation] for +** additional information. +** +** The sqlite3_result_blob() interface sets the result from +** an application defined function to be the BLOB whose content is pointed +** to by the second parameter and which is N bytes long where N is the +** third parameter. 
+** The sqlite3_result_zeroblob() interface sets the result of
+** the application defined function to be a BLOB containing all zero
+** bytes and N bytes in size, where N is the value of the 2nd parameter.
+**
+** The sqlite3_result_double() interface sets the result from
+** an application defined function to be a floating point value specified
+** by its 2nd argument.
+**
+** The sqlite3_result_error() and sqlite3_result_error16() functions
+** cause the implemented SQL function to throw an exception.
+** SQLite uses the string pointed to by the
+** 2nd parameter of sqlite3_result_error() or sqlite3_result_error16()
+** as the text of an error message.  SQLite interprets the error
+** message string from sqlite3_result_error() as UTF8.  SQLite
+** interprets the string from sqlite3_result_error16() as UTF16 in native
+** byte order.  If the third parameter to sqlite3_result_error()
+** or sqlite3_result_error16() is negative then SQLite takes as the error
+** message all text up through the first zero character.
+** If the third parameter to sqlite3_result_error() or
+** sqlite3_result_error16() is non-negative then SQLite takes that many
+** bytes (not characters) from the 2nd parameter as the error message.
+** The sqlite3_result_error() and sqlite3_result_error16()
+** routines make a private copy of the error message text before
+** they return.  Hence, the calling function can deallocate or
+** modify the text after they return without harm.
+** The sqlite3_result_error_code() function changes the error code
+** returned by SQLite as a result of an error in a function.  By default,
+** the error code is SQLITE_ERROR.
+**
+** The sqlite3_result_toobig() interface causes SQLite
+** to throw an error indicating that a string or BLOB is too long
+** to represent.  The sqlite3_result_nomem() interface
+** causes SQLite to throw an exception indicating that a
+** memory allocation failed.
+**
+** The sqlite3_result_int() interface sets the return value
+** of the application-defined function to be the 32-bit signed integer
+** value given in the 2nd argument.
+** The sqlite3_result_int64() interface sets the return value
+** of the application-defined function to be the 64-bit signed integer
+** value given in the 2nd argument.
+**
+** The sqlite3_result_null() interface sets the return value
+** of the application-defined function to be NULL.
+**
+** The sqlite3_result_text(), sqlite3_result_text16(),
+** sqlite3_result_text16le(), and sqlite3_result_text16be() interfaces
+** set the return value of the application-defined function to be
+** a text string which is represented as UTF-8, UTF-16 native byte order,
+** UTF-16 little endian, or UTF-16 big endian, respectively.
+** SQLite takes the text result from the application from
+** the 2nd parameter of the sqlite3_result_text* interfaces.
+** If the 3rd parameter to the sqlite3_result_text* interfaces
+** is negative, then SQLite takes result text from the 2nd parameter
+** through the first zero character.
+** If the 3rd parameter to the sqlite3_result_text* interfaces
+** is non-negative, then as many bytes (not characters) of the text
+** pointed to by the 2nd parameter are taken as the application-defined
+** function result.
+** If the 4th parameter to the sqlite3_result_text* interfaces
+** or sqlite3_result_blob is a non-NULL pointer, then SQLite calls that
+** function as the destructor on the text or blob result when it has
+** finished using that result.
+** If the 4th parameter to the sqlite3_result_text* interfaces +** or sqlite3_result_blob is the special constant SQLITE_STATIC, then +** SQLite assumes that the text or blob result is constant space and +** does not copy the space or call a destructor when it has +** finished using that result. +** If the 4th parameter to the sqlite3_result_text* interfaces +** or sqlite3_result_blob is the special constant SQLITE_TRANSIENT +** then SQLite makes a copy of the result into space obtained from +** from [sqlite3_malloc()] before it returns. +** +** The sqlite3_result_value() interface sets the result of +** the application-defined function to be a copy the [sqlite3_value] +** object specified by the 2nd parameter. The +** sqlite3_result_value() interface makes a copy of the [sqlite3_value] +** so that [sqlite3_value] specified in the parameter may change or +** be deallocated after sqlite3_result_value() returns without harm. +** +** If these routines are called from within the different thread +** than the one containing the application-defined function that recieved +** the [sqlite3_context] pointer, the results are undefined. +** +** INVARIANTS: +** +** {F16403} The default return value from any SQL function is NULL. +** +** {F16406} The [sqlite3_result_blob(C,V,N,D)] interface changes the +** return value of function C to be a blob that is N bytes +** in length and with content pointed to by V. +** +** {F16409} The [sqlite3_result_double(C,V)] interface changes the +** return value of function C to be the floating point value V. +** +** {F16412} The [sqlite3_result_error(C,V,N)] interface changes the return +** value of function C to be an exception with error code +** [SQLITE_ERROR] and a UTF8 error message copied from V up to the +** first zero byte or until N bytes are read if N is positive. +** +** {F16415} The [sqlite3_result_error16(C,V,N)] interface changes the return +** value of function C to be an exception with error code +** [SQLITE_ERROR] and a UTF16 native byte order error message +** copied from V up to the first zero terminator or until N bytes +** are read if N is positive. +** +** {F16418} The [sqlite3_result_error_toobig(C)] interface changes the return +** value of the function C to be an exception with error code +** [SQLITE_TOOBIG] and an appropriate error message. +** +** {F16421} The [sqlite3_result_error_nomem(C)] interface changes the return +** value of the function C to be an exception with error code +** [SQLITE_NOMEM] and an appropriate error message. +** +** {F16424} The [sqlite3_result_error_code(C,E)] interface changes the return +** value of the function C to be an exception with error code E. +** The error message text is unchanged. +** +** {F16427} The [sqlite3_result_int(C,V)] interface changes the +** return value of function C to be the 32-bit integer value V. +** +** {F16430} The [sqlite3_result_int64(C,V)] interface changes the +** return value of function C to be the 64-bit integer value V. +** +** {F16433} The [sqlite3_result_null(C)] interface changes the +** return value of function C to be NULL. +** +** {F16436} The [sqlite3_result_text(C,V,N,D)] interface changes the +** return value of function C to be the UTF8 string +** V up through the first zero or until N bytes are read if N +** is positive. +** +** {F16439} The [sqlite3_result_text16(C,V,N,D)] interface changes the +** return value of function C to be the UTF16 native byte order +** string V up through the first zero or until N bytes are read if N +** is positive. 
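A short sketch of a scalar function that raises an error with sqlite3_result_error() and otherwise returns a stack buffer with SQLITE_TRANSIENT so that SQLite copies it, following the destructor rules above; the name checked_div is illustrative:

    #include <sqlite3.h>

    /* checked_div(A,B): SQL error on division by zero; otherwise the result
    ** text lives in a stack buffer, so SQLITE_TRANSIENT makes SQLite copy it. */
    static void checkedDiv(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      double rDen = sqlite3_value_double(argv[1]);
      char zBuf[64];
      if( rDen==0.0 ){
        sqlite3_result_error(ctx, "checked_div: division by zero", -1);
        return;
      }
      sqlite3_snprintf(sizeof(zBuf), zBuf, "%g",
                       sqlite3_value_double(argv[0])/rDen);
      sqlite3_result_text(ctx, zBuf, -1, SQLITE_TRANSIENT);
    }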
+** +** {F16442} The [sqlite3_result_text16be(C,V,N,D)] interface changes the +** return value of function C to be the UTF16 big-endian +** string V up through the first zero or until N bytes are read if N +** is positive. +** +** {F16445} The [sqlite3_result_text16le(C,V,N,D)] interface changes the +** return value of function C to be the UTF16 little-endian +** string V up through the first zero or until N bytes are read if N +** is positive. +** +** {F16448} The [sqlite3_result_value(C,V)] interface changes the +** return value of function C to be [sqlite3_value] object V. +** +** {F16451} The [sqlite3_result_zeroblob(C,N)] interface changes the +** return value of function C to be an N-byte blob of all zeros. +** +** {F16454} The [sqlite3_result_error()] and [sqlite3_result_error16()] +** interfaces make a copy of their error message strings before +** returning. +** +** {F16457} If the D destructor parameter to [sqlite3_result_blob(C,V,N,D)], +** [sqlite3_result_text(C,V,N,D)], [sqlite3_result_text16(C,V,N,D)], +** [sqlite3_result_text16be(C,V,N,D)], or +** [sqlite3_result_text16le(C,V,N,D)] is the constant [SQLITE_STATIC] +** then no destructor is ever called on the pointer V and SQLite +** assumes that V is immutable. +** +** {F16460} If the D destructor parameter to [sqlite3_result_blob(C,V,N,D)], +** [sqlite3_result_text(C,V,N,D)], [sqlite3_result_text16(C,V,N,D)], +** [sqlite3_result_text16be(C,V,N,D)], or +** [sqlite3_result_text16le(C,V,N,D)] is the constant +** [SQLITE_TRANSIENT] then the interfaces makes a copy of the +** content of V and retains the copy. +** +** {F16463} If the D destructor parameter to [sqlite3_result_blob(C,V,N,D)], +** [sqlite3_result_text(C,V,N,D)], [sqlite3_result_text16(C,V,N,D)], +** [sqlite3_result_text16be(C,V,N,D)], or +** [sqlite3_result_text16le(C,V,N,D)] is some value other than +** the constants [SQLITE_STATIC] and [SQLITE_TRANSIENT] then +** SQLite will invoke the destructor D with V as its only argument +** when it has finished with the V value. +*/ +void sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*)); +void sqlite3_result_double(sqlite3_context*, double); +void sqlite3_result_error(sqlite3_context*, const char*, int); +void sqlite3_result_error16(sqlite3_context*, const void*, int); +void sqlite3_result_error_toobig(sqlite3_context*); +void sqlite3_result_error_nomem(sqlite3_context*); +void sqlite3_result_error_code(sqlite3_context*, int); +void sqlite3_result_int(sqlite3_context*, int); +void sqlite3_result_int64(sqlite3_context*, sqlite3_int64); +void sqlite3_result_null(sqlite3_context*); +void sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*)); +void sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*)); +void sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*)); +void sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*)); +void sqlite3_result_value(sqlite3_context*, sqlite3_value*); +void sqlite3_result_zeroblob(sqlite3_context*, int n); + +/* +** CAPI3REF: Define New Collating Sequences {F16600} +** +** These functions are used to add new collation sequences to the +** [sqlite3*] handle specified as the first argument. +** +** The name of the new collation sequence is specified as a UTF-8 string +** for sqlite3_create_collation() and sqlite3_create_collation_v2() +** and a UTF-16 string for sqlite3_create_collation16(). In all cases +** the name is passed as the second function argument. 
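As a rough usage sketch for the sqlite3_result_* interfaces declared above (not part of this checkin): an application-defined SQL function half(X) that returns X/2. It assumes sqlite3_create_function(), which is declared earlier in this header; the function name "half" is only an example.

    #include <sqlite3.h>

    /* half(X): return X divided by two, or NULL when X is NULL. */
    static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      if( argc!=1 || sqlite3_value_type(argv[0])==SQLITE_NULL ){
        sqlite3_result_null(ctx);
      }else{
        sqlite3_result_double(ctx, sqlite3_value_double(argv[0])/2.0);
      }
    }

    /* Register the function on an open database handle. */
    static int register_half(sqlite3 *db){
      return sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0,
                                     halfFunc, 0, 0);
    }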
+**
+** The third argument may be one of the constants [SQLITE_UTF8],
+** [SQLITE_UTF16LE] or [SQLITE_UTF16BE], indicating that the user-supplied
+** routine expects to be passed pointers to strings encoded using UTF-8,
+** UTF-16 little-endian or UTF-16 big-endian respectively. The
+** third argument might also be [SQLITE_UTF16_ALIGNED] to indicate that
+** the routine expects pointers to 16-bit word aligned strings
+** of UTF16 in the native byte order of the host computer.
+**
+** A pointer to the user supplied routine must be passed as the fifth
+** argument.  If it is NULL, this is the same as deleting the collation
+** sequence (so that SQLite cannot call it anymore).
+** Each time the application
+** supplied function is invoked, it is passed a copy of the void* passed as
+** the fourth argument to sqlite3_create_collation() or
+** sqlite3_create_collation16() as its first parameter.
+**
+** The remaining arguments to the application-supplied routine are two strings,
+** each represented by a (length, data) pair and encoded in the encoding
+** that was passed as the third argument when the collation sequence was
+** registered. {END} The application defined collation routine should
+** return a value that is negative, zero, or positive if
+** the first string is less than, equal to, or greater than the second
+** string, respectively (i.e. STRING1 - STRING2).
+**
+** The sqlite3_create_collation_v2() interface works like sqlite3_create_collation()
+** except that it takes an extra argument which is a destructor for
+** the collation.  The destructor is called when the collation is
+** destroyed and is passed a copy of the fourth parameter void* pointer
+** of the sqlite3_create_collation_v2().
+** Collations are destroyed when
+** they are overridden by later calls to the collation creation functions
+** or when the [sqlite3*] database handle is closed using [sqlite3_close()].
+**
+** INVARIANTS:
+**
+** {F16603} A successful call to the
+**          [sqlite3_create_collation_v2(B,X,E,P,F,D)] interface
+**          registers function F as the comparison function used to
+**          implement collation X on [database connection] B for
+**          databases having encoding E.
+**
+** {F16604} SQLite understands the X parameter to
+**          [sqlite3_create_collation_v2(B,X,E,P,F,D)] as a zero-terminated
+**          UTF-8 string in which letter case is ignored for ASCII characters
+**          and is significant for non-ASCII characters.
+**
+** {F16606} Successive calls to [sqlite3_create_collation_v2(B,X,E,P,F,D)]
+**          with the same values for B, X, and E, override prior values
+**          of P, F, and D.
+**
+** {F16609} If the destructor D in [sqlite3_create_collation_v2(B,X,E,P,F,D)]
+**          is not NULL then it is called with argument P when the
+**          collating function is dropped by SQLite.
+**
+** {F16612} A collating function is dropped when it is overloaded.
+**
+** {F16615} A collating function is dropped when the database connection
+**          is closed using [sqlite3_close()].
+**
+** {F16618} The pointer P in [sqlite3_create_collation_v2(B,X,E,P,F,D)]
+**          is passed through as the first parameter to the comparison
+**          function F for all subsequent invocations of F.
+**
+** {F16621} A call to [sqlite3_create_collation(B,X,E,P,F)] is exactly
+**          the same as a call to [sqlite3_create_collation_v2()] with
+**          the same parameters and a NULL destructor.
+**
+** {F16624} Following a [sqlite3_create_collation_v2(B,X,E,P,F,D)],
+**          SQLite uses the comparison function F for all text comparison
+**          operations on [database connection] B on text values that
+**          use the collating sequence name X.
+** +** {F16627} The [sqlite3_create_collation16(B,X,E,P,F)] works the same +** as [sqlite3_create_collation(B,X,E,P,F)] except that the +** collation name X is understood as UTF-16 in native byte order +** instead of UTF-8. +** +** {F16630} When multiple comparison functions are available for the same +** collating sequence, SQLite chooses the one whose text encoding +** requires the least amount of conversion from the default +** text encoding of the database. +*/ +int sqlite3_create_collation( + sqlite3*, + const char *zName, + int eTextRep, + void*, + int(*xCompare)(void*,int,const void*,int,const void*) +); +int sqlite3_create_collation_v2( + sqlite3*, + const char *zName, + int eTextRep, + void*, + int(*xCompare)(void*,int,const void*,int,const void*), + void(*xDestroy)(void*) +); +int sqlite3_create_collation16( + sqlite3*, + const char *zName, + int eTextRep, + void*, + int(*xCompare)(void*,int,const void*,int,const void*) +); + +/* +** CAPI3REF: Collation Needed Callbacks {F16700} +** +** To avoid having to register all collation sequences before a database +** can be used, a single callback function may be registered with the +** database handle to be called whenever an undefined collation sequence is +** required. +** +** If the function is registered using the sqlite3_collation_needed() API, +** then it is passed the names of undefined collation sequences as strings +** encoded in UTF-8. {F16703} If sqlite3_collation_needed16() is used, the names +** are passed as UTF-16 in machine native byte order. A call to either +** function replaces any existing callback. +** +** When the callback is invoked, the first argument passed is a copy +** of the second argument to sqlite3_collation_needed() or +** sqlite3_collation_needed16(). The second argument is the database +** handle. The third argument is one of [SQLITE_UTF8], +** [SQLITE_UTF16BE], or [SQLITE_UTF16LE], indicating the most +** desirable form of the collation sequence function required. +** The fourth parameter is the name of the +** required collation sequence. +** +** The callback function should register the desired collation using +** [sqlite3_create_collation()], [sqlite3_create_collation16()], or +** [sqlite3_create_collation_v2()]. +** +** INVARIANTS: +** +** {F16702} A successful call to [sqlite3_collation_needed(D,P,F)] +** or [sqlite3_collation_needed16(D,P,F)] causes +** the [database connection] D to invoke callback F with first +** parameter P whenever it needs a comparison function for a +** collating sequence that it does not know about. +** +** {F16704} Each successful call to [sqlite3_collation_needed()] or +** [sqlite3_collation_needed16()] overrides the callback registered +** on the same [database connection] by prior calls to either +** interface. +** +** {F16706} The name of the requested collating function passed in the +** 4th parameter to the callback is in UTF-8 if the callback +** was registered using [sqlite3_collation_needed()] and +** is in UTF-16 native byte order if the callback was +** registered using [sqlite3_collation_needed16()]. +** +** +*/ +int sqlite3_collation_needed( + sqlite3*, + void*, + void(*)(void*,sqlite3*,int eTextRep,const char*) +); +int sqlite3_collation_needed16( + sqlite3*, + void*, + void(*)(void*,sqlite3*,int eTextRep,const void*) +); + +/* +** Specify the key for an encrypted database. This routine should be +** called right after sqlite3_open(). +** +** The code to implement this API is not available in the public release +** of SQLite. 
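A hedged sketch of registering a collating function with sqlite3_create_collation(), declared above. The comparison follows the negative/zero/positive contract described there; the collation name "NOCASE_ASCII" and the ASCII-only case folding are illustrative choices, not anything defined by SQLite.

    #include <ctype.h>
    #include <sqlite3.h>

    /* Compare two strings byte by byte, ignoring ASCII case. */
    static int nocaseAsciiCmp(void *pArg, int n1, const void *z1,
                              int n2, const void *z2){
      const unsigned char *a = (const unsigned char*)z1;
      const unsigned char *b = (const unsigned char*)z2;
      int i, n = n1<n2 ? n1 : n2;
      (void)pArg;
      for(i=0; i<n; i++){
        int c = tolower(a[i]) - tolower(b[i]);
        if( c ) return c;          /* negative, zero, or positive */
      }
      return n1 - n2;              /* the shorter string sorts first */
    }

    static int register_collation(sqlite3 *db){
      return sqlite3_create_collation(db, "NOCASE_ASCII", SQLITE_UTF8,
                                      0, nocaseAsciiCmp);
    }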
+*/ +int sqlite3_key( + sqlite3 *db, /* Database to be rekeyed */ + const void *pKey, int nKey /* The key */ +); + +/* +** Change the key on an open database. If the current database is not +** encrypted, this routine will encrypt it. If pNew==0 or nNew==0, the +** database is decrypted. +** +** The code to implement this API is not available in the public release +** of SQLite. +*/ +int sqlite3_rekey( + sqlite3 *db, /* Database to be rekeyed */ + const void *pKey, int nKey /* The new key */ +); + +/* +** CAPI3REF: Suspend Execution For A Short Time {F10530} +** +** The sqlite3_sleep() function +** causes the current thread to suspend execution +** for at least a number of milliseconds specified in its parameter. +** +** If the operating system does not support sleep requests with +** millisecond time resolution, then the time will be rounded up to +** the nearest second. The number of milliseconds of sleep actually +** requested from the operating system is returned. +** +** SQLite implements this interface by calling the xSleep() +** method of the default [sqlite3_vfs] object. +** +** INVARIANTS: +** +** {F10533} The [sqlite3_sleep(M)] interface invokes the xSleep +** method of the default [sqlite3_vfs|VFS] in order to +** suspend execution of the current thread for at least +** M milliseconds. +** +** {F10536} The [sqlite3_sleep(M)] interface returns the number of +** milliseconds of sleep actually requested of the operating +** system, which might be larger than the parameter M. +*/ +int sqlite3_sleep(int); + +/* +** CAPI3REF: Name Of The Folder Holding Temporary Files {F10310} +** +** If this global variable is made to point to a string which is +** the name of a folder (a.ka. directory), then all temporary files +** created by SQLite will be placed in that directory. If this variable +** is NULL pointer, then SQLite does a search for an appropriate temporary +** file directory. +** +** It is not safe to modify this variable once a database connection +** has been opened. It is intended that this variable be set once +** as part of process initialization and before any SQLite interface +** routines have been call and remain unchanged thereafter. +*/ +SQLITE_EXTERN char *sqlite3_temp_directory; + +/* +** CAPI3REF: Test To See If The Database Is In Auto-Commit Mode {F12930} +** +** The sqlite3_get_autocommit() interfaces returns non-zero or +** zero if the given database connection is or is not in autocommit mode, +** respectively. Autocommit mode is on +** by default. Autocommit mode is disabled by a [BEGIN] statement. +** Autocommit mode is reenabled by a [COMMIT] or [ROLLBACK]. +** +** If certain kinds of errors occur on a statement within a multi-statement +** transactions (errors including [SQLITE_FULL], [SQLITE_IOERR], +** [SQLITE_NOMEM], [SQLITE_BUSY], and [SQLITE_INTERRUPT]) then the +** transaction might be rolled back automatically. The only way to +** find out if SQLite automatically rolled back the transaction after +** an error is to use this function. +** +** INVARIANTS: +** +** {F12931} The [sqlite3_get_autocommit(D)] interface returns non-zero or +** zero if the [database connection] D is or is not in autocommit +** mode, respectively. +** +** {F12932} Autocommit mode is on by default. +** +** {F12933} Autocommit mode is disabled by a successful [BEGIN] statement. +** +** {F12934} Autocommit mode is enabled by a successful [COMMIT] or [ROLLBACK] +** statement. 
+**
+**
+** LIMITATIONS:
+**
+** {U12936} If another thread changes the autocommit status of the database
+**          connection while this routine is running, then the return value
+**          is undefined.
+*/
+int sqlite3_get_autocommit(sqlite3*);
+
+/*
+** CAPI3REF: Find The Database Handle Of A Prepared Statement {F13120}
+**
+** The sqlite3_db_handle interface
+** returns the [sqlite3*] database handle to which a
+** [prepared statement] belongs.
+** The database handle returned by sqlite3_db_handle
+** is the same database handle that was
+** the first argument to [sqlite3_prepare_v2()] or its variants
+** that was used to create the statement in the first place.
+**
+** INVARIANTS:
+**
+** {F13123} The [sqlite3_db_handle(S)] interface returns a pointer
+**          to the [database connection] associated with
+**          [prepared statement] S.
+*/
+sqlite3 *sqlite3_db_handle(sqlite3_stmt*);
+
+
+/*
+** CAPI3REF: Commit And Rollback Notification Callbacks {F12950}
+**
+** The sqlite3_commit_hook() interface registers a callback
+** function to be invoked whenever a transaction is committed.
+** Any callback set by a previous call to sqlite3_commit_hook()
+** for the same database connection is overridden.
+** The sqlite3_rollback_hook() interface registers a callback
+** function to be invoked whenever a transaction is rolled back.
+** Any callback set by a previous call to sqlite3_rollback_hook()
+** for the same database connection is overridden.
+** The pArg argument is passed through
+** to the callback.  If the commit hook callback
+** returns non-zero, then the commit is converted into a rollback.
+**
+** If another function was previously registered, its
+** pArg value is returned.  Otherwise NULL is returned.
+**
+** Registering a NULL function disables the callback.
+**
+** For the purposes of this API, a transaction is said to have been
+** rolled back if an explicit "ROLLBACK" statement is executed, or
+** an error or constraint causes an implicit rollback to occur.
+** The rollback callback is not invoked if a transaction is
+** automatically rolled back because the database connection is closed.
+** The rollback callback is not invoked if a transaction is
+** rolled back because a commit callback returned non-zero.
+**
+** These are experimental interfaces and are subject to change.
+**
+** INVARIANTS:
+**
+** {F12951} The [sqlite3_commit_hook(D,F,P)] interface registers the
+**          callback function F to be invoked with argument P whenever
+**          a transaction commits on [database connection] D.
+**
+** {F12952} The [sqlite3_commit_hook(D,F,P)] interface returns the P
+**          argument from the previous call with the same
+**          [database connection] D, or NULL on the first call
+**          for a particular [database connection] D.
+**
+** {F12953} Each call to [sqlite3_commit_hook()] overwrites the callback
+**          registered by prior calls.
+**
+** {F12954} If the F argument to [sqlite3_commit_hook(D,F,P)] is NULL
+**          then the commit hook callback is cancelled and no callback
+**          is invoked when a transaction commits.
+**
+** {F12955} If the commit callback returns non-zero then the commit is
+**          converted into a rollback.
+**
+** {F12961} The [sqlite3_rollback_hook(D,F,P)] interface registers the
+**          callback function F to be invoked with argument P whenever
+**          a transaction rolls back on [database connection] D.
+**
+** {F12962} The [sqlite3_rollback_hook(D,F,P)] interface returns the P
+**          argument from the previous call with the same
+**          [database connection] D, or NULL on the first call
+**          for a particular [database connection] D.
+**
+** {F12963} Each call to [sqlite3_rollback_hook()] overwrites the callback
+**          registered by prior calls.
+**
+** {F12964} If the F argument to [sqlite3_rollback_hook(D,F,P)] is NULL
+**          then the rollback hook callback is cancelled and no callback
+**          is invoked when a transaction rolls back.
+*/
+void *sqlite3_commit_hook(sqlite3*, int(*)(void*), void*);
+void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
+
+/*
+** CAPI3REF: Data Change Notification Callbacks {F12970}
+**
+** The sqlite3_update_hook() interface
+** registers a callback function with the database connection identified by the
+** first argument to be invoked whenever a row is updated, inserted or deleted.
+** Any callback set by a previous call to this function for the same
+** database connection is overridden.
+**
+** The second argument is a pointer to the function to invoke when a
+** row is updated, inserted or deleted.
+** The first argument to the callback is
+** a copy of the third argument to sqlite3_update_hook().
+** The second callback
+** argument is one of [SQLITE_INSERT], [SQLITE_DELETE] or [SQLITE_UPDATE],
+** depending on the operation that caused the callback to be invoked.
+** The third and
+** fourth arguments to the callback contain pointers to the names of the
+** database and table containing the affected row.
+** The final callback parameter is
+** the rowid of the row.
+** In the case of an update, this is the rowid after
+** the update takes place.
+**
+** The update hook is not invoked when internal system tables are
+** modified (i.e. sqlite_master and sqlite_sequence).
+**
+** If another function was previously registered, its pArg value
+** is returned.  Otherwise NULL is returned.
+**
+** INVARIANTS:
+**
+** {F12971} The [sqlite3_update_hook(D,F,P)] interface causes callback
+**          function F to be invoked with first parameter P whenever
+**          a table row is modified, inserted, or deleted on
+**          [database connection] D.
+**
+** {F12973} The [sqlite3_update_hook(D,F,P)] interface returns the value
+**          of P for the previous call on the same [database connection] D,
+**          or NULL for the first call.
+**
+** {F12975} If the update hook callback F in [sqlite3_update_hook(D,F,P)]
+**          is NULL then no update callbacks are made.
+**
+** {F12977} Each call to [sqlite3_update_hook(D,F,P)] overrides prior calls
+**          to the same interface on the same [database connection] D.
+**
+** {F12979} The update hook callback is not invoked when internal system
+**          tables such as sqlite_master and sqlite_sequence are modified.
+**
+** {F12981} The second parameter to the update callback
+**          is one of [SQLITE_INSERT], [SQLITE_DELETE] or [SQLITE_UPDATE],
+**          depending on the operation that caused the callback to be invoked.
+**
+** {F12983} The third and fourth arguments to the callback contain pointers
+**          to zero-terminated UTF-8 strings which are the names of the
+**          database and table containing the affected row.
+**
+** {F12985} The final callback parameter is the rowid of the row after
+**          the change occurs.
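A minimal sketch of the data-change callback described above, registered with sqlite3_update_hook() as declared just below; the logging callback and its output format are made up for illustration.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print one line for every INSERT, UPDATE, or DELETE. */
    static void logChange(void *pArg, int op, const char *zDb,
                          const char *zTable, sqlite3_int64 rowid){
      const char *zOp = op==SQLITE_INSERT ? "INSERT" :
                        op==SQLITE_DELETE ? "DELETE" : "UPDATE";
      (void)pArg;
      fprintf(stderr, "%s %s.%s rowid=%lld\n", zOp, zDb, zTable, (long long)rowid);
    }

    static void install_update_hook(sqlite3 *db){
      sqlite3_update_hook(db, logChange, 0);
    }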
+*/ +void *sqlite3_update_hook( + sqlite3*, + void(*)(void *,int ,char const *,char const *,sqlite3_int64), + void* +); + +/* +** CAPI3REF: Enable Or Disable Shared Pager Cache {F10330} +** +** This routine enables or disables the sharing of the database cache +** and schema data structures between connections to the same database. +** Sharing is enabled if the argument is true and disabled if the argument +** is false. +** +** Cache sharing is enabled and disabled +** for an entire process. {END} This is a change as of SQLite version 3.5.0. +** In prior versions of SQLite, sharing was +** enabled or disabled for each thread separately. +** +** The cache sharing mode set by this interface effects all subsequent +** calls to [sqlite3_open()], [sqlite3_open_v2()], and [sqlite3_open16()]. +** Existing database connections continue use the sharing mode +** that was in effect at the time they were opened. +** +** Virtual tables cannot be used with a shared cache. When shared +** cache is enabled, the [sqlite3_create_module()] API used to register +** virtual tables will always return an error. +** +** This routine returns [SQLITE_OK] if shared cache was +** enabled or disabled successfully. An [error code] +** is returned otherwise. +** +** Shared cache is disabled by default. But this might change in +** future releases of SQLite. Applications that care about shared +** cache setting should set it explicitly. +** +** INVARIANTS: +** +** {F10331} A successful invocation of [sqlite3_enable_shared_cache(B)] +** will enable or disable shared cache mode for any subsequently +** created [database connection] in the same process. +** +** {F10336} When shared cache is enabled, the [sqlite3_create_module()] +** interface will always return an error. +** +** {F10337} The [sqlite3_enable_shared_cache(B)] interface returns +** [SQLITE_OK] if shared cache was enabled or disabled successfully. +** +** {F10339} Shared cache is disabled by default. +*/ +int sqlite3_enable_shared_cache(int); + +/* +** CAPI3REF: Attempt To Free Heap Memory {F17340} +** +** The sqlite3_release_memory() interface attempts to +** free N bytes of heap memory by deallocating non-essential memory +** allocations held by the database labrary. {END} Memory used +** to cache database pages to improve performance is an example of +** non-essential memory. Sqlite3_release_memory() returns +** the number of bytes actually freed, which might be more or less +** than the amount requested. +** +** INVARIANTS: +** +** {F17341} The [sqlite3_release_memory(N)] interface attempts to +** free N bytes of heap memory by deallocating non-essential +** memory allocations held by the database labrary. +** +** {F16342} The [sqlite3_release_memory(N)] returns the number +** of bytes actually freed, which might be more or less +** than the amount requested. +*/ +int sqlite3_release_memory(int); + +/* +** CAPI3REF: Impose A Limit On Heap Size {F17350} +** +** The sqlite3_soft_heap_limit() interface +** places a "soft" limit on the amount of heap memory that may be allocated +** by SQLite. If an internal allocation is requested +** that would exceed the soft heap limit, [sqlite3_release_memory()] is +** invoked one or more times to free up some space before the allocation +** is made. +** +** The limit is called "soft", because if +** [sqlite3_release_memory()] cannot +** free sufficient memory to prevent the limit from being exceeded, +** the memory is allocated anyway and the current operation proceeds. 
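A small sketch of sqlite3_release_memory(), declared above, and sqlite3_soft_heap_limit(), declared a little further below; the 8 MB limit and the 1 MB release request are arbitrary example figures, not recommendations.

    #include <sqlite3.h>

    static void trim_sqlite_memory(void){
      int freed;
      sqlite3_soft_heap_limit(8*1024*1024);       /* advisory limit of about 8 MB */
      freed = sqlite3_release_memory(1024*1024);  /* try to free about 1 MB now */
      (void)freed;  /* the amount actually freed may be more or less than requested */
    }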
+** +** A negative or zero value for N means that there is no soft heap limit and +** [sqlite3_release_memory()] will only be called when memory is exhausted. +** The default value for the soft heap limit is zero. +** +** SQLite makes a best effort to honor the soft heap limit. +** But if the soft heap limit cannot honored, execution will +** continue without error or notification. This is why the limit is +** called a "soft" limit. It is advisory only. +** +** Prior to SQLite version 3.5.0, this routine only constrained the memory +** allocated by a single thread - the same thread in which this routine +** runs. Beginning with SQLite version 3.5.0, the soft heap limit is +** applied to all threads. The value specified for the soft heap limit +** is an upper bound on the total memory allocation for all threads. In +** version 3.5.0 there is no mechanism for limiting the heap usage for +** individual threads. +** +** INVARIANTS: +** +** {F16351} The [sqlite3_soft_heap_limit(N)] interface places a soft limit +** of N bytes on the amount of heap memory that may be allocated +** using [sqlite3_malloc()] or [sqlite3_realloc()] at any point +** in time. +** +** {F16352} If a call to [sqlite3_malloc()] or [sqlite3_realloc()] would +** cause the total amount of allocated memory to exceed the +** soft heap limit, then [sqlite3_release_memory()] is invoked +** in an attempt to reduce the memory usage prior to proceeding +** with the memory allocation attempt. +** +** {F16353} Calls to [sqlite3_malloc()] or [sqlite3_realloc()] that trigger +** attempts to reduce memory usage through the soft heap limit +** mechanism continue even if the attempt to reduce memory +** usage is unsuccessful. +** +** {F16354} A negative or zero value for N in a call to +** [sqlite3_soft_heap_limit(N)] means that there is no soft +** heap limit and [sqlite3_release_memory()] will only be +** called when memory is completely exhausted. +** +** {F16355} The default value for the soft heap limit is zero. +** +** {F16358} Each call to [sqlite3_soft_heap_limit(N)] overrides the +** values set by all prior calls. +*/ +void sqlite3_soft_heap_limit(int); + +/* +** CAPI3REF: Extract Metadata About A Column Of A Table {F12850} +** +** This routine +** returns meta-data about a specific column of a specific database +** table accessible using the connection handle passed as the first function +** argument. +** +** The column is identified by the second, third and fourth parameters to +** this function. The second parameter is either the name of the database +** (i.e. "main", "temp" or an attached database) containing the specified +** table or NULL. If it is NULL, then all attached databases are searched +** for the table using the same algorithm as the database engine uses to +** resolve unqualified table references. +** +** The third and fourth parameters to this function are the table and column +** name of the desired column, respectively. Neither of these parameters +** may be NULL. +** +** Meta information is returned by writing to the memory locations passed as +** the 5th and subsequent parameters to this function. Any of these +** arguments may be NULL, in which case the corresponding element of meta +** information is ommitted. +** +**
+** Parameter     Output Type      Description
+** -----------------------------------
+**
+**   5th         const char*      Data type
+**   6th         const char*      Name of the default collation sequence
+**   7th         int              True if the column has a NOT NULL constraint
+**   8th         int              True if the column is part of the PRIMARY KEY
+**   9th         int              True if the column is AUTOINCREMENT
+**
+**
+** The memory pointed to by the character pointers returned for the
+** declaration type and collation sequence is valid only until the next
+** call to any sqlite API function.
+**
+** If the specified table is actually a view, then an error is returned.
+**
+** If the specified column is "rowid", "oid" or "_rowid_" and an
+** INTEGER PRIMARY KEY column has been explicitly declared, then the output
+** parameters are set for the explicitly declared column. If there is no
+** explicitly declared IPK column, then the output parameters are set as
+** follows:
+**
+**     data type: "INTEGER"
+**     collation sequence: "BINARY"
+**     not null: 0
+**     primary key: 1
+**     auto increment: 0
+**
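A hedged sketch of calling sqlite3_table_column_metadata(), which is declared below and only available when the library is compiled with SQLITE_ENABLE_COLUMN_METADATA; the table name "t1" and column name "a" are hypothetical.

    #include <sqlite3.h>

    static int describe_column(sqlite3 *db){
      const char *zType = 0, *zColl = 0;
      int notNull = 0, primaryKey = 0, autoInc = 0;
      int rc = sqlite3_table_column_metadata(db, "main", "t1", "a",
                                             &zType, &zColl,
                                             &notNull, &primaryKey, &autoInc);
      /* On SQLITE_OK, zType and zColl remain valid only until the next
      ** call into the SQLite API, per the note above. */
      return rc;
    }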
                +** +** This function may load one or more schemas from database files. If an +** error occurs during this process, or if the requested table or column +** cannot be found, an SQLITE error code is returned and an error message +** left in the database handle (to be retrieved using sqlite3_errmsg()). +** +** This API is only available if the library was compiled with the +** SQLITE_ENABLE_COLUMN_METADATA preprocessor symbol defined. +*/ +int sqlite3_table_column_metadata( + sqlite3 *db, /* Connection handle */ + const char *zDbName, /* Database name or NULL */ + const char *zTableName, /* Table name */ + const char *zColumnName, /* Column name */ + char const **pzDataType, /* OUTPUT: Declared data type */ + char const **pzCollSeq, /* OUTPUT: Collation sequence name */ + int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */ + int *pPrimaryKey, /* OUTPUT: True if column part of PK */ + int *pAutoinc /* OUTPUT: True if column is auto-increment */ +); + +/* +** CAPI3REF: Load An Extension {F12600} +** +** {F12601} The sqlite3_load_extension() interface +** attempts to load an SQLite extension library contained in the file +** zFile. {F12602} The entry point is zProc. {F12603} zProc may be 0 +** in which case the name of the entry point defaults +** to "sqlite3_extension_init". +** +** {F12604} The sqlite3_load_extension() interface shall +** return [SQLITE_OK] on success and [SQLITE_ERROR] if something goes wrong. +** +** {F12605} +** If an error occurs and pzErrMsg is not 0, then the +** sqlite3_load_extension() interface shall attempt to fill *pzErrMsg with +** error message text stored in memory obtained from [sqlite3_malloc()]. +** {END} The calling function should free this memory +** by calling [sqlite3_free()]. +** +** {F12606} +** Extension loading must be enabled using [sqlite3_enable_load_extension()] +** prior to calling this API or an error will be returned. +*/ +int sqlite3_load_extension( + sqlite3 *db, /* Load the extension into this database connection */ + const char *zFile, /* Name of the shared library containing extension */ + const char *zProc, /* Entry point. Derived from zFile if 0 */ + char **pzErrMsg /* Put error message here if not 0 */ +); + +/* +** CAPI3REF: Enable Or Disable Extension Loading {F12620} +** +** So as not to open security holes in older applications that are +** unprepared to deal with extension loading, and as a means of disabling +** extension loading while evaluating user-entered SQL, the following +** API is provided to turn the [sqlite3_load_extension()] mechanism on and +** off. {F12622} It is off by default. {END} See ticket #1863. +** +** {F12621} Call the sqlite3_enable_load_extension() routine +** with onoff==1 to turn extension loading on +** and call it with onoff==0 to turn it back off again. {END} +*/ +int sqlite3_enable_load_extension(sqlite3 *db, int onoff); + +/* +** CAPI3REF: Make Arrangements To Automatically Load An Extension {F12640} +** +** {F12641} This function +** registers an extension entry point that is automatically invoked +** whenever a new database connection is opened using +** [sqlite3_open()], [sqlite3_open16()], or [sqlite3_open_v2()]. {END} +** +** This API can be invoked at program startup in order to register +** one or more statically linked extensions that will be available +** to all new database connections. +** +** {F12642} Duplicate extensions are detected so calling this routine multiple +** times with the same extension is harmless. 
+** +** {F12643} This routine stores a pointer to the extension in an array +** that is obtained from sqlite_malloc(). {END} If you run a memory leak +** checker on your program and it reports a leak because of this +** array, then invoke [sqlite3_reset_auto_extension()] prior +** to shutdown to free the memory. +** +** {F12644} Automatic extensions apply across all threads. {END} +** +** This interface is experimental and is subject to change or +** removal in future releases of SQLite. +*/ +int sqlite3_auto_extension(void *xEntryPoint); + + +/* +** CAPI3REF: Reset Automatic Extension Loading {F12660} +** +** {F12661} This function disables all previously registered +** automatic extensions. {END} This +** routine undoes the effect of all prior [sqlite3_auto_extension()] +** calls. +** +** {F12662} This call disabled automatic extensions in all threads. {END} +** +** This interface is experimental and is subject to change or +** removal in future releases of SQLite. +*/ +void sqlite3_reset_auto_extension(void); + + +/* +****** EXPERIMENTAL - subject to change without notice ************** +** +** The interface to the virtual-table mechanism is currently considered +** to be experimental. The interface might change in incompatible ways. +** If this is a problem for you, do not use the interface at this time. +** +** When the virtual-table mechanism stablizes, we will declare the +** interface fixed, support it indefinitely, and remove this comment. +*/ + +/* +** Structures used by the virtual table interface +*/ +typedef struct sqlite3_vtab sqlite3_vtab; +typedef struct sqlite3_index_info sqlite3_index_info; +typedef struct sqlite3_vtab_cursor sqlite3_vtab_cursor; +typedef struct sqlite3_module sqlite3_module; + +/* +** CAPI3REF: Virtual Table Object {F18000} +** KEYWORDS: sqlite3_module +** +** A module is a class of virtual tables. Each module is defined +** by an instance of the following structure. This structure consists +** mostly of methods for the module. +*/ +struct sqlite3_module { + int iVersion; + int (*xCreate)(sqlite3*, void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVTab, char**); + int (*xConnect)(sqlite3*, void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVTab, char**); + int (*xBestIndex)(sqlite3_vtab *pVTab, sqlite3_index_info*); + int (*xDisconnect)(sqlite3_vtab *pVTab); + int (*xDestroy)(sqlite3_vtab *pVTab); + int (*xOpen)(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor); + int (*xClose)(sqlite3_vtab_cursor*); + int (*xFilter)(sqlite3_vtab_cursor*, int idxNum, const char *idxStr, + int argc, sqlite3_value **argv); + int (*xNext)(sqlite3_vtab_cursor*); + int (*xEof)(sqlite3_vtab_cursor*); + int (*xColumn)(sqlite3_vtab_cursor*, sqlite3_context*, int); + int (*xRowid)(sqlite3_vtab_cursor*, sqlite3_int64 *pRowid); + int (*xUpdate)(sqlite3_vtab *, int, sqlite3_value **, sqlite3_int64 *); + int (*xBegin)(sqlite3_vtab *pVTab); + int (*xSync)(sqlite3_vtab *pVTab); + int (*xCommit)(sqlite3_vtab *pVTab); + int (*xRollback)(sqlite3_vtab *pVTab); + int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName, + void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), + void **ppArg); + + int (*xRename)(sqlite3_vtab *pVtab, const char *zNew); +}; + +/* +** CAPI3REF: Virtual Table Indexing Information {F18100} +** KEYWORDS: sqlite3_index_info +** +** The sqlite3_index_info structure and its substructures is used to +** pass information into and receive the reply from the xBestIndex +** method of an sqlite3_module. 
The fields under **Inputs** are the +** inputs to xBestIndex and are read-only. xBestIndex inserts its +** results into the **Outputs** fields. +** +** The aConstraint[] array records WHERE clause constraints of the +** form: +** +** column OP expr +** +** Where OP is =, <, <=, >, or >=. +** The particular operator is stored +** in aConstraint[].op. The index of the column is stored in +** aConstraint[].iColumn. aConstraint[].usable is TRUE if the +** expr on the right-hand side can be evaluated (and thus the constraint +** is usable) and false if it cannot. +** +** The optimizer automatically inverts terms of the form "expr OP column" +** and makes other simplifications to the WHERE clause in an attempt to +** get as many WHERE clause terms into the form shown above as possible. +** The aConstraint[] array only reports WHERE clause terms in the correct +** form that refer to the particular virtual table being queried. +** +** Information about the ORDER BY clause is stored in aOrderBy[]. +** Each term of aOrderBy records a column of the ORDER BY clause. +** +** The xBestIndex method must fill aConstraintUsage[] with information +** about what parameters to pass to xFilter. If argvIndex>0 then +** the right-hand side of the corresponding aConstraint[] is evaluated +** and becomes the argvIndex-th entry in argv. If aConstraintUsage[].omit +** is true, then the constraint is assumed to be fully handled by the +** virtual table and is not checked again by SQLite. +** +** The idxNum and idxPtr values are recorded and passed into xFilter. +** sqlite3_free() is used to free idxPtr if needToFreeIdxPtr is true. +** +** The orderByConsumed means that output from xFilter will occur in +** the correct order to satisfy the ORDER BY clause so that no separate +** sorting step is required. +** +** The estimatedCost value is an estimate of the cost of doing the +** particular lookup. A full scan of a table with N entries should have +** a cost of N. A binary search of a table of N entries should have a +** cost of approximately log(N). +*/ +struct sqlite3_index_info { + /* Inputs */ + int nConstraint; /* Number of entries in aConstraint */ + struct sqlite3_index_constraint { + int iColumn; /* Column on left-hand side of constraint */ + unsigned char op; /* Constraint operator */ + unsigned char usable; /* True if this constraint is usable */ + int iTermOffset; /* Used internally - xBestIndex should ignore */ + } *aConstraint; /* Table of WHERE clause constraints */ + int nOrderBy; /* Number of terms in the ORDER BY clause */ + struct sqlite3_index_orderby { + int iColumn; /* Column number */ + unsigned char desc; /* True for DESC. False for ASC. 
*/ + } *aOrderBy; /* The ORDER BY clause */ + + /* Outputs */ + struct sqlite3_index_constraint_usage { + int argvIndex; /* if >0, constraint is part of argv to xFilter */ + unsigned char omit; /* Do not code a test for this constraint */ + } *aConstraintUsage; + int idxNum; /* Number used to identify the index */ + char *idxStr; /* String, possibly obtained from sqlite3_malloc */ + int needToFreeIdxStr; /* Free idxStr using sqlite3_free() if true */ + int orderByConsumed; /* True if output is already ordered */ + double estimatedCost; /* Estimated cost of using this index */ +}; +#define SQLITE_INDEX_CONSTRAINT_EQ 2 +#define SQLITE_INDEX_CONSTRAINT_GT 4 +#define SQLITE_INDEX_CONSTRAINT_LE 8 +#define SQLITE_INDEX_CONSTRAINT_LT 16 +#define SQLITE_INDEX_CONSTRAINT_GE 32 +#define SQLITE_INDEX_CONSTRAINT_MATCH 64 + +/* +** CAPI3REF: Register A Virtual Table Implementation {F18200} +** +** This routine is used to register a new module name with an SQLite +** connection. Module names must be registered before creating new +** virtual tables on the module, or before using preexisting virtual +** tables of the module. +*/ +int sqlite3_create_module( + sqlite3 *db, /* SQLite connection to register module with */ + const char *zName, /* Name of the module */ + const sqlite3_module *, /* Methods for the module */ + void * /* Client data for xCreate/xConnect */ +); + +/* +** CAPI3REF: Register A Virtual Table Implementation {F18210} +** +** This routine is identical to the sqlite3_create_module() method above, +** except that it allows a destructor function to be specified. It is +** even more experimental than the rest of the virtual tables API. +*/ +int sqlite3_create_module_v2( + sqlite3 *db, /* SQLite connection to register module with */ + const char *zName, /* Name of the module */ + const sqlite3_module *, /* Methods for the module */ + void *, /* Client data for xCreate/xConnect */ + void(*xDestroy)(void*) /* Module destructor function */ +); + +/* +** CAPI3REF: Virtual Table Instance Object {F18010} +** KEYWORDS: sqlite3_vtab +** +** Every module implementation uses a subclass of the following structure +** to describe a particular instance of the module. Each subclass will +** be tailored to the specific needs of the module implementation. The +** purpose of this superclass is to define certain fields that are common +** to all module implementations. +** +** Virtual tables methods can set an error message by assigning a +** string obtained from sqlite3_mprintf() to zErrMsg. The method should +** take care that any prior string is freed by a call to sqlite3_free() +** prior to assigning a new string to zErrMsg. After the error message +** is delivered up to the client application, the string will be automatically +** freed by sqlite3_free() and the zErrMsg field will be zeroed. Note +** that sqlite3_mprintf() and sqlite3_free() are used on the zErrMsg field +** since virtual tables are commonly implemented in loadable extensions which +** do not have access to sqlite3MPrintf() or sqlite3Free(). 
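As a rough illustration of the sqlite3_index_info structure defined above (not part of this checkin), a virtual table's xBestIndex method might, at its simplest, decline every constraint and report a flat full-scan cost; the 1000-row estimate is an arbitrary placeholder.

    static int exampleBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
      int i;
      (void)pVTab;
      for(i=0; i<pInfo->nConstraint; i++){
        pInfo->aConstraintUsage[i].argvIndex = 0;  /* pass nothing to xFilter */
        pInfo->aConstraintUsage[i].omit = 0;       /* let SQLite re-check each term */
      }
      pInfo->idxNum = 0;               /* handed back to xFilter unchanged */
      pInfo->idxStr = 0;
      pInfo->needToFreeIdxStr = 0;
      pInfo->orderByConsumed = 0;      /* SQLite performs any ORDER BY sort itself */
      pInfo->estimatedCost = 1000.0;   /* pretend a full scan visits ~1000 rows */
      return SQLITE_OK;
    }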
+*/ +struct sqlite3_vtab { + const sqlite3_module *pModule; /* The module for this virtual table */ + int nRef; /* Used internally */ + char *zErrMsg; /* Error message from sqlite3_mprintf() */ + /* Virtual table implementations will typically add additional fields */ +}; + +/* +** CAPI3REF: Virtual Table Cursor Object {F18020} +** KEYWORDS: sqlite3_vtab_cursor +** +** Every module implementation uses a subclass of the following structure +** to describe cursors that point into the virtual table and are used +** to loop through the virtual table. Cursors are created using the +** xOpen method of the module. Each module implementation will define +** the content of a cursor structure to suit its own needs. +** +** This superclass exists in order to define fields of the cursor that +** are common to all implementations. +*/ +struct sqlite3_vtab_cursor { + sqlite3_vtab *pVtab; /* Virtual table of this cursor */ + /* Virtual table implementations will typically add additional fields */ +}; + +/* +** CAPI3REF: Declare The Schema Of A Virtual Table {F18280} +** +** The xCreate and xConnect methods of a module use the following API +** to declare the format (the names and datatypes of the columns) of +** the virtual tables they implement. +*/ +int sqlite3_declare_vtab(sqlite3*, const char *zCreateTable); + +/* +** CAPI3REF: Overload A Function For A Virtual Table {F18300} +** +** Virtual tables can provide alternative implementations of functions +** using the xFindFunction method. But global versions of those functions +** must exist in order to be overloaded. +** +** This API makes sure a global version of a function with a particular +** name and number of parameters exists. If no such function exists +** before this API is called, a new function is created. The implementation +** of the new function always causes an exception to be thrown. So +** the new function is not good for anything by itself. Its only +** purpose is to be a place-holder function that can be overloaded +** by virtual tables. +** +** This API should be considered part of the virtual table interface, +** which is experimental and subject to change. +*/ +int sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg); + +/* +** The interface to the virtual-table mechanism defined above (back up +** to a comment remarkably similar to this one) is currently considered +** to be experimental. The interface might change in incompatible ways. +** If this is a problem for you, do not use the interface at this time. +** +** When the virtual-table mechanism stabilizes, we will declare the +** interface fixed, support it indefinitely, and remove this comment. +** +****** EXPERIMENTAL - subject to change without notice ************** +*/ + +/* +** CAPI3REF: A Handle To An Open BLOB {F17800} +** +** An instance of this object represents an open BLOB on which +** incremental I/O can be preformed. +** Objects of this type are created by +** [sqlite3_blob_open()] and destroyed by [sqlite3_blob_close()]. +** The [sqlite3_blob_read()] and [sqlite3_blob_write()] interfaces +** can be used to read or write small subsections of the blob. +** The [sqlite3_blob_bytes()] interface returns the size of the +** blob in bytes. +*/ +typedef struct sqlite3_blob sqlite3_blob; + +/* +** CAPI3REF: Open A BLOB For Incremental I/O {F17810} +** +** This interfaces opens a handle to the blob located +** in row iRow,, column zColumn, table zTable in database zDb; +** in other words, the same blob that would be selected by: +** +**
+**     SELECT zColumn FROM zDb.zTable WHERE rowid = iRow;
+**
                {END} +** +** If the flags parameter is non-zero, the blob is opened for +** read and write access. If it is zero, the blob is opened for read +** access. +** +** On success, [SQLITE_OK] is returned and the new +** [sqlite3_blob | blob handle] is written to *ppBlob. +** Otherwise an error code is returned and +** any value written to *ppBlob should not be used by the caller. +** This function sets the database-handle error code and message +** accessible via [sqlite3_errcode()] and [sqlite3_errmsg()]. +** +** INVARIANTS: +** +** {F17813} A successful invocation of the [sqlite3_blob_open(D,B,T,C,R,F,P)] +** interface opens an [sqlite3_blob] object P on the blob +** in column C of table T in database B on [database connection] D. +** +** {F17814} A successful invocation of [sqlite3_blob_open(D,...)] starts +** a new transaction on [database connection] D if that connection +** is not already in a transaction. +** +** {F17816} The [sqlite3_blob_open(D,B,T,C,R,F,P)] interface opens the blob +** for read and write access if and only if the F parameter +** is non-zero. +** +** {F17819} The [sqlite3_blob_open()] interface returns [SQLITE_OK] on +** success and an appropriate [error code] on failure. +** +** {F17821} If an error occurs during evaluation of [sqlite3_blob_open(D,...)] +** then subsequent calls to [sqlite3_errcode(D)], +** [sqlite3_errmsg(D)], and [sqlite3_errmsg16(D)] will return +** information approprate for that error. +*/ +int sqlite3_blob_open( + sqlite3*, + const char *zDb, + const char *zTable, + const char *zColumn, + sqlite3_int64 iRow, + int flags, + sqlite3_blob **ppBlob +); + +/* +** CAPI3REF: Close A BLOB Handle {F17830} +** +** Close an open [sqlite3_blob | blob handle]. +** +** Closing a BLOB shall cause the current transaction to commit +** if there are no other BLOBs, no pending prepared statements, and the +** database connection is in autocommit mode. +** If any writes were made to the BLOB, they might be held in cache +** until the close operation if they will fit. {END} +** Closing the BLOB often forces the changes +** out to disk and so if any I/O errors occur, they will likely occur +** at the time when the BLOB is closed. {F17833} Any errors that occur during +** closing are reported as a non-zero return value. +** +** The BLOB is closed unconditionally. Even if this routine returns +** an error code, the BLOB is still closed. +** +** INVARIANTS: +** +** {F17833} The [sqlite3_blob_close(P)] interface closes an +** [sqlite3_blob] object P previously opened using +** [sqlite3_blob_open()]. +** +** {F17836} Closing an [sqlite3_blob] object using +** [sqlite3_blob_close()] shall cause the current transaction to +** commit if there are no other open [sqlite3_blob] objects +** or [prepared statements] on the same [database connection] and +** the [database connection] is in +** [sqlite3_get_autocommit | autocommit mode]. +** +** {F17839} The [sqlite3_blob_close(P)] interfaces closes the +** [sqlite3_blob] object P unconditionally, even if +** [sqlite3_blob_close(P)] returns something other than [SQLITE_OK]. +** +*/ +int sqlite3_blob_close(sqlite3_blob *); + +/* +** CAPI3REF: Return The Size Of An Open BLOB {F17840} +** +** Return the size in bytes of the blob accessible via the open +** [sqlite3_blob] object in its only argument. +** +** INVARIANTS: +** +** {F17843} The [sqlite3_blob_bytes(P)] interface returns the size +** in bytes of the BLOB that the [sqlite3_blob] object P +** refers to. 
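A hedged sketch of the incremental BLOB interfaces described above; it assumes a table "t1" with a BLOB column "data" (both names invented for the example) and opens the handle read-only by passing zero for the flags parameter.

    #include <sqlite3.h>

    static int read_blob_prefix(sqlite3 *db, sqlite3_int64 rowid,
                                void *buf, int nBuf){
      sqlite3_blob *pBlob = 0;
      int rc = sqlite3_blob_open(db, "main", "t1", "data", rowid, 0, &pBlob);
      if( rc!=SQLITE_OK ) return rc;
      if( sqlite3_blob_bytes(pBlob)>=nBuf ){
        rc = sqlite3_blob_read(pBlob, buf, nBuf, 0);  /* nBuf bytes from offset 0 */
      }
      sqlite3_blob_close(pBlob);  /* the handle is closed even if the read failed */
      return rc;
    }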
+*/ +int sqlite3_blob_bytes(sqlite3_blob *); + +/* +** CAPI3REF: Read Data From A BLOB Incrementally {F17850} +** +** This function is used to read data from an open +** [sqlite3_blob | blob-handle] into a caller supplied buffer. +** N bytes of data are copied into buffer +** Z from the open blob, starting at offset iOffset. +** +** If offset iOffset is less than N bytes from the end of the blob, +** [SQLITE_ERROR] is returned and no data is read. If N or iOffset is +** less than zero [SQLITE_ERROR] is returned and no data is read. +** +** On success, SQLITE_OK is returned. Otherwise, an +** [error code] or an [extended error code] is returned. +** +** INVARIANTS: +** +** {F17853} The [sqlite3_blob_read(P,Z,N,X)] interface reads N bytes +** beginning at offset X from +** the blob that [sqlite3_blob] object P refers to +** and writes those N bytes into buffer Z. +** +** {F17856} In [sqlite3_blob_read(P,Z,N,X)] if the size of the blob +** is less than N+X bytes, then the function returns [SQLITE_ERROR] +** and nothing is read from the blob. +** +** {F17859} In [sqlite3_blob_read(P,Z,N,X)] if X or N is less than zero +** then the function returns [SQLITE_ERROR] +** and nothing is read from the blob. +** +** {F17862} The [sqlite3_blob_read(P,Z,N,X)] interface returns [SQLITE_OK] +** if N bytes where successfully read into buffer Z. +** +** {F17865} If the requested read could not be completed, +** the [sqlite3_blob_read(P,Z,N,X)] interface returns an +** appropriate [error code] or [extended error code]. +** +** {F17868} If an error occurs during evaluation of [sqlite3_blob_read(D,...)] +** then subsequent calls to [sqlite3_errcode(D)], +** [sqlite3_errmsg(D)], and [sqlite3_errmsg16(D)] will return +** information approprate for that error. +*/ +int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset); + +/* +** CAPI3REF: Write Data Into A BLOB Incrementally {F17870} +** +** This function is used to write data into an open +** [sqlite3_blob | blob-handle] from a user supplied buffer. +** n bytes of data are copied from the buffer +** pointed to by z into the open blob, starting at offset iOffset. +** +** If the [sqlite3_blob | blob-handle] passed as the first argument +** was not opened for writing (the flags parameter to [sqlite3_blob_open()] +*** was zero), this function returns [SQLITE_READONLY]. +** +** This function may only modify the contents of the blob; it is +** not possible to increase the size of a blob using this API. +** If offset iOffset is less than n bytes from the end of the blob, +** [SQLITE_ERROR] is returned and no data is written. If n is +** less than zero [SQLITE_ERROR] is returned and no data is written. +** +** On success, SQLITE_OK is returned. Otherwise, an +** [error code] or an [extended error code] is returned. +** +** INVARIANTS: +** +** {F17873} The [sqlite3_blob_write(P,Z,N,X)] interface writes N bytes +** from buffer Z into +** the blob that [sqlite3_blob] object P refers to +** beginning at an offset of X into the blob. +** +** {F17875} The [sqlite3_blob_write(P,Z,N,X)] interface returns +** [SQLITE_READONLY] if the [sqlite3_blob] object P was +** [sqlite3_blob_open | opened] for reading only. +** +** {F17876} In [sqlite3_blob_write(P,Z,N,X)] if the size of the blob +** is less than N+X bytes, then the function returns [SQLITE_ERROR] +** and nothing is written into the blob. +** +** {F17879} In [sqlite3_blob_write(P,Z,N,X)] if X or N is less than zero +** then the function returns [SQLITE_ERROR] +** and nothing is written into the blob. 
+** +** {F17882} The [sqlite3_blob_write(P,Z,N,X)] interface returns [SQLITE_OK] +** if N bytes where successfully written into blob. +** +** {F17885} If the requested write could not be completed, +** the [sqlite3_blob_write(P,Z,N,X)] interface returns an +** appropriate [error code] or [extended error code]. +** +** {F17888} If an error occurs during evaluation of [sqlite3_blob_write(D,...)] +** then subsequent calls to [sqlite3_errcode(D)], +** [sqlite3_errmsg(D)], and [sqlite3_errmsg16(D)] will return +** information approprate for that error. +*/ +int sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset); + +/* +** CAPI3REF: Virtual File System Objects {F11200} +** +** A virtual filesystem (VFS) is an [sqlite3_vfs] object +** that SQLite uses to interact +** with the underlying operating system. Most SQLite builds come with a +** single default VFS that is appropriate for the host computer. +** New VFSes can be registered and existing VFSes can be unregistered. +** The following interfaces are provided. +** +** The sqlite3_vfs_find() interface returns a pointer to +** a VFS given its name. Names are case sensitive. +** Names are zero-terminated UTF-8 strings. +** If there is no match, a NULL +** pointer is returned. If zVfsName is NULL then the default +** VFS is returned. +** +** New VFSes are registered with sqlite3_vfs_register(). +** Each new VFS becomes the default VFS if the makeDflt flag is set. +** The same VFS can be registered multiple times without injury. +** To make an existing VFS into the default VFS, register it again +** with the makeDflt flag set. If two different VFSes with the +** same name are registered, the behavior is undefined. If a +** VFS is registered with a name that is NULL or an empty string, +** then the behavior is undefined. +** +** Unregister a VFS with the sqlite3_vfs_unregister() interface. +** If the default VFS is unregistered, another VFS is chosen as +** the default. The choice for the new VFS is arbitrary. +** +** INVARIANTS: +** +** {F11203} The [sqlite3_vfs_find(N)] interface returns a pointer to the +** registered [sqlite3_vfs] object whose name exactly matches +** the zero-terminated UTF-8 string N, or it returns NULL if +** there is no match. +** +** {F11206} If the N parameter to [sqlite3_vfs_find(N)] is NULL then +** the function returns a pointer to the default [sqlite3_vfs] +** object if there is one, or NULL if there is no default +** [sqlite3_vfs] object. +** +** {F11209} The [sqlite3_vfs_register(P,F)] interface registers the +** well-formed [sqlite3_vfs] object P using the name given +** by the zName field of the object. +** +** {F11212} Using the [sqlite3_vfs_register(P,F)] interface to register +** the same [sqlite3_vfs] object multiple times is a harmless no-op. +** +** {F11215} The [sqlite3_vfs_register(P,F)] interface makes the +** the [sqlite3_vfs] object P the default [sqlite3_vfs] object +** if F is non-zero. +** +** {F11218} The [sqlite3_vfs_unregister(P)] interface unregisters the +** [sqlite3_vfs] object P so that it is no longer returned by +** subsequent calls to [sqlite3_vfs_find()]. +*/ +sqlite3_vfs *sqlite3_vfs_find(const char *zVfsName); +int sqlite3_vfs_register(sqlite3_vfs*, int makeDflt); +int sqlite3_vfs_unregister(sqlite3_vfs*); + +/* +** CAPI3REF: Mutexes {F17000} +** +** The SQLite core uses these routines for thread +** synchronization. Though they are intended for internal +** use by SQLite, code that links against SQLite is +** permitted to use any of these routines. 
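A short sketch of looking up VFS objects with sqlite3_vfs_find(), declared above; the name "unix" exists only in builds that register the unix VFS.

    #include <stdio.h>
    #include <sqlite3.h>

    static void show_vfs(void){
      sqlite3_vfs *pDflt = sqlite3_vfs_find(0);       /* the default VFS, if any */
      sqlite3_vfs *pUnix = sqlite3_vfs_find("unix");  /* NULL when not registered */
      if( pDflt ) printf("default VFS: %s\n", pDflt->zName);
      if( pUnix ) printf("the \"unix\" VFS is registered\n");
    }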
+** +** The SQLite source code contains multiple implementations +** of these mutex routines. An appropriate implementation +** is selected automatically at compile-time. The following +** implementations are available in the SQLite core: +** +**
+**   • SQLITE_MUTEX_OS2
+**   • SQLITE_MUTEX_PTHREAD
+**   • SQLITE_MUTEX_W32
+**   • SQLITE_MUTEX_NOOP
+**
+** The SQLITE_MUTEX_NOOP implementation is a set of routines
+** that does no real locking and is appropriate for use in
+** a single-threaded application. The SQLITE_MUTEX_OS2,
+** SQLITE_MUTEX_PTHREAD, and SQLITE_MUTEX_W32 implementations
+** are appropriate for use on os/2, unix, and windows.
+**
+** If SQLite is compiled with the SQLITE_MUTEX_APPDEF preprocessor
+** macro defined (with "-DSQLITE_MUTEX_APPDEF=1"), then no mutex
+** implementation is included with the library. The
+** mutex interface routines defined here become external
+** references in the SQLite library for which implementations
+** must be provided by the application. This facility allows an
+** application that links against SQLite to provide its own mutex
+** implementation without having to modify the SQLite core.
+**
+** {F17011} The sqlite3_mutex_alloc() routine allocates a new
+** mutex and returns a pointer to it. {F17012} If it returns NULL
+** that means that a mutex could not be allocated. {F17013} SQLite
+** will unwind its stack and return an error. {F17014} The argument
+** to sqlite3_mutex_alloc() is one of these integer constants:
+**
+**    - SQLITE_MUTEX_FAST
+**    - SQLITE_MUTEX_RECURSIVE
+**    - SQLITE_MUTEX_STATIC_MASTER
+**    - SQLITE_MUTEX_STATIC_MEM
+**    - SQLITE_MUTEX_STATIC_MEM2
+**    - SQLITE_MUTEX_STATIC_PRNG
+**    - SQLITE_MUTEX_STATIC_LRU
+**    {END}
+**
+** {F17015} The first two constants cause sqlite3_mutex_alloc() to create
+** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE
+** is used but not necessarily so when SQLITE_MUTEX_FAST is used. {END}
+** The mutex implementation does not need to make a distinction
+** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does
+** not want to. {F17016} But SQLite will only request a recursive mutex in
+** cases where it really needs one. {END} If a faster non-recursive mutex
+** implementation is available on the host platform, the mutex subsystem
+** might return such a mutex in response to SQLITE_MUTEX_FAST.
+**
+** {F17017} The other allowed parameters to sqlite3_mutex_alloc() each return
+** a pointer to a static preexisting mutex. {END} Four static mutexes are
+** used by the current version of SQLite. Future versions of SQLite
+** may add additional static mutexes. Static mutexes are for internal
+** use by SQLite only. Applications that use SQLite mutexes should
+** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or
+** SQLITE_MUTEX_RECURSIVE.
+**
+** {F17018} Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST
+** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc()
+** returns a different mutex on every call. {F17034} But for the static
+** mutex types, the same mutex is returned on every call that has
+** the same type number. {END}
+**
+** {F17019} The sqlite3_mutex_free() routine deallocates a previously
+** allocated dynamic mutex. {F17020} SQLite is careful to deallocate every
+** dynamic mutex that it allocates. {U17021} The dynamic mutexes must not be in
+** use when they are deallocated. {U17022} Attempting to deallocate a static
+** mutex results in undefined behavior. {F17023} SQLite never deallocates
+** a static mutex. {END}
+**
+** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
+** to enter a mutex. {F17024} If another thread is already within the mutex,
+** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
+** SQLITE_BUSY. {F17025} The sqlite3_mutex_try() interface returns SQLITE_OK
+** upon successful entry. {F17026} Mutexes created using
+** SQLITE_MUTEX_RECURSIVE can be entered multiple times by the same thread.
+** {F17027} In such cases, the
+** mutex must be exited an equal number of times before another thread
+** can enter. {U17028} If the same thread tries to enter any other
+** kind of mutex more than once, the behavior is undefined.
+** {F17029} SQLite will never exhibit
+** such behavior in its own use of mutexes. {END}
+**
+** Some systems (ex: windows95) do not support the operation implemented by
+** sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() will
+** always return SQLITE_BUSY. {F17030} The SQLite core only ever uses
+** sqlite3_mutex_try() as an optimization so this is acceptable behavior. {END}
+**
+** {F17031} The sqlite3_mutex_leave() routine exits a mutex that was
+** previously entered by the same thread. {U17032} The behavior
+** is undefined if the mutex is not currently entered by the
+** calling thread or is not currently allocated. {F17033} SQLite will
+** never do either. {END}
+**
+** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()].
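
The allocate / enter / leave / free life cycle described above can be summarized in a few lines. The sketch below is illustrative only, not part of sqlite3.h or of this checkin; the wrapper name with_recursive_mutex() is hypothetical, but the calls follow the documented behavior of the dynamic-mutex interfaces.

/* Illustrative only: not part of sqlite3.h or of this checkin. */
#include "sqlite3.h"

void with_recursive_mutex(void (*xWork)(void*), void *pArg){
  sqlite3_mutex *p = sqlite3_mutex_alloc(SQLITE_MUTEX_RECURSIVE);
  if( p==0 ) return;          /* allocation failed */
  sqlite3_mutex_enter(p);     /* blocks until the mutex is available */
  xWork(pArg);                /* protected region */
  sqlite3_mutex_leave(p);     /* must balance the enter */
  sqlite3_mutex_free(p);      /* only dynamic mutexes may be freed */
}
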
+*/
+sqlite3_mutex *sqlite3_mutex_alloc(int);
+void sqlite3_mutex_free(sqlite3_mutex*);
+void sqlite3_mutex_enter(sqlite3_mutex*);
+int sqlite3_mutex_try(sqlite3_mutex*);
+void sqlite3_mutex_leave(sqlite3_mutex*);
+
+/*
+** CAPI3REF: Mutex Verification Routines {F17080}
+**
+** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routines
+** are intended for use inside assert() statements. {F17081} The SQLite core
+** never uses these routines except inside an assert() and applications
+** are advised to follow the lead of the core. {F17082} The core only
+** provides implementations for these routines when it is compiled
+** with the SQLITE_DEBUG flag. {U17087} External mutex implementations
+** are only required to provide these routines if SQLITE_DEBUG is
+** defined and if NDEBUG is not defined.
+**
+** {F17083} These routines should return true if the mutex in their argument
+** is held or not held, respectively, by the calling thread. {END}
+**
+** {X17084} The implementation is not required to provide versions of these
+** routines that actually work.
+** If the implementation does not provide working
+** versions of these routines, it should at least provide stubs
+** that always return true so that one does not get spurious
+** assertion failures. {END}
+**
+** {F17085} If the argument to sqlite3_mutex_held() is a NULL pointer then
+** the routine should return 1. {END} This seems counter-intuitive since
+** clearly the mutex cannot be held if it does not exist. But the
+** reason the mutex does not exist is because the build is not
+** using mutexes. And we do not want the assert() containing the
+** call to sqlite3_mutex_held() to fail, so a non-zero return is
+** the appropriate thing to do. {F17086} The sqlite3_mutex_notheld()
+** interface should also return 1 when given a NULL pointer.
+*/
+int sqlite3_mutex_held(sqlite3_mutex*);
+int sqlite3_mutex_notheld(sqlite3_mutex*);
+
+/*
+** CAPI3REF: Mutex Types {F17001}
+**
+** {F17002} The [sqlite3_mutex_alloc()] interface takes a single argument
+** which is one of these integer constants. {END}
+*/
+#define SQLITE_MUTEX_FAST             0
+#define SQLITE_MUTEX_RECURSIVE        1
+#define SQLITE_MUTEX_STATIC_MASTER    2
+#define SQLITE_MUTEX_STATIC_MEM       3  /* sqlite3_malloc() */
+#define SQLITE_MUTEX_STATIC_MEM2      4  /* sqlite3_release_memory() */
+#define SQLITE_MUTEX_STATIC_PRNG      5  /* sqlite3_random() */
+#define SQLITE_MUTEX_STATIC_LRU       6  /* lru page list */
+
+/*
+** CAPI3REF: Low-Level Control Of Database Files {F11300}
+**
+** {F11301} The [sqlite3_file_control()] interface makes a direct call to the
+** xFileControl method for the [sqlite3_io_methods] object associated
+** with a particular database identified by the second argument. {F11302} The
+** name of the database is the name assigned to the database by the
+** ATTACH SQL command that opened the
+** database. {F11303} To control the main database file, use the name "main"
+** or a NULL pointer. {F11304} The third and fourth parameters to this routine
+** are passed directly through to the second and third parameters of
+** the xFileControl method. {F11305} The return value of the xFileControl
+** method becomes the return value of this routine.
+**
+** {F11306} If the second parameter (zDbName) does not match the name of any
+** open database file, then SQLITE_ERROR is returned. {F11307} This error
+** code is not remembered and will not be recalled by [sqlite3_errcode()]
+** or [sqlite3_errmsg()]. {U11308} The underlying xFileControl method might
+** also return SQLITE_ERROR. {U11309} There is no way to distinguish between
+** an incorrect zDbName and an SQLITE_ERROR return from the underlying
+** xFileControl method. {END}
+**
+** See also: [SQLITE_FCNTL_LOCKSTATE]
+*/
+int sqlite3_file_control(sqlite3*, const char *zDbName, int op, void*);
+
+/*
+** CAPI3REF: Testing Interface {F11400}
+**
+** The sqlite3_test_control() interface is used to read out internal
+** state of SQLite and to inject faults into SQLite for testing
+** purposes. The first parameter is an operation code that determines
+** the number, meaning, and operation of all subsequent parameters.
+**
+** This interface is not for use by applications. It exists solely
+** for verifying the correct operation of the SQLite library. Depending
+** on how the SQLite library is compiled, this interface might not exist.
+**
+** The details of the operation codes, their meanings, the parameters
+** they take, and what they do are all subject to change without notice.
+** Unlike most of the SQLite API, this function is not guaranteed to
+** operate consistently from one release to the next.
+*/
+int sqlite3_test_control(int op, ...);
+
+/*
+** CAPI3REF: Testing Interface Operation Codes {F11410}
+**
+** These constants are the valid operation code parameters used
+** as the first argument to [sqlite3_test_control()].
+**
+** These parameters and their meanings are subject to change
+** without notice. These values are for testing purposes only.
+** Applications should not use any of these parameters or the
+** [sqlite3_test_control()] interface.
+*/
+#define SQLITE_TESTCTRL_FAULT_CONFIG             1
+#define SQLITE_TESTCTRL_FAULT_FAILURES           2
+#define SQLITE_TESTCTRL_FAULT_BENIGN_FAILURES    3
+#define SQLITE_TESTCTRL_FAULT_PENDING            4
+
+
+
+
+
+/*
+** Undo the hack that converts floating point types to integer for
+** builds on processors without floating point support.
+*/
+#ifdef SQLITE_OMIT_FLOATING_POINT
+# undef double
+#endif
+
+#ifdef __cplusplus
+}  /* End of the 'extern "C"' block */
+#endif
+#endif

Added: external/sqlite-source-3.5.7.x/sqlite3ext.h
==============================================================================
--- (empty file)
+++ external/sqlite-source-3.5.7.x/sqlite3ext.h	Wed Mar 19 03:00:27 2008
@@ -0,0 +1,350 @@
+/*
+** 2006 June 7
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This header file defines the SQLite interface for use by
+** shared libraries that want to be imported as extensions into
+** an SQLite instance. Shared libraries that intend to be loaded
+** as extensions by SQLite should #include this file instead of
+** sqlite3.h.
+**
+** @(#) $Id: sqlite3ext.h,v 1.18 2008/03/02 03:32:05 mlcreech Exp $
+*/
+#ifndef _SQLITE3EXT_H_
+#define _SQLITE3EXT_H_
+#include "sqlite3.h"
+
+typedef struct sqlite3_api_routines sqlite3_api_routines;
+
+/*
+** The following structure holds pointers to all of the SQLite API
+** routines.
+**
+** WARNING: In order to maintain backwards compatibility, add new
+** interfaces to the end of this structure only. If you insert new
+** interfaces in the middle of this structure, then different
+** versions of SQLite will not be able to load each other's shared
+** libraries!
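
Before the structure itself, it may help to see how a loadable extension typically consumes it. The sketch below is illustrative only and not part of sqlite3ext.h or of this checkin; it relies on the SQLITE_EXTENSION_INIT1/SQLITE_EXTENSION_INIT2 macros defined near the end of this header, and the SQL function half() it registers is made up for the example.

/* Illustrative only: not part of sqlite3ext.h or of this checkin.
** Minimal loadable extension; API calls are routed through the
** sqlite3_api_routines pointer handed to the entry point. */
#include "sqlite3ext.h"
SQLITE_EXTENSION_INIT1   /* declares the global sqlite3_api pointer */

static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
}

int sqlite3_extension_init(
  sqlite3 *db,
  char **pzErrMsg,
  const sqlite3_api_routines *pApi
){
  SQLITE_EXTENSION_INIT2(pApi);   /* route redefined API calls through pApi */
  return sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0,
                                 halfFunc, 0, 0);
}
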
+*/ +struct sqlite3_api_routines { + void * (*aggregate_context)(sqlite3_context*,int nBytes); + int (*aggregate_count)(sqlite3_context*); + int (*bind_blob)(sqlite3_stmt*,int,const void*,int n,void(*)(void*)); + int (*bind_double)(sqlite3_stmt*,int,double); + int (*bind_int)(sqlite3_stmt*,int,int); + int (*bind_int64)(sqlite3_stmt*,int,sqlite_int64); + int (*bind_null)(sqlite3_stmt*,int); + int (*bind_parameter_count)(sqlite3_stmt*); + int (*bind_parameter_index)(sqlite3_stmt*,const char*zName); + const char * (*bind_parameter_name)(sqlite3_stmt*,int); + int (*bind_text)(sqlite3_stmt*,int,const char*,int n,void(*)(void*)); + int (*bind_text16)(sqlite3_stmt*,int,const void*,int,void(*)(void*)); + int (*bind_value)(sqlite3_stmt*,int,const sqlite3_value*); + int (*busy_handler)(sqlite3*,int(*)(void*,int),void*); + int (*busy_timeout)(sqlite3*,int ms); + int (*changes)(sqlite3*); + int (*close)(sqlite3*); + int (*collation_needed)(sqlite3*,void*,void(*)(void*,sqlite3*,int eTextRep,const char*)); + int (*collation_needed16)(sqlite3*,void*,void(*)(void*,sqlite3*,int eTextRep,const void*)); + const void * (*column_blob)(sqlite3_stmt*,int iCol); + int (*column_bytes)(sqlite3_stmt*,int iCol); + int (*column_bytes16)(sqlite3_stmt*,int iCol); + int (*column_count)(sqlite3_stmt*pStmt); + const char * (*column_database_name)(sqlite3_stmt*,int); + const void * (*column_database_name16)(sqlite3_stmt*,int); + const char * (*column_decltype)(sqlite3_stmt*,int i); + const void * (*column_decltype16)(sqlite3_stmt*,int); + double (*column_double)(sqlite3_stmt*,int iCol); + int (*column_int)(sqlite3_stmt*,int iCol); + sqlite_int64 (*column_int64)(sqlite3_stmt*,int iCol); + const char * (*column_name)(sqlite3_stmt*,int); + const void * (*column_name16)(sqlite3_stmt*,int); + const char * (*column_origin_name)(sqlite3_stmt*,int); + const void * (*column_origin_name16)(sqlite3_stmt*,int); + const char * (*column_table_name)(sqlite3_stmt*,int); + const void * (*column_table_name16)(sqlite3_stmt*,int); + const unsigned char * (*column_text)(sqlite3_stmt*,int iCol); + const void * (*column_text16)(sqlite3_stmt*,int iCol); + int (*column_type)(sqlite3_stmt*,int iCol); + sqlite3_value* (*column_value)(sqlite3_stmt*,int iCol); + void * (*commit_hook)(sqlite3*,int(*)(void*),void*); + int (*complete)(const char*sql); + int (*complete16)(const void*sql); + int (*create_collation)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*)); + int (*create_collation16)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*)); + int (*create_function)(sqlite3*,const char*,int,int,void*,void (*xFunc)(sqlite3_context*,int,sqlite3_value**),void (*xStep)(sqlite3_context*,int,sqlite3_value**),void (*xFinal)(sqlite3_context*)); + int (*create_function16)(sqlite3*,const void*,int,int,void*,void (*xFunc)(sqlite3_context*,int,sqlite3_value**),void (*xStep)(sqlite3_context*,int,sqlite3_value**),void (*xFinal)(sqlite3_context*)); + int (*create_module)(sqlite3*,const char*,const sqlite3_module*,void*); + int (*data_count)(sqlite3_stmt*pStmt); + sqlite3 * (*db_handle)(sqlite3_stmt*); + int (*declare_vtab)(sqlite3*,const char*); + int (*enable_shared_cache)(int); + int (*errcode)(sqlite3*db); + const char * (*errmsg)(sqlite3*); + const void * (*errmsg16)(sqlite3*); + int (*exec)(sqlite3*,const char*,sqlite3_callback,void*,char**); + int (*expired)(sqlite3_stmt*); + int (*finalize)(sqlite3_stmt*pStmt); + void (*free)(void*); + void (*free_table)(char**result); + int (*get_autocommit)(sqlite3*); + 
void * (*get_auxdata)(sqlite3_context*,int); + int (*get_table)(sqlite3*,const char*,char***,int*,int*,char**); + int (*global_recover)(void); + void (*interruptx)(sqlite3*); + sqlite_int64 (*last_insert_rowid)(sqlite3*); + const char * (*libversion)(void); + int (*libversion_number)(void); + void *(*malloc)(int); + char * (*mprintf)(const char*,...); + int (*open)(const char*,sqlite3**); + int (*open16)(const void*,sqlite3**); + int (*prepare)(sqlite3*,const char*,int,sqlite3_stmt**,const char**); + int (*prepare16)(sqlite3*,const void*,int,sqlite3_stmt**,const void**); + void * (*profile)(sqlite3*,void(*)(void*,const char*,sqlite_uint64),void*); + void (*progress_handler)(sqlite3*,int,int(*)(void*),void*); + void *(*realloc)(void*,int); + int (*reset)(sqlite3_stmt*pStmt); + void (*result_blob)(sqlite3_context*,const void*,int,void(*)(void*)); + void (*result_double)(sqlite3_context*,double); + void (*result_error)(sqlite3_context*,const char*,int); + void (*result_error16)(sqlite3_context*,const void*,int); + void (*result_int)(sqlite3_context*,int); + void (*result_int64)(sqlite3_context*,sqlite_int64); + void (*result_null)(sqlite3_context*); + void (*result_text)(sqlite3_context*,const char*,int,void(*)(void*)); + void (*result_text16)(sqlite3_context*,const void*,int,void(*)(void*)); + void (*result_text16be)(sqlite3_context*,const void*,int,void(*)(void*)); + void (*result_text16le)(sqlite3_context*,const void*,int,void(*)(void*)); + void (*result_value)(sqlite3_context*,sqlite3_value*); + void * (*rollback_hook)(sqlite3*,void(*)(void*),void*); + int (*set_authorizer)(sqlite3*,int(*)(void*,int,const char*,const char*,const char*,const char*),void*); + void (*set_auxdata)(sqlite3_context*,int,void*,void (*)(void*)); + char * (*snprintf)(int,char*,const char*,...); + int (*step)(sqlite3_stmt*); + int (*table_column_metadata)(sqlite3*,const char*,const char*,const char*,char const**,char const**,int*,int*,int*); + void (*thread_cleanup)(void); + int (*total_changes)(sqlite3*); + void * (*trace)(sqlite3*,void(*xTrace)(void*,const char*),void*); + int (*transfer_bindings)(sqlite3_stmt*,sqlite3_stmt*); + void * (*update_hook)(sqlite3*,void(*)(void*,int ,char const*,char const*,sqlite_int64),void*); + void * (*user_data)(sqlite3_context*); + const void * (*value_blob)(sqlite3_value*); + int (*value_bytes)(sqlite3_value*); + int (*value_bytes16)(sqlite3_value*); + double (*value_double)(sqlite3_value*); + int (*value_int)(sqlite3_value*); + sqlite_int64 (*value_int64)(sqlite3_value*); + int (*value_numeric_type)(sqlite3_value*); + const unsigned char * (*value_text)(sqlite3_value*); + const void * (*value_text16)(sqlite3_value*); + const void * (*value_text16be)(sqlite3_value*); + const void * (*value_text16le)(sqlite3_value*); + int (*value_type)(sqlite3_value*); + char *(*vmprintf)(const char*,va_list); + /* Added ??? 
*/ + int (*overload_function)(sqlite3*, const char *zFuncName, int nArg); + /* Added by 3.3.13 */ + int (*prepare_v2)(sqlite3*,const char*,int,sqlite3_stmt**,const char**); + int (*prepare16_v2)(sqlite3*,const void*,int,sqlite3_stmt**,const void**); + int (*clear_bindings)(sqlite3_stmt*); + /* Added by 3.4.1 */ + int (*create_module_v2)(sqlite3*,const char*,const sqlite3_module*,void*,void (*xDestroy)(void *)); + /* Added by 3.5.0 */ + int (*bind_zeroblob)(sqlite3_stmt*,int,int); + int (*blob_bytes)(sqlite3_blob*); + int (*blob_close)(sqlite3_blob*); + int (*blob_open)(sqlite3*,const char*,const char*,const char*,sqlite3_int64,int,sqlite3_blob**); + int (*blob_read)(sqlite3_blob*,void*,int,int); + int (*blob_write)(sqlite3_blob*,const void*,int,int); + int (*create_collation_v2)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*),void(*)(void*)); + int (*file_control)(sqlite3*,const char*,int,void*); + sqlite3_int64 (*memory_highwater)(int); + sqlite3_int64 (*memory_used)(void); + sqlite3_mutex *(*mutex_alloc)(int); + void (*mutex_enter)(sqlite3_mutex*); + void (*mutex_free)(sqlite3_mutex*); + void (*mutex_leave)(sqlite3_mutex*); + int (*mutex_try)(sqlite3_mutex*); + int (*open_v2)(const char*,sqlite3**,int,const char*); + int (*release_memory)(int); + void (*result_error_nomem)(sqlite3_context*); + void (*result_error_toobig)(sqlite3_context*); + int (*sleep)(int); + void (*soft_heap_limit)(int); + sqlite3_vfs *(*vfs_find)(const char*); + int (*vfs_register)(sqlite3_vfs*,int); + int (*vfs_unregister)(sqlite3_vfs*); +}; + +/* +** The following macros redefine the API routines so that they are +** redirected throught the global sqlite3_api structure. +** +** This header file is also used by the loadext.c source file +** (part of the main SQLite library - not an extension) so that +** it can get access to the sqlite3_api_routines structure +** definition. But the main library does not want to redefine +** the API. So the redefinition macros are only valid if the +** SQLITE_CORE macros is undefined. 
+*/ +#ifndef SQLITE_CORE +#define sqlite3_aggregate_context sqlite3_api->aggregate_context +#define sqlite3_aggregate_count sqlite3_api->aggregate_count +#define sqlite3_bind_blob sqlite3_api->bind_blob +#define sqlite3_bind_double sqlite3_api->bind_double +#define sqlite3_bind_int sqlite3_api->bind_int +#define sqlite3_bind_int64 sqlite3_api->bind_int64 +#define sqlite3_bind_null sqlite3_api->bind_null +#define sqlite3_bind_parameter_count sqlite3_api->bind_parameter_count +#define sqlite3_bind_parameter_index sqlite3_api->bind_parameter_index +#define sqlite3_bind_parameter_name sqlite3_api->bind_parameter_name +#define sqlite3_bind_text sqlite3_api->bind_text +#define sqlite3_bind_text16 sqlite3_api->bind_text16 +#define sqlite3_bind_value sqlite3_api->bind_value +#define sqlite3_busy_handler sqlite3_api->busy_handler +#define sqlite3_busy_timeout sqlite3_api->busy_timeout +#define sqlite3_changes sqlite3_api->changes +#define sqlite3_close sqlite3_api->close +#define sqlite3_collation_needed sqlite3_api->collation_needed +#define sqlite3_collation_needed16 sqlite3_api->collation_needed16 +#define sqlite3_column_blob sqlite3_api->column_blob +#define sqlite3_column_bytes sqlite3_api->column_bytes +#define sqlite3_column_bytes16 sqlite3_api->column_bytes16 +#define sqlite3_column_count sqlite3_api->column_count +#define sqlite3_column_database_name sqlite3_api->column_database_name +#define sqlite3_column_database_name16 sqlite3_api->column_database_name16 +#define sqlite3_column_decltype sqlite3_api->column_decltype +#define sqlite3_column_decltype16 sqlite3_api->column_decltype16 +#define sqlite3_column_double sqlite3_api->column_double +#define sqlite3_column_int sqlite3_api->column_int +#define sqlite3_column_int64 sqlite3_api->column_int64 +#define sqlite3_column_name sqlite3_api->column_name +#define sqlite3_column_name16 sqlite3_api->column_name16 +#define sqlite3_column_origin_name sqlite3_api->column_origin_name +#define sqlite3_column_origin_name16 sqlite3_api->column_origin_name16 +#define sqlite3_column_table_name sqlite3_api->column_table_name +#define sqlite3_column_table_name16 sqlite3_api->column_table_name16 +#define sqlite3_column_text sqlite3_api->column_text +#define sqlite3_column_text16 sqlite3_api->column_text16 +#define sqlite3_column_type sqlite3_api->column_type +#define sqlite3_column_value sqlite3_api->column_value +#define sqlite3_commit_hook sqlite3_api->commit_hook +#define sqlite3_complete sqlite3_api->complete +#define sqlite3_complete16 sqlite3_api->complete16 +#define sqlite3_create_collation sqlite3_api->create_collation +#define sqlite3_create_collation16 sqlite3_api->create_collation16 +#define sqlite3_create_function sqlite3_api->create_function +#define sqlite3_create_function16 sqlite3_api->create_function16 +#define sqlite3_create_module sqlite3_api->create_module +#define sqlite3_create_module_v2 sqlite3_api->create_module_v2 +#define sqlite3_data_count sqlite3_api->data_count +#define sqlite3_db_handle sqlite3_api->db_handle +#define sqlite3_declare_vtab sqlite3_api->declare_vtab +#define sqlite3_enable_shared_cache sqlite3_api->enable_shared_cache +#define sqlite3_errcode sqlite3_api->errcode +#define sqlite3_errmsg sqlite3_api->errmsg +#define sqlite3_errmsg16 sqlite3_api->errmsg16 +#define sqlite3_exec sqlite3_api->exec +#define sqlite3_expired sqlite3_api->expired +#define sqlite3_finalize sqlite3_api->finalize +#define sqlite3_free sqlite3_api->free +#define sqlite3_free_table sqlite3_api->free_table +#define sqlite3_get_autocommit 
sqlite3_api->get_autocommit +#define sqlite3_get_auxdata sqlite3_api->get_auxdata +#define sqlite3_get_table sqlite3_api->get_table +#define sqlite3_global_recover sqlite3_api->global_recover +#define sqlite3_interrupt sqlite3_api->interruptx +#define sqlite3_last_insert_rowid sqlite3_api->last_insert_rowid +#define sqlite3_libversion sqlite3_api->libversion +#define sqlite3_libversion_number sqlite3_api->libversion_number +#define sqlite3_malloc sqlite3_api->malloc +#define sqlite3_mprintf sqlite3_api->mprintf +#define sqlite3_open sqlite3_api->open +#define sqlite3_open16 sqlite3_api->open16 +#define sqlite3_prepare sqlite3_api->prepare +#define sqlite3_prepare16 sqlite3_api->prepare16 +#define sqlite3_prepare_v2 sqlite3_api->prepare_v2 +#define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2 +#define sqlite3_profile sqlite3_api->profile +#define sqlite3_progress_handler sqlite3_api->progress_handler +#define sqlite3_realloc sqlite3_api->realloc +#define sqlite3_reset sqlite3_api->reset +#define sqlite3_result_blob sqlite3_api->result_blob +#define sqlite3_result_double sqlite3_api->result_double +#define sqlite3_result_error sqlite3_api->result_error +#define sqlite3_result_error16 sqlite3_api->result_error16 +#define sqlite3_result_int sqlite3_api->result_int +#define sqlite3_result_int64 sqlite3_api->result_int64 +#define sqlite3_result_null sqlite3_api->result_null +#define sqlite3_result_text sqlite3_api->result_text +#define sqlite3_result_text16 sqlite3_api->result_text16 +#define sqlite3_result_text16be sqlite3_api->result_text16be +#define sqlite3_result_text16le sqlite3_api->result_text16le +#define sqlite3_result_value sqlite3_api->result_value +#define sqlite3_rollback_hook sqlite3_api->rollback_hook +#define sqlite3_set_authorizer sqlite3_api->set_authorizer +#define sqlite3_set_auxdata sqlite3_api->set_auxdata +#define sqlite3_snprintf sqlite3_api->snprintf +#define sqlite3_step sqlite3_api->step +#define sqlite3_table_column_metadata sqlite3_api->table_column_metadata +#define sqlite3_thread_cleanup sqlite3_api->thread_cleanup +#define sqlite3_total_changes sqlite3_api->total_changes +#define sqlite3_trace sqlite3_api->trace +#define sqlite3_transfer_bindings sqlite3_api->transfer_bindings +#define sqlite3_update_hook sqlite3_api->update_hook +#define sqlite3_user_data sqlite3_api->user_data +#define sqlite3_value_blob sqlite3_api->value_blob +#define sqlite3_value_bytes sqlite3_api->value_bytes +#define sqlite3_value_bytes16 sqlite3_api->value_bytes16 +#define sqlite3_value_double sqlite3_api->value_double +#define sqlite3_value_int sqlite3_api->value_int +#define sqlite3_value_int64 sqlite3_api->value_int64 +#define sqlite3_value_numeric_type sqlite3_api->value_numeric_type +#define sqlite3_value_text sqlite3_api->value_text +#define sqlite3_value_text16 sqlite3_api->value_text16 +#define sqlite3_value_text16be sqlite3_api->value_text16be +#define sqlite3_value_text16le sqlite3_api->value_text16le +#define sqlite3_value_type sqlite3_api->value_type +#define sqlite3_vmprintf sqlite3_api->vmprintf +#define sqlite3_overload_function sqlite3_api->overload_function +#define sqlite3_prepare_v2 sqlite3_api->prepare_v2 +#define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2 +#define sqlite3_clear_bindings sqlite3_api->clear_bindings +#define sqlite3_bind_zeroblob sqlite3_api->bind_zeroblob +#define sqlite3_blob_bytes sqlite3_api->blob_bytes +#define sqlite3_blob_close sqlite3_api->blob_close +#define sqlite3_blob_open sqlite3_api->blob_open +#define sqlite3_blob_read 
sqlite3_api->blob_read +#define sqlite3_blob_write sqlite3_api->blob_write +#define sqlite3_create_collation_v2 sqlite3_api->create_collation_v2 +#define sqlite3_file_control sqlite3_api->file_control +#define sqlite3_memory_highwater sqlite3_api->memory_highwater +#define sqlite3_memory_used sqlite3_api->memory_used +#define sqlite3_mutex_alloc sqlite3_api->mutex_alloc +#define sqlite3_mutex_enter sqlite3_api->mutex_enter +#define sqlite3_mutex_free sqlite3_api->mutex_free +#define sqlite3_mutex_leave sqlite3_api->mutex_leave +#define sqlite3_mutex_try sqlite3_api->mutex_try +#define sqlite3_open_v2 sqlite3_api->open_v2 +#define sqlite3_release_memory sqlite3_api->release_memory +#define sqlite3_result_error_nomem sqlite3_api->result_error_nomem +#define sqlite3_result_error_toobig sqlite3_api->result_error_toobig +#define sqlite3_sleep sqlite3_api->sleep +#define sqlite3_soft_heap_limit sqlite3_api->soft_heap_limit +#define sqlite3_vfs_find sqlite3_api->vfs_find +#define sqlite3_vfs_register sqlite3_api->vfs_register +#define sqlite3_vfs_unregister sqlite3_api->vfs_unregister +#endif /* SQLITE_CORE */ + +#define SQLITE_EXTENSION_INIT1 const sqlite3_api_routines *sqlite3_api; +#define SQLITE_EXTENSION_INIT2(v) sqlite3_api = v; + +#endif /* _SQLITE3EXT_H_ */ Added: external/sqlite-source-3.5.7.x/sqliteInt.h ============================================================================== --- (empty file) +++ external/sqlite-source-3.5.7.x/sqliteInt.h Wed Mar 19 03:00:27 2008 @@ -0,0 +1,2185 @@ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Internal interface definitions for SQLite. +** +** @(#) $Id: sqliteInt.h,v 1.673 2008/03/14 13:02:08 mlcreech Exp $ +*/ +#ifndef _SQLITEINT_H_ +#define _SQLITEINT_H_ + +/* +** Include the configuration header output by 'configure' if it was run +** (otherwise we get an empty default). +*/ +#include "config.h" + +/* Needed for various definitions... */ +#define _GNU_SOURCE + +/* +** Include standard header files as necessary +*/ +#ifdef HAVE_STDINT_H +#include +#endif +#ifdef HAVE_INTTYPES_H +#include +#endif + +/* +** If possible, use the C99 intptr_t type to define an integral type of +** equivalent size to a pointer. (Technically it's >= sizeof(void *), but +** practically it's == sizeof(void *)). We fall back to an int if this type +** isn't defined. +*/ +#ifdef HAVE_INTPTR_T + typedef intptr_t sqlite3_intptr_t; +#else + typedef int sqlite3_intptr_t; +#endif + + +/* +** The macro unlikely() is a hint that surrounds a boolean +** expression that is usually false. Macro likely() surrounds +** a boolean expression that is usually true. GCC is able to +** use these hints to generate better code, sometimes. +*/ +#if defined(__GNUC__) && 0 +# define likely(X) __builtin_expect((X),1) +# define unlikely(X) __builtin_expect((X),0) +#else +# define likely(X) !!(X) +# define unlikely(X) !!(X) +#endif + + +/* +** These #defines should enable >2GB file support on Posix if the +** underlying operating system supports it. If the OS lacks +** large file support, or if the OS is windows, these should be no-ops. +** +** Ticket #2739: The _LARGEFILE_SOURCE macro must appear before any +** system #includes. 
Hence, this block of code must be the very first +** code in all source files. +** +** Large file support can be disabled using the -DSQLITE_DISABLE_LFS switch +** on the compiler command line. This is necessary if you are compiling +** on a recent machine (ex: RedHat 7.2) but you want your code to work +** on an older machine (ex: RedHat 6.0). If you compile on RedHat 7.2 +** without this option, LFS is enable. But LFS does not exist in the kernel +** in RedHat 6.0, so the code won't work. Hence, for maximum binary +** portability you should omit LFS. +** +** Similar is true for MacOS. LFS is only supported on MacOS 9 and later. +*/ +#ifndef SQLITE_DISABLE_LFS +# define _LARGE_FILE 1 +# ifndef _FILE_OFFSET_BITS +# define _FILE_OFFSET_BITS 64 +# endif +# define _LARGEFILE_SOURCE 1 +#endif + + +#include "sqliteLimit.h" + +/* +** For testing purposes, the various size limit constants are really +** variables that we can modify in the testfixture. +*/ +#ifdef SQLITE_TEST + #undef SQLITE_MAX_LENGTH + #undef SQLITE_MAX_COLUMN + #undef SQLITE_MAX_SQL_LENGTH + #undef SQLITE_MAX_EXPR_DEPTH + #undef SQLITE_MAX_COMPOUND_SELECT + #undef SQLITE_MAX_VDBE_OP + #undef SQLITE_MAX_FUNCTION_ARG + #undef SQLITE_MAX_VARIABLE_NUMBER + #undef SQLITE_MAX_PAGE_SIZE + #undef SQLITE_MAX_PAGE_COUNT + #undef SQLITE_MAX_LIKE_PATTERN_LENGTH + + #define SQLITE_MAX_LENGTH sqlite3MAX_LENGTH + #define SQLITE_MAX_COLUMN sqlite3MAX_COLUMN + #define SQLITE_MAX_SQL_LENGTH sqlite3MAX_SQL_LENGTH + #define SQLITE_MAX_EXPR_DEPTH sqlite3MAX_EXPR_DEPTH + #define SQLITE_MAX_COMPOUND_SELECT sqlite3MAX_COMPOUND_SELECT + #define SQLITE_MAX_VDBE_OP sqlite3MAX_VDBE_OP + #define SQLITE_MAX_FUNCTION_ARG sqlite3MAX_FUNCTION_ARG + #define SQLITE_MAX_VARIABLE_NUMBER sqlite3MAX_VARIABLE_NUMBER + #define SQLITE_MAX_PAGE_SIZE sqlite3MAX_PAGE_SIZE + #define SQLITE_MAX_PAGE_COUNT sqlite3MAX_PAGE_COUNT + #define SQLITE_MAX_LIKE_PATTERN_LENGTH sqlite3MAX_LIKE_PATTERN_LENGTH + + extern int sqlite3MAX_LENGTH; + extern int sqlite3MAX_COLUMN; + extern int sqlite3MAX_SQL_LENGTH; + extern int sqlite3MAX_EXPR_DEPTH; + extern int sqlite3MAX_COMPOUND_SELECT; + extern int sqlite3MAX_VDBE_OP; + extern int sqlite3MAX_FUNCTION_ARG; + extern int sqlite3MAX_VARIABLE_NUMBER; + extern int sqlite3MAX_PAGE_SIZE; + extern int sqlite3MAX_PAGE_COUNT; + extern int sqlite3MAX_LIKE_PATTERN_LENGTH; +#endif + + +/* +** The SQLITE_THREADSAFE macro must be defined as either 0 or 1. +** Older versions of SQLite used an optional THREADSAFE macro. +** We support that for legacy +*/ +#if !defined(SQLITE_THREADSAFE) +#if defined(THREADSAFE) +# define SQLITE_THREADSAFE THREADSAFE +#else +# define SQLITE_THREADSAFE 1 +#endif +#endif + +/* +** Exactly one of the following macros must be defined in order to +** specify which memory allocation subsystem to use. +** +** SQLITE_SYSTEM_MALLOC // Use normal system malloc() +** SQLITE_MEMDEBUG // Debugging version of system malloc() +** SQLITE_MEMORY_SIZE // internal allocator #1 +** SQLITE_MMAP_HEAP_SIZE // internal mmap() allocator +** SQLITE_POW2_MEMORY_SIZE // internal power-of-two allocator +** +** If none of the above are defined, then set SQLITE_SYSTEM_MALLOC as +** the default. 
+*/ +#if defined(SQLITE_SYSTEM_MALLOC)+defined(SQLITE_MEMDEBUG)+\ + defined(SQLITE_MEMORY_SIZE)+defined(SQLITE_MMAP_HEAP_SIZE)+\ + defined(SQLITE_POW2_MEMORY_SIZE)>1 +# error "At most one of the following compile-time configuration options\ + is allows: SQLITE_SYSTEM_MALLOC, SQLITE_MEMDEBUG, SQLITE_MEMORY_SIZE,\ + SQLITE_MMAP_HEAP_SIZE, SQLITE_POW2_MEMORY_SIZE" +#endif +#if defined(SQLITE_SYSTEM_MALLOC)+defined(SQLITE_MEMDEBUG)+\ + defined(SQLITE_MEMORY_SIZE)+defined(SQLITE_MMAP_HEAP_SIZE)+\ + defined(SQLITE_POW2_MEMORY_SIZE)==0 +# define SQLITE_SYSTEM_MALLOC 1 +#endif + +/* +** If SQLITE_MALLOC_SOFT_LIMIT is defined, then try to keep the +** sizes of memory allocations below this value where possible. +*/ +#if defined(SQLITE_POW2_MEMORY_SIZE) && !defined(SQLITE_MALLOC_SOFT_LIMIT) +# define SQLITE_MALLOC_SOFT_LIMIT 1024 +#endif + +/* +** We need to define _XOPEN_SOURCE as follows in order to enable +** recursive mutexes on most unix systems. But Mac OS X is different. +** The _XOPEN_SOURCE define causes problems for Mac OS X we are told, +** so it is omitted there. See ticket #2673. +** +** Later we learn that _XOPEN_SOURCE is poorly or incorrectly +** implemented on some systems. So we avoid defining it at all +** if it is already defined or if it is unneeded because we are +** not doing a threadsafe build. Ticket #2681. +** +** See also ticket #2741. +*/ +#if !defined(_XOPEN_SOURCE) && !defined(__DARWIN__) && !defined(__APPLE__) && SQLITE_THREADSAFE +# define _XOPEN_SOURCE 500 /* Needed to enable pthread recursive mutexes */ +#endif + +#if defined(SQLITE_TCL) || defined(TCLSH) +# include +#endif + +/* +** Many people are failing to set -DNDEBUG=1 when compiling SQLite. +** Setting NDEBUG makes the code smaller and run faster. So the following +** lines are added to automatically set NDEBUG unless the -DSQLITE_DEBUG=1 +** option is set. Thus NDEBUG becomes an opt-in rather than an opt-out +** feature. +*/ +#if !defined(NDEBUG) && !defined(SQLITE_DEBUG) +# define NDEBUG 1 +#endif + +#include "sqlite3.h" +#include "hash.h" +#include "parse.h" +#include +#include +#include +#include +#include + +#define sqlite3_isnan(X) ((X)!=(X)) + +/* +** If compiling for a processor that lacks floating point support, +** substitute integer for floating-point +*/ +#ifdef SQLITE_OMIT_FLOATING_POINT +# define double sqlite_int64 +# define LONGDOUBLE_TYPE sqlite_int64 +# ifndef SQLITE_BIG_DBL +# define SQLITE_BIG_DBL (0x7fffffffffffffff) +# endif +# define SQLITE_OMIT_DATETIME_FUNCS 1 +# define SQLITE_OMIT_TRACE 1 +# undef SQLITE_MIXED_ENDIAN_64BIT_FLOAT +#endif +#ifndef SQLITE_BIG_DBL +# define SQLITE_BIG_DBL (1e99) +#endif + +/* +** OMIT_TEMPDB is set to 1 if SQLITE_OMIT_TEMPDB is defined, or 0 +** afterward. Having this macro allows us to cause the C compiler +** to omit code used by TEMP tables without messy #ifndef statements. +*/ +#ifdef SQLITE_OMIT_TEMPDB +#define OMIT_TEMPDB 1 +#else +#define OMIT_TEMPDB 0 +#endif + +/* +** If the following macro is set to 1, then NULL values are considered +** distinct when determining whether or not two entries are the same +** in a UNIQUE index. This is the way PostgreSQL, Oracle, DB2, MySQL, +** OCELOT, and Firebird all work. The SQL92 spec explicitly says this +** is the way things are suppose to work. +** +** If the following macro is set to 0, the NULLs are indistinct for +** a UNIQUE index. In this mode, you can only have a single NULL entry +** for a column declared UNIQUE. This is the way Informix and SQL Server +** work. 
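
The behavior selected by setting this macro to 1 can be observed from the public API. The fragment below is illustrative only, not part of sqliteInt.h or of this checkin, and the helper name demo_null_unique() is made up: with NULLs treated as distinct, both INSERTs succeed even though the column is declared UNIQUE.

/* Illustrative only: not part of sqliteInt.h or of this checkin. */
#include <stdio.h>
#include "sqlite3.h"

int demo_null_unique(void){
  sqlite3 *db;
  int rc = sqlite3_open(":memory:", &db);
  if( rc!=SQLITE_OK ) return rc;
  rc = sqlite3_exec(db,
      "CREATE TABLE t(x UNIQUE);"
      "INSERT INTO t VALUES(NULL);"
      "INSERT INTO t VALUES(NULL);",  /* would hit SQLITE_CONSTRAINT if NULLs were indistinct */
      0, 0, 0);
  printf("rc=%d\n", rc);              /* expected: 0 (SQLITE_OK) */
  sqlite3_close(db);
  return rc;
}
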
+*/ +#define NULL_DISTINCT_FOR_UNIQUE 1 + +/* +** The "file format" number is an integer that is incremented whenever +** the VDBE-level file format changes. The following macros define the +** the default file format for new databases and the maximum file format +** that the library can read. +*/ +#define SQLITE_MAX_FILE_FORMAT 4 +#ifndef SQLITE_DEFAULT_FILE_FORMAT +# define SQLITE_DEFAULT_FILE_FORMAT 1 +#endif + +/* +** Provide a default value for TEMP_STORE in case it is not specified +** on the command-line +*/ +#ifndef TEMP_STORE +# define TEMP_STORE 1 +#endif + +/* +** GCC does not define the offsetof() macro so we'll have to do it +** ourselves. +*/ +#ifndef offsetof +#define offsetof(STRUCTURE,FIELD) ((int)((char*)&((STRUCTURE*)0)->FIELD)) +#endif + +/* +** Check to see if this machine uses EBCDIC. (Yes, believe it or +** not, there are still machines out there that use EBCDIC.) +*/ +#if 'A' == '\301' +# define SQLITE_EBCDIC 1 +#else +# define SQLITE_ASCII 1 +#endif + +/* +** Integers of known sizes. These typedefs might change for architectures +** where the sizes very. Preprocessor macros are available so that the +** types can be conveniently redefined at compile-type. Like this: +** +** cc '-DUINTPTR_TYPE=long long int' ... +*/ +#ifndef UINT32_TYPE +# ifdef HAVE_UINT32_T +# define UINT32_TYPE uint32_t +# else +# define UINT32_TYPE unsigned int +# endif +#endif +#ifndef UINT16_TYPE +# ifdef HAVE_UINT16_T +# define UINT16_TYPE uint16_t +# else +# define UINT16_TYPE unsigned short int +# endif +#endif +#ifndef INT16_TYPE +# ifdef HAVE_INT16_T +# define INT16_TYPE int16_t +# else +# define INT16_TYPE short int +# endif +#endif +#ifndef UINT8_TYPE +# ifdef HAVE_UINT8_T +# define UINT8_TYPE uint8_t +# else +# define UINT8_TYPE unsigned char +# endif +#endif +#ifndef INT8_TYPE +# ifdef HAVE_INT8_T +# define INT8_TYPE int8_t +# else +# define INT8_TYPE signed char +# endif +#endif +#ifndef LONGDOUBLE_TYPE +# define LONGDOUBLE_TYPE long double +#endif +typedef sqlite_int64 i64; /* 8-byte signed integer */ +typedef sqlite_uint64 u64; /* 8-byte unsigned integer */ +typedef UINT32_TYPE u32; /* 4-byte unsigned integer */ +typedef UINT16_TYPE u16; /* 2-byte unsigned integer */ +typedef INT16_TYPE i16; /* 2-byte signed integer */ +typedef UINT8_TYPE u8; /* 1-byte unsigned integer */ +typedef UINT8_TYPE i8; /* 1-byte signed integer */ + +/* +** Macros to determine whether the machine is big or little endian, +** evaluated at runtime. +*/ +#ifdef SQLITE_AMALGAMATION +const int sqlite3one; +#else +extern const int sqlite3one; +#endif +#if defined(i386) || defined(__i386__) || defined(_M_IX86) +# define SQLITE_BIGENDIAN 0 +# define SQLITE_LITTLEENDIAN 1 +# define SQLITE_UTF16NATIVE SQLITE_UTF16LE +#else +# define SQLITE_BIGENDIAN (*(char *)(&sqlite3one)==0) +# define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1) +# define SQLITE_UTF16NATIVE (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE) +#endif + +/* +** An instance of the following structure is used to store the busy-handler +** callback for a given sqlite handle. +** +** The sqlite.busyHandler member of the sqlite struct contains the busy +** callback for the database handle. Each pager opened via the sqlite +** handle is passed a pointer to sqlite.busyHandler. The busy-handler +** callback is currently invoked only from within pager.c. 
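
The xFunc/pArg pair stored in this structure mirrors what an application installs through the public sqlite3_busy_handler() interface. The fragment below is an illustration only, not part of sqliteInt.h or of this checkin; the callback name retry_five_times() is made up.

/* Illustrative only: not part of sqliteInt.h or of this checkin. */
#include "sqlite3.h"

static int retry_five_times(void *pArg, int nPrior){
  (void)pArg;
  return nPrior<5;   /* non-zero tells SQLite to retry the locked operation */
}

int install_busy_handler(sqlite3 *db){
  return sqlite3_busy_handler(db, retry_five_times, 0);
}
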
+*/ +typedef struct BusyHandler BusyHandler; +struct BusyHandler { + int (*xFunc)(void *,int); /* The busy callback */ + void *pArg; /* First arg to busy callback */ + int nBusy; /* Incremented with each busy call */ +}; + +/* +** Name of the master database table. The master database table +** is a special table that holds the names and attributes of all +** user tables and indices. +*/ +#define MASTER_NAME "sqlite_master" +#define TEMP_MASTER_NAME "sqlite_temp_master" + +/* +** The root-page of the master database table. +*/ +#define MASTER_ROOT 1 + +/* +** The name of the schema table. +*/ +#define SCHEMA_TABLE(x) ((!OMIT_TEMPDB)&&(x==1)?TEMP_MASTER_NAME:MASTER_NAME) + +/* +** A convenience macro that returns the number of elements in +** an array. +*/ +#define ArraySize(X) (sizeof(X)/sizeof(X[0])) + +/* +** Forward references to structures +*/ +typedef struct AggInfo AggInfo; +typedef struct AuthContext AuthContext; +typedef struct Bitvec Bitvec; +typedef struct CollSeq CollSeq; +typedef struct Column Column; +typedef struct Db Db; +typedef struct Schema Schema; +typedef struct Expr Expr; +typedef struct ExprList ExprList; +typedef struct FKey FKey; +typedef struct FuncDef FuncDef; +typedef struct IdList IdList; +typedef struct Index Index; +typedef struct KeyClass KeyClass; +typedef struct KeyInfo KeyInfo; +typedef struct Module Module; +typedef struct NameContext NameContext; +typedef struct Parse Parse; +typedef struct Select Select; +typedef struct SrcList SrcList; +typedef struct StrAccum StrAccum; +typedef struct Table Table; +typedef struct TableLock TableLock; +typedef struct Token Token; +typedef struct TriggerStack TriggerStack; +typedef struct TriggerStep TriggerStep; +typedef struct Trigger Trigger; +typedef struct WhereInfo WhereInfo; +typedef struct WhereLevel WhereLevel; + +/* +** Defer sourcing vdbe.h and btree.h until after the "u8" and +** "BusyHandler" typedefs. vdbe.h also requires a few of the opaque +** pointer types (i.e. FuncDef) defined above. +*/ +#include "btree.h" +#include "vdbe.h" +#include "pager.h" + +#include "os.h" +#include "mutex.h" + + +/* +** Each database file to be accessed by the system is an instance +** of the following structure. There are normally two of these structures +** in the sqlite.aDb[] array. aDb[0] is the main database file and +** aDb[1] is the database file used to hold temporary tables. Additional +** databases may be attached. +*/ +struct Db { + char *zName; /* Name of this database */ + Btree *pBt; /* The B*Tree structure for this database file */ + u8 inTrans; /* 0: not writable. 1: Transaction. 2: Checkpoint */ + u8 safety_level; /* How aggressive at synching data to disk */ + void *pAux; /* Auxiliary data. Usually NULL */ + void (*xFreeAux)(void*); /* Routine to free pAux */ + Schema *pSchema; /* Pointer to database schema (possibly shared) */ +}; + +/* +** An instance of the following structure stores a database schema. +** +** If there are no virtual tables configured in this schema, the +** Schema.db variable is set to NULL. After the first virtual table +** has been added, it is set to point to the database connection +** used to create the connection. Once a virtual table has been +** added to the Schema structure and the Schema.db variable populated, +** only that database connection may use the Schema to prepare +** statements. 
+*/ +struct Schema { + int schema_cookie; /* Database schema version number for this file */ + Hash tblHash; /* All tables indexed by name */ + Hash idxHash; /* All (named) indices indexed by name */ + Hash trigHash; /* All triggers indexed by name */ + Hash aFKey; /* Foreign keys indexed by to-table */ + Table *pSeqTab; /* The sqlite_sequence table used by AUTOINCREMENT */ + u8 file_format; /* Schema format version for this file */ + u8 enc; /* Text encoding used by this database */ + u16 flags; /* Flags associated with this schema */ + int cache_size; /* Number of pages to use in the cache */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + sqlite3 *db; /* "Owner" connection. See comment above */ +#endif +}; + +/* +** These macros can be used to test, set, or clear bits in the +** Db.flags field. +*/ +#define DbHasProperty(D,I,P) (((D)->aDb[I].pSchema->flags&(P))==(P)) +#define DbHasAnyProperty(D,I,P) (((D)->aDb[I].pSchema->flags&(P))!=0) +#define DbSetProperty(D,I,P) (D)->aDb[I].pSchema->flags|=(P) +#define DbClearProperty(D,I,P) (D)->aDb[I].pSchema->flags&=~(P) + +/* +** Allowed values for the DB.flags field. +** +** The DB_SchemaLoaded flag is set after the database schema has been +** read into internal hash tables. +** +** DB_UnresetViews means that one or more views have column names that +** have been filled out. If the schema changes, these column names might +** changes and so the view will need to be reset. +*/ +#define DB_SchemaLoaded 0x0001 /* The schema has been loaded */ +#define DB_UnresetViews 0x0002 /* Some views have defined column names */ +#define DB_Empty 0x0004 /* The file is empty (length 0 bytes) */ + + +/* +** Each database is an instance of the following structure. +** +** The sqlite.lastRowid records the last insert rowid generated by an +** insert statement. Inserts on views do not affect its value. Each +** trigger has its own context, so that lastRowid can be updated inside +** triggers as usual. The previous value will be restored once the trigger +** exits. Upon entering a before or instead of trigger, lastRowid is no +** longer (since after version 2.8.12) reset to -1. +** +** The sqlite.nChange does not count changes within triggers and keeps no +** context. It is reset at start of sqlite3_exec. +** The sqlite.lsChange represents the number of changes made by the last +** insert, update, or delete statement. It remains constant throughout the +** length of a statement and is then updated by OP_SetCounts. It keeps a +** context stack just like lastRowid so that the count of changes +** within a trigger is not seen outside the trigger. Changes to views do not +** affect the value of lsChange. +** The sqlite.csChange keeps track of the number of current changes (since +** the last statement) and is used to update sqlite_lsChange. +** +** The member variables sqlite.errCode, sqlite.zErrMsg and sqlite.zErrMsg16 +** store the most recent error code and, if applicable, string. The +** internal function sqlite3Error() is used to set these variables +** consistently. +*/ +struct sqlite3 { + sqlite3_vfs *pVfs; /* OS Interface */ + int nDb; /* Number of backends currently in use */ + Db *aDb; /* All backends */ + int flags; /* Miscellanous flags. See below */ + int openFlags; /* Flags passed to sqlite3_vfs.xOpen() */ + int errCode; /* Most recent error code (SQLITE_*) */ + int errMask; /* & result codes with this before returning */ + u8 autoCommit; /* The auto-commit flag. 
*/ + u8 temp_store; /* 1: file 2: memory 0: default */ + u8 mallocFailed; /* True if we have seen a malloc failure */ + signed char nextAutovac; /* Autovac setting after VACUUM if >=0 */ + int nTable; /* Number of tables in the database */ + CollSeq *pDfltColl; /* The default collating sequence (BINARY) */ + i64 lastRowid; /* ROWID of most recent insert (see above) */ + i64 priorNewRowid; /* Last randomly generated ROWID */ + int magic; /* Magic number for detect library misuse */ + int nChange; /* Value returned by sqlite3_changes() */ + int nTotalChange; /* Value returned by sqlite3_total_changes() */ + sqlite3_mutex *mutex; /* Connection mutex */ + struct sqlite3InitInfo { /* Information used during initialization */ + int iDb; /* When back is being initialized */ + int newTnum; /* Rootpage of table being initialized */ + u8 busy; /* TRUE if currently initializing */ + } init; + int nExtension; /* Number of loaded extensions */ + void **aExtension; /* Array of shared libraray handles */ + struct Vdbe *pVdbe; /* List of active virtual machines */ + int activeVdbeCnt; /* Number of vdbes currently executing */ + void (*xTrace)(void*,const char*); /* Trace function */ + void *pTraceArg; /* Argument to the trace function */ + void (*xProfile)(void*,const char*,u64); /* Profiling function */ + void *pProfileArg; /* Argument to profile function */ + void *pCommitArg; /* Argument to xCommitCallback() */ + int (*xCommitCallback)(void*); /* Invoked at every commit. */ + void *pRollbackArg; /* Argument to xRollbackCallback() */ + void (*xRollbackCallback)(void*); /* Invoked at every commit. */ + void *pUpdateArg; + void (*xUpdateCallback)(void*,int, const char*,const char*,sqlite_int64); + void(*xCollNeeded)(void*,sqlite3*,int eTextRep,const char*); + void(*xCollNeeded16)(void*,sqlite3*,int eTextRep,const void*); + void *pCollNeededArg; + sqlite3_value *pErr; /* Most recent error message */ + char *zErrMsg; /* Most recent error message (UTF-8 encoded) */ + char *zErrMsg16; /* Most recent error message (UTF-16 encoded) */ + union { + int isInterrupted; /* True if sqlite3_interrupt has been called */ + double notUsed1; /* Spacer */ + } u1; +#ifndef SQLITE_OMIT_AUTHORIZATION + int (*xAuth)(void*,int,const char*,const char*,const char*,const char*); + /* Access authorization function */ + void *pAuthArg; /* 1st argument to the access auth function */ +#endif +#ifndef SQLITE_OMIT_PROGRESS_CALLBACK + int (*xProgress)(void *); /* The progress callback */ + void *pProgressArg; /* Argument to the progress callback */ + int nProgressOps; /* Number of opcodes for progress callback */ +#endif +#ifndef SQLITE_OMIT_VIRTUALTABLE + Hash aModule; /* populated by sqlite3_create_module() */ + Table *pVTab; /* vtab with active Connect/Create method */ + sqlite3_vtab **aVTrans; /* Virtual tables with open transactions */ + int nVTrans; /* Allocated size of aVTrans */ +#endif + Hash aFunc; /* All functions that can be in SQL exprs */ + Hash aCollSeq; /* All collating sequences */ + BusyHandler busyHandler; /* Busy callback */ + int busyTimeout; /* Busy handler timeout, in msec */ + Db aDbStatic[2]; /* Static space for the 2 default backends */ +#ifdef SQLITE_SSE + sqlite3_stmt *pFetch; /* Used by SSE to fetch stored statements */ +#endif + u8 dfltLockMode; /* Default locking-mode for attached dbs */ +}; + +/* +** A macro to discover the encoding of a database. +*/ +#define ENC(db) ((db)->aDb[0].pSchema->enc) + +/* +** Possible values for the sqlite.flags and or Db.flags fields. 
+** +** On sqlite.flags, the SQLITE_InTrans value means that we have +** executed a BEGIN. On Db.flags, SQLITE_InTrans means a statement +** transaction is active on that particular database file. +*/ +#define SQLITE_VdbeTrace 0x00000001 /* True to trace VDBE execution */ +#define SQLITE_InTrans 0x00000008 /* True if in a transaction */ +#define SQLITE_InternChanges 0x00000010 /* Uncommitted Hash table changes */ +#define SQLITE_FullColNames 0x00000020 /* Show full column names on SELECT */ +#define SQLITE_ShortColNames 0x00000040 /* Show short columns names */ +#define SQLITE_CountRows 0x00000080 /* Count rows changed by INSERT, */ + /* DELETE, or UPDATE and return */ + /* the count using a callback. */ +#define SQLITE_NullCallback 0x00000100 /* Invoke the callback once if the */ + /* result set is empty */ +#define SQLITE_SqlTrace 0x00000200 /* Debug print SQL as it executes */ +#define SQLITE_VdbeListing 0x00000400 /* Debug listings of VDBE programs */ +#define SQLITE_WriteSchema 0x00000800 /* OK to update SQLITE_MASTER */ +#define SQLITE_NoReadlock 0x00001000 /* Readlocks are omitted when + ** accessing read-only databases */ +#define SQLITE_IgnoreChecks 0x00002000 /* Do not enforce check constraints */ +#define SQLITE_ReadUncommitted 0x00004000 /* For shared-cache mode */ +#define SQLITE_LegacyFileFmt 0x00008000 /* Create new databases in format 1 */ +#define SQLITE_FullFSync 0x00010000 /* Use full fsync on the backend */ +#define SQLITE_LoadExtension 0x00020000 /* Enable load_extension */ + +#define SQLITE_RecoveryMode 0x00040000 /* Ignore schema errors */ +#define SQLITE_SharedCache 0x00080000 /* Cache sharing is enabled */ +#define SQLITE_Vtab 0x00100000 /* There exists a virtual table */ + +/* +** Possible values for the sqlite.magic field. +** The numbers are obtained at random and have no special meaning, other +** than being distinct from one another. +*/ +#define SQLITE_MAGIC_OPEN 0xa029a697 /* Database is open */ +#define SQLITE_MAGIC_CLOSED 0x9f3c2d33 /* Database is closed */ +#define SQLITE_MAGIC_SICK 0x4b771290 /* Error and awaiting close */ +#define SQLITE_MAGIC_BUSY 0xf03b7906 /* Database currently in use */ +#define SQLITE_MAGIC_ERROR 0xb5357930 /* An SQLITE_MISUSE error occurred */ + +/* +** Each SQL function is defined by an instance of the following +** structure. A pointer to this structure is stored in the sqlite.aFunc +** hash table. When multiple functions have the same name, the hash table +** points to a linked list of these structures. +*/ +struct FuncDef { + i16 nArg; /* Number of arguments. -1 means unlimited */ + u8 iPrefEnc; /* Preferred text encoding (SQLITE_UTF8, 16LE, 16BE) */ + u8 needCollSeq; /* True if sqlite3GetFuncCollSeq() might be called */ + u8 flags; /* Some combination of SQLITE_FUNC_* */ + void *pUserData; /* User data parameter */ + FuncDef *pNext; /* Next function with same name */ + void (*xFunc)(sqlite3_context*,int,sqlite3_value**); /* Regular function */ + void (*xStep)(sqlite3_context*,int,sqlite3_value**); /* Aggregate step */ + void (*xFinalize)(sqlite3_context*); /* Aggregate finializer */ + char zName[1]; /* SQL name of the function. MUST BE LAST */ +}; + +/* +** Each SQLite module (virtual table definition) is defined by an +** instance of the following structure, stored in the sqlite3.aModule +** hash table. 
+*/ +struct Module { + const sqlite3_module *pModule; /* Callback pointers */ + const char *zName; /* Name passed to create_module() */ + void *pAux; /* pAux passed to create_module() */ + void (*xDestroy)(void *); /* Module destructor function */ +}; + +/* +** Possible values for FuncDef.flags +*/ +#define SQLITE_FUNC_LIKE 0x01 /* Candidate for the LIKE optimization */ +#define SQLITE_FUNC_CASE 0x02 /* Case-sensitive LIKE-type function */ +#define SQLITE_FUNC_EPHEM 0x04 /* Ephermeral. Delete with VDBE */ + +/* +** information about each column of an SQL table is held in an instance +** of this structure. +*/ +struct Column { + char *zName; /* Name of this column */ + Expr *pDflt; /* Default value of this column */ + char *zType; /* Data type for this column */ + char *zColl; /* Collating sequence. If NULL, use the default */ + u8 notNull; /* True if there is a NOT NULL constraint */ + u8 isPrimKey; /* True if this column is part of the PRIMARY KEY */ + char affinity; /* One of the SQLITE_AFF_... values */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + u8 isHidden; /* True if this column is 'hidden' */ +#endif +}; + +/* +** A "Collating Sequence" is defined by an instance of the following +** structure. Conceptually, a collating sequence consists of a name and +** a comparison routine that defines the order of that sequence. +** +** There may two seperate implementations of the collation function, one +** that processes text in UTF-8 encoding (CollSeq.xCmp) and another that +** processes text encoded in UTF-16 (CollSeq.xCmp16), using the machine +** native byte order. When a collation sequence is invoked, SQLite selects +** the version that will require the least expensive encoding +** translations, if any. +** +** The CollSeq.pUser member variable is an extra parameter that passed in +** as the first argument to the UTF-8 comparison function, xCmp. +** CollSeq.pUser16 is the equivalent for the UTF-16 comparison function, +** xCmp16. +** +** If both CollSeq.xCmp and CollSeq.xCmp16 are NULL, it means that the +** collating sequence is undefined. Indices built on an undefined +** collating sequence may not be read or written. +*/ +struct CollSeq { + char *zName; /* Name of the collating sequence, UTF-8 encoded */ + u8 enc; /* Text encoding handled by xCmp() */ + u8 type; /* One of the SQLITE_COLL_... values below */ + void *pUser; /* First argument to xCmp() */ + int (*xCmp)(void*,int, const void*, int, const void*); + void (*xDel)(void*); /* Destructor for pUser */ +}; + +/* +** Allowed values of CollSeq flags: +*/ +#define SQLITE_COLL_BINARY 1 /* The default memcmp() collating sequence */ +#define SQLITE_COLL_NOCASE 2 /* The built-in NOCASE collating sequence */ +#define SQLITE_COLL_REVERSE 3 /* The built-in REVERSE collating sequence */ +#define SQLITE_COLL_USER 0 /* Any other user-defined collating sequence */ + +/* +** A sort order can be either ASC or DESC. +*/ +#define SQLITE_SO_ASC 0 /* Sort in ascending order */ +#define SQLITE_SO_DESC 1 /* Sort in ascending order */ + +/* +** Column affinity types. +** +** These used to have mnemonic name like 'i' for SQLITE_AFF_INTEGER and +** 't' for SQLITE_AFF_TEXT. But we can save a little space and improve +** the speed a little by number the values consecutively. +** +** But rather than start with 0 or 1, we begin with 'a'. That way, +** when multiple affinity types are concatenated into a string and +** used as the P4 operand, they will be more readable. 
+** +** Note also that the numeric types are grouped together so that testing +** for a numeric type is a single comparison. +*/ +#define SQLITE_AFF_TEXT 'a' +#define SQLITE_AFF_NONE 'b' +#define SQLITE_AFF_NUMERIC 'c' +#define SQLITE_AFF_INTEGER 'd' +#define SQLITE_AFF_REAL 'e' + +#define sqlite3IsNumericAffinity(X) ((X)>=SQLITE_AFF_NUMERIC) + +/* +** The SQLITE_AFF_MASK values masks off the significant bits of an +** affinity value. +*/ +#define SQLITE_AFF_MASK 0x67 + +/* +** Additional bit values that can be ORed with an affinity without +** changing the affinity. +*/ +#define SQLITE_JUMPIFNULL 0x08 /* jumps if either operand is NULL */ +#define SQLITE_NULLEQUAL 0x10 /* compare NULLs equal */ +#define SQLITE_STOREP2 0x80 /* Store result in reg[P2] rather than jump */ + +/* +** Each SQL table is represented in memory by an instance of the +** following structure. +** +** Table.zName is the name of the table. The case of the original +** CREATE TABLE statement is stored, but case is not significant for +** comparisons. +** +** Table.nCol is the number of columns in this table. Table.aCol is a +** pointer to an array of Column structures, one for each column. +** +** If the table has an INTEGER PRIMARY KEY, then Table.iPKey is the index of +** the column that is that key. Otherwise Table.iPKey is negative. Note +** that the datatype of the PRIMARY KEY must be INTEGER for this field to +** be set. An INTEGER PRIMARY KEY is used as the rowid for each row of +** the table. If a table has no INTEGER PRIMARY KEY, then a random rowid +** is generated for each row of the table. Table.hasPrimKey is true if +** the table has any PRIMARY KEY, INTEGER or otherwise. +** +** Table.tnum is the page number for the root BTree page of the table in the +** database file. If Table.iDb is the index of the database table backend +** in sqlite.aDb[]. 0 is for the main database and 1 is for the file that +** holds temporary tables and indices. If Table.isEphem +** is true, then the table is stored in a file that is automatically deleted +** when the VDBE cursor to the table is closed. In this case Table.tnum +** refers VDBE cursor number that holds the table open, not to the root +** page number. Transient tables are used to hold the results of a +** sub-query that appears instead of a real table name in the FROM clause +** of a SELECT statement. +*/ +struct Table { + char *zName; /* Name of the table */ + int nCol; /* Number of columns in this table */ + Column *aCol; /* Information about each column */ + int iPKey; /* If not less then 0, use aCol[iPKey] as the primary key */ + Index *pIndex; /* List of SQL indexes on this table. */ + int tnum; /* Root BTree node for this table (see note above) */ + Select *pSelect; /* NULL for tables. Points to definition if a view. 
*/ + int nRef; /* Number of pointers to this Table */ + Trigger *pTrigger; /* List of SQL triggers on this table */ + FKey *pFKey; /* Linked list of all foreign keys in this table */ + char *zColAff; /* String defining the affinity of each column */ +#ifndef SQLITE_OMIT_CHECK + Expr *pCheck; /* The AND of all CHECK constraints */ +#endif +#ifndef SQLITE_OMIT_ALTERTABLE + int addColOffset; /* Offset in CREATE TABLE statement to add a new column */ +#endif + u8 readOnly; /* True if this table should not be written by the user */ + u8 isEphem; /* True if created using OP_OpenEphermeral */ + u8 hasPrimKey; /* True if there exists a primary key */ + u8 keyConf; /* What to do in case of uniqueness conflict on iPKey */ + u8 autoInc; /* True if the integer primary key is autoincrement */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + u8 isVirtual; /* True if this is a virtual table */ + u8 isCommit; /* True once the CREATE TABLE has been committed */ + Module *pMod; /* Pointer to the implementation of the module */ + sqlite3_vtab *pVtab; /* Pointer to the module instance */ + int nModuleArg; /* Number of arguments to the module */ + char **azModuleArg; /* Text of all module args. [0] is module name */ +#endif + Schema *pSchema; /* Schema that contains this table */ +}; + +/* +** Test to see whether or not a table is a virtual table. This is +** done as a macro so that it will be optimized out when virtual +** table support is omitted from the build. +*/ +#ifndef SQLITE_OMIT_VIRTUALTABLE +# define IsVirtual(X) ((X)->isVirtual) +# define IsHiddenColumn(X) ((X)->isHidden) +#else +# define IsVirtual(X) 0 +# define IsHiddenColumn(X) 0 +#endif + +/* +** Each foreign key constraint is an instance of the following structure. +** +** A foreign key is associated with two tables. The "from" table is +** the table that contains the REFERENCES clause that creates the foreign +** key. The "to" table is the table that is named in the REFERENCES clause. +** Consider this example: +** +** CREATE TABLE ex1( +** a INTEGER PRIMARY KEY, +** b INTEGER CONSTRAINT fk1 REFERENCES ex2(x) +** ); +** +** For foreign key "fk1", the from-table is "ex1" and the to-table is "ex2". +** +** Each REFERENCES clause generates an instance of the following structure +** which is attached to the from-table. The to-table need not exist when +** the from-table is created. The existance of the to-table is not checked +** until an attempt is made to insert data into the from-table. +** +** The sqlite.aFKey hash table stores pointers to this structure +** given the name of a to-table. For each to-table, all foreign keys +** associated with that table are on a linked list using the FKey.pNextTo +** field. +*/ +struct FKey { + Table *pFrom; /* The table that constains the REFERENCES clause */ + FKey *pNextFrom; /* Next foreign key in pFrom */ + char *zTo; /* Name of table that the key points to */ + FKey *pNextTo; /* Next foreign key that points to zTo */ + int nCol; /* Number of columns in this key */ + struct sColMap { /* Mapping of columns in pFrom to columns in zTo */ + int iFrom; /* Index of column in pFrom */ + char *zCol; /* Name of column in zTo. 
If 0 use PRIMARY KEY */ + } *aCol; /* One entry for each of nCol column s */ + u8 isDeferred; /* True if constraint checking is deferred till COMMIT */ + u8 updateConf; /* How to resolve conflicts that occur on UPDATE */ + u8 deleteConf; /* How to resolve conflicts that occur on DELETE */ + u8 insertConf; /* How to resolve conflicts that occur on INSERT */ +}; + +/* +** SQLite supports many different ways to resolve a constraint +** error. ROLLBACK processing means that a constraint violation +** causes the operation in process to fail and for the current transaction +** to be rolled back. ABORT processing means the operation in process +** fails and any prior changes from that one operation are backed out, +** but the transaction is not rolled back. FAIL processing means that +** the operation in progress stops and returns an error code. But prior +** changes due to the same operation are not backed out and no rollback +** occurs. IGNORE means that the particular row that caused the constraint +** error is not inserted or updated. Processing continues and no error +** is returned. REPLACE means that preexisting database rows that caused +** a UNIQUE constraint violation are removed so that the new insert or +** update can proceed. Processing continues and no error is reported. +** +** RESTRICT, SETNULL, and CASCADE actions apply only to foreign keys. +** RESTRICT is the same as ABORT for IMMEDIATE foreign keys and the +** same as ROLLBACK for DEFERRED keys. SETNULL means that the foreign +** key is set to NULL. CASCADE means that a DELETE or UPDATE of the +** referenced table row is propagated into the row that holds the +** foreign key. +** +** The following symbolic values are used to record which type +** of action to take. +*/ +#define OE_None 0 /* There is no constraint to check */ +#define OE_Rollback 1 /* Fail the operation and rollback the transaction */ +#define OE_Abort 2 /* Back out changes but do no rollback transaction */ +#define OE_Fail 3 /* Stop the operation but leave all prior changes */ +#define OE_Ignore 4 /* Ignore the error. Do not do the INSERT or UPDATE */ +#define OE_Replace 5 /* Delete existing record, then do INSERT or UPDATE */ + +#define OE_Restrict 6 /* OE_Abort for IMMEDIATE, OE_Rollback for DEFERRED */ +#define OE_SetNull 7 /* Set the foreign key value to NULL */ +#define OE_SetDflt 8 /* Set the foreign key value to its default */ +#define OE_Cascade 9 /* Cascade the changes */ + +#define OE_Default 99 /* Do whatever the default action is */ + + +/* +** An instance of the following structure is passed as the first +** argument to sqlite3VdbeKeyCompare and is used to control the +** comparison of the two index keys. +** +** If the KeyInfo.incrKey value is true and the comparison would +** otherwise be equal, then return a result as if the second key +** were larger. +*/ +struct KeyInfo { + sqlite3 *db; /* The database connection */ + u8 enc; /* Text encoding - one of the TEXT_Utf* values */ + u8 incrKey; /* Increase 2nd key by epsilon before comparison */ + u8 prefixIsEqual; /* Treat a prefix as equal */ + int nField; /* Number of entries in aColl[] */ + u8 *aSortOrder; /* If defined an aSortOrder[i] is true, sort DESC */ + CollSeq *aColl[1]; /* Collating sequence for each term of the key */ +}; + +/* +** Each SQL index is represented in memory by an +** instance of the following structure. +** +** The columns of the table that are to be indexed are described +** by the aiColumn[] field of this structure. 
For example, suppose +** we have the following table and index: +** +** CREATE TABLE Ex1(c1 int, c2 int, c3 text); +** CREATE INDEX Ex2 ON Ex1(c3,c1); +** +** In the Table structure describing Ex1, nCol==3 because there are +** three columns in the table. In the Index structure describing +** Ex2, nColumn==2 since 2 of the 3 columns of Ex1 are indexed. +** The value of aiColumn is {2, 0}. aiColumn[0]==2 because the +** first column to be indexed (c3) has an index of 2 in Ex1.aCol[]. +** The second column to be indexed (c1) has an index of 0 in +** Ex1.aCol[], hence Ex2.aiColumn[1]==0. +** +** The Index.onError field determines whether or not the indexed columns +** must be unique and what to do if they are not. When Index.onError=OE_None, +** it means this is not a unique index. Otherwise it is a unique index +** and the value of Index.onError indicate the which conflict resolution +** algorithm to employ whenever an attempt is made to insert a non-unique +** element. +*/ +struct Index { + char *zName; /* Name of this index */ + int nColumn; /* Number of columns in the table used by this index */ + int *aiColumn; /* Which columns are used by this index. 1st is 0 */ + unsigned *aiRowEst; /* Result of ANALYZE: Est. rows selected by each column */ + Table *pTable; /* The SQL table being indexed */ + int tnum; /* Page containing root of this index in database file */ + u8 onError; /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */ + u8 autoIndex; /* True if is automatically created (ex: by UNIQUE) */ + char *zColAff; /* String defining the affinity of each column */ + Index *pNext; /* The next index associated with the same table */ + Schema *pSchema; /* Schema containing this index */ + u8 *aSortOrder; /* Array of size Index.nColumn. True==DESC, False==ASC */ + char **azColl; /* Array of collation sequence names for index */ +}; + +/* +** Each token coming out of the lexer is an instance of +** this structure. Tokens are also used as part of an expression. +** +** Note if Token.z==0 then Token.dyn and Token.n are undefined and +** may contain random values. Do not make any assuptions about Token.dyn +** and Token.n when Token.z==0. +*/ +struct Token { + const unsigned char *z; /* Text of the token. Not NULL-terminated! */ + unsigned dyn : 1; /* True for malloced memory, false for static */ + unsigned n : 31; /* Number of characters in this token */ +}; + +/* +** An instance of this structure contains information needed to generate +** code for a SELECT that contains aggregate functions. +** +** If Expr.op==TK_AGG_COLUMN or TK_AGG_FUNCTION then Expr.pAggInfo is a +** pointer to this structure. The Expr.iColumn field is the index in +** AggInfo.aCol[] or AggInfo.aFunc[] of information needed to generate +** code for that node. +** +** AggInfo.pGroupBy and AggInfo.aFunc.pExpr point to fields within the +** original Select structure that describes the SELECT statement. These +** fields do not need to be freed when deallocating the AggInfo structure. 
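(A rough illustration of the kind of statement that gives rise to an AggInfo; the table Ex1 is the one from the index example above, and run_agg_query() is an invented helper name, not part of this patch.)

    #include <stdio.h>
    #include <sqlite3.h>

    /* Compiling the statement below builds one AggInfo: the GROUP BY
    ** expression list becomes pGroupBy, the referenced source columns
    ** (roughly, c3 and c1) become aCol[] entries, and the two aggregate
    ** functions count() and max() become aFunc[] entries. */
    static int run_agg_query(sqlite3 *db){
      sqlite3_stmt *pStmt;
      int rc = sqlite3_prepare_v2(db,
          "SELECT c3, count(*), max(c1) FROM Ex1 GROUP BY c3", -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("%s: count=%d max=%d\n",
               (const char*)sqlite3_column_text(pStmt, 0),
               sqlite3_column_int(pStmt, 1),
               sqlite3_column_int(pStmt, 2));
      }
      return sqlite3_finalize(pStmt);
    }

The TK_AGG_COLUMN and TK_AGG_FUNCTION expression nodes produced for such a query carry Expr.pAggInfo pointers and Expr.iColumn indices into AggInfo.aCol[] or AggInfo.aFunc[], as described above.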
+*/ +struct AggInfo { + u8 directMode; /* Direct rendering mode means take data directly + ** from source tables rather than from accumulators */ + u8 useSortingIdx; /* In direct mode, reference the sorting index rather + ** than the source table */ + int sortingIdx; /* Cursor number of the sorting index */ + ExprList *pGroupBy; /* The group by clause */ + int nSortingColumn; /* Number of columns in the sorting index */ + struct AggInfo_col { /* For each column used in source tables */ + Table *pTab; /* Source table */ + int iTable; /* Cursor number of the source table */ + int iColumn; /* Column number within the source table */ + int iSorterColumn; /* Column number in the sorting index */ + int iMem; /* Memory location that acts as accumulator */ + Expr *pExpr; /* The original expression */ + } *aCol; + int nColumn; /* Number of used entries in aCol[] */ + int nColumnAlloc; /* Number of slots allocated for aCol[] */ + int nAccumulator; /* Number of columns that show through to the output. + ** Additional columns are used only as parameters to + ** aggregate functions */ + struct AggInfo_func { /* For each aggregate function */ + Expr *pExpr; /* Expression encoding the function */ + FuncDef *pFunc; /* The aggregate function implementation */ + int iMem; /* Memory location that acts as accumulator */ + int iDistinct; /* Ephermeral table used to enforce DISTINCT */ + } *aFunc; + int nFunc; /* Number of entries in aFunc[] */ + int nFuncAlloc; /* Number of slots allocated for aFunc[] */ +}; + +/* +** Each node of an expression in the parse tree is an instance +** of this structure. +** +** Expr.op is the opcode. The integer parser token codes are reused +** as opcodes here. For example, the parser defines TK_GE to be an integer +** code representing the ">=" operator. This same integer code is reused +** to represent the greater-than-or-equal-to operator in the expression +** tree. +** +** Expr.pRight and Expr.pLeft are subexpressions. Expr.pList is a list +** of argument if the expression is a function. +** +** Expr.token is the operator token for this node. For some expressions +** that have subexpressions, Expr.token can be the complete text that gave +** rise to the Expr. In the latter case, the token is marked as being +** a compound token. +** +** An expression of the form ID or ID.ID refers to a column in a table. +** For such expressions, Expr.op is set to TK_COLUMN and Expr.iTable is +** the integer cursor number of a VDBE cursor pointing to that table and +** Expr.iColumn is the column number for the specific column. If the +** expression is used as a result in an aggregate SELECT, then the +** value is also stored in the Expr.iAgg column in the aggregate so that +** it can be accessed after all aggregates are computed. +** +** If the expression is a function, the Expr.iTable is an integer code +** representing which function. If the expression is an unbound variable +** marker (a question mark character '?' in the original SQL) then the +** Expr.iTable holds the index number for that variable. +** +** If the expression is a subquery then Expr.iColumn holds an integer +** register number containing the result of the subquery. If the +** subquery gives a constant result, then iTable is -1. If the subquery +** gives a different answer at different times during statement processing +** then iTable is the address of a subroutine that computes the subquery. +** +** The Expr.pSelect field points to a SELECT statement. The SELECT might +** be the right operand of an IN operator. 
Or, if a scalar SELECT appears +** in an expression the opcode is TK_SELECT and Expr.pSelect is the only +** operand. +** +** If the Expr is of type OP_Column, and the table it is selecting from +** is a disk table or the "old.*" pseudo-table, then pTab points to the +** corresponding table definition. +*/ +struct Expr { + u8 op; /* Operation performed by this node */ + char affinity; /* The affinity of the column or 0 if not a column */ + u16 flags; /* Various flags. See below */ + CollSeq *pColl; /* The collation type of the column or 0 */ + Expr *pLeft, *pRight; /* Left and right subnodes */ + ExprList *pList; /* A list of expressions used as function arguments + ** or in " IN (aCol[] or ->aFunc[] */ + int iRightJoinTable; /* If EP_FromJoin, the right table of the join */ + Select *pSelect; /* When the expression is a sub-select. Also the + ** right side of " IN ( From python-checkins at python.org Sun Mar 23 22:25:20 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 23 Mar 2008 22:25:20 +0100 (CET) Subject: [Python-checkins] r61816 - doctools/trunk/doc/changes.rst doctools/trunk/doc/contents.rst Message-ID: <20080323212520.29F3A1E4028@bag.python.org> Author: georg.brandl Date: Sun Mar 23 22:25:19 2008 New Revision: 61816 Added: doctools/trunk/doc/changes.rst Modified: doctools/trunk/doc/contents.rst Log: Add the changelog to the documentation. Added: doctools/trunk/doc/changes.rst ============================================================================== --- (empty file) +++ doctools/trunk/doc/changes.rst Sun Mar 23 22:25:19 2008 @@ -0,0 +1,4 @@ +Changes in Sphinx +***************** + +.. include:: ../CHANGES Modified: doctools/trunk/doc/contents.rst ============================================================================== --- doctools/trunk/doc/contents.rst (original) +++ doctools/trunk/doc/contents.rst Sun Mar 23 22:25:19 2008 @@ -16,6 +16,7 @@ extensions glossary + changes Indices and tables From python-checkins at python.org Sun Mar 23 22:30:58 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 23 Mar 2008 22:30:58 +0100 (CET) Subject: [Python-checkins] r61817 - doctools/trunk/CHANGES Message-ID: <20080323213058.F16F31E4039@bag.python.org> Author: georg.brandl Date: Sun Mar 23 22:30:58 2008 New Revision: 61817 Modified: doctools/trunk/CHANGES Log: Add some formatting. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Sun Mar 23 22:30:58 2008 @@ -8,7 +8,7 @@ and distutils scripts. Only install via entry points. * sphinx.builder: Don't recognize the HTML builder's copied source - files (under _sources) as input files if the source suffix is + files (under ``_sources``) as input files if the source suffix is ``.txt``. * sphinx.highlighting: Generate correct markup for LaTeX Verbatim @@ -28,15 +28,16 @@ * sphinx.ext.doctest: Make the group in which doctest blocks are placed selectable, and default to ``'default'``. -* sphinx.ext.doctest: Replace in doctest blocks by +* sphinx.ext.doctest: Replace ```` in doctest blocks by real blank lines for presentation output, and remove doctest options given inline. * sphinx.environment: Move doctest_blocks out of block_quotes to support indented doctest blocks. -* sphinx.ext.autodoc: Render .. automodule:: docstrings in a section - node, so that module docstrings can contain proper sectioning. +* sphinx.ext.autodoc: Render ``.. 
automodule::`` docstrings in a + section node, so that module docstrings can contain proper + sectioning. * sphinx.ext.autodoc: Use the module's encoding for decoding docstrings, rather than requiring ASCII. From buildbot at python.org Sun Mar 23 22:45:05 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 23 Mar 2008 21:45:05 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080323214505.B81AD1E401E@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/135 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Sun Mar 23 23:02:43 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 23 Mar 2008 22:02:43 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20080323220243.E3BE71E4010@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/712 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Sun Mar 23 23:05:57 2008 From: python-checkins at python.org (christian.heimes) Date: Sun, 23 Mar 2008 23:05:57 +0100 (CET) Subject: [Python-checkins] r61819 - python/branches/trunk-bytearray/PCbuild/pythoncore.vcproj Message-ID: <20080323220557.7725D1E4010@bag.python.org> Author: christian.heimes Date: Sun Mar 23 23:05:57 2008 New Revision: 61819 Modified: python/branches/trunk-bytearray/PCbuild/pythoncore.vcproj Log: Untested updates to the PCBuild directory Modified: python/branches/trunk-bytearray/PCbuild/pythoncore.vcproj ============================================================================== --- python/branches/trunk-bytearray/PCbuild/pythoncore.vcproj (original) +++ python/branches/trunk-bytearray/PCbuild/pythoncore.vcproj Sun Mar 23 23:05:57 2008 @@ -655,6 +655,14 @@ > + + + + @@ -1347,6 +1355,14 @@ > + + + + From python-checkins at python.org Sun Mar 23 23:14:38 2008 From: python-checkins at python.org (gregory.p.smith) Date: Sun, 23 Mar 2008 23:14:38 +0100 (CET) Subject: [Python-checkins] r61820 - python/trunk/Modules/zlibmodule.c Message-ID: <20080323221438.9D7C51E4010@bag.python.org> Author: gregory.p.smith Date: Sun Mar 23 23:14:38 2008 New Revision: 61820 Modified: python/trunk/Modules/zlibmodule.c Log: replace calls to get the initial values with the raw constants. 
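(For reference, the raw constants match zlib's documented behaviour: adler32() and crc32() return their initial values, 1 and 0 respectively, when called with a NULL buffer. The short check below is illustrative and not part of the patch.)

    #include <assert.h>
    #include <zlib.h>

    int main(void)
    {
        /* zlib returns its documented initial checksum values when given
           a NULL buffer, so the literals used in the patch are equivalent. */
        assert(adler32(0L, Z_NULL, 0) == 1);  /* initial Adler-32 value */
        assert(crc32(0L, Z_NULL, 0) == 0);    /* initial CRC-32 value */
        return 0;
    }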
Modified: python/trunk/Modules/zlibmodule.c ============================================================================== --- python/trunk/Modules/zlibmodule.c (original) +++ python/trunk/Modules/zlibmodule.c Sun Mar 23 23:14:38 2008 @@ -889,7 +889,7 @@ static PyObject * PyZlib_adler32(PyObject *self, PyObject *args) { - uLong adler32val = adler32(0L, Z_NULL, 0); + uLong adler32val = 1; /* adler32(0L, Z_NULL, 0) */ Byte *buf; int len, signed_val; @@ -912,7 +912,7 @@ static PyObject * PyZlib_crc32(PyObject *self, PyObject *args) { - uLong crc32val = crc32(0L, Z_NULL, 0); + uLong crc32val = 0; /* crc32(0L, Z_NULL, 0) */ Byte *buf; int len, signed_val; From buildbot at python.org Mon Mar 24 00:07:08 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 23 Mar 2008 23:07:08 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080323230708.9D8AA1E4010@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1062 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_socket make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 00:28:27 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 23 Mar 2008 23:28:27 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080323232828.0BABF1E4010@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1149 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 24 tests failed: test_array test_bufio test_builtin test_cmd_line_script test_cookielib test_deque test_distutils test_file test_filecmp test_fileinput test_gzip test_iter test_mailbox test_marshal test_old_mailbox test_pep277 test_posixpath test_set test_threading test_univnewlines test_urllib2 test_uu test_zipfile test_zipimport ====================================================================== ERROR: test_tofromfile (test.test_array.LongTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== FAIL: test_intern (test.test_builtin.BuiltinTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_builtin.py", line 1067, in test_intern self.assert_(intern(s) is s) AssertionError ====================================================================== ERROR: test_zipfile (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 160, in 
test_zipfile self._check_script(zip_name, None, zip_name, '') File "C:\buildbot\work\trunk.heller-windows\build\lib\contextlib.py", line 33, in __exit__ self.gen.throw(type, value, traceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmphduf5g\\test_zip.zip' ====================================================================== ERROR: test_zipfile_compiled (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 167, in test_zipfile_compiled self._check_script(zip_name, None, zip_name, '') File "C:\buildbot\work\trunk.heller-windows\build\lib\contextlib.py", line 33, in __exit__ self.gen.throw(type, value, traceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmpf4lcrk\\test_zip.zip' ====================================================================== ERROR: test_bad_magic (test.test_cookielib.FileCookieJarTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 268, in test_bad_magic f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_lwp_valueless_cookie (test.test_cookielib.FileCookieJarTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 243, in test_lwp_valueless_cookie c.save(filename, ignore_discard=True) File "C:\buildbot\work\trunk.heller-windows\build\lib\_LWPCookieJar.py", line 83, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_mozilla (test.test_cookielib.LWPCookieTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 1580, in test_mozilla new_c = save_and_restore(c, True) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 1571, in save_and_restore cj.save(ignore_discard=ignore_discard) File "C:\buildbot\work\trunk.heller-windows\build\lib\_MozillaCookieJar.py", line 118, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== 
ERROR: test_rejection (test.test_cookielib.LWPCookieTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 1512, in test_rejection c.save(filename, ignore_discard=True) File "C:\buildbot\work\trunk.heller-windows\build\lib\_LWPCookieJar.py", line 83, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_maxlen (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 86, in test_maxlen os.remove(test_support.TESTFN) WindowsError: [Error 5] Access is denied: '@test' ====================================================================== ERROR: test_print (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 291, in test_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_whence (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 132, in test_seek_whence self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_list (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 262, in test_builtin_list f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_map (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 408, in test_builtin_map f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_max_min (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 371, 
in test_builtin_max_min f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_tuple (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 295, in test_builtin_tuple f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_zip (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 455, in test_builtin_zip f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_countOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 617, in test_countOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_in_and_not_in (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 580, in test_in_and_not_in f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_indexOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 651, in test_indexOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_iter_file (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 232, in test_iter_file f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode_join_endcase (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 534, in test_unicode_join_endcase f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unpack_iter (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 760, in test_unpack_iter f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_writelines (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 677, in test_writelines f = file(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_mbox_or_mmdf_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 
515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 
'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission 
denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test'

======================================================================
ERROR: test_get_file (test.test_mailbox.TestMbox)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp
    self._box = self._factory(self._path)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in <lambda>
    _factory = lambda self, path, factory=None: mailbox.mbox(path, factory)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__
    _mboxMMDF.__init__(self, path, factory, create)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__
    f = open(self._path, 'rb')
IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test'

The identical setUp traceback, ending in the same IOError, is repeated for each
of the remaining test.test_mailbox.TestMbox tests: test_get_message,
test_get_string, test_getitem, test_has_key, test_items, test_iter,
test_iteritems, test_iterkeys, test_itervalues, test_keys, test_len,
test_lock_conflict, test_lock_unlock, test_open_close_open, test_pop,
test_popitem, test_relock, test_remove, test_set_item, test_update,
test_values.
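All of these TestMbox failures, and the TestMMDF failures that follow, come from the fixture rather than the test bodies: the test class's _factory lambda hands the shared '@test' scratch path to the mailbox constructor, whose _mboxMMDF base class opens that path for reading, and it is that open() the Windows slave refuses. A minimal sketch of the call chain, assuming only what the traceback shows (the path literal is copied from the log; the try/except wrapper and variable names are illustrative, not part of the test harness):

    import mailbox

    # Path copied verbatim from the failure messages above.
    test_path = r'C:\buildbot\work\trunk.heller-windows\build\PCbuild\@test'

    try:
        # mailbox.mbox.__init__ -> _mboxMMDF.__init__ -> open(self._path, 'rb'),
        # exactly the frames shown in the tracebacks.
        box = mailbox.mbox(test_path, factory=None)
    except IOError as exc:
        # On this slave: IOError: [Errno 13] Permission denied: '...\@test'
        print('could not open the mbox fixture: %s' % exc)

Why the slave denies access to '@test' (a stale file or directory left by an earlier test, an external lock, etc.) is not shown in this log excerpt.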
======================================================================
ERROR: test_add (test.test_mailbox.TestMMDF)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp
    self._box = self._factory(self._path)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in <lambda>
    _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__
    _mboxMMDF.__init__(self, path, factory, create)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__
    f = open(self._path, 'rb')
IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test'

The identical setUp traceback and IOError are repeated for each of the
remaining test.test_mailbox.TestMMDF tests: test_add_and_close,
test_add_from_string, test_add_mbox_or_mmdf_message, test_clear, test_close,
test_contains, test_delitem, test_discard, test_dump_message, test_flush,
test_get, test_get_file, test_get_message, test_get_string, test_getitem,
test_has_key, test_items, test_iter, test_iteritems, test_iterkeys,
test_itervalues, test_keys, test_len, test_lock_conflict, test_lock_unlock,
test_open_close_open, test_pop, test_popitem, test_relock, test_remove,
test_set_item, test_update, test_values.
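The TestMH failures below reach the same '@test' location through a different route: an MH mailbox is a directory, so mailbox.MH.__init__ calls os.mkdir() instead of open(), and the slave reports the WindowsError flavour of the same access problem. A sketch of that branch, under the same assumptions as above (path from the log, wrapper code illustrative only):

    import mailbox

    test_path = r'C:\buildbot\work\trunk.heller-windows\build\PCbuild\@test'

    try:
        # mailbox.MH.__init__ -> os.mkdir(self._path, 0700) when the mailbox
        # directory does not yet exist -- the frame at mailbox.py line 815 below.
        box = mailbox.MH(test_path, factory=None)
    except OSError as exc:
        # WindowsError is a subclass of OSError, so this also catches the
        # [Error 5] Access is denied failure reported by the slave.
        print('could not create the MH fixture: %s' % exc)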
======================================================================
ERROR: test_add (test.test_mailbox.TestMH)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp
    self._box = self._factory(self._path)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in <lambda>
    _factory = lambda self, path, factory=None: mailbox.MH(path, factory)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__
    os.mkdir(self._path, 0700)
WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test'

The identical setUp traceback and WindowsError are repeated for each of the
remaining test.test_mailbox.TestMH tests: test_add_and_remove_folders,
test_clear, test_close, test_contains, test_delitem, test_discard,
test_dump_message, test_flush, test_get, test_get_file, test_get_folder,
test_get_message, test_get_string, test_getitem, test_has_key, test_items,
test_iter, test_iteritems, test_iterkeys, test_itervalues, test_keys,
test_len, test_list_folders, test_lock_unlock, test_pack, test_pop,
test_popitem, test_remove, test_sequences, test_set_item, test_update,
test_values.

======================================================================
ERROR: test_add (test.test_mailbox.TestBabyl)
----------------------------------------------------------------------
Traceback (most recent call last): File
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call 
last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most 
recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestBabyl) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems 
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: 
test_labels (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== 
ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' 
====================================================================== ERROR: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 990, in test_initialize_with_file f = open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestMaildirMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 990, in test_initialize_with_file f = open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_floats (test.test_marshal.FloatTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 70, in test_floats marshal.dump(f, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_buffer (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 135, in test_buffer marshal.dump(b, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_string (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 124, in test_string marshal.dump(s, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 113, in test_unicode marshal.dump(s, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: 
test_dict (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 164, in test_dict marshal.dump(self.d, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_list (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 173, in test_list marshal.dump(lst, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_sets (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 194, in test_sets marshal.dump(t, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tuple (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 182, in test_tuple marshal.dump(t, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: Test an empty maildir mailbox ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_old_mailbox.py", line 30, in setUp os.mkdir(self._dir) WindowsError: [Error 5] Access is denied: '@test' ====================================================================== ERROR: test_cyclical_print (test.test_set.TestSet) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 293, in test_cyclical_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_cyclical_print (test.test_set.TestSetSubclass) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 293, in test_cyclical_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_cyclical_print (test.test_set.TestSetSubclassWithKeywordArgs) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 293, in test_cyclical_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_file (test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File 
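The three test_cyclical_print errors above are secondary failures: the UnboundLocalError at test_set.py line 293 means fo was never bound, which is what happens when the earlier open() of test_support.TESTFN itself raises, consistent with the '@test' permission errors elsewhere in this run; the cleanup call then masks the real error. A minimal sketch of the pattern and the usual fix, with hypothetical names rather than the actual test code:

    def fragile_write(path, data):
        # If open() raises, "fo" is never assigned, so the cleanup line below
        # replaces the real I/O error with an UnboundLocalError.
        try:
            fo = open(path, "wb")
            fo.write(data)
        finally:
            fo.close()

    def robust_write(path, data):
        # Bind the file object before entering try/finally, so only the
        # genuine I/O error can propagate and close() is always safe.
        fo = open(path, "wb")
        try:
            fo.write(data)
        finally:
            fo.close()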
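Most of the remaining failures above, and the test_urllib2 and test_uu failures that follow, share a single symptom: the scratch name '@test' (test_support.TESTFN) under the build's PCbuild directory can no longer be created, opened or removed. That pattern suggests a stale '@test' file or directory left behind by an earlier run, read-only or still held open, rather than independent bugs in each module. A hypothetical cleanup helper along those lines (not part of test_support), offered only as a sketch:

    import os
    import shutil
    import stat

    def remove_stale_testfn(directory, prefix="@test"):
        # Remove leftover "@test*" scratch entries from an earlier test run.
        # A read-only or still-open leftover makes every later open() or
        # os.mkdir() on the same name fail with "Access is denied" /
        # "Permission denied".
        for name in os.listdir(directory):
            if not name.startswith(prefix):
                continue
            path = os.path.join(directory, name)
            try:
                # Clear the Windows read-only attribute first.
                os.chmod(path, stat.S_IREAD | stat.S_IWRITE)
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)
            except OSError:
                # Still locked by another process (compare the WindowsError 32
                # messages from test_zipfile below); nothing more can be done
                # from this process.
                pass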
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib2.py", line 612, in test_file f = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_decode (test.test_uu.UUFileTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 149, in test_decode uu.decode(f) File "C:\buildbot\work\trunk.heller-windows\build\lib\uu.py", line 111, in decode raise Error('Cannot overwrite existing file: %s' % out_file) Error: Cannot overwrite existing file: @testo ====================================================================== ERROR: test_decodetwice (test.test_uu.UUFileTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 165, in test_decodetwice f = open(self.tmpin, 'r') IOError: [Errno 2] No such file or directory: '@testi' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 240, in testAbsoluteArcnames zipfp.write(TESTFN, "/absolute") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 266, in testAppendToNonZipFile zipfp.write(TESTFN, TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: 
testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 250, in testAppendToZipFile zipfp.write(TESTFN, TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 203, in testDeflated self.zipTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 44, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 307, in testExtract zipfp.writestr(fpath, fdata) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", 
line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 336, in testExtractAll zipfp.writestr(fpath, fdata) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 223, in testIterlinesDeflated self.zipIterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 179, in zipIterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 198, in testIterlinesStored self.zipIterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 179, in zipIterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 207, in testOpenDeflated self.zipOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 107, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 133, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 107, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 211, in testRandomOpenDeflated self.zipRandomOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 136, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 153, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 136, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 
215, in testReadlineDeflated self.zipReadlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 156, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 190, in testReadlineStored self.zipReadlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 156, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 219, in testReadlinesDeflated self.zipReadlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 168, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck 
raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 194, in testReadlinesStored self.zipReadlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 168, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 104, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 44, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' 
====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 286, in test_PerFileCompression zipfp.write(TESTFN, 'storeme', zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 279, in test_WriteDefaultName zipfp.write(TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 627, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testCreateNonExistentFileForAppend (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 565, in testCreateNonExistentFileForAppend zf.writestr(filename, content) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIsZipValidFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 604, in testIsZipValidFile zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_BadOpenMode (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 648, in test_BadOpenMode zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_NullByteInFilename (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 682, in test_NullByteInFilename zipf.writestr("foo.txt\x00qqq", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_Read0 (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 660, in test_Read0 zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 490, in testWritePyfile zipfp.writepy(fn) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1174, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 538, in testWritePythonDirectory zipfp.writepy(TESTFN2) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1166, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 515, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1137, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testGoodPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in 
_RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testDifferentFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testInterleaved (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testSameFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 990, in testIterlinesDeflated self.iterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest 
self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 973, in testIterlinesStored self.iterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 978, in testReadDeflated self.readTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 961, in testReadStored self.readTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.UniversalNewlineTests) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 903, in setUp open(self.arcfiles[s], "wb").write(self.arcdata[s]) IOError: [Errno 13] Permission denied: '@test-1' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 903, in setUp open(self.arcfiles[s], "wb").write(self.arcdata[s]) IOError: [Errno 13] Permission denied: '@test-1' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 986, in testReadlinesDeflated self.readlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 969, in testReadlinesStored self.readlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 818, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 787, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 840, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 821, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 784, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 772, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMTime (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, 
data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBoth (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeepPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestFile (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 91, in doTest ["__dummy__"]) ImportError: No module named ziptestmodule ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' 
====================================================================== ERROR: testPyc (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMTime (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) 
IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): 
File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 
'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 
68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 00:43:03 2008 From: python-checkins at python.org (gregory.p.smith) Date: Mon, 24 Mar 2008 00:43:03 +0100 (CET) Subject: [Python-checkins] r61821 - python/trunk/Lib/gzip.py Message-ID: <20080323234303.558C11E4019@bag.python.org> Author: gregory.p.smith Date: Mon Mar 24 00:43:02 2008 New Revision: 61821 Modified: python/trunk/Lib/gzip.py Log: A bugfix for r61813, it would fail if the data size was >=2**32. Modified: python/trunk/Lib/gzip.py ============================================================================== --- python/trunk/Lib/gzip.py (original) +++ python/trunk/Lib/gzip.py Mon Mar 24 00:43:02 2008 @@ -15,10 +15,6 @@ READ, WRITE = 1, 2 -def U32(i): - """Return the low-order 32 bits, as a non-negative int or long.""" - return i & 0xFFFFFFFFL - def write32u(output, value): # The L format writes the bit pattern correctly whether signed # or unsigned. @@ -306,7 +302,7 @@ if crc32 != self.crc: raise IOError("CRC check failed %s != %s" % (hex(crc32), hex(self.crc))) - elif isize != self.size: + elif isize != (self.size & 0xffffffffL): raise IOError, "Incorrect length of data produced" def close(self): From python-checkins at python.org Mon Mar 24 00:45:12 2008 From: python-checkins at python.org (gregory.p.smith) Date: Mon, 24 Mar 2008 00:45:12 +0100 (CET) Subject: [Python-checkins] r61822 - python/trunk/Lib/gzip.py Message-ID: <20080323234512.82F661E4025@bag.python.org> Author: gregory.p.smith Date: Mon Mar 24 00:45:12 2008 New Revision: 61822 Modified: python/trunk/Lib/gzip.py Log: prevent a warning from the struct module when data size >= 2**32. 
Modified: python/trunk/Lib/gzip.py ============================================================================== --- python/trunk/Lib/gzip.py (original) +++ python/trunk/Lib/gzip.py Mon Mar 24 00:45:12 2008 @@ -310,7 +310,7 @@ self.fileobj.write(self.compress.flush()) write32u(self.fileobj, self.crc) # self.size may exceed 2GB, or even 4GB - write32u(self.fileobj, self.size) + write32u(self.fileobj, self.size & 0xffffffffL) self.fileobj = None elif self.mode == READ: self.fileobj = None From buildbot at python.org Mon Mar 24 00:46:39 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 23 Mar 2008 23:46:39 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080323234640.1F7721E4010@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/688 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 01:08:02 2008 From: python-checkins at python.org (gregory.p.smith) Date: Mon, 24 Mar 2008 01:08:02 +0100 (CET) Subject: [Python-checkins] r61823 - in python/trunk: Modules/binascii.c setup.py Message-ID: <20080324000802.81C351E4021@bag.python.org> Author: gregory.p.smith Date: Mon Mar 24 01:08:01 2008 New Revision: 61823 Modified: python/trunk/Modules/binascii.c python/trunk/setup.py Log: Have the binascii module use zlib's optimized crc32() function when available to reduce our code size (1k data table and tiny bit of code). It falls back to its own without zlib. Modified: python/trunk/Modules/binascii.c ============================================================================== --- python/trunk/Modules/binascii.c (original) +++ python/trunk/Modules/binascii.c Mon Mar 24 01:08:01 2008 @@ -56,6 +56,9 @@ #define PY_SSIZE_T_CLEAN #include "Python.h" +#ifdef USE_ZLIB_CRC32 +#include "zlib.h" +#endif static PyObject *Error; static PyObject *Incomplete; @@ -748,6 +751,26 @@ PyDoc_STRVAR(doc_crc32, "(data, oldcrc = 0) -> newcrc. Compute CRC-32 incrementally"); +#ifdef USE_ZLIB_CRC32 +/* This was taken from zlibmodule.c PyZlib_crc32 (but is PY_SSIZE_T_CLEAN) */ +static PyObject * +binascii_crc32(PyObject *self, PyObject *args) +{ + uLong crc32val = 0; /* crc32(0L, Z_NULL, 0) */ + Byte *buf; + Py_ssize_t len; + int signed_val; + + if (!PyArg_ParseTuple(args, "s#|k:crc32", &buf, &len, &crc32val)) + return NULL; + /* In Python 2.x we return a signed integer regardless of native platform + * long size (the 32bit unsigned long is treated as 32-bit signed and sign + * extended into a 64-bit long inside the integer object). 3.0 does the + * right thing and returns unsigned. 
http://bugs.python.org/issue1202 */ + signed_val = crc32(crc32val, buf, len); + return PyInt_FromLong(signed_val); +} +#else /* USE_ZLIB_CRC32 */ /* Crc - 32 BIT ANSI X3.66 CRC checksum files Also known as: ISO 3307 **********************************************************************| @@ -898,6 +921,7 @@ #endif return PyInt_FromLong(result); } +#endif /* USE_ZLIB_CRC32 */ static PyObject * Modified: python/trunk/setup.py ============================================================================== --- python/trunk/setup.py (original) +++ python/trunk/setup.py Mon Mar 24 01:08:01 2008 @@ -486,9 +486,6 @@ # select(2); not on ancient System V exts.append( Extension('select', ['selectmodule.c']) ) - # Helper module for various ascii-encoders - exts.append( Extension('binascii', ['binascii.c']) ) - # Fred Drake's interface to the Python parser exts.append( Extension('parser', ['parsermodule.c']) ) @@ -1069,6 +1066,7 @@ # You can upgrade zlib to version 1.1.4 yourself by going to # http://www.gzip.org/zlib/ zlib_inc = find_file('zlib.h', [], inc_dirs) + have_zlib = False if zlib_inc is not None: zlib_h = zlib_inc[0] + '/zlib.h' version = '"0.0.0"' @@ -1090,6 +1088,7 @@ exts.append( Extension('zlib', ['zlibmodule.c'], libraries = ['z'], extra_link_args = zlib_extra_link_args)) + have_zlib = True else: missing.append('zlib') else: @@ -1097,6 +1096,21 @@ else: missing.append('zlib') + # Helper module for various ascii-encoders. Uses zlib for an optimized + # crc32 if we have it. Otherwise binascii uses its own. + if have_zlib: + extra_compile_args = ['-DUSE_ZLIB_CRC32'] + libraries = ['z'] + extra_link_args = zlib_extra_link_args + else: + extra_compile_args = [] + libraries = [] + extra_link_args = [] + exts.append( Extension('binascii', ['binascii.c'], + extra_compile_args = extra_compile_args, + libraries = libraries, + extra_link_args = extra_link_args) ) + # Gustavo Niemeyer's bz2 module. if (self.compiler.find_library_file(lib_dirs, 'bz2')): if sys.platform == "darwin": From python-checkins at python.org Mon Mar 24 01:30:24 2008 From: python-checkins at python.org (david.wolever) Date: Mon, 24 Mar 2008 01:30:24 +0100 (CET) Subject: [Python-checkins] r61824 - in sandbox/trunk/2to3/lib2to3: fixes/fix_itertools_imports.py tests/test_fixers.py Message-ID: <20080324003024.D704B1E4027@bag.python.org> Author: david.wolever Date: Mon Mar 24 01:30:24 2008 New Revision: 61824 Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_itertools_imports.py sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Log: Fixed a bug where 'from itertools import izip' would return 'from itertools import' Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_itertools_imports.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/fixes/fix_itertools_imports.py (original) +++ sandbox/trunk/2to3/lib2to3/fixes/fix_itertools_imports.py Mon Mar 24 01:30:24 2008 @@ -17,6 +17,9 @@ # Handle 'import ... as ...' 
continue if child.value in ('imap', 'izip', 'ifilter'): + # The value must be set to none in case child == import, + # so that the test for empty imports will work out + child.value = None child.remove() elif child.value == 'ifilterfalse': node.changed() @@ -34,10 +37,9 @@ if unicode(children[-1]) == ',': children[-1].remove() - # If there is nothing left, return a blank line + # If there are no imports left, just get rid of the entire statement if not (imports.children or getattr(imports, 'value', None)): - new = BlankLine() - new.prefix = node.get_prefix() - else: - new = node - return new + p = node.get_prefix() + node = BlankLine() + node.prefix = p + return node Modified: sandbox/trunk/2to3/lib2to3/tests/test_fixers.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/tests/test_fixers.py (original) +++ sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Mon Mar 24 01:30:24 2008 @@ -3036,6 +3036,10 @@ a = "" self.check(b, a) + b = "from itertools import izip" + a = "" + self.check(b, a) + def test_import_as(self): b = "from itertools import izip, bar as bang, imap" a = "from itertools import bar as bang" From python-checkins at python.org Mon Mar 24 01:46:54 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 24 Mar 2008 01:46:54 +0100 (CET) Subject: [Python-checkins] r61825 - in python/trunk/Lib/lib2to3: fixes/fix_import.py fixes/fix_itertools_imports.py fixes/util.py tests/benchmark.py tests/pytree_idempotency.py tests/test_fixers.py Message-ID: <20080324004654.380F01E4027@bag.python.org> Author: martin.v.loewis Date: Mon Mar 24 01:46:53 2008 New Revision: 61825 Modified: python/trunk/Lib/lib2to3/ (props changed) python/trunk/Lib/lib2to3/fixes/fix_import.py python/trunk/Lib/lib2to3/fixes/fix_itertools_imports.py python/trunk/Lib/lib2to3/fixes/util.py python/trunk/Lib/lib2to3/tests/benchmark.py python/trunk/Lib/lib2to3/tests/pytree_idempotency.py python/trunk/Lib/lib2to3/tests/test_fixers.py Log: Merged revisions 61724-61824 via svnmerge from svn+ssh://pythondev at svn.python.org/sandbox/trunk/2to3/lib2to3 ........ r61730 | martin.v.loewis | 2008-03-22 02:20:58 +0100 (Sa, 22 Mär 2008) | 2 lines More explicit relative imports. ........ r61755 | david.wolever | 2008-03-22 21:33:52 +0100 (Sa, 22 Mär 2008) | 1 line Fixing #2446 -- 2to3 now translates 'import foo' to 'from . import foo' ........ r61824 | david.wolever | 2008-03-24 01:30:24 +0100 (Mo, 24 Mär 2008) | 3 lines Fixed a bug where 'from itertools import izip' would return 'from itertools import' ........ Modified: python/trunk/Lib/lib2to3/fixes/fix_import.py ============================================================================== --- python/trunk/Lib/lib2to3/fixes/fix_import.py (original) +++ python/trunk/Lib/lib2to3/fixes/fix_import.py Mon Mar 24 01:46:53 2008 @@ -7,19 +7,20 @@ And this import: import spam Becomes: - import .spam + from . import spam """ # Local imports from . import basefix from os.path import dirname, join, exists, pathsep +from .util import FromImport class FixImport(basefix.BaseFix): PATTERN = """ - import_from< 'from' imp=any 'import' any > + import_from< type='from' imp=any 'import' any > | - import_name< 'import' imp=any > + import_name< type='import' imp=any > """ def transform(self, node, results): @@ -33,15 +34,19 @@ # I guess this is a global import -- skip it!
return - # Some imps are top-level (eg: 'import ham') - # some are first level (eg: 'import ham.eggs') - # some are third level (eg: 'import ham.eggs as spam') - # Hence, the loop - while not hasattr(imp, 'value'): - imp = imp.children[0] - - imp.value = "." + imp.value - node.changed() + if results['type'].value == 'from': + # Some imps are top-level (eg: 'import ham') + # some are first level (eg: 'import ham.eggs') + # some are third level (eg: 'import ham.eggs as spam') + # Hence, the loop + while not hasattr(imp, 'value'): + imp = imp.children[0] + imp.value = "." + imp.value + node.changed() + else: + new = FromImport('.', getattr(imp, 'content', None) or [imp]) + new.prefix = node.get_prefix() + node = new return node def probably_a_local_import(imp_name, file_path): Modified: python/trunk/Lib/lib2to3/fixes/fix_itertools_imports.py ============================================================================== --- python/trunk/Lib/lib2to3/fixes/fix_itertools_imports.py (original) +++ python/trunk/Lib/lib2to3/fixes/fix_itertools_imports.py Mon Mar 24 01:46:53 2008 @@ -17,6 +17,9 @@ # Handle 'import ... as ...' continue if child.value in ('imap', 'izip', 'ifilter'): + # The value must be set to none in case child == import, + # so that the test for empty imports will work out + child.value = None child.remove() elif child.value == 'ifilterfalse': node.changed() @@ -34,10 +37,9 @@ if unicode(children[-1]) == ',': children[-1].remove() - # If there is nothing left, return a blank line + # If there are no imports left, just get rid of the entire statement if not (imports.children or getattr(imports, 'value', None)): - new = BlankLine() - new.prefix = node.get_prefix() - else: - new = node - return new + p = node.get_prefix() + node = BlankLine() + node.prefix = p + return node Modified: python/trunk/Lib/lib2to3/fixes/util.py ============================================================================== --- python/trunk/Lib/lib2to3/fixes/util.py (original) +++ python/trunk/Lib/lib2to3/fixes/util.py Mon Mar 24 01:46:53 2008 @@ -108,6 +108,26 @@ inner, Leaf(token.RBRACE, "]")]) +def FromImport(package_name, name_leafs): + """ Return an import statement in the form: + from package import name_leafs""" + # XXX: May not handle dotted imports properly (eg, package_name='foo.bar') + assert package_name == '.' or '.' not in package.name, "FromImport has "\ + "not been tested with dotted package names -- use at your own "\ + "peril!" 
+ + for leaf in name_leafs: + # Pull the leaves out of their old tree + leaf.remove() + + children = [Leaf(token.NAME, 'from'), + Leaf(token.NAME, package_name, prefix=" "), + Leaf(token.NAME, 'import', prefix=" "), + Node(syms.import_as_names, name_leafs)] + imp = Node(syms.import_from, children) + return imp + + ########################################################### ### Determine whether a node represents a given literal ########################################################### Modified: python/trunk/Lib/lib2to3/tests/benchmark.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/benchmark.py (original) +++ python/trunk/Lib/lib2to3/tests/benchmark.py Mon Mar 24 01:46:53 2008 @@ -13,7 +13,7 @@ from time import time # Test imports -from support import adjust_path +from .support import adjust_path adjust_path() # Local imports Modified: python/trunk/Lib/lib2to3/tests/pytree_idempotency.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/pytree_idempotency.py (original) +++ python/trunk/Lib/lib2to3/tests/pytree_idempotency.py Mon Mar 24 01:46:53 2008 @@ -7,7 +7,7 @@ __author__ = "Guido van Rossum " # Support imports (need to be imported first) -import support +from . import support # Python imports import os Modified: python/trunk/Lib/lib2to3/tests/test_fixers.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/test_fixers.py (original) +++ python/trunk/Lib/lib2to3/tests/test_fixers.py Mon Mar 24 01:46:53 2008 @@ -3036,6 +3036,10 @@ a = "" self.check(b, a) + b = "from itertools import izip" + a = "" + self.check(b, a) + def test_import_as(self): b = "from itertools import izip, bar as bang, imap" a = "from itertools import bar as bang" @@ -3105,6 +3109,10 @@ self.failUnlessEqual(set(self.files_checked), expected_checks) def test_from(self): + b = "from foo import bar, baz" + a = "from .foo import bar, baz" + self.check_both(b, a) + b = "from foo import bar" a = "from .foo import bar" self.check_both(b, a) @@ -3121,17 +3129,21 @@ def test_import(self): b = "import foo" - a = "import .foo" + a = "from . import foo" + self.check_both(b, a) + + b = "import foo, bar" + a = "from . import foo, bar" self.check_both(b, a) def test_dotted_import(self): b = "import foo.bar" - a = "import .foo.bar" + a = "from . import foo.bar" self.check_both(b, a) def test_dotted_import_as(self): b = "import foo.bar as bang" - a = "import .foo.bar as bang" + a = "from . import foo.bar as bang" self.check_both(b, a) From jimjjewett at gmail.com Mon Mar 24 01:52:46 2008 From: jimjjewett at gmail.com (Jim Jewett) Date: Sun, 23 Mar 2008 20:52:46 -0400 Subject: [Python-checkins] r61709 - python/trunk/Doc/library/functions.rst python/trunk/Doc/library/future_builtins.rst python/trunk/Doc/library/python.rst In-Reply-To: <20080321193757.B2DD71E400B@bag.python.org> References: <20080321193757.B2DD71E400B@bag.python.org> Message-ID: What is the precise specification of the builtin "print" function. Does it call "str", or does it just behave as if the builtin str had been called? In 2.5, the print statement ignores any overrides of the str builtin, but I'm not sure whether a _function_ should -- and I do think it should be specified. 
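The distinction being asked about can be made concrete with a small, purely illustrative snippet; it is hypothetical code (not a claim about what print() actually does) and assumes a 2.6-style interpreter where the print function is enabled via __future__:

from __future__ import print_function
import __builtin__

original_str = __builtin__.str

def tagged_str(obj):
    # A deliberately visible override of the str builtin.
    return "<str:%s>" % original_str(obj)

__builtin__.str = tagged_str
try:
    # If print() looked up the name "str" at call time, this would show
    # "<str:42>"; if it converts through the object's __str__ slot directly
    # (as the 2.5 print statement does), it shows a plain "42".
    print(42)
finally:
    __builtin__.str = original_str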
-jJ On 3/21/08, georg.brandl wrote: > New Revision: 61709 ============================================================================== > +++ python/trunk/Doc/library/functions.rst Fri Mar 21 20:37:57 2008 > @@ -817,6 +817,33 @@ ... > +.. function:: print([object, ...][, sep=' '][, end='\n'][, file=sys.stdout]) ... > + All non-keyword arguments are converted to strings like :func:`str` does From python-checkins at python.org Mon Mar 24 02:39:08 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 24 Mar 2008 02:39:08 +0100 (CET) Subject: [Python-checkins] r61828 - peps/trunk/pep-3108.txt Message-ID: <20080324013908.AC5311E401F@bag.python.org> Author: brett.cannon Date: Mon Mar 24 02:39:08 2008 New Revision: 61828 Modified: peps/trunk/pep-3108.txt Log: Add dircache and linecache to the lineup at the chopping block. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Mon Mar 24 02:39:08 2008 @@ -289,6 +289,11 @@ package is redundant [#ast-removal]_. + The AST created by the compiler is available [#ast]_. + Mechanism to compile from an AST needs to be added. + +* dircache + + + Negligible use. + + Easily replicated. * dl @@ -314,6 +319,11 @@ + Unit tests relied on rgbimg and imgfile. - rgbimg was removed in Python 2.6. - imgfile slated for removal in this PEP. [done] + +* linecache + + + Negligible use. + + Easily replicated. * linuxaudiodev [done] From buildbot at python.org Mon Mar 24 02:40:42 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 01:40:42 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080324014042.C2BEC1E4028@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/239 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_threading make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 03:14:10 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 02:14:10 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080324021410.D1D2B1E4010@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/137 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 03:35:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 02:35:36 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080324023536.6F4951E4010@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/221 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: make: *** [buildbottest] Segmentation fault (core dumped) sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 03:52:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 02:52:41 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080324025242.109631E4010@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1064 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith,martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_signal.py", line 129, in test_main pickle.dump(None, done_w) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/contextlib.py", line 157, in __exit__ self.thing.close() IOError: [Errno 9] Bad file descriptor 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 03:53:39 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 02:53:39 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080324025339.69D4C1E4010@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3085 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock 
self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 1 test failed: test_xmlrpc Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File 
"/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: 
[Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request 
self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable ====================================================================== FAIL: test_dotted_attribute (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 472, in test_dotted_attribute self.test_simple1() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 361, in test_simple1 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_introspection1 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 387, in test_introspection1 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_introspection2 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 399, in test_introspection2 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_introspection3 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 411, in test_introspection3 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe 
====================================================================== FAIL: test_introspection4 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 424, in test_introspection4 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_multicall (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 441, in test_multicall self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_non_existing_multicall (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 462, in test_non_existing_multicall self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_simple1 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 361, in test_simple1 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_basic (test.test_xmlrpc.FailingServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 519, in test_basic self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 04:49:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 03:49:37 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080324034937.CCB7E1E4019@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1151 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith,martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 17 tests failed: test_array test_bufio test_cmd_line_script test_cookielib test_deque test_distutils test_file test_fileinput test_gzip test_iter test_mailbox test_mmap test_set test_threading test_urllib2 test_zipfile test_zipimport ====================================================================== ERROR: test_tofromfile (test.test_array.UnsignedIntTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_nullpat (test.test_bufio.BufferSizeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 59, in test_nullpat self.drive_one("\0" * 1000) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 51, in drive_one self.try_one(teststring[:-1]) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 21, in try_one f = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_primepat (test.test_bufio.BufferSizeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 56, in test_primepat self.drive_one("1234567890\00\01\02\03\04\05\06") File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 49, in drive_one self.try_one(teststring) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 21, in try_one f = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_zipfile (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 160, in test_zipfile self._check_script(zip_name, None, zip_name, '') File "C:\buildbot\work\trunk.heller-windows\build\lib\contextlib.py", line 33, in __exit__ self.gen.throw(type, value, traceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmpotoctb\\test_zip.zip' ====================================================================== ERROR: test_zipfile_compiled (test.test_cmd_line_script.CmdLineTest) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 167, in test_zipfile_compiled self._check_script(zip_name, None, zip_name, '') File "C:\buildbot\work\trunk.heller-windows\build\lib\contextlib.py", line 33, in __exit__ self.gen.throw(type, value, traceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmprsjeaj\\test_zip.zip' ====================================================================== ERROR: test_maxlen (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 86, in test_maxlen os.remove(test_support.TESTFN) WindowsError: [Error 5] Access is denied: '@test' ====================================================================== ERROR: test_print (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 291, in test_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_whence (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 132, in test_seek_whence self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_list (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 262, in test_builtin_list f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_map (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 408, in test_builtin_map f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_max_min (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 371, in test_builtin_max_min f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_tuple (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 295, in test_builtin_tuple f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_zip (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 455, in test_builtin_zip f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_countOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 617, in test_countOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_in_and_not_in (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 580, in test_in_and_not_in f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_indexOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 651, in test_indexOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_iter_file (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 232, in test_iter_file f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode_join_endcase (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 534, in test_unicode_join_endcase f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unpack_iter (test.test_iter.TestCase) 
======================================================================
ERROR: test_add (test.test_mailbox.TestMaildir)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp
    TestMailbox.setUp(self)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp
    self._box = self._factory(self._path)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in <lambda>
    _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__
    os.mkdir(self._path, 0700)
WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test'

The remaining test.test_mailbox.TestMaildir cases never get past setUp and
report exactly the same traceback and WindowsError: test_add_MM,
test_add_and_remove_folders, test_clean, test_clear, test_close,
test_consistent_factory, test_contains, test_create_tmp, test_delitem,
test_directory_in_folder, test_discard, test_dump_message, test_flush,
test_folder, test_get, test_get_MM, test_get_file, test_get_folder,
test_get_message, test_get_string, test_getitem, test_has_key,
test_initialize_existing, test_initialize_new, test_items, test_iter,
test_iteritems, test_iterkeys, test_itervalues, test_keys, test_len,
test_list_folders, test_lock_unlock, test_lookup, test_pop, test_popitem,
test_refresh, test_remove, test_set_MM, test_set_item, test_update and
test_values.
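All of the TestMaildir cases above die before their test bodies run: setUp
builds mailbox.Maildir(path, factory), and Maildir.__init__ immediately calls
os.mkdir(self._path, 0700), which Windows refuses for this path with error 5.
The Python 2 sketch below isolates that one step (illustration only; the
error handling is an assumption, not mailbox.py's behaviour). WindowsError is
a subclass of OSError, and 'Access is denied' (winerror 5) maps to errno
EACCES:

# Illustration: the directory-creation step every TestMaildir setUp dies in.
import errno
import os

def make_maildir_root(path):
    """Create a Maildir root the way mailbox.Maildir.__init__ does.

    os.mkdir's 0700 mode bits are ignored on Windows; if the ACLs on the path
    or its parent forbid creating the directory, os.mkdir raises WindowsError
    [Error 5] 'Access is denied', which is what every setUp above reports.
    """
    try:
        os.mkdir(path, 0700)
    except OSError, e:              # WindowsError is a subclass of OSError
        if e.errno == errno.EACCES:
            print "cannot create %r: %s" % (path, e)
            return False
        raise
    return True

if __name__ == '__main__':
    make_maildir_root('@test')      # the scratch name used by the failing tests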
======================================================================
ERROR: test_add (test.test_mailbox.TestMbox)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp
    self._box = self._factory(self._path)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in <lambda>
    _factory = lambda self, path, factory=None: mailbox.mbox(path, factory)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__
    _mboxMMDF.__init__(self, path, factory, create)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__
    f = open(self._path, 'rb')
IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test'

The following test.test_mailbox.TestMbox cases fail in setUp with exactly the
same traceback and IOError: test_add_and_close, test_add_from_string,
test_add_mbox_or_mmdf_message, test_clear, test_close, test_contains,
test_delitem, test_discard, test_dump_message, test_flush, test_get,
test_get_file, test_get_message, test_get_string, test_getitem, test_has_key,
test_items, test_iter, test_iteritems, test_iterkeys, test_itervalues,
test_keys, test_len, test_lock_conflict, test_lock_unlock,
test_open_close_open, test_pop, test_popitem, test_relock, test_remove and
test_set_item.

======================================================================
ERROR: test_update (test.test_mailbox.TestMbox)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp
    self._box = self._factory(self._path)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in <lambda>
    _factory = lambda self, path, factory=None: mailbox.mbox(path, factory)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__
    _mboxMMDF.__init__(self, path, factory, create)
  File
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_mbox_or_mmdf_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_conflict (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_relock (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 814, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 
'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_remove_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, 
factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) 
File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_folder (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMH) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' 
====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_list_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", 
line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pack (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_sequences (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 
819, in <lambda> _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in <lambda> _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in <lambda> _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 819, in <lambda> _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 938, in <lambda> _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_basic (test.test_mmap.MmapTests) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 24, in test_basic f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_double_close (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 296, in test_double_close f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_entire_file (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 310, in test_entire_file f = open(TESTFN, "w+") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_find_end (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 260, in test_find_end f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_move (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 324, in test_move f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_offset (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 388, in test_offset f = open (TESTFN, 'w+b') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_rfind (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 278, in test_rfind f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tougher_find (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 242, in test_tougher_find f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_cyclical_print (test.test_set.TestSet) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 293, in test_cyclical_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_file 
(test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib2.py", line 612, in test_file f = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] 
Permission denied: '@test' ====================================================================== ERROR: testLowCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 286, in test_PerFileCompression zipfp.write(TESTFN, 'storeme', zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 279, in test_WriteDefaultName zipfp.write(TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 627, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 
896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testCreateNonExistentFileForAppend (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 565, in testCreateNonExistentFileForAppend zf.writestr(filename, content) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIsZipValidFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 604, in testIsZipValidFile zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_BadOpenMode (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 648, in test_BadOpenMode zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_NullByteInFilename (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 682, in test_NullByteInFilename zipf.writestr("foo.txt\x00qqq", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_Read0 (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 660, in test_Read0 zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 490, in testWritePyfile zipfp.writepy(fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1174, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 538, in testWritePythonDirectory zipfp.writepy(TESTFN2) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1166, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 515, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1137, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testGoodPassword 
(test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testDifferentFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testInterleaved (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testSameFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require 
ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 990, in testIterlinesDeflated self.iterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 973, in testIterlinesStored self.iterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 978, in testReadDeflated self.readTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 961, in testReadStored self.readTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 982, in testReadlineDeflated self.readlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 965, in testReadlineStored self.readlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 986, in testReadlinesDeflated self.readlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.UniversalNewlineTests) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 969, in testReadlinesStored self.readlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 818, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 787, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 840, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 821, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 784, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 772, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMTime (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBoth (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize 
would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeepPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestFile (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 91, in doTest ["__dummy__"]) ImportError: No module named ziptestmodule ====================================================================== ERROR: 
testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] 
Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMTime (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 05:46:16 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 24 
Mar 2008 05:46:16 +0100 (CET)
Subject: [Python-checkins] r61830 - sandbox/trunk/import_in_py/TODO
Message-ID: <20080324044616.F28E61E4021@bag.python.org>

Author: brett.cannon
Date: Mon Mar 24 05:46:16 2008
New Revision: 61830

Modified:
   sandbox/trunk/import_in_py/TODO
Log:
More TODOs relating to possible issues in Py3K.

Modified: sandbox/trunk/import_in_py/TODO
==============================================================================
--- sandbox/trunk/import_in_py/TODO	(original)
+++ sandbox/trunk/import_in_py/TODO	Mon Mar 24 05:46:16 2008
@@ -1,4 +1,6 @@
-* Put importlib into 2.6 stdlib after hiding the entire API and exposing only
-  a simple one.
 * For Py3K, always have __file__ point to the .py file if it exists.
 * PEP 366.
+* For Py3K, use _fileio and any wrappers to deal with lack of open() before
+  io.py can be imported.
+* Decide if source files really need to be opened as text or should be passed
+  in as bytes and let parser handle any conversion.

From buildbot at python.org  Mon Mar 24 06:44:14 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 05:44:14 +0000
Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0
Message-ID: <20080324054414.6706D1E4019@bag.python.org>

The Buildbot has detected a new failure of ppc Debian unstable 3.0.
Full details are available at:
 http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/691

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: klose-debian-ppc

Build Reason:
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: neal.norwitz

BUILD FAILED: failed test

Excerpt from the test logfile:
1 test failed:
    test_xmlrpc_net

make: *** [buildbottest] Error 1

sincerely,
 -The Buildbot

From buildbot at python.org  Mon Mar 24 06:57:13 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 05:57:13 +0000
Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0
Message-ID: <20080324055713.58C811E4019@bag.python.org>

The Buildbot has detected a new failure of S-390 Debian 3.0.
Full details are available at:
 http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/139

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: klose-debian-s390

Build Reason:
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: neal.norwitz

BUILD FAILED: failed test

Excerpt from the test logfile:
2 tests failed:
    test_signal test_xmlrpc_net

======================================================================
FAIL: test_main (test.test_signal.InterProcessSignalTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_signal.py", line 142, in test_main
    self.fail(tb)
AssertionError: Traceback (most recent call last):

make: *** [buildbottest] Error 1

sincerely,
 -The Buildbot

From buildbot at python.org  Mon Mar 24 07:01:22 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 06:01:22 +0000
Subject: [Python-checkins] buildbot failure in x86 XP-3 3.0
Message-ID: <20080324060123.259801E4019@bag.python.org>

The Buildbot has detected a new failure of x86 XP-3 3.0.
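[The r61830 TODO above raises two related questions for the Python-based import machinery: how to read a module's source before io.py (and therefore the built-in open()) is usable, and whether the source should be handed to the parser as bytes rather than text. A minimal sketch of the first idea, using os-level calls as a stand-in for the _fileio-based wrapper the TODO mentions; the helper name is illustrative and is not the sandbox API.

import os

def _read_source_bytes(path, chunk_size=4096):
    """Read a source file as raw bytes using only os-level calls.

    Stand-in sketch: nothing here depends on io.py or the built-in
    open(), so it could run while the interpreter is still
    bootstrapping its I/O stack.
    """
    flags = os.O_RDONLY | getattr(os, "O_BINARY", 0)  # O_BINARY exists only on Windows
    fd = os.open(path, flags)
    try:
        chunks = []
        while True:
            chunk = os.read(fd, chunk_size)
            if not chunk:
                break
            chunks.append(chunk)
        return b"".join(chunks)
    finally:
        os.close(fd)

# Passing the raw bytes straight to compile() is the "let the parser
# handle any conversion" option from the TODO: in Py3K, compile() can
# take bytes and applies the source's PEP 263 coding declaration itself.
# code = compile(_read_source_bytes("spam.py"), "spam.py", "exec")
]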
Full details are available at:
 http://www.python.org/dev/buildbot/all/x86%20XP-3%203.0/builds/713

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: heller-windows

Build Reason:
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: christian.heimes

BUILD FAILED: failed test

Excerpt from the test logfile:
Traceback (most recent call last):
  File "c:\buildbot\work\3.0.heller-windows\build\lib\threading.py", line 490, in _bootstrap_inner
    self.run()
  File "c:\buildbot\work\3.0.heller-windows\build\lib\threading.py", line 446, in run
    self._target(*self._args, **self._kwargs)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_smtplib.py", line 116, in debugging_server
    poll_fun(0.01, asyncore.socket_map)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\asyncore.py", line 132, in poll
    read(obj)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\asyncore.py", line 72, in read
    obj.handle_error()
  File "C:\buildbot\work\3.0.heller-windows\build\lib\asyncore.py", line 68, in read
    obj.handle_read_event()
  File "C:\buildbot\work\3.0.heller-windows\build\lib\asyncore.py", line 390, in handle_read_event
    self.handle_read()
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_ssl.py", line 527, in handle_read
    data = self.recv(1024)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\asyncore.py", line 342, in recv
    data = self.socket.recv(buffer_size)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\ssl.py", line 247, in recv
    return self.read(buflen)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\ssl.py", line 162, in read
    v = self._sslobj.read(len or 1024)
socket.error: [Errno 10053] An established connection was aborted by the software in your host machine

19 tests failed:
    test_bufio test_builtin test_cmd_line_script test_cookielib
    test_distutils test_filecmp test_fileinput test_gzip test_io test_iter
    test_mailbox test_marshal test_mmap test_multibytecodec test_smtplib
    test_urllib2 test_uu test_zipfile test_zipimport

======================================================================
ERROR: test_zipfile (test.test_cmd_line_script.CmdLineTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_cmd_line_script.py", line 160, in test_zipfile
    self._check_script(zip_name, None, zip_name, '')
  File "C:\buildbot\work\3.0.heller-windows\build\lib\contextlib.py", line 33, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir
    shutil.rmtree(dirname)
  File "c:\buildbot\work\3.0.heller-windows\build\lib\shutil.py", line 184, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "c:\buildbot\work\3.0.heller-windows\build\lib\shutil.py", line 182, in rmtree
    os.remove(fullname)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmpprujue\\test_zip.zip'

======================================================================
ERROR: test_zipfile_compiled (test.test_cmd_line_script.CmdLineTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_cmd_line_script.py", line 167, in test_zipfile_compiled
    self._check_script(zip_name, None, zip_name, '')
  File "C:\buildbot\work\3.0.heller-windows\build\lib\contextlib.py", line 33, in __exit__
self.gen.throw(type, value, traceback) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\3.0.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\3.0.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmptueqai\\test_zip.zip' ====================================================================== ERROR: test_zero_byte_files (test.test_fileinput.FileInputTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_fileinput.py", line 145, in test_zero_byte_files remove_tempfiles(t1, t2, t3, t4) UnboundLocalError: local variable 't3' referenced before assignment ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File 
"c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_whence (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 132, in test_seek_whence self.test_write() File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\3.0.heller-windows\build\lib\gzip.py", line 91, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_array_writes 
(test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 259, in test_array_writes f = io.open(test_support.TESTFN, "wb", 0) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_buffered_file_io (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 163, in test_buffered_file_io f = io.open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_close_flushes (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 249, in test_close_flushes f = io.open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_destructor (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 243, in test_destructor f = MyFileIO(test_support.TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_large_file_ops (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 209, in test_large_file_ops f = io.open(test_support.TESTFN, "w+b", 0) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_raw_file_io (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 149, in test_raw_file_io f = io.open(test_support.TESTFN, "wb", buffering=0) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 177, in test_readline f = io.open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_with_open (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 219, in test_with_open with open(test_support.TESTFN, "wb", bufsize) as f: File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testBasicIO (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_io.py", line 799, in testBasicIO f = io.open(test_support.TESTFN, "w+", encoding=enc) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 From MAILER-DAEMON Mon Mar 24 05:54:55 2008 From: foo 1 From MAILER-DAEMON Mon Mar 24 05:54:55 2008 From: foo 2 From MAILER-DAEMON Mon Mar 24 05:54:55 2008 From: foo 3 From MAIL' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 
212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMMDF) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, 
factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in 
__new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_conflict (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop 
(test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_relock (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in 
_factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 813, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 766, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File 
"c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_remove_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in 
setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMH) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_folder (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' 
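[The errors in this excerpt all trip over the same shared temporary name, test_support.TESTFN ('@test'), in the test run's working directory: IOError: [Errno 13] when a test tries to reopen it, WindowsError: [Error 32] when tearDown tries to remove it, and WindowsError: [Error 5] when mailbox.MH tries to os.mkdir() it. On Windows this pattern typically means an earlier test left a handle to '@test' open (or left it behind as a directory), since an open file there can be neither removed nor replaced. A small self-contained illustration of that behaviour, with a made-up file name:

import os
import tempfile

# On Windows, a file that still has an open handle cannot be removed,
# so later code that wants to delete or recreate the same name fails.
path = os.path.join(tempfile.gettempdir(), "handle_demo.tmp")  # illustrative name

leaked = open(path, "w")   # handle deliberately left open, as a leaking test would
leaked.write("data")

try:
    os.remove(path)        # fails with "Access is denied"/"in use" on Windows
    print("remove succeeded (POSIX semantics)")
except OSError as exc:
    print("remove failed while the handle is open: %s" % exc)

leaked.close()             # once the handle is closed, cleanup works everywhere
if os.path.exists(path):
    os.remove(path)

Once '@test' is stuck in that state, every later test that reuses TESTFN fails the same way, which is why a single leaked handle can surface as dozens of seemingly unrelated errors in this log.]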
====================================================================== ERROR: test_getitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 
0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_list_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: 
mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pack (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_sequences (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 818, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 813, in __init__ os.mkdir(self._path, 0o700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 937, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 1121, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: 
[Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test'

======================================================================
[The identical setUp traceback and IOError shown above for test_add recur for the following TestBabyl cases: test_clear, test_close, test_contains, test_delitem, test_discard, test_dump_message, test_flush, test_get, test_get_file, test_get_message, test_get_string, test_getitem and test_items.]

======================================================================
ERROR: test_iter
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 937, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 1121, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\3.0.heller-windows\build\lib\mailbox.py", line 513, in __init__ f = open(self._path, 'r') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:28 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:54 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open 
self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n1' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Mon Mar 24 05:54:55 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: test_ints (test.test_marshal.IntTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 34, in test_ints self.helper(expected) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_floats (test.test_marshal.FloatTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 72, in test_floats self.helper(float(expected)) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bytes (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 103, in test_bytes self.helper(s) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_string (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 99, in test_string self.helper(s) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File 
"c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 95, in test_unicode self.helper(marshal.loads(marshal.dumps(s))) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_dict (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 128, in test_dict self.helper(self.d) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_list (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 131, in test_list self.helper(list(self.d.items())) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_sets (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 138, in test_sets self.helper(constructor(self.d.keys())) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tuple (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 134, in test_tuple 
self.helper(tuple(self.d.keys())) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_marshal.py", line 14, in helper f = open(test_support.TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_access_parameter (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mmap.py", line 113, in test_access_parameter open(TESTFN, "wb").write(b"a"*mapsize) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_basic (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mmap.py", line 24, in test_basic f = open(TESTFN, 'bw+') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_double_close (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mmap.py", line 293, in test_double_close f = open(TESTFN, 'wb+') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_entire_file (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mmap.py", line 307, in test_entire_file f = open(TESTFN, "wb+") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bug1728403 (test.test_multibytecodec.Test_StreamReader) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_multibytecodec.py", line 143, in test_bug1728403 f = open(TESTFN, 'wb') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_file 
(test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_urllib2.py", line 607, in test_file f = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 30, in setUp fp = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 30, in setUp fp = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 243, in testAppendToZipFile zipfp.write(TESTFN, TESTFN) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 196, in testDeflated self.zipTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 43, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File 
"C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 300, in testExtract zipfp.writestr(fpath, fdata) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1001, in writestr self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 329, in testExtractAll zipfp.writestr(fpath, fdata) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1001, in writestr self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 216, in testIterlinesDeflated self.zipIterlinesTest(f, zipfile.ZIP_DEFLATED) File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 172, in zipIterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 191, in testIterlinesStored self.zipIterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 172, in zipIterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 200, in testOpenDeflated self.zipOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 100, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions 
====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 126, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 100, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 204, in testRandomOpenDeflated self.zipRandomOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 129, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 146, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 129, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 208, in testReadlineDeflated self.zipReadlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 149, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 183, in testReadlineStored self.zipReadlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 149, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File 
"C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 212, in testReadlinesDeflated self.zipReadlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 161, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 187, in testReadlinesStored self.zipReadlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 161, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 97, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 43, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 37, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 279, in test_PerFileCompression zipfp.write(TESTFN, 'storeme', zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 272, in test_WriteDefaultName zipfp.write(TESTFN) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 
extensions ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 351, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestZip64InSmallFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 366, in setUp fp = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestZip64InSmallFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 366, in setUp fp = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testLargeFileException (test.test_zipfile.TestZip64InSmallFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 366, in setUp fp = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testStored (test.test_zipfile.TestZip64InSmallFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 366, in setUp fp = open(TESTFN, "wb") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testCloseErroneousFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 571, in testCloseErroneousFile fp = open(TESTFN, "w") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' 
====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 615, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1001, in writestr self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIsZipErroneousFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 582, in testIsZipErroneousFile fp = open(TESTFN, "w") File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIsZipValidFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 591, in testIsZipValidFile zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_BadOpenMode (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 635, in test_BadOpenMode zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_NullByteInFilename (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 669, in test_NullByteInFilename zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_OpenNonexistentItem (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 660, in test_OpenNonexistentItem zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_Read0 (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 647, in test_Read0 zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testWriteNonPyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 538, in testWriteNonPyfile open(TESTFN, 'w').write('most definitely not a python file') File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 212, in __new__ return open(*args, **kwargs) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 478, in testWritePyfile zipfp.writepy(fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1177, in writepy self.write(fname, arcname) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 511, in testWritePythonDirectory os.mkdir(TESTFN2) WindowsError: [Error 5] Access is denied: '@test2' ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 503, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1140, in writepy self.write(fname, arcname) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File 
"C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 707, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 622, in __init__ self._GetContents() File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 642, in _GetContents self._RealGetContents() File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 679, in _RealGetContents raise BadZipfile("Bad magic number for central directory") zipfile.BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testGoodPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 707, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 622, in __init__ self._GetContents() File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 642, in _GetContents self._RealGetContents() File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 679, in _RealGetContents raise BadZipfile("Bad magic number for central directory") zipfile.BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 707, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 622, in __init__ self._GetContents() File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 642, in _GetContents self._RealGetContents() File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 679, in _RealGetContents raise BadZipfile("Bad magic number for central directory") zipfile.BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testDifferentFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 834, in setUp zipfp = zipfile.ZipFile(TESTFN2, "w", zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test2' ====================================================================== ERROR: testInterleaved (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 834, in setUp zipfp = zipfile.ZipFile(TESTFN2, "w", zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test2' ====================================================================== ERROR: testSameFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 834, in setUp zipfp = zipfile.ZipFile(TESTFN2, "w", zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test2' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 985, in testIterlinesDeflated self.iterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 944, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 902, in makeTestArchive zipfp = zipfile.ZipFile(f, "w", compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test2' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 968, in testIterlinesStored self.iterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 944, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 902, in makeTestArchive zipfp = zipfile.ZipFile(f, "w", compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__ self.fp = io.open(file, modeDict[mode]) File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open closefd) IOError: [Errno 13] Permission denied: '@test2' ====================================================================== ERROR: testReadDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 973, in testReadDeflated self.readTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 908, in readTest self.makeTestArchive(f, compression) File 
"C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 904, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 956, in testReadStored self.readTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 908, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 904, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 977, in testReadlineDeflated self.readlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 919, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 904, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 960, in testReadlineStored self.readlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 919, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 904, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.UniversalNewlineTests) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 981, in testReadlinesDeflated self.readlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 932, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 904, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 964, in testReadlinesStored self.readlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 932, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 904, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 807, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 776, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 756, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 829, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 810, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 756, in makeTestArchive zipfp.write(TESTFN, "another.name") File 
"C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 773, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 761, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 756, in makeTestArchive zipfp.write(TESTFN, "another.name") File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 938, in write self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== FAIL: testCreateNonExistentFileForAppend (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipfile.py", line 556, in testCreateNonExistentFileForAppend self.fail('Could not append data to a non-existent zip file.') AssertionError: Could not append data to a non-existent zip file. 
======================================================================
ERROR: testBadMTime (test.test_zipimport.UncompressedZipImportTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 180, in testBadMTime
    self.doTest(".py", files, TESTMOD)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest
    z.writestr(zinfo, data)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1001, in writestr
    self._writecheck(zinfo)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck
    raise LargeZipFile("Filesize would require ZIP64 extensions")
zipfile.LargeZipFile: Filesize would require ZIP64 extensions

[The same LargeZipFile traceback, raised from zipfile._writecheck() via
ZipFile.writestr() in test_zipimport.doTest(), repeats for testBadMagic,
testBadMagic2, testBoth, testDeepPackage, testDoctestFile, testDoctestSuite,
testGetSource, testImport_WithStuff, testImporterAttr, testPackage, testPy,
testPyc, testTraceback and testZipImporterMethods of
UncompressedZipImportTestCase, and for testBadMTime, testBadMagic,
testBadMagic2, testBoth, testDeepPackage, testDoctestFile, testDoctestSuite,
testGetCompiledSource, testGetData, testGetSource, testImport_WithStuff,
testImporterAttr, testPackage, testPy, testPyc, testTraceback and
testZipImporterMethods of CompressedZipImportTestCase.]

======================================================================
ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy
    self.doTest(None, files, TESTMOD)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 91, in doTest
    ["__dummy__"])
ImportError: No module named ziptestmodule

[testEmptyPy of CompressedZipImportTestCase fails with the same ImportError.]

======================================================================
ERROR: testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 276, in testGetCompiledSource
    self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource)
  File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest
    z = ZipFile(TEMP_ZIP, "w")
  File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 609, in __init__
    self.fp = io.open(file, modeDict[mode])
  File "c:\buildbot\work\3.0.heller-windows\build\lib\io.py", line 151, in open
    closefd)
IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\3.0.heller-windows\\build\\PCbuild\\junk95142.zip'

[testGetData of UncompressedZipImportTestCase fails with the same IOError
while opening the same temporary zip file.]

sincerely,
 -The Buildbot
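The LargeZipFile errors above come from zipfile's ZIP64 guard: ZipFile refuses
to write an archive or member that would need ZIP64 extensions unless it was
opened with allowZip64=True (the default is False). Here the guard fires even
for the tiny test modules the zipimport tests write, which suggests the check
itself is misfiring rather than the data being oversized. A minimal sketch of
the normal behaviour only, with an illustrative file name not taken from the
report:

    import zipfile

    # Sketch: LargeZipFile is raised when a write would need ZIP64 extensions
    # and the archive was opened with allowZip64 left at its default of False.
    z = zipfile.ZipFile("example.zip", "w", allowZip64=False)
    try:
        z.writestr("member.txt", "spam" * 10)   # small data normally passes the check
    except zipfile.LargeZipFile:
        # Re-opening with allowZip64=True is the opt-in for genuinely large archives.
        pass
    finally:
        z.close()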
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 330, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1001, in writestr self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_zipimport.py", line 208, in testZipImporterMethods z.writestr(zinfo, data) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 1001, in writestr self._writecheck(zinfo) File "C:\buildbot\work\3.0.heller-windows\build\lib\zipfile.py", line 905, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") zipfile.LargeZipFile: Filesize would require ZIP64 extensions sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 07:07:49 2008 From: python-checkins at python.org (raymond.hettinger) Date: Mon, 24 Mar 2008 07:07:49 +0100 (CET) Subject: [Python-checkins] r61834 - python/trunk/Doc/library/random.rst Message-ID: <20080324060750.0106A1E4019@bag.python.org> Author: raymond.hettinger Date: Mon Mar 24 07:07:49 2008 New Revision: 61834 Modified: python/trunk/Doc/library/random.rst Log: Tighten documentation for Random.triangular. Modified: python/trunk/Doc/library/random.rst ============================================================================== --- python/trunk/Doc/library/random.rst (original) +++ python/trunk/Doc/library/random.rst Mon Mar 24 07:07:49 2008 @@ -192,13 +192,13 @@ .. function:: triangular(low, high, mode) - Return a random floating point number *N* such that ``low <= N < high`` - and with the specified *mode* between those bounds. + Return a random floating point number *N* such that ``low <= N < high`` and + with the specified *mode* between those bounds. The *low* and *high* bounds + default to zero and one. The *mode* argument defaults to the midpoint + between the bounds, giving a symmetric distribution. - If *mode* is not specified or is ``None``, it defaults to the midpoint - between the upper and lower bounds, producing a symmetric distribution. + .. versionadded:: 2.6 - The default values for *low* and *high* are zero and one. .. function:: betavariate(alpha, beta) From python-checkins at python.org Mon Mar 24 07:23:12 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 24 Mar 2008 07:23:12 +0100 (CET) Subject: [Python-checkins] r61838 - in sandbox/trunk/import_in_py/Py3K: _importlib.py importlib.py tests/test_fs_importer.py Message-ID: <20080324062312.3D3E81E4019@bag.python.org> Author: brett.cannon Date: Mon Mar 24 07:23:11 2008 New Revision: 61838 Modified: sandbox/trunk/import_in_py/Py3K/_importlib.py sandbox/trunk/import_in_py/Py3K/importlib.py sandbox/trunk/import_in_py/Py3K/tests/test_fs_importer.py Log: Make it so that importlib can be tested under a normal 3.0 build. 
Modified: sandbox/trunk/import_in_py/Py3K/_importlib.py ============================================================================== --- sandbox/trunk/import_in_py/Py3K/_importlib.py (original) +++ sandbox/trunk/import_in_py/Py3K/_importlib.py Mon Mar 24 07:23:11 2008 @@ -424,7 +424,7 @@ # Request paths instead of just booleans since 'compile' needs it for # source. if not source_path and not bytecode_path: - raise ValueError("neither source nor bytecode was specified as " + raise ImportError("neither source nor bytecode was specified as " "available") source_timestamp = None # Try to use bytecode if it is available. Modified: sandbox/trunk/import_in_py/Py3K/importlib.py ============================================================================== --- sandbox/trunk/import_in_py/Py3K/importlib.py (original) +++ sandbox/trunk/import_in_py/Py3K/importlib.py Mon Mar 24 07:23:11 2008 @@ -44,6 +44,39 @@ """Set __import__ back to the original implementation (assumes _set__import__ was called previously).""" __builtins__['__import__'] = original__import__ + + +def _case_ok(*args): + """XXX stub for testing.""" + return True + + +def _w_long(x): + """Convert a 32-bit integer to little-endian. + + XXX Temporary until marshal's long functions are exposed. + + """ + x = int(x) + int_bytes = [] + int_bytes.append(x & 0xFF) + int_bytes.append((x >> 8) & 0xFF) + int_bytes.append((x >> 16) & 0xFF) + int_bytes.append((x >> 24) & 0xFF) + return bytearray(int_bytes) + + +def _r_long(int_bytes): + """Convert 4 bytes in little-endian to an integer. + + XXX Temporary until marshal's long function are exposed. + + """ + x = int_bytes[0] + x |= int_bytes[1] << 8 + x |= int_bytes[2] << 16 + x |= int_bytes[3] << 24 + return x # Required built-in modules. @@ -72,6 +105,10 @@ # For os.path.join replacement; pull from Include/osdefs.h:SEP . _importlib.path_sep = sep +_importlib._case_ok = _case_ok +marshal._w_long = _w_long +marshal._r_long = _r_long + del _importlib Modified: sandbox/trunk/import_in_py/Py3K/tests/test_fs_importer.py ============================================================================== --- sandbox/trunk/import_in_py/Py3K/tests/test_fs_importer.py (original) +++ sandbox/trunk/import_in_py/Py3K/tests/test_fs_importer.py Mon Mar 24 07:23:11 2008 @@ -132,7 +132,7 @@ test_support.unlink(os.path.join(self.directory, self.pkg_name + self.pyc_ext)) - def test_module_case_sensitivity(self): + def __test_module_case_sensitivity(self): # Case-sensitivity should always matter as long as PYTHONCASEOK is not # set. name_len = len(self.top_level_module_name) @@ -151,7 +151,7 @@ assert os.environ['PYTHONCASEOK'] self.failUnless(self.importer.find_module(bad_case_name)) - def test_package_case_sensitivity(self): + def __test_package_case_sensitivity(self): # Case-sensitivity should always matter as long as PYTHONCASEOK is not # set. name_len = len(self.pkg_name) From python-checkins at python.org Mon Mar 24 07:25:10 2008 From: python-checkins at python.org (neal.norwitz) Date: Mon, 24 Mar 2008 07:25:10 +0100 (CET) Subject: [Python-checkins] r61839 - peps/trunk/pep-3108.txt Message-ID: <20080324062510.D4C9E1E4019@bag.python.org> Author: neal.norwitz Date: Mon Mar 24 07:25:10 2008 New Revision: 61839 Modified: peps/trunk/pep-3108.txt Log: dl was removed in r61837. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Mon Mar 24 07:25:10 2008 @@ -295,7 +295,7 @@ + Negligible use. 
+ Easily replicated. -* dl +* dl [done] + ctypes provides better support for same functionality. From buildbot at python.org Mon Mar 24 07:36:04 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 06:36:04 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080324063604.86C691E401E@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/224 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/threading.py", line 490, in _bootstrap_inner self.run() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/threading.py", line 446, in run self._target(*self._args, **self._kwargs) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 89, in writerThread self._writerThread(*args, **kwargs) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 278, in _writerThread self.assertEqual(data, self.makeData(key)) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 325, in failUnlessEqual raise self.failureException(msg or '%r != %r' % (first, second)) AssertionError: None != b'2000-2000-2000-2000-2000' Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/threading.py", line 490, in _bootstrap_inner self.run() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/threading.py", line 446, in run self._target(*self._args, **self._kwargs) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 89, in writerThread self._writerThread(*args, **kwargs) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 278, in _writerThread self.assertEqual(data, self.makeData(key)) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 325, in failUnlessEqual raise self.failureException(msg or '%r != %r' % (first, second)) AssertionError: None != b'1001-1001-1001-1001-1001' Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/threading.py", line 490, in _bootstrap_inner self.run() File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/threading.py", line 446, in run self._target(*self._args, **self._kwargs) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 89, in writerThread self._writerThread(*args, **kwargs) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 278, in _writerThread self.assertEqual(data, self.makeData(key)) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/unittest.py", line 325, in failUnlessEqual raise self.failureException(msg or '%r != %r' % (first, second)) AssertionError: None != b'0002-0002-0002-0002-0002' 1 test failed: test_threading make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 07:36:07 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 06:36:07 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 3.0 
Message-ID: <20080324063607.DC8AA1E4026@bag.python.org> The Buildbot has detected a new failure of amd64 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%203.0/builds/705 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 08:04:32 2008 From: python-checkins at python.org (brett.cannon) Date: Mon, 24 Mar 2008 08:04:32 +0100 (CET) Subject: [Python-checkins] r61840 - sandbox/trunk/import_in_py/TODO Message-ID: <20080324070432.2EAB61E4019@bag.python.org> Author: brett.cannon Date: Mon Mar 24 08:04:31 2008 New Revision: 61840 Modified: sandbox/trunk/import_in_py/TODO Log: Add a todo item to make the Py3K version use _fileio._FileIO. Doing so will do away with any special open() until io can be imported. This should allow for easier testing as a non-bootstrapping Py3K can be used to run the test suite and thus not require debugging in a bootstrapping Py3K interpreter where sys.stdout and friends do not exist yet. Modified: sandbox/trunk/import_in_py/TODO ============================================================================== --- sandbox/trunk/import_in_py/TODO (original) +++ sandbox/trunk/import_in_py/TODO Mon Mar 24 08:04:31 2008 @@ -1,6 +1,7 @@ +* Move Py3K version over to _fileio._FileIO for file work. + + Can set as open() if desired (but probably not needed). + + Make sure that reading/writing to file is done in bytes. + + Be aware that encoding/decoding from file not guaranteed to work as + the encodings module might not be available yet. * For Py3K, always have __file__ point to the .py file if it exists. -* PEP 366. -* For Py3K, use _fileio and any wrappers to deal with lack of open() before - io.py can be imported. -* Decide if source files really need to be opened as text or should be passed - in as bytes and let parser handle any conversion. +* PEP 366. \ No newline at end of file From buildbot at python.org Mon Mar 24 09:00:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 08:00:03 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080324080003.DE3981E401E@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/651 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 09:17:40 2008 From: python-checkins at python.org (raymond.hettinger) Date: Mon, 24 Mar 2008 09:17:40 +0100 (CET) Subject: [Python-checkins] r61841 - in python/trunk: Lib/copy.py Misc/NEWS Message-ID: <20080324081740.250E71E4003@bag.python.org> Author: raymond.hettinger Date: Mon Mar 24 09:17:39 2008 New Revision: 61841 Modified: python/trunk/Lib/copy.py python/trunk/Misc/NEWS Log: Issue 2460: Make Ellipsis objects copyable. 
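r61841 registers type(Ellipsis) with both the shallow- and deep-copy dispatch
tables as an immutable atom, so after this change copy hands back the
singleton itself. Roughly, as a sketch rather than code from the checkin:

    import copy

    # Ellipsis is now copied "by identity", like other immutable atoms.
    assert copy.copy(Ellipsis) is Ellipsis
    assert copy.deepcopy(Ellipsis) is Ellipsis
    assert copy.deepcopy({"slice": Ellipsis})["slice"] is Ellipsis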
Modified: python/trunk/Lib/copy.py ============================================================================== --- python/trunk/Lib/copy.py (original) +++ python/trunk/Lib/copy.py Mon Mar 24 09:17:39 2008 @@ -101,7 +101,7 @@ return x for t in (type(None), int, long, float, bool, str, tuple, frozenset, type, xrange, types.ClassType, - types.BuiltinFunctionType, + types.BuiltinFunctionType, type(Ellipsis), types.FunctionType): d[t] = _copy_immutable for name in ("ComplexType", "UnicodeType", "CodeType"): @@ -197,6 +197,7 @@ def _deepcopy_atomic(x, memo): return x d[type(None)] = _deepcopy_atomic +d[type(Ellipsis)] = _deepcopy_atomic d[int] = _deepcopy_atomic d[long] = _deepcopy_atomic d[float] = _deepcopy_atomic Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Mon Mar 24 09:17:39 2008 @@ -66,6 +66,8 @@ - Issue #2432: give DictReader the dialect and line_num attributes advertised in the docs. +- Issue #2460: Make Ellipsis object copyable. + - Issue #1681432: Add triangular distribution to the random module - Issue #2136: urllib2's auth handler now allows single-quoted realms in the From python-checkins at python.org Mon Mar 24 10:34:43 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 24 Mar 2008 10:34:43 +0100 (CET) Subject: [Python-checkins] r61842 - python/trunk/Doc/library/audioop.rst Message-ID: <20080324093443.E6DCF1E4003@bag.python.org> Author: georg.brandl Date: Mon Mar 24 10:34:34 2008 New Revision: 61842 Modified: python/trunk/Doc/library/audioop.rst Log: #1700821: add a note to audioop docs about signedness of sample formats. Modified: python/trunk/Doc/library/audioop.rst ============================================================================== --- python/trunk/Doc/library/audioop.rst (original) +++ python/trunk/Doc/library/audioop.rst Mon Mar 24 10:34:34 2008 @@ -141,6 +141,18 @@ Convert samples between 1-, 2- and 4-byte formats. + .. note:: + + In some audio formats, such as .WAV files, 16 and 32 bit samples are + signed, but 8 bit samples are unsigned. So when converting to 8 bit wide + samples for these formats, you need to also add 128 to the result:: + + new_frames = audioop.lin2lin(frames, old_width, 1) + new_frames = audioop.bias(new_frames, 1, 128) + + The same, in reverse, has to be applied when converting from 8 to 16 or 32 + bit width samples. + .. function:: lin2ulaw(fragment, width) From python-checkins at python.org Mon Mar 24 10:42:28 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 24 Mar 2008 10:42:28 +0100 (CET) Subject: [Python-checkins] r61843 - in doctools/trunk: CHANGES sphinx/__init__.py sphinx/environment.py Message-ID: <20080324094228.A43161E4003@bag.python.org> Author: georg.brandl Date: Mon Mar 24 10:42:28 2008 New Revision: 61843 Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/__init__.py (props changed) doctools/trunk/sphinx/environment.py Log: Don't error out on reading an empty file. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Mon Mar 24 10:42:28 2008 @@ -19,6 +19,8 @@ * sphinx.htmlwriter: Make parsed-literal blocks work as expected, not highlighting them via Pygments. +* sphinx.environment: Don't error out on reading an empty source file. 
+ Release 0.1.61798 (Mar 23, 2008) ================================ Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Mon Mar 24 10:42:28 2008 @@ -457,7 +457,11 @@ Process the docinfo part of the doctree as metadata. """ self.metadata[docname] = md = {} - docinfo = doctree[0] + try: + docinfo = doctree[0] + except IndexError: + # probably an empty document + return if docinfo.__class__ is not nodes.docinfo: # nothing to see here return From python-checkins at python.org Mon Mar 24 10:43:01 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 24 Mar 2008 10:43:01 +0100 (CET) Subject: [Python-checkins] r61844 - doctools/trunk/CHANGES Message-ID: <20080324094301.401441E4003@bag.python.org> Author: georg.brandl Date: Mon Mar 24 10:43:00 2008 New Revision: 61844 Modified: doctools/trunk/CHANGES Log: Release preparation. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Mon Mar 24 10:43:00 2008 @@ -1,5 +1,5 @@ -Changes in trunk -================ +Release 0.1.61843 (Mar 24, 2008) +================================ * sphinx.quickstart: Really don't create a makefile if the user doesn't want one. From g.brandl at gmx.net Mon Mar 24 10:47:51 2008 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 24 Mar 2008 10:47:51 +0100 Subject: [Python-checkins] r61709 - python/trunk/Doc/library/functions.rst python/trunk/Doc/library/future_builtins.rst python/trunk/Doc/library/python.rst In-Reply-To: References: <20080321193757.B2DD71E400B@bag.python.org> Message-ID: Sorry, but I don't see an ambiguity here. "Like str() does" means "like the builtin, not overridden str() does". Of course, "converted to strings using str()" would be wrong. Georg Jim Jewett schrieb: > What is the precise specification of the builtin "print" function. > Does it call "str", or does it just behave as if the builtin str had > been called? > > In 2.5, the print statement ignores any overrides of the str builtin, > but I'm not sure whether a _function_ should -- and I do think it > should be specified. > > -jJ > > On 3/21/08, georg.brandl wrote: >> New Revision: 61709 > > ============================================================================== > >> +++ python/trunk/Doc/library/functions.rst Fri Mar 21 20:37:57 2008 >> @@ -817,6 +817,33 @@ > > ... >> +.. function:: print([object, ...][, sep=' '][, end='\n'][, file=sys.stdout]) > ... > >> + All non-keyword arguments are converted to strings like :func:`str` does -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From python-checkins at python.org Mon Mar 24 12:02:45 2008 From: python-checkins at python.org (eric.smith) Date: Mon, 24 Mar 2008 12:02:45 +0100 (CET) Subject: [Python-checkins] r61845 - peps/trunk/pep-3127.txt Message-ID: <20080324110245.021B91E4019@bag.python.org> Author: eric.smith Date: Mon Mar 24 12:02:44 2008 New Revision: 61845 Modified: peps/trunk/pep-3127.txt Log: Clarified that octal % formatting in 2.6 does not change. 
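The clarification in r61845 below comes down to the alternate '#' form of
%-formatting: 2.6 keeps the old bare-zero octal prefix, while 3.0 switches to
the new 0o prefix. A hedged illustration with an arbitrary value:

    n = 64
    "%o" % n        # '100' in both 2.6 and 3.0
    "%#o" % n       # alternate form: '0100' in 2.6, but '0o100' in 3.0
    format(n, "b")  # '1000000' -- binary is available via the format() spec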
Modified: peps/trunk/pep-3127.txt ============================================================================== --- peps/trunk/pep-3127.txt (original) +++ peps/trunk/pep-3127.txt Mon Mar 24 12:02:44 2008 @@ -129,9 +129,10 @@ ----------------- The string (and unicode in 2.6) % operator will have -'b' format specifier added for binary, and the alternate -syntax of the 'o' option will need to be updated to -add '0o' in front, instead of '0'. +'b' format specifier added for binary in both 2.6 and 3.0. +In 3.0, the alternate syntax of the 'o' option will need to +be updated to add '0o' in front, instead of '0'. In 2.6, +alternate octal formatting will continue to add only '0'. PEP 3101 already supports 'b' for binary output. From python-checkins at python.org Mon Mar 24 13:57:54 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 24 Mar 2008 13:57:54 +0100 (CET) Subject: [Python-checkins] r61846 - in python/trunk: Misc/NEWS Tools/scripts/2to3 setup.py Message-ID: <20080324125754.2901E1E401D@bag.python.org> Author: martin.v.loewis Date: Mon Mar 24 13:57:53 2008 New Revision: 61846 Added: python/trunk/Tools/scripts/2to3 (contents, props changed) Modified: python/trunk/Misc/NEWS python/trunk/setup.py Log: Install 2to3 script. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Mon Mar 24 13:57:53 2008 @@ -160,6 +160,8 @@ Build ----- +- A new script 2to3 is now installed, to run the 2.x to 3.x converter. + - Python/memmove.c and Python/strerror.c have been removed; both functions are in the C89 standard library. Added: python/trunk/Tools/scripts/2to3 ============================================================================== --- (empty file) +++ python/trunk/Tools/scripts/2to3 Mon Mar 24 13:57:53 2008 @@ -0,0 +1,5 @@ +#!/usr/bin/env python +from lib2to3 import refactor +import sys + +sys.exit(refactor.main()) Modified: python/trunk/setup.py ============================================================================== --- python/trunk/setup.py (original) +++ python/trunk/setup.py Mon Mar 24 13:57:53 2008 @@ -1716,6 +1716,7 @@ # Scripts to install scripts = ['Tools/scripts/pydoc', 'Tools/scripts/idle', + 'Tools/scripts/2to3', 'Lib/smtpd.py'] ) From buildbot at python.org Mon Mar 24 14:25:04 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 13:25:04 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080324132504.5FC1C1E401D@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/465 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_urllibnet make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 14:31:17 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 24 Mar 2008 14:31:17 +0100 (CET) Subject: [Python-checkins] r61847 - in python/trunk: Doc/library/signal.rst Lib/test/test_signal.py Misc/ACKS Misc/NEWS Modules/signalmodule.c configure configure.in pyconfig.h.in Message-ID: <20080324133117.577101E401E@bag.python.org> Author: martin.v.loewis Date: Mon Mar 24 14:31:16 2008 New Revision: 61847 Modified: python/trunk/Doc/library/signal.rst python/trunk/Lib/test/test_signal.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Modules/signalmodule.c python/trunk/configure python/trunk/configure.in python/trunk/pyconfig.h.in Log: Patch #2240: Implement signal.setitimer and signal.getitimer. Modified: python/trunk/Doc/library/signal.rst ============================================================================== --- python/trunk/Doc/library/signal.rst (original) +++ python/trunk/Doc/library/signal.rst Mon Mar 24 14:31:16 2008 @@ -39,12 +39,13 @@ * Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform :func:`signal` operations in the main thread - of execution. Any thread can perform an :func:`alarm`, :func:`getsignal`, or - :func:`pause`; only the main thread can set a new signal handler, and the main - thread will be the only one to receive signals (this is enforced by the Python - :mod:`signal` module, even if the underlying thread implementation supports - sending signals to individual threads). This means that signals can't be used - as a means of inter-thread communication. Use locks instead. + of execution. Any thread can perform an :func:`alarm`, :func:`getsignal`, + :func:`pause`, :func:`setitimer` or :func:`getitimer`; only the main thread + can set a new signal handler, and the main thread will be the only one to + receive signals (this is enforced by the Python :mod:`signal` module, even + if the underlying thread implementation supports sending signals to + individual threads). This means that signals can't be used as a means of + inter-thread communication. Use locks instead. The variables defined in the :mod:`signal` module are: @@ -78,6 +79,36 @@ One more than the number of the highest signal number. + +.. data:: ITIMER_REAL + + Decrements interval timer in real time, and delivers SIGALRM upon expiration. + + +.. data:: ITIMER_VIRTUAL + + Decrements interval timer only when the process is executing, and delivers + SIGVTALRM upon expiration. + + +.. data:: ITIMER_PROF + + Decrements interval timer both when the process executes and when the + system is executing on behalf of the process. Coupled with ITIMER_VIRTUAL, + this timer is usually used to profile the time spent by the application + in user and kernel space. SIGPROF is delivered upon expiration. + + +The :mod:`signal` module defines one exception: + +.. exception:: ItimerError + + Raised to signal an error from the underlying :func:`setitimer` or + :func:`getitimer` implementation. 
Expect this error if an invalid + interval timer or a negative time is passed to :func:`setitimer`. + This error is a subtype of :exc:`IOError`. + + The :mod:`signal` module defines the following functions: @@ -110,6 +141,29 @@ :manpage:`signal(2)`.) +.. function:: setitimer(which, seconds[, interval]) + + Sets given itimer (one of :const:`signal.ITIMER_REAL`, + :const:`signal.ITIMER_VIRTUAL` or :const:`signal.ITIMER_PROF`) especified + by *which* to fire after *seconds* (float is accepted, different from + :func:`alarm`) and after that every *interval* seconds. The interval + timer specified by *which* can be cleared by setting seconds to zero. + + The old values are returned as a tuple: (delay, interval). + + Attempting to pass an invalid interval timer will cause a + :exc:`ItimerError`. + + .. versionadded:: 2.6 + + +.. function:: getitimer(which) + + Returns current value of a given itimer especified by *which*. + + .. versionadded:: 2.6 + + .. function:: set_wakeup_fd(fd) Set the wakeup fd to *fd*. When a signal is received, a ``'\0'`` byte is @@ -124,7 +178,6 @@ exception to be raised. - .. function:: siginterrupt(signalnum, flag) Change system call restart behaviour: if *flag* is :const:`False`, system calls Modified: python/trunk/Lib/test/test_signal.py ============================================================================== --- python/trunk/Lib/test/test_signal.py (original) +++ python/trunk/Lib/test/test_signal.py Mon Mar 24 14:31:16 2008 @@ -258,9 +258,93 @@ i=self.readpipe_interrupted(lambda: signal.siginterrupt(self.signum, 0)) self.assertEquals(i, False) +class ItimerTest(unittest.TestCase): + def setUp(self): + self.hndl_called = False + self.hndl_count = 0 + self.itimer = None + + def tearDown(self): + if self.itimer is not None: # test_itimer_exc doesn't change this attr + # just ensure that itimer is stopped + signal.setitimer(self.itimer, 0) + + def sig_alrm(self, *args): + self.hndl_called = True + if test_support.verbose: + print("SIGALRM handler invoked", args) + + def sig_vtalrm(self, *args): + self.hndl_called = True + + if self.hndl_count > 3: + # it shouldn't be here, because it should have been disabled. + raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " + "timer.") + elif self.hndl_count == 3: + # disable ITIMER_VIRTUAL, this function shouldn't be called anymore + signal.setitimer(signal.ITIMER_VIRTUAL, 0) + if test_support.verbose: + print("last SIGVTALRM handler call") + + self.hndl_count += 1 + + if test_support.verbose: + print("SIGVTALRM handler invoked", args) + + def sig_prof(self, *args): + self.hndl_called = True + signal.setitimer(signal.ITIMER_PROF, 0) + + if test_support.verbose: + print("SIGPROF handler invoked", args) + + def test_itimer_exc(self): + # XXX I'm assuming -1 is an invalid itimer, but maybe some platform + # defines it ? 
+ self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0) + # negative time + self.assertRaises(signal.ItimerError, signal.setitimer, + signal.ITIMER_REAL, -1) + + def test_itimer_real(self): + self.itimer = signal.ITIMER_REAL + signal.signal(signal.SIGALRM, self.sig_alrm) + signal.setitimer(self.itimer, 1.0) + if test_support.verbose: + print("\ncall pause()...") + signal.pause() + + self.assertEqual(self.hndl_called, True) + + def test_itimer_virtual(self): + self.itimer = signal.ITIMER_VIRTUAL + signal.signal(signal.SIGVTALRM, self.sig_vtalrm) + signal.setitimer(self.itimer, 0.3, 0.2) + + for i in xrange(100000000): + if signal.getitimer(self.itimer) == (0.0, 0.0): + break # sig_vtalrm handler stopped this itimer + + # virtual itimer should be (0.0, 0.0) now + self.assertEquals(signal.getitimer(self.itimer), (0.0, 0.0)) + # and the handler should have been called + self.assertEquals(self.hndl_called, True) + + def test_itimer_prof(self): + self.itimer = signal.ITIMER_PROF + signal.signal(signal.SIGPROF, self.sig_prof) + signal.setitimer(self.itimer, 0.2) + + for i in xrange(100000000): + if signal.getitimer(self.itimer) == (0.0, 0.0): + break # sig_prof handler stopped this itimer + + self.assertEqual(self.hndl_called, True) + def test_main(): test_support.run_unittest(BasicSignalTests, InterProcessSignalTests, - WakeupSignalTests, SiginterruptTest) + WakeupSignalTests, SiginterruptTest, ItimerTest) if __name__ == "__main__": Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Mon Mar 24 14:31:16 2008 @@ -528,6 +528,7 @@ Zach Pincus Michael Piotrowski Antoine Pitrou +Guilherme Polo Michael Pomraning Iustin Pop John Popplewell Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Mon Mar 24 14:31:16 2008 @@ -60,6 +60,11 @@ - Issue #2143: Fix embedded readline() hang on SSL socket EOF. +Extensions Modules +------------------ + +- Patch #2240: Implement signal.setitimer and signal.getitimer. 
+ Library ------- Modified: python/trunk/Modules/signalmodule.c ============================================================================== --- python/trunk/Modules/signalmodule.c (original) +++ python/trunk/Modules/signalmodule.c Mon Mar 24 14:31:16 2008 @@ -13,6 +13,7 @@ #include #include +#include #ifndef SIG_ERR #define SIG_ERR ((PyOS_sighandler_t)(-1)) @@ -93,6 +94,49 @@ static PyOS_sighandler_t old_siginthandler = SIG_DFL; +#ifdef HAVE_GETITIMER +static PyObject *ItimerError; + +/* auxiliary functions for setitimer/getitimer */ +static void +timeval_from_double(double d, struct timeval *tv) +{ + tv->tv_sec = floor(d); + tv->tv_usec = fmod(d, 1.0) * 1000000.0; +} + +static inline double +double_from_timeval(struct timeval *tv) +{ + return tv->tv_sec + (double)(tv->tv_usec / 1000000.0); +} + +static PyObject * +itimer_retval(struct itimerval *iv) +{ + PyObject *r, *v; + + r = PyTuple_New(2); + if (r == NULL) + return NULL; + + if(!(v = PyFloat_FromDouble(double_from_timeval(&iv->it_value)))) { + Py_DECREF(r); + return NULL; + } + + PyTuple_SET_ITEM(r, 0, v); + + if(!(v = PyFloat_FromDouble(double_from_timeval(&iv->it_interval)))) { + Py_DECREF(r); + return NULL; + } + + PyTuple_SET_ITEM(r, 1, v); + + return r; +} +#endif static PyObject * signal_default_int_handler(PyObject *self, PyObject *args) @@ -347,11 +391,77 @@ } +#ifdef HAVE_SETITIMER +static PyObject * +signal_setitimer(PyObject *self, PyObject *args) +{ + double first; + double interval = 0; + int which; + struct itimerval new, old; + + if(!PyArg_ParseTuple(args, "id|d:setitimer", &which, &first, &interval)) + return NULL; + + timeval_from_double(first, &new.it_value); + timeval_from_double(interval, &new.it_interval); + /* Let OS check "which" value */ + if (setitimer(which, &new, &old) != 0) { + PyErr_SetFromErrno(ItimerError); + return NULL; + } + + return itimer_retval(&old); +} + +PyDoc_STRVAR(setitimer_doc, +"setitimer(which, seconds[, interval])\n\ +\n\ +Sets given itimer (one of ITIMER_REAL, ITIMER_VIRTUAL\n\ +or ITIMER_PROF) to fire after value seconds and after\n\ +that every interval seconds.\n\ +The itimer can be cleared by setting seconds to zero.\n\ +\n\ +Returns old values as a tuple: (delay, interval)."); +#endif + + +#ifdef HAVE_GETITIMER +static PyObject * +signal_getitimer(PyObject *self, PyObject *args) +{ + int which; + struct itimerval old; + + if (!PyArg_ParseTuple(args, "i:getitimer", &which)) + return NULL; + + if (getitimer(which, &old) != 0) { + PyErr_SetFromErrno(ItimerError); + return NULL; + } + + return itimer_retval(&old); +} + +PyDoc_STRVAR(getitimer_doc, +"getitimer(which)\n\ +\n\ +Returns current value of given itimer."); +#endif + + /* List of functions defined in the module */ static PyMethodDef signal_methods[] = { #ifdef HAVE_ALARM {"alarm", signal_alarm, METH_VARARGS, alarm_doc}, #endif +#ifdef HAVE_SETITIMER + {"setitimer", signal_setitimer, METH_VARARGS, setitimer_doc}, +#endif +#ifdef HAVE_GETITIMER + {"getitimer", signal_getitimer, METH_VARARGS, getitimer_doc}, +#endif {"signal", signal_signal, METH_VARARGS, signal_doc}, {"getsignal", signal_getsignal, METH_VARARGS, getsignal_doc}, {"set_wakeup_fd", signal_set_wakeup_fd, METH_VARARGS, set_wakeup_fd_doc}, @@ -374,19 +484,32 @@ Functions:\n\ \n\ alarm() -- cause SIGALRM after a specified time [Unix only]\n\ +setitimer() -- cause a signal (described below) after a specified\n\ + float time and the timer may restart then [Unix only]\n\ +getitimer() -- get current value of timer [Unix only]\n\ signal() -- set the action for a given 
signal\n\ getsignal() -- get the signal action for a given signal\n\ pause() -- wait until a signal arrives [Unix only]\n\ default_int_handler() -- default SIGINT handler\n\ \n\ -Constants:\n\ -\n\ +signal constants:\n\ SIG_DFL -- used to refer to the system default handler\n\ SIG_IGN -- used to ignore the signal\n\ NSIG -- number of defined signals\n\ -\n\ SIGINT, SIGTERM, etc. -- signal numbers\n\ \n\ +itimer constants:\n\ +ITIMER_REAL -- decrements in real time, and delivers SIGALRM upon\n\ + expiration\n\ +ITIMER_VIRTUAL -- decrements only when the process is executing,\n\ + and delivers SIGVTALRM upon expiration\n\ +ITIMER_PROF -- decrements both when the process is executing and\n\ + when the system is executing on behalf of the process.\n\ + Coupled with ITIMER_VIRTUAL, this timer is usually\n\ + used to profile the time spent by the application\n\ + in user and kernel space. SIGPROF is delivered upon\n\ + expiration.\n\ +\n\n\ *** IMPORTANT NOTICE ***\n\ A signal handler function is called with two arguments:\n\ the first is the signal number, the second is the interrupted stack frame."); @@ -639,6 +762,29 @@ PyDict_SetItemString(d, "SIGINFO", x); Py_XDECREF(x); #endif + +#ifdef ITIMER_REAL + x = PyLong_FromLong(ITIMER_REAL); + PyDict_SetItemString(d, "ITIMER_REAL", x); + Py_DECREF(x); +#endif +#ifdef ITIMER_VIRTUAL + x = PyLong_FromLong(ITIMER_VIRTUAL); + PyDict_SetItemString(d, "ITIMER_VIRTUAL", x); + Py_DECREF(x); +#endif +#ifdef ITIMER_PROF + x = PyLong_FromLong(ITIMER_PROF); + PyDict_SetItemString(d, "ITIMER_PROF", x); + Py_DECREF(x); +#endif + +#if defined (HAVE_SETITIMER) || defined (HAVE_GETITIMER) + ItimerError = PyErr_NewException("signal.ItimerError", + PyExc_IOError, NULL); + PyDict_SetItemString(d, "ItimerError", ItimerError); +#endif + if (!PyErr_Occurred()) return; Modified: python/trunk/configure ============================================================================== --- python/trunk/configure (original) +++ python/trunk/configure Mon Mar 24 14:31:16 2008 @@ -1,5 +1,5 @@ #! /bin/sh -# From configure.in Revision: 61436 . +# From configure.in Revision: 61722 . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.61 for python 2.6. 
 #
@@ -15506,8 +15506,10 @@
-for ac_func in alarm bind_textdomain_codeset chown clock confstr \
- ctermid execv fchmod fchown fork fpathconf ftime ftruncate \
+
+
+for ac_func in alarm setitimer getitimer bind_textdomain_codeset chown \
+ clock confstr ctermid execv fchmod fchown fork fpathconf ftime ftruncate \
 gai_strerror getgroups getlogin getloadavg getpeername getpgid getpid \
 getpriority getpwent getspnam getspent getsid getwd \
 kill killpg lchmod lchown lstat mkfifo mknod mktime \

Modified: python/trunk/configure.in
==============================================================================
--- python/trunk/configure.in	(original)
+++ python/trunk/configure.in	Mon Mar 24 14:31:16 2008
@@ -2303,8 +2303,8 @@
 AC_MSG_RESULT(MACHDEP_OBJS)

 # checks for library functions
-AC_CHECK_FUNCS(alarm bind_textdomain_codeset chown clock confstr \
- ctermid execv fchmod fchown fork fpathconf ftime ftruncate \
+AC_CHECK_FUNCS(alarm setitimer getitimer bind_textdomain_codeset chown \
+ clock confstr ctermid execv fchmod fchown fork fpathconf ftime ftruncate \
 gai_strerror getgroups getlogin getloadavg getpeername getpgid getpid \
 getpriority getpwent getspnam getspent getsid getwd \
 kill killpg lchmod lchown lstat mkfifo mknod mktime \

Modified: python/trunk/pyconfig.h.in
==============================================================================
--- python/trunk/pyconfig.h.in	(original)
+++ python/trunk/pyconfig.h.in	Mon Mar 24 14:31:16 2008
@@ -243,6 +243,9 @@
 /* Define this if you have the 6-arg version of gethostbyname_r(). */
 #undef HAVE_GETHOSTBYNAME_R_6_ARG

+/* Define to 1 if you have the `getitimer' function. */
+#undef HAVE_GETITIMER
+
 /* Define to 1 if you have the `getloadavg' function. */
 #undef HAVE_GETLOADAVG

@@ -501,6 +504,9 @@
 /* Define if you have the 'setgroups' function. */
 #undef HAVE_SETGROUPS

+/* Define to 1 if you have the `setitimer' function. */
+#undef HAVE_SETITIMER
+
 /* Define to 1 if you have the `setlocale' function. */
 #undef HAVE_SETLOCALE

From buildbot at python.org Mon Mar 24 14:43:10 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 13:43:10 +0000
Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk
Message-ID: <20080324134310.4C60B1E401D@bag.python.org>

The Buildbot has detected a new failure of x86 W2k8 trunk.
Full details are available at:
 http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/196
Buildbot URL: http://www.python.org/dev/buildbot/all/
Buildslave for this Build: nelson-windows
Build Reason: 
Build Source Stamp: [branch trunk] HEAD
Blamelist: martin.v.loewis
BUILD FAILED: failed compile
sincerely,
 -The Buildbot

From buildbot at python.org Mon Mar 24 14:47:56 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 13:47:56 +0000
Subject: [Python-checkins] buildbot failure in x86 W2k8 3.0
Message-ID: <20080324134757.18EDE1E401D@bag.python.org>

The Buildbot has detected a new failure of x86 W2k8 3.0.
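The r61847 change above exposes the POSIX interval timers to Python as signal.setitimer() and signal.getitimer(), guarded by the HAVE_SETITIMER/HAVE_GETITIMER checks added to configure. A minimal usage sketch, not taken from the checkin itself: the handler name and the timing values below are illustrative, and the functions only exist on Unix builds where configure detected setitimer()/getitimer().

    import signal

    def on_alarm(signum, frame):          # illustrative handler; any (signum, frame) callable works
        print("SIGALRM: ITIMER_REAL expired")

    if hasattr(signal, "setitimer"):      # absent when HAVE_SETITIMER was not defined at build time
        signal.signal(signal.SIGALRM, on_alarm)
        # First expiration after 0.5s, then every 0.2s; the previous (delay, interval) is returned.
        old = signal.setitimer(signal.ITIMER_REAL, 0.5, 0.2)
        print(signal.getitimer(signal.ITIMER_REAL))   # current (delay, interval) as floats
        signal.pause()                                # wait for the first expiration to be delivered
        signal.setitimer(signal.ITIMER_REAL, 0)       # seconds=0 clears the timer

Per the docstrings in the diff, ITIMER_VIRTUAL and ITIMER_PROF work the same way but deliver SIGVTALRM and SIGPROF respectively.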
Full details are available at:
 http://www.python.org/dev/buildbot/all/x86%20W2k8%203.0/builds/110
Buildbot URL: http://www.python.org/dev/buildbot/all/
Buildslave for this Build: nelson-windows
Build Reason: 
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: martin.v.loewis
BUILD FAILED: failed compile
sincerely,
 -The Buildbot

From python-checkins at python.org Mon Mar 24 14:54:23 2008
From: python-checkins at python.org (martin.v.loewis)
Date: Mon, 24 Mar 2008 14:54:23 +0100 (CET)
Subject: [Python-checkins] r61849 - python/trunk/Modules/signalmodule.c
Message-ID: <20080324135423.EB97F1E4024@bag.python.org>

Author: martin.v.loewis
Date: Mon Mar 24 14:54:23 2008
New Revision: 61849

Modified:
   python/trunk/Modules/signalmodule.c
Log:
Conditionalize sys/time.h inclusion.

Modified: python/trunk/Modules/signalmodule.c
==============================================================================
--- python/trunk/Modules/signalmodule.c	(original)
+++ python/trunk/Modules/signalmodule.c	Mon Mar 24 14:54:23 2008
@@ -13,7 +13,9 @@
 #include
 #include
+#ifdef HAVE_SYS_TIME_H
 #include <sys/time.h>
+#endif

 #ifndef SIG_ERR
 #define SIG_ERR ((PyOS_sighandler_t)(-1))

From buildbot at python.org Mon Mar 24 15:03:56 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 14:03:56 +0000
Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0
Message-ID: <20080324140356.BC30C1E401D@bag.python.org>

The Buildbot has detected a new failure of amd64 gentoo 3.0.
Full details are available at:
 http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/226
Buildbot URL: http://www.python.org/dev/buildbot/all/
Buildslave for this Build: norwitz-amd64
Build Reason: 
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: martin.v.loewis
BUILD FAILED: failed test
Excerpt from the test logfile:
make: *** [buildbottest] Alarm clock
sincerely,
 -The Buildbot

From buildbot at python.org Mon Mar 24 15:04:33 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 14:04:33 +0000
Subject: [Python-checkins] buildbot failure in S-390 Debian trunk
Message-ID: <20080324140434.1CA7A1E401D@bag.python.org>

The Buildbot has detected a new failure of S-390 Debian trunk.
Full details are available at:
 http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/242
Buildbot URL: http://www.python.org/dev/buildbot/all/
Buildslave for this Build: klose-debian-s390
Build Reason: 
Build Source Stamp: [branch trunk] HEAD
Blamelist: georg.brandl,martin.v.loewis
BUILD FAILED: failed test
Excerpt from the test logfile:
1 test failed:
    test_signal
make: *** [buildbottest] Error 1
sincerely,
 -The Buildbot

From buildbot at python.org Mon Mar 24 15:35:21 2008
From: buildbot at python.org (buildbot at python.org)
Date: Mon, 24 Mar 2008 14:35:21 +0000
Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0
Message-ID: <20080324143522.247A21E401D@bag.python.org>

The Buildbot has detected a new failure of ppc Debian unstable 3.0.
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/695 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_timeout make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 15:52:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 14:52:53 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080324145254.54F251E401E@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/142 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal ====================================================================== FAIL: test_itimer_exc (test.test_signal.ItimerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_signal.py", line 308, in test_itimer_exc signal.ITIMER_REAL, -1) AssertionError: ItimerError not raised by setitimer make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 24 16:17:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 15:17:46 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 3.0 Message-ID: <20080324151746.8AD6D1E401D@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 3.0. 
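A rough paraphrase of what the test_itimer_exc failure quoted above is asserting (this sketch is mine, not copied from Lib/test/test_signal.py): a negative time is expected to make the underlying setitimer(2) call fail, which the new module code reports as signal.ItimerError, the exception created from IOError in the module init shown earlier. On the S-390 buildbot the call evidently succeeded instead.

    import signal

    try:
        signal.setitimer(signal.ITIMER_REAL, -1)
    except signal.ItimerError:
        print("negative value rejected, as the test expects")
    else:
        signal.setitimer(signal.ITIMER_REAL, 0)   # clear whatever timer the call armed
        print("negative value accepted -- the behaviour behind the failure above")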
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%203.0/builds/717 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Mon Mar 24 20:57:42 2008 From: python-checkins at python.org (christian.heimes) Date: Mon, 24 Mar 2008 20:57:42 +0100 (CET) Subject: [Python-checkins] r61851 - python/trunk/Python/sysmodule.c Message-ID: <20080324195742.6EB901E401E@bag.python.org> Author: christian.heimes Date: Mon Mar 24 20:57:42 2008 New Revision: 61851 Modified: python/trunk/Python/sysmodule.c Log: Added quick hack for bzr Modified: python/trunk/Python/sysmodule.c ============================================================================== --- python/trunk/Python/sysmodule.c (original) +++ python/trunk/Python/sysmodule.c Mon Mar 24 20:57:42 2008 @@ -1063,8 +1063,15 @@ return; python = strstr(headurl, "/python/"); - if (!python) + if (!python) { + /* XXX quick hack to get bzr working */ + *patchlevel_revision = '\0'; + strcpy(branch, ""); + strcpy(shortbranch, "unknown"); + svn_revision = ""; + return Py_FatalError("subversion keywords missing"); + } br_start = python + 8; br_end = strchr(br_start, '/'); From python-checkins at python.org Mon Mar 24 20:58:17 2008 From: python-checkins at python.org (christian.heimes) Date: Mon, 24 Mar 2008 20:58:17 +0100 (CET) Subject: [Python-checkins] r61852 - python/trunk/Python/sysmodule.c Message-ID: <20080324195817.6D1A91E401E@bag.python.org> Author: christian.heimes Date: Mon Mar 24 20:58:17 2008 New Revision: 61852 Modified: python/trunk/Python/sysmodule.c Log: Added quick hack for bzr Modified: python/trunk/Python/sysmodule.c ============================================================================== --- python/trunk/Python/sysmodule.c (original) +++ python/trunk/Python/sysmodule.c Mon Mar 24 20:58:17 2008 @@ -1069,8 +1069,8 @@ strcpy(branch, ""); strcpy(shortbranch, "unknown"); svn_revision = ""; - return - Py_FatalError("subversion keywords missing"); + return; + /* Py_FatalError("subversion keywords missing"); */ } br_start = python + 8; From nnorwitz at gmail.com Mon Mar 24 22:08:19 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 24 Mar 2008 16:08:19 -0500 Subject: [Python-checkins] Python Regression Test Failures basics (1) Message-ID: <20080324210819.GA22067@python.psfb.org> 315 tests OK. 
1 test failed: test_signal 31 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_py3kwarn test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_epoll test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_epoll test_epoll skipped -- kernel doesn't support epoll() test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_int_literal test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_kqueue test_kqueue skipped -- test works only on BSD test_largefile test_linuxaudiodev test_linuxaudiodev 
skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser Expecting 's_push: parser stack overflow' in next line s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8070 refs] [8070 refs] [8070 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_print test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_py3kwarn test_py3kwarn skipped -- test.test_py3kwarn must be run with the -3 flag test_pyclbr test_pyexpat test_queue test_quopri [8447 refs] [8447 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test test_signal failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_signal.py", line 308, in test_itimer_exc signal.ITIMER_REAL, -1) AssertionError: ItimerError not raised test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8065 refs] [8067 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8067 refs] [9990 refs] [8283 refs] [8067 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] . [8065 refs] [8065 refs] this bit of output is from a test of stdout in a different process ... [8065 refs] [8065 refs] [8283 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8065 refs] [8065 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8070 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11209 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_undocumented_details test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 315 tests OK. 1 test failed: test_signal 31 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_py3kwarn test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_epoll test_ioctl [569849 refs] From nnorwitz at gmail.com Mon Mar 24 22:13:38 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 24 Mar 2008 16:13:38 -0500 Subject: [Python-checkins] Python Regression Test Failures opt (1) Message-ID: <20080324211338.GA23161@python.psfb.org> 315 tests OK. 
1 test failed: test_signal 31 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_py3kwarn test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_epoll test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_bsddb3 skipped -- Use of the `bsddb' resource not enabled test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_curses test_curses skipped -- Use of the `curses' resource not enabled test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils [10129 refs] test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_epoll test_epoll skipped -- kernel doesn't support epoll() test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_int_literal test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_kqueue test_kqueue skipped -- test works only on BSD test_largefile test_linuxaudiodev 
test_linuxaudiodev skipped -- Use of the `audio' resource not enabled test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser Expecting 's_push: parser stack overflow' in next line s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8070 refs] [8070 refs] [8070 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_print test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_py3kwarn test_py3kwarn skipped -- test.test_py3kwarn must be run with the -3 flag test_pyclbr test_pyexpat test_queue test_quopri [8447 refs] [8447 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test test_signal failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_signal.py", line 308, in test_itimer_exc signal.ITIMER_REAL, -1) AssertionError: ItimerError not raised test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8065 refs] [8067 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8067 refs] [9990 refs] [8283 refs] [8067 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] . [8065 refs] [8065 refs] this bit of output is from a test of stdout in a different process ... [8065 refs] [8065 refs] [8283 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8065 refs] [8065 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8070 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11209 refs] test_threading_local test_threadsignals test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_undocumented_details test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. 
test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 315 tests OK. 1 test failed: test_signal 31 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_linuxaudiodev test_macostools test_ossaudiodev test_pep277 test_py3kwarn test_scriptpackages test_socketserver test_startfile test_sunaudiodev test_tcl test_timeout test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_epoll test_ioctl [569439 refs] From python-checkins at python.org Mon Mar 24 22:04:10 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Mon, 24 Mar 2008 22:04:10 +0100 (CET) Subject: [Python-checkins] r61853 - python/trunk/Objects/unicodeobject.c Message-ID: <20080324210410.CF5851E401E@bag.python.org> Author: amaury.forgeotdarc Date: Mon Mar 24 22:04:10 2008 New Revision: 61853 Modified: python/trunk/Objects/unicodeobject.c Log: Issue2469: Correct a typo I introduced at r61793: compilation error with UCS4 builds. All buildbots compile with UCS2... Modified: python/trunk/Objects/unicodeobject.c ============================================================================== --- python/trunk/Objects/unicodeobject.c (original) +++ python/trunk/Objects/unicodeobject.c Mon Mar 24 22:04:10 2008 @@ -3095,7 +3095,7 @@ /* UCS-4 character. Either store directly, or as surrogate pair. */ #ifdef Py_UNICODE_WIDE - *p++ = (Py_UNIC0DE) x; + *p++ = (Py_UNICODE) x; #else x -= 0x10000L; *p++ = 0xD800 + (Py_UNICODE) (x >> 10); From python-checkins at python.org Mon Mar 24 22:17:00 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Mon, 24 Mar 2008 22:17:00 +0100 (CET) Subject: [Python-checkins] r61854 - in python/branches/release25-maint: Lib/test/test_unicode.py Misc/NEWS Objects/unicodeobject.c Message-ID: <20080324211700.9A76A1E401E@bag.python.org> Author: amaury.forgeotdarc Date: Mon Mar 24 22:16:28 2008 New Revision: 61854 Modified: python/branches/release25-maint/Lib/test/test_unicode.py python/branches/release25-maint/Misc/NEWS python/branches/release25-maint/Objects/unicodeobject.c Log: #1477: ur'\U0010FFFF' used to raise in narrow unicode builds. Corrected the raw-unicode-escape codec to use UTF-16 surrogates in this case, like the unicode-escape codec does. 
Backport of r61793 and r61853 Modified: python/branches/release25-maint/Lib/test/test_unicode.py ============================================================================== --- python/branches/release25-maint/Lib/test/test_unicode.py (original) +++ python/branches/release25-maint/Lib/test/test_unicode.py Mon Mar 24 22:16:28 2008 @@ -736,12 +736,25 @@ print >>out, u'def\n' def test_ucs4(self): - if sys.maxunicode == 0xFFFF: - return x = u'\U00100000' y = x.encode("raw-unicode-escape").decode("raw-unicode-escape") self.assertEqual(x, y) + y = r'\U00100000' + x = y.decode("raw-unicode-escape").encode("raw-unicode-escape") + self.assertEqual(x, y) + y = r'\U00010000' + x = y.decode("raw-unicode-escape").encode("raw-unicode-escape") + self.assertEqual(x, y) + + try: + '\U11111111'.decode("raw-unicode-escape") + except UnicodeDecodeError, e: + self.assertEqual(e.start, 0) + self.assertEqual(e.end, 10) + else: + self.fail("Should have raised UnicodeDecodeError") + def test_conversion(self): # Make sure __unicode__() works properly class Foo0: Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Mon Mar 24 22:16:28 2008 @@ -11,6 +11,13 @@ Core and builtins ----------------- + +- Issue #1477: With narrow Unicode builds, the unicode escape sequence + \Uxxxxxxxx did not accept values outside the Basic Multilingual Plane. This + affected raw unicode literals and the 'raw-unicode-escape' codec. Now + UTF-16 surrogates are generated in this case, like normal unicode literals + and the 'unicode-escape' codec. + - Issue #2321: use pymalloc for unicode object string data to reduce memory usage in some circumstances. Modified: python/branches/release25-maint/Objects/unicodeobject.c ============================================================================== --- python/branches/release25-maint/Objects/unicodeobject.c (original) +++ python/branches/release25-maint/Objects/unicodeobject.c Mon Mar 24 22:16:28 2008 @@ -2273,8 +2273,22 @@ else x += 10 + c - 'A'; } -#ifndef Py_UNICODE_WIDE - if (x > 0x10000) { + if (x <= 0xffff) + /* UCS-2 character */ + *p++ = (Py_UNICODE) x; + else if (x <= 0x10ffff) { + /* UCS-4 character. Either store directly, or as + surrogate pair. 
*/ +#ifdef Py_UNICODE_WIDE + *p++ = (Py_UNICODE) x; +#else + x -= 0x10000L; + *p++ = 0xD800 + (Py_UNICODE) (x >> 10); + *p++ = 0xDC00 + (Py_UNICODE) (x & 0x03FF); +#endif + } else { + endinpos = s-starts; + outpos = p-PyUnicode_AS_UNICODE(v); if (unicode_decode_call_errorhandler( errors, &errorHandler, "rawunicodeescape", "\\Uxxxxxxxx out of range", @@ -2282,8 +2296,6 @@ (PyObject **)&v, &outpos, &p)) goto onError; } -#endif - *p++ = x; nextByte: ; } @@ -2337,6 +2349,32 @@ *p++ = hexdigit[ch & 15]; } else +#else + /* Map UTF-16 surrogate pairs to '\U00xxxxxx' */ + if (ch >= 0xD800 && ch < 0xDC00) { + Py_UNICODE ch2; + Py_UCS4 ucs; + + ch2 = *s++; + size--; + if (ch2 >= 0xDC00 && ch2 <= 0xDFFF) { + ucs = (((ch & 0x03FF) << 10) | (ch2 & 0x03FF)) + 0x00010000; + *p++ = '\\'; + *p++ = 'U'; + *p++ = hexdigit[(ucs >> 28) & 0xf]; + *p++ = hexdigit[(ucs >> 24) & 0xf]; + *p++ = hexdigit[(ucs >> 20) & 0xf]; + *p++ = hexdigit[(ucs >> 16) & 0xf]; + *p++ = hexdigit[(ucs >> 12) & 0xf]; + *p++ = hexdigit[(ucs >> 8) & 0xf]; + *p++ = hexdigit[(ucs >> 4) & 0xf]; + *p++ = hexdigit[ucs & 0xf]; + continue; + } + /* Fall through: isolated surrogates are copied as-is */ + s--; + size++; + } #endif /* Map 16-bit characters to '\uxxxx' */ if (ch >= 256) { From nnorwitz at gmail.com Mon Mar 24 23:13:04 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 24 Mar 2008 17:13:04 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080324221304.GA9554@python.psfb.org> More important issues: ---------------------- test_xmlrpc leaked [33, -30, 4] references, sum=7 Less important issues: ---------------------- test_cmd_line leaked [0, 0, -23] references, sum=-23 test_popen2 leaked [26, -26, 0] references, sum=0 test_smtplib leaked [-86, 86, -86] references, sum=-86 test_threadedtempfile leaked [0, 0, 102] references, sum=102 test_threadsignals leaked [0, 0, -8] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From buildbot at python.org Mon Mar 24 22:47:35 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 21:47:35 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080324214735.C56521E401F@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
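A sketch of the surrogate arithmetic used by the narrow-build code paths in the raw-unicode-escape change above (r61793/r61853 and the r61854 backport). The helper names are mine, not from unicodeobject.c: a code point above U+FFFF is split into a UTF-16 high/low surrogate pair when decoding, and the encoder folds such a pair back into a single \Uxxxxxxxx escape, which is what the new test_ucs4 round trips check.

    def to_surrogates(cp):
        # split a supplementary-plane code point into (high, low) surrogates
        assert 0x10000 <= cp <= 0x10FFFF
        cp -= 0x10000
        return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

    def from_surrogates(hi, lo):
        # reassemble the code point, mirroring the encoder's computation
        return (((hi & 0x3FF) << 10) | (lo & 0x3FF)) + 0x10000

    hi, lo = to_surrogates(0x100000)          # the U+100000 case exercised by test_ucs4
    assert (hi, lo) == (0xDBC0, 0xDC00)
    assert from_surrogates(hi, lo) == 0x100000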
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1155 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 18 tests failed: test_array test_bz2 test_deque test_distutils test_file test_gzip test_hotshot test_list test_mailbox test_mmap test_multibytecodec test_posixpath test_set test_univnewlines test_urllib test_urllib2 test_uu test_zipfile ====================================================================== ERROR: test_tofromfile (test.test_array.UnicodeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_maxlen (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 86, in test_maxlen os.remove(test_support.TESTFN) WindowsError: [Error 5] Access is denied: '@test' ====================================================================== ERROR: test_print (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 291, in test_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_whence (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 132, in test_seek_whence self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_addinfo (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 74, in test_addinfo profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, 
self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bad_sys_path (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 118, in test_bad_sys_path self.assertRaises(RuntimeError, coverage, test_support.TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\unittest.py", line 329, in failUnlessRaises callableObj(*args, **kwargs) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_line_numbers (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 98, in test_line_numbers self.run_test(g, events, self.new_profiler(lineevents=1)) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_start_stop (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 104, in test_start_stop profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_list.ListTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\list_tests.py", line 66, in test_print fo.close() UnboundLocalError: local variable 'fo' referenced before assignment ====================================================================== ERROR: test_add (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_MM (test.test_mailbox.TestMaildir) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_remove_folders (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clean (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMaildir) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_consistent_factory (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_create_tmp (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMaildir) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_directory_in_folder (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMaildir) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_folder (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_MM (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_folder (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_initialize_existing (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_initialize_new (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_list_folders (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lookup (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_refresh (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", 
line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_MM (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 465, in setUp TestMailbox.setUp(self) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 462, in _factory = lambda self, path, factory=None: mailbox.Maildir(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 233, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_mbox_or_mmdf_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 809, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 2] No such file or directory: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_basic (test.test_mmap.MmapTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 24, in test_basic f = open(TESTFN, 'w+') IOError: [Errno 13] Permission denied: '@test' 
======================================================================
ERROR: test_double_close (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 296, in test_double_close
    f = open(TESTFN, 'w+')
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_entire_file (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 310, in test_entire_file
    f = open(TESTFN, "w+")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_find_end (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 260, in test_find_end
    f = open(TESTFN, 'w+')
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_move (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 324, in test_move
    f = open(TESTFN, 'w+')
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_offset (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 388, in test_offset
    f = open (TESTFN, 'w+b')
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_rfind (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 278, in test_rfind
    f = open(TESTFN, 'w+')
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_tougher_find (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 242, in test_tougher_find
    f = open(TESTFN, 'w+')
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_bug1728403 (test.test_multibytecodec.Test_StreamReader)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_multibytecodec.py", line 148, in test_bug1728403
    os.unlink(TESTFN)
WindowsError: [Error 5] Access is denied: '@test'
======================================================================
ERROR: test_exists (test.test_posixpath.PosixPathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_posixpath.py", line 196, in test_exists
    f = open(test_support.TESTFN, "wb")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_getsize (test.test_posixpath.PosixPathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_posixpath.py", line 144, in test_getsize
    f = open(test_support.TESTFN, "wb")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_isdir (test.test_posixpath.PosixPathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_posixpath.py", line 210, in test_isdir
    f = open(test_support.TESTFN, "wb")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_isfile (test.test_posixpath.PosixPathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_posixpath.py", line 227, in test_isfile
    f = open(test_support.TESTFN, "wb")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_time (test.test_posixpath.PosixPathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_posixpath.py", line 154, in test_time
    f = open(test_support.TESTFN, "wb")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_cyclical_print (test.test_set.TestSet)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 293, in test_cyclical_print
    fo.close()
UnboundLocalError: local variable 'fo' referenced before assignment
======================================================================
ERROR: test_cyclical_print (test.test_set.TestSetSubclass)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 293, in test_cyclical_print
    fo.close()
UnboundLocalError: local variable 'fo' referenced before assignment
======================================================================
ERROR: test_file (test.test_urllib2.HandlerTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib2.py", line 612, in test_file
    f = open(TESTFN, "wb")
IOError: [Errno 13] Permission denied: '@test'
======================================================================
ERROR: test_decode (test.test_uu.UUFileTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 149, in test_decode
    uu.decode(f)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\uu.py", line 111, in decode
    raise Error('Cannot overwrite existing file: %s' % out_file)
Error: Cannot overwrite existing file: @testo
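The two test_cyclical_print errors are secondary: an UnboundLocalError at fo.close() means the cleanup ran even though fo was never bound, almost certainly because the earlier open() of test_support.TESTFN had already failed with the same permission problem. Below is a minimal sketch of that failure shape and a guarded variant; the names and the exact structure are illustrative, not the actual test_set code.

    import os

    SCRATCH = "@test"          # stands in for test_support.TESTFN

    def cleanup_unguarded():
        # Failure shape seen above: if open() raises, 'fo' is never bound,
        # so the finally clause masks the real error with UnboundLocalError.
        try:
            fo = open(SCRATCH, "w")
            fo.write("spam")
        finally:
            fo.close()                 # UnboundLocalError if open() failed
            os.remove(SCRATCH)

    def cleanup_guarded():
        # Guarded variant: only touch the handle if it was actually created.
        fo = None
        try:
            fo = open(SCRATCH, "w")
            fo.write("spam")
        finally:
            if fo is not None:
                fo.close()
            if os.path.exists(SCRATCH):
                os.remove(SCRATCH)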
====================================================================== ERROR: test_decodetwice (test.test_uu.UUFileTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 165, in test_decodetwice f = open(self.tmpin, 'r') IOError: [Errno 2] No such file or directory: '@testi' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testLowCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 211, in testRandomOpenDeflated self.zipRandomOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 136, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 153, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 136, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive 
zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 215, in testReadlineDeflated self.zipReadlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 156, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 190, in testReadlineStored self.zipReadlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 156, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 219, in testReadlinesDeflated self.zipReadlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 168, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 194, in testReadlinesStored self.zipReadlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 168, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 104, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 44, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 286, in test_PerFileCompression zipfp.write(TESTFN, 'storeme', zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 279, in test_WriteDefaultName zipfp.write(TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being 
used by another process: '@test2' ====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 627, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testCreateNonExistentFileForAppend (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 565, in testCreateNonExistentFileForAppend zf.writestr(filename, content) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIsZipValidFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 604, in testIsZipValidFile zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_BadOpenMode (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 648, in test_BadOpenMode zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_NullByteInFilename (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 682, in test_NullByteInFilename zipf.writestr("foo.txt\x00qqq", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise 
LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_Read0 (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 660, in test_Read0 zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 490, in testWritePyfile zipfp.writepy(fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1174, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 538, in testWritePythonDirectory zipfp.writepy(TESTFN2) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1166, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 515, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1137, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 
719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testGoodPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testDifferentFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testInterleaved (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions 
====================================================================== ERROR: testSameFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 990, in testIterlinesDeflated self.iterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 973, in testIterlinesStored self.iterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 978, in testReadDeflated self.readTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize 
would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 961, in testReadStored self.readTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 982, in testReadlineDeflated self.readlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 965, in testReadlineStored self.readlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 986, in testReadlinesDeflated self.readlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, 
compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 969, in testReadlinesStored self.readlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 818, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 787, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 840, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 821, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: 
testStored (test.test_zipfile.TestsWithRandomBinaryFiles)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 784, in testStored
    self.zipTest(f, zipfile.ZIP_STORED)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 772, in zipTest
    self.makeTestArchive(f, compression)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive
    zipfp.write(TESTFN, "another"+os.extsep+"name")
  File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write
    self._writecheck(zinfo)
  File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck
    raise LargeZipFile("Filesize would require ZIP64 extensions")
LargeZipFile: Filesize would require ZIP64 extensions

sincerely,
 -The Buildbot

From python-checkins at python.org  Mon Mar 24 22:51:16 2008
From: python-checkins at python.org (martin.v.loewis)
Date: Mon, 24 Mar 2008 22:51:16 +0100 (CET)
Subject: [Python-checkins] r61856 - tracker/instances/setuptools/db/backend_name
Message-ID: <20080324215116.706091E4032@bag.python.org>

Author: martin.v.loewis
Date: Mon Mar 24 22:51:16 2008
New Revision: 61856

Removed:
   tracker/instances/setuptools/db/backend_name
Log:
Remove db from repository.

Deleted: /tracker/instances/setuptools/db/backend_name
==============================================================================
--- /tracker/instances/setuptools/db/backend_name	Mon Mar 24 22:51:16 2008
+++ (empty file)
@@ -1 +0,0 @@
-postgresql

From python-checkins at python.org  Mon Mar 24 22:52:08 2008
From: python-checkins at python.org (martin.v.loewis)
Date: Mon, 24 Mar 2008 22:52:08 +0100 (CET)
Subject: [Python-checkins] r61857 - tracker/instances/setuptools/db
Message-ID: <20080324215208.5D54D1E4024@bag.python.org>

Author: martin.v.loewis
Date: Mon Mar 24 22:52:08 2008
New Revision: 61857

Removed:
   tracker/instances/setuptools/db/
Log:
Remove db from repository; this doesn't work with roundup-admin initialize.

From python-checkins at python.org  Mon Mar 24 22:53:15 2008
From: python-checkins at python.org (martin.v.loewis)
Date: Mon, 24 Mar 2008 22:53:15 +0100 (CET)
Subject: [Python-checkins] r61858 - in tracker/instances/setuptools: detectors/config.ini.template detectors/sendmail.py extensions/timestamp.py extensions/timezone.py html/user.item.html html/user.register.html
Message-ID: <20080324215315.5210F1E401F@bag.python.org>

Author: martin.v.loewis
Date: Mon Mar 24 22:53:14 2008
New Revision: 61858

Added:
   tracker/instances/setuptools/detectors/config.ini.template   (contents, props changed)
   tracker/instances/setuptools/detectors/sendmail.py   (contents, props changed)
   tracker/instances/setuptools/extensions/timestamp.py   (contents, props changed)
   tracker/instances/setuptools/extensions/timezone.py   (contents, props changed)
Modified:
   tracker/instances/setuptools/html/user.item.html
   tracker/instances/setuptools/html/user.register.html
Log:
Add our standard extensions and detectors.

Added: tracker/instances/setuptools/detectors/config.ini.template
==============================================================================
--- (empty file)
+++ tracker/instances/setuptools/detectors/config.ini.template	Mon Mar 24 22:53:14 2008
@@ -0,0 +1,17 @@
+#This configuration file controls the behavior of busybody.py and tellteam.py
+#The two definitions can be comma-delimited lists of email addresses.
+#Be sure these addresses will accept mail from the tracker's email address.
+[main]
+triage_email = triage at example.com
+busybody_email= busybody at example.com
+
+# URI to XMLRPC server doing the actual spam check.
+spambayes_uri = http://www.webfast.com:80/sbrpc
+# These must match the {ham,spam}_cutoff setting in the SpamBayes server
+# config.
+spambayes_ham_cutoff = 0.2
+spambayes_spam_cutoff = 0.85
+
+spambayes_may_view_spam = User,Coordinator,Developer
+spambayes_may_classify = Coordinator
+spambayes_may_report_misclassified = User,Coordinator,Developer

Added: tracker/instances/setuptools/detectors/sendmail.py
==============================================================================
--- (empty file)
+++ tracker/instances/setuptools/detectors/sendmail.py	Mon Mar 24 22:53:14 2008
@@ -0,0 +1,190 @@
+from roundup import roundupdb
+
+def determineNewMessages(cl, nodeid, oldvalues):
+    ''' Figure a list of the messages that are being added to the given
+        node in this transaction.
+    '''
+    messages = []
+    if oldvalues is None:
+        # the action was a create, so use all the messages in the create
+        messages = cl.get(nodeid, 'messages')
+    elif oldvalues.has_key('messages'):
+        # the action was a set (so adding new messages to an existing issue)
+        m = {}
+        for msgid in oldvalues['messages']:
+            m[msgid] = 1
+        messages = []
+        # figure which of the messages now on the issue weren't there before
+        for msgid in cl.get(nodeid, 'messages'):
+            if not m.has_key(msgid):
+                messages.append(msgid)
+    return messages
+
+
+def is_spam(db, msgid):
+    """Return true if message has a spambayes score above
+    db.config.detectors['SPAMBAYES_SPAM_CUTOFF']. Also return true if
+    msgid is None, which happens when there are no messages (i.e., a
+    property-only change)"""
+    if not msgid:
+        return False
+    cutoff_score = float(db.config.detectors['SPAMBAYES_SPAM_CUTOFF'])
+
+    msg = db.getnode("msg", msgid)
+    if msg.has_key('spambayes_score') and \
+       msg['spambayes_score'] > cutoff_score:
+        return True
+    return False
+
+
+def sendmail(db, cl, nodeid, oldvalues):
+    """Send mail to various recipients, when changes occur:
+
+    * For all changes (property-only, or with new message), send mail
+      to all e-mail addresses defined in
+      db.config.detectors['BUSYBODY_EMAIL']
+
+    * For all changes (property-only, or with new message), send mail
+      to all members of the nosy list.
+
+    * For new issues, and only for new issue, send mail to
+      db.config.detectors['TRIAGE_EMAIL']
+
+    """
+
+    sendto = []
+
+    # The busybody addresses always get mail.
+    try:
+        sendto += db.config.detectors['BUSYBODY_EMAIL'].split(",")
+    except KeyError:
+        pass
+
+    # New submission?
+    if None == oldvalues:
+        changenote = cl.generateCreateNote(nodeid)
+        try:
+            # Add triage addresses
+            sendto += db.config.detectors['TRIAGE_EMAIL'].split(",")
+        except KeyError:
+            pass
+        oldfiles = []
+    else:
+        changenote = cl.generateChangeNote(nodeid, oldvalues)
+        oldfiles = oldvalues.get('files', [])
+
+    newfiles = db.issue.get(nodeid, 'files', [])
+    if oldfiles != newfiles:
+        added = [fid for fid in newfiles if fid not in oldfiles]
+        removed = [fid for fid in oldfiles if fid not in newfiles]
+        filemsg = ""
+
+        for fid in added:
+            url = db.config.TRACKER_WEB + "file%s/%s" % \
+                  (fid, db.file.get(fid, "name"))
+            changenote+="\nAdded file: %s" % url
+        for fid in removed:
+            url = db.config.TRACKER_WEB + "file%s/%s" % \
+                  (fid, db.file.get(fid, "name"))
+            changenote+="\nRemoved file: %s" % url
+
+
+    authid = db.getuid()
+
+    new_messages = determineNewMessages(cl, nodeid, oldvalues)
+
+    # Make sure we send a nosy mail even for property-only
+    # changes.
+    if not new_messages:
+        new_messages = [None]
+
+    for msgid in [msgid for msgid in new_messages if not is_spam(db, msgid)]:
+        try:
+            cl.send_message(nodeid, msgid, changenote, sendto,
+                            authid=authid)
+            nosymessage(db, nodeid, msgid, oldvalues, changenote)
+        except roundupdb.MessageSendError, message:
+            raise roundupdb.DetectorError, message
+
+def nosymessage(db, nodeid, msgid, oldvalues, note,
+                whichnosy='nosy',
+                from_address=None, cc=[], bcc=[]):
+    """Send a message to the members of an issue's nosy list.
+
+    The message is sent only to users on the nosy list who are not
+    already on the "recipients" list for the message.
+
+    These users are then added to the message's "recipients" list.
+
+    If 'msgid' is None, the message gets sent only to the nosy
+    list, and it's called a 'System Message'.
+
+    The "cc" argument indicates additional recipients to send the
+    message to that may not be specified in the message's recipients
+    list.
+
+    The "bcc" argument also indicates additional recipients to send the
+    message to that may not be specified in the message's recipients
+    list. These recipients will not be included in the To: or Cc:
+    address lists.
+    """
+    if msgid:
+        authid = db.msg.get(msgid, 'author')
+        recipients = db.msg.get(msgid, 'recipients', [])
+    else:
+        # "system message"
+        authid = None
+        recipients = []
+
+    sendto = []
+    bcc_sendto = []
+    seen_message = {}
+    for recipient in recipients:
+        seen_message[recipient] = 1
+
+    def add_recipient(userid, to):
+        # make sure they have an address
+        address = db.user.get(userid, 'address')
+        if address:
+            to.append(address)
+            recipients.append(userid)
+
+    def good_recipient(userid):
+        # Make sure we don't send mail to either the anonymous
+        # user or a user who has already seen the message.
+        return (userid and
+                (db.user.get(userid, 'username') != 'anonymous') and
+                not seen_message.has_key(userid))
+
+    # possibly send the message to the author, as long as they aren't
+    # anonymous
+    if (good_recipient(authid) and
+        (db.config.MESSAGES_TO_AUTHOR == 'yes' or
+         (db.config.MESSAGES_TO_AUTHOR == 'new' and not oldvalues))):
+        add_recipient(authid, sendto)
+
+    if authid:
+        seen_message[authid] = 1
+
+    # now deal with the nosy and cc people who weren't recipients.
+    for userid in cc + db.issue.get(nodeid, whichnosy):
+        if good_recipient(userid):
+            add_recipient(userid, sendto)
+
+    # now deal with bcc people.
+    for userid in bcc:
+        if good_recipient(userid):
+            add_recipient(userid, bcc_sendto)
+
+    # If we have new recipients, update the message's recipients
+    # and send the mail.
+    if sendto or bcc_sendto:
+        if msgid is not None:
+            db.msg.set(msgid, recipients=recipients)
+        db.issue.send_message(nodeid, msgid, note, sendto, from_address,
+                              bcc_sendto)
+
+
+def init(db):
+    db.issue.react('set', sendmail)
+    db.issue.react('create', sendmail)
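
The docstrings in sendmail.py above boil down to a few routing rules: the busybody addresses always get mail, the triage addresses get mail only when an issue is created, and nosy/cc users get mail only if they have an address, are not the anonymous user, and have not already seen the message. A minimal, self-contained sketch of that selection logic follows; the function and argument names are illustrative only and are not part of the tracker code.

def route_change(is_new_issue, busybody, triage, nosy, already_notified):
    # busybody addresses always get mail
    recipients = set(busybody)
    # triage addresses are added for new issues only
    if is_new_issue:
        recipients.update(triage)
    # nosy members are added unless they have already seen the message
    recipients.update(addr for addr in nosy if addr not in already_notified)
    return recipients

# a new issue where one nosy user was already a recipient of the message
print route_change(True,
                   busybody=["busybody at example.com"],
                   triage=["triage at example.com"],
                   nosy=["alice at example.com", "bob at example.com"],
                   already_notified=set(["bob at example.com"]))
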
Added: tracker/instances/setuptools/extensions/timestamp.py
==============================================================================
--- (empty file)
+++ tracker/instances/setuptools/extensions/timestamp.py	Mon Mar 24 22:53:14 2008
@@ -0,0 +1,28 @@
+import time, struct, base64
+from roundup.cgi.actions import RegisterAction
+from roundup.cgi.exceptions import *
+
+def timestamp():
+    return base64.encodestring(struct.pack("i", time.time())).strip()
+
+def unpack_timestamp(s):
+    return struct.unpack("i",base64.decodestring(s))[0]
+
+class Timestamped:
+    def check(self):
+        try:
+            created = unpack_timestamp(self.form['opaque'].value)
+        except KeyError:
+            raise FormError, "somebody tampered with the form"
+        if time.time() - created < 4:
+            raise FormError, "responding to the form too quickly"
+        return True
+
+class TimestampedRegister(Timestamped, RegisterAction):
+    def permission(self):
+        self.check()
+        RegisterAction.permission(self)
+
+def init(instance):
+    instance.registerUtil('timestamp', timestamp)
+    instance.registerAction('register', TimestampedRegister)

Added: tracker/instances/setuptools/extensions/timezone.py
==============================================================================
--- (empty file)
+++ tracker/instances/setuptools/extensions/timezone.py	Mon Mar 24 22:53:14 2008
@@ -0,0 +1,37 @@
+# Utility for replacing the simple input field for the timezone with
+# a select-field that lists the available values.
+
+import cgi
+
+try:
+    import pytz
+except ImportError:
+    pytz = None
+
+
+def tzfield(prop, name, default):
+    if pytz:
+        value = prop.plain()
+        if '' == value:
+            value = default
+        else:
+            try:
+                value = "Etc/GMT%+d" % int(value)
+            except ValueError:
+                pass
+
+        l = ['')
+        return '\n'.join(l)
+
+    else:
+        return prop.field()
+
+def init(instance):
+    instance.registerUtil('tzfield', tzfield)

Modified: tracker/instances/setuptools/html/user.item.html
==============================================================================
--- tracker/instances/setuptools/html/user.item.html	(original)
+++ tracker/instances/setuptools/html/user.item.html	Mon Mar 24 22:53:14 2008
@@ -104,10 +104,8 @@
  Timezone
 -
 - (this is a numeric hour offset, the default is
 - )
 +

Modified: tracker/instances/setuptools/html/user.register.html
==============================================================================
--- tracker/instances/setuptools/html/user.register.html	(original)
+++ tracker/instances/setuptools/html/user.register.html	Mon Mar 24 22:53:14 2008
@@ -11,7 +11,7 @@
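
The timestamp.py extension in the change above implements a simple anti-bot check for the registration form: the time at which the form was generated is packed into an opaque base64 field, and a registration that comes back less than four seconds later is rejected. A standalone sketch of that round trip, using only the stdlib calls that appear in the diff (the surrounding form handling, which roundup itself provides, is omitted; the four-second threshold is the one hard-coded in Timestamped.check()):

import time, struct, base64

def timestamp():
    # pack the current time as a 32-bit int and base64-encode it
    return base64.encodestring(struct.pack("i", int(time.time()))).strip()

def unpack_timestamp(s):
    return struct.unpack("i", base64.decodestring(s))[0]

opaque = timestamp()               # rendered into the form as a hidden field
# ... later, when the form is submitted ...
created = unpack_timestamp(opaque)
if time.time() - created < 4:      # same threshold as Timestamped.check() above
    print "rejected: responding to the form too quickly"
else:
    print "accepted"
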
                - + From nnorwitz at gmail.com Mon Mar 24 23:37:19 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 24 Mar 2008 17:37:19 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080324223719.GA24518@python.psfb.org> 320 tests OK. 1 test failed: test_signal 23 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_macostools test_pep277 test_py3kwarn test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_epoll test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... 
test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_epoll test_epoll skipped -- kernel doesn't support epoll() test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_int_literal test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_kqueue test_kqueue skipped -- test works only on BSD test_largefile test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser Expecting 's_push: parser stack overflow' in next line s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8070 refs] [8070 refs] [8070 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_print test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_py3kwarn test_py3kwarn skipped -- test.test_py3kwarn must be run with the -3 flag test_pyclbr test_pyexpat test_queue test_quopri [8447 refs] [8447 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test test_signal failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_signal.py", line 308, in test_itimer_exc signal.ITIMER_REAL, -1) AssertionError: ItimerError not raised test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8065 refs] 
[8067 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8067 refs] [9990 refs] [8283 refs] [8067 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] [8065 refs] . [8065 refs] [8065 refs] this bit of output is from a test of stdout in a different process ... [8065 refs] [8065 refs] [8283 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8065 refs] [8065 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8070 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11209 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_undocumented_details test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net No handlers could be found for logger "test_urllib2" test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 320 tests OK. 1 test failed: test_signal 23 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_macostools test_pep277 test_py3kwarn test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_epoll test_ioctl [581509 refs] From buildbot at python.org Mon Mar 24 23:21:17 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 22:21:17 +0000 Subject: [Python-checkins] buildbot failure in x86 gentoo 2.5 Message-ID: <20080324222117.9FBF81E401F@bag.python.org> The Buildbot has detected a new failure of x86 gentoo 2.5. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20gentoo%202.5/builds/588 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-x86 Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, 
max_retries=10) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30996, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') 1 test failed: test_socketserver Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 81, in run svr = svrcls(self.__addr, self.__hdlrcls) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/SocketServer.py", line 330, in __init__ self.server_bind() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/SocketServer.py", line 341, in server_bind self.socket.bind(self.server_address) File "", line 1, in bind error: (98, 'Address already in use') Traceback (most recent call last): File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/SocketServer.py", line 467, in process_request_thread self.handle_error(request, client_address) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/SocketServer.py", line 464, in process_request_thread self.finish_request(request, client_address) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/SocketServer.py", line 254, in finish_request self.RequestHandlerClass(request, client_address, self) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/SocketServer.py", line 522, in __init__ self.handle() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 21, in handle time.sleep(DELAY) AttributeError: 'NoneType' object has no attribute 'sleep' Traceback (most recent call last): File "./Lib/test/regrtest.py", line 557, in runtest_inner indirect_test() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 212, in test_main testall() File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 195, in testall testloop(socket.AF_INET, tcpservers, MyStreamHandler, teststream) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 144, in testloop testfunc(proto, addr) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 64, in teststream buf = data = receive(s, 100) File "/home/buildslave/python-trunk/2.5.norwitz-x86/build/Lib/test/test_socketserver.py", line 46, in receive return sock.recv(n) error: (104, 'Connection reset by peer') make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 00:02:50 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 23:02:50 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 2.5 Message-ID: <20080324230251.2C1351E401F@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 2.5. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%202.5/builds/224 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: 14 tests failed: test_array test_bz2 test_cookielib test_distutils test_exceptions test_gzip test_hotshot test_iter test_marshal test_pep277 test_set test_tarfile test_zipfile test_zipimport ====================================================================== ERROR: test_tofromfile (test.test_array.UnicodeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tofromfile (test.test_array.ByteTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tofromfile (test.test_array.ShortTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tofromfile (test.test_array.UnsignedShortTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\2.5.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\2.5.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 
13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\2.5.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\2.5.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 133, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\2.5.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\2.5.heller-windows\build\lib\gzip.py", line 95, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_addinfo (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 74, in test_addinfo profiler = self.new_profiler() File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\2.5.heller-windows\build\lib\hotshot\__init__.py", line 13, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bad_sys_path (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 118, in test_bad_sys_path self.assertRaises(RuntimeError, coverage, test_support.TESTFN) File "C:\buildbot\work\2.5.heller-windows\build\lib\unittest.py", line 320, in failUnlessRaises callableObj(*args, **kwargs) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_line_numbers (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 98, in test_line_numbers self.run_test(g, events, self.new_profiler(lineevents=1)) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\2.5.heller-windows\build\lib\hotshot\__init__.py", line 13, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_start_stop (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 104, in test_start_stop profiler = self.new_profiler() File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\2.5.heller-windows\build\lib\hotshot\__init__.py", line 13, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_list (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 262, in test_builtin_list f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_map (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 408, in test_builtin_map f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_max_min (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 371, in test_builtin_max_min f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_tuple (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 295, in test_builtin_tuple f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_zip (test.test_iter.TestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 455, in test_builtin_zip f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_countOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 617, in test_countOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_in_and_not_in (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 580, in test_in_and_not_in f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_indexOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 651, in test_indexOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_iter_file (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 232, in test_iter_file f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode_join_endcase (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 534, in test_unicode_join_endcase f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unpack_iter (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 760, in test_unpack_iter f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_writelines (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_iter.py", line 677, in test_writelines f = file(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bool (test.test_marshal.IntTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 54, in test_bool marshal.dump(b, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== 
ERROR: test_ints (test.test_marshal.IntTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 19, in test_ints marshal.dump(expected, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_floats (test.test_marshal.FloatTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 70, in test_floats marshal.dump(f, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_buffer (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 135, in test_buffer marshal.dump(b, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_string (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 124, in test_string marshal.dump(s, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 113, in test_unicode marshal.dump(s, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_dict (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 164, in test_dict marshal.dump(self.d, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_list (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 173, in test_list marshal.dump(lst, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_sets (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 194, in test_sets marshal.dump(t, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tuple (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_marshal.py", line 182, in test_tuple marshal.dump(t, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 106, in testAbsoluteArcnames zipfp.write(TESTFN, "/absolute") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 561, in write self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 116, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 102, in testDeflated self.zipTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 29, in zipTest zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 561, in write self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 116, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 97, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 29, in zipTest zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 561, in write self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions 
====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 116, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 343, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 249, in testWritePyfile zipfp.writepy(fn) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 793, in writepy self.write(fname, arcname) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 561, in write self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 297, in testWritePythonDirectory zipfp.writepy(TESTFN2) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 785, in writepy self.write(fname, arcname) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 561, in write self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipfile.py", line 274, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 756, in writepy self.write(fname, arcname) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 561, in write self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") 
LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMTime (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBoth (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeepPackage (test.test_zipimport.UncompressedZipImportTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestFile (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 294, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 279, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 305, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 279, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 615, in writestr self._writecheck(zinfo) File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 533, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 91, in doTest ["__dummy__"]) ImportError: No module named ziptestmodule ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 
274, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 231, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 268, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 261, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 254, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.UncompressedZipImportTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 328, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' 
====================================================================== ERROR: testBadMTime (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File 
"C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 294, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 279, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 305, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 279, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 274, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 
'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 231, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 268, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 261, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 254, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in 
__init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 328, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\2.5.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\2.5.heller-windows\build\lib\zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\2.5.heller-windows\\build\\PCbuild\\junk95142.zip' sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 00:15:47 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 25 Mar 2008 00:15:47 +0100 (CET) Subject: [Python-checkins] r61859 - sandbox/trunk/import_in_py/docs/__import__.pdf sandbox/trunk/import_in_py/docs/flowchart.graffle Message-ID: <20080324231547.8C94A1E401F@bag.python.org> Author: brett.cannon Date: Tue Mar 25 00:15:47 2008 New Revision: 61859 Modified: sandbox/trunk/import_in_py/docs/__import__.pdf 
sandbox/trunk/import_in_py/docs/flowchart.graffle Log: Fix an error where the existence of the parent module's __path__ was not what was being checked for when determining whether to use sys.path or not. Modified: sandbox/trunk/import_in_py/docs/__import__.pdf ============================================================================== Binary files. No diff available. Modified: sandbox/trunk/import_in_py/docs/flowchart.graffle ============================================================================== Binary files. No diff available. From python-checkins at python.org Tue Mar 25 00:18:02 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 25 Mar 2008 00:18:02 +0100 (CET) Subject: [Python-checkins] r61860 - sandbox/trunk/import_in_py/docs/__import__.pdf sandbox/trunk/import_in_py/docs/flowchart.graffle Message-ID: <20080324231802.E955C1E401F@bag.python.org> Author: brett.cannon Date: Tue Mar 25 00:18:02 2008 New Revision: 61860 Modified: sandbox/trunk/import_in_py/docs/__import__.pdf sandbox/trunk/import_in_py/docs/flowchart.graffle Log: Fix an error where sys.meta_path entries were being called with the caller's __path__ instead of the parent module's __path__. Modified: sandbox/trunk/import_in_py/docs/__import__.pdf ============================================================================== Binary files. No diff available. Modified: sandbox/trunk/import_in_py/docs/flowchart.graffle ============================================================================== Binary files. No diff available. From buildbot at python.org Tue Mar 25 00:39:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 24 Mar 2008 23:39:20 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080324233920.D67651E401F@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
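The two import_in_py fixes above (r61859 and r61860) both come down to which search path the import machinery should hand a finder: for a submodule it is the parent package's __path__, not the importing module's, and only a top-level import (a path of None) falls back to sys.path. A minimal, hypothetical PEP 302-style sketch of that convention, not code from the sandbox itself:

    import sys

    class TracingFinder(object):
        # Illustrative finder: it only prints what it is given and then
        # declines, so the normal import machinery keeps working.
        def find_module(self, fullname, path=None):
            # 'path' is None for a top-level import (search sys.path);
            # for 'pkg.mod' it should be pkg.__path__, i.e. the parent
            # package's __path__, never the caller's.
            print 'find_module(%r, path=%r)' % (fullname, path)
            return None

    sys.meta_path.append(TracingFinder())
    # Importing a not-yet-loaded package typically shows two calls, e.g.
    # 'xml' with path=None and then 'xml.sax' with path=xml.__path__.
    import xml.sax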
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2746 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_tarfile ====================================================================== ERROR: test_stream_padding (test.test_tarfile.GzipStreamWriteTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_tarfile.py", line 666, in test_stream_padding data = fobj.read() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 212, in read self._read(readsize) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 284, in _read self._read_eof() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/gzip.py", line 304, in _read_eof hex(self.crc))) IOError: CRC check failed 0x71de97f9 != 0x271dde9aL sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 02:35:43 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 25 Mar 2008 02:35:43 +0100 (CET) Subject: [Python-checkins] r61861 - sandbox/trunk/import_in_py/NOTES sandbox/trunk/import_in_py/TODO Message-ID: <20080325013543.0D0031E401F@bag.python.org> Author: brett.cannon Date: Tue Mar 25 02:35:42 2008 New Revision: 61861 Added: sandbox/trunk/import_in_py/NOTES - copied unchanged from r61840, sandbox/trunk/import_in_py/TODO Removed: sandbox/trunk/import_in_py/TODO Log: Move TODO to NOTES. Deleted: /sandbox/trunk/import_in_py/TODO ============================================================================== --- /sandbox/trunk/import_in_py/TODO Tue Mar 25 02:35:42 2008 +++ (empty file) @@ -1,7 +0,0 @@ -* Move Py3K version over to _fileio._FileIO for file work. - + Can set as open() if desired (but probably not needed). - + Make sure that reading/writing to file is done in bytes. - + Be aware that encoding/decoding from file not guaranteed to work as - the encodings module might not be available yet. -* For Py3K, always have __file__ point to the .py file if it exists. -* PEP 366. \ No newline at end of file From python-checkins at python.org Tue Mar 25 02:47:32 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 25 Mar 2008 02:47:32 +0100 (CET) Subject: [Python-checkins] r61862 - sandbox/trunk/import_in_py/NOTES Message-ID: <20080325014732.501921E401F@bag.python.org> Author: brett.cannon Date: Tue Mar 25 02:47:32 2008 New Revision: 61862 Modified: sandbox/trunk/import_in_py/NOTES Log: Add some more todos and add an "Ideas" section. Modified: sandbox/trunk/import_in_py/NOTES ============================================================================== --- sandbox/trunk/import_in_py/NOTES (original) +++ sandbox/trunk/import_in_py/NOTES Tue Mar 25 02:47:32 2008 @@ -1,7 +1,24 @@ +to do +///// +[assume all work is for Py3K] + * Move Py3K version over to _fileio._FileIO for file work. + Can set as open() if desired (but probably not needed). + Make sure that reading/writing to file is done in bytes. + Be aware that encoding/decoding from file not guaranteed to work as the encodings module might not be available yet. -* For Py3K, always have __file__ point to the .py file if it exists. -* PEP 366. 
\ No newline at end of file + + Will compile() do the right thing with bytes, decoding, etc.? +* Always have __file__ point to the .py file if it exists. +* PEP 366. +* Rename _importlib to importlib. + + Add a fix_importlib() method that does what importlib currently does. + + Will allow for imp to become _importlib and be cleaned up. + +Ideas +///// + +* Add init_import(). + + Set sys.meta_path (for when all importers are on there). + + Handle using any C implementations from (the future) _importlib.c over + the pure Python implementation. + - Make sure there is a some way to test both versions of anything. \ No newline at end of file From python-checkins at python.org Tue Mar 25 05:17:39 2008 From: python-checkins at python.org (neal.norwitz) Date: Tue, 25 Mar 2008 05:17:39 +0100 (CET) Subject: [Python-checkins] r61863 - python/trunk/Lib/test/test_deque.py python/trunk/Lib/test/test_uu.py Message-ID: <20080325041739.0C8D61E4013@bag.python.org> Author: neal.norwitz Date: Tue Mar 25 05:17:38 2008 New Revision: 61863 Modified: python/trunk/Lib/test/test_deque.py python/trunk/Lib/test/test_uu.py Log: Fix a bunch of UnboundLocalErrors when the tests fail. Modified: python/trunk/Lib/test/test_deque.py ============================================================================== --- python/trunk/Lib/test/test_deque.py (original) +++ python/trunk/Lib/test/test_deque.py Tue Mar 25 05:17:38 2008 @@ -63,27 +63,27 @@ self.assertEqual(list(d), range(7, 10)) d = deque(xrange(200), maxlen=10) d.append(d) + fo = open(test_support.TESTFN, "wb") try: - fo = open(test_support.TESTFN, "wb") print >> fo, d, fo.close() fo = open(test_support.TESTFN, "rb") self.assertEqual(fo.read(), repr(d)) finally: fo.close() - os.remove(test_support.TESTFN) + test_support.unlink(test_support.TESTFN) d = deque(range(10), maxlen=None) self.assertEqual(repr(d), 'deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])') + fo = open(test_support.TESTFN, "wb") try: - fo = open(test_support.TESTFN, "wb") print >> fo, d, fo.close() fo = open(test_support.TESTFN, "rb") self.assertEqual(fo.read(), repr(d)) finally: fo.close() - os.remove(test_support.TESTFN) + test_support.unlink(test_support.TESTFN) def test_comparisons(self): d = deque('xabc'); d.popleft() @@ -281,15 +281,15 @@ def test_print(self): d = deque(xrange(200)) d.append(d) + fo = open(test_support.TESTFN, "wb") try: - fo = open(test_support.TESTFN, "wb") print >> fo, d, fo.close() fo = open(test_support.TESTFN, "rb") self.assertEqual(fo.read(), repr(d)) finally: fo.close() - os.remove(test_support.TESTFN) + test_support.unlink(test_support.TESTFN) def test_init(self): self.assertRaises(TypeError, deque, 'abc', 2, 3); Modified: python/trunk/Lib/test/test_uu.py ============================================================================== --- python/trunk/Lib/test/test_uu.py (original) +++ python/trunk/Lib/test/test_uu.py Tue Mar 25 05:17:38 2008 @@ -112,6 +112,7 @@ del self.tmpout def test_encode(self): + fin = fout = None try: fin = open(self.tmpin, 'wb') fin.write(plaintext) @@ -140,6 +141,7 @@ self._kill(fout) def test_decode(self): + f = None try: f = open(self.tmpin, 'w') f.write(encodedtextwrapped % (0644, self.tmpout)) @@ -159,6 +161,7 @@ def test_decodetwice(self): # Verify that decode() will refuse to overwrite an existing file + f = None try: f = cStringIO.StringIO(encodedtextwrapped % (0644, self.tmpout)) From python-checkins at python.org Tue Mar 25 05:18:19 2008 From: python-checkins at python.org (neal.norwitz) Date: Tue, 25 Mar 2008 05:18:19 +0100 (CET) Subject: 
[Python-checkins] r61864 - in python/trunk: Objects/unicodeobject.c PC/_winreg.c PC/w9xpopen.c Python/peephole.c Message-ID: <20080325041819.6958E1E4013@bag.python.org> Author: neal.norwitz Date: Tue Mar 25 05:18:18 2008 New Revision: 61864 Modified: python/trunk/Objects/unicodeobject.c python/trunk/PC/_winreg.c python/trunk/PC/w9xpopen.c python/trunk/Python/peephole.c Log: Try to fix a bunch of compiler warnings on Win64. Modified: python/trunk/Objects/unicodeobject.c ============================================================================== --- python/trunk/Objects/unicodeobject.c (original) +++ python/trunk/Objects/unicodeobject.c Tue Mar 25 05:18:18 2008 @@ -7261,7 +7261,7 @@ done = str->length; } while (done < nchars) { - int n = (done <= nchars-done) ? done : nchars-done; + Py_ssize_t n = (done <= nchars-done) ? done : nchars-done; Py_UNICODE_COPY(p+done, p, n); done += n; } Modified: python/trunk/PC/_winreg.c ============================================================================== --- python/trunk/PC/_winreg.c (original) +++ python/trunk/PC/_winreg.c Tue Mar 25 05:18:18 2008 @@ -715,7 +715,7 @@ static BOOL Py2Reg(PyObject *value, DWORD typ, BYTE **retDataBuf, DWORD *retDataSize) { - int i,j; + Py_ssize_t i,j; switch (typ) { case REG_DWORD: if (value != Py_None && !PyInt_Check(value)) Modified: python/trunk/PC/w9xpopen.c ============================================================================== --- python/trunk/PC/w9xpopen.c (original) +++ python/trunk/PC/w9xpopen.c Tue Mar 25 05:18:18 2008 @@ -30,7 +30,7 @@ STARTUPINFO si; PROCESS_INFORMATION pi; DWORD exit_code=0; - int cmdlen = 0; + size_t cmdlen = 0; int i; char *cmdline, *cmdlinefill; Modified: python/trunk/Python/peephole.c ============================================================================== --- python/trunk/Python/peephole.c (original) +++ python/trunk/Python/peephole.c Tue Mar 25 05:18:18 2008 @@ -29,7 +29,7 @@ Also works for BUILD_LIST when followed by an "in" or "not in" test. */ static int -tuple_of_constants(unsigned char *codestr, int n, PyObject *consts) +tuple_of_constants(unsigned char *codestr, Py_ssize_t n, PyObject *consts) { PyObject *newconst, *constant; Py_ssize_t i, arg, len_consts; @@ -228,7 +228,7 @@ } static unsigned int * -markblocks(unsigned char *code, int len) +markblocks(unsigned char *code, Py_ssize_t len) { unsigned int *blocks = (unsigned int *)PyMem_Malloc(len*sizeof(int)); int i,j, opcode, blockcnt = 0; From python-checkins at python.org Tue Mar 25 06:38:43 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 25 Mar 2008 06:38:43 +0100 (CET) Subject: [Python-checkins] r61865 - tracker/instances/jobs/schema.py Message-ID: <20080325053843.888731E4013@bag.python.org> Author: martin.v.loewis Date: Tue Mar 25 06:38:43 2008 New Revision: 61865 Modified: tracker/instances/jobs/schema.py Log: Remove view permission for the anonymous user. 
Modified: tracker/instances/jobs/schema.py ============================================================================== --- tracker/instances/jobs/schema.py (original) +++ tracker/instances/jobs/schema.py Tue Mar 25 06:38:43 2008 @@ -302,14 +302,6 @@ for cl in 'status',: db.security.addPermissionToRole('Anonymous', 'View', cl) -# Allow users to see the realname -p = db.security.addPermission(name = 'View', - description = 'View real name', - klass = 'user', - properties = ('realname',)) -db.security.addPermissionToRole('Anonymous', p) - - # [OPTIONAL] # Allow anonymous users access to create or edit "offer" items (and the # related file and message items) From python-checkins at python.org Tue Mar 25 06:47:01 2008 From: python-checkins at python.org (martin.v.loewis) Date: Tue, 25 Mar 2008 06:47:01 +0100 (CET) Subject: [Python-checkins] r61866 - tracker/instances/jobs/schema.py Message-ID: <20080325054701.5EE061E4013@bag.python.org> Author: martin.v.loewis Date: Tue Mar 25 06:47:00 2008 New Revision: 61866 Modified: tracker/instances/jobs/schema.py Log: Prevent users Modified: tracker/instances/jobs/schema.py ============================================================================== --- tracker/instances/jobs/schema.py (original) +++ tracker/instances/jobs/schema.py Tue Mar 25 06:47:00 2008 @@ -223,7 +223,7 @@ # May users view other user information? Comment these lines out # if you don't want them to -db.security.addPermissionToRole('User', 'View', 'user') +#db.security.addPermissionToRole('User', 'View', 'user') db.security.addPermissionToRole('Editor', 'View', 'user') db.security.addPermissionToRole('Coordinator', 'View', 'user') From python-checkins at python.org Tue Mar 25 07:12:46 2008 From: python-checkins at python.org (gregory.p.smith) Date: Tue, 25 Mar 2008 07:12:46 +0100 (CET) Subject: [Python-checkins] r61867 - in python/trunk/Lib: tarfile.py test/test_zlib.py Message-ID: <20080325061246.6E7761E4031@bag.python.org> Author: gregory.p.smith Date: Tue Mar 25 07:12:45 2008 New Revision: 61867 Modified: python/trunk/Lib/tarfile.py python/trunk/Lib/test/test_zlib.py Log: A stab in the dark attempt to fix the alpha/tru64 buildbot problem and add more test coverage of valid inputs to zlib.crc32. Modified: python/trunk/Lib/tarfile.py ============================================================================== --- python/trunk/Lib/tarfile.py (original) +++ python/trunk/Lib/tarfile.py Tue Mar 25 07:12:45 2008 @@ -420,7 +420,7 @@ except ImportError: raise CompressionError("zlib module is not available") self.zlib = zlib - self.crc = zlib.crc32("") + self.crc = zlib.crc32("") & 0xffffffffL if mode == "r": self._init_read_gz() else: @@ -458,7 +458,7 @@ """Write string s to the stream. """ if self.comptype == "gz": - self.crc = self.zlib.crc32(s, self.crc) + self.crc = self.zlib.crc32(s, self.crc) & 0xffffffffL self.pos += len(s) if self.comptype != "tar": s = self.cmp.compress(s) Modified: python/trunk/Lib/test/test_zlib.py ============================================================================== --- python/trunk/Lib/test/test_zlib.py (original) +++ python/trunk/Lib/test/test_zlib.py Tue Mar 25 07:12:45 2008 @@ -53,6 +53,15 @@ self.assertEqual(binascii.crc32(foo), zlib.crc32(foo)) self.assertEqual(binascii.crc32('spam'), zlib.crc32('spam')) + def test_negative_crc_iv_input(self): + # The range of valid input values for the crc state should be + # -2**31 through 2**32-1 to allow inputs artifically constrained + # to a signed 32-bit integer. 
+ self.assertEqual(zlib.crc32('ham', -1), zlib.crc32('ham', 0xffffffffL)) + self.assertEqual(zlib.crc32('spam', -3141593), + zlib.crc32('spam', 0xffd01027L)) + self.assertEqual(zlib.crc32('spam', -(2**31)), + zlib.crc32('spam', (2**31))) class ExceptionTestCase(unittest.TestCase): From python-checkins at python.org Tue Mar 25 07:35:10 2008 From: python-checkins at python.org (neal.norwitz) Date: Tue, 25 Mar 2008 07:35:10 +0100 (CET) Subject: [Python-checkins] r61869 - python/trunk/Lib/test/test_set.py Message-ID: <20080325063510.60CE71E4013@bag.python.org> Author: neal.norwitz Date: Tue Mar 25 07:35:10 2008 New Revision: 61869 Modified: python/trunk/Lib/test/test_set.py Log: Don't try to close a non-open file. Don't let file removal cause the test to fail. Modified: python/trunk/Lib/test/test_set.py ============================================================================== --- python/trunk/Lib/test/test_set.py (original) +++ python/trunk/Lib/test/test_set.py Tue Mar 25 07:35:10 2008 @@ -283,15 +283,15 @@ w = ReprWrapper() s = self.thetype([w]) w.value = s + fo = open(test_support.TESTFN, "wb") try: - fo = open(test_support.TESTFN, "wb") print >> fo, s, fo.close() fo = open(test_support.TESTFN, "rb") self.assertEqual(fo.read(), repr(s)) finally: fo.close() - os.remove(test_support.TESTFN) + test_support.unlink(test_support.TESTFN) def test_do_not_rehash_dict_keys(self): n = 10 @@ -626,15 +626,15 @@ self.assertEqual(repr(self.set), self.repr) def test_print(self): + fo = open(test_support.TESTFN, "wb") try: - fo = open(test_support.TESTFN, "wb") print >> fo, self.set, fo.close() fo = open(test_support.TESTFN, "rb") self.assertEqual(fo.read(), repr(self.set)) finally: fo.close() - os.remove(test_support.TESTFN) + test_support.unlink(test_support.TESTFN) def test_length(self): self.assertEqual(len(self.set), self.length) From buildbot at python.org Tue Mar 25 07:36:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 06:36:41 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080325063641.E5DA01E4013@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/228 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: make: *** [buildbottest] Alarm clock sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 08:00:40 2008 From: python-checkins at python.org (neal.norwitz) Date: Tue, 25 Mar 2008 08:00:40 +0100 (CET) Subject: [Python-checkins] r61870 - python/trunk/Lib/test/test_signal.py Message-ID: <20080325070040.434431E401F@bag.python.org> Author: neal.norwitz Date: Tue Mar 25 08:00:39 2008 New Revision: 61870 Modified: python/trunk/Lib/test/test_signal.py Log: Try to get this test to be more stable: * disable gc during the test run because we are spawning objects and there was an exception when calling Popen.__del__ * Always set an alarm handler so the process doesn't exit if the test fails (should probably add assertions on the value of hndl_called in more places) * Using a negative time causes Linux to treat it as zero, so disable that test. 
Modified: python/trunk/Lib/test/test_signal.py ============================================================================== --- python/trunk/Lib/test/test_signal.py (original) +++ python/trunk/Lib/test/test_signal.py Tue Mar 25 08:00:39 2008 @@ -1,6 +1,7 @@ import unittest from test import test_support from contextlib import closing, nested +import gc import pickle import select import signal @@ -30,6 +31,14 @@ class InterProcessSignalTests(unittest.TestCase): MAX_DURATION = 20 # Entire test should last at most 20 sec. + def setUp(self): + self.using_gc = gc.isenabled() + gc.disable() + + def tearDown(self): + if self.using_gc: + gc.enable() + def handlerA(self, *args): self.a_called = True if test_support.verbose: @@ -263,8 +272,10 @@ self.hndl_called = False self.hndl_count = 0 self.itimer = None + self.old_alarm = signal.signal(signal.SIGALRM, self.sig_alrm) def tearDown(self): + signal.signal(signal.SIGALRM, self.old_alarm) if self.itimer is not None: # test_itimer_exc doesn't change this attr # just ensure that itimer is stopped signal.setitimer(self.itimer, 0) @@ -303,13 +314,13 @@ # XXX I'm assuming -1 is an invalid itimer, but maybe some platform # defines it ? self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0) - # negative time - self.assertRaises(signal.ItimerError, signal.setitimer, - signal.ITIMER_REAL, -1) + # Negative times are treated as zero on some platforms. + if 0: + self.assertRaises(signal.ItimerError, + signal.setitimer, signal.ITIMER_REAL, -1) def test_itimer_real(self): self.itimer = signal.ITIMER_REAL - signal.signal(signal.SIGALRM, self.sig_alrm) signal.setitimer(self.itimer, 1.0) if test_support.verbose: print("\ncall pause()...") From python-checkins at python.org Tue Mar 25 08:20:16 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 08:20:16 +0100 (CET) Subject: [Python-checkins] r61871 - python/trunk/Doc/library/functions.rst Message-ID: <20080325072016.0B3F21E4013@bag.python.org> Author: georg.brandl Date: Tue Mar 25 08:20:15 2008 New Revision: 61871 Modified: python/trunk/Doc/library/functions.rst Log: #868845: document <...> reprs. Modified: python/trunk/Doc/library/functions.rst ============================================================================== --- python/trunk/Doc/library/functions.rst (original) +++ python/trunk/Doc/library/functions.rst Tue Mar 25 08:20:15 2008 @@ -1004,11 +1004,15 @@ .. function:: repr(object) - Return a string containing a printable representation of an object. This is the - same value yielded by conversions (reverse quotes). It is sometimes useful to be - able to access this operation as an ordinary function. For many types, this - function makes an attempt to return a string that would yield an object with the - same value when passed to :func:`eval`. + Return a string containing a printable representation of an object. This is + the same value yielded by conversions (reverse quotes). It is sometimes + useful to be able to access this operation as an ordinary function. For many + types, this function makes an attempt to return a string that would yield an + object with the same value when passed to :func:`eval`, otherwise the + representation is a string enclosed in angle brackets that contains the name + of the type of the object together with additional information often + including the name and address of the object. A class can control what this + function returns for its instances by defining a :meth:`__repr__` method. .. 
function:: reversed(seq) From buildbot at python.org Tue Mar 25 08:22:52 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 07:22:52 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080325072252.2DA291E4024@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2748 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 4 tests failed: test_asynchat test_smtplib test_socket test_zlib ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place ====================================================================== FAIL: test_negative_crc_iv_input (test.test_zlib.ChecksumTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_zlib.py", line 60, in test_negative_crc_iv_input self.assertEqual(zlib.crc32('ham', -1), zlib.crc32('ham', 0xffffffffL)) AssertionError: 153214308 != -153214364 sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 08:31:28 2008 From: python-checkins at python.org (gregory.p.smith) Date: Tue, 25 Mar 2008 08:31:28 +0100 (CET) Subject: [Python-checkins] r61874 - python/trunk/Modules/binascii.c python/trunk/Modules/zlibmodule.c Message-ID: <20080325073128.97B631E4013@bag.python.org> Author: gregory.p.smith Date: Tue Mar 25 08:31:28 2008 New Revision: 61874 Modified: python/trunk/Modules/binascii.c python/trunk/Modules/zlibmodule.c Log: Use a 32-bit unsigned int here, a long is not needed. 
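[Editorial aside, not part of the archived messages] The crc32 traffic here (r61867, the Tru64 failure above, and r61874/r61875 below) all stems from zlib.crc32()/binascii.crc32() returning a signed 32-bit value on some platforms and an unsigned one on others. A minimal sketch of the masking idiom the fixes rely on, assuming a 2.6 trunk build at or after these revisions (on 3.x the inputs would have to be bytes):

    import zlib

    def crc32_unsigned(data, value=0):
        # Mask to the low 32 bits so the result is the same whether the
        # platform's crc32 handed back a signed or an unsigned number.
        return zlib.crc32(data, value) & 0xffffffff

    # Running totals: mask before feeding the value back in, which is
    # exactly what r61867 does for the gzip checksum in tarfile.py.
    crc = 0
    for chunk in ("spam", "eggs"):
        crc = zlib.crc32(chunk, crc) & 0xffffffff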
Modified: python/trunk/Modules/binascii.c ============================================================================== --- python/trunk/Modules/binascii.c (original) +++ python/trunk/Modules/binascii.c Tue Mar 25 08:31:28 2008 @@ -756,12 +756,12 @@ static PyObject * binascii_crc32(PyObject *self, PyObject *args) { - uLong crc32val = 0; /* crc32(0L, Z_NULL, 0) */ + unsigned int crc32val = 0; /* crc32(0L, Z_NULL, 0) */ Byte *buf; Py_ssize_t len; int signed_val; - if (!PyArg_ParseTuple(args, "s#|k:crc32", &buf, &len, &crc32val)) + if (!PyArg_ParseTuple(args, "s#|I:crc32", &buf, &len, &crc32val)) return NULL; /* In Python 2.x we return a signed integer regardless of native platform * long size (the 32bit unsigned long is treated as 32-bit signed and sign Modified: python/trunk/Modules/zlibmodule.c ============================================================================== --- python/trunk/Modules/zlibmodule.c (original) +++ python/trunk/Modules/zlibmodule.c Tue Mar 25 08:31:28 2008 @@ -889,11 +889,11 @@ static PyObject * PyZlib_adler32(PyObject *self, PyObject *args) { - uLong adler32val = 1; /* adler32(0L, Z_NULL, 0) */ + unsigned int adler32val = 1; /* adler32(0L, Z_NULL, 0) */ Byte *buf; int len, signed_val; - if (!PyArg_ParseTuple(args, "s#|k:adler32", &buf, &len, &adler32val)) + if (!PyArg_ParseTuple(args, "s#|I:adler32", &buf, &len, &adler32val)) return NULL; /* In Python 2.x we return a signed integer regardless of native platform * long size (the 32bit unsigned long is treated as 32-bit signed and sign @@ -912,11 +912,11 @@ static PyObject * PyZlib_crc32(PyObject *self, PyObject *args) { - uLong crc32val = 0; /* crc32(0L, Z_NULL, 0) */ + unsigned int crc32val = 0; /* crc32(0L, Z_NULL, 0) */ Byte *buf; int len, signed_val; - if (!PyArg_ParseTuple(args, "s#|k:crc32", &buf, &len, &crc32val)) + if (!PyArg_ParseTuple(args, "s#|I:crc32", &buf, &len, &crc32val)) return NULL; /* In Python 2.x we return a signed integer regardless of native platform * long size (the 32bit unsigned long is treated as 32-bit signed and sign From buildbot at python.org Tue Mar 25 08:34:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 07:34:47 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080325073447.AD0901E4013@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/697 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_urllibnet ====================================================================== ERROR: test_fileno (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_urllibnet.py", line 126, in test_fileno self.assert_(FILE.read(), "reading from file created using fd " File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/io.py", line 1470, in read decoder.decode(self.buffer.read(), final=True)) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/io.py", line 1075, in decode output = self.decoder.decode(input, final=final) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12578: ordinal not in range(128) ====================================================================== ERROR: test_basic (test.test_urllibnet.urlretrieveNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_urllibnet.py", line 157, in test_basic self.assert_(FILE.read(), "reading from the file location returned" File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/io.py", line 1470, in read decoder.decode(self.buffer.read(), final=True)) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/io.py", line 1075, in decode output = self.decoder.decode(input, final=final) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12578: ordinal not in range(128) ====================================================================== ERROR: test_specified_path (test.test_urllibnet.urlretrieveNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_urllibnet.py", line 171, in test_specified_path self.assert_(FILE.read(), "reading from temporary file failed") File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/io.py", line 1470, in read decoder.decode(self.buffer.read(), final=True)) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/io.py", line 1075, in decode output = self.decoder.decode(input, final=final) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12578: ordinal not in range(128) make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 08:46:05 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 07:46:05 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 3.0 Message-ID: <20080325074605.E338E1E4025@bag.python.org> The Buildbot has detected 
a new failure of x86 W2k8 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%203.0/builds/113 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 08:46:07 2008 From: python-checkins at python.org (gregory.p.smith) Date: Tue, 25 Mar 2008 08:46:07 +0100 (CET) Subject: [Python-checkins] r61875 - python/trunk/Modules/binascii.c Message-ID: <20080325074607.7908A1E4023@bag.python.org> Author: gregory.p.smith Date: Tue Mar 25 08:46:07 2008 New Revision: 61875 Modified: python/trunk/Modules/binascii.c Log: On platforms without zlib, make this do the right thing and return the python 2.x signed value. Also, don't waste space on a table full of unsigned longs when all it needs are unsigned ints (incase anyone builds this without zlib on a 64-bit unix for some strange reason). tested by forcing it to compile this version on both 32-bit and 64-bit linux. Modified: python/trunk/Modules/binascii.c ============================================================================== --- python/trunk/Modules/binascii.c (original) +++ python/trunk/Modules/binascii.c Tue Mar 25 08:46:07 2008 @@ -834,91 +834,78 @@ using byte-swap instructions. ********************************************************************/ -static unsigned long crc_32_tab[256] = { -0x00000000UL, 0x77073096UL, 0xee0e612cUL, 0x990951baUL, 0x076dc419UL, -0x706af48fUL, 0xe963a535UL, 0x9e6495a3UL, 0x0edb8832UL, 0x79dcb8a4UL, -0xe0d5e91eUL, 0x97d2d988UL, 0x09b64c2bUL, 0x7eb17cbdUL, 0xe7b82d07UL, -0x90bf1d91UL, 0x1db71064UL, 0x6ab020f2UL, 0xf3b97148UL, 0x84be41deUL, -0x1adad47dUL, 0x6ddde4ebUL, 0xf4d4b551UL, 0x83d385c7UL, 0x136c9856UL, -0x646ba8c0UL, 0xfd62f97aUL, 0x8a65c9ecUL, 0x14015c4fUL, 0x63066cd9UL, -0xfa0f3d63UL, 0x8d080df5UL, 0x3b6e20c8UL, 0x4c69105eUL, 0xd56041e4UL, -0xa2677172UL, 0x3c03e4d1UL, 0x4b04d447UL, 0xd20d85fdUL, 0xa50ab56bUL, -0x35b5a8faUL, 0x42b2986cUL, 0xdbbbc9d6UL, 0xacbcf940UL, 0x32d86ce3UL, -0x45df5c75UL, 0xdcd60dcfUL, 0xabd13d59UL, 0x26d930acUL, 0x51de003aUL, -0xc8d75180UL, 0xbfd06116UL, 0x21b4f4b5UL, 0x56b3c423UL, 0xcfba9599UL, -0xb8bda50fUL, 0x2802b89eUL, 0x5f058808UL, 0xc60cd9b2UL, 0xb10be924UL, -0x2f6f7c87UL, 0x58684c11UL, 0xc1611dabUL, 0xb6662d3dUL, 0x76dc4190UL, -0x01db7106UL, 0x98d220bcUL, 0xefd5102aUL, 0x71b18589UL, 0x06b6b51fUL, -0x9fbfe4a5UL, 0xe8b8d433UL, 0x7807c9a2UL, 0x0f00f934UL, 0x9609a88eUL, -0xe10e9818UL, 0x7f6a0dbbUL, 0x086d3d2dUL, 0x91646c97UL, 0xe6635c01UL, -0x6b6b51f4UL, 0x1c6c6162UL, 0x856530d8UL, 0xf262004eUL, 0x6c0695edUL, -0x1b01a57bUL, 0x8208f4c1UL, 0xf50fc457UL, 0x65b0d9c6UL, 0x12b7e950UL, -0x8bbeb8eaUL, 0xfcb9887cUL, 0x62dd1ddfUL, 0x15da2d49UL, 0x8cd37cf3UL, -0xfbd44c65UL, 0x4db26158UL, 0x3ab551ceUL, 0xa3bc0074UL, 0xd4bb30e2UL, -0x4adfa541UL, 0x3dd895d7UL, 0xa4d1c46dUL, 0xd3d6f4fbUL, 0x4369e96aUL, -0x346ed9fcUL, 0xad678846UL, 0xda60b8d0UL, 0x44042d73UL, 0x33031de5UL, -0xaa0a4c5fUL, 0xdd0d7cc9UL, 0x5005713cUL, 0x270241aaUL, 0xbe0b1010UL, -0xc90c2086UL, 0x5768b525UL, 0x206f85b3UL, 0xb966d409UL, 0xce61e49fUL, -0x5edef90eUL, 0x29d9c998UL, 0xb0d09822UL, 0xc7d7a8b4UL, 0x59b33d17UL, -0x2eb40d81UL, 0xb7bd5c3bUL, 0xc0ba6cadUL, 0xedb88320UL, 0x9abfb3b6UL, -0x03b6e20cUL, 0x74b1d29aUL, 0xead54739UL, 0x9dd277afUL, 0x04db2615UL, -0x73dc1683UL, 0xe3630b12UL, 0x94643b84UL, 0x0d6d6a3eUL, 0x7a6a5aa8UL, -0xe40ecf0bUL, 0x9309ff9dUL, 0x0a00ae27UL, 
0x7d079eb1UL, 0xf00f9344UL, -0x8708a3d2UL, 0x1e01f268UL, 0x6906c2feUL, 0xf762575dUL, 0x806567cbUL, -0x196c3671UL, 0x6e6b06e7UL, 0xfed41b76UL, 0x89d32be0UL, 0x10da7a5aUL, -0x67dd4accUL, 0xf9b9df6fUL, 0x8ebeeff9UL, 0x17b7be43UL, 0x60b08ed5UL, -0xd6d6a3e8UL, 0xa1d1937eUL, 0x38d8c2c4UL, 0x4fdff252UL, 0xd1bb67f1UL, -0xa6bc5767UL, 0x3fb506ddUL, 0x48b2364bUL, 0xd80d2bdaUL, 0xaf0a1b4cUL, -0x36034af6UL, 0x41047a60UL, 0xdf60efc3UL, 0xa867df55UL, 0x316e8eefUL, -0x4669be79UL, 0xcb61b38cUL, 0xbc66831aUL, 0x256fd2a0UL, 0x5268e236UL, -0xcc0c7795UL, 0xbb0b4703UL, 0x220216b9UL, 0x5505262fUL, 0xc5ba3bbeUL, -0xb2bd0b28UL, 0x2bb45a92UL, 0x5cb36a04UL, 0xc2d7ffa7UL, 0xb5d0cf31UL, -0x2cd99e8bUL, 0x5bdeae1dUL, 0x9b64c2b0UL, 0xec63f226UL, 0x756aa39cUL, -0x026d930aUL, 0x9c0906a9UL, 0xeb0e363fUL, 0x72076785UL, 0x05005713UL, -0x95bf4a82UL, 0xe2b87a14UL, 0x7bb12baeUL, 0x0cb61b38UL, 0x92d28e9bUL, -0xe5d5be0dUL, 0x7cdcefb7UL, 0x0bdbdf21UL, 0x86d3d2d4UL, 0xf1d4e242UL, -0x68ddb3f8UL, 0x1fda836eUL, 0x81be16cdUL, 0xf6b9265bUL, 0x6fb077e1UL, -0x18b74777UL, 0x88085ae6UL, 0xff0f6a70UL, 0x66063bcaUL, 0x11010b5cUL, -0x8f659effUL, 0xf862ae69UL, 0x616bffd3UL, 0x166ccf45UL, 0xa00ae278UL, -0xd70dd2eeUL, 0x4e048354UL, 0x3903b3c2UL, 0xa7672661UL, 0xd06016f7UL, -0x4969474dUL, 0x3e6e77dbUL, 0xaed16a4aUL, 0xd9d65adcUL, 0x40df0b66UL, -0x37d83bf0UL, 0xa9bcae53UL, 0xdebb9ec5UL, 0x47b2cf7fUL, 0x30b5ffe9UL, -0xbdbdf21cUL, 0xcabac28aUL, 0x53b39330UL, 0x24b4a3a6UL, 0xbad03605UL, -0xcdd70693UL, 0x54de5729UL, 0x23d967bfUL, 0xb3667a2eUL, 0xc4614ab8UL, -0x5d681b02UL, 0x2a6f2b94UL, 0xb40bbe37UL, 0xc30c8ea1UL, 0x5a05df1bUL, -0x2d02ef8dUL +static unsigned int crc_32_tab[256] = { +0x00000000U, 0x77073096U, 0xee0e612cU, 0x990951baU, 0x076dc419U, +0x706af48fU, 0xe963a535U, 0x9e6495a3U, 0x0edb8832U, 0x79dcb8a4U, +0xe0d5e91eU, 0x97d2d988U, 0x09b64c2bU, 0x7eb17cbdU, 0xe7b82d07U, +0x90bf1d91U, 0x1db71064U, 0x6ab020f2U, 0xf3b97148U, 0x84be41deU, +0x1adad47dU, 0x6ddde4ebU, 0xf4d4b551U, 0x83d385c7U, 0x136c9856U, +0x646ba8c0U, 0xfd62f97aU, 0x8a65c9ecU, 0x14015c4fU, 0x63066cd9U, +0xfa0f3d63U, 0x8d080df5U, 0x3b6e20c8U, 0x4c69105eU, 0xd56041e4U, +0xa2677172U, 0x3c03e4d1U, 0x4b04d447U, 0xd20d85fdU, 0xa50ab56bU, +0x35b5a8faU, 0x42b2986cU, 0xdbbbc9d6U, 0xacbcf940U, 0x32d86ce3U, +0x45df5c75U, 0xdcd60dcfU, 0xabd13d59U, 0x26d930acU, 0x51de003aU, +0xc8d75180U, 0xbfd06116U, 0x21b4f4b5U, 0x56b3c423U, 0xcfba9599U, +0xb8bda50fU, 0x2802b89eU, 0x5f058808U, 0xc60cd9b2U, 0xb10be924U, +0x2f6f7c87U, 0x58684c11U, 0xc1611dabU, 0xb6662d3dU, 0x76dc4190U, +0x01db7106U, 0x98d220bcU, 0xefd5102aU, 0x71b18589U, 0x06b6b51fU, +0x9fbfe4a5U, 0xe8b8d433U, 0x7807c9a2U, 0x0f00f934U, 0x9609a88eU, +0xe10e9818U, 0x7f6a0dbbU, 0x086d3d2dU, 0x91646c97U, 0xe6635c01U, +0x6b6b51f4U, 0x1c6c6162U, 0x856530d8U, 0xf262004eU, 0x6c0695edU, +0x1b01a57bU, 0x8208f4c1U, 0xf50fc457U, 0x65b0d9c6U, 0x12b7e950U, +0x8bbeb8eaU, 0xfcb9887cU, 0x62dd1ddfU, 0x15da2d49U, 0x8cd37cf3U, +0xfbd44c65U, 0x4db26158U, 0x3ab551ceU, 0xa3bc0074U, 0xd4bb30e2U, +0x4adfa541U, 0x3dd895d7U, 0xa4d1c46dU, 0xd3d6f4fbU, 0x4369e96aU, +0x346ed9fcU, 0xad678846U, 0xda60b8d0U, 0x44042d73U, 0x33031de5U, +0xaa0a4c5fU, 0xdd0d7cc9U, 0x5005713cU, 0x270241aaU, 0xbe0b1010U, +0xc90c2086U, 0x5768b525U, 0x206f85b3U, 0xb966d409U, 0xce61e49fU, +0x5edef90eU, 0x29d9c998U, 0xb0d09822U, 0xc7d7a8b4U, 0x59b33d17U, +0x2eb40d81U, 0xb7bd5c3bU, 0xc0ba6cadU, 0xedb88320U, 0x9abfb3b6U, +0x03b6e20cU, 0x74b1d29aU, 0xead54739U, 0x9dd277afU, 0x04db2615U, +0x73dc1683U, 0xe3630b12U, 0x94643b84U, 0x0d6d6a3eU, 0x7a6a5aa8U, +0xe40ecf0bU, 0x9309ff9dU, 0x0a00ae27U, 0x7d079eb1U, 
0xf00f9344U, +0x8708a3d2U, 0x1e01f268U, 0x6906c2feU, 0xf762575dU, 0x806567cbU, +0x196c3671U, 0x6e6b06e7U, 0xfed41b76U, 0x89d32be0U, 0x10da7a5aU, +0x67dd4accU, 0xf9b9df6fU, 0x8ebeeff9U, 0x17b7be43U, 0x60b08ed5U, +0xd6d6a3e8U, 0xa1d1937eU, 0x38d8c2c4U, 0x4fdff252U, 0xd1bb67f1U, +0xa6bc5767U, 0x3fb506ddU, 0x48b2364bU, 0xd80d2bdaU, 0xaf0a1b4cU, +0x36034af6U, 0x41047a60U, 0xdf60efc3U, 0xa867df55U, 0x316e8eefU, +0x4669be79U, 0xcb61b38cU, 0xbc66831aU, 0x256fd2a0U, 0x5268e236U, +0xcc0c7795U, 0xbb0b4703U, 0x220216b9U, 0x5505262fU, 0xc5ba3bbeU, +0xb2bd0b28U, 0x2bb45a92U, 0x5cb36a04U, 0xc2d7ffa7U, 0xb5d0cf31U, +0x2cd99e8bU, 0x5bdeae1dU, 0x9b64c2b0U, 0xec63f226U, 0x756aa39cU, +0x026d930aU, 0x9c0906a9U, 0xeb0e363fU, 0x72076785U, 0x05005713U, +0x95bf4a82U, 0xe2b87a14U, 0x7bb12baeU, 0x0cb61b38U, 0x92d28e9bU, +0xe5d5be0dU, 0x7cdcefb7U, 0x0bdbdf21U, 0x86d3d2d4U, 0xf1d4e242U, +0x68ddb3f8U, 0x1fda836eU, 0x81be16cdU, 0xf6b9265bU, 0x6fb077e1U, +0x18b74777U, 0x88085ae6U, 0xff0f6a70U, 0x66063bcaU, 0x11010b5cU, +0x8f659effU, 0xf862ae69U, 0x616bffd3U, 0x166ccf45U, 0xa00ae278U, +0xd70dd2eeU, 0x4e048354U, 0x3903b3c2U, 0xa7672661U, 0xd06016f7U, +0x4969474dU, 0x3e6e77dbU, 0xaed16a4aU, 0xd9d65adcU, 0x40df0b66U, +0x37d83bf0U, 0xa9bcae53U, 0xdebb9ec5U, 0x47b2cf7fU, 0x30b5ffe9U, +0xbdbdf21cU, 0xcabac28aU, 0x53b39330U, 0x24b4a3a6U, 0xbad03605U, +0xcdd70693U, 0x54de5729U, 0x23d967bfU, 0xb3667a2eU, 0xc4614ab8U, +0x5d681b02U, 0x2a6f2b94U, 0xb40bbe37U, 0xc30c8ea1U, 0x5a05df1bU, +0x2d02ef8dU }; static PyObject * binascii_crc32(PyObject *self, PyObject *args) { /* By Jim Ahlstrom; All rights transferred to CNRI */ unsigned char *bin_data; - unsigned long crc = 0UL; /* initial value of CRC */ + unsigned int crc = 0U; /* initial value of CRC */ Py_ssize_t len; - long result; + int result; - if ( !PyArg_ParseTuple(args, "s#|k:crc32", &bin_data, &len, &crc) ) + if ( !PyArg_ParseTuple(args, "s#|I:crc32", &bin_data, &len, &crc) ) return NULL; crc = ~ crc; -#if SIZEOF_LONG > 4 - /* only want the trailing 32 bits */ - crc &= 0xFFFFFFFFUL; -#endif while (len--) - crc = crc_32_tab[(crc ^ *bin_data++) & 0xffUL] ^ (crc >> 8); + crc = crc_32_tab[(crc ^ *bin_data++) & 0xffU] ^ (crc >> 8); /* Note: (crc >> 8) MUST zero fill on left */ - result = (long)(crc ^ 0xFFFFFFFFUL); -#if SIZEOF_LONG > 4 - /* Extend the sign bit. This is one way to ensure the result is the - * same across platforms. The other way would be to return an - * unbounded unsigned long, but the evidence suggests that lots of - * code outside this treats the result as if it were a signed 4-byte - * integer. - */ - result |= -(result & (1L << 31)); -#endif + result = (int)(crc ^ 0xFFFFFFFFU); return PyInt_FromLong(result); } #endif /* USE_ZLIB_CRC32 */ From python-checkins at python.org Tue Mar 25 08:56:28 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 08:56:28 +0100 (CET) Subject: [Python-checkins] r61878 - in python/trunk: Lib/test/test_py3kwarn.py Misc/NEWS Objects/bufferobject.c Message-ID: <20080325075628.45A161E4013@bag.python.org> Author: georg.brandl Date: Tue Mar 25 08:56:27 2008 New Revision: 61878 Modified: python/trunk/Lib/test/test_py3kwarn.py python/trunk/Misc/NEWS python/trunk/Objects/bufferobject.c Log: #2355: py3k warning for buffer(). 
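[Editorial aside, not part of the archived messages] The r61878 change below only fires when the interpreter runs in Py3k-warning mode (the -3 switch, visible to Python code as sys.py3kwarning). A rough way to observe it, assuming a trunk build at or after this revision; the script name is arbitrary, and the exact message text differs between r61878 and the follow-up r61879:

    # Run with:  python -3 -W error::DeprecationWarning buffer_demo.py
    # The -W filter turns the deprecation warning into an exception we can catch.
    try:
        buffer("abc")            # 2.x buffer(); memoryview() is the 3.x replacement
    except DeprecationWarning, err:
        print "Py3k warning raised:", err
    else:
        print "no warning -- was the -3 switch passed?"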
Modified: python/trunk/Lib/test/test_py3kwarn.py ============================================================================== --- python/trunk/Lib/test/test_py3kwarn.py (original) +++ python/trunk/Lib/test/test_py3kwarn.py Tue Mar 25 08:56:27 2008 @@ -118,6 +118,11 @@ with catch_warning() as w: self.assertWarning(set(), w, expected) + def test_buffer(self): + expected = 'buffer will be removed in 3.x' + with catch_warning() as w: + self.assertWarning(buffer('a'), w, expected) + def test_main(): run_unittest(TestPy3KWarnings) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 25 08:56:27 2008 @@ -11,7 +11,9 @@ Core and builtins ----------------- - + +- Issue #2355: add Py3k warning for buffer(). + - Issue #1477: With narrow Unicode builds, the unicode escape sequence \Uxxxxxxxx did not accept values outside the Basic Multilingual Plane. This affected raw unicode literals and the 'raw-unicode-escape' codec. Now Modified: python/trunk/Objects/bufferobject.c ============================================================================== --- python/trunk/Objects/bufferobject.c (original) +++ python/trunk/Objects/bufferobject.c Tue Mar 25 08:56:27 2008 @@ -229,6 +229,11 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { + if (Py_Py3kWarningFlag && + PyErr_WarnEx(PyExc_DeprecationWarning, + "buffer will be removed in 3.x", 1) < 0) + return NULL; + PyObject *ob; Py_ssize_t offset = 0; Py_ssize_t size = Py_END_OF_BUFFER; From buildbot at python.org Tue Mar 25 09:00:35 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 08:00:35 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080325080035.8E1411E4013@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/243 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 09:02:58 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 08:02:58 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080325080301.6C99C1E4013@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/474 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_urllibnet make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 09:29:15 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 09:29:15 +0100 (CET) Subject: [Python-checkins] r61879 - in python/trunk: Doc/whatsnew/2.6.rst Lib/test/test_py3kwarn.py Objects/bufferobject.c Objects/cellobject.c Objects/codeobject.c Objects/dictobject.c Objects/exceptions.c Objects/listobject.c Objects/methodobject.c Objects/object.c Objects/typeobject.c Parser/tokenizer.c Python/ast.c Python/bltinmodule.c Python/ceval.c Python/sysmodule.c Message-ID: <20080325082915.2304A1E4013@bag.python.org> Author: georg.brandl Date: Tue Mar 25 09:29:14 2008 New Revision: 61879 Modified: python/trunk/Doc/whatsnew/2.6.rst python/trunk/Lib/test/test_py3kwarn.py python/trunk/Objects/bufferobject.c python/trunk/Objects/cellobject.c python/trunk/Objects/codeobject.c python/trunk/Objects/dictobject.c python/trunk/Objects/exceptions.c python/trunk/Objects/listobject.c python/trunk/Objects/methodobject.c python/trunk/Objects/object.c python/trunk/Objects/typeobject.c python/trunk/Parser/tokenizer.c python/trunk/Python/ast.c python/trunk/Python/bltinmodule.c python/trunk/Python/ceval.c python/trunk/Python/sysmodule.c Log: Make Py3k warnings consistent w.r.t. punctuation; also respect the EOL 80 limit and supply more alternatives in warning messages. Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Tue Mar 25 09:29:14 2008 @@ -93,7 +93,7 @@ about features that will be removed in Python 3.0. You can run code with this switch to see how much work will be necessary to port code to 3.0. The value of this switch is available -to Python code as the boolean variable ``sys.py3kwarning``, +to Python code as the boolean variable :data:`sys.py3kwarning`, and to C extension code as :cdata:`Py_Py3kWarningFlag`. Python 3.0 adds several new built-in functions and change the Modified: python/trunk/Lib/test/test_py3kwarn.py ============================================================================== --- python/trunk/Lib/test/test_py3kwarn.py (original) +++ python/trunk/Lib/test/test_py3kwarn.py Tue Mar 25 09:29:14 2008 @@ -11,21 +11,21 @@ class TestPy3KWarnings(unittest.TestCase): def test_type_inequality_comparisons(self): - expected = 'type inequality comparisons not supported in 3.x.' + expected = 'type inequality comparisons not supported in 3.x' with catch_warning() as w: self.assertWarning(int < str, w, expected) with catch_warning() as w: self.assertWarning(type < object, w, expected) def test_object_inequality_comparisons(self): - expected = 'comparing unequal types not supported in 3.x.' + expected = 'comparing unequal types not supported in 3.x' with catch_warning() as w: self.assertWarning(str < [], w, expected) with catch_warning() as w: self.assertWarning(object() < (1, 2), w, expected) def test_dict_inequality_comparisons(self): - expected = 'dict inequality comparisons not supported in 3.x.' 
+ expected = 'dict inequality comparisons not supported in 3.x' with catch_warning() as w: self.assertWarning({} < {2:3}, w, expected) with catch_warning() as w: @@ -36,7 +36,7 @@ self.assertWarning({2:3} >= {}, w, expected) def test_cell_inequality_comparisons(self): - expected = 'cell comparisons not supported in 3.x.' + expected = 'cell comparisons not supported in 3.x' def f(x): def g(): return x @@ -49,7 +49,7 @@ self.assertWarning(cell0 < cell1, w, expected) def test_code_inequality_comparisons(self): - expected = 'code inequality comparisons not supported in 3.x.' + expected = 'code inequality comparisons not supported in 3.x' def f(x): pass def g(x): @@ -65,7 +65,7 @@ def test_builtin_function_or_method_comparisons(self): expected = ('builtin_function_or_method ' - 'inequality comparisons not supported in 3.x.') + 'inequality comparisons not supported in 3.x') func = eval meth = {}.get with catch_warning() as w: @@ -81,7 +81,7 @@ self.assertEqual(str(warning.message), expected_message) def test_sort_cmp_arg(self): - expected = "In 3.x, the cmp argument is no longer supported." + expected = "the cmp argument is not supported in 3.x" lst = range(5) cmp = lambda x,y: -1 @@ -95,7 +95,7 @@ self.assertWarning(sorted(lst, cmp), w, expected) def test_sys_exc_clear(self): - expected = 'sys.exc_clear() not supported in 3.x. Use except clauses.' + expected = 'sys.exc_clear() not supported in 3.x; use except clauses' with catch_warning() as w: self.assertWarning(sys.exc_clear(), w, expected) @@ -119,7 +119,7 @@ self.assertWarning(set(), w, expected) def test_buffer(self): - expected = 'buffer will be removed in 3.x' + expected = 'buffer() not supported in 3.x; use memoryview()' with catch_warning() as w: self.assertWarning(buffer('a'), w, expected) Modified: python/trunk/Objects/bufferobject.c ============================================================================== --- python/trunk/Objects/bufferobject.c (original) +++ python/trunk/Objects/bufferobject.c Tue Mar 25 09:29:14 2008 @@ -231,7 +231,8 @@ { if (Py_Py3kWarningFlag && PyErr_WarnEx(PyExc_DeprecationWarning, - "buffer will be removed in 3.x", 1) < 0) + "buffer() not supported in 3.x; " + "use memoryview()", 1) < 0) return NULL; PyObject *ob; Modified: python/trunk/Objects/cellobject.c ============================================================================== --- python/trunk/Objects/cellobject.c (original) +++ python/trunk/Objects/cellobject.c Tue Mar 25 09:29:14 2008 @@ -55,8 +55,9 @@ cell_compare(PyCellObject *a, PyCellObject *b) { /* Py3K warning for comparisons */ - if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "cell comparisons not supported in 3.x.") < 0) { + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "cell comparisons not supported in 3.x") < 0) { return -2; } Modified: python/trunk/Objects/codeobject.c ============================================================================== --- python/trunk/Objects/codeobject.c (original) +++ python/trunk/Objects/codeobject.c Tue Mar 25 09:29:14 2008 @@ -338,9 +338,12 @@ !PyCode_Check(self) || !PyCode_Check(other)) { - /* Py3K warning if types are not equal and comparison isn't == or != */ - if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "code inequality comparisons not supported in 3.x.") < 0) { + /* Py3K warning if types are not equal and comparison + isn't == or != */ + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "code inequality comparisons not supported " + "in 3.x") < 0) { return NULL; } 
Modified: python/trunk/Objects/dictobject.c ============================================================================== --- python/trunk/Objects/dictobject.c (original) +++ python/trunk/Objects/dictobject.c Tue Mar 25 09:29:14 2008 @@ -1778,8 +1778,10 @@ } else { /* Py3K warning if comparison isn't == or != */ - if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "dict inequality comparisons not supported in 3.x.") < 0) { + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "dict inequality comparisons not supported " + "in 3.x") < 0) { return NULL; } res = Py_NotImplemented; @@ -1811,7 +1813,8 @@ { if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "dict.has_key() not supported in 3.x") < 0) + "dict.has_key() not supported in 3.x; " + "use the in operator") < 0) return NULL; return dict_contains(mp, key); } Modified: python/trunk/Objects/exceptions.c ============================================================================== --- python/trunk/Objects/exceptions.c (original) +++ python/trunk/Objects/exceptions.c Tue Mar 25 09:29:14 2008 @@ -190,10 +190,10 @@ BaseException_getitem(PyBaseExceptionObject *self, Py_ssize_t index) { if (Py_Py3kWarningFlag) { - if (PyErr_Warn(PyExc_DeprecationWarning, - "In 3.x, __getitem__ is not supported for exception " - "classes, use args attribute") == -1) - return NULL; + if (PyErr_Warn(PyExc_DeprecationWarning, + "__getitem__ not supported for exception " + "classes in 3.x; use args attribute") == -1) + return NULL; } return PySequence_GetItem(self->args, index); } @@ -203,10 +203,10 @@ Py_ssize_t start, Py_ssize_t stop) { if (Py_Py3kWarningFlag) { - if (PyErr_Warn(PyExc_DeprecationWarning, - "In 3.x, __getslice__ is not supported for exception " - "classes, use args attribute") == -1) - return NULL; + if (PyErr_Warn(PyExc_DeprecationWarning, + "__getslice__ not supported for exception " + "classes in 3.x; use args attribute") == -1) + return NULL; } return PySequence_GetSlice(self->args, start, stop); } Modified: python/trunk/Objects/listobject.c ============================================================================== --- python/trunk/Objects/listobject.c (original) +++ python/trunk/Objects/listobject.c Tue Mar 25 09:29:14 2008 @@ -2040,7 +2040,7 @@ if (compare != NULL && Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "In 3.x, the cmp argument is no longer supported.") < 0) + "the cmp argument is not supported in 3.x") < 0) return NULL; if (keyfunc == Py_None) keyfunc = NULL; Modified: python/trunk/Objects/methodobject.c ============================================================================== --- python/trunk/Objects/methodobject.c (original) +++ python/trunk/Objects/methodobject.c Tue Mar 25 09:29:14 2008 @@ -235,9 +235,10 @@ !PyCFunction_Check(other)) { /* Py3K warning if types are not equal and comparison isn't == or != */ - if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "builtin_function_or_method " - "inequality comparisons not supported in 3.x.") < 0) { + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "builtin_function_or_method inequality " + "comparisons not supported in 3.x") < 0) { return NULL; } @@ -353,12 +354,10 @@ { if (name[0] == '_' && name[1] == '_') { if (strcmp(name, "__methods__") == 0) { - if (Py_Py3kWarningFlag) { - if (PyErr_Warn(PyExc_DeprecationWarning, - "__methods__ not supported " - "in 3.x") < 0) - return NULL; - } + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "__methods__ not supported in 
3.x") < 0) + return NULL; return listmethodchain(chain); } if (strcmp(name, "__doc__") == 0) { Modified: python/trunk/Objects/object.c ============================================================================== --- python/trunk/Objects/object.c (original) +++ python/trunk/Objects/object.c Tue Mar 25 09:29:14 2008 @@ -867,9 +867,10 @@ /* Py3K warning if types are not equal and comparison isn't == or != */ if (Py_Py3kWarningFlag && - v->ob_type != w->ob_type && op != Py_EQ && op != Py_NE && - PyErr_Warn(PyExc_DeprecationWarning, - "comparing unequal types not supported in 3.x.") < 0) { + v->ob_type != w->ob_type && op != Py_EQ && op != Py_NE && + PyErr_Warn(PyExc_DeprecationWarning, + "comparing unequal types not supported " + "in 3.x") < 0) { return NULL; } @@ -1691,8 +1692,8 @@ (strcmp(attrname, "__members__") == 0 || strcmp(attrname, "__methods__") == 0)) { if (PyErr_Warn(PyExc_DeprecationWarning, - "__members__ and __methods__ not supported " - "in 3.x") < 0) { + "__members__ and __methods__ not " + "supported in 3.x") < 0) { Py_XDECREF(list); return -1; } Modified: python/trunk/Objects/typeobject.c ============================================================================== --- python/trunk/Objects/typeobject.c (original) +++ python/trunk/Objects/typeobject.c Tue Mar 25 09:29:14 2008 @@ -609,7 +609,8 @@ /* Py3K warning if comparison isn't == or != */ if (Py_Py3kWarningFlag && op != Py_EQ && op != Py_NE && PyErr_Warn(PyExc_DeprecationWarning, - "type inequality comparisons not supported in 3.x.") < 0) { + "type inequality comparisons not supported " + "in 3.x") < 0) { return NULL; } Modified: python/trunk/Parser/tokenizer.c ============================================================================== --- python/trunk/Parser/tokenizer.c (original) +++ python/trunk/Parser/tokenizer.c Tue Mar 25 09:29:14 2008 @@ -1531,7 +1531,7 @@ #ifndef PGEN if (Py_Py3kWarningFlag && token == NOTEQUAL && c == '<') { if (PyErr_WarnExplicit(PyExc_DeprecationWarning, - "<> not supported in 3.x", + "<> not supported in 3.x; use !=", tok->filename, tok->lineno, NULL, NULL)) { return ERRORTOKEN; Modified: python/trunk/Python/ast.c ============================================================================== --- python/trunk/Python/ast.c (original) +++ python/trunk/Python/ast.c Tue Mar 25 09:29:14 2008 @@ -1363,7 +1363,7 @@ expr_ty expression; if (Py_Py3kWarningFlag) { if (PyErr_WarnExplicit(PyExc_DeprecationWarning, - "backquote not supported in 3.x", + "backquote not supported in 3.x; use repr()", c->c_filename, LINENO(n), NULL, NULL)) { return NULL; Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Tue Mar 25 09:29:14 2008 @@ -166,7 +166,8 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "apply() not supported in 3.x. Use func(*args, **kwargs).") < 0) + "apply() not supported in 3.x; " + "use func(*args, **kwargs)") < 0) return NULL; if (!PyArg_UnpackTuple(args, "apply", 1, 3, &func, &alist, &kwdict)) @@ -225,7 +226,8 @@ { if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "callable() not supported in 3.x. Use hasattr(o, '__call__').") < 0) + "callable() not supported in 3.x; " + "use hasattr(o, '__call__')") < 0) return NULL; return PyBool_FromLong((long)PyCallable_Check(v)); } @@ -684,7 +686,7 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "execfile() not supported in 3.x. 
Use exec().") < 0) + "execfile() not supported in 3.x; use exec()") < 0) return NULL; if (!PyArg_ParseTuple(args, "s|O!O:execfile", @@ -912,7 +914,8 @@ if (func == Py_None) { if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "map(None, ...) not supported in 3.x. Use list(...).") < 0) + "map(None, ...) not supported in 3.x; " + "use list(...)") < 0) return NULL; if (n == 1) { /* map(None, S) is the same as list(S). */ @@ -1934,7 +1937,8 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "reduce() not supported in 3.x") < 0) + "reduce() not supported in 3.x; " + "use functools.reduce()") < 0) return NULL; if (!PyArg_UnpackTuple(args, "reduce", 2, 3, &func, &seq, &result)) @@ -2011,7 +2015,7 @@ { if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "reload() not supported in 3.x") < 0) + "reload() not supported in 3.x; use imp.reload()") < 0) return NULL; return PyImport_ReloadModule(v); Modified: python/trunk/Python/ceval.c ============================================================================== --- python/trunk/Python/ceval.c (original) +++ python/trunk/Python/ceval.c Tue Mar 25 09:29:14 2008 @@ -4056,7 +4056,7 @@ PyType_FastSubclass((PyTypeObject*)(x), Py_TPFLAGS_BASE_EXC_SUBCLASS)) #define CANNOT_CATCH_MSG "catching classes that don't inherit from " \ - "BaseException is not allowed in 3.x." + "BaseException is not allowed in 3.x" static PyObject * cmp_outcome(int op, register PyObject *v, register PyObject *w) Modified: python/trunk/Python/sysmodule.c ============================================================================== --- python/trunk/Python/sysmodule.c (original) +++ python/trunk/Python/sysmodule.c Tue Mar 25 09:29:14 2008 @@ -174,8 +174,8 @@ if (Py_Py3kWarningFlag && PyErr_Warn(PyExc_DeprecationWarning, - "sys.exc_clear() not supported in 3.x. " - "Use except clauses.") < 0) + "sys.exc_clear() not supported in 3.x; " + "use except clauses") < 0) return NULL; tstate = PyThreadState_GET(); From python-checkins at python.org Tue Mar 25 09:31:32 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 09:31:32 +0100 (CET) Subject: [Python-checkins] r61880 - python/trunk/Objects/cellobject.c python/trunk/Objects/dictobject.c Message-ID: <20080325083132.B3E601E4013@bag.python.org> Author: georg.brandl Date: Tue Mar 25 09:31:32 2008 New Revision: 61880 Modified: python/trunk/Objects/cellobject.c python/trunk/Objects/dictobject.c Log: Fix tabs. 
Modified: python/trunk/Objects/cellobject.c ============================================================================== --- python/trunk/Objects/cellobject.c (original) +++ python/trunk/Objects/cellobject.c Tue Mar 25 09:31:32 2008 @@ -56,7 +56,7 @@ { /* Py3K warning for comparisons */ if (Py_Py3kWarningFlag && - PyErr_Warn(PyExc_DeprecationWarning, + PyErr_Warn(PyExc_DeprecationWarning, "cell comparisons not supported in 3.x") < 0) { return -2; } Modified: python/trunk/Objects/dictobject.c ============================================================================== --- python/trunk/Objects/dictobject.c (original) +++ python/trunk/Objects/dictobject.c Tue Mar 25 09:31:32 2008 @@ -1779,7 +1779,7 @@ else { /* Py3K warning if comparison isn't == or != */ if (Py_Py3kWarningFlag && - PyErr_Warn(PyExc_DeprecationWarning, + PyErr_Warn(PyExc_DeprecationWarning, "dict inequality comparisons not supported " "in 3.x") < 0) { return NULL; From python-checkins at python.org Tue Mar 25 09:37:23 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 09:37:23 +0100 (CET) Subject: [Python-checkins] r61881 - in python/trunk: Misc/NEWS Modules/arraymodule.c Message-ID: <20080325083723.86DF31E4013@bag.python.org> Author: georg.brandl Date: Tue Mar 25 09:37:23 2008 New Revision: 61881 Modified: python/trunk/Misc/NEWS python/trunk/Modules/arraymodule.c Log: #2359: add Py3k warning for array.read/array.write. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 25 09:37:23 2008 @@ -48,6 +48,8 @@ are still valid. There are binary literals with a prefix of "0b". This also affects int(x, 0). +- Issue #2359: Adding deprecation warnings for array.{read,write}. + - Issue #1779871: Gnu gcc can now build Python on OS X because the flags -Wno-long-double, -no-cpp-precomp, and -mno-fused-madd are no longer passed. 
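[Editorial aside, not part of the archived messages] The arraymodule.c hunk that follows keeps array.read() and array.write() working but routes them through wrappers that warn under -3; fromfile() and tofile() are the spellings that carry over to 3.x. A small sketch of the preferred calls on 2.6 (the file name is made up for illustration):

    import array

    a = array.array('i', [1, 2, 3, 4])
    with open('data.bin', 'wb') as f:
        a.tofile(f)              # rather than the deprecated a.write(f)

    b = array.array('i')
    with open('data.bin', 'rb') as f:
        b.fromfile(f, 4)         # rather than the deprecated b.read(f, 4)
    assert a == b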
Modified: python/trunk/Modules/arraymodule.c ============================================================================== --- python/trunk/Modules/arraymodule.c (original) +++ python/trunk/Modules/arraymodule.c Tue Mar 25 09:37:23 2008 @@ -1255,6 +1255,18 @@ static PyObject * +array_fromfile_as_read(arrayobject *self, PyObject *args) +{ + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "array.read() not supported in 3.x; " + "use array.fromfile()") < 0) + return NULL; + return array_fromfile(self, args); +} + + +static PyObject * array_tofile(arrayobject *self, PyObject *f) { FILE *fp; @@ -1284,6 +1296,18 @@ static PyObject * +array_tofile_as_write(arrayobject *self, PyObject *f) +{ + if (Py_Py3kWarningFlag && + PyErr_Warn(PyExc_DeprecationWarning, + "array.write() not supported in 3.x; " + "use array.tofile()") < 0) + return NULL; + return array_tofile(self, f); +} + + +static PyObject * array_fromlist(arrayobject *self, PyObject *list) { Py_ssize_t n; @@ -1522,7 +1546,7 @@ insert_doc}, {"pop", (PyCFunction)array_pop, METH_VARARGS, pop_doc}, - {"read", (PyCFunction)array_fromfile, METH_VARARGS, + {"read", (PyCFunction)array_fromfile_as_read, METH_VARARGS, fromfile_doc}, {"__reduce__", (PyCFunction)array_reduce, METH_NOARGS, array_doc}, @@ -1542,7 +1566,7 @@ {"tounicode", (PyCFunction)array_tounicode, METH_NOARGS, tounicode_doc}, #endif - {"write", (PyCFunction)array_tofile, METH_O, + {"write", (PyCFunction)array_tofile_as_write, METH_O, tofile_doc}, {NULL, NULL} /* sentinel */ }; From python-checkins at python.org Tue Mar 25 09:39:11 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 09:39:11 +0100 (CET) Subject: [Python-checkins] r61882 - python/trunk/Doc/library/optparse.rst Message-ID: <20080325083911.40CFD1E4013@bag.python.org> Author: georg.brandl Date: Tue Mar 25 09:39:10 2008 New Revision: 61882 Modified: python/trunk/Doc/library/optparse.rst Log: #2476: document that %default feature is new in 2.4. Modified: python/trunk/Doc/library/optparse.rst ============================================================================== --- python/trunk/Doc/library/optparse.rst (original) +++ python/trunk/Doc/library/optparse.rst Tue Mar 25 09:39:10 2008 @@ -534,10 +534,11 @@ description "write output to FILE". This is a simple but effective way to make your help text a lot clearer and more useful for end users. -* options that have a default value can include ``%default`` in the help - string---\ :mod:`optparse` will replace it with :func:`str` of the option's - default value. If an option has no default value (or the default value is - ``None``), ``%default`` expands to ``none``. +.. versionadded:: 2.4 + Options that have a default value can include ``%default`` in the help + string---\ :mod:`optparse` will replace it with :func:`str` of the option's + default value. If an option has no default value (or the default value is + ``None``), ``%default`` expands to ``none``. When dealing with many options, it is convenient to group these options for better help output. An :class:`OptionParser` can contain From buildbot at python.org Tue Mar 25 09:45:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 08:45:01 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080325084501.4064D1E4013@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. 
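[Editorial aside, not part of the archived messages] On the %default behaviour documented in r61882 above: since 2.4, optparse expands %default in a help string to str() of the option's default, or to "none" when no default is set. A minimal sketch (the option names are invented for illustration):

    from optparse import OptionParser

    parser = OptionParser()
    parser.add_option("-o", "--output", default="out.txt",
                      help="write output to FILE [default: %default]")
    parser.add_option("-n", "--count", type="int",
                      help="number of iterations [default: %default]")  # no default -> "none"
    parser.print_help()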
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/205 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 09:48:27 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 08:48:27 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20080325084827.D3EAA1E401F@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/721 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_xmlrpc_net ====================================================================== ERROR: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_xmlrpc_net.py", line 18, in test_current_time t0 = server.currentTime.getCurrentTime() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/xmlrpclib.py", line 1113, in __call__ return self.__send(self.__name, args) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/xmlrpclib.py", line 1371, in __request verbose=self.__verbose File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/xmlrpclib.py", line 1142, in request http_conn = self.send_request(host, handler, request_body, verbose) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/xmlrpclib.py", line 1227, in send_request connection.request("POST", handler, request_body, headers) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 898, in request self._send_request(method, url, body, headers) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 935, in _send_request self.endheaders() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 893, in endheaders self._send_output() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 759, in _send_output self.send(msg) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 718, in send self.connect() File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/httplib.py", line 702, in connect self.timeout) File "/opt/users/buildbot/slave/3.0.loewis-sun/build/Lib/socket.py", line 294, in create_connection raise error(msg) socket.error: [Errno 146] Connection refused sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 10:27:21 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 09:27:21 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080325092723.1B5001E4023@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/61 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl,gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 5 tests failed: test_mailbox test_ssl test_urllib2net test_urllibnet test_xmlrpc_net ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 From MAILER-DAEMON Fri Apr 07 09:32:03 2000 From: foo 1 From MAILER-DAEMON Fri Apr 07 09:32:03 2000 From: foo 2 From MAILER-DAEMON Fri Apr 07 09:32:03 2000 From: foo 3 From MAIL' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Fri Apr 07 09:32:10 2000 From: foo 1 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Fri Apr 07 09:32:10 2000 From: foo 2 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Fri Apr 07 09' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 941, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the 
file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. 
Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 09:31:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:01 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:03 2000\n\nFrom: foo\n\n1' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:04 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:04 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:04 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:04 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:04 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:05 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:07 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:10 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 09:32:11 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) 
AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 
(EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove 
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 822, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 843, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 865, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 738, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: 
Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_bad_address (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllib2net.py", line 160, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_bad_address (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllibnet.py", line 142, in test_bad_address urllib.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== ERROR: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 18, in test_current_time t0 = server.currentTime.getCurrentTime() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\xmlrpclib.py", line 1113, in __call__ return self.__send(self.__name, args) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\xmlrpclib.py", line 1371, in __request verbose=self.__verbose File "C:\python\buildarea\3.0.armbruster-windows\build\lib\xmlrpclib.py", line 1142, in request http_conn = self.send_request(host, handler, request_body, verbose) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\xmlrpclib.py", line 1227, in send_request connection.request("POST", handler, request_body, headers) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\httplib.py", line 898, in request self._send_request(method, url, body, headers) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\httplib.py", line 935, in _send_request self.endheaders() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\httplib.py", line 893, in endheaders self._send_output() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\httplib.py", line 759, in _send_output self.send(msg) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\httplib.py", line 718, in send self.connect() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\httplib.py", line 702, in connect self.timeout) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\socket.py", line 294, in create_connection raise error(msg) socket.error: [Errno 10061] No connection could be made because the target machine actively refused it sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 10:34:58 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 09:34:58 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080325093458.8E1BA1E4013@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/131 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 10:42:30 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 09:42:30 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080325094230.BCB761E4013@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/204 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_urllibnet test_xmlrpc_net ====================================================================== ERROR: test_fileno (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_urllibnet.py", line 126, in test_fileno self.assert_(FILE.read(), "reading from file created using fd " File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/io.py", line 1470, in read decoder.decode(self.buffer.read(), final=True)) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/io.py", line 1075, in decode output = self.decoder.decode(input, final=final) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12578: ordinal not in range(128) ====================================================================== ERROR: test_basic (test.test_urllibnet.urlretrieveNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_urllibnet.py", line 157, in test_basic self.assert_(FILE.read(), "reading from the file location returned" File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/io.py", line 1470, in read decoder.decode(self.buffer.read(), final=True)) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/io.py", line 1075, in decode output = self.decoder.decode(input, final=final) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12578: ordinal not in range(128) ====================================================================== ERROR: test_specified_path (test.test_urllibnet.urlretrieveNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_urllibnet.py", line 171, in test_specified_path self.assert_(FILE.read(), "reading from temporary file failed") File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/io.py", line 1470, in read decoder.decode(self.buffer.read(), final=True)) 
File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/io.py", line 1075, in decode output = self.decoder.decode(input, final=final) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12578: ordinal not in range(128) ====================================================================== ERROR: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_xmlrpc_net.py", line 18, in test_current_time t0 = server.currentTime.getCurrentTime() File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/xmlrpclib.py", line 1113, in __call__ return self.__send(self.__name, args) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/xmlrpclib.py", line 1371, in __request verbose=self.__verbose File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/xmlrpclib.py", line 1142, in request http_conn = self.send_request(host, handler, request_body, verbose) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/xmlrpclib.py", line 1227, in send_request connection.request("POST", handler, request_body, headers) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/httplib.py", line 898, in request self._send_request(method, url, body, headers) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/httplib.py", line 935, in _send_request self.endheaders() File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/httplib.py", line 893, in endheaders self._send_output() File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/httplib.py", line 759, in _send_output self.send(msg) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/httplib.py", line 718, in send self.connect() File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/httplib.py", line 702, in connect self.timeout) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/socket.py", line 294, in create_connection raise error(msg) socket.error: [Errno 60] Connection timed out make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 10:54:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 09:54:02 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080325095402.BBBD01E401F@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/655 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 11:16:52 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 11:16:52 +0100 (CET) Subject: [Python-checkins] r61884 - in doctools/trunk: CHANGES sphinx/builder.py sphinx/directives.py sphinx/environment.py sphinx/ext/doctest.py sphinx/linkcheck.py Message-ID: <20080325101652.093421E4025@bag.python.org> Author: georg.brandl Date: Tue Mar 25 11:16:51 2008 New Revision: 61884 Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/directives.py doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/ext/doctest.py doctools/trunk/sphinx/linkcheck.py Log: Add a dependency system for handling .. include, .. literalinclude and later .. image dependencies. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 11:16:51 2008 @@ -1,3 +1,10 @@ +Changes in trunk +================ + +* sphinx.environment: Take dependent files into account when collecting + the set of outdated sources. + + Release 0.1.61843 (Mar 24, 2008) ================================ Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Tue Mar 25 11:16:51 2008 @@ -127,8 +127,7 @@ # build methods def load_env(self): - """Set up the build environment. Return True if a pickled file could be - successfully loaded, False if a new environment had to be created.""" + """Set up the build environment.""" if self.env: return if not self.freshenv: @@ -143,8 +142,10 @@ else: self.info('failed: %s' % err) self.env = BuildEnvironment(self.srcdir, self.doctreedir, self.config) + self.env.find_files(self.config) else: self.env = BuildEnvironment(self.srcdir, self.doctreedir, self.config) + self.env.find_files(self.config) self.env.set_warnfunc(self.warn) def build_all(self): @@ -171,10 +172,6 @@ def build_update(self): """Only rebuild files changed or added since last build.""" to_build = self.get_outdated_docs() - if not to_build and self.env.all_docs: - # if there is nothing in all_docs, it's a fresh env - self.info(bold('no target files are out of date, exiting.')) - return if isinstance(to_build, str): self.build([], to_build) else: @@ -213,6 +210,10 @@ # global actions self.info(bold('checking consistency...')) self.env.check_consistency() + else: + if not docnames: + self.info(bold('no targets are out of date.')) + return # another indirection to support methods which don't build files # individually @@ -222,14 +223,15 @@ self.info(bold('finishing... ')) self.finish() if self.app._warncount: - self.info(bold('build succeeded, %s warnings.' % self.app._warncount)) + self.info(bold('build succeeded, %s warning%s.' 
% + (self.app._warncount, self.app._warncount != 1 and 's' or ''))) else: self.info(bold('build succeeded.')) def write(self, build_docnames, updated_docnames, method='update'): if build_docnames is None: # build_all - build_docnames = self.env.all_docs + build_docnames = self.env.found_docs if method == 'update': # build updated ones as well docnames = set(build_docnames) | set(updated_docnames) @@ -383,7 +385,7 @@ self.handle_page(docname, ctx) def finish(self): - self.info(bold('writing additional files...')) + self.info(bold('writing additional files...'), nonl=1) # the global general index @@ -397,6 +399,7 @@ genindexentries = self.env.index, genindexcounts = indexcounts, ) + self.info(' genindex', nonl=1) self.handle_page('genindex', genindexcontext, 'genindex.html') # the global module index @@ -442,21 +445,26 @@ modindexentries = modindexentries, platforms = platforms, ) + self.info(' modindex', nonl=1) self.handle_page('modindex', modindexcontext, 'modindex.html') # the search page + self.info(' search', nonl=1) self.handle_page('search', {}, 'search.html') # additional pages from conf.py for pagename, template in self.config.html_additional_pages.items(): + self.info(' '+pagename, nonl=1) self.handle_page(pagename, {}, template) # the index page indextemplate = self.config.html_index if indextemplate: + self.info(' index', nonl=1) self.handle_page('index', {'indextemplate': indextemplate}, 'index.html') # copy static files + self.info() self.info(bold('copying static files...')) ensuredir(path.join(self.outdir, 'static')) staticdirnames = [path.join(path.dirname(__file__), 'static')] + \ @@ -481,10 +489,7 @@ return docname + '.html' def get_outdated_docs(self): - for docname in get_matching_docs( - self.srcdir, self.config.source_suffix, - exclude=set(self.config.unused_docs), - prune=['_sources']): + for docname in self.env.found_docs: targetname = self.env.doc2path(docname, self.outdir, '.html') try: targetmtime = path.getmtime(targetname) @@ -566,10 +571,7 @@ self.init_translator_class() def get_outdated_docs(self): - for docname in get_matching_docs( - self.srcdir, self.config.source_suffix, - exclude=set(self.config.unused_docs), - prune=['_sources']): + for docname in self.env.found_docs: targetname = self.env.doc2path(docname, self.outdir, '.fpickle') try: targetmtime = path.getmtime(targetname) Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Tue Mar 25 11:16:51 2008 @@ -664,10 +664,10 @@ if not state.document.settings.file_insertion_enabled: return [state.document.reporter.warning('File insertion disabled', line=lineno)] env = state.document.settings.env - fn = arguments[0] + rel_fn = arguments[0] source_dir = path.dirname(path.abspath(state_machine.input_lines.source( lineno - state_machine.input_offset - 1))) - fn = path.normpath(path.join(source_dir, fn)) + fn = path.normpath(path.join(source_dir, rel_fn)) try: f = open(fn) @@ -683,6 +683,7 @@ retnode['language'] = options['language'] if 'linenos' in options: retnode['linenos'] = True + state.document.settings.env.note_dependency(rel_fn) return [retnode] literalinclude_directive.options = {'linenos': directives.flag, Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Tue Mar 25 11:16:51 
2008 @@ -57,7 +57,7 @@ # This is increased every time a new environment attribute is added # to properly invalidate pickle files. -ENV_VERSION = 18 +ENV_VERSION = 19 def walk_depth(node, depth, maxdepth): @@ -218,8 +218,10 @@ # All "docnames" here are /-separated and relative and exclude the source suffix. self.found_docs = set() # contains all existing docnames - self.all_docs = {} # docname -> (mtime, md5sum) at the time of build + self.all_docs = {} # docname -> mtime at the time of build # contains all built docnames + self.dependencies = {} # docname -> set of dependent file names, relative to + # documentation root # File metadata self.metadata = {} # docname -> dict of metadata items @@ -278,6 +280,7 @@ if docname in self.all_docs: self.all_docs.pop(docname, None) self.metadata.pop(docname, None) + self.dependencies.pop(docname, None) self.titles.pop(docname, None) self.tocs.pop(docname, None) self.toc_num_entries.pop(docname, None) @@ -318,14 +321,18 @@ else: return path.join(base, docname.replace(SEP, path.sep)) + suffix - def get_outdated_files(self, config, config_changed): + def find_files(self, config): """ - Return (added, changed, removed) sets. + Find all source files in the source dir and put them in self.found_docs. """ self.found_docs = set(get_matching_docs(self.srcdir, config.source_suffix, exclude=set(config.unused_docs), prune=['_sources'])) + def get_outdated_files(self, config_changed): + """ + Return (added, changed, removed) sets. + """ # clear all files no longer present removed = set(self.all_docs) - self.found_docs @@ -339,17 +346,28 @@ for docname in self.found_docs: if docname not in self.all_docs: added.add(docname) - else: - # if the doctree file is not there, rebuild - if not path.isfile(self.doc2path(docname, self.doctreedir, - '.doctree')): - changed.add(docname) - continue - mtime, md5sum = self.all_docs[docname] - newmtime = path.getmtime(self.doc2path(docname)) - if newmtime == mtime: - continue + continue + # if the doctree file is not there, rebuild + if not path.isfile(self.doc2path(docname, self.doctreedir, + '.doctree')): changed.add(docname) + continue + # check the mtime of the document + mtime = self.all_docs[docname] + newmtime = path.getmtime(self.doc2path(docname)) + if newmtime > mtime: + changed.add(docname) + continue + # finally, check the mtime of dependencies + for dep in self.dependencies.get(docname, ()): + deppath = path.join(self.srcdir, dep) + if not path.isfile(deppath): + changed.add(docname) + break + depmtime = path.getmtime(deppath) + if depmtime > mtime: + changed.add(docname) + break return added, changed, removed @@ -369,12 +387,14 @@ continue if not hasattr(self.config, key) or \ self.config[key] != config[key]: + msg = '[config changed] ' config_changed = True break else: msg = '' - added, changed, removed = self.get_outdated_files(config, config_changed) + self.find_files(config) + added, changed, removed = self.get_outdated_files(config_changed) msg += '%s added, %s changed, %s removed' % (len(added), len(changed), len(removed)) yield msg @@ -409,18 +429,14 @@ doctree = publish_doctree(None, src_path, FileInput, settings_overrides=self.settings, reader=MyStandaloneReader()) + self.process_dependencies(docname, doctree) self.process_metadata(docname, doctree) self.create_title_from(docname, doctree) self.note_labels_from(docname, doctree) self.build_toc_from(docname, doctree) - # calculate the MD5 of the file at time of build - f = open(src_path, 'rb') - try: - md5sum = md5(f.read()).digest() - finally: - f.close() - 
self.all_docs[docname] = (path.getmtime(src_path), md5sum) + # store time of reading, used to find outdated files + self.all_docs[docname] = time.time() if app: app.emit('doctree-read', doctree) @@ -430,6 +446,7 @@ doctree.transformer = None doctree.settings.warning_stream = None doctree.settings.env = None + doctree.settings.record_dependencies = None # cleanup self.docname = None @@ -452,6 +469,18 @@ else: return doctree + def process_dependencies(self, docname, doctree): + """ + Process docutils-generated dependency info. + """ + deps = doctree.settings.record_dependencies + if not deps: + return + basename = path.dirname(self.doc2path(docname, base=None)) + for dep in deps.list: + dep = path.join(basename, dep) + self.dependencies.setdefault(docname, set()).add(dep) + def process_metadata(self, docname, doctree): """ Process the docinfo part of the doctree as metadata. @@ -602,6 +631,11 @@ def note_versionchange(self, type, version, node, lineno): self.versionchanges.setdefault(version, []).append( (type, self.docname, lineno, self.currmodule, self.currdesc, node.astext())) + + def note_dependency(self, filename): + basename = path.dirname(self.doc2path(self.docname, base=None)) + filename = path.join(basename, filename) + self.dependencies.setdefault(self.docname, set()).add(filename) # ------- # --------- RESOLVING REFERENCES AND TOCTREES ------------------------------ Modified: doctools/trunk/sphinx/ext/doctest.py ============================================================================== --- doctools/trunk/sphinx/ext/doctest.py (original) +++ doctools/trunk/sphinx/ext/doctest.py Tue Mar 25 11:16:51 2008 @@ -183,7 +183,7 @@ return '' def get_outdated_docs(self): - return self.env.all_docs + return self.env.found_docs def finish(self): # write executive summary @@ -204,7 +204,7 @@ def write(self, build_docnames, updated_docnames, method='update'): if build_docnames is None: - build_docnames = self.env.all_docs + build_docnames = sorted(self.env.all_docs) self.info(bold('running tests...')) for docname in build_docnames: Modified: doctools/trunk/sphinx/linkcheck.py ============================================================================== --- doctools/trunk/sphinx/linkcheck.py (original) +++ doctools/trunk/sphinx/linkcheck.py Tue Mar 25 11:16:51 2008 @@ -42,7 +42,7 @@ return '' def get_outdated_docs(self): - return self.env.all_docs + return self.env.found_docs def prepare_writing(self, docnames): return From python-checkins at python.org Tue Mar 25 11:20:09 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 11:20:09 +0100 (CET) Subject: [Python-checkins] r61885 - doctools/trunk/sphinx/environment.py Message-ID: <20080325102009.683C61E401F@bag.python.org> Author: georg.brandl Date: Tue Mar 25 11:20:09 2008 New Revision: 61885 Modified: doctools/trunk/sphinx/environment.py Log: Handle errors more gracefully. 
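
The patch below wraps the per-dependency freshness check in a try/except so that a dependency file that has vanished or cannot be stat'ed simply marks the document as changed (and therefore rebuilt) instead of aborting the update. A condensed, standalone sketch of that check; the function name and simplified arguments here are illustrative only, not part of the patch:

    from os import path

    def is_outdated(srcdir, doc_mtime, deps):
        # A document counts as outdated if any recorded dependency is
        # missing, newer than the stored read time, or cannot be checked.
        for dep in deps:
            try:
                deppath = path.join(srcdir, dep)
                if not path.isfile(deppath):
                    return True
                if path.getmtime(deppath) > doc_mtime:
                    return True
            except EnvironmentError:
                # give it another chance: schedule a rebuild
                return True
        return False
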
Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Tue Mar 25 11:20:09 2008 @@ -360,12 +360,17 @@ continue # finally, check the mtime of dependencies for dep in self.dependencies.get(docname, ()): - deppath = path.join(self.srcdir, dep) - if not path.isfile(deppath): - changed.add(docname) - break - depmtime = path.getmtime(deppath) - if depmtime > mtime: + try: + deppath = path.join(self.srcdir, dep) + if not path.isfile(deppath): + changed.add(docname) + break + depmtime = path.getmtime(deppath) + if depmtime > mtime: + changed.add(docname) + break + except EnvironmentError: + # give it another chance changed.add(docname) break From python-checkins at python.org Tue Mar 25 11:31:13 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 11:31:13 +0100 (CET) Subject: [Python-checkins] r61886 - in doctools/trunk: CHANGES sphinx/environment.py sphinx/ext/autodoc.py Message-ID: <20080325103113.E4CA11E401F@bag.python.org> Author: georg.brandl Date: Tue Mar 25 11:31:13 2008 New Revision: 61886 Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/ext/autodoc.py Log: Record deps from autodoc. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 11:31:13 2008 @@ -4,6 +4,12 @@ * sphinx.environment: Take dependent files into account when collecting the set of outdated sources. +* sphinx.directives: Record files included with ``.. literalinclude::`` + as dependencies. + +* sphinx.ext.autodoc: Record files from which docstrings are included + as dependencies. 
+ Release 0.1.61843 (Mar 24, 2008) ================================ Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Tue Mar 25 11:31:13 2008 @@ -361,6 +361,7 @@ # finally, check the mtime of dependencies for dep in self.dependencies.get(docname, ()): try: + # this will do the right thing when dep is absolute too deppath = path.join(self.srcdir, dep) if not path.isfile(deppath): changed.add(docname) @@ -639,6 +640,7 @@ def note_dependency(self, filename): basename = path.dirname(self.doc2path(self.docname, base=None)) + # this will do the right thing when filename is absolute too filename = path.join(basename, filename) self.dependencies.setdefault(self.docname, set()).add(filename) # ------- Modified: doctools/trunk/sphinx/ext/autodoc.py ============================================================================== --- doctools/trunk/sphinx/ext/autodoc.py (original) +++ doctools/trunk/sphinx/ext/autodoc.py Tue Mar 25 11:31:13 2008 @@ -69,8 +69,8 @@ return charset -def generate_rst(what, name, members, undoc, add_content, - document, lineno, indent=''): +def generate_rst(what, name, members, undoc, add_content, document, lineno, + indent='', filename_set=None): env = document.settings.env # find out what to import @@ -101,6 +101,11 @@ try: todoc = module = __import__(mod, None, None, ['foo']) + if filename_set is not None and hasattr(module, '__file__') and module.__file__: + modfile = module.__file__ + if modfile.lower().endswith('.pyc') or modfile.lower().endswith('.pyo'): + modfile = modfile[:-1] + filename_set.add(modfile) for part in objpath: todoc = getattr(todoc, part) if hasattr(todoc, '__module__'): @@ -218,8 +223,14 @@ members = options.get('members', []) undoc = 'undoc-members' in options + filename_set = set() warnings, result = generate_rst(what, name, members, undoc, content, - state.document, lineno) + state.document, lineno, filename_set=filename_set) + + # record all filenames as dependencies -- this will at least partially make + # automatic invalidation possible + for fn in filename_set: + state.document.settings.env.note_dependency(fn) if dirname == 'automodule': node = nodes.section() From python-checkins at python.org Tue Mar 25 11:36:39 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 11:36:39 +0100 (CET) Subject: [Python-checkins] r61887 - in doctools/trunk: CHANGES sphinx/builder.py Message-ID: <20080325103639.888071E401F@bag.python.org> Author: georg.brandl Date: Tue Mar 25 11:36:39 2008 New Revision: 61887 Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/builder.py Log: * sphinx.builder: Handle unavailability of TOC relations (previous/ next chapter) more gracefully in the HTML builder. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 11:36:39 2008 @@ -10,6 +10,9 @@ * sphinx.ext.autodoc: Record files from which docstrings are included as dependencies. +* sphinx.builder: Handle unavailability of TOC relations (previous/ + next chapter) more gracefully in the HTML builder. 
+ Release 0.1.61843 (Mar 24, 2008) ================================ Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Tue Mar 25 11:36:39 2008 @@ -347,22 +347,33 @@ prev = next = None parents = [] related = self.env.toctree_relations.get(docname) + titles = self.env.titles if related: - prev = {'link': self.get_relative_uri(docname, related[1]), - 'title': self.render_partial(self.env.titles[related[1]])['title']} - next = {'link': self.get_relative_uri(docname, related[2]), - 'title': self.render_partial(self.env.titles[related[2]])['title']} + try: + prev = {'link': self.get_relative_uri(docname, related[1]), + 'title': self.render_partial(titles[related[1]])['title']} + except KeyError: + # the relation is (somehow) not in the TOC tree, handle that gracefully + prev = None + try: + next = {'link': self.get_relative_uri(docname, related[2]), + 'title': self.render_partial(titles[related[2]])['title']} + except KeyError: + next = None while related: - parents.append( - {'link': self.get_relative_uri(docname, related[0]), - 'title': self.render_partial(self.env.titles[related[0]])['title']}) + try: + parents.append( + {'link': self.get_relative_uri(docname, related[0]), + 'title': self.render_partial(titles[related[0]])['title']}) + except KeyError: + pass related = self.env.toctree_relations.get(related[0]) if parents: parents.pop() # remove link to the master file; we have a generic # "back to index" link already parents.reverse() - title = self.env.titles.get(docname) + title = titles.get(docname) if title: title = self.render_partial(title)['title'] else: From python-checkins at python.org Tue Mar 25 12:01:28 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 12:01:28 +0100 (CET) Subject: [Python-checkins] r61888 - doctools/trunk/CHANGES doctools/trunk/setup.py Message-ID: <20080325110128.F2A471E4034@bag.python.org> Author: georg.brandl Date: Tue Mar 25 12:01:28 2008 New Revision: 61888 Modified: doctools/trunk/CHANGES doctools/trunk/setup.py Log: * setup: On Python 2.4, don't egg-depend on docutils if a docutils is already installed -- else it will be overwritten. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 12:01:28 2008 @@ -13,6 +13,9 @@ * sphinx.builder: Handle unavailability of TOC relations (previous/ next chapter) more gracefully in the HTML builder. +* setup: On Python 2.4, don't egg-depend on docutils if a docutils is + already installed -- else it will be overwritten. + Release 0.1.61843 (Mar 24, 2008) ================================ Modified: doctools/trunk/setup.py ============================================================================== --- doctools/trunk/setup.py (original) +++ doctools/trunk/setup.py Tue Mar 25 12:01:28 2008 @@ -1,10 +1,12 @@ # -*- coding: utf-8 -*- import ez_setup ez_setup.use_setuptools() -import sphinx +import sys from setuptools import setup, Feature +import sphinx + long_desc = ''' Sphinx is a tool that makes it easy to create intelligent and beautiful documentation for Python projects (or other documents consisting of @@ -32,6 +34,25 @@ and inclusion of appropriately formatted docstrings. 
''' +requires = ['Pygments>=0.8', 'docutils>=0.4'] + +if sys.version_info < (2, 4): + print 'ERROR: Sphinx requires at least Python 2.4 to run.' + sys.exit(1) + +if sys.version_info < (2, 5): + # Python 2.4's distutils doesn't automatically install an egg-info, + # so an existing docutils install won't be detected -- in that case, + # remove the dependency from setup.py + try: + import docutils + if int(docutils.__version__[2]) < 4: + raise ValueError('docutils not recent enough') + except: + pass + else: + del requires[-1] + setup( name='Sphinx', version=sphinx.__version__, @@ -66,5 +87,5 @@ 'sphinx-quickstart = sphinx.quickstart:main' ] }, - install_requires=['Pygments>=0.8', 'docutils>=0.4'] + install_requires=requires, ) From buildbot at python.org Tue Mar 25 12:31:18 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 11:31:18 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 3.0 Message-ID: <20080325113118.23A6C1E401F@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%203.0/builds/755 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 12:59:51 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 12:59:51 +0100 (CET) Subject: [Python-checkins] r61889 - python/trunk/Objects/bufferobject.c Message-ID: <20080325115951.869AB1E4041@bag.python.org> Author: georg.brandl Date: Tue Mar 25 12:59:51 2008 New Revision: 61889 Modified: python/trunk/Objects/bufferobject.c Log: Move declarations to block start. Modified: python/trunk/Objects/bufferobject.c ============================================================================== --- python/trunk/Objects/bufferobject.c (original) +++ python/trunk/Objects/bufferobject.c Tue Mar 25 12:59:51 2008 @@ -229,16 +229,16 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; + if (Py_Py3kWarningFlag && PyErr_WarnEx(PyExc_DeprecationWarning, "buffer() not supported in 3.x; " "use memoryview()", 1) < 0) return NULL; - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; - if (!_PyArg_NoKeywords("buffer()", kw)) return NULL; From python-checkins at python.org Tue Mar 25 13:32:03 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 13:32:03 +0100 (CET) Subject: [Python-checkins] r61890 - in doctools/trunk: CHANGES doc/rest.rst sphinx/builder.py sphinx/directives.py sphinx/environment.py sphinx/htmlwriter.py sphinx/latexwriter.py sphinx/roles.py Message-ID: <20080325123203.E2DE51E4002@bag.python.org> Author: georg.brandl Date: Tue Mar 25 13:32:03 2008 New Revision: 61890 Modified: doctools/trunk/CHANGES doctools/trunk/doc/rest.rst doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/directives.py doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/htmlwriter.py doctools/trunk/sphinx/latexwriter.py doctools/trunk/sphinx/roles.py Log: Support the image directive. 
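
The patch below adds an image inventory to the build environment: each ``.. image::`` target is rewritten relative to its document, recorded as a dependency, and assigned a unique output basename under which the builders copy it (into ``_images/`` for HTML output), with clashing basenames disambiguated by a numeric suffix. A condensed sketch of just that unique-name bookkeeping; the helper name and standalone form are illustrative only:

    from os import path

    def assign_unique_name(images, imgpath):
        # `images` maps source image paths to unique output basenames;
        # clashes get a numeric suffix (logo.png, logo1.png, ...).
        if imgpath in images:
            return images[imgpath]
        names = set(images.values())
        uniquename = path.basename(imgpath)
        base, ext = path.splitext(uniquename)
        i = 0
        while uniquename in names:
            i += 1
            uniquename = '%s%s%s' % (base, i, ext)
        images[imgpath] = uniquename
        return uniquename
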
Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 13:32:03 2008 @@ -1,6 +1,9 @@ Changes in trunk ================ +* sphinx.htmlwriter, sphinx.latexwriter: Support the ``.. image::`` + directive by copying image files to the output directory. + * sphinx.environment: Take dependent files into account when collecting the set of outdated sources. Modified: doctools/trunk/doc/rest.rst ============================================================================== --- doctools/trunk/doc/rest.rst (original) +++ doctools/trunk/doc/rest.rst Tue Mar 25 13:32:03 2008 @@ -205,6 +205,19 @@ directive start. +Images +------ + +reST supports an image directive, used like so:: + + .. image:: filename + (options) + +When used within Sphinx, the ``filename`` given must be relative to the source +file, and Sphinx will automatically copy image files over to a subdirectory of +the output directory on building. + + Footnotes --------- Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Tue Mar 25 13:32:03 2008 @@ -75,7 +75,7 @@ def init_templates(self): """Call if you need Jinja templates in the builder.""" - # lazily import this, maybe other builders won't need it + # lazily import this, other builders won't need it from sphinx._jinja import Environment, SphinxFileSystemLoader # load templates @@ -103,14 +103,18 @@ raise NotImplementedError def get_relative_uri(self, from_, to, typ=None): - """Return a relative URI between two source filenames. - May raise environment.NoUri if there's no way to return a - sensible URI.""" + """ + Return a relative URI between two source filenames. May raise environment.NoUri + if there's no way to return a sensible URI. + """ return relative_uri(self.get_target_uri(from_), self.get_target_uri(to, typ)) def get_outdated_docs(self): - """Return a list of output files that are outdated.""" + """ + Return an iterable of output files that are outdated, or a string describing + what an update build will build. 
+ """ raise NotImplementedError def status_iterator(self, iterable, summary, colorfunc): @@ -173,7 +177,7 @@ """Only rebuild files changed or added since last build.""" to_build = self.get_outdated_docs() if isinstance(to_build, str): - self.build([], to_build) + self.build(['__all__'], to_build) else: to_build = list(to_build) self.build(to_build, @@ -211,7 +215,7 @@ self.info(bold('checking consistency...')) self.env.check_consistency() else: - if not docnames: + if method == 'update' and not docnames: self.info(bold('no targets are out of date.')) return @@ -341,6 +345,7 @@ destination = StringOutput(encoding='utf-8') doctree.settings = self.docsettings + self.imgpath = relative_uri(self.get_target_uri(docname), '_images') self.docwriter.write(doctree, destination) self.docwriter.assemble_parts() @@ -474,8 +479,19 @@ self.info(' index', nonl=1) self.handle_page('index', {'indextemplate': indextemplate}, 'index.html') - # copy static files self.info() + + # copy image files + if self.env.images: + self.info(bold('copying images...'), nonl=1) + ensuredir(path.join(self.outdir, '_images')) + for src, dest in self.env.images.iteritems(): + self.info(' '+src, nonl=1) + shutil.copyfile(path.join(self.srcdir, src), + path.join(self.outdir, '_images', dest)) + self.info() + + # copy static files self.info(bold('copying static files...')) ensuredir(path.join(self.outdir, 'static')) staticdirnames = [path.join(path.dirname(__file__), 'static')] + \ @@ -796,6 +812,15 @@ return largetree def finish(self): + # copy image files + if self.env.images: + self.info(bold('copying images...'), nonl=1) + for src, dest in self.env.images.iteritems(): + self.info(' '+src, nonl=1) + shutil.copyfile(path.join(self.srcdir, src), + path.join(self.outdir, dest)) + self.info() + self.info(bold('copying TeX support files...')) staticdirname = path.join(path.dirname(__file__), 'texinputs') for filename in os.listdir(staticdirname): Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Tue Mar 25 13:32:03 2008 @@ -352,7 +352,7 @@ signode['ids'].append(fullname) signode['first'] = (not names) state.document.note_explicit_target(signode) - env.note_descref(fullname, desctype) + env.note_descref(fullname, desctype, lineno) names.append(name) env.note_index_entry('single', Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Tue Mar 25 13:32:03 2008 @@ -57,7 +57,7 @@ # This is increased every time a new environment attribute is added # to properly invalidate pickle files. 
-ENV_VERSION = 19 +ENV_VERSION = 20 def walk_depth(node, depth, maxdepth): @@ -251,6 +251,7 @@ # (type, string, target, aliasname) self.versionchanges = {} # version -> list of # (type, docname, lineno, module, descname, content) + self.images = {} # absolute path -> unique filename # These are set while parsing a file self.docname = None # current document name @@ -269,9 +270,11 @@ self._warnfunc = func self.settings['warning_stream'] = RedirStream(func) - def warn(self, docname, msg): + def warn(self, docname, msg, lineno=None): if docname: - self._warnfunc(self.doc2path(docname) + ':: ' + msg) + if lineno is None: + lineno = '' + self._warnfunc('%s:%s: %s' % (self.doc2path(docname), lineno, msg)) else: self._warnfunc('GLOBAL:: ' + msg) @@ -420,6 +423,12 @@ self.warn(None, 'master file %s not found' % self.doc2path(config.master_doc)) + # remove all non-existing images from inventory + for imgsrc in self.images.keys(): + if not os.access(path.join(self.srcdir, imgsrc), os.R_OK): + del self.images[imgsrc] + + # --------- SINGLE FILE BUILDING ------------------------------------------- def read_doc(self, docname, src_path=None, save_parsed=True, app=None): @@ -436,6 +445,7 @@ settings_overrides=self.settings, reader=MyStandaloneReader()) self.process_dependencies(docname, doctree) + self.process_images(docname, doctree) self.process_metadata(docname, doctree) self.create_title_from(docname, doctree) self.note_labels_from(docname, doctree) @@ -482,11 +492,37 @@ deps = doctree.settings.record_dependencies if not deps: return - basename = path.dirname(self.doc2path(docname, base=None)) + docdir = path.dirname(self.doc2path(docname, base=None)) for dep in deps.list: - dep = path.join(basename, dep) + dep = path.join(docdir, dep) self.dependencies.setdefault(docname, set()).add(dep) + def process_images(self, docname, doctree): + """ + Process and rewrite image URIs. + """ + docdir = path.dirname(self.doc2path(docname, base=None)) + for node in doctree.traverse(nodes.image): + imguri = node['uri'] + if imguri.find('://') != -1: + self.warn(docname, 'Nonlocal image URI found: %s' % imguri, node.line) + else: + imgpath = path.normpath(path.join(docdir, imguri)) + node['uri'] = imgpath + self.dependencies.setdefault(docname, set()).add(imgpath) + if not os.access(path.join(self.srcdir, imgpath), os.R_OK): + self.warn(docname, 'Image file not readable: %s' % imguri, node.line) + if imgpath in self.images: + continue + names = set(self.images.values()) + uniquename = path.basename(imgpath) + base, ext = path.splitext(uniquename) + i = 0 + while uniquename in names: + i += 1 + uniquename = '%s%s%s' % (base, i, ext) + self.images[imgpath] = uniquename + def process_metadata(self, docname, doctree): """ Process the docinfo part of the doctree as metadata. 
@@ -527,6 +563,8 @@ if not explicit: continue labelid = document.nameids[name] + if labelid is None: + continue node = document.ids[labelid] if name.isdigit() or node.has_key('refuri') or \ node.tagname.startswith('desc_'): @@ -535,7 +573,8 @@ continue if name in self.labels: self.warn(docname, 'duplicate label %s, ' % name + - 'other instance in %s' % self.doc2path(self.labels[name][0])) + 'other instance in %s' % self.doc2path(self.labels[name][0]), + node.line) self.anonlabels[name] = docname, labelid if not isinstance(node, nodes.section): # anonymous-only labels @@ -616,11 +655,12 @@ # ------- # these are called from docutils directives and therefore use self.docname # - def note_descref(self, fullname, desctype): + def note_descref(self, fullname, desctype, line): if fullname in self.descrefs: self.warn(self.docname, 'duplicate canonical description name %s, ' % fullname + - 'other instance in %s' % self.doc2path(self.descrefs[fullname][0])) + 'other instance in %s' % self.doc2path(self.descrefs[fullname][0]), + line) self.descrefs[fullname] = (self.docname, desctype) def note_module(self, modname, synopsis, platform, deprecated): @@ -780,7 +820,8 @@ docname, labelid = self.reftargets.get((typ, target), ('', '')) if not docname: if typ == 'term': - self.warn(fromdocname, 'term not in glossary: %s' % target) + self.warn(fromdocname, 'term not in glossary: %s' % target, + node.line) newnode = contnode else: newnode = nodes.reference('', '') Modified: doctools/trunk/sphinx/htmlwriter.py ============================================================================== --- doctools/trunk/sphinx/htmlwriter.py (original) +++ doctools/trunk/sphinx/htmlwriter.py Tue Mar 25 13:32:03 2008 @@ -10,6 +10,7 @@ """ import sys +from os import path from docutils import nodes from docutils.writers.html4css1 import Writer, HTMLTranslator as BaseTranslator @@ -246,6 +247,15 @@ def depart_highlightlang(self, node): pass + # overwritten + def visit_image(self, node): + olduri = node['uri'] + # rewrite the URI if the environment knows about it + if olduri in self.builder.env.images: + node['uri'] = path.join(self.builder.imgpath, + self.builder.env.images[olduri]) + BaseTranslator.visit_image(self, node) + def visit_toctree(self, node): # this only happens when formatting a toc from env.tocs -- in this # case we don't want to include the subtree Modified: doctools/trunk/sphinx/latexwriter.py ============================================================================== --- doctools/trunk/sphinx/latexwriter.py (original) +++ doctools/trunk/sphinx/latexwriter.py Tue Mar 25 13:32:03 2008 @@ -40,6 +40,15 @@ \end{document} ''' +GRAPHICX = r''' +%% Check if we are compiling under latex or pdflatex. 
+\ifx\pdftexversion\undefined + \usepackage{graphicx} +\else + \usepackage[pdftex]{graphicx} +\fi +''' + class LaTeXWriter(writers.Writer): @@ -118,11 +127,14 @@ self.first_document = 1 self.this_is_the_title = 1 self.literal_whitespace = 0 + self.need_graphicx = 0 def astext(self): return (HEADER % self.options) + \ (self.options['modindex'] and '\\makemodindex\n' or '') + \ - self.highlighter.get_stylesheet() + '\n\n' + \ + self.highlighter.get_stylesheet() + \ + (self.need_graphicx and GRAPHICX or '') + \ + '\n\n' + \ u''.join(self.body) + \ (self.options['modindex'] and '\\printmodindex\n' or '') + \ (FOOTER % self.options) @@ -498,6 +510,49 @@ def depart_module(self, node): pass + def visit_image(self, node): + self.need_graphicx = 1 + attrs = node.attributes + pre = [] # in reverse order + post = [] + include_graphics_options = "" + inline = isinstance(node.parent, nodes.TextElement) + if attrs.has_key('scale'): + # Could also be done with ``scale`` option to + # ``\includegraphics``; doing it this way for consistency. + pre.append('\\scalebox{%f}{' % (attrs['scale'] / 100.0,)) + post.append('}') + if attrs.has_key('width'): + include_graphics_options = '[width=%s]' % attrs['width'] + if attrs.has_key('align'): + align_prepost = { + # By default latex aligns the top of an image. + (1, 'top'): ('', ''), + (1, 'middle'): ('\\raisebox{-0.5\\height}{', '}'), + (1, 'bottom'): ('\\raisebox{-\\height}{', '}'), + (0, 'center'): ('{\\hfill', '\\hfill}'), + # These 2 don't exactly do the right thing. The image should + # be floated alongside the paragraph. See + # http://www.w3.org/TR/html4/struct/objects.html#adef-align-IMG + (0, 'left'): ('{', '\\hfill}'), + (0, 'right'): ('{\\hfill', '}'),} + try: + pre.append(align_prepost[inline, attrs['align']][0]) + post.append(align_prepost[inline, attrs['align']][1]) + except KeyError: + pass + if not inline: + pre.append('\n') + post.append('\n') + pre.reverse() + self.body.extend(pre) + # XXX: for now, don't fiddle around with graphics formats + uri = self.builder.env.images.get(node['uri'], node['uri']) + self.body.append('\\includegraphics%s{%s}' % (include_graphics_options, uri)) + self.body.extend(post) + def depart_image(self, node): + pass + def visit_note(self, node): self.body.append('\n\\begin{notice}[note]') def depart_note(self, node): Modified: doctools/trunk/sphinx/roles.py ============================================================================== --- doctools/trunk/sphinx/roles.py (original) +++ doctools/trunk/sphinx/roles.py Tue Mar 25 13:32:03 2008 @@ -120,6 +120,8 @@ # we want a cross-reference, create the reference node pnode = addnodes.pending_xref(rawtext, reftype=typ, refcaption=False, modname=env.currmodule, classname=env.currclass) + # we may need the line number for warnings + pnode.line = lineno innertext = text # special actions for Python object cross-references if typ in ('data', 'exc', 'func', 'class', 'const', 'attr', 'meth', 'mod'): From buildbot at python.org Tue Mar 25 14:07:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 13:07:47 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080325130747.5CA031E4002@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1161 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_winsound ====================================================================== ERROR: test_alias_asterisk (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 86, in test_alias_asterisk winsound.PlaySound('SystemAsterisk', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exclamation (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 96, in test_alias_exclamation winsound.PlaySound('SystemExclamation', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exit (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 106, in test_alias_exit winsound.PlaySound('SystemExit', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_hand (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 116, in test_alias_hand winsound.PlaySound('SystemHand', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_question (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 126, in test_alias_question winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS) RuntimeError: Failed to play sound sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 15:33:23 2008 From: python-checkins at python.org (mark.dickinson) Date: Tue, 25 Mar 2008 15:33:23 +0100 (CET) Subject: [Python-checkins] r61892 - in python/trunk: Lib/decimal.py Lib/test/test_decimal.py Misc/NEWS Message-ID: <20080325143323.6DC1F1E401F@bag.python.org> Author: mark.dickinson Date: Tue Mar 25 15:33:23 2008 New Revision: 61892 Modified: python/trunk/Lib/decimal.py python/trunk/Lib/test/test_decimal.py python/trunk/Misc/NEWS Log: Issue #2478: Decimal(sqrt(0)) failed when the decimal context was not explicitly supplied. 
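
The fix simply moves the fallback to the thread-local default context ahead of the special-value and zero shortcuts, so ``Decimal(0).sqrt()`` no longer reaches those branches with ``context=None``. A quick way to exercise the fixed path, mirroring the ``test_implicit_context`` test added in this commit:

    from decimal import Decimal, getcontext

    # sqrt() without an explicit context now falls back to getcontext()
    # before taking the zero shortcut, so both spellings agree.
    c = getcontext()
    assert str(Decimal(0).sqrt()) == str(c.sqrt(Decimal(0)))
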
Modified: python/trunk/Lib/decimal.py ============================================================================== --- python/trunk/Lib/decimal.py (original) +++ python/trunk/Lib/decimal.py Tue Mar 25 15:33:23 2008 @@ -2453,6 +2453,9 @@ def sqrt(self, context=None): """Return the square root of self.""" + if context is None: + context = getcontext() + if self._is_special: ans = self._check_nans(context=context) if ans: @@ -2466,9 +2469,6 @@ ans = _dec_from_triple(self._sign, '0', self._exp // 2) return ans._fix(context) - if context is None: - context = getcontext() - if self._sign == 1: return context._raise_error(InvalidOperation, 'sqrt(-x), x > 0') Modified: python/trunk/Lib/test/test_decimal.py ============================================================================== --- python/trunk/Lib/test/test_decimal.py (original) +++ python/trunk/Lib/test/test_decimal.py Tue Mar 25 15:33:23 2008 @@ -1315,6 +1315,12 @@ d = d1.max(d2) self.assertTrue(type(d) is Decimal) + def test_implicit_context(self): + # Check results when context given implicitly. (Issue 2478) + c = getcontext() + self.assertEqual(str(Decimal(0).sqrt()), + str(c.sqrt(Decimal(0)))) + class DecimalPythonAPItests(unittest.TestCase): Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 25 15:33:23 2008 @@ -72,6 +72,8 @@ Library ------- +- Issue #2478: fix failure of decimal.Decimal(0).sqrt() + - Issue #2432: give DictReader the dialect and line_num attributes advertised in the docs. From python-checkins at python.org Tue Mar 25 15:35:26 2008 From: python-checkins at python.org (mark.dickinson) Date: Tue, 25 Mar 2008 15:35:26 +0100 (CET) Subject: [Python-checkins] r61893 - in python/branches/release25-maint: Lib/decimal.py Lib/test/test_decimal.py Misc/NEWS Message-ID: <20080325143526.4332E1E4029@bag.python.org> Author: mark.dickinson Date: Tue Mar 25 15:35:25 2008 New Revision: 61893 Modified: python/branches/release25-maint/Lib/decimal.py python/branches/release25-maint/Lib/test/test_decimal.py python/branches/release25-maint/Misc/NEWS Log: Decimal.sqrt(0) failed when the context was not explicitly supplied. Modified: python/branches/release25-maint/Lib/decimal.py ============================================================================== --- python/branches/release25-maint/Lib/decimal.py (original) +++ python/branches/release25-maint/Lib/decimal.py Tue Mar 25 15:35:25 2008 @@ -2316,6 +2316,9 @@ def sqrt(self, context=None): """Return the square root of self.""" + if context is None: + context = getcontext() + if self._is_special: ans = self._check_nans(context=context) if ans: @@ -2329,9 +2332,6 @@ ans = _dec_from_triple(self._sign, '0', self._exp // 2) return ans._fix(context) - if context is None: - context = getcontext() - if self._sign == 1: return context._raise_error(InvalidOperation, 'sqrt(-x), x > 0') Modified: python/branches/release25-maint/Lib/test/test_decimal.py ============================================================================== --- python/branches/release25-maint/Lib/test/test_decimal.py (original) +++ python/branches/release25-maint/Lib/test/test_decimal.py Tue Mar 25 15:35:25 2008 @@ -1192,6 +1192,12 @@ d = d1.max(d2) self.assertTrue(type(d) is Decimal) + def test_implicit_context(self): + # Check results when context given implicitly. 
(Issue 2478) + c = getcontext() + self.assertEqual(str(Decimal(0).sqrt()), + str(c.sqrt(Decimal(0)))) + class DecimalPythonAPItests(unittest.TestCase): Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Tue Mar 25 15:35:25 2008 @@ -27,6 +27,8 @@ Library ------- +- Issue #2478: fix failure of decimal.Decimal(0).sqrt() + - Issue #2432: give DictReader the dialect and line_num attributes advertised in the docs. From python-checkins at python.org Tue Mar 25 16:22:25 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 16:22:25 +0100 (CET) Subject: [Python-checkins] r61895 - in doctools/trunk: CHANGES doc/templating.rst sphinx/builder.py sphinx/environment.py sphinx/htmlwriter.py sphinx/latexwriter.py sphinx/util/__init__.py Message-ID: <20080325152225.8C6761E4021@bag.python.org> Author: georg.brandl Date: Tue Mar 25 16:22:25 2008 New Revision: 61895 Modified: doctools/trunk/CHANGES doctools/trunk/doc/templating.rst doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/environment.py doctools/trunk/sphinx/htmlwriter.py doctools/trunk/sphinx/latexwriter.py doctools/trunk/sphinx/util/__init__.py Log: Rebuild all HTML files in case of a template change. Also, record image docnames in order to be able to delete image records from the env. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 16:22:25 2008 @@ -13,6 +13,8 @@ * sphinx.ext.autodoc: Record files from which docstrings are included as dependencies. +* sphinx.builder: Rebuild all HTML files in case of a template change. + * sphinx.builder: Handle unavailability of TOC relations (previous/ next chapter) more gracefully in the HTML builder. Modified: doctools/trunk/doc/templating.rst ============================================================================== --- doctools/trunk/doc/templating.rst (original) +++ doctools/trunk/doc/templating.rst Tue Mar 25 16:22:25 2008 @@ -12,7 +12,7 @@ that you can overwrite only specific blocks within a template, customizing it while also keeping the changes at a minimum. -Inheritance is done via two directives, ``extends`` and ``block``. +Inheritance is done via two (Jinja) directives, ``extends`` and ``block``. .. 
template path blocks Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Tue Mar 25 16:22:25 2008 @@ -25,7 +25,8 @@ from docutils.readers.doctree import Reader as DoctreeReader from sphinx import addnodes -from sphinx.util import (get_matching_docs, ensuredir, relative_uri, SEP, os_path) +from sphinx.util import (get_matching_docs, mtimes_of_files, + ensuredir, relative_uri, SEP, os_path) from sphinx.htmlhelp import build_hhx from sphinx.htmlwriter import HTMLWriter, HTMLTranslator, SmartyPantsHTMLTranslator from sphinx.latexwriter import LaTeXWriter @@ -83,6 +84,7 @@ base_templates_path = path.join(path.dirname(__file__), 'templates') ext_templates_path = [path.join(self.srcdir, dir) for dir in self.config.templates_path] + self.templates_path = [base_templates_path] + ext_templates_path loader = SphinxFileSystemLoader(base_templates_path, ext_templates_path) self.jinja_env = Environment(loader=loader, # disable traceback, more likely that something @@ -277,8 +279,9 @@ Builds standalone HTML docs. """ name = 'html' - copysource = True + out_suffix = '.html' + indexer_format = 'json' def init(self): """Load templates.""" @@ -485,7 +488,7 @@ if self.env.images: self.info(bold('copying images...'), nonl=1) ensuredir(path.join(self.outdir, '_images')) - for src, dest in self.env.images.iteritems(): + for src, (_, dest) in self.env.images.iteritems(): self.info(' '+src, nonl=1) shutil.copyfile(path.join(self.srcdir, src), path.join(self.outdir, '_images', dest)) @@ -510,29 +513,26 @@ # dump the search index self.handle_finish() - # --------- these are overwritten by the Pickle builder - - def get_target_uri(self, docname, typ=None): - return docname + '.html' - def get_outdated_docs(self): + template_mtime = max(mtimes_of_files(self.templates_path, '.html')) for docname in self.env.found_docs: - targetname = self.env.doc2path(docname, self.outdir, '.html') + if docname not in self.env.all_docs: + yield docname + continue + targetname = self.env.doc2path(docname, self.outdir, self.out_suffix) try: targetmtime = path.getmtime(targetname) - except: + except Exception: targetmtime = 0 - if docname not in self.env.all_docs: + srcmtime = max(path.getmtime(self.env.doc2path(docname)), template_mtime) + if srcmtime > targetmtime: yield docname - elif path.getmtime(self.env.doc2path(docname)) > targetmtime: - yield docname - def load_indexer(self, docnames): try: - f = open(path.join(self.outdir, 'searchindex.json'), 'r') + f = open(path.join(self.outdir, 'searchindex.'+self.indexer_format), 'r') try: - self.indexer.load(f, 'json') + self.indexer.load(f, self.indexer_format) finally: f.close() except (IOError, OSError): @@ -545,6 +545,11 @@ if self.indexer is not None and title: self.indexer.feed(pagename, title, doctree) + # --------- these are overwritten by the Pickle builder + + def get_target_uri(self, docname, typ=None): + return docname + '.html' + def handle_page(self, pagename, addctx, templatename='page.html'): ctx = self.globalcontext.copy() ctx['current_page_name'] = pagename @@ -593,20 +598,12 @@ Builds HTML docs without rendering templates. 
""" name = 'pickle' + out_suffix = '.fpickle' + indexer_format = 'pickle' def init(self): self.init_translator_class() - def get_outdated_docs(self): - for docname in self.env.found_docs: - targetname = self.env.doc2path(docname, self.outdir, '.fpickle') - try: - targetmtime = path.getmtime(targetname) - except: - targetmtime = 0 - if path.getmtime(self.env.doc2path(docname)) > targetmtime: - yield docname - def get_target_uri(self, docname, typ=None): if docname == 'index': return '' @@ -614,23 +611,6 @@ return docname[:-5] # up to sep return docname + SEP - def load_indexer(self, docnames): - try: - f = open(path.join(self.outdir, 'searchindex.pickle'), 'r') - try: - self.indexer.load(f, 'pickle') - finally: - f.close() - except (IOError, OSError): - pass - # delete all entries for files that will be rebuilt - self.indexer.prune(set(self.env.all_docs) - set(docnames)) - - def index_page(self, pagename, doctree, title): - # only index pages with title - if self.indexer is not None and title: - self.indexer.feed(pagename, title, doctree) - def handle_page(self, pagename, ctx, templatename='page.html'): ctx['current_page_name'] = pagename sidebarfile = self.config.html_sidebars.get(pagename, '') @@ -815,7 +795,7 @@ # copy image files if self.env.images: self.info(bold('copying images...'), nonl=1) - for src, dest in self.env.images.iteritems(): + for src, (_, dest) in self.env.images.iteritems(): self.info(' '+src, nonl=1) shutil.copyfile(path.join(self.srcdir, src), path.join(self.outdir, dest)) Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Tue Mar 25 16:22:25 2008 @@ -55,9 +55,9 @@ 'sectsubtitle_xform': False, } -# This is increased every time a new environment attribute is added -# to properly invalidate pickle files. -ENV_VERSION = 20 +# This is increased every time an environment attribute is added +# or changed to properly invalidate pickle files. +ENV_VERSION = 21 def walk_depth(node, depth, maxdepth): @@ -251,7 +251,7 @@ # (type, string, target, aliasname) self.versionchanges = {} # version -> list of # (type, docname, lineno, module, descname, content) - self.images = {} # absolute path -> unique filename + self.images = {} # absolute path -> (docnames, unique filename) # These are set while parsing a file self.docname = None # current document name @@ -287,13 +287,14 @@ self.titles.pop(docname, None) self.tocs.pop(docname, None) self.toc_num_entries.pop(docname, None) + self.filemodules.pop(docname, None) + self.indexentries.pop(docname, None) for subfn, fnset in self.files_to_rebuild.iteritems(): fnset.discard(docname) for fullname, (fn, _) in self.descrefs.items(): if fn == docname: del self.descrefs[fullname] - self.filemodules.pop(docname, None) for modname, (fn, _, _, _) in self.modules.items(): if fn == docname: del self.modules[modname] @@ -303,10 +304,13 @@ for key, (fn, _) in self.reftargets.items(): if fn == docname: del self.reftargets[key] - self.indexentries.pop(docname, None) for version, changes in self.versionchanges.items(): new = [change for change in changes if change[1] != docname] changes[:] = new + for fullpath, (docs, _) in self.images.items(): + docs.discard(docname) + if not docs: + del self.images[fullpath] def doc2path(self, docname, base=True, suffix=None): """ @@ -501,6 +505,7 @@ """ Process and rewrite image URIs. 
""" + existing_names = set(v[1] for v in self.images.itervalues()) docdir = path.dirname(self.doc2path(docname, base=None)) for node in doctree.traverse(nodes.image): imguri = node['uri'] @@ -513,15 +518,16 @@ if not os.access(path.join(self.srcdir, imgpath), os.R_OK): self.warn(docname, 'Image file not readable: %s' % imguri, node.line) if imgpath in self.images: + self.images[imgpath][0].add(docname) continue - names = set(self.images.values()) uniquename = path.basename(imgpath) base, ext = path.splitext(uniquename) i = 0 - while uniquename in names: + while uniquename in existing_names: i += 1 uniquename = '%s%s%s' % (base, i, ext) - self.images[imgpath] = uniquename + self.images[imgpath] = (set([docname]), uniquename) + existing_names.add(uniquename) def process_metadata(self, docname, doctree): """ Modified: doctools/trunk/sphinx/htmlwriter.py ============================================================================== --- doctools/trunk/sphinx/htmlwriter.py (original) +++ doctools/trunk/sphinx/htmlwriter.py Tue Mar 25 16:22:25 2008 @@ -253,7 +253,7 @@ # rewrite the URI if the environment knows about it if olduri in self.builder.env.images: node['uri'] = path.join(self.builder.imgpath, - self.builder.env.images[olduri]) + self.builder.env.images[olduri][1]) BaseTranslator.visit_image(self, node) def visit_toctree(self, node): Modified: doctools/trunk/sphinx/latexwriter.py ============================================================================== --- doctools/trunk/sphinx/latexwriter.py (original) +++ doctools/trunk/sphinx/latexwriter.py Tue Mar 25 16:22:25 2008 @@ -547,7 +547,10 @@ pre.reverse() self.body.extend(pre) # XXX: for now, don't fiddle around with graphics formats - uri = self.builder.env.images.get(node['uri'], node['uri']) + if node['uri'] in self.builder.env.images: + uri = self.builder.env.images[node['uri']][1] + else: + uri = node['uri'] self.body.append('\\includegraphics%s{%s}' % (include_graphics_options, uri)) self.body.extend(post) def depart_image(self, node): Modified: doctools/trunk/sphinx/util/__init__.py ============================================================================== --- doctools/trunk/sphinx/util/__init__.py (original) +++ doctools/trunk/sphinx/util/__init__.py Tue Mar 25 16:22:25 2008 @@ -54,6 +54,8 @@ """ Get all file names (without suffix) matching a suffix in a directory, recursively. + + Exclude files in *exclude*, prune directories in *prune*. """ pattern = '*' + suffix # dirname is a normalized absolute path. 
@@ -75,6 +77,17 @@ yield qualified_name +def mtimes_of_files(dirnames, suffix): + for dirname in dirnames: + for root, dirs, files in os.walk(dirname): + for sfile in files: + if sfile.endswith(suffix): + try: + yield path.getmtime(path.join(root, sfile)) + except EnvironmentError: + pass + + def shorten_result(text='', keywords=[], maxlen=240, fuzz=60): if not text: text = '' From python-checkins at python.org Tue Mar 25 16:43:30 2008 From: python-checkins at python.org (collin.winter) Date: Tue, 25 Mar 2008 16:43:30 +0100 (CET) Subject: [Python-checkins] r61897 - sandbox/trunk/2to3/test.py Message-ID: <20080325154330.4890D1E4013@bag.python.org> Author: collin.winter Date: Tue Mar 25 16:43:29 2008 New Revision: 61897 Modified: sandbox/trunk/2to3/test.py Log: Typo fix, style cleanup in test.py Modified: sandbox/trunk/2to3/test.py ============================================================================== --- sandbox/trunk/2to3/test.py (original) +++ sandbox/trunk/2to3/test.py Tue Mar 25 16:43:29 2008 @@ -11,7 +11,7 @@ import lib2to3.tests.support from sys import exit, argv -if '-h' in argv or '--help' in argv or len(argv) > 2: +if "-h" in argv or "--help" in argv or len(argv) > 2: print "Usage: %s [-h] [test suite[.test class]]" %(argv[0]) print "default : run all tests in lib2to3/tests/test_*.py" print "test suite: run tests in lib2to3/tests/" @@ -27,7 +27,7 @@ exit(1) if argv[1].find(".") == -1: - # Just hte module was specified, load all the tests + # Just the module was specified, load all the tests suite = unittest.TestLoader().loadTestsFromModule(mod) else: # A class was specified, load that From python-checkins at python.org Tue Mar 25 16:43:57 2008 From: python-checkins at python.org (collin.winter) Date: Tue, 25 Mar 2008 16:43:57 +0100 (CET) Subject: [Python-checkins] r61898 - sandbox/trunk/2to3/find_pattern.py Message-ID: <20080325154357.732AC1E402C@bag.python.org> Author: collin.winter Date: Tue Mar 25 16:43:57 2008 New Revision: 61898 Modified: sandbox/trunk/2to3/find_pattern.py Log: Fix a busted import in find_pattern.py Modified: sandbox/trunk/2to3/find_pattern.py ============================================================================== --- sandbox/trunk/2to3/find_pattern.py (original) +++ sandbox/trunk/2to3/find_pattern.py Tue Mar 25 16:43:57 2008 @@ -48,7 +48,7 @@ # Local imports from lib2to3 import pytree -from pgen2 import driver +from lib2to3.pgen2 import driver from lib2to3.pygram import python_symbols, python_grammar driver = driver.Driver(python_grammar, convert=pytree.convert) From buildbot at python.org Tue Mar 25 16:47:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 15:47:13 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080325154713.C31341E4013@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1077 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 17:53:41 2008 From: python-checkins at python.org (collin.winter) Date: Tue, 25 Mar 2008 17:53:41 +0100 (CET) Subject: [Python-checkins] r61899 - sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py Message-ID: <20080325165341.5A0B31E4008@bag.python.org> Author: collin.winter Date: Tue Mar 25 17:53:41 2008 New Revision: 61899 Modified: sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py Log: Add a missing explicit fixer to test_all_fixers. Modified: sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py (original) +++ sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py Tue Mar 25 17:53:41 2008 @@ -27,7 +27,7 @@ class Test_all(support.TestCase): def setUp(self): - options = Options(fix=["all", "idioms", "ws_comma"], + options = Options(fix=["all", "idioms", "ws_comma", "buffer"], print_function=False) self.refactor = refactor.RefactoringTool(options) From buildbot at python.org Tue Mar 25 18:34:28 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 17:34:28 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 2.5 Message-ID: <20080325173428.2BB601E4006@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%202.5/builds/473 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/2.5.norwitz-tru64/build/Lib/test/test_socket.py", line 879, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 18:36:43 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 18:36:43 +0100 (CET) Subject: [Python-checkins] r61900 - python/trunk/Misc/developers.txt Message-ID: <20080325173643.DF1021E4006@bag.python.org> Author: georg.brandl Date: Tue Mar 25 18:36:43 2008 New Revision: 61900 Modified: python/trunk/Misc/developers.txt Log: Add Benjamin. Modified: python/trunk/Misc/developers.txt ============================================================================== --- python/trunk/Misc/developers.txt (original) +++ python/trunk/Misc/developers.txt Tue Mar 25 18:36:43 2008 @@ -17,6 +17,9 @@ Permissions History ------------------- +- Benjamin Peterson was given SVN access on 25 March 2008 by Georg + Brandl, for bug triage work. + - Jerry Seutter was given SVN access on 20 March 2008 by BAC, for general contributions to Python. 
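Looking back at the r61895 doctools checkin above: the template-aware rebuild logic added to sphinx/builder.py amounts to "rebuild a document if it has no output yet, or if either its source file or any template is newer than the existing output". The following is a rough, self-contained sketch of that staleness check using only the standard library; the function and argument names (outdated_docs, sources, outputs, template_dirs) are illustrative stand-ins rather than the actual Builder/environment API, while mtimes_of_files follows the helper added to sphinx/util in the same checkin.

    import os
    from os import path

    def mtimes_of_files(dirnames, suffix):
        # Yield the mtimes of all files with the given suffix below the
        # given directories (same idea as the sphinx.util helper).
        for dirname in dirnames:
            for root, dirs, files in os.walk(dirname):
                for sfile in files:
                    if sfile.endswith(suffix):
                        try:
                            yield path.getmtime(path.join(root, sfile))
                        except EnvironmentError:
                            pass

    def outdated_docs(sources, outputs, template_dirs):
        # sources/outputs map a docname to its source/output file path
        # (illustrative stand-ins for the environment's doc2path logic).
        mtimes = list(mtimes_of_files(template_dirs, '.html'))
        template_mtime = max(mtimes) if mtimes else 0
        for docname, srcfile in sources.items():
            outfile = outputs.get(docname)
            if outfile is None or not path.exists(outfile):
                # never built before
                yield docname
                continue
            if max(path.getmtime(srcfile), template_mtime) > path.getmtime(outfile):
                # source or a template changed since the last build
                yield docname

In the checkin itself, the Pickle builder drops its own get_outdated_docs() and reuses the HTML builder's version of this logic through the shared out_suffix attribute.
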
From buildbot at python.org Tue Mar 25 18:56:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 17:56:13 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080325175614.0DB471E4006@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/64 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\threading.py", line 490, in _bootstrap_inner self.run() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\threading.py", line 446, in run self._target(*self._args, **self._kwargs) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_smtplib.py", line 116, in debugging_server poll_fun(0.01, asyncore.socket_map) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine 6 tests failed: test_mailbox test_smtplib test_ssl test_urllib2net test_urllibnet test_xmlrpc_net ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 From MAILER-DAEMON Fri Apr 07 18:00:43 2000 From: foo 1 From MAILER-DAEMON Fri Apr 07 18:00:43 2000 From: foo 2 From MAILER-DAEMON Fri Apr 07 18:00:43 2000 From: foo 3 From MAIL' ====================================================================== ERROR: 
test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Fri Apr 07 18:00:52 2000 From: foo 1 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Fri Apr 07 18:00:52 2000 From: foo 2 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Fri Apr 07 18' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 941, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by 
andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:37 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:37 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:37 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:37 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:38 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== 
FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:40 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:43 2000\n\nFrom: foo\n\n1' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:44 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:45 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:48 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:51 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Fri Apr 07 18:00:52 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) 
AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 
(EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove 
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 822, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 843, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 865, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 738, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: 
Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_bad_address (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllib2net.py", line 160, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_bad_address (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllibnet.py", line 142, in test_bad_address urllib.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 38, in test_current_time self.assert_(delta.days <= 1) AssertionError: None sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 19:31:06 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 19:31:06 +0100 (CET) Subject: [Python-checkins] r61901 - doctools/trunk/doc/Makefile Message-ID: <20080325183106.7B7A51E4006@bag.python.org> Author: georg.brandl Date: Tue Mar 25 19:31:06 2008 New Revision: 61901 Modified: doctools/trunk/doc/Makefile Log: Rename pickle builder target. Modified: doctools/trunk/doc/Makefile ============================================================================== --- doctools/trunk/doc/Makefile (original) +++ doctools/trunk/doc/Makefile Tue Mar 25 19:31:06 2008 @@ -29,12 +29,12 @@ @echo @echo "Build finished. The HTML pages are in _build/html." -web: - mkdir -p _build/web _build/doctrees - $(SPHINXBUILD) -b web $(ALLSPHINXOPTS) _build/web +pickle: + mkdir -p _build/pickle _build/doctrees + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle @echo @echo "Build finished; now you can run" - @echo " python -m sphinx.web _build/web" + @echo " python -m sphinx.web _build/pickle" @echo "to start the server." htmlhelp: From python-checkins at python.org Tue Mar 25 19:32:23 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 19:32:23 +0100 (CET) Subject: [Python-checkins] r61902 - in doctools/trunk: CHANGES sphinx/builder.py sphinx/templates/layout.html sphinx/templates/modindex.html sphinx/templates/search.html Message-ID: <20080325183223.DEC2F1E401F@bag.python.org> Author: georg.brandl Date: Tue Mar 25 19:32:23 2008 New Revision: 61902 Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/templates/layout.html doctools/trunk/sphinx/templates/modindex.html doctools/trunk/sphinx/templates/search.html Log: Rename static to _static and consistently name _sources. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 19:32:23 2008 @@ -4,6 +4,9 @@ * sphinx.htmlwriter, sphinx.latexwriter: Support the ``.. 
image::`` directive by copying image files to the output directory. +* sphinx.builder: Consistently name "special" HTML output directories + with a leading underscore; this means ``_sources`` and ``_static``. + * sphinx.environment: Take dependent files into account when collecting the set of outdated sources. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Tue Mar 25 19:32:23 2008 @@ -496,7 +496,7 @@ # copy static files self.info(bold('copying static files...')) - ensuredir(path.join(self.outdir, 'static')) + ensuredir(path.join(self.outdir, '_static')) staticdirnames = [path.join(path.dirname(__file__), 'static')] + \ [path.join(self.srcdir, spath) for spath in self.config.html_static_path] @@ -504,9 +504,9 @@ for filename in os.listdir(staticdirname): if not filename.startswith('.'): shutil.copyfile(path.join(staticdirname, filename), - path.join(self.outdir, 'static', filename)) + path.join(self.outdir, '_static', filename)) # add pygments style file - f = open(path.join(self.outdir, 'static', 'pygments.css'), 'w') + f = open(path.join(self.outdir, '_static', 'pygments.css'), 'w') f.write(PygmentsBridge('html', self.config.pygments_style).get_stylesheet()) f.close() @@ -514,7 +514,10 @@ self.handle_finish() def get_outdated_docs(self): - template_mtime = max(mtimes_of_files(self.templates_path, '.html')) + if self.templates_path: + template_mtime = max(mtimes_of_files(self.templates_path, '.html')) + else: + template_mtime = 0 for docname in self.env.found_docs: if docname not in self.env.all_docs: yield docname @@ -603,6 +606,8 @@ def init(self): self.init_translator_class() + # no templates used, but get_outdated_docs() needs this attribute + self.templates_path = [] def get_target_uri(self, docname, typ=None): if docname == 'index': @@ -627,7 +632,7 @@ # if there is a source file, copy the source file for the # "show source" link if ctx.get('sourcename'): - source_name = path.join(self.outdir, 'sources', + source_name = path.join(self.outdir, '_sources', os_path(ctx['sourcename'])) ensuredir(path.dirname(source_name)) shutil.copyfile(self.env.doc2path(pagename), source_name) Modified: doctools/trunk/sphinx/templates/layout.html ============================================================================== --- doctools/trunk/sphinx/templates/layout.html (original) +++ doctools/trunk/sphinx/templates/layout.html Tue Mar 25 19:32:23 2008 @@ -46,8 +46,8 @@ {%- endfor %} {%- else %} - - + + {%- endif %} {%- if builder != 'htmlhelp' %} - - - + + + {%- endif %} {%- block rellinks %} {%- if hasdoc('about') %} Modified: doctools/trunk/sphinx/templates/modindex.html ============================================================================== --- doctools/trunk/sphinx/templates/modindex.html (original) +++ doctools/trunk/sphinx/templates/modindex.html Tue Mar 25 19:32:23 2008 @@ -32,7 +32,7 @@ {%- else -%}
                Name{% if collapse -%} - {%- endif %} {% if indent %}   {% endif %} Modified: doctools/trunk/sphinx/templates/search.html ============================================================================== --- doctools/trunk/sphinx/templates/search.html (original) +++ doctools/trunk/sphinx/templates/search.html Tue Mar 25 19:32:23 2008 @@ -1,7 +1,7 @@ {% extends "layout.html" %} {% set title = 'Search Documentation' %} {% block extrahead %} - + {% endblock %} {% block body %}

                Search Documentation

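The effect of the r61902 change above is that every static asset now lands in an ``_static`` directory (and copied sources in ``_sources``) under the HTML output root. A standalone sketch of that copying step, using only the standard library; the function name ``copy_static_files`` and the example paths are illustrative, not part of Sphinx:

    import os
    import shutil
    from os import path

    def copy_static_files(outdir, static_dirs):
        # Collect static files into <outdir>/_static, mirroring what the
        # HTML builder does after r61902 (the directory used to be called
        # "static", without the leading underscore).
        target = path.join(outdir, '_static')
        if not path.isdir(target):
            os.makedirs(target)
        for staticdir in static_dirs:
            for filename in os.listdir(staticdir):
                if filename.startswith('.'):
                    continue  # skip hidden files, as the builder does
                shutil.copyfile(path.join(staticdir, filename),
                                path.join(target, filename))

    # Hypothetical usage:
    # copy_static_files('_build/html', ['sphinx/static', 'mydocs/_static'])
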
                From python-checkins at python.org Tue Mar 25 19:34:11 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 19:34:11 +0100 (CET) Subject: [Python-checkins] r61903 - doctools/trunk/doc/_templates/indexsidebar.html doctools/trunk/doc/_templates/layout.html Message-ID: <20080325183411.EDD841E4006@bag.python.org> Author: georg.brandl Date: Tue Mar 25 19:34:11 2008 New Revision: 61903 Modified: doctools/trunk/doc/_templates/indexsidebar.html doctools/trunk/doc/_templates/layout.html Log: Add Jabber address. Modified: doctools/trunk/doc/_templates/indexsidebar.html ============================================================================== --- doctools/trunk/doc/_templates/indexsidebar.html (original) +++ doctools/trunk/doc/_templates/indexsidebar.html Tue Mar 25 19:34:11 2008 @@ -7,7 +7,8 @@

 Questions? Suggestions?

-Send them to <georg at python org>, or come to the
+Send them to <georg at python org>, contact the author
+via Jabber at <gbrandl at pocoo org> or come to the
 #python-docs channel on FreeNode.
 You can also open a bug at Python's bug tracker, using the "Documentation tools" category.

Modified: doctools/trunk/doc/_templates/layout.html
==============================================================================
--- doctools/trunk/doc/_templates/layout.html (original)
+++ doctools/trunk/doc/_templates/layout.html Tue Mar 25 19:34:11 2008
@@ -7,7 +7,7 @@
 {% block beforerelbar %}
-
+
                {% endblock %} From python-checkins at python.org Tue Mar 25 19:47:59 2008 From: python-checkins at python.org (mark.dickinson) Date: Tue, 25 Mar 2008 19:47:59 +0100 (CET) Subject: [Python-checkins] r61904 - in python/trunk: Lib/decimal.py Lib/test/test_decimal.py Misc/NEWS Message-ID: <20080325184759.6B82C1E4028@bag.python.org> Author: mark.dickinson Date: Tue Mar 25 19:47:59 2008 New Revision: 61904 Modified: python/trunk/Lib/decimal.py python/trunk/Lib/test/test_decimal.py python/trunk/Misc/NEWS Log: Issue #2482: Make sure that the coefficient of a Decimal instance is always stored as a str instance, even when that Decimal has been created from a unicode string. Modified: python/trunk/Lib/decimal.py ============================================================================== --- python/trunk/Lib/decimal.py (original) +++ python/trunk/Lib/decimal.py Tue Mar 25 19:47:59 2008 @@ -557,17 +557,17 @@ fracpart = m.group('frac') exp = int(m.group('exp') or '0') if fracpart is not None: - self._int = (intpart+fracpart).lstrip('0') or '0' + self._int = str((intpart+fracpart).lstrip('0') or '0') self._exp = exp - len(fracpart) else: - self._int = intpart.lstrip('0') or '0' + self._int = str(intpart.lstrip('0') or '0') self._exp = exp self._is_special = False else: diag = m.group('diag') if diag is not None: # NaN - self._int = diag.lstrip('0') + self._int = str(diag.lstrip('0')) if m.group('signal'): self._exp = 'N' else: Modified: python/trunk/Lib/test/test_decimal.py ============================================================================== --- python/trunk/Lib/test/test_decimal.py (original) +++ python/trunk/Lib/test/test_decimal.py Tue Mar 25 19:47:59 2008 @@ -434,6 +434,12 @@ self.assertEqual(str(Decimal('1.3E4 \n')), '1.3E+4') self.assertEqual(str(Decimal(' -7.89')), '-7.89') + #unicode strings should be permitted + self.assertEqual(str(Decimal(u'0E-017')), '0E-17') + self.assertEqual(str(Decimal(u'45')), '45') + self.assertEqual(str(Decimal(u'-Inf')), '-Infinity') + self.assertEqual(str(Decimal(u'NaN123')), 'NaN123') + def test_explicit_from_tuples(self): #zero @@ -1149,6 +1155,16 @@ self.assertEqual(str(d), '15.32') # str self.assertEqual(repr(d), "Decimal('15.32')") # repr + # result type of string methods should be str, not unicode + unicode_inputs = [u'123.4', u'0.5E2', u'Infinity', u'sNaN', + u'-0.0E100', u'-NaN001', u'-Inf'] + + for u in unicode_inputs: + d = Decimal(u) + self.assertEqual(type(str(d)), str) + self.assertEqual(type(repr(d)), str) + self.assertEqual(type(d.to_eng_string()), str) + def test_tonum_methods(self): #Test float, int and long methods. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Mar 25 19:47:59 2008 @@ -72,6 +72,10 @@ Library ------- +- Issue #2482: Make sure that the coefficient of a Decimal is always + stored as a str instance, not as a unicode instance. This ensures + that str(Decimal) is always an instance of str. 
+ - Issue #2478: fix failure of decimal.Decimal(0).sqrt() - Issue #2432: give DictReader the dialect and line_num attributes From python-checkins at python.org Tue Mar 25 19:58:13 2008 From: python-checkins at python.org (mark.dickinson) Date: Tue, 25 Mar 2008 19:58:13 +0100 (CET) Subject: [Python-checkins] r61906 - in python/branches/release25-maint: Lib/decimal.py Lib/test/test_decimal.py Misc/NEWS Message-ID: <20080325185813.ABA521E402D@bag.python.org> Author: mark.dickinson Date: Tue Mar 25 19:58:13 2008 New Revision: 61906 Modified: python/branches/release25-maint/Lib/decimal.py python/branches/release25-maint/Lib/test/test_decimal.py python/branches/release25-maint/Misc/NEWS Log: Issue #2482: Make sure that the coefficient of a Decimal instance is stored as a str instance rather than a unicode instance. Backported from Python 2.6 (see r61904). Modified: python/branches/release25-maint/Lib/decimal.py ============================================================================== --- python/branches/release25-maint/Lib/decimal.py (original) +++ python/branches/release25-maint/Lib/decimal.py Tue Mar 25 19:58:13 2008 @@ -549,17 +549,17 @@ fracpart = m.group('frac') exp = int(m.group('exp') or '0') if fracpart is not None: - self._int = (intpart+fracpart).lstrip('0') or '0' + self._int = str((intpart+fracpart).lstrip('0') or '0') self._exp = exp - len(fracpart) else: - self._int = intpart.lstrip('0') or '0' + self._int = str(intpart.lstrip('0') or '0') self._exp = exp self._is_special = False else: diag = m.group('diag') if diag is not None: # NaN - self._int = diag.lstrip('0') + self._int = str(diag.lstrip('0')) if m.group('signal'): self._exp = 'N' else: Modified: python/branches/release25-maint/Lib/test/test_decimal.py ============================================================================== --- python/branches/release25-maint/Lib/test/test_decimal.py (original) +++ python/branches/release25-maint/Lib/test/test_decimal.py Tue Mar 25 19:58:13 2008 @@ -429,6 +429,12 @@ #just not a number self.assertEqual(str(Decimal('ugly')), 'NaN') + #unicode strings should be permitted + self.assertEqual(str(Decimal(u'0E-017')), '0E-17') + self.assertEqual(str(Decimal(u'45')), '45') + self.assertEqual(str(Decimal(u'-Inf')), '-Infinity') + self.assertEqual(str(Decimal(u'NaN123')), 'NaN123') + def test_explicit_from_tuples(self): #zero @@ -1032,6 +1038,16 @@ self.assertEqual(str(d), '15.32') # str self.assertEqual(repr(d), 'Decimal("15.32")') # repr + # result type of string methods should be str, not unicode + unicode_inputs = [u'123.4', u'0.5E2', u'Infinity', u'sNaN', + u'-0.0E100', u'-NaN001', u'-Inf'] + + for u in unicode_inputs: + d = Decimal(u) + self.assertEqual(type(str(d)), str) + self.assertEqual(type(repr(d)), str) + self.assertEqual(type(d.to_eng_string()), str) + def test_tonum_methods(self): #Test float, int and long methods. Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Tue Mar 25 19:58:13 2008 @@ -27,6 +27,11 @@ Library ------- +- Issue #2482: Make sure that the coefficient of a Decimal is always + stored as a str instance, not as a unicode instance. This ensures + that str(Decimal) is always an instance of str. This fixes a + regression from Python 2.5.1 to Python 2.5.2. 
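The behaviour guaranteed by r61904 (and the r61906 backport) is easy to check from the interactive prompt; the following Python 2 snippet simply restates the new tests shown in the diffs above:

    from decimal import Decimal

    # With the fix, the coefficient is stored as a str even when the
    # Decimal is constructed from a unicode string, so the string
    # conversions below return str rather than unicode.
    for u in [u'123.4', u'0.5E2', u'Infinity', u'sNaN', u'-0.0E100', u'-Inf']:
        d = Decimal(u)
        assert type(str(d)) is str
        assert type(repr(d)) is str
        assert type(d.to_eng_string()) is str

    assert str(Decimal(u'0E-017')) == '0E-17'
    assert str(Decimal(u'NaN123')) == 'NaN123'
    assert str(Decimal(u'-Inf')) == '-Infinity'
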
+ - Issue #2478: fix failure of decimal.Decimal(0).sqrt() - Issue #2432: give DictReader the dialect and line_num attributes From python-checkins at python.org Tue Mar 25 20:08:08 2008 From: python-checkins at python.org (brett.cannon) Date: Tue, 25 Mar 2008 20:08:08 +0100 (CET) Subject: [Python-checkins] r61907 - peps/trunk/pep-3108.txt Message-ID: <20080325190808.CECCE1E402F@bag.python.org> Author: brett.cannon Date: Tue Mar 25 20:08:08 2008 New Revision: 61907 Modified: peps/trunk/pep-3108.txt Log: Update where StringIO ended up. Modified: peps/trunk/pep-3108.txt ============================================================================== --- peps/trunk/pep-3108.txt (original) +++ peps/trunk/pep-3108.txt Tue Mar 25 20:08:08 2008 @@ -561,7 +561,7 @@ * StringIO/cStringIO [done] - + Added to the 'io' module as stringio. + + Add the class to the 'io' module. No public, documented interface From python-checkins at python.org Tue Mar 25 20:20:26 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 20:20:26 +0100 (CET) Subject: [Python-checkins] r61908 - in doctools/trunk/doc: builders.rst ext/builderapi.rst templating.rst Message-ID: <20080325192026.EF1A51E4037@bag.python.org> Author: georg.brandl Date: Tue Mar 25 20:20:26 2008 New Revision: 61908 Modified: doctools/trunk/doc/builders.rst doctools/trunk/doc/ext/builderapi.rst doctools/trunk/doc/templating.rst Log: Expand on the pickle builder. Modified: doctools/trunk/doc/builders.rst ============================================================================== --- doctools/trunk/doc/builders.rst (original) +++ doctools/trunk/doc/builders.rst Tue Mar 25 20:20:26 2008 @@ -34,10 +34,12 @@ This builder produces a directory with pickle files containing mostly HTML fragments and TOC information, for use of a web application (or custom - postprocessing tool) that doesn't use the standard HTML templates. + postprocessing tool) that doesn't use the standard HTML templates. It also + is the format used by the Sphinx Web application. - It also is the format used by the Sphinx Web application. Its name is - ``pickle``. (The old name ``web`` still works as well.) + See :ref:`pickle-details` for details about the output format. + + Its name is ``pickle``. (The old name ``web`` still works as well.) .. class:: LaTeXBuilder @@ -71,3 +73,76 @@ * :mod:`~sphinx.ext.doctest` * :mod:`~sphinx.ext.coverage` + + +.. _pickle-details: + +Pickle builder details +---------------------- + +The builder outputs one pickle file per source file, and a few special files. +It also copies the reST source files in the directory ``_sources`` under the +output directory. + +The files per source file have the extensions ``.fpickle``, and are arranged in +directories just as the source files are. They unpickle to a dictionary with +these keys: + +``body`` + The HTML "body" (that is, the HTML rendering of the source file), as rendered + by the HTML translator. + +``title`` + The title of the document, as HTML (may contain markup). + +``toc`` + The table of contents for the file, rendered as an HTML ``
                  ``. + +``display_toc`` + A boolean that is ``True`` if the ``toc`` contains more than one entry. + +``current_page_name`` + The document name of the current file. + +``parents``, ``prev`` and ``next`` + Information about related chapters in the TOC tree. Each relation is a + dictionary with the keys ``link`` (HREF for the relation) and ``title`` + (title of the related document, as HTML). ``parents`` is a list of + relations, while ``prev`` and ``next`` are a single relation. + +``sourcename`` + The name of the source file under ``_sources``. + +The special files are located in the root output directory. They are: + +``environment.pickle`` + The build environment. (XXX add important environment properties) + +``globalcontext.pickle`` + A pickled dict with these keys: + + ``project``, ``copyright``, ``release``, ``version`` + The same values as given in the configuration file. + + ``style``, ``use_modindex`` + :confval:`html_style` and :confval:`html_use_modindex`, respectively. + + ``last_updated`` + Date of last build. + + ``builder`` + Name of the used builder, in the case of pickles this is always + ``'pickle'``. + + ``titles`` + A dictionary of all documents' titles, as HTML strings. + +``searchindex.pickle`` + An index that can be used for searching the documentation. It is a pickled + list with these entries: + + * A list of indexed docnames. + * A list of document titles, as HTML strings, in the same order as the first + list. + * A dict mapping word roots (processed by an English-language stemmer) to a + list of integers, which are indices into the first list. Modified: doctools/trunk/doc/ext/builderapi.rst ============================================================================== --- doctools/trunk/doc/ext/builderapi.rst (original) +++ doctools/trunk/doc/ext/builderapi.rst Tue Mar 25 20:20:26 2008 @@ -1,3 +1,5 @@ +.. _writing-builders: + Writing new builders ==================== Modified: doctools/trunk/doc/templating.rst ============================================================================== --- doctools/trunk/doc/templating.rst (original) +++ doctools/trunk/doc/templating.rst Tue Mar 25 20:20:26 2008 @@ -8,6 +8,24 @@ anyone having used Django will already be familiar with it. It also has excellent documentation for those who need to make themselves familiar with it. + +Do I need to use Sphinx' templates to produce HTML? +--------------------------------------------------- + +No. You have several other options: + +* You can :ref:`write a custom builder ` that derives from + :class:`~sphinx.builder.StandaloneHTMLBuilder` and calls your template engine + of choice. + +* You can use the :class:`~sphinx.builder.PickleHTMLBuilder` that produces + pickle files with the page contents, and postprocess them using a custom tool, + or use them in your Web application. + + +Jinja/Sphinx Templating Primer +------------------------------ + The most important concept in Jinja is :dfn:`template inheritance`, which means that you can overwrite only specific blocks within a template, customizing it while also keeping the changes at a minimum. From buildbot at python.org Tue Mar 25 20:28:30 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 19:28:30 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD 2 2.5 Message-ID: <20080325192830.5E5951E4006@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD 2 2.5. 
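For anyone consuming the pickle builder output documented in r61908 above, the files can be read back with nothing more than the standard pickle module. A minimal sketch, assuming a finished build in a hypothetical ``_build/pickle`` directory that contains a document named ``index``:

    import pickle
    from os import path

    builddir = '_build/pickle'   # hypothetical pickle-builder output directory

    # Each source file becomes a .fpickle file that unpickles to a dict
    # with keys such as 'body', 'title', 'toc' and 'current_page_name'.
    f = open(path.join(builddir, 'index.fpickle'), 'rb')
    page = pickle.load(f)
    f.close()
    print page['title']              # document title, as an HTML string
    print page['current_page_name']  # document name of this file

    # globalcontext.pickle is a dict of project-wide values.
    f = open(path.join(builddir, 'globalcontext.pickle'), 'rb')
    ctx = pickle.load(f)
    f.close()
    print ctx['project'], ctx['release']
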
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%202%202.5/builds/9 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: werven-freebsd Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 486, in __bootstrap_inner self.run() File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/test/test_thread.py", line 281, in readerThread 
rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/usr/home/buildbot/buildarea/2.5.werven-freebsd/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') sincerely, -The Buildbot From buildbot at python.org Tue Mar 25 20:55:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 19:55:59 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080325195559.39E171E4013@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2753 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 20:57:10 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 20:57:10 +0100 (CET) Subject: [Python-checkins] r61909 - in doctools/trunk/doc: concepts.rst config.rst markup/desc.rst Message-ID: <20080325195710.198B51E4008@bag.python.org> Author: georg.brandl Date: Tue Mar 25 20:57:09 2008 New Revision: 61909 Modified: doctools/trunk/doc/concepts.rst doctools/trunk/doc/config.rst doctools/trunk/doc/markup/desc.rst Log: Add a paragraph on special names. Modified: doctools/trunk/doc/concepts.rst ============================================================================== --- doctools/trunk/doc/concepts.rst (original) +++ doctools/trunk/doc/concepts.rst Tue Mar 25 20:57:09 2008 @@ -65,3 +65,35 @@ The "master document" (selected by :confval:`master_doc`) is the "root" of the TOC tree hierarchy. It can be used as the documentation's main page, or as a "full table of contents" if you don't give a ``maxdepth`` option. + + +Special names +------------- + +Sphinx reserves some document names for its own use; you should not try to +create documents with these names -- it will cause problems. + +The special document names (and pages generated for them) are: + +* ``genindex``, ``modindex``, ``search`` + + These are used for the general index, the module index, and the search page, + respectively. + + The general index is populated with entries from modules, all index-generating + :ref:`description units `, and from :dir:`index` directives. + + The module index contains one entry per :dir:`module` directive. + + The search page contains a form that uses the generated JSON search index and + JavaScript to full-text search the generated documents for search words; it + should work on every major browser that supports modern JavaScript. + +* every name beginning with ``_`` + + Though only few such names are currently used by Sphinx, you should not create + documents or document-containing directories with such names. 
(Using ``_`` as + a prefix for a custom template directory is fine.) + +``index`` is a special name, too, if the :confval:`html_index` config value is +nonempty. Modified: doctools/trunk/doc/config.rst ============================================================================== --- doctools/trunk/doc/config.rst (original) +++ doctools/trunk/doc/config.rst Tue Mar 25 20:57:09 2008 @@ -166,18 +166,39 @@ Content template for the index page, filename relative to this file. If this is not the empty string, the "index" document will not be created from a - reStructuredText file but from this template. + reStructuredText file but from the ``index.html`` template. The template you + specify in this value will be included in the ``index.html``, together with + a list of tables. + + If you want to completely override the resulting ``index`` document, set this + to some nonempty value and override the ``index.html`` template. .. confval:: html_sidebars Custom sidebar templates, must be a dictionary that maps document names to - template names. + template names. Example:: + + html_sidebars = { + 'using/windows': 'windowssidebar.html' + } + + This will render the template ``windowssidebar.html`` within the sidebar of + the given document. .. confval:: html_additional_pages Additional templates that should be rendered to HTML pages, must be a dictionary that maps document names to template names. + Example:: + + html_additional_pages = { + 'download': 'customdownload.html', + } + + This will render the template ``customdownload.html`` as the page + ``download.html``. + .. confval:: html_use_modindex If true, add a module index to the HTML documents. Default is ``True``. Modified: doctools/trunk/doc/markup/desc.rst ============================================================================== --- doctools/trunk/doc/markup/desc.rst (original) +++ doctools/trunk/doc/markup/desc.rst Tue Mar 25 20:57:09 2008 @@ -26,6 +26,8 @@ submodule, in which case the name should be fully qualified, including the package name). + This directive will also cause an entry in the global module index. + The ``platform`` option, if present, is a comma-separated list of the platforms on which the module is available (if it is available on all platforms, the option should be omitted). The keys are short identifiers; @@ -38,6 +40,7 @@ The ``deprecated`` option can be given (with no value) to mark a module as deprecated; it will be designated as such in various locations then. + .. directive:: .. moduleauthor:: name The ``moduleauthor`` directive, which can appear multiple times, names the @@ -53,6 +56,8 @@ in overview files. +.. _desc-units: + Description units ----------------- From python-checkins at python.org Tue Mar 25 21:49:52 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 21:49:52 +0100 (CET) Subject: [Python-checkins] r61910 - in doctools/trunk: CHANGES sphinx/texinputs/Makefile sphinx/texinputs/fncychap.sty Message-ID: <20080325204952.625BE1E4006@bag.python.org> Author: georg.brandl Date: Tue Mar 25 21:49:51 2008 New Revision: 61910 Added: doctools/trunk/sphinx/texinputs/fncychap.sty Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/texinputs/Makefile Log: Add fncychap.sty; add clean target. 
Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Tue Mar 25 21:49:51 2008 @@ -21,6 +21,10 @@ * sphinx.builder: Handle unavailability of TOC relations (previous/ next chapter) more gracefully in the HTML builder. +* sphinx.latexwriter: Include fncychap.sty which doesn't seem to be + very common in TeX distributions. Add a ``clean`` target in the + latex Makefile. + * setup: On Python 2.4, don't egg-depend on docutils if a docutils is already installed -- else it will be overwritten. @@ -75,4 +79,4 @@ Release 0.1.61611 (Mar 21, 2008) ================================ -First public release. +* First public release. Modified: doctools/trunk/sphinx/texinputs/Makefile ============================================================================== --- doctools/trunk/sphinx/texinputs/Makefile (original) +++ doctools/trunk/sphinx/texinputs/Makefile Tue Mar 25 21:49:51 2008 @@ -28,6 +28,8 @@ bz2: tar-$(FMT) bzip2 -9 -k $(ARCHIVEPREFIX)docs-$(FMT).tar +# The number of LaTeX runs is quite conservative, but I don't expect it +# to get run often, so the little extra time won't hurt. %.dvi: %.tex latex $< latex $< @@ -46,5 +48,9 @@ pdflatex $< pdflatex $< -.PHONY: all all-pdf all-dvi all-ps +clean: + rm -f *.pdf *.dvi *.ps + rm -f *.log *.ind *.aux *.toc *.syn *.idx *.out *.ilg + +.PHONY: all all-pdf all-dvi all-ps clean Added: doctools/trunk/sphinx/texinputs/fncychap.sty ============================================================================== --- (empty file) +++ doctools/trunk/sphinx/texinputs/fncychap.sty Tue Mar 25 21:49:51 2008 @@ -0,0 +1,433 @@ +%%% Derived from the original fncychap.sty, +%%% but changed ``TWELV'' to ``TWELVE''. + +%%% Copyright Ulf A. Lindgren +%%% Department of Applied Electronics +%%% Chalmers University of Technology +%%% S-412 96 Gothenburg, Sweden +%%% E-mail lindgren at ae.chalmers.se +%%% +%%% Note Permission is granted to modify this file under +%%% the condition that it is saved using another +%%% file and package name. +%%% +%%% Revision 1.1 +%%% +%%% Jan. 8th Modified package name base date option +%%% Jan. 22th Modified FmN and FmTi for error in book.cls +%%% \MakeUppercase{#}->{\MakeUppercase#} +%%% Apr. 6th Modified Lenny option to prevent undesired +%%% skip of line. +%%% Nov. 8th Fixed \@chapapp for AMS +%%% Feb. 11th Fixed appendix problem related to Bjarne +%%% Last modified Feb. 
11th 1998 + +\NeedsTeXFormat{LaTeX2e}[1995/12/01] +\ProvidesPackage{fncychap} + [1997/04/06 v1.11 + LaTeX package (Revised chapters)] + +%%%% DEFINITION OF Chapapp variables +\newcommand{\CNV}{\huge\bfseries} +\newcommand{\ChNameVar}[1]{\renewcommand{\CNV}{#1}} + + +%%%% DEFINITION OF TheChapter variables +\newcommand{\CNoV}{\huge\bfseries} +\newcommand{\ChNumVar}[1]{\renewcommand{\CNoV}{#1}} + +\newif\ifUCN +\UCNfalse +\newif\ifLCN +\LCNfalse +\def\ChNameLowerCase{\LCNtrue\UCNfalse} +\def\ChNameUpperCase{\UCNtrue\LCNfalse} +\def\ChNameAsIs{\UCNfalse\LCNfalse} + +%%%%% Fix for AMSBook 971008 + +\@ifundefined{@chapapp}{\let\@chapapp\chaptername}{} + + +%%%%% Fix for Bjarne and appendix 980211 + +\newif\ifinapp +\inappfalse +\renewcommand\appendix{\par + \setcounter{chapter}{0}% + \setcounter{section}{0}% + \inapptrue% + \renewcommand\@chapapp{\appendixname}% + \renewcommand\thechapter{\@Alph\c at chapter}} + +%%%%% + +\newcommand{\FmN}[1]{% +\ifUCN + {\MakeUppercase#1}\LCNfalse +\else + \ifLCN + {\MakeLowercase#1}\UCNfalse + \else #1 + \fi +\fi} + + +%%%% DEFINITION OF Title variables +\newcommand{\CTV}{\Huge\bfseries} +\newcommand{\ChTitleVar}[1]{\renewcommand{\CTV}{#1}} + +%%%% DEFINITION OF the basic rule width +\newlength{\RW} +\setlength{\RW}{1pt} +\newcommand{\ChRuleWidth}[1]{\setlength{\RW}{#1}} + +\newif\ifUCT +\UCTfalse +\newif\ifLCT +\LCTfalse +\def\ChTitleLowerCase{\LCTtrue\UCTfalse} +\def\ChTitleUpperCase{\UCTtrue\LCTfalse} +\def\ChTitleAsIs{\UCTfalse\LCTfalse} +\newcommand{\FmTi}[1]{% +\ifUCT + + {\MakeUppercase#1}\LCTfalse +\else + \ifLCT + {\MakeLowercase#1}\UCTfalse + \else #1 + \fi +\fi} + + + +\newlength{\mylen} +\newlength{\myhi} +\newlength{\px} +\newlength{\py} +\newlength{\pyy} +\newlength{\pxx} + + +\def\mghrulefill#1{\leavevmode\leaders\hrule\@height #1\hfill\kern\z@} + +\newcommand{\DOCH}{% + \CNV\FmN{\@chapapp}\space \CNoV\thechapter + \par\nobreak + \vskip 20\p@ + } +\newcommand{\DOTI}[1]{% + \CTV\FmTi{#1}\par\nobreak + \vskip 40\p@ + } +\newcommand{\DOTIS}[1]{% + \CTV\FmTi{#1}\par\nobreak + \vskip 40\p@ + } + +%%%%%% SONNY DEF + +\DeclareOption{Sonny}{% + \ChNameVar{\Large\sf} + \ChNumVar{\Huge} + \ChTitleVar{\Large\sf} + \ChRuleWidth{0.5pt} + \ChNameUpperCase + \renewcommand{\DOCH}{% + \raggedleft + \CNV\FmN{\@chapapp}\space \CNoV\thechapter + \par\nobreak + \vskip 40\p@} + \renewcommand{\DOTI}[1]{% + \CTV\raggedleft\mghrulefill{\RW}\par\nobreak + \vskip 5\p@ + \CTV\FmTi{#1}\par\nobreak + \mghrulefill{\RW}\par\nobreak + \vskip 40\p@} + \renewcommand{\DOTIS}[1]{% + \CTV\raggedleft\mghrulefill{\RW}\par\nobreak + \vskip 5\p@ + \CTV\FmTi{#1}\par\nobreak + \mghrulefill{\RW}\par\nobreak + \vskip 40\p@} +} + +%%%%%% LENNY DEF + +\DeclareOption{Lenny}{% + + \ChNameVar{\fontsize{14}{16}\usefont{OT1}{phv}{m}{n}\selectfont} + \ChNumVar{\fontsize{60}{62}\usefont{OT1}{ptm}{m}{n}\selectfont} + \ChTitleVar{\Huge\bfseries\rm} + \ChRuleWidth{1pt} + \renewcommand{\DOCH}{% + \settowidth{\px}{\CNV\FmN{\@chapapp}} + \addtolength{\px}{2pt} + \settoheight{\py}{\CNV\FmN{\@chapapp}} + \addtolength{\py}{1pt} + + \settowidth{\mylen}{\CNV\FmN{\@chapapp}\space\CNoV\thechapter} + \addtolength{\mylen}{1pt} + \settowidth{\pxx}{\CNoV\thechapter} + \addtolength{\pxx}{-1pt} + + \settoheight{\pyy}{\CNoV\thechapter} + \addtolength{\pyy}{-2pt} + \setlength{\myhi}{\pyy} + \addtolength{\myhi}{-1\py} + \par + \parbox[b]{\textwidth}{% + \rule[\py]{\RW}{\myhi}% + \hskip -\RW% + \rule[\pyy]{\px}{\RW}% + \hskip -\px% + \raggedright% + \CNV\FmN{\@chapapp}\space\CNoV\thechapter% + \hskip1pt% + 
\mghrulefill{\RW}% + \rule{\RW}{\pyy}\par\nobreak% + \vskip -\baselineskip% + \vskip -\pyy% + \hskip \mylen% + \mghrulefill{\RW}\par\nobreak% + \vskip \pyy}% + \vskip 20\p@} + + + \renewcommand{\DOTI}[1]{% + \raggedright + \CTV\FmTi{#1}\par\nobreak + \vskip 40\p@} + + \renewcommand{\DOTIS}[1]{% + \raggedright + \CTV\FmTi{#1}\par\nobreak + \vskip 40\p@} + } + + +%%%%%%% GLENN DEF + + +\DeclareOption{Glenn}{% + \ChNameVar{\bfseries\Large\sf} + \ChNumVar{\Huge} + \ChTitleVar{\bfseries\Large\rm} + \ChRuleWidth{1pt} + \ChNameUpperCase + \ChTitleUpperCase + \renewcommand{\DOCH}{% + \settoheight{\myhi}{\CTV\FmTi{Test}} + \setlength{\py}{\baselineskip} + \addtolength{\py}{\RW} + \addtolength{\py}{\myhi} + \setlength{\pyy}{\py} + \addtolength{\pyy}{-1\RW} + + \raggedright + \CNV\FmN{\@chapapp}\space\CNoV\thechapter + \hskip 3pt\mghrulefill{\RW}\rule[-1\pyy]{2\RW}{\py}\par\nobreak} + + \renewcommand{\DOTI}[1]{% + \addtolength{\pyy}{-4pt} + \settoheight{\myhi}{\CTV\FmTi{#1}} + \addtolength{\myhi}{\py} + \addtolength{\myhi}{-1\RW} + \vskip -1\pyy + \rule{2\RW}{\myhi}\mghrulefill{\RW}\hskip 2pt + \raggedleft\CTV\FmTi{#1}\par\nobreak + \vskip 80\p@} + + \renewcommand{\DOTIS}[1]{% + \setlength{\py}{10pt} + \setlength{\pyy}{\py} + \addtolength{\pyy}{\RW} + \setlength{\myhi}{\baselineskip} + \addtolength{\myhi}{\pyy} + \mghrulefill{\RW}\rule[-1\py]{2\RW}{\pyy}\par\nobreak +% \addtolength{}{} +\vskip -1\baselineskip + \rule{2\RW}{\myhi}\mghrulefill{\RW}\hskip 2pt + \raggedleft\CTV\FmTi{#1}\par\nobreak + \vskip 60\p@} + } + +%%%%%%% CONNY DEF + +\DeclareOption{Conny}{% + \ChNameUpperCase + \ChTitleUpperCase + \ChNameVar{\centering\Huge\rm\bfseries} + \ChNumVar{\Huge} + \ChTitleVar{\centering\Huge\rm} + \ChRuleWidth{2pt} + + \renewcommand{\DOCH}{% + \mghrulefill{3\RW}\par\nobreak + \vskip -0.5\baselineskip + \mghrulefill{\RW}\par\nobreak + \CNV\FmN{\@chapapp}\space \CNoV\thechapter + \par\nobreak + \vskip -0.5\baselineskip + } + \renewcommand{\DOTI}[1]{% + \mghrulefill{\RW}\par\nobreak + \CTV\FmTi{#1}\par\nobreak + \vskip 60\p@ + } + \renewcommand{\DOTIS}[1]{% + \mghrulefill{\RW}\par\nobreak + \CTV\FmTi{#1}\par\nobreak + \vskip 60\p@ + } + } + +%%%%%%% REJNE DEF + +\DeclareOption{Rejne}{% + + \ChNameUpperCase + \ChTitleUpperCase + \ChNameVar{\centering\Large\rm} + \ChNumVar{\Huge} + \ChTitleVar{\centering\Huge\rm} + \ChRuleWidth{1pt} + \renewcommand{\DOCH}{% + \settoheight{\py}{\CNoV\thechapter} + \addtolength{\py}{-1pt} + \CNV\FmN{\@chapapp}\par\nobreak + \vskip 20\p@ + \setlength{\myhi}{2\baselineskip} + \setlength{\px}{\myhi} + \addtolength{\px}{-1\RW} + \rule[-1\px]{\RW}{\myhi}\mghrulefill{\RW}\hskip + 10pt\raisebox{-0.5\py}{\CNoV\thechapter}\hskip +10pt\mghrulefill{\RW}\rule[-1\px]{\RW}{\myhi}\par\nobreak + \vskip -1\p@ + } + \renewcommand{\DOTI}[1]{% + \setlength{\mylen}{\textwidth} + \addtolength{\mylen}{-2\RW} + {\vrule width\RW}\parbox{\mylen}{\CTV\FmTi{#1}}{\vrule +width\RW}\par\nobreak + \vskip +-1pt\rule{\RW}{2\baselineskip}\mghrulefill{\RW}\rule{\RW}{2\baselineskip} + \vskip 60\p@ + } + \renewcommand{\DOTIS}[1]{% + \setlength{\py}{\fboxrule} + \setlength{\fboxrule}{\RW} + \setlength{\mylen}{\textwidth} + \addtolength{\mylen}{-2\RW} + \fbox{\parbox{\mylen}{\vskip +2\baselineskip\CTV\FmTi{#1}\par\nobreak\vskip \baselineskip}} + \setlength{\fboxrule}{\py} + \vskip 60\p@ + } + } + + +%%%%%%% BJARNE DEF + +\DeclareOption{Bjarne}{% + \ChNameUpperCase + \ChTitleUpperCase + \ChNameVar{\raggedleft\normalsize\rm} + \ChNumVar{\raggedleft \bfseries\Large} + \ChTitleVar{\raggedleft \Large\rm} + 
\ChRuleWidth{1pt} + + +%% Note thechapter -> c at chapter fix appendix bug + + \newcounter{AlphaCnt} + \newcounter{AlphaDecCnt} + \newcommand{\AlphaNo}{% + \ifcase\number\theAlphaCnt + \ifnum\c at chapter=0 + ZERO\else{}\fi + \or ONE\or TWO\or THREE\or FOUR\or FIVE + \or SIX\or SEVEN\or EIGHT\or NINE\or TEN + \or ELEVEN\or TWELVE\or THIRTEEN\or FOURTEEN\or FIFTEEN + \or SIXTEEN\or SEVENTEEN\or EIGHTEEN\or NINETEEN\fi +} + + \newcommand{\AlphaDecNo}{% + \setcounter{AlphaDecCnt}{0} + \@whilenum\number\theAlphaCnt>0\do + {\addtocounter{AlphaCnt}{-10} + \addtocounter{AlphaDecCnt}{1}} + \ifnum\number\theAlphaCnt=0 + \else + \addtocounter{AlphaDecCnt}{-1} + \addtocounter{AlphaCnt}{10} + \fi + + + \ifcase\number\theAlphaDecCnt\or TEN\or TWENTY\or THIRTY\or + FORTY\or FIFTY\or SIXTY\or SEVENTY\or EIGHTY\or NINETY\fi + } + \newcommand{\TheAlphaChapter}{% + + \ifinapp + \thechapter + \else + \setcounter{AlphaCnt}{\c at chapter} + \ifnum\c at chapter<20 + \AlphaNo + \else + \AlphaDecNo\AlphaNo + \fi + \fi + } + \renewcommand{\DOCH}{% + \mghrulefill{\RW}\par\nobreak + \CNV\FmN{\@chapapp}\par\nobreak + \CNoV\TheAlphaChapter\par\nobreak + \vskip -1\baselineskip\vskip 5pt\mghrulefill{\RW}\par\nobreak + \vskip 20\p@ + } + \renewcommand{\DOTI}[1]{% + \CTV\FmTi{#1}\par\nobreak + \vskip 40\p@ + } + \renewcommand{\DOTIS}[1]{% + \CTV\FmTi{#1}\par\nobreak + \vskip 40\p@ + } +} + +\DeclareOption*{% + \PackageWarning{fancychapter}{unknown style option} + } + +\ProcessOptions* \relax + +\def\@makechapterhead#1{% + \vspace*{50\p@}% + {\parindent \z@ \raggedright \normalfont + \ifnum \c at secnumdepth >\m at ne + \DOCH + \fi + \interlinepenalty\@M + \DOTI{#1} + }} +\def\@schapter#1{\if at twocolumn + \@topnewpage[\@makeschapterhead{#1}]% + \else + \@makeschapterhead{#1}% + \@afterheading + \fi} +\def\@makeschapterhead#1{% + \vspace*{50\p@}% + {\parindent \z@ \raggedright + \normalfont + \interlinepenalty\@M + \DOTIS{#1} + \vskip 40\p@ + }} + +\endinput + + From python-checkins at python.org Tue Mar 25 21:59:02 2008 From: python-checkins at python.org (georg.brandl) Date: Tue, 25 Mar 2008 21:59:02 +0100 (CET) Subject: [Python-checkins] r61911 - doctools/trunk/sphinx/directives.py Message-ID: <20080325205902.223101E4006@bag.python.org> Author: georg.brandl Date: Tue Mar 25 21:59:01 2008 New Revision: 61911 Modified: doctools/trunk/sphinx/directives.py Log: Strip parentheses for C function pointer type names. 
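The pattern this change adds (it appears in the directives.py diff just below) can be tried on its own; the sample declarators here are made up for illustration::

    import re

    # Same regular expression as in the diff below: it matches a C function
    # pointer declarator such as "(*name)" and captures the bare name.
    c_funcptr_name_re = re.compile(r'^\(\s*\*\s*(.*?)\s*\)$')

    for name in ['(*callback)', '( * handler )', 'plain_function']:
        m = c_funcptr_name_re.match(name)
        cleaned = m.group(1) if m else name
        print('%s -> %s' % (name, cleaned))

For '(*callback)' this yields just 'callback', which is what the directive code stores as the canonical name; anything without the surrounding parentheses falls through unchanged.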
Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Tue Mar 25 21:59:01 2008 @@ -191,6 +191,7 @@ (\( [^()]+ \)) \s* # name in parentheses \( (.*) \) $ # arguments ''', re.VERBOSE) +c_funcptr_name_re = re.compile(r'^\(\s*\*\s*(.*?)\s*\)$') # RE to split at word boundaries wsplit_re = re.compile(r'(\W+)') @@ -224,6 +225,10 @@ signode += addnodes.desc_type("", "") parse_c_type(signode[-1], rettype) signode += addnodes.desc_name(name, name) + # clean up parentheses from canonical name + m = c_funcptr_name_re.match(name) + if m: + name = m.group(1) if not arglist: if desctype == 'cfunction': # for functions, add an empty parameter list From python-checkins at python.org Tue Mar 25 22:03:56 2008 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 25 Mar 2008 22:03:56 +0100 (CET) Subject: [Python-checkins] r61912 - sandbox/trunk/release/release.py Message-ID: <20080325210356.1D8DF1E4006@bag.python.org> Author: benjamin.peterson Date: Tue Mar 25 22:03:55 2008 New Revision: 61912 Modified: sandbox/trunk/release/release.py Log: Get the ordering of lines right on top Modified: sandbox/trunk/release/release.py ============================================================================== --- sandbox/trunk/release/release.py (original) +++ sandbox/trunk/release/release.py Tue Mar 25 22:03:55 2008 @@ -1,5 +1,5 @@ -"An assistant for making Python releases by Benjamin Peterson" #!/usr/bin/env python +"An assistant for making Python releases by Benjamin Peterson" from __future__ import with_statement import sys From python-checkins at python.org Tue Mar 25 22:14:42 2008 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 25 Mar 2008 22:14:42 +0100 (CET) Subject: [Python-checkins] r61913 - python/trunk/Misc/ACKS Message-ID: <20080325211442.68C391E4006@bag.python.org> Author: benjamin.peterson Date: Tue Mar 25 22:14:42 2008 New Revision: 61913 Modified: python/trunk/Misc/ACKS Log: Merged the ACKS from py3k Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Tue Mar 25 22:14:42 2008 @@ -1,14 +1,3 @@ -Acknowledgements ----------------- - -This list is not complete and not in any useful order, but I would -like to thank everybody who contributed in any way, with code, hints, -bug reports, ideas, moral support, endorsement, or even complaints.... -Without you I would've stopped working on Python long ago! - - --Guido - -PS: In the standard Python distribution this file is encoded in Latin-1. David Abrahams Jim Ahlstrom @@ -17,15 +6,15 @@ Kevin Altis Mark Anacker Anders Andersen -Erik Anders?n John Anderson +Erik Anders?n Oliver Andrich Ross Andrus Jason Asbahr David Ascher -Peter ?strand Chris AtLee John Aycock +Jan-Hein B"uhrman Donovan Baarda Attila Babo Alfonso Baciero @@ -33,6 +22,7 @@ Stig Bakken Greg Ball Luigi Ballabio +Jeff Balogh Michael J. Barber Chris Barker Quentin Barnes @@ -47,13 +37,12 @@ Samuel L. Bayer Donald Beaudry David Beazley -Neal Becker Robin Becker +Neal Becker Bill Bedford Reimer Behrends Ben Bell Thomas Bellman -Juan M. Bello Rivas Alexander Belopolsky Andrew Bennetts Andy Bensky @@ -87,13 +76,12 @@ Dave Brennan Tom Bridgman Richard Brodie -Gary S. Brown Daniel Brotsky +Gary S. 
Brown Oleg Broytmann Dave Brueck Stan Bubrouski Erik de Bueger -Jan-Hein B"uhrman Dick Bulterman Bill Bumgarner Jimmy Burgett @@ -114,9 +102,9 @@ Octavian Cerna Hye-Shik Chang Jeffrey Chang -Brad Chapman -Greg Chapman Mitch Chapman +Greg Chapman +Brad Chapman David Chaum Nicolas Chauvat Michael Chermside @@ -133,6 +121,7 @@ Dave Cole Benjamin Collar Jeffery Collins +Paul Colomiets Matt Conway David M. Cooke Greg Copeland @@ -146,8 +135,8 @@ Christopher A. Craig Laura Creighton Drew Csillag -Tom Culliton John Cugini +Tom Culliton Andrew Dalke Lars Damerow Eric Daniel @@ -163,13 +152,11 @@ Mark Dickinson Yves Dionne Daniel Dittmar -Walter D?rwald Jaromir Dolecek Ismail Donmez Dima Dorfman Cesar Douady Dean Draayer -Fred L. Drake, Jr. John DuBois Paul Dubois Quinn Dunkan @@ -180,6 +167,7 @@ Eugene Dvurechenski Josip Dzolonga Maxim Dzumanenko +Walter D?rwald Hans Eckardt Grant Edwards John Ehresman @@ -194,8 +182,8 @@ Ben Escoto Andy Eskilsson Stefan Esser -Carey Evans Stephen D Evans +Carey Evans Tim Everett Paul Everitt David Everly @@ -213,6 +201,7 @@ Frederik Fix Matt Fleming Hern?n Mart?nez Foffani +Michael Foord Doug Fort John Fouhy Martin Franklin @@ -250,14 +239,14 @@ Eddy De Greef Duncan Grisby Dag Gruneau -Thomas G?ttler Michael Guravage Lars Gust?bel +Thomas G?ttler Barry Haddow -V?clav Haisman Paul ten Hagen Rasmus Hahn Peter Haight +V?clav Haisman Bob Halley Jesse Hallio Jun Hamano @@ -269,11 +258,11 @@ Lynda Hardman Derek Harland Jason Harper -Gerhard H?ring Larry Hastings Shane Hathaway Rycharde Hawkes Jochen Hayek +Christian Heimes Thomas Heller Malte Helmert Lance Finn Helsten @@ -317,15 +306,15 @@ Greg Humphreys Eric Huss Jeremy Hylton +Gerhard H?ring Mihai Ibanescu -Juan David Ib??ez Palomar Lars Immisch Tony Ingraldi John Interrante Bob Ippolito Atsuo Ishimoto -Ben Jackson Paul Jackson +Ben Jackson David Jacobs Kevin Jacobs Kjetil Jacobsen @@ -343,8 +332,9 @@ Richard Jones Irmen de Jong Lucas de Jonge -Jens B. Jorgensen John Jorgensen +Jens B. Jorgensen +Fred L. Drake, Jr. Andreas Jung Tattoo Mabonzo K. Bob Kahn @@ -354,8 +344,8 @@ Jacob Kaplan-Moss Lou Kates Sebastien Keim -Randall Kern Robert Kern +Randall Kern Magnus Kessler Lawrence Kesteloot Vivek Khera @@ -378,7 +368,6 @@ Hannu Krosing Andrew Kuchling Vladimir Kushnir -Arnaud Mazin Cameron Laird Tino Lange Andrew Langmead @@ -389,28 +378,28 @@ Simon Law Chris Lawrence Brian Leair -Christopher Lee -Inyeol Lee John J. Lee +Inyeol Lee Thomas Lee +Christopher Lee Luc Lefebvre Kip Lehman Joerg Lehmann Luke Kenneth Casson Leighton Marc-Andre Lemburg John Lenton +Christopher Tur Lesniewski-Laas Mark Levinson William Lewis Robert van Liere Shawn Ligocki Martin Ligr Christopher Lindblad -Eric Lindvall Bjorn Lindqvist Per Lindqvist +Eric Lindvall Nick Lockwood Stephanie Lockwood -Martin von L?wis Anne Lord Tom Loredo Jason Lowe @@ -421,7 +410,7 @@ Mark Lutz Jim Lynch Mikael Lyngvig -Alan McIntyre +Martin von L?wis Andrew I MacIntyre Tim MacKenzie Nick Maclaren @@ -438,14 +427,14 @@ Nick Mathewson Graham Matthews Dieter Maurer +Arnaud Mazin +Chris McDonough Greg McFarlane +Alan McIntyre Michael McLay Gordon McMillan -Damien Miller -Jay T. Miller -Chris McDonough -Andrew McNamara Caolan McNamara +Andrew McNamara Craig McPheeters Lambert Meertens Bill van Melle @@ -453,23 +442,25 @@ Mike Meyer Steven Miale Trent Mick -Chad Miller Damien Miller +Chad Miller +Jay T. Miller Roman Milner -Dom Mitchell Dustin J. 
Mitchell +Dom Mitchell Doug Moen -Paul Moore The Dragon De Monsyne Skip Montanaro +Paul Moore James A Morrison -Sape Mullender Sjoerd Mullender +Sape Mullender Michael Muller John Nagle Takahiro Nakayama Travers Naran Fredrik Nehr +Trent Nelson Tony Nelson Chad Netzer Max Neunh?ffer @@ -495,21 +486,20 @@ Douglas Orr Denis S. Otkidach Russel Owen +Ondrej Palkovsky Mike Pall Todd R. Palmer +Juan David Ib??ez Palomar Jan Palus +M. Papillon Peter Parente Alexandre Parenteau Dan Parisien Harri Pasanen Randy Pausch -Ondrej Palkovsky -M. Papillon -Marcel van der Peijl Samuele Pedroni +Marcel van der Peijl Steven Pemberton -Eduardo P?rez -Fernando P?rez Mark Perrego Trevor Perrin Tim Peters @@ -521,13 +511,14 @@ Adrian Phillips Christopher J. Phoenix Neale Pickett -Jean-Fran?ois Pi?ronne +Jim St. Pierre Dan Pierson Martijn Pieters Fran?ois Pinard Zach Pincus Michael Piotrowski Antoine Pitrou +Jean-Fran?ois Pi?ronne Guilherme Polo Michael Pomraning Iustin Pop @@ -536,9 +527,12 @@ Paul Prescod Donovan Preston Steve Purcell +Fernando P?rez +Eduardo P?rez Brian Quinlan Anders Qvist Burton Radons +Antti Rasinen Eric Raymond Edward K. Ream Marc Recht @@ -556,25 +550,25 @@ Armin Rigo Nicholas Riley Jean-Claude Rimbault +Juan M. Bello Rivas Anthony Roach Mark Roberts -Andy Robinson Jim Robinson +Andy Robinson Kevin Rodgers Giampaolo Rodola Mike Romberg Case Roole Timothy Roscoe -Craig Rowland Jim Roskind -Erik van Blokland Just van Rossum Hugo van Rossum Saskia van Rossum Donald Wallace Rouse II Liam Routt -Sam Ruby +Craig Rowland Paul Rubin +Sam Ruby Audun S. Runde Jeff Rush Sam Rushing @@ -598,8 +592,8 @@ Stefan Schwarzer Dietmar Schwertberger Federico Schwindt -Barry Scott Steven Scott +Barry Scott Nick Seidenman ??iga Seilnach Fred Sells @@ -622,8 +616,8 @@ George Sipe J. Sipprell Kragen Sitaker -Christopher Smith Eric V. Smith +Christopher Smith Gregory P. Smith Rafal Smotrzyk Dirk Soede @@ -634,7 +628,6 @@ Noah Spurrier Nathan Srebro RajGopal Srinivasan -Jim St. Pierre Quentin Stafford-Fraser Frank Stajano Oliver Steele @@ -657,8 +650,8 @@ William Tanksley Christian Tanzer Steven Taschuk -Amy Taylor Monty Taylor +Amy Taylor Tobias Thelen Robin Thomas Eric Tiedemann @@ -674,18 +667,19 @@ John Tromp Jason Trowbridge Anthony Tuininga -Christopher Tur Lesniewski-Laas Stephen Turner Bill Tutt -Eren T?rkay Doobee R. Tzeck +Eren T?rkay Lionel Ulmer Roger Upole Michael Urman Hector Urtubia Atul Varma Dmitry Vasiliev +Alexandre Vassalotti Frank Vercruesse +Mike Verdone Jaap Vermeulen Al Vezza Jacques A. Vidrine @@ -718,9 +712,9 @@ Felix Wiemann Gerry Wiener Bryce "Zooko" Wilcox-O'Hearn -Gerald S. Williams John Williams Sue Williams +Gerald S. Williams Frank Willison Greg V. Wilson Jody Winston @@ -730,9 +724,9 @@ Jean-Claude Wippler Lars Wirzenius Stefan Witzel +David Wolever Klaus-Juergen Wolf Dan Wolfe -David Wolever Richard Wolff Gordon Worley Thomas Wouters @@ -750,3 +744,5 @@ Mike Zarnstorff Siebren van der Zee Uwe Zessin +Amaury Forgeot d'Arc +Peter ?strand From g.brandl at gmx.net Tue Mar 25 22:16:36 2008 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 25 Mar 2008 22:16:36 +0100 Subject: [Python-checkins] r61913 - python/trunk/Misc/ACKS In-Reply-To: <20080325211442.68C391E4006@bag.python.org> References: <20080325211442.68C391E4006@bag.python.org> Message-ID: benjamin.peterson schrieb: > Author: benjamin.peterson > Date: Tue Mar 25 22:14:42 2008 > New Revision: 61913 > > Modified: > python/trunk/Misc/ACKS > Log: > Merged the ACKS from py3k Why did you remove the header? 
Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From python-checkins at python.org Tue Mar 25 22:18:39 2008 From: python-checkins at python.org (thomas.heller) Date: Tue, 25 Mar 2008 22:18:39 +0100 (CET) Subject: [Python-checkins] r61915 - python/trunk/Modules/_ctypes/_ctypes.c Message-ID: <20080325211839.631A31E401F@bag.python.org> Author: thomas.heller Date: Tue Mar 25 22:18:39 2008 New Revision: 61915 Modified: python/trunk/Modules/_ctypes/_ctypes.c Log: Make _ctypes.c PY_SSIZE_T_CLEAN. Modified: python/trunk/Modules/_ctypes/_ctypes.c ============================================================================== --- python/trunk/Modules/_ctypes/_ctypes.c (original) +++ python/trunk/Modules/_ctypes/_ctypes.c Tue Mar 25 22:18:39 2008 @@ -104,6 +104,8 @@ * */ +#define PY_SSIZE_T_CLEAN + #include "Python.h" #include "structmember.h" @@ -2293,7 +2295,7 @@ CData_setstate(PyObject *_self, PyObject *args) { void *data; - int len; + Py_ssize_t len; int res; PyObject *dict, *mydict; CDataObject *self = (CDataObject *)_self; @@ -3023,7 +3025,7 @@ char *name = NULL; PyObject *paramflags = NULL; GUID *iid = NULL; - int iid_len = 0; + Py_ssize_t iid_len = 0; if (!PyArg_ParseTuple(args, "is|Oz#", &index, &name, ¶mflags, &iid, &iid_len)) return NULL; From buildbot at python.org Tue Mar 25 22:53:09 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 21:53:09 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080325215310.122581E4008@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1165 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_threading test_timeout test_winsound ====================================================================== ERROR: test_alias_asterisk (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 86, in test_alias_asterisk winsound.PlaySound('SystemAsterisk', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exclamation (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 96, in test_alias_exclamation winsound.PlaySound('SystemExclamation', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exit (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 106, in test_alias_exit winsound.PlaySound('SystemExit', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_hand (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 116, in test_alias_hand winsound.PlaySound('SystemHand', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_question (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_winsound.py", line 126, in test_alias_question winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS) RuntimeError: Failed to play sound sincerely, -The Buildbot From python-checkins at python.org Tue Mar 25 22:55:50 2008 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 25 Mar 2008 22:55:50 +0100 (CET) Subject: [Python-checkins] r61916 - python/trunk/Misc/ACKS Message-ID: <20080325215550.7DB111E4021@bag.python.org> Author: benjamin.peterson Date: Tue Mar 25 22:55:50 2008 New Revision: 61916 Modified: python/trunk/Misc/ACKS Log: Opps! I merged the revisions, but forgot to add the header to ACKS Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Tue Mar 25 22:55:50 2008 @@ -1,3 +1,14 @@ +Acknowledgements +---------------- + +This list is not complete and not in any useful order, but I would +like to thank everybody who contributed in any way, with code, hints, +bug reports, ideas, moral support, endorsement, or even complaints.... 
+Without you I would've stopped working on Python long ago! + + --Guido + +PS: In the standard Python distribution this file is encoded in Latin-1. David Abrahams Jim Ahlstrom From buildbot at python.org Tue Mar 25 23:39:32 2008 From: buildbot at python.org (buildbot at python.org) Date: Tue, 25 Mar 2008 22:39:32 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080325223933.0F4371E4006@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/253 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_threading make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 00:57:06 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 00:57:06 +0100 (CET) Subject: [Python-checkins] r61917 - in python/branches/trunk-bytearray: Lib/test/string_tests.py Lib/test/test_bytes.py Objects/bytesobject.c Message-ID: <20080325235706.D12EB1E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 00:57:06 2008 New Revision: 61917 Modified: python/branches/trunk-bytearray/Lib/test/string_tests.py python/branches/trunk-bytearray/Lib/test/test_bytes.py python/branches/trunk-bytearray/Objects/bytesobject.c Log: The type system of Python 2.6 has subtle differences to 3.0's. I've removed the Py_TPFLAGS_BASETYPE flags from bytearray for now. bytearray can't be subclasses until the issues with bytearray subclasses are fixed. Modified: python/branches/trunk-bytearray/Lib/test/string_tests.py ============================================================================== --- python/branches/trunk-bytearray/Lib/test/string_tests.py (original) +++ python/branches/trunk-bytearray/Lib/test/string_tests.py Wed Mar 26 00:57:06 2008 @@ -27,6 +27,9 @@ # Change in subclasses to change the behaviour of fixtesttype() type2test = None + # is the type subclass-able? + subclassable = True + # All tests pass their arguments to the testing methods # as str objects. 
fixtesttype() can be used to propagate # these arguments to the appropriate type @@ -57,7 +60,7 @@ ) # if the original is returned make sure that # this doesn't happen with subclasses - if object == realresult: + if self.subclassable and object == realresult: class subtype(self.__class__.type2test): pass object = subtype(object) Modified: python/branches/trunk-bytearray/Lib/test/test_bytes.py ============================================================================== --- python/branches/trunk-bytearray/Lib/test/test_bytes.py (original) +++ python/branches/trunk-bytearray/Lib/test/test_bytes.py Wed Mar 26 00:57:06 2008 @@ -865,6 +865,7 @@ class FixedStringTest(test.string_tests.BaseTest): + subclassable = False def fixtype(self, obj): if isinstance(obj, str): @@ -891,8 +892,8 @@ type2test = bytearray -class ByteArraySubclass(bytearray): - pass +#class ByteArraySubclass(bytearray): +# pass class ByteArraySubclassTest(unittest.TestCase): @@ -968,6 +969,15 @@ x = subclass(newarg=4, source=b"abcd") self.assertEqual(x, b"abcd") +class ByteArrayNotSubclassTest(unittest.TestCase): + def test_not_subclassable(self): + try: + class ByteArraySubclass(bytearray): + pass + except TypeError: + pass + else: + self.fail("Bytearray is subclassable") def test_main(): #test.test_support.run_unittest(BytesTest) @@ -976,7 +986,8 @@ test.test_support.run_unittest( ByteArrayTest, ByteArrayAsStringTest, - ByteArraySubclassTest, + #ByteArraySubclassTest, + ByteArrayNotSubclassTest, BytearrayPEP3137Test) if __name__ == "__main__": Modified: python/branches/trunk-bytearray/Objects/bytesobject.c ============================================================================== --- python/branches/trunk-bytearray/Objects/bytesobject.c (original) +++ python/branches/trunk-bytearray/Objects/bytesobject.c Wed Mar 26 00:57:06 2008 @@ -3232,7 +3232,8 @@ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ &bytes_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | + /* Py_TPFLAGS_BASETYPE */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_NEWBUFFER, /* tp_flags */ bytes_doc, /* tp_doc */ 0, /* tp_traverse */ From python-checkins at python.org Wed Mar 26 01:16:51 2008 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 26 Mar 2008 01:16:51 +0100 (CET) Subject: [Python-checkins] r61918 - python/trunk/Modules/selectmodule.c Message-ID: <20080326001651.14C7B1E4006@bag.python.org> Author: andrew.kuchling Date: Wed Mar 26 01:16:50 2008 New Revision: 61918 Modified: python/trunk/Modules/selectmodule.c Log: Minor docstring typos Modified: python/trunk/Modules/selectmodule.c ============================================================================== --- python/trunk/Modules/selectmodule.c (original) +++ python/trunk/Modules/selectmodule.c Wed Mar 26 01:16:50 2008 @@ -412,7 +412,7 @@ PyDoc_STRVAR(poll_modify_doc, "modify(fd, eventmask) -> None\n\n\ -Modify an already register file descriptor.\n\ +Modify an already registered file descriptor.\n\ fd -- either an integer, or an object with a fileno() method returning an\n\ int.\n\ events -- an optional bitmask describing the type of events to check for"); @@ -918,10 +918,10 @@ PyDoc_STRVAR(pyepoll_register_doc, "register(fd[, eventmask]) -> bool\n\ \n\ -Registers a new fd or modifies an already registered fd. register returns\n\ +Registers a new fd or modifies an already registered fd. 
register() returns\n\ True if a new fd was registered or False if the event mask for fd was modified.\n\ -fd is the target file descriptor of the operation\n\ -events is a bit set composed of the various EPOLL constants, the default\n\ +fd is the target file descriptor of the operation.\n\ +events is a bit set composed of the various EPOLL constants; the default\n\ is EPOLL_IN | EPOLL_OUT | EPOLL_PRI.\n\ \n\ The epoll interface supports all file descriptors that support poll."); @@ -1719,7 +1719,7 @@ \n\ *** IMPORTANT NOTICE ***\n\ On Windows and OpenVMS, only sockets are supported; on Unix, all file\n\ -descriptors."); +descriptors can be used."); static PyMethodDef select_methods[] = { {"select", select_select, METH_VARARGS, select_doc}, From buildbot at python.org Wed Mar 26 01:26:08 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 00:26:08 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080326002608.A29BC1E4006@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/211 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,benjamin.peterson BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 01:30:02 2008 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 26 Mar 2008 01:30:02 +0100 (CET) Subject: [Python-checkins] r61919 - python/trunk/Doc/whatsnew/2.6.rst Message-ID: <20080326003002.E92351E4006@bag.python.org> Author: andrew.kuchling Date: Wed Mar 26 01:30:02 2008 New Revision: 61919 Modified: python/trunk/Doc/whatsnew/2.6.rst Log: Add various items Modified: python/trunk/Doc/whatsnew/2.6.rst ============================================================================== --- python/trunk/Doc/whatsnew/2.6.rst (original) +++ python/trunk/Doc/whatsnew/2.6.rst Wed Mar 26 01:30:02 2008 @@ -555,10 +555,11 @@ Format specifiers can reference other fields through nesting:: fmt = '{0:{1}}' - fmt.format('Invoice #1234', width) -> - 'Invoice #1234 ' fmt.format('Invoice #1234', 15) -> 'Invoice #1234 ' + width = 35 + fmt.format('Invoice #1234', width) -> + 'Invoice #1234 ' The alignment of a field within the desired width can be specified: @@ -571,11 +572,38 @@ = (For numeric types only) Pad after the sign. ================ ============================================ -Format data types:: - - ... XXX take table from PEP 3101 +Format specifiers can also include a presentation type, which +controls how the value is formatted. For example, floating-point numbers +can be formatted as a general number or in exponential notation: + + >>> '{0:g}'.format(3.75) + '3.75' + >>> '{0:e}'.format(3.75) + '3.750000e+00' + +A variety of presentation types are available. Consult the 2.6 +documentation for a complete list (XXX add link, once it's in the 2.6 +docs), but here's a sample:: + + 'b' - Binary. Outputs the number in base 2. + 'c' - Character. Converts the integer to the corresponding + Unicode character before printing. + 'd' - Decimal Integer. Outputs the number in base 10. + 'o' - Octal format. Outputs the number in base 8. + 'x' - Hex format. Outputs the number in base 16, using lower- + case letters for the digits above 9. + 'e' - Exponent notation. 
Prints the number in scientific + notation using the letter 'e' to indicate the exponent. + 'g' - General format. This prints the number as a fixed-point + number, unless the number is too large, in which case + it switches to 'e' exponent notation. + 'n' - Number. This is the same as 'g', except that it uses the + current locale setting to insert the appropriate + number separator characters. + '%' - Percentage. Multiplies the number by 100 and displays + in fixed ('f') format, followed by a percent sign. -Classes and types can define a __format__ method to control how it's +Classes and types can define a __format__ method to control how they're formatted. It receives a single argument, the format specifier:: def __format__(self, format_spec): @@ -610,7 +638,6 @@ Python 2.6 has a ``__future__`` import that removes ``print`` as language syntax, letting you use the functional form instead. For example:: - XXX need to check from __future__ import print_function print('# of entries', len(dictionary), file=sys.stderr) @@ -701,6 +728,21 @@ .. ====================================================================== +.. _pep-3118: + +PEP 3118: Revised Buffer Protocol +===================================================== + +The buffer protocol is a C-level API that lets Python extensions +XXX + +.. seealso:: + + :pep:`3118` - Revising the buffer protocol + PEP written by Travis Oliphant and Carl Banks. + +.. ====================================================================== + .. _pep-3119: PEP 3119: Abstract Base Classes @@ -1082,7 +1124,7 @@ by using pymalloc for the Unicode string's data. * The ``with`` statement now stores the :meth:`__exit__` method on the stack, - producing a small speedup. (Implemented by Nick Coghlan.) + producing a small speedup. (Implemented by Jeffrey Yasskin.) * To reduce memory usage, the garbage collector will now clear internal free lists when garbage-collecting the highest generation of objects. @@ -1361,10 +1403,8 @@ the forward search. (Contributed by John Lenton.) -* The :mod:`new` module has been removed from Python 3.0. - Importing it therefore - triggers a warning message when Python is running in 3.0-warning - mode. +* (3.0-warning mode) The :mod:`new` module has been removed from + Python 3.0. Importing it therefore triggers a warning message. * The :mod:`operator` module gained a :func:`methodcaller` function that takes a name and an optional @@ -1483,6 +1523,14 @@ .. Issue 1727780 + The new ``triangular(low, high, mode)`` function returns random + numbers following a triangular distribution. The returned values + are between *low* and *high*, not including *high* itself, and + with *mode* as the mode, the most frequently occurring value + in the distribution. (Contributed by Raymond Hettinger. XXX check) + + .. Patch 1681432 + * Long regular expression searches carried out by the :mod:`re` module will now check for signals being delivered, so especially long searches can now be interrupted. @@ -1500,6 +1548,16 @@ .. Patch 1861 +* The :mod:`select` module now has wrapper functions + for the Linux :cfunc:`epoll` and BSD :cfunc:`kqueue` system calls. + Also, a :meth:`modify` method was added to the existing :class:`poll` + objects; ``pollobj.modify(fd, eventmask)`` takes a file descriptor + or file object and an event mask, + + (Contributed by XXX.) + + .. Patch 1657 + * The :mod:`sets` module has been deprecated; it's better to use the built-in :class:`set` and :class:`frozenset` types. 
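The poll.modify() entry a few items above is terse, so here is a hedged sketch of how the call sits next to register(); it assumes a platform where select.poll exists (it does not on Windows) and uses a plain TCP socket only because poll needs something with a fileno()::

    import select
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    poller = select.poll()

    # Start out watching for readability.
    poller.register(sock, select.POLLIN)

    # Later, switch the event mask in place instead of unregistering and
    # re-registering the descriptor.
    poller.modify(sock, select.POLLOUT)

    poller.unregister(sock)
    sock.close()

As the corrected docstring in the selectmodule.c change earlier says, modify() is for a descriptor that is already registered, so register() has to come first.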
@@ -1948,9 +2006,8 @@ Porting to Python 2.6 ===================== -This section lists previously described changes, and a few -esoteric bugfixes, that may require changes to your -code: +This section lists previously described changes and other bugfixes +that may require changes to your code: * The :meth:`__init__` method of :class:`collections.deque` now clears any existing contents of the deque @@ -1986,7 +2043,11 @@ .. Issue 1330538 -* In 3.0-warning mode, inequality comparisons between two dictionaries +* (3.0-warning mode) The :class:`Exception` class now warns + when accessed using slicing or index access; having + :class:`Exception` behave like a tuple is being phased out. + +* (3.0-warning mode) inequality comparisons between two dictionaries or two objects that don't implement comparison methods are reported as warnings. ``dict1 == dict2`` still works, but ``dict1 < dict2`` is being phased out. From python-checkins at python.org Wed Mar 26 01:44:08 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 01:44:08 +0100 (CET) Subject: [Python-checkins] r61920 - python/branches/trunk-bytearray/Lib/test/test_io.py Message-ID: <20080326004408.864601E5CDB@bag.python.org> Author: christian.heimes Date: Wed Mar 26 01:44:08 2008 New Revision: 61920 Modified: python/branches/trunk-bytearray/Lib/test/test_io.py Log: Disabled last failing test I don't understand what the test is testing and how it suppose to work. Ka-Ping, please check it out. Modified: python/branches/trunk-bytearray/Lib/test/test_io.py ============================================================================== --- python/branches/trunk-bytearray/Lib/test/test_io.py (original) +++ python/branches/trunk-bytearray/Lib/test/test_io.py Wed Mar 26 01:44:08 2008 @@ -894,7 +894,8 @@ f.readline() f.tell() - def testSeekAndTell(self): + # FIXME: figure out why the test fails with Python 2.6 + def XXXtestSeekAndTell(self): """Test seek/tell using the StatefulIncrementalDecoder.""" def lookupTestDecoder(name): From buildbot at python.org Wed Mar 26 05:43:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 04:43:37 +0000 Subject: [Python-checkins] buildbot failure in x86 FreeBSD 2 3.0 Message-ID: <20080326044337.5BB9A1E4006@bag.python.org> The Buildbot has detected a new failure of x86 FreeBSD 2 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20FreeBSD%202%203.0/builds/92 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: werven-freebsd Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 05:55:51 2008 From: python-checkins at python.org (neal.norwitz) Date: Wed, 26 Mar 2008 05:55:51 +0100 (CET) Subject: [Python-checkins] r61922 - python/trunk/Lib/test/test_timeout.py Message-ID: <20080326045551.DD2EF1E4006@bag.python.org> Author: neal.norwitz Date: Wed Mar 26 05:55:51 2008 New Revision: 61922 Modified: python/trunk/Lib/test/test_timeout.py Log: Try to get this test to be less flaky. It was failing sometimes because the connect would succeed before the timeout occurred. Try using an address and port that hopefully doesn't exist to ensure we get no response. If this doesn't work, we can use a public address close to python.org and hopefully that address never gets taken. 
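Outside the test harness, the approach described in this log message looks roughly like the sketch below; the address and timeout are the ones the diff that follows settles on, chosen because the address is unlikely to exist and is written as a dotted IP so no DNS lookup time is mixed into the measurement::

    import socket
    import time

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.001)

    start = time.time()
    try:
        # An address that should not answer: the connect can only end by
        # timing out, never by succeeding.
        sock.connect(('10.0.0.0', 12345))
    except socket.error:
        pass
    elapsed = time.time() - start
    sock.close()
    print('connect gave up after %.4f seconds' % elapsed)

If the private address ever did become reachable, the connect could succeed before the timeout and the flakiness would return, which is why the log message keeps a public address near python.org in reserve as a fallback.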
Modified: python/trunk/Lib/test/test_timeout.py ============================================================================== --- python/trunk/Lib/test/test_timeout.py (original) +++ python/trunk/Lib/test/test_timeout.py Wed Mar 26 05:55:51 2008 @@ -107,24 +107,19 @@ self.sock.close() def testConnectTimeout(self): - # If we are too close to www.python.org, this test will fail. - # Pick a host that should be farther away. - if (socket.getfqdn().split('.')[-2:] == ['python', 'org'] or - socket.getfqdn().split('.')[-2:-1] == ['xs4all']): - self.addr_remote = ('tut.fi', 80) - - # Lookup the IP address to avoid including the DNS lookup time + # Choose a private address that is unlikely to exist to prevent + # failures due to the connect succeeding before the timeout. + # Use a dotted IP address to avoid including the DNS lookup time # with the connect time. This avoids failing the assertion that # the timeout occurred fast enough. - self.addr_remote = (socket.gethostbyname(self.addr_remote[0]), 80) + addr = ('10.0.0.0', 12345) # Test connect() timeout _timeout = 0.001 self.sock.settimeout(_timeout) _t1 = time.time() - self.failUnlessRaises(socket.error, self.sock.connect, - self.addr_remote) + self.failUnlessRaises(socket.error, self.sock.connect, addr) _t2 = time.time() _delta = abs(_t1 - _t2) From buildbot at python.org Wed Mar 26 06:01:27 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 05:01:27 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 3.0 Message-ID: <20080326050127.A3AB71E4006@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%203.0/builds/723 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_mailbox test_winsound ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 From MAILER-DAEMON Wed Mar 26 04:58:13 2008 From: foo 1 From MAILER-DAEMON Wed Mar 26 04:58:13 2008 From: foo 2 From MAILER-DAEMON Wed Mar 26 04:58:13 2008 From: foo 3 From MAIL' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", 
line 718, in tearDown self._delete_recursively(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Wed Mar 26 04:58:17 2008 From: foo 1 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Wed Mar 26 04:58:17 2008 From: foo 2 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Wed Mar 26 04' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 941, in tearDown self._delete_recursively(self._path) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:11 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message 
(test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:12 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open 
self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:13 2008\n\nFrom: foo\n\n1' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 742, in 
test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:17 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 04:58:18 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: 
\nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by 
andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestBabyl) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: test_alias_asterisk (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_winsound.py", line 87, in test_alias_asterisk winsound.PlaySound('SystemAsterisk', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exclamation (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_winsound.py", line 97, in test_alias_exclamation winsound.PlaySound('SystemExclamation', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exit (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_winsound.py", line 107, in test_alias_exit winsound.PlaySound('SystemExit', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_hand (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_winsound.py", line 117, in test_alias_hand winsound.PlaySound('SystemHand', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_question (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\3.0.heller-windows\build\lib\test\test_winsound.py", line 127, in test_alias_question winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS) RuntimeError: Failed to play sound sincerely, -The Buildbot From 
python-checkins at python.org Wed Mar 26 06:03:03 2008
From: python-checkins at python.org (jerry.seutter)
Date: Wed, 26 Mar 2008 06:03:03 +0100 (CET)
Subject: [Python-checkins] r61923 - python/trunk/Lib/test/test_imaplib.py
Message-ID: <20080326050303.5F4AF1E4006@bag.python.org>

Author: jerry.seutter
Date: Wed Mar 26 06:03:03 2008
New Revision: 61923

Modified:
   python/trunk/Lib/test/test_imaplib.py
Log:
Changed test so it no longer runs as a side effect of importing.

Modified: python/trunk/Lib/test/test_imaplib.py
==============================================================================
--- python/trunk/Lib/test/test_imaplib.py	(original)
+++ python/trunk/Lib/test/test_imaplib.py	Wed Mar 26 06:03:03 2008
@@ -1,12 +1,25 @@
 import imaplib
 import time
-# We can check only that it successfully produces a result,
-# not the correctness of the result itself, since the result
-# depends on the timezone the machine is in.
+from test import test_support
+import unittest
-timevalues = [2000000000, 2000000000.0, time.localtime(2000000000),
-              '"18-May-2033 05:33:20 +0200"']
-for t in timevalues:
-    imaplib.Time2Internaldate(t)
+class TestImaplib(unittest.TestCase):
+    def test_that_Time2Internaldate_returns_a_result(self):
+        # We can check only that it successfully produces a result,
+        # not the correctness of the result itself, since the result
+        # depends on the timezone the machine is in.
+        timevalues = [2000000000, 2000000000.0, time.localtime(2000000000),
+                      '"18-May-2033 05:33:20 +0200"']
+
+        for t in timevalues:
+            imaplib.Time2Internaldate(t)
+
+
+def test_main():
+    test_support.run_unittest(TestImaplib)
+
+
+if __name__ == "__main__":
+    unittest.main()

From python-checkins at python.org Wed Mar 26 06:19:41 2008
From: python-checkins at python.org (neal.norwitz)
Date: Wed, 26 Mar 2008 06:19:41 +0100 (CET)
Subject: [Python-checkins] r61924 - python/trunk/Lib/test/test_mailbox.py
Message-ID: <20080326051941.977E81E4006@bag.python.org>

Author: neal.norwitz
Date: Wed Mar 26 06:19:41 2008
New Revision: 61924

Modified:
   python/trunk/Lib/test/test_mailbox.py
Log:
Ensure that the mailbox is closed to prevent problems on Windows with
removing an open file. This doesn't seem to be a problem in 2.6, but that
appears to be somewhat accidental (specific to reference counting). When
this gets merged to 3.0, it will make the 3.0 code simpler.
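The log message above captures the whole issue: on Windows a file cannot be
removed while a handle to it is still open, and in CPython 2.6 the handle is
released only because reference counting happens to collect the mailbox
object. A minimal sketch of that behaviour follows; it is an illustration
under stated assumptions (a throwaway temporary path, mbox format), not code
taken from the checkin itself:

    # Sketch: why the test now closes the mailbox explicitly before
    # reopening or removing it.  The Windows C runtime typically opens
    # files without delete sharing, so os.remove() fails while any
    # handle to the file is still open.
    import mailbox
    import os
    import tempfile

    tmpdir = tempfile.mkdtemp()
    path = os.path.join(tmpdir, 'test.mbox')   # assumed throwaway path

    box = mailbox.mbox(path)                   # keeps the file open until close()
    box.add('From: foo\n\n0\n')
    box.flush()

    # Relying on "del box" to close the file works in CPython only because
    # of reference counting; closing explicitly guarantees the handle is
    # released before the file is removed, which is what the diff below
    # arranges through the new should_call_close flag in
    # _test_flush_or_close().
    box.close()

    os.remove(path)    # would fail on Windows if the mailbox were still open
    os.rmdir(tmpdir)

The diff that follows applies the same idea inside the test helper.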
Modified: python/trunk/Lib/test/test_mailbox.py ============================================================================== --- python/trunk/Lib/test/test_mailbox.py (original) +++ python/trunk/Lib/test/test_mailbox.py Wed Mar 26 06:19:41 2008 @@ -379,7 +379,7 @@ def test_flush(self): # Write changes to disk - self._test_flush_or_close(self._box.flush) + self._test_flush_or_close(self._box.flush, True) def test_lock_unlock(self): # Lock and unlock the mailbox @@ -391,14 +391,16 @@ def test_close(self): # Close mailbox and flush changes to disk - self._test_flush_or_close(self._box.close) + self._test_flush_or_close(self._box.close, False) - def _test_flush_or_close(self, method): + def _test_flush_or_close(self, method, should_call_close): contents = [self._template % i for i in xrange(3)] self._box.add(contents[0]) self._box.add(contents[1]) self._box.add(contents[2]) method() + if should_call_close: + self._box.close() self._box = self._factory(self._path) keys = self._box.keys() self.assert_(len(keys) == 3) From python-checkins at python.org Wed Mar 26 06:32:52 2008 From: python-checkins at python.org (jerry.seutter) Date: Wed, 26 Mar 2008 06:32:52 +0100 (CET) Subject: [Python-checkins] r61925 - python/trunk/Lib/test/test_format.py Message-ID: <20080326053252.7448B1E4006@bag.python.org> Author: jerry.seutter Date: Wed Mar 26 06:32:51 2008 New Revision: 61925 Modified: python/trunk/Lib/test/test_format.py Log: Changed test so it no longer runs as a side effect of importing. Modified: python/trunk/Lib/test/test_format.py ============================================================================== --- python/trunk/Lib/test/test_format.py (original) +++ python/trunk/Lib/test/test_format.py Wed Mar 26 06:32:51 2008 @@ -1,7 +1,9 @@ -from test.test_support import verbose, have_unicode, TestFailed import sys -from test.test_support import MAX_Py_ssize_t -maxsize = MAX_Py_ssize_t +from test.test_support import verbose, have_unicode, TestFailed +import test.test_support as test_support +import unittest + +maxsize = test_support.MAX_Py_ssize_t # test string formatting operator (I am not sure if this is being tested # elsewhere but, surely, some of the given cases are *not* tested because @@ -11,6 +13,7 @@ overflowok = 1 overflowrequired = 0 + def testformat(formatstr, args, output=None, limit=None): if verbose: if output: @@ -51,232 +54,242 @@ if verbose: print 'yes' + def testboth(formatstr, *args): testformat(formatstr, *args) if have_unicode: testformat(unicode(formatstr), *args) -testboth("%.1d", (1,), "1") -testboth("%.*d", (sys.maxint,1)) # expect overflow -testboth("%.100d", (1,), '0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001') -testboth("%#.117x", (1,), '0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001') -testboth("%#.118x", (1,), '0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001') - -testboth("%f", (1.0,), "1.000000") -# these are trying to test the limits of the internal magic-number-length -# formatting buffer, if that number changes then these tests are less -# effective -testboth("%#.*g", (109, -1.e+49/3.)) -testboth("%#.*g", (110, -1.e+49/3.)) -testboth("%#.*g", (110, -1.e+100/3.)) - -# test some ridiculously large precision, expect overflow -testboth('%12.*f', (123456, 1.0)) - -# check for internal overflow validation on length of precision -overflowrequired = 1 -testboth("%#.*g", 
(110, -1.e+100/3.)) -testboth("%#.*G", (110, -1.e+100/3.)) -testboth("%#.*f", (110, -1.e+100/3.)) -testboth("%#.*F", (110, -1.e+100/3.)) -overflowrequired = 0 +class FormatTest(unittest.TestCase): + def test_format(self): + testboth("%.1d", (1,), "1") + testboth("%.*d", (sys.maxint,1)) # expect overflow + testboth("%.100d", (1,), '0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001') + testboth("%#.117x", (1,), '0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001') + testboth("%#.118x", (1,), '0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001') + + testboth("%f", (1.0,), "1.000000") + # these are trying to test the limits of the internal magic-number-length + # formatting buffer, if that number changes then these tests are less + # effective + testboth("%#.*g", (109, -1.e+49/3.)) + testboth("%#.*g", (110, -1.e+49/3.)) + testboth("%#.*g", (110, -1.e+100/3.)) + + # test some ridiculously large precision, expect overflow + testboth('%12.*f', (123456, 1.0)) + + # check for internal overflow validation on length of precision + overflowrequired = 1 + testboth("%#.*g", (110, -1.e+100/3.)) + testboth("%#.*G", (110, -1.e+100/3.)) + testboth("%#.*f", (110, -1.e+100/3.)) + testboth("%#.*F", (110, -1.e+100/3.)) + overflowrequired = 0 + + # Formatting of long integers. Overflow is not ok + overflowok = 0 + testboth("%x", 10L, "a") + testboth("%x", 100000000000L, "174876e800") + testboth("%o", 10L, "12") + testboth("%o", 100000000000L, "1351035564000") + testboth("%d", 10L, "10") + testboth("%d", 100000000000L, "100000000000") + + big = 123456789012345678901234567890L + testboth("%d", big, "123456789012345678901234567890") + testboth("%d", -big, "-123456789012345678901234567890") + testboth("%5d", -big, "-123456789012345678901234567890") + testboth("%31d", -big, "-123456789012345678901234567890") + testboth("%32d", -big, " -123456789012345678901234567890") + testboth("%-32d", -big, "-123456789012345678901234567890 ") + testboth("%032d", -big, "-0123456789012345678901234567890") + testboth("%-032d", -big, "-123456789012345678901234567890 ") + testboth("%034d", -big, "-000123456789012345678901234567890") + testboth("%034d", big, "0000123456789012345678901234567890") + testboth("%0+34d", big, "+000123456789012345678901234567890") + testboth("%+34d", big, " +123456789012345678901234567890") + testboth("%34d", big, " 123456789012345678901234567890") + testboth("%.2d", big, "123456789012345678901234567890") + testboth("%.30d", big, "123456789012345678901234567890") + testboth("%.31d", big, "0123456789012345678901234567890") + testboth("%32.31d", big, " 0123456789012345678901234567890") + testboth("%d", float(big), "123456________________________", 6) + + big = 0x1234567890abcdef12345L # 21 hex digits + testboth("%x", big, "1234567890abcdef12345") + testboth("%x", -big, "-1234567890abcdef12345") + testboth("%5x", -big, "-1234567890abcdef12345") + testboth("%22x", -big, "-1234567890abcdef12345") + testboth("%23x", -big, " -1234567890abcdef12345") + testboth("%-23x", -big, "-1234567890abcdef12345 ") + testboth("%023x", -big, "-01234567890abcdef12345") + testboth("%-023x", -big, "-1234567890abcdef12345 ") + testboth("%025x", -big, "-0001234567890abcdef12345") + testboth("%025x", big, "00001234567890abcdef12345") + testboth("%0+25x", big, "+0001234567890abcdef12345") + testboth("%+25x", big, " +1234567890abcdef12345") + 
testboth("%25x", big, " 1234567890abcdef12345") + testboth("%.2x", big, "1234567890abcdef12345") + testboth("%.21x", big, "1234567890abcdef12345") + testboth("%.22x", big, "01234567890abcdef12345") + testboth("%23.22x", big, " 01234567890abcdef12345") + testboth("%-23.22x", big, "01234567890abcdef12345 ") + testboth("%X", big, "1234567890ABCDEF12345") + testboth("%#X", big, "0X1234567890ABCDEF12345") + testboth("%#x", big, "0x1234567890abcdef12345") + testboth("%#x", -big, "-0x1234567890abcdef12345") + testboth("%#.23x", -big, "-0x001234567890abcdef12345") + testboth("%#+.23x", big, "+0x001234567890abcdef12345") + testboth("%# .23x", big, " 0x001234567890abcdef12345") + testboth("%#+.23X", big, "+0X001234567890ABCDEF12345") + testboth("%#-+.23X", big, "+0X001234567890ABCDEF12345") + testboth("%#-+26.23X", big, "+0X001234567890ABCDEF12345") + testboth("%#-+27.23X", big, "+0X001234567890ABCDEF12345 ") + testboth("%#+27.23X", big, " +0X001234567890ABCDEF12345") + # next one gets two leading zeroes from precision, and another from the + # 0 flag and the width + testboth("%#+027.23X", big, "+0X0001234567890ABCDEF12345") + # same, except no 0 flag + testboth("%#+27.23X", big, " +0X001234567890ABCDEF12345") + testboth("%x", float(big), "123456_______________", 6) + + big = 012345670123456701234567012345670L # 32 octal digits + testboth("%o", big, "12345670123456701234567012345670") + testboth("%o", -big, "-12345670123456701234567012345670") + testboth("%5o", -big, "-12345670123456701234567012345670") + testboth("%33o", -big, "-12345670123456701234567012345670") + testboth("%34o", -big, " -12345670123456701234567012345670") + testboth("%-34o", -big, "-12345670123456701234567012345670 ") + testboth("%034o", -big, "-012345670123456701234567012345670") + testboth("%-034o", -big, "-12345670123456701234567012345670 ") + testboth("%036o", -big, "-00012345670123456701234567012345670") + testboth("%036o", big, "000012345670123456701234567012345670") + testboth("%0+36o", big, "+00012345670123456701234567012345670") + testboth("%+36o", big, " +12345670123456701234567012345670") + testboth("%36o", big, " 12345670123456701234567012345670") + testboth("%.2o", big, "12345670123456701234567012345670") + testboth("%.32o", big, "12345670123456701234567012345670") + testboth("%.33o", big, "012345670123456701234567012345670") + testboth("%34.33o", big, " 012345670123456701234567012345670") + testboth("%-34.33o", big, "012345670123456701234567012345670 ") + testboth("%o", big, "12345670123456701234567012345670") + testboth("%#o", big, "012345670123456701234567012345670") + testboth("%#o", -big, "-012345670123456701234567012345670") + testboth("%#.34o", -big, "-0012345670123456701234567012345670") + testboth("%#+.34o", big, "+0012345670123456701234567012345670") + testboth("%# .34o", big, " 0012345670123456701234567012345670") + testboth("%#+.34o", big, "+0012345670123456701234567012345670") + testboth("%#-+.34o", big, "+0012345670123456701234567012345670") + testboth("%#-+37.34o", big, "+0012345670123456701234567012345670 ") + testboth("%#+37.34o", big, " +0012345670123456701234567012345670") + # next one gets one leading zero from precision + testboth("%.33o", big, "012345670123456701234567012345670") + # base marker shouldn't change that, since "0" is redundant + testboth("%#.33o", big, "012345670123456701234567012345670") + # but reduce precision, and base marker should add a zero + testboth("%#.32o", big, "012345670123456701234567012345670") + # one leading zero from precision, and another from "0" flag & width + 
testboth("%034.33o", big, "0012345670123456701234567012345670") + # base marker shouldn't change that + testboth("%0#34.33o", big, "0012345670123456701234567012345670") + testboth("%o", float(big), "123456__________________________", 6) + + # Some small ints, in both Python int and long flavors). + testboth("%d", 42, "42") + testboth("%d", -42, "-42") + testboth("%d", 42L, "42") + testboth("%d", -42L, "-42") + testboth("%d", 42.0, "42") + testboth("%#x", 1, "0x1") + testboth("%#x", 1L, "0x1") + testboth("%#X", 1, "0X1") + testboth("%#X", 1L, "0X1") + testboth("%#x", 1.0, "0x1") + testboth("%#o", 1, "01") + testboth("%#o", 1L, "01") + testboth("%#o", 0, "0") + testboth("%#o", 0L, "0") + testboth("%o", 0, "0") + testboth("%o", 0L, "0") + testboth("%d", 0, "0") + testboth("%d", 0L, "0") + testboth("%#x", 0, "0x0") + testboth("%#x", 0L, "0x0") + testboth("%#X", 0, "0X0") + testboth("%#X", 0L, "0X0") + + testboth("%x", 0x42, "42") + testboth("%x", -0x42, "-42") + testboth("%x", 0x42L, "42") + testboth("%x", -0x42L, "-42") + testboth("%x", float(0x42), "42") + + testboth("%o", 042, "42") + testboth("%o", -042, "-42") + testboth("%o", 042L, "42") + testboth("%o", -042L, "-42") + testboth("%o", float(042), "42") -# Formatting of long integers. Overflow is not ok -overflowok = 0 -testboth("%x", 10L, "a") -testboth("%x", 100000000000L, "174876e800") -testboth("%o", 10L, "12") -testboth("%o", 100000000000L, "1351035564000") -testboth("%d", 10L, "10") -testboth("%d", 100000000000L, "100000000000") - -big = 123456789012345678901234567890L -testboth("%d", big, "123456789012345678901234567890") -testboth("%d", -big, "-123456789012345678901234567890") -testboth("%5d", -big, "-123456789012345678901234567890") -testboth("%31d", -big, "-123456789012345678901234567890") -testboth("%32d", -big, " -123456789012345678901234567890") -testboth("%-32d", -big, "-123456789012345678901234567890 ") -testboth("%032d", -big, "-0123456789012345678901234567890") -testboth("%-032d", -big, "-123456789012345678901234567890 ") -testboth("%034d", -big, "-000123456789012345678901234567890") -testboth("%034d", big, "0000123456789012345678901234567890") -testboth("%0+34d", big, "+000123456789012345678901234567890") -testboth("%+34d", big, " +123456789012345678901234567890") -testboth("%34d", big, " 123456789012345678901234567890") -testboth("%.2d", big, "123456789012345678901234567890") -testboth("%.30d", big, "123456789012345678901234567890") -testboth("%.31d", big, "0123456789012345678901234567890") -testboth("%32.31d", big, " 0123456789012345678901234567890") -testboth("%d", float(big), "123456________________________", 6) - -big = 0x1234567890abcdef12345L # 21 hex digits -testboth("%x", big, "1234567890abcdef12345") -testboth("%x", -big, "-1234567890abcdef12345") -testboth("%5x", -big, "-1234567890abcdef12345") -testboth("%22x", -big, "-1234567890abcdef12345") -testboth("%23x", -big, " -1234567890abcdef12345") -testboth("%-23x", -big, "-1234567890abcdef12345 ") -testboth("%023x", -big, "-01234567890abcdef12345") -testboth("%-023x", -big, "-1234567890abcdef12345 ") -testboth("%025x", -big, "-0001234567890abcdef12345") -testboth("%025x", big, "00001234567890abcdef12345") -testboth("%0+25x", big, "+0001234567890abcdef12345") -testboth("%+25x", big, " +1234567890abcdef12345") -testboth("%25x", big, " 1234567890abcdef12345") -testboth("%.2x", big, "1234567890abcdef12345") -testboth("%.21x", big, "1234567890abcdef12345") -testboth("%.22x", big, "01234567890abcdef12345") -testboth("%23.22x", big, " 01234567890abcdef12345") 
-testboth("%-23.22x", big, "01234567890abcdef12345 ") -testboth("%X", big, "1234567890ABCDEF12345") -testboth("%#X", big, "0X1234567890ABCDEF12345") -testboth("%#x", big, "0x1234567890abcdef12345") -testboth("%#x", -big, "-0x1234567890abcdef12345") -testboth("%#.23x", -big, "-0x001234567890abcdef12345") -testboth("%#+.23x", big, "+0x001234567890abcdef12345") -testboth("%# .23x", big, " 0x001234567890abcdef12345") -testboth("%#+.23X", big, "+0X001234567890ABCDEF12345") -testboth("%#-+.23X", big, "+0X001234567890ABCDEF12345") -testboth("%#-+26.23X", big, "+0X001234567890ABCDEF12345") -testboth("%#-+27.23X", big, "+0X001234567890ABCDEF12345 ") -testboth("%#+27.23X", big, " +0X001234567890ABCDEF12345") -# next one gets two leading zeroes from precision, and another from the -# 0 flag and the width -testboth("%#+027.23X", big, "+0X0001234567890ABCDEF12345") -# same, except no 0 flag -testboth("%#+27.23X", big, " +0X001234567890ABCDEF12345") -testboth("%x", float(big), "123456_______________", 6) - -big = 012345670123456701234567012345670L # 32 octal digits -testboth("%o", big, "12345670123456701234567012345670") -testboth("%o", -big, "-12345670123456701234567012345670") -testboth("%5o", -big, "-12345670123456701234567012345670") -testboth("%33o", -big, "-12345670123456701234567012345670") -testboth("%34o", -big, " -12345670123456701234567012345670") -testboth("%-34o", -big, "-12345670123456701234567012345670 ") -testboth("%034o", -big, "-012345670123456701234567012345670") -testboth("%-034o", -big, "-12345670123456701234567012345670 ") -testboth("%036o", -big, "-00012345670123456701234567012345670") -testboth("%036o", big, "000012345670123456701234567012345670") -testboth("%0+36o", big, "+00012345670123456701234567012345670") -testboth("%+36o", big, " +12345670123456701234567012345670") -testboth("%36o", big, " 12345670123456701234567012345670") -testboth("%.2o", big, "12345670123456701234567012345670") -testboth("%.32o", big, "12345670123456701234567012345670") -testboth("%.33o", big, "012345670123456701234567012345670") -testboth("%34.33o", big, " 012345670123456701234567012345670") -testboth("%-34.33o", big, "012345670123456701234567012345670 ") -testboth("%o", big, "12345670123456701234567012345670") -testboth("%#o", big, "012345670123456701234567012345670") -testboth("%#o", -big, "-012345670123456701234567012345670") -testboth("%#.34o", -big, "-0012345670123456701234567012345670") -testboth("%#+.34o", big, "+0012345670123456701234567012345670") -testboth("%# .34o", big, " 0012345670123456701234567012345670") -testboth("%#+.34o", big, "+0012345670123456701234567012345670") -testboth("%#-+.34o", big, "+0012345670123456701234567012345670") -testboth("%#-+37.34o", big, "+0012345670123456701234567012345670 ") -testboth("%#+37.34o", big, " +0012345670123456701234567012345670") -# next one gets one leading zero from precision -testboth("%.33o", big, "012345670123456701234567012345670") -# base marker shouldn't change that, since "0" is redundant -testboth("%#.33o", big, "012345670123456701234567012345670") -# but reduce precision, and base marker should add a zero -testboth("%#.32o", big, "012345670123456701234567012345670") -# one leading zero from precision, and another from "0" flag & width -testboth("%034.33o", big, "0012345670123456701234567012345670") -# base marker shouldn't change that -testboth("%0#34.33o", big, "0012345670123456701234567012345670") -testboth("%o", float(big), "123456__________________________", 6) - -# Some small ints, in both Python int and long flavors). 
-testboth("%d", 42, "42") -testboth("%d", -42, "-42") -testboth("%d", 42L, "42") -testboth("%d", -42L, "-42") -testboth("%d", 42.0, "42") -testboth("%#x", 1, "0x1") -testboth("%#x", 1L, "0x1") -testboth("%#X", 1, "0X1") -testboth("%#X", 1L, "0X1") -testboth("%#x", 1.0, "0x1") -testboth("%#o", 1, "01") -testboth("%#o", 1L, "01") -testboth("%#o", 0, "0") -testboth("%#o", 0L, "0") -testboth("%o", 0, "0") -testboth("%o", 0L, "0") -testboth("%d", 0, "0") -testboth("%d", 0L, "0") -testboth("%#x", 0, "0x0") -testboth("%#x", 0L, "0x0") -testboth("%#X", 0, "0X0") -testboth("%#X", 0L, "0X0") - -testboth("%x", 0x42, "42") -testboth("%x", -0x42, "-42") -testboth("%x", 0x42L, "42") -testboth("%x", -0x42L, "-42") -testboth("%x", float(0x42), "42") - -testboth("%o", 042, "42") -testboth("%o", -042, "-42") -testboth("%o", 042L, "42") -testboth("%o", -042L, "-42") -testboth("%o", float(042), "42") - -# Test exception for unknown format characters -if verbose: - print 'Testing exceptions' + # Test exception for unknown format characters + if verbose: + print 'Testing exceptions' -def test_exc(formatstr, args, exception, excmsg): - try: - testformat(formatstr, args) - except exception, exc: - if str(exc) == excmsg: - if verbose: - print "yes" - else: - if verbose: print 'no' - print 'Unexpected ', exception, ':', repr(str(exc)) - except: - if verbose: print 'no' - print 'Unexpected exception' - raise - else: - raise TestFailed, 'did not get expected exception: %s' % excmsg + def test_exc(formatstr, args, exception, excmsg): + try: + testformat(formatstr, args) + except exception, exc: + if str(exc) == excmsg: + if verbose: + print "yes" + else: + if verbose: print 'no' + print 'Unexpected ', exception, ':', repr(str(exc)) + except: + if verbose: print 'no' + print 'Unexpected exception' + raise + else: + raise TestFailed, 'did not get expected exception: %s' % excmsg + + test_exc('abc %a', 1, ValueError, + "unsupported format character 'a' (0x61) at index 5") + if have_unicode: + test_exc(unicode('abc %\u3000','raw-unicode-escape'), 1, ValueError, + "unsupported format character '?' (0x3000) at index 5") + + test_exc('%d', '1', TypeError, "%d format: a number is required, not str") + test_exc('%g', '1', TypeError, "float argument required, not str") + test_exc('no format', '1', TypeError, + "not all arguments converted during string formatting") + test_exc('no format', u'1', TypeError, + "not all arguments converted during string formatting") + test_exc(u'no format', '1', TypeError, + "not all arguments converted during string formatting") + test_exc(u'no format', u'1', TypeError, + "not all arguments converted during string formatting") + + class Foobar(long): + def __oct__(self): + # Returning a non-string should not blow up. + return self + 1 + + test_exc('%o', Foobar(), TypeError, + "expected string or Unicode object, long found") + + if maxsize == 2**31-1: + # crashes 2.2.1 and earlier: + try: + "%*d"%(maxsize, -127) + except MemoryError: + pass + else: + raise TestFailed, '"%*d"%(maxsize, -127) should fail' -test_exc('abc %a', 1, ValueError, - "unsupported format character 'a' (0x61) at index 5") -if have_unicode: - test_exc(unicode('abc %\u3000','raw-unicode-escape'), 1, ValueError, - "unsupported format character '?' 
(0x3000) at index 5") - -test_exc('%d', '1', TypeError, "%d format: a number is required, not str") -test_exc('%g', '1', TypeError, "float argument required, not str") -test_exc('no format', '1', TypeError, - "not all arguments converted during string formatting") -test_exc('no format', u'1', TypeError, - "not all arguments converted during string formatting") -test_exc(u'no format', '1', TypeError, - "not all arguments converted during string formatting") -test_exc(u'no format', u'1', TypeError, - "not all arguments converted during string formatting") - -class Foobar(long): - def __oct__(self): - # Returning a non-string should not blow up. - return self + 1 +def test_main(): + test_support.run_unittest(FormatTest) -test_exc('%o', Foobar(), TypeError, - "expected string or Unicode object, long found") -if maxsize == 2**31-1: - # crashes 2.2.1 and earlier: - try: - "%*d"%(maxsize, -127) - except MemoryError: - pass - else: - raise TestFailed, '"%*d"%(maxsize, -127) should fail' +if __name__ == "__main__": + unittest.main() From buildbot at python.org Wed Mar 26 06:33:40 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 05:33:40 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080326053340.935F61E4006@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1081 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 06:39:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 05:39:19 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080326053920.2FC691E4006@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/66 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\threading.py", line 490, in _bootstrap_inner self.run() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\threading.py", line 446, in run self._target(*self._args, **self._kwargs) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_smtplib.py", line 116, in debugging_server poll_fun(0.01, asyncore.socket_map) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine 7 tests failed: test_mailbox test_smtplib test_ssl test_threading test_urllib2net test_urllibnet test_xmlrpc_net ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 From MAILER-DAEMON Sat Apr 08 05:43:58 2000 From: foo 1 From MAILER-DAEMON Sat Apr 08 05:43:58 2000 From: foo 2 From MAILER-DAEMON Sat Apr 08 05:43:58 2000 From: foo 3 From MAIL' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 718, in tearDown self._delete_recursively(self._path) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Sat Apr 08 05:44:05 2000 From: foo 1 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Sat Apr 08 05:44:05 2000 From: foo 2 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Sat Apr 08 05' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 941, in tearDown self._delete_recursively(self._path) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 47, in _delete_recursively os.remove(target) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:54 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== 
FAIL: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:56 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:58 2000\n\nFrom: foo\n\n1' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:43:59 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 758, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:00 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 725, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:02 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 742, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:05 2000\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Sat Apr 08 05:44:06 2000\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 412, in test_dump_message _sample_message.replace('\n', os.linesep)) 
AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 400, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 
(EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove 
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 822, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File 
"C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 843, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 865, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 738, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: 
Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_bad_address (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllib2net.py", line 160, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_bad_address (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllibnet.py", line 145, in test_bad_address urllib.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 38, in test_current_time self.assert_(delta.days <= 1) AssertionError: None sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 06:58:15 2008 From: python-checkins at python.org (jerry.seutter) Date: Wed, 26 Mar 2008 06:58:15 +0100 (CET) Subject: [Python-checkins] r61926 - python/trunk/Lib/test/test_ntpath.py Message-ID: <20080326055815.2608C1E4006@bag.python.org> Author: jerry.seutter Date: Wed Mar 26 06:58:14 2008 New Revision: 61926 Modified: python/trunk/Lib/test/test_ntpath.py Log: Changed test so it no longer runs as a side effect of importing. Modified: python/trunk/Lib/test/test_ntpath.py ============================================================================== --- python/trunk/Lib/test/test_ntpath.py (original) +++ python/trunk/Lib/test/test_ntpath.py Wed Mar 26 06:58:14 2008 @@ -1,174 +1,187 @@ import ntpath -from test.test_support import verbose, TestFailed import os +from test.test_support import verbose, TestFailed +import test.test_support as test_support +import unittest -errors = 0 def tester(fn, wantResult): - global errors fn = fn.replace("\\", "\\\\") gotResult = eval(fn) if wantResult != gotResult: - print "error!" 
- print "evaluated: " + str(fn) - print "should be: " + str(wantResult) - print " returned: " + str(gotResult) - print "" - errors = errors + 1 - -tester('ntpath.splitext("foo.ext")', ('foo', '.ext')) -tester('ntpath.splitext("/foo/foo.ext")', ('/foo/foo', '.ext')) -tester('ntpath.splitext(".ext")', ('.ext', '')) -tester('ntpath.splitext("\\foo.ext\\foo")', ('\\foo.ext\\foo', '')) -tester('ntpath.splitext("foo.ext\\")', ('foo.ext\\', '')) -tester('ntpath.splitext("")', ('', '')) -tester('ntpath.splitext("foo.bar.ext")', ('foo.bar', '.ext')) -tester('ntpath.splitext("xx/foo.bar.ext")', ('xx/foo.bar', '.ext')) -tester('ntpath.splitext("xx\\foo.bar.ext")', ('xx\\foo.bar', '.ext')) -tester('ntpath.splitext("c:a/b\\c.d")', ('c:a/b\\c', '.d')) - -tester('ntpath.splitdrive("c:\\foo\\bar")', - ('c:', '\\foo\\bar')) -tester('ntpath.splitunc("\\\\conky\\mountpoint\\foo\\bar")', - ('\\\\conky\\mountpoint', '\\foo\\bar')) -tester('ntpath.splitdrive("c:/foo/bar")', - ('c:', '/foo/bar')) -tester('ntpath.splitunc("//conky/mountpoint/foo/bar")', - ('//conky/mountpoint', '/foo/bar')) - -tester('ntpath.split("c:\\foo\\bar")', ('c:\\foo', 'bar')) -tester('ntpath.split("\\\\conky\\mountpoint\\foo\\bar")', - ('\\\\conky\\mountpoint\\foo', 'bar')) - -tester('ntpath.split("c:\\")', ('c:\\', '')) -tester('ntpath.split("\\\\conky\\mountpoint\\")', - ('\\\\conky\\mountpoint', '')) - -tester('ntpath.split("c:/")', ('c:/', '')) -tester('ntpath.split("//conky/mountpoint/")', ('//conky/mountpoint', '')) - -tester('ntpath.isabs("c:\\")', 1) -tester('ntpath.isabs("\\\\conky\\mountpoint\\")', 1) -tester('ntpath.isabs("\\foo")', 1) -tester('ntpath.isabs("\\foo\\bar")', 1) - -tester('ntpath.commonprefix(["/home/swenson/spam", "/home/swen/spam"])', - "/home/swen") -tester('ntpath.commonprefix(["\\home\\swen\\spam", "\\home\\swen\\eggs"])', - "\\home\\swen\\") -tester('ntpath.commonprefix(["/home/swen/spam", "/home/swen/spam"])', - "/home/swen/spam") - -tester('ntpath.join("")', '') -tester('ntpath.join("", "", "")', '') -tester('ntpath.join("a")', 'a') -tester('ntpath.join("/a")', '/a') -tester('ntpath.join("\\a")', '\\a') -tester('ntpath.join("a:")', 'a:') -tester('ntpath.join("a:", "b")', 'a:b') -tester('ntpath.join("a:", "/b")', 'a:/b') -tester('ntpath.join("a:", "\\b")', 'a:\\b') -tester('ntpath.join("a", "/b")', '/b') -tester('ntpath.join("a", "\\b")', '\\b') -tester('ntpath.join("a", "b", "c")', 'a\\b\\c') -tester('ntpath.join("a\\", "b", "c")', 'a\\b\\c') -tester('ntpath.join("a", "b\\", "c")', 'a\\b\\c') -tester('ntpath.join("a", "b", "\\c")', '\\c') -tester('ntpath.join("d:\\", "\\pleep")', 'd:\\pleep') -tester('ntpath.join("d:\\", "a", "b")', 'd:\\a\\b') -tester("ntpath.join('c:', '/a')", 'c:/a') -tester("ntpath.join('c:/', '/a')", 'c:/a') -tester("ntpath.join('c:/a', '/b')", '/b') -tester("ntpath.join('c:', 'd:/')", 'd:/') -tester("ntpath.join('c:/', 'd:/')", 'd:/') -tester("ntpath.join('c:/', 'd:/a/b')", 'd:/a/b') - -tester("ntpath.join('')", '') -tester("ntpath.join('', '', '', '', '')", '') -tester("ntpath.join('a')", 'a') -tester("ntpath.join('', 'a')", 'a') -tester("ntpath.join('', '', '', '', 'a')", 'a') -tester("ntpath.join('a', '')", 'a\\') -tester("ntpath.join('a', '', '', '', '')", 'a\\') -tester("ntpath.join('a\\', '')", 'a\\') -tester("ntpath.join('a\\', '', '', '', '')", 'a\\') - -tester("ntpath.normpath('A//////././//.//B')", r'A\B') -tester("ntpath.normpath('A/./B')", r'A\B') -tester("ntpath.normpath('A/foo/../B')", r'A\B') -tester("ntpath.normpath('C:A//B')", r'C:A\B') 
-tester("ntpath.normpath('D:A/./B')", r'D:A\B') -tester("ntpath.normpath('e:A/foo/../B')", r'e:A\B') - -tester("ntpath.normpath('C:///A//B')", r'C:\A\B') -tester("ntpath.normpath('D:///A/./B')", r'D:\A\B') -tester("ntpath.normpath('e:///A/foo/../B')", r'e:\A\B') - -tester("ntpath.normpath('..')", r'..') -tester("ntpath.normpath('.')", r'.') -tester("ntpath.normpath('')", r'.') -tester("ntpath.normpath('/')", '\\') -tester("ntpath.normpath('c:/')", 'c:\\') -tester("ntpath.normpath('/../.././..')", '\\') -tester("ntpath.normpath('c:/../../..')", 'c:\\') -tester("ntpath.normpath('../.././..')", r'..\..\..') -tester("ntpath.normpath('K:../.././..')", r'K:..\..\..') -tester("ntpath.normpath('C:////a/b')", r'C:\a\b') -tester("ntpath.normpath('//machine/share//a/b')", r'\\machine\share\a\b') - -oldenv = os.environ.copy() -try: - os.environ.clear() - os.environ["foo"] = "bar" - os.environ["{foo"] = "baz1" - os.environ["{foo}"] = "baz2" - tester('ntpath.expandvars("foo")', "foo") - tester('ntpath.expandvars("$foo bar")', "bar bar") - tester('ntpath.expandvars("${foo}bar")', "barbar") - tester('ntpath.expandvars("$[foo]bar")', "$[foo]bar") - tester('ntpath.expandvars("$bar bar")', "$bar bar") - tester('ntpath.expandvars("$?bar")', "$?bar") - tester('ntpath.expandvars("${foo}bar")', "barbar") - tester('ntpath.expandvars("$foo}bar")', "bar}bar") - tester('ntpath.expandvars("${foo")', "${foo") - tester('ntpath.expandvars("${{foo}}")', "baz1}") - tester('ntpath.expandvars("$foo$foo")', "barbar") - tester('ntpath.expandvars("$bar$bar")', "$bar$bar") - tester('ntpath.expandvars("%foo% bar")', "bar bar") - tester('ntpath.expandvars("%foo%bar")', "barbar") - tester('ntpath.expandvars("%foo%%foo%")', "barbar") - tester('ntpath.expandvars("%%foo%%foo%foo%")', "%foo%foobar") - tester('ntpath.expandvars("%?bar%")', "%?bar%") - tester('ntpath.expandvars("%foo%%bar")', "bar%bar") - tester('ntpath.expandvars("\'%foo%\'%bar")', "\'%foo%\'%bar") -finally: - os.environ.clear() - os.environ.update(oldenv) - -# ntpath.abspath() can only be used on a system with the "nt" module -# (reasonably), so we protect this test with "import nt". This allows -# the rest of the tests for the ntpath module to be run to completion -# on any platform, since most of the module is intended to be usable -# from any platform. -try: - import nt -except ImportError: - pass -else: - tester('ntpath.abspath("C:\\")', "C:\\") - -currentdir = os.path.split(os.getcwd())[-1] -tester('ntpath.relpath("a")', 'a') -tester('ntpath.relpath(os.path.abspath("a"))', 'a') -tester('ntpath.relpath("a/b")', 'a\\b') -tester('ntpath.relpath("../a/b")', '..\\a\\b') -tester('ntpath.relpath("a", "../b")', '..\\'+currentdir+'\\a') -tester('ntpath.relpath("a/b", "../c")', '..\\'+currentdir+'\\a\\b') -tester('ntpath.relpath("a", "b/c")', '..\\..\\a') -tester('ntpath.relpath("//conky/mountpoint/a", "//conky/mountpoint/b/c")', '..\\..\\a') -tester('ntpath.relpath("a", "a")', '.') - -if errors: - raise TestFailed(str(errors) + " errors.") -elif verbose: - print "No errors. Thank your lucky stars." 
+ raise TestFailed, "%s should return: %s but returned: %s" \ + %(str(fn), str(wantResult), str(gotResult)) + + +class TestNtpath(unittest.TestCase): + def test_splitext(self): + tester('ntpath.splitext("foo.ext")', ('foo', '.ext')) + tester('ntpath.splitext("/foo/foo.ext")', ('/foo/foo', '.ext')) + tester('ntpath.splitext(".ext")', ('.ext', '')) + tester('ntpath.splitext("\\foo.ext\\foo")', ('\\foo.ext\\foo', '')) + tester('ntpath.splitext("foo.ext\\")', ('foo.ext\\', '')) + tester('ntpath.splitext("")', ('', '')) + tester('ntpath.splitext("foo.bar.ext")', ('foo.bar', '.ext')) + tester('ntpath.splitext("xx/foo.bar.ext")', ('xx/foo.bar', '.ext')) + tester('ntpath.splitext("xx\\foo.bar.ext")', ('xx\\foo.bar', '.ext')) + tester('ntpath.splitext("c:a/b\\c.d")', ('c:a/b\\c', '.d')) + + def test_splitdrive(self): + tester('ntpath.splitdrive("c:\\foo\\bar")', + ('c:', '\\foo\\bar')) + tester('ntpath.splitdrive("c:/foo/bar")', + ('c:', '/foo/bar')) + + def test_splitunc(self): + tester('ntpath.splitunc("\\\\conky\\mountpoint\\foo\\bar")', + ('\\\\conky\\mountpoint', '\\foo\\bar')) + tester('ntpath.splitunc("//conky/mountpoint/foo/bar")', + ('//conky/mountpoint', '/foo/bar')) + + def test_split(self): + tester('ntpath.split("c:\\foo\\bar")', ('c:\\foo', 'bar')) + tester('ntpath.split("\\\\conky\\mountpoint\\foo\\bar")', + ('\\\\conky\\mountpoint\\foo', 'bar')) + + tester('ntpath.split("c:\\")', ('c:\\', '')) + tester('ntpath.split("\\\\conky\\mountpoint\\")', + ('\\\\conky\\mountpoint', '')) + + tester('ntpath.split("c:/")', ('c:/', '')) + tester('ntpath.split("//conky/mountpoint/")', ('//conky/mountpoint', '')) + + def test_isabs(self): + tester('ntpath.isabs("c:\\")', 1) + tester('ntpath.isabs("\\\\conky\\mountpoint\\")', 1) + tester('ntpath.isabs("\\foo")', 1) + tester('ntpath.isabs("\\foo\\bar")', 1) + + def test_commonprefix(self): + tester('ntpath.commonprefix(["/home/swenson/spam", "/home/swen/spam"])', + "/home/swen") + tester('ntpath.commonprefix(["\\home\\swen\\spam", "\\home\\swen\\eggs"])', + "\\home\\swen\\") + tester('ntpath.commonprefix(["/home/swen/spam", "/home/swen/spam"])', + "/home/swen/spam") + + def test_join(self): + tester('ntpath.join("")', '') + tester('ntpath.join("", "", "")', '') + tester('ntpath.join("a")', 'a') + tester('ntpath.join("/a")', '/a') + tester('ntpath.join("\\a")', '\\a') + tester('ntpath.join("a:")', 'a:') + tester('ntpath.join("a:", "b")', 'a:b') + tester('ntpath.join("a:", "/b")', 'a:/b') + tester('ntpath.join("a:", "\\b")', 'a:\\b') + tester('ntpath.join("a", "/b")', '/b') + tester('ntpath.join("a", "\\b")', '\\b') + tester('ntpath.join("a", "b", "c")', 'a\\b\\c') + tester('ntpath.join("a\\", "b", "c")', 'a\\b\\c') + tester('ntpath.join("a", "b\\", "c")', 'a\\b\\c') + tester('ntpath.join("a", "b", "\\c")', '\\c') + tester('ntpath.join("d:\\", "\\pleep")', 'd:\\pleep') + tester('ntpath.join("d:\\", "a", "b")', 'd:\\a\\b') + tester("ntpath.join('c:', '/a')", 'c:/a') + tester("ntpath.join('c:/', '/a')", 'c:/a') + tester("ntpath.join('c:/a', '/b')", '/b') + tester("ntpath.join('c:', 'd:/')", 'd:/') + tester("ntpath.join('c:/', 'd:/')", 'd:/') + tester("ntpath.join('c:/', 'd:/a/b')", 'd:/a/b') + + tester("ntpath.join('')", '') + tester("ntpath.join('', '', '', '', '')", '') + tester("ntpath.join('a')", 'a') + tester("ntpath.join('', 'a')", 'a') + tester("ntpath.join('', '', '', '', 'a')", 'a') + tester("ntpath.join('a', '')", 'a\\') + tester("ntpath.join('a', '', '', '', '')", 'a\\') + tester("ntpath.join('a\\', '')", 'a\\') + tester("ntpath.join('a\\', 
'', '', '', '')", 'a\\') + + def test_normpath(self): + tester("ntpath.normpath('A//////././//.//B')", r'A\B') + tester("ntpath.normpath('A/./B')", r'A\B') + tester("ntpath.normpath('A/foo/../B')", r'A\B') + tester("ntpath.normpath('C:A//B')", r'C:A\B') + tester("ntpath.normpath('D:A/./B')", r'D:A\B') + tester("ntpath.normpath('e:A/foo/../B')", r'e:A\B') + + tester("ntpath.normpath('C:///A//B')", r'C:\A\B') + tester("ntpath.normpath('D:///A/./B')", r'D:\A\B') + tester("ntpath.normpath('e:///A/foo/../B')", r'e:\A\B') + + tester("ntpath.normpath('..')", r'..') + tester("ntpath.normpath('.')", r'.') + tester("ntpath.normpath('')", r'.') + tester("ntpath.normpath('/')", '\\') + tester("ntpath.normpath('c:/')", 'c:\\') + tester("ntpath.normpath('/../.././..')", '\\') + tester("ntpath.normpath('c:/../../..')", 'c:\\') + tester("ntpath.normpath('../.././..')", r'..\..\..') + tester("ntpath.normpath('K:../.././..')", r'K:..\..\..') + tester("ntpath.normpath('C:////a/b')", r'C:\a\b') + tester("ntpath.normpath('//machine/share//a/b')", r'\\machine\share\a\b') + + def test_expandvars(self): + oldenv = os.environ.copy() + try: + os.environ.clear() + os.environ["foo"] = "bar" + os.environ["{foo"] = "baz1" + os.environ["{foo}"] = "baz2" + tester('ntpath.expandvars("foo")', "foo") + tester('ntpath.expandvars("$foo bar")', "bar bar") + tester('ntpath.expandvars("${foo}bar")', "barbar") + tester('ntpath.expandvars("$[foo]bar")', "$[foo]bar") + tester('ntpath.expandvars("$bar bar")', "$bar bar") + tester('ntpath.expandvars("$?bar")', "$?bar") + tester('ntpath.expandvars("${foo}bar")', "barbar") + tester('ntpath.expandvars("$foo}bar")', "bar}bar") + tester('ntpath.expandvars("${foo")', "${foo") + tester('ntpath.expandvars("${{foo}}")', "baz1}") + tester('ntpath.expandvars("$foo$foo")', "barbar") + tester('ntpath.expandvars("$bar$bar")', "$bar$bar") + tester('ntpath.expandvars("%foo% bar")', "bar bar") + tester('ntpath.expandvars("%foo%bar")', "barbar") + tester('ntpath.expandvars("%foo%%foo%")', "barbar") + tester('ntpath.expandvars("%%foo%%foo%foo%")', "%foo%foobar") + tester('ntpath.expandvars("%?bar%")', "%?bar%") + tester('ntpath.expandvars("%foo%%bar")', "bar%bar") + tester('ntpath.expandvars("\'%foo%\'%bar")', "\'%foo%\'%bar") + finally: + os.environ.clear() + os.environ.update(oldenv) + + def test_abspath(self): + # ntpath.abspath() can only be used on a system with the "nt" module + # (reasonably), so we protect this test with "import nt". This allows + # the rest of the tests for the ntpath module to be run to completion + # on any platform, since most of the module is intended to be usable + # from any platform. 
+ try: + import nt + except ImportError: + pass + else: + tester('ntpath.abspath("C:\\")', "C:\\") + + def test_relpath(self): + currentdir = os.path.split(os.getcwd())[-1] + tester('ntpath.relpath("a")', 'a') + tester('ntpath.relpath(os.path.abspath("a"))', 'a') + tester('ntpath.relpath("a/b")', 'a\\b') + tester('ntpath.relpath("../a/b")', '..\\a\\b') + tester('ntpath.relpath("a", "../b")', '..\\'+currentdir+'\\a') + tester('ntpath.relpath("a/b", "../c")', '..\\'+currentdir+'\\a\\b') + tester('ntpath.relpath("a", "b/c")', '..\\..\\a') + tester('ntpath.relpath("//conky/mountpoint/a", "//conky/mountpoint/b/c")', '..\\..\\a') + tester('ntpath.relpath("a", "a")', '.') + + +def test_main(): + test_support.run_unittest(TestNtpath) + + +if __name__ == "__main__": + unittest.main() From buildbot at python.org Wed Mar 26 06:58:16 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 05:58:16 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080326055816.B226A1E4008@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/255 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 07:02:43 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 06:02:43 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080326060243.B511E1E4006@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2756 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: andrew.kuchling,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib ====================================================================== FAIL: testSend (test.test_smtplib.DebuggingServerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_smtplib.py", line 228, in testSend self.assertEqual(self.output.getvalue(), mexpect) AssertionError: '---------- MESSAGE FOLLOWS ----------\nA test message\n------------ END MESSAGE ------------\nwarning: unhandled exception\n' != '---------- MESSAGE FOLLOWS ----------\nA test message\n------------ END MESSAGE ------------\n' sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 10:04:36 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 10:04:36 +0100 (CET) Subject: [Python-checkins] r61928 - python/trunk/Misc/developers.txt Message-ID: <20080326090436.826F71E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 10:04:36 2008 New Revision: 61928 Modified: python/trunk/Misc/developers.txt Log: Add Josiah. 

Modified: python/trunk/Misc/developers.txt
==============================================================================
--- python/trunk/Misc/developers.txt (original)
+++ python/trunk/Misc/developers.txt Wed Mar 26 10:04:36 2008
@@ -17,6 +17,9 @@
 Permissions History
 -------------------
 
+- Josiah Carlson was given SVN access on 26 March 2008 by Georg Brandl,
+  for work on asyncore/asynchat.
+
 - Benjamin Peterson was given SVN access on 25 March 2008 by Georg Brandl,
   for bug triage work.

From buildbot at python.org  Wed Mar 26 10:30:42 2008
From: buildbot at python.org (buildbot at python.org)
Date: Wed, 26 Mar 2008 09:30:42 +0000
Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0
Message-ID: <20080326093042.7B9111E4030@bag.python.org>

The Buildbot has detected a new failure of ppc Debian unstable 3.0.
Full details are available at:
 http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/705

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: klose-debian-ppc

Build Reason:
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: georg.brandl

BUILD FAILED: failed test

Excerpt from the test logfile:
1 test failed:
    test_signal

make: *** [buildbottest] Error 1

sincerely,
 -The Buildbot

From python-checkins at python.org  Wed Mar 26 10:32:50 2008
From: python-checkins at python.org (georg.brandl)
Date: Wed, 26 Mar 2008 10:32:50 +0100 (CET)
Subject: [Python-checkins] r61929 - python/trunk/Doc/library/configparser.rst
Message-ID: <20080326093250.202C41E4006@bag.python.org>

Author: georg.brandl
Date: Wed Mar 26 10:32:46 2008
New Revision: 61929

Modified:
   python/trunk/Doc/library/configparser.rst
Log:
Add an example for an RFC 822 continuation.

Modified: python/trunk/Doc/library/configparser.rst
==============================================================================
--- python/trunk/Doc/library/configparser.rst (original)
+++ python/trunk/Doc/library/configparser.rst Wed Mar 26 10:32:46 2008
@@ -29,18 +29,20 @@
 The configuration file consists of sections, led by a ``[section]`` header and
 followed by ``name: value`` entries, with continuations in the style of
-:rfc:`822`; ``name=value`` is also accepted. Note that leading whitespace is
-removed from values. The optional values can contain format strings which refer
-to other values in the same section, or values in a special ``DEFAULT`` section.
-Additional defaults can be provided on initialization and retrieval. Lines
-beginning with ``'#'`` or ``';'`` are ignored and may be used to provide
-comments.
+:rfc:`822` (see section 3.1.1, "LONG HEADER FIELDS"); ``name=value`` is also
+accepted. Note that leading whitespace is removed from values. The optional
+values can contain format strings which refer to other values in the same
+section, or values in a special ``DEFAULT`` section. Additional defaults can be
+provided on initialization and retrieval. Lines beginning with ``'#'`` or
+``';'`` are ignored and may be used to provide comments.
 
 For example::
 
    [My Section]
    foodir: %(dir)s/whatever
    dir=frob
+   long: this value continues
+      in the next line
 
 would resolve the ``%(dir)s`` to the value of ``dir`` (``frob`` in this case).
 All reference expansions are done on demand.
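
The documentation hunk above shows a continuation line and ``%(dir)s`` interpolation; the short sketch below (not part of r61929) exercises that behaviour from Python code. The use of SafeConfigParser and StringIO, and the option name "long", are illustrative choices, not anything mandated by the doc change.

# Minimal sketch, not from the checkin: shows the RFC 822-style continuation
# and the on-demand %(dir)s interpolation described in the text above.
from ConfigParser import SafeConfigParser
from StringIO import StringIO

sample = """\
[My Section]
foodir: %(dir)s/whatever
dir=frob
long: this value continues
   in the next line
"""

parser = SafeConfigParser()
parser.readfp(StringIO(sample))
print parser.get("My Section", "foodir")      # frob/whatever
print repr(parser.get("My Section", "long"))  # 'this value continues\nin the next line'

Continuation lines are joined with a newline, and the %(dir)s reference is only expanded when the value is retrieved.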
From buildbot at python.org Wed Mar 26 10:50:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 09:50:53 +0000 Subject: [Python-checkins] buildbot failure in alpha Debian 2.5 Message-ID: <20080326095053.59B531E4037@bag.python.org> The Buildbot has detected a new failure of alpha Debian 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Debian%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-alpha Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 11:18:16 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 10:18:16 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 3.0 Message-ID: <20080326101816.2BB881E4006@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%203.0/builds/661 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 12:46:19 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 12:46:19 +0100 (CET) Subject: [Python-checkins] r61930 - in python/branches/trunk-bytearray: Modules/main.c Python/pythonrun.c Message-ID: <20080326114619.354451E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 12:46:18 2008 New Revision: 61930 Modified: python/branches/trunk-bytearray/Modules/main.c python/branches/trunk-bytearray/Python/pythonrun.c Log: Re-enabled bytes warning code Modified: python/branches/trunk-bytearray/Modules/main.c ============================================================================== --- python/branches/trunk-bytearray/Modules/main.c (original) +++ python/branches/trunk-bytearray/Modules/main.c Wed Mar 26 12:46:18 2008 @@ -40,7 +40,7 @@ static int orig_argc; /* command line options */ -#define BASE_OPTS "3Bc:dEhim:OQ:StuUvVW:xX?" +#define BASE_OPTS "3bBc:dEhim:OQ:StuUvVW:xX?" 
#ifndef RISCOS #define PROGRAM_OPTS BASE_OPTS @@ -296,6 +296,9 @@ } switch (c) { + case 'b': + Py_BytesWarningFlag++; + break; case 'd': Py_DebugFlag++; Modified: python/branches/trunk-bytearray/Python/pythonrun.c ============================================================================== --- python/branches/trunk-bytearray/Python/pythonrun.c (original) +++ python/branches/trunk-bytearray/Python/pythonrun.c Wed Mar 26 12:46:18 2008 @@ -258,7 +258,6 @@ if (!warnings_module) { PyErr_Clear(); } -#if 0 else { PyObject *o; char *action[8]; @@ -278,7 +277,6 @@ "warning filter for BytesWarning."); Py_DECREF(o); } -#endif #if defined(Py_USING_UNICODE) && defined(HAVE_LANGINFO_H) && defined(CODESET) /* On Unix, set the file system encoding according to the From python-checkins at python.org Wed Mar 26 12:57:47 2008 From: python-checkins at python.org (benjamin.peterson) Date: Wed, 26 Mar 2008 12:57:47 +0100 (CET) Subject: [Python-checkins] r61931 - python/trunk/Lib/pdb.py Message-ID: <20080326115747.7E0561E4006@bag.python.org> Author: benjamin.peterson Date: Wed Mar 26 12:57:47 2008 New Revision: 61931 Modified: python/trunk/Lib/pdb.py Log: Added help options to PDB Modified: python/trunk/Lib/pdb.py ============================================================================== --- python/trunk/Lib/pdb.py (original) +++ python/trunk/Lib/pdb.py Wed Mar 26 12:57:47 2008 @@ -1238,7 +1238,7 @@ print 'along the Python search path' def main(): - if not sys.argv[1:]: + if not sys.argv[1:] or sys.argv[1] in ("--help", "-h"): print "usage: pdb.py scriptfile [arg] ..." sys.exit(2) From python-checkins at python.org Wed Mar 26 13:16:59 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 13:16:59 +0100 (CET) Subject: [Python-checkins] r61932 - in doctools/trunk: CHANGES doc/Makefile sphinx/quickstart.py sphinx/texinputs/howto.cls sphinx/texinputs/manual.cls Message-ID: <20080326121659.97E2F1E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 13:16:59 2008 New Revision: 61932 Modified: doctools/trunk/CHANGES doctools/trunk/doc/Makefile doctools/trunk/sphinx/quickstart.py doctools/trunk/sphinx/texinputs/howto.cls doctools/trunk/sphinx/texinputs/manual.cls Log: Several fixes in the latex processing. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Wed Mar 26 13:16:59 2008 @@ -23,7 +23,8 @@ * sphinx.latexwriter: Include fncychap.sty which doesn't seem to be very common in TeX distributions. Add a ``clean`` target in the - latex Makefile. + latex Makefile. Really pass the correct paper and size options + to the LaTeX document class. * setup: On Python 2.4, don't egg-depend on docutils if a docutils is already installed -- else it will be overwritten. Modified: doctools/trunk/doc/Makefile ============================================================================== --- doctools/trunk/doc/Makefile (original) +++ doctools/trunk/doc/Makefile Wed Mar 26 13:16:59 2008 @@ -6,7 +6,9 @@ SPHINXBUILD = python ../sphinx-build.py PAPER = -ALLSPHINXOPTS = -d _build/doctrees -D latex_paper_size=$(PAPER) \ +PAPEROPT_a4 = -D latex_paper_size=a4 +PAPEROPT_letter = -D latex_paper_size=letter +ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) \ $(SPHINXOPTS) . 
.PHONY: help clean html web htmlhelp latex changes linkcheck Modified: doctools/trunk/sphinx/quickstart.py ============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Wed Mar 26 13:16:59 2008 @@ -177,12 +177,14 @@ # # You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -PAPER = - -ALLSPHINXOPTS = -d %(rbuilddir)s/doctrees -D latex_paper_size=$(PAPER) \\ - $(SPHINXOPTS) %(rsrcdir)s +SPHINXOPTS = +SPHINXBUILD = sphinx-build +PAPER = + +# Internal variables. +PAPEROPT_a4 = -D latex_paper_size=a4 +PAPEROPT_letter = -D latex_paper_size=letter +ALLSPHINXOPTS = -d %(rbuilddir)s/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) %(rsrcdir) .PHONY: help clean html web htmlhelp latex changes linkcheck Modified: doctools/trunk/sphinx/texinputs/howto.cls ============================================================================== --- doctools/trunk/sphinx/texinputs/howto.cls (original) +++ doctools/trunk/sphinx/texinputs/howto.cls Wed Mar 26 13:16:59 2008 @@ -3,14 +3,13 @@ % \NeedsTeXFormat{LaTeX2e}[1995/12/01] -\ProvidesClass{howto} - [1998/02/25 Document class (Python HOWTO)] +\ProvidesClass{howto}[1998/02/25 Document class (Python HOWTO)] \RequirePackage{fancybox} -% Change the options here to get a different set of basic options, This -% is where to add things like "a4paper" or "10pt". -% +% Pass all given class options to the parent class. +\DeclareOption*{\PassOptionsToClass{\CurrentOption}{article}} +\ProcessOptions\relax \LoadClass[twoside]{article} \setcounter{secnumdepth}{1} @@ -26,7 +25,7 @@ % implement, and is used to put the chapter and section information in % the footers. % -\RequirePackage{fancyhdr}\typeout{Using fancier footers than usual.} +\RequirePackage{fancyhdr} % Required package: @@ -41,11 +40,11 @@ \RequirePackage{makeidx} -% support for module synopsis sections: +% Support for module synopsis sections: \newcommand{\py at ModSynopsisFilename}{\jobname.syn} -% need to do one of these.... +% Need to do one of these.... \newcommand{\py at doHorizontalRule}{\rule{\textwidth}{1pt}} Modified: doctools/trunk/sphinx/texinputs/manual.cls ============================================================================== --- doctools/trunk/sphinx/texinputs/manual.cls (original) +++ doctools/trunk/sphinx/texinputs/manual.cls Wed Mar 26 13:16:59 2008 @@ -3,14 +3,13 @@ % \NeedsTeXFormat{LaTeX2e}[1995/12/01] -\ProvidesClass{manual} - [1998/03/03 Document class (Python manual)] +\ProvidesClass{manual}[1998/03/03 Document class (Python manual)] \RequirePackage{fancybox} -% Change the options here to get a different set of basic options, but only -% if you have to. -% +% Pass all given class options to the parent class. +\DeclareOption*{\PassOptionsToClass{\CurrentOption}{report}} +\ProcessOptions\relax \LoadClass[twoside,openright]{report} \setcounter{secnumdepth}{2} @@ -26,21 +25,20 @@ % implement, and is used to put the chapter and section information in % the footers. % -\RequirePackage{fancyhdr}\typeout{Using fancier footers than usual.} - +\RequirePackage{fancyhdr} % Required packages: % % The "fncychap" package is used to get the nice chapter headers. The -% .sty file is distributed with Python, so you should not need to disable +% .sty file is distributed with Sphinx, so you should not need to disable % it. You'd also end up with a mixed page style; uglier than stock LaTeX! 
% -\RequirePackage[Bjarne]{fncychap}\typeout{Using fancy chapter headings.} +\RequirePackage[Bjarne]{fncychap} % Do horizontal rules it this way to match: \newcommand{\py at doHorizontalRule}{\mghrulefill{\RW}} -% -% -% This gives us all the Python-specific markup that we really want. + + +% This gives us all the Sphinx-specific markup that we really want. % This should come last. Do not change this. % \RequirePackage{sphinx} @@ -50,7 +48,7 @@ \RequirePackage{makeidx} -% support for module synopsis sections: +% Support for module synopsis sections: \newcommand{\py at ModSynopsisFilename}{\jobname\thechapter.syn} \let\py at OldChapter=\chapter \renewcommand{\chapter}{ From python-checkins at python.org Wed Mar 26 13:20:49 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:20:49 +0100 (CET) Subject: [Python-checkins] r61933 - python/branches/trunk-bytearray/Objects/typeobject.c Message-ID: <20080326122049.4FA6A1E4013@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:20:46 2008 New Revision: 61933 Modified: python/branches/trunk-bytearray/Objects/typeobject.c Log: Fixed a bug in the new buffer protocol. The buffer slots weren't copied into a subclass. Modified: python/branches/trunk-bytearray/Objects/typeobject.c ============================================================================== --- python/branches/trunk-bytearray/Objects/typeobject.c (original) +++ python/branches/trunk-bytearray/Objects/typeobject.c Wed Mar 26 13:20:46 2008 @@ -3762,6 +3762,8 @@ COPYBUF(bf_getwritebuffer); COPYBUF(bf_getsegcount); COPYBUF(bf_getcharbuffer); + COPYBUF(bf_getbuffer); + COPYBUF(bf_releasebuffer); } basebase = base->tp_base; From g.brandl at gmx.net Wed Mar 26 13:20:43 2008 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 26 Mar 2008 13:20:43 +0100 Subject: [Python-checkins] r61931 - python/trunk/Lib/pdb.py In-Reply-To: <20080326115747.7E0561E4006@bag.python.org> References: <20080326115747.7E0561E4006@bag.python.org> Message-ID: benjamin.peterson schrieb: > Author: benjamin.peterson > Date: Wed Mar 26 12:57:47 2008 > New Revision: 61931 > > Modified: > python/trunk/Lib/pdb.py > Log: > Added help options to PDB Please include the issue number in the commit message (and, if it isn't you, the patch author). Thanks, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From python-checkins at python.org Wed Mar 26 13:25:10 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:25:10 +0100 (CET) Subject: [Python-checkins] r61934 - in python/branches/trunk-bytearray: Lib/test/string_tests.py Lib/test/test_bytes.py Objects/bytesobject.c Message-ID: <20080326122510.BB4E41E4007@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:25:09 2008 New Revision: 61934 Modified: python/branches/trunk-bytearray/Lib/test/string_tests.py python/branches/trunk-bytearray/Lib/test/test_bytes.py python/branches/trunk-bytearray/Objects/bytesobject.c Log: Re-enabled bytearray subclassing - all tests are passing. 
Modified: python/branches/trunk-bytearray/Lib/test/string_tests.py ============================================================================== --- python/branches/trunk-bytearray/Lib/test/string_tests.py (original) +++ python/branches/trunk-bytearray/Lib/test/string_tests.py Wed Mar 26 13:25:09 2008 @@ -27,9 +27,6 @@ # Change in subclasses to change the behaviour of fixtesttype() type2test = None - # is the type subclass-able? - subclassable = True - # All tests pass their arguments to the testing methods # as str objects. fixtesttype() can be used to propagate # these arguments to the appropriate type @@ -60,7 +57,7 @@ ) # if the original is returned make sure that # this doesn't happen with subclasses - if self.subclassable and object == realresult: + if object == realresult: class subtype(self.__class__.type2test): pass object = subtype(object) Modified: python/branches/trunk-bytearray/Lib/test/test_bytes.py ============================================================================== --- python/branches/trunk-bytearray/Lib/test/test_bytes.py (original) +++ python/branches/trunk-bytearray/Lib/test/test_bytes.py Wed Mar 26 13:25:09 2008 @@ -865,7 +865,6 @@ class FixedStringTest(test.string_tests.BaseTest): - subclassable = False def fixtype(self, obj): if isinstance(obj, str): @@ -892,8 +891,8 @@ type2test = bytearray -#class ByteArraySubclass(bytearray): -# pass +class ByteArraySubclass(bytearray): + pass class ByteArraySubclassTest(unittest.TestCase): @@ -969,16 +968,6 @@ x = subclass(newarg=4, source=b"abcd") self.assertEqual(x, b"abcd") -class ByteArrayNotSubclassTest(unittest.TestCase): - def test_not_subclassable(self): - try: - class ByteArraySubclass(bytearray): - pass - except TypeError: - pass - else: - self.fail("Bytearray is subclassable") - def test_main(): #test.test_support.run_unittest(BytesTest) #test.test_support.run_unittest(AssortedBytesTest) @@ -986,8 +975,7 @@ test.test_support.run_unittest( ByteArrayTest, ByteArrayAsStringTest, - #ByteArraySubclassTest, - ByteArrayNotSubclassTest, + ByteArraySubclassTest, BytearrayPEP3137Test) if __name__ == "__main__": Modified: python/branches/trunk-bytearray/Objects/bytesobject.c ============================================================================== --- python/branches/trunk-bytearray/Objects/bytesobject.c (original) +++ python/branches/trunk-bytearray/Objects/bytesobject.c Wed Mar 26 13:25:09 2008 @@ -3232,8 +3232,7 @@ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ &bytes_as_buffer, /* tp_as_buffer */ - /* Py_TPFLAGS_BASETYPE */ - Py_TPFLAGS_DEFAULT | + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_NEWBUFFER, /* tp_flags */ bytes_doc, /* tp_doc */ 0, /* tp_traverse */ From python-checkins at python.org Wed Mar 26 13:32:49 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:32:49 +0100 (CET) Subject: [Python-checkins] r61935 - python/trunk Message-ID: <20080326123249.8EAB51E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:32:49 2008 New Revision: 61935 Modified: python/trunk/ (props changed) Log: Prepare integration of bytearray backport branch From python-checkins at python.org Wed Mar 26 13:49:51 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:49:51 +0100 (CET) Subject: [Python-checkins] r61936 - in python/trunk: Include/Python.h Include/bytes_methods.h Include/bytesobject.h Include/pydebug.h Include/pyerrors.h Include/pythonrun.h Lib/codecs.py Lib/io.py Lib/test/buffer_tests.py 
Lib/test/exception_hierarchy.txt Lib/test/string_tests.py Lib/test/test_bytes.py Lib/test/test_io.py Lib/test/test_print.py Makefile.pre.in Modules/main.c Objects/bytes_methods.c Objects/bytesobject.c Objects/exceptions.c Objects/object.c Objects/stringlib/ctype.h Objects/stringlib/transmogrify.h Objects/stringobject.c Objects/typeobject.c Objects/unicodeobject.c PCbuild/pythoncore.vcproj Python/bltinmodule.c Python/pythonrun.c Message-ID: <20080326124951.44A2F1E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:49:49 2008 New Revision: 61936 Added: python/trunk/Include/bytes_methods.h - copied unchanged from r61934, python/branches/trunk-bytearray/Include/bytes_methods.h python/trunk/Include/bytesobject.h - copied unchanged from r61934, python/branches/trunk-bytearray/Include/bytesobject.h python/trunk/Lib/io.py - copied unchanged from r61934, python/branches/trunk-bytearray/Lib/io.py python/trunk/Lib/test/buffer_tests.py - copied unchanged from r61934, python/branches/trunk-bytearray/Lib/test/buffer_tests.py python/trunk/Lib/test/test_bytes.py - copied unchanged from r61934, python/branches/trunk-bytearray/Lib/test/test_bytes.py python/trunk/Lib/test/test_io.py - copied unchanged from r61934, python/branches/trunk-bytearray/Lib/test/test_io.py python/trunk/Objects/bytes_methods.c - copied unchanged from r61934, python/branches/trunk-bytearray/Objects/bytes_methods.c python/trunk/Objects/bytesobject.c - copied unchanged from r61934, python/branches/trunk-bytearray/Objects/bytesobject.c python/trunk/Objects/stringlib/ctype.h - copied unchanged from r61934, python/branches/trunk-bytearray/Objects/stringlib/ctype.h python/trunk/Objects/stringlib/transmogrify.h - copied unchanged from r61934, python/branches/trunk-bytearray/Objects/stringlib/transmogrify.h Modified: python/trunk/ (props changed) python/trunk/Include/Python.h python/trunk/Include/pydebug.h python/trunk/Include/pyerrors.h python/trunk/Include/pythonrun.h python/trunk/Lib/codecs.py python/trunk/Lib/test/exception_hierarchy.txt python/trunk/Lib/test/string_tests.py python/trunk/Lib/test/test_print.py python/trunk/Makefile.pre.in python/trunk/Modules/main.c python/trunk/Objects/exceptions.c python/trunk/Objects/object.c python/trunk/Objects/stringobject.c python/trunk/Objects/typeobject.c python/trunk/Objects/unicodeobject.c python/trunk/PCbuild/pythoncore.vcproj python/trunk/Python/bltinmodule.c python/trunk/Python/pythonrun.c Log: Merged revisions 61750,61752,61754,61756,61760,61763,61768,61772,61775,61805,61809,61812,61819,61917,61920,61930,61933-61934 via svnmerge from svn+ssh://pythondev at svn.python.org/python/branches/trunk-bytearray ........ r61750 | christian.heimes | 2008-03-22 20:47:44 +0100 (Sat, 22 Mar 2008) | 1 line Copied files from py3k w/o modifications ........ r61752 | christian.heimes | 2008-03-22 20:53:20 +0100 (Sat, 22 Mar 2008) | 7 lines Take One * Added initialization code, warnings, flags etc. to the appropriate places * Added new buffer interface to string type * Modified tests * Modified Makefile.pre.in to compile the new files * Added bytesobject.c to Python.h ........ r61754 | christian.heimes | 2008-03-22 21:22:19 +0100 (Sat, 22 Mar 2008) | 2 lines Disabled bytearray.extend for now since it causes an infinite recursion Fixed serveral unit tests ........ r61756 | christian.heimes | 2008-03-22 21:43:38 +0100 (Sat, 22 Mar 2008) | 5 lines Added PyBytes support to several places: str + bytearray ord(bytearray) bytearray(str, encoding) ........ 
r61760 | christian.heimes | 2008-03-22 21:56:32 +0100 (Sat, 22 Mar 2008) | 1 line Fixed more unit tests related to type('') is not unicode ........ r61763 | christian.heimes | 2008-03-22 22:20:28 +0100 (Sat, 22 Mar 2008) | 2 lines Fixed more unit tests Fixed bytearray.extend ........ r61768 | christian.heimes | 2008-03-22 22:40:50 +0100 (Sat, 22 Mar 2008) | 1 line Implemented old buffer interface for bytearray ........ r61772 | christian.heimes | 2008-03-22 23:24:52 +0100 (Sat, 22 Mar 2008) | 1 line Added backport of the io module ........ r61775 | christian.heimes | 2008-03-23 03:50:49 +0100 (Sun, 23 Mar 2008) | 1 line Fix str assignement to bytearray. Assignment of a str of size 1 is interpreted as a single byte ........ r61805 | christian.heimes | 2008-03-23 19:33:48 +0100 (Sun, 23 Mar 2008) | 3 lines Fixed more tests Fixed bytearray() comparsion with unicode() Fixed iterator assignment of bytearray ........ r61809 | christian.heimes | 2008-03-23 21:02:21 +0100 (Sun, 23 Mar 2008) | 2 lines str(bytesarray()) now returns the bytes and not the representation of the bytearray object Enabled and fixed more unit tests ........ r61812 | christian.heimes | 2008-03-23 21:53:08 +0100 (Sun, 23 Mar 2008) | 3 lines Clear error PyNumber_AsSsize_t() fails Use CHARMASK for ob_svall access disabled a test with memoryview again ........ r61819 | christian.heimes | 2008-03-23 23:05:57 +0100 (Sun, 23 Mar 2008) | 1 line Untested updates to the PCBuild directory ........ r61917 | christian.heimes | 2008-03-26 00:57:06 +0100 (Wed, 26 Mar 2008) | 1 line The type system of Python 2.6 has subtle differences to 3.0's. I've removed the Py_TPFLAGS_BASETYPE flags from bytearray for now. bytearray can't be subclasses until the issues with bytearray subclasses are fixed. ........ r61920 | christian.heimes | 2008-03-26 01:44:08 +0100 (Wed, 26 Mar 2008) | 2 lines Disabled last failing test I don't understand what the test is testing and how it suppose to work. Ka-Ping, please check it out. ........ r61930 | christian.heimes | 2008-03-26 12:46:18 +0100 (Wed, 26 Mar 2008) | 1 line Re-enabled bytes warning code ........ r61933 | christian.heimes | 2008-03-26 13:20:46 +0100 (Wed, 26 Mar 2008) | 1 line Fixed a bug in the new buffer protocol. The buffer slots weren't copied into a subclass. ........ r61934 | christian.heimes | 2008-03-26 13:25:09 +0100 (Wed, 26 Mar 2008) | 1 line Re-enabled bytearray subclassing - all tests are passing. ........ 
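One practical effect of this merge is the new -b interpreter option: the Modules/main.c hunk below counts each -b occurrence in Py_BytesWarningFlag, and the Python/pythonrun.c hunk turns that count into a warnings filter for the new BytesWarning category. A rough Python sketch of that mapping, purely for illustration (the helper name is mine, not part of the patch):

    import warnings

    def bytes_warning_action(flag_count):
        """Select the warnings-filter action for BytesWarning, given
        how many times -b appeared on the command line."""
        if flag_count > 1:
            return "error"      # -bb: raise BytesWarning as an exception
        elif flag_count:
            return "default"    # -b: print each warning once per location
        return "ignore"         # no -b: keep BytesWarning silent

    # Roughly what Py_InitializeEx() now does for, e.g., "python -b":
    try:
        warnings.simplefilter(bytes_warning_action(1), BytesWarning)
    except NameError:
        pass  # BytesWarning only exists once Objects/exceptions.c adds it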
Modified: python/trunk/Include/Python.h ============================================================================== --- python/trunk/Include/Python.h (original) +++ python/trunk/Include/Python.h Wed Mar 26 13:49:49 2008 @@ -92,6 +92,7 @@ #include "stringobject.h" /* #include "memoryobject.h" */ #include "bufferobject.h" +#include "bytesobject.h" #include "tupleobject.h" #include "listobject.h" #include "dictobject.h" Modified: python/trunk/Include/pydebug.h ============================================================================== --- python/trunk/Include/pydebug.h (original) +++ python/trunk/Include/pydebug.h Wed Mar 26 13:49:49 2008 @@ -11,6 +11,7 @@ PyAPI_DATA(int) Py_InspectFlag; PyAPI_DATA(int) Py_OptimizeFlag; PyAPI_DATA(int) Py_NoSiteFlag; +PyAPI_DATA(int) Py_BytesWarningFlag; PyAPI_DATA(int) Py_UseClassExceptionsFlag; PyAPI_DATA(int) Py_FrozenFlag; PyAPI_DATA(int) Py_TabcheckFlag; Modified: python/trunk/Include/pyerrors.h ============================================================================== --- python/trunk/Include/pyerrors.h (original) +++ python/trunk/Include/pyerrors.h Wed Mar 26 13:49:49 2008 @@ -175,6 +175,7 @@ PyAPI_DATA(PyObject *) PyExc_FutureWarning; PyAPI_DATA(PyObject *) PyExc_ImportWarning; PyAPI_DATA(PyObject *) PyExc_UnicodeWarning; +PyAPI_DATA(PyObject *) PyExc_BytesWarning; /* Convenience functions */ Modified: python/trunk/Include/pythonrun.h ============================================================================== --- python/trunk/Include/pythonrun.h (original) +++ python/trunk/Include/pythonrun.h Wed Mar 26 13:49:49 2008 @@ -123,6 +123,7 @@ PyAPI_FUNC(int) _PyFrame_Init(void); PyAPI_FUNC(int) _PyInt_Init(void); PyAPI_FUNC(void) _PyFloat_Init(void); +PyAPI_FUNC(int) PyBytes_Init(void); /* Various internal finalizers */ PyAPI_FUNC(void) _PyExc_Fini(void); @@ -138,6 +139,7 @@ PyAPI_FUNC(void) PyInt_Fini(void); PyAPI_FUNC(void) PyFloat_Fini(void); PyAPI_FUNC(void) PyOS_FiniInterrupts(void); +PyAPI_FUNC(void) PyBytes_Fini(void); /* Stuff with no proper home (yet) */ PyAPI_FUNC(char *) PyOS_Readline(FILE *, FILE *, char *); Modified: python/trunk/Lib/codecs.py ============================================================================== --- python/trunk/Lib/codecs.py (original) +++ python/trunk/Lib/codecs.py Wed Mar 26 13:49:49 2008 @@ -181,6 +181,18 @@ Resets the encoder to the initial state. """ + def getstate(self): + """ + Return the current state of the encoder. + """ + return 0 + + def setstate(self, state): + """ + Set the current state of the encoder. state must have been + returned by getstate(). + """ + class BufferedIncrementalEncoder(IncrementalEncoder): """ This subclass of IncrementalEncoder can be used as the baseclass for an @@ -208,6 +220,12 @@ IncrementalEncoder.reset(self) self.buffer = "" + def getstate(self): + return self.buffer or 0 + + def setstate(self, state): + self.buffer = state or "" + class IncrementalDecoder(object): """ An IncrementalDecoder decodes an input in multiple steps. The input can be @@ -235,6 +253,28 @@ Resets the decoder to the initial state. """ + def getstate(self): + """ + Return the current state of the decoder. + + This must be a (buffered_input, additional_state_info) tuple. + buffered_input must be a bytes object containing bytes that + were passed to decode() that have not yet been converted. + additional_state_info must be a non-negative integer + representing the state of the decoder WITHOUT yet having + processed the contents of buffered_input. 
In the initial state + and after reset(), getstate() must return (b"", 0). + """ + return (b"", 0) + + def setstate(self, state): + """ + Set the current state of the decoder. + + state must have been returned by getstate(). The effect of + setstate((b"", 0)) must be equivalent to reset(). + """ + class BufferedIncrementalDecoder(IncrementalDecoder): """ This subclass of IncrementalDecoder can be used as the baseclass for an @@ -262,6 +302,14 @@ IncrementalDecoder.reset(self) self.buffer = "" + def getstate(self): + # additional state info is always 0 + return (self.buffer, 0) + + def setstate(self, state): + # ignore additional state info + self.buffer = state[0] + # # The StreamWriter and StreamReader class provide generic working # interfaces which can be used to implement new encoding submodules Modified: python/trunk/Lib/test/exception_hierarchy.txt ============================================================================== --- python/trunk/Lib/test/exception_hierarchy.txt (original) +++ python/trunk/Lib/test/exception_hierarchy.txt Wed Mar 26 13:49:49 2008 @@ -46,3 +46,4 @@ +-- FutureWarning +-- ImportWarning +-- UnicodeWarning + +-- BytesWarning Modified: python/trunk/Lib/test/string_tests.py ============================================================================== --- python/trunk/Lib/test/string_tests.py (original) +++ python/trunk/Lib/test/string_tests.py Wed Mar 26 13:49:49 2008 @@ -486,8 +486,9 @@ 'lstrip', unicode('xyz', 'ascii')) self.checkequal(unicode('xyzzyhello', 'ascii'), 'xyzzyhelloxyzzy', 'rstrip', unicode('xyz', 'ascii')) - self.checkequal(unicode('hello', 'ascii'), 'hello', - 'strip', unicode('xyz', 'ascii')) + # XXX + #self.checkequal(unicode('hello', 'ascii'), 'hello', + # 'strip', unicode('xyz', 'ascii')) self.checkraises(TypeError, 'hello', 'strip', 42, 42) self.checkraises(TypeError, 'hello', 'lstrip', 42, 42) @@ -727,6 +728,9 @@ self.checkraises(TypeError, '123', 'zfill') +# XXX alias for py3k forward compatibility +BaseTest = CommonTest + class MixinStrUnicodeUserStringTest: # additional tests that only work for # stringlike objects, i.e. 
str, unicode, UserString Modified: python/trunk/Lib/test/test_print.py ============================================================================== --- python/trunk/Lib/test/test_print.py (original) +++ python/trunk/Lib/test/test_print.py Wed Mar 26 13:49:49 2008 @@ -9,10 +9,10 @@ from test import test_support import sys -try: +if sys.version_info[0] == 3: # 3.x from io import StringIO -except ImportError: +else: # 2.x from StringIO import StringIO Modified: python/trunk/Makefile.pre.in ============================================================================== --- python/trunk/Makefile.pre.in (original) +++ python/trunk/Makefile.pre.in Wed Mar 26 13:49:49 2008 @@ -295,6 +295,8 @@ Objects/abstract.o \ Objects/boolobject.o \ Objects/bufferobject.o \ + Objects/bytes_methods.o \ + Objects/bytesobject.o \ Objects/cellobject.o \ Objects/classobject.o \ Objects/cobject.o \ @@ -518,13 +520,16 @@ $(srcdir)/Objects/unicodetype_db.h STRINGLIB_HEADERS= \ + $(srcdir)/Include/bytes_methods.h \ $(srcdir)/Objects/stringlib/count.h \ + $(srcdir)/Objects/stringlib/ctype.h \ $(srcdir)/Objects/stringlib/fastsearch.h \ $(srcdir)/Objects/stringlib/find.h \ $(srcdir)/Objects/stringlib/formatter.h \ $(srcdir)/Objects/stringlib/partition.h \ $(srcdir)/Objects/stringlib/stringdefs.h \ $(srcdir)/Objects/stringlib/string_format.h \ + $(srcdir)/Objects/stringlib/transmogrify.h \ $(srcdir)/Objects/stringlib/unicodedefs.h Objects/unicodeobject.o: $(srcdir)/Objects/unicodeobject.c \ @@ -532,6 +537,8 @@ Objects/stringobject.o: $(srcdir)/Objects/stringobject.c \ $(STRINGLIB_HEADERS) +Objects/bytesobject.o: $(srcdir)/Objects/bytesobject.c \ + $(STRINGLIB_HEADERS) Python/formatter_unicode.o: $(srcdir)/Python/formatter_unicode.c \ $(STRINGLIB_HEADERS) @@ -550,6 +557,8 @@ Include/ast.h \ Include/bitset.h \ Include/boolobject.h \ + Include/bytes_methods.h \ + Include/bytesobject.h \ Include/bufferobject.h \ Include/cellobject.h \ Include/ceval.h \ Modified: python/trunk/Modules/main.c ============================================================================== --- python/trunk/Modules/main.c (original) +++ python/trunk/Modules/main.c Wed Mar 26 13:49:49 2008 @@ -40,7 +40,7 @@ static int orig_argc; /* command line options */ -#define BASE_OPTS "3Bc:dEhim:OQ:StuUvVW:xX?" +#define BASE_OPTS "3bBc:dEhim:OQ:StuUvVW:xX?" #ifndef RISCOS #define PROGRAM_OPTS BASE_OPTS @@ -296,6 +296,9 @@ } switch (c) { + case 'b': + Py_BytesWarningFlag++; + break; case 'd': Py_DebugFlag++; Modified: python/trunk/Objects/exceptions.c ============================================================================== --- python/trunk/Objects/exceptions.c (original) +++ python/trunk/Objects/exceptions.c Wed Mar 26 13:49:49 2008 @@ -1923,6 +1923,12 @@ "Base class for warnings about Unicode related problems, mostly\n" "related to conversion problems."); +/* + * BytesWarning extends Warning + */ +SimpleExtendsException(PyExc_Warning, BytesWarning, + "Base class for warnings about bytes and buffer related problems, mostly\n" + "related to conversion from str or comparing to str."); /* Pre-computed MemoryError instance. Best to create this as early as * possible and not wait until a MemoryError is actually raised! 
@@ -2031,6 +2037,7 @@ PRE_INIT(FutureWarning) PRE_INIT(ImportWarning) PRE_INIT(UnicodeWarning) + PRE_INIT(BytesWarning) m = Py_InitModule4("exceptions", functions, exceptions_doc, (PyObject *)NULL, PYTHON_API_VERSION); @@ -2097,6 +2104,7 @@ POST_INIT(FutureWarning) POST_INIT(ImportWarning) POST_INIT(UnicodeWarning) + POST_INIT(BytesWarning) PyExc_MemoryErrorInst = BaseException_new(&_PyExc_MemoryError, NULL, NULL); if (!PyExc_MemoryErrorInst) Modified: python/trunk/Objects/object.c ============================================================================== --- python/trunk/Objects/object.c (original) +++ python/trunk/Objects/object.c Wed Mar 26 13:49:49 2008 @@ -1986,6 +1986,9 @@ if (PyType_Ready(&PyString_Type) < 0) Py_FatalError("Can't initialize 'str'"); + if (PyType_Ready(&PyBytes_Type) < 0) + Py_FatalError("Can't initialize 'bytes'"); + if (PyType_Ready(&PyList_Type) < 0) Py_FatalError("Can't initialize 'list'"); Modified: python/trunk/Objects/stringobject.c ============================================================================== --- python/trunk/Objects/stringobject.c (original) +++ python/trunk/Objects/stringobject.c Wed Mar 26 13:49:49 2008 @@ -953,6 +953,8 @@ if (PyUnicode_Check(bb)) return PyUnicode_Concat((PyObject *)a, bb); #endif + if (PyBytes_Check(bb)) + return PyBytes_Concat((PyObject *)a, bb); PyErr_Format(PyExc_TypeError, "cannot concatenate 'str' and '%.200s' objects", Py_TYPE(bb)->tp_name); @@ -1303,6 +1305,13 @@ return Py_SIZE(self); } +static int +string_buffer_getbuffer(PyStringObject *self, Py_buffer *view, int flags) +{ + return PyBuffer_FillInfo(view, (void *)self->ob_sval, Py_SIZE(self), + 0, flags); +} + static PySequenceMethods string_as_sequence = { (lenfunc)string_length, /*sq_length*/ (binaryfunc)string_concat, /*sq_concat*/ @@ -1325,6 +1334,8 @@ (writebufferproc)string_buffer_getwritebuf, (segcountproc)string_buffer_getsegcount, (charbufferproc)string_buffer_getcharbuf, + (getbufferproc)string_buffer_getbuffer, + 0, /* XXX */ }; @@ -4122,7 +4133,8 @@ 0, /* tp_setattro */ &string_as_buffer, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES | - Py_TPFLAGS_BASETYPE | Py_TPFLAGS_STRING_SUBCLASS, /* tp_flags */ + Py_TPFLAGS_BASETYPE | Py_TPFLAGS_STRING_SUBCLASS | + Py_TPFLAGS_HAVE_NEWBUFFER, /* tp_flags */ string_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ Modified: python/trunk/Objects/typeobject.c ============================================================================== --- python/trunk/Objects/typeobject.c (original) +++ python/trunk/Objects/typeobject.c Wed Mar 26 13:49:49 2008 @@ -3763,6 +3763,8 @@ COPYBUF(bf_getwritebuffer); COPYBUF(bf_getsegcount); COPYBUF(bf_getcharbuffer); + COPYBUF(bf_getbuffer); + COPYBUF(bf_releasebuffer); } basebase = base->tp_base; Modified: python/trunk/Objects/unicodeobject.c ============================================================================== --- python/trunk/Objects/unicodeobject.c (original) +++ python/trunk/Objects/unicodeobject.c Wed Mar 26 13:49:49 2008 @@ -1076,7 +1076,13 @@ if (PyString_Check(obj)) { s = PyString_AS_STRING(obj); len = PyString_GET_SIZE(obj); - } + } + else if (PyBytes_Check(obj)) { + /* Python 2.x specific */ + PyErr_Format(PyExc_TypeError, + "decoding bytearray is not supported"); + return NULL; + } else if (PyObject_AsCharBuffer(obj, &s, &len)) { /* Overwrite the error message with something more useful in case of a TypeError. 
*/ Modified: python/trunk/PCbuild/pythoncore.vcproj ============================================================================== --- python/trunk/PCbuild/pythoncore.vcproj (original) +++ python/trunk/PCbuild/pythoncore.vcproj Wed Mar 26 13:49:49 2008 @@ -655,6 +655,14 @@ > + + + + @@ -1347,6 +1355,14 @@ > + + + + Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Wed Mar 26 13:49:49 2008 @@ -1437,6 +1437,13 @@ ord = (long)((unsigned char)*PyString_AS_STRING(obj)); return PyInt_FromLong(ord); } + } else if (PyBytes_Check(obj)) { + size = PyBytes_GET_SIZE(obj); + if (size == 1) { + ord = (long)((unsigned char)*PyBytes_AS_STRING(obj)); + return PyInt_FromLong(ord); + } + #ifdef Py_USING_UNICODE } else if (PyUnicode_Check(obj)) { size = PyUnicode_GET_SIZE(obj); @@ -2552,6 +2559,7 @@ SETBUILTIN("basestring", &PyBaseString_Type); SETBUILTIN("bool", &PyBool_Type); /* SETBUILTIN("memoryview", &PyMemoryView_Type); */ + SETBUILTIN("bytearray", &PyBytes_Type); SETBUILTIN("bytes", &PyString_Type); SETBUILTIN("buffer", &PyBuffer_Type); SETBUILTIN("classmethod", &PyClassMethod_Type); Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Wed Mar 26 13:49:49 2008 @@ -72,6 +72,7 @@ int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */ int Py_InspectFlag; /* Needed to determine whether to exit at SystemError */ int Py_NoSiteFlag; /* Suppress 'import site' */ +int Py_BytesWarningFlag; /* Warn on str(bytes) and str(buffer) */ int Py_DontWriteBytecodeFlag; /* Suppress writing bytecode files (*.py[co]) */ int Py_UseClassExceptionsFlag = 1; /* Needed by bltinmodule.c: deprecated */ int Py_FrozenFlag; /* Needed by getpath.c */ @@ -193,6 +194,9 @@ if (!_PyInt_Init()) Py_FatalError("Py_Initialize: can't init ints"); + if (!PyBytes_Init()) + Py_FatalError("Py_Initialize: can't init bytearray"); + _PyFloat_Init(); interp->modules = PyDict_New(); @@ -251,8 +255,28 @@ #endif /* WITH_THREAD */ warnings_module = PyImport_ImportModule("warnings"); - if (!warnings_module) + if (!warnings_module) { PyErr_Clear(); + } + else { + PyObject *o; + char *action[8]; + + if (Py_BytesWarningFlag > 1) + *action = "error"; + else if (Py_BytesWarningFlag) + *action = "default"; + else + *action = "ignore"; + + o = PyObject_CallMethod(warnings_module, + "simplefilter", "sO", + *action, PyExc_BytesWarning); + if (o == NULL) + Py_FatalError("Py_Initialize: can't initialize" + "warning filter for BytesWarning."); + Py_DECREF(o); + } #if defined(Py_USING_UNICODE) && defined(HAVE_LANGINFO_H) && defined(CODESET) /* On Unix, set the file system encoding according to the @@ -471,6 +495,7 @@ PyList_Fini(); PySet_Fini(); PyString_Fini(); + PyBytes_Fini(); PyInt_Fini(); PyFloat_Fini(); PyDict_Fini(); From python-checkins at python.org Wed Mar 26 13:50:32 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:50:32 +0100 (CET) Subject: [Python-checkins] r61937 - python/trunk Message-ID: <20080326125032.DF09A1E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:50:32 2008 New Revision: 61937 Modified: python/trunk/ (props changed) Log: Removed merge tracking for "svnmerge" for svn+ssh://pythondev at svn.python.org/python/branches/libffi3-branch From python-checkins at python.org Wed Mar 
26 13:50:43 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:50:43 +0100 (CET) Subject: [Python-checkins] r61938 - python/trunk Message-ID: <20080326125043.C17E81E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:50:43 2008 New Revision: 61938 Modified: python/trunk/ (props changed) Log: Removed merge tracking for "svnmerge" for svn+ssh://pythondev at svn.python.org/python/branches/trunk-bytearray From python-checkins at python.org Wed Mar 26 13:51:21 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:51:21 +0100 (CET) Subject: [Python-checkins] r61939 - python/branches/trunk-bytearray Message-ID: <20080326125121.96EA91E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:51:20 2008 New Revision: 61939 Removed: python/branches/trunk-bytearray/ Log: Remove merged branch From python-checkins at python.org Wed Mar 26 13:53:59 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 13:53:59 +0100 (CET) Subject: [Python-checkins] r61940 - python/trunk/Misc/NEWS Message-ID: <20080326125359.2F7EF1E4006@bag.python.org> Author: christian.heimes Date: Wed Mar 26 13:53:58 2008 New Revision: 61940 Modified: python/trunk/Misc/NEWS Log: Updated Misc/NEWS Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 26 13:53:58 2008 @@ -12,6 +12,8 @@ Core and builtins ----------------- +- Added backport of bytearray type. + - Issue #2355: add Py3k warning for buffer(). - Issue #1477: With narrow Unicode builds, the unicode escape sequence @@ -72,6 +74,8 @@ Library ------- +- Backport of Python 3.0's io module. + - Issue #2482: Make sure that the coefficient of a Decimal is always stored as a str instance, not as a unicode instance. This ensures that str(Decimal) is always an instance of str. From python-checkins at python.org Wed Mar 26 13:57:48 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 13:57:48 +0100 (CET) Subject: [Python-checkins] r61943 - python/trunk/Modules/selectmodule.c Message-ID: <20080326125748.42B0A1E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 13:57:47 2008 New Revision: 61943 Modified: python/trunk/Modules/selectmodule.c Log: Fix and simplify error handling, silencing a compiler warning. 
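The first hunk below adds the return that was missing after the OverflowError is set for an oversized epoll timeout, so the call now fails fast instead of falling through with an exception pending. A hypothetical usage sketch of what that means at the Python level (Linux only; epoll timeouts are given in seconds and multiplied by 1000.0 internally):

    import select

    ep = select.epoll()
    try:
        ep.poll(1e300)       # 1e300 * 1000.0 exceeds INT_MAX
    except OverflowError:
        pass                 # "timeout is too large", raised immediately
    finally:
        ep.close()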
Modified: python/trunk/Modules/selectmodule.c ============================================================================== --- python/trunk/Modules/selectmodule.c (original) +++ python/trunk/Modules/selectmodule.c Wed Mar 26 13:57:47 2008 @@ -991,6 +991,7 @@ else if (dtimeout * 1000.0 > INT_MAX) { PyErr_SetString(PyExc_OverflowError, "timeout is too large"); + return NULL; } else { timeout = (int)(dtimeout * 1000.0); @@ -1027,19 +1028,15 @@ } for (i = 0; i < nfds; i++) { - etuple = Py_BuildValue("iI", evs[i].data.fd, - evs[i].events); + etuple = Py_BuildValue("iI", evs[i].data.fd, evs[i].events); if (etuple == NULL) { + Py_CLEAR(elist); goto error; } PyList_SET_ITEM(elist, i, etuple); } - if (0) { - error: - Py_CLEAR(elist); - Py_XDECREF(etuple); - } + error: PyMem_Free(evs); return elist; } From buildbot at python.org Wed Mar 26 14:07:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 13:07:02 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080326130702.6B9C21E4006@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2759 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 4 tests failed: test_asynchat test_signal test_smtplib test_socket ====================================================================== FAIL: test_wakeup_fd_during (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 205, in test_wakeup_fd_during [self.read], [], [], self.TIMEOUT_FULL) AssertionError: error not raised ====================================================================== FAIL: test_wakeup_fd_early (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 193, in test_wakeup_fd_early self.assert_(mid_time - before_time < self.TIMEOUT_HALF) AssertionError sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 14:12:56 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 14:12:56 +0100 (CET) Subject: [Python-checkins] r61945 - doctools/trunk/sphinx/__init__.py Message-ID: <20080326131256.C37C71E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 14:12:56 2008 New Revision: 61945 Modified: doctools/trunk/sphinx/__init__.py (props changed) Log: Prepare for release. From python-checkins at python.org Wed Mar 26 14:13:42 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 14:13:42 +0100 (CET) Subject: [Python-checkins] r61946 - doctools/trunk/CHANGES Message-ID: <20080326131342.A2ECD1E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 14:13:42 2008 New Revision: 61946 Modified: doctools/trunk/CHANGES Log: Add release info. 
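As the CHANGES hunk below records, these early Sphinx releases appear to take their version number directly from the Subversion revision being tagged (r61945 becomes 0.1.61945, and r61950 later becomes 0.1.61950). A trivial sketch of that numbering, with variable names of my own choosing:

    svn_revision = 61945
    version = "0.1.%d" % svn_revision    # -> "0.1.61945"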
Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Wed Mar 26 14:13:42 2008 @@ -1,5 +1,5 @@ -Changes in trunk -================ +Release 0.1.61945 (Mar 26, 2008) +================================ * sphinx.htmlwriter, sphinx.latexwriter: Support the ``.. image::`` directive by copying image files to the output directory. From python-checkins at python.org Wed Mar 26 14:21:39 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 14:21:39 +0100 (CET) Subject: [Python-checkins] r61947 - in doctools/trunk/sphinx: builder.py ext/coverage.py Message-ID: <20080326132140.0274D1E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 14:21:39 2008 New Revision: 61947 Modified: doctools/trunk/sphinx/builder.py doctools/trunk/sphinx/ext/coverage.py Log: Pylint fixing. Modified: doctools/trunk/sphinx/builder.py ============================================================================== --- doctools/trunk/sphinx/builder.py (original) +++ doctools/trunk/sphinx/builder.py Wed Mar 26 14:21:39 2008 @@ -25,8 +25,7 @@ from docutils.readers.doctree import Reader as DoctreeReader from sphinx import addnodes -from sphinx.util import (get_matching_docs, mtimes_of_files, - ensuredir, relative_uri, SEP, os_path) +from sphinx.util import mtimes_of_files, ensuredir, relative_uri, SEP, os_path from sphinx.htmlhelp import build_hhx from sphinx.htmlwriter import HTMLWriter, HTMLTranslator, SmartyPantsHTMLTranslator from sphinx.latexwriter import LaTeXWriter @@ -230,7 +229,8 @@ self.finish() if self.app._warncount: self.info(bold('build succeeded, %s warning%s.' % - (self.app._warncount, self.app._warncount != 1 and 's' or ''))) + (self.app._warncount, + self.app._warncount != 1 and 's' or ''))) else: self.info(bold('build succeeded.')) Modified: doctools/trunk/sphinx/ext/coverage.py ============================================================================== --- doctools/trunk/sphinx/ext/coverage.py (original) +++ doctools/trunk/sphinx/ext/coverage.py Wed Mar 26 14:21:39 2008 @@ -49,7 +49,7 @@ try: self.c_regexes.append((name, re.compile(exp))) except Exception: - warnfunc('invalid regex %r in coverage_c_regexes' % exp) + self.warn('invalid regex %r in coverage_c_regexes' % exp) self.c_ignorexps = {} for (name, exps) in self.config.coverage_ignore_c_items.iteritems(): From buildbot at python.org Wed Mar 26 14:30:32 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 13:30:32 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080326133032.812821E4006@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/489 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 14:53:48 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 13:53:48 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080326135430.5D02F1E4006@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/405 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tarfile make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 14:53:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 13:53:47 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080326135517.75B721E4007@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/152 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 14:56:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 13:56:22 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080326135631.5ADA01E4006@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/602 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_io ====================================================================== ERROR: testBasicIO (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_io.py", line 822, in testBasicIO self.multi_line_test(f, enc) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_io.py", line 840, in multi_line_test pos = f.tell() File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/io.py", line 1386, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: testSeeking (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_io.py", line 882, in testSeeking self.assertEquals(f.tell(), prefix_size) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/io.py", line 1386, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: testSeekingToo (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_io.py", line 895, in testSeekingToo f.tell() File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/io.py", line 1386, in 
tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: testTelling (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_io.py", line 857, in testTelling self.assertEquals(f.tell(), p1) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/io.py", line 1386, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: test_issue1395_5 (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_io.py", line 1077, in test_issue1395_5 pos = txt.tell() File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/io.py", line 1386, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 16:33:48 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 16:33:48 +0100 (CET) Subject: [Python-checkins] r61949 - doctools/trunk/sphinx/quickstart.py Message-ID: <20080326153348.481741E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 16:33:47 2008 New Revision: 61949 Modified: doctools/trunk/sphinx/quickstart.py Log: Fix format string. Modified: doctools/trunk/sphinx/quickstart.py ============================================================================== --- doctools/trunk/sphinx/quickstart.py (original) +++ doctools/trunk/sphinx/quickstart.py Wed Mar 26 16:33:47 2008 @@ -184,7 +184,7 @@ # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter -ALLSPHINXOPTS = -d %(rbuilddir)s/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) %(rsrcdir) +ALLSPHINXOPTS = -d %(rbuilddir)s/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) %(rsrcdir)s .PHONY: help clean html web htmlhelp latex changes linkcheck From python-checkins at python.org Wed Mar 26 16:34:47 2008 From: python-checkins at python.org (georg.brandl) Date: Wed, 26 Mar 2008 16:34:47 +0100 (CET) Subject: [Python-checkins] r61950 - in doctools/trunk: CHANGES sphinx/__init__.py Message-ID: <20080326153447.885DF1E4006@bag.python.org> Author: georg.brandl Date: Wed Mar 26 16:34:47 2008 New Revision: 61950 Modified: doctools/trunk/CHANGES doctools/trunk/sphinx/__init__.py (props changed) Log: Update Changelog. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Wed Mar 26 16:34:47 2008 @@ -1,3 +1,9 @@ +Release 0.1.61950 (Mar 26, 2008) +================================ + +* sphinx.quickstart: Fix format string for Makefile. + + Release 0.1.61945 (Mar 26, 2008) ================================ From buildbot at python.org Wed Mar 26 16:43:49 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 15:43:49 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 3.0 Message-ID: <20080326154350.132A81E4006@bag.python.org> The Buildbot has detected a new failure of amd64 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%203.0/builds/719 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_getargs2 test_mailbox test_winsound ====================================================================== ERROR: test_n (test.test_getargs2.Signed_TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_getargs2.py", line 190, in test_n self.failUnlessEqual(99, getargs_n(Long())) TypeError: 'Long' object cannot be interpreted as an integer ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 From MAILER-DAEMON Wed Mar 26 15:43:15 2008 From: foo 1 From MAILER-DAEMON Wed Mar 26 15:43:15 2008 From: foo 2 From MAILER-DAEMON Wed Mar 26 15:43:15 2008 From: foo 3 From MAIL' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo 0 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Wed Mar 26 15:43:17 2008 From: foo 1 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Wed Mar 26 15:43:17 2008 From: foo 2 \x01\x01\x01\x01 \x01\x01\x01\x01 From MAILER-DAEMON Wed Mar 26 15' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 336, in test_popitem self.assertEqual(int(msg.get_payload()), keys.index(key)) ValueError: invalid literal for int() with base 10: 'From: foo *** EOOH *** From: foo 0 1,, From: foo *** EOOH *** From: foo 1 1,, From: foo *** EOOH *** From: foo 2 1,, From: foo *** EOOH *** From: foo 3 ' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMaildir) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 414, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for 
gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 760, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: 'From MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n1\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nFrom: foo\n\n2\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:13 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 727, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close, False) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 402, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message 
(test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 414, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush, True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 402, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:14 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 744, in 
test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 6 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n1' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_add (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 760, in test_add_and_close self.assertEqual(contents, open(self._path, 'r').read()) AssertionError: '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != '\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nFrom: foo\n\n2\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:15 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n\n\n\x01\x01\x01\x01\n\n' ====================================================================== FAIL: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 727, in test_add_from_string self.assertEqual(self._box[key].get_from(), 'foo at bar blah') AssertionError: 'foo at bar blah\n' != 'foo at bar blah' ====================================================================== FAIL: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close, False) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 402, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 414, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush, True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 402, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:16 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. 
Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", 
line 744, in test_open_close_open self.assertEqual(len(self._box), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:17 2008\n\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n1\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\noriginal 0\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\nchanged 0\n\n\x01\x01\x01\x01\n\n\x01\x01\x01\x01\n\nFrom MAILER-DAEMON Wed Mar 26 15:43:17 2008\n\nReturn-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n\n\n\x01\x01\x01\x01\n\n' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 414, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: \nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. 
Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 77, in test_add self.assertEqual(self._box.get_string(keys[0]), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 389, in test_close self._test_flush_or_close(self._box.close, False) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 402, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 87, in test_delitem self._test_remove_or_delitem(self._box.__delitem__) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 414, in test_dump_message _sample_message.replace('\n', os.linesep)) AssertionError: 'Return-Path: 
\nX-Original-To: gkj+person at localhost\nDelivered-To: gkj+person at localhost\nReceived: from localhost (localhost [127.0.0.1])\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nDelivered-To: gkj at sundance.gregorykjohnson.com\nReceived: from localhost [127.0.0.1]\n by localhost with POP3 (fetchmail-6.2.5)\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\nDate: Wed, 13 Jul 2005 17:23:11 -0400\nFrom: "Gregory K. Johnson" \nTo: gkj at gregorykjohnson.com\nSubject: Sample message\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\nContent-Disposition: inline\nUser-Agent: Mutt/1.5.9i\n\n\n--NMuMz9nt05w80d4+\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\nThis is a sample message.\n\n--\nGregory K. Johnson\n\n--NMuMz9nt05w80d4+\nContent-Type: application/octet-stream\nContent-Disposition: attachment; filename="text.gz"\nContent-Transfer-Encoding: base64\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n3FYlAAAA\n\n--NMuMz9nt05w80d4+--\n' != 'Return-Path: \r\nX-Original-To: gkj+person at localhost\r\nDelivered-To: gkj+person at localhost\r\nReceived: from localhost (localhost [127.0.0.1])\r\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\r\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nDelivered-To: gkj at sundance.gregorykjohnson.com\r\nReceived: from localhost [127.0.0.1]\r\n by localhost with POP3 (fetchmail-6.2.5)\r\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\r\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\r\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\r\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\r\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\r\nDate: Wed, 13 Jul 2005 17:23:11 -0400\r\nFrom: "Gregory K. Johnson" \r\nTo: gkj at gregorykjohnson.com\r\nSubject: Sample message\r\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\r\nMime-Version: 1.0\r\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\r\nContent-Disposition: inline\r\nUser-Agent: Mutt/1.5.9i\r\n\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: text/plain; charset=us-ascii\r\nContent-Disposition: inline\r\n\r\nThis is a sample message.\r\n\r\n--\r\nGregory K. 
Johnson\r\n\r\n--NMuMz9nt05w80d4+\r\nContent-Type: application/octet-stream\r\nContent-Disposition: attachment; filename="text.gz"\r\nContent-Transfer-Encoding: base64\r\n\r\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\r\n3FYlAAAA\r\n\r\n--NMuMz9nt05w80d4+--\r\n' ====================================================================== FAIL: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 377, in test_flush self._test_flush_or_close(self._box.flush, True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 402, in _test_flush_or_close self.assertEqual(len(keys), 3) AssertionError: 0 != 3 ====================================================================== FAIL: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 129, in test_get self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 174, in test_get_file self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 156, in test_get_message self.assertEqual(msg0['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 164, in test_get_string self.assertEqual(self._box.get_string(key0), self._template % 0) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by 
andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. 
Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\n0' ====================================================================== FAIL: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 144, in test_getitem self.assertEqual(msg['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 207, in test_items self._check_iteration(self._box.items, do_keys=True, do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 194, in test_iter do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 203, in test_iteritems do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 189, in test_itervalues do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== FAIL: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 313, in test_pop self.assertEqual(self._box.pop(key0).get_payload(), '0') AssertionError: 'From: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n0\n\n\x1f\x0c\n\n1,,\n\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != '0' ====================================================================== FAIL: test_remove (test.test_mailbox.TestBabyl) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 83, in test_remove self._test_remove_or_delitem(self._box.remove) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 99, in _test_remove_or_delitem self.assertEqual(self._box.get_string(key1), self._template % 1) AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\n1\n\n\x1f' != 'From: foo\n\n1' ====================================================================== FAIL: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 272, in test_set_item self._template % 'original 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\noriginal 0\n\n\x1f' != 'From: foo\n\noriginal 0' ====================================================================== FAIL: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 350, in test_update self._template % 'changed 0') AssertionError: '\nFrom: foo\n\n\n\n*** EOOH ***\n\nFrom: foo\n\n\n\nchanged 0\n\n\x1f\x0c\n\n1,,\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n*** EOOH ***\n\nReturn-Path: \n\nX-Original-To: gkj+person at localhost\n\nDelivered-To: gkj+person at localhost\n\nReceived: from localhost (localhost [127.0.0.1])\n\n by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17\n\n for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nDelivered-To: gkj at sundance.gregorykjohnson.com\n\nReceived: from localhost [127.0.0.1]\n\n by localhost with POP3 (fetchmail-6.2.5)\n\n for gkj+person at localhost (single-drop); Wed, 13 Jul 2005 17:23:16 -0400 (EDT)\n\nReceived: from andy.gregorykjohnson.com (andy.gregorykjohnson.com [64.32.235.228])\n\n by sundance.gregorykjohnson.com (Postfix) with ESMTP id 5B056316746\n\n for ; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nReceived: by andy.gregorykjohnson.com (Postfix, from userid 1000)\n\n id 490CD9DD17; Wed, 13 Jul 2005 17:23:11 -0400 (EDT)\n\nDate: Wed, 13 Jul 2005 17:23:11 -0400\n\nFrom: "Gregory K. 
Johnson" \n\nTo: gkj at gregorykjohnson.com\n\nSubject: Sample message\n\nMessage-ID: <20050713212311.GC4701 at andy.gregorykjohnson.com>\n\nMime-Version: 1.0\n\nContent-Type: multipart/mixed; boundary="NMuMz9nt05w80d4+"\n\nContent-Disposition: inline\n\nUser-Agent: Mutt/1.5.9i\n\n\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: text/plain; charset=us-ascii\n\nContent-Disposition: inline\n\n\n\nThis is a sample message.\n\n\n\n--\n\nGregory K. Johnson\n\n\n\n--NMuMz9nt05w80d4+\n\nContent-Type: application/octet-stream\n\nContent-Disposition: attachment; filename="text.gz"\n\nContent-Transfer-Encoding: base64\n\n\n\nH4sICM2D1UIAA3RleHQAC8nILFYAokSFktSKEoW0zJxUPa7wzJIMhZLyfIWczLzUYj0uAHTs\n\n3FYlAAAA\n\n\n\n--NMuMz9nt05w80d4+--\n\n\n\n\x1f' != 'From: foo\n\nchanged 0' ====================================================================== FAIL: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 198, in test_values self._check_iteration(self._box.values, do_keys=False, do_values=True) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_mailbox.py", line 231, in _check_iteration self.assertEqual(value['from'], 'foo') AssertionError: None != 'foo' ====================================================================== ERROR: test_alias_asterisk (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_winsound.py", line 87, in test_alias_asterisk winsound.PlaySound('SystemAsterisk', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exclamation (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_winsound.py", line 97, in test_alias_exclamation winsound.PlaySound('SystemExclamation', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_exit (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_winsound.py", line 107, in test_alias_exit winsound.PlaySound('SystemExit', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_hand (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_winsound.py", line 117, in test_alias_hand winsound.PlaySound('SystemHand', winsound.SND_ALIAS) RuntimeError: Failed to play sound ====================================================================== ERROR: test_alias_question (test.test_winsound.PlaySoundTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_winsound.py", line 127, in test_alias_question winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS) RuntimeError: Failed to play sound sincerely, -The 
Buildbot From buildbot at python.org Wed Mar 26 18:50:50 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 17:50:50 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080326175051.2F13D1E4014@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/880 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_anydbm test_bsddb test_whichdb ====================================================================== ERROR: test_anydbm_creation (test.test_anydbm.AnyDBMTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 37, in test_anydbm_creation f = anydbm.open(_fname, 'c') File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\anydbm.py", line 83, in open return mod.open(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_anydbm_keys (test.test_anydbm.AnyDBMTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 58, in test_anydbm_keys self.init_db() File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 69, in init_db f = anydbm.open(_fname, 'n') File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\anydbm.py", line 83, in open return mod.open(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_anydbm_modification (test.test_anydbm.AnyDBMTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 45, in test_anydbm_modification self.init_db() File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 69, in init_db f = anydbm.open(_fname, 'n') File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\anydbm.py", line 83, in open return mod.open(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) 
DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_anydbm_read (test.test_anydbm.AnyDBMTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 52, in test_anydbm_read self.init_db() File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_anydbm.py", line 69, in init_db f = anydbm.open(_fname, 'n') File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\anydbm.py", line 83, in open return mod.open(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test__no_deadlock_first (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_change (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_clear (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_close_and_reopen (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) 
DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_contains (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_first_next_looping (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_first_while_deleting (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_for_cursor_memleak (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_get (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_getitem (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_has_key (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_iter_while_modifying_values (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_iteritems_while_modifying_values (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_keyordering (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_last_while_deleting (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) 
DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_len (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_mapping_iteration_methods (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_pop (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_popitem (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_previous_last_looping (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_set_location (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_setdefault (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_update (test.test_bsddb.TestBTree) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 327, in btopen d.open(file, db.DB_BTREE, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test__no_deadlock_first (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_change (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_clear (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- 
__fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_close_and_reopen (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_contains (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_first_next_looping (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_first_while_deleting (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_for_cursor_memleak (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_get (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_getitem (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_has_key (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_iter_while_modifying_values (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_iteritems_while_modifying_values (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_keyordering (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, 
flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_last_while_deleting (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_len (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_mapping_iteration_methods (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_pop (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_popitem (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_previous_last_looping (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call 
last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_set_location (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_setdefault (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_update (test.test_bsddb.TestHashTable) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_bsddb.py", line 16, in setUp self.f = self.openmethod[0](self.fname, self.openflag, cachesize=32768) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: test_whichdb_dbhash (test.test_whichdb.WhichDBTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_whichdb.py", line 48, in test_whichdb_name f = mod.open(_fname, 'c') File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\bsddb\__init__.py", line 310, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 20:57:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 19:57:55 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 3.0 Message-ID: <20080326195755.821EE1E4021@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%203.0/builds/123 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 21:19:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 20:19:36 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080326201936.93B881E4007@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/239 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_threading make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 21:32:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 20:32:56 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080326203256.D179D1E4008@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/140 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 21:32:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 20:32:56 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080326203256.D04D61E4007@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/213 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 22:41:36 2008 From: python-checkins at python.org (mark.dickinson) Date: Wed, 26 Mar 2008 22:41:36 +0100 (CET) Subject: [Python-checkins] r61952 - python/trunk/Doc/c-api/structures.rst Message-ID: <20080326214136.AA1511E4008@bag.python.org> Author: mark.dickinson Date: Wed Mar 26 22:41:36 2008 New Revision: 61952 Modified: python/trunk/Doc/c-api/structures.rst Log: Typo: "objects reference count" -> "object's reference count" Modified: python/trunk/Doc/c-api/structures.rst ============================================================================== --- python/trunk/Doc/c-api/structures.rst (original) +++ python/trunk/Doc/c-api/structures.rst Wed Mar 26 22:41:36 2008 @@ -20,7 +20,7 @@ All object types are extensions of this type. This is a type which contains the information Python needs to treat a pointer to an object as an object. 
In a - normal "release" build, it contains only the objects reference count and a + normal "release" build, it contains only the object's reference count and a pointer to the corresponding type object. It corresponds to the fields defined by the expansion of the ``PyObject_HEAD`` macro. From python-checkins at python.org Wed Mar 26 23:01:38 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 23:01:38 +0100 (CET) Subject: [Python-checkins] r61953 - in python/trunk: Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Misc/NEWS Parser/parser.c Parser/parsetok.c Python/ast.c Python/future.c Python/import.c Python/pythonrun.c Message-ID: <20080326220138.6BDE81E400C@bag.python.org> Author: christian.heimes Date: Wed Mar 26 23:01:37 2008 New Revision: 61953 Modified: python/trunk/Include/code.h python/trunk/Include/compile.h python/trunk/Include/parsetok.h python/trunk/Include/pythonrun.h python/trunk/Lib/__future__.py python/trunk/Misc/NEWS python/trunk/Parser/parser.c python/trunk/Parser/parsetok.c python/trunk/Python/ast.c python/trunk/Python/future.c python/trunk/Python/import.c python/trunk/Python/pythonrun.c Log: Patch #2477: Added from __future__ import unicode_literals The new PyParser_*Ex() functions are based on Neal's suggestion and initial patch. The new __future__ feature makes all '' and r'' unicode strings. b'' and br'' stay (byte) strings. Modified: python/trunk/Include/code.h ============================================================================== --- python/trunk/Include/code.h (original) +++ python/trunk/Include/code.h Wed Mar 26 23:01:37 2008 @@ -49,6 +49,7 @@ #define CO_FUTURE_ABSOLUTE_IMPORT 0x4000 /* do absolute imports by default */ #define CO_FUTURE_WITH_STATEMENT 0x8000 #define CO_FUTURE_PRINT_FUNCTION 0x10000 +#define CO_FUTURE_UNICODE_LITERALS 0x20000 /* This should be defined if a future statement modifies the syntax. For example, when a keyword is added. Modified: python/trunk/Include/compile.h ============================================================================== --- python/trunk/Include/compile.h (original) +++ python/trunk/Include/compile.h Wed Mar 26 23:01:37 2008 @@ -25,6 +25,7 @@ #define FUTURE_ABSOLUTE_IMPORT "absolute_import" #define FUTURE_WITH_STATEMENT "with_statement" #define FUTURE_PRINT_FUNCTION "print_function" +#define FUTURE_UNICODE_LITERALS "unicode_literals" struct _mod; /* Declare the existence of this type */ Modified: python/trunk/Include/parsetok.h ============================================================================== --- python/trunk/Include/parsetok.h (original) +++ python/trunk/Include/parsetok.h Wed Mar 26 23:01:37 2008 @@ -28,6 +28,7 @@ #endif #define PyPARSE_PRINT_IS_FUNCTION 0x0004 +#define PyPARSE_UNICODE_LITERALS 0x0008 @@ -41,11 +42,18 @@ PyAPI_FUNC(node *) PyParser_ParseFileFlags(FILE *, const char *, grammar *, int, char *, char *, perrdetail *, int); +PyAPI_FUNC(node *) PyParser_ParseFileFlagsEx(FILE *, const char *, grammar *, + int, char *, char *, + perrdetail *, int *); PyAPI_FUNC(node *) PyParser_ParseStringFlagsFilename(const char *, const char *, grammar *, int, perrdetail *, int); +PyAPI_FUNC(node *) PyParser_ParseStringFlagsFilenameEx(const char *, + const char *, + grammar *, int, + perrdetail *, int *); /* Note that he following function is defined in pythonrun.c not parsetok.c. 
*/ PyAPI_FUNC(void) PyParser_SetError(perrdetail *); Modified: python/trunk/Include/pythonrun.h ============================================================================== --- python/trunk/Include/pythonrun.h (original) +++ python/trunk/Include/pythonrun.h Wed Mar 26 23:01:37 2008 @@ -8,7 +8,8 @@ #endif #define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ - CO_FUTURE_WITH_STATEMENT|CO_FUTURE_PRINT_FUNCTION) + CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION | \ + CO_FUTURE_UNICODE_LITERALS) #define PyCF_MASK_OBSOLETE (CO_NESTED) #define PyCF_SOURCE_IS_UTF8 0x0100 #define PyCF_DONT_IMPLY_DEDENT 0x0200 Modified: python/trunk/Lib/__future__.py ============================================================================== --- python/trunk/Lib/__future__.py (original) +++ python/trunk/Lib/__future__.py Wed Mar 26 23:01:37 2008 @@ -54,6 +54,7 @@ "absolute_import", "with_statement", "print_function", + "unicode_literals", ] __all__ = ["all_feature_names"] + all_feature_names @@ -68,6 +69,7 @@ CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals class _Feature: def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): @@ -120,3 +122,7 @@ print_function = _Feature((2, 6, 0, "alpha", 2), (3, 0, 0, "alpha", 0), CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Mar 26 23:01:37 2008 @@ -12,6 +12,8 @@ Core and builtins ----------------- +- Patch #2477: Added from __future__ import unicode_literals + - Added backport of bytearray type. - Issue #2355: add Py3k warning for buffer(). @@ -186,6 +188,12 @@ - Patch #2284: Add -x64 option to rt.bat. +C API +----- + +- Patch #2477: Added PyParser_ParseFileFlagsEx() and + PyParser_ParseStringFlagsFilenameEx() + What's New in Python 2.6 alpha 1? 
================================= Modified: python/trunk/Parser/parser.c ============================================================================== --- python/trunk/Parser/parser.c (original) +++ python/trunk/Parser/parser.c Wed Mar 26 23:01:37 2008 @@ -202,14 +202,18 @@ for (i = 0; i < NCH(ch); i += 2) { cch = CHILD(ch, i); - if (NCH(cch) >= 1 && TYPE(CHILD(cch, 0)) == NAME && - strcmp(STR(CHILD(cch, 0)), "with_statement") == 0) { - ps->p_flags |= CO_FUTURE_WITH_STATEMENT; - break; - } else if (NCH(cch) >= 1 && TYPE(CHILD(cch, 0)) == NAME && - strcmp(STR(CHILD(cch, 0)), "print_function") == 0) { - ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; - break; + if (NCH(cch) >= 1 && TYPE(CHILD(cch, 0)) == NAME) { + char *str_ch = STR(CHILD(cch, 0)); + if (strcmp(str_ch, FUTURE_WITH_STATEMENT) == 0) { + ps->p_flags |= CO_FUTURE_WITH_STATEMENT; + break; + } else if (strcmp(str_ch, FUTURE_PRINT_FUNCTION) == 0) { + ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; + break; + } else if (strcmp(str_ch, FUTURE_UNICODE_LITERALS) == 0) { + ps->p_flags |= CO_FUTURE_UNICODE_LITERALS; + break; + } } } } Modified: python/trunk/Parser/parsetok.c ============================================================================== --- python/trunk/Parser/parsetok.c (original) +++ python/trunk/Parser/parsetok.c Wed Mar 26 23:01:37 2008 @@ -14,7 +14,7 @@ /* Forward */ -static node *parsetok(struct tok_state *, grammar *, int, perrdetail *, int); +static node *parsetok(struct tok_state *, grammar *, int, perrdetail *, int *); static void initerr(perrdetail *err_ret, const char* filename); /* Parse input coming from a string. Return error code, print some errors. */ @@ -37,6 +37,16 @@ grammar *g, int start, perrdetail *err_ret, int flags) { + int iflags = flags; + return PyParser_ParseStringFlagsFilenameEx(s, filename, g, start, + err_ret, &iflags); +} + +node * +PyParser_ParseStringFlagsFilenameEx(const char *s, const char *filename, + grammar *g, int start, + perrdetail *err_ret, int *flags) +{ struct tok_state *tok; initerr(err_ret, filename); @@ -70,6 +80,14 @@ PyParser_ParseFileFlags(FILE *fp, const char *filename, grammar *g, int start, char *ps1, char *ps2, perrdetail *err_ret, int flags) { + int iflags = flags; + return PyParser_ParseFileFlagsEx(fp, filename, g, start, ps1, ps2, err_ret, &iflags); +} + +node * +PyParser_ParseFileFlagsEx(FILE *fp, const char *filename, grammar *g, int start, + char *ps1, char *ps2, perrdetail *err_ret, int *flags) +{ struct tok_state *tok; initerr(err_ret, filename); @@ -85,7 +103,6 @@ tok->alterror++; } - return parsetok(tok, g, start, err_ret, flags); } @@ -110,7 +127,7 @@ static node * parsetok(struct tok_state *tok, grammar *g, int start, perrdetail *err_ret, - int flags) + int *flags) { parser_state *ps; node *n; @@ -123,8 +140,13 @@ return NULL; } #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - if (flags & PyPARSE_PRINT_IS_FUNCTION) + if (*flags & PyPARSE_PRINT_IS_FUNCTION) { ps->p_flags |= CO_FUTURE_PRINT_FUNCTION; + } + if (*flags & PyPARSE_UNICODE_LITERALS) { + ps->p_flags |= CO_FUTURE_UNICODE_LITERALS; + } + #endif for (;;) { @@ -147,7 +169,7 @@ except if a certain flag is given -- codeop.py uses this. 
*/ if (tok->indent && - !(flags & PyPARSE_DONT_IMPLY_DEDENT)) + !(*flags & PyPARSE_DONT_IMPLY_DEDENT)) { tok->pendin = -tok->indent; tok->indent = 0; @@ -191,6 +213,7 @@ else n = NULL; + *flags = ps->p_flags; PyParser_Delete(ps); if (n == NULL) { Modified: python/trunk/Python/ast.c ============================================================================== --- python/trunk/Python/ast.c (original) +++ python/trunk/Python/ast.c Wed Mar 26 23:01:37 2008 @@ -18,6 +18,7 @@ /* Data structure used internally */ struct compiling { char *c_encoding; /* source encoding */ + int c_future_unicode; /* __future__ unicode literals flag */ PyArena *c_arena; /* arena for allocating memeory */ const char *c_filename; /* filename */ }; @@ -36,7 +37,7 @@ static expr_ty ast_for_call(struct compiling *, const node *, expr_ty); static PyObject *parsenumber(const char *); -static PyObject *parsestr(const char *s, const char *encoding); +static PyObject *parsestr(struct compiling *, const char *); static PyObject *parsestrplus(struct compiling *, const node *n); #ifndef LINENO @@ -198,6 +199,7 @@ } else { c.c_encoding = NULL; } + c.c_future_unicode = flags && flags->cf_flags & CO_FUTURE_UNICODE_LITERALS; c.c_arena = arena; c.c_filename = filename; @@ -3247,13 +3249,13 @@ * parsestr parses it, and returns the decoded Python string object. */ static PyObject * -parsestr(const char *s, const char *encoding) +parsestr(struct compiling *c, const char *s) { size_t len; int quote = Py_CHARMASK(*s); int rawmode = 0; int need_encoding; - int unicode = 0; + int unicode = c->c_future_unicode; if (isalpha(quote) || quote == '_') { if (quote == 'u' || quote == 'U') { @@ -3262,6 +3264,7 @@ } if (quote == 'b' || quote == 'B') { quote = *++s; + unicode = 0; } if (quote == 'r' || quote == 'R') { quote = *++s; @@ -3293,12 +3296,12 @@ } #ifdef Py_USING_UNICODE if (unicode || Py_UnicodeFlag) { - return decode_unicode(s, len, rawmode, encoding); + return decode_unicode(s, len, rawmode, c->c_encoding); } #endif - need_encoding = (encoding != NULL && - strcmp(encoding, "utf-8") != 0 && - strcmp(encoding, "iso-8859-1") != 0); + need_encoding = (c->c_encoding != NULL && + strcmp(c->c_encoding, "utf-8") != 0 && + strcmp(c->c_encoding, "iso-8859-1") != 0); if (rawmode || strchr(s, '\\') == NULL) { if (need_encoding) { #ifndef Py_USING_UNICODE @@ -3310,7 +3313,7 @@ PyObject *v, *u = PyUnicode_DecodeUTF8(s, len, NULL); if (u == NULL) return NULL; - v = PyUnicode_AsEncodedString(u, encoding, NULL); + v = PyUnicode_AsEncodedString(u, c->c_encoding, NULL); Py_DECREF(u); return v; #endif @@ -3320,7 +3323,7 @@ } return PyString_DecodeEscape(s, len, NULL, unicode, - need_encoding ? encoding : NULL); + need_encoding ? c->c_encoding : NULL); } /* Build a Python string object out of a STRING atom. 
This takes care of @@ -3333,11 +3336,11 @@ PyObject *v; int i; REQ(CHILD(n, 0), STRING); - if ((v = parsestr(STR(CHILD(n, 0)), c->c_encoding)) != NULL) { + if ((v = parsestr(c, STR(CHILD(n, 0)))) != NULL) { /* String literal concatenation */ for (i = 1; i < NCH(n); i++) { PyObject *s; - s = parsestr(STR(CHILD(n, i)), c->c_encoding); + s = parsestr(c, STR(CHILD(n, i))); if (s == NULL) goto onError; if (PyString_Check(v) && PyString_Check(s)) { Modified: python/trunk/Python/future.c ============================================================================== --- python/trunk/Python/future.c (original) +++ python/trunk/Python/future.c Wed Mar 26 23:01:37 2008 @@ -35,6 +35,8 @@ ff->ff_features |= CO_FUTURE_WITH_STATEMENT; } else if (strcmp(feature, FUTURE_PRINT_FUNCTION) == 0) { ff->ff_features |= CO_FUTURE_PRINT_FUNCTION; + } else if (strcmp(feature, FUTURE_UNICODE_LITERALS) == 0) { + ff->ff_features |= CO_FUTURE_UNICODE_LITERALS; } else if (strcmp(feature, "braces") == 0) { PyErr_SetString(PyExc_SyntaxError, "not a chance"); Modified: python/trunk/Python/import.c ============================================================================== --- python/trunk/Python/import.c (original) +++ python/trunk/Python/import.c Wed Mar 26 23:01:37 2008 @@ -818,11 +818,12 @@ { PyCodeObject *co = NULL; mod_ty mod; + PyCompilerFlags flags; PyArena *arena = PyArena_New(); if (arena == NULL) return NULL; - mod = PyParser_ASTFromFile(fp, pathname, Py_file_input, 0, 0, 0, + mod = PyParser_ASTFromFile(fp, pathname, Py_file_input, 0, 0, &flags, NULL, arena); if (mod) { co = PyAST_Compile(mod, pathname, NULL, arena); Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Wed Mar 26 23:01:37 2008 @@ -774,8 +774,11 @@ #define PARSER_FLAGS(flags) \ ((flags) ? ((((flags)->cf_flags & PyCF_DONT_IMPLY_DEDENT) ? \ PyPARSE_DONT_IMPLY_DEDENT : 0) \ - | ((flags)->cf_flags & CO_FUTURE_PRINT_FUNCTION ? \ - PyPARSE_PRINT_IS_FUNCTION : 0)) : 0) + | (((flags)->cf_flags & CO_FUTURE_PRINT_FUNCTION) ? \ + PyPARSE_PRINT_IS_FUNCTION : 0) \ + | (((flags)->cf_flags & CO_FUTURE_UNICODE_LITERALS) ? 
\ + PyPARSE_UNICODE_LITERALS : 0) \ + ) : 0) #endif int @@ -1390,11 +1393,12 @@ { struct symtable *st; mod_ty mod; + PyCompilerFlags flags; PyArena *arena = PyArena_New(); if (arena == NULL) return NULL; - mod = PyParser_ASTFromString(str, filename, start, NULL, arena); + mod = PyParser_ASTFromString(str, filename, start, &flags, arena); if (mod == NULL) { PyArena_Free(arena); return NULL; @@ -1411,10 +1415,16 @@ { mod_ty mod; perrdetail err; - node *n = PyParser_ParseStringFlagsFilename(s, filename, + int iflags; + iflags = PARSER_FLAGS(flags); + + node *n = PyParser_ParseStringFlagsFilenameEx(s, filename, &_PyParser_Grammar, start, &err, - PARSER_FLAGS(flags)); + &iflags); if (n) { + if (flags) { + flags->cf_flags |= iflags & PyCF_MASK; + } mod = PyAST_FromNode(n, flags, filename, arena); PyNode_Free(n); return mod; @@ -1432,9 +1442,15 @@ { mod_ty mod; perrdetail err; - node *n = PyParser_ParseFileFlags(fp, filename, &_PyParser_Grammar, - start, ps1, ps2, &err, PARSER_FLAGS(flags)); + int iflags; + + iflags = PARSER_FLAGS(flags); + node *n = PyParser_ParseFileFlagsEx(fp, filename, &_PyParser_Grammar, + start, ps1, ps2, &err, &iflags); if (n) { + if (flags) { + flags->cf_flags |= iflags & PyCF_MASK; + } mod = PyAST_FromNode(n, flags, filename, arena); PyNode_Free(n); return mod; From buildbot at python.org Wed Mar 26 23:08:52 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 22:08:52 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080326220852.60EC11E4013@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/3053 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,mark.dickinson BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 23:09:47 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 22:09:47 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 trunk Message-ID: <20080326220947.8F0211E402B@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/219 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,mark.dickinson BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 23:10:21 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 22:10:21 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080326221021.3D7231E401F@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. 
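The r61953 log above describes the user-visible rule: with the new future import, '' and r'' literals compile to unicode, b'' and br'' stay byte strings, and the new compiler flag CO_FUTURE_UNICODE_LITERALS is 0x20000. A minimal illustrative sketch of that behaviour (not part of the patch itself), assuming a Python 2.6 interpreter built from this revision and using only the public names visible in the diff:

    # Effect of the new future import, per r61953.
    from __future__ import unicode_literals

    import __future__

    print(type(''))       # <type 'unicode'> instead of <type 'str'>
    print(type(r''))      # <type 'unicode'>
    print(type(b''))      # <type 'str'>  -- byte literals are unaffected
    print(type(br''))     # <type 'str'>

    # The same behaviour can be requested explicitly through compile();
    # the flag value matches CO_FUTURE_UNICODE_LITERALS (0x20000).
    flag = __future__.unicode_literals.compiler_flag
    ns = {}
    exec(compile("s = 'abc'", '<example>', 'exec', flag), ns)
    print(type(ns['s']))  # <type 'unicode'>

The first four checks mirror what the new Lib/test/test_future4.py (added in r61957 below) asserts.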
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/260 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,mark.dickinson BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 23:20:27 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 23:20:27 +0100 (CET) Subject: [Python-checkins] r61954 - python/trunk/Parser/parsetok.c Message-ID: <20080326222027.49DE91E402C@bag.python.org> Author: christian.heimes Date: Wed Mar 26 23:20:26 2008 New Revision: 61954 Modified: python/trunk/Parser/parsetok.c Log: Surround p_flags access with #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD Modified: python/trunk/Parser/parsetok.c ============================================================================== --- python/trunk/Parser/parsetok.c (original) +++ python/trunk/Parser/parsetok.c Wed Mar 26 23:20:26 2008 @@ -213,7 +213,9 @@ else n = NULL; +#ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD *flags = ps->p_flags; +#endif PyParser_Delete(ps); if (n == NULL) { From buildbot at python.org Wed Mar 26 23:24:54 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 22:24:54 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 3.0 Message-ID: <20080326222454.9AD991E402D@bag.python.org> The Buildbot has detected a new failure of amd64 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%203.0/builds/722 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: The web-page 'rebuild' button was pressed by 'theller': Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 23:27:57 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 22:27:57 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080326222757.75C5E1E4030@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1176 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,mark.dickinson BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Wed Mar 26 23:48:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 22:48:36 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 3.0 Message-ID: <20080326224836.CFC451E4007@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 3.0. 
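The Ex variants added in r61953, with the write-back guarded in r61954 above, turn the parser's flags argument into an in/out parameter so that future statements seen while parsing are reported back to the caller. A rough Python-level analogue of that "remember the futures already compiled" bookkeeping is the codeop module; the sketch below is illustrative only and is not part of these patches:

    # Illustrative sketch (not part of these patches): codeop keeps the
    # same kind of per-session future state at the Python level that the
    # new int *flags write-back provides to the C interactive loop.
    import __future__
    import codeop

    compiler = codeop.Compile()
    compiler('from __future__ import unicode_literals', '<input>', 'exec')

    # Later compiles through the same Compile instance inherit the flag.
    code = compiler("s = ''", '<input>', 'exec')
    flag = __future__.unicode_literals.compiler_flag
    print(bool(code.co_flags & flag))   # True

codeop itself should need no change for this, since it builds its feature list from __future__.all_feature_names, which r61953 extends with "unicode_literals".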
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%203.0/builds/729 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Wed Mar 26 23:51:59 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 23:51:59 +0100 (CET) Subject: [Python-checkins] r61956 - python/trunk/Python/import.c python/trunk/Python/pythonrun.c Message-ID: <20080326225159.48B801E4007@bag.python.org> Author: christian.heimes Date: Wed Mar 26 23:51:58 2008 New Revision: 61956 Modified: python/trunk/Python/import.c python/trunk/Python/pythonrun.c Log: Initialize PyCompilerFlags cf_flags with 0 Modified: python/trunk/Python/import.c ============================================================================== --- python/trunk/Python/import.c (original) +++ python/trunk/Python/import.c Wed Mar 26 23:51:58 2008 @@ -823,6 +823,8 @@ if (arena == NULL) return NULL; + flags.cf_flags = 0; + mod = PyParser_ASTFromFile(fp, pathname, Py_file_input, 0, 0, &flags, NULL, arena); if (mod) { Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Wed Mar 26 23:51:58 2008 @@ -1398,6 +1398,8 @@ if (arena == NULL) return NULL; + flags.cf_flags = 0; + mod = PyParser_ASTFromString(str, filename, start, &flags, arena); if (mod == NULL) { PyArena_Free(arena); From python-checkins at python.org Wed Mar 26 23:55:32 2008 From: python-checkins at python.org (christian.heimes) Date: Wed, 26 Mar 2008 23:55:32 +0100 (CET) Subject: [Python-checkins] r61957 - python/trunk/Lib/test/test_future4.py Message-ID: <20080326225532.3F00E1E4007@bag.python.org> Author: christian.heimes Date: Wed Mar 26 23:55:31 2008 New Revision: 61957 Added: python/trunk/Lib/test/test_future4.py (contents, props changed) Log: I forgot to svn add the future test Added: python/trunk/Lib/test/test_future4.py ============================================================================== --- (empty file) +++ python/trunk/Lib/test/test_future4.py Wed Mar 26 23:55:31 2008 @@ -0,0 +1,24 @@ +from __future__ import print_function +from __future__ import unicode_literals + +import unittest +from test import test_support + +class TestFuture(unittest.TestCase): + def assertType(self, obj, typ): + self.assert_(type(obj) is typ, + "type(%r) is %r, not %r" % (obj, type(obj), typ)) + + def test_unicode_strings(self): + self.assertType("", unicode) + self.assertType(r"", unicode) + self.assertType(u"", unicode) + self.assertType(ur"", unicode) + self.assertType(b"", str) + self.assertType(br"", str) + +def test_main(): + test_support.run_unittest(TestFuture) + +if __name__ == "__main__": + test_main() From buildbot at python.org Thu Mar 27 00:04:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 23:04:46 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 3.0 Message-ID: <20080326230447.1E84E1E4023@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%203.0/builds/640 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 00:07:43 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Thu, 27 Mar 2008 00:07:43 +0100 (CET) Subject: [Python-checkins] r61958 - python/trunk/Python/pythonrun.c Message-ID: <20080326230743.CD8571E4007@bag.python.org> Author: amaury.forgeotdarc Date: Thu Mar 27 00:07:43 2008 New Revision: 61958 Modified: python/trunk/Python/pythonrun.c Log: C89 compliance: Microsoft compilers want variable declarations at the top Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Thu Mar 27 00:07:43 2008 @@ -1417,8 +1417,7 @@ { mod_ty mod; perrdetail err; - int iflags; - iflags = PARSER_FLAGS(flags); + int iflags = PARSER_FLAGS(flags); node *n = PyParser_ParseStringFlagsFilenameEx(s, filename, &_PyParser_Grammar, start, &err, @@ -1444,9 +1443,8 @@ { mod_ty mod; perrdetail err; - int iflags; + int iflags = PARSER_FLAGS(flags); - iflags = PARSER_FLAGS(flags); node *n = PyParser_ParseFileFlagsEx(fp, filename, &_PyParser_Grammar, start, ps1, ps2, &err, &iflags); if (n) { From buildbot at python.org Thu Mar 27 00:07:53 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 23:07:53 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080326230753.B49541E4007@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/709 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 10 tests failed: test___all__ test_difflib test_netrc test_shlex test_sqlite test_sundry test_threading_local test_unpack_ex test_xml_etree test_xml_etree_c ====================================================================== ERROR: test_all (test.test___all__.AllTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test___all__.py", line 91, in test_all self.check_all("netrc") File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test___all__.py", line 11, in check_all exec("import %s" % modname, names) File "", line 1, in File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/netrc.py", line 5, in import os, shlex File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/shlex.py", line 38 SyntaxError: (unicode error) invalid data Traceback (most recent call last): File "./Lib/test/regrtest.py", line 596, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_netrc.py", line 2, in import netrc, os, unittest, sys File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/netrc.py", line 5, in import os, shlex File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/shlex.py", line 38 SyntaxError: (unicode error) invalid data Re-running test 'test_shlex' in verbose mode Traceback (most recent call last): File "./Lib/test/regrtest.py", line 596, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_shlex.py", line 4, in File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/shlex.py", line 38 SyntaxError: (unicode error) invalid data Re-running test 'test_sqlite' in verbose mode Traceback (most recent call last): File "./Lib/test/regrtest.py", line 596, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_sqlite.py", line 7, in from sqlite3.test import (dbapi, types, userfunctions, File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/sqlite3/test/types.py", line 39 SyntaxError: (unicode error) unexpected end of data Re-running test 'test_sundry' in verbose mode Traceback (most recent call last): File "./Lib/test/regrtest.py", line 596, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_sundry.py", line 111, in import xml File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/contextlib.py", line 33, in __exit__ self.gen.throw(type, value, traceback) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_support.py", line 312, in catch_warning yield warning File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_sundry.py", line 110, in import webbrowser File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/webbrowser.py", line 6, in import shlex File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/shlex.py", line 38 SyntaxError: (unicode error) invalid data Re-running test 
'test_threading_local' in verbose mode make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 00:13:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 23:13:06 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20080326231306.E554A1E4008@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/731 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 5 tests failed: test_collections test_netrc test_sundry test_trace test_zipimport sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 00:13:59 2008 From: python-checkins at python.org (christian.heimes) Date: Thu, 27 Mar 2008 00:13:59 +0100 (CET) Subject: [Python-checkins] r61959 - in python/trunk/Lib: io.py test/test_io.py Message-ID: <20080326231359.72CBD1E4007@bag.python.org> Author: christian.heimes Date: Thu Mar 27 00:13:59 2008 New Revision: 61959 Modified: python/trunk/Lib/io.py python/trunk/Lib/test/test_io.py Log: Use the new unicode literals for the io module use basestring instead of str in Python 2.x Modified: python/trunk/Lib/io.py ============================================================================== --- python/trunk/Lib/io.py (original) +++ python/trunk/Lib/io.py Thu Mar 27 00:13:59 2008 @@ -19,6 +19,8 @@ XXX use incremental encoder for text output, at least for UTF-16 and UTF-8-SIG XXX check writable, readable and seekable in appropriate places """ +from __future__ import print_function +from __future__ import unicode_literals __author__ = ("Guido van Rossum , " "Mike Verdone , " @@ -110,15 +112,15 @@ binary stream, a buffered binary stream, or a buffered text stream, open for reading and/or writing. 
""" - if not isinstance(file, (str, unicode, int)): + if not isinstance(file, (basestring, int)): raise TypeError("invalid file: %r" % file) - if not isinstance(mode, str): + if not isinstance(mode, basestring): raise TypeError("invalid mode: %r" % mode) if buffering is not None and not isinstance(buffering, int): raise TypeError("invalid buffering: %r" % buffering) - if encoding is not None and not isinstance(encoding, str): + if encoding is not None and not isinstance(encoding, basestring): raise TypeError("invalid encoding: %r" % encoding) - if errors is not None and not isinstance(errors, str): + if errors is not None and not isinstance(errors, basestring): raise TypeError("invalid errors: %r" % errors) modes = set(mode) if modes - set("arwb+tU") or len(mode) > len(modes): @@ -1163,13 +1165,13 @@ else: encoding = locale.getpreferredencoding() - if not isinstance(encoding, str): + if not isinstance(encoding, basestring): raise ValueError("invalid encoding: %r" % encoding) if errors is None: errors = "strict" else: - if not isinstance(errors, str): + if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) self.buffer = buffer Modified: python/trunk/Lib/test/test_io.py ============================================================================== --- python/trunk/Lib/test/test_io.py (original) +++ python/trunk/Lib/test/test_io.py Thu Mar 27 00:13:59 2008 @@ -1,5 +1,6 @@ """Unit tests for io.py.""" from __future__ import print_function +from __future__ import unicode_literals import os import sys From buildbot at python.org Thu Mar 27 00:40:16 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 23:40:16 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080326234016.4C6401E4007@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1088 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_signal test_urllib2 test_urllib2net ====================================================================== ERROR: test_trivial (test.test_urllib2.TrivialTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2.py", line 19, in test_trivial self.assertRaises(ValueError, urllib2.urlopen, 'bogus url') File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/unittest.py", line 329, in failUnlessRaises callableObj(*args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_file (test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2.py", line 619, in test_file r = h.file_open(Request(url)) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 1210, in file_open return self.open_local_file(req) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 1229, in open_local_file localfile = url2pathname(file) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 55, in url2pathname return unquote(pathname) TypeError: 'NoneType' object is not callable ====================================================================== ERROR: test_http (test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2.py", line 725, in test_http r.read; r.readline # wrapped MockFile methods AttributeError: addinfourl instance has no attribute 'read' ====================================================================== ERROR: test_build_opener (test.test_urllib2.MiscTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2.py", line 1044, in test_build_opener o = build_opener(FooHandler, BarHandler) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): 
AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: testURLread (test.test_urllib2net.URLTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 38, in testURLread f = _urlopen_with_retry("http://www.python.org/") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_bad_address (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 161, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/unittest.py", line 329, in failUnlessRaises callableObj(*args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_basic (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 119, in test_basic open_url = _urlopen_with_retry("http://www.python.org/") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: 
test_geturl (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 143, in test_geturl open_url = _urlopen_with_retry(URL) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_info (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 130, in test_info open_url = _urlopen_with_retry("http://www.python.org/") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_file (test.test_urllib2net.OtherNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 201, in test_file self._test_urls(urls, self._extra_handlers(), urllib2.urlopen) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 249, in _test_urls urllib2.install_opener(urllib2.build_opener(*handlers)) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_ftp (test.test_urllib2net.OtherNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 
189, in test_ftp self._test_urls(urls, self._extra_handlers()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 249, in _test_urls urllib2.install_opener(urllib2.build_opener(*handlers)) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_http (test.test_urllib2net.OtherNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 213, in test_http self._test_urls(urls, self._extra_handlers()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 249, in _test_urls urllib2.install_opener(urllib2.build_opener(*handlers)) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_range (test.test_urllib2net.OtherNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 174, in test_range result = _urlopen_with_retry(req) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_close (test.test_urllib2net.CloseSocketTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 90, in test_close response = _urlopen_with_retry("http://www.python.org/") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File 
"/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_ftp_NoneNodefault (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 320, in test_ftp_NoneNodefault u = _urlopen_with_retry(self.FTP_HOST, timeout=None) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_ftp_NoneWithdefault (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 314, in test_ftp_NoneWithdefault u = _urlopen_with_retry(self.FTP_HOST, timeout=None) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_ftp_Value (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 324, in test_ftp_Value u = _urlopen_with_retry(self.FTP_HOST, timeout=60) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in 
build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_ftp_basic (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 307, in test_ftp_basic u = _urlopen_with_retry(self.FTP_HOST) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_http_NoneNodefault (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 301, in test_http_NoneNodefault u = _urlopen_with_retry("http://www.python.org", timeout=None) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_http_NoneWithdefault (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 291, in test_http_NoneWithdefault u = _urlopen_with_retry("http://www.python.org", timeout=None) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File 
"/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_http_Value (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 297, in test_http_Value u = _urlopen_with_retry("http://www.python.org", timeout=120) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' ====================================================================== ERROR: test_http_basic (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 284, in test_http_basic u = _urlopen_with_retry("http://www.python.org") File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_urllib2net.py", line 19, in _urlopen_with_retry return urllib2.urlopen(host, *args, **kwargs) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 123, in urlopen _opener = build_opener() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 461, in build_opener opener.add_handler(klass()) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib2.py", line 673, in __init__ proxies = getproxies() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/urllib.py", line 1294, in getproxies_environment for name, value in os.environ.items(): AttributeError: 'NoneType' object has no attribute 'environ' make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 00:58:18 2008 From: buildbot at python.org (buildbot at python.org) Date: Wed, 26 Mar 2008 23:58:18 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080326235819.176661E4029@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/155 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 01:25:33 2008 From: python-checkins at python.org (benjamin.peterson) Date: Thu, 27 Mar 2008 01:25:33 +0100 (CET) Subject: [Python-checkins] r61964 - python/trunk/Misc/ACKS Message-ID: <20080327002533.6D30A1E4008@bag.python.org> Author: benjamin.peterson Date: Thu Mar 27 01:25:33 2008 New Revision: 61964 Modified: python/trunk/Misc/ACKS Log: add commas for introductory clauses Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Thu Mar 27 01:25:33 2008 @@ -4,11 +4,11 @@ This list is not complete and not in any useful order, but I would like to thank everybody who contributed in any way, with code, hints, bug reports, ideas, moral support, endorsement, or even complaints.... -Without you I would've stopped working on Python long ago! +Without you, I would've stopped working on Python long ago! --Guido -PS: In the standard Python distribution this file is encoded in Latin-1. +PS: In the standard Python distribution, this file is encoded in Latin-1. David Abrahams Jim Ahlstrom From buildbot at python.org Thu Mar 27 01:47:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 00:47:59 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080327004800.1D61F1E4008@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2762 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 7 tests failed: test_asynchat test_compiler test_io test_shelve test_smtplib test_socket test_unicodedata ====================================================================== ERROR: testCompileLibrary (test.test_compiler.CompilerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_compiler.py", line 53, in testCompileLibrary compiler.compile(buf, basename, "exec") File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/pycodegen.py", line 64, in compile gen.compile() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/pycodegen.py", line 112, in compile gen = ModuleCodeGenerator(tree) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/pycodegen.py", line 1277, in __init__ self.futures = future.find_futures(tree) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/future.py", line 59, in find_futures walk(node, p1) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/visitor.py", line 106, in walk walker.preorder(tree, visitor) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/visitor.py", line 63, in preorder self.dispatch(tree, *args) # XXX *args make sense? File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/visitor.py", line 57, in dispatch return meth(node, *args) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/future.py", line 27, in visitModule if not self.check_stmt(s): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/compiler/future.py", line 37, in check_stmt "future feature %s is not defined" % name SyntaxError: future feature unicode_literals is not defined ====================================================================== ERROR: testBasicIO (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_io.py", line 823, in testBasicIO self.multi_line_test(f, enc) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_io.py", line 841, in multi_line_test pos = f.tell() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/io.py", line 1388, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: testSeeking (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_io.py", line 883, in testSeeking self.assertEquals(f.tell(), prefix_size) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/io.py", line 1388, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) 
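The tell() failure above, and the further TextIOWrapperTest failures that follow, all end in the same ValueError raised from io.py's unusual "for next_byte[0] in next_input:" loop, whose loop target assigns each iterated value into a one-byte bytearray. The snippet below is only a standalone illustration of that error class, with made-up values; it is not the io.py code and does not reproduce the Tru64 failure itself:

    buf = bytearray(1)
    buf[0] = 65                       # accepted: value lies in range(0, 256)
    try:
        for buf[0] in [65, 66, 300]:  # hypothetical data; 300 is out of range
            pass                      # each iteration assigns into buf[0]
    except ValueError as exc:
        print(exc)                    # prints: byte must be in range(0, 256)
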
====================================================================== ERROR: testSeekingToo (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_io.py", line 896, in testSeekingToo f.tell() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/io.py", line 1388, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: testTelling (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_io.py", line 858, in testTelling self.assertEquals(f.tell(), p1) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/io.py", line 1388, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: test_issue1395_5 (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_io.py", line 1078, in test_issue1395_5 pos = txt.tell() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/io.py", line 1388, in tell for next_byte[0] in next_input: ValueError: byte must be in range(0, 256) ====================================================================== ERROR: test_bool (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 135, in test_bool self.assert_(not self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_constructor (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 132, in test_constructor self.assertEqual(self._empty_mapping(), self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_get (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 270, in test_get d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings 
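Every test_shelve failure above, and the long run of near-identical ones that follows, fails the same way: shelve.open(..., **self._args) or shelve.Shelf({}, **self._args) raises "TypeError: ... keywords must be strings". One plausible reading, not confirmed here, is that the keys of self._args had become unicode strings (for example after "from __future__ import unicode_literals" was added to the test module), which the Python 2 interpreters of this era rejected as keyword argument names. A minimal Python 2 sketch of that behaviour, using a hypothetical stand-in function rather than the real shelve API:

    def open_shelf(filename, flag='c', protocol=None, writeback=False):
        # stand-in for shelve.open(); only the keyword handling matters here
        return filename, protocol

    open_shelf('spam', **{'protocol': 2})   # plain str key: accepted
    open_shelf('spam', **{u'protocol': 2})  # unicode key: TypeError: open_shelf()
                                            # keywords must be strings
                                            # (on the Python 2 of this era)
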
====================================================================== ERROR: test_items (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 155, in test_items d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_keys (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 141, in test_keys d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_len (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 161, in test_len d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_pop (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 291, in test_pop d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_popitem (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 286, in test_popitem d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_read (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 44, in test_read p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), 
**self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_setdefault (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 282, in test_setdefault d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_update (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 172, in test_update d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_values (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 149, in test_values d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_write (test.test_shelve.TestAsciiFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 91, in test_write p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_bool (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 135, in test_bool self.assert_(not self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_constructor (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 132, in test_constructor self.assertEqual(self._empty_mapping(), self._empty_mapping()) File 
"/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_get (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 270, in test_get d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_items (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 155, in test_items d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_keys (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 141, in test_keys d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_len (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 161, in test_len d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_pop (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 291, in test_pop d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_popitem (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", 
line 286, in test_popitem d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_read (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 44, in test_read p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_setdefault (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 282, in test_setdefault d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_update (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 172, in test_update d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_values (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 149, in test_values d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_write (test.test_shelve.TestBinaryFileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 91, in test_write p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_bool (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 135, in test_bool self.assert_(not self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_constructor (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 132, in test_constructor self.assertEqual(self._empty_mapping(), self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_get (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 270, in test_get d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_items (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 155, in test_items d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_keys (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 141, in test_keys d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_len (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 161, in test_len d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_pop 
(test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 291, in test_pop d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_popitem (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 286, in test_popitem d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_read (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 44, in test_read p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_setdefault (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 282, in test_setdefault d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_update (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 172, in test_update d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_values (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 149, in test_values d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings 
====================================================================== ERROR: test_write (test.test_shelve.TestProto2FileShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 91, in test_write p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 93, in _empty_mapping x= shelve.open(self.fn+str(self.counter), **self._args) TypeError: open() keywords must be strings ====================================================================== ERROR: test_bool (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 135, in test_bool self.assert_(not self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_constructor (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 132, in test_constructor self.assertEqual(self._empty_mapping(), self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_get (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 270, in test_get d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_items (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 155, in test_items d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_keys (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 141, in test_keys d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() 
keywords must be strings ====================================================================== ERROR: test_len (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 161, in test_len d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_pop (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 291, in test_pop d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_popitem (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 286, in test_popitem d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_read (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 44, in test_read p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_setdefault (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 282, in test_setdefault d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_update (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 172, in test_update d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings 
====================================================================== ERROR: test_values (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 149, in test_values d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_write (test.test_shelve.TestAsciiMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 91, in test_write p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_bool (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 135, in test_bool self.assert_(not self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_constructor (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 132, in test_constructor self.assertEqual(self._empty_mapping(), self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_get (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 270, in test_get d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_items (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 155, in test_items d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be 
strings ====================================================================== ERROR: test_keys (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 141, in test_keys d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_len (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 161, in test_len d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_pop (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 291, in test_pop d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_popitem (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 286, in test_popitem d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_read (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 44, in test_read p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_setdefault (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 282, in test_setdefault d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings 
====================================================================== ERROR: test_update (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 172, in test_update d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_values (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 149, in test_values d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_write (test.test_shelve.TestBinaryMemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 91, in test_write p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_bool (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 135, in test_bool self.assert_(not self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_constructor (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 132, in test_constructor self.assertEqual(self._empty_mapping(), self._empty_mapping()) File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_get (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 270, in test_get d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must 
be strings ====================================================================== ERROR: test_items (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 155, in test_items d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_keys (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 141, in test_keys d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_len (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 161, in test_len d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_pop (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 291, in test_pop d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_popitem (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 286, in test_popitem d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_read (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 44, in test_read p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings 
====================================================================== ERROR: test_setdefault (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 282, in test_setdefault d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_update (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 172, in test_update d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_values (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 149, in test_values d = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== ERROR: test_write (test.test_shelve.TestProto2MemShelve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/mapping_tests.py", line 91, in test_write p = self._empty_mapping() File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_shelve.py", line 90, in _empty_mapping x= shelve.Shelf({}, **self._args) TypeError: __init__() keywords must be strings ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place ====================================================================== FAIL: test_east_asian_width (test.test_unicodedata.UnicodeFunctionsTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_unicodedata.py", line 179, in test_east_asian_width self.assertRaises(TypeError, eaw, 'a') AssertionError: TypeError not raised sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 02:36:38 2008 From: python-checkins at python.org (christian.heimes) Date: Thu, 27 Mar 2008 02:36:38 +0100 (CET) Subject: [Python-checkins] r61965 - in python/trunk: PC/config.c PC/pyconfig.h PCbuild/pythoncore.vcproj 
Message-ID: <20080327013638.D6EB81E4008@bag.python.org> Author: christian.heimes Date: Thu Mar 27 02:36:21 2008 New Revision: 61965 Modified: python/trunk/PC/config.c python/trunk/PC/pyconfig.h python/trunk/PCbuild/pythoncore.vcproj Log: Hopefully added _fileio module to the Windows build system Modified: python/trunk/PC/config.c ============================================================================== --- python/trunk/PC/config.c (original) +++ python/trunk/PC/config.c Thu Mar 27 02:36:21 2008 @@ -52,6 +52,7 @@ extern void init_winreg(void); extern void init_struct(void); extern void initdatetime(void); +extern void init_fileio(void); extern void init_functools(void); extern void initzlib(void); @@ -129,6 +130,7 @@ {"_winreg", init_winreg}, {"_struct", init_struct}, {"datetime", initdatetime}, + {"_fileio", init_fileio}, {"_functools", init_functools}, {"xxsubtype", initxxsubtype}, Modified: python/trunk/PC/pyconfig.h ============================================================================== --- python/trunk/PC/pyconfig.h (original) +++ python/trunk/PC/pyconfig.h Thu Mar 27 02:36:21 2008 @@ -207,12 +207,13 @@ #endif /* MS_WIN32 && !MS_WIN64 */ typedef int pid_t; -#define hypot _hypot #include #define Py_IS_NAN _isnan #define Py_IS_INFINITY(X) (!_finite(X) && !_isnan(X)) #define Py_IS_FINITE(X) _finite(X) +#define copysign _copysign +#define hypot _hypot #endif /* _MSC_VER */ @@ -392,7 +393,7 @@ /* Fairly standard from here! */ /* Define to 1 if you have the `copysign' function. */ -/* #define HAVE_COPYSIGN 1*/ +#define HAVE_COPYSIGN 1 /* Define to 1 if you have the `isinf' function. */ #define HAVE_ISINF 1 Modified: python/trunk/PCbuild/pythoncore.vcproj ============================================================================== --- python/trunk/PCbuild/pythoncore.vcproj (original) +++ python/trunk/PCbuild/pythoncore.vcproj Thu Mar 27 02:36:21 2008 @@ -987,6 +987,10 @@ > + + From python-checkins at python.org Thu Mar 27 02:38:47 2008 From: python-checkins at python.org (christian.heimes) Date: Thu, 27 Mar 2008 02:38:47 +0100 (CET) Subject: [Python-checkins] r61966 - python/trunk/PC/pyconfig.h Message-ID: <20080327013847.D053F1E4008@bag.python.org> Author: christian.heimes Date: Thu Mar 27 02:38:47 2008 New Revision: 61966 Modified: python/trunk/PC/pyconfig.h Log: Revert commit accident Modified: python/trunk/PC/pyconfig.h ============================================================================== --- python/trunk/PC/pyconfig.h (original) +++ python/trunk/PC/pyconfig.h Thu Mar 27 02:38:47 2008 @@ -207,13 +207,12 @@ #endif /* MS_WIN32 && !MS_WIN64 */ typedef int pid_t; +#define hypot _hypot #include #define Py_IS_NAN _isnan #define Py_IS_INFINITY(X) (!_finite(X) && !_isnan(X)) #define Py_IS_FINITE(X) _finite(X) -#define copysign _copysign -#define hypot _hypot #endif /* _MSC_VER */ @@ -393,7 +392,7 @@ /* Fairly standard from here! */ /* Define to 1 if you have the `copysign' function. */ -#define HAVE_COPYSIGN 1 +/* #define HAVE_COPYSIGN 1*/ /* Define to 1 if you have the `isinf' function. */ #define HAVE_ISINF 1 From python-checkins at python.org Thu Mar 27 04:49:54 2008 From: python-checkins at python.org (neal.norwitz) Date: Thu, 27 Mar 2008 04:49:54 +0100 (CET) Subject: [Python-checkins] r61967 - python/trunk/Objects/bytesobject.c Message-ID: <20080327034954.B2BFC1E4008@bag.python.org> Author: neal.norwitz Date: Thu Mar 27 04:49:54 2008 New Revision: 61967 Modified: python/trunk/Objects/bytesobject.c Log: Fix bytes so it works on 64-bit platforms. 
(Also remove some #if 0 code that is already handled in _getbytevalue.) Modified: python/trunk/Objects/bytesobject.c ============================================================================== --- python/trunk/Objects/bytesobject.c (original) +++ python/trunk/Objects/bytesobject.c Thu Mar 27 04:49:54 2008 @@ -549,7 +549,7 @@ static int bytes_setitem(PyBytesObject *self, Py_ssize_t i, PyObject *value) { - Py_ssize_t ival; + int ival; if (i < 0) i += Py_SIZE(self); @@ -564,16 +564,6 @@ if (!_getbytevalue(value, &ival)) return -1; -#if 0 - ival = PyNumber_AsSsize_t(value, PyExc_ValueError); - if (ival == -1 && PyErr_Occurred()) - return -1; - - if (ival < 0 || ival >= 256) { - PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)"); - return -1; - } -#endif self->ob_bytes[i] = ival; return 0; @@ -609,12 +599,13 @@ else { Py_ssize_t ival = PyNumber_AsSsize_t(values, PyExc_ValueError); if (ival == -1 && PyErr_Occurred()) { + int int_value; /* Also accept str of size 1 in 2.x */ PyErr_Clear(); - if (!_getbytevalue(values, &ival)) + if (!_getbytevalue(values, &int_value)) return -1; - } - if (ival < 0 || ival >= 256) { + ival = (int) int_value; + } else if (ival < 0 || ival >= 256) { PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)"); return -1; From python-checkins at python.org Thu Mar 27 05:40:08 2008 From: python-checkins at python.org (neal.norwitz) Date: Thu, 27 Mar 2008 05:40:08 +0100 (CET) Subject: [Python-checkins] r61968 - python/trunk/Objects/bytesobject.c Message-ID: <20080327044008.4D1311E400D@bag.python.org> Author: neal.norwitz Date: Thu Mar 27 05:40:07 2008 New Revision: 61968 Modified: python/trunk/Objects/bytesobject.c Log: Fix memory leaks Modified: python/trunk/Objects/bytesobject.c ============================================================================== --- python/trunk/Objects/bytesobject.c (original) +++ python/trunk/Objects/bytesobject.c Thu Mar 27 05:40:07 2008 @@ -2683,17 +2683,21 @@ if (! _getbytevalue(item, &value)) { Py_DECREF(item); Py_DECREF(it); + PyMem_Free(buf); return NULL; } buf[len++] = value; Py_DECREF(item); if (len >= buf_size) { + char *new_buf; buf_size = len + (len >> 1) + 1; - buf = (char *)PyMem_Realloc(buf, buf_size * sizeof(char)); - if (buf == NULL) { + new_buf = (char *)PyMem_Realloc(buf, buf_size * sizeof(char)); + if (new_buf == NULL) { Py_DECREF(it); + PyMem_Free(buf); return PyErr_NoMemory(); } + buf = new_buf; } } Py_DECREF(it); From python-checkins at python.org Thu Mar 27 05:40:51 2008 From: python-checkins at python.org (neal.norwitz) Date: Thu, 27 Mar 2008 05:40:51 +0100 (CET) Subject: [Python-checkins] r61969 - in python/trunk: Include/bytes_methods.h Objects/longobject.c Objects/unicodeobject.c Python/mystrtoul.c Message-ID: <20080327044051.1F4D11E4008@bag.python.org> Author: neal.norwitz Date: Thu Mar 27 05:40:50 2008 New Revision: 61969 Modified: python/trunk/Include/bytes_methods.h python/trunk/Objects/longobject.c python/trunk/Objects/unicodeobject.c python/trunk/Python/mystrtoul.c Log: Fix warnings about using char as an array subscript. This is not portable since char is signed on some platforms and unsigned on others. 
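To spell out why r61969's extra casts help: on platforms where plain char is signed, byte values of 0x80 and above turn into negative numbers, and using such a value directly as an array index reads before the start of the lookup table. The following standalone C sketch shows the trap and the usual mask-and-cast fix; the table and function names are invented for illustration and are not CPython's own:

    /* Standalone illustration of the signed-char subscript hazard;
     * digit_value, lookup_bad and lookup_good are made-up names. */
    #include <stdio.h>

    static const int digit_value[256] = { 0 };   /* e.g. byte -> digit value */

    int lookup_bad(char c)
    {
        /* If char is signed and c holds a byte >= 0x80, c promotes to a
         * negative int and this indexes before the array: undefined
         * behaviour, and the kind of warning the commit silences. */
        return digit_value[c];
    }

    int lookup_good(char c)
    {
        /* Masking to a byte and casting keeps the index in 0..255 on
         * every platform, whether or not char is signed. */
        return digit_value[(unsigned char)(c & 0xff)];
    }

    int main(void)
    {
        printf("%d %d\n", lookup_good('7'), lookup_good('\xe9'));
        return 0;
    }
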
Modified: python/trunk/Include/bytes_methods.h ============================================================================== --- python/trunk/Include/bytes_methods.h (original) +++ python/trunk/Include/bytes_methods.h Thu Mar 27 05:40:50 2008 @@ -44,13 +44,13 @@ extern const unsigned int _Py_ctype_table[256]; -#define ISLOWER(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_LOWER) -#define ISUPPER(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_UPPER) -#define ISALPHA(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_ALPHA) -#define ISDIGIT(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_DIGIT) -#define ISXDIGIT(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_XDIGIT) -#define ISALNUM(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_ALNUM) -#define ISSPACE(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_SPACE) +#define ISLOWER(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_LOWER) +#define ISUPPER(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_UPPER) +#define ISALPHA(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_ALPHA) +#define ISDIGIT(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_DIGIT) +#define ISXDIGIT(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_XDIGIT) +#define ISALNUM(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_ALNUM) +#define ISSPACE(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_SPACE) #undef islower #define islower(c) undefined_islower(c) Modified: python/trunk/Objects/longobject.c ============================================================================== --- python/trunk/Objects/longobject.c (original) +++ python/trunk/Objects/longobject.c Thu Mar 27 05:40:50 2008 @@ -1397,7 +1397,7 @@ n >>= 1; /* n <- total # of bits needed, while setting p to end-of-string */ n = 0; - while (_PyLong_DigitValue[Py_CHARMASK(*p)] < base) + while (_PyLong_DigitValue[(unsigned)Py_CHARMASK(*p)] < base) ++p; *str = p; /* n <- # of Python digits needed, = ceiling(n/PyLong_SHIFT). */ @@ -1418,7 +1418,7 @@ bits_in_accum = 0; pdigit = z->ob_digit; while (--p >= start) { - int k = _PyLong_DigitValue[Py_CHARMASK(*p)]; + int k = _PyLong_DigitValue[(unsigned)Py_CHARMASK(*p)]; assert(k >= 0 && k < base); accum |= (twodigits)(k << bits_in_accum); bits_in_accum += bits_per_char; @@ -1609,7 +1609,7 @@ /* Find length of the string of numeric characters. */ scan = str; - while (_PyLong_DigitValue[Py_CHARMASK(*scan)] < base) + while (_PyLong_DigitValue[(unsigned)Py_CHARMASK(*scan)] < base) ++scan; /* Create a long object that can contain the largest possible @@ -1635,10 +1635,10 @@ /* Work ;-) */ while (str < scan) { /* grab up to convwidth digits from the input string */ - c = (digit)_PyLong_DigitValue[Py_CHARMASK(*str++)]; + c = (digit)_PyLong_DigitValue[(unsigned)Py_CHARMASK(*str++)]; for (i = 1; i < convwidth && str != scan; ++i, ++str) { c = (twodigits)(c * base + - _PyLong_DigitValue[Py_CHARMASK(*str)]); + _PyLong_DigitValue[(unsigned)Py_CHARMASK(*str)]); assert(c < PyLong_BASE); } Modified: python/trunk/Objects/unicodeobject.c ============================================================================== --- python/trunk/Objects/unicodeobject.c (original) +++ python/trunk/Objects/unicodeobject.c Thu Mar 27 05:40:50 2008 @@ -480,13 +480,13 @@ /* Single characters are shared when using this constructor. Restrict to ASCII, since the input must be UTF-8. 
*/ if (size == 1 && Py_CHARMASK(*u) < 128) { - unicode = unicode_latin1[Py_CHARMASK(*u)]; + unicode = unicode_latin1[(unsigned)Py_CHARMASK(*u)]; if (!unicode) { unicode = _PyUnicode_New(1); if (!unicode) return NULL; unicode->str[0] = Py_CHARMASK(*u); - unicode_latin1[Py_CHARMASK(*u)] = unicode; + unicode_latin1[(unsigned)Py_CHARMASK(*u)] = unicode; } Py_INCREF(unicode); return (PyObject *)unicode; Modified: python/trunk/Python/mystrtoul.c ============================================================================== --- python/trunk/Python/mystrtoul.c (original) +++ python/trunk/Python/mystrtoul.c Thu Mar 27 05:40:50 2008 @@ -109,7 +109,7 @@ ++str; if (*str == 'x' || *str == 'X') { /* there must be at least one digit after 0x */ - if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 16) { + if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 16) { if (ptr) *ptr = str; return 0; @@ -118,7 +118,7 @@ base = 16; } else if (*str == 'o' || *str == 'O') { /* there must be at least one digit after 0o */ - if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 8) { + if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 8) { if (ptr) *ptr = str; return 0; @@ -127,7 +127,7 @@ base = 8; } else if (*str == 'b' || *str == 'B') { /* there must be at least one digit after 0b */ - if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 2) { + if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 2) { if (ptr) *ptr = str; return 0; @@ -147,7 +147,7 @@ ++str; if (*str == 'b' || *str == 'B') { /* there must be at least one digit after 0b */ - if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 2) { + if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 2) { if (ptr) *ptr = str; return 0; @@ -162,7 +162,7 @@ ++str; if (*str == 'o' || *str == 'O') { /* there must be at least one digit after 0o */ - if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 8) { + if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 8) { if (ptr) *ptr = str; return 0; @@ -177,7 +177,7 @@ ++str; if (*str == 'x' || *str == 'X') { /* there must be at least one digit after 0x */ - if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 16) { + if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 16) { if (ptr) *ptr = str; return 0; @@ -203,7 +203,7 @@ ovlimit = digitlimit[base]; /* do the conversion until non-digit character encountered */ - while ((c = _PyLong_DigitValue[Py_CHARMASK(*str)]) < base) { + while ((c = _PyLong_DigitValue[(unsigned)Py_CHARMASK(*str)]) < base) { if (ovlimit > 0) /* no overflow check required */ result = result * base + c; else { /* requires overflow check */ @@ -240,7 +240,7 @@ overflowed: if (ptr) { /* spool through remaining digit characters */ - while (_PyLong_DigitValue[Py_CHARMASK(*str)] < base) + while (_PyLong_DigitValue[(unsigned)Py_CHARMASK(*str)] < base) ++str; *ptr = str; } From python-checkins at python.org Thu Mar 27 06:02:57 2008 From: python-checkins at python.org (neal.norwitz) Date: Thu, 27 Mar 2008 06:02:57 +0100 (CET) Subject: [Python-checkins] r61970 - python/trunk/Lib/compiler/future.py Message-ID: <20080327050257.E15151E400C@bag.python.org> Author: neal.norwitz Date: Thu Mar 27 06:02:57 2008 New Revision: 61970 Modified: python/trunk/Lib/compiler/future.py Log: Fix test_compiler after adding unicode_literals Modified: python/trunk/Lib/compiler/future.py ============================================================================== --- python/trunk/Lib/compiler/future.py (original) +++ python/trunk/Lib/compiler/future.py Thu Mar 27 06:02:57 2008 @@ -16,7 +16,8 @@ class FutureParser: features = 
("nested_scopes", "generators", "division", - "absolute_import", "with_statement", "print_function") + "absolute_import", "with_statement", "print_function", + "unicode_literals") def __init__(self): self.found = {} # set From python-checkins at python.org Thu Mar 27 06:03:12 2008 From: python-checkins at python.org (neal.norwitz) Date: Thu, 27 Mar 2008 06:03:12 +0100 (CET) Subject: [Python-checkins] r61971 - python/trunk/Modules/_ssl.c Message-ID: <20080327050312.03E681E401F@bag.python.org> Author: neal.norwitz Date: Thu Mar 27 06:03:11 2008 New Revision: 61971 Modified: python/trunk/Modules/_ssl.c Log: Fix compiler warnings Modified: python/trunk/Modules/_ssl.c ============================================================================== --- python/trunk/Modules/_ssl.c (original) +++ python/trunk/Modules/_ssl.c Thu Mar 27 06:03:11 2008 @@ -1431,7 +1431,7 @@ */ if ((_ssl_locks == NULL) || - (n < 0) || (n >= _ssl_locks_count)) + (n < 0) || ((unsigned)n >= _ssl_locks_count)) return; if (mode & CRYPTO_LOCK) { @@ -1443,7 +1443,7 @@ static int _setup_ssl_threads(void) { - int i; + unsigned int i; if (_ssl_locks == NULL) { _ssl_locks_count = CRYPTO_num_locks(); From buildbot at python.org Thu Mar 27 06:11:34 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 05:11:34 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080327051134.613691E400B@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1059 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 07:32:11 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 06:32:11 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080327063212.15BA61E400A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2765 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_smtplib test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 994, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 07:52:02 2008 From: python-checkins at python.org (neal.norwitz) Date: Thu, 27 Mar 2008 07:52:02 +0100 (CET) Subject: [Python-checkins] r61972 - python/trunk/Objects/floatobject.c python/trunk/Objects/intobject.c Message-ID: <20080327065202.30ABF1E4008@bag.python.org> Author: neal.norwitz Date: Thu Mar 27 07:52:01 2008 New Revision: 61972 Modified: python/trunk/Objects/floatobject.c python/trunk/Objects/intobject.c Log: Pluralss only need one s, not 2 (intss -> ints) Modified: python/trunk/Objects/floatobject.c ============================================================================== --- python/trunk/Objects/floatobject.c (original) +++ python/trunk/Objects/floatobject.c Thu Mar 27 07:52:01 2008 @@ -1719,7 +1719,7 @@ } else { fprintf(stderr, - ": %" PY_FORMAT_SIZE_T "d unfreed floats%s in %" + ": %" PY_FORMAT_SIZE_T "d unfreed float%s in %" PY_FORMAT_SIZE_T "d out of %" PY_FORMAT_SIZE_T "d block%s\n", fsum, fsum == 1 ? "" : "s", Modified: python/trunk/Objects/intobject.c ============================================================================== --- python/trunk/Objects/intobject.c (original) +++ python/trunk/Objects/intobject.c Thu Mar 27 07:52:01 2008 @@ -1378,7 +1378,7 @@ } else { fprintf(stderr, - ": %" PY_FORMAT_SIZE_T "d unfreed ints%s in %" + ": %" PY_FORMAT_SIZE_T "d unfreed int%s in %" PY_FORMAT_SIZE_T "d out of %" PY_FORMAT_SIZE_T "d block%s\n", isum, isum == 1 ? "" : "s", From python-checkins at python.org Thu Mar 27 10:02:33 2008 From: python-checkins at python.org (christian.heimes) Date: Thu, 27 Mar 2008 10:02:33 +0100 (CET) Subject: [Python-checkins] r61973 - python/trunk/Python/import.c Message-ID: <20080327090233.A8BB41E402B@bag.python.org> Author: christian.heimes Date: Thu Mar 27 10:02:33 2008 New Revision: 61973 Modified: python/trunk/Python/import.c Log: Quick 'n dirty hack: Increase the magic by 2 to force a rebuild of pyc/pyo files on the build bots Modified: python/trunk/Python/import.c ============================================================================== --- python/trunk/Python/import.c (original) +++ python/trunk/Python/import.c Thu Mar 27 10:02:33 2008 @@ -80,7 +80,7 @@ /* Magic word as global; note that _PyImport_Init() can change the value of this global to accommodate for alterations of how the compiler works which are enabled by command line switches. 
*/ -static long pyc_magic = MAGIC; +static long pyc_magic = MAGIC+2; /* See _PyImport_FixupExtension() below */ static PyObject *extensions = NULL; From buildbot at python.org Thu Mar 27 10:22:39 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 09:22:39 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080327092239.381591E4024@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/254 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,benjamin.peterson,christian.heimes,mark.dickinson,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_urllib2 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 10:42:36 2008 From: python-checkins at python.org (eric.smith) Date: Thu, 27 Mar 2008 10:42:36 +0100 (CET) Subject: [Python-checkins] r61974 - python/trunk/Lib/test/test_future4.py Message-ID: <20080327094236.211861E401F@bag.python.org> Author: eric.smith Date: Thu Mar 27 10:42:35 2008 New Revision: 61974 Modified: python/trunk/Lib/test/test_future4.py Log: Added test cases for single quoted strings, both forms of triple quotes, and some string concatenations. Removed unneeded __future__ print_function import. Modified: python/trunk/Lib/test/test_future4.py ============================================================================== --- python/trunk/Lib/test/test_future4.py (original) +++ python/trunk/Lib/test/test_future4.py Thu Mar 27 10:42:35 2008 @@ -1,4 +1,3 @@ -from __future__ import print_function from __future__ import unicode_literals import unittest @@ -11,11 +10,35 @@ def test_unicode_strings(self): self.assertType("", unicode) + self.assertType('', unicode) self.assertType(r"", unicode) + self.assertType(r'', unicode) + self.assertType(""" """, unicode) + self.assertType(''' ''', unicode) + self.assertType(r""" """, unicode) + self.assertType(r''' ''', unicode) self.assertType(u"", unicode) + self.assertType(u'', unicode) self.assertType(ur"", unicode) + self.assertType(ur'', unicode) + self.assertType(u""" """, unicode) + self.assertType(u''' ''', unicode) + self.assertType(ur""" """, unicode) + self.assertType(ur''' ''', unicode) + self.assertType(b"", str) + self.assertType(b'', str) self.assertType(br"", str) + self.assertType(br'', str) + self.assertType(b""" """, str) + self.assertType(b''' ''', str) + self.assertType(br""" """, str) + self.assertType(br''' ''', str) + + self.assertType('' '', unicode) + self.assertType('' u'', unicode) + self.assertType(u'' '', unicode) + self.assertType(u'' u'', unicode) def test_main(): test_support.run_unittest(TestFuture) From buildbot at python.org Thu Mar 27 11:10:35 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:10:35 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080327101035.9FE551E400A@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/504 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 11:11:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:11:14 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080327101114.4E3231E400A@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1063 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 11:13:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:13:14 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080327101314.E7C5E1E400A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2767 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From eric+python-dev at trueblade.com Thu Mar 27 10:59:02 2008 From: eric+python-dev at trueblade.com (Eric Smith) Date: Thu, 27 Mar 2008 05:59:02 -0400 Subject: [Python-checkins] r61969 - in python/trunk: Include/bytes_methods.h Objects/longobject.c Objects/unicodeobject.c Python/mystrtoul.c In-Reply-To: <20080327044051.1F4D11E4008@bag.python.org> References: <20080327044051.1F4D11E4008@bag.python.org> Message-ID: <47EB6FE6.3090606@trueblade.com> neal.norwitz wrote: > Author: neal.norwitz > Date: Thu Mar 27 05:40:50 2008 > New Revision: 61969 > > Modified: > python/trunk/Include/bytes_methods.h > python/trunk/Objects/longobject.c > python/trunk/Objects/unicodeobject.c > python/trunk/Python/mystrtoul.c > Log: > Fix warnings about using char as an array subscript. This is not portable > since char is signed on some platforms and unsigned on others. Is there any reason not to make Py_CHARMASK just do the cast to unsigned? I see lots of other uses of Py_CHARMASK that potentially have this same problem. From buildbot at python.org Thu Mar 27 11:21:55 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:21:55 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080327102155.2E5EC1E400A@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1096 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 11:27:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:27:20 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080327102720.C270B1E400A@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/613 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 11:35:30 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:35:30 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080327103530.5CA411E400A@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1187 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 490, in __bootstrap_inner self.run() File "C:\buildbot\work\trunk.heller-windows\build\lib\threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\test\test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "C:\buildbot\work\trunk.heller-windows\build\lib\bsddb\dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 11:35:53 2008 From: python-checkins at python.org (christian.heimes) Date: Thu, 27 Mar 2008 11:35:53 +0100 (CET) Subject: [Python-checkins] r61975 - python/trunk/Python/import.c Message-ID: <20080327103553.0D4F61E400A@bag.python.org> Author: christian.heimes Date: Thu Mar 27 11:35:52 2008 New Revision: 61975 Modified: python/trunk/Python/import.c Log: Build bots are working again - removing the hack Modified: python/trunk/Python/import.c ============================================================================== --- python/trunk/Python/import.c (original) +++ python/trunk/Python/import.c Thu Mar 27 11:35:52 2008 @@ -80,7 +80,7 @@ /* Magic word as global; note that _PyImport_Init() can change the value of this global to accommodate for alterations of how the compiler works which are enabled by command line switches. 
*/ -static long pyc_magic = MAGIC+2; +static long pyc_magic = MAGIC; /* See _PyImport_FixupExtension() below */ static PyObject *extensions = NULL; From buildbot at python.org Thu Mar 27 11:36:57 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 10:36:57 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080327103657.630FD1E400A@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/3064 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 12:20:39 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 11:20:39 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080327112039.650581E4014@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/270 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: eric.smith BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '1000-1000-1000-1000-1000' Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '2000-2000-2000-2000-2000' Traceback (most recent 
call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '0002-0002-0002-0002-0002' 2 tests failed: test_threading test_tokenize make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 12:46:54 2008 From: python-checkins at python.org (christian.heimes) Date: Thu, 27 Mar 2008 12:46:54 +0100 (CET) Subject: [Python-checkins] r61976 - python/trunk/Lib/test/test_tokenize.py Message-ID: <20080327114654.6BDC71E400A@bag.python.org> Author: christian.heimes Date: Thu Mar 27 12:46:37 2008 New Revision: 61976 Modified: python/trunk/Lib/test/test_tokenize.py Log: Fixed tokenize tests The tokenize module doesn't understand __future__.unicode_literals yet Modified: python/trunk/Lib/test/test_tokenize.py ============================================================================== --- python/trunk/Lib/test/test_tokenize.py (original) +++ python/trunk/Lib/test/test_tokenize.py Thu Mar 27 12:46:37 2008 @@ -490,11 +490,17 @@ >>> >>> tempdir = os.path.dirname(f) or os.curdir >>> testfiles = glob.glob(os.path.join(tempdir, "test*.py")) + + XXX: tokenize doesn not support __future__.unicode_literals yet + >>> blacklist = ("test_future4.py",) + >>> testfiles = [f for f in testfiles if not f.endswith(blacklist)] >>> if not test_support.is_resource_enabled("compiler"): ... testfiles = random.sample(testfiles, 10) ... >>> for testfile in testfiles: - ... if not roundtrip(open(testfile)): break + ... if not roundtrip(open(testfile)): + ... print "Roundtrip failed for file %s" % testfile + ... break ... else: True True """ From buildbot at python.org Thu Mar 27 13:40:35 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 12:40:35 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI trunk Message-ID: <20080327124035.3F6881E4011@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%20trunk/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,andrew.kuchling,benjamin.peterson,christian.heimes,georg.brandl,gregory.p.smith,jerry.seutter,mark.dickinson,neal.norwitz,thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 13:54:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 12:54:06 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080327125406.ADB251E400A@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/409 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 14:18:40 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 13:18:40 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080327131840.8C44B1E4023@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/256 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: christian.heimes,eric.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_tokenize make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Thu Mar 27 14:27:31 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 27 Mar 2008 14:27:31 +0100 (CET) Subject: [Python-checkins] r61977 - in python/trunk: Doc/library/smtplib.rst Lib/smtplib.py Misc/NEWS Message-ID: <20080327132731.799001E400B@bag.python.org> Author: georg.brandl Date: Thu Mar 27 14:27:31 2008 New Revision: 61977 Modified: python/trunk/Doc/library/smtplib.rst python/trunk/Lib/smtplib.py python/trunk/Misc/NEWS Log: #2248: return result of QUIT from quit(). Modified: python/trunk/Doc/library/smtplib.rst ============================================================================== --- python/trunk/Doc/library/smtplib.rst (original) +++ python/trunk/Doc/library/smtplib.rst Thu Mar 27 14:27:31 2008 @@ -318,7 +318,12 @@ .. method:: SMTP.quit() - Terminate the SMTP session and close the connection. + Terminate the SMTP session and close the connection. Return the result of + the SMTP ``QUIT`` command. + + .. versionchanged:: 2.6 + Return a value. + Low-level methods corresponding to the standard SMTP/ESMTP commands ``HELP``, ``RSET``, ``NOOP``, ``MAIL``, ``RCPT``, and ``DATA`` are also supported. Modified: python/trunk/Lib/smtplib.py ============================================================================== --- python/trunk/Lib/smtplib.py (original) +++ python/trunk/Lib/smtplib.py Thu Mar 27 14:27:31 2008 @@ -726,8 +726,9 @@ def quit(self): """Terminate the SMTP session.""" - self.docmd("quit") + res = self.docmd("quit") self.close() + return res if _have_ssl: Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Thu Mar 27 14:27:31 2008 @@ -12,7 +12,7 @@ Core and builtins ----------------- -- Patch #2477: Added from __future__ import unicode_literals +- Patch #2477: Added from __future__ import unicode_literals. - Added backport of bytearray type. @@ -76,6 +76,8 @@ Library ------- +- Issue #2248: return the result of the QUIT command. from SMTP.quit(). + - Backport of Python 3.0's io module. 
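As a usage note on the r61977 change above: from 2.6 onward, SMTP.quit() hands the server's reply to the QUIT command back to the caller instead of throwing it away. A minimal sketch of the new behaviour; the host, port and session activity are placeholders and assume a reachable SMTP server:

    import smtplib

    # Placeholder server; any reachable SMTP host works the same way.
    server = smtplib.SMTP("localhost", 25)
    server.noop()                    # some ordinary session activity
    code, message = server.quit()    # as of r61977, quit() returns the QUIT reply
    print code, message              # typically 221 plus the server's closing text
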
- Issue #2482: Make sure that the coefficient of a Decimal is always From python-checkins at python.org Thu Mar 27 14:34:59 2008 From: python-checkins at python.org (georg.brandl) Date: Thu, 27 Mar 2008 14:34:59 +0100 (CET) Subject: [Python-checkins] r61978 - python/trunk/Lib/test/outstanding_bugs.py Message-ID: <20080327133459.4E9F51E400A@bag.python.org> Author: georg.brandl Date: Thu Mar 27 14:34:59 2008 New Revision: 61978 Modified: python/trunk/Lib/test/outstanding_bugs.py Log: The bug for which there was a test in outstanding_bugs.py was agreed not to be a bug. Modified: python/trunk/Lib/test/outstanding_bugs.py ============================================================================== --- python/trunk/Lib/test/outstanding_bugs.py (original) +++ python/trunk/Lib/test/outstanding_bugs.py Thu Mar 27 14:34:59 2008 @@ -10,44 +10,13 @@ from test import test_support # -# One test case for outstanding bugs at the moment: +# No test cases for outstanding bugs at the moment. # -class TestDifflibLongestMatch(unittest.TestCase): - # From Patch #1678339: - # The find_longest_match method in the difflib's SequenceMatcher has a bug. - - # The bug is in turn caused by a problem with creating a b2j mapping which - # should contain a list of indices for each of the list elements in b. - # However, when the b2j mapping is being created (this is being done in - # __chain_b method in the SequenceMatcher) the mapping becomes broken. The - # cause of this is that for the frequently used elements the list of indices - # is removed and the element is being enlisted in the populardict mapping. - - # The test case tries to match two strings like: - # abbbbbb.... and ...bbbbbbc - - # The number of b is equal and the find_longest_match should have returned - # the proper amount. However, in case the number of "b"s is large enough, the - # method reports that the length of the longest common substring is 0. It - # simply can't find it. - - # A bug was raised some time ago on this matter. It's ID is 1528074. - - def test_find_longest_match(self): - import difflib - for i in (190, 200, 210): - text1 = "a" + "b"*i - text2 = "b"*i + "c" - m = difflib.SequenceMatcher(None, text1, text2) - (aptr, bptr, l) = m.find_longest_match(0, len(text1), 0, len(text2)) - self.assertEquals(i, l) - self.assertEquals(aptr, 1) - self.assertEquals(bptr, 0) - def test_main(): - test_support.run_unittest(TestDifflibLongestMatch) + #test_support.run_unittest() + pass if __name__ == "__main__": test_main() From buildbot at python.org Thu Mar 27 15:35:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 14:35:31 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080327143531.255891E4024@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/273 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '0002-0002-0002-0002-0002' Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '1000-1000-1000-1000-1000' Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '2000-2000-2000-2000-2000' 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 16:22:05 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 15:22:05 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080327152205.780841E400A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2771 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 16:24:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 15:24:59 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI 3.0 Message-ID: <20080327152459.569C31E4025@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%203.0/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc,benjamin.peterson,christian.heimes,georg.brandl,gregory.p.smith,mark.dickinson,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Thu Mar 27 21:46:16 2008 From: buildbot at python.org (buildbot at python.org) Date: Thu, 27 Mar 2008 20:46:16 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI 2.5 Message-ID: <20080327204616.D7ABA1E4018@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_bsddb3 ====================================================================== ERROR: test02_WithSource (bsddb.test.test_recno.SimpleRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-armeabi/2.5.klose-linux-armeabi/build/Lib/bsddb/test/test_recno.py", line 210, in test02_WithSource f = open(source, 'w') # create the file IOError: [Errno 13] Permission denied: '/tmp/db_home/test_recno.txt' ====================================================================== ERROR: test02_WithSource (bsddb.test.test_recno.SimpleRecnoTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-armeabi/2.5.klose-linux-armeabi/build/Lib/bsddb/test/test_recno.py", line 31, in tearDown os.remove(self.filename) OSError: [Errno 2] No such file or directory: '/tmp/tmp2GUmJ8' make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 00:23:54 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 00:23:54 +0100 (CET) Subject: [Python-checkins] r61979 - in python/trunk: Lib/test/test_tokenize.py Lib/tokenize.py Misc/NEWS Message-ID: <20080327232354.9FA4D1E4012@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 00:23:54 2008 New Revision: 61979 Modified: python/trunk/Lib/test/test_tokenize.py python/trunk/Lib/tokenize.py python/trunk/Misc/NEWS Log: Issue2495: tokenize.untokenize did not insert space between two consecutive string literals: "" "" => """", which is invalid code. Will backport Modified: python/trunk/Lib/test/test_tokenize.py ============================================================================== --- python/trunk/Lib/test/test_tokenize.py (original) +++ python/trunk/Lib/test/test_tokenize.py Fri Mar 28 00:23:54 2008 @@ -487,13 +487,18 @@ >>> roundtrip("# Comment \\\\nx = 0") True +Two string literals on the same line + + >>> roundtrip("'' ''") + True + +Test roundtrip on random python modules. +pass the '-ucompiler' option to process the full directory. 
+ >>> >>> tempdir = os.path.dirname(f) or os.curdir >>> testfiles = glob.glob(os.path.join(tempdir, "test*.py")) - XXX: tokenize doesn not support __future__.unicode_literals yet - >>> blacklist = ("test_future4.py",) - >>> testfiles = [f for f in testfiles if not f.endswith(blacklist)] >>> if not test_support.is_resource_enabled("compiler"): ... testfiles = random.sample(testfiles, 10) ... Modified: python/trunk/Lib/tokenize.py ============================================================================== --- python/trunk/Lib/tokenize.py (original) +++ python/trunk/Lib/tokenize.py Fri Mar 28 00:23:54 2008 @@ -210,12 +210,21 @@ tokval += ' ' if toknum in (NEWLINE, NL): startline = True + prevstring = False for tok in iterable: toknum, tokval = tok[:2] if toknum in (NAME, NUMBER): tokval += ' ' + # Insert a space between two consecutive strings + if toknum == STRING: + if prevstring: + tokval = ' ' + tokval + prevstring = True + else: + prevstring = False + if toknum == INDENT: indents.append(tokval) continue @@ -244,7 +253,7 @@ t1 = [tok[:2] for tok in generate_tokens(f.readline)] newcode = untokenize(t1) readline = iter(newcode.splitlines(1)).next - t2 = [tok[:2] for tokin generate_tokens(readline)] + t2 = [tok[:2] for tok in generate_tokens(readline)] assert t1 == t2 """ ut = Untokenizer() Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 28 00:23:54 2008 @@ -76,6 +76,10 @@ Library ------- +- Issue #2495: tokenize.untokenize now inserts a space between two consecutive + string literals; previously, ["" ""] was rendered as [""""], which is + incorrect python code. + - Issue #2248: return the result of the QUIT command. from SMTP.quit(). - Backport of Python 3.0's io module. From python-checkins at python.org Fri Mar 28 00:41:59 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 00:41:59 +0100 (CET) Subject: [Python-checkins] r61980 - in python/branches/release25-maint: Lib/test/output/test_tokenize Lib/test/tokenize_tests.txt Lib/tokenize.py Misc/NEWS Message-ID: <20080327234159.6CCE51E4012@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 00:41:59 2008 New Revision: 61980 Modified: python/branches/release25-maint/Lib/test/output/test_tokenize python/branches/release25-maint/Lib/test/tokenize_tests.txt python/branches/release25-maint/Lib/tokenize.py python/branches/release25-maint/Misc/NEWS Log: Issue2495: tokenize.untokenize did not insert space between two consecutive string literals: "" "" becomes """", which is invalid code. Backport of r61979. 
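A minimal round-trip sketch of the behaviour this fix targets (illustrative only; it uses the modern tokenize API rather than the 2.5 module patched here, but the compatibility-mode path it exercises is the one that gained the prevstring check):

    import io
    import tokenize

    source = "x = '' ''\n"   # two adjacent string literals, as in the new test case

    # 2-tuple tokens send untokenize() through its compatibility mode,
    # the code path changed by this fix.
    tokens = [tok[:2] for tok in
              tokenize.generate_tokens(io.StringIO(source).readline)]
    rebuilt = tokenize.untokenize(tokens)

    # With the fix the rebuilt text keeps a space between the literals and
    # still compiles; without it the two literals fuse into '''', which is
    # a syntax error.
    compile(rebuilt, "<roundtrip>", "exec")
    print(repr(rebuilt))
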
Modified: python/branches/release25-maint/Lib/test/output/test_tokenize ============================================================================== --- python/branches/release25-maint/Lib/test/output/test_tokenize (original) +++ python/branches/release25-maint/Lib/test/output/test_tokenize Fri Mar 28 00:41:59 2008 @@ -656,4 +656,10 @@ 177,11-177,15: NAME 'pass' 177,15-177,16: NEWLINE '\n' 178,0-178,1: NL '\n' -179,0-179,0: ENDMARKER '' +179,0-179,13: COMMENT '# Issue 2495\n' +180,0-180,1: NAME 'x' +180,2-180,3: OP '=' +180,4-180,6: STRING "''" +180,7-180,9: STRING "''" +180,9-180,10: NEWLINE '\n' +181,0-181,0: ENDMARKER '' Modified: python/branches/release25-maint/Lib/test/tokenize_tests.txt ============================================================================== --- python/branches/release25-maint/Lib/test/tokenize_tests.txt (original) +++ python/branches/release25-maint/Lib/test/tokenize_tests.txt Fri Mar 28 00:41:59 2008 @@ -176,3 +176,5 @@ @staticmethod def foo(): pass +# Issue 2495 +x = '' '' Modified: python/branches/release25-maint/Lib/tokenize.py ============================================================================== --- python/branches/release25-maint/Lib/tokenize.py (original) +++ python/branches/release25-maint/Lib/tokenize.py Fri Mar 28 00:41:59 2008 @@ -171,11 +171,12 @@ t1 = [tok[:2] for tok in generate_tokens(f.readline)] newcode = untokenize(t1) readline = iter(newcode.splitlines(1)).next - t2 = [tok[:2] for tokin generate_tokens(readline)] + t2 = [tok[:2] for tok in generate_tokens(readline)] assert t1 == t2 """ startline = False + prevstring = False indents = [] toks = [] toks_append = toks.append @@ -185,6 +186,14 @@ if toknum in (NAME, NUMBER): tokval += ' ' + # Insert a space between two consecutive strings + if toknum == STRING: + if prevstring: + tokval = ' ' + tokval + prevstring = True + else: + prevstring = False + if toknum == INDENT: indents.append(tokval) continue Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Fri Mar 28 00:41:59 2008 @@ -27,6 +27,10 @@ Library ------- +- Issue #2495: tokenize.untokenize now inserts a space between two consecutive + string literals; previously, ["" ""] was rendered as [""""], which is + incorrect python code. + - Issue #2482: Make sure that the coefficient of a Decimal is always stored as a str instance, not as a unicode instance. This ensures that str(Decimal) is always an instance of str. This fixes a From buildbot at python.org Fri Mar 28 01:02:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 00:02:01 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 2.5 Message-ID: <20080328000201.AD3271E4012@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 2.5. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%202.5/builds/17 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 01:21:35 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 01:21:35 +0100 (CET) Subject: [Python-checkins] r61981 - python/trunk/Lib/test/regrtest.py Message-ID: <20080328002135.227D01E4013@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 01:21:34 2008 New Revision: 61981 Modified: python/trunk/Lib/test/regrtest.py Log: test_future3.py is a regular test file, and should be part of the test suite Modified: python/trunk/Lib/test/regrtest.py ============================================================================== --- python/trunk/Lib/test/regrtest.py (original) +++ python/trunk/Lib/test/regrtest.py Fri Mar 28 01:21:34 2008 @@ -487,7 +487,6 @@ 'test_support', 'test_future1', 'test_future2', - 'test_future3', ] def findtests(testdir=None, stdtests=STDTESTS, nottests=NOTTESTS): From buildbot at python.org Fri Mar 28 01:23:15 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 00:23:15 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 2.5 Message-ID: <20080328002315.7FCE21E4013@bag.python.org> The Buildbot has detected a new failure of amd64 XP 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%202.5/builds/191 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 01:31:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 00:31:46 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080328003155.679281E4013@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/275 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 01:53:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 00:53:36 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080328005336.A969C1E4013@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1193 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 02:02:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 01:02:37 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 3.0 Message-ID: <20080328010238.AD9C51E4013@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%203.0/builds/126 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 02:07:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 01:07:33 +0000 Subject: [Python-checkins] buildbot failure in hppa Ubuntu 2.5 Message-ID: <20080328010733.9B7B71E4013@bag.python.org> The Buildbot has detected a new failure of hppa Ubuntu 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/hppa%20Ubuntu%202.5/builds/180 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-hppa Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_commands ====================================================================== FAIL: test_getstatus (test.test_commands.CommandTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/2.5.klose-ubuntu-hppa/build/Lib/test/test_commands.py", line 56, in test_getstatus self.assert_(re.match(pat, getstatus("/."), re.VERBOSE)) AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 02:24:45 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 01:24:45 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 3.0 Message-ID: <20080328012445.59A841E4013@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%203.0/builds/731 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 02:26:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 01:26:02 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080328012602.35B7A1E4013@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/412 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/threading.py", line 490, in __bootstrap_inner self.run() File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/threading.py", line 446, in run self.__target(*self.__args, **self.__kwargs) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_telnetlib.py", line 14, in server serv.bind(("", 9091)) File "", line 1, in bind error: [Errno 48] Address already in use sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 03:19:46 2008 From: python-checkins at python.org (collin.winter) Date: Fri, 28 Mar 2008 03:19:46 +0100 (CET) Subject: [Python-checkins] r61983 - in sandbox/trunk/2to3/lib2to3: fixes/fix_except.py tests/test_fixers.py Message-ID: <20080328021946.76AA81E4014@bag.python.org> Author: collin.winter Date: Fri Mar 28 03:19:46 2008 New Revision: 61983 Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_except.py sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Log: Fix http://bugs.python.org/issue2453: support empty excepts in fix_except. Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_except.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/fixes/fix_except.py (original) +++ sandbox/trunk/2to3/lib2to3/fixes/fix_except.py Fri Mar 28 03:19:46 2008 @@ -37,15 +37,18 @@ PATTERN = """ try_stmt< 'try' ':' suite - cleanup=((except_clause ':' suite)+ ['else' ':' suite] - ['finally' ':' suite] - | 'finally' ':' suite) > + cleanup=(except_clause ':' suite)+ + tail=(['except' ':' suite] + ['else' ':' suite] + ['finally' ':' suite]) > """ def transform(self, node, results): syms = self.syms - try_cleanup = [ch.clone() for ch in results['cleanup']] + tail = [n.clone() for n in results["tail"]] + + try_cleanup = [ch.clone() for ch in results["cleanup"]] for except_clause, e_suite in find_excepts(try_cleanup): if len(except_clause.children) == 4: (E, comma, N) = except_clause.children[1:4] @@ -85,5 +88,5 @@ N.set_prefix(" ") #TODO(cwinter) fix this when children becomes a smart list - children = [c.clone() for c in node.children[:3]] + try_cleanup + children = [c.clone() for c in node.children[:3]] + try_cleanup + tail return pytree.Node(node.type, children) Modified: sandbox/trunk/2to3/lib2to3/tests/test_fixers.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/tests/test_fixers.py (original) +++ sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Fri Mar 28 03:19:46 2008 @@ -679,6 +679,72 @@ pass""" self.check(b, a) + def test_bare_except(self): + b = """ + try: + pass + except Exception, a: + pass + except: + pass""" + + a = """ + try: + pass + except Exception as a: + pass + except: + pass""" + self.check(b, a) + + def test_bare_except_and_else_finally(self): + b = """ + try: + pass + except Exception, a: + pass + except: + pass + else: + pass + finally: + pass""" + + a = """ + try: + pass + except Exception as a: + pass + except: + pass + else: + pass + finally: + pass""" + self.check(b, a) + + def test_multi_fixed_excepts_before_bare_except(self): + b = """ + try: + pass + except TypeError, b: + pass + 
except Exception, a: + pass + except: + pass""" + + a = """ + try: + pass + except TypeError as b: + pass + except Exception as a: + pass + except: + pass""" + self.check(b, a) + # These should not be touched: def test_unchanged_1(self): From guido at python.org Fri Mar 28 04:38:59 2008 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Mar 2008 20:38:59 -0700 Subject: [Python-checkins] r61969 - in python/trunk: Include/bytes_methods.h Objects/longobject.c Objects/unicodeobject.c Python/mystrtoul.c In-Reply-To: <47EB6FE6.3090606@trueblade.com> References: <20080327044051.1F4D11E4008@bag.python.org> <47EB6FE6.3090606@trueblade.com> Message-ID: I'm with Eric. Py_CHARMASK exists to make sure char values are converted to unsigned regardless of the signedness of char. If some compiler still thinks they are signed chars, something's wrong with Py_CHARMASK (or with that compiler). On Thu, Mar 27, 2008 at 2:59 AM, Eric Smith wrote: > neal.norwitz wrote: > > Author: neal.norwitz > > Date: Thu Mar 27 05:40:50 2008 > > New Revision: 61969 > > > > Modified: > > python/trunk/Include/bytes_methods.h > > python/trunk/Objects/longobject.c > > python/trunk/Objects/unicodeobject.c > > python/trunk/Python/mystrtoul.c > > Log: > > Fix warnings about using char as an array subscript. This is not portable > > since char is signed on some platforms and unsigned on others. > > Is there any reason not to make Py_CHARMASK just do the cast to > unsigned? I see lots of other uses of Py_CHARMASK that potentially have > this same problem. > > > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From buildbot at python.org Fri Mar 28 05:00:42 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 04:00:42 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080328040042.950091E4014@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/71 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: christian.heimes BUILD FAILED: failed test Excerpt from the test logfile: 4 tests failed: test_ssl test_urllib2net test_urllibnet test_xmlrpc_net ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 822, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 843, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] 
_ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 865, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 738, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_bad_address (test.test_urllib2net.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllib2net.py", line 160, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_bad_address (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllibnet.py", line 145, in test_bad_address urllib.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 38, in test_current_time self.assert_(delta.days <= 1) AssertionError: None sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 05:11:18 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Fri, 28 Mar 2008 05:11:18 +0100 (CET) Subject: [Python-checkins] r61984 - in python/trunk/Lib: test/test_threading.py threading.py Message-ID: <20080328041118.E61E11E4014@bag.python.org> Author: jeffrey.yasskin Date: Fri Mar 28 05:11:18 2008 New Revision: 61984 Modified: python/trunk/Lib/test/test_threading.py python/trunk/Lib/threading.py Log: Kill a race in test_threading in which the exception info in a thread finishing up after it was joined had a traceback pointing to that thread's (deleted) target attribute, while the test was trying to check that the target was destroyed. 
Big thanks to Antoine Pitrou for diagnosing the race and pointing out sys.exc_clear() to kill the exception early. This fixes issue 2496. Modified: python/trunk/Lib/test/test_threading.py ============================================================================== --- python/trunk/Lib/test/test_threading.py (original) +++ python/trunk/Lib/test/test_threading.py Fri Mar 28 05:11:18 2008 @@ -276,13 +276,17 @@ weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object - self.assertEquals(None, weak_cyclic_object()) + self.assertEquals(None, weak_cyclic_object(), + msg=('%d references still around' % + sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object - self.assertEquals(None, weak_raising_cyclic_object()) + self.assertEquals(None, weak_raising_cyclic_object(), + msg=('%d references still around' % + sys.getrefcount(weak_raising_cyclic_object()))) class ThreadingExceptionTests(unittest.TestCase): Modified: python/trunk/Lib/threading.py ============================================================================== --- python/trunk/Lib/threading.py (original) +++ python/trunk/Lib/threading.py Fri Mar 28 05:11:18 2008 @@ -392,6 +392,9 @@ # shutdown and thus raises an exception about trying to perform some # operation on/with a NoneType __exc_info = _sys.exc_info + # Keep sys.exc_clear too to clear the exception just before + # allowing .join() to return. + __exc_clear = _sys.exc_clear def __init__(self, group=None, target=None, name=None, args=(), kwargs=None, verbose=None): @@ -527,6 +530,12 @@ else: if __debug__: self._note("%s.__bootstrap(): normal return", self) + finally: + # Prevent a race in + # test_threading.test_no_refcycle_through_target when + # the exception keeps the target alive past when we + # assert that it's dead. + self.__exc_clear() finally: with _active_limbo_lock: self.__stop() From nnorwitz at gmail.com Fri Mar 28 05:24:40 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 27 Mar 2008 21:24:40 -0700 Subject: [Python-checkins] r61969 - in python/trunk: Include/bytes_methods.h Objects/longobject.c Objects/unicodeobject.c Python/mystrtoul.c In-Reply-To: References: <20080327044051.1F4D11E4008@bag.python.org> <47EB6FE6.3090606@trueblade.com> Message-ID: I think it's fine to apply the change to Py_CHARMASK. I'll change it. On Thu, Mar 27, 2008 at 8:38 PM, Guido van Rossum wrote: > I'm with Eric. Py_CHARMASK exists to make sure char values are > converted to unsigned regardless of the signedness of char. If some > compiler still thinks they are signed chars, something's wrong with > Py_CHARMASK (or with that compiler). > > > > On Thu, Mar 27, 2008 at 2:59 AM, Eric Smith > wrote: > > neal.norwitz wrote: > > > Author: neal.norwitz > > > Date: Thu Mar 27 05:40:50 2008 > > > New Revision: 61969 > > > > > > Modified: > > > python/trunk/Include/bytes_methods.h > > > python/trunk/Objects/longobject.c > > > python/trunk/Objects/unicodeobject.c > > > python/trunk/Python/mystrtoul.c > > > Log: > > > Fix warnings about using char as an array subscript. This is not portable > > > since char is signed on some platforms and unsigned on others. > > > > Is there any reason not to make Py_CHARMASK just do the cast to > > unsigned? I see lots of other uses of Py_CHARMASK that potentially have > > this same problem. 
> > > > > > > > _______________________________________________ > > Python-checkins mailing list > > Python-checkins at python.org > > http://mail.python.org/mailman/listinfo/python-checkins > > > > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From python-checkins at python.org Fri Mar 28 05:41:34 2008 From: python-checkins at python.org (neal.norwitz) Date: Fri, 28 Mar 2008 05:41:34 +0100 (CET) Subject: [Python-checkins] r61985 - python/trunk/Lib/test/test_telnetlib.py Message-ID: <20080328044134.A03521E4017@bag.python.org> Author: neal.norwitz Date: Fri Mar 28 05:41:34 2008 New Revision: 61985 Modified: python/trunk/Lib/test/test_telnetlib.py Log: Allow use of other ports so the test can pass if 9091 is in use Modified: python/trunk/Lib/test/test_telnetlib.py ============================================================================== --- python/trunk/Lib/test/test_telnetlib.py (original) +++ python/trunk/Lib/test/test_telnetlib.py Fri Mar 28 05:41:34 2008 @@ -6,12 +6,14 @@ from unittest import TestCase from test import test_support +PORT = 9091 def server(evt): serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) serv.settimeout(3) serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - serv.bind(("", 9091)) + global PORT + PORT = test_support.bind_port(serv, "", PORT) serv.listen(5) evt.set() try: @@ -36,24 +38,24 @@ def testBasic(self): # connects - telnet = telnetlib.Telnet("localhost", 9091) + telnet = telnetlib.Telnet("localhost", PORT) telnet.sock.close() def testTimeoutDefault(self): # default - telnet = telnetlib.Telnet("localhost", 9091) + telnet = telnetlib.Telnet("localhost", PORT) self.assertTrue(telnet.sock.gettimeout() is None) telnet.sock.close() def testTimeoutValue(self): # a value - telnet = telnetlib.Telnet("localhost", 9091, timeout=30) + telnet = telnetlib.Telnet("localhost", PORT, timeout=30) self.assertEqual(telnet.sock.gettimeout(), 30) telnet.sock.close() def testTimeoutDifferentOrder(self): telnet = telnetlib.Telnet(timeout=30) - telnet.open("localhost", 9091) + telnet.open("localhost", PORT) self.assertEqual(telnet.sock.gettimeout(), 30) telnet.sock.close() @@ -62,7 +64,7 @@ previous = socket.getdefaulttimeout() socket.setdefaulttimeout(30) try: - telnet = telnetlib.Telnet("localhost", 9091, timeout=None) + telnet = telnetlib.Telnet("localhost", PORT, timeout=None) finally: socket.setdefaulttimeout(previous) self.assertEqual(telnet.sock.gettimeout(), 30) From python-checkins at python.org Fri Mar 28 05:53:11 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Fri, 28 Mar 2008 05:53:11 +0100 (CET) Subject: [Python-checkins] r61986 - python/trunk/Lib/test/test_socket.py Message-ID: <20080328045311.0E3B11E4014@bag.python.org> Author: jeffrey.yasskin Date: Fri Mar 28 05:53:10 2008 New Revision: 61986 Modified: python/trunk/Lib/test/test_socket.py Log: Print more information the next time test_socket throws the wrong exception. 
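The expression added below can be tried on its own; the failure message now carries the exception type, value and formatted traceback instead of a bare assertion (illustrative snippet, standard library only):

    import sys
    import traceback

    try:
        raise ValueError("boom")      # stand-in for the unexpected exception
    except ValueError:
        # Same formatting the patch adds to self.fail(): exception type,
        # value, and the full traceback text.
        print("caught other exception instead of Alarm:"
              " %s(%s):\n%s" % (sys.exc_info()[:2] + (traceback.format_exc(),)))
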
Modified: python/trunk/Lib/test/test_socket.py ============================================================================== --- python/trunk/Lib/test/test_socket.py (original) +++ python/trunk/Lib/test/test_socket.py Fri Mar 28 05:53:10 2008 @@ -5,8 +5,9 @@ import socket import select -import time import thread, threading +import time +import traceback import Queue import sys import os @@ -986,10 +987,13 @@ except Alarm: pass except: - self.fail("caught other exception instead of Alarm") + self.fail("caught other exception instead of Alarm:" + " %s(%s):\n%s" % + (sys.exc_info()[:2] + (traceback.format_exc(),))) else: self.fail("nothing caught") - signal.alarm(0) # shut off alarm + finally: + signal.alarm(0) # shut off alarm except Alarm: self.fail("got Alarm in wrong place") finally: From python-checkins at python.org Fri Mar 28 05:58:51 2008 From: python-checkins at python.org (neal.norwitz) Date: Fri, 28 Mar 2008 05:58:51 +0100 (CET) Subject: [Python-checkins] r61987 - in python/trunk: Include/Python.h Include/bytes_methods.h Objects/longobject.c Objects/unicodeobject.c Parser/tokenizer.c Python/mystrtoul.c Message-ID: <20080328045851.A56B41E4014@bag.python.org> Author: neal.norwitz Date: Fri Mar 28 05:58:51 2008 New Revision: 61987 Modified: python/trunk/Include/Python.h python/trunk/Include/bytes_methods.h python/trunk/Objects/longobject.c python/trunk/Objects/unicodeobject.c python/trunk/Parser/tokenizer.c python/trunk/Python/mystrtoul.c Log: Revert r61969 which added casts to Py_CHARMASK to avoid compiler warnings. Rather than sprinkle casts throughout the code, change Py_CHARMASK to always cast it's result to an unsigned char. This should ensure we do the right thing when accessing an array with the result. Modified: python/trunk/Include/Python.h ============================================================================== --- python/trunk/Include/Python.h (original) +++ python/trunk/Include/Python.h Fri Mar 28 05:58:51 2008 @@ -148,7 +148,7 @@ #ifdef __CHAR_UNSIGNED__ #define Py_CHARMASK(c) (c) #else -#define Py_CHARMASK(c) ((c) & 0xff) +#define Py_CHARMASK(c) ((unsigned char)((c) & 0xff)) #endif #include "pyfpe.h" Modified: python/trunk/Include/bytes_methods.h ============================================================================== --- python/trunk/Include/bytes_methods.h (original) +++ python/trunk/Include/bytes_methods.h Fri Mar 28 05:58:51 2008 @@ -44,13 +44,13 @@ extern const unsigned int _Py_ctype_table[256]; -#define ISLOWER(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_LOWER) -#define ISUPPER(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_UPPER) -#define ISALPHA(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_ALPHA) -#define ISDIGIT(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_DIGIT) -#define ISXDIGIT(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_XDIGIT) -#define ISALNUM(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_ALNUM) -#define ISSPACE(c) (_Py_ctype_table[(unsigned)Py_CHARMASK(c)] & FLAG_SPACE) +#define ISLOWER(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_LOWER) +#define ISUPPER(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_UPPER) +#define ISALPHA(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_ALPHA) +#define ISDIGIT(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_DIGIT) +#define ISXDIGIT(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_XDIGIT) +#define ISALNUM(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_ALNUM) +#define ISSPACE(c) (_Py_ctype_table[Py_CHARMASK(c)] & FLAG_SPACE) #undef islower #define islower(c) undefined_islower(c) Modified: 
python/trunk/Objects/longobject.c ============================================================================== --- python/trunk/Objects/longobject.c (original) +++ python/trunk/Objects/longobject.c Fri Mar 28 05:58:51 2008 @@ -1397,7 +1397,7 @@ n >>= 1; /* n <- total # of bits needed, while setting p to end-of-string */ n = 0; - while (_PyLong_DigitValue[(unsigned)Py_CHARMASK(*p)] < base) + while (_PyLong_DigitValue[Py_CHARMASK(*p)] < base) ++p; *str = p; /* n <- # of Python digits needed, = ceiling(n/PyLong_SHIFT). */ @@ -1418,7 +1418,7 @@ bits_in_accum = 0; pdigit = z->ob_digit; while (--p >= start) { - int k = _PyLong_DigitValue[(unsigned)Py_CHARMASK(*p)]; + int k = _PyLong_DigitValue[Py_CHARMASK(*p)]; assert(k >= 0 && k < base); accum |= (twodigits)(k << bits_in_accum); bits_in_accum += bits_per_char; @@ -1609,7 +1609,7 @@ /* Find length of the string of numeric characters. */ scan = str; - while (_PyLong_DigitValue[(unsigned)Py_CHARMASK(*scan)] < base) + while (_PyLong_DigitValue[Py_CHARMASK(*scan)] < base) ++scan; /* Create a long object that can contain the largest possible @@ -1635,10 +1635,10 @@ /* Work ;-) */ while (str < scan) { /* grab up to convwidth digits from the input string */ - c = (digit)_PyLong_DigitValue[(unsigned)Py_CHARMASK(*str++)]; + c = (digit)_PyLong_DigitValue[Py_CHARMASK(*str++)]; for (i = 1; i < convwidth && str != scan; ++i, ++str) { c = (twodigits)(c * base + - _PyLong_DigitValue[(unsigned)Py_CHARMASK(*str)]); + _PyLong_DigitValue[Py_CHARMASK(*str)]); assert(c < PyLong_BASE); } Modified: python/trunk/Objects/unicodeobject.c ============================================================================== --- python/trunk/Objects/unicodeobject.c (original) +++ python/trunk/Objects/unicodeobject.c Fri Mar 28 05:58:51 2008 @@ -480,13 +480,13 @@ /* Single characters are shared when using this constructor. Restrict to ASCII, since the input must be UTF-8. 
*/ if (size == 1 && Py_CHARMASK(*u) < 128) { - unicode = unicode_latin1[(unsigned)Py_CHARMASK(*u)]; + unicode = unicode_latin1[Py_CHARMASK(*u)]; if (!unicode) { unicode = _PyUnicode_New(1); if (!unicode) return NULL; unicode->str[0] = Py_CHARMASK(*u); - unicode_latin1[(unsigned)Py_CHARMASK(*u)] = unicode; + unicode_latin1[Py_CHARMASK(*u)] = unicode; } Py_INCREF(unicode); return (PyObject *)unicode; Modified: python/trunk/Parser/tokenizer.c ============================================================================== --- python/trunk/Parser/tokenizer.c (original) +++ python/trunk/Parser/tokenizer.c Fri Mar 28 05:58:51 2008 @@ -27,14 +27,6 @@ /* Don't ever change this -- it would break the portability of Python code */ #define TABSIZE 8 -/* Convert a possibly signed character to a nonnegative int */ -/* XXX This assumes characters are 8 bits wide */ -#ifdef __CHAR_UNSIGNED__ -#define Py_CHARMASK(c) (c) -#else -#define Py_CHARMASK(c) ((c) & 0xff) -#endif - /* Forward */ static struct tok_state *tok_new(void); static int tok_nextc(struct tok_state *tok); Modified: python/trunk/Python/mystrtoul.c ============================================================================== --- python/trunk/Python/mystrtoul.c (original) +++ python/trunk/Python/mystrtoul.c Fri Mar 28 05:58:51 2008 @@ -109,7 +109,7 @@ ++str; if (*str == 'x' || *str == 'X') { /* there must be at least one digit after 0x */ - if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 16) { + if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 16) { if (ptr) *ptr = str; return 0; @@ -118,7 +118,7 @@ base = 16; } else if (*str == 'o' || *str == 'O') { /* there must be at least one digit after 0o */ - if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 8) { + if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 8) { if (ptr) *ptr = str; return 0; @@ -127,7 +127,7 @@ base = 8; } else if (*str == 'b' || *str == 'B') { /* there must be at least one digit after 0b */ - if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 2) { + if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 2) { if (ptr) *ptr = str; return 0; @@ -147,7 +147,7 @@ ++str; if (*str == 'b' || *str == 'B') { /* there must be at least one digit after 0b */ - if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 2) { + if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 2) { if (ptr) *ptr = str; return 0; @@ -162,7 +162,7 @@ ++str; if (*str == 'o' || *str == 'O') { /* there must be at least one digit after 0o */ - if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 8) { + if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 8) { if (ptr) *ptr = str; return 0; @@ -177,7 +177,7 @@ ++str; if (*str == 'x' || *str == 'X') { /* there must be at least one digit after 0x */ - if (_PyLong_DigitValue[(unsigned)Py_CHARMASK(str[1])] >= 16) { + if (_PyLong_DigitValue[Py_CHARMASK(str[1])] >= 16) { if (ptr) *ptr = str; return 0; @@ -203,7 +203,7 @@ ovlimit = digitlimit[base]; /* do the conversion until non-digit character encountered */ - while ((c = _PyLong_DigitValue[(unsigned)Py_CHARMASK(*str)]) < base) { + while ((c = _PyLong_DigitValue[Py_CHARMASK(*str)]) < base) { if (ovlimit > 0) /* no overflow check required */ result = result * base + c; else { /* requires overflow check */ @@ -240,7 +240,7 @@ overflowed: if (ptr) { /* spool through remaining digit characters */ - while (_PyLong_DigitValue[(unsigned)Py_CHARMASK(*str)] < base) + while (_PyLong_DigitValue[Py_CHARMASK(*str)] < base) ++str; *ptr = str; } From python-checkins at python.org Fri Mar 28 06:25:36 2008 From: python-checkins at 
python.org (martin.v.loewis) Date: Fri, 28 Mar 2008 06:25:36 +0100 (CET) Subject: [Python-checkins] r61988 - python/trunk/Lib/lib2to3/tests/test_fixers.py Message-ID: <20080328052536.977301E4014@bag.python.org> Author: martin.v.loewis Date: Fri Mar 28 06:25:36 2008 New Revision: 61988 Modified: python/trunk/Lib/lib2to3/tests/test_fixers.py Log: Disable test that depends on #2412 being fixed. Modified: python/trunk/Lib/lib2to3/tests/test_fixers.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/test_fixers.py (original) +++ python/trunk/Lib/lib2to3/tests/test_fixers.py Fri Mar 28 06:25:36 2008 @@ -435,6 +435,8 @@ # is fixed so it won't crash when it sees print(x=y). # When #2412 is fixed, the try/except block can be taken # out and the tests can be run like normal. + # MvL: disable entirely for now, so that it doesn't print to stdout + return try: s = "from __future__ import print_function\n"\ "print('Hai!', end=' ')" From python-checkins at python.org Fri Mar 28 06:26:11 2008 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 28 Mar 2008 06:26:11 +0100 (CET) Subject: [Python-checkins] r61989 - python/trunk/Lib/test/test_lib2to3.py Message-ID: <20080328052611.64CE11E4014@bag.python.org> Author: martin.v.loewis Date: Fri Mar 28 06:26:10 2008 New Revision: 61989 Added: python/trunk/Lib/test/test_lib2to3.py (contents, props changed) Log: Run 2to3 tests. Added: python/trunk/Lib/test/test_lib2to3.py ============================================================================== --- (empty file) +++ python/trunk/Lib/test/test_lib2to3.py Fri Mar 28 06:26:10 2008 @@ -0,0 +1,15 @@ +# Skipping test_parser and test_all_fixers +# because of running +from lib2to3.tests import test_fixers, test_pytree, test_util +import unittest +from test.test_support import run_unittest + +def suite(): + tests = unittest.TestSuite() + loader = unittest.TestLoader() + for m in (test_fixers,test_pytree,test_util): + tests.addTests(loader.loadTestsFromModule(m)) + return tests + +def test_main(): + run_unittest(suite()) From python-checkins at python.org Fri Mar 28 06:27:52 2008 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 28 Mar 2008 06:27:52 +0100 (CET) Subject: [Python-checkins] r61990 - in python/trunk/Lib/lib2to3: fixes/fix_except.py tests/test_all_fixers.py tests/test_fixers.py Message-ID: <20080328052752.2A9641E4014@bag.python.org> Author: martin.v.loewis Date: Fri Mar 28 06:27:44 2008 New Revision: 61990 Modified: python/trunk/Lib/lib2to3/ (props changed) python/trunk/Lib/lib2to3/fixes/fix_except.py python/trunk/Lib/lib2to3/tests/test_all_fixers.py python/trunk/Lib/lib2to3/tests/test_fixers.py Log: Merged revisions 61825-61989 via svnmerge from svn+ssh://pythondev at svn.python.org/sandbox/trunk/2to3/lib2to3 ........ r61899 | collin.winter | 2008-03-25 17:53:41 +0100 (Di, 25 M?r 2008) | 1 line Add a missing explicit fixer to test_all_fixers. ........ r61983 | collin.winter | 2008-03-28 03:19:46 +0100 (Fr, 28 M?r 2008) | 2 lines Fix http://bugs.python.org/issue2453: support empty excepts in fix_except. ........ 
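For reference, the effect of the merged fix_except change can be reproduced with a few lines (a sketch assuming the list-of-fixer-names RefactoringTool constructor and refactor_string(); the 2008-era test harness drove the tool through an Options object instead, as the test_all_fixers diff below shows):

    from lib2to3.refactor import RefactoringTool

    SOURCE = (
        "try:\n"
        "    pass\n"
        "except Exception, a:\n"
        "    pass\n"
        "except:\n"            # the bare clause that used to break the PATTERN
        "    pass\n"
    )

    tool = RefactoringTool(["lib2to3.fixes.fix_except"])
    # 'except Exception, a:' becomes 'except Exception as a:'; the bare
    # 'except:' (and any else/finally) is carried over via the new 'tail' group.
    print(tool.refactor_string(SOURCE, "<example>"))
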
Modified: python/trunk/Lib/lib2to3/fixes/fix_except.py ============================================================================== --- python/trunk/Lib/lib2to3/fixes/fix_except.py (original) +++ python/trunk/Lib/lib2to3/fixes/fix_except.py Fri Mar 28 06:27:44 2008 @@ -37,15 +37,18 @@ PATTERN = """ try_stmt< 'try' ':' suite - cleanup=((except_clause ':' suite)+ ['else' ':' suite] - ['finally' ':' suite] - | 'finally' ':' suite) > + cleanup=(except_clause ':' suite)+ + tail=(['except' ':' suite] + ['else' ':' suite] + ['finally' ':' suite]) > """ def transform(self, node, results): syms = self.syms - try_cleanup = [ch.clone() for ch in results['cleanup']] + tail = [n.clone() for n in results["tail"]] + + try_cleanup = [ch.clone() for ch in results["cleanup"]] for except_clause, e_suite in find_excepts(try_cleanup): if len(except_clause.children) == 4: (E, comma, N) = except_clause.children[1:4] @@ -85,5 +88,5 @@ N.set_prefix(" ") #TODO(cwinter) fix this when children becomes a smart list - children = [c.clone() for c in node.children[:3]] + try_cleanup + children = [c.clone() for c in node.children[:3]] + try_cleanup + tail return pytree.Node(node.type, children) Modified: python/trunk/Lib/lib2to3/tests/test_all_fixers.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/test_all_fixers.py (original) +++ python/trunk/Lib/lib2to3/tests/test_all_fixers.py Fri Mar 28 06:27:44 2008 @@ -27,7 +27,7 @@ class Test_all(support.TestCase): def setUp(self): - options = Options(fix=["all", "idioms", "ws_comma"], + options = Options(fix=["all", "idioms", "ws_comma", "buffer"], print_function=False) self.refactor = refactor.RefactoringTool(options) Modified: python/trunk/Lib/lib2to3/tests/test_fixers.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/test_fixers.py (original) +++ python/trunk/Lib/lib2to3/tests/test_fixers.py Fri Mar 28 06:27:44 2008 @@ -681,6 +681,72 @@ pass""" self.check(b, a) + def test_bare_except(self): + b = """ + try: + pass + except Exception, a: + pass + except: + pass""" + + a = """ + try: + pass + except Exception as a: + pass + except: + pass""" + self.check(b, a) + + def test_bare_except_and_else_finally(self): + b = """ + try: + pass + except Exception, a: + pass + except: + pass + else: + pass + finally: + pass""" + + a = """ + try: + pass + except Exception as a: + pass + except: + pass + else: + pass + finally: + pass""" + self.check(b, a) + + def test_multi_fixed_excepts_before_bare_except(self): + b = """ + try: + pass + except TypeError, b: + pass + except Exception, a: + pass + except: + pass""" + + a = """ + try: + pass + except TypeError as b: + pass + except Exception as a: + pass + except: + pass""" + self.check(b, a) + # These should not be touched: def test_unchanged_1(self): From python-checkins at python.org Fri Mar 28 06:34:59 2008 From: python-checkins at python.org (neal.norwitz) Date: Fri, 28 Mar 2008 06:34:59 +0100 (CET) Subject: [Python-checkins] r61992 - in python/trunk: Objects/complexobject.c Objects/floatobject.c configure configure.in pyconfig.h.in Message-ID: <20080328053459.E0DDE1E4014@bag.python.org> Author: neal.norwitz Date: Fri Mar 28 06:34:59 2008 New Revision: 61992 Modified: python/trunk/Objects/complexobject.c python/trunk/Objects/floatobject.c python/trunk/configure python/trunk/configure.in python/trunk/pyconfig.h.in Log: Fix compiler warning about finite() missing on Solaris. 
Modified: python/trunk/Objects/complexobject.c ============================================================================== --- python/trunk/Objects/complexobject.c (original) +++ python/trunk/Objects/complexobject.c Fri Mar 28 06:34:59 2008 @@ -8,6 +8,10 @@ #include "Python.h" #include "structmember.h" +#ifdef HAVE_IEEEFP_H +#include +#endif + #ifndef WITHOUT_COMPLEX /* Precisions used by repr() and str(), respectively. Modified: python/trunk/Objects/floatobject.c ============================================================================== --- python/trunk/Objects/floatobject.c (original) +++ python/trunk/Objects/floatobject.c Fri Mar 28 06:34:59 2008 @@ -10,6 +10,10 @@ #include #include +#ifdef HAVE_IEEEFP_H +#include +#endif + #include "formatter_string.h" #if !defined(__STDC__) Modified: python/trunk/configure ============================================================================== --- python/trunk/configure (original) +++ python/trunk/configure Fri Mar 28 06:34:59 2008 @@ -1,5 +1,5 @@ #! /bin/sh -# From configure.in Revision: 61722 . +# From configure.in Revision: 61847 . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.61 for python 2.6. # @@ -5421,9 +5421,10 @@ + for ac_header in asm/types.h conio.h curses.h direct.h dlfcn.h errno.h \ fcntl.h grp.h \ -io.h langinfo.h libintl.h ncurses.h poll.h process.h pthread.h \ +ieeefp.h io.h langinfo.h libintl.h ncurses.h poll.h process.h pthread.h \ shadow.h signal.h stdint.h stropts.h termios.h thread.h \ unistd.h utime.h \ sys/audioio.h sys/bsdtty.h sys/epoll.h sys/event.h sys/file.h sys/loadavg.h \ Modified: python/trunk/configure.in ============================================================================== --- python/trunk/configure.in (original) +++ python/trunk/configure.in Fri Mar 28 06:34:59 2008 @@ -1099,7 +1099,7 @@ AC_HEADER_STDC AC_CHECK_HEADERS(asm/types.h conio.h curses.h direct.h dlfcn.h errno.h \ fcntl.h grp.h \ -io.h langinfo.h libintl.h ncurses.h poll.h process.h pthread.h \ +ieeefp.h io.h langinfo.h libintl.h ncurses.h poll.h process.h pthread.h \ shadow.h signal.h stdint.h stropts.h termios.h thread.h \ unistd.h utime.h \ sys/audioio.h sys/bsdtty.h sys/epoll.h sys/event.h sys/file.h sys/loadavg.h \ Modified: python/trunk/pyconfig.h.in ============================================================================== --- python/trunk/pyconfig.h.in (original) +++ python/trunk/pyconfig.h.in Fri Mar 28 06:34:59 2008 @@ -300,6 +300,9 @@ /* Define to 1 if you have the `hypot' function. */ #undef HAVE_HYPOT +/* Define to 1 if you have the header file. */ +#undef HAVE_IEEEFP_H + /* Define if you have the 'inet_aton' function. */ #undef HAVE_INET_ATON From buildbot at python.org Fri Mar 28 06:57:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 05:57:22 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080328055722.5FE2C1E4014@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/514 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 07:23:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 06:23:37 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080328062338.2D8D21E4014@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3106 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: The web-page 'force build' button was pressed by 'christian.heimes': force another rebuild Build Source Stamp: [branch trunk] HEAD Blamelist: BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 281, in _handle_request_noblock self.process_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 307, in process_request self.finish_request(request, client_address) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/SocketServer.py", line 615, in __init__ self.handle() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 318, in handle self.handle_one_request() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/BaseHTTPServer.py", line 301, in handle_one_request 
self.raw_requestline = self.rfile.readline() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/socket.py", line 369, in readline data = self._sock.recv(self._rbufsize) error: [Errno 35] Resource temporarily unavailable 2 tests failed: test_logging test_xmlrpc
====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3
====================================================================== FAIL: test_dotted_attribute (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 472, in test_dotted_attribute self.test_simple1() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 361, in test_simple1 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 54] Connection reset by peer
====================================================================== FAIL: test_introspection1 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 387, in test_introspection1 self.fail("%s\n%s" % (e, getattr(e, "headers", "")))
AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_introspection2 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 399, in test_introspection2 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_introspection3 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 411, in test_introspection3 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_introspection4 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 424, in test_introspection4 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_multicall (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 441, in test_multicall self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_non_existing_multicall (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 462, in test_non_existing_multicall self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_simple1 (test.test_xmlrpc.SimpleServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 361, in test_simple1 self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe ====================================================================== FAIL: test_basic (test.test_xmlrpc.FailingServerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_xmlrpc.py", line 519, in test_basic self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) AssertionError: [Errno 32] Broken pipe make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 07:34:03 2008 From: python-checkins at python.org (neal.norwitz) Date: Fri, 28 Mar 2008 07:34:03 +0100 (CET) Subject: [Python-checkins] r61993 - python/trunk/Lib/test/test_xmlrpc.py Message-ID: <20080328063403.622351E4014@bag.python.org> Author: neal.norwitz Date: Fri Mar 28 07:34:03 2008 New Revision: 61993 Modified: python/trunk/Lib/test/test_xmlrpc.py 
Log: Bug 1503: Get the test to pass on OSX. This should make the test more reliable, but I'm not convinced it is the right solution. We need to determine if this causes the test to hang on any platforms or do other bad things. Even if it gets the test to pass reliably, it might be that we want to fix this in socket. The socket returned from accept() is different on different platforms (inheriting attributes or not) and we might want to ensure that the attributes (at least blocking) is the same across all platforms. Modified: python/trunk/Lib/test/test_xmlrpc.py ============================================================================== --- python/trunk/Lib/test/test_xmlrpc.py (original) +++ python/trunk/Lib/test/test_xmlrpc.py Fri Mar 28 07:34:03 2008 @@ -273,9 +273,17 @@ '''This is my function''' return True + class MyXMLRPCServer(SimpleXMLRPCServer.SimpleXMLRPCServer): + def get_request(self): + # Ensure the socket is always non-blocking. On Linux, socket + # attributes are not inherited like they are on *BSD and Windows. + s, port = self.socket.accept() + s.setblocking(True) + return s, port + try: - serv = SimpleXMLRPCServer.SimpleXMLRPCServer(("localhost", 0), - logRequests=False, bind_and_activate=False) + serv = MyXMLRPCServer(("localhost", 0), + logRequests=False, bind_and_activate=False) serv.socket.settimeout(3) serv.server_bind() global PORT From buildbot at python.org Fri Mar 28 08:16:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 07:16:25 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 3.0 Message-ID: <20080328071625.4CA231E4014@bag.python.org> The Buildbot has detected a new failure of amd64 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%203.0/builds/726 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_getargs2 ====================================================================== ERROR: test_n (test.test_getargs2.Signed_TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_getargs2.py", line 190, in test_n self.failUnlessEqual(99, getargs_n(Long())) TypeError: 'Long' object cannot be interpreted as an integer sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 08:24:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 07:24:26 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080328072426.2CB3E1E4014@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/3074 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 08:25:47 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 08:25:47 +0100 (CET) Subject: [Python-checkins] r61994 - doctools/trunk/doc/_templates/indexsidebar.html Message-ID: <20080328072547.0744F1E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 08:25:46 2008 New Revision: 61994 Modified: doctools/trunk/doc/_templates/indexsidebar.html Log: Refer to the new Google group. Modified: doctools/trunk/doc/_templates/indexsidebar.html ============================================================================== --- doctools/trunk/doc/_templates/indexsidebar.html (original) +++ doctools/trunk/doc/_templates/indexsidebar.html Fri Mar 28 08:25:46 2008 @@ -7,8 +7,7 @@

   Questions? Suggestions?

-  Send them to <georg at python org>, contact the author
-  via Jabber at <gbrandl at pocoo org> or come to the
-  #python-docs channel on FreeNode.
+  Join the Google group
+  or come to the #python-docs channel on FreeNode.

   You can also open a bug at Python's bug tracker, using the
   "Documentation tools" category.

                  From python-checkins at python.org Fri Mar 28 08:26:33 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 08:26:33 +0100 (CET) Subject: [Python-checkins] r61995 - doctools/trunk/doc/_templates/indexsidebar.html Message-ID: <20080328072633.1C2CE1E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 08:26:32 2008 New Revision: 61995 Modified: doctools/trunk/doc/_templates/indexsidebar.html Log: Fix HTML. Modified: doctools/trunk/doc/_templates/indexsidebar.html ============================================================================== --- doctools/trunk/doc/_templates/indexsidebar.html (original) +++ doctools/trunk/doc/_templates/indexsidebar.html Fri Mar 28 08:26:32 2008 @@ -7,7 +7,7 @@

   Questions? Suggestions?

-  Join the Google group
+  Join the Google group
   or come to the #python-docs channel on FreeNode.

   You can also open a bug at Python's bug tracker, using the
   "Documentation tools" category.

                  From python-checkins at python.org Fri Mar 28 08:34:27 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 08:34:27 +0100 (CET) Subject: [Python-checkins] r61996 - doctools/trunk/README Message-ID: <20080328073427.202051E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 08:34:26 2008 New Revision: 61996 Modified: doctools/trunk/README Log: Add mkdir step to README. Modified: doctools/trunk/README ============================================================================== --- doctools/trunk/README (original) +++ doctools/trunk/README Fri Mar 28 08:34:26 2008 @@ -17,6 +17,7 @@ After installing:: cd doc + mkdir -p _build/html sphinx-build . _build/html browser _build/index.html From python-checkins at python.org Fri Mar 28 08:36:31 2008 From: python-checkins at python.org (neal.norwitz) Date: Fri, 28 Mar 2008 08:36:31 +0100 (CET) Subject: [Python-checkins] r61997 - python/trunk/Lib/test/test_sax.py Message-ID: <20080328073631.F1A381E4014@bag.python.org> Author: neal.norwitz Date: Fri Mar 28 08:36:31 2008 New Revision: 61997 Modified: python/trunk/Lib/test/test_sax.py Log: Name the main method correctly so the test is run Modified: python/trunk/Lib/test/test_sax.py ============================================================================== --- python/trunk/Lib/test/test_sax.py (original) +++ python/trunk/Lib/test/test_sax.py Fri Mar 28 08:36:31 2008 @@ -684,7 +684,7 @@ self.assertRaises(SAXParseException, parser.parse, sio) -def unittest_main(): +def test_main(): run_unittest(MakeParserTest, SaxutilsTest, XmlgenTest, @@ -693,4 +693,4 @@ XmlReaderTest) if __name__ == "__main__": - unittest_main() + test_main() From buildbot at python.org Fri Mar 28 08:52:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 07:52:26 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080328075226.A1C851E4014@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1105 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_socket_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 09:00:45 2008 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 28 Mar 2008 09:00:45 +0100 (CET) Subject: [Python-checkins] r61998 - python/trunk/Lib/test/test_urllib2_localnet.py python/trunk/Lib/test/test_urllib2net.py Message-ID: <20080328080045.79ABA1E4014@bag.python.org> Author: gregory.p.smith Date: Fri Mar 28 09:00:44 2008 New Revision: 61998 Modified: python/trunk/Lib/test/test_urllib2_localnet.py python/trunk/Lib/test/test_urllib2net.py Log: This patch moves some tests from test_urllib2_net to test_urllib2_localnet. The moved tests use a local server rather than going out to external servers. Accepts patch from issue2429. Contributed by Jerry Seutter & Michael Foord (fuzzyman) at PyCon 2008. 
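The committed diff follows below. As a rough, self-contained sketch of the same local-server idea it describes (the handler name, the ephemeral-port choice, and the canned response body here are illustrative and not taken from the patch), a test can exercise urllib2 entirely against localhost instead of an external site:

    import threading
    import urllib2
    import BaseHTTPServer

    class CannedHandler(BaseHTTPServer.BaseHTTPRequestHandler):
        # Hypothetical handler for this sketch: always serve the same body.
        body = "we don't care"

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(self.body)

        def log_message(self, *args):
            # Keep the test run quiet.
            pass

    def fetch_from_local_server():
        # Bind to an ephemeral port on localhost so no external host is needed.
        server = BaseHTTPServer.HTTPServer(("127.0.0.1", 0), CannedHandler)
        port = server.server_address[1]
        worker = threading.Thread(target=server.handle_request)  # serve one request
        worker.start()
        try:
            f = urllib2.urlopen("http://127.0.0.1:%d/" % port)
            data = f.read()
            f.close()
            assert data == CannedHandler.body
        finally:
            worker.join()
            server.server_close()

    if __name__ == "__main__":
        fetch_from_local_server()

Because the server binds port 0 and reports the port it actually got, such a test avoids both external network dependencies and collisions with other services on the build slaves.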
Modified: python/trunk/Lib/test/test_urllib2_localnet.py ============================================================================== --- python/trunk/Lib/test/test_urllib2_localnet.py (original) +++ python/trunk/Lib/test/test_urllib2_localnet.py Fri Mar 28 09:00:44 2008 @@ -1,5 +1,6 @@ #!/usr/bin/env python +import mimetools import threading import urlparse import urllib2 @@ -216,7 +217,7 @@ # Test cases class ProxyAuthTests(unittest.TestCase): - URL = "http://www.foo.com" + URL = "http://localhost" USER = "tester" PASSWD = "test123" @@ -278,6 +279,204 @@ pass result.close() + +def GetRequestHandler(responses): + + class FakeHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): + + server_version = "TestHTTP/" + requests = [] + headers_received = [] + port = 80 + + def do_GET(self): + body = self.send_head() + if body: + self.wfile.write(body) + + def do_POST(self): + content_length = self.headers['Content-Length'] + post_data = self.rfile.read(int(content_length)) + self.do_GET() + self.requests.append(post_data) + + def send_head(self): + FakeHTTPRequestHandler.headers_received = self.headers + self.requests.append(self.path) + response_code, headers, body = responses.pop(0) + + self.send_response(response_code) + + for (header, value) in headers: + self.send_header(header, value % self.port) + if body: + self.send_header('Content-type', 'text/plain') + self.end_headers() + return body + self.end_headers() + + def log_message(self, *args): + pass + + + return FakeHTTPRequestHandler + + +class TestUrlopen(unittest.TestCase): + """Tests urllib2.urlopen using the network. + + These tests are not exhaustive. Assuming that testing using files does a + good job overall of some of the basic interface features. There are no + tests exercising the optional 'data' and 'proxies' arguments. No tests + for transparent redirection have been written. + """ + + def start_server(self, responses): + handler = GetRequestHandler(responses) + + self.server = LoopbackHttpServerThread(handler) + self.server.start() + self.server.ready.wait() + port = self.server.port + handler.port = port + return handler + + + def test_redirection(self): + expected_response = 'We got here...' + responses = [ + (302, [('Location', 'http://localhost:%s/somewhere_else')], ''), + (200, [], expected_response) + ] + + handler = self.start_server(responses) + + try: + f = urllib2.urlopen('http://localhost:%s/' % handler.port) + data = f.read() + f.close() + + self.assertEquals(data, expected_response) + self.assertEquals(handler.requests, ['/', '/somewhere_else']) + finally: + self.server.stop() + + + def test_404(self): + expected_response = 'Bad bad bad...' + handler = self.start_server([(404, [], expected_response)]) + + try: + try: + urllib2.urlopen('http://localhost:%s/weeble' % handler.port) + except urllib2.URLError, f: + pass + else: + self.fail('404 should raise URLError') + + data = f.read() + f.close() + + self.assertEquals(data, expected_response) + self.assertEquals(handler.requests, ['/weeble']) + finally: + self.server.stop() + + + def test_200(self): + expected_response = 'pycon 2008...' + handler = self.start_server([(200, [], expected_response)]) + + try: + f = urllib2.urlopen('http://localhost:%s/bizarre' % handler.port) + data = f.read() + f.close() + + self.assertEquals(data, expected_response) + self.assertEquals(handler.requests, ['/bizarre']) + finally: + self.server.stop() + + def test_200_with_parameters(self): + expected_response = 'pycon 2008...' 
+ handler = self.start_server([(200, [], expected_response)]) + + try: + f = urllib2.urlopen('http://localhost:%s/bizarre' % handler.port, 'get=with_feeling') + data = f.read() + f.close() + + self.assertEquals(data, expected_response) + self.assertEquals(handler.requests, ['/bizarre', 'get=with_feeling']) + finally: + self.server.stop() + + + def test_sending_headers(self): + handler = self.start_server([(200, [], "we don't care")]) + + try: + req = urllib2.Request("http://localhost:%s/" % handler.port, + headers={'Range': 'bytes=20-39'}) + urllib2.urlopen(req) + self.assertEqual(handler.headers_received['Range'], 'bytes=20-39') + finally: + self.server.stop() + + def test_basic(self): + handler = self.start_server([(200, [], "we don't care")]) + + try: + open_url = urllib2.urlopen("http://localhost:%s" % handler.port) + for attr in ("read", "close", "info", "geturl"): + self.assert_(hasattr(open_url, attr), "object returned from " + "urlopen lacks the %s attribute" % attr) + try: + self.assert_(open_url.read(), "calling 'read' failed") + finally: + open_url.close() + finally: + self.server.stop() + + def test_info(self): + handler = self.start_server([(200, [], "we don't care")]) + + try: + open_url = urllib2.urlopen("http://localhost:%s" % handler.port) + info_obj = open_url.info() + self.assert_(isinstance(info_obj, mimetools.Message), + "object returned by 'info' is not an instance of " + "mimetools.Message") + self.assertEqual(info_obj.getsubtype(), "plain") + finally: + self.server.stop() + + def test_geturl(self): + # Make sure same URL as opened is returned by geturl. + handler = self.start_server([(200, [], "we don't care")]) + + try: + open_url = urllib2.urlopen("http://localhost:%s" % handler.port) + url = open_url.geturl() + self.assertEqual(url, "http://localhost:%s" % handler.port) + finally: + self.server.stop() + + + def test_bad_address(self): + # Make sure proper exception is raised when connecting to a bogus + # address. + self.assertRaises(IOError, + # SF patch 809915: In Sep 2003, VeriSign started + # highjacking invalid .com and .net addresses to + # boost traffic to their own site. This test + # started failing then. One hopes the .invalid + # domain will be spared to serve its defined + # purpose. + # urllib2.urlopen, "http://www.sadflkjsasadf.com/") + urllib2.urlopen, "http://www.python.invalid./") + + def test_main(): # We will NOT depend on the network resource flag # (Lib/test/regrtest.py -u network) since all tests here are only @@ -286,6 +485,7 @@ #test_support.requires("network") test_support.run_unittest(ProxyAuthTests) + test_support.run_unittest(TestUrlopen) if __name__ == "__main__": test_main() Modified: python/trunk/Lib/test/test_urllib2net.py ============================================================================== --- python/trunk/Lib/test/test_urllib2net.py (original) +++ python/trunk/Lib/test/test_urllib2net.py Fri Mar 28 09:00:44 2008 @@ -24,20 +24,6 @@ raise last_exc -class URLTimeoutTest(unittest.TestCase): - - TIMEOUT = 10.0 - - def setUp(self): - socket.setdefaulttimeout(self.TIMEOUT) - - def tearDown(self): - socket.setdefaulttimeout(None) - - def testURLread(self): - f = _urlopen_with_retry("http://www.python.org/") - x = f.read() - class AuthTests(unittest.TestCase): """Tests urllib2 authentication features.""" @@ -99,68 +85,6 @@ response.close() self.assert_(fileobject.closed) -class urlopenNetworkTests(unittest.TestCase): - """Tests urllib2.urlopen using the network. - - These tests are not exhaustive. 
Assuming that testing using files does a - good job overall of some of the basic interface features. There are no - tests exercising the optional 'data' and 'proxies' arguments. No tests - for transparent redirection have been written. - - setUp is not used for always constructing a connection to - http://www.python.org/ since there a few tests that don't use that address - and making a connection is expensive enough to warrant minimizing unneeded - connections. - - """ - - def test_basic(self): - # Simple test expected to pass. - open_url = _urlopen_with_retry("http://www.python.org/") - for attr in ("read", "close", "info", "geturl"): - self.assert_(hasattr(open_url, attr), "object returned from " - "urlopen lacks the %s attribute" % attr) - try: - self.assert_(open_url.read(), "calling 'read' failed") - finally: - open_url.close() - - def test_info(self): - # Test 'info'. - open_url = _urlopen_with_retry("http://www.python.org/") - try: - info_obj = open_url.info() - finally: - open_url.close() - self.assert_(isinstance(info_obj, mimetools.Message), - "object returned by 'info' is not an instance of " - "mimetools.Message") - self.assertEqual(info_obj.getsubtype(), "html") - - def test_geturl(self): - # Make sure same URL as opened is returned by geturl. - URL = "http://www.python.org/" - open_url = _urlopen_with_retry(URL) - try: - gotten_url = open_url.geturl() - finally: - open_url.close() - self.assertEqual(gotten_url, URL) - - def test_bad_address(self): - # Make sure proper exception is raised when connecting to a bogus - # address. - self.assertRaises(IOError, - # SF patch 809915: In Sep 2003, VeriSign started - # highjacking invalid .com and .net addresses to - # boost traffic to their own site. This test - # started failing then. One hopes the .invalid - # domain will be spared to serve its defined - # purpose. - # urllib2.urlopen, "http://www.sadflkjsasadf.com/") - urllib2.urlopen, "http://www.python.invalid./") - - class OtherNetworkTests(unittest.TestCase): def setUp(self): if 0: # for debugging @@ -168,13 +92,6 @@ logger = logging.getLogger("test_urllib2net") logger.addHandler(logging.StreamHandler()) - def test_range (self): - req = urllib2.Request("http://www.python.org", - headers={'Range': 'bytes=20-39'}) - result = _urlopen_with_retry(req) - data = result.read() - self.assertEqual(len(data), 20) - # XXX The rest of these tests aren't very good -- they don't check much. # They do sometimes catch some major disasters, though. @@ -202,16 +119,6 @@ finally: os.remove(TESTFN) - def test_http(self): - urls = [ - 'http://www.espn.com/', # redirect - 'http://www.python.org/Spanish/Inquistion/', - ('http://www.python.org/cgi-bin/faqw.py', - 'query=pythonistas&querytype=simple&casefold=yes&req=search', None), - 'http://www.python.org/', - ] - self._test_urls(urls, self._extra_handlers()) - # XXX Following test depends on machine configurations that are internal # to CNRI. Need to set up a public server with the right authentication # configuration for test purposes. 
@@ -279,6 +186,7 @@ return handlers + class TimeoutTest(unittest.TestCase): def test_http_basic(self): u = _urlopen_with_retry("http://www.python.org") @@ -327,9 +235,7 @@ def test_main(): test_support.requires("network") - test_support.run_unittest(URLTimeoutTest, - urlopenNetworkTests, - AuthTests, + test_support.run_unittest(AuthTests, OtherNetworkTests, CloseSocketTest, TimeoutTest, From python-checkins at python.org Fri Mar 28 09:06:56 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 09:06:56 +0100 (CET) Subject: [Python-checkins] r61999 - python/trunk/Doc/library/gzip.rst Message-ID: <20080328080656.E9C041E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 09:06:56 2008 New Revision: 61999 Modified: python/trunk/Doc/library/gzip.rst Log: #2406: add examples to gzip docs. Modified: python/trunk/Doc/library/gzip.rst ============================================================================== --- python/trunk/Doc/library/gzip.rst (original) +++ python/trunk/Doc/library/gzip.rst Fri Mar 28 09:06:56 2008 @@ -1,19 +1,22 @@ - :mod:`gzip` --- Support for :program:`gzip` files ================================================= .. module:: gzip :synopsis: Interfaces for gzip compression and decompression using file objects. +This module provides a simple interface to compress and decompress files just +like the GNU programs :program:`gzip` and :program:`gunzip` would. + +The data compression is provided by the :mod:``zlib`` module. -The data compression provided by the ``zlib`` module is compatible with that -used by the GNU compression program :program:`gzip`. Accordingly, the -:mod:`gzip` module provides the :class:`GzipFile` class to read and write +The :mod:`gzip` module provides the :class:`GzipFile` class which is modeled +after Python's File Object. The :class:`GzipFile` class reads and writes :program:`gzip`\ -format files, automatically compressing or decompressing the -data so it looks like an ordinary file object. Note that additional file -formats which can be decompressed by the :program:`gzip` and :program:`gunzip` -programs, such as those produced by :program:`compress` and :program:`pack`, -are not supported by this module. +data so that it looks like an ordinary file object. + +Note that additional file formats which can be decompressed by the +:program:`gzip` and :program:`gunzip` programs, such as those produced by +:program:`compress` and :program:`pack`, are not supported by this module. For other archive formats, see the :mod:`bz2`, :mod:`zipfile`, and :mod:`tarfile` modules. @@ -63,6 +66,36 @@ *compresslevel* defaults to ``9``. +.. _gzip-usage-examples: + +Examples of usage +----------------- + +Example of how to read a compressed file:: + + import gzip + f = gzip.open('/home/joe/file.txt.gz', 'rb') + file_content = f.read() + f.close() + +Example of how to create a compressed GZIP file:: + + import gzip + content = "Lots of content here" + f = gzip.open('/home/joe/file.txt.gz', 'wb') + f.write(content) + f.close() + +Example of how to GZIP compress an existing file:: + + import gzip + f_in = open('/home/joe/file.txt', 'rb') + f_out = gzip.open('/home/joe/file.txt.gz', 'wb') + f_out.writelines(f_in) + f_out.close() + f_in.close() + + .. 
seealso:: Module :mod:`zlib` From buildbot at python.org Fri Mar 28 09:15:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:15:19 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080328081519.946E11E402D@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1073 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 09:20:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:20:33 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080328082033.9ACBA1E401C@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/279 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 09:28:25 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:28:25 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080328082826.115A81E4014@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/518 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 09:32:22 2008 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 28 Mar 2008 09:32:22 +0100 (CET) Subject: [Python-checkins] r62000 - in python/trunk: Doc/library/sqlite3.rst Lib/test/test_sqlite.py Misc/ACKS Misc/NEWS Modules/_sqlite/connection.c Message-ID: <20080328083222.0FD6F1E4014@bag.python.org> Author: gregory.p.smith Date: Fri Mar 28 09:32:09 2008 New Revision: 62000 Modified: python/trunk/Doc/library/sqlite3.rst python/trunk/Lib/test/test_sqlite.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Modules/_sqlite/connection.c Log: Accept patch issue2426 by Paul Kippes (kippesp). Adds sqlite3.Connection.iterdump to allow dumping of databases. Modified: python/trunk/Doc/library/sqlite3.rst ============================================================================== --- python/trunk/Doc/library/sqlite3.rst (original) +++ python/trunk/Doc/library/sqlite3.rst Fri Mar 28 09:32:09 2008 @@ -378,6 +378,27 @@ deleted since the database connection was opened. +.. attribute:: Connection.iterdump + + Returns an iterator to dump the database in an SQL text format. Useful when + saving an in-memory database for later restoration. This function provides + the same capabilities as the :kbd:`.dump` command in the :program:`sqlite3` + shell. + + .. versionadded:: 2.6 + + Example:: + + # Convert file existing_db.db to SQL dump file dump.sql + import sqlite3, os + + con = sqlite3.connect('existing_db.db') + full_dump = os.linesep.join([line for line in con.iterdump()]) + f = open('dump.sql', 'w') + f.writelines(full_dump) + f.close() + + .. 
_sqlite3-cursor-objects: Cursor Objects Modified: python/trunk/Lib/test/test_sqlite.py ============================================================================== --- python/trunk/Lib/test/test_sqlite.py (original) +++ python/trunk/Lib/test/test_sqlite.py Fri Mar 28 09:32:09 2008 @@ -5,12 +5,13 @@ except ImportError: raise TestSkipped('no sqlite available') from sqlite3.test import (dbapi, types, userfunctions, py25tests, - factory, transactions, hooks, regression) + factory, transactions, hooks, regression, + dump) def test_main(): run_unittest(dbapi.suite(), types.suite(), userfunctions.suite(), py25tests.suite(), factory.suite(), transactions.suite(), - hooks.suite(), regression.suite()) + hooks.suite(), regression.suite(), dump.suite()) if __name__ == "__main__": test_main() Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Fri Mar 28 09:32:09 2008 @@ -362,6 +362,7 @@ Vivek Khera Mads Kiilerich Taek Joo Kim +Paul Kippes Steve Kirsch Ron Klatchko Bastian Kleineidam Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 28 09:32:09 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Patch #2426: Added sqlite3.Connection.iterdump method to allow easy dumping + of databases. Contributed by Paul Kippes at PyCon 2008. + - Patch #2477: Added from __future__ import unicode_literals. - Added backport of bytearray type. Modified: python/trunk/Modules/_sqlite/connection.c ============================================================================== --- python/trunk/Modules/_sqlite/connection.c (original) +++ python/trunk/Modules/_sqlite/connection.c Fri Mar 28 09:32:09 2008 @@ -1199,6 +1199,52 @@ return retval; } +/* Function author: Paul Kippes + * Class method of Connection to call the Python function _iterdump + * of the sqlite3 module. + */ +static PyObject * +pysqlite_connection_iterdump(pysqlite_Connection* self, PyObject* args) +{ + PyObject* retval = NULL; + PyObject* module = NULL; + PyObject* module_dict; + PyObject* pyfn_iterdump; + + if (!pysqlite_check_connection(self)) { + goto finally; + } + + module = PyImport_ImportModule(MODULE_NAME ".dump"); + if (!module) { + goto finally; + } + + module_dict = PyModule_GetDict(module); + if (!module_dict) { + goto finally; + } + + pyfn_iterdump = PyDict_GetItemString(module_dict, "_iterdump"); + if (!pyfn_iterdump) { + PyErr_SetString(pysqlite_OperationalError, "Failed to obtain _iterdump() reference"); + goto finally; + } + + args = PyTuple_New(1); + if (!args) { + goto finally; + } + Py_INCREF(self); + PyTuple_SetItem(args, 0, (PyObject*)self); + retval = PyObject_CallObject(pyfn_iterdump, args); + +finally: + Py_XDECREF(args); + Py_XDECREF(module); + return retval; +} + static PyObject * pysqlite_connection_create_collation(pysqlite_Connection* self, PyObject* args) { @@ -1344,6 +1390,8 @@ PyDoc_STR("Creates a collation function. Non-standard.")}, {"interrupt", (PyCFunction)pysqlite_connection_interrupt, METH_NOARGS, PyDoc_STR("Abort any pending database operation. Non-standard.")}, + {"iterdump", (PyCFunction)pysqlite_connection_iterdump, METH_NOARGS, + PyDoc_STR("Returns iterator to the dump of the database in an SQL text format.")}, {"__enter__", (PyCFunction)pysqlite_connection_enter, METH_NOARGS, PyDoc_STR("For context manager. 
Non-standard.")}, {"__exit__", (PyCFunction)pysqlite_connection_exit, METH_VARARGS, From buildbot at python.org Fri Mar 28 09:37:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:37:36 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080328083740.061381E4014@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/623 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 09:51:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:51:13 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080328085113.AB8531E4014@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2776 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 09:54:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:54:14 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 3.0 Message-ID: <20080328085414.BACB31E4014@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%203.0/builds/643 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_logging ====================================================================== ERROR: test_close_when_done (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 210, in test_close_when_done asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_empty_line (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 198, in test_empty_line asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established 
connection was aborted by the software in your host machine ====================================================================== ERROR: test_line_terminator1 (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 126, in test_line_terminator1 self.line_terminator_check('\n', l) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 114, in line_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_line_terminator2 (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 131, in test_line_terminator2 self.line_terminator_check('\r\n', l) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 114, in line_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An 
established connection was aborted by the software in your host machine ====================================================================== ERROR: test_line_terminator3 (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 136, in test_line_terminator3 self.line_terminator_check('qqq', l) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 114, in line_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_none_terminator (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 165, in test_none_terminator asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: 
test_numeric_terminator1 (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 153, in test_numeric_terminator1 self.numeric_terminator_check(1) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 145, in numeric_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_numeric_terminator2 (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 156, in test_numeric_terminator2 self.numeric_terminator_check(6) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 145, in numeric_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine 
====================================================================== ERROR: test_simple_producer (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 177, in test_simple_producer asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_string_producer (test.test_asynchat.TestAsynchat) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 187, in test_string_producer asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_close_when_done (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 210, in test_close_when_done asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_empty_line (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 198, in test_empty_line asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_line_terminator1 (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 126, in test_line_terminator1 self.line_terminator_check('\n', l) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 114, in line_terminator_check 
asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_line_terminator2 (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 131, in test_line_terminator2 self.line_terminator_check('\r\n', l) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 114, in line_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_line_terminator3 (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 136, in test_line_terminator3 self.line_terminator_check('qqq', l) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 114, in 
line_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_none_terminator (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 165, in test_none_terminator asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_numeric_terminator1 (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 153, in test_numeric_terminator1 self.numeric_terminator_check(1) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 145, in numeric_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File 
"E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_numeric_terminator2 (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 156, in test_numeric_terminator2 self.numeric_terminator_check(6) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 145, in numeric_terminator_check asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_simple_producer (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 177, in test_simple_producer asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File 
"E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== ERROR: test_string_producer (test.test_asynchat.TestAsynchat_WithPoll) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_asynchat.py", line 187, in test_string_producer asyncore.loop(use_poll=self.usepoll, count=300, timeout=.01) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 195, in loop poll_fun(timeout, map) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 132, in poll read(obj) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 72, in read obj.handle_error() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 68, in read obj.handle_read_event() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 390, in handle_read_event self.handle_read() File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_ssl.py", line 527, in handle_read data = self.recv(1024) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\asyncore.py", line 342, in recv data = self.socket.recv(buffer_size) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 247, in recv return self.read(buflen) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\ssl.py", line 162, in read v = self._sslobj.read(len or 1024) socket.error: [Errno 10053] An established connection was aborted by the software in your host machine ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "E:\cygwin\home\db3l\buildarea\3.0.bolen-windows\build\lib\test\test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 09:59:19 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 08:59:19 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: 
<20080328085919.A49031E4014@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1107 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot

From g.brandl at gmx.net Fri Mar 28 10:22:36 2008 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 28 Mar 2008 10:22:36 +0100 Subject: [Python-checkins] r62000 - in python/trunk: Doc/library/sqlite3.rst Lib/test/test_sqlite.py Misc/ACKS Misc/NEWS Modules/_sqlite/connection.c In-Reply-To: <20080328083222.0FD6F1E4014@bag.python.org> References: <20080328083222.0FD6F1E4014@bag.python.org> Message-ID:

gregory.p.smith wrote:
> Author: gregory.p.smith
> Date: Fri Mar 28 09:32:09 2008
> New Revision: 62000
>
> Modified:
>    python/trunk/Doc/library/sqlite3.rst
>    python/trunk/Lib/test/test_sqlite.py
>    python/trunk/Misc/ACKS
>    python/trunk/Misc/NEWS
>    python/trunk/Modules/_sqlite/connection.c
> Log:
> Accept patch issue2426 by Paul Kippes (kippesp).
>
> Adds sqlite3.Connection.iterdump to allow dumping of databases.
>
>
> Modified: python/trunk/Doc/library/sqlite3.rst
> ==============================================================================
> --- python/trunk/Doc/library/sqlite3.rst (original)
> +++ python/trunk/Doc/library/sqlite3.rst Fri Mar 28 09:32:09 2008
> @@ -378,6 +378,27 @@
>    deleted since the database connection was opened.
>
>
> +.. attribute:: Connection.iterdump

Shouldn't that be a .. method?

> +   Returns an iterator to dump the database in an SQL text format. Useful when
> +   saving an in-memory database for later restoration. This function provides
> +   the same capabilities as the :kbd:`.dump` command in the :program:`sqlite3`
> +   shell.
> +
> +   .. versionadded:: 2.6
> +
> +   Example::
> +
> +      # Convert file existing_db.db to SQL dump file dump.sql
> +      import sqlite3, os
> +
> +      con = sqlite3.connect('existing_db.db')
> +      full_dump = os.linesep.join([line for line in con.iterdump()])
> +      f = open('dump.sql', 'w')
> +      f.writelines(full_dump)
> +      f.close()
> +
+

--
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out.
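For context, here is a minimal, self-contained sketch of the round trip that the new Connection.iterdump() API makes possible: dumping an in-memory database to a SQL script and replaying that script into a fresh connection with executescript(). It assumes only the sqlite3 module as shipped with Python 2.6 and later; the table, column, and variable names are illustrative and are not taken from the checkin under review::

    # Sketch of a dump/restore round trip with Connection.iterdump()
    # (requires the sqlite3 module from Python 2.6+; names below are
    # illustrative only, not taken from the patch being reviewed).
    import sqlite3

    src = sqlite3.connect(':memory:')
    src.execute("CREATE TABLE demo (id INTEGER PRIMARY KEY, name TEXT)")
    src.execute("INSERT INTO demo (name) VALUES ('example')")
    src.commit()

    # iterdump() yields complete SQL statements, so joining them with
    # newlines produces a script comparable to the sqlite3 shell's .dump.
    script = '\n'.join(src.iterdump())

    # The script can be written to a file, as in the documentation example
    # above, or replayed directly to restore the data elsewhere.
    dst = sqlite3.connect(':memory:')
    dst.executescript(script)
    print(dst.execute("SELECT name FROM demo").fetchall())

Because iterdump() yields whole statements, the same script string can equally be written out to a dump file and loaded later, which is the use case the patched documentation illustrates.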
From python-checkins at python.org Fri Mar 28 10:26:28 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 10:26:28 +0100 (CET) Subject: [Python-checkins] r62001 - doctools/trunk/TODO Message-ID: <20080328092628.80C421E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 10:26:28 2008 New Revision: 62001 Modified: doctools/trunk/TODO Log: Add todo items. Modified: doctools/trunk/TODO ============================================================================== --- doctools/trunk/TODO (original) +++ doctools/trunk/TODO Fri Mar 28 10:26:28 2008 @@ -4,6 +4,8 @@ Sphinx ****** +- autoattribute in autodoc +- range and object options for literalinclude - option for compact module index - HTML section numbers? - split the general index? From buildbot at python.org Fri Mar 28 10:31:39 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 09:31:39 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI 3.0 Message-ID: <20080328093145.530EE1E4014@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%203.0/builds/2 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 10:54:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 09:54:41 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080328095441.B321D1E4014@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/3077 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 11:18:29 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 10:18:29 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080328101829.B0C7C1E4014@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/261 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: jeffrey.yasskin,martin.v.loewis,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 11:24:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 10:24:59 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080328102500.10F961E4014@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1197 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 22 tests failed: test_array test_bufio test_cmd_line_script test_cookielib test_deque test_distutils test_file test_filecmp test_gzip test_hotshot test_iter test_logging test_mailbox test_marshal test_mmap test_set test_univnewlines test_urllib test_urllib2 test_uu test_zipfile test_zipimport ====================================================================== ERROR: test_zipfile (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 160, in test_zipfile self._check_script(zip_name, None, zip_name, '') File "C:\buildbot\work\trunk.heller-windows\build\lib\contextlib.py", line 33, in __exit__ self.gen.throw(type, value, traceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmpt1gqle\\test_zip.zip' ====================================================================== ERROR: test_zipfile_compiled (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 167, in test_zipfile_compiled self._check_script(zip_name, None, zip_name, '') File "C:\buildbot\work\trunk.heller-windows\build\lib\contextlib.py", line 33, in __exit__ 
self.gen.throw(type, value, traceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cmd_line_script.py", line 36, in temp_dir shutil.rmtree(dirname) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 184, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "c:\buildbot\work\trunk.heller-windows\build\lib\shutil.py", line 182, in rmtree os.remove(fullname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\theller\\locals~1\\temp\\tmp5sbenu\\test_zip.zip' ====================================================================== ERROR: test_maxlen (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 78, in test_maxlen fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 284, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testTruncateOnWindows (test.test_file.OtherFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_file.py", line 184, in testTruncateOnWindows os.unlink(TESTFN) WindowsError: [Error 5] Access is denied: '@test' ====================================================================== ERROR: testUnicodeOpen (test.test_file.OtherFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_file.py", line 145, in testUnicodeOpen f = open(unicode(TESTFN), "w") IOError: [Errno 13] Permission denied: u'@test' ====================================================================== ERROR: testExit (test.test_file.FileSubclassTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_file.py", line 337, in testExit with C(TESTFN, 'w') as f: File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_file.py", line 332, in __init__ file.__init__(self, *args) IOError: [Errno 13] Permission denied: '@test' ====================================================================== FAIL: testSetBufferSize (test.test_file.OtherFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_file.py", line 180, in testSetBufferSize self.fail('error setting buffer size %d: %s' % (s, str(msg))) AssertionError: error setting buffer size -1: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_different (test.test_filecmp.FileCompareTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_filecmp.py", line 13, in setUp 
output = open(name, 'w') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_matching (test.test_filecmp.FileCompareTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_filecmp.py", line 13, in setUp output = open(name, 'w') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_1647484 (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 158, in test_1647484 f = gzip.GzipFile(self.filename, mode) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_append (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 54, in test_append self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_many_append (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 65, in test_many_append f = gzip.open(self.filename, 'wb', 9) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 33, in open return GzipFile(filename, mode, compresslevel) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_mode (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 151, in test_mode self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write 
f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_whence (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 132, in test_seek_whence self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = 
__builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_addinfo (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 74, in test_addinfo profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bad_sys_path (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 118, in test_bad_sys_path self.assertRaises(RuntimeError, coverage, test_support.TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\unittest.py", line 329, in failUnlessRaises callableObj(*args, **kwargs) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_line_numbers (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 98, in test_line_numbers self.run_test(g, events, self.new_profiler(lineevents=1)) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_start_stop (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 104, in test_start_stop profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_list 
(test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 262, in test_builtin_list f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_map (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 408, in test_builtin_map f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_max_min (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 371, in test_builtin_max_min f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_tuple (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 295, in test_builtin_tuple f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_builtin_zip (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 455, in test_builtin_zip f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_countOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 617, in test_countOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_in_and_not_in (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 580, in test_in_and_not_in f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_indexOf (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 651, in test_indexOf f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_iter_file (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 232, in test_iter_file f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' 
====================================================================== ERROR: test_unicode_join_endcase (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 534, in test_unicode_join_endcase f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unpack_iter (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 760, in test_unpack_iter f = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_writelines (test.test_iter.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_iter.py", line 677, in test_writelines f = file(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 ====================================================================== ERROR: test_floats (test.test_marshal.FloatTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 70, in test_floats marshal.dump(f, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_buffer (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 135, in test_buffer marshal.dump(b, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_string (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 124, in test_string marshal.dump(s, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_unicode (test.test_marshal.StringTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 113, in test_unicode marshal.dump(s, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== 
ERROR: test_dict (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 164, in test_dict marshal.dump(self.d, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_list (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 173, in test_list marshal.dump(lst, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_sets (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 194, in test_sets marshal.dump(t, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tuple (test.test_marshal.ContainerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_marshal.py", line 182, in test_tuple marshal.dump(t, file(test_support.TESTFN, "wb")) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_set.TestBasicOpsEmpty) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 629, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_set.TestBasicOpsSingleton) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 629, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_set.TestBasicOpsTuple) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 629, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_set.TestBasicOpsTriple) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 629, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_file (test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib2.py", line 612, in test_file f = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testLowCompression 
(test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: 
[Errno 13] Permission denied: '@test' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 286, in test_PerFileCompression zipfp.write(TESTFN, 'storeme', zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 279, in test_WriteDefaultName zipfp.write(TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 627, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 
extensions ====================================================================== ERROR: testCreateNonExistentFileForAppend (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 565, in testCreateNonExistentFileForAppend zf.writestr(filename, content) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIsZipValidFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 604, in testIsZipValidFile zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_BadOpenMode (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 648, in test_BadOpenMode zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_NullByteInFilename (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 682, in test_NullByteInFilename zipf.writestr("foo.txt\x00qqq", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_Read0 (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 660, in test_Read0 zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: 
Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 490, in testWritePyfile zipfp.writepy(fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1174, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 538, in testWritePythonDirectory zipfp.writepy(TESTFN2) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1166, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 515, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1137, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testGoodPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 719, in setUp self.zip = zipfile.ZipFile(TESTFN, "r") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 615, in __init__ self._GetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 635, in _GetContents self._RealGetContents() File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 672, in _RealGetContents raise BadZipfile, "Bad magic number for central directory" BadZipfile: Bad magic number for central directory ====================================================================== ERROR: testDifferentFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testInterleaved (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testSameFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: 
testIterlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 990, in testIterlinesDeflated self.iterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 973, in testIterlinesStored self.iterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 978, in testReadDeflated self.readTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 961, in testReadStored self.readTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 982, in testReadlineDeflated self.readlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 965, in testReadlineStored self.readlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 986, in testReadlinesDeflated self.readlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 969, in testReadlinesStored self.readlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 818, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 787, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 840, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 821, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 784, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 772, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMTime (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBoth (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions 
====================================================================== ERROR: testDeepPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestFile (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 91, in doTest ["__dummy__"]) ImportError: No module named ziptestmodule ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 
'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMTime (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 12:27:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 
2008 11:27:06 +0000
Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0
Message-ID: <20080328112706.498121E4009@bag.python.org>

The Buildbot has detected a new failure of ppc Debian unstable 3.0.
Full details are available at:
 http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/713

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: klose-debian-ppc

Build Reason:
Build Source Stamp: [branch branches/py3k] HEAD
Blamelist: christian.heimes

BUILD FAILED: failed test

Excerpt from the test logfile:
1 test failed:
    test_logging

======================================================================
FAIL: test_flush (test.test_logging.MemoryHandlerTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_logging.py", line 468, in test_flush
    self.assert_log_lines(lines)
  File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_logging.py", line 112, in assert_log_lines
    self.assertEquals(len(actual_lines), len(expected_values))
AssertionError: 0 != 3

make: *** [buildbottest] Error 1

sincerely,
 -The Buildbot

From buildbot at python.org  Fri Mar 28 12:53:08 2008
From: buildbot at python.org (buildbot at python.org)
Date: Fri, 28 Mar 2008 11:53:08 +0000
Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk
Message-ID: <20080328115308.766E41E4014@bag.python.org>

The Buildbot has detected a new failure of sparc Ubuntu trunk.
Full details are available at:
 http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/415

Buildbot URL: http://www.python.org/dev/buildbot/all/

Buildslave for this Build: klose-ubuntu-sparc

Build Reason:
Build Source Stamp: [branch trunk] HEAD
Blamelist: georg.brandl,gregory.p.smith,neal.norwitz

BUILD FAILED: failed test

Excerpt from the test logfile:
1 test failed:
    test_logging

======================================================================
FAIL: test_flush (test.test_logging.MemoryHandlerTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_logging.py", line 468, in test_flush
    self.assert_log_lines(lines)
  File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_logging.py", line 112, in assert_log_lines
    self.assertEquals(len(actual_lines), len(expected_values))
AssertionError: 0 != 3

make: *** [buildbottest] Error 1

sincerely,
 -The Buildbot

From python-checkins at python.org  Fri Mar 28 13:11:57 2008
From: python-checkins at python.org (georg.brandl)
Date: Fri, 28 Mar 2008 13:11:57 +0100 (CET)
Subject: [Python-checkins] r62004 - in python/trunk: Doc/library/_ast.rst Doc/library/compiler.rst Doc/library/functions.rst Doc/library/language.rst Include/Python-ast.h Lib/test/test_compile.py Misc/NEWS Parser/asdl_c.py Python/Python-ast.c Python/bltinmodule.c Python/compile.c
Message-ID: <20080328121157.82FD51E4037@bag.python.org>

Author: georg.brandl
Date: Fri Mar 28 13:11:56 2008
New Revision: 62004

Modified:
   python/trunk/Doc/library/_ast.rst
   python/trunk/Doc/library/compiler.rst
   python/trunk/Doc/library/functions.rst
   python/trunk/Doc/library/language.rst
   python/trunk/Include/Python-ast.h
   python/trunk/Lib/test/test_compile.py
   python/trunk/Misc/NEWS
   python/trunk/Parser/asdl_c.py
   python/trunk/Python/Python-ast.c
   python/trunk/Python/bltinmodule.c
   python/trunk/Python/compile.c
Log: Patch #1810 by Thomas Lee,
reviewed by myself: allow compiling Python AST objects into code objects in compile(). Modified: python/trunk/Doc/library/_ast.rst ============================================================================== --- python/trunk/Doc/library/_ast.rst (original) +++ python/trunk/Doc/library/_ast.rst Fri Mar 28 13:11:56 2008 @@ -12,16 +12,16 @@ .. versionadded:: 2.5 The ``_ast`` module helps Python applications to process trees of the Python -abstract syntax grammar. The Python compiler currently provides read-only access -to such trees, meaning that applications can only create a tree for a given -piece of Python source code; generating :term:`bytecode` from a (potentially modified) -tree is not supported. The abstract syntax itself might change with each Python -release; this module helps to find out programmatically what the current grammar -looks like. - -An abstract syntax tree can be generated by passing ``_ast.PyCF_ONLY_AST`` as a -flag to the :func:`compile` builtin function. The result will be a tree of -objects whose classes all inherit from ``_ast.AST``. +abstract syntax grammar. The abstract syntax itself might change with each +Python release; this module helps to find out programmatically what the current +grammar looks like. + +An abstract syntax tree can be generated by passing :data:`_ast.PyCF_ONLY_AST` +as a flag to the :func:`compile` builtin function. The result will be a tree of +objects whose classes all inherit from :class:`_ast.AST`. + +A modified abstract syntax tree can be compiled into a Python code object using +the built-in :func:`compile` function. The actual classes are derived from the ``Parser/Python.asdl`` file, which is reproduced below. There is one class defined for each left-hand side symbol in @@ -41,12 +41,15 @@ ``_ast.stmt`` subclasses also have lineno and col_offset attributes. The lineno is the line number of source text (1 indexed so the first line is line 1) and the col_offset is the utf8 byte offset of the first token that generated the -node. The utf8 offset is recorded because the parser uses utf8 internally. +node. The utf8 offset is recorded because the parser uses utf8 internally. If these attributes are marked as optional in the grammar (using a question mark), the value might be ``None``. If the attributes can have zero-or-more values (marked with an asterisk), the values are represented as Python lists. +The constructors of all ``_ast`` classes don't take arguments; instead, if you +create instances, you must assign the required attributes separately. + Abstract Grammar ---------------- Modified: python/trunk/Doc/library/compiler.rst ============================================================================== --- python/trunk/Doc/library/compiler.rst (original) +++ python/trunk/Doc/library/compiler.rst Fri Mar 28 13:11:56 2008 @@ -28,12 +28,6 @@ This chapter explains how the various components of the :mod:`compiler` package work. It blends reference material with a tutorial. -The following modules are part of the :mod:`compiler` package: - -.. toctree:: - - _ast.rst - The basic interface =================== Modified: python/trunk/Doc/library/functions.rst ============================================================================== --- python/trunk/Doc/library/functions.rst (original) +++ python/trunk/Doc/library/functions.rst Fri Mar 28 13:11:56 2008 @@ -190,21 +190,27 @@ .. function:: compile(source, filename, mode[, flags[, dont_inherit]]) - Compile the *source* into a code object. 
Code objects can be executed by an - :keyword:`exec` statement or evaluated by a call to :func:`eval`. The - *filename* argument should give the file from which the code was read; pass some - recognizable value if it wasn't read from a file (``''`` is commonly - used). The *mode* argument specifies what kind of code must be compiled; it can - be ``'exec'`` if *source* consists of a sequence of statements, ``'eval'`` if it - consists of a single expression, or ``'single'`` if it consists of a single - interactive statement (in the latter case, expression statements that evaluate - to something else than ``None`` will be printed). + Compile the *source* into a code or AST object. Code objects can be executed + by an :keyword:`exec` statement or evaluated by a call to :func:`eval`. + *source* can either be a string or an AST object. Refer to the :mod:`_ast` + module documentation for information on how to compile into and from AST + objects. + + When compiling a string with multi-line statements, two caveats apply: line + endings must be represented by a single newline character (``'\n'``), and the + input must be terminated by at least one newline character. If line endings + are represented by ``'\r\n'``, use the string :meth:`replace` method to + change them into ``'\n'``. + + The *filename* argument should give the file from which the code was read; + pass some recognizable value if it wasn't read from a file (``''`` is + commonly used). - When compiling multi-line statements, two caveats apply: line endings must be - represented by a single newline character (``'\n'``), and the input must be - terminated by at least one newline character. If line endings are represented - by ``'\r\n'``, use the string :meth:`replace` method to change them into - ``'\n'``. + The *mode* argument specifies what kind of code must be compiled; it can be + ``'exec'`` if *source* consists of a sequence of statements, ``'eval'`` if it + consists of a single expression, or ``'single'`` if it consists of a single + interactive statement (in the latter case, expression statements that + evaluate to something else than ``None`` will be printed). The optional arguments *flags* and *dont_inherit* (which are new in Python 2.2) control which future statements (see :pep:`236`) affect the compilation of @@ -224,6 +230,9 @@ This function raises :exc:`SyntaxError` if the compiled source is invalid, and :exc:`TypeError` if the source contains null bytes. + .. versionadded:: 2.6 + Support for compiling AST objects. + .. function:: complex([real[, imag]]) Modified: python/trunk/Doc/library/language.rst ============================================================================== --- python/trunk/Doc/library/language.rst (original) +++ python/trunk/Doc/library/language.rst Fri Mar 28 13:11:56 2008 @@ -15,6 +15,7 @@ .. 
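As a quick sketch of the behaviour the documentation changes above describe (an illustration only, not part of the patch; the filename '<example>' and the namespace dict ns are arbitrary placeholders):

    import _ast

    source = "x = 6 * 7\n"
    # If the source came from a file with '\r\n' line endings, normalize it
    # first, e.g. source = source.replace('\r\n', '\n').

    # Step 1: ask compile() for an AST instead of a code object.
    tree = compile(source, "<example>", "exec", _ast.PyCF_ONLY_AST)
    assert isinstance(tree, _ast.Module)

    # Step 2 (new in 2.6 with this patch): hand the possibly modified tree
    # back to compile() to get an ordinary code object.
    code = compile(tree, "<example>", "exec")

    ns = {}
    exec code in ns
    assert ns["x"] == 42

The test_compile_ast test added to Lib/test/test_compile.py in this same revision exercises the identical round trip and additionally checks that the code object compiled from the AST equals the one compiled directly from the source string.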
toctree:: parser.rst + _ast.rst symbol.rst token.rst keyword.rst Modified: python/trunk/Include/Python-ast.h ============================================================================== --- python/trunk/Include/Python-ast.h (original) +++ python/trunk/Include/Python-ast.h Fri Mar 28 13:11:56 2008 @@ -501,3 +501,5 @@ alias_ty _Py_alias(identifier name, identifier asname, PyArena *arena); PyObject* PyAST_mod2obj(mod_ty t); +mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena); +int PyAST_Check(PyObject* obj); Modified: python/trunk/Lib/test/test_compile.py ============================================================================== --- python/trunk/Lib/test/test_compile.py (original) +++ python/trunk/Lib/test/test_compile.py Fri Mar 28 13:11:56 2008 @@ -1,5 +1,6 @@ import unittest import sys +import _ast from test import test_support class TestSpecifics(unittest.TestCase): @@ -416,6 +417,32 @@ self.assert_("_A__mangled_mod" in A.f.func_code.co_varnames) self.assert_("__package__" in A.f.func_code.co_varnames) + def test_compile_ast(self): + fname = __file__ + if fname.lower().endswith(('pyc', 'pyo')): + fname = fname[:-1] + with open(fname, 'r') as f: + fcontents = f.read() + sample_code = [ + ['', 'x = 5'], + ['', 'print 1'], + ['', 'print v'], + ['', 'print True'], + ['', 'print []'], + ['', """if True:\n pass\n"""], + ['', """for n in [1, 2, 3]:\n print n\n"""], + ['', """def foo():\n pass\nfoo()\n"""], + [fname, fcontents], + ] + + for fname, code in sample_code: + co1 = compile(code, '%s1' % fname, 'exec') + ast = compile(code, '%s2' % fname, 'exec', _ast.PyCF_ONLY_AST) + self.assert_(type(ast) == _ast.Module) + co2 = compile(ast, '%s3' % fname, 'exec') + self.assertEqual(co1, co2) + + def test_main(): test_support.run_unittest(TestSpecifics) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 28 13:11:56 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Patch #1810: compile() can now compile _ast trees as returned by + compile(..., PyCF_ONLY_AST). + - Patch #2426: Added sqlite3.Connection.iterdump method to allow easy dumping of databases. Contributed by Paul Kippes at PyCon 2008. Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Fri Mar 28 13:11:56 2008 @@ -73,12 +73,12 @@ A sum is simple if its types have no fields, e.g. 
unaryop = Invert | Not | UAdd | USub """ - for t in sum.types: if t.fields: return False return True + class EmitVisitor(asdl.VisitorBase): """Visit that emits lines""" @@ -96,6 +96,7 @@ line = (" " * TABSIZE * depth) + line + "\n" self.file.write(line) + class TypeDefVisitor(EmitVisitor): def visitModule(self, mod): for dfn in mod.dfns: @@ -133,6 +134,7 @@ self.emit(s, depth) self.emit("", depth) + class StructVisitor(EmitVisitor): """Visitor to generate typdefs for AST.""" @@ -202,6 +204,7 @@ self.emit("};", depth) self.emit("", depth) + class PrototypeVisitor(EmitVisitor): """Generate function prototypes for the .h file""" @@ -271,6 +274,7 @@ self.emit_function(name, get_c_type(name), self.get_args(prod.fields), [], union=0) + class FunctionVisitor(PrototypeVisitor): """Visitor to generate constructor functions for AST.""" @@ -325,6 +329,7 @@ emit("p->%s = %s;" % (argname, argname), 1) assert not attrs + class PickleVisitor(EmitVisitor): def visitModule(self, mod): @@ -346,6 +351,181 @@ def visitField(self, sum): pass + +class Obj2ModPrototypeVisitor(PickleVisitor): + def visitProduct(self, prod, name): + code = "static int obj2ast_%s(PyObject* obj, %s* out, PyArena* arena);" + self.emit(code % (name, get_c_type(name)), 0) + + visitSum = visitProduct + + +class Obj2ModVisitor(PickleVisitor): + def funcHeader(self, name): + ctype = get_c_type(name) + self.emit("int", 0) + self.emit("obj2ast_%s(PyObject* obj, %s* out, PyArena* arena)" % (name, ctype), 0) + self.emit("{", 0) + self.emit("PyObject* tmp = NULL;", 1) + self.emit("", 0) + + def sumTrailer(self, name): + self.emit("", 0) + self.emit("tmp = PyObject_Repr(obj);", 1) + # there's really nothing more we can do if this fails ... + self.emit("if (tmp == NULL) goto failed;", 1) + error = "expected some sort of %s, but got %%.400s" % name + format = "PyErr_Format(PyExc_TypeError, \"%s\", PyString_AS_STRING(tmp));" + self.emit(format % error, 1, reflow=False) + self.emit("failed:", 0) + self.emit("Py_XDECREF(tmp);", 1) + self.emit("return 1;", 1) + self.emit("}", 0) + self.emit("", 0) + + def simpleSum(self, sum, name): + self.funcHeader(name) + for t in sum.types: + self.emit("if (PyObject_IsInstance(obj, (PyObject*)%s_type)) {" % t.name, 1) + self.emit("*out = %s;" % t.name, 2) + self.emit("return 0;", 2) + self.emit("}", 1) + self.sumTrailer(name) + + def buildArgs(self, fields): + return ", ".join(fields + ["arena"]) + + def complexSum(self, sum, name): + self.funcHeader(name) + for a in sum.attributes: + self.visitAttributeDeclaration(a, name, sum=sum) + self.emit("", 0) + # XXX: should we only do this for 'expr'? 
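# Note on the converters this visitor generates: because _ast node
# constructors take no arguments, a tree built by hand must assign every
# required field (and, for statements, the lineno/col_offset attributes)
# explicitly, otherwise the emitted "required field ... missing from ..."
# TypeError is raised.  A minimal sketch, assuming the Python 2.6 _ast API
# described earlier in this patch (the filename string is arbitrary; this
# is an illustration, not part of the patch):
import _ast

mod = _ast.Module()
stmt = _ast.Pass()
stmt.lineno = 1           # required stmt attributes checked by obj2ast_stmt()
stmt.col_offset = 0
mod.body = [stmt]         # required Module field checked by obj2ast_mod()
code = compile(mod, "<hand-built>", "exec")   # new in 2.6: compile() accepts the AST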
+ self.emit("if (obj == Py_None) {", 1) + self.emit("*out = NULL;", 2) + self.emit("return 0;", 2) + self.emit("}", 1) + for a in sum.attributes: + self.visitField(a, name, sum=sum, depth=1) + for t in sum.types: + self.emit("if (PyObject_IsInstance(obj, (PyObject*)%s_type)) {" % t.name, 1) + for f in t.fields: + self.visitFieldDeclaration(f, t.name, sum=sum, depth=2) + self.emit("", 0) + for f in t.fields: + self.visitField(f, t.name, sum=sum, depth=2) + args = [f.name.value for f in t.fields] + [a.name.value for a in sum.attributes] + self.emit("*out = %s(%s);" % (t.name, self.buildArgs(args)), 2) + self.emit("if (*out == NULL) goto failed;", 2) + self.emit("return 0;", 2) + self.emit("}", 1) + self.sumTrailer(name) + + def visitAttributeDeclaration(self, a, name, sum=sum): + ctype = get_c_type(a.type) + self.emit("%s %s;" % (ctype, a.name), 1) + + def visitSum(self, sum, name): + if is_simple(sum): + self.simpleSum(sum, name) + else: + self.complexSum(sum, name) + + def visitProduct(self, prod, name): + ctype = get_c_type(name) + self.emit("int", 0) + self.emit("obj2ast_%s(PyObject* obj, %s* out, PyArena* arena)" % (name, ctype), 0) + self.emit("{", 0) + self.emit("PyObject* tmp = NULL;", 1) + for f in prod.fields: + self.visitFieldDeclaration(f, name, prod=prod, depth=1) + self.emit("", 0) + for f in prod.fields: + self.visitField(f, name, prod=prod, depth=1) + args = [f.name.value for f in prod.fields] + self.emit("*out = %s(%s);" % (name, self.buildArgs(args)), 1) + self.emit("return 0;", 1) + self.emit("failed:", 0) + self.emit("Py_XDECREF(tmp);", 1) + self.emit("return 1;", 1) + self.emit("}", 0) + self.emit("", 0) + + def visitFieldDeclaration(self, field, name, sum=None, prod=None, depth=0): + ctype = get_c_type(field.type) + if field.seq: + if self.isSimpleType(field): + self.emit("asdl_int_seq* %s;" % field.name, depth) + else: + self.emit("asdl_seq* %s;" % field.name, depth) + else: + ctype = get_c_type(field.type) + self.emit("%s %s;" % (ctype, field.name), depth) + + def isSimpleSum(self, field): + # XXX can the members of this list be determined automatically? 
+ return field.type.value in ('expr_context', 'boolop', 'operator', + 'unaryop', 'cmpop') + + def isNumeric(self, field): + return get_c_type(field.type) in ("int", "bool") + + def isSimpleType(self, field): + return self.isSimpleSum(field) or self.isNumeric(field) + + def visitField(self, field, name, sum=None, prod=None, depth=0): + ctype = get_c_type(field.type) + self.emit("if (PyObject_HasAttrString(obj, \"%s\")) {" % field.name, depth) + self.emit("int res;", depth+1) + if field.seq: + self.emit("Py_ssize_t len;", depth+1) + self.emit("Py_ssize_t i;", depth+1) + self.emit("tmp = PyObject_GetAttrString(obj, \"%s\");" % field.name, depth+1) + self.emit("if (tmp == NULL) goto failed;", depth+1) + if field.seq: + self.emit("if (!PyList_Check(tmp)) {", depth+1) + self.emit("PyErr_Format(PyExc_TypeError, \"%s field \\\"%s\\\" must " + "be a list, not a %%.200s\", tmp->ob_type->tp_name);" % + (name, field.name), + depth+2, reflow=False) + self.emit("goto failed;", depth+2) + self.emit("}", depth+1) + self.emit("len = PyList_GET_SIZE(tmp);", depth+1) + if self.isSimpleType(field): + self.emit("%s = asdl_int_seq_new(len, arena);" % field.name, depth+1) + else: + self.emit("%s = asdl_seq_new(len, arena);" % field.name, depth+1) + self.emit("if (%s == NULL) goto failed;" % field.name, depth+1) + self.emit("for (i = 0; i < len; i++) {", depth+1) + self.emit("%s value;" % ctype, depth+2) + self.emit("res = obj2ast_%s(PyList_GET_ITEM(tmp, i), &value, arena);" % + field.type, depth+2, reflow=False) + self.emit("if (res != 0) goto failed;", depth+2) + self.emit("asdl_seq_SET(%s, i, value);" % field.name, depth+2) + self.emit("}", depth+1) + else: + self.emit("res = obj2ast_%s(tmp, &%s, arena);" % + (field.type, field.name), depth+1) + self.emit("if (res != 0) goto failed;", depth+1) + + self.emit("Py_XDECREF(tmp);", depth+1) + self.emit("tmp = NULL;", depth+1) + self.emit("} else {", depth) + if not field.opt: + message = "required field \\\"%s\\\" missing from %s" % (field.name, name) + format = "PyErr_SetString(PyExc_TypeError, \"%s\");" + self.emit(format % message, depth+1, reflow=False) + self.emit("return 1;", depth+1) + else: + if self.isNumeric(field): + self.emit("%s = 0;" % field.name, depth+1) + elif not self.isSimpleType(field): + self.emit("%s = NULL;" % field.name, depth+1) + else: + raise TypeError("could not determine the default value for %s" % field.name) + self.emit("}", depth) + + class MarshalPrototypeVisitor(PickleVisitor): def prototype(self, sum, name): @@ -355,6 +535,7 @@ visitProduct = visitSum = prototype + class PyTypesDeclareVisitor(PickleVisitor): def visitProduct(self, prod, name): @@ -440,6 +621,8 @@ return result; } +/* Conversion AST -> Python */ + static PyObject* ast2obj_list(asdl_seq *seq, PyObject* (*func)(void*)) { int i, n = asdl_seq_LEN(seq); @@ -476,6 +659,57 @@ { return PyInt_FromLong(b); } + +/* Conversion Python -> AST */ + +static int obj2ast_object(PyObject* obj, PyObject** out, PyArena* arena) +{ + if (obj == Py_None) + obj = NULL; + if (obj) + PyArena_AddPyObject(arena, obj); + Py_XINCREF(obj); + *out = obj; + return 0; +} + +#define obj2ast_identifier obj2ast_object +#define obj2ast_string obj2ast_object + +static int obj2ast_int(PyObject* obj, int* out, PyArena* arena) +{ + int i; + if (!PyInt_Check(obj) && !PyLong_Check(obj)) { + PyObject *s = PyObject_Repr(obj); + if (s == NULL) return 1; + PyErr_Format(PyExc_ValueError, "invalid integer value: %.400s", + PyString_AS_STRING(s)); + Py_DECREF(s); + return 1; + } + + i = (int)PyLong_AsLong(obj); + if 
(i == -1 && PyErr_Occurred()) + return 1; + *out = i; + return 0; +} + +static int obj2ast_bool(PyObject* obj, bool* out, PyArena* arena) +{ + if (!PyBool_Check(obj)) { + PyObject *s = PyObject_Repr(obj); + if (s == NULL) return 1; + PyErr_Format(PyExc_ValueError, "invalid boolean value: %.400s", + PyString_AS_STRING(s)); + Py_DECREF(s); + return 1; + } + + *out = (obj == Py_True); + return 0; +} + """, 0, reflow=False) self.emit("static int init_types(void)",0) @@ -523,6 +757,7 @@ (cons.name, cons.name), 1) self.emit("if (!%s_singleton) return 0;" % cons.name, 1) + def parse_version(mod): return mod.version.value[12:-3] @@ -562,6 +797,7 @@ def addObj(self, name): self.emit('if (PyDict_SetItemString(d, "%s", (PyObject*)%s_type) < 0) return;' % (name, name), 1) + _SPECIALIZED_SEQUENCES = ('stmt', 'expr') def find_sequence(fields, doing_specialization): @@ -587,6 +823,7 @@ def visit(self, object): self.emit(self.CODE, 0, reflow=False) + class ObjVisitor(PickleVisitor): def func_begin(self, name): @@ -637,8 +874,12 @@ self.emit("case %s:" % t.name, 2) self.emit("Py_INCREF(%s_singleton);" % t.name, 3) self.emit("return %s_singleton;" % t.name, 3) + self.emit("default:" % name, 2) + self.emit('/* should never happen, but just in case ... */', 3) + code = "PyErr_Format(PyExc_SystemError, \"unknown %s found\");" % name + self.emit(code, 3, reflow=False) + self.emit("return NULL;", 3) self.emit("}", 1) - self.emit("return NULL; /* cannot happen */", 1) self.emit("}", 0) def visitProduct(self, prod, name): @@ -712,6 +953,27 @@ init_types(); return ast2obj_mod(t); } + +mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena) +{ + mod_ty res; + init_types(); + if (!PyObject_IsInstance(ast, mod_type)) { + PyErr_SetString(PyExc_TypeError, "expected either Module, Interactive " + "or Expression node"); + return NULL; + } + if (obj2ast_mod(ast, &res, arena) != 0) + return NULL; + else + return res; +} + +int PyAST_Check(PyObject* obj) +{ + init_types(); + return PyObject_IsInstance(obj, (PyObject*)AST_type); +} """ class ChainOfVisitors: @@ -754,6 +1016,8 @@ ) c.visit(mod) print >>f, "PyObject* PyAST_mod2obj(mod_ty t);" + print >>f, "mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena);" + print >>f, "int PyAST_Check(PyObject* obj);" f.close() if SRC_DIR: @@ -768,8 +1032,10 @@ v = ChainOfVisitors( PyTypesDeclareVisitor(f), PyTypesVisitor(f), + Obj2ModPrototypeVisitor(f), FunctionVisitor(f), ObjVisitor(f), + Obj2ModVisitor(f), ASTModuleVisitor(f), PartingShots(f), ) Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Fri Mar 28 13:11:56 2008 @@ -409,6 +409,8 @@ return result; } +/* Conversion AST -> Python */ + static PyObject* ast2obj_list(asdl_seq *seq, PyObject* (*func)(void*)) { int i, n = asdl_seq_LEN(seq); @@ -446,6 +448,57 @@ return PyInt_FromLong(b); } +/* Conversion Python -> AST */ + +static int obj2ast_object(PyObject* obj, PyObject** out, PyArena* arena) +{ + if (obj == Py_None) + obj = NULL; + if (obj) + PyArena_AddPyObject(arena, obj); + Py_XINCREF(obj); + *out = obj; + return 0; +} + +#define obj2ast_identifier obj2ast_object +#define obj2ast_string obj2ast_object + +static int obj2ast_int(PyObject* obj, int* out, PyArena* arena) +{ + int i; + if (!PyInt_Check(obj) && !PyLong_Check(obj)) { + PyObject *s = PyObject_Repr(obj); + if (s == NULL) return 1; + PyErr_Format(PyExc_ValueError, "invalid integer value: %.400s", + PyString_AS_STRING(s)); + 
Py_DECREF(s); + return 1; + } + + i = (int)PyLong_AsLong(obj); + if (i == -1 && PyErr_Occurred()) + return 1; + *out = i; + return 0; +} + +static int obj2ast_bool(PyObject* obj, bool* out, PyArena* arena) +{ + if (!PyBool_Check(obj)) { + PyObject *s = PyObject_Repr(obj); + if (s == NULL) return 1; + PyErr_Format(PyExc_ValueError, "invalid boolean value: %.400s", + PyString_AS_STRING(s)); + Py_DECREF(s); + return 1; + } + + *out = (obj == Py_True); + return 0; +} + + static int init_types(void) { static int initialized; @@ -736,6 +789,24 @@ return 1; } +static int obj2ast_mod(PyObject* obj, mod_ty* out, PyArena* arena); +static int obj2ast_stmt(PyObject* obj, stmt_ty* out, PyArena* arena); +static int obj2ast_expr(PyObject* obj, expr_ty* out, PyArena* arena); +static int obj2ast_expr_context(PyObject* obj, expr_context_ty* out, PyArena* + arena); +static int obj2ast_slice(PyObject* obj, slice_ty* out, PyArena* arena); +static int obj2ast_boolop(PyObject* obj, boolop_ty* out, PyArena* arena); +static int obj2ast_operator(PyObject* obj, operator_ty* out, PyArena* arena); +static int obj2ast_unaryop(PyObject* obj, unaryop_ty* out, PyArena* arena); +static int obj2ast_cmpop(PyObject* obj, cmpop_ty* out, PyArena* arena); +static int obj2ast_comprehension(PyObject* obj, comprehension_ty* out, PyArena* + arena); +static int obj2ast_excepthandler(PyObject* obj, excepthandler_ty* out, PyArena* + arena); +static int obj2ast_arguments(PyObject* obj, arguments_ty* out, PyArena* arena); +static int obj2ast_keyword(PyObject* obj, keyword_ty* out, PyArena* arena); +static int obj2ast_alias(PyObject* obj, alias_ty* out, PyArena* arena); + mod_ty Module(asdl_seq * body, PyArena *arena) { @@ -2600,8 +2671,11 @@ case Param: Py_INCREF(Param_singleton); return Param_singleton; + default: + /* should never happen, but just in case ... */ + PyErr_Format(PyExc_SystemError, "unknown expr_context found"); + return NULL; } - return NULL; /* cannot happen */ } PyObject* ast2obj_slice(void* _o) @@ -2672,8 +2746,11 @@ case Or: Py_INCREF(Or_singleton); return Or_singleton; + default: + /* should never happen, but just in case ... */ + PyErr_Format(PyExc_SystemError, "unknown boolop found"); + return NULL; } - return NULL; /* cannot happen */ } PyObject* ast2obj_operator(operator_ty o) { @@ -2714,8 +2791,11 @@ case FloorDiv: Py_INCREF(FloorDiv_singleton); return FloorDiv_singleton; + default: + /* should never happen, but just in case ... */ + PyErr_Format(PyExc_SystemError, "unknown operator found"); + return NULL; } - return NULL; /* cannot happen */ } PyObject* ast2obj_unaryop(unaryop_ty o) { @@ -2732,8 +2812,11 @@ case USub: Py_INCREF(USub_singleton); return USub_singleton; + default: + /* should never happen, but just in case ... */ + PyErr_Format(PyExc_SystemError, "unknown unaryop found"); + return NULL; } - return NULL; /* cannot happen */ } PyObject* ast2obj_cmpop(cmpop_ty o) { @@ -2768,8 +2851,11 @@ case NotIn: Py_INCREF(NotIn_singleton); return NotIn_singleton; + default: + /* should never happen, but just in case ... 
*/ + PyErr_Format(PyExc_SystemError, "unknown cmpop found"); + return NULL; } - return NULL; /* cannot happen */ } PyObject* ast2obj_comprehension(void* _o) @@ -2947,6 +3033,2755 @@ } +int +obj2ast_mod(PyObject* obj, mod_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + + if (obj == Py_None) { + *out = NULL; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Module_type)) { + asdl_seq* body; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Module field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from Module"); + return 1; + } + *out = Module(body, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Interactive_type)) { + asdl_seq* body; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Interactive field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from Interactive"); + return 1; + } + *out = Interactive(body, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Expression_type)) { + expr_ty body; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &body, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from Expression"); + return 1; + } + *out = Expression(body, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Suite_type)) { + asdl_seq* body; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Suite field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from Suite"); + return 1; + } + *out = Suite(body, arena); + if (*out == NULL) goto failed; + return 0; + } + + 
tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of mod, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_stmt(PyObject* obj, stmt_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + int lineno; + int col_offset; + + if (obj == Py_None) { + *out = NULL; + return 0; + } + if (PyObject_HasAttrString(obj, "lineno")) { + int res; + tmp = PyObject_GetAttrString(obj, "lineno"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &lineno, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"lineno\" missing from stmt"); + return 1; + } + if (PyObject_HasAttrString(obj, "col_offset")) { + int res; + tmp = PyObject_GetAttrString(obj, "col_offset"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &col_offset, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"col_offset\" missing from stmt"); + return 1; + } + if (PyObject_IsInstance(obj, (PyObject*)FunctionDef_type)) { + identifier name; + arguments_ty args; + asdl_seq* body; + asdl_seq* decorator_list; + + if (PyObject_HasAttrString(obj, "name")) { + int res; + tmp = PyObject_GetAttrString(obj, "name"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &name, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"name\" missing from FunctionDef"); + return 1; + } + if (PyObject_HasAttrString(obj, "args")) { + int res; + tmp = PyObject_GetAttrString(obj, "args"); + if (tmp == NULL) goto failed; + res = obj2ast_arguments(tmp, &args, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"args\" missing from FunctionDef"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "FunctionDef field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from FunctionDef"); + return 1; + } + if (PyObject_HasAttrString(obj, "decorator_list")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "decorator_list"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "FunctionDef field \"decorator_list\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + decorator_list = asdl_seq_new(len, arena); + if (decorator_list == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(decorator_list, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"decorator_list\" missing from FunctionDef"); + return 1; + } + *out = 
FunctionDef(name, args, body, decorator_list, lineno, + col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)ClassDef_type)) { + identifier name; + asdl_seq* bases; + asdl_seq* body; + asdl_seq* decorator_list; + + if (PyObject_HasAttrString(obj, "name")) { + int res; + tmp = PyObject_GetAttrString(obj, "name"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &name, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"name\" missing from ClassDef"); + return 1; + } + if (PyObject_HasAttrString(obj, "bases")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "bases"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ClassDef field \"bases\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + bases = asdl_seq_new(len, arena); + if (bases == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(bases, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"bases\" missing from ClassDef"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ClassDef field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from ClassDef"); + return 1; + } + if (PyObject_HasAttrString(obj, "decorator_list")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "decorator_list"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ClassDef field \"decorator_list\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + decorator_list = asdl_seq_new(len, arena); + if (decorator_list == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(decorator_list, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"decorator_list\" missing from ClassDef"); + return 1; + } + *out = ClassDef(name, bases, body, decorator_list, lineno, + col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Return_type)) { + expr_ty value; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + value = NULL; + } + *out = Return(value, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, 
(PyObject*)Delete_type)) { + asdl_seq* targets; + + if (PyObject_HasAttrString(obj, "targets")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "targets"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Delete field \"targets\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + targets = asdl_seq_new(len, arena); + if (targets == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(targets, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"targets\" missing from Delete"); + return 1; + } + *out = Delete(targets, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Assign_type)) { + asdl_seq* targets; + expr_ty value; + + if (PyObject_HasAttrString(obj, "targets")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "targets"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Assign field \"targets\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + targets = asdl_seq_new(len, arena); + if (targets == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(targets, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"targets\" missing from Assign"); + return 1; + } + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from Assign"); + return 1; + } + *out = Assign(targets, value, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)AugAssign_type)) { + expr_ty target; + operator_ty op; + expr_ty value; + + if (PyObject_HasAttrString(obj, "target")) { + int res; + tmp = PyObject_GetAttrString(obj, "target"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &target, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"target\" missing from AugAssign"); + return 1; + } + if (PyObject_HasAttrString(obj, "op")) { + int res; + tmp = PyObject_GetAttrString(obj, "op"); + if (tmp == NULL) goto failed; + res = obj2ast_operator(tmp, &op, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"op\" missing from AugAssign"); + return 1; + } + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from AugAssign"); + return 1; + } + *out = AugAssign(target, op, value, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, 
(PyObject*)Print_type)) { + expr_ty dest; + asdl_seq* values; + bool nl; + + if (PyObject_HasAttrString(obj, "dest")) { + int res; + tmp = PyObject_GetAttrString(obj, "dest"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &dest, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + dest = NULL; + } + if (PyObject_HasAttrString(obj, "values")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "values"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Print field \"values\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + values = asdl_seq_new(len, arena); + if (values == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(values, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"values\" missing from Print"); + return 1; + } + if (PyObject_HasAttrString(obj, "nl")) { + int res; + tmp = PyObject_GetAttrString(obj, "nl"); + if (tmp == NULL) goto failed; + res = obj2ast_bool(tmp, &nl, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"nl\" missing from Print"); + return 1; + } + *out = Print(dest, values, nl, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)For_type)) { + expr_ty target; + expr_ty iter; + asdl_seq* body; + asdl_seq* orelse; + + if (PyObject_HasAttrString(obj, "target")) { + int res; + tmp = PyObject_GetAttrString(obj, "target"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &target, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"target\" missing from For"); + return 1; + } + if (PyObject_HasAttrString(obj, "iter")) { + int res; + tmp = PyObject_GetAttrString(obj, "iter"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &iter, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"iter\" missing from For"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "For field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from For"); + return 1; + } + if (PyObject_HasAttrString(obj, "orelse")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "orelse"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "For field \"orelse\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + orelse = asdl_seq_new(len, arena); + if (orelse == NULL) goto failed; + for (i = 0; i < len; i++) { 
+ stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(orelse, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"orelse\" missing from For"); + return 1; + } + *out = For(target, iter, body, orelse, lineno, col_offset, + arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)While_type)) { + expr_ty test; + asdl_seq* body; + asdl_seq* orelse; + + if (PyObject_HasAttrString(obj, "test")) { + int res; + tmp = PyObject_GetAttrString(obj, "test"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &test, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"test\" missing from While"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "While field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from While"); + return 1; + } + if (PyObject_HasAttrString(obj, "orelse")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "orelse"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "While field \"orelse\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + orelse = asdl_seq_new(len, arena); + if (orelse == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(orelse, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"orelse\" missing from While"); + return 1; + } + *out = While(test, body, orelse, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)If_type)) { + expr_ty test; + asdl_seq* body; + asdl_seq* orelse; + + if (PyObject_HasAttrString(obj, "test")) { + int res; + tmp = PyObject_GetAttrString(obj, "test"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &test, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"test\" missing from If"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "If field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { 
+ PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from If"); + return 1; + } + if (PyObject_HasAttrString(obj, "orelse")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "orelse"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "If field \"orelse\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + orelse = asdl_seq_new(len, arena); + if (orelse == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(orelse, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"orelse\" missing from If"); + return 1; + } + *out = If(test, body, orelse, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)With_type)) { + expr_ty context_expr; + expr_ty optional_vars; + asdl_seq* body; + + if (PyObject_HasAttrString(obj, "context_expr")) { + int res; + tmp = PyObject_GetAttrString(obj, "context_expr"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &context_expr, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"context_expr\" missing from With"); + return 1; + } + if (PyObject_HasAttrString(obj, "optional_vars")) { + int res; + tmp = PyObject_GetAttrString(obj, "optional_vars"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &optional_vars, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + optional_vars = NULL; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "With field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from With"); + return 1; + } + *out = With(context_expr, optional_vars, body, lineno, + col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Raise_type)) { + expr_ty type; + expr_ty inst; + expr_ty tback; + + if (PyObject_HasAttrString(obj, "type")) { + int res; + tmp = PyObject_GetAttrString(obj, "type"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &type, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + type = NULL; + } + if (PyObject_HasAttrString(obj, "inst")) { + int res; + tmp = PyObject_GetAttrString(obj, "inst"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &inst, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + inst = NULL; + } + if (PyObject_HasAttrString(obj, "tback")) { + int res; + tmp = PyObject_GetAttrString(obj, "tback"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &tback, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + tback = NULL; + } + *out = Raise(type, inst, 
tback, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)TryExcept_type)) { + asdl_seq* body; + asdl_seq* handlers; + asdl_seq* orelse; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "TryExcept field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from TryExcept"); + return 1; + } + if (PyObject_HasAttrString(obj, "handlers")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "handlers"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "TryExcept field \"handlers\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + handlers = asdl_seq_new(len, arena); + if (handlers == NULL) goto failed; + for (i = 0; i < len; i++) { + excepthandler_ty value; + res = obj2ast_excepthandler(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(handlers, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"handlers\" missing from TryExcept"); + return 1; + } + if (PyObject_HasAttrString(obj, "orelse")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "orelse"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "TryExcept field \"orelse\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + orelse = asdl_seq_new(len, arena); + if (orelse == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(orelse, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"orelse\" missing from TryExcept"); + return 1; + } + *out = TryExcept(body, handlers, orelse, lineno, col_offset, + arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)TryFinally_type)) { + asdl_seq* body; + asdl_seq* finalbody; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "TryFinally field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from TryFinally"); + return 1; + } + if (PyObject_HasAttrString(obj, "finalbody")) { + int res; + Py_ssize_t len; 
+ Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "finalbody"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "TryFinally field \"finalbody\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + finalbody = asdl_seq_new(len, arena); + if (finalbody == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(finalbody, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"finalbody\" missing from TryFinally"); + return 1; + } + *out = TryFinally(body, finalbody, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Assert_type)) { + expr_ty test; + expr_ty msg; + + if (PyObject_HasAttrString(obj, "test")) { + int res; + tmp = PyObject_GetAttrString(obj, "test"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &test, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"test\" missing from Assert"); + return 1; + } + if (PyObject_HasAttrString(obj, "msg")) { + int res; + tmp = PyObject_GetAttrString(obj, "msg"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &msg, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + msg = NULL; + } + *out = Assert(test, msg, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Import_type)) { + asdl_seq* names; + + if (PyObject_HasAttrString(obj, "names")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "names"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Import field \"names\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + names = asdl_seq_new(len, arena); + if (names == NULL) goto failed; + for (i = 0; i < len; i++) { + alias_ty value; + res = obj2ast_alias(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(names, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"names\" missing from Import"); + return 1; + } + *out = Import(names, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)ImportFrom_type)) { + identifier module; + asdl_seq* names; + int level; + + if (PyObject_HasAttrString(obj, "module")) { + int res; + tmp = PyObject_GetAttrString(obj, "module"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &module, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"module\" missing from ImportFrom"); + return 1; + } + if (PyObject_HasAttrString(obj, "names")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "names"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ImportFrom field \"names\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + names = asdl_seq_new(len, arena); + if (names == NULL) goto failed; + for (i = 0; i < len; i++) { + alias_ty value; + res = obj2ast_alias(PyList_GET_ITEM(tmp, i), 
&value, arena); + if (res != 0) goto failed; + asdl_seq_SET(names, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"names\" missing from ImportFrom"); + return 1; + } + if (PyObject_HasAttrString(obj, "level")) { + int res; + tmp = PyObject_GetAttrString(obj, "level"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &level, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + level = 0; + } + *out = ImportFrom(module, names, level, lineno, col_offset, + arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Exec_type)) { + expr_ty body; + expr_ty globals; + expr_ty locals; + + if (PyObject_HasAttrString(obj, "body")) { + int res; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &body, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from Exec"); + return 1; + } + if (PyObject_HasAttrString(obj, "globals")) { + int res; + tmp = PyObject_GetAttrString(obj, "globals"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &globals, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + globals = NULL; + } + if (PyObject_HasAttrString(obj, "locals")) { + int res; + tmp = PyObject_GetAttrString(obj, "locals"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &locals, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + locals = NULL; + } + *out = Exec(body, globals, locals, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Global_type)) { + asdl_seq* names; + + if (PyObject_HasAttrString(obj, "names")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "names"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Global field \"names\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + names = asdl_seq_new(len, arena); + if (names == NULL) goto failed; + for (i = 0; i < len; i++) { + identifier value; + res = obj2ast_identifier(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(names, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"names\" missing from Global"); + return 1; + } + *out = Global(names, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Expr_type)) { + expr_ty value; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from Expr"); + return 1; + } + *out = Expr(value, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Pass_type)) { + + *out = Pass(lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Break_type)) { + + *out = Break(lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Continue_type)) { + 
+ *out = Continue(lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of stmt, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_expr(PyObject* obj, expr_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + int lineno; + int col_offset; + + if (obj == Py_None) { + *out = NULL; + return 0; + } + if (PyObject_HasAttrString(obj, "lineno")) { + int res; + tmp = PyObject_GetAttrString(obj, "lineno"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &lineno, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"lineno\" missing from expr"); + return 1; + } + if (PyObject_HasAttrString(obj, "col_offset")) { + int res; + tmp = PyObject_GetAttrString(obj, "col_offset"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &col_offset, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"col_offset\" missing from expr"); + return 1; + } + if (PyObject_IsInstance(obj, (PyObject*)BoolOp_type)) { + boolop_ty op; + asdl_seq* values; + + if (PyObject_HasAttrString(obj, "op")) { + int res; + tmp = PyObject_GetAttrString(obj, "op"); + if (tmp == NULL) goto failed; + res = obj2ast_boolop(tmp, &op, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"op\" missing from BoolOp"); + return 1; + } + if (PyObject_HasAttrString(obj, "values")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "values"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "BoolOp field \"values\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + values = asdl_seq_new(len, arena); + if (values == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(values, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"values\" missing from BoolOp"); + return 1; + } + *out = BoolOp(op, values, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)BinOp_type)) { + expr_ty left; + operator_ty op; + expr_ty right; + + if (PyObject_HasAttrString(obj, "left")) { + int res; + tmp = PyObject_GetAttrString(obj, "left"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &left, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"left\" missing from BinOp"); + return 1; + } + if (PyObject_HasAttrString(obj, "op")) { + int res; + tmp = PyObject_GetAttrString(obj, "op"); + if (tmp == NULL) goto failed; + res = obj2ast_operator(tmp, &op, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"op\" missing from BinOp"); + return 1; + } + if (PyObject_HasAttrString(obj, "right")) { + int res; + tmp = PyObject_GetAttrString(obj, "right"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &right, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + 
PyErr_SetString(PyExc_TypeError, "required field \"right\" missing from BinOp"); + return 1; + } + *out = BinOp(left, op, right, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)UnaryOp_type)) { + unaryop_ty op; + expr_ty operand; + + if (PyObject_HasAttrString(obj, "op")) { + int res; + tmp = PyObject_GetAttrString(obj, "op"); + if (tmp == NULL) goto failed; + res = obj2ast_unaryop(tmp, &op, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"op\" missing from UnaryOp"); + return 1; + } + if (PyObject_HasAttrString(obj, "operand")) { + int res; + tmp = PyObject_GetAttrString(obj, "operand"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &operand, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"operand\" missing from UnaryOp"); + return 1; + } + *out = UnaryOp(op, operand, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Lambda_type)) { + arguments_ty args; + expr_ty body; + + if (PyObject_HasAttrString(obj, "args")) { + int res; + tmp = PyObject_GetAttrString(obj, "args"); + if (tmp == NULL) goto failed; + res = obj2ast_arguments(tmp, &args, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"args\" missing from Lambda"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &body, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from Lambda"); + return 1; + } + *out = Lambda(args, body, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)IfExp_type)) { + expr_ty test; + expr_ty body; + expr_ty orelse; + + if (PyObject_HasAttrString(obj, "test")) { + int res; + tmp = PyObject_GetAttrString(obj, "test"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &test, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"test\" missing from IfExp"); + return 1; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &body, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from IfExp"); + return 1; + } + if (PyObject_HasAttrString(obj, "orelse")) { + int res; + tmp = PyObject_GetAttrString(obj, "orelse"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &orelse, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"orelse\" missing from IfExp"); + return 1; + } + *out = IfExp(test, body, orelse, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Dict_type)) { + asdl_seq* keys; + asdl_seq* values; + + if (PyObject_HasAttrString(obj, "keys")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "keys"); + if (tmp == NULL) goto failed; + if 
(!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Dict field \"keys\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + keys = asdl_seq_new(len, arena); + if (keys == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(keys, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"keys\" missing from Dict"); + return 1; + } + if (PyObject_HasAttrString(obj, "values")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "values"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Dict field \"values\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + values = asdl_seq_new(len, arena); + if (values == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(values, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"values\" missing from Dict"); + return 1; + } + *out = Dict(keys, values, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)ListComp_type)) { + expr_ty elt; + asdl_seq* generators; + + if (PyObject_HasAttrString(obj, "elt")) { + int res; + tmp = PyObject_GetAttrString(obj, "elt"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &elt, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"elt\" missing from ListComp"); + return 1; + } + if (PyObject_HasAttrString(obj, "generators")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "generators"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ListComp field \"generators\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + generators = asdl_seq_new(len, arena); + if (generators == NULL) goto failed; + for (i = 0; i < len; i++) { + comprehension_ty value; + res = obj2ast_comprehension(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(generators, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"generators\" missing from ListComp"); + return 1; + } + *out = ListComp(elt, generators, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)GeneratorExp_type)) { + expr_ty elt; + asdl_seq* generators; + + if (PyObject_HasAttrString(obj, "elt")) { + int res; + tmp = PyObject_GetAttrString(obj, "elt"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &elt, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"elt\" missing from GeneratorExp"); + return 1; + } + if (PyObject_HasAttrString(obj, "generators")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "generators"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "GeneratorExp field \"generators\" must be a list, not a %.200s", 
tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + generators = asdl_seq_new(len, arena); + if (generators == NULL) goto failed; + for (i = 0; i < len; i++) { + comprehension_ty value; + res = obj2ast_comprehension(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(generators, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"generators\" missing from GeneratorExp"); + return 1; + } + *out = GeneratorExp(elt, generators, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Yield_type)) { + expr_ty value; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + value = NULL; + } + *out = Yield(value, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Compare_type)) { + expr_ty left; + asdl_int_seq* ops; + asdl_seq* comparators; + + if (PyObject_HasAttrString(obj, "left")) { + int res; + tmp = PyObject_GetAttrString(obj, "left"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &left, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"left\" missing from Compare"); + return 1; + } + if (PyObject_HasAttrString(obj, "ops")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "ops"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Compare field \"ops\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + ops = asdl_int_seq_new(len, arena); + if (ops == NULL) goto failed; + for (i = 0; i < len; i++) { + cmpop_ty value; + res = obj2ast_cmpop(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(ops, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ops\" missing from Compare"); + return 1; + } + if (PyObject_HasAttrString(obj, "comparators")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "comparators"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Compare field \"comparators\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + comparators = asdl_seq_new(len, arena); + if (comparators == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(comparators, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"comparators\" missing from Compare"); + return 1; + } + *out = Compare(left, ops, comparators, lineno, col_offset, + arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Call_type)) { + expr_ty func; + asdl_seq* args; + asdl_seq* keywords; + expr_ty starargs; + expr_ty kwargs; + + if (PyObject_HasAttrString(obj, "func")) { + int res; + tmp = PyObject_GetAttrString(obj, "func"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &func, arena); + if (res != 0) goto failed; + 
Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"func\" missing from Call"); + return 1; + } + if (PyObject_HasAttrString(obj, "args")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "args"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Call field \"args\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + args = asdl_seq_new(len, arena); + if (args == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(args, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"args\" missing from Call"); + return 1; + } + if (PyObject_HasAttrString(obj, "keywords")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "keywords"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Call field \"keywords\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + keywords = asdl_seq_new(len, arena); + if (keywords == NULL) goto failed; + for (i = 0; i < len; i++) { + keyword_ty value; + res = obj2ast_keyword(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(keywords, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"keywords\" missing from Call"); + return 1; + } + if (PyObject_HasAttrString(obj, "starargs")) { + int res; + tmp = PyObject_GetAttrString(obj, "starargs"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &starargs, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + starargs = NULL; + } + if (PyObject_HasAttrString(obj, "kwargs")) { + int res; + tmp = PyObject_GetAttrString(obj, "kwargs"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &kwargs, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + kwargs = NULL; + } + *out = Call(func, args, keywords, starargs, kwargs, lineno, + col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Repr_type)) { + expr_ty value; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from Repr"); + return 1; + } + *out = Repr(value, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Num_type)) { + object n; + + if (PyObject_HasAttrString(obj, "n")) { + int res; + tmp = PyObject_GetAttrString(obj, "n"); + if (tmp == NULL) goto failed; + res = obj2ast_object(tmp, &n, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"n\" missing from Num"); + return 1; + } + *out = Num(n, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Str_type)) { + string s; + + if (PyObject_HasAttrString(obj, "s")) { + int res; + tmp = PyObject_GetAttrString(obj, "s"); + if (tmp == NULL) goto failed; + res = 
obj2ast_string(tmp, &s, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"s\" missing from Str"); + return 1; + } + *out = Str(s, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Attribute_type)) { + expr_ty value; + identifier attr; + expr_context_ty ctx; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from Attribute"); + return 1; + } + if (PyObject_HasAttrString(obj, "attr")) { + int res; + tmp = PyObject_GetAttrString(obj, "attr"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &attr, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"attr\" missing from Attribute"); + return 1; + } + if (PyObject_HasAttrString(obj, "ctx")) { + int res; + tmp = PyObject_GetAttrString(obj, "ctx"); + if (tmp == NULL) goto failed; + res = obj2ast_expr_context(tmp, &ctx, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ctx\" missing from Attribute"); + return 1; + } + *out = Attribute(value, attr, ctx, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Subscript_type)) { + expr_ty value; + slice_ty slice; + expr_context_ty ctx; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from Subscript"); + return 1; + } + if (PyObject_HasAttrString(obj, "slice")) { + int res; + tmp = PyObject_GetAttrString(obj, "slice"); + if (tmp == NULL) goto failed; + res = obj2ast_slice(tmp, &slice, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"slice\" missing from Subscript"); + return 1; + } + if (PyObject_HasAttrString(obj, "ctx")) { + int res; + tmp = PyObject_GetAttrString(obj, "ctx"); + if (tmp == NULL) goto failed; + res = obj2ast_expr_context(tmp, &ctx, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ctx\" missing from Subscript"); + return 1; + } + *out = Subscript(value, slice, ctx, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Name_type)) { + identifier id; + expr_context_ty ctx; + + if (PyObject_HasAttrString(obj, "id")) { + int res; + tmp = PyObject_GetAttrString(obj, "id"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &id, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"id\" missing from Name"); + return 1; + } + if (PyObject_HasAttrString(obj, "ctx")) { + int res; + tmp = PyObject_GetAttrString(obj, "ctx"); + if (tmp == NULL) goto failed; + res = obj2ast_expr_context(tmp, &ctx, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + 
tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ctx\" missing from Name"); + return 1; + } + *out = Name(id, ctx, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)List_type)) { + asdl_seq* elts; + expr_context_ty ctx; + + if (PyObject_HasAttrString(obj, "elts")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "elts"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "List field \"elts\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + elts = asdl_seq_new(len, arena); + if (elts == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(elts, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"elts\" missing from List"); + return 1; + } + if (PyObject_HasAttrString(obj, "ctx")) { + int res; + tmp = PyObject_GetAttrString(obj, "ctx"); + if (tmp == NULL) goto failed; + res = obj2ast_expr_context(tmp, &ctx, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ctx\" missing from List"); + return 1; + } + *out = List(elts, ctx, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Tuple_type)) { + asdl_seq* elts; + expr_context_ty ctx; + + if (PyObject_HasAttrString(obj, "elts")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "elts"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "Tuple field \"elts\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + elts = asdl_seq_new(len, arena); + if (elts == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(elts, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"elts\" missing from Tuple"); + return 1; + } + if (PyObject_HasAttrString(obj, "ctx")) { + int res; + tmp = PyObject_GetAttrString(obj, "ctx"); + if (tmp == NULL) goto failed; + res = obj2ast_expr_context(tmp, &ctx, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ctx\" missing from Tuple"); + return 1; + } + *out = Tuple(elts, ctx, lineno, col_offset, arena); + if (*out == NULL) goto failed; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of expr, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_expr_context(PyObject* obj, expr_context_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + if (PyObject_IsInstance(obj, (PyObject*)Load_type)) { + *out = Load; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Store_type)) { + *out = Store; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Del_type)) { + *out = Del; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)AugLoad_type)) { + *out = AugLoad; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)AugStore_type)) { + *out = AugStore; + return 
0; + } + if (PyObject_IsInstance(obj, (PyObject*)Param_type)) { + *out = Param; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of expr_context, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_slice(PyObject* obj, slice_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + + if (obj == Py_None) { + *out = NULL; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Ellipsis_type)) { + + *out = Ellipsis(arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Slice_type)) { + expr_ty lower; + expr_ty upper; + expr_ty step; + + if (PyObject_HasAttrString(obj, "lower")) { + int res; + tmp = PyObject_GetAttrString(obj, "lower"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &lower, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + lower = NULL; + } + if (PyObject_HasAttrString(obj, "upper")) { + int res; + tmp = PyObject_GetAttrString(obj, "upper"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &upper, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + upper = NULL; + } + if (PyObject_HasAttrString(obj, "step")) { + int res; + tmp = PyObject_GetAttrString(obj, "step"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &step, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + step = NULL; + } + *out = Slice(lower, upper, step, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)ExtSlice_type)) { + asdl_seq* dims; + + if (PyObject_HasAttrString(obj, "dims")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "dims"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ExtSlice field \"dims\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + dims = asdl_seq_new(len, arena); + if (dims == NULL) goto failed; + for (i = 0; i < len; i++) { + slice_ty value; + res = obj2ast_slice(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(dims, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"dims\" missing from ExtSlice"); + return 1; + } + *out = ExtSlice(dims, arena); + if (*out == NULL) goto failed; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Index_type)) { + expr_ty value; + + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from Index"); + return 1; + } + *out = Index(value, arena); + if (*out == NULL) goto failed; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of slice, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_boolop(PyObject* obj, boolop_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + if (PyObject_IsInstance(obj, (PyObject*)And_type)) { + *out = And; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Or_type)) { + *out = Or; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto 
failed; + PyErr_Format(PyExc_TypeError, "expected some sort of boolop, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_operator(PyObject* obj, operator_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + if (PyObject_IsInstance(obj, (PyObject*)Add_type)) { + *out = Add; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Sub_type)) { + *out = Sub; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Mult_type)) { + *out = Mult; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Div_type)) { + *out = Div; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Mod_type)) { + *out = Mod; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Pow_type)) { + *out = Pow; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)LShift_type)) { + *out = LShift; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)RShift_type)) { + *out = RShift; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)BitOr_type)) { + *out = BitOr; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)BitXor_type)) { + *out = BitXor; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)BitAnd_type)) { + *out = BitAnd; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)FloorDiv_type)) { + *out = FloorDiv; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of operator, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_unaryop(PyObject* obj, unaryop_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + if (PyObject_IsInstance(obj, (PyObject*)Invert_type)) { + *out = Invert; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Not_type)) { + *out = Not; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)UAdd_type)) { + *out = UAdd; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)USub_type)) { + *out = USub; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of unaryop, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_cmpop(PyObject* obj, cmpop_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + + if (PyObject_IsInstance(obj, (PyObject*)Eq_type)) { + *out = Eq; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)NotEq_type)) { + *out = NotEq; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Lt_type)) { + *out = Lt; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)LtE_type)) { + *out = LtE; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Gt_type)) { + *out = Gt; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)GtE_type)) { + *out = GtE; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)Is_type)) { + *out = Is; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)IsNot_type)) { + *out = IsNot; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)In_type)) { + *out = In; + return 0; + } + if (PyObject_IsInstance(obj, (PyObject*)NotIn_type)) { + *out = NotIn; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; + PyErr_Format(PyExc_TypeError, "expected some sort of cmpop, but got %.400s", PyString_AS_STRING(tmp)); +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_comprehension(PyObject* obj, comprehension_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + expr_ty target; + expr_ty iter; + asdl_seq* ifs; + + if (PyObject_HasAttrString(obj, 
"target")) { + int res; + tmp = PyObject_GetAttrString(obj, "target"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &target, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"target\" missing from comprehension"); + return 1; + } + if (PyObject_HasAttrString(obj, "iter")) { + int res; + tmp = PyObject_GetAttrString(obj, "iter"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &iter, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"iter\" missing from comprehension"); + return 1; + } + if (PyObject_HasAttrString(obj, "ifs")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "ifs"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "comprehension field \"ifs\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + ifs = asdl_seq_new(len, arena); + if (ifs == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(ifs, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"ifs\" missing from comprehension"); + return 1; + } + *out = comprehension(target, iter, ifs, arena); + return 0; +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_excepthandler(PyObject* obj, excepthandler_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + expr_ty type; + expr_ty name; + asdl_seq* body; + int lineno; + int col_offset; + + if (PyObject_HasAttrString(obj, "type")) { + int res; + tmp = PyObject_GetAttrString(obj, "type"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &type, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + type = NULL; + } + if (PyObject_HasAttrString(obj, "name")) { + int res; + tmp = PyObject_GetAttrString(obj, "name"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &name, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + name = NULL; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "excepthandler field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from excepthandler"); + return 1; + } + if (PyObject_HasAttrString(obj, "lineno")) { + int res; + tmp = PyObject_GetAttrString(obj, "lineno"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &lineno, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"lineno\" missing from excepthandler"); + return 1; + } + if (PyObject_HasAttrString(obj, "col_offset")) { + int res; + tmp = PyObject_GetAttrString(obj, "col_offset"); + if (tmp == NULL) goto failed; + res = obj2ast_int(tmp, &col_offset, 
arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"col_offset\" missing from excepthandler"); + return 1; + } + *out = excepthandler(type, name, body, lineno, col_offset, arena); + return 0; +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_arguments(PyObject* obj, arguments_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + asdl_seq* args; + identifier vararg; + identifier kwarg; + asdl_seq* defaults; + + if (PyObject_HasAttrString(obj, "args")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "args"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "arguments field \"args\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + args = asdl_seq_new(len, arena); + if (args == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(args, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"args\" missing from arguments"); + return 1; + } + if (PyObject_HasAttrString(obj, "vararg")) { + int res; + tmp = PyObject_GetAttrString(obj, "vararg"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &vararg, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + vararg = NULL; + } + if (PyObject_HasAttrString(obj, "kwarg")) { + int res; + tmp = PyObject_GetAttrString(obj, "kwarg"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &kwarg, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + kwarg = NULL; + } + if (PyObject_HasAttrString(obj, "defaults")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "defaults"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "arguments field \"defaults\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + defaults = asdl_seq_new(len, arena); + if (defaults == NULL) goto failed; + for (i = 0; i < len; i++) { + expr_ty value; + res = obj2ast_expr(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(defaults, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"defaults\" missing from arguments"); + return 1; + } + *out = arguments(args, vararg, kwarg, defaults, arena); + return 0; +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_keyword(PyObject* obj, keyword_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + identifier arg; + expr_ty value; + + if (PyObject_HasAttrString(obj, "arg")) { + int res; + tmp = PyObject_GetAttrString(obj, "arg"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &arg, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"arg\" missing from keyword"); + return 1; + } + if (PyObject_HasAttrString(obj, "value")) { + int res; + tmp = PyObject_GetAttrString(obj, "value"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &value, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"value\" missing from keyword"); + return 1; + } + *out = keyword(arg, value, 
arena); + return 0; +failed: + Py_XDECREF(tmp); + return 1; +} + +int +obj2ast_alias(PyObject* obj, alias_ty* out, PyArena* arena) +{ + PyObject* tmp = NULL; + identifier name; + identifier asname; + + if (PyObject_HasAttrString(obj, "name")) { + int res; + tmp = PyObject_GetAttrString(obj, "name"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &name, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"name\" missing from alias"); + return 1; + } + if (PyObject_HasAttrString(obj, "asname")) { + int res; + tmp = PyObject_GetAttrString(obj, "asname"); + if (tmp == NULL) goto failed; + res = obj2ast_identifier(tmp, &asname, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + asname = NULL; + } + *out = alias(name, asname, arena); + return 0; +failed: + Py_XDECREF(tmp); + return 1; +} + + PyMODINIT_FUNC init_ast(void) { @@ -3109,4 +5944,25 @@ return ast2obj_mod(t); } +mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena) +{ + mod_ty res; + init_types(); + if (!PyObject_IsInstance(ast, mod_type)) { + PyErr_SetString(PyExc_TypeError, "expected either Module, Interactive " + "or Expression node"); + return NULL; + } + if (obj2ast_mod(ast, &res, arena) != 0) + return NULL; + else + return res; +} + +int PyAST_Check(PyObject* obj) +{ + init_types(); + return PyObject_IsInstance(obj, (PyObject*)AST_type); +} + Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Fri Mar 28 13:11:56 2008 @@ -1,6 +1,7 @@ /* Built-in functions */ #include "Python.h" +#include "Python-ast.h" #include "node.h" #include "code.h" @@ -481,6 +482,41 @@ cf.cf_flags = supplied_flags; + if (supplied_flags & + ~(PyCF_MASK | PyCF_MASK_OBSOLETE | PyCF_DONT_IMPLY_DEDENT | PyCF_ONLY_AST)) + { + PyErr_SetString(PyExc_ValueError, + "compile(): unrecognised flags"); + return NULL; + } + /* XXX Warn if (supplied_flags & PyCF_MASK_OBSOLETE) != 0? */ + + if (!dont_inherit) { + PyEval_MergeCompilerFlags(&cf); + } + + if (PyAST_Check(cmd)) { + if (supplied_flags & PyCF_ONLY_AST) { + Py_INCREF(cmd); + result = cmd; + } + else { + PyArena *arena; + mod_ty mod; + + arena = PyArena_New(); + mod = PyAST_obj2mod(cmd, arena); + if (mod == NULL) { + PyArena_Free(arena); + return NULL; + } + result = (PyObject*)PyAST_Compile(mod, filename, + &cf, arena); + PyArena_Free(arena); + } + return result; + } + #ifdef Py_USING_UNICODE if (PyUnicode_Check(cmd)) { tmp = PyUnicode_AsUTF8String(cmd); @@ -490,14 +526,7 @@ cf.cf_flags |= PyCF_SOURCE_IS_UTF8; } #endif - if (PyObject_AsReadBuffer(cmd, (const void **)&str, &length)) - return NULL; - if ((size_t)length != strlen(str)) { - PyErr_SetString(PyExc_TypeError, - "compile() expected string without null bytes"); - goto cleanup; - } - + /* XXX: is it possible to pass start to the PyAST_ branch? 
*/ if (strcmp(startstr, "exec") == 0) start = Py_file_input; else if (strcmp(startstr, "eval") == 0) @@ -506,21 +535,17 @@ start = Py_single_input; else { PyErr_SetString(PyExc_ValueError, - "compile() arg 3 must be 'exec' or 'eval' or 'single'"); + "compile() arg 3 must be 'exec'" + "or 'eval' or 'single'"); goto cleanup; } - if (supplied_flags & - ~(PyCF_MASK | PyCF_MASK_OBSOLETE | PyCF_DONT_IMPLY_DEDENT | PyCF_ONLY_AST)) - { - PyErr_SetString(PyExc_ValueError, - "compile(): unrecognised flags"); + if (PyObject_AsReadBuffer(cmd, (const void **)&str, &length)) + goto cleanup; + if ((size_t)length != strlen(str)) { + PyErr_SetString(PyExc_TypeError, + "compile() expected string without null bytes"); goto cleanup; - } - /* XXX Warn if (supplied_flags & PyCF_MASK_OBSOLETE) != 0? */ - - if (!dont_inherit) { - PyEval_MergeCompilerFlags(&cf); } result = Py_CompileStringFlags(str, filename, start, &cf); cleanup: Modified: python/trunk/Python/compile.c ============================================================================== --- python/trunk/Python/compile.c (original) +++ python/trunk/Python/compile.c Fri Mar 28 13:11:56 2008 @@ -2211,8 +2211,11 @@ return UNARY_POSITIVE; case USub: return UNARY_NEGATIVE; + default: + PyErr_Format(PyExc_SystemError, + "unary op %d should not be possible", op); + return 0; } - return 0; } static int @@ -2246,8 +2249,11 @@ return BINARY_AND; case FloorDiv: return BINARY_FLOOR_DIVIDE; + default: + PyErr_Format(PyExc_SystemError, + "binary op %d should not be possible", op); + return 0; } - return 0; } static int @@ -2274,8 +2280,9 @@ return PyCmp_IN; case NotIn: return PyCmp_NOT_IN; + default: + return PyCmp_BAD; } - return PyCmp_BAD; } static int @@ -2309,10 +2316,11 @@ return INPLACE_AND; case FloorDiv: return INPLACE_FLOOR_DIVIDE; + default: + PyErr_Format(PyExc_SystemError, + "inplace binary op %d should not be possible", op); + return 0; } - PyErr_Format(PyExc_SystemError, - "inplace binary op %d should not be possible", op); - return 0; } static int From python-checkins at python.org Fri Mar 28 13:22:12 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 13:22:12 +0100 (CET) Subject: [Python-checkins] r62005 - in python/trunk/Doc: c-api/mapping.rst library/rfc822.rst tutorial/datastructures.rst Message-ID: <20080328122212.AB84D1E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 13:22:12 2008 New Revision: 62005 Modified: python/trunk/Doc/c-api/mapping.rst python/trunk/Doc/library/rfc822.rst python/trunk/Doc/tutorial/datastructures.rst Log: Phase out has_key usage in the tutorial; correct docs for PyMapping_HasKey*. Modified: python/trunk/Doc/c-api/mapping.rst ============================================================================== --- python/trunk/Doc/c-api/mapping.rst (original) +++ python/trunk/Doc/c-api/mapping.rst Fri Mar 28 13:22:12 2008 @@ -36,15 +36,15 @@ .. cfunction:: int PyMapping_HasKeyString(PyObject *o, char *key) On success, return ``1`` if the mapping object has the key *key* and ``0`` - otherwise. This is equivalent to the Python expression ``o.has_key(key)``. - This function always succeeds. + otherwise. This is equivalent to ``o[key]``, returning ``True`` on success + and ``False`` on an exception. This function always succeeds. .. cfunction:: int PyMapping_HasKey(PyObject *o, PyObject *key) - Return ``1`` if the mapping object has the key *key* and ``0`` otherwise. This - is equivalent to the Python expression ``o.has_key(key)``. This function always - succeeds. 
+ Return ``1`` if the mapping object has the key *key* and ``0`` otherwise. + This is equivalent to ``o[key]``, returning ``True`` on success and ``False`` + on an exception. This function always succeeds. .. cfunction:: PyObject* PyMapping_Keys(PyObject *o) Modified: python/trunk/Doc/library/rfc822.rst ============================================================================== --- python/trunk/Doc/library/rfc822.rst (original) +++ python/trunk/Doc/library/rfc822.rst Fri Mar 28 13:22:12 2008 @@ -260,7 +260,7 @@ :class:`Message` instances also support a limited mapping interface. In particular: ``m[name]`` is like ``m.getheader(name)`` but raises :exc:`KeyError` if there is no matching header; and ``len(m)``, ``m.get(name[, default])``, -``m.has_key(name)``, ``m.keys()``, ``m.values()`` ``m.items()``, and +``name in m``, ``m.keys()``, ``m.values()`` ``m.items()``, and ``m.setdefault(name[, default])`` act as expected, with the one difference that :meth:`setdefault` uses an empty string as the default value. :class:`Message` instances also support the mapping writable interface ``m[name] Modified: python/trunk/Doc/tutorial/datastructures.rst ============================================================================== --- python/trunk/Doc/tutorial/datastructures.rst (original) +++ python/trunk/Doc/tutorial/datastructures.rst Fri Mar 28 13:22:12 2008 @@ -480,8 +480,7 @@ The :meth:`keys` method of a dictionary object returns a list of all the keys used in the dictionary, in arbitrary order (if you want it sorted, just apply the :meth:`sort` method to the list of keys). To check whether a single key is -in the dictionary, either use the dictionary's :meth:`has_key` method or the -:keyword:`in` keyword. +in the dictionary, use the :keyword:`in` keyword. Here is a small example using a dictionary:: @@ -497,8 +496,6 @@ {'guido': 4127, 'irv': 4127, 'jack': 4098} >>> tel.keys() ['guido', 'irv', 'jack'] - >>> tel.has_key('guido') - True >>> 'guido' in tel True From python-checkins at python.org Fri Mar 28 13:24:51 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 13:24:51 +0100 (CET) Subject: [Python-checkins] r62006 - python/trunk/Doc/reference/expressions.rst Message-ID: <20080328122451.F31C41E4014@bag.python.org> Author: georg.brandl Date: Fri Mar 28 13:24:51 2008 New Revision: 62006 Modified: python/trunk/Doc/reference/expressions.rst Log: Don't use the confusing term "set membership". Modified: python/trunk/Doc/reference/expressions.rst ============================================================================== --- python/trunk/Doc/reference/expressions.rst (original) +++ python/trunk/Doc/reference/expressions.rst Fri Mar 28 13:24:51 2008 @@ -1061,14 +1061,14 @@ another one is made arbitrarily but consistently within one execution of a program. -The operators :keyword:`in` and :keyword:`not in` test for set membership. ``x -in s`` evaluates to true if *x* is a member of the set *s*, and false otherwise. -``x not in s`` returns the negation of ``x in s``. The set membership test has -traditionally been bound to sequences; an object is a member of a set if the set -is a sequence and contains an element equal to that object. However, it is -possible for an object to support membership tests without being a sequence. In -particular, dictionaries support membership testing as a nicer way of spelling -``key in dict``; other mapping types may follow suit. +The operators :keyword:`in` and :keyword:`not in` test for collection +membership. 
``x in s`` evaluates to true if *x* is a member of the collection +*s*, and false otherwise. ``x not in s`` returns the negation of ``x in s``. +The collection membership test has traditionally been bound to sequences; an +object is a member of a collection if the collection is a sequence and contains +an element equal to that object. However, it make sense for many other object +types to support membership tests without being a sequence. In particular, +dictionaries (for keys) and sets support membership testing. For the list and tuple types, ``x in y`` is true if and only if there exists an index *i* such that ``x == y[i]`` is true. From buildbot at python.org Fri Mar 28 13:50:49 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 12:50:49 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080328125049.7DABF1E4014@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1109 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 13:58:26 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 13:58:26 +0100 (CET) Subject: [Python-checkins] r62007 - python/trunk/Doc/library/collections.rst Message-ID: <20080328125826.E391C1E4015@bag.python.org> Author: georg.brandl Date: Fri Mar 28 13:58:26 2008 New Revision: 62007 Modified: python/trunk/Doc/library/collections.rst Log: #2502: add example how to do enum types with named tuples. Modified: python/trunk/Doc/library/collections.rst ============================================================================== --- python/trunk/Doc/library/collections.rst (original) +++ python/trunk/Doc/library/collections.rst Fri Mar 28 13:58:26 2008 @@ -567,6 +567,16 @@ for emp in map(EmployeeRecord._make, cursor.fetchall()): print emp.name, emp.title +Named tuples can also be used to generate enumerated constants: + +.. testcode:: + + def enum(*names): + return namedtuple('Enum', ' '.join(names))(*range(len(names))) + + Status = enum('open', 'pending', 'closed') + assert (0, 1, 2) == (Status.open, Status.pending, Status.closed) + In addition to the methods inherited from tuples, named tuples support three additional methods and one attribute. To prevent conflicts with field names, the method and attribute names start with an underscore. 
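A quick sketch of what the enum recipe added to collections.rst above produces; this reuses the same helper verbatim and only adds a few illustrative print statements (expected output shown in comments):

    from collections import namedtuple

    def enum(*names):
        # Same helper as in the collections.rst example above.
        return namedtuple('Enum', ' '.join(names))(*range(len(names)))

    Status = enum('open', 'pending', 'closed')
    print Status.open, Status.pending, Status.closed   # 0 1 2
    print Status                                       # Enum(open=0, pending=1, closed=2)
    print Status._fields                               # ('open', 'pending', 'closed')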
From python-checkins at python.org Fri Mar 28 14:04:06 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 14:04:06 +0100 (CET) Subject: [Python-checkins] r62008 - doctools/trunk/doc/_templates/indexsidebar.html Message-ID: <20080328130406.424431E4015@bag.python.org> Author: georg.brandl Date: Fri Mar 28 14:04:05 2008 New Revision: 62008 Modified: doctools/trunk/doc/_templates/indexsidebar.html Log: Add subscription box for the google group. Modified: doctools/trunk/doc/_templates/indexsidebar.html ============================================================================== --- doctools/trunk/doc/_templates/indexsidebar.html (original) +++ doctools/trunk/doc/_templates/indexsidebar.html Fri Mar 28 14:04:05 2008 @@ -3,11 +3,14 @@

[HTML markup was stripped by the archive; the recoverable text of the sidebar diff follows.]

   Get Sphinx from the Python Package Index, or install it with:

      easy_install Sphinx

   Questions? Suggestions?

-  Join the Google group or come to the #python-docs channel on FreeNode.
+  Join the Google group: [subscription form] or come to the
+  #python-docs channel on FreeNode.

   You can also open a bug at Python's bug tracker, using the
   "Documentation tools" category.

                  From buildbot at python.org Fri Mar 28 14:22:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 13:22:46 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080328132246.C039D1E4015@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/282 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 14:26:17 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 13:26:17 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080328132617.766861E4015@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2778 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_signal test_smtplib ====================================================================== FAIL: test_wakeup_fd_during (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 205, in test_wakeup_fd_during [self.read], [], [], self.TIMEOUT_FULL) AssertionError: error not raised ====================================================================== FAIL: test_wakeup_fd_early (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 193, in test_wakeup_fd_early self.assert_(mid_time - before_time < self.TIMEOUT_HALF) AssertionError sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 18:00:13 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 17:00:13 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080328170013.F106F1E4015@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/900 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 19:50:03 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 19:50:03 +0100 (CET) Subject: [Python-checkins] r62010 - doctools/trunk/doc/ext.py Message-ID: <20080328185003.C18F01E4015@bag.python.org> Author: georg.brandl Date: Fri Mar 28 19:50:03 2008 New Revision: 62010 Modified: doctools/trunk/doc/ext.py Log: Adapt extension mod to new indextemplate syntax. Modified: doctools/trunk/doc/ext.py ============================================================================== --- doctools/trunk/doc/ext.py (original) +++ doctools/trunk/doc/ext.py Fri Mar 28 19:50:03 2008 @@ -35,6 +35,6 @@ def setup(app): - app.add_description_unit('directive', 'dir', 'directive', parse_directive) - app.add_description_unit('role', 'role', 'role', parse_role) - app.add_description_unit('confval', 'confval', 'configuration value') + app.add_description_unit('directive', 'dir', 'pair: %s; directive', parse_directive) + app.add_description_unit('role', 'role', 'pair: %s; role', parse_role) + app.add_description_unit('confval', 'confval', 'pair: %s; configuration value') From python-checkins at python.org Fri Mar 28 21:08:46 2008 From: python-checkins at python.org (gerhard.haering) Date: Fri, 28 Mar 2008 21:08:46 +0100 (CET) Subject: [Python-checkins] r62011 - in python/trunk: Lib/sqlite3/test/factory.py Lib/sqlite3/test/userfunctions.py Modules/_sqlite/cache.c Modules/_sqlite/cache.h Modules/_sqlite/cursor.c Modules/_sqlite/prepare_protocol.h Modules/_sqlite/row.h Modules/_sqlite/statement.c Modules/_sqlite/statement.h Modules/_sqlite/util.h Message-ID: <20080328200846.AD6A51E4018@bag.python.org> Author: gerhard.haering Date: Fri Mar 28 21:08:36 2008 New Revision: 62011 Modified: python/trunk/Lib/sqlite3/test/factory.py python/trunk/Lib/sqlite3/test/userfunctions.py python/trunk/Modules/_sqlite/cache.c python/trunk/Modules/_sqlite/cache.h python/trunk/Modules/_sqlite/cursor.c python/trunk/Modules/_sqlite/prepare_protocol.h python/trunk/Modules/_sqlite/row.h python/trunk/Modules/_sqlite/statement.c python/trunk/Modules/_sqlite/statement.h python/trunk/Modules/_sqlite/util.h Log: Update sqlite3 module to match current version of pysqlite. 
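The statement.c hunk in the diff that follows adds a guard against 8-bit bytestrings containing non-ASCII bytes when the connection's text_factory cannot interpret them. A minimal sketch of the resulting behaviour, assuming an in-memory database; the table name and data are made up for illustration and are not part of the patch:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table names (txt text)")           # made-up table

    try:
        # Default text_factory is unicode, so non-ASCII 8-bit bytes are rejected.
        con.execute("insert into names values (?)", ('H\xe4ring',))
    except sqlite3.ProgrammingError, err:
        print err    # "You must not use 8-bit bytestrings unless ..."

    con.text_factory = str    # explicitly opt back in to raw bytestrings
    con.execute("insert into names values (?)", ('H\xe4ring',))
    print con.execute("select txt from names").fetchone()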
Modified: python/trunk/Lib/sqlite3/test/factory.py ============================================================================== --- python/trunk/Lib/sqlite3/test/factory.py (original) +++ python/trunk/Lib/sqlite3/test/factory.py Fri Mar 28 21:08:36 2008 @@ -1,7 +1,7 @@ #-*- coding: ISO-8859-1 -*- # pysqlite2/test/factory.py: tests for the various factories in pysqlite # -# Copyright (C) 2005 Gerhard H?ring +# Copyright (C) 2005-2007 Gerhard H?ring # # This file is part of pysqlite. # Modified: python/trunk/Lib/sqlite3/test/userfunctions.py ============================================================================== --- python/trunk/Lib/sqlite3/test/userfunctions.py (original) +++ python/trunk/Lib/sqlite3/test/userfunctions.py Fri Mar 28 21:08:36 2008 @@ -2,7 +2,7 @@ # pysqlite2/test/userfunctions.py: tests for user-defined functions and # aggregates. # -# Copyright (C) 2005 Gerhard H?ring +# Copyright (C) 2005-2007 Gerhard H?ring # # This file is part of pysqlite. # Modified: python/trunk/Modules/_sqlite/cache.c ============================================================================== --- python/trunk/Modules/_sqlite/cache.c (original) +++ python/trunk/Modules/_sqlite/cache.c Fri Mar 28 21:08:36 2008 @@ -1,6 +1,6 @@ /* cache .c - a LRU cache * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. * Modified: python/trunk/Modules/_sqlite/cache.h ============================================================================== --- python/trunk/Modules/_sqlite/cache.h (original) +++ python/trunk/Modules/_sqlite/cache.h Fri Mar 28 21:08:36 2008 @@ -1,6 +1,6 @@ /* cache.h - definitions for the LRU cache * - * Copyright (C) 2004-2006 Gerhard H?ring + * Copyright (C) 2004-2007 Gerhard H?ring * * This file is part of pysqlite. * Modified: python/trunk/Modules/_sqlite/cursor.c ============================================================================== --- python/trunk/Modules/_sqlite/cursor.c (original) +++ python/trunk/Modules/_sqlite/cursor.c Fri Mar 28 21:08:36 2008 @@ -424,10 +424,14 @@ PyObject* descriptor; PyObject* second_argument = NULL; long rowcount = 0; + int allow_8bit_chars; if (!pysqlite_check_thread(self->connection) || !pysqlite_check_connection(self->connection)) { return NULL; } + /* Make shooting yourself in the foot with not utf-8 decodable 8-bit-strings harder */ + allow_8bit_chars = ((self->connection->text_factory != (PyObject*)&PyUnicode_Type) && + (self->connection->text_factory != (PyObject*)&PyUnicode_Type && pysqlite_OptimizedUnicode)); Py_XDECREF(self->next_row); self->next_row = NULL; @@ -592,7 +596,7 @@ pysqlite_statement_mark_dirty(self->statement); - pysqlite_statement_bind_parameters(self->statement, parameters); + pysqlite_statement_bind_parameters(self->statement, parameters, allow_8bit_chars); if (PyErr_Occurred()) { goto error; } Modified: python/trunk/Modules/_sqlite/prepare_protocol.h ============================================================================== --- python/trunk/Modules/_sqlite/prepare_protocol.h (original) +++ python/trunk/Modules/_sqlite/prepare_protocol.h Fri Mar 28 21:08:36 2008 @@ -1,6 +1,6 @@ /* prepare_protocol.h - the protocol for preparing values for SQLite * - * Copyright (C) 2005 Gerhard H?ring + * Copyright (C) 2005-2007 Gerhard H?ring * * This file is part of pysqlite. 
* Modified: python/trunk/Modules/_sqlite/row.h ============================================================================== --- python/trunk/Modules/_sqlite/row.h (original) +++ python/trunk/Modules/_sqlite/row.h Fri Mar 28 21:08:36 2008 @@ -1,6 +1,6 @@ /* row.h - an enhanced tuple for database rows * - * Copyright (C) 2005 Gerhard H?ring + * Copyright (C) 2005-2007 Gerhard H?ring * * This file is part of pysqlite. * Modified: python/trunk/Modules/_sqlite/statement.c ============================================================================== --- python/trunk/Modules/_sqlite/statement.c (original) +++ python/trunk/Modules/_sqlite/statement.c Fri Mar 28 21:08:36 2008 @@ -96,7 +96,7 @@ return rc; } -int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter) +int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter, int allow_8bit_chars) { int rc = SQLITE_OK; long longval; @@ -108,6 +108,7 @@ Py_ssize_t buflen; PyObject* stringval; parameter_type paramtype; + char* c; if (parameter == Py_None) { rc = sqlite3_bind_null(self->st, pos); @@ -140,6 +141,17 @@ paramtype = TYPE_UNKNOWN; } + if (paramtype == TYPE_STRING && !allow_8bit_chars) { + string = PyString_AS_STRING(parameter); + for (c = string; *c != 0; c++) { + if (*c & 0x80) { + PyErr_SetString(pysqlite_ProgrammingError, "You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings."); + rc = -1; + goto final; + } + } + } + switch (paramtype) { case TYPE_INT: longval = PyInt_AsLong(parameter); @@ -197,7 +209,7 @@ } } -void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters) +void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters, int allow_8bit_chars) { PyObject* current_param; PyObject* adapted; @@ -251,11 +263,13 @@ } } - rc = pysqlite_statement_bind_parameter(self, i + 1, adapted); + rc = pysqlite_statement_bind_parameter(self, i + 1, adapted, allow_8bit_chars); Py_DECREF(adapted); if (rc != SQLITE_OK) { - PyErr_Format(pysqlite_InterfaceError, "Error binding parameter %d - probably unsupported type.", i); + if (!PyErr_Occurred()) { + PyErr_Format(pysqlite_InterfaceError, "Error binding parameter %d - probably unsupported type.", i); + } return; } } @@ -294,11 +308,13 @@ } } - rc = pysqlite_statement_bind_parameter(self, i, adapted); + rc = pysqlite_statement_bind_parameter(self, i, adapted, allow_8bit_chars); Py_DECREF(adapted); if (rc != SQLITE_OK) { - PyErr_Format(pysqlite_InterfaceError, "Error binding parameter :%s - probably unsupported type.", binding_name); + if (!PyErr_Occurred()) { + PyErr_Format(pysqlite_InterfaceError, "Error binding parameter :%s - probably unsupported type.", binding_name); + } return; } } Modified: python/trunk/Modules/_sqlite/statement.h ============================================================================== --- python/trunk/Modules/_sqlite/statement.h (original) +++ python/trunk/Modules/_sqlite/statement.h Fri Mar 28 21:08:36 2008 @@ -1,6 +1,6 @@ /* statement.h - definitions for the statement type * - * Copyright (C) 2005 Gerhard H?ring + * Copyright (C) 2005-2007 Gerhard H?ring * * This file is part of pysqlite. 
* @@ -46,8 +46,8 @@ int pysqlite_statement_create(pysqlite_Statement* self, pysqlite_Connection* connection, PyObject* sql); void pysqlite_statement_dealloc(pysqlite_Statement* self); -int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter); -void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters); +int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter, int allow_8bit_chars); +void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters, int allow_8bit_chars); int pysqlite_statement_recompile(pysqlite_Statement* self, PyObject* parameters); int pysqlite_statement_finalize(pysqlite_Statement* self); Modified: python/trunk/Modules/_sqlite/util.h ============================================================================== --- python/trunk/Modules/_sqlite/util.h (original) +++ python/trunk/Modules/_sqlite/util.h Fri Mar 28 21:08:36 2008 @@ -1,6 +1,6 @@ /* util.h - various utility functions * - * Copyright (C) 2005-2006 Gerhard H?ring + * Copyright (C) 2005-2007 Gerhard H?ring * * This file is part of pysqlite. * From python-checkins at python.org Fri Mar 28 21:11:59 2008 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 28 Mar 2008 21:11:59 +0100 (CET) Subject: [Python-checkins] r62012 - in python/trunk/Lib/sqlite3: dump.py test/dump.py Message-ID: <20080328201159.3D5E11E4015@bag.python.org> Author: gregory.p.smith Date: Fri Mar 28 21:11:49 2008 New Revision: 62012 Added: python/trunk/Lib/sqlite3/dump.py python/trunk/Lib/sqlite3/test/dump.py Log: These svn adds were forgotten in r62000 Added: python/trunk/Lib/sqlite3/dump.py ============================================================================== --- (empty file) +++ python/trunk/Lib/sqlite3/dump.py Fri Mar 28 21:11:49 2008 @@ -0,0 +1,63 @@ +# Mimic the sqlite3 console shell's .dump command +# Author: Paul Kippes + +def _iterdump(connection): + """ + Returns an iterator to the dump of the database in an SQL text format. + + Used to produce an SQL dump of the database. Useful to save an in-memory + database for later restoration. This function should not be called + directly but instead called from the Connection method, iterdump(). + """ + + cu = connection.cursor() + yield('BEGIN TRANSACTION;') + + # sqlite_master table contains the SQL CREATE statements for the database. 
+ q = """ + SELECT name, type, sql + FROM sqlite_master + WHERE sql NOT NULL AND + type == 'table' + """ + schema_res = cu.execute(q) + for table_name, type, sql in schema_res.fetchall(): + if table_name == 'sqlite_sequence': + yield('DELETE FROM sqlite_sequence;') + elif table_name == 'sqlite_stat1': + yield('ANALYZE sqlite_master;') + elif table_name.startswith('sqlite_'): + continue + # NOTE: Virtual table support not implemented + #elif sql.startswith('CREATE VIRTUAL TABLE'): + # qtable = table_name.replace("'", "''") + # yield("INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql)"\ + # "VALUES('table','%s','%s',0,'%s');" % + # qtable, + # qtable, + # sql.replace("''")) + else: + yield('%s;' % sql) + + # Build the insert statement for each row of the current table + res = cu.execute("PRAGMA table_info('%s')" % table_name) + column_names = [str(table_info[1]) for table_info in res.fetchall()] + q = "SELECT 'INSERT INTO \"%(tbl_name)s\" VALUES(" + q += ",".join(["'||quote(" + col + ")||'" for col in column_names]) + q += ")' FROM '%(tbl_name)s'" + query_res = cu.execute(q % {'tbl_name': table_name}) + for row in query_res: + yield("%s;" % row[0]) + + # Now when the type is 'index', 'trigger', or 'view' + q = """ + SELECT name, type, sql + FROM sqlite_master + WHERE sql NOT NULL AND + type IN ('index', 'trigger', 'view') + """ + schema_res = cu.execute(q) + for name, type, sql in schema_res.fetchall(): + yield('%s;' % sql) + + yield('COMMIT;') Added: python/trunk/Lib/sqlite3/test/dump.py ============================================================================== --- (empty file) +++ python/trunk/Lib/sqlite3/test/dump.py Fri Mar 28 21:11:49 2008 @@ -0,0 +1,52 @@ +# Author: Paul Kippes + +import unittest +import sqlite3 as sqlite + +class DumpTests(unittest.TestCase): + def setUp(self): + self.cx = sqlite.connect(":memory:") + self.cu = self.cx.cursor() + + def tearDown(self): + self.cx.close() + + def CheckTableDump(self): + expected_sqls = [ + "CREATE TABLE t1(id integer primary key, s1 text, " \ + "t1_i1 integer not null, i2 integer, unique (s1), " \ + "constraint t1_idx1 unique (i2));" + , + "INSERT INTO \"t1\" VALUES(1,'foo',10,20);" + , + "INSERT INTO \"t1\" VALUES(2,'foo2',30,30);" + , + "CREATE TABLE t2(id integer, t2_i1 integer, " \ + "t2_i2 integer, primary key (id)," \ + "foreign key(t2_i1) references t1(t1_i1));" + , + "CREATE TRIGGER trigger_1 update of t1_i1 on t1 " \ + "begin " \ + "update t2 set t2_i1 = new.t1_i1 where t2_i1 = old.t1_i1; " \ + "end;" + , + "CREATE VIEW v1 as select * from t1 left join t2 " \ + "using (id);" + ] + [self.cu.execute(s) for s in expected_sqls] + i = self.cx.iterdump() + actual_sqls = [s for s in i] + expected_sqls = ['BEGIN TRANSACTION;'] + expected_sqls + \ + ['COMMIT;'] + [self.assertEqual(expected_sqls[i], actual_sqls[i]) + for i in xrange(len(expected_sqls))] + +def suite(): + return unittest.TestSuite(unittest.makeSuite(DumpTests, "Check")) + +def test(): + runner = unittest.TextTestRunner() + runner.run(suite()) + +if __name__ == "__main__": + test() From python-checkins at python.org Fri Mar 28 21:18:06 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 21:18:06 +0100 (CET) Subject: [Python-checkins] r62013 - python/trunk/Python/Python-ast.c Message-ID: <20080328201806.EF9891E4015@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 21:17:51 2008 New Revision: 62013 Modified: python/trunk/Python/Python-ast.c Log: Silence a compilation warning Modified: python/trunk/Python/Python-ast.c 
============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Fri Mar 28 21:17:51 2008 @@ -5948,7 +5948,7 @@ { mod_ty res; init_types(); - if (!PyObject_IsInstance(ast, mod_type)) { + if (!PyObject_IsInstance(ast, (PyObject*)mod_type)) { PyErr_SetString(PyExc_TypeError, "expected either Module, Interactive " "or Expression node"); return NULL; From python-checkins at python.org Fri Mar 28 21:22:56 2008 From: python-checkins at python.org (georg.brandl) Date: Fri, 28 Mar 2008 21:22:56 +0100 (CET) Subject: [Python-checkins] r62014 - python/trunk/Parser/asdl_c.py Message-ID: <20080328202256.D98171E4015@bag.python.org> Author: georg.brandl Date: Fri Mar 28 21:22:56 2008 New Revision: 62014 Modified: python/trunk/Parser/asdl_c.py Log: Silence compiler warning at the source. Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Fri Mar 28 21:22:56 2008 @@ -958,7 +958,7 @@ { mod_ty res; init_types(); - if (!PyObject_IsInstance(ast, mod_type)) { + if (!PyObject_IsInstance(ast, (PyObject*)mod_type)) { PyErr_SetString(PyExc_TypeError, "expected either Module, Interactive " "or Expression node"); return NULL; From python-checkins at python.org Fri Mar 28 21:30:51 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 21:30:51 +0100 (CET) Subject: [Python-checkins] r62015 - in python/trunk: Misc/NEWS Python/compile.c Message-ID: <20080328203051.2D0031E402D@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 21:30:50 2008 New Revision: 62015 Modified: python/trunk/Misc/NEWS python/trunk/Python/compile.c Log: Fix a reference leak found by Georg, when compiling a class nested in another class. Now "regrtest.py -R:: test_compile" is satisfied. Will backport. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Mar 28 21:30:50 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- The compilation of a class nested in another class used to leak one + reference on the outer class name. + - Patch #1810: compile() can now compile _ast trees as returned by compile(..., PyCF_ONLY_AST). Modified: python/trunk/Python/compile.c ============================================================================== --- python/trunk/Python/compile.c (original) +++ python/trunk/Python/compile.c Fri Mar 28 21:30:50 2008 @@ -1431,6 +1431,7 @@ if (!compiler_enter_scope(c, s->v.ClassDef.name, (void *)s, s->lineno)) return 0; + Py_XDECREF(c->u->u_private); c->u->u_private = s->v.ClassDef.name; Py_INCREF(c->u->u_private); str = PyString_InternFromString("__name__"); From amauryfa at gmail.com Fri Mar 28 21:36:06 2008 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 28 Mar 2008 21:36:06 +0100 Subject: [Python-checkins] r62012 - in python/trunk/Lib/sqlite3: dump.py test/dump.py Message-ID: > Author: gregory.p.smith > Date: Fri Mar 28 21:11:49 2008 > New Revision: 62012 > > Added: > python/trunk/Lib/sqlite3/dump.py > python/trunk/Lib/sqlite3/test/dump.py test/dump.py should be renamed to test/test_dump.py. Otherwise it will not be seen by the test suite. 
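(Side note on the module under discussion: the docstring of the _iterdump() helper added in r62012 says it is meant to be reached through the connection's iterdump() method. A minimal usage sketch, with a made-up in-memory table, assuming the Connection.iterdump() wiring from r62000 is in place:)

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table t1 (id integer primary key, s1 text)")   # made-up table
    con.execute("insert into t1 (s1) values ('foo')")

    # Emits schema and data as SQL text, mimicking the sqlite3 shell's .dump command.
    for line in con.iterdump():
        print line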
-- Amaury Forgeot d'Arc From nnorwitz at gmail.com Fri Mar 28 21:40:53 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 28 Mar 2008 13:40:53 -0700 Subject: [Python-checkins] r62012 - in python/trunk/Lib/sqlite3: dump.py test/dump.py In-Reply-To: References: Message-ID: On Fri, Mar 28, 2008 at 1:36 PM, Amaury Forgeot d'Arc wrote: > > Author: gregory.p.smith > > Date: Fri Mar 28 21:11:49 2008 > > New Revision: 62012 > > > > Added: > > python/trunk/Lib/sqlite3/dump.py > > python/trunk/Lib/sqlite3/test/dump.py > > test/dump.py should be renamed to test/test_dump.py. > > Otherwise it will not be seen by the test suite. Does the "main" method need to be called test() or test_main() like in the stdlib (or something else)? n From python-checkins at python.org Fri Mar 28 21:45:42 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 21:45:42 +0100 (CET) Subject: [Python-checkins] r62016 - in python/branches/release25-maint: Lib/test/test_compile.py Misc/NEWS Python/compile.c Message-ID: <20080328204542.A2EFB1E4015@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 21:45:42 2008 New Revision: 62016 Modified: python/branches/release25-maint/Lib/test/test_compile.py python/branches/release25-maint/Misc/NEWS python/branches/release25-maint/Python/compile.c Log: Fix a reference leak found by Georg, when compiling a class nested in another class. Test is run with "regrtest.py -R:: test_compile" Backport of r62015 Modified: python/branches/release25-maint/Lib/test/test_compile.py ============================================================================== --- python/branches/release25-maint/Lib/test/test_compile.py (original) +++ python/branches/release25-maint/Lib/test/test_compile.py Fri Mar 28 21:45:42 2008 @@ -398,6 +398,10 @@ del d[..., ...] self.assertEqual((Ellipsis, Ellipsis) in d, False) + def test_nested_classes(self): + # Verify that it does not leak + compile("class A:\n class B: pass", 'tmp', 'exec') + def test_main(): test_support.run_unittest(TestSpecifics) Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Fri Mar 28 21:45:42 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- The compilation of a class nested in another class used to leak one + reference on the outer class name. + - Issue #1477: With narrow Unicode builds, the unicode escape sequence \Uxxxxxxxx did not accept values outside the Basic Multilingual Plane. This affected raw unicode literals and the 'raw-unicode-escape' codec. Now Modified: python/branches/release25-maint/Python/compile.c ============================================================================== --- python/branches/release25-maint/Python/compile.c (original) +++ python/branches/release25-maint/Python/compile.c Fri Mar 28 21:45:42 2008 @@ -2061,6 +2061,7 @@ if (!compiler_enter_scope(c, s->v.ClassDef.name, (void *)s, s->lineno)) return 0; + Py_XDECREF(c->u->u_private); c->u->u_private = s->v.ClassDef.name; Py_INCREF(c->u->u_private); str = PyString_InternFromString("__name__"); From buildbot at python.org Fri Mar 28 21:54:34 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 20:54:34 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080328205434.982D51E4015@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/626 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering,gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 21:54:37 2008 From: python-checkins at python.org (david.wolever) Date: Fri, 28 Mar 2008 21:54:37 +0100 (CET) Subject: [Python-checkins] r62017 - sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Message-ID: <20080328205437.5EF561E401B@bag.python.org> Author: david.wolever Date: Fri Mar 28 21:54:37 2008 New Revision: 62017 Modified: sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Log: Fixed an out-of-date comment. Modified: sandbox/trunk/2to3/lib2to3/tests/test_fixers.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/tests/test_fixers.py (original) +++ sandbox/trunk/2to3/lib2to3/tests/test_fixers.py Fri Mar 28 21:54:37 2008 @@ -3137,7 +3137,7 @@ def setUp(self): FixerTestCase.setUp(self) - # Need to replace fix_import's isfile and isdir method + # Need to replace fix_import's exists method # so we can check that it's doing the right thing self.files_checked = [] self.always_exists = True From python-checkins at python.org Fri Mar 28 21:56:00 2008 From: python-checkins at python.org (benjamin.peterson) Date: Fri, 28 Mar 2008 21:56:00 +0100 (CET) Subject: [Python-checkins] r62018 - python/trunk/Lib/bdb.py Message-ID: <20080328205600.5D1E91E4015@bag.python.org> Author: benjamin.peterson Date: Fri Mar 28 21:56:00 2008 New Revision: 62018 Modified: python/trunk/Lib/bdb.py Log: #2498 modernized try, except, finally statments in bdb Modified: python/trunk/Lib/bdb.py ============================================================================== --- python/trunk/Lib/bdb.py (original) +++ python/trunk/Lib/bdb.py Fri Mar 28 21:56:00 2008 @@ -362,10 +362,9 @@ if not isinstance(cmd, types.CodeType): cmd = cmd+'\n' try: - try: - exec cmd in globals, locals - except BdbQuit: - pass + exec cmd in globals, locals + except BdbQuit: + pass finally: self.quitting = 1 sys.settrace(None) @@ -381,10 +380,9 @@ if not isinstance(expr, types.CodeType): expr = expr+'\n' try: - try: - return eval(expr, globals, locals) - except BdbQuit: - pass + return eval(expr, globals, locals) + except BdbQuit: + pass finally: self.quitting = 1 sys.settrace(None) @@ -400,10 +398,9 @@ sys.settrace(self.trace_dispatch) res = None try: - try: - res = func(*args, **kwds) - except BdbQuit: - pass + res = func(*args, **kwds) + except BdbQuit: + pass finally: self.quitting = 1 sys.settrace(None) From greg at krypto.org Fri Mar 28 21:58:47 2008 From: greg at krypto.org (Gregory P. 
Smith) Date: Fri, 28 Mar 2008 13:58:47 -0700 Subject: [Python-checkins] r62012 - in python/trunk/Lib/sqlite3: dump.py test/dump.py In-Reply-To: References: Message-ID: <52dc1c820803281358v3ada130ah384d870caf1cd997@mail.gmail.com> This is in Lib/sqlite3/test/ where nothing has a test_ filename and all is imported directly from Lib/test/test_sqlite.py in order to be run. It seemed to follow the existing pattern for the sqlite3 module. On Fri, Mar 28, 2008 at 1:40 PM, Neal Norwitz wrote: > On Fri, Mar 28, 2008 at 1:36 PM, Amaury Forgeot d'Arc > wrote: > > > Author: gregory.p.smith > > > Date: Fri Mar 28 21:11:49 2008 > > > New Revision: 62012 > > > > > > Added: > > > python/trunk/Lib/sqlite3/dump.py > > > python/trunk/Lib/sqlite3/test/dump.py > > > > test/dump.py should be renamed to test/test_dump.py. > > > > Otherwise it will not be seen by the test suite. > > Does the "main" method need to be called test() or test_main() like in > the stdlib (or something else)? > > n > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-checkins/attachments/20080328/597952d8/attachment.htm From python-checkins at python.org Fri Mar 28 22:55:29 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 22:55:29 +0100 (CET) Subject: [Python-checkins] r62019 - in python/trunk: Modules/socketmodule.c PC/VC6/pythoncore.dsp PC/VS7.1/_elementtree.vcproj PC/VS7.1/_tkinter.vcproj PC/VS7.1/make_versioninfo.vcproj PC/VS7.1/pyexpat.vcproj PC/VS7.1/python.vcproj PC/VS7.1/pythoncore.vcproj PC/VS8.0/_elementtree.vcproj PC/VS8.0/make_versioninfo.vcproj PC/VS8.0/python.vcproj PC/VS8.0/pythoncore.vcproj PC/pyconfig.h Message-ID: <20080328215529.D21041E402B@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 22:55:29 2008 New Revision: 62019 Modified: python/trunk/Modules/socketmodule.c python/trunk/PC/VC6/pythoncore.dsp python/trunk/PC/VS7.1/_elementtree.vcproj python/trunk/PC/VS7.1/_tkinter.vcproj python/trunk/PC/VS7.1/make_versioninfo.vcproj python/trunk/PC/VS7.1/pyexpat.vcproj python/trunk/PC/VS7.1/python.vcproj python/trunk/PC/VS7.1/pythoncore.vcproj python/trunk/PC/VS8.0/_elementtree.vcproj python/trunk/PC/VS8.0/make_versioninfo.vcproj python/trunk/PC/VS8.0/python.vcproj python/trunk/PC/VS8.0/pythoncore.vcproj python/trunk/PC/pyconfig.h Log: Repair compilation for Visual Studio 2005. I applied the same changes manually to VS7.1 and VC6 files; completely untested. (Christian, don't try too hard merging this change into py3k. 
It will be easier to do the same work again on the branch) Modified: python/trunk/Modules/socketmodule.c ============================================================================== --- python/trunk/Modules/socketmodule.c (original) +++ python/trunk/Modules/socketmodule.c Fri Mar 28 22:55:29 2008 @@ -5230,8 +5230,12 @@ PyModule_AddIntConstant(m, "RCVALL_OFF", RCVALL_OFF); PyModule_AddIntConstant(m, "RCVALL_ON", RCVALL_ON); PyModule_AddIntConstant(m, "RCVALL_SOCKETLEVELONLY", RCVALL_SOCKETLEVELONLY); +#ifdef RCVALL_IPLEVEL PyModule_AddIntConstant(m, "RCVALL_IPLEVEL", RCVALL_IPLEVEL); +#endif +#ifdef RCVALL_MAX PyModule_AddIntConstant(m, "RCVALL_MAX", RCVALL_MAX); +#endif #endif /* _MSTCPIP_ */ /* Initialize gethostbyname lock */ Modified: python/trunk/PC/VC6/pythoncore.dsp ============================================================================== --- python/trunk/PC/VC6/pythoncore.dsp (original) +++ python/trunk/PC/VC6/pythoncore.dsp Fri Mar 28 22:55:29 2008 @@ -133,6 +133,10 @@ # End Source File # Begin Source File +SOURCE=..\..\Modules\_fileio.c +# End Source File +# Begin Source File + SOURCE=..\..\Modules\_functoolsmodule.c # End Source File # Begin Source File @@ -229,6 +233,14 @@ # End Source File # Begin Source File +SOURCE=..\..\Objects\bytesobject.c +# End Source File +# Begin Source File + +SOURCE=..\..\Objects\bytes_methods.c +# End Source File +# Begin Source File + SOURCE=..\..\Objects\cellobject.c # End Source File # Begin Source File @@ -357,6 +369,10 @@ # End Source File # Begin Source File +SOURCE=..\..\Modules\future_builtins.c +# End Source File +# Begin Source File + SOURCE=..\..\Modules\gcmodule.c # End Source File # Begin Source File Modified: python/trunk/PC/VS7.1/_elementtree.vcproj ============================================================================== --- python/trunk/PC/VS7.1/_elementtree.vcproj (original) +++ python/trunk/PC/VS7.1/_elementtree.vcproj Fri Mar 28 22:55:29 2008 @@ -33,7 +33,6 @@ Name="VCCustomBuildTool"/> + + + + + + + + + RelativePath="..\..\Python\pystrcmp.c"> Modified: python/trunk/PC/VS8.0/_elementtree.vcproj ============================================================================== --- python/trunk/PC/VS8.0/_elementtree.vcproj (original) +++ python/trunk/PC/VS8.0/_elementtree.vcproj Fri Mar 28 22:55:29 2008 @@ -56,7 +56,6 @@ /> @@ -437,7 +431,6 @@ /> Modified: python/trunk/PC/VS8.0/make_versioninfo.vcproj ============================================================================== --- python/trunk/PC/VS8.0/make_versioninfo.vcproj (original) +++ python/trunk/PC/VS8.0/make_versioninfo.vcproj Fri Mar 28 22:55:29 2008 @@ -67,7 +67,6 @@ /> + + + + @@ -979,6 +987,10 @@ > + + @@ -1051,6 +1063,10 @@ > + + @@ -1343,6 +1359,14 @@ > + + + + @@ -1627,11 +1651,11 @@ > The Buildbot has detected a new failure of x86 W2k8 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%20trunk/builds/249 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 23:17:20 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 22:17:20 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080328221720.396E11E400F@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1079 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Fri Mar 28 23:39:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 22:39:41 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080328223948.784451E400F@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1200 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering,gregory.p.smith BUILD FAILED: failed test Excerpt from the test logfile: 21 tests failed: test_array test_bufio test_cookielib test_deque test_distutils test_file test_gzip test_hotshot test_io test_logging test_mailbox test_marshal test_mmap test_set test_shutil test_univnewlines test_urllib test_urllib2 test_uu test_zipfile test_zipimport ====================================================================== ERROR: test_tofromfile (test.test_array.UnicodeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_primepat (test.test_bufio.BufferSizeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 56, in test_primepat self.drive_one("1234567890\00\01\02\03\04\05\06") File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 51, in drive_one self.try_one(teststring[:-1]) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 21, in try_one f = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_missing_value (test.test_cookielib.CookieTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 374, in test_missing_value c.save(ignore_expires=True, ignore_discard=True) File "C:\buildbot\work\trunk.heller-windows\build\lib\_MozillaCookieJar.py", line 118, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bad_magic (test.test_cookielib.FileCookieJarTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 268, in test_bad_magic f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_lwp_valueless_cookie (test.test_cookielib.FileCookieJarTests) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 243, in test_lwp_valueless_cookie c.save(filename, ignore_discard=True) File "C:\buildbot\work\trunk.heller-windows\build\lib\_LWPCookieJar.py", line 83, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_mozilla (test.test_cookielib.LWPCookieTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 1580, in test_mozilla new_c = save_and_restore(c, True) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 1571, in save_and_restore cj.save(ignore_discard=ignore_discard) File "C:\buildbot\work\trunk.heller-windows\build\lib\_MozillaCookieJar.py", line 118, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_rejection (test.test_cookielib.LWPCookieTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_cookielib.py", line 1512, in test_rejection c.save(filename, ignore_discard=True) File "C:\buildbot\work\trunk.heller-windows\build\lib\_LWPCookieJar.py", line 83, in save f = open(filename, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_maxlen (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 78, in test_maxlen fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 284, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, 
mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_addinfo (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 74, in test_addinfo profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bad_sys_path (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 118, in test_bad_sys_path self.assertRaises(RuntimeError, coverage, test_support.TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\unittest.py", line 329, in failUnlessRaises callableObj(*args, **kwargs) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_line_numbers (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 98, in test_line_numbers self.run_test(g, events, self.new_profiler(lineevents=1)) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_start_stop (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 104, in test_start_stop profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_buffered_file_io (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 165, in test_buffered_file_io f = io.open(test_support.TESTFN, "wb") File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: test_close_flushes (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call 
last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 251, in test_close_flushes f = io.open(test_support.TESTFN, "wb") File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: test_destructor (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 245, in test_destructor f = MyFileIO(test_support.TESTFN, "w") IOError: [Errno 13] Permission denied ====================================================================== ERROR: test_large_file_ops (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 211, in test_large_file_ops f = io.open(test_support.TESTFN, "w+b", 0) File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: test_raw_file_io (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 151, in test_raw_file_io f = io.open(test_support.TESTFN, "wb", buffering=0) File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: test_readline (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 179, in test_readline f = io.open(test_support.TESTFN, "wb") File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: test_with_open (test.test_io.IOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 221, in test_with_open with open(test_support.TESTFN, "wb", bufsize) as f: IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testBasicIO (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 802, in testBasicIO f = io.open(test_support.TESTFN, "w+", encoding=enc) File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: testSeeking (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 877, in testSeeking f = io.open(test_support.TESTFN, "wb") File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) 
IOError: [Errno 13] Permission denied ====================================================================== ERROR: testSeekingToo (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 889, in testSeekingToo f = io.open(test_support.TESTFN, "wb") File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== ERROR: testTelling (test.test_io.TextIOWrapperTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_io.py", line 849, in testTelling f = io.open(test_support.TESTFN, "w+", encoding="utf8") File "C:\buildbot\work\trunk.heller-windows\build\lib\io.py", line 155, in open closefd) IOError: [Errno 13] Permission denied ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 ====================================================================== ERROR: test_add (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_remove_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda 
self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, 
in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_folder (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMH) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 
'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_list_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pack (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_sequences (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most 
recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: 
mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, 
path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = 
lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in 
_factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", 
line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_labels (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) 
File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 992, in test_initialize_with_file f = open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestMaildirMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 992, in test_initialize_with_file f = open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestMboxMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 992, in test_initialize_with_file f = 
open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestMHMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 992, in test_initialize_with_file f = open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_initialize_with_file (test.test_mailbox.TestBabylMessage) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 992, in test_initialize_with_file f = open(self._path, 'w+') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_cyclical_print (test.test_set.TestSetSubclass) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 286, in test_cyclical_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_cyclical_print (test.test_set.TestSetSubclassWithKeywordArgs) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_set.py", line 286, in test_cyclical_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_file (test.test_urllib2.HandlerTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib2.py", line 612, in test_file f = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_decode (test.test_uu.UUFileTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 151, in test_decode uu.decode(f) File "C:\buildbot\work\trunk.heller-windows\build\lib\uu.py", line 111, in decode raise Error('Cannot overwrite existing file: %s' % out_file) Error: Cannot overwrite existing file: @testo ====================================================================== ERROR: test_decodetwice (test.test_uu.UUFileTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 168, in test_decodetwice f = open(self.tmpin, 'r') IOError: [Errno 2] No such file or directory: '@testi' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' 
====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testLowCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 31, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testGoodPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 716, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 716, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testBadMagic 
(test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource 
(test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, 
in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMTime (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in 
doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: 
testDoctestSuite (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' sincerely, -The Buildbot From python-checkins at python.org Fri Mar 28 23:43:38 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Fri, 28 Mar 2008 23:43:38 +0100 (CET) Subject: [Python-checkins] r62020 - python/trunk/PC/pyconfig.h Message-ID: <20080328224338.DBC701E400F@bag.python.org> Author: amaury.forgeotdarc Date: Fri Mar 28 23:43:38 2008 New Revision: 62020 Modified: python/trunk/PC/pyconfig.h Log: One #ifdef too much, and I broke all windows buildbots: in pyconfig.h, NTDDI_WIN2KSP4 is not *yet* defined, but will be at some point on some modules. Let this line even for older SDKs, they don't use it anyway. 
Modified: python/trunk/PC/pyconfig.h ============================================================================== --- python/trunk/PC/pyconfig.h (original) +++ python/trunk/PC/pyconfig.h Fri Mar 28 23:43:38 2008 @@ -167,10 +167,8 @@ #else #define Py_WINVER 0x0500 #endif -#ifdef NTDDI_WIN2KSP4 #define Py_NTDDI NTDDI_WIN2KSP4 #endif -#endif /* We only set these values when building Python - we don't want to force these values on extensions, as that will affect the prototypes and From python-checkins at python.org Sat Mar 29 00:11:02 2008 From: python-checkins at python.org (benjamin.peterson) Date: Sat, 29 Mar 2008 00:11:02 +0100 (CET) Subject: [Python-checkins] r62021 - python/trunk/Include/object.h Message-ID: <20080328231102.56BB01E4017@bag.python.org> Author: benjamin.peterson Date: Sat Mar 29 00:11:01 2008 New Revision: 62021 Modified: python/trunk/Include/object.h Log: NIL => NULL Modified: python/trunk/Include/object.h ============================================================================== --- python/trunk/Include/object.h (original) +++ python/trunk/Include/object.h Sat Mar 29 00:11:01 2008 @@ -643,7 +643,7 @@ objects that don't contain references to other objects or heap memory this can be the standard function free(). Both macros can be used wherever a void expression is allowed. The argument must not be a -NIL pointer. If it may be NIL, use Py_XINCREF/Py_XDECREF instead. +NULL pointer. If it may be NULL, use Py_XINCREF/Py_XDECREF instead. The macro _Py_NewReference(op) initialize reference counts to 1, and in special builds (Py_REF_DEBUG, Py_TRACE_REFS) performs additional bookkeeping appropriate to the special build. From buildbot at python.org Sat Mar 29 00:31:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 23:31:46 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080328233147.114711E4017@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3113 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_logging test_signal ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 00:34:59 2008 From: buildbot at python.org (buildbot at python.org) Date: Fri, 28 Mar 2008 23:34:59 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080328233459.4F1E71E4015@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1081 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 01:36:58 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 00:36:58 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080329003658.6F1981E4017@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/630 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 01:43:32 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 00:43:32 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080329004332.95AB31E4017@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/3083 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 01:44:58 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 01:44:58 +0100 (CET) Subject: [Python-checkins] r62023 - python/trunk/Lib/threading.py Message-ID: <20080329004458.D1FE91E4017@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 01:44:58 2008 New Revision: 62023 Modified: python/trunk/Lib/threading.py Log: Try to understand why most buildbots suddenly turned to red. Undo the only change that might have unexpected effects. To be followed. 
Modified: python/trunk/Lib/threading.py ============================================================================== --- python/trunk/Lib/threading.py (original) +++ python/trunk/Lib/threading.py Sat Mar 29 01:44:58 2008 @@ -535,7 +535,8 @@ # test_threading.test_no_refcycle_through_target when # the exception keeps the target alive past when we # assert that it's dead. - self.__exc_clear() + # XXX Temporary experiment + # self.__exc_clear() finally: with _active_limbo_lock: self.__stop() From python-checkins at python.org Sat Mar 29 01:49:08 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 01:49:08 +0100 (CET) Subject: [Python-checkins] r62025 - python/trunk/Lib/threading.py Message-ID: <20080329004908.26CEC1E4017@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 01:49:07 2008 New Revision: 62025 Modified: python/trunk/Lib/threading.py Log: At least let the module compile Modified: python/trunk/Lib/threading.py ============================================================================== --- python/trunk/Lib/threading.py (original) +++ python/trunk/Lib/threading.py Sat Mar 29 01:49:07 2008 @@ -537,6 +537,7 @@ # assert that it's dead. # XXX Temporary experiment # self.__exc_clear() + pass finally: with _active_limbo_lock: self.__stop() From python-checkins at python.org Sat Mar 29 02:27:39 2008 From: python-checkins at python.org (gerhard.haering) Date: Sat, 29 Mar 2008 02:27:39 +0100 (CET) Subject: [Python-checkins] r62026 - in python/trunk/Doc: includes/sqlite3/ctx_manager.py library/sqlite3.rst Message-ID: <20080329012739.9E01E1E4018@bag.python.org> Author: gerhard.haering Date: Sat Mar 29 02:27:37 2008 New Revision: 62026 Added: python/trunk/Doc/includes/sqlite3/ctx_manager.py Modified: python/trunk/Doc/library/sqlite3.rst Log: Brought documentation for the sqlite3 module up to date. Fixed Issue1625205, which complained about commit, rollback and close not being documented. Added: python/trunk/Doc/includes/sqlite3/ctx_manager.py ============================================================================== --- (empty file) +++ python/trunk/Doc/includes/sqlite3/ctx_manager.py Sat Mar 29 02:27:37 2008 @@ -0,0 +1,16 @@ +import sqlite3 + +con = sqlite3.connect(":memory:") +con.execute("create table person (id integer primary key, firstname varchar unique)") + +# Successful, con.commit() is called automatically afterwards +with con: + con.execute("insert into person(firstname) values (?)", ("Joe",)) + +# con.rollback() is called after the with block finishes with an exception, the +# exception is still raised and must be caught +try: + with con: + con.execute("insert into person(firstname) values (?)", ("Joe",)) +except sqlite3.IntegrityError: + print "couldn't add Joe twice" Modified: python/trunk/Doc/library/sqlite3.rst ============================================================================== --- python/trunk/Doc/library/sqlite3.rst (original) +++ python/trunk/Doc/library/sqlite3.rst Sat Mar 29 02:27:37 2008 @@ -232,6 +232,24 @@ :class:`sqlite3.Cursor`. +.. method:: Connection.commit() + + This method commits the current transaction. If you don't call this method, + anything you did since the last call to commit() is not visible from + other database connections. If you wonder why you don't see the data you've + written to the database, please check you didn't forget to call this method. + +.. method:: Connection.rollback() + + This method rolls back any changes to the database since the last call to + :meth:`commit`. + +..
method:: Connection.close() + + This closes the database connection. Note that this does not automatically + call :meth:`commit`. If you just close your database connection without + calling :meth:`commit` first, your changes will be lost! + .. method:: Connection.execute(sql, [parameters]) This is a nonstandard shortcut that creates an intermediate cursor object by @@ -245,7 +263,6 @@ calling the cursor method, then calls the cursor's :meth:`executemany` method with the parameters given. - .. method:: Connection.executescript(sql_script) This is a nonstandard shortcut that creates an intermediate cursor object by @@ -332,6 +349,19 @@ one. All necessary constants are available in the :mod:`sqlite3` module. +.. method:: Connection.set_progress_handler(handler, n) + + .. versionadded:: 2.6 + + This routine registers a callback. The callback is invoked for every *n* + instructions of the SQLite virtual machine. This is useful if you want to + get called from SQLite during long-running operations, for example to update + a GUI. + + If you want to clear any previously installed progress handler, call the + method with :const:`None` for *handler*. + + .. attribute:: Connection.row_factory You can change this attribute to a callable that accepts the cursor and the @@ -701,10 +731,6 @@ statement, or set it to one of SQLite's supported isolation levels: DEFERRED, IMMEDIATE or EXCLUSIVE. -As the :mod:`sqlite3` module needs to keep track of the transaction state, you -should not use ``OR ROLLBACK`` or ``ON CONFLICT ROLLBACK`` in your SQL. Instead, -catch the :exc:`IntegrityError` and call the :meth:`rollback` method of the -connection yourself. Using pysqlite efficiently @@ -736,3 +762,15 @@ .. literalinclude:: ../includes/sqlite3/rowclass.py + +Using the connection as a context manager +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. versionadded:: 2.6 + +Connection objects can be used as context managers +that automatically commit or rollback transactions. In the event of an +exception, the transaction is rolled back; otherwise, the transaction is +committed: + +.. literalinclude:: ../includes/sqlite3/ctx_manager.py From python-checkins at python.org Sat Mar 29 02:41:09 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 02:41:09 +0100 (CET) Subject: [Python-checkins] r62028 - python/trunk/Lib/threading.py Message-ID: <20080329014109.217E71E4017@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 02:41:08 2008 New Revision: 62028 Modified: python/trunk/Lib/threading.py Log: Revert my experiment. I found one reason of failures in test_logging. Modified: python/trunk/Lib/threading.py ============================================================================== --- python/trunk/Lib/threading.py (original) +++ python/trunk/Lib/threading.py Sat Mar 29 02:41:08 2008 @@ -535,9 +535,7 @@ # test_threading.test_no_refcycle_through_target when # the exception keeps the target alive past when we # assert that it's dead. - # XXX Temporary experiment - # self.__exc_clear() - pass + self.__exc_clear() finally: with _active_limbo_lock: self.__stop() From buildbot at python.org Sat Mar 29 02:41:44 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 01:41:44 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 3.0 Message-ID: <20080329014144.9146F1E4017@bag.python.org> The Buildbot has detected a new failure of amd64 XP 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%203.0/builds/728 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_getargs2 test_logging ====================================================================== ERROR: test_n (test.test_getargs2.Signed_TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_getargs2.py", line 190, in test_n self.failUnlessEqual(99, getargs_n(Long())) TypeError: 'Long' object cannot be interpreted as an integer ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "C:\buildbot\3.0.heller-windows-amd64\build\lib\test\test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 02:42:31 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 02:42:31 +0100 (CET) Subject: [Python-checkins] r62029 - python/trunk/Lib/test/test_logging.py Message-ID: <20080329014231.4A7ED1E4017@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 02:42:31 2008 New Revision: 62029 Modified: python/trunk/Lib/test/test_logging.py Log: Correctly call the base class tearDown(); otherwise running test_logging twice produces the errors we see on all buildbots Modified: python/trunk/Lib/test/test_logging.py ============================================================================== --- python/trunk/Lib/test/test_logging.py (original) +++ python/trunk/Lib/test/test_logging.py Sat Mar 29 02:42:31 2008 @@ -450,6 +450,7 @@ def tearDown(self): self.mem_hdlr.close() + BaseTest.tearDown(self) def test_flush(self): # The memory handler flushes to its target handler based on specific From buildbot at python.org Sat Mar 29 02:46:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 01:46:56 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080329014656.BAC9E1E4020@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0.
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/160 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_logging test_signal ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/3.0.klose-debian-s390/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 02:50:06 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 29 Mar 2008 02:50:06 +0100 (CET) Subject: [Python-checkins] r62030 - in python/trunk: Misc/NEWS Modules/main.c Message-ID: <20080329015006.5F53B1E4017@bag.python.org> Author: georg.brandl Date: Sat Mar 29 02:50:06 2008 New Revision: 62030 Modified: python/trunk/Misc/NEWS python/trunk/Modules/main.c Log: Backport #1442: report exception when startup file cannot be run. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Mar 29 02:50:06 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Patch #1442: properly report exceptions when the PYTHONSTARTUP file + cannot be executed. + - The compilation of a class nested in another class used to leak one reference on the outer class name. Modified: python/trunk/Modules/main.c ============================================================================== --- python/trunk/Modules/main.c (original) +++ python/trunk/Modules/main.c Sat Mar 29 02:50:06 2008 @@ -140,6 +140,15 @@ (void) PyRun_SimpleFileExFlags(fp, startup, 0, cf); PyErr_Clear(); fclose(fp); + } else { + int save_errno; + save_errno = errno; + PySys_WriteStderr("Could not open PYTHONSTARTUP\n"); + errno = save_errno; + PyErr_SetFromErrnoWithFilename(PyExc_IOError, + startup); + PyErr_Print(); + PyErr_Clear(); } } } From python-checkins at python.org Sat Mar 29 02:50:46 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 29 Mar 2008 02:50:46 +0100 (CET) Subject: [Python-checkins] r62031 - in python/branches/release25-maint: Misc/NEWS Modules/main.c Message-ID: <20080329015046.B6D5C1E4017@bag.python.org> Author: georg.brandl Date: Sat Mar 29 02:50:46 2008 New Revision: 62031 Modified: python/branches/release25-maint/Misc/NEWS python/branches/release25-maint/Modules/main.c Log: Backport #1442: report exception when startup file cannot be run. Modified: python/branches/release25-maint/Misc/NEWS ============================================================================== --- python/branches/release25-maint/Misc/NEWS (original) +++ python/branches/release25-maint/Misc/NEWS Sat Mar 29 02:50:46 2008 @@ -12,6 +12,9 @@ Core and builtins ----------------- +- Patch #1442: properly report exceptions when the PYTHONSTARTUP file + cannot be executed. 
+ - The compilation of a class nested in another class used to leak one reference on the outer class name. @@ -27,6 +30,7 @@ - Issue #2238: Some syntax errors in *args and **kwargs expressions could give bogus error messages. + Library ------- @@ -81,8 +85,9 @@ Windows ------- + What's New in Python 2.5.2? -============================= +=========================== *Release date: 21-Feb-2008* Modified: python/branches/release25-maint/Modules/main.c ============================================================================== --- python/branches/release25-maint/Modules/main.c (original) +++ python/branches/release25-maint/Modules/main.c Sat Mar 29 02:50:46 2008 @@ -134,6 +134,15 @@ (void) PyRun_SimpleFileExFlags(fp, startup, 0, cf); PyErr_Clear(); fclose(fp); + } else { + int save_errno; + save_errno = errno; + PySys_WriteStderr("Could not open PYTHONSTARTUP\n"); + errno = save_errno; + PyErr_SetFromErrnoWithFilename(PyExc_IOError, + startup); + PyErr_Print(); + PyErr_Clear(); } } } From buildbot at python.org Sat Mar 29 03:12:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 02:12:12 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080329021212.999DF1E402E@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/74 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 5 tests failed: test_logging test_ssl test_urllib2_localnet test_urllibnet test_xmlrpc_net ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 822, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise 
test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 843, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 865, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 738, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_bad_address 
(test.test_urllib2_localnet.TestUrlopen) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllib2_localnet.py", line 476, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_bad_address (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllibnet.py", line 145, in test_bad_address urllib.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 38, in test_current_time self.assert_(delta.days <= 1) AssertionError: None sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 03:23:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 02:23:56 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI 2.5 Message-ID: <20080329022356.6481D1E4028@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%202.5/builds/2 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 04:14:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 03:14:31 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080329031432.181E31E4017@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/218 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/3.0.klose-ubuntu-sparc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 06:29:21 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 05:29:21 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI trunk Message-ID: <20080329052921.A2A281E4017@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%20trunk/builds/8 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 06:29:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 05:29:31 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI 3.0 Message-ID: <20080329052931.4A5B51E4017@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%203.0/builds/5 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed svn sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 07:00:02 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 06:00:02 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080329060002.99B3F1E4017@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/245 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 07:03:03 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 06:03:03 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080329060303.B87491E4017@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/716 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 11:11:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 10:11:41 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080329101147.4F09F1E4015@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/420 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 11:42:07 2008 From: python-checkins at python.org (raymond.hettinger) Date: Sat, 29 Mar 2008 11:42:07 +0100 (CET) Subject: [Python-checkins] r62035 - python/trunk/Doc/library/array.rst Message-ID: <20080329104207.F01191E4015@bag.python.org> Author: raymond.hettinger Date: Sat Mar 29 11:42:07 2008 New Revision: 62035 Modified: python/trunk/Doc/library/array.rst Log: Be explicit about what efficient means. Modified: python/trunk/Doc/library/array.rst ============================================================================== --- python/trunk/Doc/library/array.rst (original) +++ python/trunk/Doc/library/array.rst Sat Mar 29 11:42:07 2008 @@ -3,12 +3,12 @@ =================================================== .. module:: array - :synopsis: Efficient arrays of uniformly typed numeric values. + :synopsis: Space efficient arrays of uniformly typed numeric values. .. 
index:: single: arrays -This module defines an object type which can efficiently represent an array of +This module defines an object type which can Compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. The type is specified at object creation time by using a From buildbot at python.org Sat Mar 29 12:03:42 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 11:03:42 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 3.0 Message-ID: <20080329110342.810F01E4015@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%203.0/builds/647 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 12:46:18 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 29 Mar 2008 12:46:18 +0100 (CET) Subject: [Python-checkins] r62036 - python/trunk/Doc/library/array.rst Message-ID: <20080329114618.4D7661E402C@bag.python.org> Author: georg.brandl Date: Sat Mar 29 12:46:18 2008 New Revision: 62036 Modified: python/trunk/Doc/library/array.rst Log: Fix capitalization. Modified: python/trunk/Doc/library/array.rst ============================================================================== --- python/trunk/Doc/library/array.rst (original) +++ python/trunk/Doc/library/array.rst Sat Mar 29 12:46:18 2008 @@ -8,7 +8,7 @@ .. index:: single: arrays -This module defines an object type which can Compactly represent an array of +This module defines an object type which can compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. The type is specified at object creation time by using a From python-checkins at python.org Sat Mar 29 13:42:55 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 13:42:55 +0100 (CET) Subject: [Python-checkins] r62037 - python/trunk/Lib/lib2to3/refactor.py Message-ID: <20080329124255.287FF1E4003@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 13:42:54 2008 New Revision: 62037 Modified: python/trunk/Lib/lib2to3/refactor.py Log: lib2to3 should install a logging handler only when run as a main program, not when used as a library. This may please the buildbots, which fail when test_lib2to3 is run before test_logging. Modified: python/trunk/Lib/lib2to3/refactor.py ============================================================================== --- python/trunk/Lib/lib2to3/refactor.py (original) +++ python/trunk/Lib/lib2to3/refactor.py Sat Mar 29 13:42:54 2008 @@ -28,15 +28,6 @@ from . import fixes from . import pygram -if sys.version_info < (2, 4): - hdlr = logging.StreamHandler() - fmt = logging.Formatter('%(name)s: %(message)s') - hdlr.setFormatter(fmt) - logging.root.addHandler(hdlr) -else: - logging.basicConfig(format='%(name)s: %(message)s', level=logging.INFO) - - def main(args=None): """Main program. @@ -73,6 +64,15 @@ print >>sys.stderr, "Use --help to show usage." 
return 2 + # Set up logging handler + if sys.version_info < (2, 4): + hdlr = logging.StreamHandler() + fmt = logging.Formatter('%(name)s: %(message)s') + hdlr.setFormatter(fmt) + logging.root.addHandler(hdlr) + else: + logging.basicConfig(format='%(name)s: %(message)s', level=logging.INFO) + # Initialize the refactoring tool rt = RefactoringTool(options) From python-checkins at python.org Sat Mar 29 14:14:55 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 14:14:55 +0100 (CET) Subject: [Python-checkins] r62038 - python/trunk/Lib/test/regrtest.py Message-ID: <20080329131455.3C1141E4003@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 14:14:52 2008 New Revision: 62038 Modified: python/trunk/Lib/test/regrtest.py Log: Now that Lib/test/output is gone, tests should not print anything, except in verbose mode. Support code is much simpler. Modified: python/trunk/Lib/test/regrtest.py ============================================================================== --- python/trunk/Lib/test/regrtest.py (original) +++ python/trunk/Lib/test/regrtest.py Sat Mar 29 14:14:52 2008 @@ -31,8 +31,6 @@ unless -x is given, in which case they are names for tests not to run. If no test names are given, all tests are run. --v is incompatible with -g and does not compare test output files. - -T turns on code coverage tracing with the trace module. -D specifies the directory where coverage files are put. @@ -178,7 +176,7 @@ sys.exit(code) -def main(tests=None, testdir=None, verbose=0, quiet=False, generate=False, +def main(tests=None, testdir=None, verbose=0, quiet=False, exclude=False, single=False, randomize=False, fromfile=None, findleaks=False, use_resources=None, trace=False, coverdir='coverage', runleaks=False, huntrleaks=False, verbose2=False, print_slow=False): @@ -198,7 +196,7 @@ command-line will be used. If that's empty, too, then all *.py files beginning with test_ will be used. - The other default arguments (verbose, quiet, generate, exclude, + The other default arguments (verbose, quiet, exclude, single, randomize, findleaks, use_resources, trace, coverdir, and print_slow) allow programmers calling main() directly to set the values that would normally be set by flags on the command line. @@ -361,12 +359,12 @@ if trace: # If we're tracing code coverage, then we don't exit with status # if on a false return value from main. - tracer.runctx('runtest(test, generate, verbose, quiet,' + tracer.runctx('runtest(test, verbose, quiet,' ' test_times, testdir)', globals=globals(), locals=vars()) else: try: - ok = runtest(test, generate, verbose, quiet, test_times, + ok = runtest(test, verbose, quiet, test_times, testdir, huntrleaks) except KeyboardInterrupt: # print a newline separate from the ^C @@ -438,7 +436,7 @@ sys.stdout.flush() try: test_support.verbose = True - ok = runtest(test, generate, True, quiet, test_times, testdir, + ok = runtest(test, True, quiet, test_times, testdir, huntrleaks) except KeyboardInterrupt: # print a newline separate from the ^C @@ -502,7 +500,7 @@ tests.sort() return stdtests + tests -def runtest(test, generate, verbose, quiet, test_times, +def runtest(test, verbose, quiet, test_times, testdir=None, huntrleaks=False): """Run a single test. 
@@ -521,27 +519,26 @@ """ try: - return runtest_inner(test, generate, verbose, quiet, test_times, + return runtest_inner(test, verbose, quiet, test_times, testdir, huntrleaks) finally: cleanup_test_droppings(test, verbose) -def runtest_inner(test, generate, verbose, quiet, test_times, +def runtest_inner(test, verbose, quiet, test_times, testdir=None, huntrleaks=False): test_support.unload(test) if not testdir: testdir = findtestdir() if verbose: - cfp = None + capture_stdout = None else: - cfp = cStringIO.StringIO() + capture_stdout = cStringIO.StringIO() try: save_stdout = sys.stdout try: - if cfp: - sys.stdout = cfp - print test # Output file starts with test name + if capture_stdout: + sys.stdout = capture_stdout if test.startswith('test.'): abstest = test else: @@ -587,15 +584,16 @@ sys.stdout.flush() return 0 else: - if not cfp: + # Except in verbose mode, tests should not print anything + if verbose or huntrleaks: return 1 - output = cfp.getvalue() - expected = test + "\n" - if output == expected or huntrleaks: + output = capture_stdout.getvalue() + if not output: return 1 print "test", test, "produced unexpected output:" - sys.stdout.flush() - reportdiff(expected, output) + print "*" * 70 + print output + print "*" * 70 sys.stdout.flush() return 0 @@ -720,48 +718,6 @@ # Collect cyclic trash. gc.collect() -def reportdiff(expected, output): - import difflib - print "*" * 70 - a = expected.splitlines(1) - b = output.splitlines(1) - sm = difflib.SequenceMatcher(a=a, b=b) - tuples = sm.get_opcodes() - - def pair(x0, x1): - # x0:x1 are 0-based slice indices; convert to 1-based line indices. - x0 += 1 - if x0 >= x1: - return "line " + str(x0) - else: - return "lines %d-%d" % (x0, x1) - - for op, a0, a1, b0, b1 in tuples: - if op == 'equal': - pass - - elif op == 'delete': - print "***", pair(a0, a1), "of expected output missing:" - for line in a[a0:a1]: - print "-", line, - - elif op == 'replace': - print "*** mismatch between", pair(a0, a1), "of expected", \ - "output and", pair(b0, b1), "of actual output:" - for line in difflib.ndiff(a[a0:a1], b[b0:b1]): - print line, - - elif op == 'insert': - print "***", pair(b0, b1), "of actual output doesn't appear", \ - "in expected output after line", str(a1)+":" - for line in b[b0:b1]: - print "+", line, - - else: - print "get_opcodes() returned bad tuple?!?!", (op, a0, a1, b0, b1) - - print "*" * 70 - def findtestdir(): if __name__ == '__main__': file = sys.argv[0] From python-checkins at python.org Sat Mar 29 14:24:24 2008 From: python-checkins at python.org (georg.brandl) Date: Sat, 29 Mar 2008 14:24:24 +0100 (CET) Subject: [Python-checkins] r62039 - in python/trunk: Include/Python-ast.h Lib/test/test_compile.py Parser/asdl_c.py Python/Python-ast.c Python/bltinmodule.c Message-ID: <20080329132424.5AD481E4003@bag.python.org> Author: georg.brandl Date: Sat Mar 29 14:24:23 2008 New Revision: 62039 Modified: python/trunk/Include/Python-ast.h python/trunk/Lib/test/test_compile.py python/trunk/Parser/asdl_c.py python/trunk/Python/Python-ast.c python/trunk/Python/bltinmodule.c Log: Properly check for consistency with the third argument of compile() when compiling an AST node. 
Modified: python/trunk/Include/Python-ast.h ============================================================================== --- python/trunk/Include/Python-ast.h (original) +++ python/trunk/Include/Python-ast.h Sat Mar 29 14:24:23 2008 @@ -501,5 +501,5 @@ alias_ty _Py_alias(identifier name, identifier asname, PyArena *arena); PyObject* PyAST_mod2obj(mod_ty t); -mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena); +mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena, int mode); int PyAST_Check(PyObject* obj); Modified: python/trunk/Lib/test/test_compile.py ============================================================================== --- python/trunk/Lib/test/test_compile.py (original) +++ python/trunk/Lib/test/test_compile.py Sat Mar 29 14:24:23 2008 @@ -441,6 +441,20 @@ self.assert_(type(ast) == _ast.Module) co2 = compile(ast, '%s3' % fname, 'exec') self.assertEqual(co1, co2) + # the code object's filename comes from the second compilation step + self.assertEqual(co2.co_filename, '%s3' % fname) + + # raise exception when node type doesn't match with compile mode + co1 = compile('print 1', '', 'exec', _ast.PyCF_ONLY_AST) + self.assertRaises(TypeError, compile, co1, '', 'eval') + + # raise exception when node type is no start node + self.assertRaises(TypeError, compile, _ast.If(), '', 'exec') + + # raise exception when node has invalid children + ast = _ast.Module() + ast.body = [_ast.BoolOp()] + self.assertRaises(TypeError, compile, ast, '', 'exec') def test_main(): Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Sat Mar 29 14:24:23 2008 @@ -954,13 +954,20 @@ return ast2obj_mod(t); } -mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena) +/* mode is 0 for "exec", 1 for "eval" and 2 for "single" input */ +mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena, int mode) { mod_ty res; + PyObject *req_type[] = {(PyObject*)Module_type, (PyObject*)Expression_type, + (PyObject*)Interactive_type}; + char *req_name[] = {"Module", "Expression", "Interactive"}; + assert(0 <= mode && mode <= 2); + init_types(); - if (!PyObject_IsInstance(ast, (PyObject*)mod_type)) { - PyErr_SetString(PyExc_TypeError, "expected either Module, Interactive " - "or Expression node"); + + if (!PyObject_IsInstance(ast, req_type[mode])) { + PyErr_Format(PyExc_TypeError, "expected %s node, got %.400s", + req_name[mode], Py_TYPE(ast)->tp_name); return NULL; } if (obj2ast_mod(ast, &res, arena) != 0) @@ -1016,7 +1023,7 @@ ) c.visit(mod) print >>f, "PyObject* PyAST_mod2obj(mod_ty t);" - print >>f, "mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena);" + print >>f, "mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena, int mode);" print >>f, "int PyAST_Check(PyObject* obj);" f.close() Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Sat Mar 29 14:24:23 2008 @@ -5944,13 +5944,20 @@ return ast2obj_mod(t); } -mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena) +/* mode is 0 for "exec", 1 for "eval" and 2 for "single" input */ +mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena, int mode) { mod_ty res; + PyObject *req_type[] = {(PyObject*)Module_type, (PyObject*)Expression_type, + (PyObject*)Interactive_type}; + char *req_name[] = {"Module", "Expression", "Interactive"}; + assert(0 <= mode && mode <= 2); + init_types(); - if 
(!PyObject_IsInstance(ast, (PyObject*)mod_type)) { - PyErr_SetString(PyExc_TypeError, "expected either Module, Interactive " - "or Expression node"); + + if (!PyObject_IsInstance(ast, req_type[mode])) { + PyErr_Format(PyExc_TypeError, "expected %s node, got %.400s", + req_name[mode], Py_TYPE(ast)->tp_name); return NULL; } if (obj2ast_mod(ast, &res, arena) != 0) Modified: python/trunk/Python/bltinmodule.c ============================================================================== --- python/trunk/Python/bltinmodule.c (original) +++ python/trunk/Python/bltinmodule.c Sat Mar 29 14:24:23 2008 @@ -466,7 +466,7 @@ char *str; char *filename; char *startstr; - int start; + int mode = -1; int dont_inherit = 0; int supplied_flags = 0; PyCompilerFlags cf; @@ -474,6 +474,7 @@ Py_ssize_t length; static char *kwlist[] = {"source", "filename", "mode", "flags", "dont_inherit", NULL}; + int start[] = {Py_file_input, Py_eval_input, Py_single_input}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "Oss|ii:compile", kwlist, &cmd, &filename, &startstr, @@ -495,6 +496,18 @@ PyEval_MergeCompilerFlags(&cf); } + if (strcmp(startstr, "exec") == 0) + mode = 0; + else if (strcmp(startstr, "eval") == 0) + mode = 1; + else if (strcmp(startstr, "single") == 0) + mode = 2; + else { + PyErr_SetString(PyExc_ValueError, + "compile() arg 3 must be 'exec', 'eval' or 'single'"); + return NULL; + } + if (PyAST_Check(cmd)) { if (supplied_flags & PyCF_ONLY_AST) { Py_INCREF(cmd); @@ -505,7 +518,7 @@ mod_ty mod; arena = PyArena_New(); - mod = PyAST_obj2mod(cmd, arena); + mod = PyAST_obj2mod(cmd, arena, mode); if (mod == NULL) { PyArena_Free(arena); return NULL; @@ -526,19 +539,6 @@ cf.cf_flags |= PyCF_SOURCE_IS_UTF8; } #endif - /* XXX: is it possible to pass start to the PyAST_ branch? */ - if (strcmp(startstr, "exec") == 0) - start = Py_file_input; - else if (strcmp(startstr, "eval") == 0) - start = Py_eval_input; - else if (strcmp(startstr, "single") == 0) - start = Py_single_input; - else { - PyErr_SetString(PyExc_ValueError, - "compile() arg 3 must be 'exec'" - "or 'eval' or 'single'"); - goto cleanup; - } if (PyObject_AsReadBuffer(cmd, (const void **)&str, &length)) goto cleanup; @@ -547,7 +547,7 @@ "compile() expected string without null bytes"); goto cleanup; } - result = Py_CompileStringFlags(str, filename, start, &cf); + result = Py_CompileStringFlags(str, filename, start[mode], &cf); cleanup: Py_XDECREF(tmp); return result; From buildbot at python.org Sat Mar 29 14:26:00 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 13:26:00 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080329132600.898DB1E4003@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1118 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 14:30:18 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 13:30:18 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080329133019.227601E4003@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1085 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 14:47:05 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 14:47:05 +0100 (CET) Subject: [Python-checkins] r62040 - python/trunk/Lib/test/test_socket.py Message-ID: <20080329134705.CA9D41E4003@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 14:47:05 2008 New Revision: 62040 Modified: python/trunk/Lib/test/test_socket.py Log: The buildbot "x86 W2k8 trunk" seems to hang in test_socket. http://www.python.org/dev/buildbot/trunk/x86%20W2k8%20trunk/builds/255/step-test/0 Temporarily increase verbosity of this test. Modified: python/trunk/Lib/test/test_socket.py ============================================================================== --- python/trunk/Lib/test/test_socket.py (original) +++ python/trunk/Lib/test/test_socket.py Sat Mar 29 14:47:05 2008 @@ -15,6 +15,14 @@ from weakref import proxy import signal +# Temporary hack to see why test_socket hangs on one buildbot +if os.environ.get('COMPUTERNAME') == "GRAPE": + def verbose_write(arg): + print >>sys.__stdout__, arg +else: + def verbose_write(arg): + pass + PORT = 50007 HOST = 'localhost' MSG = 'Michael Gilfix was here\n' @@ -22,6 +30,7 @@ class SocketTCPTest(unittest.TestCase): def setUp(self): + verbose_write(self) self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) global PORT @@ -31,10 +40,12 @@ def tearDown(self): self.serv.close() self.serv = None + verbose_write(str(self) + " done") class SocketUDPTest(unittest.TestCase): def setUp(self): + verbose_write(self) self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) global PORT @@ -43,6 +54,7 @@ def tearDown(self): self.serv.close() self.serv = None + verbose_write(str(self) + " done") class ThreadableTest: """Threadable Test class From python-checkins at python.org Sat Mar 29 15:53:05 2008 From: python-checkins at python.org (amaury.forgeotdarc) Date: Sat, 29 Mar 2008 15:53:05 +0100 (CET) Subject: [Python-checkins] r62042 - python/trunk/Lib/test/test_socket.py Message-ID: <20080329145305.7A0871E4003@bag.python.org> Author: amaury.forgeotdarc Date: Sat Mar 29 15:53:05 2008 New Revision: 62042 Modified: python/trunk/Lib/test/test_socket.py Log: Still investigating on the hanging test_socket. 
the test itself doesn't do anything on windows, focus on setUp and tearDown. Modified: python/trunk/Lib/test/test_socket.py ============================================================================== --- python/trunk/Lib/test/test_socket.py (original) +++ python/trunk/Lib/test/test_socket.py Sat Mar 29 15:53:05 2008 @@ -33,11 +33,15 @@ verbose_write(self) self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + verbose_write(str(self) + " socket created") global PORT PORT = test_support.bind_port(self.serv, HOST, PORT) + verbose_write(str(self) + " start listening") self.serv.listen(1) + verbose_write(str(self) + " started") def tearDown(self): + verbose_write(str(self) + " close") self.serv.close() self.serv = None verbose_write(str(self) + " done") @@ -45,7 +49,6 @@ class SocketUDPTest(unittest.TestCase): def setUp(self): - verbose_write(self) self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) global PORT @@ -54,7 +57,6 @@ def tearDown(self): self.serv.close() self.serv = None - verbose_write(str(self) + " done") class ThreadableTest: """Threadable Test class From buildbot at python.org Sat Mar 29 16:06:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 15:06:33 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux trunk Message-ID: <20080329150634.199891E4003@bag.python.org> The Buildbot has detected a new failure of ARM Linux trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20trunk/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-arm Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,andrew.kuchling,benjamin.peterson,christian.heimes,eric.smith,georg.brandl,gerhard.haering,gregory.p.smith,jeffrey.yasskin,jerry.seutter,mark.dickinson,martin.v.loewis,neal.norwitz,thomas.heller BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 16:24:27 2008 From: python-checkins at python.org (benjamin.peterson) Date: Sat, 29 Mar 2008 16:24:27 +0100 (CET) Subject: [Python-checkins] r62043 - in python/trunk: Demo/classes/Dbm.py Demo/curses/ncurses.py Demo/rpc/mountclient.py Demo/rpc/nfsclient.py Demo/rpc/rpc.py Demo/tkinter/guido/paint.py Lib/bsddb/dbshelve.py Lib/bsddb/test/test_basics.py Lib/bsddb/test/test_dbtables.py Lib/idlelib/AutoComplete.py Lib/idlelib/PyShell.py Lib/lib-tk/Tkinter.py Lib/lib-tk/turtle.py Lib/plat-mac/EasyDialogs.py Lib/plat-mac/FrameWork.py Lib/plat-mac/MiniAEFrame.py Lib/plat-mac/PixMapWrapper.py Lib/plat-mac/aepack.py Lib/plat-mac/buildtools.py Lib/plat-mac/findertools.py Lib/plat-mac/gensuitemodule.py Lib/plat-mac/ic.py Lib/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py Lib/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py Lib/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py Lib/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py Lib/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py Lib/plat-mac/lib-scriptpackages/Finder/Finder_Basics.py Lib/plat-mac/lib-scriptpackages/Finder/Legacy_suite.py Lib/plat-mac/lib-scriptpackages/Finder/Standard_Suite.py Lib/plat-mac/lib-scriptpackages/Netscape/Mozilla_suite.py Lib/plat-mac/lib-scriptpackages/Netscape/PowerPlant.py Lib/plat-mac/lib-scriptpackages/Netscape/Required_suite.py 
Lib/plat-mac/lib-scriptpackages/Netscape/WorldWideWeb_suite.py Lib/plat-mac/lib-scriptpackages/StdSuites/AppleScript_Suite.py Lib/plat-mac/lib-scriptpackages/StdSuites/Standard_Suite.py Lib/plat-mac/lib-scriptpackages/SystemEvents/Standard_Suite.py Lib/plat-mac/lib-scriptpackages/Terminal/Standard_Suite.py Lib/plat-mac/lib-scriptpackages/_builtinSuites/builtin_Suite.py Lib/plat-mac/macostools.py Lib/plat-mac/videoreader.py Lib/plat-os2emx/grp.py Lib/plat-os2emx/pwd.py Lib/test/test_ast.py Lib/test/test_mailbox.py Lib/test/test_pyclbr.py Lib/test/test_ssl.py Lib/xml/sax/expatreader.py Mac/BuildScript/build-installer.py Mac/Demo/applescript/Disk_Copy/Utility_Events.py Mac/Tools/Doc/HelpIndexingTool/Standard_Suite.py Mac/Tools/Doc/setup.py Mac/scripts/buildpkg.py Tools/bgen/bgen/bgenGenerator.py Message-ID: <20080329152427.463211E4003@bag.python.org> Author: benjamin.peterson Date: Sat Mar 29 16:24:25 2008 New Revision: 62043 Modified: python/trunk/Demo/classes/Dbm.py python/trunk/Demo/curses/ncurses.py python/trunk/Demo/rpc/mountclient.py python/trunk/Demo/rpc/nfsclient.py python/trunk/Demo/rpc/rpc.py python/trunk/Demo/tkinter/guido/paint.py python/trunk/Lib/bsddb/dbshelve.py python/trunk/Lib/bsddb/test/test_basics.py python/trunk/Lib/bsddb/test/test_dbtables.py python/trunk/Lib/idlelib/AutoComplete.py python/trunk/Lib/idlelib/PyShell.py python/trunk/Lib/lib-tk/Tkinter.py python/trunk/Lib/lib-tk/turtle.py python/trunk/Lib/plat-mac/EasyDialogs.py python/trunk/Lib/plat-mac/FrameWork.py python/trunk/Lib/plat-mac/MiniAEFrame.py python/trunk/Lib/plat-mac/PixMapWrapper.py python/trunk/Lib/plat-mac/aepack.py python/trunk/Lib/plat-mac/buildtools.py python/trunk/Lib/plat-mac/findertools.py python/trunk/Lib/plat-mac/gensuitemodule.py python/trunk/Lib/plat-mac/ic.py python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Finder_Basics.py python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Legacy_suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Standard_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Mozilla_suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/PowerPlant.py python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Required_suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/WorldWideWeb_suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/AppleScript_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/Standard_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/SystemEvents/Standard_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/Terminal/Standard_Suite.py python/trunk/Lib/plat-mac/lib-scriptpackages/_builtinSuites/builtin_Suite.py python/trunk/Lib/plat-mac/macostools.py python/trunk/Lib/plat-mac/videoreader.py python/trunk/Lib/plat-os2emx/grp.py python/trunk/Lib/plat-os2emx/pwd.py python/trunk/Lib/test/test_ast.py python/trunk/Lib/test/test_mailbox.py python/trunk/Lib/test/test_pyclbr.py python/trunk/Lib/test/test_ssl.py python/trunk/Lib/xml/sax/expatreader.py python/trunk/Mac/BuildScript/build-installer.py python/trunk/Mac/Demo/applescript/Disk_Copy/Utility_Events.py python/trunk/Mac/Tools/Doc/HelpIndexingTool/Standard_Suite.py 
python/trunk/Mac/Tools/Doc/setup.py python/trunk/Mac/scripts/buildpkg.py python/trunk/Tools/bgen/bgen/bgenGenerator.py Log: #2503 make singletons compared with "is" not == or != Thanks to Wummel for the patch Modified: python/trunk/Demo/classes/Dbm.py ============================================================================== --- python/trunk/Demo/classes/Dbm.py (original) +++ python/trunk/Demo/classes/Dbm.py Sat Mar 29 16:24:25 2008 @@ -50,7 +50,7 @@ value = d[key] print 'currently:', value value = input('value: ') - if value == None: + if value is None: del d[key] else: d[key] = value Modified: python/trunk/Demo/curses/ncurses.py ============================================================================== --- python/trunk/Demo/curses/ncurses.py (original) +++ python/trunk/Demo/curses/ncurses.py Sat Mar 29 16:24:25 2008 @@ -9,7 +9,7 @@ from curses import panel def wGetchar(win = None): - if win == None: win = stdscr + if win is None: win = stdscr return win.getch() def Getchar(): Modified: python/trunk/Demo/rpc/mountclient.py ============================================================================== --- python/trunk/Demo/rpc/mountclient.py (original) +++ python/trunk/Demo/rpc/mountclient.py Sat Mar 29 16:24:25 2008 @@ -100,7 +100,7 @@ # This function is called to cough up a suitable # authentication object for a call to procedure 'proc'. def mkcred(self): - if self.cred == None: + if self.cred is None: self.cred = rpc.AUTH_UNIX, rpc.make_auth_unix_default() return self.cred Modified: python/trunk/Demo/rpc/nfsclient.py ============================================================================== --- python/trunk/Demo/rpc/nfsclient.py (original) +++ python/trunk/Demo/rpc/nfsclient.py Sat Mar 29 16:24:25 2008 @@ -129,7 +129,7 @@ self.unpacker = NFSUnpacker('') def mkcred(self): - if self.cred == None: + if self.cred is None: self.cred = rpc.AUTH_UNIX, rpc.make_auth_unix_default() return self.cred @@ -170,7 +170,7 @@ for fileid, name, cookie in entries: list.append((fileid, name)) last_cookie = cookie - if eof or last_cookie == None: + if eof or last_cookie is None: break ra = (ra[0], last_cookie, ra[2]) return list @@ -184,7 +184,7 @@ else: filesys = None from mountclient import UDPMountClient, TCPMountClient mcl = TCPMountClient(host) - if filesys == None: + if filesys is None: list = mcl.Export() for item in list: print item Modified: python/trunk/Demo/rpc/rpc.py ============================================================================== --- python/trunk/Demo/rpc/rpc.py (original) +++ python/trunk/Demo/rpc/rpc.py Sat Mar 29 16:24:25 2008 @@ -264,13 +264,13 @@ def mkcred(self): # Override this to use more powerful credentials - if self.cred == None: + if self.cred is None: self.cred = (AUTH_NULL, make_auth_null()) return self.cred def mkverf(self): # Override this to use a more powerful verifier - if self.verf == None: + if self.verf is None: self.verf = (AUTH_NULL, make_auth_null()) return self.verf @@ -321,7 +321,7 @@ def bindresvport(sock, host): global last_resv_port_tried FIRST, LAST = 600, 1024 # Range of ports to try - if last_resv_port_tried == None: + if last_resv_port_tried is None: import os last_resv_port_tried = FIRST + os.getpid() % (LAST-FIRST) for i in range(last_resv_port_tried, LAST) + \ @@ -814,7 +814,7 @@ def session(self): call, host_port = self.sock.recvfrom(8192) reply = self.handle(call) - if reply != None: + if reply is not None: self.sock.sendto(reply, host_port) Modified: python/trunk/Demo/tkinter/guido/paint.py 
============================================================================== --- python/trunk/Demo/tkinter/guido/paint.py (original) +++ python/trunk/Demo/tkinter/guido/paint.py Sat Mar 29 16:24:25 2008 @@ -50,7 +50,7 @@ def motion(event): if b1 == "down": global xold, yold - if xold != None and yold != None: + if xold is not None and yold is not None: event.widget.create_line(xold,yold,event.x,event.y,smooth=TRUE) # here's where you draw it. smooth. neat. xold = event.x Modified: python/trunk/Lib/bsddb/dbshelve.py ============================================================================== --- python/trunk/Lib/bsddb/dbshelve.py (original) +++ python/trunk/Lib/bsddb/dbshelve.py Sat Mar 29 16:24:25 2008 @@ -133,7 +133,7 @@ def keys(self, txn=None): - if txn != None: + if txn is not None: return self.db.keys(txn) else: return self.db.keys() @@ -157,7 +157,7 @@ def items(self, txn=None): - if txn != None: + if txn is not None: items = self.db.items(txn) else: items = self.db.items() @@ -168,7 +168,7 @@ return newitems def values(self, txn=None): - if txn != None: + if txn is not None: values = self.db.values(txn) else: values = self.db.values() Modified: python/trunk/Lib/bsddb/test/test_basics.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_basics.py (original) +++ python/trunk/Lib/bsddb/test/test_basics.py Sat Mar 29 16:24:25 2008 @@ -363,7 +363,7 @@ else: if set_raises_error: self.fail("expected exception") - if n != None: + if n is not None: self.fail("expected None: %r" % (n,)) rec = c.get_both('0404', self.makeData('0404')) @@ -377,7 +377,7 @@ else: if get_raises_error: self.fail("expected exception") - if n != None: + if n is not None: self.fail("expected None: %r" % (n,)) if self.d.get_type() == db.DB_BTREE: Modified: python/trunk/Lib/bsddb/test/test_dbtables.py ============================================================================== --- python/trunk/Lib/bsddb/test/test_dbtables.py (original) +++ python/trunk/Lib/bsddb/test/test_dbtables.py Sat Mar 29 16:24:25 2008 @@ -323,7 +323,7 @@ self.tdb.Insert(tabname, {'Type': 'Unknown', 'Access': '0'}) def set_type(type): - if type == None: + if type is None: return 'MP3' return type Modified: python/trunk/Lib/idlelib/AutoComplete.py ============================================================================== --- python/trunk/Lib/idlelib/AutoComplete.py (original) +++ python/trunk/Lib/idlelib/AutoComplete.py Sat Mar 29 16:24:25 2008 @@ -35,10 +35,9 @@ "popupwait", type="int", default=0) def __init__(self, editwin=None): - if editwin == None: # subprocess and test - self.editwin = None - return self.editwin = editwin + if editwin is None: # subprocess and test + return self.text = editwin.text self.autocompletewindow = None Modified: python/trunk/Lib/idlelib/PyShell.py ============================================================================== --- python/trunk/Lib/idlelib/PyShell.py (original) +++ python/trunk/Lib/idlelib/PyShell.py Sat Mar 29 16:24:25 2008 @@ -932,7 +932,7 @@ "The program is still running!\n Do you want to kill it?", default="ok", parent=self.text) - if response == False: + if response is False: return "cancel" if self.reading: self.top.quit() Modified: python/trunk/Lib/lib-tk/Tkinter.py ============================================================================== --- python/trunk/Lib/lib-tk/Tkinter.py (original) +++ python/trunk/Lib/lib-tk/Tkinter.py Sat Mar 29 16:24:25 2008 @@ -188,7 +188,7 @@ else: self._name = 'PY_VAR' + repr(_varnum) 
_varnum += 1 - if value != None: + if value is not None: self.set(value) elif not self._tk.call("info", "exists", self._name): self.set(self._default) Modified: python/trunk/Lib/lib-tk/turtle.py ============================================================================== --- python/trunk/Lib/lib-tk/turtle.py (original) +++ python/trunk/Lib/lib-tk/turtle.py Sat Mar 29 16:24:25 2008 @@ -749,25 +749,25 @@ global _width, _height, _startx, _starty width = geometry.get('width',_width) - if width >= 0 or width == None: + if width >= 0 or width is None: _width = width else: raise ValueError, "width can not be less than 0" height = geometry.get('height',_height) - if height >= 0 or height == None: + if height >= 0 or height is None: _height = height else: raise ValueError, "height can not be less than 0" startx = geometry.get('startx', _startx) - if startx >= 0 or startx == None: + if startx >= 0 or startx is None: _startx = _startx else: raise ValueError, "startx can not be less than 0" starty = geometry.get('starty', _starty) - if starty >= 0 or starty == None: + if starty >= 0 or starty is None: _starty = starty else: raise ValueError, "startx can not be less than 0" Modified: python/trunk/Lib/plat-mac/EasyDialogs.py ============================================================================== --- python/trunk/Lib/plat-mac/EasyDialogs.py (original) +++ python/trunk/Lib/plat-mac/EasyDialogs.py Sat Mar 29 16:24:25 2008 @@ -79,7 +79,7 @@ return h = d.GetDialogItemAsControl(2) SetDialogItemText(h, lf2cr(msg)) - if ok != None: + if ok is not None: h = d.GetDialogItemAsControl(1) h.SetControlTitle(ok) d.SetDialogDefaultItem(1) @@ -116,10 +116,10 @@ SetDialogItemText(h, lf2cr(default)) d.SelectDialogItemText(4, 0, 999) # d.SetDialogItem(4, 0, 255) - if ok != None: + if ok is not None: h = d.GetDialogItemAsControl(1) h.SetControlTitle(ok) - if cancel != None: + if cancel is not None: h = d.GetDialogItemAsControl(2) h.SetControlTitle(cancel) d.SetDialogDefaultItem(1) @@ -160,10 +160,10 @@ SetControlData(pwd, kControlEditTextPart, kControlEditTextPasswordTag, default) d.SelectDialogItemText(4, 0, 999) Ctl.SetKeyboardFocus(d.GetDialogWindow(), pwd, kControlEditTextPart) - if ok != None: + if ok is not None: h = d.GetDialogItemAsControl(1) h.SetControlTitle(ok) - if cancel != None: + if cancel is not None: h = d.GetDialogItemAsControl(2) h.SetControlTitle(cancel) d.SetDialogDefaultItem(Dialogs.ok) @@ -204,19 +204,19 @@ # The question string is item 5 h = d.GetDialogItemAsControl(5) SetDialogItemText(h, lf2cr(question)) - if yes != None: + if yes is not None: if yes == '': d.HideDialogItem(2) else: h = d.GetDialogItemAsControl(2) h.SetControlTitle(yes) - if no != None: + if no is not None: if no == '': d.HideDialogItem(3) else: h = d.GetDialogItemAsControl(3) h.SetControlTitle(no) - if cancel != None: + if cancel is not None: if cancel == '': d.HideDialogItem(4) else: @@ -317,7 +317,7 @@ def set(self, value, max=None): """set(value) - Set progress bar position""" - if max != None: + if max is not None: self.maxval = max bar = self.d.GetDialogItemAsControl(3) if max <= 0: # indeterminate bar Modified: python/trunk/Lib/plat-mac/FrameWork.py ============================================================================== --- python/trunk/Lib/plat-mac/FrameWork.py (original) +++ python/trunk/Lib/plat-mac/FrameWork.py Sat Mar 29 16:24:25 2008 @@ -92,7 +92,7 @@ def setwatchcursor(): global _watch - if _watch == None: + if _watch is None: _watch = GetCursor(4).data SetCursor(_watch) @@ -129,7 +129,7 @@ 
self._quititem = MenuItem(m, "Quit", "Q", self._quit) def gethelpmenu(self): - if self._helpmenu == None: + if self._helpmenu is None: self._helpmenu = HelpMenu(self.menubar) return self._helpmenu @@ -266,7 +266,7 @@ else: name = "do_%d" % partcode - if wid == None: + if wid is None: # No window, or a non-python window try: handler = getattr(self, name) @@ -475,7 +475,7 @@ self.menus = None def addmenu(self, title, after = 0, id=None): - if id == None: + if id is None: id = self.getnextid() if DEBUG: print 'Newmenu', title, id # XXXX m = NewMenu(id, title) @@ -907,8 +907,8 @@ self.barx_enabled = self.bary_enabled = 1 x0, y0, x1, y1 = self.wid.GetWindowPort().GetPortBounds() vx, vy = self.getscrollbarvalues() - if vx == None: self.barx_enabled, vx = 0, 0 - if vy == None: self.bary_enabled, vy = 0, 0 + if vx is None: self.barx_enabled, vx = 0, 0 + if vy is None: self.bary_enabled, vy = 0, 0 if wantx: rect = x0-1, y1-(SCROLLBARWIDTH-1), x1-(SCROLLBARWIDTH-2), y1+1 self.barx = NewControl(self.wid, rect, "", 1, vx, 0, 32767, 16, 0) @@ -1007,7 +1007,7 @@ SetPort(self.wid) vx, vy = self.getscrollbarvalues() if self.barx: - if vx == None: + if vx is None: self.barx.HiliteControl(255) self.barx_enabled = 0 else: @@ -1017,7 +1017,7 @@ self.barx.HiliteControl(0) self.barx.SetControlValue(vx) if self.bary: - if vy == None: + if vy is None: self.bary.HiliteControl(255) self.bary_enabled = 0 else: Modified: python/trunk/Lib/plat-mac/MiniAEFrame.py ============================================================================== --- python/trunk/Lib/plat-mac/MiniAEFrame.py (original) +++ python/trunk/Lib/plat-mac/MiniAEFrame.py Sat Mar 29 16:24:25 2008 @@ -158,7 +158,7 @@ #Same try/except comment as above rv = _function(**_parameters) - if rv == None: + if rv is None: aetools.packevent(_reply, {}) else: aetools.packevent(_reply, {'----':rv}) Modified: python/trunk/Lib/plat-mac/PixMapWrapper.py ============================================================================== --- python/trunk/Lib/plat-mac/PixMapWrapper.py (original) +++ python/trunk/Lib/plat-mac/PixMapWrapper.py Sat Mar 29 16:24:25 2008 @@ -146,9 +146,9 @@ """Draw this pixmap into the given (default current) grafport.""" src = self.bounds dest = [x1,y1,x2,y2] - if x2 == None: + if x2 is None: dest[2] = x1 + src[2]-src[0] - if y2 == None: + if y2 is None: dest[3] = y1 + src[3]-src[1] if not port: port = Qd.GetPort() Qd.CopyBits(self.PixMap(), port.GetPortBitMapForCopyBits(), src, tuple(dest), Modified: python/trunk/Lib/plat-mac/aepack.py ============================================================================== --- python/trunk/Lib/plat-mac/aepack.py (original) +++ python/trunk/Lib/plat-mac/aepack.py Sat Mar 29 16:24:25 2008 @@ -77,7 +77,7 @@ else: return pack(x).AECoerceDesc(forcetype) - if x == None: + if x is None: return AE.AECreateDesc('null', '') if isinstance(x, AEDescType): Modified: python/trunk/Lib/plat-mac/buildtools.py ============================================================================== --- python/trunk/Lib/plat-mac/buildtools.py (original) +++ python/trunk/Lib/plat-mac/buildtools.py Sat Mar 29 16:24:25 2008 @@ -203,13 +203,13 @@ dummy, tmplowner = copyres(input, output, skiptypes, 1, progress) Res.CloseResFile(input) -## if ownertype == None: +## if ownertype is None: ## raise BuildError, "No owner resource found in either resource file or template" # Make sure we're manipulating the output resource file now Res.UseResFile(output) - if ownertype == None: + if ownertype is None: # No owner resource in the template. 
We have skipped the # Python owner resource, so we have to add our own. The relevant # bundle stuff is already included in the interpret/applet template. Modified: python/trunk/Lib/plat-mac/findertools.py ============================================================================== --- python/trunk/Lib/plat-mac/findertools.py (original) +++ python/trunk/Lib/plat-mac/findertools.py Sat Mar 29 16:24:25 2008 @@ -125,7 +125,7 @@ """comment: get or set the Finder-comment of the item, displayed in the 'Get Info' window.""" object = Carbon.File.FSRef(object) object_alias = object.FSNewAliasMonimal() - if comment == None: + if comment is None: return _getcomment(object_alias) else: return _setcomment(object_alias, comment) @@ -329,7 +329,7 @@ """label: set or get the label of the item. Specify file by name or fsspec.""" object = Carbon.File.FSRef(object) object_alias = object.FSNewAliasMinimal() - if index == None: + if index is None: return _getlabel(object_alias) if index < 0 or index > 7: index = 0 @@ -375,7 +375,7 @@ """ fsr = Carbon.File.FSRef(folder) folder_alias = fsr.FSNewAliasMinimal() - if view == None: + if view is None: return _getwindowview(folder_alias) return _setwindowview(folder_alias, view) @@ -533,7 +533,7 @@ Development opportunity: get and set the data as PICT.""" fsr = Carbon.File.FSRef(object) object_alias = fsr.FSNewAliasMinimal() - if icondata == None: + if icondata is None: return _geticon(object_alias) return _seticon(object_alias, icondata) Modified: python/trunk/Lib/plat-mac/gensuitemodule.py ============================================================================== --- python/trunk/Lib/plat-mac/gensuitemodule.py (original) +++ python/trunk/Lib/plat-mac/gensuitemodule.py Sat Mar 29 16:24:25 2008 @@ -770,7 +770,7 @@ fp.write(" if _object:\n") fp.write(" _arguments['----'] = _object\n") else: - fp.write(" if _no_object != None: raise TypeError, 'No direct arg expected'\n") + fp.write(" if _no_object is not None: raise TypeError, 'No direct arg expected'\n") fp.write("\n") # # Do enum-name substitution Modified: python/trunk/Lib/plat-mac/ic.py ============================================================================== --- python/trunk/Lib/plat-mac/ic.py (original) +++ python/trunk/Lib/plat-mac/ic.py Sat Mar 29 16:24:25 2008 @@ -202,12 +202,12 @@ self.ic.ICLaunchURL(hint, url, 0, len(url)) def parseurl(self, data, start=None, end=None, hint=""): - if start == None: + if start is None: selStart = 0 selEnd = len(data) else: selStart = selEnd = start - if end != None: + if end is not None: selEnd = end selStart, selEnd = self.ic.ICParseURL(hint, data, selStart, selEnd, self.h) return self.h.data, selStart, selEnd @@ -231,27 +231,27 @@ def launchurl(url, hint=""): global _dft_ic - if _dft_ic == None: _dft_ic = IC() + if _dft_ic is None: _dft_ic = IC() return _dft_ic.launchurl(url, hint) def parseurl(data, start=None, end=None, hint=""): global _dft_ic - if _dft_ic == None: _dft_ic = IC() + if _dft_ic is None: _dft_ic = IC() return _dft_ic.parseurl(data, start, end, hint) def mapfile(filename): global _dft_ic - if _dft_ic == None: _dft_ic = IC() + if _dft_ic is None: _dft_ic = IC() return _dft_ic.mapfile(filename) def maptypecreator(type, creator, filename=""): global _dft_ic - if _dft_ic == None: _dft_ic = IC() + if _dft_ic is None: _dft_ic = IC() return _dft_ic.maptypecreator(type, creator, filename) def settypecreator(file): global _dft_ic - if _dft_ic == None: _dft_ic = IC() + if _dft_ic is None: _dft_ic = IC() return _dft_ic.settypecreator(file) def _test(): 
Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py Sat Mar 29 16:24:25 2008 @@ -51,7 +51,7 @@ _subcode = 'MAKE' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -135,7 +135,7 @@ _subcode = 'EXPT' aetools.keysubst(_arguments, self._argmap_export) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -154,7 +154,7 @@ _subcode = 'RMOB' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -193,7 +193,7 @@ _subcode = 'RUN ' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -232,7 +232,7 @@ _subcode = 'UP2D' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py Sat Mar 29 16:24:25 2008 @@ -72,7 +72,7 @@ _subcode = 'ClsP' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -190,7 +190,7 @@ _subcode = 'GDoc' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -217,7 +217,7 @@ _subcode = 'Gref' aetools.keysubst(_arguments, self._argmap_Get_Preferences) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -263,7 +263,7 @@ _subcode = 'GetP' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -283,7 +283,7 @@ _subcode = 'GSeg' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No 
direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -324,7 +324,7 @@ _subcode = 'NsCl' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -410,7 +410,7 @@ _subcode = 'Make' aetools.keysubst(_arguments, self._argmap_Make_Project) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -503,7 +503,7 @@ _subcode = 'RemB' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -543,7 +543,7 @@ _subcode = 'ReFP' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -570,7 +570,7 @@ _subcode = 'RunP' aetools.keysubst(_arguments, self._argmap_Run_Project) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -682,7 +682,7 @@ _subcode = 'Pref' aetools.keysubst(_arguments, self._argmap_Set_Preferences) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -778,7 +778,7 @@ _subcode = 'UpdP' aetools.keysubst(_arguments, self._argmap_Update_Project) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py Sat Mar 29 16:24:25 2008 @@ -115,7 +115,7 @@ _subcode = 'crel' aetools.keysubst(_arguments, self._argmap_make) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py Sat Mar 29 16:24:25 2008 @@ -61,7 +61,7 @@ _subcode = 'quit' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -80,7 +80,7 @@ _subcode = 'oapp' if 
_arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py Sat Mar 29 16:24:25 2008 @@ -42,7 +42,7 @@ _subcode = 'CLSA' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -69,7 +69,7 @@ _subcode = 'CLOS' aetools.keysubst(_arguments, self._argmap_CloseWindow) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -110,7 +110,7 @@ _subcode = 'LSTW' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Finder_Basics.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Finder_Basics.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Finder_Basics.py Sat Mar 29 16:24:25 2008 @@ -20,7 +20,7 @@ _subcode = 'copy' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Legacy_suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Legacy_suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Legacy_suite.py Sat Mar 29 16:24:25 2008 @@ -20,7 +20,7 @@ _subcode = 'rest' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -39,7 +39,7 @@ _subcode = 'shut' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -58,7 +58,7 @@ _subcode = 'slep' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Standard_Suite.py ============================================================================== --- 
python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Standard_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Finder/Standard_Suite.py Sat Mar 29 16:24:25 2008 @@ -179,7 +179,7 @@ _subcode = 'crel' aetools.keysubst(_arguments, self._argmap_make) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -285,7 +285,7 @@ _subcode = 'quit' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Mozilla_suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Mozilla_suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Mozilla_suite.py Sat Mar 29 16:24:25 2008 @@ -21,7 +21,7 @@ _subcode = 'Impt' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -41,7 +41,7 @@ _subcode = 'upro' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -61,7 +61,7 @@ _subcode = 'wurl' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -126,7 +126,7 @@ _subcode = 'addr' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -165,7 +165,7 @@ _subcode = 'prfl' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/PowerPlant.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/PowerPlant.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/PowerPlant.py Sat Mar 29 16:24:25 2008 @@ -25,7 +25,7 @@ _subcode = 'sttg' aetools.keysubst(_arguments, self._argmap_SwitchTellTarget) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Required_suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Required_suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/Required_suite.py Sat Mar 29 
16:24:25 2008 @@ -61,7 +61,7 @@ _subcode = 'quit' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -80,7 +80,7 @@ _subcode = 'oapp' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/WorldWideWeb_suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/WorldWideWeb_suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Netscape/WorldWideWeb_suite.py Sat Mar 29 16:24:25 2008 @@ -154,7 +154,7 @@ _subcode = 'LSTW' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/AppleScript_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/AppleScript_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/AppleScript_Suite.py Sat Mar 29 16:24:25 2008 @@ -268,7 +268,7 @@ _subcode = 'actv' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -371,7 +371,7 @@ _subcode = 'tend' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -443,7 +443,7 @@ _subcode = 'idle' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -462,7 +462,7 @@ _subcode = 'noop' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -585,7 +585,7 @@ _subcode = 'log1' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -625,7 +625,7 @@ _subcode = 'log0' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -644,7 +644,7 @@ _subcode = 'tell' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No 
direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/Standard_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/Standard_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/StdSuites/Standard_Suite.py Sat Mar 29 16:24:25 2008 @@ -255,7 +255,7 @@ _subcode = 'crel' aetools.keysubst(_arguments, self._argmap_make) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -345,7 +345,7 @@ _subcode = 'quit' aetools.keysubst(_arguments, self._argmap_quit) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'savo', _Enum_savo) @@ -365,7 +365,7 @@ _subcode = 'rapp' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -384,7 +384,7 @@ _subcode = 'oapp' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/SystemEvents/Standard_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/SystemEvents/Standard_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/SystemEvents/Standard_Suite.py Sat Mar 29 16:24:25 2008 @@ -175,7 +175,7 @@ _subcode = 'crel' aetools.keysubst(_arguments, self._argmap_make) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/Terminal/Standard_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/Terminal/Standard_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/Terminal/Standard_Suite.py Sat Mar 29 16:24:25 2008 @@ -175,7 +175,7 @@ _subcode = 'crel' aetools.keysubst(_arguments, self._argmap_make) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Lib/plat-mac/lib-scriptpackages/_builtinSuites/builtin_Suite.py ============================================================================== --- python/trunk/Lib/plat-mac/lib-scriptpackages/_builtinSuites/builtin_Suite.py (original) +++ python/trunk/Lib/plat-mac/lib-scriptpackages/_builtinSuites/builtin_Suite.py Sat Mar 29 16:24:25 2008 @@ -37,7 +37,7 @@ _subcode = 'oapp' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, 
_arguments, _attributes = self.send(_code, _subcode, @@ -56,7 +56,7 @@ _subcode = 'rapp' if _arguments: raise TypeError, 'No optional args expected' - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, @@ -100,7 +100,7 @@ _subcode = 'quit' aetools.keysubst(_arguments, self._argmap_quit) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'savo', _Enum_savo) Modified: python/trunk/Lib/plat-mac/macostools.py ============================================================================== --- python/trunk/Lib/plat-mac/macostools.py (original) +++ python/trunk/Lib/plat-mac/macostools.py Sat Mar 29 16:24:25 2008 @@ -106,7 +106,7 @@ sf = srcfss.FSpGetFInfo() df = dstfss.FSpGetFInfo() df.Creator, df.Type = sf.Creator, sf.Type - if forcetype != None: + if forcetype is not None: df.Type = forcetype df.Flags = (sf.Flags & COPY_FLAGS) dstfss.FSpSetFInfo(df) Modified: python/trunk/Lib/plat-mac/videoreader.py ============================================================================== --- python/trunk/Lib/plat-mac/videoreader.py (original) +++ python/trunk/Lib/plat-mac/videoreader.py Sat Mar 29 16:24:25 2008 @@ -188,7 +188,7 @@ def GetVideoFrameRate(self): tv = self.videocurtime - if tv == None: + if tv is None: tv = 0 flags = QuickTime.nextTimeStep|QuickTime.nextTimeEdgeOK tv, dur = self.videomedia.GetMediaNextInterestingTime(flags, tv, 1.0) @@ -199,7 +199,7 @@ if not time is None: self.audiocurtime = time flags = QuickTime.nextTimeStep|QuickTime.nextTimeEdgeOK - if self.audiocurtime == None: + if self.audiocurtime is None: self.audiocurtime = 0 tv = self.audiomedia.GetMediaNextInterestingTimeOnly(flags, self.audiocurtime, 1.0) if tv < 0 or (self.audiocurtime and tv < self.audiocurtime): @@ -215,7 +215,7 @@ if not time is None: self.videocurtime = time flags = QuickTime.nextTimeStep - if self.videocurtime == None: + if self.videocurtime is None: flags = flags | QuickTime.nextTimeEdgeOK self.videocurtime = 0 tv = self.videomedia.GetMediaNextInterestingTimeOnly(flags, self.videocurtime, 1.0) Modified: python/trunk/Lib/plat-os2emx/grp.py ============================================================================== --- python/trunk/Lib/plat-os2emx/grp.py (original) +++ python/trunk/Lib/plat-os2emx/grp.py Sat Mar 29 16:24:25 2008 @@ -143,7 +143,7 @@ while 1: entry = group.readline().strip() if len(entry) > 3: - if sep == None: + if sep is None: sep = __get_field_sep(entry) fields = entry.split(sep) fields[2] = int(fields[2]) Modified: python/trunk/Lib/plat-os2emx/pwd.py ============================================================================== --- python/trunk/Lib/plat-os2emx/pwd.py (original) +++ python/trunk/Lib/plat-os2emx/pwd.py Sat Mar 29 16:24:25 2008 @@ -167,7 +167,7 @@ while 1: entry = passwd.readline().strip() if len(entry) > 6: - if sep == None: + if sep is None: sep = __get_field_sep(entry) fields = entry.split(sep) for i in (2, 3): Modified: python/trunk/Lib/test/test_ast.py ============================================================================== --- python/trunk/Lib/test/test_ast.py (original) +++ python/trunk/Lib/test/test_ast.py Sat Mar 29 16:24:25 2008 @@ -130,7 +130,7 @@ def test_order(ast_node, parent_pos): - if not isinstance(ast_node, _ast.AST) or ast_node._fields == None: + if not isinstance(ast_node, _ast.AST) or 
ast_node._fields is None: return if isinstance(ast_node, (_ast.expr, _ast.stmt, _ast.excepthandler)): node_pos = (ast_node.lineno, ast_node.col_offset) @@ -141,7 +141,7 @@ if isinstance(value, list): for child in value: test_order(child, parent_pos) - elif value != None: + elif value is not None: test_order(value, parent_pos) def run_tests(): Modified: python/trunk/Lib/test/test_mailbox.py ============================================================================== --- python/trunk/Lib/test/test_mailbox.py (original) +++ python/trunk/Lib/test/test_mailbox.py Sat Mar 29 16:24:25 2008 @@ -630,9 +630,9 @@ "tmp")), "File in wrong location: '%s'" % head) match = pattern.match(tail) - self.assert_(match != None, "Invalid file name: '%s'" % tail) + self.assert_(match is not None, "Invalid file name: '%s'" % tail) groups = match.groups() - if previous_groups != None: + if previous_groups is not None: self.assert_(int(groups[0] >= previous_groups[0]), "Non-monotonic seconds: '%s' before '%s'" % (previous_groups[0], groups[0])) Modified: python/trunk/Lib/test/test_pyclbr.py ============================================================================== --- python/trunk/Lib/test/test_pyclbr.py (original) +++ python/trunk/Lib/test/test_pyclbr.py Sat Mar 29 16:24:25 2008 @@ -57,7 +57,7 @@ ignore are ignored. If no module is provided, the appropriate module is loaded with __import__.''' - if module == None: + if module is None: # Import it. # ('' is to work around an API silliness in __import__) module = __import__(moduleName, globals(), {}, ['']) Modified: python/trunk/Lib/test/test_ssl.py ============================================================================== --- python/trunk/Lib/test/test_ssl.py (original) +++ python/trunk/Lib/test/test_ssl.py Sat Mar 29 16:24:25 2008 @@ -546,7 +546,7 @@ expectedToWork, certsreqs=None): - if certsreqs == None: + if certsreqs is None: certsreqs = ssl.CERT_NONE if certsreqs == ssl.CERT_NONE: Modified: python/trunk/Lib/xml/sax/expatreader.py ============================================================================== --- python/trunk/Lib/xml/sax/expatreader.py (original) +++ python/trunk/Lib/xml/sax/expatreader.py Sat Mar 29 16:24:25 2008 @@ -107,7 +107,7 @@ xmlreader.IncrementalParser.parse(self, source) def prepareParser(self, source): - if source.getSystemId() != None: + if source.getSystemId() is not None: self._parser.SetBase(source.getSystemId()) # Redefined setContentHandler to allow changing handlers during parsing Modified: python/trunk/Mac/BuildScript/build-installer.py ============================================================================== --- python/trunk/Mac/BuildScript/build-installer.py (original) +++ python/trunk/Mac/BuildScript/build-installer.py Sat Mar 29 16:24:25 2008 @@ -289,7 +289,7 @@ fd = os.popen(commandline, 'r') data = fd.read() xit = fd.close() - if xit != None: + if xit is not None: sys.stdout.write(data) raise RuntimeError, "command failed: %s"%(commandline,) @@ -300,7 +300,7 @@ fd = os.popen(commandline, 'r') data = fd.read() xit = fd.close() - if xit != None: + if xit is not None: sys.stdout.write(data) raise RuntimeError, "command failed: %s"%(commandline,) Modified: python/trunk/Mac/Demo/applescript/Disk_Copy/Utility_Events.py ============================================================================== --- python/trunk/Mac/Demo/applescript/Disk_Copy/Utility_Events.py (original) +++ python/trunk/Mac/Demo/applescript/Disk_Copy/Utility_Events.py Sat Mar 29 16:24:25 2008 @@ -26,7 +26,7 @@ _subcode = 'SEL1' 
aetools.keysubst(_arguments, self._argmap_select_disk_image) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'SELp', _Enum_TEXT) @@ -52,7 +52,7 @@ _subcode = 'SEL2' aetools.keysubst(_arguments, self._argmap_select_DiskScript) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'SELp', _Enum_TEXT) @@ -78,7 +78,7 @@ _subcode = 'SEL3' aetools.keysubst(_arguments, self._argmap_select_disk_image_or_DiskScript) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'SELp', _Enum_TEXT) @@ -104,7 +104,7 @@ _subcode = 'SEL4' aetools.keysubst(_arguments, self._argmap_select_floppy_disk_image) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'SELp', _Enum_TEXT) @@ -130,7 +130,7 @@ _subcode = 'SEL5' aetools.keysubst(_arguments, self._argmap_select_disk) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'SELp', _Enum_TEXT) @@ -156,7 +156,7 @@ _subcode = 'SEL6' aetools.keysubst(_arguments, self._argmap_select_folder) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' aetools.enumsubst(_arguments, 'SELp', _Enum_TEXT) Modified: python/trunk/Mac/Tools/Doc/HelpIndexingTool/Standard_Suite.py ============================================================================== --- python/trunk/Mac/Tools/Doc/HelpIndexingTool/Standard_Suite.py (original) +++ python/trunk/Mac/Tools/Doc/HelpIndexingTool/Standard_Suite.py Sat Mar 29 16:24:25 2008 @@ -103,7 +103,7 @@ _subcode = 'crel' aetools.keysubst(_arguments, self._argmap_make) - if _no_object != None: raise TypeError, 'No direct arg expected' + if _no_object is not None: raise TypeError, 'No direct arg expected' _reply, _arguments, _attributes = self.send(_code, _subcode, Modified: python/trunk/Mac/Tools/Doc/setup.py ============================================================================== --- python/trunk/Mac/Tools/Doc/setup.py (original) +++ python/trunk/Mac/Tools/Doc/setup.py Sat Mar 29 16:24:25 2008 @@ -175,10 +175,10 @@ ('root', 'root')) # import pdb ; pdb.set_trace() build_cmd = self.get_finalized_command('build') - if self.build_dest == None: + if self.build_dest is None: build_cmd = self.get_finalized_command('build') self.build_dest = build_cmd.build_dest - if self.install_doc == None: + if self.install_doc is None: self.install_doc = os.path.join(self.prefix, DESTDIR) print 'INSTALL', self.build_dest, '->', self.install_doc Modified: python/trunk/Mac/scripts/buildpkg.py ============================================================================== --- python/trunk/Mac/scripts/buildpkg.py (original) +++ python/trunk/Mac/scripts/buildpkg.py Sat Mar 29 16:24:25 2008 @@ -183,7 +183,7 @@ # set folder attributes self.sourceFolder = root - if resources == None: + if resources is None: self.resourceFolder = root else: self.resourceFolder = resources Modified: python/trunk/Tools/bgen/bgen/bgenGenerator.py ============================================================================== --- 
python/trunk/Tools/bgen/bgen/bgenGenerator.py (original) +++ python/trunk/Tools/bgen/bgen/bgenGenerator.py Sat Mar 29 16:24:25 2008 @@ -148,7 +148,7 @@ for arg in self.argumentList: if arg.flags == ErrorMode or arg.flags == SelfMode: continue - if arg.type == None: + if arg.type is None: str = 'void' else: if hasattr(arg.type, 'typeName'):
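
The checkin above is a mechanical sweep that replaces equality comparisons against None (x == None, x != None) with identity comparisons (x is None, x is not None), the form PEP 8 recommends for comparisons to singletons. A minimal sketch of why the identity test is the more robust one; the class Always below is purely hypothetical and is not part of the patched code:

    class Always(object):
        """Hypothetical type whose __eq__ claims equality with everything."""
        def __eq__(self, other):
            return True

    obj = Always()
    print obj == None     # True  -- a user-defined __eq__ can fool the equality test
    print obj is None     # False -- object identity cannot be overridden
    print None is None    # True  -- None is a singleton, so 'is' is always reliable

Skipping the rich-comparison machinery also makes the identity test marginally faster, but robustness against overloaded __eq__/__ne__ is the main reason to prefer it.
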
From buildbot at python.org Sat Mar 29 16:34:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 15:34:33 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080329153434.1B5351E4003@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1205 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,georg.brandl,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 19 tests failed: test_array test_bufio test_deque test_distutils test_file test_gzip test_hotshot test_io test_iter test_mailbox test_mmap test_multibytecodec test_set test_univnewlines test_urllib test_urllib2 test_uu test_zipfile test_zipimport ====================================================================== ERROR: test_tofromfile (test.test_array.UnicodeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_tofromfile (test.test_array.UnsignedIntTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_array.py", line 166, in test_tofromfile f = open(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_nullpat (test.test_bufio.BufferSizeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 59, in test_nullpat self.drive_one("\0" * 1000) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 51, in drive_one self.try_one(teststring[:-1]) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 21, in try_one f = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_primepat (test.test_bufio.BufferSizeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 56, in test_primepat self.drive_one("1234567890\00\01\02\03\04\05\06") File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 49, in drive_one self.try_one(teststring) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_bufio.py", line 21, in try_one f = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_maxlen
(test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 78, in test_maxlen fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_print (test.test_deque.TestBasic) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_deque.py", line 284, in test_print fo = open(test_support.TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 48, in test_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 85, in test_readline self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 98, in test_readlines self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_read (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 112, in test_seek_read self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test'
====================================================================== ERROR: test_seek_whence (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 132, in test_seek_whence self.test_write() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_seek_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 144, in test_seek_write f = gzip.GzipFile(self.filename, 'w') File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_write (test.test_gzip.TestGzip) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_gzip.py", line 38, in test_write f = gzip.GzipFile(self.filename, 'wb') ; f.write(data1 * 50) File "C:\buildbot\work\trunk.heller-windows\build\lib\gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_addinfo (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 74, in test_addinfo profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_bad_sys_path (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 118, in test_bad_sys_path self.assertRaises(RuntimeError, coverage, test_support.TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\unittest.py", line 329, in failUnlessRaises callableObj(*args, **kwargs) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_line_numbers (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 98, in test_line_numbers self.run_test(g, events, self.new_profiler(lineevents=1)) 
File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_start_stop (test.test_hotshot.HotShotTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 104, in test_start_stop profiler = self.new_profiler() File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_hotshot.py", line 42, in new_profiler return hotshot.Profile(self.logfn, lineevents, linetimings) File "C:\buildbot\work\trunk.heller-windows\build\lib\hotshot\__init__.py", line 15, in __init__ logfn, self.lineevents, self.linetimings) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_add_and_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_from_string (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_mbox_or_mmdf_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMbox) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 811, in _factory = lambda self, path, factory=None: mailbox.mbox(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 736, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_from_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_mbox_or_mmdf_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_conflict (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_open_close_open (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_relock (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMMDF) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 816, in _factory = lambda self, path, factory=None: mailbox.MMDF(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 768, in __init__ _mboxMMDF.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 
'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add_and_remove_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, 
factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_file (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) 
File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_folder (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_has_key (test.test_mailbox.TestMH) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_items (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iter (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iteritems (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_iterkeys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' 
====================================================================== ERROR: test_itervalues (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_keys (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_len (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_list_folders (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_lock_unlock (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", 
line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pack (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_pop (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_popitem (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_remove (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_sequences (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 
821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_set_item (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_update (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_values (test.test_mailbox.TestMH) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 821, in _factory = lambda self, path, factory=None: mailbox.MH(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 815, in __init__ os.mkdir(self._path, 0700) WindowsError: [Error 5] Access is denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_add (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_clear (test.test_mailbox.TestBabyl) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_close (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_contains (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_delitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_discard 
(test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_dump_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_flush (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: 
test_get_file (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_message (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_get_string (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' ====================================================================== ERROR: test_getitem (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 58, in setUp self._box = self._factory(self._path) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mailbox.py", line 940, in _factory = lambda self, path, factory=None: mailbox.Babyl(path, factory) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 1123, in __init__ _singlefileMailbox.__init__(self, path, factory, create) File "C:\buildbot\work\trunk.heller-windows\build\lib\mailbox.py", line 515, in __init__ f = open(self._path, 'rb') IOError: [Errno 2] No such file or directory: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\@test' 
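All of the mailbox failures above come from setUp() trying to open or create
test_support.TESTFN ('@test') under the PCbuild directory and getting
EACCES/ERROR_ACCESS_DENIED. That pattern would be consistent with a stale '@test'
file or directory left on the slave with permissions the build user cannot
override, though the excerpt alone cannot confirm that (the later [Errno 2] for
TestBabyl.test_getitem suggests the entry may also have vanished mid-run). As a
purely illustrative sketch, not part of the test suite or the buildbot
configuration, a slave could clear such a leftover before the run roughly like
this (the helper name and the chmod/rmtree strategy are assumptions):

    import os
    import shutil
    import stat

    def clear_stale_testfn(path):
        """Hypothetical helper: remove a leftover '@test' file or directory.

        Assumes the current user is allowed to reset the mode and delete the
        entry; if not, the same access-denied errors seen in the log would
        simply surface here instead of in each test's setUp().
        """
        if not os.path.lexists(path):
            return
        try:
            # On Windows, os.chmod() only toggles the read-only attribute.
            os.chmod(path, stat.S_IRWXU)
        except OSError:
            pass
        if os.path.isdir(path):
            shutil.rmtree(path, ignore_errors=True)
        elif os.path.exists(path):
            os.unlink(path)

    # e.g. clear_stale_testfn(r'C:\buildbot\work\trunk.heller-windows\build\PCbuild\@test')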
======================================================================
ERROR: test_basic (test.test_mmap.MmapTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_mmap.py", line 24, in test_basic
    f = open(TESTFN, 'w+')
IOError: [Errno 13] Permission denied: '@test'

[the same IOError is raised when opening TESTFN in test_double_close
 (test_mmap.py line 296), test_entire_file (line 310), test_find_end (line 260),
 test_move (line 324), test_offset (line 388), test_rfind (line 278) and
 test_tougher_find (line 242) of test.test_mmap.MmapTests]

======================================================================
ERROR: test_bug1728403 (test.test_multibytecodec.Test_StreamReader)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_multibytecodec.py", line 148, in test_bug1728403
    os.unlink(TESTFN)
WindowsError: [Error 5] Access is denied: '@test'

======================================================================
ERROR: test_read (test.test_univnewlines.TestNativeNewlines)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_univnewlines.py", line 40, in setUp
    fp = open(test_support.TESTFN, self.WRITEMODE)
IOError: [Errno 13] Permission denied: '@test'

[the same setUp traceback is repeated for test_readline, test_readlines and
 test_seek of TestNativeNewlines; for test_execfile, test_read, test_readline,
 test_readlines and test_seek of TestCRNewlines and of TestLFNewlines; for
 test_execfile, test_read, test_readline, test_readlines, test_seek and
 test_tell of TestCRLFNewlines; and for test_execfile, test_read,
 test_readline, test_readlines and test_seek of TestMixedNewlines
 (all in test.test_univnewlines)]

======================================================================
ERROR: test_close (test.test_urllib.urlopen_FileTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 30, in setUp
    FILE = file(test_support.TESTFN, 'wb')
IOError: [Errno 13] Permission denied: '@test'

[the same setUp traceback is repeated for test_fileno, test_getcode,
 test_geturl, test_info, test_interface and test_iter of
 test.test_urllib.urlopen_FileTests]

======================================================================
ERROR: test_read (test.test_urllib.urlopen_FileTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 30, in setUp
    FILE = file(test_support.TESTFN, 'wb')
IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readline (test.test_urllib.urlopen_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 30, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_readlines (test.test_urllib.urlopen_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 30, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_basic (test.test_urllib.urlretrieve_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 169, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_copy (test.test_urllib.urlretrieve_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 169, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_reporthook (test.test_urllib.urlretrieve_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 169, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_reporthook_0_bytes (test.test_urllib.urlretrieve_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 169, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_reporthook_5_bytes (test.test_urllib.urlretrieve_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 169, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_reporthook_8193_bytes (test.test_urllib.urlretrieve_FileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_urllib.py", line 169, in setUp FILE = file(test_support.TESTFN, 'wb') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_encode (test.test_uu.UUFileTest) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_uu.py", line 117, in test_encode fin = open(self.tmpin, 'wb') IOError: [Errno 13] Permission denied: '@testi' ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 240, in testAbsoluteArcnames zipfp.write(TESTFN, "/absolute") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAbsoluteArcnames (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 266, in testAppendToNonZipFile zipfp.write(TESTFN, TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAppendToNonZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 250, in testAppendToZipFile zipfp.write(TESTFN, TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testAppendToZipFile (test.test_zipfile.TestsWithSourceFile) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 203, in testDeflated self.zipTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 44, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 307, in testExtract zipfp.writestr(fpath, fdata) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testExtract (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 336, in testExtractAll zipfp.writestr(fpath, fdata) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize 
would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testExtractAll (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 223, in testIterlinesDeflated self.zipIterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 179, in zipIterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 198, in testIterlinesStored self.zipIterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 179, in zipIterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' 
====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 207, in testOpenDeflated self.zipOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 107, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 133, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 107, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 211, in testRandomOpenDeflated self.zipRandomOpenTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 136, in zipRandomOpenTest self.makeTestArchive(f, compression) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 153, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 136, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 215, in testReadlineDeflated self.zipReadlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 156, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated 
(test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 190, in testReadlineStored self.zipReadlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 156, in zipReadlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 219, in testReadlinesDeflated self.zipReadlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 168, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 194, in testReadlinesStored self.zipReadlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 168, in zipReadlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 104, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 44, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 38, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 286, in test_PerFileCompression zipfp.write(TESTFN, 'storeme', zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions 
====================================================================== ERROR: test_PerFileCompression (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 279, in test_WriteDefaultName zipfp.write(TESTFN) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: test_WriteDefaultName (test.test_zipfile.TestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 358, in tearDown os.remove(TESTFN2) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: '@test2' ====================================================================== ERROR: testCloseErroneousFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 583, in testCloseErroneousFile fp = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testClosedZipRaisesRuntimeError (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 627, in testClosedZipRaisesRuntimeError zipf.writestr("foo.txt", "O, for a Muse of Fire!") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIsZipErroneousFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 594, in testIsZipErroneousFile fp = open(TESTFN, "w") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testIsZipValidFile (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 
603, in testIsZipValidFile zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_BadOpenMode (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 647, in test_BadOpenMode zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_NullByteInFilename (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 681, in test_NullByteInFilename zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_OpenNonexistentItem (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 672, in test_OpenNonexistentItem zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: test_Read0 (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 659, in test_Read0 zipf = zipfile.ZipFile(TESTFN, mode="w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testWriteNonPyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 550, in testWriteNonPyfile file(TESTFN, 'w').write('most definitely not a python file') IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testWritePyfile (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 490, in testWritePyfile zipfp.writepy(fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1174, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonDirectory (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 538, in testWritePythonDirectory zipfp.writepy(TESTFN2) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1166, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testWritePythonPackage (test.test_zipfile.PyZipFileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 515, in testWritePythonPackage zipfp.writepy(packagedir) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 1137, in writepy self.write(fname, arcname) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 716, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testGoodPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 716, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testNoPassword (test.test_zipfile.DecryptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 716, in setUp fp = open(TESTFN, "wb") IOError: [Errno 13] Permission denied: '@test' ====================================================================== ERROR: testDifferentFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testInterleaved (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testSameFile (test.test_zipfile.TestsWithMultipleOpens) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 846, in setUp zipfp.writestr('ones', '1'*FIXEDTEST_SIZE) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 990, in testIterlinesDeflated self.iterlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testIterlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 973, in testIterlinesStored self.iterlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 949, in iterlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would 
require ZIP64 extensions ====================================================================== ERROR: testReadDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 978, in testReadDeflated self.readTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 961, in testReadStored self.readTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 913, in readTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 982, in testReadlineDeflated self.readlineTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlineStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 965, in testReadlineStored self.readlineTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 924, in readlineTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", 
line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesDeflated (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 986, in testReadlinesDeflated self.readlinesTest(f, zipfile.ZIP_DEFLATED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testReadlinesStored (test.test_zipfile.UniversalNewlineTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 969, in testReadlinesStored self.readlinesTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 937, in readlinesTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 909, in makeTestArchive zipfp.write(fn, fn) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 818, in testOpenStored self.zipOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 787, in zipOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testRandomOpenStored (test.test_zipfile.TestsWithRandomBinaryFiles) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 840, in testRandomOpenStored self.zipRandomOpenTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 821, in zipRandomOpenTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testStored (test.test_zipfile.TestsWithRandomBinaryFiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 784, in testStored self.zipTest(f, zipfile.ZIP_STORED) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 772, in zipTest self.makeTestArchive(f, compression) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 767, in makeTestArchive zipfp.write(TESTFN, "another"+os.extsep+"name") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 928, in write self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== FAIL: testCreateNonExistentFileForAppend (test.test_zipfile.OtherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipfile.py", line 568, in testCreateNonExistentFileForAppend self.fail('Could not append data to a non-existent zip file.') AssertionError: Could not append data to a non-existent zip file. 
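Every zipfile ERROR above ends in the same LargeZipFile exception raised from ZipFile._writecheck(). Under normal conditions that check only fires when a member's recorded size (or the archive offset) exceeds the ZIP64 limit while the archive was opened with allowZip64=False; opening with allowZip64=True disables it. A minimal sketch of what the check is meant to do, deliberately poking the same internal _writecheck() named in the tracebacks; it does not explain why tiny test files tripped it on this buildslave:

# Minimal sketch of when zipfile is supposed to raise LargeZipFile: the
# archive has allowZip64=False and a member's metadata claims a size past
# ZIP64_LIMIT.  _writecheck() is the internal helper in the tracebacks;
# forcing file_size avoids actually writing a multi-gigabyte file.
import os
import zipfile

info = zipfile.ZipInfo('big.bin')
info.file_size = zipfile.ZIP64_LIMIT + 1   # pretend the member is huge
info.header_offset = 0                     # normally filled in by write()

zf = zipfile.ZipFile('demo.zip', 'w', zipfile.ZIP_STORED, allowZip64=False)
try:
    zf._writecheck(info)                   # the check raising in the log
except zipfile.LargeZipFile as e:
    print('rejected without ZIP64: %s' % e)
zf.close()

zf64 = zipfile.ZipFile('demo64.zip', 'w', zipfile.ZIP_STORED, allowZip64=True)
zf64._writecheck(info)                     # same metadata passes the check
zf64.close()

for name in ('demo.zip', 'demo64.zip'):
    os.remove(name)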
====================================================================== ERROR: testBadMTime (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testBoth (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDeepPackage (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback 
(most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestFile (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 73, in doTest z.writestr(zinfo, data) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 987, in writestr self._writecheck(zinfo) File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 896, in _writecheck raise LargeZipFile("Filesize would require ZIP64 extensions") LargeZipFile: Filesize would require ZIP64 extensions ====================================================================== ERROR: testEmptyPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 91, in doTest ["__dummy__"]) ImportError: No module named ziptestmodule ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource 
self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.UncompressedZipImportTestCase) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.UncompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' 
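One plausible reading of the cascade of [Errno 13] errors: an earlier exception (the LargeZipFile failures above) escaped the test helper before the freshly opened ZipFile was closed, so junk95142.zip is still held open for writing and Windows refuses every later attempt to re-create it. A hypothetical, hardened variant of such a helper (not the committed test code) closes the handle unconditionally:

    from zipfile import ZipFile

    def build_test_zip(path, members):
        z = ZipFile(path, "w")
        try:
            for name, data in members:
                z.writestr(name, data)   # may raise, e.g. zipfile.LargeZipFile
        finally:
            z.close()                    # always release the file handle on Windows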
====================================================================== ERROR: testBadMTime (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 183, in testBadMTime self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 161, in testBadMagic self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBadMagic2 (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 170, in testBadMagic2 self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testBoth (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 148, in testBoth self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDeepPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 197, in testDeepPackage self.doTest(pyc_ext, files, TESTPACK, TESTPACK2, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestFile (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 299, in testDoctestFile self.runDoctest(self.doDoctestFile) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testDoctestSuite (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 310, in testDoctestSuite self.runDoctest(self.doDoctestSuite) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 284, in runDoctest self.doTest(".py", files, TESTMOD, call=callback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testEmptyPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 152, in testEmptyPy self.doTest(None, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetCompiledSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 279, in testGetCompiledSource self.doTest(pyc_ext, files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 
'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetData (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 236, in testGetData z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testGetSource (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 273, in testGetSource self.doTest(".py", files, TESTMOD, call=self.assertModuleSource) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImport_WithStuff (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 266, in testImport_WithStuff stuff="Some Stuff"*31) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testImporterAttr (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 259, in testImporterAttr self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPackage (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 189, in testPackage self.doTest(pyc_ext, files, TESTPACK, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File 
"C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPy (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 139, in testPy self.doTest(".py", files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testPyc (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 143, in testPyc self.doTest(pyc_ext, files, TESTMOD) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testTraceback (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 333, in testTraceback self.doTest(None, files, TESTMOD, call=self.doTraceback) File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 68, in doTest z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' ====================================================================== ERROR: testZipImporterMethods (test.test_zipimport.CompressedZipImportTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot\work\trunk.heller-windows\build\lib\test\test_zipimport.py", line 206, in testZipImporterMethods z = ZipFile(TEMP_ZIP, "w") File "C:\buildbot\work\trunk.heller-windows\build\lib\zipfile.py", line 602, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 13] Permission denied: 'C:\\buildbot\\work\\trunk.heller-windows\\build\\PCbuild\\junk95142.zip' sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 16:48:14 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 15:48:14 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux 3.0 Message-ID: <20080329154815.02A891E4003@bag.python.org> The Buildbot has detected a new failure of ARM Linux 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%203.0/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-arm Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: amaury.forgeotdarc,benjamin.peterson,christian.heimes,georg.brandl,gerhard.haering,gregory.p.smith,jeffrey.yasskin,mark.dickinson,martin.v.loewis,neal.norwitz BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 16:50:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 15:50:41 +0000 Subject: [Python-checkins] buildbot failure in x86 XP 3.0 Message-ID: <20080329155041.F2C721E4003@bag.python.org> The Buildbot has detected a new failure of x86 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP%203.0/builds/77 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: armbruster-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 4 tests failed: test_ssl test_urllib2_localnet test_urllibnet test_xmlrpc_net ====================================================================== ERROR: testConnect (test.test_ssl.NetworkedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 130, in testConnect raise test_support.TestFailed("Unexpected exception %s" % x) test.test_support.TestFailed: Unexpected exception [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL2 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 822, in testProtocolSSL2 tryProtocolCombo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:1407E086:SSL routines:SSL2_SET_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolSSL23 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 843, in testProtocolSSL23 tryProtocolCombo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed 
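For reference, the "certificate verify failed" errors in this log mean that OpenSSL could not validate the peer certificate against the CA file handed to wrap_socket() via ca_certs; they describe a certificate-chain problem rather than a protocol-negotiation mismatch. A minimal client-side sketch of the 2.6-era ssl API (host, port and CA file are placeholders, not the test's values):

    import socket
    import ssl

    def connect_verified(host, port, cafile):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        wrapped = ssl.wrap_socket(sock,
                                  ca_certs=cafile,              # must contain the issuing CA
                                  cert_reqs=ssl.CERT_REQUIRED,  # reject unverifiable peers
                                  ssl_version=ssl.PROTOCOL_SSLv23)
        wrapped.connect((host, port))    # the handshake, and the verification, happen here
        return wrapped.getpeercert()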
====================================================================== ERROR: testProtocolSSL3 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 855, in testProtocolSSL3 tryProtocolCombo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testProtocolTLS1 (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 865, in testProtocolTLS1 tryProtocolCombo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 691, in tryProtocolCombo chatty=False, connectionchatty=False) File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 640, in serverParamsTest raise test_support.TestFailed("Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== ERROR: testReadCert (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_ssl.py", line 738, in testReadCert "Unexpected SSL error: " + str(x)) test.test_support.TestFailed: Unexpected SSL error: [Errno 1] _ssl.c:486: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed ====================================================================== FAIL: test_bad_address (test.test_urllib2_localnet.TestUrlopen) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllib2_localnet.py", line 476, in test_bad_address urllib2.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_bad_address (test.test_urllibnet.urlopenNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_urllibnet.py", line 145, in test_bad_address urllib.urlopen, "http://www.python.invalid./") AssertionError: IOError not raised by urlopen ====================================================================== FAIL: test_current_time (test.test_xmlrpc_net.CurrentTimeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\buildarea\3.0.armbruster-windows\build\lib\test\test_xmlrpc_net.py", line 38, 
in test_current_time self.assert_(delta.days <= 1) AssertionError: None sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 17:25:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 16:25:01 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP trunk Message-ID: <20080329162501.4C03B1E4003@bag.python.org> The Buildbot has detected a new failure of amd64 XP trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%20trunk/builds/1089 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 17:28:54 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 16:28:54 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian 3.0 Message-ID: <20080329162854.76BD81E4003@bag.python.org> The Buildbot has detected a new failure of sparc Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%203.0/builds/148 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea-sid/3.0.klose-debian-sparc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea-sid/3.0.klose-debian-sparc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From nnorwitz at gmail.com Sat Mar 29 17:41:57 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 29 Mar 2008 09:41:57 -0700 Subject: [Python-checkins] r62038 - python/trunk/Lib/test/regrtest.py In-Reply-To: <20080329131455.3C1141E4003@bag.python.org> References: <20080329131455.3C1141E4003@bag.python.org> Message-ID: On Sat, Mar 29, 2008 at 6:14 AM, amaury.forgeotdarc wrote: > Author: amaury.forgeotdarc > Date: Sat Mar 29 14:14:52 2008 > New Revision: 62038 > > Modified: > python/trunk/Lib/test/regrtest.py > Log: > Now that Lib/test/output is gone, tests should not print anything, > except in verbose mode. > Support code is much simpler. Nice! From buildbot at python.org Sat Mar 29 18:35:00 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 17:35:00 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080329173500.5C9E71E401A@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2786 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: amaury.forgeotdarc,benjamin.peterson BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_smtplib sincerely, -The Buildbot From buildbot at python.org Sat Mar 29 18:47:04 2008 From: buildbot at python.org (buildbot at python.org) Date: Sat, 29 Mar 2008 17:47:04 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux 2.5 Message-ID: <20080329174704.4ECCF1E4003@bag.python.org> The Buildbot has detected a new failure of ARM Linux 2.5. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%202.5/builds/0 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-arm Build Reason: Build Source Stamp: [branch branches/release25-maint] HEAD Blamelist: amaury.forgeotdarc,georg.brandl,mark.dickinson BUILD FAILED: failed test Excerpt from the test logfile: make: *** [buildbottest] Aborted sincerely, -The Buildbot From python-checkins at python.org Sat Mar 29 20:11:52 2008 From: python-checkins at python.org (gerhard.haering) Date: Sat, 29 Mar 2008 20:11:52 +0100 (CET) Subject: [Python-checkins] r62044 - python/trunk/Doc/library/sqlite3.rst Message-ID: <20080329191152.AA8CD1E4028@bag.python.org> Author: gerhard.haering Date: Sat Mar 29 20:11:52 2008 New Revision: 62044 Modified: python/trunk/Doc/library/sqlite3.rst Log: Documented the lastrowid attribute. Modified: python/trunk/Doc/library/sqlite3.rst ============================================================================== --- python/trunk/Doc/library/sqlite3.rst (original) +++ python/trunk/Doc/library/sqlite3.rst Sat Mar 29 20:11:52 2008 @@ -532,6 +532,12 @@ This includes ``SELECT`` statements because we cannot determine the number of rows a query produced until all rows were fetched. +.. attribute:: Cursor.lastrowid + + This read-only attribute provides the rowid of the last modified row. It is + only set if you issued a ``INSERT`` statement using the :meth:`execute` + method. For operations other than ``INSERT`` or when :meth:`executemany` is + called, :attr:`lastrowid` is set to :const:`None`. .. _sqlite3-types: From collinw at gmail.com Sun Mar 30 05:18:09 2008 From: collinw at gmail.com (Collin Winter) Date: Sat, 29 Mar 2008 20:18:09 -0700 Subject: [Python-checkins] r62037 - python/trunk/Lib/lib2to3/refactor.py In-Reply-To: <20080329124255.287FF1E4003@bag.python.org> References: <20080329124255.287FF1E4003@bag.python.org> Message-ID: <43aa6ff70803292018n13f611d5h739c672ba0c08e4d@mail.gmail.com> On Sat, Mar 29, 2008 at 5:42 AM, amaury.forgeotdarc wrote: > Author: amaury.forgeotdarc > Date: Sat Mar 29 13:42:54 2008 > New Revision: 62037 > > Modified: > python/trunk/Lib/lib2to3/refactor.py > Log: > lib2to3 should install a logging handler only when run as a main program, > not when used as a library. > > This may please the buildbots, which fail when test_lib2to3 is run before test_logging. FYI, modifications to 2to3 should generally be checked in to the main 2to3 tree (http://svn.python.org/view/sandbox/trunk/2to3/), then merged to the others. 
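Looping back to the Cursor.lastrowid documentation added in r62044 above, the documented behaviour is easy to exercise directly. A minimal sketch (table name and values are invented):

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE people (name TEXT)")
    cur.execute("INSERT INTO people (name) VALUES (?)", ("Guido",))
    rowid = cur.lastrowid              # rowid of the row just inserted (1 here)
    cur.executemany("INSERT INTO people (name) VALUES (?)",
                    [("Barry",), ("Georg",)])
    after_many = cur.lastrowid         # None, executemany() does not set it
    con.close()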
Collin From python-checkins at python.org Sun Mar 30 08:36:21 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 08:36:21 +0200 (CEST) Subject: [Python-checkins] r62046 - in doctools/trunk: CHANGES doc/concepts.rst doc/contents.rst sphinx/directives.py sphinx/environment.py Message-ID: <20080330063621.952501E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 30 08:36:20 2008 New Revision: 62046 Modified: doctools/trunk/CHANGES doctools/trunk/doc/concepts.rst doctools/trunk/doc/contents.rst doctools/trunk/sphinx/directives.py doctools/trunk/sphinx/environment.py Log: Allow new titles in the toctree. Modified: doctools/trunk/CHANGES ============================================================================== --- doctools/trunk/CHANGES (original) +++ doctools/trunk/CHANGES Sun Mar 30 08:36:20 2008 @@ -5,6 +5,9 @@ It works like ``add_description_unit`` but the directive will only create a target and no output. +* sphinx.directives: Allow giving a different title to documents + in the toctree. + Release 0.1.61950 (Mar 26, 2008) ================================ Modified: doctools/trunk/doc/concepts.rst ============================================================================== --- doctools/trunk/doc/concepts.rst (original) +++ doctools/trunk/doc/concepts.rst Sun Mar 30 08:36:20 2008 @@ -55,10 +55,25 @@ ``strings`` and so forth, and it knows that they are children of the shown document, the library index. From this information it generates "next chapter", "previous chapter" and "parent chapter" links. - + + Document titles in the :dir:`toctree` will be automatically read from the + title of the referenced document. If that isn't what you want, you can + specify an explicit title and target using a similar syntax to reST + hyperlinks (and Sphinx's :ref:`cross-referencing syntax `). This + looks like:: + + .. toctree:: + + intro + All about strings <strings> + datatypes + + The second line above will link to the ``strings`` document, but will use the + title "All about strings" instead of the title of the ``strings`` document. + In the end, all documents under the :term:`documentation root` must occur in - one ``toctree`` directive; Sphinx will emit a warning if it finds a file that - is not included, because that means that this file will not be reachable + some ``toctree`` directive; Sphinx will emit a warning if it finds a file + that is not included, because that means that this file will not be reachable through standard navigation. Use :confval:`unused_documents` to explicitly exclude documents from this check. Modified: doctools/trunk/doc/contents.rst ============================================================================== --- doctools/trunk/doc/contents.rst (original) +++ doctools/trunk/doc/contents.rst Sun Mar 30 08:36:20 2008 @@ -4,10 +4,10 @@ ============================= ..
toctree:: - :maxdepth: 1 + :maxdepth: 2 intro - concepts + Konzepte rest markup/index builders Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Sun Mar 30 08:36:20 2008 @@ -19,6 +19,7 @@ from docutils.parsers.rst import directives from sphinx import addnodes +from sphinx.roles import caption_ref_re from sphinx.util.compat import make_admonition ws_re = re.compile(r'\s+') @@ -622,9 +623,15 @@ ret = [] subnode = addnodes.toctree() includefiles = [] + includetitles = {} for docname in content: if not docname: continue + # look for explicit titles and documents ("Some Title "). + m = caption_ref_re.match(docname) + if m: + docname = m.group(2) + includetitles[docname] = m.group(1) # absolutize filenames, remove suffixes if docname.endswith(suffix): docname = docname[:-len(suffix)] @@ -635,6 +642,7 @@ else: includefiles.append(docname) subnode['includefiles'] = includefiles + subnode['includetitles'] = includetitles subnode['maxdepth'] = options.get('maxdepth', -1) ret.append(subnode) return ret Modified: doctools/trunk/sphinx/environment.py ============================================================================== --- doctools/trunk/sphinx/environment.py (original) +++ doctools/trunk/sphinx/environment.py Sun Mar 30 08:36:20 2008 @@ -60,18 +60,6 @@ ENV_VERSION = 21 -def walk_depth(node, depth, maxdepth): - """Utility: Cut a TOC at a specified depth.""" - for subnode in node.children[:]: - if isinstance(subnode, (addnodes.compact_paragraph, nodes.list_item)): - walk_depth(subnode, depth, maxdepth) - elif isinstance(subnode, nodes.bullet_list): - if depth > maxdepth: - subnode.parent.replace(subnode, []) - else: - walk_depth(subnode, depth+1, maxdepth) - - default_substitutions = set([ 'version', 'release', @@ -736,13 +724,33 @@ return addnodes.compact_paragraph('', '', *entries) return None + def _walk_depth(node, depth, maxdepth, titleoverrides): + """Utility: Cut a TOC at a specified depth.""" + for subnode in node.children[:]: + if isinstance(subnode, (addnodes.compact_paragraph, nodes.list_item)): + _walk_depth(subnode, depth, maxdepth, titleoverrides) + elif isinstance(subnode, nodes.bullet_list): + if depth > maxdepth: + subnode.parent.replace(subnode, []) + else: + _walk_depth(subnode, depth+1, maxdepth, titleoverrides) + for toctreenode in doctree.traverse(addnodes.toctree): maxdepth = toctreenode.get('maxdepth', -1) + titleoverrides = toctreenode.get('includetitles', {}) newnode = _entries_from_toctree(toctreenode) if newnode is not None: - # prune the tree to maxdepth + # prune the tree to maxdepth and replace titles if maxdepth > 0: - walk_depth(newnode, 1, maxdepth) + _walk_depth(newnode, 1, maxdepth, titleoverrides) + # replace titles, if needed + if titleoverrides: + for refnode in newnode.traverse(nodes.reference): + if refnode.get('anchorname', None): + continue + if refnode['refuri'] in titleoverrides: + newtitle = titleoverrides[refnode['refuri']] + refnode.children = [nodes.Text(newtitle)] toctreenode.replace_self(newnode) else: toctreenode.replace_self([]) From python-checkins at python.org Sun Mar 30 08:40:18 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 08:40:18 +0200 (CEST) Subject: [Python-checkins] r62047 - in python/trunk: Include/Python-ast.h Parser/Python.asdl Python/Python-ast.c Python/ast.c Python/compile.c Python/symtable.c Message-ID: 
<20080330064018.3D2A51E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 30 08:40:17 2008 New Revision: 62047 Modified: python/trunk/Include/Python-ast.h python/trunk/Parser/Python.asdl python/trunk/Python/Python-ast.c python/trunk/Python/ast.c python/trunk/Python/compile.c python/trunk/Python/symtable.c Log: Patch #2511: Give the "excepthandler" AST item proper attributes by making it a Sum. Modified: python/trunk/Include/Python-ast.h ============================================================================== --- python/trunk/Include/Python-ast.h (original) +++ python/trunk/Include/Python-ast.h Sun Mar 30 08:40:17 2008 @@ -324,10 +324,17 @@ asdl_seq *ifs; }; +enum _excepthandler_kind {ExceptHandler_kind=1}; struct _excepthandler { - expr_ty type; - expr_ty name; - asdl_seq *body; + enum _excepthandler_kind kind; + union { + struct { + expr_ty type; + expr_ty name; + asdl_seq *body; + } ExceptHandler; + + } v; int lineno; int col_offset; }; @@ -489,8 +496,8 @@ #define comprehension(a0, a1, a2, a3) _Py_comprehension(a0, a1, a2, a3) comprehension_ty _Py_comprehension(expr_ty target, expr_ty iter, asdl_seq * ifs, PyArena *arena); -#define excepthandler(a0, a1, a2, a3, a4, a5) _Py_excepthandler(a0, a1, a2, a3, a4, a5) -excepthandler_ty _Py_excepthandler(expr_ty type, expr_ty name, asdl_seq * body, +#define ExceptHandler(a0, a1, a2, a3, a4, a5) _Py_ExceptHandler(a0, a1, a2, a3, a4, a5) +excepthandler_ty _Py_ExceptHandler(expr_ty type, expr_ty name, asdl_seq * body, int lineno, int col_offset, PyArena *arena); #define arguments(a0, a1, a2, a3, a4) _Py_arguments(a0, a1, a2, a3, a4) arguments_ty _Py_arguments(asdl_seq * args, identifier vararg, identifier Modified: python/trunk/Parser/Python.asdl ============================================================================== --- python/trunk/Parser/Python.asdl (original) +++ python/trunk/Parser/Python.asdl Sun Mar 30 08:40:17 2008 @@ -98,11 +98,8 @@ comprehension = (expr target, expr iter, expr* ifs) -- not sure what to call the first argument for raise and except - -- TODO(jhylton): Figure out if there is a better way to handle - -- lineno and col_offset fields, particularly when - -- ast is exposed to Python. - excepthandler = (expr? type, expr? name, stmt* body, int lineno, - int col_offset) + excepthandler = ExceptHandler(expr? type, expr? name, stmt* body) + attributes (int lineno, int col_offset) arguments = (expr* args, identifier? vararg, identifier? 
kwarg, expr* defaults) Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Sun Mar 30 08:40:17 2008 @@ -336,13 +336,16 @@ "ifs", }; static PyTypeObject *excepthandler_type; +static char *excepthandler_attributes[] = { + "lineno", + "col_offset", +}; static PyObject* ast2obj_excepthandler(void*); -static char *excepthandler_fields[]={ +static PyTypeObject *ExceptHandler_type; +static char *ExceptHandler_fields[]={ "type", "name", "body", - "lineno", - "col_offset", }; static PyTypeObject *arguments_type; static PyObject* ast2obj_arguments(void*); @@ -776,9 +779,13 @@ comprehension_type = make_type("comprehension", AST_type, comprehension_fields, 3); if (!comprehension_type) return 0; - excepthandler_type = make_type("excepthandler", AST_type, - excepthandler_fields, 5); + excepthandler_type = make_type("excepthandler", AST_type, NULL, 0); if (!excepthandler_type) return 0; + if (!add_attributes(excepthandler_type, excepthandler_attributes, 2)) + return 0; + ExceptHandler_type = make_type("ExceptHandler", excepthandler_type, + ExceptHandler_fields, 3); + if (!ExceptHandler_type) return 0; arguments_type = make_type("arguments", AST_type, arguments_fields, 4); if (!arguments_type) return 0; keyword_type = make_type("keyword", AST_type, keyword_fields, 2); @@ -1825,16 +1832,17 @@ } excepthandler_ty -excepthandler(expr_ty type, expr_ty name, asdl_seq * body, int lineno, int +ExceptHandler(expr_ty type, expr_ty name, asdl_seq * body, int lineno, int col_offset, PyArena *arena) { excepthandler_ty p; p = (excepthandler_ty)PyArena_Malloc(arena, sizeof(*p)); if (!p) return NULL; - p->type = type; - p->name = name; - p->body = body; + p->kind = ExceptHandler_kind; + p->v.ExceptHandler.type = type; + p->v.ExceptHandler.name = name; + p->v.ExceptHandler.body = body; p->lineno = lineno; p->col_offset = col_offset; return p; @@ -2901,31 +2909,35 @@ return Py_None; } - result = PyType_GenericNew(excepthandler_type, NULL, NULL); - if (!result) return NULL; - value = ast2obj_expr(o->type); - if (!value) goto failed; - if (PyObject_SetAttrString(result, "type", value) == -1) - goto failed; - Py_DECREF(value); - value = ast2obj_expr(o->name); - if (!value) goto failed; - if (PyObject_SetAttrString(result, "name", value) == -1) - goto failed; - Py_DECREF(value); - value = ast2obj_list(o->body, ast2obj_stmt); - if (!value) goto failed; - if (PyObject_SetAttrString(result, "body", value) == -1) - goto failed; - Py_DECREF(value); + switch (o->kind) { + case ExceptHandler_kind: + result = PyType_GenericNew(ExceptHandler_type, NULL, NULL); + if (!result) goto failed; + value = ast2obj_expr(o->v.ExceptHandler.type); + if (!value) goto failed; + if (PyObject_SetAttrString(result, "type", value) == -1) + goto failed; + Py_DECREF(value); + value = ast2obj_expr(o->v.ExceptHandler.name); + if (!value) goto failed; + if (PyObject_SetAttrString(result, "name", value) == -1) + goto failed; + Py_DECREF(value); + value = ast2obj_list(o->v.ExceptHandler.body, ast2obj_stmt); + if (!value) goto failed; + if (PyObject_SetAttrString(result, "body", value) == -1) + goto failed; + Py_DECREF(value); + break; + } value = ast2obj_int(o->lineno); if (!value) goto failed; - if (PyObject_SetAttrString(result, "lineno", value) == -1) + if (PyObject_SetAttrString(result, "lineno", value) < 0) goto failed; Py_DECREF(value); value = ast2obj_int(o->col_offset); if (!value) goto failed; - if 
(PyObject_SetAttrString(result, "col_offset", value) == -1) + if (PyObject_SetAttrString(result, "col_offset", value) < 0) goto failed; Py_DECREF(value); return result; @@ -5534,58 +5546,13 @@ obj2ast_excepthandler(PyObject* obj, excepthandler_ty* out, PyArena* arena) { PyObject* tmp = NULL; - expr_ty type; - expr_ty name; - asdl_seq* body; + int lineno; int col_offset; - if (PyObject_HasAttrString(obj, "type")) { - int res; - tmp = PyObject_GetAttrString(obj, "type"); - if (tmp == NULL) goto failed; - res = obj2ast_expr(tmp, &type, arena); - if (res != 0) goto failed; - Py_XDECREF(tmp); - tmp = NULL; - } else { - type = NULL; - } - if (PyObject_HasAttrString(obj, "name")) { - int res; - tmp = PyObject_GetAttrString(obj, "name"); - if (tmp == NULL) goto failed; - res = obj2ast_expr(tmp, &name, arena); - if (res != 0) goto failed; - Py_XDECREF(tmp); - tmp = NULL; - } else { - name = NULL; - } - if (PyObject_HasAttrString(obj, "body")) { - int res; - Py_ssize_t len; - Py_ssize_t i; - tmp = PyObject_GetAttrString(obj, "body"); - if (tmp == NULL) goto failed; - if (!PyList_Check(tmp)) { - PyErr_Format(PyExc_TypeError, "excepthandler field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); - goto failed; - } - len = PyList_GET_SIZE(tmp); - body = asdl_seq_new(len, arena); - if (body == NULL) goto failed; - for (i = 0; i < len; i++) { - stmt_ty value; - res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); - if (res != 0) goto failed; - asdl_seq_SET(body, i, value); - } - Py_XDECREF(tmp); - tmp = NULL; - } else { - PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from excepthandler"); - return 1; + if (obj == Py_None) { + *out = NULL; + return 0; } if (PyObject_HasAttrString(obj, "lineno")) { int res; @@ -5611,8 +5578,67 @@ PyErr_SetString(PyExc_TypeError, "required field \"col_offset\" missing from excepthandler"); return 1; } - *out = excepthandler(type, name, body, lineno, col_offset, arena); - return 0; + if (PyObject_IsInstance(obj, (PyObject*)ExceptHandler_type)) { + expr_ty type; + expr_ty name; + asdl_seq* body; + + if (PyObject_HasAttrString(obj, "type")) { + int res; + tmp = PyObject_GetAttrString(obj, "type"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &type, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + type = NULL; + } + if (PyObject_HasAttrString(obj, "name")) { + int res; + tmp = PyObject_GetAttrString(obj, "name"); + if (tmp == NULL) goto failed; + res = obj2ast_expr(tmp, &name, arena); + if (res != 0) goto failed; + Py_XDECREF(tmp); + tmp = NULL; + } else { + name = NULL; + } + if (PyObject_HasAttrString(obj, "body")) { + int res; + Py_ssize_t len; + Py_ssize_t i; + tmp = PyObject_GetAttrString(obj, "body"); + if (tmp == NULL) goto failed; + if (!PyList_Check(tmp)) { + PyErr_Format(PyExc_TypeError, "ExceptHandler field \"body\" must be a list, not a %.200s", tmp->ob_type->tp_name); + goto failed; + } + len = PyList_GET_SIZE(tmp); + body = asdl_seq_new(len, arena); + if (body == NULL) goto failed; + for (i = 0; i < len; i++) { + stmt_ty value; + res = obj2ast_stmt(PyList_GET_ITEM(tmp, i), &value, arena); + if (res != 0) goto failed; + asdl_seq_SET(body, i, value); + } + Py_XDECREF(tmp); + tmp = NULL; + } else { + PyErr_SetString(PyExc_TypeError, "required field \"body\" missing from ExceptHandler"); + return 1; + } + *out = ExceptHandler(type, name, body, lineno, col_offset, + arena); + if (*out == NULL) goto failed; + return 0; + } + + tmp = PyObject_Repr(obj); + if (tmp == NULL) goto failed; 
+ PyErr_Format(PyExc_TypeError, "expected some sort of excepthandler, but got %.400s", PyString_AS_STRING(tmp)); failed: Py_XDECREF(tmp); return 1; @@ -5930,6 +5956,8 @@ (PyObject*)comprehension_type) < 0) return; if (PyDict_SetItemString(d, "excepthandler", (PyObject*)excepthandler_type) < 0) return; + if (PyDict_SetItemString(d, "ExceptHandler", + (PyObject*)ExceptHandler_type) < 0) return; if (PyDict_SetItemString(d, "arguments", (PyObject*)arguments_type) < 0) return; if (PyDict_SetItemString(d, "keyword", (PyObject*)keyword_type) < 0) Modified: python/trunk/Python/ast.c ============================================================================== --- python/trunk/Python/ast.c (original) +++ python/trunk/Python/ast.c Sun Mar 30 08:40:17 2008 @@ -2830,7 +2830,7 @@ if (!suite_seq) return NULL; - return excepthandler(NULL, NULL, suite_seq, LINENO(exc), + return ExceptHandler(NULL, NULL, suite_seq, LINENO(exc), exc->n_col_offset, c->c_arena); } else if (NCH(exc) == 2) { @@ -2844,7 +2844,7 @@ if (!suite_seq) return NULL; - return excepthandler(expression, NULL, suite_seq, LINENO(exc), + return ExceptHandler(expression, NULL, suite_seq, LINENO(exc), exc->n_col_offset, c->c_arena); } else if (NCH(exc) == 4) { @@ -2862,7 +2862,7 @@ if (!suite_seq) return NULL; - return excepthandler(expression, e, suite_seq, LINENO(exc), + return ExceptHandler(expression, e, suite_seq, LINENO(exc), exc->n_col_offset, c->c_arena); } Modified: python/trunk/Python/compile.c ============================================================================== --- python/trunk/Python/compile.c (original) +++ python/trunk/Python/compile.c Sun Mar 30 08:40:17 2008 @@ -1855,32 +1855,32 @@ for (i = 0; i < n; i++) { excepthandler_ty handler = (excepthandler_ty)asdl_seq_GET( s->v.TryExcept.handlers, i); - if (!handler->type && i < n-1) + if (!handler->v.ExceptHandler.type && i < n-1) return compiler_error(c, "default 'except:' must be last"); c->u->u_lineno_set = false; c->u->u_lineno = handler->lineno; except = compiler_new_block(c); if (except == NULL) return 0; - if (handler->type) { + if (handler->v.ExceptHandler.type) { ADDOP(c, DUP_TOP); - VISIT(c, expr, handler->type); + VISIT(c, expr, handler->v.ExceptHandler.type); ADDOP_I(c, COMPARE_OP, PyCmp_EXC_MATCH); ADDOP_JREL(c, JUMP_IF_FALSE, except); ADDOP(c, POP_TOP); } ADDOP(c, POP_TOP); - if (handler->name) { - VISIT(c, expr, handler->name); + if (handler->v.ExceptHandler.name) { + VISIT(c, expr, handler->v.ExceptHandler.name); } else { ADDOP(c, POP_TOP); } ADDOP(c, POP_TOP); - VISIT_SEQ(c, stmt, handler->body); + VISIT_SEQ(c, stmt, handler->v.ExceptHandler.body); ADDOP_JREL(c, JUMP_FORWARD, end); compiler_use_next_block(c, except); - if (handler->type) + if (handler->v.ExceptHandler.type) ADDOP(c, POP_TOP); } ADDOP(c, END_FINALLY); Modified: python/trunk/Python/symtable.c ============================================================================== --- python/trunk/Python/symtable.c (original) +++ python/trunk/Python/symtable.c Sun Mar 30 08:40:17 2008 @@ -1308,11 +1308,11 @@ static int symtable_visit_excepthandler(struct symtable *st, excepthandler_ty eh) { - if (eh->type) - VISIT(st, expr, eh->type); - if (eh->name) - VISIT(st, expr, eh->name); - VISIT_SEQ(st, stmt, eh->body); + if (eh->v.ExceptHandler.type) + VISIT(st, expr, eh->v.ExceptHandler.type); + if (eh->v.ExceptHandler.name) + VISIT(st, expr, eh->v.ExceptHandler.name); + VISIT_SEQ(st, stmt, eh->v.ExceptHandler.body); return 1; } From python-checkins at python.org Sun Mar 30 08:53:55 2008 From: python-checkins 
at python.org (georg.brandl) Date: Sun, 30 Mar 2008 08:53:55 +0200 (CEST) Subject: [Python-checkins] r62048 - python/trunk/Lib/test/test_ast.py Message-ID: <20080330065355.A160A1E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 30 08:53:55 2008 New Revision: 62048 Modified: python/trunk/Lib/test/test_ast.py Log: Adapt test_ast to the new ExceptHandler type. Modified: python/trunk/Lib/test/test_ast.py ============================================================================== --- python/trunk/Lib/test/test_ast.py (original) +++ python/trunk/Lib/test/test_ast.py Sun Mar 30 08:53:55 2008 @@ -150,6 +150,7 @@ (eval_tests, eval_results, "eval")): for i, o in itertools.izip(input, output): ast_tree = compile(i, "?", kind, 0x400) + print to_tuple(ast_tree), o assert to_tuple(ast_tree) == o test_order(ast_tree, (0, 0)) @@ -166,7 +167,7 @@ ('Module', [('While', (1, 0), ('Name', (1, 6), 'v', ('Load',)), [('Pass', (1, 8))], [])]), ('Module', [('If', (1, 0), ('Name', (1, 3), 'v', ('Load',)), [('Pass', (1, 5))], [])]), ('Module', [('Raise', (1, 0), ('Name', (1, 6), 'Exception', ('Load',)), ('Str', (1, 17), 'string'), None)]), -('Module', [('TryExcept', (1, 0), [('Pass', (2, 2))], [('excepthandler', (3, 0), ('Name', (3, 7), 'Exception', ('Load',)), None, [('Pass', (4, 2))], 3, 0)], [])]), +('Module', [('TryExcept', (1, 0), [('Pass', (2, 2))], [('ExceptHandler', (3, 0), ('Name', (3, 7), 'Exception', ('Load',)), None, [('Pass', (4, 2))])], [])]), ('Module', [('TryFinally', (1, 0), [('Pass', (2, 2))], [('Pass', (4, 2))])]), ('Module', [('Assert', (1, 0), ('Name', (1, 7), 'v', ('Load',)), None)]), ('Module', [('Import', (1, 0), [('alias', 'sys', None)])]), From python-checkins at python.org Sun Mar 30 09:01:48 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 09:01:48 +0200 (CEST) Subject: [Python-checkins] r62049 - in python/trunk: Doc/library/_ast.rst Parser/asdl_c.py Python/Python-ast.c Message-ID: <20080330070148.2FC641E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 30 09:01:47 2008 New Revision: 62049 Modified: python/trunk/Doc/library/_ast.rst python/trunk/Parser/asdl_c.py python/trunk/Python/Python-ast.c Log: #2505: allow easier creation of AST nodes. Modified: python/trunk/Doc/library/_ast.rst ============================================================================== --- python/trunk/Doc/library/_ast.rst (original) +++ python/trunk/Doc/library/_ast.rst Sun Mar 30 09:01:47 2008 @@ -46,9 +46,32 @@ If these attributes are marked as optional in the grammar (using a question mark), the value might be ``None``. If the attributes can have zero-or-more values (marked with an asterisk), the values are represented as Python lists. +All possible attributes must be present and have valid values when compiling an +AST with :func:`compile`. + +The constructor of a class ``_ast.T`` parses their arguments as follows: + +* If there are positional arguments, there must be as many as there are items in + ``T._fields``; they will be assigned as attributes of these names. +* If there are keyword arguments, they will set the attributes of the same names + to the given values. 
+ +For example, to create and populate a ``UnaryOp`` node, you could use :: + + node = _ast.UnaryOp() + node.op = _ast.USub() + node.operand = _ast.Num() + node.operand.n = 5 + node.operand.lineno = 0 + node.operand.col_offset = 0 + node.lineno = 0 + node.col_offset = 0 + +or the more compact :: + + node = _ast.UnaryOp(_ast.USub(), _ast.Num(5, lineno=0, col_offset=0), + lineno=0, col_offset=0) -The constructors of all ``_ast`` classes don't take arguments; instead, if you -create instances, you must assign the required attributes separately. Abstract Grammar Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Sun Mar 30 09:01:47 2008 @@ -578,6 +578,98 @@ def visitModule(self, mod): self.emit(""" +static int +ast_type_init(PyObject *self, PyObject *args, PyObject *kw) +{ + Py_ssize_t i, numfields = 0; + int res = -1; + PyObject *key, *value, *fields; + fields = PyObject_GetAttrString((PyObject*)Py_TYPE(self), "_fields"); + if (!fields) + PyErr_Clear(); + if (fields) { + numfields = PySequence_Size(fields); + if (numfields == -1) + goto cleanup; + } + res = 0; /* if no error occurs, this stays 0 to the end */ + if (PyTuple_GET_SIZE(args) > 0) { + if (numfields != PyTuple_GET_SIZE(args)) { + PyErr_Format(PyExc_TypeError, "%.400s constructor takes either 0 or " + "%d positional argument%s", Py_TYPE(self)->tp_name, + numfields, numfields == 1 ? "" : "s"); + res = -1; + goto cleanup; + } + for (i = 0; i < PyTuple_GET_SIZE(args); i++) { + /* cannot be reached when fields is NULL */ + PyObject *name = PySequence_GetItem(fields, i); + if (!name) { + res = -1; + goto cleanup; + } + res = PyObject_SetAttr(self, name, PyTuple_GET_ITEM(args, i)); + Py_DECREF(name); + if (res < 0) + goto cleanup; + } + } + if (kw) { + i = 0; /* needed by PyDict_Next */ + while (PyDict_Next(kw, &i, &key, &value)) { + res = PyObject_SetAttr(self, key, value); + if (res < 0) + goto cleanup; + } + } + cleanup: + Py_XDECREF(fields); + return res; +} + +static PyTypeObject AST_type = { + PyVarObject_HEAD_INIT(&PyType_Type, 0) + "AST", + sizeof(PyObject), + 0, + 0, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + PyObject_GenericSetAttr, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + (initproc)ast_type_init, /* tp_init */ + PyType_GenericAlloc, /* tp_alloc */ + PyType_GenericNew, /* tp_new */ + PyObject_Del, /* tp_free */ +}; + + static PyTypeObject* make_type(char *type, PyTypeObject* base, char**fields, int num_fields) { PyObject *fnames, *result; @@ -606,7 +698,7 @@ static int add_attributes(PyTypeObject* type, char**attrs, int num_fields) { int i, result; - PyObject *s, *l = PyList_New(num_fields); + PyObject *s, *l = PyTuple_New(num_fields); if (!l) return 0; for(i = 0; i < num_fields; i++) { s = PyString_FromString(attrs[i]); @@ -614,7 +706,7 @@ 
Py_DECREF(l); return 0; } - PyList_SET_ITEM(l, i, s); + PyTuple_SET_ITEM(l, i, s); } result = PyObject_SetAttrString((PyObject*)type, "_attributes", l) >= 0; Py_DECREF(l); @@ -716,7 +808,6 @@ self.emit("{", 0) self.emit("static int initialized;", 1) self.emit("if (initialized) return 1;", 1) - self.emit('AST_type = make_type("AST", &PyBaseObject_Type, NULL, 0);', 1) for dfn in mod.dfns: self.visit(dfn) self.emit("initialized = 1;", 1) @@ -728,12 +819,13 @@ fields = name.value+"_fields" else: fields = "NULL" - self.emit('%s_type = make_type("%s", AST_type, %s, %d);' % + self.emit('%s_type = make_type("%s", &AST_type, %s, %d);' % (name, name, fields, len(prod.fields)), 1) self.emit("if (!%s_type) return 0;" % name, 1) def visitSum(self, sum, name): - self.emit('%s_type = make_type("%s", AST_type, NULL, 0);' % (name, name), 1) + self.emit('%s_type = make_type("%s", &AST_type, NULL, 0);' % + (name, name), 1) self.emit("if (!%s_type) return 0;" % name, 1) if sum.attributes: self.emit("if (!add_attributes(%s_type, %s_attributes, %d)) return 0;" % @@ -772,7 +864,7 @@ self.emit('m = Py_InitModule3("_ast", NULL, NULL);', 1) self.emit("if (!m) return;", 1) self.emit("d = PyModule_GetDict(m);", 1) - self.emit('if (PyDict_SetItemString(d, "AST", (PyObject*)AST_type) < 0) return;', 1) + self.emit('if (PyDict_SetItemString(d, "AST", (PyObject*)&AST_type) < 0) return;', 1) self.emit('if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0)', 1) self.emit("return;", 2) # Value of version: "$Revision$" @@ -979,7 +1071,7 @@ int PyAST_Check(PyObject* obj) { init_types(); - return PyObject_IsInstance(obj, (PyObject*)AST_type); + return PyObject_IsInstance(obj, (PyObject*)&AST_type); } """ @@ -1035,7 +1127,7 @@ print >> f, '#include "Python.h"' print >> f, '#include "%s-ast.h"' % mod.name print >> f - print >>f, "static PyTypeObject* AST_type;" + print >>f, "static PyTypeObject AST_type;" v = ChainOfVisitors( PyTypesDeclareVisitor(f), PyTypesVisitor(f), Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Sun Mar 30 09:01:47 2008 @@ -2,7 +2,7 @@ /* - __version__ 60978. + __version__ 62047. This module must be committed separately after each AST grammar change; The __version__ number is set to the revision number of the commit @@ -12,7 +12,7 @@ #include "Python.h" #include "Python-ast.h" -static PyTypeObject* AST_type; +static PyTypeObject AST_type; static PyTypeObject *mod_type; static PyObject* ast2obj_mod(void*); static PyTypeObject *Module_type; @@ -369,6 +369,98 @@ }; +static int +ast_type_init(PyObject *self, PyObject *args, PyObject *kw) +{ + Py_ssize_t i, numfields = 0; + int res = -1; + PyObject *key, *value, *fields; + fields = PyObject_GetAttrString((PyObject*)Py_TYPE(self), "_fields"); + if (!fields) + PyErr_Clear(); + if (fields) { + numfields = PySequence_Size(fields); + if (numfields == -1) + goto cleanup; + } + res = 0; /* if no error occurs, this stays 0 to the end */ + if (PyTuple_GET_SIZE(args) > 0) { + if (numfields != PyTuple_GET_SIZE(args)) { + PyErr_Format(PyExc_TypeError, "%.400s constructor takes either 0 or " + "%d positional argument%s", Py_TYPE(self)->tp_name, + numfields, numfields == 1 ? 
"" : "s"); + res = -1; + goto cleanup; + } + for (i = 0; i < PyTuple_GET_SIZE(args); i++) { + /* cannot be reached when fields is NULL */ + PyObject *name = PySequence_GetItem(fields, i); + if (!name) { + res = -1; + goto cleanup; + } + res = PyObject_SetAttr(self, name, PyTuple_GET_ITEM(args, i)); + Py_DECREF(name); + if (res < 0) + goto cleanup; + } + } + if (kw) { + i = 0; /* needed by PyDict_Next */ + while (PyDict_Next(kw, &i, &key, &value)) { + res = PyObject_SetAttr(self, key, value); + if (res < 0) + goto cleanup; + } + } + cleanup: + Py_XDECREF(fields); + return res; +} + +static PyTypeObject AST_type = { + PyVarObject_HEAD_INIT(&PyType_Type, 0) + "AST", + sizeof(PyObject), + 0, + 0, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + PyObject_GenericSetAttr, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + (initproc)ast_type_init, /* tp_init */ + PyType_GenericAlloc, /* tp_alloc */ + PyType_GenericNew, /* tp_new */ + PyObject_Del, /* tp_free */ +}; + + static PyTypeObject* make_type(char *type, PyTypeObject* base, char**fields, int num_fields) { PyObject *fnames, *result; @@ -397,7 +489,7 @@ static int add_attributes(PyTypeObject* type, char**attrs, int num_fields) { int i, result; - PyObject *s, *l = PyList_New(num_fields); + PyObject *s, *l = PyTuple_New(num_fields); if (!l) return 0; for(i = 0; i < num_fields; i++) { s = PyString_FromString(attrs[i]); @@ -405,7 +497,7 @@ Py_DECREF(l); return 0; } - PyList_SET_ITEM(l, i, s); + PyTuple_SET_ITEM(l, i, s); } result = PyObject_SetAttrString((PyObject*)type, "_attributes", l) >= 0; Py_DECREF(l); @@ -506,8 +598,7 @@ { static int initialized; if (initialized) return 1; - AST_type = make_type("AST", &PyBaseObject_Type, NULL, 0); - mod_type = make_type("mod", AST_type, NULL, 0); + mod_type = make_type("mod", &AST_type, NULL, 0); if (!mod_type) return 0; if (!add_attributes(mod_type, NULL, 0)) return 0; Module_type = make_type("Module", mod_type, Module_fields, 1); @@ -520,7 +611,7 @@ if (!Expression_type) return 0; Suite_type = make_type("Suite", mod_type, Suite_fields, 1); if (!Suite_type) return 0; - stmt_type = make_type("stmt", AST_type, NULL, 0); + stmt_type = make_type("stmt", &AST_type, NULL, 0); if (!stmt_type) return 0; if (!add_attributes(stmt_type, stmt_attributes, 2)) return 0; FunctionDef_type = make_type("FunctionDef", stmt_type, @@ -572,7 +663,7 @@ if (!Break_type) return 0; Continue_type = make_type("Continue", stmt_type, NULL, 0); if (!Continue_type) return 0; - expr_type = make_type("expr", AST_type, NULL, 0); + expr_type = make_type("expr", &AST_type, NULL, 0); if (!expr_type) return 0; if (!add_attributes(expr_type, expr_attributes, 2)) return 0; BoolOp_type = make_type("BoolOp", expr_type, BoolOp_fields, 2); @@ -614,7 +705,7 @@ if (!List_type) return 0; Tuple_type = make_type("Tuple", expr_type, Tuple_fields, 2); if (!Tuple_type) return 0; - expr_context_type = make_type("expr_context", 
AST_type, NULL, 0); + expr_context_type = make_type("expr_context", &AST_type, NULL, 0); if (!expr_context_type) return 0; if (!add_attributes(expr_context_type, NULL, 0)) return 0; Load_type = make_type("Load", expr_context_type, NULL, 0); @@ -641,7 +732,7 @@ if (!Param_type) return 0; Param_singleton = PyType_GenericNew(Param_type, NULL, NULL); if (!Param_singleton) return 0; - slice_type = make_type("slice", AST_type, NULL, 0); + slice_type = make_type("slice", &AST_type, NULL, 0); if (!slice_type) return 0; if (!add_attributes(slice_type, NULL, 0)) return 0; Ellipsis_type = make_type("Ellipsis", slice_type, NULL, 0); @@ -652,7 +743,7 @@ if (!ExtSlice_type) return 0; Index_type = make_type("Index", slice_type, Index_fields, 1); if (!Index_type) return 0; - boolop_type = make_type("boolop", AST_type, NULL, 0); + boolop_type = make_type("boolop", &AST_type, NULL, 0); if (!boolop_type) return 0; if (!add_attributes(boolop_type, NULL, 0)) return 0; And_type = make_type("And", boolop_type, NULL, 0); @@ -663,7 +754,7 @@ if (!Or_type) return 0; Or_singleton = PyType_GenericNew(Or_type, NULL, NULL); if (!Or_singleton) return 0; - operator_type = make_type("operator", AST_type, NULL, 0); + operator_type = make_type("operator", &AST_type, NULL, 0); if (!operator_type) return 0; if (!add_attributes(operator_type, NULL, 0)) return 0; Add_type = make_type("Add", operator_type, NULL, 0); @@ -714,7 +805,7 @@ if (!FloorDiv_type) return 0; FloorDiv_singleton = PyType_GenericNew(FloorDiv_type, NULL, NULL); if (!FloorDiv_singleton) return 0; - unaryop_type = make_type("unaryop", AST_type, NULL, 0); + unaryop_type = make_type("unaryop", &AST_type, NULL, 0); if (!unaryop_type) return 0; if (!add_attributes(unaryop_type, NULL, 0)) return 0; Invert_type = make_type("Invert", unaryop_type, NULL, 0); @@ -733,7 +824,7 @@ if (!USub_type) return 0; USub_singleton = PyType_GenericNew(USub_type, NULL, NULL); if (!USub_singleton) return 0; - cmpop_type = make_type("cmpop", AST_type, NULL, 0); + cmpop_type = make_type("cmpop", &AST_type, NULL, 0); if (!cmpop_type) return 0; if (!add_attributes(cmpop_type, NULL, 0)) return 0; Eq_type = make_type("Eq", cmpop_type, NULL, 0); @@ -776,21 +867,21 @@ if (!NotIn_type) return 0; NotIn_singleton = PyType_GenericNew(NotIn_type, NULL, NULL); if (!NotIn_singleton) return 0; - comprehension_type = make_type("comprehension", AST_type, + comprehension_type = make_type("comprehension", &AST_type, comprehension_fields, 3); if (!comprehension_type) return 0; - excepthandler_type = make_type("excepthandler", AST_type, NULL, 0); + excepthandler_type = make_type("excepthandler", &AST_type, NULL, 0); if (!excepthandler_type) return 0; if (!add_attributes(excepthandler_type, excepthandler_attributes, 2)) return 0; ExceptHandler_type = make_type("ExceptHandler", excepthandler_type, ExceptHandler_fields, 3); if (!ExceptHandler_type) return 0; - arguments_type = make_type("arguments", AST_type, arguments_fields, 4); + arguments_type = make_type("arguments", &AST_type, arguments_fields, 4); if (!arguments_type) return 0; - keyword_type = make_type("keyword", AST_type, keyword_fields, 2); + keyword_type = make_type("keyword", &AST_type, keyword_fields, 2); if (!keyword_type) return 0; - alias_type = make_type("alias", AST_type, alias_fields, 2); + alias_type = make_type("alias", &AST_type, alias_fields, 2); if (!alias_type) return 0; initialized = 1; return 1; @@ -5816,10 +5907,10 @@ m = Py_InitModule3("_ast", NULL, NULL); if (!m) return; d = PyModule_GetDict(m); - if (PyDict_SetItemString(d, 
"AST", (PyObject*)AST_type) < 0) return; + if (PyDict_SetItemString(d, "AST", (PyObject*)&AST_type) < 0) return; if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0) return; - if (PyModule_AddStringConstant(m, "__version__", "60978") < 0) + if (PyModule_AddStringConstant(m, "__version__", "62047") < 0) return; if (PyDict_SetItemString(d, "mod", (PyObject*)mod_type) < 0) return; if (PyDict_SetItemString(d, "Module", (PyObject*)Module_type) < 0) @@ -5997,7 +6088,7 @@ int PyAST_Check(PyObject* obj) { init_types(); - return PyObject_IsInstance(obj, (PyObject*)AST_type); + return PyObject_IsInstance(obj, (PyObject*)&AST_type); } From python-checkins at python.org Sun Mar 30 09:09:23 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 09:09:23 +0200 (CEST) Subject: [Python-checkins] r62050 - python/trunk/Lib/test/test_ast.py Message-ID: <20080330070923.21DC11E4017@bag.python.org> Author: georg.brandl Date: Sun Mar 30 09:09:22 2008 New Revision: 62050 Modified: python/trunk/Lib/test/test_ast.py Log: Convert test_ast to unittest and add a test for r62049. Modified: python/trunk/Lib/test/test_ast.py ============================================================================== --- python/trunk/Lib/test/test_ast.py (original) +++ python/trunk/Lib/test/test_ast.py Sun Mar 30 09:09:22 2008 @@ -1,4 +1,5 @@ -import sys, itertools +import sys, itertools, unittest +from test import test_support import _ast def to_tuple(t): @@ -15,6 +16,7 @@ result.append(to_tuple(getattr(t, f))) return tuple(result) + # These tests are compiled through "exec" # There should be atleast one test per statement exec_tests = [ @@ -118,41 +120,66 @@ # TODO: expr_context, slice, boolop, operator, unaryop, cmpop, comprehension # excepthandler, arguments, keywords, alias -if __name__=='__main__' and sys.argv[1:] == ['-g']: - for statements, kind in ((exec_tests, "exec"), (single_tests, "single"), - (eval_tests, "eval")): - print kind+"_results = [" - for s in statements: - print repr(to_tuple(compile(s, "?", kind, 0x400)))+"," - print "]" - print "run_tests()" - raise SystemExit +class AST_Tests(unittest.TestCase): + + def _assert_order(self, ast_node, parent_pos): + if not isinstance(ast_node, _ast.AST) or ast_node._fields is None: + return + if isinstance(ast_node, (_ast.expr, _ast.stmt, _ast.excepthandler)): + node_pos = (ast_node.lineno, ast_node.col_offset) + self.assert_(node_pos >= parent_pos) + parent_pos = (ast_node.lineno, ast_node.col_offset) + for name in ast_node._fields: + value = getattr(ast_node, name) + if isinstance(value, list): + for child in value: + self._assert_order(child, parent_pos) + elif value is not None: + self._assert_order(value, parent_pos) + + def test_snippets(self): + for input, output, kind in ((exec_tests, exec_results, "exec"), + (single_tests, single_results, "single"), + (eval_tests, eval_results, "eval")): + for i, o in itertools.izip(input, output): + ast_tree = compile(i, "?", kind, _ast.PyCF_ONLY_AST) + self.assertEquals(to_tuple(ast_tree), o) + self._assert_order(ast_tree, (0, 0)) + + def test_nodeclasses(self): + x = _ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + self.assertEquals(x.right, 3) + self.assertEquals(x.lineno, 0) + + # node raises exception when not given enough arguments + self.assertRaises(TypeError, _ast.BinOp, 1, 2) + + # can set attributes through kwargs too + x = _ast.BinOp(left=1, op=2, right=3, lineno=0) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + 
self.assertEquals(x.right, 3) + self.assertEquals(x.lineno, 0) + -def test_order(ast_node, parent_pos): +def test_main(): + test_support.run_unittest(AST_Tests) - if not isinstance(ast_node, _ast.AST) or ast_node._fields is None: +def main(): + if __name__ != '__main__': return - if isinstance(ast_node, (_ast.expr, _ast.stmt, _ast.excepthandler)): - node_pos = (ast_node.lineno, ast_node.col_offset) - assert node_pos >= parent_pos, (node_pos, parent_pos) - parent_pos = (ast_node.lineno, ast_node.col_offset) - for name in ast_node._fields: - value = getattr(ast_node, name) - if isinstance(value, list): - for child in value: - test_order(child, parent_pos) - elif value is not None: - test_order(value, parent_pos) - -def run_tests(): - for input, output, kind in ((exec_tests, exec_results, "exec"), - (single_tests, single_results, "single"), - (eval_tests, eval_results, "eval")): - for i, o in itertools.izip(input, output): - ast_tree = compile(i, "?", kind, 0x400) - print to_tuple(ast_tree), o - assert to_tuple(ast_tree) == o - test_order(ast_tree, (0, 0)) + if sys.argv[1:] == ['-g']: + for statements, kind in ((exec_tests, "exec"), (single_tests, "single"), + (eval_tests, "eval")): + print kind+"_results = [" + for s in statements: + print repr(to_tuple(compile(s, "?", kind, 0x400)))+"," + print "]" + print "main()" + raise SystemExit + test_main() #### EVERYTHING BELOW IS GENERATED ##### exec_results = [ @@ -202,4 +229,4 @@ ('Expression', ('Tuple', (1, 0), [('Num', (1, 0), 1), ('Num', (1, 2), 2), ('Num', (1, 4), 3)], ('Load',))), ('Expression', ('Call', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Name', (1, 0), 'a', ('Load',)), 'b', ('Load',)), 'c', ('Load',)), 'd', ('Load',)), [('Subscript', (1, 8), ('Attribute', (1, 8), ('Name', (1, 8), 'a', ('Load',)), 'b', ('Load',)), ('Slice', ('Num', (1, 12), 1), ('Num', (1, 14), 2), None), ('Load',))], [], None, None)), ] -run_tests() +main() From buildbot at python.org Sun Mar 30 09:09:56 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 07:09:56 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20080330070957.01F551E4017@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. 
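A minimal sketch of the constructor protocol that r62049 adds and that the new test_nodeclasses test above exercises, assuming a Python 2.6 interpreter built from these revisions; the literal values are arbitrary, and the argument-less USub() call additionally relies on the r62051 follow-up further down, which makes an empty _fields a tuple rather than None:

    import _ast

    # Positional arguments must either be omitted or match _fields exactly;
    # keyword arguments simply become attributes of the same names.
    node = _ast.UnaryOp(_ast.USub(),
                        _ast.Num(5, lineno=0, col_offset=0),
                        lineno=0, col_offset=0)
    print node.operand.n                 # 5

    num = _ast.Num(n=5, lineno=0, col_offset=0)
    print num.n, num.lineno              # 5 0

    # A partial positional list is rejected by ast_type_init();
    # BinOp._fields is ('left', 'op', 'right'), so two arguments are an error.
    try:
        _ast.BinOp(1, 2)
    except TypeError, exc:
        print exc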
Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/528 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 493, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 449, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 493, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 449, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 493, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 449, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 284, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 493, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 449, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '1000-1000-1000-1000-1000' Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 493, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 449, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread 
self.assertEqual(data, self.makeData(key)) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '2000-2000-2000-2000-2000' Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 493, in __bootstrap_inner self.run() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/threading.py", line 449, in run self.__target(*self.__args, **self.__kwargs) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/bsddb/test/test_thread.py", line 263, in writerThread self.assertEqual(data, self.makeData(key)) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/unittest.py", line 343, in failUnlessEqual (msg or '%r != %r' % (first, second)) AssertionError: None != '0002-0002-0002-0002-0002' 2 tests failed: test_ast test_signal Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_ast.py", line 204, in run_tests() File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 09:30:10 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 07:30:10 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc trunk Message-ID: <20080330073010.AB2061E4017@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%20trunk/builds/3092 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_ast.py", line 204, in run_tests() File "/opt/users/buildbot/slave/trunk.loewis-sun/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 09:34:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 07:34:12 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080330073412.2DC1F1E4017@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3121 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_ast.py", line 204, in run_tests() File "/Users/buildslave/bb/trunk.psf-g4/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 09:36:36 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 07:36:36 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080330073637.378F51E4024@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1123 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_ast.py", line 204, in run_tests() File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 09:41:05 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 07:41:05 +0000 Subject: [Python-checkins] buildbot failure in PPC64 Debian trunk Message-ID: <20080330074106.2F9D31E4018@bag.python.org> The Buildbot has detected a new failure of PPC64 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/PPC64%20Debian%20trunk/builds/639 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_ast.py", line 204, in run_tests() File "/home/pybot/buildarea64/trunk.klose-debian-ppc64/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 09:49:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 07:49:22 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080330074922.9CC171E4017@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/292 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_ast.py", line 204, in run_tests() File "/home/pybot/buildarea/trunk.klose-debian-s390/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 10:40:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 08:40:22 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu trunk Message-ID: <20080330084022.BDDE21E4018@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%20trunk/builds/424 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_ast.py", line 204, in run_tests() File "/home/pybot/buildarea/trunk.klose-ubuntu-sparc/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 10:42:54 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 08:42:54 +0000 Subject: [Python-checkins] buildbot failure in sparc Debian trunk Message-ID: <20080330084254.732491E4018@bag.python.org> The Buildbot has detected a new failure of sparc Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Debian%20trunk/builds/271 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-sparc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "./Lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_ast.py", line 204, in run_tests() File "/home/pybot/buildarea-sid/trunk.klose-debian-sparc/build/Lib/test/test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 11:05:32 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 09:05:32 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20080330090534.82DD01E4018@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/907 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gerhard.haering BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ast Traceback (most recent call last): File "../lib/test/regrtest.py", line 548, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_ast.py", line 204, in run_tests() File "E:\cygwin\home\db3l\buildarea\trunk.bolen-windows\build\lib\test\test_ast.py", line 153, in run_tests assert to_tuple(ast_tree) == o AssertionError sincerely, -The Buildbot From python-checkins at python.org Sun Mar 30 21:00:49 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 21:00:49 +0200 (CEST) Subject: [Python-checkins] r62051 - in python/trunk: Lib/test/test_ast.py Parser/asdl_c.py Python/Python-ast.c Message-ID: <20080330190049.D34C61E401D@bag.python.org> Author: georg.brandl Date: Sun Mar 30 21:00:49 2008 New Revision: 62051 Modified: python/trunk/Lib/test/test_ast.py python/trunk/Parser/asdl_c.py python/trunk/Python/Python-ast.c Log: Make _fields attr for no fields consistent with _attributes attr. Modified: python/trunk/Lib/test/test_ast.py ============================================================================== --- python/trunk/Lib/test/test_ast.py (original) +++ python/trunk/Lib/test/test_ast.py Sun Mar 30 21:00:49 2008 @@ -163,6 +163,9 @@ self.assertEquals(x.right, 3) self.assertEquals(x.lineno, 0) + # this used to fail because Sub._fields was None + x = _ast.Sub() + def test_main(): test_support.run_unittest(AST_Tests) Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Sun Mar 30 21:00:49 2008 @@ -674,14 +674,9 @@ { PyObject *fnames, *result; int i; - if (num_fields) { - fnames = PyTuple_New(num_fields); - if (!fnames) return NULL; - } else { - fnames = Py_None; - Py_INCREF(Py_None); - } - for(i=0; i < num_fields; i++) { + fnames = PyTuple_New(num_fields); + if (!fnames) return NULL; + for (i = 0; i < num_fields; i++) { PyObject *field = PyString_FromString(fields[i]); if (!field) { Py_DECREF(fnames); Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Sun Mar 30 21:00:49 2008 @@ -465,14 +465,9 @@ { PyObject *fnames, *result; int i; - if (num_fields) { - fnames = PyTuple_New(num_fields); - if (!fnames) return NULL; - } else { - fnames = Py_None; - Py_INCREF(Py_None); - } - for(i=0; i < num_fields; i++) { + fnames = PyTuple_New(num_fields); + if (!fnames) return NULL; + for (i = 0; i < num_fields; i++) { PyObject *field = PyString_FromString(fields[i]); if (!field) { Py_DECREF(fnames); From python-checkins at python.org Sun Mar 30 21:35:11 2008 From: python-checkins at python.org (benjamin.peterson) Date: Sun, 30 Mar 2008 21:35:11 +0200 (CEST) Subject: [Python-checkins] r62052 - python/trunk/README Message-ID: <20080330193511.3848E1E4025@bag.python.org> Author: benjamin.peterson Date: Sun Mar 30 21:35:10 2008 New Revision: 62052 Modified: python/trunk/README Log: Updated README regarding doc formats Modified: 
python/trunk/README ============================================================================== --- python/trunk/README (original) +++ python/trunk/README Sun Mar 30 21:35:10 2008 @@ -83,11 +83,12 @@ and functions! All documentation is also available online at the Python web site -(http://docs.python.org/, see below). It is available online for -occasional reference, or can be downloaded in many formats for faster -access. The documentation is available in HTML, PostScript, PDF, and -LaTeX formats; the LaTeX version is primarily for documentation -authors, translators, and people with special formatting requirements. +(http://docs.python.org/, see below). It is available online for occasional +reference, or can be downloaded in many formats for faster access. The +documentation is available in HTML, PostScript, PDF, LaTeX (through 2.5), and +reStructuredText (2.6+) formats; the LaTeX and reStructuredText versions are +primarily for documentation authors, translators, and people with special +formatting requirements. Unfortunately, new-style classes (new in Python 2.2) have not yet been integrated into Python's standard documentation. A collection of @@ -135,9 +136,8 @@ for patch submission may be found at http://www.python.org/dev/patches/. If you have a proposal to change Python, it's best to submit a Python -Enhancement Proposal (PEP) first. All current PEPs, as well as -guidelines for submitting a new PEP, are listed at -http://www.python.org/dev/peps/. +Enhancement Proposal (PEP) first. All current PEPs, as well as guidelines for +submitting a new PEP, are listed at http://www.python.org/dev/peps/. Questions From buildbot at python.org Sun Mar 30 21:40:06 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 19:40:06 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080330194006.60FB81E4002@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/294 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Sun Mar 30 21:41:40 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 21:41:40 +0200 (CEST) Subject: [Python-checkins] r62053 - python/trunk/README Message-ID: <20080330194140.312B41E4002@bag.python.org> Author: georg.brandl Date: Sun Mar 30 21:41:39 2008 New Revision: 62053 Modified: python/trunk/README Log: The other download formats will be available for 2.6 too. Modified: python/trunk/README ============================================================================== --- python/trunk/README (original) +++ python/trunk/README Sun Mar 30 21:41:39 2008 @@ -85,7 +85,7 @@ All documentation is also available online at the Python web site (http://docs.python.org/, see below). It is available online for occasional reference, or can be downloaded in many formats for faster access. The -documentation is available in HTML, PostScript, PDF, LaTeX (through 2.5), and +documentation is downloadable in HTML, PostScript, PDF, LaTeX, and reStructuredText (2.6+) formats; the LaTeX and reStructuredText versions are primarily for documentation authors, translators, and people with special formatting requirements. 
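Stepping back to r62051 above: a short interactive sketch of what the _fields change means in practice, assuming a trunk build at or after that revision (the node classes shown are just convenient examples):

    import _ast

    # Field-less nodes now expose an empty tuple instead of None, so
    # argument-less construction no longer trips over ast_type_init().
    op = _ast.Sub()
    print op._fields             # ()  -- was None before r62051

    # Nodes with fields are unaffected.
    print _ast.BinOp._fields     # ('left', 'op', 'right')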
From python-checkins at python.org Sun Mar 30 21:43:27 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 21:43:27 +0200 (CEST) Subject: [Python-checkins] r62054 - in python/trunk: Parser/asdl_c.py Python/Python-ast.c Message-ID: <20080330194327.77AE31E4002@bag.python.org> Author: georg.brandl Date: Sun Mar 30 21:43:27 2008 New Revision: 62054 Modified: python/trunk/Parser/asdl_c.py python/trunk/Python/Python-ast.c Log: Fix error message -- "expects either 0 or 0 arguments" Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Sun Mar 30 21:43:27 2008 @@ -595,8 +595,10 @@ res = 0; /* if no error occurs, this stays 0 to the end */ if (PyTuple_GET_SIZE(args) > 0) { if (numfields != PyTuple_GET_SIZE(args)) { - PyErr_Format(PyExc_TypeError, "%.400s constructor takes either 0 or " - "%d positional argument%s", Py_TYPE(self)->tp_name, + PyErr_Format(PyExc_TypeError, "%.400s constructor takes %s" + "%" PY_FORMAT_SIZE_T "d positional argument%s", + Py_TYPE(self)->tp_name, + numfields == 0 ? "" : "either 0 or ", numfields, numfields == 1 ? "" : "s"); res = -1; goto cleanup; Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Sun Mar 30 21:43:27 2008 @@ -386,8 +386,10 @@ res = 0; /* if no error occurs, this stays 0 to the end */ if (PyTuple_GET_SIZE(args) > 0) { if (numfields != PyTuple_GET_SIZE(args)) { - PyErr_Format(PyExc_TypeError, "%.400s constructor takes either 0 or " - "%d positional argument%s", Py_TYPE(self)->tp_name, + PyErr_Format(PyExc_TypeError, "%.400s constructor takes %s" + "%" PY_FORMAT_SIZE_T "d positional argument%s", + Py_TYPE(self)->tp_name, + numfields == 0 ? "" : "either 0 or ", numfields, numfields == 1 ? "" : "s"); res = -1; goto cleanup; From buildbot at python.org Sun Mar 30 21:55:21 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 19:55:21 +0000 Subject: [Python-checkins] buildbot failure in g4 osx.4 trunk Message-ID: <20080330195610.064511E4002@bag.python.org> The Buildbot has detected a new failure of g4 osx.4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/g4%20osx.4%20trunk/builds/3123 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: psf-g4 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Sun Mar 30 22:20:40 2008 From: python-checkins at python.org (georg.brandl) Date: Sun, 30 Mar 2008 22:20:40 +0200 (CEST) Subject: [Python-checkins] r62059 - in python/trunk: Lib/test/test_ast.py Parser/asdl_c.py Python/Python-ast.c Message-ID: <20080330202040.8BE5A1E4002@bag.python.org> Author: georg.brandl Date: Sun Mar 30 22:20:39 2008 New Revision: 62059 Modified: python/trunk/Lib/test/test_ast.py python/trunk/Parser/asdl_c.py python/trunk/Python/Python-ast.c Log: Make AST nodes pickleable. 
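In practice the new __reduce__ allows a round trip along these lines; a sketch assuming an interpreter with this revision applied, where the source string and filename are arbitrary and PyCF_ONLY_AST is the flag the tests spell as 0x400:

    import _ast, pickle

    # Parse to an AST instead of a code object, pickle it, and rebuild it.
    tree = compile("x = 1", "<example>", "exec", _ast.PyCF_ONLY_AST)
    copy = pickle.loads(pickle.dumps(tree, 2))
    print copy.body[0].targets[0].id     # 'x' -- attributes survive the round trip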
Modified: python/trunk/Lib/test/test_ast.py ============================================================================== --- python/trunk/Lib/test/test_ast.py (original) +++ python/trunk/Lib/test/test_ast.py Sun Mar 30 22:20:39 2008 @@ -166,6 +166,20 @@ # this used to fail because Sub._fields was None x = _ast.Sub() + def test_pickling(self): + import pickle + mods = [pickle] + try: + import cPickle + mods.append(cPickle) + except ImportError: + pass + protocols = [0, 1, 2] + for mod in mods: + for protocol in protocols: + for ast in (compile(i, "?", "exec", 0x400) for i in exec_tests): + ast2 = mod.loads(mod.dumps(ast, protocol)) + self.assertEquals(to_tuple(ast2), to_tuple(ast)) def test_main(): test_support.run_unittest(AST_Tests) Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Sun Mar 30 22:20:39 2008 @@ -629,9 +629,34 @@ return res; } +/* Pickling support */ +static PyObject * +ast_type_reduce(PyObject *self, PyObject *unused) +{ + PyObject *res; + PyObject *dict = PyObject_GetAttrString(self, "__dict__"); + if (dict == NULL) { + if (PyErr_ExceptionMatches(PyExc_AttributeError)) + PyErr_Clear(); + else + return NULL; + } + if (dict) { + res = Py_BuildValue("O()O", Py_TYPE(self), dict); + Py_DECREF(dict); + return res; + } + return Py_BuildValue("O()", Py_TYPE(self)); +} + +static PyMethodDef ast_type_methods[] = { + {"__reduce__", ast_type_reduce, METH_NOARGS, NULL}, + {NULL} +}; + static PyTypeObject AST_type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) - "AST", + "_ast.AST", sizeof(PyObject), 0, 0, /* tp_dealloc */ @@ -657,7 +682,7 @@ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ - 0, /* tp_methods */ + ast_type_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ Modified: python/trunk/Python/Python-ast.c ============================================================================== --- python/trunk/Python/Python-ast.c (original) +++ python/trunk/Python/Python-ast.c Sun Mar 30 22:20:39 2008 @@ -420,9 +420,34 @@ return res; } +/* Pickling support */ +static PyObject * +ast_type_reduce(PyObject *self, PyObject *unused) +{ + PyObject *res; + PyObject *dict = PyObject_GetAttrString(self, "__dict__"); + if (dict == NULL) { + if (PyErr_ExceptionMatches(PyExc_AttributeError)) + PyErr_Clear(); + else + return NULL; + } + if (dict) { + res = Py_BuildValue("O()O", Py_TYPE(self), dict); + Py_DECREF(dict); + return res; + } + return Py_BuildValue("O()", Py_TYPE(self)); +} + +static PyMethodDef ast_type_methods[] = { + {"__reduce__", ast_type_reduce, METH_NOARGS, NULL}, + {NULL} +}; + static PyTypeObject AST_type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) - "AST", + "_ast.AST", sizeof(PyObject), 0, 0, /* tp_dealloc */ @@ -448,7 +473,7 @@ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ - 0, /* tp_methods */ + ast_type_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ From buildbot at python.org Sun Mar 30 22:43:31 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 20:43:31 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080330204331.7C3171E4002@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/718 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: gerhard.haering,martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Sun Mar 30 23:52:05 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 21:52:05 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080330215205.59BEB1E4002@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/250 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/buildbot/slave/py-build/3.0.norwitz-amd64/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 00:52:46 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 22:52:46 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080330225246.5CF971E4002@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/720 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_logging ====================================================================== FAIL: test_flush (test.test_logging.MemoryHandlerTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_logging.py", line 468, in test_flush self.assert_log_lines(lines) File "/home/pybot/buildarea/3.0.klose-debian-ppc/build/Lib/test/test_logging.py", line 112, in assert_log_lines self.assertEquals(len(actual_lines), len(expected_values)) AssertionError: 0 != 3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 01:07:33 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 23:07:33 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080330230733.73C571E4002@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2790 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,georg.brandl BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asynchat test_socket ====================================================================== FAIL: testInterruptedTimeout (test.test_socket.TCPTimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_socket.py", line 1012, in testInterruptedTimeout self.fail("got Alarm in wrong place") AssertionError: got Alarm in wrong place sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 01:50:08 2008 From: buildbot at python.org (buildbot at python.org) Date: Sun, 30 Mar 2008 23:50:08 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080330235108.C406B1E4002@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/165 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 31 02:36:04 2008 From: python-checkins at python.org (jeffrey.yasskin) Date: Mon, 31 Mar 2008 02:36:04 +0200 (CEST) Subject: [Python-checkins] r62067 - python/trunk/Lib/threading.py Message-ID: <20080331003604.9173A1E4002@bag.python.org> Author: jeffrey.yasskin Date: Mon Mar 31 02:35:53 2008 New Revision: 62067 Modified: python/trunk/Lib/threading.py Log: Block the sys.exc_clear -3 warning from threading.py. 
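For readers unfamiliar with the warnings machinery, the filter added below follows the standard filterwarnings() idiom: 'message' and 'module' are regular expressions matched against the start of the warning text and the warning's module name, so only the sys.exc_clear deprecation raised from threading itself is hidden. A sketch of the same call, with the keyword arguments spelled out:

    import warnings

    # Hide only the -3 DeprecationWarning about sys.exc_clear that threading.py
    # triggers; all other deprecation warnings keep flowing through.
    warnings.filterwarnings('ignore',
                            category=DeprecationWarning,
                            module='threading',
                            message='sys.exc_clear')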
Modified: python/trunk/Lib/threading.py ============================================================================== --- python/trunk/Lib/threading.py (original) +++ python/trunk/Lib/threading.py Mon Mar 31 02:35:53 2008 @@ -8,6 +8,7 @@ del _sys.modules[__name__] raise +import warnings from time import time as _time, sleep as _sleep from traceback import format_exc as _format_exc from collections import deque @@ -24,6 +25,12 @@ del thread +# sys.exc_clear is used to work around the fact that except blocks +# don't fully clear the exception until 3.0. +warnings.filterwarnings('ignore', category=DeprecationWarning, + module='threading', message='sys.exc_clear') + + # Debug support (adapted from ihooks.py). # All the major classes here derive from _Verbose. We force that to # be a new-style class so that all the major classes here are new-style. From buildbot at python.org Mon Mar 31 04:43:37 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 02:43:37 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20080331024338.193661E4019@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/254 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed test Excerpt from the test logfile: make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 04:44:15 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 02:44:15 +0000 Subject: [Python-checkins] buildbot failure in x86 W2k8 3.0 Message-ID: <20080331024416.07B3D1E4021@bag.python.org> The Buildbot has detected a new failure of x86 W2k8 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20W2k8%203.0/builds/138 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: nelson-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 05:16:01 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 03:16:01 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux 3.0 Message-ID: <20080331031601.BF4BE1E4002@bag.python.org> The Buildbot has detected a new failure of ARM Linux 3.0. 
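The threading.py change in r62067 above is a narrowly scoped warning filter: rather than hiding every DeprecationWarning, it pins the filter to one module and one message prefix. A small self-contained sketch of the same warnings.filterwarnings pattern follows; the message text is invented, and module is set to '__main__' because that is where warnings are attributed when the sketch runs as a script (threading.py uses module='threading', message='sys.exc_clear' instead).

    import warnings

    # Ignore only DeprecationWarnings issued from __main__ whose message
    # starts with "old_api"; anything else is still reported.
    warnings.filterwarnings('ignore', category=DeprecationWarning,
                            module='__main__', message='old_api')

    warnings.warn("old_api will be removed", DeprecationWarning)   # silenced
    warnings.warn("some other deprecation", DeprecationWarning)    # still shown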
Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%203.0/builds/5 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-arm Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson BUILD FAILED: failed compile sincerely, -The Buildbot From nnorwitz at gmail.com Sun Mar 30 12:04:38 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 30 Mar 2008 05:04:38 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080330100438.GA19576@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_smtplib leaked [86, -86, -10] references, sum=-10 test_socketserver leaked [-78, 0, 0] references, sum=-78 test_urllib2_localnet leaked [146, -140, 146] references, sum=152 From nnorwitz at gmail.com Mon Mar 31 00:09:31 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 30 Mar 2008 17:09:31 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080330220931.GA2162@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_asynchat leaked [0, 110, -110] references, sum=0 test_smtplib leaked [-8, 8, -6] references, sum=-6 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [146, -140, 146] references, sum=152 From nnorwitz at gmail.com Sat Mar 29 23:20:15 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 29 Mar 2008 17:20:15 -0500 Subject: [Python-checkins] Python Regression Test Failures all (1) Message-ID: <20080329222015.GA5074@python.psfb.org> 324 tests OK. 
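The "leaked [61, 61, 61] references" lines in the refleak reports above come from running each test several times under a --with-pydebug build and comparing sys.gettotalrefcount() before and after each run; regrtest's -R option automates this. A stripped-down sketch of the idea is below. The names check_refleaks and noop are made up, and the real harness does warm-up runs and cleanup that are omitted here.

    import sys

    def check_refleaks(testfunc, repeat=3):
        # sys.gettotalrefcount() only exists in --with-pydebug builds.
        deltas = []
        for _ in range(repeat):
            before = sys.gettotalrefcount()
            testfunc()
            deltas.append(sys.gettotalrefcount() - before)
        return deltas

    def noop():
        pass

    if hasattr(sys, "gettotalrefcount"):
        # A genuine leak shows up as the same positive delta on every run,
        # e.g. [61, 61, 61]; noisy tests give small, varying numbers.
        print(check_refleaks(noop))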
1 test failed: test_signal 24 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_lib2to3 test_macostools test_pep277 test_py3kwarn test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 3 skips unexpected on linux2: test_epoll test_lib2to3 test_ioctl test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ test___future__ test__locale test_abc test_abstract_numbers test_aepack test_aepack skipped -- No module named aepack test_al test_al skipped -- No module named al test_anydbm test_applesingle test_applesingle skipped -- No module named macostools test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bastion test_bigaddrspace test_bigmem test_binascii test_binhex test_binop test_bisect test_bool test_bsddb test_bsddb185 test_bsddb185 skipped -- No module named bsddb185 test_bsddb3 test_buffer test_bufio test_bytes test_bz2 test_calendar test_call test_capi test_cd test_cd skipped -- No module named cd test_cfgparser test_cgi test_charmapcodec test_cl test_cl skipped -- No module named cl test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_coding test_coercion test_collections test_colorsys test_commands test_compare test_compile test_compiler testCompileLibrary still working, be patient... 
test_complex test_complex_args test_contains test_contextlib test_cookie test_cookielib test_copy test_copy_reg test_cpickle test_cprofile test_crypt test_csv test_ctypes test_datetime test_dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_difflib test_dircache test_dis test_distutils test_dl test_docxmlrpc test_dumbdbm test_dummy_thread test_dummy_threading test_email test_email_codecs test_email_renamed test_enumerate test_eof test_epoll test_epoll skipped -- kernel doesn't support epoll() test_errno test_exception_variations test_extcall test_fcntl test_file test_filecmp test_fileinput test_float test_fnmatch test_fork1 test_format test_fpformat test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future3 test_future4 test_future_builtins test_gc test_gdbm test_generators test_genericpath test_genexps test_getargs test_getargs2 test_getopt test_gettext test_gl test_gl skipped -- No module named gl test_glob test_global test_grp test_gzip test_hash test_hashlib test_heapq test_hmac test_hotshot test_htmllib test_htmlparser test_httplib test_imageop test_imageop skipped -- No module named imgfile test_imaplib test_imgfile test_imgfile skipped -- No module named imgfile test_imp test_import test_importhooks test_index test_inspect test_int_literal test_io test_ioctl test_ioctl skipped -- Unable to open /dev/tty test_isinstance test_iter test_iterlen test_itertools test_kqueue test_kqueue skipped -- test works only on BSD test_largefile test_lib2to3 test_lib2to3 skipped -- No module named tests test_list test_locale test_logging test_long test_long_future test_longexp test_macostools test_macostools skipped -- No module named macostools test_macpath test_mailbox test_marshal test_math test_md5 test_mhlib test_mimetools test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multifile test_mutants test_mutex test_netrc test_new test_nis test_normalization test_ntpath test_old_mailbox test_openpty test_operator test_optparse test_os test_parser Expecting 's_push: parser stack overflow' in next line s_push: parser stack overflow test_peepholer test_pep247 test_pep263 test_pep277 test_pep277 skipped -- test works only on NT+ test_pep292 test_pep352 test_pickle test_pickletools test_pipes test_pkg test_pkgimport test_platform test_plistlib test_poll test_popen [8551 refs] [8551 refs] [8551 refs] test_popen2 test_poplib test_posix test_posixpath test_pow test_pprint test_print test_profile test_profilehooks test_property test_pstats test_pty test_pwd test_py3kwarn test_py3kwarn skipped -- test.test_py3kwarn must be run with the -3 flag test_pyclbr test_pyexpat test_queue test_quopri [10921 refs] [10921 refs] test_random test_re test_repr test_resource test_rfc822 test_richcmp test_robotparser test_runpy test_sax test_scope test_scriptpackages test_scriptpackages skipped -- No module named aetools test_select test_set test_sets test_sgmllib test_sha test_shelve test_shlex test_shutil test_signal test test_signal failed -- Traceback (most recent call last): File "/tmp/python-test/local/lib/python2.6/test/test_signal.py", line 354, in test_itimer_prof self.assertEqual(self.hndl_called, True) AssertionError: False != True test_site test_slice test_smtplib test_socket test_socket_ssl test_socketserver test_softspace test_sort test_sqlite test_ssl test_startfile test_startfile skipped -- cannot import name startfile test_str test_strftime test_string test_stringprep 
test_strop test_strptime test_struct test_structmembers test_structseq test_subprocess [8546 refs] [8548 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] [8548 refs] [10471 refs] [8764 refs] [8548 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] [8546 refs] . [8546 refs] [8546 refs] this bit of output is from a test of stdout in a different process ... [8546 refs] [8546 refs] [8764 refs] test_sunaudiodev test_sunaudiodev skipped -- No module named sunaudiodev test_sundry test_symtable test_syntax test_sys [8546 refs] [8546 refs] test_tarfile test_tcl test_tcl skipped -- No module named _tkinter test_telnetlib test_tempfile [8551 refs] test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading [11688 refs] test_threading_local test_threadsignals test_time test_timeout test_tokenize test_trace test_traceback test_transformer test_tuple test_typechecks test_ucn test_unary test_undocumented_details test_unicode test_unicode_file test_unicode_file skipped -- No Unicode filesystem semantics on this platform. test_unicodedata test_univnewlines test_unpack test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. test_wait3 test_wait4 test_warnings test_wave test_weakref test_whichdb test_winreg test_winreg skipped -- No module named _winreg test_winsound test_winsound skipped -- No module named winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmllib test_xmlrpc test_xpickle test_xrange test_zipfile test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 324 tests OK. 
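The test_signal failure in the run above is in test_itimer_prof, which covers the setitimer() support added in 2.6. A short POSIX-only sketch of what it exercises follows; the 0.2-second value and the busy loop are arbitrary choices for illustration.

    import signal

    fired = []

    def on_sigprof(signum, frame):
        fired.append(signum)

    # ITIMER_PROF counts CPU time (user plus system) used by the process and
    # delivers SIGPROF when it expires; the test checks that the Python-level
    # handler really runs.
    signal.signal(signal.SIGPROF, on_sigprof)
    signal.setitimer(signal.ITIMER_PROF, 0.2)   # one-shot, 0.2s of CPU time

    spin = 0
    while not fired and spin < 10 ** 8:   # burn CPU so the timer advances
        spin += 1

    print(fired)   # expected: [signal.SIGPROF]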
1 test failed: test_signal 24 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_epoll test_gl test_imageop test_imgfile test_ioctl test_kqueue test_lib2to3 test_macostools test_pep277 test_py3kwarn test_scriptpackages test_startfile test_sunaudiodev test_tcl test_unicode_file test_winreg test_winsound test_zipfile64 3 skips unexpected on linux2: test_epoll test_lib2to3 test_ioctl [595578 refs] From nnorwitz at gmail.com Fri Mar 28 23:15:20 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 28 Mar 2008 17:15:20 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080328221520.GA18650@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_popen2 leaked [-23, 0, 0] references, sum=-23 test_smtplib leaked [82, -82, 0] references, sum=0 test_sys leaked [0, 0, 80] references, sum=80 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [-120, -140, 146] references, sum=-114 From nnorwitz at gmail.com Sat Mar 29 23:04:21 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 29 Mar 2008 17:04:21 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080329220421.GA2730@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_smtplib leaked [78, -82, 6] references, sum=2 test_urllib2_localnet leaked [-131, -140, 146] references, sum=-125 From nnorwitz at gmail.com Wed Mar 26 23:04:17 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 26 Mar 2008 17:04:17 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080326220417.GA23323@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_asynchat leaked [0, 110, -110] references, sum=0 test_smtplib leaked [-86, 0, 81] references, sum=-5 test_urllib2_localnet leaked [3, 182, -176] references, sum=9 From nnorwitz at gmail.com Thu Mar 27 11:06:01 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 27 Mar 2008 05:06:01 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080327100601.GA7161@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_smtplib leaked [-85, 0, 86] references, sum=1 test_socketserver leaked [82, -82, 80] references, sum=80 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [3, 182, -176] references, sum=9 From nnorwitz at gmail.com Sat Mar 29 11:05:08 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 29 Mar 2008 05:05:08 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080329100508.GA4451@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_asynchat leaked [-108, 0, 0] references, sum=-108 test_smtplib leaked [86, -86, 0] references, sum=0 test_socketserver leaked [-82, 0, 0] references, sum=-82 test_urllib2_localnet leaked [-120, -140, 146] references, sum=-114 From nnorwitz at gmail.com Fri Mar 28 11:14:00 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 28 Mar 2008 
05:14:00 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080328101400.GA24469@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_asynchat leaked [-121, 0, 0] references, sum=-121 test_popen2 leaked [26, -26, 0] references, sum=0 test_smtplib leaked [0, 0, 50] references, sum=50 test_socketserver leaked [0, 0, 75] references, sum=75 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [146, -140, 146] references, sum=152 From nnorwitz at gmail.com Thu Mar 27 23:02:56 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 27 Mar 2008 17:02:56 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080327220256.GA1251@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_asynchat leaked [108, -108, 0] references, sum=0 test_smtplib leaked [86, -86, 0] references, sum=0 test_socketserver leaked [-82, 0, 0] references, sum=-82 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [3, 3, 3] references, sum=9 From python-checkins at python.org Mon Mar 31 06:28:41 2008 From: python-checkins at python.org (neal.norwitz) Date: Mon, 31 Mar 2008 06:28:41 +0200 (CEST) Subject: [Python-checkins] r62075 - python/trunk/Parser/asdl_c.py Message-ID: <20080331042841.053AB1E4002@bag.python.org> Author: neal.norwitz Date: Mon Mar 31 06:28:40 2008 New Revision: 62075 Modified: python/trunk/Parser/asdl_c.py Log: Use file.write instead of print to make it easier to merge with 3k. Modified: python/trunk/Parser/asdl_c.py ============================================================================== --- python/trunk/Parser/asdl_c.py (original) +++ python/trunk/Parser/asdl_c.py Mon Mar 31 06:28:40 2008 @@ -1106,7 +1106,7 @@ v.visit(object) v.emit("", 0) -common_msg = "/* File automatically generated by %s. */\n" +common_msg = "/* File automatically generated by %s. */\n\n" c_file_msg = """ /* @@ -1116,6 +1116,7 @@ The __version__ number is set to the revision number of the commit containing the grammar change. 
*/ + """ def main(srcfile): @@ -1129,27 +1130,27 @@ if INC_DIR: p = "%s/%s-ast.h" % (INC_DIR, mod.name) f = open(p, "wb") - print >> f, auto_gen_msg - print >> f, '#include "asdl.h"\n' + f.write(auto_gen_msg) + f.write('#include "asdl.h"\n\n') c = ChainOfVisitors(TypeDefVisitor(f), StructVisitor(f), PrototypeVisitor(f), ) c.visit(mod) - print >>f, "PyObject* PyAST_mod2obj(mod_ty t);" - print >>f, "mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena, int mode);" - print >>f, "int PyAST_Check(PyObject* obj);" + f.write("PyObject* PyAST_mod2obj(mod_ty t);\n") + f.write("mod_ty PyAST_obj2mod(PyObject* ast, PyArena* arena, int mode);\n") + f.write("int PyAST_Check(PyObject* obj);\n") f.close() if SRC_DIR: p = os.path.join(SRC_DIR, str(mod.name) + "-ast.c") f = open(p, "wb") - print >> f, auto_gen_msg - print >> f, c_file_msg % parse_version(mod) - print >> f, '#include "Python.h"' - print >> f, '#include "%s-ast.h"' % mod.name - print >> f - print >>f, "static PyTypeObject AST_type;" + f.write(auto_gen_msg) + f.write(c_file_msg % parse_version(mod)) + f.write('#include "Python.h"\n') + f.write('#include "%s-ast.h"\n' % mod.name) + f.write('\n') + f.write("static PyTypeObject AST_type;\n") v = ChainOfVisitors( PyTypesDeclareVisitor(f), PyTypesVisitor(f), From buildbot at python.org Mon Mar 31 06:54:38 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 04:54:38 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 3.0 Message-ID: <20080331045439.296721E4002@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%203.0/builds/727 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_ssl make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Mon Mar 31 07:18:16 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 31 Mar 2008 07:18:16 +0200 (CEST) Subject: [Python-checkins] r62079 - sandbox/trunk/2to3/lib2to3/refactor.py Message-ID: <20080331051816.35EAA1E4002@bag.python.org> Author: martin.v.loewis Date: Mon Mar 31 07:18:15 2008 New Revision: 62079 Modified: sandbox/trunk/2to3/lib2to3/refactor.py Log: Copy r62037 from trunk: lib2to3 should install a logging handler only when run as a main program, not when used as a library. This may please the buildbots, which fail when test_lib2to3 is run before test_logging. Modified: sandbox/trunk/2to3/lib2to3/refactor.py ============================================================================== --- sandbox/trunk/2to3/lib2to3/refactor.py (original) +++ sandbox/trunk/2to3/lib2to3/refactor.py Mon Mar 31 07:18:15 2008 @@ -28,15 +28,6 @@ from . import fixes from . import pygram -if sys.version_info < (2, 4): - hdlr = logging.StreamHandler() - fmt = logging.Formatter('%(name)s: %(message)s') - hdlr.setFormatter(fmt) - logging.root.addHandler(hdlr) -else: - logging.basicConfig(format='%(name)s: %(message)s', level=logging.INFO) - - def main(args=None): """Main program. @@ -73,6 +64,15 @@ print >>sys.stderr, "Use --help to show usage." 
return 2 + # Set up logging handler + if sys.version_info < (2, 4): + hdlr = logging.StreamHandler() + fmt = logging.Formatter('%(name)s: %(message)s') + hdlr.setFormatter(fmt) + logging.root.addHandler(hdlr) + else: + logging.basicConfig(format='%(name)s: %(message)s', level=logging.INFO) + # Initialize the refactoring tool rt = RefactoringTool(options) From python-checkins at python.org Mon Mar 31 07:20:55 2008 From: python-checkins at python.org (martin.v.loewis) Date: Mon, 31 Mar 2008 07:20:55 +0200 (CEST) Subject: [Python-checkins] r62080 - in python/trunk/Lib/lib2to3: tests/test_fixers.py Message-ID: <20080331052055.772621E4002@bag.python.org> Author: martin.v.loewis Date: Mon Mar 31 07:20:55 2008 New Revision: 62080 Modified: python/trunk/Lib/lib2to3/ (props changed) python/trunk/Lib/lib2to3/tests/test_fixers.py Log: Merged revisions 61990-62079 via svnmerge from svn+ssh://pythondev at svn.python.org/sandbox/trunk/2to3/lib2to3 ........ r62017 | david.wolever | 2008-03-28 21:54:37 +0100 (Fr, 28 M?r 2008) | 1 line Fixed an out-of-date comment. ........ Modified: python/trunk/Lib/lib2to3/tests/test_fixers.py ============================================================================== --- python/trunk/Lib/lib2to3/tests/test_fixers.py (original) +++ python/trunk/Lib/lib2to3/tests/test_fixers.py Mon Mar 31 07:20:55 2008 @@ -3139,7 +3139,7 @@ def setUp(self): FixerTestCase.setUp(self) - # Need to replace fix_import's isfile and isdir method + # Need to replace fix_import's exists method # so we can check that it's doing the right thing self.files_checked = [] self.always_exists = True From buildbot at python.org Mon Mar 31 07:46:51 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 05:46:51 +0000 Subject: [Python-checkins] buildbot failure in alpha Debian trunk Message-ID: <20080331054651.C90E81E4002@bag.python.org> The Buildbot has detected a new failure of alpha Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Debian%20trunk/builds/66 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-alpha Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,georg.brandl,jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 08:02:34 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 06:02:34 +0000 Subject: [Python-checkins] buildbot failure in sparc Ubuntu 3.0 Message-ID: <20080331060234.9D4821E4019@bag.python.org> The Buildbot has detected a new failure of sparc Ubuntu 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20Ubuntu%203.0/builds/224 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-ubuntu-sparc Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,jeffrey.yasskin,neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_xmlrpc_net make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 08:29:22 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 06:29:22 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20080331062923.1DAAC1E4019@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. 
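The refactor.py change in r62079 above reflects a general logging guideline: a library should not install handlers or call basicConfig() at import time, because that takes configuration away from the program importing it (and, as the log message notes, from other tests in the suite). A minimal sketch of the pattern the commit moves to; mylib and do_work are placeholder names.

    import logging

    # Library code: obtain a logger, but configure nothing at import time.
    log = logging.getLogger("mylib")

    def do_work():
        log.info("doing work")

    # Entry point: only here is a handler configured, mirroring how r62079
    # moves the handler setup into main().
    def main():
        logging.basicConfig(format='%(name)s: %(message)s', level=logging.INFO)
        do_work()

    if __name__ == "__main__":
        main()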
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/1131 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 08:53:51 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 06:53:51 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 3.0 Message-ID: <20080331065351.9C1511E403C@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%203.0/builds/738 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,gerhard.haering,jeffrey.yasskin,martin.v.loewis,neal.norwitz BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 08:57:12 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 06:57:12 +0000 Subject: [Python-checkins] buildbot failure in amd64 XP 3.0 Message-ID: <20080331065712.CBD3E1E4024@bag.python.org> The Buildbot has detected a new failure of amd64 XP 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20XP%203.0/builds/732 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows-amd64 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,gerhard.haering,jeffrey.yasskin,martin.v.loewis,neal.norwitz BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 09:07:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 07:07:26 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-3 trunk Message-ID: <20080331070726.8678F1E4019@bag.python.org> The Buildbot has detected a new failure of x86 XP-3 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-3%20trunk/builds/1208 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-windows Build Reason: The web-page 'rebuild' button was pressed by '': Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,georg.brandl,gerhard.haering,jeffrey.yasskin,martin.v.loewis,neal.norwitz BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 09:37:17 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 07:37:17 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian 3.0 Message-ID: <20080331073725.0C2461E4019@bag.python.org> The Buildbot has detected a new failure of S-390 Debian 3.0. 
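Returning to the asdl_c.py change in r62075 a few messages up: "print >> f, text" is Python 2-only syntax, so switching the code generator to f.write() keeps the trunk and py3k copies of the script closer together. One detail visible in that diff is that the print statement supplied a trailing newline, so the write() calls add it back explicitly. A tiny sketch, with a made-up file name:

    # Two equivalent ways of emitting a line; only the second also runs
    # unchanged under Python 3.
    f = open("demo-ast.h", "w")

    # Python 2 only:
    #     print >> f, '#include "asdl.h"\n'
    # Portable replacement, with the newline that print used to add:
    f.write('#include "asdl.h"\n\n')

    f.close()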
Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%203.0/builds/171 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 09:40:44 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 07:40:44 +0000 Subject: [Python-checkins] buildbot failure in alpha Tru64 5.1 trunk Message-ID: <20080331074054.F08DF1E403D@bag.python.org> The Buildbot has detected a new failure of alpha Tru64 5.1 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/alpha%20Tru64%205.1%20trunk/builds/2793 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-tru64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 3 tests failed: test_asynchat test_signal test_smtplib ====================================================================== FAIL: test_wakeup_fd_during (test.test_signal.WakeupSignalTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/net/taipan/scratch1/nnorwitz/python/trunk.norwitz-tru64/build/Lib/test/test_signal.py", line 205, in test_wakeup_fd_during [self.read], [], [], self.TIMEOUT_FULL) AssertionError: error not raised sincerely, -The Buildbot From buildbot at python.org Mon Mar 31 10:40:57 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 08:40:57 +0000 Subject: [Python-checkins] buildbot failure in S-390 Debian trunk Message-ID: <20080331084057.56DD21E4019@bag.python.org> The Buildbot has detected a new failure of S-390 Debian trunk. Full details are available at: http://www.python.org/dev/buildbot/all/S-390%20Debian%20trunk/builds/299 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-s390 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: martin.v.loewis BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_signal make: *** [buildbottest] Error 1 sincerely, -The Buildbot From nnorwitz at gmail.com Mon Mar 31 12:04:43 2008 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 31 Mar 2008 05:04:43 -0500 Subject: [Python-checkins] Python Regression Test Failures refleak (1) Message-ID: <20080331100443.GA22389@python.psfb.org> More important issues: ---------------------- test_io leaked [61, 61, 61] references, sum=183 Less important issues: ---------------------- test_asynchat leaked [-100, 0, 0] references, sum=-100 test_smtplib leaked [0, 0, 88] references, sum=88 test_socketserver leaked [-82, 0, 0] references, sum=-82 test_threadedtempfile leaked [-100, 0, 0] references, sum=-100 test_threadsignals leaked [-8, 0, 0] references, sum=-8 test_urllib2_localnet leaked [146, 134, -128] references, sum=152 From buildbot at python.org Mon Mar 31 13:03:41 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 11:03:41 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 3.0 Message-ID: <20080331110341.8972D1E401B@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 3.0. 
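The Tru64 failure above is in test_wakeup_fd_during, which covers signal.set_wakeup_fd(), another 2.6 addition: when a signal arrives, the interpreter writes a zero byte to the registered descriptor so that a select() or poll() loop wakes up even if the signal came in at an awkward moment. A rough POSIX-only sketch follows; the pipe and the choice of SIGALRM are just for illustration.

    import fcntl
    import os
    import select
    import signal

    # A do-nothing Python handler so SIGALRM does not kill the process.
    signal.signal(signal.SIGALRM, lambda signum, frame: None)

    r, w = os.pipe()
    flags = fcntl.fcntl(w, fcntl.F_GETFL)
    fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)  # wakeup fd should not block
    signal.set_wakeup_fd(w)    # a '\0' byte is written to w on each signal

    os.kill(os.getpid(), signal.SIGALRM)           # deliver a signal to ourselves
    ready, _, _ = select.select([r], [], [], 2.0)  # wakes up thanks to the byte
    print(repr(os.read(r, 1)))                     # the wakeup byte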
Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%203.0/builds/652 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: neal.norwitz BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 31 13:42:05 2008 From: python-checkins at python.org (georg.brandl) Date: Mon, 31 Mar 2008 13:42:05 +0200 (CEST) Subject: [Python-checkins] r62083 - doctools/trunk/sphinx/directives.py Message-ID: <20080331114205.CEFFF1E401A@bag.python.org> Author: georg.brandl Date: Mon Mar 31 13:42:05 2008 New Revision: 62083 Modified: doctools/trunk/sphinx/directives.py Log: Fix envvar index template. Modified: doctools/trunk/sphinx/directives.py ============================================================================== --- doctools/trunk/sphinx/directives.py (original) +++ doctools/trunk/sphinx/directives.py Mon Mar 31 13:42:05 2008 @@ -426,7 +426,7 @@ # the directives are either desc_directive or target_directive additional_xref_types = { # directive name: (role name, index text, function to parse the desc node) - 'envvar': ('envvar', 'environment variable', None), + 'envvar': ('envvar', 'environment variable; %s', None), } From buildbot at python.org Mon Mar 31 16:59:26 2008 From: buildbot at python.org (buildbot at python.org) Date: Mon, 31 Mar 2008 14:59:26 +0000 Subject: [Python-checkins] buildbot failure in ARM Linux EABI 3.0 Message-ID: <20080331145926.C233A1E401A@bag.python.org> The Buildbot has detected a new failure of ARM Linux EABI 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/ARM%20Linux%20EABI%203.0/builds/10 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-linux-armeabi Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,jeffrey.yasskin BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Mon Mar 31 23:57:13 2008 From: python-checkins at python.org (benjamin.peterson) Date: Mon, 31 Mar 2008 23:57:13 +0200 (CEST) Subject: [Python-checkins] r62084 - python/trunk/Doc/library/warnings.rst Message-ID: <20080331215713.D5B801E403A@bag.python.org> Author: benjamin.peterson Date: Mon Mar 31 23:57:13 2008 New Revision: 62084 Modified: python/trunk/Doc/library/warnings.rst Log: PyErr_Warn is deprecated. Use PyErr_WarnEx Modified: python/trunk/Doc/library/warnings.rst ============================================================================== --- python/trunk/Doc/library/warnings.rst (original) +++ python/trunk/Doc/library/warnings.rst Mon Mar 31 23:57:13 2008 @@ -16,7 +16,7 @@ might want to issue a warning when a program uses an obsolete module. Python programmers issue warnings by calling the :func:`warn` function defined -in this module. (C programmers use :cfunc:`PyErr_Warn`; see +in this module. (C programmers use :cfunc:`PyErr_WarnEx`; see :ref:`exceptionhandling` for details). Warning messages are normally written to ``sys.stderr``, but their disposition